https://en.wikipedia.org/wiki/Advanced%20SCSI%20Programming%20Interface
In computing, ASPI (Advanced SCSI Programming Interface) is an Adaptec-developed programming interface which standardizes communication on a computer bus between a SCSI driver module on the one hand and SCSI (and ATAPI) peripherals on the other. ASPI structure The ASPI manager software provides an interface between ASPI modules (device drivers or applications with direct SCSI support), a SCSI host adapter, and SCSI devices connected to the host adapter. The ASPI manager is specific to the host adapter and operating system; its primary role is to abstract the host adapter specifics and provide a generic software interface to SCSI devices. On Windows 9x and Windows NT, the ASPI manager is generic and relies on the services of SCSI miniport drivers. On those systems, the ASPI interface is designed for applications which require SCSI pass-through functionality (such as CD-ROM burning software). The primary operations supported by ASPI are discovery of host adapters and attached devices, and submitting SCSI commands to devices via SRBs (SCSI Request Blocks). ASPI supports concurrent execution of SCSI commands. History Originally inspired by a driver architecture developed by Douglas W. Goodall for Ampro Computers in 1983, ASPI was developed by Adaptec around 1990. It was initially designed to support DOS, OS/2, Windows 3.x, and Novell NetWare. It was originally written to support SCSI devices; support for ATAPI devices was added later. Most other SCSI host adapter vendors (for example BusLogic, DPT, AMI, Future Domain, DTC) shipped their own ASPI managers with their hardware. Adaptec also developed generic SCSI disk and CD-ROM drivers for DOS (ASPICD.SYS and ASPIDISK.SYS). Microsoft licensed the interface for use with Windows 9x series. At the same time Microsoft developed SCSI Pass Through Interface (SPTI), an in-house substitute that worked on the NT platform. Microsoft did not include ASPI in Windows 2000/XP, in favor of its own SPTI. Users may still download
https://en.wikipedia.org/wiki/NSAKEY
_NSAKEY was a variable name discovered in Windows NT 4 SP5 in 1999 by Andrew D. Fernandes of Cryptonym Corporation. The variable contained a 1024-bit public key; such keys are used in public-key cryptography for encryption and authentication. Because of the name, however, it was speculated that the key would allow the United States National Security Agency (NSA) to subvert any Windows user's security. Microsoft denied the speculation and said that the key's name came from the fact that NSA was the technical review authority for U.S. cryptography export controls. Overview Microsoft requires all cryptography suites that interoperate with Microsoft Windows to have a digital signature. Since only Microsoft-approved cryptography suites can be shipped with Windows, it is possible to keep export copies of this operating system in compliance with the Export Administration Regulations (EAR), which are enforced by the Bureau of Industry and Security (BIS). It was already known that Microsoft used two keys, a primary and a spare, either of which can create valid signatures. Upon releasing the Service Pack 5 for Windows NT 4, Microsoft had neglected to remove the debugging symbols in ADVAPI32.DLL, a library used for advanced Windows features such as Registry and Security. Andrew Fernandes, chief scientist with Cryptonym, found the primary key stored in the variable _KEY and the second key was labeled _NSAKEY. Fernandes published his discovery, touching off a flurry of speculation and conspiracy theories, including the possibility that the second key was owned by the United States National Security Agency (the NSA) and allowed the intelligence agency to subvert any Windows user's security. During a presentation at the Computers, Freedom and Privacy 2000 (CFP2000) conference, Duncan Campbell, senior research fellow at the Electronic Privacy Information Center (EPIC), mentioned the _NSAKEY controversy as an example of an outstanding issue related to security and surveillance.
https://en.wikipedia.org/wiki/Ronald%20Graham
Ronald Lewis Graham (October 31, 1935 – July 6, 2020) was an American mathematician credited by the American Mathematical Society as "one of the principal architects of the rapid development worldwide of discrete mathematics in recent years". He was president of both the American Mathematical Society and the Mathematical Association of America, and his honors included the Leroy P. Steele Prize for lifetime achievement and election to the National Academy of Sciences. After graduate study at the University of California, Berkeley, Graham worked for many years at Bell Labs and later at the University of California, San Diego. He did important work in scheduling theory, computational geometry, Ramsey theory, and quasi-randomness, and many topics in mathematics are named after him. He published six books and about 400 papers, and had nearly 200 co-authors, including many collaborative works with his wife Fan Chung and with Paul Erdős. Graham has been featured in Ripley's Believe It or Not! for being not only "one of the world's foremost mathematicians", but also an accomplished trampolinist and juggler. He served as president of the International Jugglers' Association. Biography Graham was born in Taft, California, on October 31, 1935; his father was an oil field worker and later merchant marine. Despite Graham's later interest in gymnastics, he was small and non-athletic. He grew up moving frequently between California and Georgia, skipping several grades of school in these moves, and never staying at any one school longer than a year. As a teenager, he moved to Florida with his then-divorced mother, where he went to but did not finish high school. Instead, at the age of 15 he won a Ford Foundation scholarship to the University of Chicago, where he learned gymnastics but took no mathematics courses. After three years, when his scholarship expired, he moved to the University of California, Berkeley, officially as a student of electrical engineering but also studying n
https://en.wikipedia.org/wiki/MEDLINE
MEDLINE (Medical Literature Analysis and Retrieval System Online, or MEDLARS Online) is a bibliographic database of life sciences and biomedical information. It includes bibliographic information for articles from academic journals covering medicine, nursing, pharmacy, dentistry, veterinary medicine, and health care. MEDLINE also covers much of the literature in biology and biochemistry, as well as fields such as molecular evolution. Compiled by the United States National Library of Medicine (NLM), MEDLINE is freely available on the Internet and searchable via PubMed and NLM's National Center for Biotechnology Information's Entrez system. History MEDLARS (Medical Literature Analysis and Retrieval System) is a computerised biomedical bibliographic retrieval system. It was launched by the National Library of Medicine in 1964 and was the first large-scale, computer-based, retrospective search service available to the general public. Initial development of MEDLARS Since 1879, the National Library of Medicine has published Index Medicus, a monthly guide to medical articles in thousands of journals. The huge volume of bibliographic citations was manually compiled. In 1957 the staff of the NLM started to plan the mechanization of the Index Medicus, prompted by a desire for a better way to manipulate all this information, not only for Index Medicus but also to produce subsidiary products. By 1960 a detailed specification was prepared, and by the spring of 1961, requests for proposals were sent out to 72 companies to develop the system. As a result, a contract was awarded to the General Electric Company. A Minneapolis-Honeywell 800 computer, which was to run MEDLARS, was delivered to the NLM in March 1963, and Frank Bradway Rogers (Director of the NLM 1949 to 1963) said at the time, "If all goes well, the January 1964 issue of Index Medicus will be ready to emerge from the system at the end of this year. It may be that this will mark the beginning of a new era in medi
https://en.wikipedia.org/wiki/Indicator%20function
In mathematics, an indicator function or a characteristic function of a subset of a set is a function that maps elements of the subset to one, and all other elements to zero. That is, if A is a subset of some set X, then the indicator function 1_A maps x to 1 if x ∈ A and to 0 otherwise, where 1_A is a common notation for the indicator function. Other common notations are I_A and χ_A. The indicator function of A is the Iverson bracket of the property of belonging to A; that is, 1_A(x) = [x ∈ A]. For example, the Dirichlet function is the indicator function of the rational numbers as a subset of the real numbers. Definition The indicator function of a subset A of a set X is a function 1_A : X → {0, 1} defined as 1_A(x) = 1 if x ∈ A, and 1_A(x) = 0 if x ∉ A. The Iverson bracket provides the equivalent notation [x ∈ A] to be used instead of 1_A(x). The function 1_A is sometimes denoted I_A, χ_A, K_A, or even just A. Notation and terminology The notation χ_A is also used to denote the characteristic function in convex analysis, which is defined as if using the reciprocal of the standard definition of the indicator function: it takes the value 0 on A and +∞ elsewhere. A related concept in statistics is that of a dummy variable. (This must not be confused with "dummy variables" as that term is usually used in mathematics, also called a bound variable.) The term "characteristic function" has an unrelated meaning in classic probability theory. For this reason, traditional probabilists use the term indicator function for the function defined here almost exclusively, while mathematicians in other fields are more likely to use the term characteristic function to describe the function that indicates membership in a set. In fuzzy logic and modern many-valued logic, predicates are the characteristic functions of a probability distribution. That is, the strict true/false valuation of the predicate is replaced by a quantity interpreted as the degree of truth. Basic properties The indicator or characteristic function of a subset A of some set X maps elements of X to the range {0, 1}. This mapping is surjective only when A is a non-empty proper subset of X. If A = X, then 1_A = 1 identically. By a similar argument, if A = ∅ then 1_A = 0.
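The definition lends itself to a one-line implementation. A minimal sketch; the set and helper names below are illustrative choices, not from the article.

```python
def indicator(A):
    """Return the indicator function 1_A of the set A: maps x to 1 if x in A, else 0."""
    return lambda x: 1 if x in A else 0

# Indicator of the even digits as a subset of the digits 0-9
evens = {0, 2, 4, 6, 8}
one_A = indicator(evens)
print(one_A(4), one_A(7))  # 1 0

# The Iverson bracket [x in A] gives the same valuation:
print(int(3 in evens))  # 0
```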
https://en.wikipedia.org/wiki/Hermann%20Joseph%20Muller
Hermann Joseph Muller (December 21, 1890 – April 5, 1967) was an American geneticist, educator, and Nobel laureate best known for his work on the physiological and genetic effects of radiation (mutagenesis), as well as his outspoken political beliefs. Muller frequently warned of long-term dangers of radioactive fallout from nuclear war and nuclear testing, which resulted in greater public scrutiny of these practices. Early life Muller was born in New York City, the son of Frances (Lyons) and Hermann Joseph Muller Sr., an artisan who worked with metals. Muller was a third-generation American whose father's ancestors were originally Catholic and came to the United States from Koblenz. His mother's family was of mixed Jewish (descended from Spanish and Portuguese Jews) and Anglican background, and had come from Britain. Among his first cousins was Alfred Kroeber (Ursula Le Guin's father), and among his first cousins once removed was Herbert J. Muller. As an adolescent, Muller attended a Unitarian church and considered himself a pantheist; in high school, he became an atheist. He excelled in the public schools. At 16, he entered Columbia College. From his first semester, he was interested in biology; he became an early convert of the Mendelian-chromosome theory of heredity—and the concept of genetic mutations and natural selection as the basis for evolution. He formed a biology club and also became a proponent of eugenics; the connections between biology and society would be his perennial concern. Muller earned a Bachelor of Arts degree in 1910. Muller remained at Columbia (the pre-eminent American zoology program at the time, due to E. B. Wilson and his students) for graduate school. He became interested in the Drosophila genetics work of Thomas Hunt Morgan's fly lab after undergraduate bottle washers Alfred Sturtevant and Calvin Bridges joined his biology club. In 1911–1912, he studied metabolism at Cornell University, but remained involved with Columbia. He follow
https://en.wikipedia.org/wiki/Gaussian%20year
A Gaussian year is defined as 365.2568983 days. It was adopted by Carl Friedrich Gauss as the length of the sidereal year in his studies of the dynamics of the solar system. A slightly different value is now accepted as the length of the sidereal year, and the value accepted by Gauss is given a special name. A particle of negligible mass, that orbits a body of 1 solar mass in this period, has a semi-major axis for its orbit of 1 astronomical unit by definition. The value is derived from Kepler's third law as 365.2568983 = 2π/k, where k is the Gaussian gravitational constant. See also References Types of year Astronomical coordinate systems
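The quoted value can be recovered directly from Kepler's third law. A minimal sketch; the numerical value of k is taken from the standard definition of the Gaussian gravitational constant, which the article does not itself state.

```python
import math

# Gaussian gravitational constant, in radians per day (standard defined value)
k = 0.01720209895

# Kepler's third law for a massless particle on a 1 AU orbit about 1 solar mass:
# orbital period T = 2*pi / k days
gaussian_year = 2 * math.pi / k
print(gaussian_year)  # ≈ 365.2568983 days
```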
https://en.wikipedia.org/wiki/S-box
In cryptography, an S-box (substitution-box) is a basic component of symmetric key algorithms which performs substitution. In block ciphers, they are typically used to obscure the relationship between the key and the ciphertext, thus ensuring Shannon's property of confusion. Mathematically, an S-box is a nonlinear vectorial Boolean function. In general, an S-box takes some number of input bits, m, and transforms them into some number of output bits, n, where n is not necessarily equal to m. An m×n S-box can be implemented as a lookup table with 2^m words of n bits each. Fixed tables are normally used, as in the Data Encryption Standard (DES), but in some ciphers the tables are generated dynamically from the key (e.g. the Blowfish and the Twofish encryption algorithms). Example One good example of a fixed table is the S-box from DES (S5), mapping 6-bit input into a 4-bit output: Given a 6-bit input, the 4-bit output is found by selecting the row using the outer two bits (the first and last bits), and the column using the inner four bits. For example, an input "011011" has outer bits "01" and inner bits "1101"; the corresponding output would be "1001". Analysis and properties When DES was first published in 1977, the design criteria of its S-boxes were kept secret to avoid compromising the technique of differential cryptanalysis (which was not yet publicly known). As a result, research into what made good S-boxes was sparse at the time. Rather, the eight S-boxes of DES were the subject of intense study for many years out of a concern that a backdoor (a vulnerability known only to its designers) might have been planted in the cipher. As the S-boxes are the only nonlinear part of the cipher, compromising those would compromise the entire cipher. The S-box design criteria were eventually published after the public rediscovery of differential cryptanalysis, showing that they had been carefully tuned to increase resistance against this specific attack such that
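The worked example above can be checked in a few lines. The table is the standard published DES S5 (4 rows × 16 columns); the helper name is our own.

```python
# DES S-box S5 as a 4x16 lookup table: rows selected by the outer two bits,
# columns by the inner four bits of the 6-bit input
S5 = [
    [2, 12, 4, 1, 7, 10, 11, 6, 8, 5, 3, 15, 13, 0, 14, 9],
    [14, 11, 2, 12, 4, 7, 13, 1, 5, 0, 15, 10, 3, 9, 8, 6],
    [4, 2, 1, 11, 10, 13, 7, 8, 15, 9, 12, 5, 6, 3, 0, 14],
    [11, 8, 12, 7, 1, 14, 2, 13, 6, 15, 0, 9, 10, 4, 5, 3],
]

def sbox_lookup(bits6):
    """Map a 6-bit input string to a 4-bit output string using S5."""
    row = int(bits6[0] + bits6[5], 2)  # outer two bits (first and last)
    col = int(bits6[1:5], 2)           # inner four bits
    return format(S5[row][col], "04b")

print(sbox_lookup("011011"))  # the article's worked example -> "1001"
```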
https://en.wikipedia.org/wiki/Internet%20Speculative%20Fiction%20Database
The Internet Speculative Fiction Database (ISFDB) is a database of bibliographic information on genres considered speculative fiction, including science fiction and related genres such as fantasy, alternate history, and horror fiction. The ISFDB is a volunteer effort, with the database being open for moderated editing and user contributions, and a wiki that allows the database editors to coordinate with each other. the site had catalogued 2,002,324 story titles from 232,816 authors. The code for the site has been used in books and tutorials as examples of database schema and organizing content. The ISFDB database and code are available under Creative Commons licensing. The site won the Wooden Rocket Award in the Best Directory Site category in 2005. Purpose The ISFDB database indexes speculative fiction (science fiction, fantasy, horror, and alternate history) authors, novels, short fiction, essays, publishers, awards, and magazines in print, electronic, and audio formats. It supports author pseudonyms, series, and cover art plus interior illustration credits, which are combined into integrated author, artist, and publisher bibliographies with brief biographical data. An ongoing effort is verification of publication contents and secondary bibliographic sources against the database, with the goals being data accuracy and to improve the coverage of speculative fiction to 100 percent. History Several speculative fiction author bibliographies were posted to the USENET newsgroup rec.arts.sf.written from 1984 to 1994 by Jerry Boyajian, Gregory J. E. Rawlins and John Wenn. A more or less standard bibliographic format was developed for these postings. Many of these bibliographies can still be found at The Linköping Science Fiction Archive. In 1993, a searchable database of awards information was developed by Al von Ruff. In 1994, John R. R. Leavitt created the Speculative Fiction Clearing House (SFCH). In late 1994, he asked for help in displaying awards information,
https://en.wikipedia.org/wiki/DNA%E2%80%93DNA%20hybridization
In genomics, DNA–DNA hybridization is a molecular biology technique that measures the degree of genetic similarity between pools of DNA sequences. It is usually used to determine the genetic distance between two organisms and has been used extensively in phylogeny and taxonomy. Method The DNA of one organism is labelled, then mixed with the unlabelled DNA to be compared against. The mixture is incubated to allow DNA strands to dissociate and then cooled to form renewed hybrid double-stranded DNA. Hybridized sequences with a high degree of similarity will bind more firmly, and require more energy to separate them: i.e. they separate when heated at a higher temperature than dissimilar sequences, a process known as "DNA melting". To assess the melting profile of the hybridized DNA, the double-stranded DNA is bound to a column or filter and the mixture is heated in small steps. At each step, the column or filter is washed; sequences that melt become single-stranded and wash off. The temperatures at which labelled DNA comes off reflect the amount of similarity between sequences (and the self-hybridization sample serves as a control). These results are combined to determine the degree of genetic similarity between organisms. One method was introduced for hybridizing large numbers of DNA samples against large numbers of DNA probes on a single membrane. These samples would have to be separated in their own lanes inside the membranes, and then the membrane would have to be rotated to a different angle, resulting in simultaneous hybridization with many different DNA probes. Uses When several species are compared, similarity values allow organisms to be arranged in a phylogenetic tree; it is therefore one possible approach to carrying out molecular systematics. In microbiology DNA–DNA hybridization (DDH) is used as a primary method to distinguish bacterial species as it is difficult to visually classify them accurately. This technique is not widely used on
https://en.wikipedia.org/wiki/Poisson%27s%20ratio
In materials science and solid mechanics, Poisson's ratio ν (nu) is a measure of the Poisson effect, the deformation (expansion or contraction) of a material in directions perpendicular to the specific direction of loading. The value of Poisson's ratio is the negative of the ratio of transverse strain to axial strain. For small values of these changes, ν is the amount of transversal elongation divided by the amount of axial compression. Most materials have Poisson's ratio values ranging between 0.0 and 0.5. For soft materials, such as rubber, where the bulk modulus is much higher than the shear modulus, Poisson's ratio is near 0.5. For open-cell polymer foams, Poisson's ratio is near zero, since the cells tend to collapse in compression. Many typical solids have Poisson's ratios in the range of 0.2–0.3. The ratio is named after the French mathematician and physicist Siméon Poisson. Origin Poisson's ratio is a measure of the Poisson effect, the phenomenon in which a material tends to expand in directions perpendicular to the direction of compression. Conversely, if the material is stretched rather than compressed, it usually tends to contract in the directions transverse to the direction of stretching. It is a common observation that when a rubber band is stretched, it becomes noticeably thinner. Again, the Poisson ratio will be the ratio of relative contraction to relative expansion and will have the same value as above. In certain rare cases, a material will actually shrink in the transverse direction when compressed (or expand when stretched), which will yield a negative value of the Poisson ratio. The Poisson's ratio of a stable, isotropic, linear elastic material must be between −1.0 and +0.5 because of the requirement for Young's modulus, the shear modulus and bulk modulus to have positive values. Most materials have Poisson's ratio values ranging between 0.0 and 0.5. A perfectly incompressible isotropic material deformed elastically at small strains would have a Poi
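In the small-strain linear regime the definition reduces to a one-liner. A minimal sketch; the function name and numbers are illustrative, not from the article.

```python
def transverse_strain(axial_strain, nu):
    """Poisson effect for an isotropic material at small strains:
    transverse strain = -nu * axial strain."""
    return -nu * axial_strain

# Rubber-like material (nu ~ 0.5): stretching 1% axially
# contracts the cross-section by about 0.5%
print(transverse_strain(0.01, 0.5))  # -0.005
```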
https://en.wikipedia.org/wiki/Rader%27s%20FFT%20algorithm
Rader's algorithm (1968), named for Charles M. Rader of MIT Lincoln Laboratory, is a fast Fourier transform (FFT) algorithm that computes the discrete Fourier transform (DFT) of prime sizes by re-expressing the DFT as a cyclic convolution (the other algorithm for FFTs of prime sizes, Bluestein's algorithm, also works by rewriting the DFT as a convolution). Since Rader's algorithm only depends upon the periodicity of the DFT kernel, it is directly applicable to any other transform (of prime order) with a similar property, such as a number-theoretic transform or the discrete Hartley transform. The algorithm can be modified to gain a factor of two savings for the case of DFTs of real data, using a slightly modified re-indexing/permutation to obtain two half-size cyclic convolutions of real data; an alternative adaptation for DFTs of real data uses the discrete Hartley transform. Winograd extended Rader's algorithm to include prime-power DFT sizes N = p^m, and today Rader's algorithm is sometimes described as a special case of Winograd's FFT algorithm, also called the multiplicative Fourier transform algorithm (Tolimieri et al., 1997), which applies to an even larger class of sizes. However, for composite sizes such as prime powers, the Cooley–Tukey FFT algorithm is much simpler and more practical to implement, so Rader's algorithm is typically only used for large-prime base cases of Cooley–Tukey's recursive decomposition of the DFT. Algorithm Begin with the definition of the discrete Fourier transform: X_k = Σ_{n=0}^{N−1} x_n e^{−2πink/N}, k = 0, …, N−1. If N is a prime number, then the set of non-zero indices n = 1, …, N−1 forms a group under multiplication modulo N. One consequence of the number theory of such groups is that there exists a generator of the group (sometimes called a primitive root, which can be found by exhaustive search or slightly better algorithms). This generator is an integer g such that n = g^q (mod N) for any non-zero index n and for a unique q ∈ {0, …, N−2} (forming a bijection from q to non-zero n). Similarly, k = g^{−p} (mod N) for any non-zero index k and a unique p ∈ {0, …, N−2}.
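The re-indexing described above can be demonstrated end to end. This is our own sketch, not reference code: the length-(N−1) cyclic convolution is evaluated naively, so it shows that the permutation is correct but not the speedup (a real implementation would compute the convolution with FFTs).

```python
import cmath

def find_generator(N):
    """Exhaustive search for a primitive root modulo the odd prime N."""
    for g in range(2, N):
        if len({pow(g, q, N) for q in range(N - 1)}) == N - 1:
            return g
    raise ValueError("N must be an odd prime")

def rader_dft(x):
    """DFT of odd-prime length N via Rader's cyclic-convolution re-indexing."""
    N = len(x)
    g = find_generator(N)
    ginv = pow(g, N - 2, N)                               # g^{-1} mod N (Fermat)
    a = [x[pow(g, q, N)] for q in range(N - 1)]           # permuted inputs a_q = x_{g^q}
    b = [cmath.exp(-2j * cmath.pi * pow(ginv, q, N) / N)  # permuted kernel b_q = e^{-2*pi*i*g^{-q}/N}
         for q in range(N - 1)]
    X = [sum(x)] + [0] * (N - 1)                          # DC term X_0 is the plain sum
    for p in range(N - 1):                                # cyclic convolution (a * b)_p
        conv = sum(a[q] * b[(p - q) % (N - 1)] for q in range(N - 1))
        X[pow(ginv, p, N)] = x[0] + conv                  # un-permute: output index k = g^{-p}
    return X
```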
https://en.wikipedia.org/wiki/Chirp%20Z-transform
The chirp Z-transform (CZT) is a generalization of the discrete Fourier transform (DFT). While the DFT samples the Z plane at uniformly-spaced points along the unit circle, the chirp Z-transform samples along spiral arcs in the Z-plane, corresponding to straight lines in the S plane. The DFT, real DFT, and zoom DFT can be calculated as special cases of the CZT. Specifically, the chirp Z transform calculates the Z transform at a finite number of points z_k along a logarithmic spiral contour, defined as: z_k = A·W^{−k}, k = 0, 1, …, M−1, where A is the complex starting point, W is the complex ratio between points, and M is the number of points to calculate. Like the DFT, the chirp Z-transform can be computed in O(n log n) operations where n = max(M, N). An O(N log N) algorithm for the inverse chirp Z-transform (ICZT) was described in 2003, and in 2019. Bluestein's algorithm Bluestein's algorithm expresses the CZT as a convolution and implements it efficiently using FFT/IFFT. As the DFT is a special case of the CZT, this allows the efficient calculation of discrete Fourier transform (DFT) of arbitrary sizes, including prime sizes. (The other algorithm for FFTs of prime sizes, Rader's algorithm, also works by rewriting the DFT as a convolution.) It was conceived in 1968 by Leo Bluestein. Bluestein's algorithm can be used to compute more general transforms than the DFT, based on the (unilateral) z-transform (Rabiner et al., 1969). Recall that the DFT is defined by the formula X_k = Σ_{n=0}^{N−1} x_n e^{−2πink/N}, k = 0, …, N−1. If we replace the product nk in the exponent by the identity nk = (k² + n² − (k − n)²)/2, we thus obtain: X_k = e^{−πik²/N} Σ_{n=0}^{N−1} (x_n e^{−πin²/N}) e^{πi(k−n)²/N}, k = 0, …, N−1. This summation is precisely a convolution of the two sequences a_n and b_n defined by: a_n = x_n e^{−πin²/N}, b_n = e^{πin²/N}, with the output of the convolution multiplied by N phase factors b_k*. That is: X_k = b_k* Σ_{n=0}^{N−1} a_n b_{k−n}, k = 0, …, N−1. This convolution, in turn, can be performed with a pair of FFTs (plus the pre-computed FFT of the complex chirp b_n) via the convolution theorem. The key point is that these FFTs are not of the same length N: such a convolution can be computed exactly from FFTs only by zero-padding it to a length greater than or equal to 2N−1.
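The chirp identity can be verified numerically. In this sketch (our own code) the convolution is evaluated directly rather than with zero-padded FFTs, so it demonstrates correctness of the algebra, not the O(n log n) cost:

```python
import cmath

def bluestein_dft(x):
    """DFT of arbitrary length N via Bluestein's chirp identity
    nk = (k^2 + n^2 - (k - n)^2) / 2; convolution done naively here."""
    N = len(x)
    b = lambda m: cmath.exp(1j * cmath.pi * m * m / N)  # chirp b_m = e^{+pi i m^2 / N}
    a = [x[n] / b(n) for n in range(N)]                 # a_n = x_n e^{-pi i n^2 / N}
    # X_k = conj(b_k) * sum_n a_n * b_{k-n}; note b depends only on m^2, so b_{-m} = b_m,
    # and since |b_k| = 1 we can write conj(b_k) as 1 / b(k)
    return [sum(a[n] * b(k - n) for n in range(N)) / b(k) for k in range(N)]
```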
https://en.wikipedia.org/wiki/Prime-factor%20FFT%20algorithm
The prime-factor algorithm (PFA), also called the Good–Thomas algorithm (1958/1963), is a fast Fourier transform (FFT) algorithm that re-expresses the discrete Fourier transform (DFT) of a size N = N1N2 as a two-dimensional N1×N2 DFT, but only for the case where N1 and N2 are relatively prime. These smaller transforms of size N1 and N2 can then be evaluated by applying PFA recursively or by using some other FFT algorithm. PFA should not be confused with the mixed-radix generalization of the popular Cooley–Tukey algorithm, which also subdivides a DFT of size N = N1N2 into smaller transforms of size N1 and N2. The latter algorithm can use any factors (not necessarily relatively prime), but it has the disadvantage that it also requires extra multiplications by roots of unity called twiddle factors, in addition to the smaller transforms. On the other hand, PFA has the disadvantages that it only works for relatively prime factors (e.g. it is useless for power-of-two sizes) and that it requires more complicated re-indexing of the data based on the additive group isomorphisms. Note, however, that PFA can be combined with mixed-radix Cooley–Tukey, with the former factorizing N into relatively prime components and the latter handling repeated factors. PFA is also closely related to the nested Winograd FFT algorithm, where the latter performs the decomposed N1 by N2 transform via more sophisticated two-dimensional convolution techniques. Some older papers therefore also call Winograd's algorithm a PFA FFT. (Although the PFA is distinct from the Cooley–Tukey algorithm, Good's 1958 work on the PFA was cited as inspiration by Cooley and Tukey in their 1965 paper, and there was initially some confusion about whether the two algorithms were different. In fact, it was the only prior FFT work cited by them, as they were not then aware of the earlier research by Gauss and others.) Algorithm Let X(z) = Σ_{n=0}^{N−1} x_n z^n be a polynomial and ω a principal Nth root of unity. We define the DFT of x = (x_0, …, x_{N−1}) as the N-tuple x̂ = (X(ω^0), X(ω^1), …, X(ω^{N−1})).
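The re-indexing can be shown concretely. This sketch (our own code; it uses Python 3.8+ three-argument pow for modular inverses) maps the input through Good's additive mapping, performs the two small DFTs with no twiddle factors, and un-maps the output through the Chinese remainder theorem:

```python
import cmath

def dft(v):
    """Naive DFT, used for the small sub-transforms and as a reference."""
    L = len(v)
    return [sum(v[n] * cmath.exp(-2j * cmath.pi * n * k / L) for n in range(L))
            for k in range(L)]

def pfa_dft(x, N1, N2):
    """Size N = N1*N2 DFT (gcd(N1, N2) = 1) as an N1 x N2 two-dimensional DFT
    with no twiddle factors, via Good's input map and a CRT output map."""
    N = N1 * N2
    # Good's mapping: row n1, column n2 holds x[(n1*N2 + n2*N1) mod N]
    grid = [[x[(n1 * N2 + n2 * N1) % N] for n2 in range(N2)] for n1 in range(N1)]
    # DFT along each row (size N2), then along each column (size N1)
    rows = [dft(row) for row in grid]
    cols = [dft([rows[n1][k2] for n1 in range(N1)]) for k2 in range(N2)]
    # CRT output map: (k1, k2) -> k with k = k1 (mod N1), k = k2 (mod N2)
    mu1, mu2 = pow(N2, -1, N1), pow(N1, -1, N2)
    X = [0] * N
    for k1 in range(N1):
        for k2 in range(N2):
            X[(k1 * N2 * mu1 + k2 * N1 * mu2) % N] = cols[k2][k1]
    return X
```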
https://en.wikipedia.org/wiki/Cochlear%20implant
A cochlear implant (CI) is a surgically implanted neuroprosthesis that provides a person who has moderate-to-profound sensorineural hearing loss with sound perception. With the help of therapy, cochlear implants may allow for improved speech understanding in both quiet and noisy environments. A CI bypasses acoustic hearing by direct electrical stimulation of the auditory nerve. Through everyday listening and auditory training, cochlear implants allow both children and adults to learn to interpret those signals as speech and sound. The implant has two main components. The outside component is generally worn behind the ear, but could also be attached to clothing, for example, in young children. This component, the sound processor, contains microphones, electronics that include digital signal processor (DSP) chips, battery, and a coil that transmits a signal to the implant across the skin. The inside component, the actual implant, has a coil to receive signals, electronics, and an array of electrodes which is placed into the cochlea, which stimulate the cochlear nerve. The surgical procedure is performed under general anesthesia. Surgical risks are minimal and most individuals will undergo outpatient surgery and go home the same day. However, some individuals will experience dizziness, and on rare occasions, tinnitus or facial nerve bruising. From the early days of implants in the 1970s and the 1980s, speech perception via an implant has steadily increased. More than 200,000 people in the United States had received a CI through 2019. Many users of modern implants gain reasonable to good hearing and speech perception skills post-implantation, especially when combined with lipreading. One of the challenges that remain with these implants is that hearing and speech understanding skills after implantation show a wide range of variation across individual implant users. Factors such as age of implantation, parental involvement and education level, duration and cause of he
https://en.wikipedia.org/wiki/LaTeX%20Project%20Public%20License
The LaTeX Project Public License (LPPL) is a software license originally written for the LaTeX system. Software distributed under the terms of the LPPL can be regarded as free software; however, it is not copylefted. Besides the LaTeX base system, the LPPL is also used for most third-party LaTeX packages. Software projects other than LaTeX rarely use it. Unique features of the license The LPPL grew from Donald Knuth's original license for TeX, which states that the source code for TeX may be used for any purpose but a system built with it can only be called 'TeX' if it strictly conforms to his canonical program. The incentive for this provision was to ensure that documents written for TeX will be readable for the foreseeable future and TeX and its extensions will still compile documents written from the early 1980s to produce output exactly as intended. Quoting Frank Mittelbach, the main author of the license: "LPPL attempts to preserve the fact that something like LaTeX is a language which is used for communication, that is if you write a LaTeX document you expect to be able to send it to me and to work at my end like it does at yours". The most unusual part of the LPPL and equally the most controversial used to be the 'filename clause': You must not distribute the modified file with the filename of the original file. This feature made some people deny that the LPPL is a free software license. In particular the Debian community considered in 2003 excluding LaTeX from its core distribution because of this. However, version 1.3 of the LPPL has weakened this restriction. Now it is only necessary that modified components identify themselves "clearly and unambiguously" as modified versions, both in the source and also when called in some sort of interactive mode. A name change of the work is still recommended, however. In order to provide project continuity in the case that the copyright holder no longer wishes to maintain the work, maintenance can be passed on
https://en.wikipedia.org/wiki/List%20of%20mathematical%20shapes
Following is a list of some mathematically well-defined shapes. Algebraic curves Cubic plane curve Quartic plane curve Rational curves Degree 2 Conic sections Unit circle Unit hyperbola Degree 3 Folium of Descartes Cissoid of Diocles Conchoid of de Sluze Right strophoid Semicubical parabola Serpentine curve Trident curve Trisectrix of Maclaurin Tschirnhausen cubic Witch of Agnesi Degree 4 Ampersand curve Bean curve Bicorn Bow curve Bullet-nose curve Cruciform curve Deltoid curve Devil's curve Hippopede Kampyle of Eudoxus Kappa curve Lemniscate of Booth Lemniscate of Gerono Lemniscate of Bernoulli Limaçon Cardioid Limaçon trisectrix Trifolium curve Degree 5 Quintic of l'Hospital Degree 6 Astroid Atriphtaloid Nephroid Quadrifolium Families of variable degree Epicycloid Epispiral Epitrochoid Hypocycloid Lissajous curve Poinsot's spirals Rational normal curve Rose curve Curves of genus one Bicuspid curve Cassini oval Cassinoide Cubic curve Elliptic curve Watt's curve Curves with genus greater than one Butterfly curve Elkies trinomial curves Hyperelliptic curve Klein quartic Classical modular curve Bolza surface Macbeath surface Curve families with variable genus Polynomial lemniscate Fermat curve Sinusoidal spiral Superellipse Hurwitz surface Transcendental curves Bowditch curve Brachistochrone Butterfly curve Catenary Clélies Cochleoid Cycloid Horopter Isochrone Isochrone of Huygens (Tautochrone) Isochrone of Leibniz Isochrone of Varignon Lamé curve Pursuit curve Rhumb line Spirals Archimedean spiral Cornu spiral Cotes' spiral Fermat's spiral Galileo's spiral Hyperbolic spiral Lituus Logarithmic spiral Nielsen's spiral Syntractrix Tractrix Trochoid Piecewise constructions Bézier curve Splines B-spline Nonuniform rational B-spline Ogee Loess curve Lowess Polygonal curve Maurer rose Reuleaux triangle Bézier triangle Curves generated by other curves Caustic including Catacaustic and Diacaustic Cissoid Conchoid Evolute Glissette Inverse curve Inv
https://en.wikipedia.org/wiki/Difference%20quotient
In single-variable calculus, the difference quotient is usually the name for the expression which, when taken to the limit as h approaches 0, gives the derivative of the function f. The name of the expression stems from the fact that it is the quotient of the difference of values of the function by the difference of the corresponding values of its argument (the latter is (x + h) - x = h in this case). The difference quotient is a measure of the average rate of change of the function over an interval (in this case, an interval of length h). The limit of the difference quotient (i.e., the derivative) is thus the instantaneous rate of change. By a slight change in notation (and viewpoint), for an interval [a, b], the difference quotient is called the mean (or average) value of the derivative of f over the interval [a, b]. This name is justified by the mean value theorem, which states that for a differentiable function f, its derivative f′ reaches its mean value at some point in the interval. Geometrically, this difference quotient measures the slope of the secant line passing through the points with coordinates (a, f(a)) and (b, f(b)). Difference quotients are used as approximations in numerical differentiation, but they have also been the subject of criticism in this application. Difference quotients may also find relevance in applications involving time discretization, where the width of the time step is used for the value of h. The difference quotient is sometimes also called the Newton quotient (after Isaac Newton) or Fermat's difference quotient (after Pierre de Fermat). Overview The typical notion of the difference quotient discussed above is a particular case of a more general concept. The primary vehicle of calculus and other higher mathematics is the function. Its "input value" is its argument, usually a point ("P") expressible on a graph. The difference between two points, themselves, is known as their Delta (ΔP), as is the difference in their function re
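The forward difference quotient described above can be sketched numerically; the function and step sizes below are illustrative choices, not part of the article:

```python
def difference_quotient(f, x, h):
    """(f(x + h) - f(x)) / h: the average rate of change of f over [x, x + h]."""
    return (f(x + h) - f(x)) / h

# For f(x) = x**2 at x = 3 the quotient equals 6 + h in exact arithmetic,
# so shrinking h drives it toward the derivative f'(3) = 6.
square = lambda t: t ** 2
for h in (1.0, 0.1, 0.001):
    print(h, difference_quotient(square, 3.0, h))
```

Each printed value sits at distance roughly h from 6, illustrating that the limit of the difference quotient is the instantaneous rate of change.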
https://en.wikipedia.org/wiki/Traceability%20matrix
In software development, a traceability matrix (TM) is a document, usually in the form of a table, used to assist in determining the completeness of a relationship by correlating any two baselined documents using a many-to-many relationship comparison. It is often used with high-level requirements (these often consist of marketing requirements) and detailed requirements of the product to the matching parts of high-level design, detailed design, test plan, and test cases. A requirements traceability matrix may be used to check if the current project requirements are being met, and to help in the creation of a request for proposal, software requirements specification, various deliverable documents, and project plan tasks. Common usage is to take the identifier for each of the items of one document and place them in the left column. The identifiers for the other document are placed across the top row. When an item in the left column is related to an item across the top, a mark is placed in the intersecting cell. The number of relationships is added up for each row and each column. This value indicates the mapping of the two items. Zero values indicate that no relationship exists; it must then be determined whether one should be made. Large values imply that the relationship is too complex and should be simplified. To ease the creation of traceability matrices, it is advisable to add the relationships to the source documents for both backward and forward traceability. That way, when an item is changed in one baselined document, it is easy to see what needs to be changed in the other. Sample traceability matrix See also Requirements traceability Software engineering List of requirements engineering tools References External links Bidirectional Requirements Traceability by Linda Westfall Why Software Requirements Traceability Remains a Challenge by Andrew Kannenberg and Dr. Hossein Saiedian Software testing Software requirements
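The row-and-column counting described above can be sketched as follows; the requirement and test-case identifiers are hypothetical:

```python
# Hypothetical identifiers taken from two baselined documents.
requirements = ["R1", "R2", "R3"]
test_cases = ["T1", "T2", "T3", "T4"]
# Each pair is a mark in an intersecting cell: "this test covers this requirement".
links = {("R1", "T1"), ("R1", "T2"), ("R2", "T3")}

def relationship_counts(rows, cols, links):
    """Add up the marks along each row and each column of the matrix."""
    row_totals = {r: sum((r, c) in links for c in cols) for r in rows}
    col_totals = {c: sum((r, c) in links for r in rows) for c in cols}
    return row_totals, col_totals

row_totals, col_totals = relationship_counts(requirements, test_cases, links)
# A zero total flags a missing relationship: here R3 has no covering test
# case, and T4 is not tied to any requirement.
print(row_totals)  # {'R1': 2, 'R2': 1, 'R3': 0}
print(col_totals)  # {'T1': 1, 'T2': 1, 'T3': 1, 'T4': 0}
```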
https://en.wikipedia.org/wiki/Baroreceptor
Baroreceptors (or archaically, pressoreceptors) are sensors located in the carotid sinus (at the bifurcation of the common carotid artery into the external and internal carotids) and in the aortic arch. They sense the blood pressure and relay the information to the brain, so that a proper blood pressure can be maintained. Baroreceptors are a type of mechanoreceptor sensory neuron that is excited by stretching of the blood vessel. Thus, increases in blood vessel pressure trigger increased action potential generation rates and provide information to the central nervous system. This sensory information is used primarily in autonomic reflexes that in turn influence the heart's cardiac output and vascular smooth muscle to influence vascular resistance. Baroreceptors act immediately as part of a negative feedback system called the baroreflex: as soon as there is a change from the usual mean arterial blood pressure, they return the pressure toward a normal level. These reflexes help regulate short-term blood pressure. The solitary nucleus in the medulla oblongata of the brain recognizes changes in the firing rate of action potentials from the baroreceptors, and influences cardiac output and systemic vascular resistance. Baroreceptors can be divided into two categories based on the type of blood vessel in which they are located: high-pressure arterial baroreceptors and low-pressure baroreceptors (also known as cardiopulmonary or volume receptors). Arterial baroreceptors Arterial baroreceptors are stretch receptors that are stimulated by distortion of the arterial wall when pressure changes. The baroreceptors can identify changes in both the average blood pressure and the rate of change in pressure with each arterial pulse. Action potentials triggered in the baroreceptor ending are then directly conducted to the brainstem where central terminations (synapses) transmit this information to neurons within the solitary nucleus, which lies in the medulla. Reflex responses from
https://en.wikipedia.org/wiki/Projective%20space
In mathematics, the concept of a projective space originated from the visual effect of perspective, where parallel lines seem to meet at infinity. A projective space may thus be viewed as the extension of a Euclidean space, or, more generally, an affine space with points at infinity, in such a way that there is one point at infinity of each direction of parallel lines. This definition of a projective space has the disadvantage of not being isotropic, having two different sorts of points, which must be considered separately in proofs. Therefore, other definitions are generally preferred. There are two classes of definitions. In synthetic geometry, point and line are primitive entities that are related by the incidence relation "a point is on a line" or "a line passes through a point", which is subject to the axioms of projective geometry. For some such set of axioms, the projective spaces that are defined have been shown to be equivalent to those resulting from the following definition, which is more often encountered in modern textbooks. Using linear algebra, a projective space of dimension is defined as the set of the vector lines (that is, vector subspaces of dimension one) in a vector space of dimension . Equivalently, it is the quotient set of by the equivalence relation "being on the same vector line". As a vector line intersects the unit sphere of in two antipodal points, projective spaces can be equivalently defined as spheres in which antipodal points are identified. A projective space of dimension 1 is a projective line, and a projective space of dimension 2 is a projective plane. Projective spaces are widely used in geometry, as allowing simpler statements and simpler proofs. For example, in affine geometry, two distinct lines in a plane intersect in at most one point, while, in projective geometry, they intersect in exactly one point. Also, there is only one class of conic sections, which can be distinguished only by their intersections with the li
https://en.wikipedia.org/wiki/MLDonkey
MLDonkey is an open-source, multi-protocol, peer-to-peer file sharing application that runs as a back-end server application on many platforms. It can be controlled through a user interface provided by one of many separate front-ends, including a Web interface, telnet interface and over a dozen native client programs. Originally a Linux client for the eDonkey protocol, it now runs on many Unix-like systems, OS X, Microsoft Windows and MorphOS and supports numerous peer-to-peer protocols. It is written in OCaml, with some C and some assembly. History Development of the software began in late 2001. The original developer of MLDonkey is Fabrice Le Fessant from INRIA. It was originally conceived as an effort to spread the use of OCaml in the open source community. In January 2003, Slyck.com reported brief friction between MLDonkey developers and the official Overnet MetaMachine developers, who denounced MLDonkey as a "rogue client", allegedly for incorrect behavior on the network. Versions before 3.0 have a known security vulnerability that allows an attacker with access to the web interface to read any file on the file system. Features Features of the MLDonkey core: Peer to peer (p2p) program that supports the following network protocols, either partially or completely: FastTrack (Kazaa) eDonkey network (with Overnet and Kad network) BitTorrent (with Mainline DHT) Direct Connect HTTP/FTP Multiple control interfaces: telnet, web interface, third party GUIs. Written in the OCaml programming language and licensed under the GPL-2.0-or-later license, the application separates the user interface (which can be a web browser, telnet, or a third-party GUI application) from the code that interacts with the peer-to-peer networks. MLDonkey is able to connect simultaneously to different peers using different network protocols. In addition it can download and merge parts of one file from different network protocols, although this feature is currently documented as ex
https://en.wikipedia.org/wiki/Hampshire%20College%20Summer%20Studies%20in%20Mathematics
The Hampshire College Summer Studies in Mathematics (HCSSiM) is an American residential program for mathematically talented high school students. The program has been conducted each summer since 1971, with the exceptions of 1981 and 1996, and has more than 1500 alumni. Due to the Coronavirus pandemic, the 2020 Summer Studies ran online for a shortened program of four weeks. The program was created by and is still headed by David Kelly, a professor emeritus of mathematics at Hampshire College. Background The program is housed at Hampshire College in Amherst, Massachusetts, and generally runs for six weeks from early July until mid-August. The program itself consists of lectures, study sessions, math workshops (general-knowledge classes), maxi-courses (three-week classes run by the senior staff members), and mini-courses (specialized shorter classes). On a typical day, students spend four hours in the morning in class, have lunch together with the faculty, and then have several hours to use at their leisure. During this "down time" students and faculty members often host quasis, where they participate in an activity as a small group, such as juggling or making sushi. They return for the "Prime Time Theorem" (an hour-long talk on an interesting piece of mathematics given by a faculty member or a visitor), have dinner, and then spend three hours in a problem solving session. One of the instructors blogged the content of her class. Many students go on to professional careers in mathematics. An occasional publication has resulted from work done at the program. Well-known alumni of the program include two MacArthur Fellows, Eric Lander and Erik Winfree, as well as Lisa Randall, Dana Randall, and Eugene Volokh. Many alumni return to the campus for a few days around Yellow Pig's Day (July 17) of each year. This observance was formalized for 2006 in "Yellow Pig Math Days," which was conducted in observance of 2006 being the 34th offering of the HCSSiM Program (34 be
https://en.wikipedia.org/wiki/Transducer
A transducer is a device that converts energy from one form to another. Usually a transducer converts a signal in one form of energy to a signal in another. Transducers are often employed at the boundaries of automation, measurement, and control systems, where electrical signals are converted to and from other physical quantities (energy, force, torque, light, motion, position, etc.). The process of converting one form of energy to another is known as transduction. Types Mechanical transducers, so-called as they convert physical quantities into mechanical outputs or vice versa; Electrical transducers however convert physical quantities into electrical outputs or signals. Examples of these are: a thermocouple that changes temperature differences into a small voltage; a linear variable differential transformer (LVDT), used to measure displacement (position) changes by means of electrical signals. Sensors, actuators and transceivers Transducers can be categorized by which direction information passes through them: A sensor is a transducer that receives and responds to a signal or stimulus from a physical system. It produces a signal, which represents information about the system, which is used by some type of telemetry, information or control system. An actuator is a device that is responsible for moving or controlling a mechanism or system. It is controlled by a signal from a control system or manual control. It is operated by a source of energy, which can be mechanical force, electrical current, hydraulic fluid pressure, or pneumatic pressure, and converts that energy into motion. An actuator is the mechanism by which a control system acts upon an environment. The control system can be simple (a fixed mechanical or electrical system), software-based (e.g. a printer driver, robot control system), a human, or any other input. Bidirectional transducers can convert physical phenomena to electrical signals and electrical signals into physical phenomena. An exa
https://en.wikipedia.org/wiki/Traffic%20shaping
Traffic shaping is a bandwidth management technique used on computer networks which delays some or all datagrams to bring them into compliance with a desired traffic profile. Traffic shaping is used to optimize or guarantee performance, improve latency, or increase usable bandwidth for some kinds of packets by delaying other kinds. It is often confused with traffic policing, the distinct but related practice of packet dropping and packet marking. The most common type of traffic shaping is application-based traffic shaping. In application-based traffic shaping, fingerprinting tools are first used to identify applications of interest, which are then subject to shaping policies. Some controversial cases of application-based traffic shaping include bandwidth throttling of peer-to-peer file sharing traffic. Many application protocols use encryption to circumvent application-based traffic shaping. Another type of traffic shaping is route-based traffic shaping. Route-based traffic shaping is conducted based on previous-hop or next-hop information. Functionality If a link becomes utilized to the point where there is a significant level of congestion, latency can rise substantially. Traffic shaping can be used to prevent this from occurring and keep latency in check. Traffic shaping provides a means to control the volume of traffic being sent into a network in a specified period (bandwidth throttling), or the maximum rate at which the traffic is sent (rate limiting), or more complex criteria such as the generic cell rate algorithm. This control can be accomplished in many ways and for many reasons; however, traffic shaping is always achieved by delaying packets. Traffic shaping is commonly applied at the network edges to control traffic entering the network, but can also be applied by the traffic source (for example, computer or network card) or by an element in the network. Uses Traffic shaping is sometimes applied by traffic sources to ensure the traffic they send complies
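Shaping by delaying packets can be sketched with a token bucket, one common way of enforcing a rate-plus-burst traffic profile; the class below is an illustrative simulation with an explicit clock, not a description of any particular implementation:

```python
class TokenBucketShaper:
    """Schedule packets so the outgoing rate never exceeds `rate` bytes/s,
    with bursts of up to `burst` bytes (an illustrative simulation)."""

    def __init__(self, rate, burst):
        self.rate = rate      # sustained rate, bytes per second
        self.burst = burst    # bucket capacity, bytes
        self.tokens = burst   # start with a full bucket
        self.last = 0.0       # simulated time of the last update

    def send_time(self, now, size):
        """Return the time at which a `size`-byte packet arriving at `now`
        may actually be sent."""
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return now                         # conforming: sent immediately
        # Non-conforming: delay until enough tokens have accumulated.
        wait = (size - self.tokens) / self.rate
        self.tokens = 0.0
        self.last = now + wait
        return now + wait
```

Packets within the burst allowance go out at once; anything beyond it is scheduled later. This delaying behavior is exactly what distinguishes shaping from policing, which would drop or mark the excess instead.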
https://en.wikipedia.org/wiki/Hall%20effect%20sensor
A Hall effect sensor (or simply Hall sensor) is a type of sensor which detects the presence and magnitude of a magnetic field using the Hall effect. The output voltage of a Hall sensor is directly proportional to the strength of the field. It is named for the American physicist Edwin Hall. Hall sensors are used for proximity sensing, positioning, speed detection, and current sensing applications. Frequently, a Hall sensor is combined with threshold detection to act as a binary switch. Commonly seen in industrial applications such as the pictured pneumatic cylinder, they are also used in consumer equipment; for example, some computer printers use them to detect missing paper and open covers. Some 3D printers use them to measure filament thickness. Hall sensors are commonly used to time the speed of wheels and shafts, such as for internal combustion engine ignition timing, tachometers and anti-lock braking systems. They are used in brushless DC electric motors to detect the position of the permanent magnet. In the pictured wheel with two equally spaced magnets, the voltage from the sensor peaks twice for each revolution. This arrangement is commonly used to regulate the speed of disk drives. Principles In a Hall sensor, a current is applied to a thin strip of metal. In the presence of a magnetic field perpendicular to the direction of the current, the charge carriers are deflected by the Lorentz force, producing a difference in electric potential (voltage) between the two sides of the strip. This voltage difference (the Hall voltage) is proportional to the strength of the magnetic field. Hall effect sensors respond to static (non-changing) magnetic fields, in contrast to inductive sensors, which respond only to changes in fields. Characteristics Hall sensors are capable of measuring a wide range of magnetic fields, and are sensitive to both the magnitude and orientation of the field. When used as electronic switches, they are less prone to mechanical failur
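The proportionality between output voltage and field strength can be illustrated with the standard Hall-voltage relation V_H = I·B / (n·q·t) for a conducting strip of thickness t and carrier density n; the formula is standard physics but is not stated in the article itself, and the numeric values below are illustrative:

```python
ELEMENTARY_CHARGE = 1.602e-19  # carrier charge q, in coulombs

def hall_voltage(current_a, field_t, carrier_density, thickness_m):
    """Hall voltage across a strip: V_H = I * B / (n * q * t).
    Doubling the field doubles the voltage, matching the stated
    proportionality between sensor output and field strength."""
    return current_a * field_t / (carrier_density * ELEMENTARY_CHARGE * thickness_m)

# Illustrative values: 10 mA through a 0.1 mm strip with n = 1e25 carriers/m^3.
v_half = hall_voltage(0.01, 0.5, 1e25, 1e-4)
v_full = hall_voltage(0.01, 1.0, 1e25, 1e-4)
print(v_full / v_half)  # 2.0: the output is linear in the field
```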
https://en.wikipedia.org/wiki/List%20of%20mathematics%20reference%20tables
See also: List of reference tables Mathematics List of mathematical topics List of statistical topics List of mathematical functions List of mathematical theorems List of mathematical proofs List of matrices List of numbers List of relativistic equations List of small groups Mathematical constants Sporadic group Table of bases Table of Clebsch-Gordan coefficients Table of derivatives Table of divisors Table of integrals Table of mathematical symbols Table of prime factors Taylor series Timeline of mathematics Trigonometric identities Truth table Reference tables List
https://en.wikipedia.org/wiki/Transposition%20table
A transposition table is a cache of previously seen positions, and associated evaluations, in a game tree generated by a computer game playing program. If a position recurs via a different sequence of moves, the value of the position is retrieved from the table, avoiding re-searching the game tree below that position. Transposition tables are primarily useful in perfect-information games (where the entire state of the game is known to all players at all times). The usage of transposition tables is essentially memoization applied to the tree search and is a form of dynamic programming. Transposition tables are typically implemented as hash tables encoding the current board position as the hash index. The number of possible positions that may occur in a game tree is an exponential function of depth of search, and can be thousands to millions or even much greater. Transposition tables may therefore consume most of available system memory and are usually most of the memory footprint of game playing programs. Functionality Game-playing programs work by analyzing millions of positions that could arise in the next few moves of the game. Typically, these programs employ strategies resembling depth-first search, which means that they do not keep track of all the positions analyzed so far. In many games, it is possible to reach a given position in more than one way. These are called transpositions. In chess, for example, the sequence of moves 1. d4 Nf6 2. c4 g6 (see algebraic chess notation) has 4 possible transpositions, since either player may swap their move order. In general, after n moves, an upper limit on the possible transpositions is (n!)2. Although many of these are illegal move sequences, it is still likely that the program will end up analyzing the same position several times. To avoid this problem, transposition tables are used. Such a table is a hash table of each of the positions analyzed so far up to a certain depth. On encountering a new position, th
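The hash-table mechanism can be sketched with Zobrist hashing, the standard way of turning a board position into a key; the board encoding below (64 squares, 12 piece types) is an illustrative chess-like choice:

```python
import random

random.seed(0)  # fixed seed so the keys are reproducible in this sketch
N_SQUARES, N_PIECES = 64, 12
# One random 64-bit string per (square, piece) pair; a position's key is
# the XOR of the strings for its occupied squares.
ZOBRIST = [[random.getrandbits(64) for _ in range(N_PIECES)]
           for _ in range(N_SQUARES)]

def position_key(position):
    """position: iterable of (square, piece) pairs. XOR is order-independent,
    so two move sequences reaching the same position get the same key."""
    key = 0
    for square, piece in position:
        key ^= ZOBRIST[square][piece]
    return key

table = {}  # key -> (search depth, evaluation)

def store(position, depth, value):
    table[position_key(position)] = (depth, value)

def lookup(position, depth):
    """Return a cached evaluation, but only if it came from a search at
    least as deep as the one about to be performed."""
    entry = table.get(position_key(position))
    if entry is not None and entry[0] >= depth:
        return entry[1]
    return None
```

When a transposition is encountered, `lookup` returns the stored value and the subtree below that position need not be searched again.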
https://en.wikipedia.org/wiki/Nurse%20crop
Nurse crops are a subtype of nurse plants, facilitating the growth of other species of plants. The term is used primarily in agriculture, but also in forestry. Cover crops are a type of nurse crop. Agriculture In agriculture, a nurse crop is an annual crop used to assist in establishment of a perennial crop. The widest use of nurse crops is in the establishment of legumaceous plants such as alfalfa, clover, and trefoil. Occasionally, nurse crops are used for establishment of perennial grasses. Nurse crops reduce the incidence of weeds, prevent erosion, and prevent excessive sunlight from reaching tender seedlings. Often, the nurse crop can be harvested for grain, straw, hay, or pasture. Oats are the most common nurse crop, though other annual grains are also used. Nurse cropping of tall or dense-canopied plants can protect more vulnerable species through shading or by providing a wind break. However, if ill-maintained, nurse crops can block sunlight from reaching seedlings. Trap crops prevent pests from affecting the desired plant. Forestry In forestry, 'nurse crop' can be applied to trees or shrubs that help the development of other species of trees. Wind breaking, frost protection, thermal insulation, and shade can all be provided by nurse crops in forests. Aspens especially provide partial shade, allowing understory growth. See also Companion planting Multiple cropping References Agriculture Crops Forestry Symbiosis
https://en.wikipedia.org/wiki/Homogeneous%20coordinates
In mathematics, homogeneous coordinates or projective coordinates, introduced by August Ferdinand Möbius in his 1827 work , are a system of coordinates used in projective geometry, just as Cartesian coordinates are used in Euclidean geometry. They have the advantage that the coordinates of points, including points at infinity, can be represented using finite coordinates. Formulas involving homogeneous coordinates are often simpler and more symmetric than their Cartesian counterparts. Homogeneous coordinates have a range of applications, including computer graphics and 3D computer vision, where they allow affine transformations and, in general, projective transformations to be easily represented by a matrix. They are also used in fundamental elliptic curve cryptography algorithms. If homogeneous coordinates of a point are multiplied by a non-zero scalar then the resulting coordinates represent the same point. Since homogeneous coordinates are also given to points at infinity, the number of coordinates required to allow this extension is one more than the dimension of the projective space being considered. For example, two homogeneous coordinates are required to specify a point on the projective line and three homogeneous coordinates are required to specify a point in the projective plane. Introduction The real projective plane can be thought of as the Euclidean plane with additional points added, which are called points at infinity, and are considered to lie on a new line, the line at infinity. There is a point at infinity corresponding to each direction (numerically given by the slope of a line), informally defined as the limit of a point that moves in that direction away from the origin. Parallel lines in the Euclidean plane are said to intersect at a point at infinity corresponding to their common direction. Given a point on the Euclidean plane, for any non-zero real number Z, the triple is called a set of homogeneous coordinates for the point. By this definit
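The scaling property and the matrix form of affine transformations can be sketched as follows; this is a minimal illustration for the projective plane, not library code:

```python
def to_cartesian(X, Y, Z):
    """Recover the Cartesian point; Z == 0 would be a point at infinity,
    which has no Cartesian counterpart."""
    return (X / Z, Y / Z)

def same_point(p, q, eps=1e-12):
    """Two homogeneous triples name the same point exactly when one is a
    non-zero scalar multiple of the other, i.e. all 2x2 cross terms vanish."""
    (X1, Y1, Z1), (X2, Y2, Z2) = p, q
    return (abs(X1 * Y2 - X2 * Y1) < eps
            and abs(X1 * Z2 - X2 * Z1) < eps
            and abs(Y1 * Z2 - Y2 * Z1) < eps)

def translate(p, dx, dy):
    """Translation, which has no 2x2 matrix form in Cartesian coordinates,
    becomes a single 3x3 matrix product on homogeneous coordinates;
    the product is written out entry-wise here."""
    X, Y, Z = p
    return (X + dx * Z, Y + dy * Z, Z)

# (4, 8, 2) and (2, 4, 1) are the same projective point, namely (2, 4).
print(same_point((4.0, 8.0, 2.0), (2.0, 4.0, 1.0)))           # True
print(to_cartesian(*translate((2.0, 4.0, 1.0), 3.0, -1.0)))   # (5.0, 3.0)
```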
https://en.wikipedia.org/wiki/Green%27s%20theorem
In vector calculus, Green's theorem relates a line integral around a simple closed curve to a double integral over the plane region bounded by . It is the two-dimensional special case of Stokes' theorem. Theorem Let be a positively oriented, piecewise smooth, simple closed curve in a plane, and let be the region bounded by . If and are functions of defined on an open region containing and have continuous partial derivatives there, then where the path of integration along is anticlockwise. In physics, Green's theorem finds many applications. One is solving two-dimensional flow integrals, stating that the sum of fluid outflowing from a volume is equal to the total outflow summed about an enclosing area. In plane geometry, and in particular, area surveying, Green's theorem can be used to determine the area and centroid of plane figures solely by integrating over the perimeter. Proof when D is a simple region The following is a proof of half of the theorem for the simplified area D, a type I region where C1 and C3 are curves connected by vertical lines (possibly of zero length). A similar proof exists for the other half of the theorem when D is a type II region where C2 and C4 are curves connected by horizontal lines (again, possibly of zero length). Putting these two parts together, the theorem is thus proven for regions of type III (defined as regions which are both type I and type II). The general case can then be deduced from this special case by decomposing D into a set of type III regions. If it can be shown that and are true, then Green's theorem follows immediately for the region D. We can prove () easily for regions of type I, and () for regions of type II. Green's theorem then follows for regions of type III. Assume region D is a type I region and can thus be characterized, as pictured on the right, by where g1 and g2 are continuous functions on . Compute the double integral in (): Now compute the line integral in (). C can be rewritten as
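The surveying application mentioned above, recovering an area purely from the boundary, can be sketched for the polygon case, where the line integral of (x dy − y dx)/2 reduces to the shoelace sum:

```python
def polygon_area(vertices):
    """Area from the boundary alone: taking P = -y/2 and Q = x/2 in Green's
    theorem gives area = (1/2) * integral of (x dy - y dx) around the curve,
    which for a polygon traversed anticlockwise is the shoelace sum."""
    total = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the curve
        total += x1 * y2 - x2 * y1
    return total / 2.0

# Unit square and a right triangle, both listed anticlockwise
# (the positive orientation the theorem requires).
print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0
print(polygon_area([(0, 0), (2, 0), (0, 2)]))          # 2.0
```

Listing the vertices clockwise instead negates the result, reflecting the orientation requirement in the theorem's statement.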
https://en.wikipedia.org/wiki/George%20Green%20%28mathematician%29
George Green (14 July 1793 – 31 May 1841) was a British mathematical physicist who wrote An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism in 1828. The essay introduced several important concepts, among them a theorem similar to the modern Green's theorem, the idea of potential functions as currently used in physics, and the concept of what are now called Green's functions. Green was the first person to create a mathematical theory of electricity and magnetism and his theory formed the foundation for the work of other scientists such as James Clerk Maxwell, William Thomson, and others. His work on potential theory ran parallel to that of Carl Friedrich Gauss. Green's life story is remarkable in that he was almost entirely self-taught. He received only about one year of formal schooling as a child, between the ages of 8 and 9. Early life Green was born and lived for most of his life in the English town of Sneinton, Nottinghamshire, now part of the city of Nottingham. His father, also named George, was a baker who had built and owned a brick windmill used to grind grain. In his youth, Green was described as having a frail constitution and a dislike for doing work in his father's bakery. He had no choice in the matter, however, and as was common for the time he likely began working daily to earn his living at the age of five. Robert Goodacre's Academy During this era it was common for only 25–50% of children in Nottingham to receive any schooling. The majority of schools were Sunday schools, run by the Church, and children would typically attend for one or two years only. Recognizing the young Green's above average intellect, and being in a strong financial situation due to his successful bakery, his father enrolled him in March 1801 at Robert Goodacre's Academy in Upper Parliament Street. Robert Goodacre was a well-known science populariser and educator of the time. He published Essay on the Education of Youth, in
https://en.wikipedia.org/wiki/Erlangen%20program
In mathematics, the Erlangen program is a method of characterizing geometries based on group theory and projective geometry. It was published by Felix Klein in 1872 as Vergleichende Betrachtungen über neuere geometrische Forschungen. It is named after the University Erlangen-Nürnberg, where Klein worked. By 1872, non-Euclidean geometries had emerged, but without a way to determine their hierarchy and relationships. Klein's method was fundamentally innovative in three ways: Projective geometry was emphasized as the unifying frame for all other geometries considered by him. In particular, Euclidean geometry was more restrictive than affine geometry, which in turn is more restrictive than projective geometry. Klein proposed that group theory, a branch of mathematics that uses algebraic methods to abstract the idea of symmetry, was the most useful way of organizing geometrical knowledge; at the time it had already been introduced into the theory of equations in the form of Galois theory. Klein made much more explicit the idea that each geometrical language had its own, appropriate concepts, thus for example projective geometry rightly talked about conic sections, but not about circles or angles because those notions were not invariant under projective transformations (something familiar in geometrical perspective). The way the multiple languages of geometry then came back together could be explained by the way subgroups of a symmetry group related to each other. Later, Élie Cartan generalized Klein's homogeneous model spaces to Cartan connections on certain principal bundles, which generalized Riemannian geometry. The problems of nineteenth century geometry Since Euclid, geometry had meant the geometry of Euclidean space of two dimensions (plane geometry) or of three dimensions (solid geometry). In the first half of the nineteenth century there had been several developments complicating the picture. Mathematical applications required geometry of four or more d
https://en.wikipedia.org/wiki/EcoSCOPE
The ecoSCOPE is an optical sensor system, deployed from a small remotely operated vehicle (ROV) or fibre optic cable, to investigate behavior and microdistribution of small organisms in the ocean. Deployment Although an ROV may be very small and quiet, it is impossible to approach feeding herring closer than 40 cm. The ecoSCOPE allows observation of feeding herring from a distance of only 4 cm. From 40 cm, the herrings' prey (copepods) in front of the herring are invisible due to the deflection of light by phytoplankton and microparticles in highly productive waters where herring live. With the ecoSCOPE, the predators are illuminated by natural light, the prey by a light sheet, projected via a second endoscope from strobed LEDs (2 ms, 100% relative intensity at 700 nm, 53% at 690 nm, 22% at 680 nm, 4% at 660 nm, 0% at 642 nm). By imitating the long, thin snout of the garfish protruding into the security sphere of the alert herrings, an endoscope with a tip diameter of 11 mm is used. The endoscope is camouflaged to reduce the brightness-contrast against the background: the top is black and the sides are silvery. Additionally, the front of the ROV is covered by a mirror, reflecting a light gradient resembling the natural scene and making the instrument body virtually invisible to the animals. A second sensor images other copepods, phytoplankton and particles at very high magnification. Another advantage of these small "optical probes" is the minimal disruption of the current-field in the measuring volume, allowing for less disturbed surveys of microturbulence and shear. Another video can be seen in the article for Atlantic herring. An ecoSCOPE was also deployed to measure the dynamics of particles in a polluted estuary: see image on Particle (ecology), another as an underwater environmental monitoring system, utilizing the orientation capacity of juvenile glasseel. Specifications The ecoSCOPE is a product of the new initiative of "Ocean Online Biosensors": a syn
https://en.wikipedia.org/wiki/Cubic%20function
In mathematics, a cubic function is a function of the form f(x) = ax^3 + bx^2 + cx + d, with a ≠ 0; that is, a polynomial function of degree three. In many texts, the coefficients a, b, c, and d are supposed to be real numbers, and the function is considered as a real function that maps real numbers to real numbers or as a complex function that maps complex numbers to complex numbers. In other cases, the coefficients may be complex numbers, and the function is a complex function that has the set of the complex numbers as its codomain, even when the domain is restricted to the real numbers. Setting f(x) = 0 produces a cubic equation of the form ax^3 + bx^2 + cx + d = 0, whose solutions are called roots of the function. A cubic function with real coefficients has either one or three real roots (which may not be distinct); all odd-degree polynomials with real coefficients have at least one real root. The graph of a cubic function always has a single inflection point. It may have two critical points, a local minimum and a local maximum. Otherwise, a cubic function is monotonic. The graph of a cubic function is symmetric with respect to its inflection point; that is, it is invariant under a rotation of a half turn around this point. Up to an affine transformation, there are only three possible graphs for cubic functions. Cubic functions are fundamental for cubic interpolation. History Critical and inflection points The critical points of a cubic function are its stationary points, that is the points where the slope of the function is zero. Thus the critical points of a cubic function defined by f(x) = ax^3 + bx^2 + cx + d occur at values of x such that the derivative 3ax^2 + 2bx + c of the cubic function is zero. The solutions of this equation are the x-values of the critical points and are given, using the quadratic formula, by x = (−b ± √(b^2 − 3ac)) / (3a). The sign of the expression b^2 − 3ac inside the square root determines the number of critical points. If it is positive, then there are two critical points, one is a local maximum, and the other is a local minimum. If b^2 − 3ac = 0, then there is only one critical point, w
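The critical-point computation described above can be sketched in Python. This is an illustrative snippet (the function name is ours, not from the source), applying the quadratic formula to the derivative 3ax^2 + 2bx + c:

```python
import math

def cubic_critical_points(a, b, c, d):
    """Critical points of f(x) = a*x^3 + b*x^2 + c*x + d (a != 0),
    found by solving f'(x) = 3*a*x^2 + 2*b*x + c = 0."""
    disc = b * b - 3 * a * c           # sign decides how many critical points exist
    if disc > 0:                       # two critical points: one local max, one local min
        r = math.sqrt(disc)
        return sorted(((-b - r) / (3 * a), (-b + r) / (3 * a)))
    if disc == 0:                      # a single stationary point
        return [-b / (3 * a)]
    return []                          # negative discriminant: the cubic is monotonic

# f(x) = x^3 - 3x has critical points at x = -1 and x = 1
print(cubic_critical_points(1, 0, -3, 0))  # → [-1.0, 1.0]
```

Note that `d` does not appear in the computation: shifting a graph vertically moves its critical values but not the x-locations of the critical points.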
https://en.wikipedia.org/wiki/System%20context%20diagram
A system context diagram in engineering is a diagram that defines the boundary between the system, or part of a system, and its environment, showing the entities that interact with it. This diagram is a high level view of a system. It is similar to a block diagram. Overview System context diagrams show a system, as a whole and its inputs and outputs from/to external factors. According to Kossiakoff and Sweet (2011): System context diagrams are used early in a project to get agreement on the scope under investigation. Context diagrams are typically included in a requirements document. These diagrams must be read by all project stakeholders and thus should be written in plain language, so the stakeholders can understand items within the document. Building blocks Context diagrams can be developed with the use of two types of building blocks: Entities (Actors): labeled boxes; one in the center representing the system, and around it multiple boxes for each external actor Relationships: labeled lines between the entities and system For example, "customer places order." Context diagrams can also use many different drawing types to represent external entities. They can use ovals, stick figures, pictures, clip art or any other representation to convey meaning. Decision trees and data storage are represented in system flow diagrams. A context diagram can also list the classifications of the external entities as one of a set of simple categories (Examples:), which add clarity to the level of involvement of the entity with regards to the system. These categories include: Active: Dynamic to achieve some goal or purpose (Examples: "Article readers" or "customers"). Passive: Static external entities which infrequently interact with the system (Examples: "Article editors" or "database administrator"). Cooperative: Predictable external entities which are used by the system to bring about some desired outcome (Examples: "Internet service providers" or "shipping companie
https://en.wikipedia.org/wiki/Coroutine
Coroutines are computer program components that allow execution to be suspended and resumed, generalizing subroutines for cooperative multitasking. Coroutines are well-suited for implementing familiar program components such as cooperative tasks, exceptions, event loops, iterators, infinite lists and pipes. They have been described as "functions whose execution you can pause". Melvin Conway coined the term coroutine in 1958 when he applied it to the construction of an assembly program. The first published explanation of the coroutine appeared later, in 1963. Definition and types There is no single precise definition of coroutine. In 1980 Christopher D. Marlin summarized two widely acknowledged fundamental characteristics of a coroutine: the values of data local to a coroutine persist between successive calls; the execution of a coroutine is suspended as control leaves it, only to carry on where it left off when control re-enters the coroutine at some later stage. Besides that, a coroutine implementation has three distinguishing features: the control-transfer mechanism. Asymmetric coroutines usually provide keywords like yield and resume; programmers cannot freely choose which frame to yield to, as the runtime only yields to the nearest caller of the current coroutine. In symmetric coroutines, on the other hand, programmers must specify a yield destination. whether coroutines are provided in the language as first-class objects, which can be freely manipulated by the programmer, or as constrained constructs; whether a coroutine is able to suspend its execution from within nested function calls. Such a coroutine is stackful. One without this ability is called a stackless coroutine: unless a function is itself marked as a coroutine, it cannot use the yield keyword. Revisiting Coroutines, published in 2009, proposed the term full coroutine to denote one that is first-class and stackful. Full coroutines deserve their own name in that they have the same expressive power as one-sho
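The asymmetric control-transfer mechanism and the persistence of local state described above can be sketched with Python generators, which behave as stackless asymmetric coroutines: yield suspends the frame and returns control to the nearest caller, and locals survive across resumptions (an illustration in Python, not taken from the source):

```python
def counter(start):
    """A stackless asymmetric coroutine: the local n persists
    between suspensions, and each yield returns to the caller."""
    n = start
    while True:
        step = yield n              # suspend here; resuming delivers a value
        n += step if step is not None else 1

co = counter(10)
print(next(co))        # run to the first yield → 10
print(co.send(5))      # resume, passing 5 in  → 15
print(next(co))        # resume with None      → 16
```

This is the asymmetric case: the generator cannot choose where to yield to; control always returns to whoever called `next` or `send`.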
https://en.wikipedia.org/wiki/Extraterrestrial%20intelligence
Extraterrestrial intelligence (often abbreviated ETI) refers to hypothetical intelligent extraterrestrial life. No such life has ever been proven to exist in the Solar System except for humans on Earth, and its existence on other star systems is still speculative. The question of whether other inhabited worlds might exist has been debated since ancient times. The modern form of the concept emerged when the Copernican Revolution demonstrated that the Earth was a planet revolving around the Sun, and other planets were, conversely, other worlds. The question of whether other inhabited planets or moons exist was a natural consequence of this new understanding. It has become one of the most speculative questions in science and is a central theme of science fiction and popular culture. Intelligence Intelligence is, along with the more precise concept of sapience, used to describe extraterrestrial life with similar cognitive abilities as humans. Another interchangeable term is sophoncy, first coined by Karen Anderson and published in the 1966 works by her husband Poul Anderson. Sentience, like consciousness, is a concept sometimes mistakenly used to refer to the concept of extraterrestrial sapience and intelligence, since it does not exclude forms of life that are non-sapient. The term extraterrestrial civilization frames a more particular case of extraterrestrial intelligence. It is the possible long-term result of intelligent and specifically sapient extraterrestrial life. Probability The Copernican principle is generalized to the relativistic concept that humans are not privileged observers of the universe. Many prominent scientists, including Stephen Hawking have proposed that the sheer scale of the universe makes it improbable for intelligent life not to have emerged elsewhere. However, Fermi's Paradox highlights the apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilization and humanity's lack of contact w
https://en.wikipedia.org/wiki/Projective%20geometry
In mathematics, projective geometry is the study of geometric properties that are invariant with respect to projective transformations. This means that, compared to elementary Euclidean geometry, projective geometry has a different setting, projective space, and a selective set of basic geometric concepts. The basic intuitions are that projective space has more points than Euclidean space, for a given dimension, and that geometric transformations are permitted that transform the extra points (called "points at infinity") to Euclidean points, and vice-versa. Properties meaningful for projective geometry are respected by this new idea of transformation, which is more radical in its effects than can be expressed by a transformation matrix and translations (the affine transformations). The first issue for geometers is what kind of geometry is adequate for a novel situation. It is not possible to refer to angles in projective geometry as it is in Euclidean geometry, because angle is an example of a concept not invariant with respect to projective transformations, as is seen in perspective drawing. One source for projective geometry was indeed the theory of perspective. Another difference from elementary geometry is the way in which parallel lines can be said to meet in a point at infinity, once the concept is translated into projective geometry's terms. Again this notion has an intuitive basis, such as railway tracks meeting at the horizon in a perspective drawing. See projective plane for the basics of projective geometry in two dimensions. While the ideas were available earlier, projective geometry was mainly a development of the 19th century. This included the theory of complex projective space, the coordinates used (homogeneous coordinates) being complex numbers. Several major types of more abstract mathematics (including invariant theory, the Italian school of algebraic geometry, and Felix Klein's Erlangen programme resulting in the study of the classical groups)
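The claim above that parallel lines meet at a point at infinity can be made concrete with homogeneous coordinates; the following standard computation is added here as an illustration:

```latex
% A Euclidean point (x, y) is represented by homogeneous coordinates
% (X : Y : Z) with x = X/Z, y = Y/Z.  Two parallel lines become
aX + bY + cZ = 0, \qquad aX + bY + c'Z = 0 \quad (c \neq c').
% Subtracting the equations gives (c - c')Z = 0, hence Z = 0, and the
% common solution is the point at infinity
(X : Y : Z) = (b : -a : 0),
% which depends only on the direction (a, b), not on c: all lines of a
% given direction pass through the same point at infinity.
```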
https://en.wikipedia.org/wiki/D%20%28programming%20language%29
D, also known as dlang, is a multi-paradigm system programming language created by Walter Bright at Digital Mars and released in 2001. Andrei Alexandrescu joined the design and development effort in 2007. Though it originated as a re-engineering of C++, D is a profoundly different language —features of D can be considered streamlined and expanded-upon ideas from C++, however D also draws inspiration from other high-level programming languages, notably Java, Python, Ruby, C#, and Eiffel. D combines the performance and safety of compiled languages with the expressive power of modern dynamic and functional programming languages. Idiomatic D code is commonly as fast as equivalent C++ code, while also being shorter. The language as a whole is not memory-safe but includes optional attributes designed to guarantee memory safety of either subsets of or the whole program. Type inference, automatic memory management and syntactic sugar for common types allow faster development, while bounds checking and design by contract find bugs earlier at runtime, and a concurrency-aware type system catches bugs at compile time. Features D was designed with lessons learned from practical C++ usage, rather than from a purely theoretical perspective. Although the language uses many C and C++ concepts, it also discards some, or uses different approaches (and syntax) to achieve some goals. As such, it is not source compatible (nor does it aim to be) with C and C++ source code in general (some simpler code bases from these languages might by luck work with D, or require some porting). D has, however, been constrained in its design by the rule that any code that was legal in both C and D should behave in the same way. D gained some features before C++, such as closures, anonymous functions, compile-time function execution, ranges, built-in container iteration concepts and type inference. D adds to the functionality of C++ by also implementing design by contract, unit testing, true modules,
https://en.wikipedia.org/wiki/Unix%20security
Unix security refers to the means of securing a Unix or Unix-like operating system. A secure environment is achieved not only by the design concepts of these operating systems, but also through vigilant user and administrative practices. Design concepts Permissions A core security feature in these systems is the file system permissions. All files in a typical Unix filesystem have permissions set enabling different access to a file. Permissions on a file are commonly set using the chmod command and seen through the ls command. For example: -r-xr-xr-x 1 root wheel 745720 Sep 8 2002 /bin/sh Unix permissions permit different users access to a file. Different user groups have different permissions on a file. More advanced Unix filesystems include the Access Control List concept which allows permissions to be granted to multiple users or groups. An Access Control List may be used to grant permission to additional individual users or groups. For example: /pvr [u::rwx,g::r-x,o::r-x/u::rwx,u:sue:rwx,g::r-x,m::rwx,o::r-x] In this example, which is from the command on the Linux operating system, the user sue is granted write permission to the /pvr directory. User groups Users under Unix style operating systems often belong to managed groups with specific access permissions. This enables users to be grouped by the level of access they have to this system. Many Unix implementations add an additional layer of security by requiring that a user be a member of the wheel group in order to access the su command. Root access Most Unix and Unix-like systems have an account or group which enables a user to exert complete control over the system, often known as a root account. If access to this account is gained by an unwanted user, this results in a complete breach of the system. A root account however is necessary for administrative purposes, and for the above security reasons the root account is seldom used for day to day purposes (the sudo program i
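The symbolic string shown in the ls listing above is a rendering of nine permission bits. A minimal sketch in Python of how a numeric mode such as 0o555 maps to r-xr-xr-x (the helper name is ours; the standard library's stat.filemode produces the same rendering plus a leading file-type character):

```python
import stat

def symbolic(mode):
    """Render the nine Unix permission bits of `mode` the way ls does."""
    bits = [
        (stat.S_IRUSR, "r"), (stat.S_IWUSR, "w"), (stat.S_IXUSR, "x"),  # user
        (stat.S_IRGRP, "r"), (stat.S_IWGRP, "w"), (stat.S_IXGRP, "x"),  # group
        (stat.S_IROTH, "r"), (stat.S_IWOTH, "w"), (stat.S_IXOTH, "x"),  # other
    ]
    return "".join(ch if mode & bit else "-" for bit, ch in bits)

# /bin/sh in the listing above: read and execute for user, group and other
print(symbolic(0o555))  # → r-xr-xr-x
print(symbolic(0o644))  # → rw-r--r--
```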
https://en.wikipedia.org/wiki/Gtk-gnutella
gtk-gnutella is a peer-to-peer file sharing application which runs on the gnutella network. gtk-gnutella uses the GTK+ toolkit for its graphical user interface. Released under the GNU General Public License, gtk-gnutella is free software. History Initially gtk-gnutella was written to look like the original Nullsoft Gnutella client. The original author Yann Grossel stopped working on the client in early 2001. After a while Raphael Manfredi took over as the main software architect, and the client has been in active development ever since. Versions released after July 2002 do not look like the original Nullsoft client. Features gtk-gnutella is programmed in C with an emphasis on efficiency and portability without being minimalistic but rather head-on with most of the modern features of the gnutella network. Therefore, it requires fewer resources (such as CPU and/or RAM) than the major gnutella clients. It can also be used as headless gnutella client not requiring GTK+ at all. gtk-gnutella has a filtering engine that can reduce the amount of spam and other irrelevant results. gtk-gnutella supports a large range of the features of modern gnutella clients. gtk-gnutella was the first gnutella client to support IPv6 and encryption using TLS. It can handle and export magnet links. It has strong internationalization features, supporting English, German, Greek, French, Hungarian, Spanish, Japanese, Norwegian, Dutch and Chinese. gtk-gnutella also has support to prevent spamming and other hostile peer activity. Several software distributions provide pre-compiled packages, but they are usually outdated as many distributions version freeze old stable releases. The gnutella network benefits from running the latest version obtainable as peer and hostile IP address lists change rapidly, making building the latest SVN snapshot the best option. There are also pre-compiled packages for many Linux distributions available online. Persons concerned about security might wish to compi
https://en.wikipedia.org/wiki/Window%20function
In signal processing and statistics, a window function (also known as an apodization function or tapering function) is a mathematical function that is zero-valued outside of some chosen interval, normally symmetric around the middle of the interval, usually approaching a maximum in the middle, and usually tapering away from the middle. Mathematically, when another function or waveform/data-sequence is "multiplied" by a window function, the product is also zero-valued outside the interval: all that is left is the part where they overlap, the "view through the window". Equivalently, and in actual practice, the segment of data within the window is first isolated, and then only that data is multiplied by the window function values. Thus, tapering, not segmentation, is the main purpose of window functions. The reasons for examining segments of a longer function include detection of transient events and time-averaging of frequency spectra. The duration of the segments is determined in each application by requirements like time and frequency resolution. But that method also changes the frequency content of the signal by an effect called spectral leakage. Window functions allow us to distribute the leakage spectrally in different ways, according to the needs of the particular application. There are many choices detailed in this article, but many of the differences are so subtle as to be insignificant in practice. In typical applications, the window functions used are non-negative, smooth, "bell-shaped" curves. Rectangle, triangle, and other functions can also be used. A more general definition of window functions does not require them to be identically zero outside an interval, as long as the product of the window multiplied by its argument is square integrable, and, more specifically, that the function goes sufficiently rapidly toward zero. Applications Window functions are used in spectral analysis/modification/resynthesis, the design of finite impulse respons
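The "multiply the segment by the window" operation described above can be sketched in a few lines of Python, here with a Hann window (a common bell-shaped taper; this snippet is an illustration, not from the source):

```python
import math

def hann(N):
    """Symmetric Hann window: zero at both ends, maximum of 1 in the middle."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

# Tapering: the segment is multiplied by the window, sample by sample,
# so the product is zero-valued at the segment boundaries.
segment = [1.0] * 9              # a rectangular slice of a longer signal
w = hann(9)
tapered = [s * wn for s, wn in zip(segment, w)]
print([round(v, 3) for v in tapered])
```

The taper forces the windowed segment to start and end near zero, which is what reduces the spectral leakage mentioned above compared with a plain rectangular cut.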
https://en.wikipedia.org/wiki/Euclid%27s%20Elements
The Elements is a mathematical treatise consisting of 13 books attributed to the ancient Greek mathematician Euclid c. 300 BC. It is a collection of definitions, postulates, propositions (theorems and constructions), and mathematical proofs of the propositions. The books cover plane and solid Euclidean geometry, elementary number theory, and incommensurable lines. Elements is the oldest extant large-scale deductive treatment of mathematics. It has proven instrumental in the development of logic and modern science, and its logical rigor was not surpassed until the 19th century. Euclid's Elements has been referred to as the most successful and influential textbook ever written. It was one of the very earliest mathematical works to be printed after the invention of the printing press and has been estimated to be second only to the Bible in the number of editions published since the first printing in 1482, the number reaching well over one thousand. For centuries, when the quadrivium was included in the curriculum of all university students, knowledge of at least part of Euclid's Elements was required of all students. Not until the 20th century, by which time its content was universally taught through other school textbooks, did it cease to be considered something all educated people had read. History Basis in earlier work Scholars believe that the Elements is largely a compilation of propositions based on books by earlier Greek mathematicians. Proclus (412–485 AD), a Greek mathematician who lived around seven centuries after Euclid, wrote in his commentary on the Elements: "Euclid, who put together the Elements, collecting many of Eudoxus' theorems, perfecting many of Theaetetus', and also bringing to irrefragable demonstration the things which were only somewhat loosely proved by his predecessors". Pythagoras (c. 570–495 BC) was probably the source for most of books I and II, Hippocrates of Chios (c. 470–410 BC, not the better known Hippocrates of Kos) for book II
https://en.wikipedia.org/wiki/Contention%20free%20pollable
Contention-free pollable (CF-Pollable) is a state of operation for wireless networking nodes. It indicates that the node is able to use the Point Coordination Function, as opposed to the Distributed Coordination Function, within a wireless LAN. A device that is able to use the point coordination function can participate in a method of providing limited Quality of service (for time-sensitive data) within the network. See also Contention (telecommunications) References Wireless networking
https://en.wikipedia.org/wiki/Matroid
In combinatorics, a branch of mathematics, a matroid is a structure that abstracts and generalizes the notion of linear independence in vector spaces. There are many equivalent ways to define a matroid axiomatically, the most significant being in terms of: independent sets; bases or circuits; rank functions; closure operators; and closed sets or flats. In the language of partially ordered sets, a finite simple matroid is equivalent to a geometric lattice. Matroid theory borrows extensively from the terminology of both linear algebra and graph theory, largely because it is the abstraction of various notions of central importance in these fields. Matroids have found applications in geometry, topology, combinatorial optimization, network theory and coding theory. Definition There are many equivalent ways to define a (finite) matroid. Independent sets In terms of independence, a finite matroid is a pair (E, I), where E is a finite set (called the ground set) and I is a family of subsets of E (called the independent sets) with the following properties: (I1) The empty set is independent, i.e., ∅ ∈ I. (I2) Every subset of an independent set is independent, i.e., for each A′ ⊆ A ⊆ E, if A ∈ I then A′ ∈ I. This is sometimes called the hereditary property, or the downward-closed property. (I3) If A and B are two independent sets (i.e., each set is independent) and A has more elements than B, then there exists x ∈ A \ B such that B ∪ {x} is in I. This is sometimes called the augmentation property or the independent set exchange property. The first two properties define a combinatorial structure known as an independence system (or abstract simplicial complex). Actually, assuming (I2), property (I1) is equivalent to the fact that at least one subset of E is independent, i.e., I ≠ ∅. Bases and circuits A subset of the ground set E that is not independent is called dependent. A maximal independent set—that is, an independent set that becomes dependent upon adding any element of E—is called a basis for the matroid. A circuit in
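The three independence axioms can be checked mechanically for a small ground set. The sketch below (an illustration with names of our own choosing, not a library API) verifies them for the uniform matroid U(2,4), whose independent sets are the subsets of {1,2,3,4} with at most two elements:

```python
from itertools import combinations

def check_matroid(E, independent):
    """Verify axioms (I1)-(I3) for the family given by the predicate `independent`."""
    subsets = [frozenset(c) for r in range(len(E) + 1)
               for c in combinations(E, r)]
    I = {S for S in subsets if independent(S)}
    assert frozenset() in I                       # (I1): empty set is independent
    for S in I:                                   # (I2): hereditary property
        for r in range(len(S)):
            assert all(frozenset(T) in I for T in combinations(S, r))
    for A in I:                                   # (I3): augmentation / exchange
        for B in I:
            if len(A) > len(B):
                assert any(B | {x} in I for x in A - B)
    return True

# Uniform matroid U(2, 4): a set is independent iff it has at most 2 elements
print(check_matroid({1, 2, 3, 4}, lambda S: len(S) <= 2))  # → True
```

The check is exponential in |E|, so it is only a didactic tool, but it makes the role of each axiom concrete.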
https://en.wikipedia.org/wiki/X86-64
x86-64 (also known as x64, x86_64, AMD64, and Intel 64) is a 64-bit version of the x86 instruction set, first announced in 1999. It introduced two new modes of operation, 64-bit mode and compatibility mode, along with a new 4-level paging mode. With 64-bit mode and the new paging mode, it supports vastly larger amounts of virtual memory and physical memory than was possible on its 32-bit predecessors, allowing programs to store larger amounts of data in memory. x86-64 also expands general-purpose registers to 64-bit, and expands the number of them from 8 (some of which had limited or fixed functionality, e.g. for stack management) to 16 (fully general), and provides numerous other enhancements. Floating-point arithmetic is supported via mandatory SSE2-like instructions, and x87/MMX style registers are generally not used (but still available even in 64-bit mode); instead, a set of 16 vector registers, 128 bits each, is used. (Each register can store one or two double-precision numbers or one to four single-precision numbers, or various integer formats.) In 64-bit mode, instructions are modified to support 64-bit operands and 64-bit addressing mode. The compatibility mode defined in the architecture allows 16-bit and 32-bit user applications to run unmodified, coexisting with 64-bit applications if the 64-bit operating system supports them. As the full x86 16-bit and 32-bit instruction sets remain implemented in hardware without any intervening emulation, these older executables can run with little or no performance penalty, while newer or modified applications can take advantage of new features of the processor design to achieve performance improvements. Also, a processor supporting x86-64 still powers on in real mode for full backward compatibility with the 8086, as x86 processors supporting protected mode have done since the 80286. The original specification, created by AMD and released in 2000, has been implemented by AMD, Intel, and VIA. The AMD K8 microarchit
https://en.wikipedia.org/wiki/Adjacency%20matrix
In graph theory and computer science, an adjacency matrix is a square matrix used to represent a finite graph. The elements of the matrix indicate whether pairs of vertices are adjacent or not in the graph. In the special case of a finite simple graph, the adjacency matrix is a (0,1)-matrix with zeros on its diagonal. If the graph is undirected (i.e. all of its edges are bidirectional), the adjacency matrix is symmetric. The relationship between a graph and the eigenvalues and eigenvectors of its adjacency matrix is studied in spectral graph theory. The adjacency matrix of a graph should be distinguished from its incidence matrix, a different matrix representation whose elements indicate whether vertex–edge pairs are incident or not, and its degree matrix, which contains information about the degree of each vertex. Definition For a simple graph with vertex set U = {u_1, …, u_n}, the adjacency matrix is a square n × n matrix A such that its element A_ij is one when there is an edge from vertex u_i to vertex u_j, and zero when there is no edge. The diagonal elements of the matrix are all zero, since edges from a vertex to itself (loops) are not allowed in simple graphs. It is also sometimes useful in algebraic graph theory to replace the nonzero elements with algebraic variables. The same concept can be extended to multigraphs and graphs with loops by storing the number of edges between each two vertices in the corresponding matrix element, and by allowing nonzero diagonal elements. Loops may be counted either once (as a single edge) or twice (as two vertex-edge incidences), as long as a consistent convention is followed. Undirected graphs often use the latter convention of counting loops twice, whereas directed graphs typically use the former convention. Of a bipartite graph The adjacency matrix A of a bipartite graph whose two parts have r and s vertices can be written in the form A = ( 0_{r×r} B ; B^T 0_{s×s} ), where B is an r × s matrix, and 0_{r×r} and 0_{s×s} represent the r × r and s × s zero matrices. In this case, the smaller matrix B uniquely rep
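The definition translates directly into a short construction (an illustrative sketch; the function name is ours):

```python
def adjacency_matrix(n, edges, directed=False):
    """(0,1)-adjacency matrix of a simple graph on vertices 0..n-1."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = 1
        if not directed:
            A[j][i] = 1      # undirected edges make the matrix symmetric
    return A

# Path graph 0-1-2: symmetric, with zeros on the diagonal
for row in adjacency_matrix(3, [(0, 1), (1, 2)]):
    print(row)
# → [0, 1, 0]
#   [1, 0, 1]
#   [0, 1, 0]
```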
https://en.wikipedia.org/wiki/Hodge%20conjecture
In mathematics, the Hodge conjecture is a major unsolved problem in algebraic geometry and complex geometry that relates the algebraic topology of a non-singular complex algebraic variety to its subvarieties. In simple terms, the Hodge conjecture asserts that basic topological information, like the number of holes in certain geometric spaces (complex algebraic varieties), can be understood by studying the possible nice shapes sitting inside those spaces, which look like zero sets of polynomial equations. The latter objects can be studied using algebra and the calculus of analytic functions, and this allows one to indirectly understand the broad shape and structure of often higher-dimensional spaces which cannot be otherwise easily visualized. More specifically, the conjecture states that certain de Rham cohomology classes are algebraic; that is, they are sums of Poincaré duals of the homology classes of subvarieties. It was formulated by the Scottish mathematician William Vallance Douglas Hodge as a result of work between 1930 and 1940 to enrich the description of de Rham cohomology to include extra structure that is present in the case of complex algebraic varieties. It received little attention before Hodge presented it in an address during the 1950 International Congress of Mathematicians, held in Cambridge, Massachusetts. The Hodge conjecture is one of the Clay Mathematics Institute's Millennium Prize Problems, with a prize of US$1,000,000 for whoever can prove or disprove it. Motivation Let X be a compact complex manifold of complex dimension n. Then X is an orientable smooth manifold of real dimension 2n, so its cohomology groups lie in degrees zero through 2n. Assume X is a Kähler manifold, so that there is a decomposition of its cohomology with complex coefficients H^k(X, C) = ⊕_{p+q=k} H^{p,q}(X), where H^{p,q}(X) is the subgroup of cohomology classes which are represented by harmonic forms of type (p, q). That is, these are the cohomology classes represented by differential f
https://en.wikipedia.org/wiki/Water%20integrator
The Water Integrator ( Gidravlicheskiy integrator) was an early analog computer built in the Soviet Union in 1936 by Vladimir Sergeevich Lukyanov. It functioned by careful manipulation of water through a room full of interconnected pipes and pumps. The water level in various chambers (with precision to fractions of a millimeter) represented stored numbers, and the rate of flow between them represented mathematical operations. This machine was capable of solving inhomogeneous differential equations. The first versions of Lukyanov's integrators were rather experimental, made of tin and glass tubes, and each integrator could be used to solve only one problem. In the 1930s it was the only computer in the Soviet Union for solving partial differential equations. In 1941, Lukyanov created a hydraulic integrator of modular design, which made it possible to assemble a machine for solving various problems. Two-dimensional and three-dimensional hydraulic integrators were designed. In 1949–1955, an integrator in the form of standard unified units was developed at the NIISCHETMASH Institute. In 1955, the Ryazan plant of calculating and analytical machines began the serial production of integrators with the factory brand name “IGL” (russian: Интегратор Гидравлический Лукьянова - integrator of the Lukyanov hydraulic system). Integrators were widely distributed, delivered to Czechoslovakia, Poland, Bulgaria and China. A water integrator was used in the design of the Karakum Canal in the 1940s, and the construction of the Baikal–Amur Mainline in the 1970s. Water analog computers were used in the Soviet Union until the 1980s for large-scale modelling. They were used in geology, mine construction, metallurgy, rocket production and other fields. Currently, two hydraulic integrators are kept in the Polytechnic Museum in Moscow. See also History of computing hardware MONIAC Computer Fluidics References Further reading Collection of Water Integrator Patents Technical Re
https://en.wikipedia.org/wiki/Injective%20cogenerator
In category theory, a branch of mathematics, the concept of an injective cogenerator is drawn from examples such as Pontryagin duality. Generators are objects which cover other objects as an approximation, and (dually) cogenerators are objects which envelope other objects as an approximation. More precisely: A generator of a category with a zero object is an object G such that for every nonzero object H there exists a nonzero morphism f:G → H. A cogenerator is an object C such that for every nonzero object H there exists a nonzero morphism f:H → C. (Note the reversed order). The abelian group case Assuming one has a category like that of abelian groups, one can in fact form direct sums of copies of G until the morphism f: Sum(G) →H is surjective; and one can form direct products of C until the morphism f:H→ Prod(C) is injective. For example, the integers are a generator of the category of abelian groups (since every abelian group is a quotient of a free abelian group). This is the origin of the term generator. The approximation here is normally described as generators and relations. As an example of a cogenerator in the same category, we have Q/Z, the rationals modulo the integers, which is a divisible abelian group. Given any abelian group A, there is an isomorphic copy of A contained inside the product of |A| copies of Q/Z. This approximation is close to what is called the divisible envelope - the true envelope is subject to a minimality condition. General theory Finding a generator of an abelian category allows one to express every object as a quotient of a direct sum of copies of the generator. Finding a cogenerator allows one to express every object as a subobject of a direct product of copies of the cogenerator. One is often interested in projective generators (even finitely generated projective generators, called progenerators) and minimal injective cogenerators. Both examples above have these extra properties. The cogenerator Q/Z is useful in
https://en.wikipedia.org/wiki/Lock%20%28computer%20science%29
In computer science, a lock or mutex (from mutual exclusion) is a synchronization primitive: a mechanism that enforces limits on access to a resource when there are many threads of execution. A lock is designed to enforce a mutual exclusion concurrency control policy, and many different implementations exist for different applications. Types Generally, locks are advisory locks, where each thread cooperates by acquiring the lock before accessing the corresponding data. Some systems also implement mandatory locks, where attempting unauthorized access to a locked resource will force an exception in the entity attempting to make the access. The simplest type of lock is a binary semaphore. It provides exclusive access to the locked data. Other schemes also provide shared access for reading data. Other widely implemented access modes are exclusive, intend-to-exclude and intend-to-upgrade. Another way to classify locks is by what happens when the lock strategy prevents the progress of a thread. Most locking designs block the execution of the thread requesting the lock until it is allowed to access the locked resource. With a spinlock, the thread simply waits ("spins") until the lock becomes available. This is efficient if threads are blocked for a short time, because it avoids the overhead of operating system process re-scheduling. It is inefficient if the lock is held for a long time, or if the progress of the thread that is holding the lock depends on preemption of the locked thread. Locks typically require hardware support for efficient implementation. This support usually takes the form of one or more atomic instructions such as "test-and-set", "fetch-and-add" or "compare-and-swap". These instructions allow a single process to test if the lock is free, and if free, acquire the lock in a single atomic operation. Uniprocessor architectures have the option of using uninterruptible sequences of instructions—using special inst
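Advisory locking as described above can be illustrated with Python's standard threading.Lock (a binary mutex): every thread agrees to acquire the lock before touching shared data, which makes the read-modify-write in the critical section atomic with respect to the other threads:

```python
import threading

counter = 0
lock = threading.Lock()  # a binary mutex: at most one holder at a time

def add_many(n):
    global counter
    for _ in range(n):
        with lock:        # acquire before accessing shared data (advisory locking)
            counter += 1  # critical section: read-modify-write is now atomic

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 40_000  # no lost updates
```

Without the lock, two threads could read the same old value of the counter and each write back old+1, losing one of the updates.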
https://en.wikipedia.org/wiki/Spinlock
In software engineering, a spinlock is a lock that causes a thread trying to acquire it to simply wait in a loop ("spin") while repeatedly checking whether the lock is available. Since the thread remains active but is not performing a useful task, the use of such a lock is a kind of busy waiting. Once acquired, spinlocks will usually be held until they are explicitly released, although in some implementations they may be automatically released if the thread being waited on (the one that holds the lock) blocks or "goes to sleep". Because they avoid overhead from operating system process rescheduling or context switching, spinlocks are efficient if threads are likely to be blocked for only short periods. For this reason, operating-system kernels often use spinlocks. However, spinlocks become wasteful if held for longer durations, as they may prevent other threads from running and require rescheduling. The longer a thread holds a lock, the greater the risk that the thread will be interrupted by the OS scheduler while holding the lock. If this happens, other threads will be left "spinning" (repeatedly trying to acquire the lock), while the thread holding the lock is not making progress towards releasing it. The result is an indefinite postponement until the thread holding the lock can finish and release it. This is especially true on a single-processor system, where each waiting thread of the same priority is likely to waste its quantum (allocated time where a thread can run) spinning until the thread that holds the lock is finally finished. Implementing spinlocks correctly is challenging because programmers must take into account the possibility of simultaneous access to the lock, which could cause race conditions. Generally, such an implementation is possible only with special assembly language instructions, such as atomic (i.e. un-interruptible) test-and-set operations and cannot be easily implemented in programming languages not supporting truly atomic operations
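The spin-until-acquired behaviour can be sketched as follows. CPython does not expose a hardware test-and-set instruction, so this sketch simulates the atomic flag with a tiny internal mutex; the structure of the spinlock itself (loop on test-and-set, clear to release) matches the description above. All class names are my own:

```python
import threading

class TestAndSet:
    """Simulated atomic test-and-set flag. A real spinlock would use a
    hardware instruction; here a small internal lock stands in for it."""
    def __init__(self):
        self._flag = False
        self._guard = threading.Lock()

    def test_and_set(self):
        with self._guard:  # stands in for the atomic hardware operation
            old, self._flag = self._flag, True
            return old

    def clear(self):
        self._flag = False

class SpinLock:
    def __init__(self):
        self._ts = TestAndSet()

    def acquire(self):
        while self._ts.test_and_set():  # busy-wait ("spin") while flag was True
            pass

    def release(self):
        self._ts.clear()

s = SpinLock()
s.acquire()   # uncontended acquire returns immediately (flag was False)
s.release()
```

Note the trade-off described above: the waiting thread burns CPU while spinning, which is only worthwhile when the expected hold time is shorter than a context switch.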
https://en.wikipedia.org/wiki/Weil%20conjectures
In mathematics, the Weil conjectures were highly influential proposals by André Weil. They led to a successful multi-decade program to prove them, in which many leading researchers developed the framework of modern algebraic geometry and number theory. The conjectures concern the generating functions (known as local zeta functions) derived from counting points on algebraic varieties over finite fields. A variety over a finite field with q elements has a finite number of rational points (with coordinates in the original field), as well as points with coordinates in any finite extension of the original field. The generating function has coefficients derived from the numbers of points over the extension field with q^k elements. Weil conjectured that such zeta functions for smooth varieties are rational functions, satisfy a certain functional equation, and have their zeros in restricted places. The last two parts were consciously modelled on the Riemann zeta function, a kind of generating function for prime integers, which obeys a functional equation and (conjecturally) has its zeros restricted by the Riemann hypothesis. The rationality was proved by Bernard Dwork, the functional equation by Alexander Grothendieck, and the analogue of the Riemann hypothesis by Pierre Deligne. Background and history The earliest antecedent of the Weil conjectures is by Carl Friedrich Gauss and appears in section VII of his Disquisitiones Arithmeticae, concerned with roots of unity and Gaussian periods. In article 358, he moves on from the periods that build up towers of quadratic extensions, for the construction of regular polygons; and assumes that p is a prime number congruent to 1 modulo 3. Then there is a cyclic cubic field inside the cyclotomic field of pth roots of unity, and a normal integral basis of periods for the integers of this field (an instance of the Hilbert–Speiser theorem). Gauss constructs the order-3 periods, corresponding to the cyclic group of non-zero residues modulo p under multiplication and its unique subgroup of inde
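The rationality claim can be checked numerically in the simplest case, the projective line over F_q, which has N_n = q^n + 1 points over the degree-n extension. Its local zeta function Z(T) = exp(Σ N_n T^n / n) should equal the rational function 1/((1 − T)(1 − qT)), whose T^k coefficient is 1 + q + … + q^k. The sketch below (function name my own) computes the exponential of the formal power series with exact arithmetic and compares coefficients:

```python
from fractions import Fraction

def zeta_coeffs(counts, order):
    """Coefficients of Z(T) = exp(sum_{n>=1} N_n T^n / n) up to T^order.
    Uses the recurrence k*z_k = sum_{j=1}^{k} j*a_j*z_{k-j} for Z = exp(A)."""
    a = [Fraction(0)] + [Fraction(c, n) for n, c in enumerate(counts[:order], start=1)]
    z = [Fraction(1)] + [Fraction(0)] * order
    for k in range(1, order + 1):
        z[k] = sum(Fraction(j, k) * a[j] * z[k - j] for j in range(1, k + 1))
    return z

q, order = 5, 6
counts = [q**n + 1 for n in range(1, order + 1)]  # points of P^1 over F_{q^n}
z = zeta_coeffs(counts, order)
for k in range(order + 1):
    assert z[k] == sum(q**i for i in range(k + 1))  # matches 1/((1-T)(1-qT))
```

The same machinery applied to a curve of genus g would produce a rational function with a degree-2g polynomial in the numerator, which is the general shape Weil conjectured.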
https://en.wikipedia.org/wiki/GEnie
GEnie (General Electric Network for Information Exchange) was an online service created by a General Electric business, GEIS (now GXS), that ran from 1985 through the end of 1999. In 1994, GEnie claimed around 350,000 users. Peak simultaneous usage was around 10,000 users. It was one of the pioneering services in the field, though eventually replaced by the World Wide Web and graphics-based services, most notably AOL. Early history GEnie was founded by Bill Louden on October 1, 1985 and was launched as an ASCII text-based service by GE's Information Services division in October 1985, and received attention as the first serious commercial competition to CompuServe. Louden was originally CompuServe's product manager for Computing, Community (forums), Games, eCommerce, and email product lines. Louden purchased DECWAR source code and had MegaWars developed, one of the earliest multi-player online games (or MMOG), in 1985. The service was run by General Electric Information Services (GEIS, now GXS) based in Rockville, Maryland. GEIS served a diverse set of large-scale, international, commercial network-based custom application needs, including banking, electronic data interchange and e-mail services to companies worldwide, but was able to run GEnie on their many GE Mark III time-sharing mainframe computers that otherwise would have been underutilized after normal U.S. business hours. This orientation was part of GEnie's downfall. Although it became very popular and a national force in the on-line marketplace, GEnie was not allowed to grow. GEIS executives steadfastly refused to view the service as anything but "fill in" load and would not expand the network by a single phone line, let alone expand mainframe capacity, to accommodate GEnie's growing user base. (Later, however, GE did consent to make the service available through the SprintNet time-sharing network, which had its own dial-up points of presence; an Internet-to-SprintNet gateway operated by Merit Network a
https://en.wikipedia.org/wiki/Sphenic%20number
In number theory, a sphenic number (from the Greek σφήν, 'wedge') is a positive integer that is the product of three distinct prime numbers. Because there are infinitely many prime numbers, there are also infinitely many sphenic numbers. Definition A sphenic number is a product pqr where p, q, and r are three distinct prime numbers. In other words, the sphenic numbers are the square-free 3-almost primes. Examples The smallest sphenic number is 30 = 2 × 3 × 5, the product of the smallest three primes. The first few sphenic numbers are 30, 42, 66, 70, 78, 102, 105, 110, 114, 130, 138, 154, 165, ... The largest known sphenic number is (2^82,589,933 − 1) × (2^77,232,917 − 1) × (2^74,207,281 − 1). It is the product of the three largest known primes. Divisors All sphenic numbers have exactly eight divisors. If we express the sphenic number as n = pqr, where p, q, and r are distinct primes, then the set of divisors of n will be {1, p, q, r, pq, pr, qr, n}. The converse does not hold. For example, 24 is not a sphenic number, but it has exactly eight divisors. Properties All sphenic numbers are by definition squarefree, because the prime factors must be distinct. The Möbius function of any sphenic number is −1. The cyclotomic polynomials Φ_n(x), taken over all sphenic numbers n, may contain arbitrarily large coefficients (for n a product of two primes the coefficients are ±1 or 0). No proper multiple of a sphenic number is itself sphenic: multiplying by any integer greater than 1 either adds another prime factor or raises an existing factor to a higher power. Consecutive sphenic numbers The first case of two consecutive sphenic integers is 230 = 2×5×23 and 231 = 3×7×11. The first case of three is 1309 = 7×11×17, 1310 = 2×5×131, and 1311 = 3×19×23. There is no case of more than three, because every fourth consecutive positive integer is divisible by 4 = 2×2 and therefore not squarefree. The numbers 2013 (3×11×61), 2014 (2×19×53), and 2015 (5×13×31) are all sphenic. The next three conse
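The definition and the eight-divisors property are easy to verify by trial-division factoring (a minimal sketch; function names are my own):

```python
def prime_factors(n):
    """Prime factorization of n as a dict {prime: exponent}, by trial division."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def is_sphenic(n):
    """Product of exactly three distinct primes, each to the first power."""
    f = prime_factors(n)
    return len(f) == 3 and all(e == 1 for e in f.values())

sphenics = [n for n in range(2, 170) if is_sphenic(n)]
assert sphenics[:8] == [30, 42, 66, 70, 78, 102, 105, 110]
# Every sphenic number has exactly eight divisors: 1, p, q, r, pq, pr, qr, pqr.
for n in sphenics:
    assert sum(1 for d in range(1, n + 1) if n % d == 0) == 8
```

The converse failure mentioned above is also easy to see: 24 = 2³ × 3 has eight divisors but only two distinct prime factors, so is_sphenic(24) is False.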
https://en.wikipedia.org/wiki/Plus%20and%20minus%20signs
The plus sign (+) and the minus sign (−) are mathematical symbols used to represent the notions of positive and negative, respectively. In addition, + represents the operation of addition, which results in a sum, while − represents subtraction, resulting in a difference. Their use has been extended to many other meanings, more or less analogous. Plus and minus are Latin terms meaning "more" and "less", respectively. History Though the signs now seem as familiar as the alphabet or the Hindu-Arabic numerals, they are not of great antiquity. The Egyptian hieroglyphic sign for addition, for example, resembled a pair of legs walking in the direction in which the text was written (Egyptian could be written either from right to left or left to right), with the reverse sign indicating subtraction. Nicole Oresme's manuscripts from the 14th century show what may be one of the earliest uses of + as a sign for plus. In early 15th century Europe, the letters "P" and "M" were generally used. The symbols (P with overline, p̄, for più (more), i.e., plus, and M with overline, m̄, for meno (less), i.e., minus) appeared for the first time in Luca Pacioli's mathematics compendium, Summa de arithmetica, first printed and published in Venice in 1494. The + sign is a simplification of the Latin et, "and" (comparable to the evolution of the ampersand &). The − may be derived from a tilde written over m when used to indicate subtraction; or it may come from a shorthand version of the letter m itself. In his 1489 treatise, Johannes Widmann referred to the symbols − and + as minus and mer (Modern German mehr; "more"): They weren't used for addition and subtraction in the treatise, but were used to indicate surplus and deficit; usage in the modern sense is attested in a 1518 book by Henricus Grammateus. Robert Recorde, the designer of the equals sign, introduced plus and minus to Britain in 1557 in The Whetstone of Witte: "There be other 2 signes in often use of which the first is made thus + and betokeneth more: the other is thus made − and betokeneth lesse."
https://en.wikipedia.org/wiki/Barcan%20formula
In quantified modal logic, the Barcan formula and the converse Barcan formula (more accurately, schemata rather than formulas) (i) syntactically state principles of interchange between quantifiers and modalities; (ii) semantically state a relation between domains of possible worlds. The formulas were introduced as axioms by Ruth Barcan Marcus, in the first extensions of modal propositional logic to include quantification. Related formulas include the Buridan formula. The Barcan formula The Barcan formula is: ∀x□Fx → □∀xFx. In English, the schema reads: If every x is necessarily F, then it is necessary that every x is F. It is equivalent to ◇∃xFx → ∃x◇Fx. The Barcan formula has generated some controversy because, in terms of possible world semantics, it implies that all objects which exist in any possible world (accessible to the actual world) exist in the actual world, i.e. that domains cannot grow when one moves to accessible worlds. This thesis is sometimes known as actualism, i.e. the view that there are no merely possible individuals. There is some debate as to the informal interpretation of the Barcan formula and its converse. An informal argument against the plausibility of the Barcan formula would be the interpretation of the predicate Fx as "x is a machine that can tap all the energy locked in the waves of the Atlantic Ocean in a practical and efficient way". In its equivalent form above, the antecedent seems plausible since it is at least theoretically possible that such a machine could exist. However, it is not obvious that this implies that there exists a machine that possibly could tap the energy of the Atlantic. Converse Barcan formula The converse Barcan formula is: □∀xFx → ∀x□Fx. It is equivalent to ∃x◇Fx → ◇∃xFx. If a frame is based on a symmetric accessibility relation, then the Barcan formula will be valid in the frame if, and only if, the converse Barcan formula is valid in the frame. It states that domains cannot shrink as one moves to accessible worlds, i.e. that individuals cannot cease to
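The connection between the Barcan formula and non-growing domains can be tested on a finite Kripke model. The sketch below (names and encoding my own) represents accessibility as a set of world pairs, world-relative domains as sets, and the extension of F as a set of (individual, world) pairs, then evaluates ∀x□Fx → □∀xFx at a world:

```python
def barcan_holds(world, R, dom, F):
    """Check the Barcan schema  forall x []Fx -> [] forall x Fx  at `world`.
    R: accessibility pairs; dom: world -> set of individuals;
    F: set of (individual, world) pairs where F is true."""
    succ = [v for (u, v) in R if u == world]
    # Antecedent: every x in the domain of `world` is F at every accessible world.
    antecedent = all((x, v) in F for x in dom[world] for v in succ)
    # Consequent: at every accessible world, every x in *that* world's domain is F.
    consequent = all((x, v) in F for v in succ for x in dom[v])
    return (not antecedent) or consequent

# Growing domain: w sees v, and v contains a new individual b that is not F.
R = {("w", "v")}
dom = {"w": {"a"}, "v": {"a", "b"}}
F = {("a", "v")}
assert barcan_holds("w", R, dom, F) is False  # Barcan fails when the domain grows

# With constant domains the schema holds at w.
dom2 = {"w": {"a"}, "v": {"a"}}
assert barcan_holds("w", R, dom2, F) is True
```

The counterexample is exactly the informal point above: everything in the actual world's domain may be necessarily F, yet an accessible world can contain a fresh individual that is not F.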
https://en.wikipedia.org/wiki/List%20of%20feeding%20behaviours
Feeding is the process by which organisms, typically animals, obtain food. Terminology often uses either the suffixes -vore, -vory, or -vorous from Latin vorare, meaning "to devour", or -phage, -phagy, or -phagous from Greek φαγεῖν (), meaning "to eat". Evolutionary history The evolution of feeding is varied with some feeding strategies evolving several times in independent lineages. In terrestrial vertebrates, the earliest forms were large amphibious piscivores 400 million years ago. While amphibians continued to feed on fish and later insects, reptiles began exploring two new food types, other tetrapods (carnivory), and later, plants (herbivory). Carnivory was a natural transition from insectivory for medium and large tetrapods, requiring minimal adaptation (in contrast, a complex set of adaptations was necessary for feeding on highly fibrous plant materials). Evolutionary adaptations The specialization of organisms towards specific food sources is one of the major causes of evolution of form and function, such as: mouth parts and teeth, such as in whales, vampire bats, leeches, mosquitos, predatory animals such as felines and fishes, etc. distinct forms of beaks in birds, such as in hawks, woodpeckers, pelicans, hummingbirds, parrots, kingfishers, etc. specialized claws and other appendages, for apprehending or killing (including fingers in primates) changes in body colour for facilitating camouflage, disguise, setting up traps for preys, etc. changes in the digestive system, such as the system of stomachs of herbivores, commensalism and symbiosis Classification By mode of ingestion There are many modes of feeding that animals exhibit, including: Filter feeding: obtaining nutrients from particles suspended in water Deposit feeding: obtaining nutrients from particles suspended in soil Fluid feeding: obtaining nutrients by consuming other organisms' fluids Bulk feeding: obtaining nutrients by eating all of an organism. Ram feeding and suction feeding: in
https://en.wikipedia.org/wiki/Sheaf%20%28mathematics%29
In mathematics, a sheaf (plural: sheaves) is a tool for systematically tracking data (such as sets, abelian groups, rings) attached to the open sets of a topological space and defined locally with regard to them. For example, for each open set, the data could be the ring of continuous functions defined on that open set. Such data is well behaved in that it can be restricted to smaller open sets, and also the data assigned to an open set is equivalent to all collections of compatible data assigned to collections of smaller open sets covering the original open set (intuitively, every piece of data is the sum of its parts). The field of mathematics that studies sheaves is called sheaf theory. Sheaves are understood conceptually as general and abstract objects. Their correct definition is rather technical. They are specifically defined as sheaves of sets or as sheaves of rings, for example, depending on the type of data assigned to the open sets. There are also maps (or morphisms) from one sheaf to another; sheaves (of a specific type, such as sheaves of abelian groups) with their morphisms on a fixed topological space form a category. On the other hand, to each continuous map there is associated both a direct image functor, taking sheaves and their morphisms on the domain to sheaves and morphisms on the codomain, and an inverse image functor operating in the opposite direction. These functors, and certain variants of them, are essential parts of sheaf theory. Due to their general nature and versatility, sheaves have several applications in topology and especially in algebraic and differential geometry. First, geometric structures such as that of a differentiable manifold or a scheme can be expressed in terms of a sheaf of rings on the space. In such contexts, several geometric constructions such as vector bundles or divisors are naturally specified in terms of sheaves. Second, sheaves provide the framework for a very general cohomology theory, which encompasses also the
https://en.wikipedia.org/wiki/Weight%20function
A weight function is a mathematical device used when performing a sum, integral, or average to give some elements more "weight" or influence on the result than other elements in the same set. The result of this application of a weight function is a weighted sum or weighted average. Weight functions occur frequently in statistics and analysis, and are closely related to the concept of a measure. Weight functions can be employed in both discrete and continuous settings. They can be used to construct systems of calculus called "weighted calculus" and "meta-calculus". Discrete weights General definition In the discrete setting, a weight function w: A → R⁺ is a positive function defined on a discrete set A, which is typically finite or countable. The constant weight function w(a) := 1 corresponds to the unweighted situation in which all elements have equal weight. One can then apply this weight to various concepts. If f: A → R is a real-valued function, then the unweighted sum of f on A is defined as ∑_{a∈A} f(a); but given a weight function w, the weighted sum or conical combination is defined as ∑_{a∈A} f(a) w(a). One common application of weighted sums arises in numerical integration. If B is a finite subset of A, one can replace the unweighted cardinality |B| of B by the weighted cardinality ∑_{a∈B} w(a). If A is a finite non-empty set, one can replace the unweighted mean or average (1/|A|) ∑_{a∈A} f(a) by the weighted mean or weighted average ∑_{a∈A} f(a) w(a) / ∑_{a∈A} w(a). In this case only the relative weights are relevant. Statistics Weighted means are commonly used in statistics to compensate for the presence of bias. For a quantity f measured multiple independent times f_i with variance σ_i², the best estimate of the signal is obtained by averaging all the measurements with weight w_i = 1/σ_i², and the resulting variance σ² = 1/∑ w_i is smaller than each of the independent measurements. The maximum likelihood method weights the difference between fit and data using the same weights w_i. The expected value of a random variable is the weighted average of the possible values it might take on, with the we
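The weighted mean and the inverse-variance weighting scheme described above can be sketched directly (function name my own; the numbers are made up for illustration):

```python
def weighted_mean(values, weights):
    """Weighted average: sum(w_i * x_i) / sum(w_i); only relative weights matter."""
    assert len(values) == len(weights) and sum(weights) > 0
    return sum(w * x for w, x in zip(weights, values)) / sum(weights)

# Plain mean is the special case of equal weights.
assert weighted_mean([1, 2, 3], [1, 1, 1]) == 2.0

# Inverse-variance weighting: measurements 10 +/- 1 and 12 +/- 2 (variances 1 and 4).
measurements, variances = [10.0, 12.0], [1.0, 4.0]
weights = [1 / v for v in variances]            # w_i = 1 / sigma_i^2
best = weighted_mean(measurements, weights)
combined_variance = 1 / sum(weights)            # sigma^2 = 1 / sum(w_i)
assert abs(best - 10.4) < 1e-12                 # pulled toward the more precise value
assert combined_variance < min(variances)       # better than any single measurement
```

Note that scaling all weights by a constant leaves the weighted mean unchanged, which is the "only relative weights are relevant" remark above.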
https://en.wikipedia.org/wiki/Gaussian%20function
In mathematics, a Gaussian function, often simply referred to as a Gaussian, is a function of the base form f(x) = exp(−x²) and with parametric extension f(x) = a exp(−(x − b)²/(2c²)) for arbitrary real constants a, b and non-zero c. It is named after the mathematician Carl Friedrich Gauss. The graph of a Gaussian is a characteristic symmetric "bell curve" shape. The parameter a is the height of the curve's peak, b is the position of the center of the peak, and c (the standard deviation, sometimes called the Gaussian RMS width) controls the width of the "bell". Gaussian functions are often used to represent the probability density function of a normally distributed random variable with expected value μ = b and variance σ² = c². In this case, the Gaussian is of the form g(x) = (1/(σ√(2π))) exp(−(x − μ)²/(2σ²)). Gaussian functions are widely used in statistics to describe the normal distributions, in signal processing to define Gaussian filters, in image processing where two-dimensional Gaussians are used for Gaussian blurs, and in mathematics to solve heat equations and diffusion equations and to define the Weierstrass transform. Properties Gaussian functions arise by composing the exponential function with a concave quadratic function: f(x) = exp(αx² + βx + γ), where α = −1/(2c²), β = b/c², and γ = ln a − b²/(2c²). The Gaussian functions are thus those functions whose logarithm is a concave quadratic function. The parameter c is related to the full width at half maximum (FWHM) of the peak according to FWHM = 2√(2 ln 2) c ≈ 2.35482c. The function may then be expressed in terms of the FWHM, represented by w: f(x) = a exp(−4 ln(2) (x − b)²/w²). Alternatively, the parameter c can be interpreted by saying that the two inflection points of the function occur at x = b ± c. The full width at tenth of maximum (FWTM) for a Gaussian could be of interest and is FWTM = 2√(2 ln 10) c ≈ 4.29193c. Gaussian functions are analytic, and their limit as x → ±∞ is 0. Gaussian functions are among those functions that are elementary but lack elementary antiderivatives; the integral of the Gaussian function is the error function: ∫ e^(−x²) dx = (√π/2) erf(x) + C. Nonetheless, their improper integrals over the whole real line can be
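The FWHM and FWTM relations can be verified numerically: at half an FWHM either side of the center, the curve should sit at exactly half its peak height (a small sketch; parameter values are arbitrary):

```python
import math

def gaussian(x, a=1.0, b=0.0, c=1.0):
    """a * exp(-(x - b)^2 / (2 c^2)): peak height a, center b, width c."""
    return a * math.exp(-((x - b) ** 2) / (2 * c ** 2))

a, b, c = 2.0, 1.0, 3.0
fwhm = 2 * math.sqrt(2 * math.log(2)) * c   # ~= 2.35482 * c

# Half an FWHM either side of the center gives half the peak height.
assert abs(gaussian(b - fwhm / 2, a, b, c) - a / 2) < 1e-12
assert abs(gaussian(b + fwhm / 2, a, b, c) - a / 2) < 1e-12

# FWTM: full width at a tenth of the maximum.
fwtm = 2 * math.sqrt(2 * math.log(10)) * c  # ~= 4.29193 * c
assert abs(gaussian(b + fwtm / 2, a, b, c) - a / 10) < 1e-12
```

The check works for any a, b and non-zero c, since the widths scale linearly with c and are independent of a and b.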
https://en.wikipedia.org/wiki/Trigonometric%20integral
In mathematics, trigonometric integrals are a family of integrals involving trigonometric functions. Sine integral The different sine integral definitions are Si(x) = ∫₀ˣ (sin t)/t dt and si(x) = −∫ₓ^∞ (sin t)/t dt. Note that the integrand (sin t)/t is the sinc function, and also the zeroth spherical Bessel function. Since sinc is an even entire function (holomorphic over the entire complex plane), Si is entire, odd, and the integral in its definition can be taken along any path connecting the endpoints. By definition, Si(x) is the antiderivative of (sin x)/x whose value is zero at x = 0, and si(x) is the antiderivative whose value is zero at x = ∞. Their difference is given by the Dirichlet integral, Si(x) − si(x) = ∫₀^∞ (sin t)/t dt = π/2. In signal processing, the oscillations of the sine integral cause overshoot and ringing artifacts when using the sinc filter, and frequency domain ringing if using a truncated sinc filter as a low-pass filter. Related is the Gibbs phenomenon: If the sine integral is considered as the convolution of the sinc function with the heaviside step function, this corresponds to truncating the Fourier series, which is the cause of the Gibbs phenomenon. Cosine integral The different cosine integral definitions are Ci(x) = γ + ln x + ∫₀ˣ (cos t − 1)/t dt and Cin(x) = ∫₀ˣ (1 − cos t)/t dt, where γ is the Euler–Mascheroni constant. Some texts use ci instead of Ci. Ci(x) is the antiderivative of (cos x)/x (which vanishes as x → ∞). The two definitions are related by Ci(x) = γ + ln x − Cin(x). Cin is an even, entire function. For that reason, some texts treat Cin as the primary function, and derive Ci in terms of Cin. Hyperbolic sine integral The hyperbolic sine integral is defined as Shi(x) = ∫₀ˣ (sinh t)/t dt. It is related to the ordinary sine integral by Si(ix) = i Shi(x). Hyperbolic cosine integral The hyperbolic cosine integral is Chi(x) = γ + ln x + ∫₀ˣ (cosh t − 1)/t dt, where γ is the Euler–Mascheroni constant. It has the series expansion Chi(x) = γ + ln x + ∑_{n≥1} x^(2n)/(2n·(2n)!). Auxiliary functions Trigonometric integrals can be understood in terms of the so-called "auxiliary functions" f(x) and g(x). Using these functions, the trigonometric integrals may be re-expressed as (cf. Abramowitz & Stegun, p. 232) Si(x) = π/2 − f(x) cos x − g(x) sin x and Ci(x) = f(x) sin x − g(x) cos x. Nielsen's spiral The spiral formed by parametric plot of si, ci is known as Nielsen's spiral. The spiral is closely related to the
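Since Si is entire, it can be evaluated from its Maclaurin series, obtained by integrating the series for (sin t)/t term by term. The sketch below (function name my own) does exactly that and checks a few of the properties stated above:

```python
import math

def Si(x, terms=40):
    """Sine integral via its Maclaurin series:
    Si(x) = sum_{n>=0} (-1)^n * x^(2n+1) / ((2n+1) * (2n+1)!)."""
    total = 0.0
    for n in range(terms):
        k = 2 * n + 1
        total += (-1) ** n * x ** k / (k * math.factorial(k))
    return total

# Si is odd, and Si(1) ~= 0.946083070367183 (Abramowitz & Stegun).
assert abs(Si(1.0) - 0.946083070367183) < 1e-12
assert abs(Si(-1.0) + Si(1.0)) < 1e-15
# Si(x) approaches the Dirichlet integral pi/2 for large x.
assert abs(Si(20.0, terms=60) - math.pi / 2) < 0.05
```

Note the series converges for all x but suffers floating-point cancellation for large arguments; production code (e.g. scipy.special.sici) uses asymptotic expansions there instead.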
https://en.wikipedia.org/wiki/ARX%20%28operating%20system%29
ARX was an unreleased Mach-like operating system written in Modula-2+ developed by Acorn Computers Ltd in the Acorn Research Centre (ARC) United Kingdom (UK) and later by Olivetti, which purchased Acorn, for Acorn's new Archimedes personal computers based on the ARM architecture reduced instruction set computer (RISC) central processing units (CPUs). Overview According to the project Application Manager Richard Cownie, during the project, while Acorn was developing the kernel, it used the C and Acorn Modula Execution Library (CAMEL) in the Acorn Extended Modula-2 (AEM2) compiler (ported from the ETH Zurich (ETH) Modula-2 compiler, using Econet hardware). Though never released externally, CAMEL was ported for use on Sun Microsystems Unix computers. In an effort to port Sun's NeWS window system to the Archimedes, David Chase developed a compiler based on AEM2 for the programming language Modula-3. ARX was a preemptive multitasking, multithreading, multi-user operating system. Much of the OS ran in user mode and as a result suffered performance problems due to switches into kernel mode to perform mutex operations, which led to the introduction of the SWP instruction to the instruction set of the ARMv2a version of the ARM processor. It had support for a file system for optical (write once read many (WORM)) disks and featured a window system, a window toolkit (and a direct manipulation user interface (UI) editor) and an Interscript-based text editor, for enriched documents written in Interpress (an HTML precursor). The OS had to be fitted in a 512 KB read-only memory (ROM) image. This suggests that ARX had a microkernel-type design. It was not finished in time to be used in the Acorn Archimedes range of computers, which shipped in 1987 with an operating system named Arthur, later renamed RISC OS, derived from the earlier Machine Operating System (MOS) from Acorn's earlier 8-bit BBC Micro range. Confusion persisted about the nature of ARX amongst the wider public and press, with some b
https://en.wikipedia.org/wiki/Durability%20%28database%20systems%29
In database systems, durability is the ACID property that guarantees that the effects of transactions that have been committed will survive permanently, even in case of failures, including incidents and catastrophic events. For example, if a flight booking reports that a seat has successfully been booked, then the seat will remain booked even if the system crashes. Formally, a database system ensures the durability property if it tolerates three types of failures: transaction, system, and media failures. In particular, a transaction fails if its execution is interrupted before all its operations have been processed by the system. These kinds of interruptions can be originated at the transaction level by data-entry errors, operator cancellation, timeout, or application-specific errors, like withdrawing money from a bank account with insufficient funds. At the system level, a failure occurs if the contents of the volatile storage are lost, due, for instance, to system crashes, like out-of-memory events. At the media level, where media means a stable storage that withstands system failures, failures happen when the stable storage, or part of it, is lost. These cases are typically represented by disk failures. Thus, to be durable, the database system should implement strategies and operations that guarantee that the effects of transactions that have been committed before the failure will survive the event (even by reconstruction), while the changes of incomplete transactions, which have not been committed yet at the time of failure, will be reverted and will not affect the state of the database system. These behaviours are proven to be correct when the execution of transactions has respectively the resilience and recoverability properties. Mechanisms In transaction-based systems, the mechanisms that assure durability are historically associated with the concept of reliability of systems, as proposed by Jim Gray in 1981. This concept includes durability, but it also
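One standard strategy for the reconstruction described above is redo logging: write a transaction's effects and a commit record to stable storage before acknowledging the commit, and on recovery replay only the writes of committed transactions. The sketch below is purely illustrative (in-memory lists stand in for stable storage; all names are my own), but it shows the recovery invariant:

```python
log = []   # stands in for stable storage: survives the simulated crash
db = {}    # volatile state: lost on crash

def commit(txn_id, writes):
    for key, value in writes.items():
        log.append(("write", txn_id, key, value))
    log.append(("commit", txn_id))   # durability point: commit record reaches the log
    db.update(writes)

def begin_uncommitted(txn_id, writes):
    for key, value in writes.items():
        log.append(("write", txn_id, key, value))  # logged, but no commit record
    db.update(writes)                              # dirty: visible but not durable

def recover(log):
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    state = {}
    for rec in log:
        if rec[0] == "write" and rec[1] in committed:
            state[rec[2]] = rec[3]   # redo committed writes only
    return state

commit("T1", {"seat_14A": "booked"})
begin_uncommitted("T2", {"seat_14B": "booked"})  # interrupted before committing
db = {}                                           # system failure wipes volatile state
db = recover(log)
assert db == {"seat_14A": "booked"}  # T1 survives; T2's effects are discarded
```

A real system would additionally force the log to disk (fsync) at the commit record and handle undo for in-place updates, which this sketch omits.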
https://en.wikipedia.org/wiki/Instruction-level%20parallelism
Instruction-level parallelism (ILP) is the parallel or simultaneous execution of a sequence of instructions in a computer program. More specifically ILP refers to the average number of instructions run per step of this parallel execution. Discussion ILP must not be confused with concurrency. In ILP there is a single specific thread of execution of a process. On the other hand, concurrency involves the assignment of multiple threads to a CPU's core in a strict alternation, or in true parallelism if there are enough CPU cores, ideally one core for each runnable thread. There are two approaches to instruction-level parallelism: hardware and software. Hardware level works upon dynamic parallelism, whereas the software level works on static parallelism. Dynamic parallelism means the processor decides at run time which instructions to execute in parallel, whereas static parallelism means the compiler decides which instructions to execute in parallel. The Pentium processor works on the dynamic sequence of parallel execution, but the Itanium processor works on the static level parallelism. Consider the following program: e = a + b f = c + d m = e * f Operation 3 depends on the results of operations 1 and 2, so it cannot be calculated until both of them are completed. However, operations 1 and 2 do not depend on any other operation, so they can be calculated simultaneously. If we assume that each operation can be completed in one unit of time then these three instructions can be completed in a total of two units of time, giving an ILP of 3/2. A goal of compiler and processor designers is to identify and take advantage of as much ILP as possible. Ordinary programs are typically written under a sequential execution model where instructions execute one after the other and in the order specified by the programmer. ILP allows the compiler and the processor to overlap the execution of multiple instructions or even to change the order in which instructions are executed.
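The three-instruction example above can be scheduled mechanically: give each instruction a level one greater than the deepest of its dependencies, and instructions on the same level run in the same step. A minimal sketch (function name my own; it assumes unlimited functional units and a topologically ordered dependency map):

```python
def schedule_levels(deps):
    """Greedy list scheduling with unlimited units: an instruction runs one step
    after all of its dependencies; independent instructions share a step."""
    level = {}
    for instr in deps:  # assumes deps is listed in topological order
        level[instr] = 1 + max((level[d] for d in deps[instr]), default=0)
    return level

# e = a + b ; f = c + d ; m = e * f   (operands a..d are already available)
deps = {"e": [], "f": [], "m": ["e", "f"]}
level = schedule_levels(deps)
steps = max(level.values())
ilp = len(deps) / steps
assert level == {"e": 1, "f": 1, "m": 2}
assert ilp == 1.5   # three instructions complete in two steps
```

This is the static (compiler-side) view; a dynamically scheduled processor discovers the same independence at run time from register dependences.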
https://en.wikipedia.org/wiki/Elimination%20theory
In commutative algebra and algebraic geometry, elimination theory is the classical name for algorithmic approaches to eliminating some variables between polynomials of several variables, in order to solve systems of polynomial equations. Classical elimination theory culminated with the work of Francis Macaulay on multivariate resultants, as described in the chapter on Elimination theory in the first editions (1930) of Bartel van der Waerden's Moderne Algebra. After that, elimination theory was ignored by most algebraic geometers for almost thirty years, until the introduction of new methods for solving polynomial equations, such as Gröbner bases, which were needed for computer algebra. History and connection to modern theories The field of elimination theory was motivated by the need for methods for solving systems of polynomial equations. One of the first results was Bézout's theorem, which bounds the number of solutions (in the case of two polynomials in two variables, in Bézout's time). Except for Bézout's theorem, the general approach was to eliminate variables in order to reduce the problem to a single equation in one variable. The case of linear equations was completely solved by Gaussian elimination; the older method of Cramer's rule does not proceed by elimination, and works only when the number of equations equals the number of variables. In the 19th century, this was extended to linear Diophantine equations and abelian groups with Hermite normal form and Smith normal form. Before the 20th century, different types of eliminants were introduced, including resultants, and various kinds of discriminants. In general, these eliminants are also invariant under various changes of variables, and are also fundamental in invariant theory. All these concepts are effective, in the sense that their definitions include a method of computation. Around 1890, David Hilbert introduced non-effective methods, and this was seen as a revolution, which led most algebraic geomet
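The resultant mentioned above is the prototypical eliminant: for two univariate polynomials it is the determinant of the Sylvester matrix, and it vanishes exactly when the polynomials share a common root, which is how a variable is eliminated from a system. A minimal exact-arithmetic sketch (function names my own):

```python
from fractions import Fraction

def sylvester_resultant(f, g):
    """Resultant of two univariate polynomials via the Sylvester matrix.
    f, g are coefficient lists, highest degree first; the resultant is
    zero iff f and g share a common root."""
    m, n = len(f) - 1, len(g) - 1
    rows = [[0] * i + list(f) + [0] * (n - 1 - i) for i in range(n)]
    rows += [[0] * i + list(g) + [0] * (m - 1 - i) for i in range(m)]
    return det([[Fraction(x) for x in row] for row in rows])

def det(a):
    """Exact determinant by Gaussian elimination with pivoting."""
    a = [row[:] for row in a]
    n, result = len(a), Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            result = -result
        result *= a[col][col]
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return result

# x^2 - 1 and x - 1 share the root x = 1, so the resultant vanishes.
assert sylvester_resultant([1, 0, -1], [1, -1]) == 0
# x^2 + 1 and x - 1 share no root; here the resultant equals f(1) = 2.
assert sylvester_resultant([1, 0, 1], [1, -1]) == 2
```

Eliminating a variable from a bivariate system works the same way: treat the polynomials as univariate in one variable with coefficients that are polynomials in the other, and the resultant is a polynomial in the remaining variable alone.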
https://en.wikipedia.org/wiki/Commutative%20algebra
Commutative algebra, first known as ideal theory, is the branch of algebra that studies commutative rings, their ideals, and modules over such rings. Both algebraic geometry and algebraic number theory build on commutative algebra. Prominent examples of commutative rings include polynomial rings; rings of algebraic integers, including the ordinary integers ; and p-adic integers. Commutative algebra is the main technical tool in the local study of schemes. The study of rings that are not necessarily commutative is known as noncommutative algebra; it includes ring theory, representation theory, and the theory of Banach algebras. Overview Commutative algebra is essentially the study of the rings occurring in algebraic number theory and algebraic geometry. In algebraic number theory, the rings of algebraic integers are Dedekind rings, which constitute therefore an important class of commutative rings. Considerations related to modular arithmetic have led to the notion of a valuation ring. The restriction of algebraic field extensions to subrings has led to the notions of integral extensions and integrally closed domains as well as the notion of ramification of an extension of valuation rings. The notion of localization of a ring (in particular the localization with respect to a prime ideal, the localization consisting in inverting a single element and the total quotient ring) is one of the main differences between commutative algebra and the theory of non-commutative rings. It leads to an important class of commutative rings, the local rings that have only one maximal ideal. The set of the prime ideals of a commutative ring is naturally equipped with a topology, the Zariski topology. All these notions are widely used in algebraic geometry and are the basic technical tools for the definition of scheme theory, a generalization of algebraic geometry introduced by Grothendieck. Many other notions of commutative algebra are counterparts of geometrical notions occurring
https://en.wikipedia.org/wiki/UN/LOCODE
UN/LOCODE, the United Nations Code for Trade and Transport Locations, is a geographic coding scheme developed and maintained by the United Nations Economic Commission for Europe (UNECE). UN/LOCODE assigns codes to locations used in trade and transport with functions such as seaports, rail and road terminals, airports, postal exchange offices and border crossing points. The first issue in 1981 contained codes for 8,000 locations. The version from 2011 contained codes for about 82,000 locations. Structure UN/LOCODEs have five characters. The first two letters code a country by the table defined in ISO 3166-1 alpha-2. The three remaining characters code a location within that country. Letters are preferred, but if necessary the digits 2 through 9 may be used; "0" and "1" are excluded to avoid confusion with the letters "O" and "I" respectively. For each country there can be a maximum of 17,576 entries using only letters (26×26×26), or 39,304 entries using letters and digits (34×34×34). For the US, the letter combinations have almost all been exhausted, so in 2006 the Secretariat added 646 entries with a digit as the last character. Loose consistency with existing IATA airport codes For airports, the three letters following the country code are not always identical to the IATA airport code. According to the Secretariat note for Issue 2006-2, there are 720 locations showing a different IATA code. Official UN/LOCODE tables UN/LOCODEs are released as a table. An individual revision is officially referred to as an "issue". A discussion of the table's structure follows. Examples Explanations US NYC for New York City in the United States. Subdivision is the U.S. state of New York (see ISO 3166-2:US). Function: port, rail, road, airport, postal. IATA code is NYC. Coordinates: . DE BER for Berlin (city) in Germany. Subdivision is the German state of Berlin (see ISO 3166-2:DE). Function: port, rail, road, airport, postal. IATA code is BER. Coordinates: . DE TXL for Berlin-Tegel Airport in Germ
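The structural rules above (two-letter country part; three-character place part drawn from the letters A–Z and the digits 2–9) can be checked mechanically. The following Python sketch is a hypothetical format validator, not an official UNECE tool: it checks syntax only and does not consult the real code tables.

```python
import re

# Country part: two ISO 3166-1 alpha-2 letters. Place part: three characters
# from A-Z or the digits 2-9 ("0" and "1" are excluded to avoid confusion
# with the letters "O" and "I"). An optional space may separate the parts.
LOCODE_RE = re.compile(r"^([A-Z]{2})\s?([A-Z2-9]{3})$")

def parse_locode(code: str):
    """Split a UN/LOCODE into (country, place), or return None if malformed."""
    m = LOCODE_RE.match(code.strip().upper())
    return m.groups() if m else None

# Capacity per country: letters only vs. letters plus the eight allowed digits.
letters_only = 26 ** 3        # 17,576
letters_and_digits = 34 ** 3  # 39,304
```

For instance, `parse_locode("US NYC")` yields `("US", "NYC")`, while a code containing "0" or "1" in the place part is rejected.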
https://en.wikipedia.org/wiki/NonStop%20SQL
NonStop SQL is a commercial relational database management system that is designed for fault tolerance and scalability, currently offered by Hewlett Packard Enterprise. The latest version is SQL/MX 3.4. The product was originally developed by Tandem Computers. Tandem was acquired by Compaq in 1997. Compaq was later acquired by Hewlett-Packard in 2002. When Hewlett-Packard split in 2015 into HP Inc. and Hewlett Packard Enterprise, NonStop SQL and the rest of the NonStop product line went to Hewlett Packard Enterprise. The product is primarily used for online transaction processing and is tailored for organizations that need high availability and scalability for their database system. Typical users of the product are stock exchanges, telecommunications companies, POS systems, and bank ATM networks. History NonStop SQL is designed to run effectively on parallel computers, adding functionality for distributed data, distributed execution, and distributed transactions. First released in 1987, a second version in 1989 added the ability to run queries in parallel, and the product became fairly famous for being one of the few systems that scales almost linearly with the number of processors in the machine: adding a second CPU to an existing NonStop SQL server almost exactly doubled its performance. The second version added /MP to its name, for Massively Parallel. A third version, NonStop SQL/MX, was more ANSI SQL compliant than its predecessor. NonStop SQL/MX has shipped on the NonStop platform since 2002, and can access tables created by NonStop SQL/MP, although only "Native SQL/MX tables" offer the ANSI compliance and many "Oracle-like" enhancements. The HP Neoview business intelligence platform was derived from NonStop SQL. NonStop SQL/MX is HP's only OLTP database product. Parts of the Neoview code base were open-sourced in 2014 under the name Trafodion, which is now a top-level Apache project. See also List of relational database management
https://en.wikipedia.org/wiki/Summation
In mathematics, summation is the addition of a sequence of any kind of numbers, called addends or summands; the result is their sum or total. Beside numbers, other types of values can be summed as well: functions, vectors, matrices, polynomials and, in general, elements of any type of mathematical objects on which an operation denoted "+" is defined. Summations of infinite sequences are called series. They involve the concept of limit, and are not considered in this article. The summation of an explicit sequence is denoted as a succession of additions. For example, summation of [1, 2, 4, 2] is denoted 1 + 2 + 4 + 2, and results in 9, that is, 1 + 2 + 4 + 2 = 9. Because addition is associative and commutative, there is no need for parentheses, and the result is the same irrespective of the order of the summands. Summation of a sequence of only one element results in this element itself. Summation of an empty sequence (a sequence with no elements), by convention, results in 0. Very often, the elements of a sequence are defined, through a regular pattern, as a function of their place in the sequence. For simple patterns, summation of long sequences may be represented with most summands replaced by ellipses. For example, summation of the first 100 natural numbers may be written as 1 + 2 + 3 + ... + 99 + 100. Otherwise, summation is denoted by using Σ notation, where Σ is an enlarged capital Greek letter sigma. For example, the sum of the first n natural numbers can be denoted as ∑_{i=1}^n i. For long summations, and summations of variable length (defined with ellipses or Σ notation), it is a common problem to find closed-form expressions for the result. For example, ∑_{i=1}^n i = n(n+1)/2. Although such formulas do not always exist, many summation formulas have been discovered, with some of the most common and elementary ones being listed in the remainder of this article. Notation Capital-sigma notation Mathematical notation uses a symbol that compactly represents summation of many similar terms: the summation symbol, Σ, an enlarged form of the upright capital Greek l
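The closed-form expression for the sum of the first n natural numbers, n(n + 1)/2, can be checked directly against the explicit summation. A minimal Python sketch:

```python
def sum_first_n(n: int) -> int:
    """Closed form n(n + 1)/2 for the sum 1 + 2 + ... + n."""
    return n * (n + 1) // 2

# The closed form agrees with naive term-by-term addition, e.g. for n = 100.
assert sum_first_n(100) == sum(range(1, 101))
```

The integer division `//` is exact here, since one of n and n + 1 is always even.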
https://en.wikipedia.org/wiki/Psittacosis
Psittacosis—also known as parrot fever, and ornithosis—is a zoonotic infectious disease in humans caused by a bacterium called Chlamydia psittaci and contracted from infected parrots, such as macaws, cockatiels, and budgerigars, and from pigeons, sparrows, ducks, hens, gulls and many other species of birds. The incidence of infection in canaries and finches is believed to be lower than in psittacine birds. In certain contexts, the word is used when the disease is carried by any species of birds belonging to the family Psittacidae, whereas ornithosis is used when other birds carry the disease. In humans Signs and symptoms In humans, after an incubation period of 5–19 days, the disease course ranges from asymptomatic to systemic illness with severe pneumonia. It presents chiefly as an atypical pneumonia. In the first week of psittacosis, the symptoms mimic typhoid fever, causing high fevers, joint pain, diarrhea, conjunctivitis, nose bleeds, and low level of white blood cells. Rose spots called Horder's spots sometimes appear during this stage. Spleen enlargement is common towards the end of the first week, after which psittacosis may develop into a serious lung infection. Diagnosis is indicated where respiratory infection occurs simultaneously with splenomegaly and/or epistaxis. Headache can be so severe that it suggests meningitis and some nuchal rigidity is not unusual. Towards the end of the first week, stupor or even coma can result in severe cases. The second week is more akin to acute bacteremic pneumococcal pneumonia with continuous high fevers, headaches, cough, and dyspnea. X-rays at that stage show patchy infiltrates or a diffuse whiteout of lung fields. Complications in the form of endocarditis, liver inflammation, inflammation of the heart's muscle, joint inflammation, keratoconjunctivitis (occasionally extranodal marginal zone lymphoma of the lacrimal gland/orbit), and neurologic complications (brain inflammation) may occasionally occur. Severe pne
https://en.wikipedia.org/wiki/Walter%20H.%20Schottky
Walter Hans Schottky (23 July 1886 – 4 March 1976) was a German physicist who played a major early role in developing the theory of electron and ion emission phenomena, invented the screen-grid vacuum tube in 1915 while working at Siemens, co-invented the ribbon microphone and ribbon loudspeaker along with Dr. Erwin Gerlach in 1924 and later made many significant contributions in the areas of semiconductor devices, technical physics and technology. Early life Schottky's father was mathematician Friedrich Hermann Schottky (1851–1935). Schottky had one sister and one brother. His father was appointed professor of mathematics at the University of Zurich in 1882, and Schottky was born four years later. The family then moved back to Germany in 1892, where his father took up an appointment at the University of Marburg. Schottky graduated from the Steglitz Gymnasium in Berlin in 1904. He completed his B.S. degree in physics, at the University of Berlin in 1908, and he completed his PhD in physics at the Humboldt University of Berlin in 1912, studying under Max Planck and Heinrich Rubens, with a thesis entitled: Zur relativtheoretischen Energetik und Dynamik (translates as About Relative-Theoretical Energetics and Dynamics). Career Schottky's postdoctoral period was spent at University of Jena (1912–14). He then lectured at the University of Würzburg (1919–23). He became a professor of theoretical physics at the University of Rostock (1923–27). For two considerable periods of time, Schottky worked at the Siemens Research laboratories (1914–19 and 1927–58). Inventions In 1924, Schottky co-invented the ribbon microphone along with Erwin Gerlach. The idea was that a very fine ribbon suspended in a magnetic field could generate electric signals. This led to the invention of the ribbon loudspeaker by using it in the reverse order, but it was not practical until high flux permanent magnets became available in the late 1930s. Major scientific achievements In 1914, Schottky de
https://en.wikipedia.org/wiki/Generalized%20permutation%20matrix
In mathematics, a generalized permutation matrix (or monomial matrix) is a matrix with the same nonzero pattern as a permutation matrix, i.e. there is exactly one nonzero entry in each row and each column. Unlike a permutation matrix, where the nonzero entry must be 1, in a generalized permutation matrix the nonzero entry can be any nonzero value. An example of a generalized permutation matrix is Structure An invertible matrix A is a generalized permutation matrix if and only if it can be written as a product of an invertible diagonal matrix D and an (implicitly invertible) permutation matrix P: i.e., Group structure The set of n × n generalized permutation matrices with entries in a field F forms a subgroup of the general linear group GL(n, F), in which the group of nonsingular diagonal matrices Δ(n, F) forms a normal subgroup. Indeed, the generalized permutation matrices are the normalizer of the diagonal matrices, meaning that the generalized permutation matrices are the largest subgroup of GL(n, F) in which diagonal matrices are normal. The abstract group of generalized permutation matrices is the wreath product of F× and Sn. Concretely, this means that it is the semidirect product of Δ(n, F) by the symmetric group Sn: Sn ⋉ Δ(n, F), where Sn acts by permuting coordinates and the diagonal matrices Δ(n, F) are isomorphic to the n-fold product (F×)n. To be precise, the generalized permutation matrices are a (faithful) linear representation of this abstract wreath product: a realization of the abstract group as a subgroup of matrices. Subgroups The subgroup where all entries are 1 is exactly the permutation matrices, which is isomorphic to the symmetric group. The subgroup where all entries are ±1 is the signed permutation matrices, which is the hyperoctahedral group. The subgroup where the entries are mth roots of unity is isomorphic to a generalized symmetric group. The subgroup of diagonal matrices is abelian, normal, and a maximal abelian subgroup. Th
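The factorization A = DP described above is easy to illustrate numerically. The Python sketch below uses plain nested lists (no external libraries), and the particular D and P are arbitrary examples: it builds a generalized permutation matrix as the product of an invertible diagonal matrix and a permutation matrix, then verifies the one-nonzero-entry-per-row-and-column pattern.

```python
def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Invertible diagonal factor D and permutation factor P (example choices).
D = [[2, 0, 0], [0, -1, 0], [0, 0, 5]]
P = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
A = matmul(D, P)  # a generalized permutation matrix

def is_generalized_permutation(M):
    """Exactly one nonzero entry in each row and each column."""
    n = len(M)
    rows_ok = all(sum(1 for x in row if x != 0) == 1 for row in M)
    cols_ok = all(sum(1 for i in range(n) if M[i][j] != 0) == 1
                  for j in range(n))
    return rows_ok and cols_ok
```

Here row i of A is row i of P scaled by the i-th diagonal entry of D, so the nonzero pattern of A is exactly that of P.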
https://en.wikipedia.org/wiki/Diagonalizable%20matrix
In linear algebra, a square matrix A is called diagonalizable or non-defective if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix P and a diagonal matrix D such that A = PDP⁻¹, or equivalently D = P⁻¹AP. (Such P and D are not unique.) For a finite-dimensional vector space, a linear map T is called diagonalizable if there exists an ordered basis of the space consisting of eigenvectors of T. These definitions are equivalent: if A has a matrix representation as above, then the column vectors of P form a basis consisting of eigenvectors of A, and the diagonal entries of D are the corresponding eigenvalues of A; with respect to this eigenvector basis, T is represented by D. Diagonalization is the process of finding the above P and D. Diagonalizing a matrix makes many subsequent computations easier. One can raise a diagonal matrix D to a power by simply raising the diagonal entries to that power. The determinant of a diagonal matrix is simply the product of all diagonal entries. Such computations generalize easily to Aⁿ = PDⁿP⁻¹. The geometric transformation represented by a diagonalizable matrix is an inhomogeneous dilation (or anisotropic scaling), meaning that it scales the space by a different amount in different directions. In particular, the direction of each eigenvector is scaled by a factor given by the corresponding eigenvalue. An inhomogeneous dilation is in contrast to a homogeneous dilation, which scales by the same amount in every direction. A square matrix that is not diagonalizable is called defective. It can happen that a matrix A with real entries is defective over the real numbers, meaning that A = PDP⁻¹ is impossible for any invertible P and diagonal D with real entries, but it is possible with complex entries, so that A is diagonalizable over the complex numbers. For example, this is the case for a generic rotation matrix. Many results for diagonalizable matrices hold only over an algebraically closed field (such as the complex numbers). In this case, diagonalizable matrices are dense in the
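As a concrete illustration of using diagonalization to compute matrix powers, the sketch below works with a small matrix whose eigendecomposition is known in advance (the matrix [[4, 1], [2, 3]], with eigenvalues 5 and 2, is an assumed example) and checks that cubing the diagonal factor agrees with repeated multiplication. Exact rational arithmetic via `fractions.Fraction` avoids floating-point noise.

```python
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[F(4), F(1)], [F(2), F(3)]]    # eigenvalues 5 and 2 (assumed example)
P = [[F(1), F(1)], [F(1), F(-2)]]   # columns: eigenvectors (1,1) and (1,-2)
D = [[F(5), F(0)], [F(0), F(2)]]

# Explicit 2x2 inverse of P.
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
P_inv = [[ P[1][1] / det, -P[0][1] / det],
         [-P[1][0] / det,  P[0][0] / det]]

# Cube of the matrix two ways: power the diagonal entries, vs. multiply out.
D3 = [[D[0][0] ** 3, F(0)], [F(0), D[1][1] ** 3]]
A3_fast = matmul(matmul(P, D3), P_inv)
A3_slow = matmul(matmul(A, A), A)
```

Raising D to a power costs one exponentiation per diagonal entry, whereas repeated multiplication of the full matrix grows with both the power and the matrix size.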
https://en.wikipedia.org/wiki/Clipper%20%28programming%20language%29
Clipper is an xBase compiler that implements a variant of the xBase computer programming language. It is used to create or extend software programs that originally operated primarily under MS-DOS. Although it is a powerful general-purpose programming language, it was primarily used to create database/business programs. One major dBase feature not implemented in Clipper is the dot-prompt (. prompt) interactive command set, which was an important part of the original dBase implementation. Clipper, from Nantucket Corp and later Computer Associates, started out as a native code compiler for dBase III databases, and later evolved. History Clipper was created by Nantucket Corporation, a company that was started in 1984 by Barry ReBell (management) and Brian Russell (technical); Larry Heimendinger was Nantucket's president. In 1992, the company was sold to Computer Associates for 190 million dollars and the product was renamed CA-Clipper. Clipper was created as a replacement programming language for Ashton-Tate's dBASE III, a very popular database language at the time. The advantage of Clipper over dBASE was that it could be compiled and executed under MS-DOS as a standalone application. In the years between 1985 and 1992, millions of Clipper applications were built, typically for small businesses dealing with databases concerning many aspects of client management and inventory management. For many smaller businesses, having a Clipper application designed to their specific needs was their first experience with software development. Many applications for banking and insurance companies were also developed, especially in cases where the application was considered too small to be developed and run on traditional mainframes. In these environments Clipper also served as a front end for existing mainframe applications. As the product matured, it remained a DOS tool for many years, but added elements of the C programming language and Pascal programming
https://en.wikipedia.org/wiki/Axiom%20of%20infinity
In axiomatic set theory and the branches of mathematics and philosophy that use it, the axiom of infinity is one of the axioms of Zermelo–Fraenkel set theory. It guarantees the existence of at least one infinite set, namely a set containing the natural numbers. It was first published by Ernst Zermelo as part of his set theory in 1908. Formal statement In the formal language of the Zermelo–Fraenkel axioms, the axiom reads: In words, there is a set I (the set that is postulated to be infinite), such that the empty set is in I, and such that whenever any x is a member of I, the set formed by taking the union of x with its singleton {x} is also a member of I. Such a set is sometimes called an inductive set. Interpretation and consequences This axiom is closely related to the von Neumann construction of the natural numbers in set theory, in which the successor of x is defined as x ∪ {x}. If x is a set, then it follows from the other axioms of set theory that this successor is also a uniquely defined set. Successors are used to define the usual set-theoretic encoding of the natural numbers. In this encoding, zero is the empty set: 0 = {}. The number 1 is the successor of 0: 1 = 0 ∪ {0} = {} ∪ {0} = {0} = {{}}. Likewise, 2 is the successor of 1: 2 = 1 ∪ {1} = {0} ∪ {1} = {0, 1} = { {}, {{}} }, and so on: 3 = {0, 1, 2} = { {}, {{}}, {{}, {{}}} }; 4 = {0, 1, 2, 3} = { {}, {{}}, { {}, {{}} }, { {}, {{}}, {{}, {{}}} } }. A consequence of this definition is that every natural number is equal to the set of all preceding natural numbers. The count of elements in each set, at the top level, is the same as the represented natural number, and the nesting depth of the most deeply nested empty set {}, including its nesting in the set that represents the number of which it is a part, is also equal to the natural number that the set represents. This construction forms the natural numbers. However, the other axioms are insufficient to prove the existence of the set of all
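The von Neumann encoding described above is concrete enough to execute. The Python sketch below models hereditarily finite sets with `frozenset`, so the successor operation x ∪ {x} and the identities 0 = {}, 1 = {0}, 2 = {0, 1} can be checked directly. (The axiom itself, of course, asserts the existence of an infinite set, which no finite program can construct.)

```python
def successor(x: frozenset) -> frozenset:
    """Von Neumann successor: S(x) = x ∪ {x}."""
    return x | frozenset({x})

zero = frozenset()        # 0 = {}
one = successor(zero)     # 1 = {0} = {{}}
two = successor(one)      # 2 = {0, 1} = {{}, {{}}}
three = successor(two)    # 3 = {0, 1, 2}
```

Each numeral is literally the set of all preceding numerals, so its top-level element count equals the number it represents, as the article notes.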
https://en.wikipedia.org/wiki/Crab-eating%20macaque
The crab-eating macaque (Macaca fascicularis), also known as the long-tailed macaque and referred to as the cynomolgus monkey in laboratories, is a cercopithecine primate native to Southeast Asia. A species of macaque, the crab-eating macaque has a long history alongside humans. The species has been alternately seen as an agricultural pest, a sacred animal, and, more recently, the subject of medical experiments. The crab-eating macaque lives in matrilineal social groups of up to eight individuals dominated by females. Male members leave the group when they reach puberty. It is an opportunistic omnivore and has been documented using tools to obtain food in Thailand and Myanmar. The crab-eating macaque is a known invasive species and a threat to biodiversity in several locations, including Hong Kong and western New Guinea. The significant overlap in macaque and human living space has resulted in greater habitat loss, synanthropic living, and inter- and intraspecies conflicts over resources. Etymology Macaca comes from the Portuguese word macaco, which was derived from makaku, a word in Ibinda, a language of Central Africa (kaku means monkey in Ibinda). The specific epithet fascicularis is Latin for a small band or stripe. Sir Thomas Raffles, who gave the animal its scientific name in 1821, did not specify what he meant by the use of this word. In Indonesia and Malaysia, the crab-eating macaque and other macaque species are known generically as kera, possibly because of their high-pitched cries. The crab-eating macaque has several common names. It is often referred to as the long-tailed macaque due to its tail, which is often longer than its body. The name crab-eating macaque refers to its being often seen foraging beaches for crabs. Another common name for M. fascicularis is the cynomolgus monkey, from the name of a race of humans with long hair and handsome beards who used dogs for hunting according to Aristophanes of Byzantium, who seemingly derived the etymolog
https://en.wikipedia.org/wiki/Yellow%20Dog%20Linux
Yellow Dog Linux (YDL) is a discontinued free and open-source operating system for high-performance computing on multi-core processor computer architectures, focusing on GPU systems and computers using the POWER7 processor. The original developer was Terra Soft Solutions, which was acquired by Fixstars in October 2008. Yellow Dog Linux was first released in the spring of 1999 for Apple Macintosh PowerPC-based computers. The most recent version, Yellow Dog Linux 7, was released on August 6, 2012. Yellow Dog Linux lent its name to the popular YUM Linux software updater, derived from YDL's YUP (Yellowdog UPdater) and thus called Yellowdog Updater, Modified. Features Yellow Dog Linux is based on Red Hat Enterprise Linux/CentOS and relies on the RPM Package Manager. Its software includes applications such as Ekiga (a voice-over-IP and videoconferencing application), GIMP (a raster graphics editor), Gnash (a free Adobe Flash player), gThumb (an image viewer), the Mozilla Firefox Web browser, the Mozilla Thunderbird e-mail and news client, the OpenOffice.org productivity suite, Pidgin (an instant messaging and IRC client), the Rhythmbox music player, and the KDE Noatun and Totem media players. Starting with YDL version 5.0 'Phoenix', Enlightenment is the Yellow Dog Linux default desktop environment, although GNOME and KDE are also included. Like other Linux distributions, Yellow Dog Linux supports software development with GCC (compiled with support for C, C++, Java, and Fortran), the GNU C Library, GDB, GLib, the GTK+ toolkit, Python, the Qt toolkit, Ruby and Tcl. Standard text editors such as Vim and Emacs are complemented with IDEs such as Eclipse and KDevelop, as well as by graphical debuggers such as KDbg. Standard document preparation tools such as TeX and LaTeX are also included. Yellow Dog Linux includes software for running a Web server (such as Apache/httpd, Perl, and PHP), database server (such as MySQL and PostgreSQL), and network server (NFS and Webmin).
https://en.wikipedia.org/wiki/Operator%20norm
In mathematics, the operator norm measures the "size" of certain linear operators by assigning each a real number called its operator norm. Formally, it is a norm defined on the space of bounded linear operators between two given normed vector spaces. Informally, the operator norm of a linear map is the maximum factor by which it "lengthens" vectors. Introduction and definition Given two normed vector spaces V and W (over the same base field, either the real numbers or the complex numbers), a linear map f : V → W is continuous if and only if there exists a real number c such that ‖f(v)‖ ≤ c‖v‖ for all v in V. The norm on the left is the one in W and the norm on the right is the one in V. Intuitively, the continuous operator f never increases the length of any vector by more than a factor of c. Thus the image of a bounded set under a continuous operator is also bounded. Because of this property, the continuous linear operators are also known as bounded operators. In order to "measure the size" of f, one can take the infimum of the numbers c such that the above inequality holds for all v in V. This number represents the maximum scalar factor by which f "lengthens" vectors. In other words, the "size" of f is measured by how much it "lengthens" vectors in the "biggest" case. So we define the operator norm of f as the infimum of all such c. The infimum is attained, as the set of all such c is closed, nonempty, and bounded from below. It is important to bear in mind that this operator norm depends on the choice of norms for the normed vector spaces V and W. Examples Every real m-by-n matrix corresponds to a linear map from Rⁿ to Rᵐ. Each pair of the plethora of (vector) norms applicable to real vector spaces induces an operator norm for all m-by-n matrices of real numbers; these induced norms form a subset of matrix norms. If we specifically choose the Euclidean norm on both Rⁿ and Rᵐ, then the matrix norm given to a matrix A is the square root of the largest eigenvalue of the matrix A*A (where A* denotes the conjugate transpose of A). This is equivalent to assigning the larg
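For the Euclidean case just described, the operator norm of a small real matrix can be computed by hand: it is the square root of the largest eigenvalue of AᵀA. The sketch below handles the 2×2 case with the quadratic formula applied to the characteristic polynomial of AᵀA; the shear matrix used as a check is an arbitrary example whose operator norm happens to be the golden ratio.

```python
import math

def operator_norm_2x2(A):
    """Operator 2-norm of a 2x2 real matrix: sqrt of the largest
    eigenvalue of A^T A, via the quadratic formula."""
    a, b = A[0]
    c, d = A[1]
    # Entries of the symmetric matrix A^T A = [[p, q], [q, r]].
    p = a * a + c * c
    q = a * b + c * d
    r = b * b + d * d
    trace, det = p + r, p * r - q * q
    lam_max = (trace + math.sqrt(trace * trace - 4 * det)) / 2
    return math.sqrt(lam_max)

# Shear matrix [[1, 1], [0, 1]]: A^T A has eigenvalues (3 ± sqrt(5))/2,
# so the operator norm is sqrt((3 + sqrt(5))/2) = (1 + sqrt(5))/2.
```

For a diagonal matrix the result reduces to the largest absolute diagonal entry, matching the intuition that the norm is the largest stretch factor.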
https://en.wikipedia.org/wiki/Audio%20power
Audio power is the electrical power transferred from an audio amplifier to a loudspeaker, measured in watts. The electrical power delivered to the loudspeaker, together with its efficiency, determines the sound power generated (with the rest of the electrical power being converted to heat). Amplifiers are limited in the electrical energy they can output, while loudspeakers are limited in the electrical energy they can convert to sound energy without being damaged or distorting the audio signal. These limits, or power ratings, are important to consumers finding compatible products and comparing competitors. Power handling In audio electronics, there are several methods of measuring power output (for such things as amplifiers) and power handling capacity (for such things as loudspeakers). Amplifiers Amplifier output power is limited by voltage, current, and temperature: Voltage: The amp's power supply voltage limits the maximum amplitude of the waveform it can output. This determines the peak momentary output power for a given load resistance. Current: The amp's output devices (transistors or tubes) have a current limit, above which they are damaged. This determines the minimum load resistance that the amp can drive at its maximum voltage. Temperature: The amp's output devices waste some of the electrical energy as heat, and if it is not removed quickly enough, they will rise in temperature to the point of damage. This determines the continuous output power. As an amplifier's power output strongly influences its price, there is an incentive for manufacturers to exaggerate output power specs to increase sales. Without regulations, imaginative approaches to advertising power ratings became so common that in 1975 the US Federal Trade Commission intervened in the market and required all amplifier manufacturers to use an engineering measurement (continuous average power) in addition to any other value they might cite. Loudspeakers For loudspeakers, there is a
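The voltage limit described above translates into power figures via Ohm's law. The sketch below gives illustrative formulas only (real amplifier ratings also depend on current limits, thermal limits, and measurement conditions): peak power into a purely resistive load, and the continuous average power of an undistorted sine wave at that peak voltage.

```python
def peak_power(v_peak: float, load_ohms: float) -> float:
    """Instantaneous peak power P = V_peak^2 / R into a resistive load."""
    return v_peak ** 2 / load_ohms

def continuous_sine_power(v_peak: float, load_ohms: float) -> float:
    """Continuous average power of an undistorted sine wave: V_peak^2 / (2R)."""
    return v_peak ** 2 / (2 * load_ohms)

# Hypothetical numbers: a supply allowing a 40 V peak swing into an 8-ohm
# load gives 200 W peak and 100 W continuous average.
```

The factor of two between the figures is one reason clearly stated measurement conditions (as required by the FTC rule mentioned above) matter when comparing ratings.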
https://en.wikipedia.org/wiki/Partial%20fraction%20decomposition
In algebra, the partial fraction decomposition or partial fraction expansion of a rational fraction (that is, a fraction such that the numerator and the denominator are both polynomials) is an operation that consists of expressing the fraction as a sum of a polynomial (possibly zero) and one or several fractions with a simpler denominator. The importance of the partial fraction decomposition lies in the fact that it provides algorithms for various computations with rational functions, including the explicit computation of antiderivatives, Taylor series expansions, inverse Z-transforms, and inverse Laplace transforms. The concept was discovered independently in 1702 by both Johann Bernoulli and Gottfried Leibniz. In symbols, the partial fraction decomposition of a rational fraction of the form where and are polynomials, is its expression as where is a polynomial, and, for each , the denominator is a power of an irreducible polynomial (that is not factorable into polynomials of positive degrees), and the numerator is a polynomial of a smaller degree than the degree of this irreducible polynomial. When explicit computation is involved, a coarser decomposition is often preferred, which consists of replacing "irreducible polynomial" by "square-free polynomial" in the description of the outcome. This allows replacing polynomial factorization by the much easier-to-compute square-free factorization. This is sufficient for most applications, and avoids introducing irrational coefficients when the coefficients of the input polynomials are integers or rational numbers. Basic principles Let be a rational fraction, where and are univariate polynomials in the indeterminate over a field. The existence of the partial fraction can be proved by applying inductively the following reduction steps. Polynomial part There exist two polynomials and such that and where denotes the degree of the polynomial . This results immediately from the Euclidean division of
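In the simplest situation, a denominator that splits into distinct linear factors and a numerator equal to 1, the coefficients have a closed form (the "cover-up" rule): the coefficient over (x − rᵢ) is 1 / ∏_{j≠i}(rᵢ − rⱼ). A minimal Python sketch using exact rationals:

```python
from fractions import Fraction as F

def partial_fractions(roots):
    """Decompose 1 / prod_i (x - r_i) for distinct roots r_i.
    Returns the coefficient over each factor (x - r_i), computed by
    the cover-up rule: 1 / prod_{j != i} (r_i - r_j)."""
    coeffs = []
    for i, r in enumerate(roots):
        denom = F(1)
        for j, s in enumerate(roots):
            if j != i:
                denom *= (r - s)
        coeffs.append(F(1) / denom)
    return coeffs

# Example: 1/(x^2 - 1) = 1/((x - 1)(x + 1))
#        = (1/2)/(x - 1) + (-1/2)/(x + 1)
```

Note this handles only the distinct-linear-factor case; repeated or irreducible quadratic factors need the fuller machinery described in the article.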
https://en.wikipedia.org/wiki/Communicating%20sequential%20processes
In computer science, communicating sequential processes (CSP) is a formal language for describing patterns of interaction in concurrent systems. It is a member of the family of mathematical theories of concurrency known as process algebras, or process calculi, based on message passing via channels. CSP was highly influential in the design of the occam programming language and also influenced the design of programming languages such as Limbo, RaftLib, Erlang, Go, Crystal, and Clojure's core.async. CSP was first described in a 1978 article by Tony Hoare, but has since evolved substantially. CSP has been practically applied in industry as a tool for specifying and verifying the concurrent aspects of a variety of different systems, such as the T9000 Transputer, as well as a secure ecommerce system. The theory of CSP itself is also still the subject of active research, including work to increase its range of practical applicability (e.g., increasing the scale of the systems that can be tractably analyzed). History The version of CSP presented in Hoare's original 1978 article was essentially a concurrent programming language rather than a process calculus. It had a substantially different syntax than later versions of CSP, did not possess mathematically defined semantics, and was unable to represent unbounded nondeterminism. Programs in the original CSP were written as a parallel composition of a fixed number of sequential processes communicating with each other strictly through synchronous message-passing. In contrast to later versions of CSP, each process was assigned an explicit name, and the source or destination of a message was defined by specifying the name of the intended sending or receiving process. For example, the process COPY = *[c:character; west?c → east!c] repeatedly receives a character from the process named west and sends that character to process named east. The parallel composition [west::DISASSEMBLE || X::COPY || east::ASSEMBLE] assigns the
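Hoare's COPY example can be imitated in Python, with queues standing in for channels. Two caveats make this a loose sketch of the pattern rather than CSP semantics: `queue.Queue` is buffered rather than strictly synchronous, and CSP's named-process addressing is replaced here by shared queue objects.

```python
import queue
import threading

west = queue.Queue(maxsize=1)  # stands in for the channel from process "west"
east = queue.Queue(maxsize=1)  # stands in for the channel to process "east"
DONE = object()                # sentinel marking end of input

def copy_process():
    """Rough analogue of COPY = *[c:character; west?c -> east!c]."""
    while True:
        c = west.get()   # west?c
        east.put(c)      # east!c
        if c is DONE:
            return

t = threading.Thread(target=copy_process)
t.start()

result = []
for ch in "CSP":
    west.put(ch)                 # play the role of the "west" process
    result.append(east.get())    # play the role of the "east" process
west.put(DONE)
east.get()
t.join()
```

Languages the article lists as CSP-influenced (occam, Go, Clojure's core.async) provide channel primitives much closer to the original synchronous model than this queue-based approximation.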
https://en.wikipedia.org/wiki/Null%20cipher
A null cipher, also known as concealment cipher, is an ancient form of encryption where the plaintext is mixed with a large amount of non-cipher material. Today it is regarded as a simple form of steganography, which can be used to hide ciphertext. This is one of three categories of cipher used in classical cryptography along with substitution ciphers and transposition ciphers. Classical cryptography In classical cryptography, a null is an extra character intended to confuse the cryptanalyst. In the most common form of a null cipher, the plaintext is included within the ciphertext and one needs to discard certain characters in order to decrypt the message (such as first letter, last letter, third letter of every second word, etc.) Most characters in such a cryptogram are nulls, only some are significant, and some others can be used as pointers to the significant ones. Here is an example null cipher message, sent by a German during World War I: Taking the first letter of every word reveals the hidden message "Pershing sails from N.Y. June I". Following is a more complicated example from England's Civil War which aided Royalist Sir John Trevanian in his escape from a Puritan castle in Colchester:WORTHIE SIR JOHN, HOPE, THAT IS YE BESTE COMFORT OF YE AFFLICTED, CANNOT MUCH, I FEAR ME, HELP YOU NOW. THAT I WOULD SAY TO YOU, IS THIS ONLY: IF EVER I MAY BE ABLE TO REQUITE THAT I DO OWE YOU, STAND NOT UPON ASKING ME. TIS NOT MUCH THAT I CAN DO; BUT WHAT I CAN DO, BEE YE VERY SURE I WILL. I KNOW THAT, IF DETHE COMES, IF ORDINARY MEN FEAR IT, IT FRIGHTS NOT YOU, ACCOUNTING IT FOR A HIGH HONOUR, TO HAVE SUCH A REWARDE OF YOUR LOYALTY. PRAY YET YOU MAY BE SPARED THIS SOE BITTER, CUP. I FEAR NOT THAT YOU WILL GRUDGE ANY SUFFERINGS; ONLY IF BIE SUBMISSIONS YOU CAN TURN THEM AWAY, TIS THE PART OF A WISE MAN. TELL ME, AN IF YOU CAN, TO DO FOR YOU ANYTHINGE THAT YOU WOLDE HAVE DONE. THE GENERAL GOES BACK ON WEDNESDAY. RESTINGE YOUR SERVANT TO COMMAND.The third letter after e
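The first-letter scheme used in the Pershing telegram can be sketched in a few lines. The covertext below is invented for illustration (the historical telegram is not reproduced here); only the decoding rule matches the scheme described above.

```python
def decode_first_letters(covertext: str) -> str:
    """Recover a null-cipher message hidden as the first letter
    of every word of the covertext."""
    return "".join(word[0] for word in covertext.split())

# Invented covertext whose first letters spell "Hello".
hidden = decode_first_letters("Have every lorry loaded overnight")
print(hidden)  # prints "Hello"
```

More elaborate null ciphers, like the Trevanian letter, use a positional rule (e.g. the third letter after each punctuation mark) rather than word-initial letters, but decode the same way: discard the nulls, keep the significant characters.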
https://en.wikipedia.org/wiki/Uniform%20boundedness%20principle
In mathematics, the uniform boundedness principle or Banach–Steinhaus theorem is one of the fundamental results in functional analysis. Together with the Hahn–Banach theorem and the open mapping theorem, it is considered one of the cornerstones of the field. In its basic form, it asserts that for a family of continuous linear operators (and thus bounded operators) whose domain is a Banach space, pointwise boundedness is equivalent to uniform boundedness in operator norm. The theorem was first published in 1927 by Stefan Banach and Hugo Steinhaus, but it was also proven independently by Hans Hahn. Theorem The completeness of X enables the following short proof, using the Baire category theorem. There are also simple proofs not using the Baire theorem. Corollaries The above corollary does not claim that T_n converges to T in operator norm, that is, uniformly on bounded sets. However, since {T_n} is bounded in operator norm, and the limit operator T is continuous, a standard "ε/3" estimate shows that T_n converges to T uniformly on compact sets. Indeed, the elements of S define a pointwise bounded family of continuous linear forms on the Banach space X = Y′, the continuous dual space of Y. By the uniform boundedness principle, the norms of elements of S, as functionals on X, that is, norms in the second dual Y′′, are bounded. But for every s ∈ S, the norm in the second dual coincides with the norm in Y, by a consequence of the Hahn–Banach theorem. Let L(X, Y) denote the continuous operators from X to Y, endowed with the operator norm. If the collection F is unbounded in L(X, Y), then the uniform boundedness principle implies that the set R = {x ∈ X : sup_{T ∈ F} ‖Tx‖ = ∞} is dense in X. The complement of R in X is the countable union of closed sets ⋃X_n. By the argument used in proving the theorem, each X_n is nowhere dense, i.e. the subset ⋃X_n is of first category. Therefore R is the complement of a subset of first category in a Baire space. By definition of a Baire space, such sets (called comeagre or residual sets) are dense. Such reasoning leads to the principle of condensation of singularities, which can be formulated as follows: Exampl
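For reference, the basic form asserted above can be written out in the usual notation:

```latex
\textbf{Theorem (Banach--Steinhaus).}
Let $X$ be a Banach space, $Y$ a normed vector space, and
$F \subseteq B(X,Y)$ a family of continuous linear operators. If
\[
  \sup_{T \in F} \|T x\|_{Y} < \infty \qquad \text{for every } x \in X ,
\]
then
\[
  \sup_{T \in F} \|T\|_{B(X,Y)}
  \;=\; \sup_{T \in F} \, \sup_{\|x\|_{X} \le 1} \|T x\|_{Y}
  \;<\; \infty .
\]
```

That is, pointwise boundedness of the family implies uniform boundedness in operator norm; completeness of the domain $X$ is what the Baire-category proof uses.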
https://en.wikipedia.org/wiki/Graph%20isomorphism
In graph theory, an isomorphism of graphs G and H is a bijection f between the vertex sets of G and H such that any two vertices u and v of G are adjacent in G if and only if f(u) and f(v) are adjacent in H. This kind of bijection is commonly described as an "edge-preserving bijection", in accordance with the general notion of isomorphism being a structure-preserving bijection. If an isomorphism exists between two graphs, then the graphs are called isomorphic, denoted G ≃ H. In the case when the bijection is a mapping of a graph onto itself, i.e., when G and H are one and the same graph, the bijection is called an automorphism of G. If the graphs are finite and have the same number of vertices, an edge-preserving map between them can be shown to be a bijection by proving that it is either one-to-one or onto; there is no need to prove both. Graph isomorphism is an equivalence relation on graphs and as such it partitions the class of all graphs into equivalence classes. A set of graphs isomorphic to each other is called an isomorphism class of graphs. The question of whether graph isomorphism can be determined in polynomial time is a major unsolved problem in computer science, known as the Graph Isomorphism problem. The two graphs shown below are isomorphic, despite their different-looking drawings. Variations In the above definition, graphs are understood to be undirected non-labeled non-weighted graphs. However, the notion of isomorphism may be applied to all other variants of the notion of graph, by adding the requirements to preserve the corresponding additional elements of structure: arc directions, edge weights, etc., with the following exception. Isomorphism of labeled graphs For labeled graphs, two definitions of isomorphism are in use. Under one definition, an isomorphism is a vertex bijection which is both edge-preserving and label-preserving. Under another definition, an isomorphism is an edge-preserving vertex bijection which preserves equivalence classes of labels, i.e., vertices with equivalent (e.g., the same) labels are mapped onto the vertices with equivalent
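For small graphs, the definition can be checked directly by trying every vertex bijection. The following brute-force sketch (with made-up adjacency data) is for illustration only; it runs in factorial time, which is exactly why the complexity of the general problem is interesting.

```python
from itertools import permutations

def are_isomorphic(g: dict, h: dict) -> bool:
    """Brute-force isomorphism test for small undirected graphs given
    as adjacency dicts {vertex: set of neighbours}. Tries every vertex
    bijection, so it runs in O(n!) time."""
    if len(g) != len(h):
        return False
    gv = list(g)
    for perm in permutations(h):       # every ordering of H's vertices
        f = dict(zip(gv, perm))        # candidate bijection G -> H
        # Edge-preserving both ways: u~v in G iff f(u)~f(v) in H.
        if all((f[v] in h[f[u]]) == (v in g[u]) for u in gv for v in gv):
            return True
    return False

# A 4-cycle and the same cycle with different vertex names: isomorphic.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
c4_relabeled = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
# A path on 4 vertices: same vertex count, different degree sequence.
p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}

print(are_isomorphic(c4, c4_relabeled))  # prints True
print(are_isomorphic(c4, p4))            # prints False
```

Practical tools prune the search with invariants (degree sequences, neighbourhood structure) before trying bijections, but the correctness criterion is the same edge-preservation test shown in the inner loop.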
https://en.wikipedia.org/wiki/Gene%20Amdahl
Gene Myron Amdahl (November 16, 1922 – November 10, 2015) was an American computer architect and high-tech entrepreneur, chiefly known for his work on mainframe computers at IBM and later his own companies, especially Amdahl Corporation. He formulated Amdahl's law, which states a fundamental limitation of parallel computing. Childhood and education Amdahl was born to immigrant parents of Norwegian and Swedish descent in Flandreau, South Dakota. After serving in the Navy during World War II he completed a degree in engineering physics at South Dakota State University in 1948. He went on to study theoretical physics at the University of Wisconsin–Madison under Robert G. Sachs. However, in 1950, Amdahl and Charles H. "Charlie" Davidson, a fellow PhD student in the Department of Physics, approached Harold A. Peterson with the idea of a digital computer. Amdahl and Davidson gained the support of Peterson and fellow electrical engineering professor Vincent C. Rideout, who encouraged them to build a computer of their unique design. Amdahl completed his doctorate at UW–Madison in 1952 with a thesis titled A Logical Design of an Intermediate Speed Digital Computer and created his first computer, the Wisconsin Integrally Synchronized Computer (WISC). He then went straight from Wisconsin to a position at IBM in June 1952. IBM At IBM, Amdahl worked on the IBM 704, the IBM 709, and then the Stretch project, the basis for the IBM 7030. He left IBM in December 1955, but returned in September 1960 (after working at Ramo-Wooldridge and at Aeronutronic). He quit out of frustration with the bureaucratic structure of the organization. In an interview conducted in 1989 for the Charles Babbage Institute, he addressed this: On his return, he became chief architect of IBM System/360 and was named an IBM Fellow in 1965, and head of the ACS Laboratory in Menlo Park, California. Amdahl Corporation He left IBM again in September 1970, after his ideas for computer development were rejecte
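Amdahl's law, mentioned above, bounds the overall speedup when only a fraction of a workload benefits from parallelization. A minimal illustration (the function name is this sketch's own, not standard terminology):

```python
def amdahl_speedup(p: float, n: float) -> float:
    """Amdahl's law: overall speedup when a fraction p of the work
    is spread over n processors and the rest stays serial:
    S = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with unlimited processors, a 95%-parallel program is capped
# at a 20x speedup by its 5% serial portion.
print(round(amdahl_speedup(0.95, float("inf")), 1))  # prints 20.0
```

The serial fraction (1 − p) dominates as n grows, which is the "fundamental limitation of parallel computing" the law expresses.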
https://en.wikipedia.org/wiki/Timeline%20of%20operating%20systems
This article presents a timeline of events in the history of computer operating systems from 1951 to the current day. For a narrative explaining the overall developments, see the History of operating systems.

1950s

1951
LEO I 'Lyons Electronic Office' was the commercial development of the EDSAC computing platform, supported by British firm J. Lyons and Co.

1955
MIT's Tape Director operating system made for UNIVAC 1103
General Motors Operating System made for IBM 701

1956
GM-NAA I/O for IBM 704, based on General Motors Operating System

1957
Atlas Supervisor (Manchester University) (Atlas computer project start)
BESYS (Bell Labs), for IBM 704, later IBM 7090 and IBM 7094

1958
University of Michigan Executive System (UMES), for IBM 704, 709, and 7090

1959
SHARE Operating System (SOS), based on GM-NAA I/O

1960s

1960
IBSYS (IBM for its 7090 and 7094)

1961
CTSS demonstration (MIT's Compatible Time-Sharing System for the IBM 7094)
MCP (Burroughs Master Control Program)

1962
Atlas Supervisor (Manchester University) (Atlas computer commissioned)
BBN Time-Sharing System
GCOS (GE's General Comprehensive Operating System, originally GECOS, General Electric Comprehensive Operating Supervisor)

1963
AN/FSQ-32, another early time-sharing system begun
CTSS becomes operational (MIT's Compatible Time-Sharing System for the IBM 7094)
JOSS, an interactive time-shared system that did not distinguish between operating system and language
Titan Supervisor, early time-sharing system begun

1964
Berkeley Timesharing System (for Scientific Data Systems' SDS 940)
Dartmouth Time Sharing System (Dartmouth College's DTSS for GE computers)
EXEC 8 (UNIVAC)
KDF9 Timesharing Director (English Electric) – an early, fully hardware secured, fully pre-emptive process switching, multi-programming operating system for KDF9 (originally announced in 1960)
OS/360 (IBM's primary OS for its S/360 series) (announced)
PDP-6 Monitor (DEC) descendant renamed TOPS-10 in 1970
https://en.wikipedia.org/wiki/Adele%20ring
In mathematics, the adele ring of a global field (also adelic ring, ring of adeles or ring of adèles) is a central object of class field theory, a branch of algebraic number theory. It is the restricted product of all the completions of the global field and is an example of a self-dual topological ring. An adele derives from a particular kind of idele. "Idele" derives from the French "idèle" and was coined by the French mathematician Claude Chevalley. The word stands for 'ideal element' (abbreviated: id.el.). Adele (French: "adèle") stands for 'additive idele' (that is, additive ideal element). The ring of adeles allows one to describe the Artin reciprocity law, which is a generalisation of quadratic reciprocity, and other reciprocity laws over finite fields. In addition, it is a classical theorem from Weil that G-bundles on an algebraic curve over a finite field can be described in terms of adeles for a reductive group G. Adeles are also connected with the adelic algebraic groups and adelic curves. The study of geometry of numbers over the ring of adeles of a number field is called adelic geometry. Definition Let K be a global field (a finite extension of Q or the function field of a curve over a finite field). The adele ring of K is the subring of the product of the completions K_v consisting of the tuples (a_v) where a_v lies in the subring O_v of K_v for all but finitely many places v. Here the index v ranges over all valuations of the global field K, K_v is the completion at that valuation and O_v the corresponding valuation ring. Motivation The ring of adeles solves the technical problem of "doing analysis on the rational numbers Q." The classical solution was to pass to the standard metric completion R and use analytic techniques there. But, as was learned later on, there are many more absolute values other than the Euclidean distance, one for each prime number p, as was classified by Ostrowski. The Euclidean absolute value, denoted |·|_∞, is only one among many others, |·|_p, but the ring of adeles makes it possible to comprehend
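In the usual notation, the restricted product defining the adele ring can be displayed as:

```latex
\mathbb{A}_K
  \;=\; \sideset{}{'}\prod_{v} \bigl(K_v, \mathcal{O}_v\bigr)
  \;=\; \Bigl\{\, (a_v)_v \in \prod_v K_v \;:\;
        a_v \in \mathcal{O}_v \text{ for all but finitely many } v \,\Bigr\},
```

where $v$ runs over the places of $K$, $K_v$ is the completion at $v$, and $\mathcal{O}_v$ is its valuation ring (at the non-archimedean places). The restriction "all but finitely many" is what makes $\mathbb{A}_K$ a locally compact topological ring, unlike the full product.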
https://en.wikipedia.org/wiki/IBM%20i
IBM i (the i standing for integrated) is an operating system developed by IBM for IBM Power Systems. It was originally released in 1988 as OS/400, as the sole operating system of the IBM AS/400 line of systems. It was renamed to i5/OS in 2004, before being renamed a second time to IBM i in 2008. It is an evolution of the System/38 CPF operating system, with compatibility layers for System/36 SSP and AIX applications. It inherits a number of distinctive features from the System/38 platform, including the Machine Interface, the implementation of object-based addressing on top of a single-level store, and the tight integration of a relational database into the operating system. History Origin OS/400 was developed alongside the AS/400 hardware platform beginning in December 1985. Development began in the aftermath of the failure of the Fort Knox project, which left IBM without a competitive midrange system. During the Fort Knox project, a skunkworks project was started at Rochester by engineers, who succeeded in developing code which allowed System/36 applications to run on top of the System/38, and when Fort Knox was cancelled, this project evolved into an official project to replace both the System/36 and System/38 with a single new hardware and software platform. The project became known as Silverlake (named for Silver Lake in Rochester, Minnesota). The operating system for Silverlake was codenamed XPF (Extended CPF), and had originally begun as a port of CPF to the Fort Knox hardware. In addition to adding support for System/36 applications, some of the user interface and ease-of-use features from the System/36 were carried over to the new operating system. Silverlake was available for field test in June 1988, and was officially announced in August of that year. By that point, it had been renamed to the Application System/400, and the operating system had been named Operating System/400. The move to PowerPC The port to PowerPC required a rewrite of most of t
https://en.wikipedia.org/wiki/Zero%20sharp
In the mathematical discipline of set theory, 0# (zero sharp, also 0♯) is the set of true formulae about indiscernibles and order-indiscernibles in the Gödel constructible universe. It is often encoded as a subset of the integers (using Gödel numbering), or as a subset of the hereditarily finite sets, or as a real number. Its existence is unprovable in ZFC, the standard form of axiomatic set theory, but follows from a suitable large cardinal axiom. It was first introduced as a set of formulae in Silver's 1966 thesis (later published), where it was denoted by Σ, and it was rediscovered by Solovay, who considered it as a subset of the natural numbers and introduced the notation O# (with a capital letter O; this later changed to the numeral '0'). Roughly speaking, if 0# exists then the universe V of sets is much larger than the universe L of constructible sets, while if it does not exist then the universe of all sets is closely approximated by the constructible sets. Definition Zero sharp was defined by Silver and Solovay as follows. Consider the language of set theory with extra constant symbols c1, c2, ... for each positive integer. Then 0# is defined to be the set of Gödel numbers of the true sentences about the constructible universe, with ci interpreted as the uncountable cardinal ℵi. (Here ℵi means ℵi in the full universe, not the constructible universe.) If there is in V an uncountable set of Silver order-indiscernibles in the constructible universe L, then 0# is the set of Gödel numbers of formulas θ of set theory such that θ(ω1, ..., ωn) holds in L, where ω1, ... ωω are the "small" uncountable initial ordinals in V, but have all large cardinal properties consistent with V=L relative to L. There is a subtlety about this definition: by Tarski's undefinability theorem it is not, in general, possible to define the truth of a formula of set theory in the language of set theory. To solve this, Silver and Solovay assumed the existence of a suitable large cardinal, such as a Ramsey cardinal, and showed
https://en.wikipedia.org/wiki/Apple%20IIGS
The Apple IIGS (styled as II with a superscript "GS"), the fifth and most powerful of the Apple II family, is a 16-bit personal computer produced by Apple Computer. While featuring the Macintosh look and feel, and resolution and color similar to the Amiga and Atari ST, it remains compatible with earlier Apple II models. The "GS" in the name stands for "Graphics and Sound", referring to its enhanced multimedia hardware, especially its state-of-the-art audio. The microcomputer is a radical departure from any previous Apple II, with a 16-bit 65C816 microprocessor, direct access to megabytes of random-access memory (RAM), and a bundled mouse. It is the first computer from Apple with a color graphical user interface (color was introduced on the Macintosh II six months later) and Apple Desktop Bus interface for keyboards, mice, and other input devices. It is the first personal computer with a wavetable synthesis chip, utilizing technology from Ensoniq. The IIGS set forth a promising future and evolutionary advancement of the Apple II line, but Apple chose to focus on the Macintosh and no new Apple IIGS models were released. Apple ceased IIGS production in December 1992. Hardware The Apple IIGS made significant improvements over the Apple IIe and Apple IIc. It emulates its predecessors via a custom chip called the Mega II and uses the then-new WDC 65C816 16-bit microprocessor. The processor runs at 2.8 MHz, which is faster than the 8-bit processors used in the earlier Apple II models. The 65C816 allows the IIGS to address considerably more RAM. The 2.8 MHz clock was a deliberate decision to limit the IIGS's performance to less than that of the Macintosh. This decision had a critical effect on the IIGS's success; the original 65C816 processor used in the IIGS was certified to run at up to 4 MHz. Faster versions of the 65C816 processor were readily available, with speeds of between 5 and 14 MHz, but Apple kept the machine at 2.8 MHz throughout its production run. Its graphical capabilities are superior to the re
https://en.wikipedia.org/wiki/Measurable%20cardinal
In mathematics, a measurable cardinal is a certain kind of large cardinal number. In order to define the concept, one introduces a two-valued measure on a cardinal κ, or more generally on any set. For a cardinal κ, it can be described as a subdivision of all of its subsets into large and small sets such that κ itself is large, and all singletons are small, complements of small sets are large and vice versa. The intersection of fewer than κ large sets is again large. It turns out that uncountable cardinals endowed with a two-valued measure are large cardinals whose existence cannot be proved from ZFC. The concept of a measurable cardinal was introduced by Stanislaw Ulam in 1930. Definition Formally, a measurable cardinal is an uncountable cardinal number κ such that there exists a κ-additive, non-trivial, 0-1-valued measure on the power set of κ. (Here the term κ-additive means that, for any sequence Aα, α<λ of cardinality λ < κ, Aα being pairwise disjoint sets of ordinals less than κ, the measure of the union of the Aα equals the sum of the measures of the individual Aα.) Equivalently, κ is measurable means that it is the critical point of a non-trivial elementary embedding of the universe V into a transitive class M. This equivalence is due to Jerome Keisler and Dana Scott, and uses the ultrapower construction from model theory. Since V is a proper class, a technical problem that is not usually present when considering ultrapowers needs to be addressed, by what is now called Scott's trick. Equivalently, κ is a measurable cardinal if and only if it is an uncountable cardinal with a κ-complete, non-principal ultrafilter. Again, this means that the intersection of any strictly less than κ-many sets in the ultrafilter is also in the ultrafilter. Properties It is trivial to note that if κ admits a non-trivial κ-additive measure, then κ must be regular. (By non-triviality and κ-additivity, any subset of cardinality less than κ must have measure 0, and then by κ-a
https://en.wikipedia.org/wiki/Woodin%20cardinal
In set theory, a Woodin cardinal (named for W. Hugh Woodin) is a cardinal number κ such that for all functions f : κ → κ there exists a cardinal λ < κ with f''λ ⊆ λ and an elementary embedding j from the Von Neumann universe V into a transitive inner model M with critical point λ and V_{j(f)(λ)} ⊆ M. An equivalent definition is this: κ is Woodin if and only if κ is strongly inaccessible and for all A ⊆ V_κ there exists a λ < κ which is <κ-A-strong. λ being <κ-A-strong means that for all ordinals α < κ, there exists a j : V → M which is an elementary embedding with critical point λ, j(λ) > α, V_α ⊆ M, and j(A) ∩ V_α = A ∩ V_α. (See also strong cardinal.) A Woodin cardinal is preceded by a stationary set of measurable cardinals, and thus it is a Mahlo cardinal. However, the first Woodin cardinal is not even weakly compact. Consequences Woodin cardinals are important in descriptive set theory. By a result of Martin and Steel, existence of infinitely many Woodin cardinals implies projective determinacy, which in turn implies that every projective set is Lebesgue measurable, has the Baire property (differs from an open set by a meager set, that is, a set which is a countable union of nowhere dense sets), and the perfect set property (is either countable or contains a perfect subset). The consistency of the existence of Woodin cardinals can be proved using determinacy hypotheses. Working in ZF+AD+DC one can prove that Θ is Woodin in the class of hereditarily ordinal-definable sets. Θ is the first ordinal onto which the continuum cannot be mapped by an ordinal-definable surjection (see Θ (set theory)). Mitchell and Steel showed that assuming a Woodin cardinal exists, there is an inner model containing a Woodin cardinal in which there is a definable well-ordering of the reals, ◊ holds, and the generalized continuum hypothesis holds. Shelah proved that if the existence of a Woodin cardinal is consistent then it is consistent that the nonstationary ideal on ω1 is ℵ2-saturated. Woodin also proved the equiconsistency of the existence of infinitely many Woodin cardinals and the existence of a
https://en.wikipedia.org/wiki/Superstrong%20cardinal
In mathematics, a cardinal number κ is called superstrong if and only if there exists an elementary embedding j : V → M from V into a transitive inner model M with critical point κ and V_{j(κ)} ⊆ M. Similarly, a cardinal κ is n-superstrong if and only if there exists an elementary embedding j : V → M from V into a transitive inner model M with critical point κ and V_{j^n(κ)} ⊆ M. Akihiro Kanamori has shown that the consistency strength of an (n+1)-superstrong cardinal exceeds that of an n-huge cardinal for each n > 0.
https://en.wikipedia.org/wiki/Arachnology
Arachnology is the scientific study of arachnids, which comprise spiders and related invertebrates such as scorpions, pseudoscorpions, and harvestmen. Those who study spiders and other arachnids are arachnologists. More narrowly, the study of spiders alone (order Araneae) is known as araneology. The word "arachnology" derives from Greek ἀράχνη (arachnē), "spider", and -λογία (-logia), "the study of a particular subject". Arachnology as a science Arachnologists are primarily responsible for classifying arachnids and studying aspects of their biology. In the popular imagination, they are sometimes referred to as spider experts. Disciplines within arachnology include naming species and determining their evolutionary relationships to one another (taxonomy and systematics), studying how they interact with other members of their species and/or their environment (behavioural ecology), or how they are distributed in different regions and habitats (faunistics). Other arachnologists perform research on the anatomy or physiology of arachnids, including the venom of spiders and scorpions. Others study the impact of spiders in agricultural ecosystems and whether they can be used as biological control agents. Subdisciplines Arachnology can be broken down into several specialties, including:
acarology – the study of ticks and mites
araneology – the study of spiders
scorpiology – the study of scorpions
Arachnological societies Arachnologists are served by a number of scientific societies, both national and international in scope. Their main roles are to encourage the exchange of ideas between researchers, to organise meetings and congresses, and in a number of cases, to publish academic journals. Some are also involved in science outreach programs, such as the European spider of the year, which raise awareness of these animals among the general public.
International: International Society of Arachnology (ISA)
Africa: African Arachnological Society (AFRAS)
Asia: Arach
https://en.wikipedia.org/wiki/Space%20charge
Space charge is an interpretation of a collection of electric charges in which excess electric charge is treated as a continuum of charge distributed over a region of space (either a volume or an area) rather than distinct point-like charges. This model typically applies when charge carriers have been emitted from some region of a solid—the cloud of emitted carriers can form a space charge region if they are sufficiently spread out, or the charged atoms or molecules left behind in the solid can form a space charge region. Space charge effects are most pronounced in dielectric media (including vacuum); in highly conductive media, the charge tends to be rapidly neutralized or screened. The sign of the space charge can be either negative or positive. This situation is perhaps most familiar in the area near a metal object when it is heated to incandescence in a vacuum. This effect was first observed by Thomas Edison in light bulb filaments, where it is sometimes called the Edison effect. Space charge is a significant phenomenon in many vacuum and solid-state electronic devices. Cause Physical explanation When a metal object is placed in a vacuum and is heated to incandescence, the energy is sufficient to cause electrons to "boil" away from the surface atoms and surround the metal object in a cloud of free electrons. This is called thermionic emission. The resulting cloud is negatively charged, and can be attracted to any nearby positively charged object, thus producing an electric current which passes through the vacuum. Space charge can result from a range of phenomena, but the most important are: Combination of the current density and spatially inhomogeneous resistance Ionization of species within the dielectric to form heterocharge Charge injection from electrodes and from a stress enhancement Polarization in structures such as water trees. "Water tree" is a name given to a tree-like figure appearing in a water-impregnated polymer insulating cable. It has
https://en.wikipedia.org/wiki/Gaia%20hypothesis
The Gaia hypothesis (), also known as the Gaia theory, Gaia paradigm, or the Gaia principle, proposes that living organisms interact with their inorganic surroundings on Earth to form a synergistic and self-regulating, complex system that helps to maintain and perpetuate the conditions for life on the planet. The Gaia hypothesis was formulated by the chemist James Lovelock and co-developed by the microbiologist Lynn Margulis in the 1970s. Following the suggestion by his neighbour, novelist William Golding, Lovelock named the hypothesis after Gaia, the primordial deity who personified the Earth in Greek mythology. In 2006, the Geological Society of London awarded Lovelock the Wollaston Medal in part for his work on the Gaia hypothesis. Topics related to the hypothesis include how the biosphere and the evolution of organisms affect the stability of global temperature, salinity of seawater, atmospheric oxygen levels, the maintenance of a hydrosphere of liquid water and other environmental variables that affect the habitability of Earth. The Gaia hypothesis was initially criticized for being teleological and against the principles of natural selection, but later refinements aligned the Gaia hypothesis with ideas from fields such as Earth system science, biogeochemistry and systems ecology. Even so, the Gaia hypothesis continues to attract criticism, and today many scientists consider it to be only weakly supported by, or at odds with, the available evidence. Overview Gaian hypotheses suggest that organisms co-evolve with their environment: that is, they "influence their abiotic environment, and that environment in turn influences the biota by Darwinian process". Lovelock (1995) gave evidence of this in his second book, Ages of Gaia, showing the evolution from the world of the early thermo-acido-philic and methanogenic bacteria towards the oxygen-enriched atmosphere today that supports more complex life. A reduced version of the hypothesis has been called "influenti
https://en.wikipedia.org/wiki/Refractory%20period%20%28physiology%29
Refractoriness is the fundamental property of any object of autowave nature (especially excitable medium) not responding to stimuli, if the object stays in the specific refractory state. In common sense, refractory period is the characteristic recovery time, a period that is associated with the motion of the image point on the left branch of the isocline (for more details, see also Reaction–diffusion and Parabolic partial differential equation). In physiology, a refractory period is a period of time during which an organ or cell is incapable of repeating a particular action, or (more precisely) the amount of time it takes for an excitable membrane to be ready for a second stimulus once it returns to its resting state following an excitation. It most commonly refers to electrically excitable muscle cells or neurons. Absolute refractory period corresponds to depolarization and repolarization, whereas relative refractory period corresponds to hyperpolarization. Electrochemical usage After initiation of an action potential, the refractory period is defined two ways: The absolute refractory period coincides with nearly the entire duration of the action potential. In neurons, it is caused by the inactivation of the Na+ channels that originally opened to depolarize the membrane. These channels remain inactivated until the membrane hyperpolarizes. The channels then close, de-inactivate, and regain their ability to open in response to stimulus. The relative refractory period immediately follows the absolute. As voltage-gated potassium channels open to terminate the action potential by repolarizing the membrane, the potassium conductance of the membrane increases dramatically. K+ ions moving out of the cell bring the membrane potential closer to the equilibrium potential for potassium. This causes brief hyperpolarization of the membrane, that is, the membrane potential becomes transiently more negative than the normal resting potential. Until the potassium conductance
https://en.wikipedia.org/wiki/Local%20homeomorphism
In mathematics, more specifically topology, a local homeomorphism is a function between topological spaces that, intuitively, preserves local (though not necessarily global) structure. If f : X → Y is a local homeomorphism, X is said to be an étale space over Y. Local homeomorphisms are used in the study of sheaves. Typical examples of local homeomorphisms are covering maps. A topological space X is locally homeomorphic to Y if every point of X has a neighborhood that is homeomorphic to an open subset of Y. For example, a manifold of dimension n is locally homeomorphic to R^n. If there is a local homeomorphism from X to Y, then X is locally homeomorphic to Y, but the converse is not always true. For example, the two dimensional sphere, being a manifold, is locally homeomorphic to the plane R^2, but there is no local homeomorphism S^2 → R^2. Formal definition A function f : X → Y between two topological spaces is called a local homeomorphism if every point x ∈ X has an open neighborhood U whose image f(U) is open in Y and the restriction f|_U : U → f(U) is a homeomorphism (where the respective subspace topologies are used on U and on f(U)). Examples and sufficient conditions Local homeomorphisms versus homeomorphisms Every homeomorphism is a local homeomorphism. But a local homeomorphism is a homeomorphism if and only if it is bijective. A local homeomorphism need not be a homeomorphism. For example, the function f : R → S^1 defined by f(t) = (cos 2πt, sin 2πt) (so that geometrically, this map wraps the real line around the circle) is a local homeomorphism but not a homeomorphism. The map g : S^1 → S^1 defined by g(z) = z^n, which wraps the circle around itself n times (that is, has winding number n), is a local homeomorphism for all non-zero n, but it is a homeomorphism only when it is bijective (that is, only when n = 1 or n = −1). Generalizing the previous two examples, every covering map is a local homeomorphism; in particular, the universal cover of a space is a local homeomorphism. In certain situations the converse is true. For example: if f : X → Y is a proper local homeomorphism between two Hausdorff spaces and if f is also