| source | text |
|---|---|
https://en.wikipedia.org/wiki/Room%20acoustics | Room acoustics is a subfield of acoustics dealing with the behaviour of sound in enclosed or partially-enclosed spaces. The architectural details of a room influence the behaviour of sound waves within it, with the effects varying by frequency. Acoustic reflection, diffraction, and diffusion can combine to create audible phenomena such as room modes and standing waves at specific frequencies and locations, echoes, and unique reverberation patterns.
Frequency zones
The way that sound behaves in a room can be broken up into four different frequency zones:
The first zone is below the frequency that has a wavelength of twice the longest length of the room. In this zone, sound behaves very much like changes in static air pressure.
Above that zone, until wavelengths are comparable to the dimensions of the room, room resonances dominate. This transition frequency is popularly known as the Schroeder frequency, or the cross-over frequency, and it differentiates the low frequencies which create standing waves within small rooms from the mid and high frequencies.
The third zone, which extends approximately two octaves, is a transition to the fourth zone.
In the fourth zone, sounds behave like rays of light bouncing around the room.
Natural modes
For frequencies under the Schroeder frequency, certain wavelengths of sound will build up as resonances within the boundaries of the room, and the resonating frequencies can be determined using the room's dimensions. Similar to the calculation of standing waves inside a pipe with two closed ends, the modal frequencies and the sound pressure of those modes at a particular position of a rectilinear room can be defined as
$$f_{n_x n_y n_z} = \frac{c}{2}\sqrt{\left(\frac{n_x}{L_x}\right)^2 + \left(\frac{n_y}{L_y}\right)^2 + \left(\frac{n_z}{L_z}\right)^2}, \qquad p(x,y,z) = A\cos\left(\frac{n_x \pi x}{L_x}\right)\cos\left(\frac{n_y \pi y}{L_y}\right)\cos\left(\frac{n_z \pi z}{L_z}\right),$$
where $n_x, n_y, n_z$ are mode numbers corresponding to the x-, y-, and z-axes of the room, $c$ is the speed of sound in m/s, $L_x, L_y, L_z$ are the dimensions of the room in meters, $A$ is the amplitude of the sound wave, and $x, y, z$ are the coordinates of a point contained inside the room.
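A minimal numeric sketch of this formula (the room dimensions, speed of sound, and mode range below are illustrative assumptions, not values from the article):

```python
from itertools import product
from math import sqrt

def mode_frequency(nx, ny, nz, Lx, Ly, Lz, c=343.0):
    """Modal frequency (Hz) of a rigid-walled rectangular room."""
    return (c / 2.0) * sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)

# Hypothetical room dimensions in metres (not from the article).
Lx, Ly, Lz = 5.0, 4.0, 3.0

modes = []
for nx, ny, nz in product(range(4), repeat=3):
    if (nx, ny, nz) == (0, 0, 0):
        continue  # skip the trivial static-pressure case
    modes.append(((nx, ny, nz), mode_frequency(nx, ny, nz, Lx, Ly, Lz)))

# Print the eight lowest modes and whether each is axial (two zero indices).
for (nx, ny, nz), f in sorted(modes, key=lambda m: m[1])[:8]:
    kind = "axial" if [nx, ny, nz].count(0) == 2 else "tangential/oblique"
    print(f"mode ({nx},{ny},{nz}): {f:6.1f} Hz  ({kind})")
```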
Modes can occur in all three dimensions of a room. Axial modes |
https://en.wikipedia.org/wiki/Group%20cohomology | In mathematics (more specifically, in homological algebra), group cohomology is a set of mathematical tools used to study groups using cohomology theory, a technique from algebraic topology. Analogous to group representations, group cohomology looks at the group actions of a group G in an associated G-module M to elucidate the properties of the group. By treating the G-module as a kind of topological space with elements of $G^n$ representing n-simplices, topological properties of the space may be computed, such as the set of cohomology groups $H^n(G, M)$. The cohomology groups in turn provide insight into the structure of the group G and G-module M themselves. Group cohomology plays a role in the investigation of fixed points of a group action in a module or space and the quotient module or space with respect to a group action. Group cohomology is used in the fields of abstract algebra, homological algebra, algebraic topology and algebraic number theory, as well as in applications to group theory proper. As in algebraic topology, there is a dual theory called group homology. The techniques of group cohomology can also be extended to the case that instead of a G-module, G acts on a nonabelian G-group; in effect, a generalization of a module to non-Abelian coefficients.
These algebraic ideas are closely related to topological ideas. The group cohomology of a discrete group G is the singular cohomology of a suitable space having G as its fundamental group, namely the corresponding Eilenberg–MacLane space. Thus, the group cohomology of $\mathbb{Z}$ can be thought of as the singular cohomology of the circle S1, and similarly for $\mathbb{Z}/2\mathbb{Z}$ and the infinite-dimensional real projective space $P^{\infty}(\mathbb{R})$.
A great deal is known about the cohomology of groups, including interpretations of low-dimensional cohomology, functoriality, and how to change groups. The subject of group cohomology began in the 1920s, matured in the late 1940s, and continues as an area of active research today.
Motivation
A general paradigm in group theory is that a group G should be studied vi |
https://en.wikipedia.org/wiki/Asymptotic%20equipartition%20property | In information theory, the asymptotic equipartition property (AEP) is a general property of the output samples of a stochastic source. It is fundamental to the concept of typical set used in theories of data compression.
Roughly speaking, the theorem states that although there are many series of results that may be produced by a random process, the one actually produced is most probably from a loosely defined set of outcomes that all have approximately the same chance of being the one actually realized. (This is a consequence of the law of large numbers and ergodic theory.) Although there are individual outcomes which have a higher probability than any outcome in this set, the vast number of outcomes in the set almost guarantees that the outcome will come from the set. One way of intuitively understanding the property is through Cramér's large deviation theorem, which states that the probability of a large deviation from mean decays exponentially with the number of samples. Such results are studied in large deviations theory; intuitively, it is the large deviations that would violate equipartition, but these are unlikely.
In the field of pseudorandom number generation, a candidate generator of undetermined quality whose output sequence lies too far outside the typical set by some statistical criteria is rejected as insufficiently random. Thus, although the typical set is loosely defined, practical notions arise concerning sufficient typicality.
Definition
Given a discrete-time stationary ergodic stochastic process $X = \{X_1, X_2, \ldots\}$ on the probability space $(\Omega, B, p)$, the asymptotic equipartition property is an assertion that, almost surely,
$$-\frac{1}{n} \log p(X_1, X_2, \ldots, X_n) \to H(X) \quad \text{as } n \to \infty,$$
where $H(X)$ or simply $H$ denotes the entropy rate of $X$, which must exist for all discrete-time stationary processes including the ergodic ones. The asymptotic equipartition property is proved for finite-valued (i.e. $|\mathcal{X}| < \infty$) stationary ergodic stochastic processes in the Shannon–McMillan–Breiman theorem using the ergodic theory and for any i.i.d. sources directly u |
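A small simulation sketch of this statement for an assumed i.i.d. Bernoulli source (the AEP itself covers general stationary ergodic processes; the parameter p, seed, and sample size are illustrative):

```python
import math
import random

def entropy(p):
    """Entropy of a Bernoulli(p) source, in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

random.seed(0)
p = 0.3          # assumed source parameter
n = 100_000      # sample length
x = [1 if random.random() < p else 0 for _ in range(n)]

# For an i.i.d. source, -1/n log2 P(X1,...,Xn) is the average of -log2 p(xi).
empirical = -sum(math.log2(p if xi == 1 else 1 - p) for xi in x) / n
print(f"empirical -1/n log2 P = {empirical:.4f} bits, H(X) = {entropy(p):.4f} bits")
```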
https://en.wikipedia.org/wiki/Typical%20set | In information theory, the typical set is a set of sequences whose probability is close to two raised to the negative power of the entropy of their source distribution. That this set has total probability close to one is a consequence of the asymptotic equipartition property (AEP) which is a kind of law of large numbers. The notion of typicality is only concerned with the probability of a sequence and not the actual sequence itself.
This has great use in compression theory as it provides a theoretical means for compressing data, allowing us to represent any sequence Xn using nH(X) bits on average, and, hence, justifying the use of entropy as a measure of information from a source.
The AEP can also be proven for a large class of stationary ergodic processes, allowing typical set to be defined in more general cases.
(Weakly) typical sequences (weak typicality, entropy typicality)
If a sequence x1, ..., xn is drawn from an i.i.d. distribution X defined over a finite alphabet $\mathcal{X}$, then the typical set, $A_\epsilon^{(n)}$, is defined as those sequences which satisfy:
$$2^{-n(H(X)+\epsilon)} \le p(x_1, x_2, \ldots, x_n) \le 2^{-n(H(X)-\epsilon)},$$
where
$$H(X) = -\sum_{x \in \mathcal{X}} p(x) \log_2 p(x)$$
is the information entropy of X. The probability above need only be within a factor of $2^{n\epsilon}$. Taking the logarithm on all sides and dividing by $-n$, this definition can be equivalently stated as
$$H(X) - \epsilon \le -\frac{1}{n} \log_2 p(x_1, x_2, \ldots, x_n) \le H(X) + \epsilon.$$
For an i.i.d. sequence, since
$$p(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} p(x_i),$$
we further have
$$-\frac{1}{n} \log_2 p(x_1, x_2, \ldots, x_n) = -\frac{1}{n} \sum_{i=1}^{n} \log_2 p(x_i).$$
By the law of large numbers, for sufficiently large n this average is close to its expected value $H(X)$.
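To make the definition concrete, here is a brute-force sketch for an assumed small Bernoulli source: it enumerates every length-n binary sequence, applies the weak-typicality condition above, and reports the total probability and relative size of the typical set (all parameters are illustrative; for larger n the total probability approaches one while the fraction shrinks).

```python
import math
from itertools import product

p, n, eps = 0.2, 12, 0.1   # assumed source parameter, block length, tolerance
H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)

typical_prob = 0.0
typical_count = 0
for seq in product((0, 1), repeat=n):
    prob = math.prod(p if s == 1 else 1 - p for s in seq)
    # Weak typicality: | -1/n log2 p(seq) - H | <= eps
    if abs(-math.log2(prob) / n - H) <= eps:
        typical_prob += prob
        typical_count += 1

print(f"P(typical set) = {typical_prob:.3f}")
print(f"fraction of all 2^{n} sequences that are typical = {typical_count / 2**n:.4f}")
```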
Properties
An essential characteristic of the typical set is that, if one draws a large number n of independent random samples from the distribution X, the resulting sequence (x1, x2, ..., xn) is very likely to be a member of the typical set, even though the typical set comprises only a small fraction of all the possible sequences. Formally, given any $\epsilon > 0$, one can choose n such that:
The probability of a sequence from $X^n$ being drawn from $A_\epsilon^{(n)}$ is greater than $1 - \epsilon$, i.e. $\Pr\left[(x_1, x_2, \ldots, x_n) \in A_\epsilon^{(n)}\right] \ge 1 - \epsilon$
If the distribution over $\mathcal{X}$ is not uniform, then the fraction of sequences that are typical is
$$\frac{|A_\epsilon^{(n)}|}{|\mathcal{X}|^n} \approx \frac{2^{nH(X)}}{2^{n \log_2 |\mathcal{X}|}} = 2^{-n(\log_2 |\mathcal{X}| - H(X))} \to 0$$
as n becomes very large, since $H(X) < \log_2 |\mathcal{X}|$, where $|\mathcal{X}|$ is the cardi |
https://en.wikipedia.org/wiki/Online%20encyclopedia | Online encyclopedias, also called Internet encyclopedias, are digital encyclopedias accessible through the Internet. Examples include Wikipedia, the Encyclopædia Britannica since 2016 and Encyclopedia.com.
Digitization of old content
In January 1995, Project Gutenberg started to publish the ASCII text of the Encyclopædia Britannica, 11th edition (1911), but disagreement about the method halted the work after the first volume. For trademark reasons this has been published as the Gutenberg Encyclopedia. Project Gutenberg later restarted work on digitising and proofreading this encyclopedia. Project Gutenberg has published volumes in alphabetic order; the most recent publication is Volume 17 Slice 8: Matter–Mecklenburg published on 7 April 2013. The latest Britannica was digitized by its publishers, and sold first as a CD-ROM, and later as an online service.
In 2001, ASCII text of all 28 volumes was published on Encyclopædia Britannica Eleventh Edition by source; a copyright claim was added to the materials included. The website no longer exists.
Other digitization projects have made progress in other titles. One example is Easton's Bible Dictionary (1897) digitized by the Christian Classics Ethereal Library.
A successful digitization of an encyclopedia was the Bartleby Project's online adaptation of the Columbia Encyclopedia, Sixth Edition, released in early 2000; it is updated periodically.
Other websites provide online encyclopedias, some of which are also available on Wikisource, but which may be more complete than those on Wikisource, or may be different editions (see List of online encyclopedias).
Creation of new content
Another related branch of activity is the creation of new, free content on a volunteer basis. In 1991, the participants of the Usenet newsgroup started a project to produce a real version of The Hitchhiker's Guide to the Galaxy, a fictional encyclopedia used in the works of Douglas Adams. It became known as Project Galactic Guide. Although it |
https://en.wikipedia.org/wiki/B%20News | B News was a Usenet news server developed at the University of California, Berkeley by Matt Glickman and Mary Ann Horton as a replacement for A News. It was used on Unix systems from 1981 into the 1990s and is the reference implementation for the de facto Usenet standard described in and . Releases from 2.10.2 were maintained by UUNET founder Rick Adams.
B News introduced numerous changes from its predecessor. Articles used an extensible format with named headers, first by using labeled equivalents to the A News format. A further refinement in 1983 with News B2.10 was a move to e-mail-compatible headers, to ease message transfers with the ARPAnet. A history database was introduced, allowing articles to be placed in separate directories by newsgroup, improving retrieval speeds and easing the development of separate newsreader programs such as rn. Support was provided for expiring old articles, and control messages (special articles that can automatically cause articles to be erased, or newsgroups to be added or removed) were added.
News B2.10 introduced the hierarchical article storage format carried into C News and InterNetNews, and still commonly seen in many newsreaders and cache programs. Before B2.10, all groups were stored beneath a single parent directory, impairing performance when the group list became large, and requiring that the first 14 characters be unique among all groups due to an old Unix limitation. The hierarchical layout split the groups at the periods, reducing directory sizes and ameliorating the uniqueness problem.
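The layout amounts to mapping each dot-separated newsgroup component to a directory level. A hypothetical illustration of that mapping (the spool path, group name, and helper function are assumptions for illustration, not taken from B News source code):

```python
from pathlib import PurePosixPath

def article_path(spool_dir, newsgroup, article_number):
    """Hierarchical layout: one directory level per dot-separated component."""
    return PurePosixPath(spool_dir, *newsgroup.split(".")) / str(article_number)

# Hypothetical example: article 123 in net.unix-wizards under /usr/spool/news.
print(article_path("/usr/spool/news", "net.unix-wizards", 123))
# -> /usr/spool/news/net/unix-wizards/123
```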
B2.10 contained limited support for moderated newsgroups, with posters needing to manually mail in submissions to an intermediate party who would post articles on their behalf. Moderated groups needed to be prefixed with "mod." In 1986, version B2.11 allowed moderated newsgroups to appear in any hierarchy, and it transparently mailed out moderated group submissions using the normal posting software.
The last B News patch set |
https://en.wikipedia.org/wiki/Algebraic%20variety | Algebraic varieties are the central objects of study in algebraic geometry, a sub-field of mathematics. Classically, an algebraic variety is defined as the set of solutions of a system of polynomial equations over the real or complex numbers. Modern definitions generalize this concept in several different ways, while attempting to preserve the geometric intuition behind the original definition.
Conventions regarding the definition of an algebraic variety differ slightly. For example, some definitions require an algebraic variety to be irreducible, which means that it is not the union of two smaller sets that are closed in the Zariski topology. Under this definition, non-irreducible algebraic varieties are called algebraic sets. Other conventions do not require irreducibility.
The fundamental theorem of algebra establishes a link between algebra and geometry by showing that a monic polynomial (an algebraic object) in one variable with complex number coefficients is determined by the set of its roots (a geometric object) in the complex plane. Generalizing this result, Hilbert's Nullstellensatz provides a fundamental correspondence between ideals of polynomial rings and algebraic sets. Using the Nullstellensatz and related results, mathematicians have established a strong correspondence between questions on algebraic sets and questions of ring theory. This correspondence is a defining feature of algebraic geometry.
Many algebraic varieties are manifolds, but an algebraic variety may have singular points while a manifold cannot. Algebraic varieties can be characterized by their dimension. Algebraic varieties of dimension one are called algebraic curves and algebraic varieties of dimension two are called algebraic surfaces.
In the context of modern scheme theory, an algebraic variety over a field is an integral (irreducible and reduced) scheme over that field whose structure morphism is separated and of finite type.
Overview and definitions
An affine variety over a |
https://en.wikipedia.org/wiki/Theodolite | A theodolite () is a precision optical instrument for measuring angles between designated visible points in the horizontal and vertical planes. The traditional use has been for land surveying, but it is also used extensively for building and infrastructure construction, and some specialized applications such as meteorology and rocket launching.
It consists of a moveable telescope mounted so it can rotate around horizontal and vertical axes and provide angular readouts. These indicate the orientation of the telescope, and are used to relate the first point sighted through the telescope to subsequent sightings of other points from the same theodolite position. These angles can be measured with accuracies down to microradians or seconds of arc. From these readings a plan can be drawn, or objects can be positioned in accordance with an existing plan. The modern theodolite has evolved into what is known as a total station where angles and distances are measured electronically, and are read directly to computer memory.
In a transit theodolite, the telescope is short enough to rotate about the trunnion axis, turning the telescope through the vertical plane through the zenith; for non-transit instruments vertical rotation is restricted to a limited arc.
The optical level is sometimes mistaken for a theodolite, but it does not measure vertical angles, and is used only for leveling on a horizontal plane (though often combined with medium accuracy horizontal range and direction measurements).
Principles of operation
Preparation for making sightings
Temporary adjustments are a set of operations necessary in order to make a theodolite ready for taking observations at a station. These include its setting up, centering, leveling up and elimination of parallax, and are achieved in four steps:
Setting up: fixing the theodolite onto a tripod along with approximate leveling and centering over the station mark.
Centering: bringing the vertical axis of theodolite immediately o |
https://en.wikipedia.org/wiki/Software%20development | Software development is the process used to conceive, specify, design, program, document, test, and bug fix in order to create and maintain applications, frameworks, or other software components. Software development involves writing and maintaining the source code, but in a broader sense, it includes all processes from the conception of the desired software through the final manifestation, typically in a planned and structured process often overlapping with software engineering. Software development also includes research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products.
Methodologies
One system development methodology is not necessarily suitable for use by all projects.
Each of the available methodologies are best suited to specific kinds of projects, based on various technical, organizational, project, and team considerations.
Activities
Identification of need
The sources of ideas for software products are plentiful. These ideas can come from market research including the demographics of potential new customers, existing customers, sales prospects who rejected the product, other internal software development staff, or a creative third party. Ideas for software products are usually first evaluated by marketing personnel for economic feasibility, fit with existing channels of distribution, possible effects on existing product lines, required features, and fit with the company's marketing objectives. In the marketing evaluation phase, the cost and time assumptions are evaluated. A decision is reached early in the first phase as to whether, based on the more detailed information generated by the marketing and development staff, the project should be pursued further.
In the book "Great Software Debates", Alan M. Davis states in the chapter "Requirements", sub-chapter "The Missing Piece of Software Development"
Students of engineering learn engineering and are rarely expose |
https://en.wikipedia.org/wiki/Product%20rule | In calculus, the product rule (or Leibniz rule or Leibniz product rule) is a formula used to find the derivatives of products of two or more functions. For two functions, it may be stated in Lagrange's notation as $(u \cdot v)' = u' \cdot v + u \cdot v'$ or in Leibniz's notation as $\frac{d}{dx}(u \cdot v) = \frac{du}{dx} \cdot v + u \cdot \frac{dv}{dx}.$
The rule may be extended or generalized to products of three or more functions, to a rule for higher-order derivatives of a product, and to other contexts.
Discovery
Discovery of this rule is credited to Gottfried Leibniz, who demonstrated it using differentials. (However, J. M. Child, a translator of Leibniz's papers, argues that it is due to Isaac Barrow.) Here is Leibniz's argument: Let u(x) and v(x) be two differentiable functions of x. Then the differential of uv is
$$d(u \cdot v) = (u + du) \cdot (v + dv) - u \cdot v = u \cdot dv + v \cdot du + du \cdot dv.$$
Since the term du·dv is "negligible" (compared to du and dv), Leibniz concluded that
$$d(u \cdot v) = v \cdot du + u \cdot dv,$$
and this is indeed the differential form of the product rule. If we divide through by the differential dx, we obtain
$$\frac{d}{dx}(u \cdot v) = v \cdot \frac{du}{dx} + u \cdot \frac{dv}{dx},$$
which can also be written in Lagrange's notation as
$$(u \cdot v)' = v \cdot u' + u \cdot v'.$$
Examples
Suppose we want to differentiate $f(x) = x^2 \sin(x)$. By using the product rule, one gets the derivative $f'(x) = 2x \sin(x) + x^2 \cos(x)$ (since the derivative of $x^2$ is $2x$ and the derivative of the sine function is the cosine function).
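A quick symbolic check of this example, as a sketch using SymPy (assuming SymPy is available; the function is the one differentiated above):

```python
import sympy as sp

x = sp.symbols("x")
f = x**2 * sp.sin(x)

# Differentiate directly and via the product rule, then compare.
direct = sp.diff(f, x)
by_rule = sp.diff(x**2, x) * sp.sin(x) + x**2 * sp.diff(sp.sin(x), x)

print(direct)                                # 2*x*sin(x) + x**2*cos(x)
print(sp.simplify(direct - by_rule) == 0)    # True
```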
One special case of the product rule is the constant multiple rule, which states: if $c$ is a number, and $f(x)$ is a differentiable function, then $c f(x)$ is also differentiable, and its derivative is $c f'(x)$. This follows from the product rule since the derivative of any constant is zero. This, combined with the sum rule for derivatives, shows that differentiation is linear.
The rule for integration by parts is derived from the product rule, as is (a weak version of) the quotient rule. (It is a "weak" version in that it does not prove that the quotient is differentiable but only says what its derivative is if it is differentiable.)
Proofs
Limit definition of derivative
Let $h(x) = f(x)g(x)$, and suppose that $f$ and $g$ are each differentiable at $x$. We want to prove that $h$ is differentiable at $x$ and that its derivative, $h'(x)$, is given by $f'(x)g(x) + f(x)g'(x)$. To do this, (which is z |
https://en.wikipedia.org/wiki/Multiplication%20sign | The multiplication sign, also known as the times sign or the dimension sign, is the symbol ×, used in mathematics to denote the multiplication operation and its resulting product. While similar to a lowercase X (), the form is properly a four-fold rotationally symmetric saltire.
History
The earliest known use of the symbol to represent multiplication appears in an anonymous appendix to the 1618 edition of John Napier's . This appendix has been attributed to William Oughtred, who used the same symbol in his 1631 algebra text, , stating:"Multiplication of species [i.e. unknowns] connects both proposed magnitudes with the symbol 'in' or : or ordinarily without the symbol if the magnitudes be denoted with one letter." Two earlier uses of a notation have been identified, but do not stand critical examination.
Uses
In mathematics, the symbol × has a number of uses, including
Multiplication of two numbers, where it is read as "times" or "multiplied by"
Cross product of two vectors, where it is usually read as "cross"
Cartesian product of two sets, where it is usually read as "cross"
Geometric dimension of an object, such as noting that a room is 10 feet × 12 feet in area, where it is usually read as "by" (e.g., "10 feet by 12 feet")
Screen resolution in pixels, such as 1920 pixels across × 1080 pixels down. Read as "by".
Dimensions of a matrix, where it is usually read as "by"
A statistical interaction between two explanatory variables, where it is usually read as "by"
In biology, the multiplication sign is used in a botanical hybrid name, for instance Ceanothus papillosus × impressus (a hybrid between C. papillosus and C. impressus) or Crocosmia × crocosmiiflora (a hybrid between two other species of Crocosmia). However, the communication of these hybrid names with a Latin letter "x" is common, when the actual "×" symbol is not readily available.
The multiplication sign is also used by historians for an event between two dates. When employed between two dates |
https://en.wikipedia.org/wiki/Na%C3%AFve%20empiricism | Naïve empiricism is a term used in several ways in different fields.
In the philosophy of science, it is used by opponents to describe the position, associated with some logical positivists, that "knowledge can be clearly learnt through evaluation of the natural world and its substances, and, through empirical means, learn truths".
The term also is used to describe a particular methodology for literary analysis.
See also:
Empiricism
Falsifiability (especially, "Naïve falsification")
References
Empiricism
Epistemological theories
Metatheory of science
Epistemology of science
Logical positivism |
https://en.wikipedia.org/wiki/Divergence%20of%20the%20sum%20of%20the%20reciprocals%20of%20the%20primes | The sum of the reciprocals of all prime numbers diverges; that is:
$$\sum_{p\text{ prime}} \frac{1}{p} = \frac{1}{2} + \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \frac{1}{11} + \cdots = \infty$$
This was proved by Leonhard Euler in 1737, and strengthens Euclid's 3rd-century-BC result that there are infinitely many prime numbers and Nicole Oresme's 14th-century proof of the divergence of the sum of the reciprocals of the integers (harmonic series).
There are a variety of proofs of Euler's result, including a lower bound for the partial sums stating that
$$\sum_{\substack{p\text{ prime} \\ p \le n}} \frac{1}{p} \ge \ln \ln(n+1) - \ln\frac{\pi^2}{6}$$
for all natural numbers $n$. The double natural logarithm ($\ln \ln$) indicates that the divergence might be very slow, which is indeed the case. See Meissel–Mertens constant.
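A short numerical sketch of how slowly the partial sums grow, comparing them with the double logarithm (the cut-offs chosen below are illustrative; the inequality above is the rigorous statement):

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

for n in (10, 10**3, 10**5, 10**6):
    s = sum(1 / p for p in primes_up_to(n))
    print(f"n = {n:>8}:  sum 1/p = {s:.4f},  ln ln n = {math.log(math.log(n)):.4f}")
```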
The harmonic series
First, we describe how Euler originally discovered the result. He was considering the harmonic series
$$\sum_{n=1}^{\infty} \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots$$
He had already used the following "product formula" to show the existence of infinitely many primes:
$$\sum_{n=1}^{\infty} \frac{1}{n} = \prod_{p} \left(1 + \frac{1}{p} + \frac{1}{p^2} + \cdots\right) = \prod_{p} \frac{1}{1 - p^{-1}}.$$
Here the product is taken over the set of all primes.
Such infinite products are today called Euler products. The product above is a reflection of the fundamental theorem of arithmetic. Euler noted that if there were only a finite number of primes, then the product on the right would clearly converge, contradicting the divergence of the harmonic series.
Proofs
Euler's proof
Euler considered the above product formula and proceeded to make a sequence of audacious leaps of logic. First, he took the natural logarithm of each side, then he used the Taylor series expansion for $\ln\frac{1}{1-x}$ as well as the sum of a converging series:
$$\ln\left(\prod_{p} \frac{1}{1 - p^{-1}}\right) = \sum_{p} \ln\left(\frac{1}{1 - p^{-1}}\right) = \sum_{p} \left(\frac{1}{p} + \frac{1}{2p^2} + \frac{1}{3p^3} + \cdots\right) = \left(\sum_{p} \frac{1}{p}\right) + K$$
for a fixed constant $K < 1$. Then he invoked the relation
$$\sum_{n=1}^{\infty} \frac{1}{n} = \ln \infty,$$
which he explained, for instance in a later 1748 work, by setting $x = 1$ in the Taylor series expansion
$$\ln\left(\frac{1}{1-x}\right) = \sum_{n=1}^{\infty} \frac{x^n}{n}.$$
This allowed him to conclude that
$$\frac{1}{2} + \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \frac{1}{11} + \cdots = \ln \ln \infty.$$
It is almost certain that Euler meant that the sum of the reciprocals of the primes less than $n$ is asymptotic to $\ln \ln n$ as $n$ approaches infinity. It turns out this is indeed the case, and a more precise version of this fact was rigorously proved by Franz Mertens in 1874. Thus Euler obtained a correct result by questionable means.
Erdős's proof by upper and lower esti |
https://en.wikipedia.org/wiki/Glossary%20of%20group%20theory | A group is a set together with an associative operation which admits an identity element and such that every element has an inverse.
Throughout the article, we use to denote the identity element of a group.
A
C
D
F
G
H
I
L
N
O
P
Q
R
S
T
Basic definitions
Subgroup. A subset $H$ of a group $(G, *)$ which remains a group when the operation $*$ is restricted to $H$ is called a subgroup of $G$.
Given a subset $S$ of $G$, we denote by $\langle S \rangle$ the smallest subgroup of $G$ containing $S$; $\langle S \rangle$ is called the subgroup of $G$ generated by $S$.
Normal subgroup. $H$ is a normal subgroup of $G$ if for all $g$ in $G$ and $h$ in $H$, $g h g^{-1}$ also belongs to $H$.
Both subgroups and normal subgroups of a given group form a complete lattice under inclusion of subsets; this property and some related results are described by the lattice theorem.
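As a concrete illustration of the subgroup and normal-subgroup definitions above, the following sketch checks them by brute force for the symmetric group S3, with permutations represented as tuples (the representation and helper functions are illustrative assumptions, not part of the glossary):

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations given as tuples."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = set(permutations(range(3)))   # the symmetric group S3

def is_subgroup(H):
    # Nonempty subset closed under a * b^{-1} is a subgroup.
    return len(H) > 0 and H <= G and all(compose(a, inverse(b)) in H for a in H for b in H)

def is_normal(H):
    return is_subgroup(H) and all(
        compose(compose(g, h), inverse(g)) in H for g in G for h in H
    )

# A3 = even permutations; H2 = subgroup generated by one transposition.
A3 = {p for p in G if sum(1 for i in range(3) for j in range(i) if p[j] > p[i]) % 2 == 0}
H2 = {(0, 1, 2), (1, 0, 2)}

print(is_subgroup(A3), is_normal(A3))   # True True
print(is_subgroup(H2), is_normal(H2))   # True False
```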
Group homomorphism. These are functions $f \colon (G, *) \to (H, \cdot)$ that have the special property that
$$f(a * b) = f(a) \cdot f(b)$$
for any elements $a$ and $b$ of $G$.
Kernel of a group homomorphism. It is the preimage of the identity in the codomain of a group homomorphism. Every normal subgroup is the kernel of a group homomorphism and vice versa.
Group isomorphism. Group homomorphisms that have inverse functions. The inverse of an isomorphism, it turns out, must also be a homomorphism.
Isomorphic groups. Two groups are isomorphic if there exists a group isomorphism mapping from one to the other. Isomorphic groups can be thought of as essentially the same, only with different labels on the individual elements.
One of the fundamental problems of group theory is the classification of groups up to isomorphism.
Direct product, direct sum, and semidirect product of groups. These are ways of combining groups to construct new groups; please refer to the corresponding links for explanation.
Types of groups
Finitely generated group. If there exists a finite set $S$ such that $\langle S \rangle = G$, then $G$ is said to be finitely generated. If $S$ can be taken to have just one element, $G$ is a cyclic group of finite order, an infinite cyclic group, or possibly a group with just one element. |
https://en.wikipedia.org/wiki/Mirepoix | A mirepoix is a mixture of diced vegetables cooked with fat (usually butter) for a long time on low heat without coloring or browning. The ingredients are not sautéed or otherwise hard-cooked, because the intention is to sweeten rather than caramelize them. Mirepoix is a long-standing part of French cuisine and is the flavor base for a wide variety of dishes, including stocks, soups, stews, and sauces.
When the mirepoix is not precooked, the constituent vegetables may be cut to a larger size, depending on the overall cooking time for the dish. Usually the vegetable mixture is onions, carrots, and celery (either common 'Pascal' celery or celeriac), with the traditional ratio being 2:1:1—two parts onion, one part carrot, and one part celery. Further cooking, with the addition of tomato purée, creates a darkened brown mixture called .
Similar flavor bases include the Italian , the Spanish and Portuguese / (braised onions, garlic and tomato), a variation with tomato paste instead of fresh tomato of the Eastern Mediterranean and Balkans region, the German (leeks, carrots and celeriac), the Polish (leeks, carrots, celeriac and parsley root), the Russian/Ukrainian or (onion, carrot and possibly celery, beets or pepper), the United States Cajun/Creole holy trinity (onions, celery and bell peppers), and possibly the French duxelles (mushrooms and often onion or shallot and herbs, reduced to a paste).
History
Though the cooking technique is probably older, the word mirepoix dates from the 18th century and derives, as do many other appellations in French cuisine, from the aristocratic employer of the cook credited with establishing and stabilizing it: in this case, Charles-Pierre-Gaston François de Lévis, duc de Lévis-Mirepoix (1699–1757), French field marshal and ambassador and a member of the noble family of Lévis, lords of Mirepoix in Languedoc (nowadays in the department of Ariège) since the 11th century. According to Pierre Larousse (quoted in The Oxford Com |
https://en.wikipedia.org/wiki/Open%20spectrum | Open spectrum (also known as free spectrum) is a movement to get the Federal Communications Commission to provide more unlicensed radio-frequency spectrum that is available for use by all. Proponents of the "commons model" of open spectrum advocate a future where all the spectrum is shared, and in which people use Internet protocols to communicate with each other, and smart devices, which would find the most effective energy level, frequency, and mechanism. Previous government-imposed limits on who can have stations and who cannot would be removed, and everyone would be given equal opportunity to use the airwaves for their own radio station, television station, or even broadcast their own website. A notable advocate for Open Spectrum is Lawrence Lessig.
National governments currently allocate bands of spectrum (sometimes based on guidelines from the ITU) for use by anyone so long as they respect certain technical limits, most notably, a limit on total transmission power. Unlicensed spectrum is decentralized: there are no license payments or central control for users. However, sharing spectrum between unlicensed equipment requires that mitigation techniques (e.g.: power limitation, duty cycle, dynamic frequency selection) are imposed to ensure that these devices operate without interference.
Traditional users of unlicensed spectrum include cordless telephones, and baby monitors. A collection of new technologies are taking advantage of unlicensed spectrum including Wi-Fi, Ultra Wideband, spread spectrum, software-defined radio, cognitive radio, and mesh networks.
Radio astronomy needs
Astronomers use many radio telescopes to look up at objects such as pulsars in our own Galaxy and at distant radio galaxies up to about half the distance of the observable sphere of our Universe. The use of radio frequencies for communication creates pollution from the point of view of astronomers, at best, creating noise or, at worst, totally blinding the astronomical community for c |
https://en.wikipedia.org/wiki/Computer%20terminal | A computer terminal is an electronic or electromechanical hardware device that can be used for entering data into, and transcribing data from, a computer or a computing system. The teletype was an example of an early-day hard-copy terminal and predated the use of a computer screen by decades.
Early terminals were inexpensive devices but very slow compared to punched cards or paper tape for input, yet as the technology improved and video displays were introduced, terminals pushed these older forms of interaction from the industry. A related development was time-sharing systems, which evolved in parallel and made up for any inefficiencies in the user's typing ability with the ability to support multiple users on the same machine, each at their own terminal or terminals.
The function of a terminal is typically confined to transcription and input of data; a device with significant local, programmable data-processing capability may be called a "smart terminal" or fat client. A terminal that depends on the host computer for its processing power is called a "dumb terminal" or a thin client. In the era of serial (RS-232) terminals there was a conflicting usage of the term "smart terminal" as a dumb terminal with no user-accessible local computing power but a particularly rich set of control codes for manipulating the display; this conflict was not resolved before hardware serial terminals became obsolete.
A personal computer can run terminal emulator software that replicates functions of a real-world terminal, sometimes allowing concurrent use of local programs and access to a distant terminal host system, either over a direct serial connection or over a network using, e.g., SSH.
History
The console of Konrad Zuse's Z3 had a keyboard in 1941, as did the Z4 in 1942–1945. But these consoles could only be used to enter numeric inputs and were thus analogous to those of calculating machines; programs, commands, and other data were entered via paper tape. Both machines had a |
https://en.wikipedia.org/wiki/Cambridge%20Ring%20%28computer%20network%29 | The Cambridge Ring was an experimental local area network architecture developed at the Computer Laboratory, University of Cambridge starting in 1974 and continuing into the 1980s. It was a ring network with a theoretical limit of 255 nodes (though such a large number would have badly affected performance), around which cycled a fixed number of packets. Free packets would be "loaded" with data by a sending machine, marked as received by the destination machine, and "unloaded" on return to the sender; thus in principle, there could be as many simultaneous senders as packets. The network ran over twin twisted-pair cabling (plus a fibre-optic section).
There are strong similarities between the Cambridge Ring and an earlier ring network developed at Bell Labs based on a design by John R. Pierce. That network used T1 lines at a bit rate of 1.544 Mbit/s and accommodated 522-bit messages (data plus address).
People associated with the project include Andy Hopper, David Wheeler, Maurice Wilkes, and Roger Needham.
In 2002, the Computer Laboratory launched a graduate society called the Cambridge Computer Lab Ring named after the Cambridge Ring.
See also
Cambridge Distributed Computing System
Internet in the United Kingdom § History
JANET
NPL network
Packet switching
Token Ring
University of London Computer Centre
References
External links
Cambridge Ring Hardware
Cambridge Fast Ring
Cambridge Backbone Ring Hardware
Cambridge Computer Lab Ring
1974 introductions
Experimental computer networks
History of computing in the United Kingdom
Local area networks
Network topology
University of Cambridge Computer Laboratory |
https://en.wikipedia.org/wiki/System%20of%20equations | In mathematics, a set of simultaneous equations, also known as a system of equations or an equation system, is a finite set of equations for which common solutions are sought. An equation system is usually classified in the same manner as single equations, namely as a:
System of linear equations,
System of nonlinear equations,
System of bilinear equations,
System of polynomial equations,
System of differential equations, or a
System of difference equations
See also
Simultaneous equations model, a statistical model in the form of simultaneous linear equations
Elementary algebra, for elementary methods
Equations
Broad-concept articles
de:Gleichung#Gleichungssysteme |
https://en.wikipedia.org/wiki/Royal%20Radar%20Establishment | The Royal Radar Establishment was a research centre in Malvern, Worcestershire in the United Kingdom. It was formed in 1953 as the Radar Research Establishment by the merger of the Air Ministry's Telecommunications Research Establishment (TRE) and the British Army's Radar Research and Development Establishment (RRDE). It was given its new name after a visit by Queen Elizabeth II in 1957. Both names were abbreviated to RRE. In 1976 the Signals Research and Development Establishment (SRDE), involved in communications research, joined the RRE to form the Royal Signals and Radar Establishment (RSRE).
The two groups had been closely associated since before the opening of World War II, when the predecessor to RRDE was formed as a small group within the Air Ministry's research centre in Bawdsey Manor in Suffolk. Forced to leave Bawdsey due to its exposed location on the east coast of England, both groups moved several times before finally settling in separate locations in Malvern beginning in May 1942. The merger in 1953 that formed the RRE renamed these as the North Site (RRDE) and South Site (TRE).
The earlier research and development work of TRE and RRDE on radar was expanded into solid state physics, electronics, and computer hardware and software. The RRE's overall scope was extended to include cryogenics and other topics. Infrared detection for guided missiles and heat sensing devices was a major defence application. The SRDE brought satellite communications and fibre optics knowledge.
In 1991 they were partially privatized as part of the Defence Research Agency, which became the Defence Evaluation and Research Agency in 1996. The North Site was closed in 2003 and the work was consolidated at the South Site, while the former North Site was sold off for housing developments. Qinetiq now occupies a part of the former RSRE site.
Administrative history
The earliest concerted effort to develop radar in the UK dates to 1935, and Robert Watt replied to an Air Ministry q |
https://en.wikipedia.org/wiki/Three-valued%20logic | In logic, a three-valued logic (also trinary logic, trivalent, ternary, or trilean, sometimes abbreviated 3VL) is any of several many-valued logic systems in which there are three truth values indicating true, false and some third value. This is contrasted with the more commonly known bivalent logics (such as classical sentential or Boolean logic) which provide only for true and false.
Emil Leon Post is credited with first introducing additional logical truth degrees in his 1921 theory of elementary propositions. The conceptual form and basic ideas of three-valued logic were initially published by Jan Łukasiewicz and Clarence Irving Lewis. These were then re-formulated by Grigore Constantin Moisil in an axiomatic algebraic form, and also extended to n-valued logics in 1945.
Pre-discovery
Around 1910, Charles Sanders Peirce defined a many-valued logic system. He never published it. In fact, he did not even number the three pages of notes where he defined his three-valued operators. Peirce soundly rejected the idea that all propositions must be either true or false; boundary-propositions, he writes, are "at the limit between P and not P." However, as confident as he was that "Triadic Logic is universally true," he also jotted down that "All this is mighty close to nonsense." Only in 1966, when Max Fisch and Atwell Turquette began publishing what they rediscovered in his unpublished manuscripts, did Peirce's triadic ideas become widely known.
Representation of values
As with bivalent logic, truth values in ternary logic may be represented numerically using various representations of the ternary numeral system. A few of the more common examples are:
in balanced ternary, each digit has one of 3 values: −1, 0, or +1; these values may also be simplified to −, 0, +, respectively;
in the redundant binary representation, each digit can have a value of −1, 0, 0/1 (the value 0/1 has two different representations);
in the ternary numeral system, each digit is a trit (trinary |
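A small sketch tying the balanced-ternary representation to three-valued connectives: truth values are encoded as -1/0/+1 and Kleene-style negation, conjunction, and disjunction are taken as sign flip, minimum, and maximum (the encoding and choice of connectives are illustrative assumptions, not any particular author's system):

```python
FALSE, UNKNOWN, TRUE = -1, 0, +1   # balanced-ternary truth values

def t_not(a):
    return -a          # negation flips the sign

def t_and(a, b):
    return min(a, b)   # conjunction is the minimum

def t_or(a, b):
    return max(a, b)   # disjunction is the maximum

names = {-1: "F", 0: "U", 1: "T"}
print("a b | a AND b  a OR b")
for a in (FALSE, UNKNOWN, TRUE):
    for b in (FALSE, UNKNOWN, TRUE):
        print(f"{names[a]} {names[b]} |    {names[t_and(a, b)]}        {names[t_or(a, b)]}")
```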
https://en.wikipedia.org/wiki/Setun | Setun () was a computer developed in 1958 at Moscow State University. It was built under the leadership of Sergei Sobolev and Nikolay Brusentsov. It was the most modern ternary computer, using the balanced ternary numeral system and three-valued ternary logic instead of the two-valued binary logic prevalent in other computers.
Overview
The computer was built to fulfill the needs of Moscow State University. It was manufactured at the Kazan Mathematical plant. Fifty computers were built from 1959 until 1965, when production was halted. Its operating memory consisted of 81 words, each word composed of 18 trits (ternary digits), with an additional 1944 words on a magnetic drum (a total of about 7 KB). Between 1965 and 1970, a regular binary computer was used at Moscow State University to replace it. Although this replacement binary computer performed equally well, it was 2.5 times the cost of the Setun.
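A back-of-the-envelope check of the quoted capacity (a sketch; converting trits to bits via log2 3 is the only assumption):

```python
import math

words_core, words_drum, trits_per_word = 81, 1944, 18
total_trits = (words_core + words_drum) * trits_per_word
total_bits = total_trits * math.log2(3)   # information content of one trit is log2(3) bits
print(f"{total_trits} trits ~= {total_bits / 8 / 1024:.1f} KB")   # roughly 7 KB
```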
In 1970, a new ternary computer architecture, the Setun-70, was developed. Edsger W. Dijkstra's ideas of structured programming were implemented in the hardware of this computer. The short instructions set was developed and implemented by Nikolay Brusentsov independently from RISC architecture principles.
The Setun-70 hardware architecture was transformed into the Dialogue System of Structured Programming (DSSP). DSSP emulates the "Setun 70" architecture on binary computers, thus it fulfills the advantages of structured programming. DSSP programming language has similar syntax to the Forth programming language but has a different sequence of base instructions, especially conditional jump instructions. DSSP was developed by Nikolay Brusentsov and doctoral students in the 1980s at Moscow State University. A 32-bit version was implemented in 1989.
See also
History of computing in the Soviet Union
References
Early computers
Soviet computer systems
Soviet inventions |
https://en.wikipedia.org/wiki/Phonotactics | Phonotactics (from Ancient Greek "voice, sound" and "having to do with arranging") is a branch of phonology that deals with restrictions in a language on the permissible combinations of phonemes. Phonotactics defines permissible syllable structure, consonant clusters and vowel sequences by means of phonotactic constraints.
Phonotactic constraints are highly language-specific. For example, in Japanese, consonant clusters like /st/ do not occur. Similarly, the clusters /kn/ and /ɡn/ are not permitted at the beginning of a word in Modern English but are in German and Dutch (in which the latter appears as /ɣn/) and were permitted in Old and Middle English. In contrast, in some Slavic languages /l/ and /r/ are used alongside vowels as syllable nuclei.
Syllables have the following internal segmental structure:
Onset (optional)
Rhyme (obligatory, comprises nucleus and coda):
Nucleus (obligatory)
Coda (optional)
Both onset and coda may be empty, forming a vowel-only syllable, or alternatively, the nucleus can be occupied by a syllabic consonant. Phonotactics is known to affect second language vocabulary acquisition.
English phonotactics
The English syllable (and word) twelfths is divided into the onset /tw/, the nucleus /ɛ/ and the coda /lfθs/; thus, it can be described as CCVCCCC (C = consonant, V = vowel). On this basis it is possible to form rules for which representations of phoneme classes may fill the cluster. For instance, English allows at most three consonants in an onset, but among native words under standard accents (and excluding a few obscure loanwords such as sphragistics), phonemes in a three-consonantal onset are limited to the following scheme:
/s/ + stop + approximant:
/s/ + /t/ + /ɹ/: stream
/s/ + /t/ + /j/ (not in most accents of American English): stew
/s/ + /p/ + /j/, /ɹ/, or /l/: sputum, sprawl, splat
/s/ + /k/ + /j/, /ɹ/, /l/, or /w/: skew, scream, sclerosis, squirrel
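Read as data, the scheme above is a small lookup table. A sketch of a checker built from it (the phoneme symbols come from the scheme; the function itself is an illustrative assumption):

```python
# Permitted three-consonant onsets per the scheme above: /s/ + stop + approximant.
THREE_C_ONSETS = {
    ("s", "t"): {"ɹ", "j"},
    ("s", "p"): {"j", "ɹ", "l"},
    ("s", "k"): {"j", "ɹ", "l", "w"},
}

def valid_three_consonant_onset(c1, c2, c3):
    """True if c1+c2+c3 fits the native-English three-consonant onset scheme."""
    return c3 in THREE_C_ONSETS.get((c1, c2), set())

print(valid_three_consonant_onset("s", "t", "ɹ"))   # True  (stream)
print(valid_three_consonant_onset("s", "p", "w"))   # False (*spw- does not occur)
```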
This constraint can be observed in the pronunciation of the word blue: originally, the vowel of blue was identical to the vowel of cue, approximately . In most dialects of English, s |
https://en.wikipedia.org/wiki/Binary%20logarithm | In mathematics, the binary logarithm ($\log_2 n$) is the power to which the number $2$ must be raised to obtain the value $n$. That is, for any real number $x$,
$$x = \log_2 n \iff 2^x = n.$$
For example, the binary logarithm of $1$ is $0$, the binary logarithm of $2$ is $1$, the binary logarithm of $4$ is $2$, and the binary logarithm of $32$ is $5$.
The binary logarithm is the logarithm to the base $2$ and is the inverse function of the power of two function. As well as $\log_2 n$, an alternative notation for the binary logarithm is $\operatorname{lb} n$ (the notation preferred by ISO 31-11 and ISO 80000-2).
Historically, the first application of binary logarithms was in music theory, by Leonhard Euler: the binary logarithm of a frequency ratio of two musical tones gives the number of octaves by which the tones differ. Binary logarithms can be used to calculate the length of the representation of a number in the binary numeral system, or the number of bits needed to encode a message in information theory. In computer science, they count the number of steps needed for binary search and related algorithms. Other areas
in which the binary logarithm is frequently used include combinatorics, bioinformatics, the design of sports tournaments, and photography.
Binary logarithms are included in the standard C mathematical functions and other mathematical software packages.
The integer part of a binary logarithm can be found using the find first set operation on an integer value, or by looking up the exponent of a floating point value.
The fractional part of the logarithm can be calculated efficiently.
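A brief sketch of the integer-part computation in code, using an integer's bit length in place of a hardware find-first-set instruction and a float's exponent field (illustrative only):

```python
import math

def ilog2(n: int) -> int:
    """Integer part of the binary logarithm of a positive integer."""
    if n <= 0:
        raise ValueError("n must be positive")
    return n.bit_length() - 1            # position of the highest set bit

def ilog2_float(x: float) -> int:
    """Integer part of log2(x) for a positive float, read from its exponent."""
    mantissa, exponent = math.frexp(x)   # x = mantissa * 2**exponent, 0.5 <= mantissa < 1
    return exponent - 1

print(ilog2(32), ilog2(33))              # 5 5
print(ilog2_float(32.0), math.log2(32))  # 5 5.0
```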
History
The powers of two have been known since antiquity; for instance, they appear in Euclid's Elements, Props. IX.32 (on the factorization of powers of two) and IX.36 (half of the Euclid–Euler theorem, on the structure of even perfect numbers).
And the binary logarithm of a power of two is just its position in the ordered sequence of powers of two.
On this basis, Michael Stifel has been credited with publishing the first known table of binary logarithms in 1544. His book Ar |
https://en.wikipedia.org/wiki/Molecular%20clock | The molecular clock is a figurative term for a technique that uses the mutation rate of biomolecules to deduce the time in prehistory when two or more life forms diverged. The biomolecular data used for such calculations are usually nucleotide sequences for DNA, RNA, or amino acid sequences for proteins. The benchmarks for determining the mutation rate are often fossil or archaeological dates. The molecular clock was first tested in 1962 on the hemoglobin protein variants of various animals, and is commonly used in molecular evolution to estimate times of speciation or radiation. It is sometimes called a gene clock or an evolutionary clock.
Early discovery and genetic equidistance
The notion of the existence of a so-called "molecular clock" was first attributed to Émile Zuckerkandl and Linus Pauling who, in 1962, noticed that the number of amino acid differences in hemoglobin between different lineages changes roughly linearly with time, as estimated from fossil evidence. They generalized this observation to assert that the rate of evolutionary change of any specified protein was approximately constant over time and over different lineages (known as the molecular clock hypothesis).
The genetic equidistance phenomenon was first noted in 1963 by Emanuel Margoliash, who wrote: "It appears that the number of residue differences between cytochrome c of any two species is mostly conditioned by the time elapsed since the lines of evolution leading to these two species originally diverged. If this is correct, the cytochrome c of all mammals should be equally different from the cytochrome c of all birds. Since fish diverges from the main stem of vertebrate evolution earlier than either birds or mammals, the cytochrome c of both mammals and birds should be equally different from the cytochrome c of fish. Similarly, all vertebrate cytochrome c should be equally different from the yeast protein." For example, the difference between the cytochrome c of a carp and a frog, turt |
https://en.wikipedia.org/wiki/Herapathite | Herapathite, or iodoquinine sulfate, is a chemical compound whose crystals are dichroic and thus can be used for polarizing light.
It was discovered in 1852 by William Bird Herapath, a Bristol surgeon and chemist. One of his pupils found that adding iodine to the urine of a dog that had been fed quinine produced unusual green crystals. Herapath noticed while studying the crystals under a microscope that they appeared to polarize light.
In the 1930s, F. Bernauer invented a process to grow single herapathite crystals large enough to be sandwiched between two sheets of glass to create a polarizing filter; these were sold under the Bernotar name by Carl Zeiss. Herapathite can be formed by precipitation by dissolving quinine sulfate in acetic acid and adding iodine tincture.
Herapathite's dichroic properties came to the attention of Sir David Brewster, and were later used by Edwin H. Land in 1929 to construct the first type of Polaroid sheet polarizer. He did this by embedding herapathite crystals in a polymer instead of growing a single large crystal.
Structurally, herapathite consists of quinine (in a cationic doubly-protonated ammonium form), sulfate counterions, and triiodide units, all as a hydrate. They combine as 4C20H26N2O2•3SO4•2I3•6H2O, or sometimes other ratios and higher polyiodides.
References
Further reading
Bernauer, F. (1935). "Neue Wege zur Herstellung von Polarisatoren". Forschritte der Mineralogie, Kristallographie und Petrographie Neunzehnter Band.
Nitrogen heterocycles
Organoiodides
Polarization (waves)
Vinyl compounds |
https://en.wikipedia.org/wiki/Potometer | A potometer (from Greek ποτό = drink, and μέτρο = measure), sometimes known as a transpirometer, is a device used for measuring the rate of water uptake of a leafy shoot, which is almost equal to the water lost through transpiration. The causes of water uptake are photosynthesis and transpiration.
The rate of transpiration can be estimated in two ways:
Indirectly - by measuring the distance the water level drops in the graduated tube over a measured length of time. It is assumed that this is due to the cutting taking in water which in turn is necessary to replace an equal volume of water lost by transpiration.
Directly - by measuring the reduction in mass of the potometer over a period of time. Here it is assumed that any loss in mass is due to transpiration.
There are two main types of potometers: the bubble potometer (as detailed below), and the mass potometer. The mass potometer consists of a plant with its root submerged in a beaker. This beaker is then placed on a digital balance; readings can be made to determine the amount of water lost by the plant.
Design
Potometers come in a variety of designs, but all follow the same basic principle:
A length of capillary tube. A bubble is introduced to the capillary; as water is taken up by the plant, the bubble moves. By marking regular gradations on the tube, it is possible to measure water uptake.
A reservoir. Typically a funnel with a tap; turning the tap on the reservoir resets the bubble. Some designs use a syringe instead.
A tube for holding the shoot. The shoot must be held in contact with the water; additionally, the surface of the water should not be exposed to the air. Otherwise, evaporation will interfere with measurements. A rubber bung greased with petroleum jelly suffices.
Preparation
Cut a leafy shoot from a plant and plunge its base into water. This prevents the xylem from taking up any air. Wetting the leaves themselves will alter the rate of transpiration.
Immerse the whole of the poto |
https://en.wikipedia.org/wiki/Binomial%20options%20pricing%20model | In finance, the binomial options pricing model (BOPM) provides a generalizable numerical method for the valuation of options. Essentially, the model uses a "discrete-time" (lattice based) model of the varying price over time of the underlying financial instrument, addressing cases where the closed-form Black–Scholes formula is wanting.
The binomial model was first proposed by William Sharpe in the 1978 edition of Investments, and formalized by Cox, Ross and Rubinstein in 1979 and by Rendleman and Bartter in that same year.
For binomial trees as applied to fixed income and interest rate derivatives see .
Use of the model
The Binomial options pricing model approach has been widely used since it is able to handle a variety of conditions for which other models cannot easily be applied. This is largely because the BOPM is based on the description of an underlying instrument over a period of time rather than a single point. As a consequence, it is used to value American options that are exercisable at any time in a given interval as well as Bermudan options that are exercisable at specific instances of time. Being relatively simple, the model is readily implementable in computer software (including a spreadsheet).
Although computationally slower than the Black–Scholes formula, it is more accurate, particularly for longer-dated options on securities with dividend payments. For these reasons, various versions of the binomial model are widely used by practitioners in the options markets.
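A minimal sketch of a Cox–Ross–Rubinstein binomial pricer for an American put, to show the lattice and backward-induction structure (parameter names and example values are illustrative assumptions, not from the article):

```python
import math

def crr_american_put(S0, K, r, sigma, T, steps):
    """Price an American put on a CRR binomial lattice (no dividends)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))    # up factor
    d = 1.0 / u                            # down factor
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Option values at maturity, indexed by number of up-moves j.
    values = [max(K - S0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]

    # Backward induction with an early-exercise check at every node.
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = max(K - S0 * u**j * d**(i - j), 0.0)
            values[j] = max(cont, exercise)
    return values[0]

# Illustrative parameters: spot 100, strike 100, 5% rate, 20% vol, 1 year, 200 steps.
print(round(crr_american_put(100, 100, 0.05, 0.2, 1.0, 200), 4))
```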
For options with several sources of uncertainty (e.g., real options) and for options with complicated features (e.g., Asian options), binomial methods are less practical due to several difficulties, and Monte Carlo option models are commonly used instead. When simulating a small number of time steps Monte Carlo simulation will be more computationally time-consuming than BOPM (cf. Monte Carlo methods in finance). However, the worst-case runtime of BOPM will be O(2^n), where n is the nu |
https://en.wikipedia.org/wiki/Skewes%27s%20number | In number theory, Skewes's number is any of several large numbers used by the South African mathematician Stanley Skewes as upper bounds for the smallest natural number $x$ for which
$$\pi(x) > \operatorname{li}(x),$$
where $\pi$ is the prime-counting function and $\operatorname{li}$ is the logarithmic integral function. Skewes's number is much larger, but it is now known that there is a crossing between $\pi(x)$ and $\operatorname{li}(x)$ near $1.397 \times 10^{316}$. It is not known whether it is the smallest crossing.
Skewes's numbers
J.E. Littlewood, who was Skewes's research supervisor, had proved in 1914 that there is such a number (and so, a first such number); and indeed found that the sign of the difference $\pi(x) - \operatorname{li}(x)$ changes infinitely many times. All numerical evidence then available seemed to suggest that $\pi(x)$ was always less than $\operatorname{li}(x)$. Littlewood's proof did not, however, exhibit a concrete such number $x$.
In 1933, Skewes proved that, assuming that the Riemann hypothesis is true, there exists a number $x$ violating $\pi(x) < \operatorname{li}(x)$ below $e^{e^{e^{79}}}$ (approximately $10^{10^{10^{34}}}$).
In 1955, without assuming the Riemann hypothesis, Skewes proved that there must exist a value of $x$ below $e^{e^{e^{e^{7.705}}}}$ (approximately $10^{10^{10^{964}}}$).
Skewes's task was to make Littlewood's existence proof effective: exhibiting some concrete upper bound for the first sign change. According to Georg Kreisel, this was at the time not considered obvious even in principle.
More recent estimates
These upper bounds have since been reduced considerably by using large-scale computer calculations of zeros of the Riemann zeta function. The first estimate for the actual value of a crossover point was given by , who showed that somewhere between and there are more than consecutive integers with .
Without assuming the Riemann hypothesis, proved an upper bound of . A better estimate was discovered by , who showed there are at least consecutive integers somewhere near this value where . Bays and Hudson found a few much smaller values of where gets close to ; the possibility that there are crossover points near these values does not seem to have been definitely ruled out yet, though computer calculations suggest they are unlikely to exist. |
https://en.wikipedia.org/wiki/Greeks%20%28finance%29 | In mathematical finance, the Greeks are the quantities (known in calculus as partial derivatives; first-order or higher) representing the sensitivity of the price of a derivative instrument such as an option to changes in one or more underlying parameters on which the value of an instrument or portfolio of financial instruments is dependent. The name is used because the most common of these sensitivities are denoted by Greek letters (as are some other finance measures). Collectively these have also been called the risk sensitivities, risk measures or hedge parameters.
Use of the Greeks
The Greeks are vital tools in risk management. Each Greek measures the sensitivity of the value of a portfolio to a small change in a given underlying parameter, so that component risks may be treated in isolation, and the portfolio rebalanced accordingly to achieve a desired exposure; see for example delta hedging.
The Greeks in the Black–Scholes model (a relatively simple idealised model of certain financial markets) are relatively easy to calculate — a desirable property of financial models — and are very useful for derivatives traders, especially those who seek to hedge their portfolios from adverse changes in market conditions. For this reason, those Greeks which are particularly useful for hedging—such as delta, theta, and vega—are well-defined for measuring changes in the parameters spot price, time and volatility. Although rho (the partial derivative with respect to the risk-free interest rate) is a primary input into the Black–Scholes model, the overall impact on the value of a short-term option corresponding to changes in the risk-free interest rate is generally insignificant and therefore higher-order derivatives involving the risk-free interest rate are not common.
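As a sketch of how two of these sensitivities are computed in the Black–Scholes model, here are closed-form delta and gamma for a European call on a non-dividend-paying asset (the input values are illustrative assumptions):

```python
import math

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_delta_gamma(S, K, r, sigma, T):
    """Black–Scholes delta and gamma of a European call (no dividends)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    delta = norm_cdf(d1)
    gamma = norm_pdf(d1) / (S * sigma * math.sqrt(T))
    return delta, gamma

# Illustrative inputs: spot 100, strike 100, 5% rate, 20% vol, 6 months.
print(call_delta_gamma(100, 100, 0.05, 0.2, 0.5))
```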
The most common of the Greeks are the first-order derivatives: delta, vega, theta and rho; as well as gamma, a second-order derivative of the value function. The remaining sensitivities in this list are comm
https://en.wikipedia.org/wiki/Rhumb%20line | In navigation, a rhumb line, rhumb (), or loxodrome is an arc crossing all meridians of longitude at the same angle, that is, a path with constant bearing as measured relative to true north.
Introduction
The effect of following a rhumb line course on the surface of a globe was first discussed by the Portuguese mathematician Pedro Nunes in 1537, in his Treatise in Defense of the Marine Chart, with further mathematical development by Thomas Harriot in the 1590s.
A rhumb line can be contrasted with a great circle, which is the path of shortest distance between two points on the surface of a sphere. On a great circle, the bearing to the destination point does not remain constant. If one were to drive a car along a great circle one would hold the steering wheel fixed, but to follow a rhumb line one would have to turn the wheel, turning it more sharply as the poles are approached. In other words, a great circle is locally "straight" with zero geodesic curvature, whereas a rhumb line has non-zero geodesic curvature.
Meridians of longitude and parallels of latitude provide special cases of the rhumb line, where their angles of intersection are respectively 0° and 90°. On a north–south passage the rhumb line course coincides with a great circle, as it does on an east–west passage along the equator.
On a Mercator projection map, any rhumb line is a straight line; a rhumb line can be drawn on such a map between any two points on Earth without going off the edge of the map. But theoretically a loxodrome can extend beyond the right edge of the map, where it then continues at the left edge with the same slope (assuming that the map covers exactly 360 degrees of longitude).
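A minimal sketch of the constant-bearing idea follows: it computes a rhumb-line course and distance on a spherical Earth using the Mercator "stretched latitude". The Earth radius, tolerance, and function names are assumptions of the example, not part of the article.

```python
# Minimal sketch: constant-bearing (rhumb-line) course and distance between two
# points on a spherical Earth via the Mercator-projected latitude. Illustrative only.
from math import radians, degrees, log, tan, cos, atan2, pi

EARTH_RADIUS_KM = 6371.0

def rhumb_course(lat1, lon1, lat2, lon2):
    """Return (bearing in degrees from true north, distance in km)."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = phi2 - phi1
    dlon = radians(lon2 - lon1)
    if abs(dlon) > pi:                       # take the shorter way round in longitude
        dlon -= (2 * pi) if dlon > 0 else (-2 * pi)
    # difference of Mercator-projected ("stretched") latitudes
    dpsi = log(tan(pi / 4 + phi2 / 2) / tan(pi / 4 + phi1 / 2))
    bearing = atan2(dlon, dpsi)
    # east-west legs need the special case where dpsi is essentially zero
    q = dphi / dpsi if abs(dpsi) > 1e-12 else cos(phi1)
    distance = ((dphi ** 2 + (q * dlon) ** 2) ** 0.5) * EARTH_RADIUS_KM
    return (degrees(bearing) % 360.0, distance)

print(rhumb_course(50.0, -5.0, 40.0, -70.0))   # roughly a transatlantic leg
```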
Rhumb lines which cut meridians at oblique angles are loxodromic curves which spiral towards the poles. On a Mercator projection the north and south poles occur at infinity and are therefore never shown. However the full loxodrome on an infinitely high map would consist of infinitely many line segments bet |
https://en.wikipedia.org/wiki/Covering%20space | In topology, a covering or covering projection is a surjective map between topological spaces that, intuitively, locally acts like a projection of multiple copies of a space onto itself. In particular, coverings are special types of local homeomorphisms. If is a covering, is said to be a covering space or cover of , and is said to be the base of the covering, or simply the base. By abuse of terminology, and may sometimes be called covering spaces as well. Since coverings are local homeomorphisms, a covering space is a special kind of étale space.
Covering spaces first arose in the context of complex analysis (specifically, the technique of analytic continuation), where they were introduced by Riemann as domains on which naturally multivalued complex functions become single-valued. These spaces are now called Riemann surfaces.
Covering spaces are an important tool in several areas of mathematics. In modern geometry, covering spaces (or branched coverings, which have slightly weaker conditions) are used in the construction of manifolds, orbifolds, and the morphisms between them. In algebraic topology, covering spaces are closely related to the fundamental group: for one, since all coverings have the homotopy lifting property, covering spaces are an important tool in the calculation of homotopy groups. A standard example in this vein is the calculation of the fundamental group of the circle by means of the covering of by (see below). Under certain conditions, covering spaces also exhibit a Galois correspondence with the subgroups of the fundamental group.
Definition
Let be a topological space. A covering of is a continuous map
such that for every there exists an open neighborhood of and a discrete space such that and is a homeomorphism for every .
The open sets are called sheets, which are uniquely determined up to homeomorphism if is connected. For each the discrete set is called the fiber of . If is connected, it can be shown that the cardi |
https://en.wikipedia.org/wiki/Demarcation%20point | In telephony, the demarcation point is the point at which the public switched telephone network ends and connects with the customer's on-premises wiring. It is the dividing line which determines who is responsible for installation and maintenance of wiring and equipment—customer/subscriber, or telephone company/provider. The demarcation point varies between countries and has changed over time.
Demarcation point is sometimes abbreviated as demarc, DMARC, or similar. The term MPOE (minimum or main point of entry) is synonymous, with the added implication that it occurs as soon as possible upon entering the customer premises. A network interface device often serves as the demarcation point.
History
Prior to Federal Communications Commission (FCC) regulations separating the ownership of customer premises telecommunication equipment from the telephone network, there was no need for a public standard governing the interconnection of customer premises equipment (CPE) to the United States' telephone network, since both the devices and the “local loop” wiring to the central office were owned and maintained by the local telephone company.
Concurrent with the transfer of existing "embedded" CPE to the customer (customers could buy new telephones at retail or could continue to lease their existing equipment from the company), it was necessary to provide a standardized way to connect equipment, and also provide a way to test the phone company's service separately from the customer's equipment.
The ability of customers to buy and maintain their CPE and attach it to the network was stimulated by lawsuits by equipment manufacturers, such as the Hush-a-Phone v. FCC suit. Additionally, computer companies’ ability to offer enhanced services to customers was likewise constrained by the telephone companies’ control of all devices connected to the network. As the Bell telephone companies were themselves restricted from offering such enhanced services, there was little momentum to |
https://en.wikipedia.org/wiki/Bielefeld%20conspiracy | The Bielefeld conspiracy (German: or , ) is a satirical conspiracy theory that claims that the city of Bielefeld, Germany, does not exist, but is an illusion propagated by various forces. First posted on the German Usenet in 1994, the conspiracy has since been mentioned in the city's marketing, and alluded to in a speech by former Chancellor Angela Merkel.
Synopsis
The theory proposes that the city of Bielefeld (population of 341,755 ) in the German state of North Rhine-Westphalia does not actually exist. Rather, its existence is merely propagated by an entity known only as ("they" in German, always in block capitals), which has conspired with the authorities to create the illusion of the city's existence.
The theory is based on three questions:
Do you know anybody from Bielefeld?
Have you ever been to Bielefeld?
Do you know anybody who has ever been to Bielefeld?
A majority are expected to answer no to all three queries. Anybody who can answer yes to any of the queries, or who claims any other knowledge about Bielefeld, is promptly disregarded as being in on the conspiracy, or as having themselves been deceived.
The origins of and reasons for this conspiracy are not a part of the original theory. Speculated originators jokingly include the Central Intelligence Agency, Mossad, or aliens who use Bielefeld University as a disguise for their spaceship.
History
The conspiracy theory was first made public in a posting to the newsgroup de.talk.bizarre on 16 May 1994 by Achim Held, a computer science student at the University of Kiel. When a friend of Held met someone from Bielefeld at a student party in 1993, he said "", meaning "That doesn't exist". From the newsgroup posting, the idea spread throughout the German-speaking Internet community.
In a television interview conducted for the 10th anniversary of the newsgroup posting, Held stated that this myth definitely originated from his Usenet posting, which was intended only as a joke. According to Held, the idea for the conspiracy theory formed in his m |
https://en.wikipedia.org/wiki/Ring%20theory | In algebra, ring theory is the study of rings—algebraic structures in which addition and multiplication are defined and have similar properties to those operations defined for the integers. Ring theory studies the structure of rings, their representations, or, in different language, modules, special classes of rings (group rings, division rings, universal enveloping algebras), as well as an array of properties that proved to be of interest both within the theory itself and for its applications, such as homological properties and polynomial identities.
Commutative rings are much better understood than noncommutative ones. Algebraic geometry and algebraic number theory, which provide many natural examples of commutative rings, have driven much of the development of commutative ring theory, which is now, under the name of commutative algebra, a major area of modern mathematics. Because these three fields (algebraic geometry, algebraic number theory and commutative algebra) are so intimately connected it is usually difficult and meaningless to decide which field a particular result belongs to. For example, Hilbert's Nullstellensatz is a theorem which is fundamental for algebraic geometry, and is stated and proved in terms of commutative algebra. Similarly, Fermat's Last Theorem is stated in terms of elementary arithmetic, which is a part of commutative algebra, but its proof involves deep results of both algebraic number theory and algebraic geometry.
Noncommutative rings are quite different in flavour, since more unusual behavior can arise. While the theory has developed in its own right, a fairly recent trend has sought to parallel the commutative development by building the theory of certain classes of noncommutative rings in a geometric fashion as if they were rings of functions on (non-existent) 'noncommutative spaces'. This trend started in the 1980s with the development of noncommutative geometry and with the discovery of quantum groups. It has led to a better |
https://en.wikipedia.org/wiki/Generative%20art | Generative art refers to art that in whole or in part has been created with the use of an autonomous system. An autonomous system in this context is generally one that is non-human and can independently determine features of an artwork that would otherwise require decisions made directly by the artist. In some cases the human creator may claim that the generative system represents their own artistic idea, and in others that the system takes on the role of the creator.
"Generative art" often refers to algorithmic art (algorithmically determined computer generated artwork) and synthetic media (general term for any algorithmically generated media), but artists can also make it using systems of chemistry, biology, mechanics and robotics, smart materials, manual randomization, mathematics, data mapping, symmetry, tiling, and more.
History
The use of the word "generative" in the discussion of art has developed over time. The use of "Artificial DNA" defines a generative approach to art focused on the construction of a system able to generate unpredictable events, all with a recognizable common character. The use of autonomous systems, required by some contemporary definitions, focuses on a generative approach where the controls are strongly reduced. This approach is also named "emergent". Margaret Boden and Ernest Edmonds have noted the use of the term "generative art" in the broad context of automated computer graphics in the 1960s, beginning with artwork exhibited by Georg Nees and Frieder Nake in 1965: A. Michael Noll did his initial computer art, combining randomness with order, in 1962, and exhibited it along with works by Béla Julesz in 1965.
The first such exhibition showed the work of Nees in February 1965, which some claim was titled "Generative Computergrafik". While Nees does not himself remember, this was the title of his doctoral thesis published a few years later. The correct title of the first exhibition and catalog was "computer-grafik". "Generative ar |
https://en.wikipedia.org/wiki/Qiblih |
In the Baháʼí Faith the Qiblih (, "direction") is the location to which Baháʼís face when saying their daily obligatory prayers. The Qiblih is fixed at the Shrine of Baháʼu'lláh, near Acre, in present-day Israel; approximately at .
In Bábism the Qiblih was originally identified by the Báb with "the One Whom God will make manifest", a messianic figure predicted by the Báb. Baháʼu'lláh, the Prophet-founder of the Baháʼí Faith, claimed to be the figure predicted by the Báb. In the Kitáb-i-Aqdas, Baháʼu'lláh confirms the Báb's ordinance and further ordains his final resting-place as the Qiblih for his followers. ʻAbdu'l-Bahá describes that spot as the "luminous Shrine", "the place around which circumambulate the Concourse on High". The concept exists in other religions. Jews face Jerusalem, more specifically the site of the former Temple of Jerusalem. Muslims face the Kaaba in Mecca, which they also call the Qibla (another transliteration of Qiblih).
Baháʼís do not worship the Shrine of Baháʼu'lláh or its contents; the Qiblih is simply a focal point for the obligatory prayers. When praying obligatory prayers the members of the Baháʼí Faith face in the direction of the Qiblih. It is a fixed requirement for the recitation of an obligatory prayer, but for other prayers and devotions one may follow what is written in the Qurʼan: "Whichever way ye turn, there is the face of God."
Burial of the dead
"The dead should be buried with their face turned towards the Qiblih. This also is in accordance with what is practiced in Islam. There is also a congregational prayer to be recited. Besides this there is no other ceremony to be performed" (From a letter written on behalf of Shoghi Effendi to an individual believer, July 6, 1935).
See also
Qibla, the Islamic equivalent of the Qiblih
Ad orientem, the Christian practice of facing east in prayer, also informs the orientation of many church buildings
Mizrah, the Jewish practice of praying facing the Temple Mount in Jerusalem
Citatio |
https://en.wikipedia.org/wiki/Programme%20Delivery%20Control | Programme delivery control (PDC) is specified by the standard ETS 300 231 (ETSI EN 300 231), published by the European Telecommunications Standards Institute (ETSI). This specifies the signals sent as hidden codes in the teletext service, indicating when transmission of a programme starts and finishes.
PDC (also known as Enhanced Teletext Packet 8/30 Format 2) is often used together with StarText, enabling the user to select a programme to record using specially coded teletext programme listings. The combination of features is often called PDC/StarText.
In Germany and some other European countries, the older standard video programming system (VPS) is in use also known as format 2. Effectively, the two systems do the same thing and most modern VCRs and stand-alone DVD recorders work with both signals.
In digital TV (see Freeview+), the Accurate Recording (AR) feature, which was based on the PDC specification for analogue recording devices, is now used for a DVB-SI event-based scheduling system. This was due to the BBC discontinuing the Ceefax service.
PDC Packets
PDC is transmitted once a second in special packets addressed as magazine 8 and text row 30. Since this row is not displayable it does not interfere with normal pages. Packet 8/30 has various formats specified by ETSI and PDC is format 2. Each packet 8/30 format 2 also has a label number and there can be up to four labels transmitted at a time. Each label contains the scheduled start time and date for a programme and flags to indicate the state. Each programme is assigned a label and in general a label will follow this sequence.
PRF Set – Prepare for Record. This will tell a VCR to wake up and get ready. This happens about 40 seconds before the programme is active.
PRF Clear – The VCR should be recording.
RTI – Record Terminate/Interrupt – Tells the VCR to stop recording. This label is held for 30 seconds after the programme ends.
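The label sequence above can be sketched as a small state machine. The following illustrative code (class, method and event names are assumptions of the example, not part of the standard) steps a recorder through prepare, record and stop:

```python
# Minimal sketch of the label sequence described above, driving a recorder
# through prepare -> record -> stop. Names and states are illustrative.
class Recorder:
    def __init__(self):
        self.state = "idle"

    def on_label(self, event):
        if event == "PRF_SET":          # ~40 s before the programme: wake up
            self.state = "preparing"
        elif event == "PRF_CLEAR":      # programme active: record
            self.state = "recording"
        elif event == "RTI":            # record terminate/interrupt
            self.state = "idle"
        return self.state

vcr = Recorder()
for label in ("PRF_SET", "PRF_CLEAR", "RTI"):
    print(label, "->", vcr.on_label(label))
```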
There are complicated rules for the case where a programme is interrupted |
https://en.wikipedia.org/wiki/Selectron%20tube | The Selectron was an early form of digital computer memory developed by Jan A. Rajchman and his group at the Radio Corporation of America (RCA) under the direction of Vladimir K. Zworykin. It was a vacuum tube that stored digital data as electrostatic charges using technology similar to the Williams tube storage device. The team was never able to produce a commercially viable form of Selectron before magnetic-core memory became almost universal.
Development
Development of Selectron started in 1946 at the behest of John von Neumann of the Institute for Advanced Study, who was in the midst of designing the IAS machine and was looking for a new form of high-speed memory.
RCA's original design concept had a capacity of 4096 bits, with a planned production of 200 by the end of 1946. They found the device to be much more difficult to build than expected, and the devices were still not available by the middle of 1948. As development dragged on, the IAS machine was forced to switch to Williams tubes for storage, and the primary customer for Selectron disappeared. RCA lost interest in the design and assigned its engineers to improve televisions.
A contract from the US Air Force led to a re-examination of the device in a 256-bit form. Rand Corporation took advantage of this project to switch their own IAS machine, the JOHNNIAC, to this new version of the Selectron, using 80 of them to provide 512 40-bit words of main memory. They signed a development contract with RCA to produce enough tubes for their machine at a projected cost of $500 per tube ($ in ).
Around this time IBM expressed an interest in the Selectron as well, but this did not lead to additional production. As a result, RCA assigned their engineers to color television development, and put the Selectron in the hands of "the mothers-in-law of two deserving employees (the Chairman of the Board and the President)."
Both the Selectron and the Williams tube were superseded in the market by the compact and cost-effectiv |
https://en.wikipedia.org/wiki/Portable%20computer | A portable computer is a computer designed to be easily moved from one place to another, as opposed to those designed to remain stationary at a single location such as desktops and workstations. These computers usually include a display and keyboard that are directly connected to the main case, all sharing a single power plug together, much like later desktop computers called all-in-ones (AIO) that integrate the system's internal components into the same case as the display. In modern usage, a portable computer usually refers to a very light and compact personal computer such as a laptop, miniature or pocket-sized computer, while touchscreen-based handheld ("palmtop") devices such as tablet, phablet and smartphone are called mobile devices instead.
The first commercially sold portable computer might be the MCM/70, released in 1974. The next major portables were the IBM 5100 (1975), Osborne's CP/M-based Osborne 1 (1981), and Compaq's Compaq Portable (1983), advertised as 100% IBM PC compatible. These luggable computers still required a continuous connection to an external power source; this limitation was later overcome by laptop computers. Laptops were followed by lighter models such as netbooks, so that in the 2000s mobile devices, and by 2007 smartphones, made the term "portable" rather meaningless. The 2010s introduced wearable computers such as smartwatches.
Portable computers, by their nature, are generally microcomputers. Larger portable computers were commonly known as 'Lunchbox' or 'Luggable' computers. They are also called 'Portable Workstations' or 'Portable PCs'. In Japan they were often called by a term derived from "bento".
Portable computers, more narrowly defined, are distinct from desktop replacement computers in that they usually were constructed from full-specification desktop components, and often do not incorporate features associated with laptops or mobile devices. A portable computer in this usage, versus a laptop or other mobile computing device, have |
https://en.wikipedia.org/wiki/Implicit%20function | In mathematics, an implicit equation is a relation of the form where is a function of several variables (often a polynomial). For example, the implicit equation of the unit circle is
An implicit function is a function that is defined by an implicit equation, that relates one of the variables, considered as the value of the function, with the others considered as the arguments. For example, the equation of the unit circle defines as an implicit function of if , and is restricted to nonnegative values.
The implicit function theorem provides conditions under which some kinds of implicit equations define implicit functions, namely those that are obtained by equating to zero multivariable functions that are continuously differentiable.
Examples
Inverse functions
A common type of implicit function is an inverse function. Not all functions have a unique inverse function. If is a function of that has a unique inverse, then the inverse function of , called , is the unique function giving a solution of the equation
for in terms of . This solution can then be written as
Defining as the inverse of is an implicit definition. For some functions , can be written out explicitly as a closed-form expression — for instance, if , then . However, this is often not possible, or only by introducing a new notation (as in the product log example below).
Intuitively, an inverse function is obtained from by interchanging the roles of the dependent and independent variables.
Example: The product log is an implicit function giving the solution for of the equation .
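A sketch of how such an implicitly defined value can still be computed follows: the product log is evaluated by solving w·exp(w) = x numerically with Newton's method. The function name, starting guess, and tolerance are assumptions of the example.

```python
# Minimal sketch: the product log (Lambert W) as an implicit function.
# There is no elementary closed form for w in x = w * exp(w), but its value
# can be computed numerically, here with Newton's method for x >= 0.
from math import exp, log

def product_log(x, tol=1e-12):
    """Solve w * exp(w) = x for w (principal branch, x >= 0)."""
    w = log(x + 1.0)                      # reasonable starting guess
    for _ in range(100):
        f = w * exp(w) - x
        fprime = exp(w) * (w + 1.0)
        step = f / fprime
        w -= step
        if abs(step) < tol:
            break
    return w

w = product_log(10.0)
print(w, w * exp(w))                      # w * exp(w) recovers 10.0
```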
Algebraic functions
An algebraic function is a function that satisfies a polynomial equation whose coefficients are themselves polynomials. For example, an algebraic function in one variable gives a solution for of an equation
where the coefficients are polynomial functions of . This algebraic function can be written as the right side of the solution equation . Written like this, is a multi-valued impli |
https://en.wikipedia.org/wiki/Luminous%20efficiency%20function | A luminous efficiency function or luminosity function represents the average spectral sensitivity of human visual perception of light. It is based on subjective judgements of which of a pair of different-colored lights is brighter, to describe relative sensitivity to light of different wavelengths. It is not an absolute reference to any particular individual, but is a standard observer representation of visual sensitivity of theoretical human eye. It is valuable as a baseline for experimental purposes, and in colorimetry. Different luminous efficiency functions apply under different lighting conditions, varying from photopic in brightly lit conditions through mesopic to scotopic under low lighting conditions. When not specified, the luminous efficiency function generally refers to the photopic luminous efficiency function.
The CIE photopic luminous efficiency function or is a standard function established by the Commission Internationale de l'Éclairage (CIE) and standardized in collaboration with the ISO, and may be used to convert radiant energy into luminous (i.e., visible) energy. It also forms the central color matching function in the CIE 1931 color space.
Details
There are two luminous efficiency functions in common use. For everyday light levels, the photopic luminosity function best approximates the response of the human eye. For low light levels, the response of the human eye changes, and the scotopic curve applies. The photopic curve is the CIE standard curve used in the CIE 1931 color space.
The luminous flux (or visible power) in a light source is defined by the photopic luminosity function. The following equation calculates the total luminous flux in a source of light:
where
Φv is the luminous flux, in lumens;
Φe,λ is the spectral radiant flux, in watts per nanometre;
(λ), also known as V(λ), is the luminosity function, dimensionless;
λ is the wavelength, in nanometres.
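A minimal numerical sketch of this weighting follows. It uses the standard photopic luminous efficacy constant of 683 lm/W, and a crude Gaussian stand-in for V(λ) purely for illustration; real calculations use the tabulated CIE function, and all names here are illustrative.

```python
# Minimal numerical sketch of the luminous-flux integral:
#   Phi_v = 683 lm/W * integral of Phi_e,lambda * V(lambda) over wavelength.
# V(lambda) is approximated by a rough Gaussian peaking at 555 nm; real work
# uses the tabulated CIE luminosity function.
from math import exp

def V_approx(lam_nm):
    return exp(-0.5 * ((lam_nm - 555.0) / 45.0) ** 2)   # crude stand-in for V(lambda)

def luminous_flux(spectral_flux_w_per_nm, lambdas_nm):
    """Trapezoidal integration of 683 * Phi_e,lambda * V(lambda) over wavelength."""
    total = 0.0
    for i in range(len(lambdas_nm) - 1):
        dl = lambdas_nm[i + 1] - lambdas_nm[i]
        f0 = spectral_flux_w_per_nm[i] * V_approx(lambdas_nm[i])
        f1 = spectral_flux_w_per_nm[i + 1] * V_approx(lambdas_nm[i + 1])
        total += 0.5 * (f0 + f1) * dl
    return 683.0 * total                                  # lumens

# Example: a flat 1 mW/nm source across the visible band.
lams = list(range(380, 781, 5))
flux = [0.001] * len(lams)
print(luminous_flux(flux, lams), "lm")
```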
Formally, the integral is the inner product of the luminosity func |
https://en.wikipedia.org/wiki/Red%20River%20Floodway | The Red River Floodway () is an artificial flood control waterway in Western Canada. It is a long channel which, during flood periods, takes part of the Red River's flow around the city of Winnipeg, Manitoba to the east and discharges it back into the Red River below the dam at Lockport. It can carry floodwater at a rate of up to , expanded in the 2000s from its original channel capacity of .
The Floodway was pejoratively nicknamed "Duff's Ditch" by opponents of its construction, after Premier Duff Roblin, whose Progressive Conservative government initiated the project, partly in response to the disastrous 1950 Red River flood. It was completed in time and under budget. Subsequent events have vindicated the plan. Since its completion in 1968, the Floodway is estimated to have prevented over $40 billion (CAD) in cumulative flood damage. It was designated a National Historic Site of Canada in 2000, as the floodway is an outstanding engineering achievement both in terms of function and impact.
From south to north, the Floodway passes through the extreme southeastern part of Winnipeg and the rural municipalities of Ritchot, Springfield, East St. Paul, and St. Clements.
History
Following the submission of the Royal Commission report, Manitobans were strongly divided as to whether the province could afford the capital costs of a mammoth engineering project that would benefit primarily Winnipeg. The project was championed by Dufferin (Duff) Roblin, the Leader of the Opposition and head of the Manitoba Progressive Conservative Party, but it was vehemently denounced by opponents as a monumental, and potentially ruinous, waste of money. Indeed, the projected Red River Floodway was derisively referred to as “Duff’s Folly” and “Duff’s Ditch”, and decried as “approximating the building of the pyramids of Egypt in terms of usefulness.” The construction of the floodway and Assiniboine River works would entail a capital cost of over $72 million, amortized over fifty years at
https://en.wikipedia.org/wiki/Glass-ceramic | Glass-ceramics are polycrystalline materials produced through controlled crystallization of base glass, producing a fine uniform dispersion of crystals throughout the bulk material. Crystallization is accomplished by subjecting suitable glasses to a carefully regulated heat treatment schedule, resulting in the nucleation and growth of crystal phases. In many cases, the crystallization process can proceed to near completion, but a small proportion of residual glass phase often remains. Glass-ceramic materials share many properties with both glasses and ceramics. Glass-ceramics have an amorphous phase and one or more crystalline phases and are produced by a so-called "controlled crystallization" in contrast to a spontaneous crystallization, which is usually not wanted in glass manufacturing. Glass-ceramics have the fabrication advantage of glass, as well as special properties of ceramics. When used for sealing, some glass-ceramics do not require brazing but can withstand brazing temperatures up to 700 °C. Glass-ceramics usually have between 30% [m/m] and 90% [m/m] crystallinity and yield an array of materials with interesting properties like zero porosity, high strength, toughness, translucency or opacity, pigmentation, opalescence, low or even negative thermal expansion, high temperature stability, fluorescence, machinability, ferromagnetism, resorbability or high chemical durability, biocompatibility, bioactivity, ion conductivity, superconductivity, isolation capabilities, low dielectric constant and loss, corrosion resistance, high resistivity and break-down voltage. These properties can be tailored by controlling the base-glass composition and by controlled heat treatment/crystallization of base glass. In manufacturing, glass-ceramics are valued for having the strength of ceramic but the hermetic sealing properties of glass.
Glass-ceramics are mostly produced in two steps: First, a glass is formed by a glass-manufacturing process, after whic |
https://en.wikipedia.org/wiki/Existence%20theorem | In mathematics, an existence theorem is a theorem which asserts the existence of a certain object. It might be a statement which begins with the phrase "there exist(s)", or it might be a universal statement whose last quantifier is existential (e.g., "for all , , ... there exist(s) ..."). In the formal terms of symbolic logic, an existence theorem is a theorem with a prenex normal form involving the existential quantifier, even though in practice, such theorems are usually stated in standard mathematical language. For example, the statement that the sine function is continuous everywhere, or any theorem written in big O notation, can be considered as theorems which are existential by nature—since the quantification can be found in the definitions of the concepts used.
A controversy that goes back to the early twentieth century concerns the issue of purely theoretic existence theorems, that is, theorems which depend on non-constructive foundational material such as the axiom of infinity, the axiom of choice or the law of excluded middle. Such theorems provide no indication as to how to construct (or exhibit) the object whose existence is being claimed. From a constructivist viewpoint, such approaches are not viable, as they lead to mathematics losing its concrete applicability, while the opposing viewpoint is that abstract methods are far-reaching, in a way that numerical analysis cannot be.
'Pure' existence results
In mathematics, an existence theorem is purely theoretical if the proof given for it does not indicate a construction of the object whose existence is asserted. Such a proof is non-constructive, since the whole approach may not lend itself to construction. In terms of algorithms, purely theoretical existence theorems bypass all algorithms for finding what is asserted to exist. These are to be contrasted with the so-called "constructive" existence theorems, which many constructivist mathematicians working in extended logics (such as intuitionistic logic) b |
https://en.wikipedia.org/wiki/Twistor%20memory | Twistor memory is a form of computer memory formed by wrapping magnetic tape around a current-carrying wire. Operationally, twistor was very similar to core memory. Twistor could also be used to make ROM memories, including a re-programmable form known as piggyback twistor. Both forms were able to be manufactured using automated processes, which was expected to lead to much lower production costs than core-based systems.
Introduced by Bell Labs in 1957, the first commercial use was in their 1ESS switch which went into operation in 1965. Twistor was used only briefly in the late 1960s and early 1970s, when semiconductor memory devices replaced almost all earlier memory systems. The basic ideas behind twistor also led to the development of bubble memory, although this had a similarly short commercial lifespan.
Core memory
Construction
In core memory, small ring-shaped magnets - the cores - are threaded by two crossed wires, X and Y, to make a matrix known as a plane. When one X and one Y wire are powered, a magnetic field is generated at a 45-degree angle to the wires. The core magnets sit on the wires at a 45-degree angle, so the single core wrapped around the crossing point of the powered X and Y wires will be affected by the induced field.
The materials used for the core magnets were specially chosen to have a very "square" magnetic hysteresis pattern. This meant that fields just below a certain threshold would do nothing, but those just above the threshold would cause the core to abruptly flip its magnetization state. The square hysteresis pattern and sharp flipping behavior ensure that a single core can be addressed within a grid; nearby cores see a slightly weaker field and are not affected.
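A minimal sketch of this coincident-current selection follows (threshold values and names are illustrative): only the core at the crossing of the two energised wires sees a field above the critical value and flips.

```python
# Minimal sketch of coincident-current selection in a core plane: each wire
# contributes roughly half the critical field, so only the core at the
# crossing of the two energised wires is driven past threshold and flips.
CRITICAL_FIELD = 1.0                     # field needed to flip a core (arbitrary units)
HALF_SELECT = 0.6 * CRITICAL_FIELD       # each energised wire contributes just over half

def write_one(plane, x_sel, y_sel):
    """Drive one X and one Y wire; flip only cores whose total field exceeds threshold."""
    for x in range(len(plane)):
        for y in range(len(plane[0])):
            field = (HALF_SELECT if x == x_sel else 0.0) + \
                    (HALF_SELECT if y == y_sel else 0.0)
            if field > CRITICAL_FIELD:   # only the fully selected core flips
                plane[x][y] = 1

plane = [[0] * 4 for _ in range(4)]
write_one(plane, 2, 1)
for row in plane:
    print(row)
```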
Data retrieval
The basic operation in a core memory is writing. This is accomplished by powering one selected X wire and one selected Y wire, each at a current level that will, by itself, create ½ the critical magnetic field. This will cause the fie
https://en.wikipedia.org/wiki/Monodromy | In mathematics, monodromy is the study of how objects from mathematical analysis, algebraic topology, algebraic geometry and differential geometry behave as they "run round" a singularity. As the name implies, the fundamental meaning of monodromy comes from "running round singly". It is closely associated with covering maps and their degeneration into ramification; the aspect giving rise to monodromy phenomena is that certain functions we may wish to define fail to be single-valued as we "run round" a path encircling a singularity. The failure of monodromy can be measured by defining a monodromy group: a group of transformations acting on the data that encodes what happens as we "run round" in one dimension. Lack of monodromy is sometimes called polydromy.
Definition
Let be a connected and locally connected based topological space with base point , and let be a covering with fiber . For a loop based at , denote a lift under the covering map, starting at a point , by . Finally, we denote by the endpoint , which is generally different from . There are theorems which state that this construction gives a well-defined group action of the fundamental group on , and that the stabilizer of is exactly , that is, an element fixes a point in if and only if it is represented by the image of a loop in based at . This action is called the monodromy action and the corresponding homomorphism into the automorphism group on is the algebraic monodromy. The image of this homomorphism is the monodromy group. There is another map whose image is called the topological monodromy group.
Example
These ideas were first made explicit in complex analysis. In the process of analytic continuation, a function that is an analytic function in some open subset of the punctured complex plane may be continued back into , but with different values. For example, take
then analytic continuation anti-clockwise round the circle
will result in the return, not to but
In this cas |
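The article's concrete function is elided in this excerpt; as an illustrative stand-in for the same phenomenon (my choice of function, not necessarily the article's), the following sketch tracks a continuous branch of the square root along a full anticlockwise loop around the origin and finds that it returns with the opposite sign:

```python
# Minimal sketch (illustrative choice of function): continue a branch of
# sqrt(z) continuously along one anticlockwise loop around 0 and observe
# that the branch returns to -1 rather than +1.
import cmath

def continue_sqrt_around_origin(steps=1000):
    z = 1.0 + 0.0j
    w = cmath.sqrt(z)                      # starting branch value: +1
    for k in range(1, steps + 1):
        z_next = cmath.exp(2j * cmath.pi * k / steps)    # walk once around |z| = 1
        candidates = (cmath.sqrt(z_next), -cmath.sqrt(z_next))
        # pick the root closest to the previous value to keep the branch continuous
        w = min(candidates, key=lambda c: abs(c - w))
    return w

print(continue_sqrt_around_origin())       # approximately -1, not +1
```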
https://en.wikipedia.org/wiki/Workers%27%20compensation | Workers' compensation or workers' comp is a form of insurance providing wage replacement and medical benefits to employees injured in the course of employment in exchange for mandatory relinquishment of the employee's right to sue his or her employer for the tort of negligence. The trade-off between assured, limited coverage and lack of recourse outside the workers' compensation system is known as "the compensation bargain." One of the problems that the compensation bargain solved is the problem of employers becoming insolvent as a result of high damage awards. The system of collective liability was created to prevent that and thus to ensure security of compensation to the workers.
While plans differ among jurisdictions, provision can be made for weekly payments in place of wages (functioning in this case as a form of disability insurance), compensation for economic loss (past and future), reimbursement or payment of medical and like expenses (functioning in this case as a form of health insurance), and benefits payable to the dependents of workers killed during employment.
General damages for pain and suffering and punitive damages for employer negligence are generally not available in workers' compensation plans, and negligence is generally not an issue in the case.
Origin and international comparison
Laws regarding workers compensation vary, but the Workers' Accident Insurance system put into place by Prussian Chancellor Otto von Bismarck in 1884 with the start of Workers' Accident Laws is often cited as a model for the rest of Europe and, later, the United States. After the early Prussian experiments, the development of compensation laws around the world was in important respects the result of transnational networks among policymakers and social scientists. Thus while different countries have their own unique history of workers' compensation, compensation laws developed around the world as a global phenomenon, with each country's deliberation on compensation la |
https://en.wikipedia.org/wiki/Application%20framework | In computer programming, an application framework consists of a software framework used by software developers to implement the standard structure of application software.
Application frameworks became popular with the rise of graphical user interfaces (GUIs), since these tended to promote a standard structure for applications. Programmers find it much simpler to create automatic GUI creation tools when using a standard framework, since this defines the underlying code structure of the application in advance. Developers usually use object-oriented programming (OOP) techniques to implement frameworks such that the unique parts of an application can simply inherit from classes extant in the framework.
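A minimal sketch of this inversion of control follows (all class and method names are illustrative): the framework supplies the overall structure and event loop, and the application inherits from a framework class and overrides only its unique parts.

```python
# Minimal sketch of the inversion of control typical of an application
# framework: the framework owns the program's structure and event loop;
# the application only fills in the unique parts by inheritance.
class Application:                     # the "framework" side
    def run(self):
        self.on_start()
        for event in self.read_events():
            self.on_event(event)
        self.on_quit()

    def read_events(self):             # standard structure supplied by the framework
        return ["clicked", "closed"]

    # hooks the application developer overrides
    def on_start(self): pass
    def on_event(self, event): pass
    def on_quit(self): pass

class MyApp(Application):              # the "application" side: only the unique parts
    def on_start(self): print("window opened")
    def on_event(self, event): print("handling", event)
    def on_quit(self): print("window closed")

MyApp().run()
```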
Examples
Apple Computer developed one of the first commercial application frameworks, MacApp (first release 1985), for the Macintosh. Originally written in an extended (object-oriented) version of Pascal termed Object Pascal, it was later rewritten in C++. Another notable framework for the Mac is Metrowerks' PowerPlant, based on Carbon. Cocoa for macOS offers a different approach to an application framework, based on the OpenStep framework developed at NeXT.
Free and open-source software frameworks exist as part of the Mozilla, LibreOffice, GNOME, KDE, NetBeans, and Eclipse projects.
Microsoft markets a framework for developing Windows applications in C++ called the Microsoft Foundation Class Library, and a similar framework for developing applications with Visual Basic or C#, named .NET Framework.
Several frameworks can build cross-platform applications for Linux, Macintosh, and Windows from common source code, such as Qt, wxWidgets, Juce, Fox toolkit, or Eclipse Rich Client Platform (RCP).
Oracle Application Development Framework (Oracle ADF) aids in producing Java-oriented systems.
Silicon Laboratories offers an embedded application framework for developing wireless applications on its series of wireless chips.
MARTHA is a proprietary software Java framework |
https://en.wikipedia.org/wiki/Glossary%20of%20ring%20theory | Ring theory is the branch of mathematics in which rings are studied: that is, structures supporting both an addition and a multiplication operation. This is a glossary of some terms of the subject.
For the items in commutative algebra (the theory of commutative rings), see Glossary of commutative algebra. For ring-theoretic concepts in the language of modules, see also Glossary of module theory.
For specific types of algebras, see also: Glossary of field theory and Glossary of Lie groups and Lie algebras. Since, currently, there is no glossary on not-necessarily-associative algebra structures in general, this glossary includes some concepts that do not need associativity; e.g., a derivation.
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
Z
See also
Glossary of module theory
Citations
References
Ring theory
Wikipedia glossaries using description lists |
https://en.wikipedia.org/wiki/Pie%20rule | The pie rule, sometimes referred to as the swap rule, is a rule used to balance abstract strategy games where a first-move advantage has been demonstrated. After the first move is made in a game that uses the pie rule, the second player must select one of two options:
Letting the move stand. The second player remains the second player and moves immediately.
Switching places. The second player becomes the first-moving player with the move already done by the opponent, and the opponent plays the first move of their new color.
Depending on the game, there may be two ways to implement switching places.
Switching colors means that the players exchange pieces. The player who made the first move becomes the second player and makes the second move on the board. This is demonstrated in the chess diagrams shown here.
Switching the first piece can occur in games where the board starts empty and the first move consists of placing one piece. Suppose the colors are white versus black, and black places the first piece. This piece is replaced by a white piece in the corresponding location for white, and the black piece is returned to black's supply. In a game such as Hex or TwixT, the corresponding location is at a cell "reflected" across the nearest (or either) diagonal. In games such as Y, where the board is not directional, the white stone replaces the black stone in the same cell. Players keep their respective color pieces, and play continues with black making the next move. This is effectively the same as switching colors.
The use of pie rule was first reported in 1909 for a game in the Mancala family. Among modern games, Hex uses this rule. TwixT in tournament play uses a swap rule. The rule can be applied to other games which are otherwise solved for one player, such as Gomoku or Tablut.
The rule gets its name from the divide and choose method of ensuring fairness when dividing a pie between two people: one person cuts the pie in half, then the other person chooses
https://en.wikipedia.org/wiki/Quadratic%20form | In mathematics, a quadratic form is a polynomial with terms all of degree two ("form" is another name for a homogeneous polynomial). For example,
is a quadratic form in the variables and . The coefficients usually belong to a fixed field , such as the real or complex numbers, and one speaks of a quadratic form over . If , and the quadratic form equals zero only when all variables are simultaneously zero, then it is a definite quadratic form; otherwise it is an isotropic quadratic form.
Quadratic forms occupy a central place in various branches of mathematics, including number theory, linear algebra, group theory (orthogonal groups), differential geometry (the Riemannian metric, the second fundamental form), differential topology (intersection forms of four-manifolds), Lie theory (the Killing form), and statistics (where the exponent of a zero-mean multivariate normal distribution has the quadratic form )
Quadratic forms are not to be confused with a quadratic equation, which has only one variable and includes terms of degree two or less. A quadratic form is one case of the more general concept of homogeneous polynomials.
Introduction
Quadratic forms are homogeneous quadratic polynomials in n variables. In the cases of one, two, and three variables they are called unary, binary, and ternary and have the following explicit form:
where a, ..., f are the coefficients.
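A minimal sketch of evaluating such a form follows, representing the coefficients by a symmetric matrix; the particular matrix and names are illustrative, not taken from the article.

```python
# Minimal sketch: evaluating a quadratic form q(x) = x^T A x from a symmetric
# coefficient matrix, using plain lists.
def quadratic_form(A, x):
    """Return sum over i, j of A[i][j] * x[i] * x[j]."""
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Ternary example: q(x, y, z) = x^2 + 4xy + 2y^2 - 6yz + 3z^2
A = [[1.0, 2.0, 0.0],
     [2.0, 2.0, -3.0],
     [0.0, -3.0, 3.0]]
print(quadratic_form(A, [1.0, 1.0, 1.0]))   # 1 + 4 + 2 - 6 + 3 = 4
```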
The theory of quadratic forms and methods used in their study depend in a large measure on the nature of the coefficients, which may be real or complex numbers, rational numbers, or integers. In linear algebra, analytic geometry, and in the majority of applications of quadratic forms, the coefficients are real or complex numbers. In the algebraic theory of quadratic forms, the coefficients are elements of a certain field. In the arithmetic theory of quadratic forms, the coefficients belong to a fixed commutative ring, frequently the integers Z or the p-adic integers Zp. Binary quadratic forms hav |
https://en.wikipedia.org/wiki/Quartic%20equation | In mathematics, a quartic equation is one which can be expressed as a quartic function equaling zero. The general form of a quartic equation is
where a ≠ 0.
The quartic is the highest order polynomial equation that can be solved by radicals in the general case (i.e., one in which the coefficients can take any value).
History
Lodovico Ferrari is credited with the discovery of the solution to the quartic in 1540, but since this solution, like all algebraic solutions of the quartic, requires the solution of a cubic to be found, it could not be published immediately. The solution of the quartic was published together with that of the cubic by Ferrari's mentor Gerolamo Cardano in the book Ars Magna (1545).
The proof that this was the highest order general polynomial for which such solutions could be found was first given in the Abel–Ruffini theorem in 1824, proving that all attempts at solving the higher order polynomials would be futile. The notes left by Évariste Galois before his death in a duel in 1832 later led to an elegant complete theory of the roots of polynomials, of which this theorem was one result.
Solving a quartic equation, special cases
Consider a quartic equation expressed in the form :
There exists a general formula for finding the roots to quartic equations, provided the coefficient of the leading term is non-zero. However, since the general method is quite complex and susceptible to errors in execution, it is better to apply one of the special cases listed below if possible.
Degenerate case
If the constant term a4 = 0, then one of the roots is x = 0, and the other roots can be found by dividing by x, and solving the resulting cubic equation,
Evident roots: 1 and −1 and −
Call our quartic polynomial . Since 1 raised to any power is 1,
Thus if and so = 1 is a root of . It can similarly be shown that if = −1 is a root.
In either case the full quartic can then be divided by the factor or respectively yielding a new cubic polynom |
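A minimal sketch of this "evident roots" check and the subsequent deflation to a cubic follows; the function name and tolerance are illustrative.

```python
# Minimal sketch: x = 1 is a root when the coefficients sum to zero
# (and x = -1 when the alternating sum does); synthetic division by
# (x - 1) or (x + 1) then leaves a cubic.
def evident_root_and_cubic(coeffs):
    """coeffs = [a0, a1, a2, a3, a4] for a0*x^4 + a1*x^3 + a2*x^2 + a3*x + a4."""
    if abs(sum(coeffs)) < 1e-12:
        root = 1.0
    elif abs(sum(c * (-1) ** i for i, c in enumerate(coeffs))) < 1e-12:
        root = -1.0
    else:
        return None
    # synthetic division of the quartic by (x - root)
    cubic = [coeffs[0]]
    for c in coeffs[1:-1]:
        cubic.append(c + root * cubic[-1])
    return root, cubic

# x^4 - 10x^3 + 35x^2 - 50x + 24 = (x-1)(x-2)(x-3)(x-4): coefficients sum to 0
print(evident_root_and_cubic([1, -10, 35, -50, 24]))
```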
https://en.wikipedia.org/wiki/Bargaining | In the social sciences, bargaining or haggling is a type of negotiation in which the buyer and seller of a good or service debate the price or nature of a transaction. If the bargaining produces agreement on terms, the transaction takes place. It is commonplace in poorer countries, or poorer localities within any specific country. Haggling can mostly be seen within street markets worldwide, wherein there remains no guarantee of the origin and authenticity of available products. Many people regard it as a skill, but there remains no guarantee that the price put forth by the buyer would be acknowledged by the seller, resulting in losses of profit and even turnover in some cases. Growth in a country's GDP per capita is bound to reduce both the ill-effects of bargaining and the unscrupulous practices undertaken by vendors at street markets.
Although the most apparent aspect of bargaining in markets is as an alternative pricing strategy to fixed prices, it can also include making arrangements for credit or bulk purchasing, as well as serving as an important method of clienteling.
Bargaining has largely disappeared in parts of the world where retail stores with fixed prices are the most common place to purchase goods. However, for expensive goods such as homes, antiques and collectibles, jewellery and automobiles, bargaining can remain commonplace.
Dickering and "haggling" refer to the same process.
Where it takes place
Haggling is associated commonly with bazaars and other markets where centralized regulation is difficult or impossible. Both religious beliefs and regional custom may determine whether or not the sellers or buyers are willing to bargain.
Regional differences
In North America and Europe, bargaining is restricted to expensive or one-of-a-kind items (automobiles, antiques, jewelry, art, real estate, trade sales of businesses) and informal sales settings such as flea markets and garage sales. In other regions of the world, barga |
https://en.wikipedia.org/wiki/Qibla | The qibla () is the direction towards the Kaaba in the Sacred Mosque in Mecca, which is used by Muslims in various religious contexts, particularly the direction of prayer for the salah. In Islam, the Kaaba is believed to be a sacred site built by prophets Ibrahim and Ismail, and that its use as the qibla was ordained by Allah in several verses of the Quran revealed to Muhammad in the second Hijri year. Prior to this revelation, Muhammad and his followers in Medina faced Jerusalem for prayers. Most mosques contain a mihrab (a wall niche) that indicates the direction of the qibla.
The qibla is also the direction for entering the ihram (sacred state for the hajj pilgrimage); the direction to which animals are turned during dhabihah (Islamic slaughter); the recommended direction to make dua (supplications); the direction to avoid when relieving oneself or spitting; and the direction to which the deceased are aligned when buried. The qibla may be observed facing the Kaaba accurately (ayn al-ka'bah) or facing in the general direction (jihat al-ka'bah). Most Islamic scholars consider that jihat al-ka'bah is acceptable if the more precise ayn al-ka'bah cannot be ascertained.
The most common technical definition used by Muslim astronomers for a location is the direction on the great circle—in the Earth's Sphere—passing through the location and the Kaaba. This is the direction of the shortest possible path from a place to the Kaaba, and allows the exact calculation (hisab) of the qibla using a spherical trigonometric formula that takes the coordinates of a location and of the Kaaba as inputs (see formula below). The method is applied to develop mobile applications and websites for Muslims, and to compile qibla tables used in instruments such as the qibla compass. The qibla can also be determined at a location by observing the shadow of a vertical rod on the twice-yearly occasions when the sun is directly overhead in Mecca—on 27 and 28 May at 12:18 Saudi Arabia Standard Tim |
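The specific formula is not shown in this excerpt; the following minimal sketch uses a standard form of the great-circle initial-bearing calculation. The Kaaba coordinates below are approximate and the function name is illustrative.

```python
# Minimal sketch of the great-circle (spherical trigonometry) qibla
# calculation: the initial bearing from a location towards the Kaaba.
# Kaaba coordinates are approximate and for illustration only.
from math import radians, degrees, sin, cos, tan, atan2

KAABA_LAT, KAABA_LON = 21.4225, 39.8262    # approximate, degrees

def qibla_bearing(lat, lon):
    """Initial great-circle bearing (degrees clockwise from true north)."""
    phi, lam = radians(lat), radians(lon)
    phi_k, lam_k = radians(KAABA_LAT), radians(KAABA_LON)
    dlam = lam_k - lam
    bearing = atan2(sin(dlam),
                    cos(phi) * tan(phi_k) - sin(phi) * cos(dlam))
    return degrees(bearing) % 360.0

print(qibla_bearing(51.5, -0.13))   # from London: roughly east-south-east
```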
https://en.wikipedia.org/wiki/Flexible%20electronics | Flexible electronics, also known as flex circuits, is a technology for assembling electronic circuits by mounting electronic devices on flexible plastic substrates, such as polyimide, PEEK or transparent conductive polyester film. Additionally, flex circuits can be screen printed silver circuits on polyester. Flexible electronic assemblies may be manufactured using identical components used for rigid printed circuit boards, allowing the board to conform to a desired shape, or to flex during its use.
Manufacturing
Flexible printed circuits (FPC) are made with a photolithographic technology. An alternative way of making flexible foil circuits or flexible flat cables (FFCs) is laminating very thin (0.07 mm) copper strips in between two layers of PET. These PET layers, typically 0.05 mm thick, are coated with an adhesive which is thermosetting, and will be activated during the lamination process. FPCs and FFCs have several advantages in many applications:
Tightly assembled electronic packages, where electrical connections are required in 3 axes, such as cameras (static application).
Electrical connections where the assembly is required to flex during its normal use, such as folding cell phones (dynamic application).
Electrical connections between sub-assemblies to replace wire harnesses, which are heavier and bulkier, such as in cars, rockets and satellites.
Electrical connections where board thickness or space constraints are driving factors.
Advantage of FPCs
Potential to replace multiple rigid boards or connectors
Single-sided circuits are ideal for dynamic or high-flex applications
Stacked FPCs in various configurations
Disadvantages of FPCs
Cost increase over rigid PCBs
Increased risk of damage during handling or use
More difficult assembly process
Repair and rework is difficult or impossible
Generally worse panel utilization resulting in increased cost
Applications
Flex circuits are often used as connectors in various applications where flexibility |
https://en.wikipedia.org/wiki/Alanine%20transaminase | Alanine transaminase (ALT) is a transaminase enzyme (). It is also called alanine aminotransferase (ALT or ALAT) and was formerly called serum glutamate-pyruvate transaminase or serum glutamic-pyruvic transaminase (SGPT) and was first characterized in the mid-1950s by Arthur Karmen and colleagues. ALT is found in plasma and in various body tissues but is most common in the liver. It catalyzes the two parts of the alanine cycle. Serum ALT level, serum AST (aspartate transaminase) level, and their ratio (AST/ALT ratio) are commonly measured clinically as biomarkers for liver health. The tests are part of blood panels.
The half-life of ALT in the circulation approximates 47 hours. Aminotransferase is cleared by sinusoidal cells in the liver.
Function
ALT catalyzes the transfer of an amino group from L-alanine to α-ketoglutarate, the products of this reversible transamination reaction being pyruvate and L-glutamate.
L-alanine + α-ketoglutarate ⇌ pyruvate + L-glutamate
ALT, like all aminotransferases, requires the coenzyme pyridoxal phosphate, which is converted into pyridoxamine in the first phase of the reaction, when an amino acid is converted into a keto acid.
Clinical significance
ALT is commonly measured clinically as part of liver function tests and is a component of the AST/ALT ratio. When used in diagnostics, it is almost always measured in international units/liter (IU/L) or µkat. While sources vary on specific reference range values for patients, 0-40 IU/L is the standard reference range for experimental studies.
Elevated levels
Test results should always be interpreted using the reference range from the laboratory that produced the result. However typical reference intervals for ALT are:
Significantly elevated levels of ALT (SGPT) often suggest the existence of other medical problems such as viral hepatitis, diabetes, congestive heart failure, liver damage, bile duct problems, infectious mononucleosis, or myopathy, so ALT is commonly used as a way of |
https://en.wikipedia.org/wiki/Language%20technology | Language technology, often called human language technology (HLT), studies methods of how computer programs or electronic devices can analyze, produce, modify or respond to human texts and speech. Working with language technology often requires broad knowledge not only about linguistics but also about computer science. It consists of natural language processing (NLP) and computational linguistics (CL) on the one hand, many application oriented aspects of these, and more low-level aspects such as encoding and speech technology on the other hand.
Note that these elementary aspects are normally not considered to be within the scope of related terms such as natural language processing and (applied) computational linguistics, which are otherwise near-synonyms. As an example, for many of the world's lesser known languages, the foundation of language technology is providing communities with fonts and keyboard setups so their languages can be written on computers or mobile devices.
References
External links
Johns Hopkins University Human Language Technology Center of Excellence
Carnegie Mellon University Language Technologies Institute
Institute for Applied Linguistics (IULA) at Universitat Pompeu Fabra. Barcelona, Spain
German Research Centre for Artificial Intelligence (DFKI) Language Technology Lab
CLT: Centre for Language Technology in Gothenburg, Sweden
The Center for Speech and Language Technologies (CSaLT) at the Lahore University [sic] of Management Sciences (LUMS)
Globalization and Localization Association (GALA)
ScriptSource, a reference to the writing systems of the world and the remaining needs for supporting them in the computing realm.
Speech processing
Natural language processing |
https://en.wikipedia.org/wiki/Radical%20of%20an%20ideal | In ring theory, a branch of mathematics, the radical of an ideal of a commutative ring is another ideal defined by the property that an element is in the radical if and only if some power of is in . Taking the radical of an ideal is called radicalization. A radical ideal (or semiprime ideal) is an ideal that is equal to its radical. The radical of a primary ideal is a prime ideal.
This concept is generalized to non-commutative rings in the Semiprime ring article.
Definition
The radical of an ideal in a commutative ring , denoted by or , is defined as
(note that ).
Intuitively, is obtained by taking all roots of elements of within the ring . Equivalently, is the preimage of the ideal of nilpotent elements (the nilradical) of the quotient ring (via the natural map ). The latter proves that is an ideal.
If the radical of is finitely generated, then some power of is contained in . In particular, if and are ideals of a Noetherian ring, then and have the same radical if and only if contains some power of and contains some power of .
If an ideal I coincides with its own radical, then I is called a radical ideal or semiprime ideal.
Examples
Consider the ring of integers.
The radical of the ideal of integer multiples of is .
The radical of is .
The radical of is .
In general, the radical of mZ is rZ, where r is the product of all distinct prime factors of m, the largest square-free factor of m (see Radical of an integer). In fact, this generalizes to an arbitrary ideal (see the Properties section).
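To make this concrete, a short Python sketch (our own illustration; the function name radical and the sample values of m are not from the article) computes the largest square-free factor of m, i.e. the generator of the radical of the ideal of multiples of m:

def radical(m: int) -> int:
    # product of the distinct prime factors of m (its largest square-free divisor)
    m = abs(m)
    result = 1
    p = 2
    while p * p <= m:
        if m % p == 0:
            result *= p          # record this prime once
            while m % p == 0:    # strip all further copies of p
                m //= p
        p += 1
    if m > 1:                    # any leftover factor is prime
        result *= m
    return result

print(radical(4), radical(5), radical(12))   # 2 5 6, so e.g. the radical of 12Z is 6Z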
Consider the ideal . It is trivial to show (using the basic property ), but we give some alternative methods: The radical corresponds to the nilradical of the quotient ring , which is the intersection of all prime ideals of the quotient ring. This is contained in the Jacobson radical, which is the intersection of all maximal ideals, which are the kernels of homomorphisms to fields. Any ring homomorphism must have in the kernel in order to have a w |
https://en.wikipedia.org/wiki/De%20Branges%27s%20theorem | In complex analysis, de Branges's theorem, or the Bieberbach conjecture, is a theorem that gives a necessary condition on a holomorphic function in order for it to map the open unit disk of the complex plane injectively to the complex plane. It was posed by Ludwig Bieberbach and finally proven by Louis de Branges.
The statement concerns the Taylor coefficients a_n of a univalent function, i.e. a one-to-one holomorphic function that maps the unit disk into the complex plane, normalized as is always possible so that f(0) = 0 and f′(0) = 1. That is, we consider a function defined on the open unit disk which is holomorphic and injective (univalent) with Taylor series of the form
f(z) = z + a_2 z^2 + a_3 z^3 + ...
Such functions are called schlicht. The theorem then states that |a_n| ≤ n for all n ≥ 2.
The Koebe function (see below) is a function in which a_n = n for all n, and it is schlicht, so we cannot find a stricter limit on the absolute value of the nth coefficient.
Schlicht functions
The normalizations
mean that
This can always be obtained by an affine transformation: starting with an arbitrary injective holomorphic function defined on the open unit disk and setting
Such functions are of interest because they appear in the Riemann mapping theorem.
A schlicht function is defined as an analytic function that is one-to-one and satisfies and . A family of schlicht functions are the rotated Koebe functions
with α a complex number of absolute value 1. If f is a schlicht function and |a_n| = n for some n ≥ 2,
then f is a rotated Koebe function.
The condition of de Branges' theorem is not sufficient to show the function is schlicht, as the function
shows: it is holomorphic on the unit disc and satisfies for all , but it is not injective since
.
History
A survey of the history is given by Koepf (2007).
Bieberbach proved |a_2| ≤ 2, and stated the conjecture that |a_n| ≤ n. Löwner and Nevanlinna independently proved the conjecture for starlike functions.
Then Charles Loewner (1923) proved |a_3| ≤ 3, using the Löwner equation. His work was used by most later attempts, and is also applied in the theory of Schramm–Loewner evolution.
proved |
https://en.wikipedia.org/wiki/Food%20science | Food science is the basic science and applied science of food; its scope starts at overlap with agricultural science and nutritional science and leads through the scientific aspects of food safety and food processing, informing the development of food technology.
Food science brings together multiple scientific disciplines. It incorporates concepts from fields such as chemistry, physics, physiology, microbiology, and biochemistry. Food technology incorporates concepts from chemical engineering, for example.
Activities of food scientists include the development of new food products, design of processes to produce these foods, choice of packaging materials, shelf-life studies, sensory evaluation of products using survey panels or potential consumers, as well as microbiological and chemical testing. Food scientists may study more fundamental phenomena that are directly linked to the production of food products and its properties.
Definition
The Institute of Food Technologists defines food science as "the discipline in which the engineering, biological, and physical sciences are used to study the nature of foods, the causes of deterioration, the principles underlying food processing, and the improvement of foods for the consuming public". The textbook Food Science defines food science in simpler terms as "the application of basic sciences and engineering to study the physical, chemical, and biochemical nature of foods and the principles of food processing".
Disciplines
Some of the subdisciplines of food science are described below.
Food chemistry
Food chemistry is the study of chemical processes and interactions of all biological and non-biological components of foods. The biological substances include such items as meat, poultry, lettuce, beer, and milk.
It is similar to biochemistry in its main components such as carbohydrates, lipids, and protein, but it also includes areas such as water, vitamins, minerals, enzymes, food additives, flavors, and colors. This |
https://en.wikipedia.org/wiki/Localization%20%28commutative%20algebra%29 | In commutative algebra and algebraic geometry, localization is a formal way to introduce the "denominators" to a given ring or module. That is, it introduces a new ring/module out of an existing ring/module R, so that it consists of fractions such that the denominator s belongs to a given subset S of R. If S is the set of the non-zero elements of an integral domain, then the localization is the field of fractions: this case generalizes the construction of the field of rational numbers from the ring of integers.
The technique has become fundamental, particularly in algebraic geometry, as it provides a natural link to sheaf theory. In fact, the term localization originated in algebraic geometry: if R is a ring of functions defined on some geometric object (algebraic variety) V, and one wants to study this variety "locally" near a point p, then one considers the set S of all functions that are not zero at p and localizes R with respect to S. The resulting ring contains information about the behavior of V near p, and excludes information that is not "local", such as the zeros of functions that are outside V (c.f. the example given at local ring).
Localization of a ring
The localization of a commutative ring R by a multiplicatively closed set S is a new ring whose elements are fractions with numerators in R and denominators in S.
If the ring is an integral domain the construction generalizes and follows closely that of the field of fractions, and, in particular, that of the rational numbers as the field of fractions of the integers. For rings that have zero divisors, the construction is similar but requires more care.
Multiplicative set
Localization is commonly done with respect to a multiplicatively closed set (also called a multiplicative set or a multiplicative system) of elements of a ring R, that is, a subset S of R that is closed under multiplication and contains 1.
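As a small illustrative sketch (our own; the helper name in_localization and the choice S = {1, 2, 4, 8, ...} are not from the article), the elements of the localization of the integers at the powers of 2 are exactly the fractions whose reduced denominator is a power of 2, and sums and products stay inside it:

from fractions import Fraction

def in_localization(q: Fraction) -> bool:
    # q lies in S^{-1}Z for S = powers of 2 iff its reduced denominator is a power of 2
    d = q.denominator
    while d % 2 == 0:
        d //= 2
    return d == 1

x, y = Fraction(3, 4), Fraction(5, 8)                   # 3/2^2 and 5/2^3
print(in_localization(x + y), in_localization(x * y))   # True True
print(in_localization(Fraction(1, 3)))                  # False: 3 is not a power of 2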
The requirement that must be a multiplicative set is natural, since it implies that all denomin |
https://en.wikipedia.org/wiki/Nilpotent | In mathematics, an element x of a ring is called nilpotent if there exists some positive integer n, called the index (or sometimes the degree), such that x^n = 0.
The term, along with its sister idempotent, was introduced by Benjamin Peirce in the context of his work on the classification of algebras.
Examples
This definition can be applied in particular to square matrices. The matrix
is nilpotent because . See nilpotent matrix for more.
In the factor ring Z/9Z, the equivalence class of 3 is nilpotent because 3^2 is congruent to 0 modulo 9.
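As a small sketch (our own, not from the article), nilpotency of a residue class in Z/nZ can be checked by raising a representative to successive powers modulo n:

def is_nilpotent_mod(x: int, n: int) -> bool:
    # True if x**k is congruent to 0 modulo n for some k >= 1
    acc = x % n
    for _ in range(n):        # an index of nilpotency, if one exists, is at most n here
        if acc == 0:
            return True
        acc = (acc * x) % n
    return False

print(is_nilpotent_mod(3, 9))   # True:  3**2 = 9 is congruent to 0 modulo 9
print(is_nilpotent_mod(2, 9))   # False: 2 is a unit modulo 9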
Assume that two elements a and b in a ring R satisfy ab = 0. Then the element c = ba is nilpotent, as c^2 = (ba)^2 = b(ab)a = 0. An example can be given with matrices a, b for which ab = 0 while ba ≠ 0.
By definition, any element of a nilsemigroup is nilpotent.
Properties
No nilpotent element can be a unit (except in the trivial ring, which has only a single element 0 = 1). All nilpotent elements are zero divisors.
An n × n matrix A with entries from a field is nilpotent if and only if its characteristic polynomial is t^n.
If x is nilpotent, then 1 − x is a unit, because x^n = 0 entails
(1 − x)(1 + x + x^2 + ... + x^(n−1)) = 1 − x^n = 1.
More generally, the sum of a unit element and a nilpotent element is a unit when they commute.
Commutative rings
The nilpotent elements from a commutative ring R form an ideal N; this is a consequence of the binomial theorem. This ideal is the nilradical of the ring. Every nilpotent element x in a commutative ring is contained in every prime ideal P of that ring, since x^n = 0 ∈ P implies x ∈ P by primality. So N is contained in the intersection of all prime ideals.
If x is not nilpotent, we are able to localize with respect to the powers of x: S = {1, x, x^2, ...} to get a non-zero ring S^{-1}R. The prime ideals of the localized ring correspond exactly to those prime ideals P of R with P ∩ S = ∅. As every non-zero commutative ring has a maximal ideal, which is prime, every non-nilpotent x is not contained in some prime ideal. Thus N is exactly the intersection of all prime ideals.
A characterization similar to that of the Jacobson radical and annihilation of simple modules is available for the nilradical: nilpotent elements |
https://en.wikipedia.org/wiki/Orbit%20%28dynamics%29 | In mathematics, specifically in the study of dynamical systems, an orbit is a collection of points related by the evolution function of the dynamical system. It can be understood as the subset of phase space covered by the trajectory of the dynamical system under a particular set of initial conditions, as the system evolves. As a phase space trajectory is uniquely determined for any given set of phase space coordinates, it is not possible for different orbits to intersect in phase space, therefore the set of all orbits of a dynamical system is a partition of the phase space. Understanding the properties of orbits by using topological methods is one of the objectives of the modern theory of dynamical systems.
For discrete-time dynamical systems, the orbits are sequences; for real dynamical systems, the orbits are curves; and for holomorphic dynamical systems, the orbits are Riemann surfaces.
Definition
Given a dynamical system (T, M, Φ) with T a group, M a set and Φ the evolution function
where with
we define
then the set
is called orbit through x. An orbit which consists of a single point is called constant orbit. A non-constant orbit is called closed or periodic if there exists a in such that
.
Real dynamical system
Given a real dynamical system (R, M, Φ), I(x) is an open interval in the real numbers, that is . For any x in M
is called positive semi-orbit through x and
is called negative semi-orbit through x.
Discrete time dynamical system
For discrete time dynamical system :
forward orbit of x is a set :
backward orbit of x is a set :
and orbit of x is a set :
where :
is an evolution function which is here an iterated function,
the set is the dynamical space,
is the number of the iteration, which is a natural number, and
is the initial state of the system.
Usually a different notation is used:
is written as
where is in the above notation.
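For a discrete-time system the forward orbit is simply the sequence of iterates of the evolution function, which a few lines of Python make explicit (our own sketch; the logistic map is merely a convenient example):

def forward_orbit(f, x0, n):
    # returns the first n points x0, f(x0), f(f(x0)), ... of the forward orbit
    orbit = [x0]
    for _ in range(n - 1):
        orbit.append(f(orbit[-1]))
    return orbit

logistic = lambda x: 4 * x * (1 - x)     # the logistic map on [0, 1]
print(forward_orbit(logistic, 0.2, 5))   # a generic orbit
print(forward_orbit(logistic, 0.75, 5))  # 0.75 is a fixed point, so this orbit is constant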
General dynamical system
For a general dynamical system, especially in homogeneous dynamics, when one h |
https://en.wikipedia.org/wiki/PEEK%20and%20POKE | In computing, PEEK and POKE are commands used in some high-level programming languages for accessing the contents of a specific memory cell referenced by its memory address. PEEK gets the byte located at the specified memory address.
POKE sets the memory byte at the specified address. These commands originated with machine code monitors such as the DECsystem-10 monitor;
these commands are particularly associated with the BASIC programming language, though some other languages such as Pascal and COMAL also have these commands. These commands are comparable in their roles to pointers in the C language and some other programming languages.
One of the earliest references to these commands in BASIC, if not the earliest, is in Altair BASIC. The PEEK and POKE commands were conceived in early personal computing systems to serve a variety of purposes, especially for modifying special memory-mapped hardware registers to control particular functions of the computer such as the input/output peripherals. Alternatively programmers might use these commands to copy software or even to circumvent the intent of a particular piece of software (e.g. manipulate a game program to allow the user to cheat). Today it is unusual to control computer memory at such a low level using a high-level language like BASIC. As such the notions of PEEK and POKE commands are generally seen as antiquated.
The terms peek and poke are sometimes used colloquially in computer programming to refer to memory access in general.
Statement syntax
The PEEK function and POKE commands are usually invoked as follows, either in direct mode (entered and executed at the BASIC prompt) or in indirect mode (as part of a program):
integer_variable = PEEK(address)
POKE address, value
The address and value parameters may contain expressions, as long as the evaluated expressions correspond to valid memory addresses or values, respectively. A valid address in this context is an address within the computer's address space, |
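Because a high-level sketch cannot safely touch physical memory, the following Python lines (our own illustration, not part of any BASIC dialect) mimic PEEK and POKE over a bytearray standing in for a 64 KB address space:

memory = bytearray(65536)          # a simulated 64 KB address space

def peek(address: int) -> int:
    # return the byte stored at the given address, like BASIC's PEEK
    return memory[address]

def poke(address: int, value: int) -> None:
    # store a byte (0-255) at the given address, like BASIC's POKE
    memory[address] = value & 0xFF

poke(53280, 0)        # on a real Commodore 64, address 53280 ($D020) is the border-colour register
print(peek(53280))    # 0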
https://en.wikipedia.org/wiki/Sequent%20calculus | In mathematical logic, sequent calculus is a style of formal logical argumentation in which every line of a proof is a conditional tautology (called a sequent by Gerhard Gentzen) instead of an unconditional tautology. Each conditional tautology is inferred from other conditional tautologies on earlier lines in a formal argument according to rules and procedures of inference, giving a better approximation to the natural style of deduction used by mathematicians than to David Hilbert's earlier style of formal logic, in which every line was an unconditional tautology. More subtle distinctions may exist; for example, propositions may implicitly depend upon non-logical axioms. In that case, sequents signify conditional theorems in a first-order language rather than conditional tautologies.
Sequent calculus is one of several extant styles of proof calculus for expressing line-by-line logical arguments.
Hilbert style. Every line is an unconditional tautology (or theorem).
Gentzen style. Every line is a conditional tautology (or theorem) with zero or more conditions on the left.
Natural deduction. Every (conditional) line has exactly one asserted proposition on the right.
Sequent calculus. Every (conditional) line has zero or more asserted propositions on the right.
In other words, natural deduction and sequent calculus systems are particular distinct kinds of Gentzen-style systems. Hilbert-style systems typically have a very small number of inference rules, relying more on sets of axioms. Gentzen-style systems typically have very few axioms, if any, relying more on sets of rules.
Gentzen-style systems have significant practical and theoretical advantages compared to Hilbert-style systems. For example, both natural deduction and sequent calculus systems facilitate the elimination and introduction of universal and existential quantifiers so that unquantified logical expressions can be manipulated according to the much simpler rules of propositional calculus. In a typic |
https://en.wikipedia.org/wiki/Sequent | In mathematical logic, a sequent is a very general kind of conditional assertion.
A sequent may have any number m of condition formulas Ai (called "antecedents") and any number n of asserted formulas Bj (called "succedents" or "consequents"). A sequent is understood to mean that if all of the antecedent conditions are true, then at least one of the consequent formulas is true. This style of conditional assertion is almost always associated with the conceptual framework of sequent calculus.
Introduction
The form and semantics of sequents
Sequents are best understood in the context of the following three kinds of logical judgments:
Unconditional assertion. No antecedent formulas.
Example: ⊢ B
Meaning: B is true.
Conditional assertion. Any number of antecedent formulas.
Simple conditional assertion. Single consequent formula.
Example: A1, A2, A3 ⊢ B
Meaning: IF A1 AND A2 AND A3 are true, THEN B is true.
Sequent. Any number of consequent formulas.
Example: A1, A2, A3 ⊢ B1, B2, B3, B4
Meaning: IF A1 AND A2 AND A3 are true, THEN B1 OR B2 OR B3 OR B4 is true.
Thus sequents are a generalization of simple conditional assertions, which are a generalization of unconditional assertions.
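The conjunctive reading on the left and disjunctive reading on the right can be captured in a one-line check; the sketch below (our own, with made-up propositional labels) evaluates whether a sequent holds under a given truth assignment:

def sequent_holds(antecedents, succedents, valuation) -> bool:
    # if ALL antecedents are true, then AT LEAST ONE succedent must be true
    if all(valuation[a] for a in antecedents):
        return any(valuation[b] for b in succedents)
    return True   # vacuously satisfied: some antecedent is false

v = {"A1": True, "A2": True, "B1": False, "B2": True}
print(sequent_holds(["A1", "A2"], ["B1", "B2"], v))   # True, because B2 is true
print(sequent_holds([], ["B1"], v))                   # an unconditional assertion: False here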
The word "OR" here is the inclusive OR. The motivation for disjunctive semantics on the right side of a sequent comes from three main benefits.
The symmetry of the classical inference rules for sequents with such semantics.
The ease and simplicity of converting such classical rules to intuitionistic rules.
The ability to prove completeness for predicate calculus when it is expressed in this way.
All three of these benefits were identified in the founding paper by .
Not all authors have adhered to Gentzen's original meaning for the word "sequent". For example, used the word "sequent" strictly for simple conditional assertions with one and only one consequent formula. The same single-consequent definition for a sequent is given by .
Syntax details
In a gene |
https://en.wikipedia.org/wiki/Mutual%20Aid%3A%20A%20Factor%20of%20Evolution | Mutual Aid: A Factor of Evolution is a 1902 collection of anthropological essays by Russian naturalist and anarchist philosopher Peter Kropotkin. The essays, initially published in the English periodical The Nineteenth Century between 1890 and 1896, explore the role of mutually beneficial cooperation and reciprocity (or "mutual aid") in the animal kingdom and human societies both past and present. It is an argument against theories of social Darwinism that emphasize competition and survival of the fittest, and against the romantic depictions by writers such as Jean-Jacques Rousseau, who thought that cooperation was motivated by universal love. Instead, Kropotkin argues that mutual aid has pragmatic advantages for the survival of human and animal communities and, along with the conscience, has been promoted through natural selection.
Mutual Aid is considered a fundamental text in anarchist communism. It presents a scientific basis for communism as an alternative to the historical materialism of the Marxists. Kropotkin considers the importance of mutual aid for prosperity and survival in the animal kingdom, in indigenous and early European societies, in the medieval free cities (especially through the guilds), and in the late 19th century village, labor movement, and impoverished people. He criticizes the State for destroying historically important mutual aid institutions, particularly through the imposition of private property.
Many biologists (including Stephen Jay Gould, one of the most influential evolutionary biologists of his generation) also consider it an important catalyst in the scientific study of cooperation.
Reception
Daniel P. Todes, in his account of Russian naturalism in the 19th century, concludes that Kropotkin's work "cannot be dismissed as the idiosyncratic product of an anarchist dabbling in biology" and that his views "were but one expression of a broad current in Russian evolutionary thought that pre-dated, indeed encouraged, his work on the |
https://en.wikipedia.org/wiki/Winning%20Ways%20for%20Your%20Mathematical%20Plays | Winning Ways for Your Mathematical Plays (Academic Press, 1982) by Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy is a compendium of information on mathematical games. It was first published in 1982 in two volumes.
The first volume introduces combinatorial game theory and its foundation in the surreal numbers; partizan and impartial games; Sprague–Grundy theory and misère games. The second volume applies the theorems of the first volume to many games, including nim, sprouts, dots and boxes, Sylver coinage, philosopher's phutball, fox and geese. A final section on puzzles analyzes the Soma cube, Rubik's Cube, peg solitaire, and Conway's Game of Life.
A republication of the work by A K Peters split the content into four volumes.
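As a taste of the Sprague–Grundy material in the first volume, the following sketch (our own illustration, not taken from the book) computes the nim-sum of a position and a winning move in ordinary (non-misère) Nim:

from functools import reduce
from operator import xor

def nim_sum(piles):
    # XOR of the pile sizes; the player to move loses (with best play) iff it is 0
    return reduce(xor, piles, 0)

def winning_move(piles):
    # return (pile_index, new_size) restoring nim-sum 0, or None if no such move exists
    s = nim_sum(piles)
    for i, p in enumerate(piles):
        if s and p ^ s < p:       # this pile can be shrunk to cancel the nim-sum
            return i, p ^ s
    return None

print(nim_sum([3, 4, 5]))        # 2, so the position is a first-player win
print(winning_move([3, 4, 5]))   # (0, 1): reduce the pile of 3 to 1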
Editions
1st edition, New York: Academic Press, 2 vols., 1982; vol. 1, hardback: , paperback: ; vol. 2, hardback: , paperback: .
2nd edition, Wellesley, Massachusetts: A. K. Peters Ltd., 4 vols., 2001–2004; vol. 1: ; vol. 2: ; vol. 3: ; vol. 4: .
Games mentioned in the book
This is a partial list of the games mentioned in the book.
Note: Misère games not included
Hackenbush
Blue-Red Hackenbush
Blue-Red-Green Hackenbush (Introduced as Hackenbush Hotchpotch in the book)
Childish Hackenbush
Ski-Jumps
Toads-and-Frogs
Cutcake
Maundy Cake
(2nd Unnamed Cutcake variant by Dean Hickerson)
Hotcake
Coolcakes
Baked Alaska
Eatcake
Turn-and-Eatcake
Col
Snort
Nim (Green Hackenbush)
Prim
Dim
Lasker's Nim
Seating Couples
Northcott's Game (Poker-Nim)
The White Knight
Wyt Queens (Wythoff's Game)
Kayles
Double Kayles
Quadruple Kayles
Dawson's Chess
Dawson's Kayles
Treblecross
Grundy's Game
Mrs. Grundy
Domineering
No Highway
De Bono's L-Game
Snakes-and-Ladders (Adders-and-Ladders)
Jelly Bean Game
Dividing Rulers
Reviews
Games
See also
On Numbers and Games by John H. Conway, one of the three coauthors of Winning Ways
References
1982 non-fiction books
Books about game theory
Combinatorial game theory
John Horton Conway |
https://en.wikipedia.org/wiki/Raytheon%20BBN | Raytheon BBN (originally Bolt Beranek and Newman Inc.) is an American research and development company based in Cambridge, Massachusetts, United States.
In 1966, the Franklin Institute awarded the firm the Frank P. Brown Medal, in 1999 BBN received the IEEE Corporate Innovation Recognition, and on 1 February 2013, BBN was awarded the National Medal of Technology and Innovation, the highest honors that the U.S. government bestows upon scientists, engineers and inventors, by President Barack Obama. It became a wholly owned subsidiary of Raytheon in 2009.
History
BBN has its roots in an initial partnership formed on 15 October 1948 between Leo Beranek and Richard Bolt, professors at the Massachusetts Institute of Technology. Bolt had won a commission to be an acoustic consultant for the new United Nations permanent headquarters to be built in New York City. Realizing the magnitude of the project at hand, Bolt had pulled in his MIT colleague Beranek for help and the partnership between the two was born. The firm, Bolt and Beranek, started out in two rented rooms on the MIT campus. Robert Newman joined the firm soon after in 1950, and the firm became Bolt Beranek Newman. Beranek remained the company's president and chief executive officer until 1967, and Bolt was chairman until 1976.
From 1957 to 1962, J. C. R. Licklider served as vice president of engineering psychology for BBN. Foreseeing the potential to obtain federal grants for basic computer research, Licklider convinced the BBN leadership to purchase a then state-of-the-art Royal McBee LGP-30 digital computer in 1958 for US$25,000. Within a year, Ken Olsen, president of the newly formed Digital Equipment Corporation (DEC), approached BBN to test the prototype of DEC's first computer, the PDP-1. Within one month, BBN completed its tests and recommendations of the PDP-1. BBN ultimately purchased the first PDP-1 for around US$150,000 and received the machine in November 1960.
After the PDP-1 arrived, BBN hired |
https://en.wikipedia.org/wiki/Landscape%20ecology | Landscape ecology is the science of studying and improving relationships between ecological processes in the environment and particular ecosystems. This is done within a variety of landscape scales, development spatial patterns, and organizational levels of research and policy. Concisely, landscape ecology can be described as the science of "landscape diversity" as the synergetic result of biodiversity and geodiversity.
As a highly interdisciplinary field in systems science, landscape ecology integrates biophysical and analytical approaches with humanistic and holistic perspectives across the natural sciences and social sciences. Landscapes are spatially heterogeneous geographic areas characterized by diverse interacting patches or ecosystems, ranging from relatively natural terrestrial and aquatic systems such as forests, grasslands, and lakes to human-dominated environments including agricultural and urban settings.
The most salient characteristics of landscape ecology are its emphasis on the relationship among pattern, process and scales, and its focus on broad-scale ecological and environmental issues. These necessitate the coupling between biophysical and socioeconomic sciences. Key research topics in landscape ecology include ecological flows in landscape mosaics, land use and land cover change, scaling, relating landscape pattern analysis with ecological processes, and landscape conservation and sustainability. Landscape ecology also studies the role of human impacts on landscape diversity in the development and spreading of new human pathogens that could trigger epidemics.
Terminology
The German term Landschaftsökologie – thus landscape ecology – was coined by German geographer Carl Troll in 1939. He developed this terminology and many early concepts of landscape ecology as part of his early work, which consisted of applying aerial photograph interpretation to studies of interactions between environment and vegetation.
Explanation
Heterogeneity is the measure of how p |
https://en.wikipedia.org/wiki/Simplicial%20complex | In mathematics, a simplicial complex is a set composed of points, line segments, triangles, and their n-dimensional counterparts (see illustration). Simplicial complexes should not be confused with the more abstract notion of a simplicial set appearing in modern simplicial homotopy theory. The purely combinatorial counterpart to a simplicial complex is an abstract simplicial complex. To distinguish a simplicial complex from an abstract simplicial complex, the former is often called a geometric simplicial complex.
Definitions
A simplicial complex K is a set of simplices that satisfies the following conditions:
1. Every face of a simplex from K is also in K.
2. The non-empty intersection of any two simplices σ1, σ2 ∈ K is a face of both σ1 and σ2.
See also the definition of an abstract simplicial complex, which loosely speaking is a simplicial complex without an associated geometry.
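A minimal computational sketch (our own, using frozensets of vertex labels for an abstract simplicial complex) checks the first condition, closure under taking faces:

from itertools import combinations

def closed_under_faces(K):
    # K is a collection of frozensets (simplices given by their vertex sets);
    # returns True if every non-empty proper subset (face) of each simplex is also in K
    K = set(K)
    for simplex in K:
        for r in range(1, len(simplex)):
            if any(frozenset(face) not in K for face in combinations(simplex, r)):
                return False
    return True

triangle = {frozenset(s) for s in
            [("a",), ("b",), ("c",), ("a", "b"), ("b", "c"), ("a", "c"), ("a", "b", "c")]}
print(closed_under_faces(triangle))                       # True: a triangle with all its edges and vertices
print(closed_under_faces({frozenset(("a", "b", "c"))}))   # False: its faces are missing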
A simplicial k-complex K is a simplicial complex where the largest dimension of any simplex in K equals k. For instance, a simplicial 2-complex must contain at least one triangle, and must not contain any tetrahedra or higher-dimensional simplices.
A pure or homogeneous simplicial k-complex is a simplicial complex where every simplex of dimension less than k is a face of some simplex of dimension exactly k. Informally, a pure 1-complex "looks" like it's made of a bunch of lines, a 2-complex "looks" like it's made of a bunch of triangles, etc. An example of a non-homogeneous complex is a triangle with a line segment attached to one of its vertices. Pure simplicial complexes can be thought of as triangulations and provide a definition of polytopes.
A facet is a maximal simplex, i.e., any simplex in a complex that is not a face of any larger simplex. (Note the difference from a "face" of a simplex). A pure simplicial complex can be thought of as a complex where all facets have the same dimension. For (boundary complexes of) simplicial polytopes this coincides with the meaning from polyhedral combinatoric |
https://en.wikipedia.org/wiki/Unit%20disk | In mathematics, the open unit disk (or disc) around P (where P is a given point in the plane), is the set of points whose distance from P is less than 1:
The closed unit disk around P is the set of points whose distance from P is less than or equal to one:
Unit disks are special cases of disks and unit balls; as such, they contain the interior of the unit circle and, in the case of the closed unit disk, the unit circle itself.
Without further specifications, the term unit disk is used for the open unit disk about the origin, , with respect to the standard Euclidean metric. It is the interior of a circle of radius 1, centered at the origin. This set can be identified with the set of all complex numbers of absolute value less than one. When viewed as a subset of the complex plane (C), the unit disk is often denoted .
The open unit disk, the plane, and the upper half-plane
The function
is an example of a real analytic and bijective function from the open unit disk to the plane; its inverse function is also analytic. Considered as a real 2-dimensional analytic manifold, the open unit disk is therefore isomorphic to the whole plane. In particular, the open unit disk is homeomorphic to the whole plane.
There is however no conformal bijective map between the open unit disk and the plane. Considered as a Riemann surface, the open unit disk is therefore different from the complex plane.
There are conformal bijective maps between the open unit disk and the open upper half-plane. So considered as a Riemann surface, the open unit disk is isomorphic ("biholomorphic", or "conformally equivalent") to the upper half-plane, and the two are often used interchangeably.
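One standard choice is the Möbius transformation w = i(1 + z)/(1 − z), which sends the open unit disk onto the open upper half-plane; the short numerical sketch below (our own) spot-checks the mapping property and its inverse:

def disk_to_half_plane(z: complex) -> complex:
    # Möbius map w = i(1 + z)/(1 - z): open unit disk -> open upper half-plane
    return 1j * (1 + z) / (1 - z)

def half_plane_to_disk(w: complex) -> complex:
    # inverse map z = (w - i)/(w + i)
    return (w - 1j) / (w + 1j)

for z in [0, 0.5, -0.3 + 0.4j, 0.9j]:
    w = disk_to_half_plane(z)
    assert abs(z) < 1 and w.imag > 0               # disk points land in the upper half-plane
    assert abs(half_plane_to_disk(w) - z) < 1e-12  # the two maps invert each other
print("checks passed")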
Much more generally, the Riemann mapping theorem states that every simply connected open subset of the complex plane that is different from the complex plane itself admits a conformal and bijective map to the open unit disk.
One bijective conformal map from the open unit disk to the open upper half-plane is th |
https://en.wikipedia.org/wiki/Altered%20state%20of%20consciousness | An altered state of consciousness (ASC), also called altered state of mind or mind alteration, is any condition which is significantly different from a normal waking state. By 1892, the expression was in use in relation to hypnosis, though there is an ongoing debate as to whether hypnosis is to be identified as an ASC according to its modern definition. The next retrievable instance, by Max Mailhouse from his 1904 presentation to conference, however, is unequivocally identified as such, as it was in relation to epilepsy, and is still used today. In academia, the expression was used as early as 1966 by Arnold M. Ludwig and brought into common usage from 1969 by Charles Tart. It describes induced changes in one's mental state, almost always temporary. A synonymous phrase is "altered state of awareness".
Definitions
There is no general definition of an altered state of consciousness, as any definitional attempt would first have to rely on a definition of a normal state of consciousness. Attempts to define the term can however be found in philosophy, psychology and neuroscience. There is no final consensus on what the most accurate definition is. In the following, the best established and latest definitions are provided:
Arnold M. Ludwig attempted a first definition in 1966.
"An altered state is any mental state(s), induced by various physiological, psychological, or pharmacological maneuvers or agents, which can be recognized subjectively by the individual himself (or by an objective observer of the individual) as representing a sufficient deviation in subjective experience of psychological functioning from certain general norms for that individual during alert, waking consciousness."
Starting from this, Charles Tart focuses his definition on the subjective experience of a state of consciousness and its deviation from a normal waking state.
"Altered states of consciousness are alternate patterns or configurations of experience, which differ qualitatively from a b |
https://en.wikipedia.org/wiki/Method%20of%20Fluxions | Method of Fluxions () is a mathematical treatise by Sir Isaac Newton which served as the earliest written formulation of modern calculus. The book was completed in 1671 and published in 1736. Fluxion is Newton's term for a derivative. He originally developed the method at Woolsthorpe Manor during the closing of Cambridge during the Great Plague of London from 1665 to 1667, but did not choose to make his findings known (similarly, his findings which eventually became the Philosophiae Naturalis Principia Mathematica were developed at this time and hidden from the world in Newton's notes for many years). Gottfried Leibniz developed his form of calculus independently around 1673, 7 years after Newton had developed the basis for differential calculus, as seen in surviving documents like “the method of fluxions and fluents..." from 1666. Leibniz, however, published his discovery of differential calculus in 1684, nine years before Newton formally published his fluxion notation form of calculus in part during 1693. The calculus notation in use today is mostly that of Leibniz, although Newton's dot notation for differentiation for denoting derivatives with respect to time is still in current use throughout mechanics and circuit analysis.
Newton's Method of Fluxions was formally published posthumously, but following Leibniz's publication of the calculus a bitter rivalry erupted between the two mathematicians over who had developed the calculus first, provoking Newton to reveal his work on fluxions.
Newton's development of analysis
For a period of time encompassing Newton's working life, the discipline of analysis was a subject of controversy in the mathematical community. Although analytic techniques provided solutions to long-standing problems, including problems of quadrature and the finding of tangents, the proofs of these solutions were not known to be reducible to the synthetic rules of Euclidean geometry. Instead, analysts were often forced to invoke infinitesimal, o |
https://en.wikipedia.org/wiki/The%20Perfect%20General | The Perfect General is a computer wargame published in 1991 by Quantum Quality Productions.
Publication
The game was designed by Peter Zaccagnino and published in 1991 for the Amiga and DOS. A sequel, The Perfect General II, was released in 1994. The original game was modified for the 3DO by Game Guild in 1996 and published by Kirin Entertainment. The 3DO version includes a few scenarios which are absent from the personal computer versions. A refurbished version is available for Windows since 2003.
The rights for the original version were purchased by Mark Kinkead in 2002, and later released in 2003 as "The Perfect General Internet Edition" by Killer Bee Software. As the name suggests, this version can be played via Internet.
Gameplay
The game is a turn-based map-oriented military simulation game. Along with Modem Wars and Populous, it was one of the early games offering an online mode for real-time-matches via telecommunication networks. The original online-game was played via modem or null modem serial connection.
Reception
The Perfect General sold 75,000 copies by June 1993. Computer Gaming World in 1992 described The Perfect General as "a wonderful game system with a mediocre AI and great two-player potential", and later named it the best wargame of the year. A 1993 survey in the magazine of wargames gave the game three-plus stars out of five, stating that it "sacrifices realism for playability". A 1994 survey gave the Greatest Battles of the 20th Century two-plus stars out of five, noting the game's ease of use and "enjoyable", but inaccurate, scenarios.
In 1996, Computer Gaming World declared The Perfect General the 107th-best computer game ever released. The magazine's wargame columnist Terry Coleman named it his pick for the 12th-best computer wargame released by late 1996.
Reviews
Casus Belli #71 (Sep 1992)
References
External links
Killer Bee Software: The Perfect General Internet Edition
1991 video games
3DO Interactive Multiplayer games
Amiga g |
https://en.wikipedia.org/wiki/Dimension%20of%20an%20algebraic%20variety | In mathematics and specifically in algebraic geometry, the dimension of an algebraic variety may be defined in various equivalent ways.
Some of these definitions are of geometric nature, while some other are purely algebraic and rely on commutative algebra. Some are restricted to algebraic varieties while others apply also to any algebraic set. Some are intrinsic, as independent of any embedding of the variety into an affine or projective space, while other are related to such an embedding.
Dimension of an affine algebraic set
Let k be a field, and K be an algebraically closed extension of k.
An affine algebraic set V is the set of the common zeros in K^n of the elements of an ideal I in a polynomial ring k[x_1, ..., x_n]. Let A be the algebra of the polynomial functions over V. The dimension of V is any of the following integers. It does not change if k is enlarged, if K is replaced by another algebraically closed extension of k and if I is replaced by another ideal having the same zeros (that is, having the same radical). The dimension is also independent of the choice of coordinates; in other words it does not change if the x_i are replaced by linearly independent linear combinations of them. The dimension of V is
The maximal length of the chains of distinct nonempty (irreducible) subvarieties of V.
This definition generalizes a property of the dimension of a Euclidean space or a vector space. It is thus probably the definition that gives the easiest intuitive description of the notion.
The Krull dimension of the coordinate ring A.
This is the transcription of the preceding definition in the language of commutative algebra, the Krull dimension being the maximal length of the chains of prime ideals of A.
The maximal Krull dimension of the local rings at the points of V.
This definition shows that the dimension is a local property if V is irreducible. If V is irreducible, it turns out that all the local rings at closed points have the same Krull dimension (see ).
If is a variety, the Krull dimens |
https://en.wikipedia.org/wiki/List%20of%20industrial%20engineers | This is a list of notable industrial engineers, people who were trained in or practiced industrial engineering who have established prominence in their profession.
A
Bud Adams – oil tycoon and owner of the Tennessee Titans.
Ravindra K. Ahuja – editor of journals Operations Research, Transportation Science, and Networks
Horace Lucian Arnold – American engineer, inventor, engineering journalist, and early writer on management
B
Ali Babacan – State Minister for Economy of Republic of Turkey (Middle East Technical University)
Carl Georg Barth – Norwegian-American mathematician and mechanical engineer who improved and popularized the industrial use of compound slide rules
Leslie Benmark – known for work in engineering education, specifically accreditation
C
Alexander Hamilton Church – English efficiency engineer, accountant and early writer on accountancy and management
Richard W. Conway – Emerson Electric Company Professor of Manufacturing Management, Emeritus, at Cornell University
Tim Cook – Chief Executive Officer of Apple Inc. since August 2011 (Auburn University)
Roger Corman – American film producer and director (Stanford University)
Nancy Currie – astronaut
D
John Dasburg – former CEO of Northwest Airlines and Burger King (University of Florida)
John Z. DeLorean – former General Motors executive; founder of DeLorean Motors
W. Edwards Deming – forerunner of Total Quality Management (TQM)
Mike Duke – President and CEO of Wal-Mart Stores USA (Georgia Institute of Technology)
E
Harrington Emerson – American efficiency engineer and early management theorist
A. K. Erlang – communications, queueing (University of Copenhagen)
Michael Eskew – CEO of United Parcel Service (Purdue University)
F
Adel Fakeih – Saudi Arabian politician
Giacomo Ferrari – Italian politician and mayor of Parma
Henry Ford – founder of the Ford Motor Company; revolutionized industrial production by being the first to apply assembly line manufacturing to a production process
|
https://en.wikipedia.org/wiki/The%20Great%20Giana%20Sisters | The Great Giana Sisters is a 1987 platform game developed by German studio Time Warp Productions and published by Rainbow Arts. The scroll screen melody of the game was composed by Chris Huelsbeck and is a popular Commodore 64 soundtrack. The game is heavily based on Nintendo's Super Mario Bros. (1985), which led to production being stopped shortly after release, but it later inspired a number of sequels.
Plot
The player takes the role of Giana (referred to as "Gianna" in the scrolling intro; this was also the intended name before a typo was made on the cover art, which the developers kept rather than having the cover remade), a girl who suffers from a nightmare, in which she travels through 33 stages full of monsters, while collecting diamonds and looking for her sister Maria. If the player wins the final battle, Giana will be awakened by her sister.
Gameplay
The Great Giana Sisters is a 2D side-scrolling arcade game in which the player controls either Giana or her sister Maria. The game supports alternating 2 players, with the second player taking control of Maria.
Each level contains a number of dream crystals, which, when collected, give points to make the game's high score. An extra life can be gained by collecting 100 dream crystals. Extra lives can also be found in the form of hidden "Lollipop" items.
Enemies can be defeated by jumping on them or shooting them after obtaining the relevant power-ups. The enemies include owls, rolling eyeballs, flesh-eating fish and deadly insects. The "fire wheel" transforms Giana into a punk with the ability to crush rocks by jumping and hitting them from below. The "lightning bolt" awards Giana "dream bubbles", a single projectile shot. "Double lightning" gives her the ability to shoot recoiling projectiles. "Strawberries" give her the ability to shoot homing projectiles. There is one defensive item in the game, the "water drop", which protects Giana against fire. A number of special items can also be triggered th |
https://en.wikipedia.org/wiki/DNA%20computing | DNA computing is an emerging branch of unconventional computing which uses DNA, biochemistry, and molecular biology hardware, instead of the traditional electronic computing. Research and development in this area concerns theory, experiments, and applications of DNA computing. Although the field originally started with the demonstration of a computing application by Len Adleman in 1994, it has now been expanded to several other avenues such as the development of storage technologies, nanoscale imaging modalities, synthetic controllers and reaction networks, etc.
History
Leonard Adleman of the University of Southern California initially developed this field in 1994. Adleman demonstrated a proof-of-concept use of DNA as a form of computation which solved the seven-point Hamiltonian path problem. Since the initial Adleman experiments, advances have occurred and various Turing machines have been proven to be constructible.
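Adleman's experiment encoded vertices and edges as DNA strands and let massively parallel hybridization generate candidate paths; a conventional electronic equivalent of the same (exponential-time) search, sketched here on a made-up seven-vertex graph rather than Adleman's actual instance, simply tries vertex orderings:

from itertools import permutations

def hamiltonian_path(vertices, edges, start, end):
    # brute-force search for a directed path visiting every vertex exactly once
    edge_set = set(edges)
    for order in permutations(vertices):
        if order[0] == start and order[-1] == end and \
           all((a, b) in edge_set for a, b in zip(order, order[1:])):
            return order
    return None

V = range(7)
E = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (0, 3), (2, 5)]
print(hamiltonian_path(V, E, 0, 6))   # (0, 1, 2, 3, 4, 5, 6)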
Since then the field has expanded into several avenues. In 1995, the idea for DNA-based memory was proposed by Eric Baum who conjectured that a vast amount of data can be stored in a tiny amount of DNA due to its ultra-high density. This expanded the horizon of DNA computing into the realm of memory technology although the in vitro demonstrations were made almost after a decade.
The field of DNA computing can be categorized as a sub-field of the broader DNA nanoscience field started by Ned Seeman about a decade before Len Adleman's demonstration. Seeman's original idea in the 1980s was to build arbitrary structures using bottom-up DNA self-assembly for applications in crystallography. However, it morphed into the field of structural DNA self-assembly which as of 2020 is extremely sophisticated. Self-assembled structures from a few nanometers tall all the way up to several tens of micrometers in size were demonstrated in 2018.
In 1994, Prof. Seeman's group demonstrated early DNA lattice structures using a small set of DNA components. While the d |
https://en.wikipedia.org/wiki/Collective%20bargaining | Collective bargaining is a process of negotiation between employers and a group of employees aimed at agreements to regulate working salaries, working conditions, benefits, and other aspects of workers' compensation and rights for workers. The interests of the employees are commonly presented by representatives of a trade union to which the employees belong. A collective agreement reached by these negotiations functions as a labour contract between an employer and one or more unions, and typically establishes terms regarding wage scales, working hours, training, health and safety, overtime, grievance mechanisms, and rights to participate in workplace or company affairs. Such agreements can also include 'productivity bargaining' in which workers agree to changes to working practices in return for higher pay or greater job security.
The union may negotiate with a single employer (who is typically representing a company's shareholders) or may negotiate with a group of businesses, depending on the country, to reach an industry-wide agreement. Collective bargaining consists of the process of negotiation between representatives of a union and employers (generally represented by management, or, in some countries such as Austria, Sweden and the Netherlands, by an employers' organization) in respect of the terms and conditions of employment of employees, such as wages, hours of work, working conditions, grievance procedures, and about the rights and responsibilities of trade unions. The parties often refer to the result of the negotiation as a collective bargaining agreement (CBA) or as a collective employment agreement (CEA).
History
The term "collective bargaining" was first used in 1891 by Beatrice Webb, a founder of the field of industrial relations in Britain. It refers to the sort of collective negotiations and agreements that had existed since the rise of trade unions during the 18th century.
United States
In the United States, the National Labor Relations Act o |
https://en.wikipedia.org/wiki/Almost | In set theory, when dealing with sets of infinite size, the term almost or nearly is used to refer to all but a negligible amount of elements in the set. The notion of "negligible" depends on the context, and may mean "of measure zero" (in a measure space), "finite" (when infinite sets are involved), or "countable" (when uncountably infinite sets are involved).
For example:
The set {n ∈ ℕ : n ≥ k} is almost ℕ for any k in ℕ, because only finitely many natural numbers are less than k.
The set of prime numbers is not almost ℕ, because there are infinitely many natural numbers that are not prime numbers.
The set of transcendental numbers is almost ℝ, because the algebraic real numbers form a countable subset of the set of real numbers (which is uncountable).
The Cantor set is uncountably infinite, but has Lebesgue measure zero. So almost all real numbers in (0, 1) are members of the complement of the Cantor set.
See also
Almost all
Almost surely
Approximation
List of mathematical jargon
References
Mathematical terminology
Set theory |
https://en.wikipedia.org/wiki/Well-defined%20expression | In mathematics, a well-defined expression or unambiguous expression is an expression whose definition assigns it a unique interpretation or value. Otherwise, the expression is said to be not well defined, ill defined or ambiguous. A function is well defined if it gives the same result when the representation of the input is changed without changing the value of the input. For instance, if f takes real numbers as input, and if f(0.5) does not equal f(1/2) then f is not well defined (and thus not a function). The term well defined can also be used to indicate that a logical expression is unambiguous or uncontradictory.
A function that is not well defined is not the same as a function that is undefined. For example, if f(x) = 1/x, then even though f(0) is undefined, this does not mean that the function is not well defined – but simply that 0 is not in the domain of f.
Example
Let A0 and A1 be sets, let A = A0 ∪ A1, and "define" f : A → {0, 1} as f(a) = 0 if a ∈ A0 and f(a) = 1 if a ∈ A1.
Then f is well defined if A0 ∩ A1 = ∅, since every element of A then lies in exactly one of the two sets.
However, if A0 ∩ A1 ≠ ∅, then f would not be well defined because f(a) is "ambiguous" for a ∈ A0 ∩ A1: such an a would have f(a) equal to both 0 and 1, which makes the value ambiguous. As a result, the latter f is not well defined and thus not a function.
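A computational analogue of the same point (our own sketch): a rule on residue classes modulo 4 is well defined only if its value does not depend on which representative of the class is chosen.

def respects_classes_mod4(rule) -> bool:
    # spot-check on 0..99 that rule(n) agrees with rule(n + 4), i.e. depends only on n mod 4
    return all(rule(n) == rule(n + 4) for n in range(100))

g = lambda n: n % 2   # well defined: the parity of n is determined by n mod 4
h = lambda n: n % 3   # not well defined: 1 % 3 != 5 % 3 although 1 and 5 represent the same class

print(respects_classes_mod4(g))   # True
print(respects_classes_mod4(h))   # False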
"Definition" as anticipation of definition
In order to avoid the quotation marks around "define" in the previous simple example, the "definition" of f could be broken down into two simple logical steps:
While the definition in step 1 is formulated with the freedom of any definition and is certainly effective (without the need to classify it as "well defined"), the assertion in step 2 has to be proved. That is, f is a function if and only if A0 ∩ A1 = ∅, in which case – as a function – f is well defined.
On the other hand, if A0 ∩ A1 ≠ ∅, then for an a ∈ A0 ∩ A1, we would have that (a, 0) ∈ f and (a, 1) ∈ f, which makes the binary relation f not functional (as defined in Binary relation#Special types of binary relations) and thus not well defined as a function. Colloquially, the "function" is also called ambiguo |
https://en.wikipedia.org/wiki/Horizon%20effect | The horizon effect, also known as the horizon problem, is a problem in artificial intelligence whereby, in many games, the number of possible states or positions is immense and computers can only feasibly search a small portion of them, typically a few plies down the game tree. Thus, for a computer searching only five plies, there is a possibility that it will make a detrimental move, but the effect is not visible because the computer does not search to the depth of the error (i.e., beyond its "horizon").
When evaluating a large game tree using techniques such as minimax with alpha-beta pruning, search depth is limited for feasibility reasons. However, evaluating a partial tree may give a misleading result. When a significant change exists just over the horizon of the search depth, the computational device falls victim to the horizon effect.
In 1973 Hans Berliner named this phenomenon, which he and other researchers had observed, the "Horizon Effect." He split the effect into two: the Negative Horizon Effect "results in creating diversions which ineffectively delay an unavoidable consequence or make an unachievable one appear achievable." For the "largely overlooked" Positive Horizon Effect, "the program grabs much too soon at a consequence that can be imposed on an opponent at leisure, frequently in a more effective form."
Greedy algorithms tend to suffer from the horizon effect.
The horizon effect can be mitigated by extending the search algorithm with a quiescence search. This gives the search algorithm ability to look beyond its horizon for a certain class of moves of major importance to the game state, such as captures in chess.
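A sketch of that idea in Python form (our own illustration; evaluate, generate_moves, is_capture, and make_move are hypothetical game-specific helpers, so the snippet is a template rather than a runnable engine):

def quiescence(position, alpha, beta):
    # keep searching past the nominal horizon, but only along "noisy" moves such as captures,
    # and fall back to the static evaluation once the position is quiet
    stand_pat = evaluate(position)              # hypothetical static evaluator
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    for move in generate_moves(position):       # hypothetical move generator
        if not is_capture(position, move):      # quiet moves are not extended
            continue
        score = -quiescence(make_move(position, move), -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

def alphabeta(position, depth, alpha, beta):
    # plain negamax alpha-beta that hands off to quiescence() at the horizon
    if depth == 0:
        return quiescence(position, alpha, beta)
    for move in generate_moves(position):
        score = -alphabeta(make_move(position, move), depth - 1, -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha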
Rewriting the evaluation function for leaf nodes and/or analyzing more nodes will solve many horizon effect problems.
Example
For example, in chess, assume a situation where the computer only searches the game tree to six plies and from the current position determines that the queen is lost in the sixth ply; and suppose there is |
https://en.wikipedia.org/wiki/Implied%20volatility | In financial mathematics, the implied volatility (IV) of an option contract is that value of the volatility of the underlying instrument which, when input in an option pricing model (such as Black–Scholes), will return a theoretical value equal to the current market price of said option. A non-option financial instrument that has embedded optionality, such as an interest rate cap, can also have an implied volatility. Implied volatility, a forward-looking and subjective measure, differs from historical volatility because the latter is calculated from known past returns of a security. To understand where implied volatility stands in terms of the underlying, implied volatility rank is used to understand its implied volatility from a one-year high and low IV.
Motivation
An option pricing model, such as Black–Scholes, uses a variety of inputs to derive a theoretical value for an option. Inputs to pricing models vary depending on the type of option being priced and the pricing model used. However, in general, the value of an option depends on an estimate of the future realized price volatility, σ, of the underlying. Or, mathematically:
C = f(σ, ·)
where C is the theoretical value of an option, and f is a pricing model that depends on σ, along with other inputs.
The function f is monotonically increasing in σ, meaning that a higher value for volatility results in a higher theoretical value of the option. Conversely, by the inverse function theorem, there can be at most one value for σ that, when applied as an input to f(σ, ·), will result in a particular value for C.
Put in other terms, assume that there is some inverse function g = f−1, such that
σ_C̄ = g(C̄, ·)
where C̄ is the market price for an option. The value σ_C̄ is the volatility implied by the market price C̄, or the implied volatility.
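Numerically the inversion is usually done with a one-dimensional root finder. A minimal sketch (our own; a Black–Scholes call with no dividends, inverted by plain bisection rather than a production solver):

from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black–Scholes price of a European call on a non-dividend-paying underlying
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    # bisection works because the call price is monotonically increasing in sigma
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

market_price = bs_call(100, 105, 0.5, 0.01, 0.25)
print(round(implied_vol(market_price, 100, 105, 0.5, 0.01), 6))   # recovers roughly 0.25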
In general, it is not possible to give a closed form formula for implied volatility in terms of call price (for a review see ). However, in some cases (large strike, low strike, short expiry, large expiry) it is po |
https://en.wikipedia.org/wiki/USENIX | USENIX is an American 501(c)(3) nonprofit membership organization based in Berkeley, California and founded in 1975 that supports advanced computing systems, operating system (OS), and computer networking research. It organizes several highly respected conferences in these fields. Its stated mission is to foster technical excellence and innovation, support and disseminate research with a practical bias, provide a neutral forum for discussion of technical issues, and encourage computing outreach into the community at large.
History
USENIX was established in 1975 under the name "Unix Users Group," focusing primarily on the study and development of the Unix OS family and similar systems. In June 1977, a lawyer from AT&T Corporation informed the group that they could not use the word "Unix" in their name as it was a trademark of Western Electric (the manufacturing arm of AT&T until 1995), which led to the change of name to USENIX. It has since grown into a respected organization among practitioners, developers, and researchers of computer operating systems more generally. Since its founding, it has published a technical journal titled ;login:.
USENIX was started as a technical organization. As commercial interest grew, a number of separate groups started in parallel, most notably the Software Tools Users Group (STUG), a technical adjunct for Unix-like tools and interface on non-Unix operating systems, and /usr/group, a commercially oriented user group.
USENIX's founding President was Lou Katz.
Conferences
USENIX hosts numerous conferences and symposia each year, including:
USENIX Symposium on Operating Systems Design and Implementation (OSDI) (was bi-annual till 2020)
USENIX Security Symposium (USENIX Security)
USENIX Conference on File and Storage Technologies (FAST)
USENIX Symposium on Networked Systems Design and Implementation (NSDI)
USENIX Annual Technical Conference (USENIX ATC) (co-located with OSDI since 2021)
SREcon, a conference for engineers focused on |
https://en.wikipedia.org/wiki/Apple%20ProDOS | ProDOS is the name of two similar operating systems for the Apple II series of personal computers. The original ProDOS, renamed ProDOS 8 in version 1.2, is the last official operating system usable by all 8-bit Apple II series computers, and was distributed from 1983 to 1993. The other, ProDOS 16, was a stop-gap solution for the 16-bit Apple II that was replaced by GS/OS within two years.
ProDOS was marketed by Apple as meaning Professional Disk Operating System, and became the most popular operating system for the Apple II series of computers 10 months after its release in January 1983.
Background
ProDOS was released to address shortcomings in the earlier Apple operating system (called simply DOS), which was beginning to show its age.
Apple DOS only has built-in support for 5.25" floppy disks and requires patches to use peripheral devices such as hard disk drives and non-Disk-II floppy disk drives, including 3.5" floppy drives. ProDOS adds a standard method of accessing ROM-based drivers on expansion cards for disk devices, expands the maximum volume size from about 400 kilobytes to 32 megabytes, introduces support for hierarchical subdirectories (a vital feature for organizing a hard disk's storage space), and supports RAM disks on machines with 128 KB or more of memory. ProDOS addresses problems with handling hardware interrupts, and includes a well-defined and documented programming and expansion interface, which Apple DOS had always lacked. Although ProDOS also includes support for a real-time clock (RTC), this support went largely unused until the release of the Apple IIGS, the first in the Apple II series to include an RTC on board. Third-party clocks were available for the II Plus, IIe, and IIc, however.
ProDOS, unlike earlier Apple DOS versions, has its developmental roots in SOS, the operating system for the ill-fated Apple III computer released in 1980. Pre-release documentation for ProDOS (including early editions of Beneath Apple ProDOS) documented |
https://en.wikipedia.org/wiki/Apple%20DOS | Apple DOS is the family of disk operating systems for the Apple II series of microcomputers from late 1978 through early 1983. It was superseded by ProDOS in 1983. Apple DOS has three major releases: DOS 3.1, DOS 3.2, and DOS 3.3; each one of these three releases was followed by a second, minor "bug-fix" release, but only in the case of Apple DOS 3.2 did that minor release receive its own version number, Apple DOS 3.2.1. The best-known and most-used version is Apple DOS 3.3 in the 1980 and 1983 releases. Prior to the release of Apple DOS 3.1, Apple users had to rely on audio cassette tapes for data storage and retrieval.
Version history
When Apple Computer introduced the Apple II in April 1977, the new computer had no disk drive or disk operating system (DOS). Although Apple co-founder Steve Wozniak designed the Disk II controller late that year, and believed that he could have written a DOS, his co-founder Steve Jobs decided to outsource the task. The company considered using Digital Research's CP/M, but Wozniak sought an operating system that was easier to use. On 10 April 1978 Apple signed a $13,000 contract with Shepardson Microsystems to write a DOS and deliver it within 35 days. Apple provided detailed specifications, and early Apple employee Randy Wigginton worked closely with Shepardson's Paul Laughton as the latter wrote the operating system with punched cards and a minicomputer.
There was no Apple DOS 1 or 2. Versions 0.1 through 2.8 were serially enumerated revisions during development, which might as well have been called builds 1 through 28. Apple DOS 3.0, a renamed issue of version 2.8, was never publicly released due to bugs. Apple published no official documentation until release 3.2.
Apple DOS 3.1 was publicly released in June 1978, slightly more than one year after the Apple II was introduced, becoming the first disk-based operating system for any Apple computer. A bug-fix release came later, addressing a problem by means of its utility, which |
https://en.wikipedia.org/wiki/R-value%20%28insulation%29 | In the context of construction, the R-value is a measure of how well a two-dimensional barrier, such as a layer of insulation, a window or a complete wall or ceiling, resists the conductive flow of heat. R-value is the temperature difference per unit of heat flux needed to sustain one unit of heat flux between the warmer surface and colder surface of a barrier under steady-state conditions. The measure is therefore equally relevant for lowering energy bills for heating in the winter, for cooling in the summer, and for general comfort.
The R-value is the building industry term for thermal resistance "per unit area." It is sometimes denoted RSI-value if the SI units are used. An R-value can be given for a material (e.g. for polyethylene foam), or for an assembly of materials (e.g. a wall or a window). In the case of materials, it is often expressed in terms of R-value per metre. R-values are additive for layers of materials, and the higher the R-value the better the performance.
The U-factor or U-value is the overall heat transfer coefficient and can be found by taking the inverse of the R-value. It is a property that describes how well building elements conduct heat per unit area across a temperature gradient. The elements are commonly assemblies of many layers of materials, such as those that make up the building envelope. It is expressed in watts per square metre kelvin: W/(m2⋅K). The higher the U-value, the lower the ability of the building envelope to resist heat transfer. A low U-value, or conversely a high R-Value usually indicates high levels of insulation. They are useful as it is a way of predicting the composite behaviour of an entire building element rather than relying on the properties of individual materials.
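Because R-values of layers in series are additive and the U-value is the reciprocal of the total R-value, the composite behaviour of an assembly can be estimated with a few lines of code. A minimal sketch in Python, using made-up layer values chosen purely for illustration:

```python
# Illustrative sketch: combining layer R-values (SI units, K*m^2/W) into a
# composite R-value and U-value for a hypothetical wall assembly.
layers = {
    "exterior air film": 0.03,   # assumed values, for illustration only
    "brick veneer": 0.08,
    "mineral wool batt": 2.90,
    "plasterboard": 0.06,
    "interior air film": 0.12,
}

# R-values of layers in series are additive.
r_total = sum(layers.values())

# The U-value is the reciprocal of the total R-value.
u_value = 1.0 / r_total

print(f"Total RSI-value: {r_total:.2f} K*m^2/W")
print(f"U-value:         {u_value:.2f} W/(m^2*K)")
```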
R-value definition
This relates to the technical/constructional value.
R = ΔT / q
where:
R (K⋅m2/W) is the R-value,
ΔT (K) is the temperature difference between the warmer surface and colder surface of a barrier,
q (W/m2) is the heat flux through the b |
https://en.wikipedia.org/wiki/Rithmomachia | Rithmomachia (also known as rithmomachy, arithmomachia, rythmomachy, rhythmomachy, the philosophers' game, and other variants) is an early European mathematical board game. Its earliest known description dates from the eleventh century. The name comes loosely from Greek and means "the battle of the numbers." The game is somewhat like chess except that most methods of capture depend on the numbers inscribed on each piece.
The game was used as an educational tool that teachers could introduce while teaching arithmetic as part of the quadrivium to those in Western Europe who received a classical education during the medieval period. David Sepkoski wrote that between the twelfth and sixteenth centuries, "rithmomachia served as a practical exemplar for teaching the contemplative values of Boethian mathematical philosophy, which emphasized the natural harmony and perfection of number and proportion, that it was used both as a mnemonic drill for the study of Boethian number theory and, more importantly, as a vehicle for moral education, by reminding players of the mathematical harmony of creation." The game declined sharply in popularity in the 17th century, as it was no longer used in education, and potential players were not introduced to it during their schooling.
History
Little is known about the origin of the game. Medieval writers attributed it to Pythagoras, but no trace of it has been discovered in Greek literature. The earliest surviving mentions of it are from the early 11th century, suggesting it was created in the late 10th or early 11th century. The name and its many variations are from Greek; it is unclear whether this was due to being created by a rare Western European with a classical education that involved learning Greek, or if the game had a genuine origin in Greece and the Greek-speaking Byzantine Empire of the period.
The first written evidence of Rithmomachia dates to around 1030, when a monk named Asilo created a game that illustrated the numb |
https://en.wikipedia.org/wiki/Matrix%20decomposition | In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems.
Example
In numerical analysis, different decompositions are used to implement efficient matrix algorithms.
For instance, when solving a system of linear equations Ax = b, the matrix A can be decomposed via the LU decomposition. The LU decomposition factorizes a matrix into a lower triangular matrix L and an upper triangular matrix U. The systems L(Ux) = b and Ux = L⁻¹b require fewer additions and multiplications to solve, compared with the original system Ax = b, though one might require significantly more digits in inexact arithmetic such as floating point.
Similarly, the QR decomposition expresses A as QR with Q an orthogonal matrix and R an upper triangular matrix. The system Q(Rx) = b is solved by Rx = Qᵀb = c, and the system Rx = c is solved by 'back substitution'. The number of additions and multiplications required is about twice that of using the LU solver, but no more digits are required in inexact arithmetic because the QR decomposition is numerically stable.
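A rough sketch of the two solution routes just described, assuming NumPy and SciPy are available; the matrix and right-hand side are invented for the example:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, solve_triangular

# A small made-up system Ax = b.
A = np.array([[4.0, 3.0, 0.0],
              [6.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([7.0, 10.0, 6.0])

# LU route: factor once, then solve by forward and back substitution.
lu, piv = lu_factor(A)
x_lu = lu_solve((lu, piv), b)

# QR route: A = QR, so Rx = Q^T b is solved by back substitution.
Q, R = np.linalg.qr(A)
x_qr = solve_triangular(R, Q.T @ b)

print(np.allclose(x_lu, x_qr), np.allclose(A @ x_lu, b))
```

Both routes avoid forming the inverse of A explicitly, and the factor objects can be reused to solve for many right-hand sides.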
Decompositions related to solving systems of linear equations
LU decomposition
Traditionally applicable to: square matrix A, although it can also be applied to rectangular matrices.
Decomposition: A = LU, where L is lower triangular and U is upper triangular
Related: the LDU decomposition is A = LDU, where L is lower triangular with ones on the diagonal, U is upper triangular with ones on the diagonal, and D is a diagonal matrix.
Related: the LUP decomposition is PA = LU, where L is lower triangular, U is upper triangular, and P is a permutation matrix (a short numerical sketch of this factorization appears after this list).
Existence: An LUP decomposition exists for any square matrix A. When P is an identity matrix, the LUP decomposition reduces to the LU decomposition.
Comments: The LUP and LU decompositions are useful in solving an n-by-n system of linear equations |
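A short numerical sketch of the permuted (LUP-style) factorization listed above, assuming SciPy is available; `scipy.linalg.lu` returns P, L and U with A = P L U, which is the PA = LU form read with the inverse permutation. The test matrix is made up and chosen so that row exchanges are actually required:

```python
import numpy as np
from scipy.linalg import lu

# Made-up square matrix whose first pivot is zero, so row exchanges are needed.
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [2.0, 0.0, 3.0]])

# SciPy returns P, L, U with A = P @ L @ U (P is a permutation matrix,
# L is unit lower triangular, U is upper triangular).
P, L, U = lu(A)

print(np.allclose(A, P @ L @ U))    # A = P L U
print(np.allclose(P.T @ A, L @ U))  # equivalently, P^T A = L U
```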
https://en.wikipedia.org/wiki/Five-second%20rule | The five-second rule, or sometimes the three-second rule, is a food hygiene myth that states a defined time window after which it is not safe to eat food (or sometimes to use cutlery) after it has been dropped on the floor or on the ground and thus exposed to contamination.
There appears to be no scientific consensus on the general applicability of the rule, and its origin is unclear. It probably originated after germ theory became established in the late 19th century. The first known mention of the rule in print is in the 1995 novel Wanted: Rowing Coach.
Research
The five-second rule has received some scholarly attention. It has been studied as both a public health recommendation and a sociological effect.
University of Illinois
In 2003, Jillian Clarke of the University of Illinois at Urbana–Champaign found in a survey that 56% of men and 70% of women surveyed were familiar with the five-second rule. She also determined that a variety of foods were significantly contaminated by even brief exposure to a tile inoculated with E. coli. On the other hand, Clarke found no significant evidence of contamination on public flooring. For this work, Clarke received the 2004 Ig Nobel Prize in public health.
A more thorough study in 2007 using salmonella on wood, tiles, and nylon carpet, found that the bacteria could thrive under dry conditions even after twenty-eight days. Tested on surfaces that had been contaminated with salmonella eight hours previously, the bacteria could still contaminate bread and baloney lunchmeat in under five seconds. But a minute-long contact increased contamination about tenfold (especially on tile and carpet surfaces).
Rutgers University
Researchers at Rutgers University debunked the theory in 2016 by dropping watermelon cubes, gummy candies, plain white bread, and buttered bread from a height of onto surfaces slathered in Enterobacter aerogenes. The surfaces used were carpet, ceramic tile, stainless steel and wood. The food was left on the surface for intervals |
https://en.wikipedia.org/wiki/Mast%20cell | A mast cell (also known as a mastocyte or a labrocyte) is a resident cell of connective tissue that contains many granules rich in histamine and heparin. Specifically, it is a type of granulocyte derived from the myeloid stem cell that is a part of the immune and neuroimmune systems. Mast cells were discovered by Paul Ehrlich in 1877. Although best known for their role in allergy and anaphylaxis, mast cells play an important protective role as well, being intimately involved in wound healing, angiogenesis, immune tolerance, defense against pathogens, and vascular permeability in brain tumors.
The mast cell is very similar in both appearance and function to the basophil, another type of white blood cell. Although mast cells were once thought to be tissue-resident basophils, it has been shown that the two cells develop from different hematopoietic lineages and thus cannot be the same cells.
Structure
Mast cells are very similar to basophil granulocytes (a class of white blood cells) in blood. Both are granulated cells that contain histamine and heparin, an anticoagulant. Their nuclei differ in that the basophil nucleus is lobated while the mast cell nucleus is round. The Fc region of immunoglobulin E (IgE) becomes bound to mast cells and basophils, and when IgE's paratopes bind to an antigen, it causes the cells to release histamine and other inflammatory mediators. These similarities have led many to speculate that mast cells are basophils that have "homed in" on tissues. Furthermore, they share a common precursor in bone marrow expressing the CD34 molecule. Basophils leave the bone marrow already mature, whereas the mast cell circulates in an immature form, only maturing once in a tissue site. The site an immature mast cell settles in probably determines its precise characteristics. The first in vitro differentiation and growth of a pure population of mouse mast cells has been carried out using conditioned medium derived from concanavalin A-stimulated splenocytes |
https://en.wikipedia.org/wiki/OSF/1 | OSF/1 is a variant of the Unix operating system developed by the Open Software Foundation during the late 1980s and early 1990s. OSF/1 is one of the first operating systems to have used the Mach kernel developed at Carnegie Mellon University, and is probably best known as the native Unix operating system for DEC Alpha architecture systems.
In 1994, after AT&T had sold UNIX System V to Novell and the rival Unix International consortium had disbanded, the Open Software Foundation ceased funding of research and development of OSF/1. The Tru64 UNIX variant of OSF/1 was supported by HP until 2012.
Background
In 1988, during the so-called "Unix wars", Digital Equipment Corporation (DEC) joined with IBM, Hewlett-Packard, and others to form the Open Software Foundation (OSF) to develop a version of Unix named OSF/1. The aim was to compete with System V Release 4 from AT&T Corporation and Sun Microsystems, and it has been argued that a primary goal was for the operating system to be free of AT&T intellectual property. The fact that OSF/1 is one of the first operating systems to have used the Mach kernel is cited as support of this assertion. Digital also strongly promoted OSF/1 for real-time applications, and with traditional UNIX implementations at the time providing poor real-time support at best, the real-time and multi-threading support can be interpreted as having been heavily dependent on the Mach kernel. It also incorporates a large part of the BSD kernel (based on the 4.3-Reno release) to implement the UNIX API. At the time of its introduction, OSF/1 became the third major flavor of UNIX together with System V and BSD.
Vendor releases
DEC's first release of OSF/1 (OSF/1 Release 1.0) in January 1992 was for its line of MIPS-based DECstation workstations, however this was never a fully supported product. DEC ported OSF/1 to their new Alpha AXP platform as DEC OSF/1 AXP Release 1.2, released in March 1993. OSF/1 AXP is a full 64-bit operating system. After OSF/1 AX |
https://en.wikipedia.org/wiki/Compatible%20Time-Sharing%20System | The Compatible Time-Sharing System (CTSS) was the first general purpose time-sharing operating system. Compatible Time Sharing referred to time sharing which was compatible with batch processing; it could offer both time sharing and batch processing concurrently.
CTSS was developed at the MIT Computation Center ("Comp Center"). CTSS was first demonstrated on MIT's modified IBM 709 in November 1961. The hardware was replaced with a modified IBM 7090 in 1962 and later a modified IBM 7094 called the "blue machine" to distinguish it from the Project MAC CTSS IBM 7094. Routine service to MIT Comp Center users began in the summer of 1963 and was operated there until 1968.
A second deployment of CTSS on a separate IBM 7094 that was received in October 1963 (the "red machine") was used early on in Project MAC until 1969 when the red machine was moved to the Information Processing Center and operated until July 20, 1973. CTSS ran on only those two machines; however, there were remote CTSS users outside of MIT including ones in California, South America, the University of Edinburgh and the University of Oxford.
History
John Backus said in the 1954 summer session at MIT that "By time sharing, a big computer could be used as several small ones; there would need to be a reading station for each user". Computers at that time, like the IBM 704, were not powerful enough to implement such a system, but at the end of 1958, MIT's Computation Center nevertheless added a typewriter input to its 704 with the intent that a programmer or operator could "obtain additional answers from the machine on a time-sharing basis with other programs using the machine simultaneously".
In June 1959, Christopher Strachey published a paper "Time Sharing in Large Fast Computers" at the UNESCO Information Processing Conference in Paris, where he envisaged a programmer debugging a program at a console (like a teletype) connected to the computer, while another program was running in the computer at the same ti |
https://en.wikipedia.org/wiki/Fibonacci%20heap | In computer science, a Fibonacci heap is a data structure for priority queue operations, consisting of a collection of heap-ordered trees. It has a better amortized running time than many other priority queue data structures including the binary heap and binomial heap. Michael L. Fredman and Robert E. Tarjan developed Fibonacci heaps in 1984 and published them in a scientific journal in 1987. Fibonacci heaps are named after the Fibonacci numbers, which are used in their running time analysis.
For the Fibonacci heap, the find-minimum operation takes constant (O(1)) amortized time. The insert and decrease key operations also work in constant amortized time. Deleting an element (most often used in the special case of deleting the minimum element) works in O(log n) amortized time, where n is the size of the heap. This means that starting from an empty data structure, any sequence of a insert and decrease key operations and b delete operations would take O(a + b log n) worst case time, where n is the maximum heap size. In a binary or binomial heap, such a sequence of operations would take O((a + b) log n) time. A Fibonacci heap is thus better than a binary or binomial heap when b is smaller than a by a non-constant factor. It is also possible to merge two Fibonacci heaps in constant amortized time, improving on the logarithmic merge time of a binomial heap, and improving on binary heaps which cannot handle merges efficiently.
Using Fibonacci heaps for priority queues improves the asymptotic running time of important algorithms, such as Dijkstra's algorithm for computing the shortest path between two nodes in a graph, compared to the same algorithm using other slower priority queue data structures.
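As an illustration of the priority-queue usage pattern, here is a minimal Dijkstra sketch in Python. It uses the standard-library heapq module, which is a binary heap rather than a Fibonacci heap, so the implicit decrease-key steps cost O(log n) amortized instead of O(1); the example graph is invented:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source; graph maps node -> [(neighbor, weight), ...]."""
    dist = {source: 0}
    # heapq is a binary heap; a Fibonacci heap would make the implicit
    # decrease-key operations O(1) amortized instead of O(log n).
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry: lazy deletion instead of a true decrease-key
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Made-up example graph.
graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1), ("d", 4)], "c": [("d", 1)], "d": []}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 2, 'c': 3, 'd': 4}
```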
Structure
A Fibonacci heap is a collection of trees satisfying the minimum-heap property, that is, the key of a child is always greater than or equal to the key of the parent. This implies that the minimum key is always at the root of one of the trees. Compared with binom |
https://en.wikipedia.org/wiki/Data%20General%20RDOS | The Data General RDOS (Real-time Disk Operating System) is a real-time operating system released in 1970. The software was bundled with the company's popular Nova and Eclipse minicomputers.
Overview
RDOS is capable of multitasking, with the ability to run up to 32 tasks (similar to the current term threads) simultaneously on each of two grounds (foreground and background) within a 64 KB memory space. Later versions of RDOS are compatible with Data General's 16-bit Eclipse minicomputer line.
A cut-down version of RDOS, without real-time background and foreground capability but still capable of running multiple threads and multi-user Data General Business Basic, is called Data General Diskette Operating System (DG-DOS or now—somewhat confusingly—simply DOS); another related operating system is RTOS, a Real-Time Operating System for diskless environments. RDOS on microNOVA-based "Micro Products" micro-minicomputers is sometimes called DG/RDOS.
RDOS was superseded in the early 1980s by Data General's AOS family of operating systems, including AOS/VS and MP/AOS (MP/OS on smaller systems).
Commands
The following commands are supported by the RDOS/DOS CLI.
ALGOL
APPEND
ASM
BASIC
BATCH
BOOT
BPUNCH
BUILD
CCONT
CDIR
CHAIN
CHATR
CHLAT
CLEAR
CLG
COPY
CPART
CRAND
CREATE
DEB
DELETE
DIR
DISK
DUMP
EDIT
ENDLOG
ENPAT
EQUIV
EXFG
FDUMP
FGND
FILCOM
FLOAD
FORT
FORTRAN
FPRINT
GDIR
GMEM
GSYS
GTOD
INIT
LDIR
LFE
LINK
LIST
LOAD
LOG
MAC
MCABOOT
MDIR
MEDIT
MESSAGE
MKABS
MKSAVE
MOVE
NSPEED
OEDIT
OVLDR
PATCH
POP
PRINT
PUNCH
RDOSSORT
RELEASE
RENAME
REPLACE
REV
RLDR
SAVE
SDAY
SEDIT
SMEM
SPDIS
SPEBL
SPEED
SPKILL
STOD
SYSGEN
TPRINT
TUOFF
TUON
TYPE
VFU
XFER
Antitrust lawsuit
In the late 1970s, Data General was sued (under the Sherman and Clayton antitrust acts) by competitors for its practice of bundling RDOS with the Data General Nova or Eclipse minicomputer. When Data General introduced the Data Gen |
https://en.wikipedia.org/wiki/Proof%20of%20Bertrand%27s%20postulate | In mathematics, Bertrand's postulate (actually now a theorem) states that for each integer $n \ge 1$ there is a prime $p$ such that $n < p \le 2n$. First conjectured in 1845 by Joseph Bertrand, it was first proven by Chebyshev, and a shorter but also advanced proof was given by Ramanujan.
The following elementary proof was published by Paul Erdős in 1932, as one of his earliest mathematical publications. The basic idea is to show that the central binomial coefficients need to have a prime factor within the interval $(n, 2n)$ in order to be large enough. This is achieved through analysis of the factorization of the central binomial coefficients.
The main steps of the proof are as follows. First, show that the contribution of every prime power factor $p^r$ in the prime decomposition of the central binomial coefficient $\binom{2n}{n}$ is at most $2n$. Then show that every prime larger than $\sqrt{2n}$ appears at most once.
The next step is to prove that $\binom{2n}{n}$ has no prime factors in the interval $(\tfrac{2n}{3}, n)$. As a consequence of these bounds, the contribution to the size of $\binom{2n}{n}$ coming from the prime factors that are at most $n$ grows asymptotically as $\theta^{n}$ for some $\theta < 4$. Since the asymptotic growth of the central binomial coefficient is at least $4^n/(2n+1)$, the conclusion is that, by contradiction and for large enough $n$, the binomial coefficient must have another prime factor, which can only lie between $n$ and $2n$.
The argument given is valid for all sufficiently large $n$. The remaining small values of $n$ are verified by direct inspection, which completes the proof.
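The "direct inspection" of small cases can be imitated numerically. A brute-force check (purely illustrative, not part of the proof) that some prime lies in $(n, 2n]$ for each small $n$:

```python
def is_prime(k):
    """Trial-division primality test, adequate for small k."""
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

# Check Bertrand's postulate directly for small n: some prime p satisfies n < p <= 2n.
for n in range(1, 1000):
    assert any(is_prime(p) for p in range(n + 1, 2 * n + 1)), n
print("verified for n < 1000")
```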
Lemmas in the proof
The proof uses the following four lemmas to establish facts about the primes present in the central binomial coefficients.
Lemma 1
For any integer $n \ge 1$, we have
$$\frac{4^n}{2n+1} \le \binom{2n}{n}.$$
Proof: Applying the binomial theorem,
$$4^n = (1+1)^{2n} = \sum_{k=0}^{2n} \binom{2n}{k} \le (2n+1)\binom{2n}{n},$$
since $\binom{2n}{n}$ is the largest term in the sum on the right-hand side, and the sum has $2n+1$ terms.
Lemma 2
For a fixed prime $p$, define $R(p,n)$ to be the $p$-adic order of $\binom{2n}{n}$, that is, the largest natural number $x$ such that $p^x$ divides $\binom{2n}{n}$.
For any prime $p$, $p^{R(p,n)} \le 2n$.
Proof: The exponent of $p$ in $n!$ is given by Legendre's formula
$$\sum_{j=1}^{\infty} \left\lfloor \frac{n}{p^j} \right\rfloor,$$
so
$$R(p,n) = \sum_{j=1}^{\infty} \left\lfloor \frac{2n}{p^j} \right\rfloor - 2 \sum_{j=1}^{\infty} \left\lfloor \frac{n}{p^j} \right\rfloor = \sum_{j=1}^{\infty} \left( \left\lfloor \frac{2n}{p^j} \right\rfloor - 2 \left\lfloor \frac{n}{p^j} \right\rfloor \right).$$
But each |
https://en.wikipedia.org/wiki/Curry%E2%80%93Howard%20correspondence | In programming language theory and proof theory, the Curry–Howard correspondence (also known as the Curry–Howard isomorphism or equivalence, or the proofs-as-programs and propositions- or formulae-as-types interpretation) is the direct relationship between computer programs and mathematical proofs.
It is a generalization of a syntactic analogy between systems of formal logic and computational calculi that was first discovered by the American mathematician Haskell Curry and the logician William Alvin Howard. It is the link between logic and computation that is usually attributed to Curry and Howard, although the idea is related to the operational interpretation of intuitionistic logic given in various formulations by L. E. J. Brouwer, Arend Heyting and Andrey Kolmogorov (see Brouwer–Heyting–Kolmogorov interpretation) and Stephen Kleene (see Realizability). The relationship has been extended to include category theory as the three-way Curry–Howard–Lambek correspondence.
Origin, scope, and consequences
The beginnings of the Curry–Howard correspondence lie in several observations:
In 1934 Curry observes that the types of the combinators could be seen as axiom-schemes for intuitionistic implicational logic.
In 1958 he observes that a certain kind of proof system, referred to as Hilbert-style deduction systems, coincides on some fragment with the typed fragment of a standard model of computation known as combinatory logic.
In 1969 Howard observes that another, more "high-level" proof system, referred to as natural deduction, can be directly interpreted in its intuitionistic version as a typed variant of the model of computation known as lambda calculus.
The Curry–Howard correspondence is the observation that there is an isomorphism between the proof systems, and the models of computation. It is the statement that these two families of formalisms can be considered as identical.
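A loose illustration of the proofs-as-programs reading, using Python type hints as an informal stand-in for a typed lambda calculus (the type variables and functions below are invented for the example): a total term of type (A → B) → ((B → C) → (A → C)) plays the role of a proof that implication is transitive.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Under the propositions-as-types reading, producing this term is an informal
# proof of the tautology (A -> B) -> ((B -> C) -> (A -> C)):
# function composition witnesses the transitivity of implication.
def compose(f: Callable[[A], B]) -> Callable[[Callable[[B], C]], Callable[[A], C]]:
    def with_g(g: Callable[[B], C]) -> Callable[[A], C]:
        return lambda a: g(f(a))
    return with_g

# Usage: instantiating the "proof" at concrete types.
int_to_str: Callable[[int], str] = str
str_to_len: Callable[[str], int] = len
print(compose(int_to_str)(str_to_len)(12345))  # 5
```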
If one abstracts on the peculiarities of either formalism, the following generalizatio |
https://en.wikipedia.org/wiki/Mathematical%20fallacy | In mathematics, certain kinds of mistaken proof are often exhibited, and sometimes collected, as illustrations of a concept called mathematical fallacy. There is a distinction between a simple mistake and a mathematical fallacy in a proof, in that a mistake in a proof leads to an invalid proof while in the best-known examples of mathematical fallacies there is some element of concealment or deception in the presentation of the proof.
For example, the reason why validity fails may be attributed to a division by zero that is hidden by algebraic notation. There is a certain quality of the mathematical fallacy: as typically presented, it leads not only to an absurd result, but does so in a crafty or clever way. Therefore, these fallacies, for pedagogic reasons, usually take the form of spurious proofs of obvious contradictions. Although the proofs are flawed, the errors, usually by design, are comparatively subtle, or designed to show that certain steps are conditional, and are not applicable in the cases that are the exceptions to the rules.
The traditional way of presenting a mathematical fallacy is to give an invalid step of deduction mixed in with valid steps, so that the meaning of fallacy is here slightly different from the logical fallacy. The latter usually applies to a form of argument that does not comply with the valid inference rules of logic, whereas the problematic mathematical step is typically a correct rule applied with a tacit wrong assumption. Beyond pedagogy, the resolution of a fallacy can lead to deeper insights into a subject (e.g., the introduction of Pasch's axiom of Euclidean geometry, the five colour theorem of graph theory). Pseudaria, an ancient lost book of false proofs, is attributed to Euclid.
Mathematical fallacies exist in many branches of mathematics. In elementary algebra, typical examples may involve a step where division by zero is performed, where a root is incorrectly extracted or, more generally, where different values of a m |
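For concreteness, the classic spurious proof that 2 = 1, a standard textbook example of the kind described above (not taken from this article), in which the division by zero is hidden in the cancellation of the factor a − b:

```latex
\begin{align*}
a &= b \\
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a-b)(a+b) &= b(a-b) \\
a + b &= b && \text{(invalid step: division by } a - b = 0\text{)} \\
2b &= b \\
2 &= 1
\end{align*}
```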