https://en.wikipedia.org/wiki/MAI%20Systems%20Corp.%20v.%20Peak%20Computer%2C%20Inc. | MAI Systems Corp. v. Peak Computer, Inc., 991 F.2d 511 (9th Cir. 1993), was a case heard by the United States Court of Appeals for the Ninth Circuit which addressed the issue of whether the loading of software programs into random-access memory (RAM) by a computer repair technician during maintenance constituted an unauthorized software copy and therefore a copyright violation. The court held that it did, although the United States Congress subsequently amended the Copyright Act specifically to overrule this holding in the circumstances of computer repair.
Background
Peak Computer, Inc. is a computer maintenance company organized in 1990. Peak maintained computer systems for its clients by performing routine maintenance and emergency repairs. When providing maintenance or making emergency repairs, Peak booted the MAI Systems computer, causing the MAI operating system to be loaded from the hard disk into RAM. MAI also alleged that Peak ran MAI's diagnostic software during Peak's service calls.
This case involved the two parties MAI Systems and Peak Computer, as well as defendant Eric Francis, a former MAI Systems Corporation employee who joined Peak Computer, Inc.
Copyright issues
MAI contended that Peak's use of the MAI operating system constituted copyright infringement. MAI argued that the license agreement which permitted an end user to make a copy of the program for their own use did not extend to Peak because Peak was not the licensee and therefore had no rights under the license agreement.
The court agreed and granted partial summary judgment prohibiting Peak from continuing its method of operation. The court determined that a copy of a program made from a hard drive into RAM for the purpose of executing the program was, in fact, a copy under the Copyright Act. The judges applied the statutory definition of fixation, which states 'A work is "fixed" in a tangible medium of expression when its embodiment in a copy or phonorecord, by or under the authority of |
https://en.wikipedia.org/wiki/Anyonic%20Lie%20algebra | In mathematics, an anyonic Lie algebra is a U(1)-graded vector space equipped with a bilinear operator and linear maps, satisfying the following axioms:
for pure graded elements X, Y, and Z.
References
Vector spaces
Lie algebras |
https://en.wikipedia.org/wiki/Stellarium%20%28software%29 | Stellarium is a free and open-source planetarium, licensed under the terms of the GNU General Public License version 2, available for Linux, Windows, and macOS. A port of Stellarium called Stellarium Mobile is available for Android, iOS, and Symbian as a paid app developed by Noctua Software. These versions have limited functionality, lacking some features of the desktop version. All versions use OpenGL to render a realistic projection of the night sky in real time.
Stellarium was featured on SourceForge in May 2006 as Project of the Month.
History
In 2006, Stellarium 0.7.1 won a gold award in the Education category of the Les Trophées du Libre free software competition.
A modified version of Stellarium has been used by the MeerKAT project as a virtual sky display showing where the antennas of the radio telescope are pointed.
In December 2011, Stellarium was added as one of the "featured applications" in the Ubuntu Software Center.
Planetarium dome projection
The fisheye and spherical mirror distortion features allow Stellarium to be projected onto domes. Spherical mirror distortion is used in projection systems that use a digital video projector and a first surface convex spherical mirror to project images onto a dome. Such systems are generally cheaper than traditional planetarium projectors and fish-eye lens projectors and for that reason are used in budget and home planetarium setups where projection quality is less important.
Various companies which build and sell digital planetarium systems use Stellarium, such as e-Planetarium.
Digitalis Education Solutions, which helped develop Stellarium, created a fork called Nightshade which was specifically tailored to planetarium use.
VirGO
VirGO is a Stellarium plugin, a visual browser for the European Southern Observatory (ESO) Science Archive Facility which allows astronomers to browse professional astronomical data. It is no longer supported or maintained; the last version was 1.4.5, dated January 15, |
https://en.wikipedia.org/wiki/Loadable%20kernel%20module | In computing, a loadable kernel module (LKM) is an object file that contains code to extend the running kernel, or so-called base kernel, of an operating system. LKMs are typically used to add support for new hardware (as device drivers) and/or filesystems, or for adding system calls. When the functionality provided by an LKM is no longer required, it can be unloaded in order to free memory and other resources.
Most current Unix-like systems and Microsoft Windows support loadable kernel modules under different names, such as kernel loadable module (kld) in FreeBSD, kernel extension (kext) in macOS (although support for third-party modules is being dropped), kernel extension module in AIX, dynamically loadable kernel module in HP-UX, kernel-mode driver in Windows NT and downloadable kernel module (DKM) in VxWorks. They are also known as kernel loadable modules (or KLM), and simply as kernel modules (KMOD).
Advantages
Without loadable kernel modules, an operating system would have to include all possible anticipated functionality compiled directly into the base kernel. Much of that functionality would reside in memory without being used, wasting memory, and would require that users rebuild and reboot the base kernel every time they require new functionality.
Disadvantages
One minor criticism of preferring a modular kernel over a static kernel is the so-called fragmentation penalty. The base kernel is always unpacked into real contiguous memory by its setup routines; thus, the base kernel code is never fragmented. Once the system is in a state in which modules may be inserted, for example once the filesystems have been mounted that contain the modules, it is likely that any new kernel code insertion will cause the kernel to become fragmented, thereby introducing a minor performance penalty by using more TLB entries, causing more TLB misses.
Implementations in different operating systems
Linux
Loadable kernel modules in Linux are loaded (and unloaded) by the m |
https://en.wikipedia.org/wiki/Locale%20%28computer%20software%29 | In computing, a locale is a set of parameters that defines the user's language, region and any special variant preferences that the user wants to see in their user interface. Usually a locale identifier consists of at least a language code and a country/region code.
Locale is an important aspect of internationalization (i18n).
General locale settings
These settings usually include the following display (output) format settings:
Number format setting (LC_NUMERIC, C/C++)
Character classification, case conversion settings (LC_CTYPE, C/C++)
Date-time format setting (LC_TIME, C/C++)
String collation setting (LC_COLLATE, C/C++)
Currency format setting (LC_MONETARY, C/C++)
Paper size setting (LC_PAPER, ISO 30112)
Color setting
The locale settings are about formatting output given a locale. So, the time zone information and daylight saving time are not usually part of the locale settings.
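The category settings above can be inspected programmatically. A minimal sketch in Python, whose `locale` module wraps the C `setlocale` API; the "C" locale is used because it is the only name guaranteed to exist on every platform:

```python
import locale

# Select the "C" locale for all categories (LC_ALL); other locale
# names such as "en_US.UTF-8" vary by platform and may be absent.
locale.setlocale(locale.LC_ALL, "C")

# localeconv() exposes the LC_NUMERIC / LC_MONETARY conventions.
conv = locale.localeconv()
print(conv["decimal_point"])                   # "." in the C locale

# format_string applies the locale's number formatting rules.
print(locale.format_string("%.2f", 1234.5))    # "1234.50"
```

Note that, as the article says, time zone and daylight saving information are not locale categories: they are handled separately (e.g. by the `TZ` environment variable on POSIX systems).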
Less usual is the input format setting, which is mostly defined on a per application basis.
Programming and markup language support
In these environments,
C
C++
Eiffel
Java
.NET Framework
REBOL
Ruby
Perl
PHP
Python
XML
JSP
JavaScript
and other (nowadays) Unicode-based environments, locale identifiers are defined in a format similar to BCP 47. They are usually defined with just ISO 639 (language) and ISO 3166-1 alpha-2 (2-letter country) codes.
International standards
In standard C and C++, locale is defined in "categories" of LC_COLLATE (text collation), LC_CTYPE (character class), LC_MONETARY (currency format), LC_NUMERIC (number format), and LC_TIME (time format). The special category LC_ALL can be used to set all locale settings.
There are no standard locale names associated with the C and C++ standards besides the "minimal locale" named "C", although the POSIX format is a commonly used baseline.
POSIX platforms
On POSIX platforms such as Unix, Linux and others, locale identifiers are defined in a way similar to the BCP 47 definition of language tags, but the locale variant modifier is defined differently, and the character set is optionally incl |
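The identifier shape described above can be illustrated with a small parser. A sketch — the regular expression is a simplification, accepting only the common `language[_TERRITORY][.codeset][@modifier]` form rather than everything POSIX allows:

```python
import re

# Simplified POSIX locale identifier: language[_TERRITORY][.codeset][@modifier]
# e.g. "de_DE.UTF-8@euro". Real platforms accept more variation.
LOCALE_RE = re.compile(
    r"^(?P<language>[a-z]{2,3})"
    r"(?:_(?P<territory>[A-Z]{2}))?"
    r"(?:\.(?P<codeset>[^@]+))?"
    r"(?:@(?P<modifier>.+))?$"
)

def parse_locale(identifier):
    """Split a POSIX-style locale identifier into its four parts."""
    m = LOCALE_RE.match(identifier)
    if m is None:
        raise ValueError(f"not a recognised locale identifier: {identifier!r}")
    return m.groupdict()

print(parse_locale("de_DE.UTF-8@euro"))
# {'language': 'de', 'territory': 'DE', 'codeset': 'UTF-8', 'modifier': 'euro'}
```

The `@modifier` part is the locale variant modifier the article mentions as differing from BCP 47, and the `.codeset` part is the optionally included character set.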
https://en.wikipedia.org/wiki/Real%20tree | In mathematics, real trees (also called ℝ-trees) are a class of metric spaces generalising simplicial trees. They arise naturally in many mathematical contexts, in particular geometric group theory and probability theory. They are also the simplest examples of Gromov hyperbolic spaces.
Definition and examples
Formal definition
A metric space X is a real tree if it is a geodesic space in which every triangle is a tripod. That is, for every three points x, y, ρ ∈ X there exists a point c = c(x, y, ρ) such that the geodesic segments [ρ, x] and [ρ, y] intersect in the segment [ρ, c], and also c ∈ [x, y]. This definition is equivalent to X being a "zero-hyperbolic space" in the sense of Gromov (all triangles are "zero-thin").
Real trees can also be characterised by a topological property. A metric space X is a real tree if for any pair of points x, y ∈ X all topological embeddings σ of the segment [0, 1] into X such that σ(0) = x and σ(1) = y have the same image (which is then a geodesic segment from x to y).
Simple examples
If X is a connected graph with the combinatorial metric, then it is a real tree if and only if it is a tree (i.e. it has no cycles). Such a tree is often called a simplicial tree. Simplicial trees are characterised by the following topological property: a real tree T is simplicial if and only if the set of singular points of T (points whose complement in T has three or more connected components) is closed and discrete in T.
The ℝ-tree obtained in the following way is nonsimplicial. Start with the interval [0, 2] and glue, for each positive integer n, an interval of length 1/n to the point 1 − 1/n in the original interval. The set of singular points is discrete, but fails to be closed since 1 is an ordinary point in this ℝ-tree. Gluing an interval to 1 would result in a closed set of singular points at the expense of discreteness.
The Paris metric makes the plane into a real tree. It is defined as follows: one fixes an origin O, and if two points are on the same ray from O, their distance is defined as the Euclidean distance. Otherwise, their distance is defined to |
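As an illustration, assuming the standard completion of the definition just quoted (points on different rays are joined through the origin, so their distance is the sum of their Euclidean distances to it), the Paris metric can be sketched as:

```python
import math

def paris_distance(p, q, origin=(0.0, 0.0)):
    """Paris metric on the plane (a sketch, not from the article).

    Points on the same ray from the origin are at their Euclidean
    distance; otherwise the geodesic passes through the origin, so the
    distance is the sum of the two Euclidean distances to the origin.
    """
    px, py = p[0] - origin[0], p[1] - origin[1]
    qx, qy = q[0] - origin[0], q[1] - origin[1]
    cross = px * qy - py * qx
    dot = px * qx + py * qy
    if cross == 0 and dot >= 0:      # p and q lie on the same ray from O
        return math.hypot(px - qx, py - qy)
    return math.hypot(px, py) + math.hypot(qx, qy)

print(paris_distance((1, 0), (3, 0)))  # same ray: 2.0
print(paris_distance((1, 0), (0, 1)))  # different rays: 1 + 1 = 2.0
```

Every geodesic here runs along rays through O, which is why the plane with this metric is a tripod at every triangle, hence a real tree.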
https://en.wikipedia.org/wiki/Polyvinylidene%20fluoride | Polyvinylidene fluoride or polyvinylidene difluoride (PVDF) is a highly non-reactive thermoplastic fluoropolymer produced by the polymerization of vinylidene difluoride. Its chemical formula is (C2H2F2)n.
PVDF is a specialty plastic used in applications requiring the highest purity, as well as resistance to solvents, acids and hydrocarbons. PVDF has a low density (1.78 g/cm3) in comparison to other fluoropolymers, such as polytetrafluoroethylene.
It is available in the form of piping products, sheet, tubing, films, plate and an insulator for premium wire. It can be injected, molded or welded and is commonly used in the chemical, semiconductor, medical and defense industries, as well as in lithium-ion batteries. It is also available as a cross-linked closed-cell foam, used increasingly in aviation and aerospace applications, and as an exotic 3D printer filament. It can also be used in repeated contact with food products, as it is FDA-compliant and non-toxic below its degradation temperature.
As a fine powder grade, it is an ingredient in high-end paints for metals. These PVDF paints have extremely good gloss and color retention. They are in use on many prominent buildings around the world, such as the Petronas Towers in Malaysia and Taipei 101 in Taiwan, as well as on commercial and residential metal roofing.
PVDF membranes are used in western blots for the immobilization of proteins, due to the polymer's non-specific affinity for amino acids.
PVDF is also used as a binder component for the carbon electrode in supercapacitors and for other electrochemical applications.
Names
PVDF is sold under a variety of brand names including KF (Kureha), Hylar (Solvay), Kynar (Arkema) and Solef (Solvay).
Production
Properties
In 1969, strong piezoelectricity was observed in PVDF, with the piezoelectric coefficient of poled (placed under a strong electric field to induce a net dipole moment) thin films as large as 6–7 pC/N: 10 times larger than that observed in any other polymer.
PVDF has |
https://en.wikipedia.org/wiki/Weierstrass%20preparation%20theorem | In mathematics, the Weierstrass preparation theorem is a tool for dealing with analytic functions of several complex variables, at a given point P. It states that such a function is, up to multiplication by a function not zero at P, a polynomial in one fixed variable z, which is monic, and whose coefficients of lower degree terms are analytic functions in the remaining variables and zero at P.
There are also a number of variants of the theorem, that extend the idea of factorization in some ring R as u·w, where u is a unit and w is some sort of distinguished Weierstrass polynomial. Carl Siegel has disputed the attribution of the theorem to Weierstrass, saying that it occurred under the current name in some late nineteenth-century Traités d'analyse without justification.
Complex analytic functions
For one variable, the local form of an analytic function f(z) near 0 is zkh(z) where h(0) is not 0, and k is the order of the zero of f at 0. This is the result that the preparation theorem generalises.
We pick out one variable z, which we may assume is first, and write our complex variables as (z, z2, ..., zn). A Weierstrass polynomial W(z) is
zk + gk−1zk−1 + ... + g0
where gi(z2, ..., zn) is analytic and gi(0, ..., 0) = 0.
Then the theorem states that for analytic functions f, if
f(0, ...,0) = 0,
and
f(z, z2, ..., zn)
as a power series has some term only involving z, we can write (locally near (0, ..., 0))
f(z, z2, ..., zn) = W(z)h(z, z2, ..., zn)
with h analytic and h(0, ..., 0) not 0, and W a Weierstrass polynomial.
This has the immediate consequence that the set of zeros of f, near (0, ..., 0), can be found by fixing any small values of z2, ..., zn and then solving the equation W(z)=0. The corresponding values of z form a number of continuously-varying branches, in number equal to the degree of W in z. In particular f cannot have an isolated zero.
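A minimal worked example (not from the article) of the preparation theorem:

```latex
% Take f(z, w) = z^2 - w near the origin. Then f(0,0) = 0 and the power
% series of f contains the term z^2 involving z alone, so the theorem
% applies, with
f(z, w) = z^2 - w = W(z)\,h(z, w), \qquad
W(z) = z^2 + g_1 z + g_0, \quad g_1 = 0, \quad g_0(w) = -w, \quad h \equiv 1.
% Here g_0(0) = g_1(0) = 0 and h(0,0) = 1 \neq 0, as required.
% Fixing a small w \neq 0 and solving W(z) = 0 gives z = \pm\sqrt{w}:
% two continuously varying branches, equal in number to \deg_z W = 2,
% and the zero of f at the origin is indeed not isolated.
```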
Division theorem
A related result is the Weierstrass division theorem, which states that if f and g are ana |
https://en.wikipedia.org/wiki/Radical%20of%20a%20ring | In ring theory, a branch of mathematics, a radical of a ring is an ideal of "not-good" elements of the ring.
The first example of a radical was the nilradical introduced by , based on a suggestion of . In the next few years several other radicals were discovered, of which the most important example is the Jacobson radical. The general theory of radicals was defined independently by and .
Definitions
In the theory of radicals, rings are usually assumed to be associative, but need not be commutative and need not have a multiplicative identity. In particular, every ideal in a ring is also a ring.
A radical class (also called radical property or just radical) is a class σ of rings possibly without identities, such that:
the homomorphic image of a ring in σ is also in σ
every ring R contains an ideal S(R) in σ that contains every other ideal of R that is in σ
S(R/S(R)) = 0. The ideal S(R) is called the radical, or σ-radical, of R.
The study of such radicals is called torsion theory.
For any class δ of rings, there is a smallest radical class Lδ containing it, called the lower radical of δ. The operator L is called the lower radical operator.
A class of rings is called regular if every non-zero ideal of a ring in the class has a non-zero image in the class. For every regular class δ of rings, there is a largest radical class Uδ, called the upper radical of δ, having zero intersection with δ. The operator U is called the upper radical operator.
A class of rings is called hereditary if every ideal of a ring in the class also belongs to the class.
Examples
The Jacobson radical
Let R be any ring, not necessarily commutative. The Jacobson radical of R is the intersection of the annihilators of all simple right R-modules.
There are several equivalent characterizations of the Jacobson radical, such as:
J(R) is the intersection of the regular maximal right (or left) ideals of R.
J(R) is the intersection of all the right (or left) primitive ideals of R.
J(R) i |
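For small finite commutative rings such as Z/nZ these characterizations can be checked by brute force. A sketch using the standard quasiregularity test (x lies in J(R) exactly when 1 − rx is a unit for every r in R) alongside the intersection of the maximal ideals; the function names and the comparison are illustrative additions, not from the article:

```python
from math import gcd

def jacobson_radical_zn(n):
    """J(Z/nZ) via quasiregularity: x is in J iff 1 - r*x is a unit
    for every r (units of Z/nZ are the residues coprime to n)."""
    is_unit = lambda a: gcd(a % n, n) == 1
    return {x for x in range(n)
            if all(is_unit(1 - r * x) for r in range(n))}

def radical_via_maximal_ideals(n):
    """Intersection of the maximal ideals pZ/nZ, p a prime divisor of n."""
    primes = {p for p in range(2, n + 1)
              if n % p == 0 and all(p % q for q in range(2, p))}
    return {x for x in range(n) if all(x % p == 0 for p in primes)}

print(sorted(jacobson_radical_zn(12)))         # [0, 6]
print(sorted(radical_via_maximal_ideals(12)))  # [0, 6]
```

Both computations agree: for Z/12Z the maximal ideals are (2) and (3), and their intersection (6) = {0, 6} is exactly the set passing the quasiregularity test.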
https://en.wikipedia.org/wiki/Long-term%20potentiation | In neuroscience, long-term potentiation (LTP) is a persistent strengthening of synapses based on recent patterns of activity. These are patterns of synaptic activity that produce a long-lasting increase in signal transmission between two neurons. The opposite of LTP is long-term depression, which produces a long-lasting decrease in synaptic strength.
It is one of several phenomena underlying synaptic plasticity, the ability of chemical synapses to change their strength. As memories are thought to be encoded by modification of synaptic strength, LTP is widely considered one of the major cellular mechanisms that underlies learning and memory.
LTP was discovered in the rabbit hippocampus by Terje Lømo in 1966 and has remained a popular subject of research since. Many modern LTP studies seek to better understand its basic biology, while others aim to draw a causal link between LTP and behavioral learning. Still others try to develop methods, pharmacologic or otherwise, of enhancing LTP to improve learning and memory. LTP is also a subject of clinical research, for example, in the areas of Alzheimer's disease and addiction medicine.
History
Early theories of learning
At the end of the 19th century, scientists generally recognized that the number of neurons in the adult brain (roughly 100 billion) did not increase significantly with age, giving neurobiologists good reason to believe that memories were generally not the result of new neuron production. With this realization came the need to explain how memories could form in the absence of new neurons.
The Spanish neuroanatomist Santiago Ramón y Cajal was among the first to suggest a mechanism of learning that did not require the formation of new neurons. In his 1894 Croonian Lecture, he proposed that memories might instead be formed by strengthening the connections between existing neurons to improve the effectiveness of their communication. Hebbian theory, introduced by Donald Hebb in 1949, echoed Ramón y Cajal's |
https://en.wikipedia.org/wiki/Tensor%20product%20of%20algebras | In mathematics, the tensor product of two algebras over a commutative ring R is also an R-algebra; this construction is called the tensor product of algebras. When the ring is a field, the most common application of such products is to describe the product of algebra representations.
Definition
Let R be a commutative ring and let A and B be R-algebras. Since A and B may both be regarded as R-modules, their tensor product
A ⊗R B
is also an R-module. The tensor product can be given the structure of a ring by defining the product on elements of the form a ⊗ b by
(a1 ⊗ b1)(a2 ⊗ b2) = a1a2 ⊗ b1b2
and then extending by linearity to all of A ⊗R B. This ring is an R-algebra, associative and unital, with identity element given by 1A ⊗ 1B, where 1A and 1B are the identity elements of A and B. If A and B are commutative, then the tensor product is commutative as well.
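For matrix algebras this multiplication rule is realized concretely by the Kronecker product, which models the elementary tensor a ⊗ b as kron(a, b). A sketch assuming NumPy is available, with arbitrary illustrative matrices:

```python
import numpy as np

# Model M_2(R) (x) M_2(R) inside M_4(R) via the Kronecker product.
a1 = np.array([[1.0, 2.0], [0.0, 1.0]])
a2 = np.array([[0.0, 1.0], [1.0, 0.0]])
b1 = np.array([[2.0, 0.0], [1.0, 1.0]])
b2 = np.array([[1.0, 1.0], [0.0, 2.0]])

# Multiplication of elementary tensors: (a1 (x) b1)(a2 (x) b2) ...
lhs = np.kron(a1, b1) @ np.kron(a2, b2)
# ... equals (a1 a2) (x) (b1 b2), the defining product rule.
rhs = np.kron(a1 @ a2, b1 @ b2)
print(np.allclose(lhs, rhs))  # True

# The identity element 1_A (x) 1_B is the identity of the big algebra.
print(np.allclose(np.kron(np.eye(2), np.eye(2)), np.eye(4)))  # True
```

The equality of `lhs` and `rhs` is the mixed-product property of the Kronecker product, i.e. exactly the multiplication rule defining the tensor product of algebras.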
The tensor product turns the category of R-algebras into a symmetric monoidal category.
Further properties
There are natural homomorphisms from A and B to A ⊗R B given by a ↦ a ⊗ 1B and b ↦ 1A ⊗ b.
These maps make the tensor product the coproduct in the category of commutative R-algebras. The tensor product is not the coproduct in the category of all R-algebras. There the coproduct is given by a more general free product of algebras. Nevertheless, the tensor product of non-commutative algebras can be described by a universal property similar to that of the coproduct:
Hom(A ⊗ B, X) ≅ {(f, g) ∈ Hom(A, X) × Hom(B, X) | [f(A), g(B)] = 0}, where [−, −] denotes the commutator.
The natural isomorphism is given by identifying a morphism φ: A ⊗ B → X on the left hand side with the pair of morphisms (f, g) on the right hand side, where f(a) := φ(a ⊗ 1) and similarly g(b) := φ(1 ⊗ b).
Applications
The tensor product of commutative algebras is of frequent use in algebraic geometry. For affine schemes X, Y, Z with morphisms from X and Z to Y, so X = Spec(A), Y = Spec(R), and Z = Spec(B) for some commutative rings A, R, B, the fiber product scheme is the affine scheme corresponding to the tensor product of algebras: X ×Y Z = Spec(A ⊗R B).
More generally, the fiber product of schemes is defined by gluing together affine fiber products of this form.
Examples
The t |
https://en.wikipedia.org/wiki/Opposite%20category | In category theory, a branch of mathematics, the opposite category or dual category Cop of a given category C is formed by reversing the morphisms, i.e. interchanging the source and target of each morphism. Doing the reversal twice yields the original category, so the opposite of an opposite category is the original category itself. In symbols, (Cop)op = C.
Examples
An example comes from reversing the direction of inequalities in a partial order. So if X is a set and ≤ a partial order relation, we can define a new partial order relation ≤op by
x ≤op y if and only if y ≤ x.
The new order is commonly called dual order of ≤, and is mostly denoted by ≥. Therefore, duality plays an important role in order theory and every purely order theoretic concept has a dual. For example, there are opposite pairs child/parent, descendant/ancestor, infimum/supremum, down-set/up-set, ideal/filter etc. This order theoretic duality is in turn a special case of the construction of opposite categories as every ordered set can be understood as a category.
Given a semigroup (S, ·), one usually defines the opposite semigroup as (S, ·)op = (S, *) where x*y ≔ y·x for all x,y in S. So also for semigroups there is a strong duality principle. Clearly, the same construction works for groups, as well, and is known in ring theory, too, where it is applied to the multiplicative semigroup of the ring to give the opposite ring. Again this process can be described by completing a semigroup to a monoid, taking the corresponding opposite category, and then possibly removing the unit from that monoid.
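The opposite-semigroup construction can be sketched concretely for strings under concatenation, a noncommutative semigroup; the function names are illustrative:

```python
def op_product(x, y):
    """Product in the opposite semigroup of strings under
    concatenation: x * y := y . x (arguments reversed)."""
    return y + x

# Associativity survives the reversal:
a, b, c = "a", "b", "c"
assert op_product(op_product(a, b), c) == op_product(a, op_product(b, c)) == "cba"

# Taking the opposite twice recovers the original product:
def op_op_product(x, y):
    return op_product(y, x)

assert op_op_product(a, b) == a + b == "ab"
print("opposite semigroup laws hold")
```

The second assertion is the string-semigroup instance of the duality principle stated for categories above: reversing twice gives back the original structure.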
The category of Boolean algebras and Boolean homomorphisms is equivalent to the opposite of the category of Stone spaces and continuous functions.
The category of affine schemes is equivalent to the opposite of the category of commutative rings.
The Pontryagin duality restricts to an equivalence between the category of compact Hausdorff abelian topological groups and the opposite of the category of (discrete) |
https://en.wikipedia.org/wiki/Production%20equipment%20control | Production equipment control involves production equipment that resides on the shop floor of a manufacturing company; its purpose is to produce goods of the desired quality when provided with production resources of the required quality. In modern production lines the production equipment is fully automated using industrial control methods and involves limited participation of unskilled labour. Modern production equipment consists of mechatronic modules that are integrated according to a control architecture. The most widely known architectures involve hierarchy, polyarchy, heterarchy and hybrid forms. The methods for achieving a technical effect are described by control algorithms, which may or may not utilize formal methods in their design.
Industrial equipment
Formal methods |
https://en.wikipedia.org/wiki/Thioester | In organic chemistry, thioesters are organosulfur compounds with the molecular structure R−C(=O)−S−R′. They are analogous to carboxylate esters (R−C(=O)−O−R′), with the sulfur in the thioester replacing oxygen in the carboxylate ester, as implied by the thio- prefix. They are the product of esterification of a carboxylic acid (R−COOH) with a thiol (R′−SH). In biochemistry, the best-known thioesters are derivatives of coenzyme A, e.g., acetyl-CoA. The R and R′ represent organyl groups, or H in the case of R.
Synthesis
The most typical route to thioesters involves the reaction of an acid chloride with an alkali metal salt of a thiol:
RSNa + R'COCl -> R'COSR + NaCl
Another common route entails the displacement of halides by the alkali metal salt of a thiocarboxylic acid. For example, thioacetate esters are commonly prepared by alkylation of potassium thioacetate:
CH3COSK + RX -> CH3COSR + KX
The analogous alkylation of an acetate salt is rarely practiced. The alkylation can be conducted using Mannich bases and the thiocarboxylic acid:
CH3COSH + R'_2NCH2OH -> CH3COSCH2NR'_2 + H2O
Thioesters can be prepared by condensation of thiols and carboxylic acids in the presence of dehydrating agents:
RSH + R'CO2H -> RSC(O)R' + H2O
A typical dehydration agent is DCC. Efforts to improve the sustainability of thioester synthesis have also been reported, utilising the safer coupling reagent T3P and the greener solvent cyclopentanone. Acid anhydrides and some lactones also give thioesters upon treatment with thiols in the presence of a base.
Thioesters can be conveniently prepared from alcohols by the Mitsunobu reaction, using thioacetic acid.
They also arise via carbonylation of alkynes and alkenes in the presence of thiols.
Reactions
Thioesters hydrolyze to thiols and the carboxylic acid:
RC(O)SR' + H2O → RCO2H + RSH
The carbonyl center in thioesters is more reactive toward amine nucleophiles than that of ordinary esters, giving amides:
In a related reaction, but using a soft-metal to capture the thiolate, thioesters are converted into esters. |
https://en.wikipedia.org/wiki/Grothendieck%27s%20Galois%20theory | In mathematics, Grothendieck's Galois theory is an abstract approach to the Galois theory of fields, developed around 1960 to provide a way to study the fundamental group of algebraic topology in the setting of algebraic geometry. It provides, in the classical setting of field theory, an alternative perspective to that of Emil Artin based on linear algebra, which became standard from about the 1930s.
The approach of Alexander Grothendieck is concerned with the category-theoretic properties that characterise the categories of finite G-sets for a fixed profinite group G. For example, G might be the group denoted Ẑ, which is the inverse limit of the cyclic additive groups Z/nZ — or equivalently the completion of the infinite cyclic group Z for the topology of subgroups of finite index. A finite G-set is then a finite set X on which G acts through a quotient finite cyclic group, so that it is specified by giving some permutation of X.
In the above example, a connection with classical Galois theory can be seen by regarding Ẑ as the profinite Galois group Gal(F̄/F) of the algebraic closure F̄ of any finite field F, over F. That is, the automorphisms of F̄ fixing F are described by the inverse limit, as we take larger and larger finite splitting fields over F. The connection with geometry can be seen when we look at covering spaces of the unit disk in the complex plane with the origin removed: the finite covering realised by the zn map of the disk, thought of by means of a complex number variable z, corresponds to the subgroup n·Z of the fundamental group of the punctured disk.
The theory of Grothendieck, published in SGA1, shows how to reconstruct the category of G-sets from a fibre functor Φ, which in the geometric setting takes the fibre of a covering above a fixed base point (as a set). In fact there is an isomorphism proved of the type
G ≅ Aut(Φ),
the latter being the group of automorphisms (self-natural equivalences) of Φ. An abstract classification of categories w |
https://en.wikipedia.org/wiki/Video%20game%20industry | The video game industry is the tertiary and quaternary sectors of the entertainment industry that specialize in the development, marketing, distribution, monetization and consumer feedback of video games. The industry encompasses dozens of job disciplines and thousands of jobs worldwide.
The video game industry has grown from niche to mainstream. , video games generated annually in global sales. In the US, the industry earned about in 2007, in 2008, and 2010, according to the ESA annual report. Research from Ampere Analysis indicated three points: the sector has consistently grown since at least 2015 and expanded 26% from 2019 to 2021, to a record ; the global games and services market is forecast to shrink 1.2% annually to in 2022; the industry is not recession-proof.
The industry has influenced the technological advancement of personal computers through sound cards, graphics cards and 3D graphic accelerators, CPUs, and co-processors like PhysX. Sound cards, for example, were originally developed for games and then improved for adoption by the music industry.
Industry overview
Size
In 2017 in the United States, which represented about a third of the global video game market, the Entertainment Software Association estimated that there were over 2,300 development companies and over 525 publishing companies, including in hardware and software manufacturing, service providers, and distributors. These companies in total have nearly 66,000 direct employees. When including indirect employment, such as a developer using the services of a graphics design package from a different firm, the total number of employees involved in the video game industry rises above 220,000.
Value chain
Traditionally, the video game industry has had six connected layers in its value chain based on the retail distribution of games:
Game development, representing programmers, designers, and artists, and their leadership, with support of middleware and other development tools.
Pub |
https://en.wikipedia.org/wiki/Schur%20decomposition | In the mathematical discipline of linear algebra, the Schur decomposition or Schur triangulation, named after Issai Schur, is a matrix decomposition. It allows one to write an arbitrary complex square matrix as unitarily equivalent to an upper triangular matrix whose diagonal elements are the eigenvalues of the original matrix.
Statement
The Schur decomposition reads as follows: if A is an n × n square matrix with complex entries, then A can be expressed as A = QUQ−1
where Q is a unitary matrix (so that its inverse Q−1 is also the conjugate transpose Q* of Q), and U is an upper triangular matrix, which is called a Schur form of A. Since U is similar to A, it has the same spectrum, and since it is triangular, its eigenvalues are the diagonal entries of U.
The Schur decomposition implies that there exists a nested sequence of A-invariant subspaces {0} = V0 ⊂ V1 ⊂ ⋯ ⊂ Vn = Cn, and that there exists an ordered orthonormal basis (for the standard Hermitian form of Cn) such that the first i basis vectors span Vi for each i occurring in the nested sequence. Phrased somewhat differently, the first part says that a linear operator J on a complex finite-dimensional vector space stabilizes a complete flag (V1, ..., Vn).
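The statement can be checked numerically. A sketch assuming NumPy is available: build A = QUQ* from a chosen unitary Q and upper triangular U, and confirm that the spectrum of A is the diagonal of U (the particular matrices are arbitrary illustrative choices):

```python
import numpy as np

# Upper triangular U with known diagonal: the intended spectrum {2, -1, 5}.
U = np.array([[2.0, 1.0, 3.0],
              [0.0, -1.0, 4.0],
              [0.0, 0.0, 5.0]])

# Obtain a unitary (here real orthogonal) Q from the QR factorization
# of a fixed invertible matrix.
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
Q, _ = np.linalg.qr(M)

# A is unitarily similar to U, so its eigenvalues are diag(U).
A = Q @ U @ Q.conj().T

eigs = np.sort(np.linalg.eigvals(A).real)
print(np.allclose(eigs, [-1.0, 2.0, 5.0]))  # True
```

Running the construction in the other direction — computing Q and U from a given A — is what numerical Schur decomposition routines (e.g. in LAPACK) do, typically via the QR algorithm.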
Proof
A constructive proof for the Schur decomposition is as follows: every operator A on a complex finite-dimensional vector space has an eigenvalue λ, corresponding to some eigenspace Vλ. Let Vλ⊥ be its orthogonal complement. It is clear that, with respect to this orthogonal decomposition, A has the block matrix representation [λIλ A12; 0 A22] (one can pick here any orthonormal bases Z1 and Z2 spanning Vλ and Vλ⊥ respectively)
where Iλ is the identity operator on Vλ. The above matrix would be upper-triangular except for the A22 block. But exactly the same procedure can be applied to the sub-matrix A22, viewed as an operator on Vλ⊥, and its submatrices. Continue this way until the resulting matrix is upper triangular. Since each conjugation increases the dimension of the upper-triangular block by at least one, this process takes at most n |
https://en.wikipedia.org/wiki/Credit%20risk | Credit risk is the possibility of losing a lender holds due to a risk of default on a debt that may arise from a borrower failing to make required payments. In the first resort, the risk is that of the lender and includes lost principal and interest, disruption to cash flows, and increased collection costs. The loss may be complete or partial. In an efficient market, higher levels of credit risk will be associated with higher borrowing costs. Because of this, measures of borrowing costs such as yield spreads can be used to infer credit risk levels based on assessments by market participants.
Losses can arise in a number of circumstances, for example:
A consumer may fail to make a payment due on a mortgage loan, credit card, line of credit, or other loan.
A company is unable to repay asset-secured fixed or floating charge debt.
A business or consumer does not pay a trade invoice when due.
A business does not pay an employee's earned wages when due.
A business or government bond issuer does not make a payment on a coupon or principal payment when due.
An insolvent insurance company does not pay a policy obligation.
An insolvent bank will not return funds to a depositor.
A government grants bankruptcy protection to an insolvent consumer or business.
To reduce the lender's credit risk, the lender may perform a credit check on the prospective borrower, may require the borrower to take out appropriate insurance, such as mortgage insurance, or seek security over some assets of the borrower or a guarantee from a third party. The lender can also take out insurance against the risk or on-sell the debt to another company. In general, the higher the risk, the higher will be the interest rate that the debtor will be asked to pay on the debt.
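The proportionality between risk and required compensation is often quantified through the one-period expected-loss identity EL = PD × LGD × EAD (a standard credit-risk measure, not stated in the text above; the function name and parameter values are illustrative):

```python
# Expected loss = probability of default × loss given default × exposure at default.
# All values below are illustrative assumptions, not figures from the text.
def expected_loss(pd: float, lgd: float, ead: float) -> float:
    """pd: probability of default; lgd: loss given default (fraction of
    exposure lost if default occurs); ead: exposure at default."""
    return pd * lgd * ead

loss = expected_loss(pd=0.02, lgd=0.45, ead=1_000_000)  # ≈ 9,000
assert abs(loss - 9_000) < 1e-6
```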
Credit risk mainly arises when borrowers are unable or unwilling to pay.
Types
A credit risk can be of the following types:
Credit default risk – The risk of loss arising from a debtor being unlikely to pay its loan obligations
https://en.wikipedia.org/wiki/Optical%20engineering | Optical engineering is the field of science and engineering encompassing the physical phenomena and technologies associated with the generation, transmission, manipulation, detection, and utilization of light. Optical engineers use optics to solve problems and to design and build devices that make light do something useful. Drawing on physics and chemistry, they design and operate optical equipment that exploits the properties of light, such as lenses, microscopes, telescopes, lasers, sensors, fiber optic communication systems and optical disc systems (e.g. CD, DVD).
Optical engineering metrology uses optical methods to measure either micro-vibrations with instruments like the laser speckle interferometer, or properties of masses with instruments that measure refraction.
Nano-measuring and nano-positioning machines are devices designed by optical engineers. These machines, for example microphotolithographic steppers, have nanometer precision, and consequently are used in the fabrication of goods at this scale.
See also
Optical lens design
Optical physics
Optician
Further reading
Driggers, Ronald G. (ed.) (2003). Encyclopedia of Optical Engineering. New York: Marcel Dekker. 3 vols.
Bruce H. Walker, Historical Review, SPIE Press, Bellingham, WA.
FTS Yu & Xiangyang Yang (1997) Introduction to Optical Engineering, Cambridge University Press.
Optical Engineering (ISSN 0091-3286)
Engineering
Engineering disciplines |
https://en.wikipedia.org/wiki/Schur%20complement | In linear algebra and the theory of matrices, the Schur complement of a block matrix is defined as follows.
Suppose p, q are nonnegative integers, and suppose A, B, C, D are respectively p × p, p × q, q × p, and q × q matrices of complex numbers. Let
M = [ A  B ]
    [ C  D ]
so that M is a (p + q) × (p + q) matrix.
If D is invertible, then the Schur complement of the block D of the matrix M is the p × p matrix defined by M/D := A − BD−1C.
If A is invertible, the Schur complement of the block A of the matrix M is the q × q matrix defined by M/A := D − CA−1B.
In the case that A or D is singular, substituting a generalized inverse for the inverses on M/A and M/D yields the generalized Schur complement.
The Schur complement is named after Issai Schur who used it to prove Schur's lemma, although it had been used previously. Emilie Virginia Haynsworth was the first to call it the Schur complement. The Schur complement is a key tool in the fields of numerical analysis, statistics, and matrix analysis.
Background
The Schur complement arises when performing a block Gaussian elimination on the matrix M. In order to eliminate the elements below the block diagonal, one multiplies the matrix M by a block lower triangular matrix on the right as follows:
M [ Ip     0  ]  =  [ A − BD−1C  B ]
  [ −D−1C  Iq ]     [ 0          D ]
where Ip denotes a p×p identity matrix. As a result, the Schur complement M/D = A − BD−1C appears in the upper-left p×p block.
Continuing the elimination process beyond this point (i.e., performing a block Gauss–Jordan elimination),
leads to an LDU decomposition of M, which reads
M = [ Ip  BD−1 ] [ A − BD−1C  0 ] [ Ip    0  ]
    [ 0   Iq   ] [ 0          D ] [ D−1C  Iq ]
Thus, the inverse of M may be expressed involving D−1 and the inverse of the Schur complement, assuming it exists, as
M−1 = [ (M/D)−1          −(M/D)−1BD−1             ]
      [ −D−1C(M/D)−1     D−1 + D−1C(M/D)−1BD−1    ]
The above relationship comes from the elimination operations that involve D−1 and M/D. An equivalent derivation can be done with the roles of A and D interchanged. By equating the expressions for M−1 obtained in these two different ways, one can establish the matrix inversion lemma, which relates the two Schur complements of M: M/D and M/A (see "Derivation from LDU decomposition" in the article on the Woodbury matrix identity).
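As a numerical sanity check (a sketch with small illustrative blocks; NumPy is assumed), the Schur complement M/D = A − BD−1C can be formed directly and compared against the upper-left block of M−1:

```python
import numpy as np

# Small illustrative blocks: A is 2x2, D is 1x1, so M is 3x3.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 2.0]])
D = np.array([[5.0]])

M = np.block([[A, B], [C, D]])
M_over_D = A - B @ np.linalg.inv(D) @ C  # the Schur complement of D in M

# The upper-left p x p block of M^{-1} is the inverse of the Schur complement.
assert np.allclose(np.linalg.inv(M)[:2, :2], np.linalg.inv(M_over_D))
```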
Properties
If |
https://en.wikipedia.org/wiki/Pathological%20%28mathematics%29 | In mathematics, when a mathematical phenomenon runs counter to some intuition, then the phenomenon is sometimes called pathological. On the other hand, if a phenomenon does not run counter to intuition,
it is sometimes called well-behaved. These terms are sometimes useful in mathematical research and teaching, but there is no strict mathematical definition of pathological or well-behaved.
In analysis
A classic example of a pathology is the Weierstrass function, a function that is continuous everywhere but differentiable nowhere. The sum of a differentiable function and the Weierstrass function is again continuous but nowhere differentiable; so there are at least as many such functions as differentiable functions. In fact, using the Baire category theorem, one can show that continuous functions are generically nowhere differentiable.
Such examples were deemed pathological when they were first discovered:
To quote Henri Poincaré:
Since Poincaré, nowhere differentiable functions have been shown to appear in basic physical and biological processes such as Brownian motion and in applications such as the Black-Scholes model in finance.
Counterexamples in Analysis is a whole book of such counterexamples.
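A truncated Weierstrass series can be evaluated numerically (a sketch; the parameter choices a = 0.5 and b = 13, which satisfy ab > 1 + 3π/2, and the helper name are ours). The difference quotients of the full series fail to settle as the step shrinks, which is why no derivative exists anywhere:

```python
import math

# Truncation of the Weierstrass series W(x) = sum_n a^n cos(b^n * pi * x).
# With 0 < a < 1 and ab > 1 + 3*pi/2 the full series is continuous everywhere
# but differentiable nowhere; the parameters here are illustrative choices.
def weierstrass(x: float, a: float = 0.5, b: int = 13, terms: int = 30) -> float:
    return sum(a ** n * math.cos(b ** n * math.pi * x) for n in range(terms))

# Continuity comes from uniform convergence: |W(x)| <= sum_n a^n = 1/(1 - a).
assert abs(weierstrass(0.3)) <= 1 / (1 - 0.5)
```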
In topology
One famous counterexample in topology is the Alexander horned sphere, showing that topologically embedding the sphere S2 in R3 may fail to separate the space cleanly. As a counterexample, it motivated mathematicians to define the tameness property, which suppresses the kind of wild behavior exhibited by the horned sphere, wild knot, and other similar examples.
Like many other pathologies, the horned sphere in a sense plays on infinitely fine, recursively generated structure, which in the limit violates ordinary intuition. In this case, the topology of an ever-descending chain of interlocking loops of continuous pieces of the sphere in the limit fully reflects that of the common sphere, and one would expect the outside of it, after an embedding, to work |
https://en.wikipedia.org/wiki/Polynomial%20ring | In mathematics, especially in the field of algebra, a polynomial ring or polynomial algebra is a ring (which is also a commutative algebra) formed from the set of polynomials in one or more indeterminates (traditionally also called variables) with coefficients in another ring, often a field.
Often, the term "polynomial ring" refers implicitly to the special case of a polynomial ring in one indeterminate over a field. The importance of such polynomial rings stems from the many properties that they have in common with the ring of the integers.
Polynomial rings occur and are often fundamental in many parts of mathematics such as number theory, commutative algebra, and algebraic geometry. In ring theory, many classes of rings, such as unique factorization domains, regular rings, group rings, rings of formal power series, Ore polynomials, graded rings, have been introduced for generalizing some properties of polynomial rings.
A closely related notion is that of the ring of polynomial functions on a vector space, and, more generally, ring of regular functions on an algebraic variety.
Definition (univariate case)
The polynomial ring, K[X], in X over a field K (or, more generally, a commutative ring) can be defined in several equivalent ways. One of them is to define K[X] as the set of expressions, called polynomials in X, of the form
p = p0 + p1X + p2X2 + ⋯ + pm−1Xm−1 + pmXm,
where p0, p1, …, pm, the coefficients of p, are elements of K, pm ≠ 0 if m > 0, and X, X2, …, are symbols, which are considered as "powers" of X, and follow the usual rules of exponentiation: X0 = 1, X1 = X, and Xk Xl = Xk+l for any nonnegative integers k and l. The symbol X is called an indeterminate or variable. (The term "variable" comes from the terminology of polynomial functions. However, here, X does not have any value (other than itself), and cannot vary, being a constant in the polynomial ring.)
Two polynomials are equal when the corresponding coefficients of each are equal.
One can think of the ring as arising from by adding one new element that is external to , commutes with all elements o |
https://en.wikipedia.org/wiki/Equinumerosity | In mathematics, two sets or classes A and B are equinumerous if there exists a one-to-one correspondence (or bijection) between them, that is, if there exists a function from A to B such that for every element y of B, there is exactly one element x of A with f(x) = y. Equinumerous sets are said to have the same cardinality (number of elements). The study of cardinality is often called equinumerosity (equalness-of-number). The terms equipollence (equalness-of-strength) and equipotence (equalness-of-power) are sometimes used instead.
Equinumerosity has the characteristic properties of an equivalence relation. The statement that two sets A and B are equinumerous is usually denoted
A ≈ B or A ~ B, or |A| = |B|.
The definition of equinumerosity using bijections can be applied to both finite and infinite sets, and allows one to state whether two sets have the same size even if they are infinite. Georg Cantor, the inventor of set theory, showed in 1874 that there is more than one kind of infinity, specifically that the collection of all natural numbers and the collection of all real numbers, while both infinite, are not equinumerous (see Cantor's first uncountability proof). In his controversial 1878 paper, Cantor explicitly defined the notion of "power" of sets and used it to prove that the set of all natural numbers and the set of all rational numbers are equinumerous (an example where a proper subset of an infinite set is equinumerous to the original set), and that the Cartesian product of even a countably infinite number of copies of the real numbers is equinumerous to a single copy of the real numbers.
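The standard toy example can be checked mechanically on a finite initial segment (an illustrative sketch, not a proof): the doubling map is a bijection between the natural numbers and the proper subset of even naturals:

```python
# f(n) = 2n is a bijection between the natural numbers and the even naturals,
# a proper subset; we check injectivity and the image on a finite initial segment.
f = lambda n: 2 * n
image = {f(n) for n in range(1000)}
assert len(image) == 1000                  # f is injective on this segment
assert image == set(range(0, 2000, 2))     # the image is exactly the evens below 2000
```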
Cantor's theorem from 1891 implies that no set is equinumerous to its own power set (the set of all its subsets). This allows the definition of greater and greater infinite sets starting from a single infinite set.
If the axiom of choice holds, then the cardinal number of a set may be regarded as the least ordinal number of that cardinality (see initial ordinal). Otherwise, it may be r |
https://en.wikipedia.org/wiki/Limit%20ordinal | In set theory, a limit ordinal is an ordinal number that is neither zero nor a successor ordinal. Alternatively, an ordinal λ is a limit ordinal if there is an ordinal less than λ, and whenever β is an ordinal less than λ, then there exists an ordinal γ such that β < γ < λ. Every ordinal number is either zero, or a successor ordinal, or a limit ordinal.
For example, the smallest limit ordinal is ω, the smallest ordinal greater than every natural number. This is a limit ordinal because for any smaller ordinal (i.e., for any natural number) n we can find another natural number larger than it (e.g. n+1), but still less than ω. The next-smallest limit ordinal is ω+ω. This will be discussed further in the article.
Using the von Neumann definition of ordinals, every ordinal is the well-ordered set of all smaller ordinals. The union of a nonempty set of ordinals that has no greatest element is then always a limit ordinal. Using von Neumann cardinal assignment, every infinite cardinal number is also a limit ordinal.
Alternative definitions
Various other ways to define limit ordinals are:
It is equal to the supremum of all the ordinals below it, but is not zero. (Compare with a successor ordinal: the set of ordinals below it has a maximum, so the supremum is this maximum, the previous ordinal.)
It is not zero and has no maximum element.
It can be written in the form ω·α for α > 0. That is, in the Cantor normal form there is no finite number as last term, and the ordinal is nonzero.
It is a limit point of the class of ordinal numbers, with respect to the order topology. (The other ordinals are isolated points.)
Some contention exists on whether or not 0 should be classified as a limit ordinal, as it does not have an immediate predecessor;
some textbooks include 0 in the class of limit ordinals while others exclude it.
Examples
Because the class of ordinal numbers is well-ordered, there is a smallest infinite limit ordinal, denoted by ω (omega). The ordinal ω is also th |
https://en.wikipedia.org/wiki/Cardinal%20assignment | In set theory, the concept of cardinality is significantly developable without recourse to actually defining cardinal numbers as objects in the theory itself (this is in fact a viewpoint taken by Frege; Frege cardinals are basically equivalence classes on the entire universe of sets, by equinumerosity). The concepts are developed by defining equinumerosity in terms of functions and the concepts of one-to-one and onto (injectivity and surjectivity); this gives us a quasi-ordering relation
on the whole universe by size. It is not a true partial ordering because antisymmetry need not hold: if both A ≤c B and B ≤c A, it is true by the Cantor–Bernstein–Schroeder theorem that A =c B, i.e. A and B are equinumerous, but they do not have to be literally equal (see isomorphism). That at least one of A ≤c B and B ≤c A holds turns out to be equivalent to the axiom of choice.
Nevertheless, most of the interesting results on cardinality and its arithmetic can be expressed merely with =c.
The goal of a cardinal assignment is to assign to every set A a specific, unique set that is only dependent on the cardinality of A. This is in accordance with Cantor's original vision of cardinals: to take a set and abstract its elements into canonical "units" and collect these units into another set, such that the only thing special about this set is its size. These would be totally ordered by the relation ≤c, and =c would be true equality. As Y. N. Moschovakis says, however, this is mostly an exercise in mathematical elegance, and you don't gain much unless you are "allergic to subscripts." However, there are various valuable applications of "real" cardinal numbers in various models of set theory.
In modern set theory, we usually use the Von Neumann cardinal assignment, which uses the theory of ordinal numbers and the full power of the axioms of choice and replacement. Cardinal assignments do need the full axiom of choice, if we want a decent cardinal arithmetic and an assignment for all sets.
Cardinal assignment without |
https://en.wikipedia.org/wiki/Von%20Neumann%20cardinal%20assignment | The von Neumann cardinal assignment is a cardinal assignment that uses ordinal numbers. For a well-orderable set U, we define its cardinal number to be the smallest ordinal number equinumerous to U, using the von Neumann definition of an ordinal number. More precisely:
|U| = min {α ∈ ON : α is equinumerous to U}, where ON is the class of ordinals. This ordinal is also called the initial ordinal of the cardinal.
That such an ordinal exists and is unique is guaranteed by the fact that U is well-orderable and that the class of ordinals is well-ordered, using the axiom of replacement. With the full axiom of choice, every set is well-orderable, so every set has a cardinal; we order the cardinals using the inherited ordering from the ordinal numbers. This is readily found to coincide with the ordering via ≤c. This is a well-ordering of cardinal numbers.
Initial ordinal of a cardinal
Each ordinal has an associated cardinal, its cardinality, obtained by simply forgetting the order. Any well-ordered set having that ordinal as its order type has the same cardinality. The smallest ordinal having a given cardinal as its cardinality is called the initial ordinal of that cardinal. Every finite ordinal (natural number) is initial, but most infinite ordinals are not initial. The axiom of choice is equivalent to the statement that every set can be well-ordered, i.e. that every cardinal has an initial ordinal. In this case, it is traditional to identify the cardinal number with its initial ordinal, and we say that the initial ordinal is a cardinal.
The α-th infinite initial ordinal is written ωα. Its cardinality is written ℵα (the α-th aleph number). For example, the cardinality of ω0 = ω is ℵ0, which is also the cardinality of ω2, ωω, and ε0 (all are countable ordinals). So we identify ωα with ℵα, except that the notation ℵα is used for writing cardinals, and ωα for writing ordinals. This is important because arithmetic on cardinals is different from arithmetic on ordinals, for example ℵα2 = ℵα whereas ωα2 > ωα. Also, ω1 is the smallest uncountable ordinal.
https://en.wikipedia.org/wiki/Kahan%20summation%20algorithm | In numerical analysis, the Kahan summation algorithm, also known as compensated summation, significantly reduces the numerical error in the total obtained by adding a sequence of finite-precision floating-point numbers, compared to the obvious approach. This is done by keeping a separate running compensation (a variable to accumulate small errors), in effect extending the precision of the sum by the precision of the compensation variable.
In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk). With compensated summation, using a compensation variable with sufficiently high precision the worst-case error bound is effectively independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision of the result.
The algorithm is attributed to William Kahan; Ivo Babuška seems to have come up with a similar algorithm independently (hence Kahan–Babuška summation). Similar, earlier techniques are, for example, Bresenham's line algorithm, keeping track of the accumulated error in integer operations (although first documented around the same time) and the delta-sigma modulation.
The algorithm
In pseudocode, the algorithm will be:
function KahanSum(input)
var sum = 0.0 // Prepare the accumulator.
var c = 0.0 // A running compensation for lost low-order bits.
for i = 1 to input.length do // The array input has elements indexed input[1] to input[input.length].
var y = input[i] - c // c is zero the first time around.
var t = sum + y // Alas, sum is big, y small, so low-order digits of y are lost.
c = (t - sum) - y // (t - sum) cancels the high-order part of y; subtracting y recovers negative (low part of y)
sum = t                      // Algebraically, c should always be zero. Beware overly-aggressive optimizing compilers!
    next i                   // Next time around, the lost low part will be added to y in a fresh attempt.
    return sum
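The pseudocode above translates directly into runnable Python (the test data are illustrative; math.fsum serves as a correctly rounded reference):

```python
import math

def kahan_sum(xs):
    total = 0.0
    c = 0.0                   # running compensation for lost low-order bits
    for x in xs:
        y = x - c             # apply the correction from the previous step
        t = total + y         # total is big, y small: low-order digits of y are lost...
        c = (t - total) - y   # ...and this recovers (the negative of) what was lost
        total = t             # algebraically, c should always be zero
    return total

data = [0.1] * 10
exact = math.fsum(data)       # correctly rounded reference sum
assert kahan_sum([1.0, 2.0, 3.0]) == 6.0
# compensated summation is at least as accurate as naive left-to-right summation
assert abs(kahan_sum(data) - exact) <= abs(sum(data) - exact)
```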
https://en.wikipedia.org/wiki/Copley%20Medal | The Copley Medal is the most prestigious award of the Royal Society, conferred "for sustained, outstanding achievements in any field of science". It alternates between the physical sciences or mathematics and the biological sciences. Given annually, the medal is the oldest Royal Society medal awarded and the oldest surviving scientific award in the world, having first been given in 1731 to Stephen Gray, for "his new Electrical Experiments: – as an encouragement to him for the readiness he has always shown in obliging the Society with his discoveries and improvements in this part of Natural Knowledge". The medal is made of silver-gilt and awarded with a £25,000 prize.
The Copley Medal is arguably the highest British award for scientific achievement, and has been included among the most distinguished international scientific awards. It is awarded to "senior scientists" irrespective of nationality, and nominations are considered over three nomination cycles. Since 2022, scientific teams or research groups are collectively eligible to receive the medal; that year, the research team which developed the Oxford–AstraZeneca COVID-19 vaccine became the first collective recipient. John Theophilus Desaguliers has won the medal the most often, winning three times, in 1734, 1736 and 1741. In 1976, Dorothy Hodgkin became the first female recipient; Jocelyn Bell Burnell, in 2021, became the second.
History
In 1709, Sir Godfrey Copley, the MP for Thirsk, bequeathed Abraham Hill and Hans Sloane £100 to be held in trust for the Royal Society "for improving natural knowledge to be laid out in experiments or otherwise for the benefit thereof as they shall direct and appoint". After the legacy had been received the following year, the interest of £5 was duly used to provide recurring grants for experimental work to researchers associated with the Royal Society, provided they registered their research within a stipulated period and demonstrated their ex |
https://en.wikipedia.org/wiki/Lie%20group%20decomposition | In mathematics, Lie group decompositions are used to analyse the structure of Lie groups and associated objects, by showing how they are built up out of subgroups. They are essential technical tools in the representation theory of Lie groups and Lie algebras; they can also be used to study the algebraic topology of such groups and associated homogeneous spaces. Since the use of Lie group methods became one of the standard techniques in twentieth century mathematics, many phenomena can now be referred back to decompositions.
The same ideas are often applied to Lie groups, Lie algebras, algebraic groups and p-adic number analogues, making it harder to summarise the facts into a unified theory.
List of decompositions
The Jordan–Chevalley decomposition of an element in an algebraic group as a product of semisimple and unipotent elements
The Bruhat decomposition G = BWB of a semisimple algebraic group into double cosets of a Borel subgroup can be regarded as a generalization of the principle of Gauss–Jordan elimination, which generically writes a matrix as the product of an upper triangular matrix with a lower triangular matrix—but with exceptional cases. It is related to the Schubert cell decomposition of Grassmannians: see Weyl group for more details.
The Cartan decomposition writes a semisimple real Lie algebra as the sum of eigenspaces of a Cartan involution.
The Iwasawa decomposition G = KAN of a semisimple group G as the product of compact, abelian, and nilpotent subgroups generalises the way a square real matrix can be written as a product of an orthogonal matrix and an upper triangular matrix (a consequence of Gram–Schmidt orthogonalization).
The Langlands decomposition P = MAN writes a parabolic subgroup P of a Lie group as the product of semisimple, abelian, and nilpotent subgroups.
The Levi decomposition writes a finite dimensional Lie algebra as a semidirect product of a normal solvable ideal and a semisimple subalgebra.
The LU decomposition of a dense su |
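The Iwasawa decomposition listed above can be sketched for an invertible real matrix via QR factorization: K orthogonal (compact), A positive diagonal (abelian), N unit upper triangular (the nilpotent part). The helper name and test matrix are illustrative, and NumPy is assumed:

```python
import numpy as np

def iwasawa(M):
    # QR factorization: M = Q R with Q orthogonal and R upper triangular.
    Q, R = np.linalg.qr(M)
    # Move the signs of R's diagonal into Q so that R has a positive diagonal.
    S = np.diag(np.sign(np.diag(R)))
    K = Q @ S                      # orthogonal factor (compact)
    R = S @ R
    A = np.diag(np.diag(R))        # positive diagonal factor (abelian)
    N = np.linalg.inv(A) @ R       # unit upper triangular factor (nilpotent part)
    return K, A, N

M = np.array([[2.0, 1.0], [1.0, 3.0]])
K, A, N = iwasawa(M)
assert np.allclose(K @ A @ N, M)               # M = K A N
assert np.allclose(K.T @ K, np.eye(2))         # K is orthogonal
assert np.allclose(np.diag(N), 1.0)            # N has unit diagonal
```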
https://en.wikipedia.org/wiki/Language%20of%20mathematics | The language of mathematics or mathematical language is an extension of the natural language (for example English) that is used in mathematics and in science for expressing results (scientific laws, theorems, proofs, logical deductions, etc.) with concision, precision and unambiguity.
Features
The main features of the mathematical language are the following.
Use of common words with a derived meaning, generally more specific and more precise. For example, "or" means "one, the other or both", while, in common language, "both" is sometimes included and sometimes not. Also, a "line" is straight and has zero width.
Use of common words with a meaning that is completely different from their common meaning. For example, a mathematical ring is not related to any other meaning of "ring". Real numbers and imaginary numbers are two sorts of numbers, neither being more real or more imaginary than the other.
Use of neologisms. For example polynomial, homomorphism.
Use of symbols as words or phrases. For example, and are respectively read as " equals " and
Use of formulas as part of sentences. For example: "E = mc2 represents quantitatively the mass–energy equivalence." A formula that is not included in a sentence is generally meaningless, since the meaning of the symbols may depend on the context: in E = mc2, it is the context that specifies that E is the energy of a physical body, m is its mass, and c is the speed of light.
Use of mathematical jargon that consists of phrases that are used for informal explanations or shorthands. For example, "killing" is often used in place of "replacing with zero", and this led to the use of assassinator and annihilator as technical words.
Understanding mathematical text
The consequence of these features is that a mathematical text is generally not understandable without some prerequisite knowledge. For example, the sentence "a free module is a module that has a basis" is perfectly correct, although it may appear to be merely grammatically correct nonsense, |
https://en.wikipedia.org/wiki/Galactic%20Center | The Galactic Center is the rotational center, the barycenter, of the Milky Way galaxy. Its central massive object is a supermassive black hole of about 4 million solar masses, which is called Sagittarius A*, a compact radio source which is almost exactly at the galactic rotational center. The Galactic Center is approximately away from Earth in the direction of the constellations Sagittarius, Ophiuchus, and Scorpius, where the Milky Way appears brightest, visually close to the Butterfly Cluster (M6) or the star Shaula, south to the Pipe Nebula.
There are around 10 million stars within one parsec of the Galactic Center, dominated by red giants, with a significant population of massive supergiants and Wolf–Rayet stars from star formation in the region around 1 million years ago. The core stars are a small part within the much wider galactic bulge.
Discovery
Because of interstellar dust along the line of sight, the Galactic Center cannot be studied at visible, ultraviolet, or soft (low-energy) X-ray wavelengths. The available information about the Galactic Center comes from observations at gamma ray, hard (high-energy) X-ray, infrared, submillimetre, and radio wavelengths.
Immanuel Kant stated in Universal Natural History and Theory of the Heavens (1755) that a large star was at the center of the Milky Way Galaxy, and that Sirius might be the star. Harlow Shapley stated in 1918 that the halo of globular clusters surrounding the Milky Way seemed to be centered on the star swarms in the constellation of Sagittarius, but the dark molecular clouds in the area blocked the view for optical astronomy. In the early 1940s Walter Baade at Mount Wilson Observatory took advantage of wartime blackout conditions in nearby Los Angeles to conduct a search for the center with the Hooker Telescope. He found that near the star Alnasl (Gamma Sagittarii) there is a one-degree-wide void in the interstellar dust lanes, which provides a relatively clear view of the swarms of stars around |
https://en.wikipedia.org/wiki/Supergalactic%20coordinate%20system | The supergalactic plane is part of a reference frame for the supercluster of galaxies that contains the Milky Way galaxy.
The supergalactic plane, as so-far observed, is more or less perpendicular to the plane of the Milky Way; the angle is 84.5°. As viewed from Earth, the plane traces a great circle across the sky through the following constellations:
Cassiopeia (in the Milky Way galactic plane)
Camelopardalis
Ursa Major
Coma Berenices (near the Milky Way galactic north pole)
Virgo
Centaurus
Circinus (in the galactic plane)
Triangulum Australe
Pavo
Indus
Grus
Sculptor (near the galactic south pole)
Cetus
Pisces
Andromeda
Perseus
History
In the 1950s the astronomer Gérard de Vaucouleurs recognized the existence of a flattened "local supercluster" from the Shapley–Ames Catalog in the environment of the Milky Way. He noticed that when one plots nearby galaxies in 3D, they lie more or less on a plane. A flattened distribution of nebulae had earlier been noted by William Herschel. Vera Rubin had also identified the supergalactic plane in the 1950s, but her data remained unpublished. In 1976 the plane delineated by these galaxies defined the equator of the supergalactic coordinate system de Vaucouleurs developed. In the years thereafter, as more observational data became available, de Vaucouleurs's findings about the existence of the plane proved right.
Based on the supergalactic coordinate system of de Vaucouleurs, surveys in recent years determined the positions of nearby galaxy clusters relative to the supergalactic plane. Amongst others the Virgo cluster, the Norma cluster (including the Great Attractor), the Coma cluster, the Pisces-Perseus supercluster, the Hydra cluster, the Centaurus cluster, the Pisces-Cetus supercluster and the Shapley Concentration were found to be near the supergalactic plane.
Definition
The supergalactic coordinate system is a spherical coordinate system in which the equator is the supergalactic plane.
By convention, supergalactic latitude is |
https://en.wikipedia.org/wiki/User-mode%20Linux | User-mode Linux (UML) is a virtualization system for the Linux operating system based on an architectural port of the Linux kernel to its own system call interface, which enables multiple virtual Linux kernel-based operating systems (known as guests) to run as an application within a normal Linux system (known as the host). A Linux kernel compiled for the um architecture can then boot as a process under another Linux kernel, entirely in user space, without affecting the host environment's configuration or stability.
This method gives the user a way to run many virtual Linux machines on a single piece of hardware, allowing some isolation, typically without changing the configuration or stability of the host environment because each guest is just a regular application running as a process in user space.
Applications
Numerous things become possible through the use of UML. One can run network services from a UML environment and remain totally sequestered from the main Linux system in which the UML environment runs. Administrators can use UML to set up honeypots, which allow one to test the security of one's computers or network. UML can serve to test and debug new software without adversely affecting the host system. UML can also be used for teaching and research, providing a realistic Linux networked environment with a high degree of safety.
In UML environments, host and guest kernel versions don't need to match, so it is entirely possible to test a "bleeding edge" version of Linux in User-mode on a system running a much older kernel. UML also allows kernel debugging to be performed on one machine, where other kernel debugging tools (such as kgdb) require two machines connected with a null modem cable.
Some web hosting providers offer UML-powered virtual servers for lower prices than true dedicated servers. Each customer has root access on what appears to be their own system, while in reality one physical computer is shared between many people.
libguestfs has su |
https://en.wikipedia.org/wiki/Grassmannian | In mathematics, the Grassmannian is a differentiable manifold that parameterizes the set of all -dimensional linear subspaces of an -dimensional vector space over a field .
For example, the Grassmannian Gr(1, V) is the space of lines through the origin in V, so it is the same as the projective space P(V) of one dimension lower than V.
When V is a real or complex vector space, Grassmannians are compact smooth manifolds of dimension k(n − k). In general they have the structure of a nonsingular projective algebraic variety.
The earliest work on a non-trivial Grassmannian is due to Julius Plücker, who studied the set of projective lines in real projective 3-space, which is equivalent to Gr(2, R⁴), parameterizing them by what are now called Plücker coordinates. (See below.) Hermann Grassmann later introduced the concept in general.
Notations for Grassmannians vary between authors, and include Gr(k, V), Gr(k, n), Gr_k(V), and Gr_k(n) to denote the Grassmannian of k-dimensional subspaces of an n-dimensional vector space V.
Motivation
By giving a collection of subspaces of a vector space a topological structure, it is possible to talk about a continuous choice of subspaces or open and closed collections of subspaces. Giving them the further structure of a differential manifold, one can talk about smooth choices of subspace.
A natural example comes from tangent bundles of smooth manifolds embedded in a Euclidean space. Suppose we have a manifold M of dimension r embedded in Rⁿ. At each point x in M, the tangent space to M can be considered as a subspace of the tangent space of Rⁿ, which is also just Rⁿ. The map assigning to x its tangent space defines a map from M to Gr(r, n). (In order to do this, we have to translate the tangent space at each x so that it passes through the origin rather than x, and hence defines an r-dimensional vector subspace. This idea is very similar to the Gauss map for surfaces in a 3-dimensional space.)
This can with some effort be extended to all vector bundles over a manifold , so that every vector bundle generates a continuous m |
https://en.wikipedia.org/wiki/Video%20tape%20recorder | A video tape recorder (VTR) is a tape recorder designed to record and playback video and audio material from magnetic tape. The early VTRs were open-reel devices that record on individual reels of 2-inch-wide (5.08 cm) tape. They were used in television studios, serving as a replacement for motion picture film stock and making recording for television applications cheaper and quicker. Beginning in 1963, videotape machines made instant replay during televised sporting events possible. Improved formats, in which the tape was contained inside a videocassette, were introduced around 1969; the machines which play them are called videocassette recorders.
An agreement by Japanese manufacturers on a common standard recording format, which allowed cassettes recorded on one manufacturer's machine to play on another's, made a consumer market possible; and the first consumer videocassette recorder, which used the U-matic format, was introduced by Sony in 1971.
History
In early 1951, Bing Crosby asked his chief engineer, John T. (Jack) Mullin, whether television could be recorded on tape as audio was. Mullin said that he thought it could be done. Crosby asked Ampex to build such a machine, and also set up a laboratory for Mullin within Bing Crosby Enterprises (BCE) to develop one. In 1951 it was believed that running the tape at a very high speed could provide the bandwidth necessary to record the video signal, since a video signal occupies a much wider bandwidth than an audio signal (6 MHz vs 20 kHz), requiring extremely high tape speeds to record it. However, there was another problem: the magnetic head designs of the day would not permit bandwidths over 1 megahertz to be recorded regardless of the tape speed.
The first efforts at video recording, using recorders similar to audio recorders with fixed heads, were unsuccessful. The first such demonstration of this technique was done by BCE on 11 November 1951. The result was a very poor picture. Another of the early |
https://en.wikipedia.org/wiki/Law%20of%20trichotomy | In mathematics, the law of trichotomy states that every real number is either positive, negative, or zero.
More generally, a binary relation R on a set X is trichotomous if for all x and y in X, exactly one of xRy, yRx and x = y holds. Writing R as <, this is stated in formal logic as:
∀x ∈ X ∀y ∈ X [ (x < y ∧ ¬(y < x) ∧ x ≠ y) ∨ (¬(x < y) ∧ y < x ∧ x ≠ y) ∨ (¬(x < y) ∧ ¬(y < x) ∧ x = y) ].
Properties
A relation is trichotomous if, and only if, it is asymmetric and connected.
If a trichotomous relation is also transitive, then it is a strict total order; this is a special case of a strict weak order.
Examples
On the set X = {a,b,c}, the relation R = { (a,b), (a,c), (b,c) } is transitive and trichotomous, and hence a strict total order.
On the same set, the cyclic relation R = { (a,b), (b,c), (c,a) } is trichotomous, but not transitive; it is even antitransitive.
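The two example relations above can be checked mechanically. A small sketch in Python (the helper names are ours, not standard library functions):

```python
from itertools import product

def is_trichotomous(rel, elems):
    """For all x, y: exactly one of xRy, yRx, x == y holds."""
    return all(
        ((x, y) in rel) + ((y, x) in rel) + (x == y) == 1
        for x, y in product(elems, repeat=2)
    )

def is_transitive(rel, elems):
    """Whenever xRy and yRz, also xRz."""
    return all(
        (x, z) in rel
        for x, y in rel for y2, z in rel if y == y2
    )

X = {"a", "b", "c"}
R1 = {("a", "b"), ("a", "c"), ("b", "c")}   # strict total order
R2 = {("a", "b"), ("b", "c"), ("c", "a")}   # cyclic relation

print(is_trichotomous(R1, X), is_transitive(R1, X))  # True True
print(is_trichotomous(R2, X), is_transitive(R2, X))  # True False
```

As the article states, R1 is trichotomous and transitive (hence a strict total order), while the cyclic R2 is trichotomous but not transitive.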
Trichotomy on numbers
A law of trichotomy on some set X of numbers usually expresses that some tacitly given ordering relation on X is a trichotomous one. An example is the law "For arbitrary real numbers x and y, exactly one of x < y, y < x, or x = y applies"; some authors even fix y to be zero, relying on the real numbers' additive linearly ordered group structure. The latter is a group equipped with a trichotomous order.
In classical logic, this axiom of trichotomy holds for ordinary comparison between real numbers and therefore also for comparisons between integers and between rational numbers. The law does not hold in general in intuitionistic logic.
In Zermelo–Fraenkel set theory and Bernays set theory, the law of trichotomy holds between the cardinal numbers of well-orderable sets even without the axiom of choice. If the axiom of choice holds, then trichotomy holds between arbitrary cardinal numbers (because they are all well-orderable in that case).
See also
Begriffsschrift contains an early formulation of the law of trichotomy
Dichotomy
Law of noncontradiction
Law of excluded middle
Three-way comparison
References
Order theory
Binary relations
3 (number) |
https://en.wikipedia.org/wiki/Stuffing | Stuffing, filling, or dressing is an edible mixture, often composed of herbs and a starch such as bread, used to fill a cavity in the preparation of another food item. Many foods may be stuffed, including poultry, seafood, and vegetables. As a cooking technique stuffing helps retain moisture, while the mixture itself serves to augment and absorb flavors during its preparation.
Poultry stuffing often consists of breadcrumbs, onion, celery, spices, and herbs such as sage, combined with the giblets. Additions in the United Kingdom include dried fruits and nuts (such as apricots and flaked almonds), and chestnuts.
History
It is not known when stuffings were first used. The earliest documentary evidence is the Roman cookbook, Apicius De Re Coquinaria, which contains recipes for stuffed chicken, dormouse, hare, and pig. Most of the stuffings described consist of vegetables, herbs and spices, nuts, and spelt (a cereal), and frequently contain chopped liver, brains, and other organ meat.
Names for stuffing include "farce" (~1390), "stuffing" (1538), "forcemeat" (1688), and, more recently in the United States, "dressing" (1850).
Cavities
In addition to stuffing the body cavity of animals, including birds, fish, and mammals, various cuts of meat may be stuffed after they have been deboned or a pouch has been cut into them. Recipes include stuffed chicken legs, stuffed pork chops, stuffed breast of veal, as well as the traditional holiday stuffed turkey or goose.
Many types of vegetables are also suitable for stuffing, after their seeds or flesh has been removed. Tomatoes, capsicums (sweet or hot peppers), and vegetable marrows such as zucchini may be prepared in this way. Cabbages and similar vegetables can also be stuffed or wrapped around a filling. They are usually blanched first, in order to make their leaves more pliable. Then, the interior may be replaced by stuffing, or small amounts of stuffing may be inserted between the individual leaves.
Purport |
https://en.wikipedia.org/wiki/Limit%20cardinal | In mathematics, limit cardinals are certain cardinal numbers. A cardinal number λ is a weak limit cardinal if λ is neither a successor cardinal nor zero. This means that one cannot "reach" λ from another cardinal by repeated successor operations. These cardinals are sometimes called simply "limit cardinals" when the context is clear.
A cardinal λ is a strong limit cardinal if λ cannot be reached by repeated powerset operations. This means that λ is nonzero and, for all κ < λ, 2^κ < λ. Every strong limit cardinal is also a weak limit cardinal, because κ⁺ ≤ 2^κ for every cardinal κ, where κ⁺ denotes the successor cardinal of κ.
The first infinite cardinal, ℵ₀ (aleph-naught), is a strong limit cardinal, and hence also a weak limit cardinal.
Constructions
One way to construct limit cardinals is via the union operation: ℵ_ω is a weak limit cardinal, defined as the union of all the alephs before it; and in general ℵ_λ for any limit ordinal λ is a weak limit cardinal.
The ℶ (beth) operation can be used to obtain strong limit cardinals. This operation is a map from ordinals to cardinals defined as
ℶ₀ = ℵ₀,
ℶ_{α+1} = 2^{ℶ_α} (the smallest ordinal equinumerous with the powerset),
ℶ_λ = sup{ℶ_α : α < λ} if λ is a limit ordinal.
The cardinal
ℶ_ω
is a strong limit cardinal of cofinality ω. More generally, given any ordinal α, the cardinal
ℶ_{α+ω}
is a strong limit cardinal. Thus there are arbitrarily large strong limit cardinals.
Relationship with ordinal subscripts
If the axiom of choice holds, every cardinal number has an initial ordinal. If that initial ordinal is ω_λ, then the cardinal number is ℵ_λ for the same ordinal subscript λ. The ordinal λ determines whether ℵ_λ is a weak limit cardinal, because if λ is a successor ordinal, then ℵ_λ is not a weak limit. Conversely, if a cardinal κ is a successor cardinal, say κ = (ℵ_λ)⁺ = ℵ_{λ+1}, then its subscript is a successor ordinal. Thus, in general, ℵ_λ is a weak limit cardinal if and only if λ is zero or a limit ordinal.
Although the ordinal subscript tells us whether a cardinal is a weak limit, it does not tell us whether a cardinal is a strong li |
https://en.wikipedia.org/wiki/Regular%20cardinal | In set theory, a regular cardinal is a cardinal number that is equal to its own cofinality. More explicitly, this means that κ is a regular cardinal if and only if every unbounded subset C ⊆ κ has cardinality κ. Infinite well-ordered cardinals that are not regular are called singular cardinals. Finite cardinal numbers are typically not called regular or singular.
In the presence of the axiom of choice, any cardinal number can be well-ordered, and then the following are equivalent for a cardinal κ:
κ is a regular cardinal.
If κ = Σ_{i ∈ I} λ_i and λ_i < κ for all i ∈ I, then |I| ≥ κ.
If S = ⋃_{i ∈ I} S_i, and if |I| < κ and |S_i| < κ for all i ∈ I, then |S| < κ.
The category of sets of cardinality less than κ and all functions between them is closed under colimits of cardinality less than κ.
κ is a regular ordinal (see below)
Crudely speaking, this means that a regular cardinal is one that cannot be broken down into a small number of smaller parts.
The situation is slightly more complicated in contexts where the axiom of choice might fail, as in that case not all cardinals are necessarily the cardinalities of well-ordered sets. In that case, the above equivalence holds for well-orderable cardinals only.
An infinite ordinal α is a regular ordinal if it is a limit ordinal that is not the limit of a set of smaller ordinals that as a set has order type less than α. A regular ordinal is always an initial ordinal, though some initial ordinals are not regular, e.g., ω_ω (see the example below).
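For instance, ℵ_ω, the smallest cardinal greater than every ℵ_n, is singular; written out:

```latex
\aleph_\omega \;=\; \sup\{\aleph_n : n < \omega\},
\qquad
\operatorname{cf}(\aleph_\omega) \;=\; \omega \;<\; \aleph_\omega ,
```

so a set of only countably many smaller cardinals is already cofinal in ℵ_ω.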
Examples
The ordinals less than ω are finite. A finite sequence of finite ordinals always has a finite maximum, so ω cannot be the limit of any sequence of type less than ω whose elements are ordinals less than ω, and is therefore a regular ordinal. ℵ₀ (aleph-null) is a regular cardinal because its initial ordinal, ω, is regular. It can also be seen directly to be regular, as the cardinal sum of a finite number of finite cardinal numbers is itself finite.
ω + 1 is the next ordinal number greater than ω. It is singular, since it is not a limit ordinal. ω + ω is the next limit ordina |
https://en.wikipedia.org/wiki/Piano%20roll | A piano roll is a music storage medium used to operate a player piano, piano player or reproducing piano. Piano rolls, like other music rolls, are continuous rolls of paper with holes punched into them. These perforations represent note control data. The roll moves over a reading system known as a tracker bar; the playing cycle for each musical note is triggered when a perforation crosses the bar.
Piano rolls have been in continuous production since at least 1896, and are still being manufactured today; QRS Music offers 45,000 titles with "new titles being added on a regular basis", although they are no longer mass-produced. MIDI files have generally supplanted piano rolls in storing and playing back performance data, accomplishing digitally and electronically what piano rolls do mechanically. MIDI editing software often features the ability to represent the music graphically as a piano roll.
The first paper rolls were used commercially by Welte & Sons in their orchestrions beginning in 1883.
A rollography is a listing of piano rolls, especially made by a single performer, analogous to a discography.
The Musical Museum in Brentford, London, England houses one of the world's largest collections of piano rolls, with over 20,000 rolls as well as an extensive collection of instruments which may be seen and heard.
Buffalo Convention
In the early years of player pianos, piano rolls were produced in varying dimensions and formats. Most rolls used one of three musical scales. The 65-note format, with a playing range of A1 to C♯7, was introduced in 1896 in the United States, specifically for piano music. In 1900, an American format playing all 88 notes of the standard piano scale (A0 to C8) was introduced. In 1902, a German 72-note scale (F1, G1 to E7) was introduced.
On December 10, 1908, a group representing most of the largest U.S. manufacturers of player pianos gathered in Buffalo, New York, to try to agree on some standards. The group settled on a width of and p |
https://en.wikipedia.org/wiki/Inverse%20trigonometric%20functions | In mathematics, the inverse trigonometric functions (occasionally also called arcus functions, antitrigonometric functions or cyclometric functions) are the inverse functions of the trigonometric functions (with suitably restricted domains). Specifically, they are the inverses of the sine, cosine, tangent, cotangent, secant, and cosecant functions, and are used to obtain an angle from any of the angle's trigonometric ratios. Inverse trigonometric functions are widely used in engineering, navigation, physics, and geometry.
Notation
Several notations for the inverse trigonometric functions exist. The most common convention is to name inverse trigonometric functions using an arc- prefix: arcsin(x), arccos(x), arctan(x), etc. (This convention is used throughout this article.) This notation arises from the following geometric relationships:
when measuring in radians, an angle of θ radians will correspond to an arc whose length is rθ, where r is the radius of the circle. Thus in the unit circle, "the arc whose cosine is x" is the same as "the angle whose cosine is x", because the length of the arc of the circle in radii is the same as the measurement of the angle in radians. In computer programming languages, the inverse trigonometric functions are often called by the abbreviated forms asin, acos, atan.
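As a quick illustration of those library forms, a sketch using Python's standard math module:

```python
import math

theta = math.asin(0.5)                 # the angle whose sine is 0.5
print(round(math.degrees(theta), 6))   # 30.0

# Each inverse is restricted to a principal range:
# asin into [-pi/2, pi/2], acos into [0, pi], atan into (-pi/2, pi/2).
assert -math.pi / 2 <= math.asin(-0.9) <= math.pi / 2
assert 0 <= math.acos(-0.9) <= math.pi

# The inverse only recovers angles inside that restricted range:
print(math.isclose(math.sin(math.asin(0.5)), 0.5))  # True
print(math.isclose(math.asin(math.sin(2.0)), 2.0))  # False: 2.0 rad lies outside [-pi/2, pi/2]
```

The last line shows why the domain restriction matters: sin(asin(x)) = x on [−1, 1], but asin(sin(t)) folds t back into the principal range.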
The notations sin⁻¹(x), cos⁻¹(x), tan⁻¹(x), etc., as introduced by John Herschel in 1813, are often used as well in English-language sources. They are consistent with the notation of an inverse function, which is useful (for example) to define the multivalued version of each inverse trigonometric function. However, this might appear to conflict logically with the common semantics for expressions such as sin²(x) (although only sin²x, without parentheses, is the really common use), which refer to numeric power rather than function composition, and therefore may result in confusion between notation for the reciprocal (multiplicative inverse) and the inverse function.
The confusion is somewhat mitigated by th |
https://en.wikipedia.org/wiki/Triangular%20matrix | In mathematics, a triangular matrix is a special kind of square matrix. A square matrix is called lower triangular if all the entries above the main diagonal are zero. Similarly, a square matrix is called upper triangular if all the entries below the main diagonal are zero.
Because matrix equations with triangular matrices are easier to solve, they are very important in numerical analysis. By the LU decomposition algorithm, an invertible matrix may be written as the product of a lower triangular matrix L and an upper triangular matrix U if and only if all its leading principal minors are non-zero.
Description
A matrix of the form
is called a lower triangular matrix or left triangular matrix, and analogously a matrix of the form
is called an upper triangular matrix or right triangular matrix. A lower or left triangular matrix is commonly denoted with the variable L, and an upper or right triangular matrix is commonly denoted with the variable U or R.
A matrix that is both upper and lower triangular is diagonal. Matrices that are similar to triangular matrices are called triangularisable.
A non-square (or sometimes any) matrix with zeros above (below) the diagonal is called a lower (upper) trapezoidal matrix. The non-zero entries form the shape of a trapezoid.
Examples
This matrix
is lower triangular, and
is upper triangular.
Forward and back substitution
A matrix equation in the form Lx = b or Ux = b is very easy to solve by an iterative process called forward substitution for lower triangular matrices and analogously back substitution for upper triangular matrices. The process is so called because for lower triangular matrices, one first computes x₁, then substitutes that forward into the next equation to solve for x₂, and repeats through to x_n. In an upper triangular matrix, one works backwards, first computing x_n, then substituting that back into the previous equation to solve for x_{n−1}, and repeating through x₁.
Notice that this does not require inverting the matrix.
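A minimal sketch of forward substitution in plain Python (the function name is ours; production code would use a library routine):

```python
def forward_substitution(L, b):
    """Solve L x = b where L is lower triangular with nonzero diagonal."""
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        # everything left of the diagonal multiplies already-computed x[j]
        s = sum(L[i][j] * x[j] for j in range(i))
        x[i] = (b[i] - s) / L[i][i]
    return x

L = [[2.0, 0.0, 0.0],
     [1.0, 3.0, 0.0],
     [4.0, 1.0, 5.0]]
b = [2.0, 4.0, 10.0]
print(forward_substitution(L, b))  # [1.0, 1.0, 1.0]
```

Back substitution is the mirror image: loop i from n − 1 down to 0 and sum over j > i. Neither direction ever forms the inverse of the matrix.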
Forward substitution
The matrix equati |
https://en.wikipedia.org/wiki/Mobile%20processor | A mobile processor is a microprocessor designed for mobile devices such as laptops and cell phones.
A CPU chip designed for portable computers typically runs at under 10 to 15 W, cool enough to operate without a fan. It is typically housed in a smaller chip package, but more importantly, in order to run cooler, it uses lower voltages than its desktop counterpart and has more sleep-mode capability. A mobile processor can be throttled down to different power levels, or sections of the chip can be turned off entirely when not in use. Further, the clock frequency may be stepped down under low processor loads. This stepping down conserves power and prolongs battery life.
Today's CPUs are usually more than just a single unit. They are split into "cores", each acting like an individual CPU. They also use "threading", which allows each core to work on multiple tasks, further improving performance.
In laptops
One of the main characteristics differentiating laptop processors from other CPUs is low power consumption; however, this comes with tradeoffs, as they tend not to perform as well as their desktop counterparts.
The notebook processor has become an important market segment in the semiconductor industry. Notebook computers are a popular format of the broader category of mobile computers. The objective of a notebook computer is to provide the performance and functionality of a desktop computer in a portable size and weight.
Cell phones and PDAs use "system on a chip" integrated circuits that use less power than most notebook processors.
While it is possible to use desktop processors in laptops, this practice is generally not recommended, as desktop processors generate more heat than notebook processors and drain batteries faster.
Examples
Current
ARM architecture (used in Chromebooks, Windows 10 laptops, Linux netbooks and recent Macs)
Apple M series
MediaTek
Nvidia: Tegra
Qualcomm: Snapdragon
Rockchip
Samsung Electronics: Exynos
x86
AMD: Ryzen, Athlon, and A-Series |
https://en.wikipedia.org/wiki/Emanuel%20Sperner | Emanuel Sperner (9 December 1905 – 31 January 1980) was a German mathematician, best known for two theorems. He was born in Waltdorf (near Neiße, Upper Silesia, now Nysa, Poland), and died in Sulzburg-Laufen, West Germany. He was a student at Carolinum in Nysa and then Hamburg University where his advisor was Wilhelm Blaschke. He was appointed Professor in Königsberg in 1934, and subsequently held posts in a number of universities until 1974.
Sperner's theorem, from 1928, says that the size of an antichain in the power set of an n-set (a Sperner family) is at most the middle binomial coefficient(s). It has several proofs and numerous generalizations, including the Sperner property of a partially ordered set.
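For very small n, Sperner's bound can be checked exhaustively. A brute-force sketch in Python (it enumerates all 2^(2^n) families of subsets, so only tiny n are feasible):

```python
from itertools import combinations
from math import comb

def max_antichain_size(n):
    """Largest antichain in the power set of {0, ..., n-1}, by brute force."""
    subsets = [frozenset(c) for r in range(n + 1)
               for c in combinations(range(n), r)]
    best = 0
    for mask in range(1 << len(subsets)):        # every family of subsets
        family = [s for i, s in enumerate(subsets) if mask >> i & 1]
        # an antichain has no pair related by strict inclusion
        if len(family) > best and all(
                not (a < b or b < a) for a, b in combinations(family, 2)):
            best = len(family)
    return best

# Sperner's theorem: the maximum equals the middle binomial coefficient
for n in range(1, 4):
    assert max_antichain_size(n) == comb(n, n // 2)
print("verified for n = 1, 2, 3")
```

For n = 3 the maximum is C(3, 1) = 3, achieved for example by the three two-element subsets.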
Sperner's lemma, from 1928, states that every Sperner coloring of a triangulation of an n-dimensional simplex contains a cell colored with a complete set of colors. It was proven by Sperner to provide an alternate proof of a theorem of Lebesgue characterizing dimensionality of Euclidean spaces. It was later noticed that this lemma provides a direct proof of the Brouwer fixed-point theorem without explicit use of homology.
Sperner's students included Kurt Leichtweiss and Gerhard Ringel.
References
External links
Sperner's photos – from the Mathematical Research Institute of Oberwolfach
1905 births
1980 deaths
People from Nysa County
People from the Province of Silesia
20th-century German mathematicians
Combinatorialists
Kolegium Carolinum Neisse alumni
University of Freiburg alumni
Academic staff of the University of Freiburg
University of Hamburg alumni
Academic staff of the University of Hamburg
Academic staff of the University of Königsberg
Academic staff of the University of Strasbourg
Academic staff of the University of Bonn |
https://en.wikipedia.org/wiki/Nipkow%20disk | A Nipkow disk (sometimes Anglicized as Nipkov disk; patented in 1884), also known as scanning disk, is a mechanical, rotating, geometrically operating image scanning device, patented by Paul Gottlieb Nipkow in Berlin. This scanning disk was a fundamental component in mechanical television, and thus the first televisions, through the 1920s and 1930s.
Operation
The device is a mechanically spinning disk of any suitable material (metal, plastic, cardboard, etc.), with a series of equally-distanced circular holes of equal diameter drilled in it. The holes may also be square for greater precision. These holes are positioned to form a single-turn spiral starting from an external radial point of the disk and proceeding to the center of the disk. When the disk rotates, the holes trace circular ring patterns, with inner and outer diameter depending on each hole's position on the disk and thickness equal to each hole's diameter. The patterns may or may not partially overlap, depending on the exact construction of the disk. A lens projects an image of the scene in front of it directly onto the disk. Each hole in the spiral takes a "slice" through the image which is picked up as a temporal pattern of light and dark by a sensor. If the sensor is made to control a light behind a second Nipkow disk rotating synchronously at the same speed and in the same direction, the image will be reproduced line-by-line. The size of the reproduced image is again determined by the size of the disc; a larger disc produces a larger image.
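The spiral geometry is easy to state in coordinates. A sketch in Python (the function and its parameters are illustrative, not taken from any standard):

```python
import math

def nipkow_holes(n_holes, r_outer, r_inner):
    """Centers of n_holes holes forming a single-turn spiral.

    Hole k sits at angle 2*pi*k/n_holes, with its radius stepping
    linearly from r_outer in to r_inner; as the disk rotates, each
    hole therefore traces one ring, i.e. scans one line of the image.
    """
    holes = []
    for k in range(n_holes):
        angle = 2 * math.pi * k / n_holes
        radius = r_outer - (r_outer - r_inner) * k / (n_holes - 1)
        holes.append((radius * math.cos(angle), radius * math.sin(angle)))
    return holes

holes = nipkow_holes(30, 100.0, 70.0)   # a 30-line disk, radii in mm
print(len(holes))    # 30
print(holes[0])      # (100.0, 0.0): the outermost hole
```

The difference r_outer − r_inner fixes the height of the scanned image, while each hole's diameter fixes the line thickness, matching the ring description above.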
When spinning the disk while observing an object "through" the disk, preferably through a relatively small circular sector of the disk (the viewport), for example, an angular quarter or eighth of the disk, the object seems "scanned" line by line, first by length or height or even diagonally, depending on the exact sector chosen for observation. By spinning the disk rapidly enough, the object seems complete and capturing of motion becomes possible. This can be |
https://en.wikipedia.org/wiki/Virtual%20function | In object-oriented programming, in languages such as C++ and Object Pascal, a virtual function or virtual method is an inheritable and overridable function or method for which dynamic dispatch is facilitated. This concept is an important part of the (runtime) polymorphism portion of object-oriented programming (OOP). In short, a virtual function defines a target function to be executed, but the target might not be known at compile time.
Most programming languages, such as JavaScript, PHP and Python, treat all methods as virtual by default and do not provide a modifier to change this behavior. However, some languages provide modifiers to prevent methods from being overridden by derived classes (such as the final and private keywords in Java and PHP).
Purpose
The need for a virtual function or abstract method can be understood as follows:
A class is a user-defined data type, used to model an entity identified in the problem domain. The information required of the entity becomes the fields or data members of the class, and the actions required of it become the member functions or methods. In some situations, however, it is not possible to provide a definition for one or more methods of the class, as the following scenario shows.
The identified entity is a two-dimensional figure whose area must be computed. Triangles, rectangles, squares, circles, etc. are all two-dimensional figures, but the formula for computing the area is different for each of them.
In C++, an abstract function is defined as a pure virtual function, and a class is an abstract class if it has at least one pure virtual function. In C++, the two-dimensional figure class can be implemented as follows:

class TwoDFigure {
protected:
int d1;
int d2;
public:
TwoDFigure(int d1, int d2) {
this->d1 = d1;
this->d2 = d2;
}
// Pure Virtual Function
// an abstract function, as it has no definition here
virtual voi |
https://en.wikipedia.org/wiki/Typing%20rule | In type theory, a typing rule is an inference rule that describes how a type system assigns a type to a syntactic construction. These rules may be applied by the type system to determine if a program is well-typed and what type expressions have. A prototypical example of the use of typing rules is in defining type inference in the simply typed lambda calculus, which is the internal language of Cartesian closed categories.
Notation
Typing rules specify the structure of a typing relation that relates syntactic terms to their types. Syntactically, the typing relation is usually denoted by a colon, so for example e : τ denotes that an expression e has type τ. The rules themselves are usually specified using the notation of natural deduction. For example, the following typing rules specify the typing relation for a simple language of booleans:
Each rule states that the conclusion below the line may be derived from the premises above the line. The first two rules have no premises above the line, so they are axioms. The third rule has premises above the line (specifically, three premises), so it is an inference rule.
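Rules of this shape translate directly into a checker. A sketch in Python for a boolean language with true, false and a conditional (the representation and names are ours):

```python
# Terms: "true", "false", or ("if", cond, then_branch, else_branch).

def type_of(term):
    if term in ("true", "false"):
        return "Bool"                  # axioms: true : Bool and false : Bool
    if isinstance(term, tuple) and term[0] == "if":
        _, c, t, e = term
        # inference rule: three premises above the line ...
        if type_of(c) != "Bool":
            raise TypeError("condition must be Bool")
        tt, te = type_of(t), type_of(e)
        if tt != te:
            raise TypeError("branches must have the same type")
        return tt                      # ... one conclusion below it
    raise TypeError(f"unknown term: {term!r}")

print(type_of(("if", "true", "false", ("if", "false", "true", "true"))))  # Bool
```

The two axioms need no recursive calls, while the conditional rule checks its three premises before producing a conclusion, mirroring the natural-deduction presentation.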
In programming languages, the type of a variable depends on where it is bound, which necessitates context-sensitive typing rules. These rules are given by a typing judgment, usually written Γ ⊢ e : τ, which states that an expression e has type τ under a typing context Γ that relates variables to their types. Typing contexts are occasionally supplemented by the types of individual variables; for example, Γ, x : τ₁ ⊢ e : τ₂ can be read as "the context Γ supplemented by the information that the expression x has type τ₁ yields the judgement that expression e has type τ₂". This notation can be used to give typing rules for variable references and lambda abstraction in the simply typed lambda calculus:
Similarly, the following typing rule describes the construct of Standard ML:
Not all systems of typing rules directly specify a type checking algorithm. For example, the typing rule for applying a param |
https://en.wikipedia.org/wiki/Acetobacter | Acetobacter is a genus of acetic acid bacteria. Acetic acid bacteria are characterized by the ability to convert ethanol to acetic acid in the presence of oxygen. Of these, the genus Acetobacter is distinguished by the ability to oxidize lactate and acetate into carbon dioxide and water. Bacteria of the genus Acetobacter have been isolated from industrial vinegar fermentation processes and are frequently used as fermentation starter cultures.
History of research
Acetic fermentation was demonstrated by Louis Pasteur, who discovered the first Acetobacter species, Acetobacter aceti, in 1864.
In 1998, two strains of Acetobacter isolated from red wine and cider vinegar were named Acetobacter oboediens and Acetobacter pomorum.
In 2000, Acetobacter oboediens and Acetobacter intermedius were transferred to Gluconacetobacter on the basis of 16S rRNA sequencing.
In 2002, Acetobacter cerevisiae and Acetobacter malorum were identified by 16S rRNA sequence analysis of Acetobacter strains.
In 2006, a strain of Acetobacter isolated from spoiled red wine was named Acetobacter oeni.
Microbiota
Regarding the genus Acetobacter's involvement with other organisms, it is known for species that are important commensal bacteria in the gut microbiome of Drosophila melanogaster. The species A. pomorum specifically helps support the physiology and development of Drosophila melanogaster through insulin/insulin-like growth factor signaling.
References
Further reading
Rhodospirillales
Oenology
Vinegar
Bacteria genera |
https://en.wikipedia.org/wiki/Trigonometric%20substitution | In mathematics, trigonometric substitution is the replacement of trigonometric functions for other expressions. In calculus, trigonometric substitution is a technique for evaluating integrals. Moreover, one may use the trigonometric identities to simplify certain integrals containing radical expressions. Like other methods of integration by substitution, when evaluating a definite integral, it may be simpler to completely deduce the antiderivative before applying the boundaries of integration.
Case I: Integrands containing a² − x²
Let x = a sin θ and use the identity 1 − sin²θ = cos²θ.
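With x = a sin θ (so dx = a cos θ dθ) and a > 0, the standard Case I computation runs:

```latex
\int \frac{dx}{\sqrt{a^2 - x^2}}
  = \int \frac{a\cos\theta\,d\theta}{\sqrt{a^2(1-\sin^2\theta)}}
  = \int \frac{a\cos\theta\,d\theta}{a\cos\theta}
  = \theta + C
  = \arcsin\frac{x}{a} + C,
```

valid on −π/2 < θ < π/2, where cos θ > 0.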
Examples of Case I
Example 1
In the integral
∫ dx / √(a² − x²),
we may use
x = a sin θ, dx = a cos θ dθ.
Then,
The above step requires that a > 0 and cos θ > 0. We can choose a to be the principal root of a², and impose the restriction −π/2 < θ < π/2 by using the inverse sine function.
For a definite integral, one must figure out how the bounds of integration change. For example, as x goes from 0 to a/2, sin θ goes from 0 to 1/2, so θ goes from 0 to π/6. Then,
Some care is needed when picking the bounds. Because the integration above requires that cos θ > 0, θ can only go from −π/2 to π/2. Neglecting this restriction, one might have picked θ to go from π to 5π/6, which would have resulted in the negative of the actual value.
Alternatively, fully evaluate the indefinite integral before applying the boundary conditions. In that case, evaluating the antiderivative arcsin(x/a) between 0 and a/2 gives π/6,
as before.
Example 2
The integral
∫ √(a² − x²) dx
may be evaluated by letting x = a sin θ, dx = a cos θ dθ, and θ = arcsin(x/a), where a > 0 so that √(a²) = a, and −π/2 ≤ θ ≤ π/2 by the range of arcsine, so that cos θ ≥ 0 and √(a² − x²) = a cos θ.
Then,
For a definite integral, the bounds change once the substitution is performed and are determined using the equation θ = arcsin(x/a), with values in the range −π/2 ≤ θ ≤ π/2. Alternatively, apply the boundary terms directly to the formula for the antiderivative.
For example, the definite integral
may be evaluated by substituting with the bounds determined using
Because and
On the other hand, direct application of the boundary terms to the previously obtained formula for the antiderivative yields
as before.
Case II: Integrands containing a² + x²
Let x = a tan θ and use the id |
https://en.wikipedia.org/wiki/Othala | Othala (ᛟ), also known as ēðel and odal, is a rune that represents the o and œ phonemes in the Elder Futhark and the Anglo-Saxon Futhorc writing systems respectively. Its name is derived from the reconstructed Proto-Germanic *ōþala- "heritage; inheritance, inherited estate". As it does not occur in the Younger Futhark, it disappears from the Scandinavian record around the 8th century; however, its usage continued in England into the 11th century.
As with other symbols used historically in Europe such as the swastika and Celtic cross, othala has been appropriated by far-right groups such as the Nazi party and neo-Nazis. The rune also continues to be used in non-racist contexts, both in Heathenry and in wider popular culture such as the works of J.R.R. Tolkien.
Name and etymology
The sole attested name of the rune is the Old English ēðel, meaning "homeland". Based on this, and on cognates in other Germanic languages, the Proto-Germanic form *ōþala- can be reconstructed, meaning "ancestral land", "the land owned by one's kin", and by extension "property" or "inheritance". It is in turn derived from a root meaning "nobility" and "disposition".
Terms derived from this root are formative elements in some Germanic names, notably Ulrich.
The term "odal" () refers to Scandinavian laws of inheritance which established land rights for families that had owned that parcel of land over a number of generations, restricting its sale to others. Among other aspects, this protected the inheritance rights of daughters against males from outside the immediate family. Some of these laws remain in effect today in Norway as the Odelsrett (allodial right). The tradition of Udal law found in Shetland, Orkney, and the Isle of Man, is from the same origin.
Elder Futhark o-rune
The o-rune is attested early, in inscriptions from the 3rd century, such as the Thorsberg chape (DR7) and the Vimose planer (Vimose-Høvelen, DR 206).
The corresponding Gothic letter is (derived from Greek Ω), which had the name oþal. The othala rune is found in some tr |
https://en.wikipedia.org/wiki/Successor%20cardinal | In set theory, one can define a successor operation on cardinal numbers in a similar way to the successor operation on the ordinal numbers. The cardinal successor coincides with the ordinal successor for finite cardinals, but in the infinite case they diverge because every infinite ordinal and its successor have the same cardinality (a bijection can be set up between the two by simply sending the last element of the successor to 0, 0 to 1, etc., and fixing ω and all the elements above; in the style of Hilbert's Hotel Infinity). Using the von Neumann cardinal assignment and the axiom of choice (AC), this successor operation is easy to define: for a cardinal number κ we have
κ⁺ = |inf{ λ ∈ ON : κ < |λ| }|,
where ON is the class of ordinals. That is, the successor cardinal is the cardinality of the least ordinal into which a set of the given cardinality can be mapped one-to-one, but which cannot be mapped one-to-one back into that set.
That the set above is nonempty follows from Hartogs' theorem, which says that for any well-orderable cardinal, a larger such cardinal is constructible. The minimum actually exists because the ordinals are well-ordered. It is therefore immediate that there is no cardinal number between κ and κ+. A successor cardinal is a cardinal that is κ+ for some cardinal κ. In the infinite case, the successor operation skips over many ordinal numbers; in fact, every infinite cardinal is a limit ordinal. Therefore, the successor operation on cardinals gains a lot of power in the infinite case (relative to the ordinal successor operation), and consequently the cardinal numbers are a very "sparse" subclass of the ordinals. We define the sequence of alephs (via the axiom of replacement) via this operation, through all the ordinal numbers as follows:
ℵ₀ = ω,  ℵ_{α+1} = (ℵ_α)⁺, and for λ an infinite limit ordinal,
ℵ_λ = ⋃_{β<λ} ℵ_β.
If β is a successor ordinal, then is a successor cardinal. Cardinals that are not successor cardinals are called limit cardinals; and by the above definition, if λ is a limit ordinal, then |
https://en.wikipedia.org/wiki/Successor%20ordinal | In set theory, the successor of an ordinal number α is the smallest ordinal number greater than α. An ordinal number that is a successor is called a successor ordinal. The ordinals 1, 2, and 3 are the first three successor ordinals and the ordinals ω+1, ω+2 and ω+3 are the first three infinite successor ordinals.
Properties
Every ordinal other than 0 is either a successor ordinal or a limit ordinal.
In Von Neumann's model
Using von Neumann's ordinal numbers (the standard model of the ordinals used in set theory), the successor S(α) of an ordinal number α is given by the formula
S(α) = α ∪ {α}.
Since the ordering on the ordinal numbers is given by α < β if and only if α ∈ β, it is immediate that there is no ordinal number between α and S(α), and it is also clear that α < S(α).
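As an illustrative sketch (not part of the article), the von Neumann construction can be modelled with Python frozensets, where each ordinal is the set of all smaller ordinals:

```python
def successor(alpha: frozenset) -> frozenset:
    """Von Neumann successor: S(alpha) = alpha ∪ {alpha}."""
    return alpha | frozenset({alpha})

zero = frozenset()          # 0 = {}
one = successor(zero)       # 1 = {0}
two = successor(one)        # 2 = {0, 1}
three = successor(two)      # 3 = {0, 1, 2}

# alpha < S(alpha) because alpha ∈ S(alpha):
assert two in three
# S(alpha) contains exactly one more element than alpha, so no ordinal
# can sit strictly between them:
assert len(three) == len(two) + 1
```

Only finite ordinals can be represented this way, of course, but the defining identity S(α) = α ∪ {α} is the same throughout the transfinite.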
Ordinal addition
The successor operation can be used to define ordinal addition rigorously via transfinite recursion as follows:
α + 0 = α,  α + S(β) = S(α + β), and for a limit ordinal λ,
α + λ = ⋃_{β<λ} (α + β).
In particular, S(α) = α + 1. Multiplication and exponentiation are defined similarly.
Topology
The successor points and zero are the isolated points of the class of ordinal numbers, with respect to the order topology.
See also
Ordinal arithmetic
Limit ordinal
Successor cardinal
References
Ordinal numbers |
https://en.wikipedia.org/wiki/Colorimetry | Colorimetry is "the science and technology used to quantify and describe physically the human color perception".
It is similar to spectrophotometry, but is distinguished by its interest in reducing spectra to the physical correlates of color perception, most often the CIE 1931 XYZ color space tristimulus values and related quantities.
History
The Duboscq colorimeter was invented by Jules Duboscq in 1870.
Instruments
Colorimetric equipment is similar to that used in spectrophotometry. Some related equipment is also mentioned for completeness.
A tristimulus colorimeter measures the tristimulus values of a color.
A spectroradiometer measures the absolute spectral radiance (intensity) or irradiance of a light source.
A spectrophotometer measures the spectral reflectance, transmittance, or relative irradiance of a color sample.
A spectrocolorimeter is a spectrophotometer that can calculate tristimulus values.
A densitometer measures the degree of light passing through or reflected by a subject.
A color temperature meter measures the color temperature of an incident illuminant.
Tristimulus colorimeter
In digital imaging, colorimeters are tristimulus devices used for color calibration. Accurate color profiles ensure consistency throughout the imaging workflow, from acquisition to output.
Spectroradiometer, spectrophotometer, spectrocolorimeter
The absolute spectral power distribution of a light source can be measured with a spectroradiometer, which works by optically collecting the light, then passing it through a monochromator before reading it in narrow bands of wavelength.
Reflected color can be measured using a spectrophotometer (also called spectroreflectometer or reflectometer), which takes measurements in the visible region (and a little beyond) of a given color sample. If the custom of taking readings at 10 nanometer increments is followed, the visible light range of 400–700 nm will yield 31 readings. These readings are typically used to draw the |
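As a sketch of how tristimulus values are obtained from such spectral readings, the Python example below weights a sampled spectrum by color-matching functions and sums over wavelength. The single-Gaussian stand-ins for the CIE 1931 x̄, ȳ, z̄ tables are rough assumptions made for illustration, not the real tabulated data:

```python
import math

# Illustrative Gaussian stand-ins for the CIE 1931 color-matching
# functions (the real functions are tabulated; these shapes and
# parameters are assumptions for the sake of the sketch).
def gauss(lam, mu, sigma):
    return math.exp(-0.5 * ((lam - mu) / sigma) ** 2)

def xbar(lam): return 1.06 * gauss(lam, 599, 38) + 0.36 * gauss(lam, 446, 20)
def ybar(lam): return 1.01 * gauss(lam, 557, 47)
def zbar(lam): return 1.78 * gauss(lam, 449, 25)

def tristimulus(spectrum, lam0=400, step=10):
    """Weight a sampled spectrum (e.g. 31 readings at 10 nm steps over
    400-700 nm) by the colour-matching functions and sum."""
    X = Y = Z = 0.0
    for i, s in enumerate(spectrum):
        lam = lam0 + i * step
        X += s * xbar(lam) * step
        Y += s * ybar(lam) * step
        Z += s * zbar(lam) * step
    return X, Y, Z

# A flat (equal-energy) spectrum of 31 readings:
X, Y, Z = tristimulus([1.0] * 31)
x, y = X / (X + Y + Z), Y / (X + Y + Z)  # chromaticity coordinates
print(round(x, 3), round(y, 3))
```

A real spectrocolorimeter performs exactly this weighted summation, but against the standardized CIE tables and a chosen illuminant.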
https://en.wikipedia.org/wiki/Mereology | In logic, philosophy and related fields, mereology (from Greek μέρος, meros, "part", and the suffix -logy, "study, discussion, science") is the study of parts and the wholes they form. Whereas set theory is founded on the membership relation between a set and its elements, mereology emphasizes the meronomic relation between entities, which—from a set-theoretic perspective—is closer to the concept of inclusion between sets.
Mereology has been explored in various ways as applications of predicate logic to formal ontology, in each of which mereology is an important part. Each of these fields provides its own axiomatic definition of mereology. A common element of such axiomatizations is the assumption, shared with inclusion, that the part-whole relation orders its universe, meaning that everything is a part of itself (reflexivity), that a part of a part of a whole is itself a part of that whole (transitivity), and that two distinct entities cannot each be a part of the other (antisymmetry), thus forming a poset. A variant of this axiomatization denies that anything is ever part of itself (irreflexivity) while accepting transitivity, from which antisymmetry follows automatically.
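The ordering axioms listed above can be checked mechanically on a finite model. The following Python sketch (with illustrative names, not any standard mereology library) verifies reflexivity, antisymmetry, and transitivity for a toy parthood relation:

```python
from itertools import product

def is_partial_order(domain, part_of):
    """Check reflexivity, antisymmetry, and transitivity of a
    parthood relation given as a set of (part, whole) pairs."""
    reflexive = all((x, x) in part_of for x in domain)
    antisymmetric = all(not ((x, y) in part_of and (y, x) in part_of)
                        for x, y in product(domain, repeat=2) if x != y)
    transitive = all((x, z) in part_of
                     for x, y in product(domain, repeat=2) if (x, y) in part_of
                     for z in domain if (y, z) in part_of)
    return reflexive and antisymmetric and transitive

# A toy mereology: a spoke is part of a wheel, a wheel is part of a bike.
domain = {"spoke", "wheel", "bike"}
parthood = {(x, x) for x in domain} | {
    ("spoke", "wheel"), ("wheel", "bike"), ("spoke", "bike")}
print(is_partial_order(domain, parthood))  # True
```

Dropping the pair ("spoke", "bike") would make the relation fail the transitivity check, illustrating why the axiom must be imposed rather than assumed.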
Although mereology is an application of mathematical logic, one that could be argued to be a sort of "proto-geometry", it has been wholly developed by logicians, ontologists, linguists, engineers, and computer scientists, especially those working in artificial intelligence. In particular, mereology is also the basis for a point-free foundation of geometry (see for example the quoted pioneering paper of Alfred Tarski and the review paper by Gerla 1995).
In general systems theory, mereology refers to formal work on system decomposition and parts, wholes and boundaries (by, e.g., Mihajlo D. Mesarovic (1970), Gabriel Kron (1963), or Maurice Jessel (see Bowden (1989, 1998)). A hierarchical version of Gabriel Kron's Network Tearing was published by Keith Bowden (1991), reflecting David Lewis's ideas on gu |
https://en.wikipedia.org/wiki/Computer%20simulation | Computer simulation is the process of mathematical modelling, performed on a computer, which is designed to predict the behaviour of, or the outcome of, a real-world or physical system. The reliability of some mathematical models can be determined by comparing their results to the real-world outcomes they aim to predict. Computer simulations have become a useful tool for the mathematical modeling of many natural systems in physics (computational physics), astrophysics, climatology, chemistry, biology and manufacturing, as well as human systems in economics, psychology, social science, health care and engineering. Simulation of a system is represented as the running of the system's model. It can be used to explore and gain new insights into new technology and to estimate the performance of systems too complex for analytical solutions.
Computer simulations are realized by running computer programs that can be either small, running almost instantly on small devices, or large-scale programs that run for hours or days on network-based groups of computers. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using traditional paper-and-pencil mathematical modeling. In 1997, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program.
Other examples include a 1-billion-atom model of material deformation; a 2.64-million-atom model of the complex protein-producing organelle of all living organisms, the ribosome, in 2005;
a complete simulation of the life cycle of Mycoplasma genitalium in 2012; and the Blue Brain project at EPFL (Switzerland), begun in May 2005 to create the first computer simulation of the entire human brain, right down to the molecular level.
Because of the computational cost of simulation, computer ex |
https://en.wikipedia.org/wiki/Am486 | The Am486 is an 80486-class family of computer processors that was produced by AMD in the 1990s. Intel beat AMD to market by nearly four years, but AMD priced its 40 MHz 486 at or below Intel's price for a 33 MHz chip, offering about 20% better performance for the same price.
While competing 486 chips, such as those from Cyrix, benchmarked lower than the equivalent Intel chip, AMD's 486 matched Intel's performance on a clock-for-clock basis.
While the Am386 was primarily used by small computer manufacturers, the Am486DX, DX2, and SX2 chips gained acceptance among larger computer manufacturers, especially Acer and Compaq, in the 1994 time frame.
AMD's higher clocked 486 chips provided superior performance to many of the early Pentium chips, especially the 60 and 66 MHz launch products. While equivalent Intel 80486DX4 chips were priced high and required a minor socket modification, AMD priced low. Intel's DX4 chips initially had twice the cache of the AMD chips, giving them a slight performance edge, but AMD's DX4-100 usually cost less than Intel's DX2-66.
The enhanced Am486 series supported new features like extended power-saving modes and an 8 KiB Write-Back L1-Cache, later versions even got an upgrade to 16 KiB Write-Back L1-Cache.
The 133 MHz AMD Am5x86 was a higher clocked enhanced Am486.
One derivative of the Am486 family is the core used in the AMD Élan SC4xx family of microcontrollers marketed by AMD.
Features
Am486 models
WT = Write-Through cache strategy, WB = Write-Back cache strategy
References
External links
AMD: Enhanced Am486 Microprocessors
AMD: 30 Years of Pursuing the Leader. Part 2
cpu-collection.de AMD Am486 processor images and descriptions
Am486
X86 microarchitectures |
https://en.wikipedia.org/wiki/Halt%20and%20Catch%20Fire%20%28computing%29 | In computer engineering, Halt and Catch Fire, known by the assembly mnemonic HCF, is an idiom referring to a computer machine code instruction that causes the computer's central processing unit (CPU) to cease meaningful operation, typically requiring a restart of the computer. It originally referred to a fictitious instruction in IBM System/360 computers (introduced in 1964), making a joke about its numerous non-obvious instruction mnemonics.
With the advent of the MC6800 (introduced in 1974), a design flaw was discovered by programmers. Due to incomplete opcode decoding, two illegal opcodes, 0x9D and 0xDD, cause the address bus to increment endlessly, which locks the processor until it is reset. Those codes were unofficially named HCF. During the design process of the MC6802, engineers originally planned to remove this instruction, but kept it as-is for testing purposes. As a result, HCF was officially recognized as a real instruction. Later, HCF became a humorous catch-all term for instructions that may freeze a processor, including intentional instructions for testing purposes and unintentional illegal instructions. Some are considered hardware defects, and if the system is shared, a malicious user can execute one to launch a denial-of-service attack.
In the case of real instructions, the implication of this expression is that, whereas in most cases in which a CPU executes an unintended instruction (a bug in the code) the computer may still be able to recover, in the case of an HCF instruction there is, by definition, no way for the system to recover without a restart.
The expression catch fire is a facetious exaggeration of the speed with which the CPU chip would be switching some bus circuits, causing them to overheat and burn.
Origins
The Z1 (1938) and Z3 (1941) computers built by Konrad Zuse contained illegal sequences of instructions which damaged the hardware if executed by accident.
Apocryphal stories connect this term with a |
https://en.wikipedia.org/wiki/Bus%20contention | Bus contention is an undesirable state in computer design where more than one device on a bus attempts to place values on it at the same time.
Bus contention is the kind of telecommunication contention that occurs when all communicating devices communicate directly with each other through a single shared channel, and is contrasted with "network contention", which occurs when communicating devices communicate indirectly with each other through point-to-point connections via routers or bridges.
Bus contention can lead to erroneous operation, excess power consumption, and, in unusual cases, permanent damage to the hardware—such as burning out a MOSFET.
Description
Most bus architectures require devices sharing a bus to follow an arbitration protocol carefully designed to make the likelihood of contention negligible. However, when devices on the bus have logic errors, manufacturing defects, or are driven beyond their design speeds, arbitration may break down and contention may result. Contention may also arise on systems which have a programmable memory mapping, when illegal values are written to the registers controlling the mapping.
Most small-scale computer systems are carefully designed to avoid bus contention on the system bus. They use a single device, called a bus arbiter, that controls which device is allowed to drive the bus at each instant, so bus contention never happens in normal operation. The standard solution to bus contention between memory devices, such as EEPROM and SRAM, is a three-state bus with a bus arbiter.
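As a toy model of the arbitration just described (an illustration, not any real bus standard), a shared line with tri-state drivers can be sketched in Python; the names and the failure mode are illustrative:

```python
def resolve_bus(drivers):
    """Resolve a shared bus line given a list of (enabled, value) drivers.
    Returns the driven value, 'Z' if no driver is enabled (floating line),
    and raises on contention (enabled drivers asserting different values)."""
    values = {v for enabled, v in drivers if enabled}
    if not values:
        return "Z"                     # bus floats (high impedance)
    if len(values) > 1:
        raise RuntimeError("bus contention: conflicting drivers")
    return values.pop()

# An arbiter grants the bus to exactly one device at a time: no contention.
print(resolve_bus([(True, 1), (False, 0)]))   # 1

# Arbitration failure: two devices drive different values at once.
try:
    resolve_bus([(True, 1), (True, 0)])
except RuntimeError as e:
    print(e)
```

In real hardware the conflicting case is not an exception but a low-impedance path between drivers, which is why it can overheat or destroy output transistors.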
Some networks, such as Token Ring, are also designed to avoid bus contention, so bus contention never happens in normal operation.
Most networks are designed with hardware robust enough to tolerate occasional bus contention on the network. CAN bus, ALOHAnet, Ethernet, etc., all experience occasional bus contention in normal operation, but use some protocol (such as Multiple Access with Collision Avoidance, carrier-sense multiple access |
https://en.wikipedia.org/wiki/ICAO%20airport%20code | The ICAO airport code or location indicator is a four-letter code designating aerodromes around the world. These codes, as defined by the International Civil Aviation Organization and published quarterly in ICAO Document 7910: Location Indicators, are used by air traffic control and airline operations such as flight planning.
ICAO codes are also used to identify other aviation facilities such as weather stations, international flight service stations or area control centers, whether or not they are located at airports. Flight information regions are also identified by a unique ICAO-code.
History
The International Civil Aviation Organization was formed in 1947 under the auspices of the United Nations, and it established flight information regions (FIRs) for controlling air traffic and making airport identification simple and clear.
ICAO codes versus IATA codes
ICAO codes are separate and different from IATA codes, which have three letters and are generally used for airline timetables, reservations, and baggage tags. For example, the IATA code for London's Heathrow Airport is LHR and its ICAO code is EGLL. IATA codes are commonly seen by passengers and the general public on flight-tracking services such as FlightAware.
In general, IATA codes are usually derived from the name of the airport or the city it serves, while ICAO codes are distributed by region and country. Far more aerodromes (in the broad sense) have ICAO codes than IATA codes, which are sometimes assigned to railway stations as well. The selection of ICAO codes is partly delegated to authorities in each country, while IATA codes, which have no geographic structure, must be decided centrally by IATA.
Structure
Typically, the first one or two letters of the ICAO code indicate the country and the remaining letters identify the airport, as indicated by the adjoining figures. ICAO codes provide geographical context. For example, if one knows that the ICAO code for Heathrow is EGLL, then one can deduce th |
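The regional prefix structure can be illustrated with a longest-prefix lookup. The sketch below uses a handful of real prefixes (a tiny sample of the full assignment tables, chosen for illustration):

```python
# A few real ICAO prefixes (one- or two-letter) and the regions they denote.
ICAO_PREFIXES = {
    "K": "Contiguous United States",
    "C": "Canada",
    "Y": "Australia",
    "EG": "United Kingdom",
    "LF": "France",
    "ED": "Germany (civil)",
}

def region_of(icao: str) -> str:
    """Longest-prefix match: try the two-letter prefix, then one letter."""
    for prefix in (icao[:2], icao[:1]):
        if prefix in ICAO_PREFIXES:
            return ICAO_PREFIXES[prefix]
    return "unknown"

print(region_of("EGLL"))  # United Kingdom (Heathrow)
print(region_of("KJFK"))  # Contiguous United States
```

The two-step match mirrors the real scheme: large countries such as the United States use a single letter, while most others are identified by two.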
https://en.wikipedia.org/wiki/Truncated%20dodecahedron | In geometry, the truncated dodecahedron is an Archimedean solid. It has 12 regular decagonal faces, 20 regular triangular faces, 60 vertices and 90 edges.
Geometric relations
This polyhedron can be formed from a regular dodecahedron by truncating (cutting off) the corners so the pentagon faces become decagons and the corners become triangles.
It is used in the cell-transitive hyperbolic space-filling tessellation, the bitruncated icosahedral honeycomb.
Area and volume
The area A and the volume V of a truncated dodecahedron of edge length a are:
A = 5(√3 + 6√(5 + 2√5)) a² ≈ 100.99 a²
V = (5/12)(99 + 47√5) a³ ≈ 85.04 a³
Cartesian coordinates
Cartesian coordinates for the vertices of a truncated dodecahedron with edge length 2φ − 2, centered at the origin, are all even permutations of:
(0, ±1/φ, ±(2 + φ))
(±1/φ, ±φ, ±2φ)
(±φ, ±2, ±(φ + 1))
where φ = (1 + √5)/2 is the golden ratio.
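These coordinates can be checked mechanically. Assuming the standard vertex triples (0, ±1/φ, ±(2+φ)), (±1/φ, ±φ, ±2φ), (±φ, ±2, ±(φ+1)), the Python sketch below (with the even permutations realized as cyclic shifts) generates all sign combinations and confirms 60 distinct vertices whose nearest-neighbour distance is the stated edge length 2φ − 2:

```python
import math
from itertools import product

phi = (1 + math.sqrt(5)) / 2  # golden ratio

# The three base triples; all even (cyclic) permutations with all sign
# combinations give the 60 vertices for edge length 2*phi - 2.
bases = [(0, 1 / phi, 2 + phi), (1 / phi, phi, 2 * phi), (phi, 2, phi + 1)]

def even_perms(t):
    a, b, c = t
    return [(a, b, c), (b, c, a), (c, a, b)]

vertices = set()
for base in bases:
    for perm in even_perms(base):
        for signs in product((1, -1), repeat=3):
            vertices.add(tuple(s * x for s, x in zip(signs, perm)))

print(len(vertices))  # 60

# The shortest inter-vertex distance equals the edge length 2*phi - 2.
vs = list(vertices)
edge = min(math.dist(p, q) for i, p in enumerate(vs) for q in vs[i + 1:])
print(round(edge, 6), round(2 * phi - 2, 6))
```

Signed copies of a zero coordinate collapse in the set, which is why the first triple contributes 12 vertices and the other two contribute 24 each.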
Orthogonal projections
The truncated dodecahedron has five special orthogonal projections, centered: on a vertex, on two types of edges, and two types of faces. The last two correspond to the A2 and H2 Coxeter planes.
Spherical tilings and Schlegel diagrams
The truncated dodecahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane.
Schlegel diagrams are similar, with a perspective projection and straight edges.
Vertex arrangement
It shares its vertex arrangement with three nonconvex uniform polyhedra:
Related polyhedra and tilings
It is part of a truncation process between a dodecahedron and icosahedron:
This polyhedron is topologically related as part of a sequence of uniform truncated polyhedra with vertex configurations (3.2n.2n) and [n,3] Coxeter group symmetry.
Truncated dodecahedral graph
In the mathematical field of graph theory, a truncated dodecahedral graph is the graph of vertices and edges of the truncated dodecahedron, one of the Archimedean solids. |
https://en.wikipedia.org/wiki/Pixel%20geometry | The components of the pixels (primary colors red, green and blue) in an image sensor or display can be ordered in different patterns, called pixel geometry.
The geometric arrangement of the primary colors within a pixel varies depending on usage (see figure 1). In monitors, such as LCDs or CRTs, that typically display edges or rectangles, the components are arranged in vertical stripes. Displays with motion pictures should instead have triangular or diagonal patterns so that the image variation is perceived better by the viewer.
Knowledge of the pixel geometry used by a display may be used to create raster images of higher apparent resolution using subpixel rendering.
See also
PenTile matrix family
Quattron
Bayer filter
Subpixel rendering
Pixel
References
Digital imaging |
https://en.wikipedia.org/wiki/Royal%20Signals%20and%20Radar%20Establishment | The Royal Signals and Radar Establishment (RSRE) was a scientific research establishment within the Ministry of Defence (MoD) of the United Kingdom. It was located primarily at Malvern in Worcestershire, England. The RSRE motto was Ubique Sentio (Latin for "I sense everywhere").
History
RSRE was formed in 1976 by an amalgamation of previous research organizations; these included the Royal Radar Establishment (RRE), itself derived from the World War II-era Telecommunications Research Establishment, the Signals Research and Development Establishment (SRDE) in Christchurch, Dorset, and the Services Electronic Research Laboratory (SERL) at Baldock.
Beginning in 1979, the SRDE and SERL moved to Malvern to join the RRE's location. There were several out-stations in Worcestershire, including the ex-RAF airfields at Defford and Pershore and the satellite tracking station at Sheriffs Lench.
In April 1991 RSRE amalgamated with other defence research establishments to form the Defence Research Agency, which in April 1995 amalgamated with more organisations to form the Defence Evaluation and Research Agency.
In June 2001 this became independent of the MoD, with approximately two-thirds of it being incorporated into QinetiQ, a commercial company owned by the MoD, and the remainder into the fully government-owned laboratory DSTL. In 2003 the Carlyle Group bought a private equity stake (~30%) in QinetiQ.
Research
Some of the most important technologies developed from work at RSRE are radar, satellite communications, thermography, liquid crystal displays, speech synthesis and the touchscreen.
Predecessor organisation Signals Research and Development Establishment (SRDE) had been involved in the development of military communications satellites, within the U.S. Interim Defense Communication Satellite Program (IDCSP) and the development of the British Skynet 1 and 2 satellite types. The SRDE establishment moved to a RSRE facility at RAF Defford near Malvern in 1980, which ha |
https://en.wikipedia.org/wiki/Flex%20machine | The Flex Computer System was developed by Michael Foster and Ian Currie of Royal Signals and Radar Establishment (RSRE) in Malvern, England, during the late 1970s and 1980s. It used a tagged storage scheme to implement a capability architecture, and was designed for the safe and efficient implementation of strongly typed procedures.
The hardware was custom and microprogrammable, with an operating system, (modular) compiler, editor, garbage collector and filing system all written in ALGOL 68RS.
There were (at least) two incarnations of Flex, implemented using hardware with writable microcode. The first was supplied by Logica to an RSRE design, and the second used an ICL PERQ. The microcode alone was responsible for storage allocation, deallocation and garbage collection. This immediately precluded a whole class of errors arising from the misuse (deliberate or accidental) of pointers.
A notable feature of Flex was the tagged, write-once filestore. This allowed arbitrary code and data structures to be written and retrieved transparently, without recourse to external encodings. Data could thus be passed safely from program to program.
In a similar way, remote capabilities allowed data and procedures on other machines to be accessed over a network connection, again without the application program being involved in external encodings of data, parameters or result values.
The whole scheme allowed abstract data types to be safely implemented, as data items and the procedures permitted to access them could be bound together, and the resulting capability passed freely around. The capability would grant access to the procedures, but could not be used in any way to obtain access to the data.
Another notable feature of Flex was the notion of shaky pointers, more recently often called weak references, which point to blocks of memory that could be freed at the next garbage collection. This was used, for example, for cached disc blocks or a list of spare procedure work-space |
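The same idea survives in today's managed languages. A sketch using Python's weakref module (an analogy, not Flex itself) shows a "shaky" reference that stops resolving once its target becomes garbage; in CPython the object is reclaimed as soon as the last strong reference is dropped:

```python
import weakref

class Block:
    """Stand-in for a cached disc block."""
    def __init__(self, data):
        self.data = data

block = Block(b"\x00" * 512)
cache_entry = weakref.ref(block)   # a "shaky pointer" to the block

assert cache_entry() is block      # target still alive: dereference works
del block                          # drop the last strong reference
# After collection the weak reference no longer resolves:
print(cache_entry())               # None (in CPython, immediately)
```

A cache built from such references never keeps its entries alive on its own, which is exactly the property Flex exploited for spare work-space.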
https://en.wikipedia.org/wiki/Balanced%20ternary | Balanced ternary is a ternary numeral system (i.e. base 3 with three digits) that uses a balanced signed-digit representation of the integers in which the digits have the values −1, 0, and 1. This stands in contrast to the standard (unbalanced) ternary system, in which digits have values 0, 1 and 2.
The balanced ternary system can represent all integers without using a separate minus sign; the value of the leading non-zero digit of a number has the sign of the number itself. The balanced ternary system is an example of a non-standard positional numeral system. It was used in some early computers and also in some solutions of balance puzzles.
Different sources use different glyphs to represent the three digits in balanced ternary. In this article, T (which resembles a ligature of the minus sign and 1) represents −1, while 0 and 1 represent themselves. Other conventions include using '−' and '+' to represent −1 and 1 respectively, or using the Greek letter theta (Θ), which resembles a minus sign in a circle, to represent −1. In publications about the Setun computer, −1 is represented as an overturned 1: "1".
Balanced ternary makes an early appearance in Michael Stifel's book Arithmetica Integra (1544). It also occurs in the works of Johannes Kepler and Léon Lalanne. Related signed-digit schemes in other bases have been discussed by John Colson, John Leslie, Augustin-Louis Cauchy, and possibly even the ancient Indian Vedas.
Definition
Let D = {T, 0, 1} denote the set of symbols (also called glyphs or characters), where the symbol T is sometimes used in place of −1.
Define an integer-valued function f by
f(T) = −1,  f(0) = 0,
and
f(1) = 1,
where the right hand sides are integers with their usual values. This function, f, is what rigorously and formally establishes how integer values are assigned to the symbols/glyphs in this set. One benefit of this formalism is that the definition of "the integers" (however they may be defined) is not conflated with any particular system for writing/representing them; in this way, thes |
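The digit values just defined extend positionally in the usual way, with each place worth a power of 3. A short Python sketch (not part of the article) of encoding and decoding balanced ternary with the T, 0, 1 glyphs used here; the encoder relies on the fact that a remainder of 2 can be rewritten as 3 − 1:

```python
DIGITS = {"T": -1, "0": 0, "1": 1}

def to_balanced_ternary(n: int) -> str:
    """Encode an integer in balanced ternary, using T for -1."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3              # remainder in {0, 1, 2}
        if r == 2:             # 2 = 3 - 1: emit -1 and carry one upward
            digits.append("T")
            n = (n + 1) // 3
        else:
            digits.append(str(r))
            n //= 3
    return "".join(reversed(digits))

def from_balanced_ternary(s: str) -> int:
    """Decode positionally: each digit's value times its power of 3."""
    value = 0
    for ch in s:
        value = 3 * value + DIGITS[ch]
    return value

print(to_balanced_ternary(5))        # 1TT  (9 - 3 - 1)
print(from_balanced_ternary("1TT"))  # 5
print(to_balanced_ternary(-5))       # T11  (-9 + 3 + 1)
```

Note that negating a number simply swaps every 1 and T, which is why no separate minus sign is needed.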
https://en.wikipedia.org/wiki/DESQview | DESQview (DV) is a text mode multitasking operating environment developed by Quarterdeck Office Systems which enjoyed modest popularity in the late 1980s and early 1990s. Running on top of DOS, it allows users to run multiple programs concurrently in multiple windows.
Desq
Quarterdeck's predecessor to DESQview was a task switching product called Desq (shipped late April or May 1984), which allows users to switch between running programs. Quarterdeck revamped its package, bringing multitasking in, and adding TopView compatibility.
DESQview was released in July 1985, four months before Microsoft released the first version of Windows. It was widely thought to be the first program to bring multitasking and windowing capabilities to DOS; in fact, there was a predecessor, IBM TopView, which shipped in March 1985.
Under DESQview, well-behaved DOS programs can be run concurrently in resizable, overlapping windows (something the first version of MS Windows cannot do). A simple hideable menu allows cutting and pasting between programs. DESQview provides support for simple editable macros as well. Quarterdeck also developed a set of optional utilities for DESQview, including a notepad and dialer. Later versions allow graphics mode programs to be loaded as well, but only run in full screen mode.
DESQview is not a GUI (Graphical User Interface) operating system. Rather, it is a non-graphical, windowed shell that runs in real mode on top of DOS, although it can run on any Intel 8086- or Intel 80286-based PC. It can also use expanded memory add-ons to work around the 640 KB RAM limit of conventional memory on early PCs. DESQview really came into its own on Intel 80386 machines, which are better at utilizing memory above DOS's limit. However, in either case, it runs in real mode rather than protected mode, meaning that a misbehaving program can still crash the system.
DESQview and QEMM
To make maximum use of extended memory on Intel 80386 processors, by transforming it into e |
https://en.wikipedia.org/wiki/Flex%20%28lexical%20analyser%20generator%29 | Flex (fast lexical analyzer generator) is a free and open-source software alternative to lex.
It is a computer program that generates lexical analyzers (also known as "scanners" or "lexers").
It is frequently used as the lex implementation together with Berkeley Yacc parser generator on BSD-derived operating systems (as both lex and yacc are part of POSIX), or together with GNU bison (a version of yacc) in *BSD ports and in Linux distributions. Unlike Bison, flex is not part of the GNU Project and is not released under the GNU General Public License, although a manual for Flex was produced and published by the Free Software Foundation.
History
Flex was written in C around 1987 by Vern Paxson, with the help of many ideas and much inspiration from Van Jacobson. The original version was by Jef Poskanzer. The fast table representation is a partial implementation of a design by Van Jacobson; the implementation was done by Kevin Gong and Vern Paxson.
Example lexical analyzer
This is an example of a Flex scanner for the instructional programming language PL/0.
The tokens recognized are: '+', '-', '*', '/', '=', '(', ')', ',', ';', '.', ':=', '<', '<=', '<>', '>', '>=';
numbers: 0-9 {0-9}; identifiers: a-zA-Z {a-zA-Z0-9} and keywords: begin, call, const, do, end, if, odd, procedure, then, var, while.
%{
#include "y.tab.h"
%}
digit [0-9]
letter [a-zA-Z]
%%
"+" { return PLUS; }
"-" { return MINUS; }
"*" { return TIMES; }
"/" { return SLASH; }
"(" { return LPAREN; }
")" { return RPAREN; }
";" { return SEMICOLON; }
"," { return COMMA; }
"." { return PERIOD; }
":=" { return BECOMES; }
"=" { return EQL; }
"<>" { return NEQ; }
"<" { return LSS; }
">" { return GTR; |
https://en.wikipedia.org/wiki/Shim%20%28computing%29 | In computer programming, a shim is a library that transparently intercepts API calls and changes the arguments passed, handles the operation itself or redirects the operation elsewhere. Shims can be used to support an old API in a newer environment, or a new API in an older environment. Shims can also be used for running programs on different software platforms than they were developed for.
Shims for older APIs typically come about when the behavior of an API changes, thereby causing compatibility issues for older applications which still rely on the older functionality; in such cases, the older API can still be supported by a thin compatibility layer on top of the newer code. Shims for newer APIs are defined as: "a library that brings a new API to an older environment, using only the means of that environment."
Examples
Web polyfills implement newer web standards using older standards and JavaScript, if the newer standard is not available in a given web browser.
Support of AppleTalk on Macintosh computers, during the brief period in which Apple Computer supported the Open Transport networking system. Thousands of Mac programs were based on the AppleTalk protocol; to support these programs, AppleTalk was re-implemented as an OpenTransport "stack", and then re-implemented as an API shim on top of this new library.
The Microsoft Windows Application Compatibility Toolkit (ACT) uses the term to mean backward compatible libraries. Shims simulate the behavior of older versions of Windows for legacy applications that rely on incorrect or deprecated functionality, or correct the way in which poorly written applications call unchanged APIs, for example to fix least-privileged user account (LUA) bugs.
bind.so is a shim library for Linux that allows any application, regardless of permissions, to bind to a listening socket or specify outgoing IP address. It uses the LD_PRELOAD mechanism, which allows shims and other libraries to be loaded into any program.
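The intercept-and-adapt idea can be sketched in a few lines of Python (hypothetical names, not any of the libraries mentioned above): the shim exposes a newer keyword-argument API while forwarding to an unchanged older implementation.

```python
# Hypothetical "old" API: positional arguments, no timeout support.
def old_connect(host, port):
    return f"connected to {host}:{port}"

# Shim: presents a newer keyword-only API, adapts the arguments,
# and redirects the call to the old implementation.
def connect(*, host="localhost", port=80, timeout=None):
    # The old API has no timeout concept, so the shim absorbs
    # the argument instead of forwarding it.
    return old_connect(host, port)

print(connect(host="example.org", port=8080))  # → connected to example.org:8080
```

LD_PRELOAD-style shims such as bind.so do the same thing at the dynamic-linker level, interposing on C symbols rather than Python functions.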
In the type t |
https://en.wikipedia.org/wiki/Critical%20phenomena | In physics, critical phenomena is the collective name associated with the physics of critical points. Most critical phenomena stem from the divergence of the correlation length, but the dynamics also slows down near the critical point. Critical phenomena include scaling relations among different quantities, power-law divergences of some quantities (such as the magnetic susceptibility in the ferromagnetic phase transition) described by critical exponents, universality, fractal behaviour, and ergodicity breaking. Critical phenomena take place in second-order phase transitions, although not exclusively.
The critical behavior is usually different from the mean-field approximation which is valid away from the phase transition, since the latter neglects correlations, which become increasingly important as the system approaches the critical point where the correlation length diverges. Many properties of the critical behavior of a system can be derived in the framework of the renormalization group.
In order to explain the physical origin of these phenomena, we shall use the Ising model as a pedagogical example.
The critical point of the 2D Ising model
Consider a square array of classical spins S_i which may only take two values, +1 and −1, at a certain temperature T, interacting through the classical Ising Hamiltonian H = −J Σ⟨i,j⟩ S_i S_j,
where the sum is extended over the pairs of nearest neighbours ⟨i, j⟩ and J is a coupling constant, which we will consider to be fixed. There is a certain temperature, called the Curie temperature or critical temperature T_c, below which the system presents ferromagnetic long-range order. Above it, it is paramagnetic and is apparently disordered.
At temperature zero, the system may only take one global sign, either +1 or −1. At higher temperatures, but below T_c, the state is still globally magnetized, but clusters of the opposite sign appear. As the temperature increases, these clusters start to contain smaller clusters themselves, in a typical Russian-dolls picture. Their typical size, called the |
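The clustering picture can be explored numerically with a Metropolis Monte Carlo sketch of this Hamiltonian (illustrative assumptions: a small periodic lattice, J = 1, and temperature in units where k_B = 1):

```python
import math
import random

def energy(spins, J=1.0):
    """Total Ising energy -J * sum of S_i * S_j over nearest-neighbour
    pairs, periodic boundaries, each bond counted once (right + down)."""
    n = len(spins)
    E = 0.0
    for i in range(n):
        for j in range(n):
            E -= J * spins[i][j] * (spins[i][(j + 1) % n] + spins[(i + 1) % n][j])
    return E

def metropolis_sweep(spins, T, J=1.0, rng=random):
    """One sweep: n*n single-spin flip attempts, each accepted with
    probability min(1, exp(-dE / T))."""
    n = len(spins)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        nb = (spins[i][(j + 1) % n] + spins[i][(j - 1) % n]
              + spins[(i + 1) % n][j] + spins[(i - 1) % n][j])
        dE = 2.0 * J * spins[i][j] * nb
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] *= -1

spins = [[1] * 3 for _ in range(3)]
print(energy(spins))  # all-up 3x3 lattice: 2*9 bonds, so -18.0
```

Running many sweeps at T well below versus well above the critical temperature shows the magnetized and disordered regimes respectively.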
https://en.wikipedia.org/wiki/Cleavage%20furrow | In cell biology, the cleavage furrow is the indentation of the cell's surface that begins the progression of cleavage, by which animal and some algal cells undergo cytokinesis, the final splitting of the membrane, in the process of cell division. The same proteins responsible for muscle contraction, actin and myosin, begin the process of forming the cleavage furrow, creating an actomyosin ring. Other cytoskeletal proteins and actin binding proteins are involved in the procedure.
Mechanism
Plant cells do not perform cytokinesis through this exact method but the two procedures are not totally different. Animal cells form an actin-myosin contractile ring within the equatorial region of the cell membrane that constricts to form the cleavage furrow. In plant cells, Golgi vesicle secretions form a cell plate or septum on the equatorial plane of the cell wall by the action of microtubules of the phragmoplast. The cleavage furrow in animal cells and the phragmoplast in plant cells are complex structures made up of microtubules and microfilaments that aid in the final separation of the cells into two identical daughter cells.
Cell cycle
The cell cycle begins with interphase when the DNA replicates, the cell grows and prepares to enter mitosis. Mitosis includes four phases: prophase, metaphase, anaphase, and telophase. Prophase is the initial phase when spindle fibers appear that function to move the chromosomes toward opposite poles. This spindle apparatus consists of microtubules, microfilaments and a complex network of various proteins. During metaphase, the chromosomes line up using the spindle apparatus in the middle of the cell along the equatorial plate. The chromosomes move to opposite poles during anaphase and remain attached to the spindle fibers by their centromeres. Animal cell cleavage furrow formation is caused by a ring of actin microfilaments called the contractile ring, which forms during early anaphase. Myosin is present in the region of the contracti |
https://en.wikipedia.org/wiki/Power%20of%20two | A power of two is a number of the form 2^n where n is an integer, that is, the result of exponentiation with number two as the base and integer n as the exponent.
In a context where only integers are considered, n is restricted to non-negative values, so the powers of two are 1, 2, and 2 multiplied by itself a certain number of times.
The first ten powers of 2 for non-negative values of n are:
1, 2, 4, 8, 16, 32, 64, 128, 256, 512, ...
Base of the binary numeral system
Because two is the base of the binary numeral system, powers of two are common in computer science. Written in binary, a power of two always has the form 100...000 or 0.00...001, just like a power of 10 in the decimal system.
Computer science
Two to the exponent of n, written as 2^n, is the number of ways the bits in a binary word of length n can be arranged. A word, interpreted as an unsigned integer, can represent values from 0 (000...000 in binary) to 2^n − 1 (111...111 in binary) inclusively. Corresponding signed integer values can be positive, negative and zero; see signed number representations. Either way, one less than a power of two is often the upper bound of an integer in binary computers. As a consequence, numbers of this form show up frequently in computer software. As an example, a video game running on an 8-bit system might limit the score or the number of items the player can hold to 255—the result of using a byte, which is 8 bits long, to store the number, giving a maximum value of 2^8 − 1 = 255. For example, in the original Legend of Zelda the main character was limited to carrying 255 rupees (the currency of the game) at any given time, and the video game Pac-Man famously has a kill screen at level 256.
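The all-ones bit pattern of one less than a power of two underlies a classic constant-time test: n & (n − 1) clears the lowest set bit, so the result is zero exactly when n has a single bit set. A short Python illustration:

```python
def is_power_of_two(n: int) -> bool:
    """True iff n is a positive power of two (exactly one bit set)."""
    return n > 0 and (n & (n - 1)) == 0

print([k for k in range(1, 300) if is_power_of_two(k)])
# → [1, 2, 4, 8, 16, 32, 64, 128, 256]
```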
Powers of two are often used to measure computer memory. A byte is now considered eight bits (an octet), resulting in the possibility of 256 values (2^8). (The term byte once meant (and in some cases, still means) a collection of bits, typically of 5 to 32 bits, rather than only an 8-bit unit.) The prefix kilo, in conjunction with byte, may be, and has t |
https://en.wikipedia.org/wiki/Online%20service%20provider%20law | Online service provider law is a summary and case law tracking page for laws, legal decisions and issues relating to online service providers (OSPs), like the Wikipedia and Internet service providers, from the viewpoint of an OSP considering its liability and customer service issues. See Cyber law for broader coverage of the law of cyberspace.
United States
The general liability risk within the United States is low, but it is necessary to review the laws and decisions of other countries, because the extraterritorial application of laws to content hosted in the US is a significant concern.
Libel, defamation
1991 Cubby v. CompuServe held that CompuServe wasn't the publisher and granted summary judgment in its favor.
May 1995 Stratton Oakmont, Inc. v. Prodigy Services Co. decision which held that Prodigy was the publisher, because it could delete messages.
1996 Section 230 of the Communications Decency Act (CDA), which states in part that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider". Note that this portion of the CDA was not struck down and remains law.
November 1997 Zeran v. AOL The CDA protects AOL even though it repeatedly ignored a defamation complaint.
April 1998 Blumenthal v. AOL (part of the case against Drudge and AOL) held that the CDA protects AOL for Drudge's writing that Blumenthal, an assistant to the US President, had a spousal abuse background (retracted in two days) even though it paid Drudge US$3,000 a month for his columns, had editorial control and might well have been liable if it were not an online publication.
Lunney v. Prodigy Services Co. 94 N.Y.2d 242 (1999) held that internet chatroom provider was not considered a publisher of defamatory material posted from an impostor account due to Prodigy's passive role.
2003 Carafano v. Metrosplash.com (the Star Trek actress case). Providing multiple choice options in forms d |
https://en.wikipedia.org/wiki/Wireless%20Internet%20service%20provider | A wireless Internet service provider (WISP) is an Internet service provider with a network based on wireless networking. Technology may include commonplace Wi-Fi wireless mesh networking, or proprietary equipment designed to operate over open 900 MHz, 2.4 GHz, 4.9, 5, 24, and 60 GHz bands or licensed frequencies in the UHF band (including the MMDS frequency band), LMDS, and other bands from 6 GHz to 80 GHz.
In the US, the Federal Communications Commission (FCC) released Report and Order, FCC 05-56 in 2005 that revised the FCC’s rules to open the 3650 MHz band for terrestrial wireless broadband operations. On November 14, 2007 the Commission released Public Notice (DA 07-4605) in which the Wireless Telecommunications Bureau announced the start date for licensing and registration process for the 3650-3700 MHz band.
As of July 2015, there are over 2,000 fixed wireless broadband providers operating in the US, servicing nearly 4 million customers.
History
Initially, WISPs were only found in rural areas not covered by cable television or DSL. The first WISP in the world was LARIAT, a non-profit rural telecommunications cooperative founded in 1992 in Laramie, Wyoming by electrical engineer and InfoWorld columnist Brett Glass. LARIAT originally used WaveLAN equipment, manufactured by the NCR Corporation, which operated on the 900 MHz unlicensed radio band. LARIAT was taken private in 2003 and continues to exist as a for-profit wireless ISP.
Another early WISP was a company called Internet Office Parks in Johannesburg, South Africa that was founded by Roy Pater, Brett Airey and Attila Barath in January 1996 when they realized the South African Telco, Telkom could not keep up with the demand for dedicated Internet links for business use. Using what was one of the first wireless LAN products available for wireless barcode scanning in stores, called Aironet (now owned by Cisco), they worked out if they ran a dedicated Telco link into the highest building in a business area |
https://en.wikipedia.org/wiki/Interactive%20television | Interactive television is a form of media convergence, adding data services to traditional television technology. It has included on-demand delivery of content, online shopping, and viewer polls. Interactive TV is an example of how new information technology can be integrated vertically into established technologies and commercial structures.
History
Prior to the development of interactive television, interaction could only be simulated. In the 1950s, there were limited efforts to provide an illusion of interactive experience, most overtly with Winky Dink and You, which encouraged viewers to draw on a vinyl sheet they would attach to a television set. QUBE operated an interactive cable television service in Ohio from 1977 to 1984.
An interactive video-on-demand (VOD) television service was proposed in 1986 in Japan, where there were plans to develop an "Integrated Network System" service. It was intended to include various interactive services, including videotelephony, home shopping, online banking, remote work, and home entertainment services. However, it was not possible to practically implement such an interactive VOD service until the adoption of DCT and ADSL technologies made it possible in the 1990s. In early 1994, British Telecommunications (BT) began testing an interactive VOD television trial service in the United Kingdom. It used the DCT-based MPEG-1 and MPEG-2 video compression standards, along with ADSL technology.
Sega Channel, a service that allowed Sega Genesis owners to download video games on demand via cable television signals, began rolling out in the United States in 1994 and was discontinued in 1998. It has been described as a form of interactive television.
The first patent of interactive connected TV was granted in 1999 in the United States; it expired in 2015.
ATSC 3.0, also known as "NextGen TV", adds interactivity features to terrestrial television. As of April 2022, broadcasters in 60 media markets in the United States were using ATS |
https://en.wikipedia.org/wiki/List%20of%20Latin%20and%20Greek%20words%20commonly%20used%20in%20systematic%20names | This list of Latin and Greek words commonly used in systematic names is intended to help those unfamiliar with classical languages to understand and remember the scientific names of organisms. The binomial nomenclature used for animals and plants is largely derived from Latin and Greek words, as are some of the names used for higher taxa, such as orders and above. At the time when biologist Carl Linnaeus (1707–1778) published the books that are now accepted as the starting point of binomial nomenclature, Latin was used in Western Europe as the common language of science, and scientific names were in Latin or Greek: Linnaeus continued this practice.
While learning Latin is now less common, it is still used by classical scholars, and for certain purposes in botany, medicine and the Roman Catholic Church, and it can still be found in scientific names. It is helpful to be able to understand the source of scientific names. Although the Latin names do not always correspond to the current English common names, they are often related, and if their meanings are understood, they are easier to recall. The binomial name often reflects limited knowledge or hearsay about a species at the time it was named. For instance Pan troglodytes, the chimpanzee, and Troglodytes troglodytes, the wren, are not necessarily cave-dwellers.
Sometimes a genus name or specific descriptor is simply the Latin or Greek name for the animal (e.g. Canis is Latin for dog). These words may not be included in the table below if they only occur for one or two taxa. Instead, the words listed below are the common adjectives and other modifiers that repeatedly occur in the scientific names of many organisms (in more than one genus).
Adjectives vary according to gender, and in most cases only the lemma form (nominative singular masculine form) is listed here. 1st-and-2nd-declension adjectives end in -us (masculine), -a (feminine) and -um (neuter), whereas 3rd-declension adjectives ending in -is (masculine |
https://en.wikipedia.org/wiki/MiNT | MiNT (MiNT is Now TOS) is a free software alternative operating system kernel for the Atari ST series. It is a multi-tasking alternative to TOS and MagiC. Together with the free system components fVDI device drivers, XaAES graphical user interface widgets, and TeraDesk file manager, MiNT provides a free TOS compatible replacement OS that can multitask.
History
Work on MiNT began in 1989, as the developer Eric Smith was trying to port the GNU library and related utilities to the Atari ST's TOS. It soon became much easier to add a Unix-like layer to TOS than to patch all of the GNU software, and MiNT began as a TOS extension to help in porting.
MiNT was originally released by Eric Smith as "MiNT is Not TOS" (a recursive acronym in the style of "GNU's Not Unix") in May 1990. The new kernel gained traction, with people contributing a port of the MINIX file system and a port to the Atari TT.
At the same time, Atari was looking to enhance the TOS with multitasking abilities. MiNT could fulfill the job, and Atari hired Eric Smith. MiNT was adopted as an official alternative kernel with the release of the Atari Falcon, slightly altering the MiNT acronym into "MiNT is Now TOS". Atari bundled MiNT with a multitasking version of the Graphics Environment Manager (GEM) under the name MultiTOS as a floppy disk based installer.
After Atari left the computer market, MiNT development continued as FreeMiNT, maintained by a team of volunteers. FreeMiNT development follows a classic open-source approach, with the source code hosted in a publicly browsable Git repository on GitHub and development discussed on a public mailing list, which is maintained on SourceForge after an earlier (2014) move from AtariForge, where it was hosted for almost 20 years.
MiNT software ecosystem
FreeMiNT provides only a kernel, so several distributions support MiNT, like VanillaMint, EasyMint, STMint, and BeeKey/BeePi.
Although FreeMiNT can use the graphical user interface |
https://en.wikipedia.org/wiki/Hankel%20matrix | In linear algebra, a Hankel matrix (or catalecticant matrix), named after Hermann Hankel, is a square matrix in which each ascending skew-diagonal from left to right is constant, e.g.:
  [a b c d e]
  [b c d e f]
  [c d e f g]
  [d e f g h]
  [e f g h i]
More generally, a Hankel matrix is any square matrix A whose (i, j) entry depends only on the sum i + j.
In terms of the components, if the (i, j) element of A is denoted A_{i,j}, and assuming i ≤ j, then we have A_{i,j} = A_{i+k,j−k} for all k = 0, ..., j − i.
Properties
Any Hankel matrix is symmetric.
Let J_n be the n × n exchange matrix. If H is a Hankel matrix, then H = T J_n, where T is a Toeplitz matrix.
If T is real symmetric, then H = T J_n will have the same eigenvalues as T up to sign.
The Hilbert matrix is an example of a Hankel matrix.
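These properties are easy to check concretely. A pure-Python sketch (for real work one would use scipy.linalg.hankel and NumPy):

```python
def hankel(seq, n):
    """n-by-n Hankel matrix with H[i][j] = seq[i + j] (constant antidiagonals)."""
    return [[seq[i + j] for j in range(n)] for i in range(n)]

def exchange(n):
    """Exchange matrix J_n: ones on the antidiagonal, zeros elsewhere."""
    return [[1 if i + j == n - 1 else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

H = hankel([1, 2, 3, 4, 5], 3)           # [[1,2,3],[2,3,4],[3,4,5]]
assert H == [list(r) for r in zip(*H)]   # any Hankel matrix is symmetric
T = matmul(H, exchange(3))               # multiplying by J_n flips columns...
assert all(T[i][j] == T[i + 1][j + 1]    # ...turning H into a Toeplitz matrix
           for i in range(2) for j in range(2))
print(T)  # → [[3, 2, 1], [4, 3, 2], [5, 4, 3]]
```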
Relation to formal Laurent series
Hankel matrices are closely related to formal Laurent series. In fact, a formal Laurent series f(z) = Σ_{n ≤ N} a_n z^n gives rise to a linear map, referred to as a Hankel operator H_f,
which takes a polynomial g and sends it to the product fg, but discards all powers of z with a non-negative exponent, so as to give an element in z^{−1}C[[z^{−1}]], the formal power series with strictly negative exponents. The map H_f is in a natural way C[z]-linear, and its matrix with respect to the monomial bases is a Hankel matrix formed from the coefficients a_n of f.
Any Hankel matrix arises in such a way. A theorem due to Kronecker says that the rank of this matrix is finite precisely if f is a rational function, i.e., a fraction of two polynomials f = p/q.
Hankel operator
A Hankel operator on a Hilbert space is one whose matrix is a (possibly infinite) Hankel matrix with respect to an orthonormal basis. As indicated above, a Hankel matrix is a matrix with constant values along its antidiagonals, which means that a Hankel matrix A must satisfy, for all rows i and columns j, A_{i,j} = A_{i+k,j−k}. Note that every entry A_{i,j} depends only on i + j.
Let the corresponding Hankel operator be H. Given a Hankel matrix A, the corresponding Hankel operator is then defined as H(a) = Aa.
We are often interested in Hankel operators over the Hilbert space ℓ²(Z), the space of square-summable bilateral complex sequences.
We are often interested in approximations of the Hankel ope |
https://en.wikipedia.org/wiki/Shinobi%20%281987%20video%20game%29 | Shinobi is a side-scrolling hack-and-slash video game produced by Sega, originally released for arcades on the Sega System 16 board in 1987. The player controls the ninja Joe Musashi, who must stop the Zeed terrorist organization from kidnapping students of his clan.
Shinobi was a commercial success in arcades; it topped the monthly Japanese table arcade charts in December 1987, and became a blockbuster arcade hit in the United States, where it was the highest-grossing conversion kit of 1988 and one of the top five conversion kits of 1989. It was adapted by Sega to its Master System game console, followed by conversions to the Nintendo Entertainment System, PC Engine, and home computers. It was re-released as downloadable emulated versions of the original arcade game for the Wii and Xbox 360. The arcade game joined the Nintendo Switch in January 2020 through the Sega Ages series. Shinobi's success inspired various sequels and spin-offs of the Shinobi series.
Gameplay
The controls of Shinobi consist of an eight-way joystick and three action buttons for attacking, jumping, and using ninjutsu techniques called "ninja magic". The player can walk, or perform a crouching walk by pressing the joystick diagonally downward. The player can jump to higher or lower floors by pressing the jump button while holding the joystick up or down. The protagonist Joe Musashi's standard weapons are an unlimited supply of shurikens, and punches and kicks. Rescuing certain hostages in each stage will grant him an attack upgrade replacing throwing stars with a gun, and his close-range attack becomes a katana slash. Musashi's ninjutsu techniques can only be used once per stage and will clear the screen of all enemies, or greatly damage a boss. Depending on the stage, the three ninjutsu techniques are a thunderstorm, a tornado, and a doppelganger attack.
Enemies include punks, mercenaries, ninjas, and the Mongolian swordsmen guarding each hostage. Musashi can bump into most enemies without harm and can only |
https://en.wikipedia.org/wiki/Seismic%20tomography | Seismic tomography is a technique for imaging the subsurface of the Earth with seismic waves produced by earthquakes or explosions. P-, S-, and surface waves can be used for tomographic models of different resolutions based on seismic wavelength, wave source distance, and the seismograph array coverage. The data received at seismometers are used to solve an inverse problem, wherein the locations of reflection and refraction of the wave paths are determined. This solution can be used to create 3D images of velocity anomalies which may be interpreted as structural, thermal, or compositional variations. Geoscientists use these images to better understand core, mantle, and plate tectonic processes.
Theory
Tomography is solved as an inverse problem. Seismic travel time data are compared to an initial Earth model and the model is modified until the best possible fit between the model predictions and observed data is found. Seismic waves would travel in straight lines if Earth was of uniform composition, but the compositional layering, tectonic structure, and thermal variations reflect and refract seismic waves. The location and magnitude of these variations can be calculated by the inversion process, although solutions to tomographic inversions are non-unique.
Seismic tomography is similar to medical x-ray computed tomography (CT scan) in that a computer processes receiver data to produce a 3D image, although CT scans use attenuation instead of traveltime difference. Seismic tomography has to deal with the analysis of curved ray paths which are reflected and refracted within the Earth, and potential uncertainty in the location of the earthquake hypocenter. CT scans use linear x-rays and a known source.
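A toy straight-ray version of the inversion makes the idea concrete (hypothetical geometry and numbers): each travel time is the sum over cells of path length times slowness (1/velocity), so with known ray paths the observed times give a linear system for the slowness model.

```python
# Two cells crossed by two rays. L[r][c] = length of ray r in cell c (km),
# t[r] = observed travel time (s). Solve L @ s = t for the slownesses s.
L = [[10.0, 0.0],   # ray 0 stays in cell 0
     [6.0, 8.0]]    # ray 1 crosses both cells
t = [2.0, 3.2]      # synthetic "observations"

# 2x2 solve via Cramer's rule (real tomography uses damped least squares
# over millions of cells and iteratively refined, curved ray paths).
det = L[0][0] * L[1][1] - L[0][1] * L[1][0]
s = [(t[0] * L[1][1] - L[0][1] * t[1]) / det,
     (L[0][0] * t[1] - t[0] * L[1][0]) / det]
velocities = [1 / si for si in s]
print(velocities)  # recovered model: cell velocities in km/s
```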
History
Seismic tomography requires large datasets of seismograms and well-located earthquake or explosion sources. These became more widely available in the 1960s with the expansion of global seismic networks, and in the 1970s when digital seismograph data archives were |
https://en.wikipedia.org/wiki/Planning | Planning is the process of thinking regarding the activities required to achieve a desired goal. Planning is based on foresight, the fundamental capacity for mental time travel. The evolution of forethought, the capacity to think ahead, is considered to have been a prime mover in human evolution. Planning is a fundamental property of intelligent behavior. It involves the use of logic and imagination to visualise not only a desired result, but the steps necessary to achieve that result.
An important aspect of planning is its relationship to forecasting. Forecasting aims to predict what the future will look like, while planning imagines what the future could look like.
Planning according to established principles is a core part of many professional occupations, particularly in fields such as management and business. Once a plan has been developed, it is possible to measure and assess progress, efficiency and effectiveness. As circumstances change, plans may need to be modified or even abandoned.
Psychology
Planning has been modelled in terms of intentions (deciding what tasks one might wish to do), tenacity (continuing towards a goal in the face of difficulty) and flexibility (adapting one's approach in response to circumstances). An implementation intention is a specification of behaviour that an individual believes to be correlated with a goal, such as acting at a particular time or in a particular place. Implementation intentions are distinguished from goal intentions, which specify an outcome such as running a marathon.
Neurology
Planning is one of the executive functions of the brain, encompassing the neurological processes involved in the formulation, evaluation and selection of a sequence of thoughts and actions to achieve a desired goal. Various studies utilizing a combination of neuropsychological, neuropharmacological and functional neuroimaging approaches have suggested there is a positive relationship between impaired planning ability and damag |
https://en.wikipedia.org/wiki/88%20%28number%29 | 88 (eighty-eight) is the natural number following 87 and preceding 89.
In mathematics
88 is:
a refactorable number.
a primitive semiperfect number.
an untouchable number.
a hexadecagonal number.
an Erdős–Woods number, since it is possible to find sequences of 88 consecutive integers such that each inner member shares a factor with either the first or the last member.
a palindromic number in bases 5 (323₅), 10 (88₁₀), 21 (44₂₁), and 43 (22₄₃).
a repdigit in bases 10, 21 and 43.
a 2-automorphic number.
the smallest positive integer with a Zeckendorf representation requiring 5 Fibonacci numbers.
a strobogrammatic number.
the largest number in English not containing the letter 'n' in its name, when using short scale.
88 and 945 are the smallest coprime abundant numbers.
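The Zeckendorf property in the list above can be verified with the standard greedy algorithm (a sketch; Zeckendorf's theorem says repeatedly taking the largest Fibonacci number ≤ n yields distinct, non-consecutive Fibonacci numbers):

```python
def zeckendorf(n):
    """Greedy Zeckendorf representation of a positive integer n."""
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f
    return parts

print(zeckendorf(88))  # → [55, 21, 8, 3, 1], i.e. five Fibonacci numbers
```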
In science and technology
The atomic number of the element radium.
The number of constellations in the sky as defined by the International Astronomical Union.
Messier object M88, a magnitude 11.0 spiral galaxy in the constellation Coma Berenices.
The New General Catalogue object NGC 88, a spiral galaxy in the constellation Phoenix, and a member of Robert's Quartet.
Space Shuttle Mission 88 (STS-88), launched and completed in December 1998, began the construction of the International Space Station.
Approximately the number of days it takes Mercury to complete its orbit.
Cultural significance
In Chinese culture
Number 88 symbolizes fortune and good luck in Chinese culture, since the word for 8 (八, bā) sounds similar to the word fā (發), which implies fācái (發財), or wealth, in Mandarin or Cantonese. The number 8 is considered to be the luckiest number in Chinese culture, and prices in Chinese supermarkets often contain many 8s. The shape of the Chinese character for 8 (八) implies that a person will have a great, wide future as the character starts narrow and gets wider toward the bottom. The Chinese government has been auctioning auto license plates containing many 8s for tens of thousands of dollars. The 20 |
https://en.wikipedia.org/wiki/76%20%28number%29 | 76 (seventy-six) is the natural number following 75 and preceding 77.
In mathematics
76 is:
a composite number; a square-prime, of the form p²·q where q is a higher prime. It is the ninth of this general form and the seventh of the form 2²·q.
a Lucas number.
a telephone or involution number, the number of different ways of connecting 6 points with pairwise connections.
a nontotient.
a 14-gonal number.
a centered pentagonal number.
an Erdős–Woods number since it is possible to find sequences of 76 consecutive integers such that each inner member shares a factor with either the first or the last member.
with an aliquot sum of 64; within an aliquot sequence of two composite numbers (76,64,63,1,0) to the Prime in the 63-aliquot tree.
an automorphic number in base 10. It is one of two 2-digit numbers whose square, 5,776, and higher powers, end in the same two digits. The other is 25.
There are 76 unique compact uniform hyperbolic honeycombs in the third dimension that are generated from Wythoff constructions.
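The automorphic property is simple to enumerate (a sketch): n is automorphic when n² ends in the decimal digits of n itself.

```python
def is_automorphic(n: int) -> bool:
    """True if n*n ends in the digits of n (e.g. 76*76 = 5776)."""
    return (n * n) % 10 ** len(str(n)) == n

print([n for n in range(10, 100) if is_automorphic(n)])  # → [25, 76]
```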
In science
The atomic number of osmium.
The Little Dumbbell Nebula in the constellation Pegasus is designated as Messier object 76 (M76).
In other fields
Seventy-six is also:
In colloquial American parlance, reference to 1776, the year of the signing of the United States Declaration of Independence.
Seventy-Six, an 1823 novel by American writer John Neal.
The Spirit of '76, patriotic painting by Archibald MacNeal Willard.
A brand of ConocoPhillips gas stations, 76.
The number of trombonists leading the parade in "Seventy-Six Trombones", from Meredith Willson's musical The Music Man.
The 76ers, a professional basketball team based in Philadelphia.
76, the debut album of Dutch trance producer and DJ Armin van Buuren.
Years like 1876 and 1976
See also
List of highways numbered 76
References
Integers |
https://en.wikipedia.org/wiki/69%20%28number%29 | 69 (sixty-nine) is the natural number following 68 and preceding 70.
In mathematics
69 is:
a lucky number.
the twentieth semiprime (3 × 23) and the seventh of the form 3 × q where q is a higher prime.
the aliquot sum of sixty-nine is 27 within the aliquot sequence (69,27,13,1,0) and is the third composite number in the 13-aliquot tree; following (27,35).
a Blum integer, since the two factors of 69 are both Gaussian primes.
the sum of the sums of the divisors of the first 9 positive integers.
a strobogrammatic number.
a centered tetrahedral number.
Because 69 has an odd number of 1s in its binary representation, it is sometimes called an "odious number."
In decimal, 69 is the only natural number whose square () and cube () use every digit from 0–9 exactly once.
69 is equal to 105 octal, while 105 is equal to 69 hexadecimal. This same property can be applied to all numbers from 64 to 69.
On many handheld scientific and graphing calculators, the highest factorial that can be calculated, due to memory limitations, is 69!, or about 1.711224524 × 10⁹⁸.
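The octal/hexadecimal coincidence can be checked mechanically (a sketch): write n in octal, reread those digits as a decimal numeral, and the hexadecimal digits of that value spell n again. This holds for 64 through 69:

```python
for n in range(64, 70):
    m = int(oct(n)[2:])          # n's octal digits, reread as decimal
    assert hex(m)[2:] == str(n)  # m's hex digits reproduce n
    print(f"{n} = {oct(n)[2:]} octal; {m} = {hex(m)[2:]} hexadecimal")
```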
In science
The atomic number of thulium, a lanthanide.
Astronomy
The Messier object M69 is a magnitude 9.0 globular cluster in the constellation Sagittarius.
In other fields
Sixty-nine may also refer to:
69ing, a sex position involving each partner aligning themselves to achieve oral sex simultaneously with each other.
In reference to the sex position, "69" has become an Internet meme, where users will respond to any occurrence of the number with the word "nice" and draw specific attention to it. This means to sarcastically imply that the reference to the sex position was intentional. Because of its association with the sex position and resulting meme, "69" has become known as "the sex number".
The registry of the U.S. Navy's aircraft carrier USS Dwight D. Eisenhower (CVN-69), named after Dwight D. Eisenhower, the 34th President of the United States and five-star general in the United States Army.
The number of the French department Rhône. |
https://en.wikipedia.org/wiki/Federico%20Faggin | Federico Faggin (, ; born 1 December 1941) is an Italian physicist, engineer, inventor and entrepreneur. He is best known for designing the first commercial microprocessor, the Intel 4004. He led the 4004 (MCS-4) project and the design group during the first five years of Intel's microprocessor effort. Faggin also created, while working at Fairchild Semiconductor in 1968, the self-aligned MOS (metal-oxide-semiconductor) silicon-gate technology (SGT), which made possible MOS semiconductor memory chips, CCD image sensors, and the microprocessor. After the 4004, he led development of the Intel 8008 and 8080, using his SGT methodology for random logic chip design, which was essential to the creation of early Intel microprocessors. He was co-founder (with Ralph Ungermann) and CEO of Zilog, the first company solely dedicated to microprocessors, and led the development of the Zilog Z80 and Z8 processors. He was later the co-founder and CEO of Cygnet Technologies, and then Synaptics.
In 2010, he received the 2009 National Medal of Technology and Innovation, the highest honor the United States confers for achievements related to technological progress. In 2011, Faggin founded the Federico and Elvia Faggin Foundation to support the scientific study of consciousness at US universities and research institutes. In 2015, the Faggin Foundation helped to establish a $1 million endowment for the Faggin Family Presidential Chair in the Physics of Information at UC Santa Cruz to promote the study of "fundamental questions at the interface of physics and related fields including mathematics, complex systems, biophysics, and cognitive science, with the unifying theme of information in physics."
Education and early career
Born in Vicenza, Italy, Federico grew up in an intellectual environment. His father, Giuseppe Faggin, was a scholar who wrote many academic books and translated, with commentaries, the Enneads of Plotinus from the original Greek into modern Italian. Federico had a str |
https://en.wikipedia.org/wiki/Banyan%20VINES | Banyan VINES is a discontinued network operating system developed by Banyan Systems for computers running AT&T's UNIX System V.
VINES is an acronym for Virtual Integrated NEtwork Service. Like Novell NetWare, VINES's network services are based on the Xerox XNS stack.
James Allchin, who later worked as Group Vice President for Platforms at Microsoft until his retirement on January 30, 2007, was the chief architect of Banyan VINES.
VINES technology
VINES ran on a low-level protocol known as VIP—the VINES Internetwork Protocol—that was essentially identical to the lower layers of the Xerox Network Systems (XNS) protocols. Addresses consist of a 32-bit network number and a 16-bit subnetwork number, which map onto the 48-bit Ethernet address used to route to machines. This means that, like other XNS-based systems, VINES can only support a two-level internet.
A set of routing algorithms, however, set VINES apart from other XNS systems at this level. The key differentiator, ARP (Address Resolution Protocol), allowed VINES clients to automatically set up their own network addresses. When a client first booted, it broadcast a request on the subnet asking for servers, which responded with suggested addresses. The client used the first server to respond, although servers could hand off "better" routing instructions to the client if the network changed. The overall concept resembled AppleTalk's AARP system, except that VINES required at least one server, whereas AARP functioned purely peer-to-peer. Like AARP, VINES made the network inherently "chatty", sending periodic updates about the status of clients to other servers on the internetwork.
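The boot-time flow just described can be sketched as follows. This is a hypothetical illustration, not Banyan's actual implementation: the names and tuple layout are assumptions, with the VIP address taken as the 32-bit network number concatenated with the 16-bit subnetwork number to give the 48 bits mentioned above.

```python
def assign_vines_address(server_responses):
    """Pick the address offered by the first server to respond.

    Each response is assumed (hypothetically) to be a tuple of
    (server_name, (network_number_32bit, subnet_number_16bit)).
    """
    if not server_responses:
        # Unlike AppleTalk's peer-to-peer AARP, VINES needs a server.
        raise RuntimeError("VINES requires at least one server on the network")
    first_server, (network, subnet) = server_responses[0]
    # 32-bit network number + 16-bit subnetwork number = 48 bits,
    # matching the length of an Ethernet (MAC) address.
    vip_address = (network << 16) | subnet
    return first_server, vip_address

# The client uses the first responder's suggestion.
server, address = assign_vines_address([("server-a", (0x00000001, 0x8001)),
                                        ("server-b", (0x00000001, 0x8002))])
assert (server, address) == ("server-a", 0x18001)
```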
Rounding out its lower-level system, VINES used RTP (the Routing Table Protocol), a low-overhead message system for passing around information about changes to the routing, and ARP to determine the address of other nodes on the system. These closely resembled the similar systems used in other XNS-based protocols. VINES also included ICP (the Internet Control Pr |
https://en.wikipedia.org/wiki/72%20%28number%29 | 72 (seventy-two) is the natural number following 71 and preceding 73. It is half a gross or 6 dozen (i.e., 60 in duodecimal).
In mathematics
Seventy-two is a pronic number, as it is the product of 8 and 9. It is the smallest Achilles number, as it is a powerful number that is not itself a perfect power.
72 is an abundant number. With exactly twelve positive divisors, including 12 (one of only two sublime numbers), 72 is also the twelfth member in the sequence of refactorable numbers. 72 has an Euler totient of 24, which makes it a highly totient number, as there are 17 solutions to the equation φ(x) = 72, more than for any integer below 72. It is equal to the sum of the preceding smaller highly totient numbers 24 and 48, and contains the first six highly totient numbers 1, 2, 4, 8, 12 and 24 as a subset of its proper divisors. 144, or twice 72, is also highly totient, as is 576, the square of 24. While 17 different integers have a totient value of 72, the sum of Euler's totient function φ(x) over the first 15 integers is 72. 72 is also the twenty-eighth Harshad number in decimal, as it is divisible by the sum of its digits (9).
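The totient claims above are easy to verify by brute force. The Python sketch below uses a simple trial-division φ, together with the standard bound φ(x) ≥ √(x/2), which confines any solution of φ(x) = 72 to x ≤ 2 · 72².

```python
def phi(n):
    """Euler's totient, computed via trial-division factorization."""
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            pk = 1
            while m % p == 0:
                m //= p
                pk *= p
            result *= pk - pk // p
        p += 1
    if m > 1:
        result *= m - 1
    return result

assert phi(72) == 24                                    # Euler totient of 72
solutions = [x for x in range(1, 2 * 72**2 + 1) if phi(x) == 72]
assert len(solutions) == 17                             # 17 solutions of phi(x) = 72
assert sum(phi(x) for x in range(1, 16)) == 72          # sum over first 15 integers
```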
72 is the second multiple of 12, after 48, that is not a sum of twin primes. It is, however, the sum of four consecutive primes (13 + 17 + 19 + 23), as well as the sum of six consecutive primes (5 + 7 + 11 + 13 + 17 + 19).
72 is the smallest number whose fifth power is the sum of five smaller fifth powers: 19⁵ + 43⁵ + 46⁵ + 47⁵ + 67⁵ = 72⁵.
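The fifth-power identity can be verified with a one-liner (Python shown for illustration):

```python
# 19^5 + 43^5 + 46^5 + 47^5 + 67^5 = 72^5 = 1,934,917,632
terms = [19, 43, 46, 47, 67]
assert sum(t**5 for t in terms) == 72**5
print(72**5)  # 1934917632
```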
72 is the number of distinct {7/2} magic heptagrams, all with a magic constant of 30.
72 is the sum of the eighth row of Lozanić's triangle.
72 is the number of degrees in the central angle of a regular pentagon, which is constructible with a compass and straight-edge.
72 plays a role in the Rule of 72 in economics, which approximates doubling times under annual compounding for interest rates of around 6% to 10%, due in part to its high number of divisors.
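How good is the approximation? Comparing the Rule of 72 estimate (72 divided by the rate in percent) with the exact doubling time log(2)/log(1 + r) over the 6–10% range shows the two agree to within a fraction of a year:

```python
import math

for rate_percent in range(6, 11):
    estimate = 72 / rate_percent                              # Rule of 72
    exact = math.log(2) / math.log(1 + rate_percent / 100)    # exact doubling time
    assert abs(estimate - exact) < 0.2                        # within ~0.2 years
    print(f"{rate_percent}%: rule of 72 -> {estimate:.2f}y, exact -> {exact:.2f}y")
```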
Inside Lie algebras:
72 is the number of vertices of the six-dimen |
https://en.wikipedia.org/wiki/Coprocessor | A coprocessor is a computer processor used to supplement the functions of the primary processor (the CPU). Operations performed by the coprocessor may be floating-point arithmetic, graphics, signal processing, string processing, cryptography or I/O interfacing with peripheral devices. By offloading processor-intensive tasks from the main processor, coprocessors can accelerate system performance. Coprocessors allow a line of computers to be customized, so that customers who do not need the extra performance do not need to pay for it.
Functionality
Coprocessors vary in their degree of autonomy. Some (such as FPUs) rely on direct control via coprocessor instructions embedded in the CPU's instruction stream. Others are independent processors in their own right, capable of working asynchronously, but they are either not optimized for general-purpose code or incapable of running it, owing to a limited instruction set focused on accelerating specific tasks. It is common for these to be driven by direct memory access (DMA), with the host processor (a CPU) building a command list. The PlayStation 2's Emotion Engine contained an unusual DSP-like SIMD vector unit capable of both modes of operation.
History
To make the best use of mainframe computer processor time, input/output tasks were delegated to separate systems called Channel I/O. The mainframe did not perform any I/O processing itself; instead it would set parameters for an input or output operation and then signal the channel processor to carry out the whole of the operation. By dedicating relatively simple sub-processors to handle time-consuming I/O formatting and processing, overall system performance was improved.
Coprocessors for floating-point arithmetic first appeared in desktop computers in the 1970s and became common throughout the 1980s and into the early 1990s. Early 8-bit and 16-bit processors used software to carry out floating-point arithmetic operations. Where a coprocessor was supported, floating-poin |
https://en.wikipedia.org/wiki/Lowest%20common%20denominator | In mathematics, the lowest common denominator or least common denominator (abbreviated LCD) is the lowest common multiple of the denominators of a set of fractions. It simplifies adding, subtracting, and comparing fractions.
Description
The lowest common denominator of a set of fractions is the lowest number that is a multiple of all the denominators: their lowest common multiple.
The product of the denominators is always a common denominator, as in:
but it is not always the lowest common denominator, as in:
Here, 36 is the least common multiple of 12 and 18. Their product, 216, is also a common denominator, but calculating with that denominator involves larger numbers:
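Since Python 3.9 the standard library exposes math.lcm, so the LCD computation above can be checked directly. The fractions 5/12 and 11/18 below are illustrative values, chosen to match the denominators 12 and 18 mentioned above:

```python
from fractions import Fraction
from math import lcm

# The LCD is the least common multiple of the denominators.
assert lcm(12, 18) == 36          # lowest common denominator
assert 12 * 18 == 216             # plain product: a larger common denominator

# Rewriting illustrative fractions 5/12 and 11/18 over the LCD:
a, b = Fraction(5, 12), Fraction(11, 18)
d = lcm(a.denominator, b.denominator)
print(f"{a.numerator * (d // a.denominator)}/{d} and "
      f"{b.numerator * (d // b.denominator)}/{d}")   # 15/36 and 22/36
```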
With variables rather than numbers, the same principles apply:
Some methods of calculating the LCD are at .
Role in arithmetic and algebra
The same fraction can be expressed in many different forms. As long as the ratio between numerator and denominator is the same, the fractions represent the same number. For example:
because they are all multiplied by 1 written as a fraction:
It is usually easiest to add, subtract, or compare fractions when each is expressed with the same denominator, called a "common denominator". For example, the numerators of fractions with common denominators can simply be added, such that and that , since each fraction has the common denominator 12. Without computing a common denominator, it is not obvious as to what equals, or whether is greater than or less than . Any common denominator will do, but usually the lowest common denominator is desirable because it makes the rest of the calculation as simple as possible.
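Over a common denominator, addition and comparison reduce to operations on the numerators alone. A short illustration with Python's Fraction type (the specific values are illustrative, using the common denominator 12 mentioned above):

```python
from fractions import Fraction

# Adding over the common denominator 12: 1/12 + 2/12 = 3/12 = 1/4.
assert Fraction(1, 12) + Fraction(2, 12) == Fraction(1, 4)

# Comparing via the common denominator 12: 1/3 = 4/12 > 3/12 = 1/4.
assert Fraction(1, 3) > Fraction(1, 4)
```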
Practical uses
The LCD has many practical uses, such as determining the number of objects of two different lengths necessary to align them in a row which starts and ends at the same place, such as in brickwork, tiling, and tessellation. It is also useful in planning work schedules with employees with y days off every x days.
In musi |
https://en.wikipedia.org/wiki/Open%20architecture | Open architecture is a type of computer architecture or software architecture intended to make adding, upgrading, and swapping components with other computers easy. For example, the IBM PC, Amiga 500 and Apple IIe have an open architecture supporting plug-in cards, whereas the Apple IIc computer has a closed architecture. Open architecture systems may use a standardized system bus such as S-100, PCI or ISA or they may incorporate a proprietary bus standard such as that used on the Apple II, with up to a dozen slots that allow multiple hardware manufacturers to produce add-ons, and for the user to freely install them. By contrast, closed architectures, if they are expandable at all, have one or two "expansion ports" using a proprietary connector design that may require a license fee from the manufacturer, or enhancements may only be installable by technicians with specialized tools or training.
Computer platforms may include systems with both open and closed architectures. The Mac mini and Compact Macintosh are closed; the Macintosh II and Power Mac G5 are open. Most desktop PCs are open architecture.
Similarly, an open software architecture is one in which additional software modules can be added to the basic framework provided by the architecture. Open APIs (Application Programming Interfaces) to major software products are the way in which the basic functionality of such products can be modified or extended. The Google APIs are examples. A second type of open software architecture consists of the messages that can flow between computer systems. These messages have a standard structure that can be modified or extended per agreements between the computer systems. An example is IBM's Distributed Data Management Architecture.
Open architecture allows potential users to see inside all or parts of the architecture without any proprietary constraints. Typically, an open architecture publishes all or parts of its architecture that the developer or integrator wants to |
https://en.wikipedia.org/wiki/William%20Lowell%20Putnam%20Mathematical%20Competition | The William Lowell Putnam Mathematical Competition, often abbreviated to Putnam Competition, is an annual mathematics competition for undergraduate college students enrolled at institutions of higher learning in the United States and Canada (regardless of the students' nationalities). It awards a scholarship and cash prizes ranging from $250 to $2,500 for the top students and $5,000 to $25,000 for the top schools. One of the top five individual scorers (designated Putnam Fellows) is awarded a scholarship of up to $12,000 plus tuition at Harvard University (the Putnam Fellow Prize Fellowship); the top 100 individual scorers have their names mentioned in the American Mathematical Monthly (alphabetically ordered within rank); and the names and addresses of the top 500 contestants are mailed to all participating institutions. The competition is widely considered to be the most prestigious university-level mathematics competition in the world, and its difficulty is such that the median score is often zero (out of 120), despite being attempted by students specializing in mathematics.
The competition was founded in 1927 by Elizabeth Lowell Putnam in memory of her husband William Lowell Putnam, who was an advocate of intercollegiate intellectual competition. The competition has been offered annually since 1938 and is administered by the Mathematical Association of America.
Competition layout
The Putnam competition takes place on the first Saturday in December and consists of two three-hour sittings separated by a lunch break, supervised by faculty members at the participating schools. Together the sittings comprise twelve challenging problems. The problems cover a range of advanced material in undergraduate mathematics, including concepts from group theory, set theory, graph theory, lattice theory, and number theory.
Each of the twelve questions is worth 10 points, and the most frequent scores above zero are 10 points for a complete solution, 9 points for a nearly c |
https://en.wikipedia.org/wiki/Traitorous%20eight | The traitorous eight was a group of eight employees who left Shockley Semiconductor Laboratory in 1957 to found Fairchild Semiconductor. William Shockley had in 1956 recruited a group of young Ph.D. graduates with the goal to develop and produce new semiconductor devices. While Shockley had received a Nobel Prize in Physics and was an experienced researcher and teacher, his management of the group was authoritarian and unpopular. This was accentuated by Shockley's research focus not proving fruitful. After the demand for Shockley to be replaced was rebuffed, the eight left to form their own company.
Shockley described their leaving as a "betrayal". The eight who left Shockley Semiconductor were Julius Blank, Victor Grinich, Jean Hoerni, Eugene Kleiner, Jay Last, Gordon Moore, Robert Noyce, and Sheldon Roberts. In August 1957, they reached an agreement with Sherman Fairchild, and on September 18, 1957, they formed Fairchild Semiconductor. The newly founded Fairchild Semiconductor soon grew into a leader in the semiconductor industry. In 1960, it became an incubator of Silicon Valley and was directly or indirectly involved in the creation of dozens of corporations, including Intel and AMD. These many spin-off companies came to be known as "Fairchildren".
Initiation
In the winter of 1954–1955, William Shockley, an inventor of the transistor and a visiting professor at Stanford University, decided to establish his own mass production of advanced transistors and Shockley diodes. He found a sponsor in Raytheon, but Raytheon discontinued the project after a month. In August 1955, Shockley turned for advice to the financier Arnold Beckman, the owner of Beckman Instruments. Shockley needed one million dollars (1 million dollars in 1955 is about 11 million in 2023). Beckman knew that Shockley had no chance in the business, but believed that Shockley's new inventions would be beneficial for his own company and did not want to give them to his competitors. Accordingly, Bec |
https://en.wikipedia.org/wiki/Thermoregulation | Thermoregulation is the ability of an organism to keep its body temperature within certain boundaries, even when the surrounding temperature is very different. A thermoconforming organism, by contrast, simply adopts the surrounding temperature as its own body temperature, thus avoiding the need for internal thermoregulation. The internal thermoregulation process is one aspect of homeostasis: a state of dynamic stability in an organism's internal conditions, maintained far from thermal equilibrium with its environment (the study of such processes in zoology has been called physiological ecology). If the body is unable to maintain a normal temperature and it increases significantly above normal, a condition known as hyperthermia occurs. Humans may also experience lethal hyperthermia when the wet bulb temperature is sustained above for six hours. Work in 2022 established by experiment that a wet-bulb temperature exceeding 30.55 °C caused uncompensable heat stress in young, healthy adult humans. The opposite condition, when body temperature decreases below normal levels, is known as hypothermia. It results when the homeostatic control mechanisms of heat within the body malfunction, causing the body to lose heat faster than it produces it. Normal body temperature is around 37 °C (98.6 °F), and hypothermia sets in when the core body temperature gets lower than . Usually caused by prolonged exposure to cold temperatures, hypothermia is usually treated by methods that attempt to raise the body temperature back to a normal range.
It was not until the introduction of thermometers that any exact data on the temperature of animals could be obtained. It was then found that local differences were present, since heat production and heat loss vary considerably in different parts of the body, although the circulation of the blood tends to bring about a mean temperature of the internal parts. Hence it is important to identify the parts of the body that most closely reflect the temperature |
https://en.wikipedia.org/wiki/Calcium%20in%20biology | Calcium ions (Ca2+) contribute to the physiology and biochemistry of organisms' cells. They play an important role in signal transduction pathways, where they act as a second messenger, in neurotransmitter release from neurons, in contraction of all muscle cell types, and in fertilization. Many enzymes require calcium ions as a cofactor, including several of the coagulation factors. Extracellular calcium is also important for maintaining the potential difference across excitable cell membranes, as well as proper bone formation.
Plasma calcium levels in mammals are tightly regulated, with bone acting as the major mineral storage site. Calcium ions, Ca2+, are released from bone into the bloodstream under controlled conditions. Calcium is transported through the bloodstream as dissolved ions or bound to proteins such as serum albumin. Parathyroid hormone secreted by the parathyroid gland regulates the resorption of Ca2+ from bone, reabsorption in the kidney back into circulation, and increases in the activation of vitamin D3 to calcitriol. Calcitriol, the active form of vitamin D3, promotes absorption of calcium from the intestines and bones. Calcitonin secreted from the parafollicular cells of the thyroid gland also affects calcium levels by opposing parathyroid hormone; however, its physiological significance in humans is dubious.
Intracellular calcium is stored in organelles which repetitively release and then reaccumulate Ca2+ ions in response to specific cellular events: storage sites include mitochondria and the endoplasmic reticulum.
Characteristic concentrations of calcium in model organisms are: in E. coli 3 mM (bound) and 100 nM (free), in budding yeast 2 mM (bound), in mammalian cells 10–100 nM (free), and in blood plasma 2 mM.
Humans
In 2020, calcium was the 204th most commonly prescribed medication in the United States, with more than 2 million prescriptions.
Dietary recommendations
The U.S. Institute of Medicine (IOM) established Recommended Dietary Allowanc |
https://en.wikipedia.org/wiki/Print-through | Print-through is a generally undesirable effect that arises in the use of magnetic tape for storing analog information, in particular music, caused by contact transfer of signal patterns from one layer of tape to another as it sits wound concentrically on a reel.
Explanation
Print-through is a category of noise caused by contact transfer of signal patterns from one layer of tape to another after it is wound onto a reel.
Print-through can take two forms:
thermo-remanent magnetization induced by temperature, and
anhysteretic magnetization caused by an external magnetic field.
The former is unstable over time and can be easily erased by rewinding a tape and letting it sit so that the patterns formed by the contact of upper and lower layers begin to erase each other and form new patterns with the repositioning of upper/lower layers after rewinding. This type of contact printing begins immediately after a recording and increases over time at a rate dependent on the temperature of the storage conditions. Depending on tape formulation and type, a maximum level will be reached after a certain length of time, if it is not further disturbed physically or magnetically.
Audibility
The audibility of print noise caused by contact printing depends on a number of factors:
the amount of print due to conditions of time and storage;
the thickness of the base film that acts as magnetic barrier (thin C-90 cassette tapes are more susceptible than studio mastering tapes that use a base film four times thicker);
the stability of the magnetic particle used in the tape coating;
the speed of the tape (the wavelengths of the prints shift so that higher speeds move printed signal closer to the range where the ear is more sensitive);
the dynamics of the musical program (very quiet passages adjacent to sudden loud signals can expose the print signal transferred from the loud signal); and
the wind of the tape (A-winds for cassettes with the magnetic layer facing outward have stronger |
https://en.wikipedia.org/wiki/Japanese%20naval%20codes | The vulnerability of Japanese naval codes and ciphers was crucial to the conduct of World War II, and had an important influence on foreign relations between Japan and the west in the years leading up to the war as well. Every Japanese code was eventually broken, and the intelligence gathered made possible such operations as the victorious American ambush of the Japanese Navy at Midway in 1942 (by breaking code JN-25b) and the shooting down of Japanese admiral Isoroku Yamamoto a year later in Operation Vengeance.
The Imperial Japanese Navy (IJN) used many codes and ciphers. All of these cryptosystems were known differently by different organizations; the names listed below are those given by Western cryptanalytic operations.
Red code
The Red Book code was an IJN code book system used in World War I and after. It was called "Red Book" because the American photographs made of it were bound in red covers. It should not be confused with the RED cipher used by the diplomatic corps.
This code consisted of two books. The first contained the code itself; the second contained an additive cipher which was applied to the codes before transmission, with the starting point for the latter being embedded in the transmitted message. A copy of the code book was obtained in a "black bag" operation on the luggage of a Japanese naval attaché in 1923; after three years of work Agnes Driscoll was able to break the additive portion of the code.
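A toy sketch of the two-book scheme just described: code groups from the first book are superenciphered with additives from the second, starting at the point embedded in the message. All groups and additives below are made up, and digit-by-digit non-carrying (mod 10) addition is assumed as the additive convention, as in later Japanese naval systems such as JN-25; the actual Red Book tables are not reproduced here.

```python
def add_groups(code_group, additive, decipher=False):
    """Digit-by-digit, carry-free modular addition of numeric code groups."""
    sign = -1 if decipher else 1
    return "".join(str((int(c) + sign * int(a)) % 10)
                   for c, a in zip(code_group, additive))

additives = ["49371", "80452", "12906"]   # made-up page from the additive book
message = ["30581", "77214"]              # made-up code groups from the code book
start = 1                                 # starting point embedded in the message

enciphered = [add_groups(g, additives[(start + i) % len(additives)])
              for i, g in enumerate(message)]
deciphered = [add_groups(g, additives[(start + i) % len(additives)], decipher=True)
              for i, g in enumerate(enciphered)]
assert deciphered == message              # superencipherment round-trips
```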
Knowledge of the Red Book code helped crack the similarly constructed Blue Book code.
Coral
A cipher machine developed for Japanese naval attaché ciphers, similar to JADE. It was not used extensively, but Vice Admiral Katsuo Abe, a Japanese representative to the Axis Tripartite Military Commission, passed considerable information about German deployments in CORAL, intelligence "essential for Allied military decision making in the European Theater."
Jade
A cipher machine used by the Imperial Japanese Navy from late 1942 to 1944 and similar t |
https://en.wikipedia.org/wiki/Mastercard | Mastercard Inc. (stylized as MasterCard from 1979 to 2016, mastercard from 2016 to 2019) is the second-largest payment-processing corporation worldwide. It offers a range of payment transaction processing and other related-payment services (such as travel-related payments and bookings). Its headquarters are in Purchase, New York. Throughout the world, its principal business is to process payments between the banks of merchants and the card-issuing banks or credit unions of the purchasers who use the Mastercard-brand debit, credit and prepaid cards to make purchases. Mastercard has been publicly traded since 2006.
Mastercard (originally Interbank then Master Charge) was created by an alliance of several banks and regional bankcard associations in response to the BankAmericard issued by Bank of America, which later became Visa, still its biggest competitor. Prior to its initial public offering, Mastercard Worldwide was a cooperative owned by the more than 25,000 financial institutions that issue its branded cards.
History
Although BankAmericard's debut in September 1958 was a notorious disaster, it began to turn a profit by May 1961. Bank of America deliberately kept this information secret and allowed then-widespread negative impressions to linger in order to ward off competition. This strategy was successful until 1966, when BankAmericard's profitability had become far too big to hide. From 1960 to 1966, there were only 10 new credit cards introduced in the United States, but from 1966 to 1968, approximately 440 credit cards were introduced by banks large and small throughout the country. These newcomers promptly banded together into regional bankcard associations.
One reason why most banks chose to join forces was that at the time, 16 states limited the ability of banks to operate through branch locations, while 15 states entirely prohibited branch banking and required unit banking. A unit bank can legally operate only at a single site and is thereby forced to |
https://en.wikipedia.org/wiki/Potassium%20in%20biology | Potassium is the main intracellular ion for all types of cells, while having a major role in maintenance of fluid and electrolyte balance. Potassium is necessary for the function of all living cells, and is thus present in all plant and animal tissues. It is found in especially high concentrations within plant cells, and in a mixed diet, it is most highly concentrated in fruits. The high concentration of potassium in plants, associated with comparatively very low amounts of sodium there, historically resulted in potassium first being isolated from the ashes of plants (potash), which in turn gave the element its modern name. The high concentration of potassium in plants means that heavy crop production rapidly depletes soils of potassium, and agricultural fertilizers consume 93% of the potassium chemical production of the modern world economy.
The functions of potassium and sodium in living organisms are quite different. Animals, in particular, employ sodium and potassium differentially to generate electrical potentials in animal cells, especially in nervous tissue. Potassium depletion in animals, including humans, results in various neurological dysfunctions. Characteristic concentrations of potassium in model organisms are: 30–300 mM in E. coli, 300 mM in budding yeast, 100 mM in mammalian cells and 4 mM in blood plasma.
Function in plants
The main role of potassium in plants is to provide the ionic environment for metabolic processes in the cytosol, and as such functions as a regulator of various processes including growth regulation. Plants require potassium ions (K+) for protein synthesis and for the opening and closing of stomata, which is regulated by proton pumps to make surrounding guard cells either turgid or flaccid. A deficiency of potassium ions can impair a plant's ability to maintain these processes. Potassium also functions in other physiological processes such as photosynthesis, protein synthesis, activation of some enzymes, phloem solute transport of |
https://en.wikipedia.org/wiki/Magnesium%20in%20biology | Magnesium is an essential element in biological systems. Magnesium occurs typically as the Mg2+ ion. It is an essential mineral nutrient (i.e., element) for life and is present in every cell type in every organism. For example, adenosine triphosphate (ATP), the main source of energy in cells, must bind to a magnesium ion in order to be biologically active. What is called ATP is often actually Mg-ATP. As such, magnesium plays a role in the stability of all polyphosphate compounds in the cells, including those associated with the synthesis of DNA and RNA.
Over 300 enzymes require the presence of magnesium ions for their catalytic action, including all enzymes utilizing or synthesizing ATP, or those that use other nucleotides to synthesize DNA and RNA.
In plants, magnesium is necessary for synthesis of chlorophyll and photosynthesis.
Function
A balance of magnesium is vital to the well-being of all organisms. Magnesium is a relatively abundant ion in Earth's crust and mantle and is highly bioavailable in the hydrosphere. This availability, in combination with a useful and very unusual chemistry, may have led to its utilization in evolution as an ion for signaling, enzyme activation, and catalysis. However, the unusual nature of ionic magnesium has also led to a major challenge in the use of the ion in biological systems. Biological membranes are impermeable to magnesium (and other ions), so transport proteins must facilitate the flow of magnesium, both into and out of cells and intracellular compartments.
Human health
Inadequate magnesium intake frequently causes muscle spasms, and has been associated with cardiovascular disease, diabetes, high blood pressure, anxiety disorders, migraines, osteoporosis, and cerebral infarction. Acute deficiency (see hypomagnesemia) is rare, and is more common as a drug side-effect (such as chronic alcohol or diuretic use) than from low food intake per se, but it can occur in people fed intravenously for extended periods of time.
|
https://en.wikipedia.org/wiki/Grosch%27s%20law | Grosch's law is the following observation of computer performance, made by Herb Grosch in 1953:
I believe that there is a fundamental rule, which I modestly call Grosch's law, giving added economy only as the square root of the increase in speed — that is, to do a calculation ten times as cheaply you must do it hundred times as fast.
This adage is more commonly stated as
Computer performance increases as the square of the cost. If computer A costs twice as much as computer B, you should expect computer A to be four times as fast as computer B.
Seymour Cray was quoted in Business Week (August 1963) expressing this very same thought:
Computers should obey a square law — when the price doubles, you should get at least four times as much speed.
The law can also be interpreted as meaning that computers exhibit economies of scale: the more expensive the computer, the better its price–performance ratio, which improves linearly with cost. The implication is that low-cost computers cannot compete in the market.
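Expressed as a formula, Grosch's square law says speed = k · cost² for some constant k (arbitrary in this sketch), which makes the economies-of-scale reading concrete: speed per unit cost grows linearly with cost.

```python
def speed(cost, k=1.0):
    """Grosch's square law: performance grows as the square of cost."""
    return k * cost**2

assert speed(2) / speed(1) == 4              # doubling cost quadruples speed
assert (speed(2) / 2) / (speed(1) / 1) == 2  # price-performance doubles with cost
```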
An analysis of rental cost/performance data for computers between 1951 and 1963 by Kenneth E. Knight found that Grosch's law held for commercial and scientific operations (a modern analysis of the same data found that Grosch's law only applied to commercial operations). In a separate study, Knight found that Grosch's law did not apply to computers between 1963 and 1967 (also confirmed by the aforementioned modern analysis).
Debates
Paul Strassmann asserted in 1997 that "it was never clear whether Grosch's Law was a reflection of how IBM priced its computers or whether it related to actual costs. It provided the rationale that a bigger computer is always better. The IBM sales force used Grosch's rationale to persuade organizations to acquire more computing capacity than they needed. Grosch's Law also became the justification for offering time-sharing services from big data centers as a substitute for distributed computing." Grosch himself has stated that |
https://en.wikipedia.org/wiki/Pico%20%28programming%20language%29 | Pico is a programming language developed at the Software Languages Lab at Vrije Universiteit Brussel. The language was created to introduce the essentials of programming to non-computer science students.
Pico can be seen as an effort to create a palatable and enjoyable language for people who do not want to invest great effort to attain the elegance and power of a full programming language. The designers achieved this by adapting Scheme's semantics.
While designing Pico, the Software Languages Lab was inspired by Abelson and Sussman's book "Structure and Interpretation of Computer Programs". Furthermore, they were influenced by the teaching of programming at the high-school and university levels.
The name Pico should be interpreted as 'small': the idea was to create a small language for educational purposes.
Language elements
Comments
Comments are surrounded by backquotes ("`").
Variables
Variables are dynamically typed; Pico uses static scope.
var: value
Functions
Functions are first-class objects in Pico. They can be assigned to variables. For example, a function with two parameters param1 and param2 can be defined as:
func(param1, param2): ...
Functions can be called with the following syntax:
func(arg1, arg2)
Operators
Operators can be used as prefix or infix in Pico:
+(5, 2)
5 + 2
Data types
Pico has the following types: strings, integers, reals, and tables.
It does not have a native character type, so users must resort to strings of size 1.
Tables are compound data structures that may contain any of the regular data types.
Boolean types are represented by functions (as in lambda calculus).
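Representing booleans as functions echoes the Church encoding from the lambda calculus: a boolean is a function that selects one of two alternatives. A rough sketch of the idea in Python (the names true, false, and if_ are illustrative, not Pico's actual definitions):

```python
# Church-style booleans: a boolean is a two-argument selector.
# true picks the first argument, false picks the second.
true = lambda a, b: a
false = lambda a, b: b

def if_(condition, then, else_):
    """Conditional selection: apply the boolean to both branches."""
    return condition(then, else_)

print(if_(true, "yes", "no"))   # prints "yes"
print(if_(false, "yes", "no"))  # prints "no"
```

Note that in a strict language like Python both branches are evaluated before the selection happens; a faithful encoding would wrap the branches in thunks to delay evaluation.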
Control structures
Conditional evaluation
Only the usual if statement is included:
if(condition, then, else)
Code snippets
display('Hello World', eoln)
max(a, b):
if(a < b, b, a)
`http://www.paulgraham.com/accgen.html`
foo(n): fun(i): n := n+i
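The last snippet is Paul Graham's accumulator-generator exercise (linked above): foo(n) returns a function that adds its argument to a running total held in n. A Python rendering of the same idea, for comparison:

```python
def foo(n):
    """Return an accumulator over n
    (Pico equivalent: foo(n): fun(i): n := n + i)."""
    def fun(i):
        nonlocal n  # mutate the enclosing n, like Pico's := assignment
        n += i
        return n
    return fun

acc = foo(100)
print(acc(10))  # → 110
print(acc(1))   # → 111
```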
Implementations
Mac OS, Mac OS X
MacPico
XPico
Windows
WinPico (this version is buggy)
WinPico stable
Linux
TextPico for Linux
Cross-platform
sPico for Dr |
https://en.wikipedia.org/wiki/Equivalence%20of%20categories | In category theory, a branch of abstract mathematics, an equivalence of categories is a relation between two categories that establishes that these categories are "essentially the same". There are numerous examples of categorical equivalences from many areas of mathematics. Establishing an equivalence involves demonstrating strong similarities between the mathematical structures concerned. In some cases, these structures may appear to be unrelated at a superficial or intuitive level, making the notion fairly powerful: it creates the opportunity to "translate" theorems between different kinds of mathematical structures, knowing that the essential meaning of those theorems is preserved under the translation.
If a category is equivalent to the opposite (or dual) of another category, then one speaks of a duality of categories, and says that the two categories are dually equivalent.
An equivalence of categories consists of a functor between the involved categories, which is required to have an "inverse" functor. However, in contrast to the situation common for isomorphisms in an algebraic setting, the composite of the functor and its "inverse" is not necessarily the identity mapping. Instead it is sufficient that each object be naturally isomorphic to its image under this composition. Thus one may describe the functors as being "inverse up to isomorphism". There is indeed a concept of isomorphism of categories where a strict form of inverse functor is required, but this is of much less practical use than the equivalence concept.
Definition
Formally, given two categories C and D, an equivalence of categories consists of a functor F : C → D, a functor G : D → C, and two natural isomorphisms ε: FG→ID and η : IC→GF. Here FG: D→D and GF: C→C denote the respective compositions of F and G, and IC: C→C and ID: D→D denote the identity functors on C and D, assigning each object and morphism to itself. If F and G are contravariant functors one speaks of a duality of categories in |
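A standard concrete illustration (not drawn from the excerpt above) is the equivalence between the category Mat_k, whose objects are the natural numbers and whose morphisms n → m are the m×n matrices over a field k, and the category FinVect_k of finite-dimensional k-vector spaces. One direction of the equivalence is the functor

```latex
F : \mathbf{Mat}_k \to \mathbf{FinVect}_k, \qquad
F(n) = k^n, \qquad
F(A : n \to m) = (v \mapsto Av).
```

F is not an isomorphism of categories, since an n-dimensional space is merely isomorphic to k^n rather than equal to it; but choosing a basis for each space yields a functor G in the other direction with FG and GF naturally isomorphic to the identity functors, which is exactly the "inverse up to isomorphism" condition in the definition.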