source | text
|---|---|
https://en.wikipedia.org/wiki/Primitive%20root%20modulo%20n | In modular arithmetic, a number g is a primitive root modulo n if every number coprime to n is congruent to a power of g modulo n. That is, g is a primitive root modulo n if for every integer a coprime to n, there is some integer k for which g^k ≡ a (mod n). Such a value k is called the index or discrete logarithm of a to the base g modulo n. So g is a primitive root modulo n if and only if g is a generator of the multiplicative group of integers modulo n.
Gauss defined primitive roots in Article 57 of the Disquisitiones Arithmeticae (1801), where he credited Euler with coining the term. In Article 56 he stated that Lambert and Euler knew of them, but he was the first to rigorously demonstrate that primitive roots exist for every prime. In fact, the Disquisitiones contains two proofs: the one in Article 54 is a nonconstructive existence proof, while the proof in Article 55 is constructive.
Elementary example
The number 3 is a primitive root modulo 7 because 3^1 = 3, 3^2 = 9 ≡ 2, 3^3 = 27 ≡ 6, 3^4 = 81 ≡ 4, 3^5 = 243 ≡ 5, and 3^6 = 729 ≡ 1 (mod 7).
Here we see that the period of 3 modulo 7 is 6. The remainders in the period, which are 3, 2, 6, 4, 5, 1, form a rearrangement of all nonzero remainders modulo 7, implying that 3 is indeed a primitive root modulo 7. This derives from the fact that a sequence (g^k modulo n) always repeats after some value of k, since modulo n produces a finite number of values. If g is a primitive root modulo n and n is prime, then the period of repetition is n − 1. Permutations created in this way (and their circular shifts) have been shown to be Costas arrays.
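As a quick illustration, one can verify primitivity by brute force, checking that the successive powers of g hit every nonzero residue. A minimal Python sketch (the helper name `is_primitive_root` is invented here, not from the article):

```python
# Brute-force primitivity test: g is a primitive root mod the prime p exactly
# when its successive powers run through all p - 1 nonzero residues.
def is_primitive_root(g, p):
    seen, x = set(), 1
    for _ in range(p - 1):
        x = x * g % p
        seen.add(x)
    return len(seen) == p - 1

print([pow(3, k, 7) for k in range(1, 7)])  # [3, 2, 6, 4, 5, 1]
print(is_primitive_root(3, 7))              # True
print(is_primitive_root(2, 7))              # False: 2, 4, 1 repeat with period 3
```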
Definition
If n is a positive integer, the integers from 1 to n that are coprime to n (or equivalently, the congruence classes coprime to n) form a group, with multiplication modulo n as the operation; it is denoted by (ℤ/nℤ)^×, and is called the group of units modulo n, or the group of primitive classes modulo n. As explained in the article multiplicative group of integers modulo n, this multiplicative group is cyclic if and only if n is equal to 2, 4, p^k, or 2p^k where p^k is a power of an odd prime number. When (and only when) |
https://en.wikipedia.org/wiki/Hall%27s%20marriage%20theorem | In mathematics, Hall's marriage theorem, proved by Philip Hall (1935), is a theorem with two equivalent formulations. In each case, the theorem gives a necessary and sufficient condition for an object to exist:
The combinatorial formulation answers whether a finite collection of sets has a transversal—that is, whether an element can be chosen from each set without repetition. Hall's condition is that for any group of sets from the collection, the number of unique elements they contain is at least as large as the number of sets in the group.
The graph theoretic formulation answers whether a finite bipartite graph has a perfect matching—that is, a way to match each vertex from one group uniquely to an adjacent vertex from the other group. Hall's condition is that any subset of vertices from one group has a neighbourhood of equal or greater size.
Combinatorial formulation
Statement
Let F be a finite family of sets (note that although F is not itself allowed to be infinite, the sets in it may be so, and F may contain the same set multiple times). Let X be the union of all the sets in F, the set of elements that belong to at least one of its sets. A transversal for F is a subset of X that can be obtained by choosing a distinct element from each set in F. This concept can be formalized by defining a transversal to be the image of an injective function f : F → X such that f(S) ∈ S for each S ∈ F. An alternative term for transversal is system of distinct representatives.
The collection F satisfies the marriage condition when each subfamily of F contains at least as many distinct members as its number of sets. That is, for every subfamily G ⊆ F, |⋃_{S∈G} S| ≥ |G|.
If a transversal exists then the marriage condition must be true: the injective function used to define the transversal maps each subfamily G to a subset of its union of size |G|, so the whole union must be at least as large. Hall's theorem states that the converse is also true:
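The condition can be checked directly, if exponentially, by enumerating subfamilies. A minimal Python sketch (the helper name and the sample families are hypothetical, chosen only for illustration):

```python
from itertools import combinations

def marriage_condition(sets):
    """Hall's condition: every subfamily's union has at least as many elements as sets."""
    for r in range(1, len(sets) + 1):
        for group in combinations(sets, r):
            if len(set().union(*group)) < len(group):
                return False
    return True

print(marriage_condition([{1, 2, 3}, {1, 4}, {4, 5}]))   # True: transversal (1, 4, 5)
print(marriage_condition([{1}, {2}, {1, 2}, {1, 2}]))    # False: 4 sets, only 2 elements
```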
Examples
Example 1
Consider the family with and The transversal could be generated by the function that maps to , to , and to , |
https://en.wikipedia.org/wiki/Frequency%20allocation | Frequency allocation (or spectrum allocation or spectrum management) is the allocation and regulation of the electromagnetic spectrum into radio frequency bands, normally done by governments in most countries. Because radio propagation does not stop at national boundaries, governments have sought to harmonise the allocation of RF bands and their standardization.
ITU definition
The International Telecommunication Union defines frequency allocation as the allocation of "a given frequency band for the purpose of its use by one or more terrestrial or space radiocommunication services or the radio astronomy service under specified conditions".
Frequency allocation is also a special term, used in national frequency administration. Other terms are:
Bodies
Several bodies set standards for frequency allocation, including:
International Telecommunication Union (ITU)
European Conference of Postal and Telecommunications Administrations (CEPT)
Inter-American Telecommunication Commission (CITEL)
To improve harmonisation in spectrum utilisation, most service allocations are incorporated in national Tables of Frequency Allocations and Utilisations within the responsibility of the appropriate national administration. Allocations are:
primary
secondary
exclusive or shared utilization, within the responsibility of national administrations.
Allocations of military usage will be in accordance with the ITU Radio Regulations. In NATO countries, military mobile utilizations are made in accordance with the NATO Joint Civil/Military Frequency Agreement (NJFA).
Examples of frequency allocations
Some of the bands listed (e.g., amateur 1.8–29.7 MHz) have gaps and are not continuous allocations.
BCB is an abbreviation for broadcast band, for commercial radio news and music broadcasts.
See also
Spectrum management
Amateur radio frequency allocations
References
External links
International Telecommunication Union (ITU)
ITU Radio Regulations - Volume 1 (Article 5) international table of f |
https://en.wikipedia.org/wiki/Han%20unification | Han unification is an effort by the authors of Unicode and the Universal Character Set to map multiple character sets of the Han characters of the so-called CJK languages into a single set of unified characters. Han characters are a feature shared in common by written Chinese (hanzi), Japanese (kanji), Korean (hanja) and Vietnamese (chữ Hán).
Modern Chinese, Japanese and Korean typefaces typically use regional or historical variants of a given Han character. In the formulation of Unicode, an attempt was made to unify these variants by considering them as allographs (different glyphs representing the same "grapheme", or orthographic unit); hence, "Han unification", with the resulting character repertoire sometimes contracted to Unihan.
Nevertheless, many characters have regional variants assigned to different code points, such as Traditional 個 (U+500B) versus Simplified 个 (U+4E2A).
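A small Python sketch of the distinction, using only the standard library and the two code points cited above:

```python
import unicodedata

# Regional variants with separate code points under Han unification.
for ch in ("\u500B", "\u4E2A"):   # 個 (Traditional), 个 (Simplified)
    print(f"U+{ord(ch):04X} {ch} {unicodedata.name(ch)}")
# U+500B 個 CJK UNIFIED IDEOGRAPH-500B
# U+4E2A 个 CJK UNIFIED IDEOGRAPH-4E2A
```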
Rationale and controversy
The Unicode Standard details the principles of Han unification.
The Ideographic Research Group (IRG), made up of experts from the Chinese-speaking countries, North and South Korea, Japan, Vietnam, and other countries, is responsible for the process.
One rationale was the desire to limit the size of the full Unicode character set, where CJK characters as represented by discrete ideograms may approach or exceed 100,000 characters. Version 1 of Unicode was designed to fit into 16 bits and only 20,940 characters (32%) out of the possible 65,536 were reserved for these CJK Unified Ideographs. Unicode was later extended to 21 bits allowing many more CJK characters (97,680 are assigned, with room for more).
An article hosted by IBM attempts to illustrate part of the motivation for Han unification:
In fact, the three ideographs for "one" (一, 壹, and 壱) are encoded separately in Unicode, as they are not considered national variants. The first is the common form in all three countries, while the second and third are used on financial instruments to prevent tampering (they may be |
https://en.wikipedia.org/wiki/Antenna%20%28radio%29 | In radio engineering, an antenna (American English) or aerial (British English) is the interface between radio waves propagating through space and electric currents moving in metal conductors, used with a transmitter or receiver. In transmission, a radio transmitter supplies an electric current to the antenna's terminals, and the antenna radiates the energy from the current as electromagnetic waves (radio waves). In reception, an antenna intercepts some of the power of a radio wave in order to produce an electric current at its terminals, that is applied to a receiver to be amplified. Antennas are essential components of all radio equipment.
An antenna is an array of conductors (elements), electrically connected to the receiver or transmitter. Antennas can be designed to transmit and receive radio waves in all horizontal directions equally (omnidirectional antennas), or preferentially in a particular direction (directional, or high-gain, or "beam" antennas). An antenna may include components not connected to the transmitter, such as parabolic reflectors, horns, or parasitic elements, which serve to direct the radio waves into a beam or other desired radiation pattern. Strong directivity and good efficiency when transmitting are hard to achieve with antennas with dimensions that are much smaller than a half wavelength.
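To get a feel for the half-wavelength scale mentioned above, a short Python sketch (the ~95% end-effect trim is a common rule of thumb, not a figure from this article):

```python
C = 299_792_458.0  # speed of light in m/s

# Half-wavelength for a few frequencies, using lambda = C / f.
for f_mhz in (1.0, 100.0, 2400.0):
    half_wave = C / (f_mhz * 1e6) / 2
    print(f"{f_mhz:>7.1f} MHz: half-wave ≈ {half_wave:.3f} m "
          f"(practical dipole ≈ {0.95 * half_wave:.3f} m)")
```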
The first antennas were built in 1888 by German physicist Heinrich Hertz in his pioneering experiments to prove the existence of waves predicted by the electromagnetic theory of James Clerk Maxwell. Hertz placed dipole antennas at the focal point of parabolic reflectors for both transmitting and receiving. Starting in 1895, Guglielmo Marconi began development of antennas practical for long-distance, wireless telegraphy, for which he received a Nobel Prize.
Terminology
The words antenna and aerial are used interchangeably. Occasionally the equivalent term "aerial" is used to specifically mean an elevated horizontal wire antenna. The origin of the word an |
https://en.wikipedia.org/wiki/State%20diagram | A state diagram is a type of diagram used in computer science and related fields to describe the behavior of systems. State diagrams require that the system described is composed of a finite number of states; sometimes, this is indeed the case, while at other times this is a reasonable abstraction. Many forms of state diagrams exist, which differ slightly and have different semantics.
Overview
State diagrams are used to give an abstract description of the behavior of a system. This behavior is analyzed and represented by a series of events that can occur in one or more possible states. Hereby "each diagram usually represents objects of a single class and track the different states of its objects through the system".
State diagrams can be used to graphically represent finite-state machines (also called finite automata). This was introduced by Claude Shannon and Warren Weaver in their 1949 book The Mathematical Theory of Communication. Another source is Taylor Booth in his 1967 book Sequential Machines and Automata Theory. Another possible representation is the state-transition table.
Directed graph
A classic form of state diagram for a finite automaton (FA) is a directed graph with the following elements (Q, Σ, Z, δ, q0, F); a short code sketch after the list illustrates δ and ω:
Vertices Q: a finite set of states, normally represented by circles and labeled with unique designator symbols or words written inside them
Input symbols Σ: a finite collection of input symbols or designators
Output symbols Z: a finite collection of output symbols or designators
The output function ω represents the mapping of ordered pairs of input symbols and states onto output symbols, denoted mathematically as ω : Σ × Q → Z.
Edges δ: represent transitions from one state to another as caused by the input (identified by their symbols drawn on the edges). An edge is usually drawn as an arrow directed from the present state to the next state. This mapping describes the state transition that is to occur on input of a particular symbol. This |
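As promised above, a minimal Python sketch of these elements (the machine itself, a parity tracker, is hypothetical and chosen only to exercise δ and ω):

```python
# delta: (state, input) -> next state; omega: (state, input) -> output
# (the article writes omega as a map from Σ × Q to Z).
delta = {("even", "1"): "odd", ("odd", "1"): "even",
         ("even", "0"): "even", ("odd", "0"): "odd"}
omega = {key: ("flip" if key[1] == "1" else "hold") for key in delta}

def run(inputs, state="even"):
    outputs = []
    for symbol in inputs:
        outputs.append(omega[(state, symbol)])
        state = delta[(state, symbol)]
    return state, outputs

print(run("1101"))  # ('odd', ['flip', 'flip', 'hold', 'flip'])
```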
https://en.wikipedia.org/wiki/Magnetic%20susceptibility | In electromagnetism, the magnetic susceptibility (denoted χ, chi) is a measure of how much a material will become magnetized in an applied magnetic field. It is the ratio of magnetization M (magnetic moment per unit volume) to the applied magnetizing field intensity H. This allows a simple classification, into two categories, of most materials' responses to an applied magnetic field: an alignment with the magnetic field, χ > 0, called paramagnetism, or an alignment against the field, χ < 0, called diamagnetism.
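Written out explicitly (the standard SI constitutive relations implied by this definition, stated here for reference rather than quoted from the article):

$$\mathbf{M} = \chi\,\mathbf{H}, \qquad \mathbf{B} = \mu_0(\mathbf{H} + \mathbf{M}) = \mu_0(1+\chi)\,\mathbf{H}$$

so that χ > 0 corresponds to paramagnetism and χ < 0 to diamagnetism.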
Magnetic susceptibility indicates whether a material is attracted into or repelled out of a magnetic field. Paramagnetic materials align with the applied field and are attracted to regions of greater magnetic field. Diamagnetic materials are anti-aligned and are pushed away, toward regions of lower magnetic fields. On top of the applied field, the magnetization of the material adds its own magnetic field, causing the field lines to concentrate in paramagnetism, or be excluded in diamagnetism. Quantitative measures of the magnetic susceptibility also provide insights into the structure of materials, providing insight into bonding and energy levels. Furthermore, it is widely used in geology for paleomagnetic studies and structural geology.
The magnetizability of materials comes from the atomic-level magnetic properties of the particles of which they are made. Usually, this is dominated by the magnetic moments of electrons. Electrons are present in all materials, but without any external magnetic field, the magnetic moments of the electrons are usually either paired up or random so that the overall magnetism is zero (the exception to this usual case is ferromagnetism). The fundamental reasons why the magnetic moments of the electrons line up or do not are very complex and cannot be explained by classical physics. However, a useful simplification is to measure the magnetic susceptibility of a material and apply the macroscopic form of Maxwell's equations. This allows clas |
https://en.wikipedia.org/wiki/Cisco%20IOS | The Internetwork Operating System (IOS) is a family of proprietary network operating systems used on several router and network switch models manufactured by Cisco Systems. The system is a package of routing, switching, internetworking, and telecommunications functions integrated into a multitasking operating system. Although the IOS code base includes a cooperative multitasking kernel, most IOS features have been ported to other kernels, such as Linux and QNX, for use in Cisco products.
Not all Cisco networking products run IOS. Exceptions include some Cisco Catalyst switches, which run IOS XE, and Cisco ASR routers, which run either IOS XE or IOS XR; both are Linux-based operating systems. For data center environments, Cisco Nexus switches (Ethernet) and Cisco MDS switches (Fibre Channel) both run Cisco NX-OS, also a Linux-based operating system.
History
The IOS network operating system was created from code written by William Yeager at Stanford University, which was developed in the 1980s for routers with 256 kB of memory and low CPU processing power. Through modular extensions, IOS has been adapted to increasing hardware capabilities and new networking protocols. When IOS was developed, Cisco Systems' main product line was routers. The company acquired a number of young companies that focused on network switches, such as Kalpana, the inventor of the first Ethernet switch, and as a result Cisco switches did not initially run IOS. Prior to IOS, the Cisco Catalyst series ran CatOS.
Command-line interface
The IOS command-line interface (CLI) provides a fixed set of multiple-word commands. The set available is determined by the "mode" and the privilege level of the current user. "Global configuration mode" provides commands to change the system's configuration, and "interface configuration mode" provides commands to change the configuration of a specific interface. All commands are assigned a privilege level, from 0 to 15, and can only be accessed by users wit |
https://en.wikipedia.org/wiki/Software%20metric | In software engineering and development, a software metric is a standard of measure of a degree to which a software system or process possesses some property. Even if a metric is not a measurement (metrics are functions, while measurements are the numbers obtained by the application of metrics), often the two terms are used as synonyms. Since quantitative measurements are essential in all sciences, there is a continuous effort by computer science practitioners and theoreticians to bring similar approaches to software development. The goal is obtaining objective, reproducible and quantifiable measurements, which may have numerous valuable applications in schedule and budget planning, cost estimation, quality assurance, testing, software debugging, software performance optimization, and optimal personnel task assignments.
Common software measurements
Common software measurements include the following (a toy calculation of two of them follows the list):
ABC Software Metric
Balanced scorecard
Bugs per line of code
Code coverage
Cohesion
Comment density
Connascent software components
Constructive Cost Model
Coupling
Cyclomatic complexity (McCabe's complexity)
Cyclomatic complexity density
Defect density - defects found in a component
Defect potential - expected number of defects in a particular component
Defect removal rate
DSQI (design structure quality index)
Function Points and Automated Function Points, an Object Management Group standard
Halstead Complexity
Instruction path length
Maintainability index
Source lines of code - number of lines of code
Program execution time
Program load time
Program size (binary)
Weighted Micro Function Points
Cycle time (software)
First pass yield
Corrective Commit Probability
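As promised above, a toy Python sketch computing two of the listed measurements, source lines of code and comment density (`simple_metrics` is a name invented here, and the definitions are deliberately naive; a real tool would parse the language properly):

```python
# SLOC counted as non-blank lines; comments as lines starting with "#".
def simple_metrics(source: str) -> dict:
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    comments = sum(1 for ln in lines if ln.startswith("#"))
    return {"sloc": len(lines), "comment_density": comments / len(lines)}

sample = "# add two numbers\ndef add(a, b):\n    return a + b\n"
print(simple_metrics(sample))  # {'sloc': 3, 'comment_density': 0.333...}
```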
Limitations
As software development is a complex process, with high variance on both methodologies and objectives, it is difficult to define or measure software qualities and quantities and to determine a valid and concurrent measurement metric, especially when making such a prediction prior to t |
https://en.wikipedia.org/wiki/Lenz%27s%20law | Lenz's law states that the direction of the electric current induced in a conductor by a changing magnetic field is such that the magnetic field created by the induced current opposes changes in the initial magnetic field. It is named after physicist Emil Lenz, who formulated it in 1834.
It is a qualitative law that specifies the direction of induced current, but states nothing about its magnitude. Lenz's law predicts the direction of many effects in electromagnetism, such as the direction of voltage induced in an inductor or wire loop by a changing current, or the drag force of eddy currents exerted on moving objects in a magnetic field.
Lenz's law may be seen as analogous to Newton's third law in classical mechanics and Le Chatelier's principle in chemistry.
Definition
Lenz's law states that:
The current induced in a circuit due to a change in a magnetic field is directed to oppose the change in flux and to exert a mechanical force which opposes the motion.
Lenz's law is contained in the rigorous treatment of Faraday's law of induction (the magnitude of EMF induced in a coil is proportional to the rate of change of the magnetic field), where it finds expression by the negative sign: ℰ = −dΦ_B/dt,
which indicates that the induced electromotive force and the rate of change in magnetic flux have opposite signs.
This means that the direction of the back EMF of an induced field opposes the changing current that is its cause. D.J. Griffiths summarized it as follows: Nature abhors a change in flux.
If a change in the magnetic field of current i1 induces another electric current, i2, the direction of i2 is opposite that of the change in i1. If these currents are in two coaxial circular conductors ℓ1 and ℓ2 respectively, and both are initially 0, then the currents i1 and i2 must counter-rotate. The opposing currents will repel each other as a result.
Example
Magnetic fields from strong magnets can create counter-rotating currents in a copper or aluminium pipe. This is shown |
https://en.wikipedia.org/wiki/APT%20%28software%29 | Advanced Package Tool, or APT, is a free-software user interface that works with core libraries to handle the installation and removal of software on Debian and Debian-based Linux distributions. APT simplifies the process of managing software on Unix-like computer systems by automating the retrieval, configuration and installation of software packages, either from precompiled files or by compiling source code.
Usage
APT is a collection of tools distributed in a package named apt. A significant part of APT is defined in a C++ library of functions; APT also includes command-line programs for dealing with packages, which use the library. Three such programs are apt, apt-get and apt-cache. They are commonly used in examples because they are simple and ubiquitous. The apt package is of "important" priority in all current Debian releases, and is therefore included in a default Debian installation. APT can be considered a front-end to dpkg, friendlier than the older dselect front-end. While dpkg performs actions on individual packages, APT manages relations (especially dependencies) between them, as well as sourcing and management of higher-level versioning decisions (release tracking and version pinning).
APT is often hailed as one of Debian's best features, which Debian developers attribute to the strict quality controls in Debian's policy.
A major feature of APT is the way it calls dpkg — it does topological sorting of the list of packages to be installed or removed and calls dpkg in the best possible sequence. In some cases, it utilizes the --force options of dpkg. However, it only does this when it is unable to calculate how to avoid the reason dpkg requires the action to be forced.
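A minimal sketch of the ordering idea in Python (the package names and dependencies are hypothetical; APT's real solver also handles versions, conflicts, and the forced actions described above):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Map each package to the packages it depends on; a valid install order
# configures every dependency before its dependents.
deps = {
    "myapp":  {"libfoo", "libbar"},
    "libfoo": {"libc6"},
    "libbar": {"libc6"},
    "libc6":  set(),
}
print(list(TopologicalSorter(deps).static_order()))
# e.g. ['libc6', 'libfoo', 'libbar', 'myapp']
```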
Installing software
The user indicates one or more packages to be installed. Each package name is phrased as just the name portion of the package, not a fully qualified filename (for instance, in a Debian system, libc6 would be the argument provided, not libc6_1.9.6-2.deb). Notably, A |
https://en.wikipedia.org/wiki/MCI%20Communications | MCI Communications Corporation (originally Microwave Communications, Inc.) was a telecommunications company headquartered in Washington, D.C. that was at one point the second-largest long-distance provider in the United States.
MCI was instrumental in legal and regulatory changes that led to the breakup of the Bell System and introduced competition in the telephone industry. Its MCI Mail, launched in 1983, was one of the first email services, and its MCI.net was an integral part of the Internet backbone.
The company was acquired by WorldCom (later called MCI Inc.) in 1998.
History
Founding
MCI was founded as Microwave Communications, Inc. on October 3, 1963, with John D. Goeken being named the company's first president. The initial business plan was for the company to build a series of microwave radio relay stations between Chicago, Illinois, and St. Louis, Missouri. The relay stations would then be used to interface with limited-range two-way radios used by truckers along U.S. Route 66 or by barges on the Illinois Waterway. The long-distance communication service would then be marketed to shipping companies that were too small to build their own private relay systems. In addition to the radio relay services, MCI soon made plans to offer voice, computer information, and data communication services for business customers unable to afford AT&T's TELPAK service.
Hearings on the company's initial license application between February 13, 1967, and April 19, 1967, resulted in a recommendation of approval by the FCC.
On June 26, 1968, the FCC ruled in the Carterfone case that AT&T's rules prohibiting private two-way radio connections to a telephone network were illegal. AT&T quickly sought a reversal of the ruling, and when the FCC denied the request, AT&T brought suit against the FCC in the United States courts of appeals. The FCC's decision was upheld, thus creating a new industry: privately (non-Bell) manufactured devices could be connected to the telephone network |
https://en.wikipedia.org/wiki/Telegraph%20key | A telegraph key or Morse key is a specialized electrical switch used by a trained operator to transmit text messages in Morse code in a telegraphy system. Keys are used in all forms of electrical telegraph systems, including landline (also called wire) telegraphy and radio (also called wireless) telegraphy. An operator uses the telegraph key to send electrical pulses (or in the case of modern CW, unmodulated radio waves) of two different lengths: short pulses, called dots or dits, and longer pulses, called dashes or dahs. These pulses encode the letters and other characters that spell out the message.
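A tiny Python sketch of the dot/dash encoding just described (a partial table chosen for illustration; the full International Morse alphabet is longer):

```python
# Dits written as "." and dahs as "-"; letters separated by spaces.
MORSE = {"A": ".-", "E": ".", "N": "-.", "O": "---", "S": "...", "T": "-"}

def encode(text: str) -> str:
    return " ".join(MORSE[ch] for ch in text.upper())

print(encode("sos"))  # ... --- ...
```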
Types
Since its original inception, the telegraph key's design has developed such that there are now multiple types of keys.
Straight keys
A straight key is the common telegraph key as seen in various movies. It is a simple bar with a knob on top and an electrical contact underneath. When the bar is pressed down against spring tension, it makes a closed electric circuit. Traditionally, American telegraph keys had flat topped knobs and narrow bars (frequently curved), while British telegraph keys had ball shaped knobs and thick bars. This appears to be purely a matter of culture and training, but the users of each are tremendously partisan.
Straight keys have been made in numerous variations for over 150 years and in numerous countries. They are the subject of an avid community of key collectors. The straight keys used in wire telegraphy also had a shorting bar that closed the electrical circuit when the operator was not actively sending messages. This was to complete the electrical path to the next station so that its sounder would operate, as in the operator receiving a message from the next town. Although occasionally included in later keys for reasons of tradition, the shorting bar is unnecessary for radio telegraphy, except as a convenience when tuning the transmitter.
The straight key is simple and reliable, but the rapid pumping action needed to send a str |
https://en.wikipedia.org/wiki/Lamarckism | Lamarckism, also known as Lamarckian inheritance or neo-Lamarckism, is the notion that an organism can pass on to its offspring physical characteristics that the parent organism acquired through use or disuse during its lifetime. It is also called the inheritance of acquired characteristics or more recently soft inheritance. The idea is named after the French zoologist Jean-Baptiste Lamarck (1744–1829), who incorporated the classical era theory of soft inheritance into his theory of evolution as a supplement to his concept of orthogenesis, a drive towards complexity.
Introductory textbooks contrast Lamarckism with Charles Darwin's theory of evolution by natural selection. However, Darwin's book On the Origin of Species gave credence to the idea of heritable effects of use and disuse, as Lamarck had done, and his own concept of pangenesis similarly implied soft inheritance.
Many researchers from the 1860s onwards attempted to find evidence for Lamarckian inheritance, but these have all been explained away, either by other mechanisms such as genetic contamination or as fraud. August Weismann's experiment, considered definitive in its time, is now considered to have failed to disprove Lamarckism, as it did not address use and disuse. Later, Mendelian genetics supplanted the notion of inheritance of acquired traits, eventually leading to the development of the modern synthesis, and the general abandonment of Lamarckism in biology. Despite this, interest in Lamarckism has continued.
Since ca. 2000, new experimental results in the fields of epigenetics, genetics, and somatic hypermutation have demonstrated the possibility of transgenerational epigenetic inheritance of traits acquired by the previous generation, establishing a limited validity for Lamarckism. The inheritance of the hologenome, consisting of the genomes of all an organism's symbiotic microbes as well as its own genome, is also somewhat Lamarckian in effect, though entirely Darwinian in its mechanisms.
Early history
|
https://en.wikipedia.org/wiki/CTAN | CTAN (an acronym for "Comprehensive TeX Archive Network") is the authoritative place where TeX related material and software can be found for download. Repositories for other projects, such as the MiKTeX distribution of TeX, constantly mirror most of CTAN.
History
Before CTAN there were a number of people who made some TeX materials available for public download, but there was no systematic collection. At a podium discussion that Joachim Schrod organized at the 1991 EuroTeX conference, the idea arose to bring together the separate collections. (Joachim was interested in this topic because he had been active in the TeX community since 1983 and ran one of the largest ftp servers in Germany at that time.)
CTAN was built in 1992, by Rainer Schöpf and Joachim Schrod in Germany, Sebastian Rahtz in the UK, and George Greenwade in the U.S. (George came up with the name). Today, there are still only four people who maintain the archives and the TeX catalogue updates: Erik Braun, Ina Dau, Manfred Lotz, and Petra Ruebe-Pugliese. The site structure was put together at the start of 1992 – Sebastian did the main work – and synchronized at the start of 1993. The TeX Users Group provided a framework, a Technical Working Group, for this task's organization. CTAN was officially announced at the EuroTeX conference at Aston University, 1993. The Web server itself is maintained by Gerd Neugebauer.
The English site has been stable since the beginning, but both the American and the German sites have moved thrice. The American site was first at Sam Houston State University under George Greenwade, in 1995 it moved to UMass Boston where it was run by Karl Berry. In 1999 it moved to Saint Michael's College in Colchester, Vermont. There it was announced to go off-line in the end of January 2011. Since January 2013, a mirror has been hosted by the University of Utah (no upload node). The German site was first at the University of Heidelberg, operated by Rainer; in 1999 it moved to the University o |
https://en.wikipedia.org/wiki/Sound%20Blaster | Sound Blaster is a family of sound cards and audio peripherals designed by Singaporean technology company Creative Technology (known in the US as Creative Labs). The first Sound Blaster card was introduced in 1989.
Sound Blaster sound cards were the de facto standard for consumer audio on the IBM PC compatible system platform, until the widespread transition to Microsoft Windows 95, which standardized the programming interface at application level (eliminating the importance of backward compatibility with Sound Blaster), and the evolution in PC design led to onboard audio electronics, which commoditized PC audio functionality. By 1995, Sound Blaster cards had sold over 15 million units worldwide and accounted for seven out of ten sound card sales.
Creative Music System and Game Blaster
Creative Music System
The history of Creative sound cards started with the release of the Creative Music System ("C/MS") CT-1300 board in August 1987. It contained two Philips SAA1099 integrated circuits, which, together, provided 12 channels of square-wave "bee-in-a-box" stereo sound, four channels of which could be used for noise.
These ICs were featured earlier in various popular electronics magazines around the world. For many years Creative tended to use off-the-shelf components and manufacturers' reference designs for their early products. The various integrated circuits had white or black paper stickers fully covering their top thus hiding their identity. On the C/MS board in particular, the Philips chips had white pieces of paper with a fantasy CMS-301 inscription on them: real Creative parts usually had consistent CT number references.
Surprisingly, the board also contained a large 40-pin DIP integrated circuit, bearing a CT 1302A CTPL 8708 (Creative Technology Programmable Logic) serigraphed inscription and looking exactly like the DSP of the later Sound Blaster. This chip allows software to automatically detect the card by certain register reads and writes.
Game Blaste |
https://en.wikipedia.org/wiki/Large%20numbers | Large numbers are numbers significantly larger than those typically used in everyday life (for instance in simple counting or in monetary transactions), appearing frequently in fields such as mathematics, cosmology, cryptography, and statistical mechanics. They are typically large positive integers, or more generally, large positive real numbers, but may also be other numbers in other contexts.
Googology is the study of nomenclature and properties of large numbers.
In the everyday world
Scientific notation was created to handle the wide range of values that occur in scientific study. 1.0 × 10^9, for example, means one billion, or a 1 followed by nine zeros: 1 000 000 000. The reciprocal, 1.0 × 10^−9, means one billionth, or 0.000 000 001. Writing 10^9 instead of nine zeros saves readers the effort and hazard of counting a long series of zeros to see how large the number is. In addition to scientific (powers of 10) notation, the following examples include (short scale) systematic nomenclature of large numbers.
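In code, the same convention appears as e-notation; a short Python illustration:

```python
big, small = 1.0e9, 1.0e-9     # one billion, one billionth
print(f"{big:.1e}")            # 1.0e+09
print(f"{small:.1e}")          # 1.0e-09
print(f"{int(big):,}")         # 1,000,000,000 (the nine zeros written out)
```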
Examples of large numbers describing everyday real-world objects include:
The number of cells in the human body (estimated at 3.72 × 10^13), or 37.2 trillion
The number of bits on a computer hard disk (typically about 10^13, for 1–2 TB), or 10 trillion
The number of neuronal connections in the human brain (estimated at 10^14), or 100 trillion
The Avogadro constant is the number of "elementary entities" (usually atoms or molecules) in one mole; the number of atoms in 12 grams of carbon-12 is approximately 6.022 × 10^23, or 602.2 sextillion.
The total number of DNA base pairs within the entire biomass on Earth, as a possible approximation of global biodiversity, is estimated at (5.3 ± 3.6) × 10^37, or 53 ± 36 undecillion
The mass of Earth consists of about 4 × 10^51, or 4 sexdecillion, nucleons
The estimated number of atoms in the observable universe (10^80), or 100 quinvigintillion
The lower bound on the game-tree complexity of chess, also known as the “Shannon number” (estim |
https://en.wikipedia.org/wiki/Percolation%20theory | In statistical physics and mathematics, percolation theory describes the behavior of a network when nodes or links are added. This is a geometric type of phase transition, since at a critical fraction of addition, many small, disconnected clusters merge into significantly larger connected, so-called spanning clusters. The applications of percolation theory to materials science and in many other disciplines are discussed here and in the articles Network theory and Percolation (cognitive psychology).
Introduction
A representative question (and the source of the name) is as follows. Assume that some liquid is poured on top of some porous material. Will the liquid be able to make its way from hole to hole and reach the bottom? This physical question is modelled mathematically as a three-dimensional network of vertices, usually called "sites", in which the edges or "bonds" between each two neighbors may be open (allowing the liquid through) with probability p, or closed with probability 1 − p, and they are assumed to be independent. Therefore, for a given p, what is the probability that an open path (meaning a path, each of whose links is an "open" bond) exists from the top to the bottom? The behavior for large n is of primary interest. This problem, called now bond percolation, was introduced in the mathematics literature by Broadbent and Hammersley (1957), and has been studied intensively by mathematicians and physicists since then.
In a slightly different mathematical model for obtaining a random graph, a site is "occupied" with probability p or "empty" (in which case its edges are removed) with probability 1 − p; the corresponding problem is called site percolation. The question is the same: for a given p, what is the probability that a path exists between top and bottom? Similarly, one can ask, given a connected graph, at what fraction of failures the graph will become disconnected (no large component).
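A crude Monte Carlo sketch of site percolation in Python (the grid size, trial count, and p values are illustrative; serious estimates use much larger lattices):

```python
import random

# Occupy each site of an n-by-n grid with probability p, then test for a
# top-to-bottom path through occupied sites via depth-first search.
def percolates(n: int, p: float) -> bool:
    occupied = [[random.random() < p for _ in range(n)] for _ in range(n)]
    frontier = [(0, c) for c in range(n) if occupied[0][c]]
    seen = set(frontier)
    while frontier:
        r, c = frontier.pop()
        if r == n - 1:
            return True  # reached the bottom row: a spanning cluster exists
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and occupied[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append((nr, nc))
    return False

for p in (0.4, 0.6, 0.8):
    hits = sum(percolates(50, p) for _ in range(200))
    print(f"p={p}: spanning cluster in {hits / 200:.0%} of trials")
```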
The same questions can be asked for any lattice dimension. As is quite typical, it is actually |
https://en.wikipedia.org/wiki/Transport%20Layer%20Security | Transport Layer Security (TLS) is a cryptographic protocol designed to provide communications security over a computer network. The protocol is widely used in applications such as email, instant messaging, and voice over IP, but its use in securing HTTPS remains the most publicly visible.
The TLS protocol aims primarily to provide security, including privacy (confidentiality), integrity, and authenticity through the use of cryptography, such as the use of certificates, between two or more communicating computer applications. It runs in the presentation layer and is itself composed of two layers: the TLS record and the TLS handshake protocols.
The closely related Datagram Transport Layer Security (DTLS) is a communications protocol that provides security to datagram-based applications. In technical writing, references to "(D)TLS" are often seen when it applies to both versions.
TLS is a proposed Internet Engineering Task Force (IETF) standard, first defined in 1999, and the current version is TLS 1.3, defined in August 2018. TLS builds on the now-deprecated SSL (Secure Sockets Layer) specifications (1994, 1995, 1996) developed by Netscape Communications for adding the HTTPS protocol to their Navigator web browser.
Description
Client-server applications use the TLS protocol to communicate across a network in a way designed to prevent eavesdropping and tampering.
Since applications can communicate either with or without TLS (or SSL), it is necessary for the client to request that the server set up a TLS connection. One of the main ways of achieving this is to use a different port number for TLS connections. Port 80 is typically used for unencrypted HTTP traffic while port 443 is the common port used for encrypted HTTPS traffic. Another mechanism is to make a protocol-specific STARTTLS request to the server to switch the connection to TLS – for example, when using the mail and news protocols.
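A minimal TLS client using Python's standard `ssl` module, connecting on the dedicated HTTPS port discussed above (`example.com` is a placeholder host):

```python
import socket
import ssl

hostname = "example.com"
context = ssl.create_default_context()    # verifies certificates by default
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())              # e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"])
```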
Once the client and server have agreed to use TLS, they negotiate a stat |
https://en.wikipedia.org/wiki/Fitness%20%28biology%29 | Fitness (often denoted w or ω in population genetics models) is the quantitative representation of individual reproductive success. It is also equal to the average contribution to the gene pool of the next generation, made by an average individual of the specified genotype or phenotype. Fitness can be defined either with respect to a genotype or to a phenotype in a given environment or time. The fitness of a genotype is manifested through its phenotype, which is also affected by the developmental environment. The fitness of a given phenotype can also be different in different selective environments.
With asexual reproduction, it is sufficient to assign fitnesses to genotypes. With sexual reproduction, recombination scrambles alleles into different genotypes every generation; in this case, fitness values can be assigned to alleles by averaging over possible genetic backgrounds. Natural selection tends to make alleles with higher fitness more common over time, resulting in Darwinian evolution.
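A minimal sketch of that last point, using the standard one-locus haploid selection model (the fitness values and starting frequency are illustrative, not from the article):

```python
# Each generation, allele frequencies are reweighted by fitness and renormalized.
def next_freq(p, w_A=1.1, w_a=1.0):
    """Frequency of allele A after one generation of selection."""
    mean_w = p * w_A + (1 - p) * w_a   # population mean fitness
    return p * w_A / mean_w

p = 0.01
for _ in range(200):
    p = next_freq(p)
print(f"frequency of A after 200 generations: {p:.3f}")  # approaches 1.0
```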
The term "Darwinian fitness" can be used to make clear the distinction with physical fitness. Fitness does not include a measure of survival or life-span; Herbert Spencer's well-known phrase "survival of the fittest" should be interpreted as: "Survival of the form (phenotypic or genotypic) that will leave the most copies of itself in successive generations."
Inclusive fitness differs from individual fitness by including the ability of an allele in one individual to promote the survival and/or reproduction of other individuals that share that allele, in preference to individuals with a different allele. One mechanism of inclusive fitness is kin selection.
Fitness as propensity
Fitness is often defined as a propensity or probability, rather than the actual number of offspring. For example, according to Maynard Smith, "Fitness is a property, not of an individual, but of a class of individuals—for example homozygous for allele A at a particular locus. Thus the phrase 'expected nu |
https://en.wikipedia.org/wiki/Card%20standards | Card standard(s) may refer to any of a number of ISO standards related to smart cards.
ISO/IEC 7810 Identification cards — Physical characteristics
ISO/IEC 7812 Identification cards — Identification of issuers
ISO/IEC 7816 Identification cards — Integrated circuit cards
ISO/IEC 14443 Identification cards — Contactless integrated circuit cards — Proximity cards
See also
List of ISO standards
Smart cards |
https://en.wikipedia.org/wiki/Fasting | Fasting is the abstention from eating and sometimes drinking. From a purely physiological context, "fasting" may refer to the metabolic status of a person who has not eaten overnight (see "Breakfast"), or to the metabolic state achieved after complete digestion and absorption of a meal. Metabolic changes in the fasting state begin after absorption of a meal (typically 3–5 hours after eating).
A diagnostic fast refers to prolonged fasting from 1 to 100 hours (depending on age) conducted under observation to facilitate the investigation of a health complication, usually hypoglycemia. Many people may also fast as part of a medical procedure or a check-up, such as preceding a colonoscopy or surgery, or before certain medical tests. Intermittent fasting is a technique sometimes used for weight loss that incorporates regular fasting into a person's dietary schedule. Fasting may also be part of a religious ritual, often associated with specifically scheduled fast days, as determined by the religion, or it may be applied as a public demonstration for a given cause in a practice known as a hunger strike.
Health effects
Fasting may have different results on health in different circumstances. To understand whether loss of appetite (anorexia) during illness was protective or detrimental, researchers in the laboratory of Ruslan Medzhitov at Yale School of Medicine gave carbohydrate to mice with a bacterial or viral illness, or deprived them of carbohydrate. They found that carbohydrate was detrimental to bacterial sepsis. But with viral sepsis or influenza, nutritional supplementation with carbohydrates was beneficial, decreasing mortality, whereas denying glucose to the mice, or blocking its metabolism, was lethal. The researchers put forth hypotheses to explain the findings and called for more research on humans to determine whether our bodies react similarly, depending on whether an illness is bacterial or viral.
Alternate-day fasting (alternating between a 24-hour "fast day" w |
https://en.wikipedia.org/wiki/Centroid | In mathematics and physics, the centroid, also known as geometric center or center of figure, of a plane figure or solid figure is the point defined by the arithmetic mean position of all the points in the surface of the figure. In a polytope, it can be found using the arithmetic mean position of the vertices. The same definition extends to any object in n-dimensional Euclidean space.
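For the vertex case this is a one-line computation; a small Python sketch (the triangle is an arbitrary example):

```python
def centroid(points):
    """Arithmetic mean of the coordinates, dimension-agnostic."""
    return tuple(sum(axis) / len(points) for axis in zip(*points))

print(centroid([(0.0, 0.0), (6.0, 0.0), (0.0, 3.0)]))  # (2.0, 1.0)
```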
In geometry, one often assumes uniform mass density, in which case the barycenter or center of mass coincides with the centroid. Informally, it can be understood as the point at which a cutout of the shape (with uniformly distributed mass) could be perfectly balanced on the tip of a pin.
In physics, if variations in gravity are considered, then a center of gravity can be defined as the weighted mean of all points weighted by their specific weight.
In geography, the centroid of a radial projection of a region of the Earth's surface to sea level is the region's geographical center.
History
The term "centroid" is of recent coinage (1814). It is used as a substitute for the older terms "center of gravity" and "center of mass" when the purely geometrical aspects of that point are to be emphasized. The term is peculiar to the English language; the French, for instance, use "" on most occasions, and others use terms of similar meaning.
The center of gravity, as the name indicates, is a notion that arose in mechanics, most likely in connection with building activities. It is uncertain when the idea first appeared, as the concept likely occurred to many people individually with minor differences. Nonetheless, the center of gravity of figures was studied extensively in Antiquity; Bossut credits Archimedes (287–212 BCE) with being the first to find the centroid of plane figures, although he never defines it. A treatment of centroids of solids by Archimedes has been lost.
It is unlikely that Archimedes learned the theorem that the medians of a triangle meet in a point—the center of gravity |
https://en.wikipedia.org/wiki/WPIX | WPIX (channel 11) is a television station in New York City, serving as the de facto flagship of The CW Television Network. Owned by Mission Broadcasting, the station is operated by CW majority owner Nexstar Media Group under a local marketing agreement (LMA). Since its inception in 1948, WPIX's studios and offices have been located in the Daily News Building on East 42nd Street (also known as "11 WPIX Plaza") in Midtown Manhattan. The station's transmitter is located at the Empire State Building.
WPIX is also available as a regional superstation via satellite and cable in the United States and Canada. It is the largest Nexstar-operated station by population of market size.
History
As an independent station (1948–1995)
The station first signed on the air on June 15, 1948; it was the fifth television station to sign on in New York City and was the market's second independent station. It was also the second of three stations to launch in the New York market during 1948, debuting one month after Newark, New Jersey–based independent WATV (channel 13, now WNET) and two months before WJZ-TV (channel 7, now WABC-TV). WPIX's call letters come from the slogan of the newspaper that founded the station, the New York Daily News: "New York's Picture Newspaper". The Daily News's partial corporate parent was the Chicago-based Tribune Company, publishers of the Chicago Tribune.
Until becoming owned outright by Tribune in 1991, WPIX operated separately from the company's other television and radio outlets (including WGN-TV in Chicago, which signed on two months before WPIX in April 1948) through the News-owned license holder, WPIX, Incorporated – which in 1963 purchased New York radio station WBFM (101.9 FM) and soon changed that station's call letters to WPIX-FM. British businessman Robert Maxwell bought the Daily News in 1991. Tribune retained WPIX and WQCD; the radio station was sold to Emmis Communications in 1997 (it is now WFAN-FM). WPIX initially feature |
https://en.wikipedia.org/wiki/Transactivation | In the context of gene regulation, transactivation is the increased rate of gene expression triggered either by biological processes or by artificial means, through the expression of an intermediate transactivator protein.
In the context of receptor signaling, transactivation occurs when one or more receptors activate yet another; receptor transactivation may result from the crosstalk of signaling cascades or the activation of G protein–coupled receptor hetero-oligomer subunits, among other mechanisms.
Natural transactivation
Transactivation can be triggered either by endogenous cellular or viral proteins, also called transactivators. These protein factors act in trans (i.e., intermolecularly). HIV and HTLV are just two of the many viruses that encode transactivators to enhance viral gene expression. These transactivators can also be linked to cancer if they start interacting with, and increasing expression of, a cellular proto-oncogene. HTLV, for instance, has been associated with causing leukemia primarily through this process. Its transactivator, Tax, can interact with p40, inducing overexpression of interleukin 2, interleukin receptors, GM-CSF and the transcription factor c-Fos. HTLV infects T-cells and via the increased expression of these stimulatory cytokines and transcription factors, leads to uncontrolled proliferation of T-cells and hence lymphoma.
Artificial transactivation
Artificial transactivation of a gene is achieved by inserting it into the genome at the appropriate area as a transactivator gene adjoined to special promoter regions of DNA. The transactivator gene expresses a transcription factor that binds to a specific promoter region of DNA. By binding to the promoter region of a gene, the transcription factor causes that gene to be expressed. The expression of one transactivator gene can activate multiple genes, as long as they have the same, specific promoter region attached. Because the expression of the transactivator gene can be controlle |
https://en.wikipedia.org/wiki/Test%20card | A test card, also known as a test pattern or start-up/closedown test, is a television test signal, typically broadcast at times when the transmitter is active but no program is being broadcast (often at sign-on and sign-off).
Used since the earliest TV broadcasts, test cards were originally physical cards at which a television camera was pointed, allowing for simple adjustments of picture quality. Such cards are still often used for calibration, alignment, and matching of cameras and camcorders. From the 1950s, test card images were built into monoscope tubes, which freed up the use of TV cameras that would otherwise have had to be rotated to continuously broadcast physical test cards during downtime hours.
Electronically generated test patterns, used for calibrating or troubleshooting the downstream signal path, were introduced in the late-1960s. These are generated by test signal generators, which do not depend on the correct configuration (and presence) of a camera, and can also test for additional parameters such as correct color decoding, sync, frames per second, and frequency response. These patterns are specially tailored to be used in conjunction with devices such as a vectorscope, allowing precise adjustments of image equipment.
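A toy electronically generated pattern in Python with NumPy (simplified full-saturation color bars; not the exact SMPTE levels or layout):

```python
import numpy as np

COLORS = [(255, 255, 255), (255, 255, 0), (0, 255, 255), (0, 255, 0),
          (255, 0, 255), (255, 0, 0), (0, 0, 255), (0, 0, 0)]

h, w = 480, 640
frame = np.zeros((h, w, 3), dtype=np.uint8)
for i, rgb in enumerate(COLORS):          # eight full-height vertical bars
    frame[:, i * w // 8:(i + 1) * w // 8] = rgb
print(frame.shape)                        # (480, 640, 3)
```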
The audio broadcast while test cards are shown is typically a sine wave tone, radio (if associated or affiliated with the television channel) or music (usually instrumental, though some also broadcast with jazz or popular music).
Digitally generated cards came later, associated with digital television, and add a few features specific of digital signals, like checking for error correction, chroma subsampling, aspect ratio signaling, surround sound, etc. More recently, the use of test cards has also expanded beyond television to other digital displays such as large LED walls and video projectors.
Technical details
Test cards typically contain a set of patterns to enable television cameras and receivers to be adjusted to show the pic |
https://en.wikipedia.org/wiki/NSD | In Internet computing, NSD (for "name server daemon") is an open-source Domain Name System (DNS) server. It was developed by NLnet Labs of Amsterdam in cooperation with the RIPE NCC, from scratch as an authoritative name server (i.e., not implementing the recursive caching function by design). The intention of this development is to add variance to the "gene pool" of DNS implementations used by higher level name servers and thus increase the resilience of DNS against software flaws or exploits.
NSD uses BIND-style zone-files (zone-files used under BIND can usually be used unmodified in NSD, once entered into the NSD configuration).
NSD uses zone information compiled via zonec into a binary database file (nsd.db) which allows fast startup of the NSD name-service daemon, and allows syntax-structural errors in Zone-Files to be flagged at compile-time (before being made available to NSD service itself).
The collection of programs/processes that make-up NSD are designed so that the NSD daemon itself runs as a non-privileged user and can be easily configured to run in a Chroot jail, such that security flaws in the NSD daemon are not so likely to result in system-wide compromise as without such measures.
As of May 2018, four of the Internet root nameservers are using NSD:
k.root-servers.net was switched to NSD on February 19, 2003.
One of the two load-balanced servers for h.root-servers.net (called "H1" and "H2") was switched to NSD, and now there are three servers, all running NSD (called "H1", "H2", and "H3").
l.root-servers.net switched to NSD on February 6, 2007.
d.root-servers.net was switched to NSD in May 2018.
Several other TLDs use NSD for part of their servers.
See also
Unbound, a recursive DNS server, also developed by NLnet Labs
Comparison of DNS server software
References
External links
NSD License
NSD DNS Tutorial with examples and explanations
DNS software
Free network-related software
DNS server software for Linux
Software using the BSD license |
https://en.wikipedia.org/wiki/Electronic%20Privacy%20Information%20Center | Electronic Privacy Information Center (EPIC) is an independent nonprofit research center established in 1994 to protect privacy, freedom of expression, and democratic values in the information age. EPIC is based in Washington, D.C. EPIC's mission is to secure the fundamental right to privacy in the digital age for all people through advocacy, research, and litigation.
EPIC pursues a wide range of civil liberties, consumer protection, and human rights issues. EPIC has pursued several successful consumer privacy complaints with the US Federal Trade Commission, concerning Snapchat (faulty privacy technology), WhatsApp (privacy policy after acquisition by Facebook), Facebook (changes in user privacy settings), Google (roll-out of Google Buzz), Microsoft (Hailstorm log-in), and Choicepoint (sale of personal information to identity thieves). EPIC has also prevailed in significant Freedom of Information Act cases against the CIA, the DHS, the Dept. of Education, the Federal Bureau of Investigation, the National Security Agency (NSA), the ODNI, and the Transportation Security Administration. EPIC has also filed many "friend of the court" briefs on law and technology, including Riley v. California (U.S. 2014) (concerning cell phone privacy), and litigated important privacy cases, including EPIC v. DHS (D.C. Cir. 2011), which led to the removal of the x-ray body scanners in US airports, and EPIC v. NSA (D.C. Cir. 2014), which led to the release of the NSA's formerly secret cybersecurity authority. Additionally, EPIC challenged the NSA's domestic surveillance program in a petition to the U.S. Supreme Court. In re EPIC, (U.S. 2013) after the release of the "Verizon Order" in June 2013. One of EPIC's current cases concerns the obligation of the Federal Aviation Administration to establish privacy regulations prior to the deployment of commercial drones in the United States.
EPIC works closely with a distinguished advisory board, who have expertise in law, technology and publi |
https://en.wikipedia.org/wiki/Reconfigurable%20computing | Reconfigurable computing is a computer architecture combining some of the flexibility of software with the high performance of hardware by processing with very flexible high speed computing fabrics like field-programmable gate arrays (FPGAs). The principal difference when compared to using ordinary microprocessors is the ability to make substantial changes to the datapath itself in addition to the control flow. On the other hand, the main difference from custom hardware, i.e. application-specific integrated circuits (ASICs) is the possibility to adapt the hardware during runtime by "loading" a new circuit on the reconfigurable fabric.
History
The concept of reconfigurable computing has existed since the 1960s, when Gerald Estrin's paper proposed the concept of a computer made of a standard processor and an array of "reconfigurable" hardware. The main processor would control the behavior of the reconfigurable hardware. The latter would then be tailored to perform a specific task, such as image processing or pattern matching, as quickly as a dedicated piece of hardware. Once the task was done, the hardware could be adjusted to do some other task. This resulted in a hybrid computer structure combining the flexibility of software with the speed of hardware.
In the 1980s and 1990s there was a renaissance in this area of research with many proposed reconfigurable architectures developed in industry and academia, such as: Copacobana, Matrix, GARP, Elixent, NGEN, Polyp, MereGen, PACT XPP, Silicon Hive, Montium, Pleiades, Morphosys, and PiCoGA. Such designs were feasible due to the constant progress of silicon technology that let complex designs be implemented on one chip. Some of these massively parallel reconfigurable computers were built primarily for special subdomains such as molecular evolution, neural or image processing. The world's first commercial reconfigurable computer, the Algotronix CHS2X4, was completed in 1991. It was not a commercial success, but was pro |
https://en.wikipedia.org/wiki/Phosphoric%20acid | Phosphoric acid (orthophosphoric acid, monophosphoric acid or phosphoric(V) acid) is a colorless, odorless, phosphorus-containing solid and inorganic compound with the chemical formula H₃PO₄. It is commonly encountered as an 85% aqueous solution, which is a colorless, odorless, and non-volatile syrupy liquid. It is a major industrial chemical, being a component of many fertilizers.
The compound is an acid. Removal of all three H⁺ ions gives the phosphate ion PO₄³⁻. Removal of one or two protons gives the dihydrogen phosphate ion H₂PO₄⁻ and the hydrogen phosphate ion HPO₄²⁻, respectively. Phosphoric acid forms esters, called organophosphates.
The name "orthophosphoric acid" can be used to distinguish this specific acid from other "phosphoric acids", such as pyrophosphoric acid. Nevertheless, the term "phosphoric acid" often means this specific compound; and that is the current IUPAC nomenclature.
Production
Phosphoric acid is produced industrially by one of two routes, wet processes and dry.
Wet process
In the wet process, a phosphate-containing mineral such as calcium hydroxyapatite or fluorapatite is treated with sulfuric acid.
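For fluorapatite, the overall wet-process reaction is often summarized in idealized form as follows (a simplified net equation; actual plant chemistry is more complex):
Ca₅(PO₄)₃F + 5 H₂SO₄ + 10 H₂O → 3 H₃PO₄ + 5 CaSO₄·2H₂O + HF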
Calcium sulfate (gypsum, CaSO₄·2H₂O) is a by-product, which is removed as phosphogypsum. The hydrogen fluoride (HF) gas is streamed into a wet (water) scrubber producing hydrofluoric acid. In both cases the phosphoric acid solution usually contains 23–33% P₂O₅ (32–46% H₃PO₄). It may be concentrated to produce commercial- or merchant-grade phosphoric acid, which contains about 54–62% P₂O₅ (75–85% H₃PO₄). Further removal of water yields superphosphoric acid with a P₂O₅ concentration above 70% (corresponding to nearly 100% H₃PO₄). The phosphoric acid from both processes may be further purified by removing compounds of arsenic and other potentially toxic impurities.
Dry process
To produce food-grade phosphoric acid, phosphate ore is first reduced with coke in an electric arc furnace, to give elemental phosphorus. Silica is also added, resulting in the production of calcium silicate slag. El |
https://en.wikipedia.org/wiki/Axiomatic%20system | In mathematics and logic, an axiomatic system is any set of axioms from which some or all axioms can be used in conjunction to logically derive theorems. A theory is a consistent, relatively-self-contained body of knowledge which usually contains an axiomatic system and all its derived theorems. An axiomatic system that is completely described is a special kind of formal system. A formal theory is an axiomatic system (usually formulated within model theory) that describes a set of sentences that is closed under logical implication. A formal proof is a complete rendition of a mathematical proof within a formal system.
Properties
An axiomatic system is said to be consistent if it lacks contradiction. That is, it is impossible to derive both a statement and its negation from the system's axioms. Consistency is a key requirement for most axiomatic systems, as the presence of contradiction would allow any statement to be proven (principle of explosion).
In an axiomatic system, an axiom is called independent if it cannot be proven or disproven from other axioms in the system. A system is called independent if each of its underlying axioms is independent. Unlike consistency, independence is not a necessary requirement for a functioning axiomatic system — though it is usually sought after to minimize the number of axioms in the system.
An axiomatic system is called complete if for every statement, either itself or its negation is derivable from the system's axioms (equivalently, every statement is capable of being proven true or false).
Relative consistency
Beyond consistency, relative consistency is also the mark of a worthwhile axiom system. This describes the scenario where the undefined terms of a first axiom system are provided definitions from a second, such that the axioms of the first are theorems of the second.
A good example is the relative consistency of absolute geometry with respect to the theory of the real number system. Lines and points are undefin |
https://en.wikipedia.org/wiki/Cytopathology | Cytopathology (from Greek , kytos, "a hollow"; , pathos, "fate, harm"; and , -logia) is a branch of pathology that studies and diagnoses diseases on the cellular level. The discipline was founded by George Nicolas Papanicolaou in 1928. Cytopathology is generally used on samples of free cells or tissue fragments, in contrast to histopathology, which studies whole tissues. Cytopathology is frequently, less precisely, called "cytology", which means "the study of cells".
Cytopathology is commonly used to investigate diseases involving a wide range of body sites, often to aid in the diagnosis of cancer but also in the diagnosis of some infectious diseases and other inflammatory conditions. For example, a common application of cytopathology is the Pap smear, a screening tool used to detect precancerous cervical lesions that may lead to cervical cancer.
Cytopathologic tests are sometimes called smear tests because the samples may be smeared across a glass microscope slide for subsequent staining and microscopic examination. However, cytology samples may be prepared in other ways, including cytocentrifugation. Different types of smear tests may also be used for cancer diagnosis. In this sense, it is termed a cytologic smear.
Cell collection
There are two methods of collecting cells for cytopathologic analysis: exfoliative cytology, and intervention cytology.
Exfoliative cytology
In this method, cells are collected after they have been either spontaneously shed by the body ("spontaneous exfoliation"), or manually scraped/brushed off of a surface in the body ("mechanical exfoliation"). An example of spontaneous exfoliation is when cells of the pleural cavity or peritoneal cavity are shed into the pleural or peritoneal fluid. This fluid can be collected via various methods for examination. Examples of mechanical exfoliation include Pap smears, where cells are scraped from the cervix with a cervical spatula, or bronchial brushings, where a bronchoscope is inserted into th |
https://en.wikipedia.org/wiki/TI%20Advanced%20Scientific%20Computer | The Advanced Scientific Computer (ASC) is a supercomputer designed and manufactured by Texas Instruments (TI) between 1966 and 1973. The ASC's central processing unit (CPU) supported vector processing, a performance-enhancing technique which was key to its high performance. The ASC, along with the Control Data Corporation STAR-100 supercomputer (which was introduced in the same year), were the first computers to feature vector processing. However, this technique's potential was not fully realized by either the ASC or STAR-100 due to an insufficient understanding of the technique; it was the Cray Research Cray-1 supercomputer, announced in 1975, that would fully realize and popularize vector processing. The more successful implementation of vector processing in the Cray-1 would demarcate the ASC (and STAR-100) as first-generation vector processors, with the Cray-1 belonging in the second.
History
TI began as a division of Geophysical Service Incorporated (GSI), a company that performed seismic surveys for oil exploration companies. GSI was by then a subsidiary of TI, and TI wanted to apply the latest computer technology to the processing and analysis of seismic datasets. The ASC project started as the Advanced Seismic Computer. As the project developed, TI decided to expand its scope. "Seismic" was replaced by "Scientific" in the name, allowing the project to retain the designation ASC.
Originally the software, including an operating system and a FORTRAN compiler, was done under contract by Computer Usage Company, under the direction of George R. Trimble, Jr., but later taken over by TI itself. Southern Methodist University in Dallas developed an ALGOL compiler for the ASC.
Architecture
The ASC was based around a single high-speed shared memory, which was accessed by the CPU and eight I/O channel controllers, in an organization similar to Seymour Cray's groundbreaking CDC 6600. Memory was accessed solely under the control of the memory control unit (MCU). The MCU was a tw |
https://en.wikipedia.org/wiki/Hitachi | () is a Japanese multinational electronics company headquartered in Chiyoda, Tokyo. It traces its origins back to 1910 with the establishment of a subsidiary electrical machinery manufacturing plant by Namihei Odaira within the Kuhara Mining Plant Hitachi Mine in Hitachi, Ibaraki. It became independent from the Mining Plant in 1920.
It had formed part of the Nissan zaibatsu and later DKB Group and Fuyo Group of companies before DKB and Fuji Bank (the core Fuyo Group company) merged into the Mizuho Financial Group. As of 2020, Hitachi conducts business ranging from IT, including AI, the Internet of Things, and big data, to infrastructure.
Hitachi is listed on the Tokyo Stock Exchange and Nagoya Stock Exchange and its Tokyo listing is a constituent of the Nikkei 225 and TOPIX Core30 indices. It is ranked 38th in the 2012 Fortune Global 500 and 129th in the 2012 Forbes Global 2000.
History
Founding and early history
Hitachi was founded in 1910 by electrical engineer Namihei Odaira (1874–1951) in Ibaraki Prefecture. The company's first product was Japan's first induction motor, initially developed for use in copper mining.
The company began as an in-house venture of Fusanosuke Kuhara's mining company in Hitachi, Ibaraki. Odaira moved headquarters to Tokyo in 1918. Odaira coined the company's toponymic name by superimposing two kanji characters: hi meaning "sun" and tachi meaning "rise".
World War II had a significant impact on the company: many of its factories were destroyed by Allied bombing raids, and the postwar period brought labor discord. Founder Odaira was removed from the company and Hitachi Zosen Corporation was spun out. Hitachi's reconstruction efforts after the war were hindered by a labor strike in 1950. Meanwhile, Hitachi went public in 1949.
Hitachi America, Ltd. was established in 1959.
In 1975 the Soviet Union started to produce air conditioners at a factory in Baku, established under license from Hitachi. Volumes of production of |
https://en.wikipedia.org/wiki/Irreducible%20polynomial | In mathematics, an irreducible polynomial is, roughly speaking, a polynomial that cannot be factored into the product of two non-constant polynomials. The property of irreducibility depends on the nature of the coefficients that are accepted for the possible factors, that is, the field to which the coefficients of the polynomial and its possible factors are supposed to belong. For example, the polynomial $x^2 - 2$ is a polynomial with integer coefficients, but, as every integer is also a real number, it is also a polynomial with real coefficients. It is irreducible if it is considered as a polynomial with integer coefficients, but it factors as $(x - \sqrt{2})(x + \sqrt{2})$ if it is considered as a polynomial with real coefficients. One says that the polynomial $x^2 - 2$ is irreducible over the integers but not over the reals.
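This example is easy to check computationally; a brief sketch, assuming the sympy library is available (the library is not mentioned in the article):

```python
from sympy import symbols, factor, sqrt

x = symbols("x")
print(factor(x**2 - 2))                      # x**2 - 2: no factorization over the rationals
print(factor(x**2 - 2, extension=sqrt(2)))   # (x - sqrt(2))*(x + sqrt(2)) once sqrt(2) is adjoined
```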
Polynomial irreducibility can be considered for polynomials with coefficients in an integral domain $R$, and there are two common definitions. Most often, a polynomial over an integral domain $R$ is said to be irreducible if it is not the product of two polynomials that have their coefficients in $R$, and that are not units in $R$. Equivalently, for this definition, an irreducible polynomial is an irreducible element in the ring of polynomials over $R$. If $R$ is a field, the two definitions of irreducibility are equivalent. For the second definition, a polynomial is irreducible if it cannot be factored into polynomials with coefficients in the same domain that both have a positive degree. Equivalently, a polynomial is irreducible if it is irreducible over the field of fractions of the integral domain. For example, the polynomial $2(x^2 - 2)$ is irreducible for the second definition, and not for the first one. On the other hand, $x^2 - 2$ is irreducible in $\mathbb{Z}[x]$ for the two definitions, while it is reducible in $\mathbb{R}[x]$.
A polynomial that is irreducible over any field containing the coefficients is absolutely irreducible. By the fundamental theorem of algebra, a univariate polynomial is absolutely irreducible if and only if its degree is one. |
https://en.wikipedia.org/wiki/Apollo%20Guidance%20Computer | The Apollo Guidance Computer (AGC) was a digital computer produced for the Apollo program that was installed on board each Apollo command module (CM) and Apollo Lunar Module (LM). The AGC provided computation and electronic interfaces for guidance, navigation, and control of the spacecraft. The AGC was the first computer based on silicon integrated circuits. The computer's performance was comparable to the first generation of home computers from the late 1970s, such as the Apple II, TRS-80, and Commodore PET.
The AGC has a 16-bit word length, with 15 data bits and one parity bit. Most of the software on the AGC is stored in a special read-only memory known as core rope memory, fashioned by weaving wires through and around magnetic cores, though a small amount of read/write core memory is available.
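As an illustration of the word format, here is a minimal sketch (not flight code) of how the parity bit over the 15 data bits can be computed; the AGC used odd parity:

```python
def agc_parity_bit(word15):
    """Return the parity bit that makes the total count of 1s (15 data bits + parity) odd."""
    ones = bin(word15 & 0x7FFF).count("1")   # count 1s in the 15 data bits
    return 1 if ones % 2 == 0 else 0

data = 0b000000000000101      # example 15-bit data word with two 1s
print(agc_parity_bit(data))   # 1, so the full 16-bit word carries an odd number of 1s
```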
Astronauts communicated with the AGC using a numeric display and keyboard called the DSKY (for "display and keyboard", pronounced "DIS-kee"). The AGC and its DSKY user interface were developed in the early 1960s for the Apollo program by the MIT Instrumentation Laboratory and first flew in 1966.
Operation
Astronauts manually flew Project Gemini with control sticks, but computers flew most of Project Apollo except briefly during lunar landings. Each Moon flight carried two AGCs, one each in the command module and the Apollo Lunar Module, with the exception of Apollo 7 which was an Earth orbit mission and Apollo 8 which did not need a lunar module for its lunar orbit mission. The AGC in the command module was the center of its guidance, navigation and control (GNC) system. The AGC in the lunar module ran its Apollo PGNCS (primary guidance, navigation and control system), with the acronym pronounced as pings.
Each lunar mission had two additional computers:
The Launch Vehicle Digital Computer (LVDC) on the Saturn V booster instrumentation ring
the Abort Guidance System (AGS, pronounced ags) of the lunar module, to be used in the event of failure of the LM PGNCS. The A |
https://en.wikipedia.org/wiki/Calabi%E2%80%93Yau%20manifold | In algebraic geometry, a Calabi–Yau manifold, also known as a Calabi–Yau space, is a particular type of manifold which has properties, such as Ricci flatness, yielding applications in theoretical physics. Particularly in superstring theory, the extra dimensions of spacetime are sometimes conjectured to take the form of a 6-dimensional Calabi–Yau manifold, which led to the idea of mirror symmetry. Their name was coined by Candelas et al. (1985), after Eugenio Calabi, who first conjectured that such surfaces might exist, and Shing-Tung Yau, who proved the Calabi conjecture.
Calabi–Yau manifolds are complex manifolds that are generalizations of K3 surfaces in any number of complex dimensions (i.e. any even number of real dimensions). They were originally defined as compact Kähler manifolds with a vanishing first Chern class and a Ricci-flat metric, though many other similar but inequivalent definitions are sometimes used.
Definitions
The motivational definition given by Shing-Tung Yau is of a compact Kähler manifold with a vanishing first Chern class, that is also Ricci flat.
There are many other definitions of a Calabi–Yau manifold used by different authors, some inequivalent. This section summarizes some of the more common definitions and the relations between them.
A Calabi–Yau $n$-fold or Calabi–Yau manifold of (complex) dimension $n$ is sometimes defined as a compact $n$-dimensional Kähler manifold $M$ satisfying one of the following equivalent conditions:
The canonical bundle of $M$ is trivial.
$M$ has a holomorphic $n$-form that vanishes nowhere.
The structure group of the tangent bundle of $M$ can be reduced from $U(n)$ to $SU(n)$.
$M$ has a Kähler metric with global holonomy contained in $SU(n)$.
These conditions imply that the first integral Chern class $c_1(M)$ of $M$ vanishes. Nevertheless, the converse is not true. The simplest examples where this happens are hyperelliptic surfaces, finite quotients of a complex torus of complex dimension 2, which have vanishing first integral Chern class but non-trivial canonical bundle.
For a compact -dimensional Kä |
https://en.wikipedia.org/wiki/Disjoint%20union | In mathematics, a disjoint union (or discriminated union) of a family of sets $(A_i : i \in I)$ is a set $A$, often denoted by $\bigsqcup_{i \in I} A_i$, with an injection of each $A_i$ into $A$ such that the images of these injections form a partition of $A$ (that is, each element of $A$ belongs to exactly one of these images). A disjoint union of a family of pairwise disjoint sets is their union.
In category theory, the disjoint union is the coproduct of the category of sets, and thus defined up to a bijection. In this context, the notation $\coprod_{i \in I} A_i$ is often used.
The disjoint union of two sets $A$ and $B$ is written with infix notation as $A \sqcup B$. Some authors use the alternative notation $A \uplus B$ or $A \mathbin{\dot\cup} B$ (along with the corresponding $\biguplus_{i \in I} A_i$ or $\dot\bigcup_{i \in I} A_i$).
A standard way for building the disjoint union is to define $A$ as the set of ordered pairs $(x, i)$ such that $x \in A_i$, and the injection $A_i \to A$ as $x \mapsto (x, i)$.
Example
Consider the sets $A_0 = \{5, 6, 7\}$ and $A_1 = \{5, 6\}$. It is possible to index the set elements according to set origin by forming the associated sets
$A_0^* = \{(5, 0), (6, 0), (7, 0)\}$ and $A_1^* = \{(5, 1), (6, 1)\}$, where the second element in each pair matches the subscript of the origin set (for example, the $0$ in $(5, 0)$ matches the subscript in $A_0$, etc.). The disjoint union $A_0 \sqcup A_1$ can then be calculated as follows:
$A_0 \sqcup A_1 = A_0^* \cup A_1^* = \{(5, 0), (6, 0), (7, 0), (5, 1), (6, 1)\}.$
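A minimal sketch of this tagging construction in Python (illustrative, not from the article):

```python
def disjoint_union(indexed_sets):
    """Tag each element with the index of its set of origin, then take the plain union."""
    return {(x, i) for i, s in indexed_sets.items() for x in s}

A0, A1 = {5, 6, 7}, {5, 6}
print(disjoint_union({0: A0, 1: A1}))
# {(5, 0), (6, 0), (7, 0), (5, 1), (6, 1)}: the shared elements 5 and 6 stay distinct
```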
Set theory definition
Formally, let $\{A_i : i \in I\}$ be a family of sets indexed by $I$. The disjoint union of this family is the set
$\bigsqcup_{i \in I} A_i = \bigcup_{i \in I} \{(x, i) : x \in A_i\}.$
The elements of the disjoint union are ordered pairs $(x, i)$. Here $i$ serves as an auxiliary index that indicates which $A_i$ the element $x$ came from.
Each of the sets $A_i$ is canonically isomorphic to the set
$A_i^* = \{(x, i) : x \in A_i\}.$
Through this isomorphism, one may consider that $A_i$ is canonically embedded in the disjoint union.
For $i \neq j$, the sets $A_i^*$ and $A_j^*$ are disjoint even if the sets $A_i$ and $A_j$ are not.
In the extreme case where each of the $A_i$ is equal to some fixed set $A$ for each $i \in I$, the disjoint union is the Cartesian product of $A$ and $I$:
$\bigsqcup_{i \in I} A_i = A \times I.$
Occasionally, the notation $\sum_{i \in I} A_i$ is used for the disjoint union of a family of sets, or the notation $A + B$ for the disjoint union of two sets. This notation is meant to be suggestive of the fact that the cardinality of the disjoint union is the sum of the cardinalities of the terms in the family. |
https://en.wikipedia.org/wiki/Code%20generation%20%28compiler%29 | In computing, code generation is part of the process chain of a compiler and converts intermediate representation of source code into a form (e.g., machine code) that can be readily executed by the target system.
Sophisticated compilers typically perform multiple passes over various intermediate forms. This multi-stage process is used because many algorithms for code optimization are easier to apply one at a time, or because the input to one optimization relies on the completed processing performed by another optimization. This organization also facilitates the creation of a single compiler that can target multiple architectures, as only the last of the code generation stages (the backend) needs to change from target to target. (For more information on compiler design, see Compiler.)
The input to the code generator typically consists of a parse tree or an abstract syntax tree. The tree is converted into a linear sequence of instructions, usually in an intermediate language such as three-address code. Further stages of compilation may or may not be referred to as "code generation", depending on whether they involve a significant change in the representation of the program. (For example, a peephole optimization pass would not likely be called "code generation", although a code generator might incorporate a peephole optimization pass.)
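As an illustrative sketch (example mine, not from the article), lowering a tiny expression AST to three-address code might look like this in Python:

```python
import itertools

_temps = itertools.count()

def lower(node, code):
    """Lower a nested-tuple AST such as ('+', 'a', ('*', 'b', 'c')) to three-address code."""
    if isinstance(node, str):        # leaf: a variable name
        return node
    op, lhs, rhs = node
    left, right = lower(lhs, code), lower(rhs, code)
    temp = f"t{next(_temps)}"
    code.append(f"{temp} = {left} {op} {right}")
    return temp

code = []
lower(("+", "a", ("*", "b", "c")), code)
print("\n".join(code))   # t0 = b * c
                         # t1 = a + t0
```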
Major tasks
In addition to the basic conversion from an intermediate representation into a linear sequence of machine instructions, a typical code generator tries to optimize the generated code in some way.
Tasks which are typically part of a sophisticated compiler's "code generation" phase include:
Instruction selection: which instructions to use.
Instruction scheduling: in which order to put those instructions. Scheduling is a speed optimization that can have a critical effect on pipelined machines.
Register allocation: the allocation of variables to processor registers
Debug data generation if required so the code c |
https://en.wikipedia.org/wiki/Semantic%20analysis%20%28compilers%29 | Semantic analysis or context sensitive analysis is a process in compiler construction, usually after parsing, to gather necessary semantic information from the source code. It usually includes type checking, and checks such as making sure a variable is declared before use, which are impossible to describe in the extended Backus–Naur form and thus not easily detected during parsing.
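A minimal sketch of one such check (declared-before-use) over a toy statement list, purely for illustration:

```python
def check_declared_before_use(statements):
    """statements: list of ('decl', name) or ('use', name) tuples, in program order."""
    declared, errors = set(), []
    for kind, name in statements:
        if kind == "decl":
            declared.add(name)
        elif kind == "use" and name not in declared:
            errors.append(f"variable '{name}' used before declaration")
    return errors

print(check_declared_before_use([("decl", "x"), ("use", "x"), ("use", "y")]))
# ["variable 'y' used before declaration"]
```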
See also
Attribute grammar
Context-sensitive language
Semantic analysis (computer science)
References
Compiler construction
Program analysis |
https://en.wikipedia.org/wiki/Online%20analytical%20processing | Online analytical processing, or OLAP (), is an approach to answer multi-dimensional analytical (MDA) queries swiftly in computing. OLAP is part of the broader category of business intelligence, which also encompasses relational databases, report writing and data mining. Typical applications of OLAP include business reporting for sales, marketing, management reporting, business process management (BPM), budgeting and forecasting, financial reporting and similar areas, with new applications emerging, such as agriculture.
The term OLAP was created as a slight modification of the traditional database term online transaction processing (OLTP).
OLAP tools enable users to analyze multidimensional data interactively from multiple perspectives. OLAP consists of three basic analytical operations: consolidation (roll-up), drill-down, and slicing and dicing. Consolidation involves the aggregation of data that can be accumulated and computed in one or more dimensions. For example, all sales offices are rolled up to the sales department or sales division to anticipate sales trends. By contrast, the drill-down is a technique that allows users to navigate through the details. For instance, users can view the sales by individual products that make up a region's sales. Slicing and dicing is a feature whereby users can take out (slicing) a specific set of data of the OLAP cube and view (dicing) the slices from different viewpoints. These viewpoints are sometimes called dimensions (such as looking at the same sales by salesperson, or by date, or by customer, or by product, or by region, etc.).
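For illustration only (assuming the pandas library, which is not mentioned in the article), roll-up and a simple dice can be mimicked with grouping and pivoting:

```python
import pandas as pd

sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West"],
    "product": ["A", "B", "A", "B"],
    "amount":  [100, 150, 200, 50],
})

# Roll-up: consolidate product-level rows up to the region dimension.
print(sales.groupby("region")["amount"].sum())

# Dice: view the finer-grained region-by-product breakdown.
print(sales.pivot_table(values="amount", index="region", columns="product", aggfunc="sum"))
```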
Databases configured for OLAP use a multidimensional data model, allowing for complex analytical and ad hoc queries with a rapid execution time. They borrow aspects of navigational databases, hierarchical databases and relational databases.
OLAP is typically contrasted to OLTP (online transaction processing), which is generally characterized by much less complex queries, in a larger volu |
https://en.wikipedia.org/wiki/SpiderMonkey | SpiderMonkey is an open-source JavaScript and WebAssembly engine by the Mozilla Foundation.
It is the first JavaScript engine, written by Brendan Eich at Netscape Communications, and later released as open source and currently maintained by the Mozilla Foundation. It is used in the Firefox web browser.
History
Eich "wrote JavaScript in ten days" in 1995,
having been "recruited to Netscape with the promise of 'doing Scheme' in the browser".
(The idea of using Scheme was abandoned when "engineering management [decided] that the language must 'look like Java'".) In late 1996, Eich, needing to "pay off [the] substantial technical debt" left from the first year, "stayed home for two weeks to rewrite Mocha as the codebase that became known as SpiderMonkey". (Mocha was the original working name for the language.)
In 2011, Eich transferred management of the SpiderMonkey code to Dave Mandelin.
Versions
Standards
SpiderMonkey implements the ECMA-262 specification (ECMAScript). ECMA-357 (ECMAScript for XML (E4X)) was dropped in early 2013.
Internals
SpiderMonkey is written in C/C++ and contains an interpreter, the IonMonkey JIT compiler, and a garbage collector.
TraceMonkey
TraceMonkey was the first JIT compiler written for the JavaScript language. Initially introduced as an option in a beta release and introduced in Brendan Eich's blog on August 23, 2008, the compiler became part of the mainline release as part of SpiderMonkey in Firefox 3.5, providing "performance improvements ranging between 20 and 40 times faster" than the baseline interpreter in Firefox 3.
Instead of compiling whole functions, TraceMonkey was a tracing JIT, which operates by recording control flow and data types during interpreter execution. This data then informed the construction of trace trees, highly specialized paths of native code.
Improvements to JägerMonkey eventually made TraceMonkey obsolete, especially with the development of the SpiderMonkey type inference engine. TraceMonkey is absent fro |
https://en.wikipedia.org/wiki/Birthday%20attack | A birthday attack is a brute-force collision attack that exploits the mathematics behind the birthday problem in probability theory. This attack can be used to abuse communication between two or more parties. The attack depends on the higher likelihood of collisions found between random attack attempts and a fixed degree of permutations (pigeonholes). With a birthday attack, it is possible to find a collision of a hash function with 50% chance in $\sqrt{2^n} = 2^{n/2}$, with $2^{n-1}$ being the classical preimage resistance security with the same probability. There is a general (though disputed) result that quantum computers can perform birthday attacks, thus breaking collision resistance, in $\sqrt[3]{2^n} = 2^{n/3}$.
Although there are some digital signature vulnerabilities associated with the birthday attack, it cannot be used to break an encryption scheme any faster than a brute-force attack.
Understanding the problem
As an example, consider the scenario in which a teacher with a class of 30 students (n = 30) asks for everybody's birthday (for simplicity, ignore leap years) to determine whether any two students have the same birthday (corresponding to a hash collision as described further). Intuitively, this chance may seem small. Counter-intuitively, the probability that at least one student has the same birthday as any other student on any day is around 70% (for n = 30), from the formula $1 - \frac{365!}{(365 - n)! \cdot 365^n}$.
If the teacher had picked a specific day (say, 16 September), then the chance that at least one student was born on that specific day is $1 - \left(\frac{364}{365}\right)^{30}$, about 7.9%.
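Both probabilities are easy to verify numerically; a short Python check (illustrative only):

```python
import math

n = 30
p_any_pair = 1 - math.perm(365, n) / 365**n   # some pair shares a birthday: ~0.706
p_fixed_day = 1 - (364 / 365) ** n            # someone born on one fixed day: ~0.079
print(round(p_any_pair, 3), round(p_fixed_day, 3))
```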
In a birthday attack, the attacker prepares many different variants of benign and malicious contracts, each having a digital signature. A pair of benign and malicious contracts with the same signature is sought. In this fictional example, suppose that the digital signature of a string is the first byte of its SHA-256 hash. The pair found is indicated in green – note that finding a pair of benign contracts (blue) or a pair of malicious contracts (red) is useless. After the victim |
https://en.wikipedia.org/wiki/Actuarial%20science | Actuarial science is the discipline that applies mathematical and statistical methods to assess risk in insurance, pension, finance, investment and other industries and professions. More generally, actuaries apply rigorous mathematics to model matters of uncertainty and life expectancy.
Actuaries are professionals trained in this discipline. In many countries, actuaries must demonstrate their competence by passing a series of rigorous professional examinations focused in fields such as probability and predictive analysis.
Actuarial science includes a number of interrelated subjects, including mathematics, probability theory, statistics, finance, economics, financial accounting and computer science. Historically, actuarial science used deterministic models in the construction of tables and premiums. The science has gone through revolutionary changes since the 1980s due to the proliferation of high speed computers and the union of stochastic actuarial models with modern financial theory.
Many universities have undergraduate and graduate degree programs in actuarial science. In 2010, a study published by job search website CareerCast ranked actuary as the #1 job in the United States. The study used five key criteria to rank jobs: environment, income, employment outlook, physical demands, and stress. A similar study by U.S. News & World Report in 2006 included actuaries among the 25 Best Professions that it expects will be in great demand in the future.
Subfields
Life insurance, pensions and healthcare
Actuarial science became a formal mathematical discipline in the late 17th century with the increased demand for long-term insurance coverage such as burial, life insurance, and annuities. These long term coverages required that money be set aside to pay future benefits, such as annuity and death benefits many years into the future. This requires estimating future contingent events, such as the rates of mortality by age, as well as the development of mathematical tec |
https://en.wikipedia.org/wiki/Biophilia%20hypothesis | The biophilia hypothesis (also called BET) suggests that humans possess an innate tendency to seek connections with nature and other forms of life. Edward O. Wilson introduced and popularized the hypothesis in his book, Biophilia (1984). He defines biophilia as "the urge to affiliate with other forms of life".
Natural affinity for living systems
"Biophilia" is an innate affinity of life or living systems. The term was first used by Erich Fromm to describe a psychological orientation of being attracted to all that is alive and vital. Wilson uses the term in a related sense when he suggests that biophilia describes "the connections that human beings subconsciously seek with the rest of life." He proposed the possibility that the deep affiliations humans have with other life forms and nature as a whole are rooted in our biology. Both positive and negative (including phobic) affiliations toward natural objects (species, phenomenon) as compared to artificial objects are evidence for biophilia.
Although named by Fromm, the concept of biophilia has been proposed and defined many times over. Aristotle was one of many to put forward a concept that could be summarized as "love of life". Diving into the term philia, or friendship, Aristotle evokes the idea of reciprocity and how friendships are beneficial to both parties in more than just one way, but especially in the way of happiness.
The hypothesis has since been developed as part of theories of evolutionary psychology. From an evolutionary perspective, the fact that people are drawn towards life and nature can be explained in part by our evolutionary history of residing in natural environments; only recently in our history have we shifted towards an urbanized lifestyle. These connections to nature can still be seen in people today as people gravitate towards, identify with, and desire to connect with nature. These connections are not limited to any one component part of nature; in general, people show connections to a wide r |
https://en.wikipedia.org/wiki/All-silica%20fiber | All-silica fiber, or silica-silica fiber, is an optical fiber whose core and cladding are made of silica glass. The refractive index of the core glass is higher than that of the cladding. These fibers are typically step-index fibers. The cladding of an all-silica fiber should not be confused with the polymer overcoat of the fiber.
All-silica fiber is usually used as the medium for the purpose of transmitting optical signals. It is of technical interest in the fields of communications, broadcasting and television, due to its physical properties of low transmission loss, large bandwidth and light weight.
Applications
The practical application of optical fibers in various optical networks determines the requirements for the technical performance of optical fibers. For short-distance fiber-optic transmission networks, the multi-mode optical fiber is suitable for laser transmission and wider bandwidths, so as to support larger capacity of serial signal information transmission. For long-distance submarine optical cable transmission systems, in order to reduce the number of expensive optical fiber amplifiers, it is important to consider using optical fibers with large mode field diameter area and negative dispersion to increase the transmission distance. The focus of the land-based long-distance transmission system is to be able to transmit more wavelengths, each of which should be transmitted at a high rate as much as possible. Even if the dispersion value of the optical fiber with the changes of the wavelength is minimum, the dispersion of fiber still needs to be solved. For local area networks, since the transmission distance is relatively short, the focus of consideration is on the cost of the optical network rather than the cost of transmission. In other words, it is necessary to solve the add/drop multiplexing problem of the upper/lower path in the optical fiber transmission system, and at the same time, the cost of the add/drop wavelength must be minimized.
Dis |
https://en.wikipedia.org/wiki/Hermitian%20matrix | In mathematics, a Hermitian matrix (or self-adjoint matrix) is a complex square matrix that is equal to its own conjugate transpose—that is, the element in the $i$-th row and $j$-th column is equal to the complex conjugate of the element in the $j$-th row and $i$-th column, for all indices $i$ and $j$: $a_{ij} = \overline{a_{ji}}$
or in matrix form: $A = \overline{A^{\mathsf T}}.$
Hermitian matrices can be understood as the complex extension of real symmetric matrices.
If the conjugate transpose of a matrix $A$ is denoted by $A^{\mathsf H}$, then the Hermitian property can be written concisely as $A = A^{\mathsf H}.$
Hermitian matrices are named after Charles Hermite, who demonstrated in 1855 that matrices of this form share a property with real symmetric matrices of always having real eigenvalues. Other, equivalent notations in common use are $A^{\mathsf H} = A^\dagger = A^\ast$, although in quantum mechanics, $A^\ast$ typically means the complex conjugate only, and not the conjugate transpose.
Alternative characterizations
Hermitian matrices can be characterized in a number of equivalent ways, some of which are listed below:
Equality with the adjoint
A square matrix $A$ is Hermitian if and only if it is equal to its adjoint, that is, it satisfies $\langle \mathbf{w}, A\mathbf{v} \rangle = \langle A\mathbf{w}, \mathbf{v} \rangle$
for any pair of vectors $\mathbf{v}, \mathbf{w}$, where $\langle \cdot, \cdot \rangle$ denotes the inner product operation.
This is also the way that the more general concept of self-adjoint operator is defined.
Reality of quadratic forms
An $n \times n$ matrix $A$ is Hermitian if and only if $\langle \mathbf{v}, A\mathbf{v} \rangle \in \mathbb{R}$ for all $\mathbf{v} \in \mathbb{C}^n$.
Spectral properties
A square matrix is Hermitian if and only if it is unitarily diagonalizable with real eigenvalues.
Applications
Hermitian matrices are fundamental to quantum mechanics because they describe operators with necessarily real eigenvalues. An eigenvalue of an operator on some quantum state is one of the possible measurement outcomes of the operator, which necessitates the need for operators with real eigenvalues.
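A numerical illustration (assuming numpy; the example matrix is chosen here, not taken from the article):

```python
import numpy as np

A = np.array([[2, 3 - 1j],
              [3 + 1j, 5]])

print(np.allclose(A, A.conj().T))   # True: A equals its own conjugate transpose
print(np.linalg.eigvalsh(A))        # eigenvalues are real, as expected for a Hermitian matrix
```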
Examples and solutions
In this section, the conjugate transpose of matrix $A$ is denoted as $A^{\mathsf H}$, the transpose of matrix $A$ is denoted as $A^{\mathsf T}$, and the conjugate of matrix $A$ is denoted as $\overline{A}$.
See the following example:
The diagonal elements m |
https://en.wikipedia.org/wiki/LCARS | In the Star Trek fictional universe, LCARS (; an acronym for Library Computer Access/Retrieval System) is a computer operating system. Within Star Trek chronology, the term was first used in the Star Trek: The Next Generation series.
Production
The LCARS graphical user interface was designed by scenic art supervisor and technical consultant Michael Okuda. The original design concept was influenced by a request from Gene Roddenberry that the instrument panels not have a great deal of activity on them. This minimalized look was designed to give a sense that the technology was much more advanced than in the original Star Trek.
On Star Trek: The Next Generation, many of the buttons were labeled with the initials of members of the production crew and were referred to as "Okudagrams."
PADD
The LCARS interface is often seen used on a PADD (Personal Access Display Device), a hand-held computer.
Similarly sized modern tablet computers such as the Nexus 7, Amazon Fire, BlackBerry PlayBook, and iPad Mini have been compared with the PADD. Several mobile apps were created which offered an LCARS-style interface.
Legal
CBS Television Studios claims to hold the copyright on LCARS. Google was sent a DMCA letter to remove the Android app called Tricorder since its use of the LCARS interface was un-licensed. The application was later re-uploaded under a different title, but it was removed again.
References
External links
Fictional software
Operating systems
Star Trek terminology |
https://en.wikipedia.org/wiki/Transfinite%20number | In mathematics, transfinite numbers or infinite numbers are numbers that are "infinite" in the sense that they are larger than all finite numbers. These include the transfinite cardinals, which are cardinal numbers used to quantify the size of infinite sets, and the transfinite ordinals, which are ordinal numbers used to provide an ordering of infinite sets. The term transfinite was coined in 1895 by Georg Cantor, who wished to avoid some of the implications of the word infinite in connection with these objects, which were, nevertheless, not finite. Few contemporary writers share these qualms; it is now accepted usage to refer to transfinite cardinals and ordinals as infinite numbers. Nevertheless, the term transfinite also remains in use.
Notable work on transfinite numbers was done by Wacław Sierpiński: Leçons sur les nombres transfinis (1928 book) much expanded into Cardinal and Ordinal Numbers (1958, 2nd ed. 1965).
Definition
Any finite natural number can be used in at least two ways: as an ordinal and as a cardinal. Cardinal numbers specify the size of sets (e.g., a bag of five marbles), whereas ordinal numbers specify the order of a member within an ordered set (e.g., "the third man from the left" or "the twenty-seventh day of January"). When extended to transfinite numbers, these two concepts are no longer in one-to-one correspondence. A transfinite cardinal number is used to describe the size of an infinitely large set, while a transfinite ordinal is used to describe the location within an infinitely large set that is ordered. The most notable ordinal and cardinal numbers are, respectively:
$\omega$ (Omega): the lowest transfinite ordinal number. It is also the order type of the natural numbers under their usual linear ordering.
$\aleph_0$ (Aleph-null): the first transfinite cardinal number. It is also the cardinality of the natural numbers. If the axiom of choice holds, the next higher cardinal number is aleph-one, $\aleph_1$. If not, there may be other cardinals which are incomparable with aleph-one and |
https://en.wikipedia.org/wiki/Automaton | An automaton (; : automata or automatons) is a relatively self-operating machine, or control mechanism designed to automatically follow a sequence of operations, or respond to predetermined instructions. Some automata, such as bell strikers in mechanical clocks, are designed to give the illusion to the casual observer that they are operating under their own power or will, like a mechanical robot. The term has long been commonly associated with automated puppets that resemble moving humans or animals, built to impress and/or to entertain people.
Animatronics are a modern type of automata with electronics, often used for the portrayal of characters or creatures in films and in theme park attractions.
Etymology
The word "automaton" is the latinization of the Ancient Greek , , (neuter) "acting of one's own will". This word was first used by Homer to describe an automatic door opening, or automatic movement of wheeled tripods. It is more often used to describe non-electronic moving machines, especially those that have been made to resemble human or animal actions, such as the jacks on old public striking clocks, or the cuckoo and any other animated figures on a cuckoo clock.
History
Ancient
In ancient Egyptian legends, statues of divinities, mostly made of stone, metal or wood, were animated and played a key role in religious ceremonies. They were believed to have a soul (a kꜣ), derived from the divinity they represented. In the New Kingdom of Egypt, from the 16th century BC until the 11th century BC, ancient Egyptians would frequently consult these statues for advice. The statues would reply with a movement of the head. According to Egyptian lore, pharaoh Hatshepsut dispatched her squadron to the "Land of Incense" after consulting with the statue of Amun.
There are many examples of automata in Greek mythology: Hephaestus created automata for his workshop; Talos was an artificial man of bronze; King Alkinous of the Phaiakians employed gold and silver watchdogs. Acco |
https://en.wikipedia.org/wiki/Programming%20paradigm | Programming paradigms are a way to classify programming languages based on their features. Languages can be classified into multiple paradigms.
Some paradigms are concerned mainly with implications for the execution model of the language, such as allowing side effects, or whether the sequence of operations is defined by the execution model. Other paradigms are concerned mainly with the way that code is organized, such as grouping a code into units along with the state that is modified by the code. Yet others are concerned mainly with the style of syntax and grammar.
Some common programming paradigms are listed below; a short code sketch contrasting the imperative and functional styles follows the list:
Imperative in which the programmer instructs the machine how to change its state,
procedural which groups instructions into procedures,
object-oriented which groups instructions with the part of the state they operate on,
Declarative in which the programmer merely declares properties of the desired result, but not how to compute it
functional in which the desired result is declared as the value of a series of function applications,
logic in which the desired result is declared as the answer to a question about a system of facts and rules,
reactive in which the desired result is declared with data streams and the propagation of change
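A minimal, illustrative contrast of the imperative and functional styles in Python (example mine, not from the article):

```python
from functools import reduce

data = [1, 2, 3, 4]

# Imperative: explicit state (total) is mutated step by step in a controlled order.
total = 0
for x in data:
    total += x

# Functional: the result is declared as the value of a series of function applications.
total_functional = reduce(lambda acc, x: acc + x, data, 0)

assert total == total_functional == 10
```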
Symbolic techniques such as reflection, which allow the program to refer to itself, might also be considered as a programming paradigm. However, this is compatible with the major paradigms and thus is not a real paradigm in its own right.
For example, languages that fall into the imperative paradigm have two main features: they state the order in which operations occur, with constructs that explicitly control that order, and they allow side effects, in which state can be modified at one point in time, within one unit of code, and then later read at a different point in time inside a different unit of code. The communication between the units of code is not explicit. Meanwhile, in object-oriented programming, code is orga |
https://en.wikipedia.org/wiki/Constraint%20programming | Constraint programming (CP) is a paradigm for solving combinatorial problems that draws on a wide range of techniques from artificial intelligence, computer science, and operations research. In constraint programming, users declaratively state the constraints on the feasible solutions for a set of decision variables. Constraints differ from the common primitives of imperative programming languages in that they do not specify a step or sequence of steps to execute, but rather the properties of a solution to be found. In addition to constraints, users also need to specify a method to solve these constraints. This typically draws upon standard methods like chronological backtracking and constraint propagation, but may use customized code like a problem-specific branching heuristic.
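A toy sketch of chronological backtracking over finite domains (illustrative Python, not a real solver):

```python
def backtrack(variables, domains, consistent, assignment):
    """Extend a partial assignment one variable at a time, undoing choices that fail."""
    if len(assignment) == len(variables):
        return dict(assignment)
    var = variables[len(assignment)]
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            solution = backtrack(variables, domains, consistent, assignment)
            if solution is not None:
                return solution
        del assignment[var]
    return None

# Declaratively stated constraints: x < y and x + y == 6, with x, y in 0..5.
def consistent(a):
    return ("x" not in a or "y" not in a) or (a["x"] < a["y"] and a["x"] + a["y"] == 6)

print(backtrack(["x", "y"], {"x": range(6), "y": range(6)}, consistent, {}))
# {'x': 1, 'y': 5}
```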
Constraint programming takes its root from and can be expressed in the form of constraint logic programming, which embeds constraints into a logic program. This variant of logic programming is due to Jaffar and Lassez, who extended in 1987 a specific class of constraints that were introduced in Prolog II. The first implementations of constraint logic programming were Prolog III, CLP(R), and CHIP.
Instead of logic programming, constraints can be mixed with functional programming, term rewriting, and imperative languages.
Programming languages with built-in support for constraints include Oz (functional programming) and Kaleidoscope (imperative programming). Mostly, constraints are implemented in imperative languages via constraint solving toolkits, which are separate libraries for an existing imperative language.
Constraint logic programming
Constraint programming is an embedding of constraints in a host language. The first host languages used were logic programming languages, so the field was initially called constraint logic programming. The two paradigms share many important features, like logical variables and backtracking. Today most Prolog implementations include one or more librari |
https://en.wikipedia.org/wiki/E-card | E-card is an electronic postcard or greeting card, with the primary difference being that it is created using digital media instead of paper or other traditional materials. E-cards are made available in many different ways, usually on various Internet sites. They can be sent to a recipient virtually, usually via e-mail or an instant messaging service.
Since e-cards are digital "content", they are highly editable, allowing them to be extensively personalized by the sender. They are also capable of presenting animated gifs or videos.
Typically a catalog of E-cards is made available on a publisher's website. After selecting a card, the sender can personalize it to various degrees by adding a message, photo, or video. Finally, the sender specifies the recipient's e-mail address and the website delivers an e-mail message to the recipient on behalf of the sender.
Technological evolution
Since its conception in 1994 by Judith Donath, the technology behind the E-Card has changed significantly. One technical aspect that has remained mostly constant is the delivery mechanism: the e-mail received by the recipient contains not the E-card itself, but an individually coded link back to the publisher's website that displays the sender's card.
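A schematic illustration of that delivery mechanism (hypothetical names and URL; not any publisher's actual API):

```python
import uuid

cards = {}  # the publisher's store of created cards

def create_card(message):
    """Store the card and return the individually coded link that goes into the e-mail."""
    code = uuid.uuid4().hex                     # hard-to-guess per-card identifier
    cards[code] = message
    return f"https://cards.example.com/view?id={code}"

print(create_card("Happy birthday!"))           # only this link, not the card, is e-mailed
```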
Postcards and greeting cards
Like their paper counterparts, "postcards" use visual art (static or animated images or video) and provide a space for a personal note to be added. These were the first type of E-cards in use. Like their paper counterparts, cyber "greeting cards" provide a greeting along with visual art. Variations range from E-cards with fixed greetings like a paper card to selectable greetings (from drop-down lists or other selection options) to changeable suggested greetings.
Flash animation
This type of E-card is based on two-dimensional vector animation controlled with a scripting language. The format is proprietary to Adobe; however, widespread usage of Adobe's software prior to its discontinuation allowed this type of |
https://en.wikipedia.org/wiki/Coevolution | In biology, coevolution occurs when two or more species reciprocally affect each other's evolution through the process of natural selection. The term sometimes is used for two traits in the same species affecting each other's evolution, as well as gene-culture coevolution.
Charles Darwin mentioned evolutionary interactions between flowering plants and insects in On the Origin of Species (1859). Although he did not use the word coevolution, he suggested how plants and insects could evolve through reciprocal evolutionary changes. Naturalists in the late 1800s studied other examples of how interactions among species could result in reciprocal evolutionary change. Beginning in the 1940s, plant pathologists developed breeding programs that were examples of human-induced coevolution. Development of new crop plant varieties that were resistant to some diseases favored rapid evolution in pathogen populations to overcome those plant defenses. That, in turn, required the development of yet new resistant crop plant varieties, producing an ongoing cycle of reciprocal evolution in crop plants and diseases that continues to this day.
Coevolution as a major topic for study in nature expanded rapidly from the 1960s, when Daniel H. Janzen showed coevolution between acacias and ants (see below) and Paul R. Ehrlich and Peter H. Raven suggested how coevolution between plants and butterflies may have contributed to the diversification of species in both groups. The theoretical underpinnings of coevolution are now well-developed (e.g., the geographic mosaic theory of coevolution), and demonstrate that coevolution can play an important role in driving major evolutionary transitions such as the evolution of sexual reproduction or shifts in ploidy. More recently, it has also been demonstrated that coevolution can influence the structure and function of ecological communities, the evolution of groups of mutualists such as plants and their pollinators, and the dynamics of infectious disease |
https://en.wikipedia.org/wiki/Message%20switching | In telecommunications, message switching involves messages routed in their entirety, one hop at a time. It evolved from circuit switching and was the precursor of packet switching.
An example of message switching is email in which the message is sent through different intermediate servers to reach the mail server for storing. Unlike packet switching, the message is not divided into smaller units and sent independently over the network.
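A toy illustration of the store-and-forward idea behind message switching (hypothetical hop names; Python):

```python
def store_and_forward(message, hops):
    """Each switch stores the entire message before forwarding it on, never splitting it up."""
    for hop in hops:
        print(f"{hop}: stored the full {len(message)}-byte message, forwarding intact")
    return message

store_and_forward(b"hello, world", ["switch-a", "switch-b", "mail-server"])
```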
History
Western Union operated a message switching system, Plan 55-A, for processing telegrams in the 1950s. Leonard Kleinrock wrote a doctoral thesis at the Massachusetts Institute of Technology in 1962 that analyzed queueing delays in this system.
Message switching was built by Collins Radio Company, Newport Beach, California, during the period 1959–1963 for sale to large airlines, banks and railroads.
The original design for the ARPANET was Wesley Clark's April 1967 proposal for using Interface Message Processors to create a message switching network. After the seminal meeting at the first ACM Symposium on Operating Systems Principles in October 1967, where Roger Scantlebury presented Donald Davies work and mentioned the work of Paul Baran, Larry Roberts incorporated packet switching into the design.
The SITA High-Level Network (HLN) became operational in 1969, handling data traffic for airlines in real time via a message-switched network over common carrier leased lines. It was organised to act like a packet-switching network.
Message switching systems are nowadays mostly implemented over packet-switched or circuit-switched data networks. Each message is treated as a separate entity. Each message contains addressing information, and at each switch this information is read and the transfer path to the next switch is decided. Depending on network conditions, a conversation of several messages may not be transferred over the same path. Each message is stored (usually on hard drive due to RAM limitations) before being transmi |
https://en.wikipedia.org/wiki/Lignin | Lignin is a class of complex organic polymers that form key structural materials in the support tissues of most plants. Lignins are particularly important in the formation of cell walls, especially in wood and bark, because they lend rigidity and do not rot easily. Chemically, lignins are polymers made by cross-linking phenolic precursors.
History
Lignin was first mentioned in 1813 by the Swiss botanist A. P. de Candolle, who described it as a fibrous, tasteless material, insoluble in water and alcohol but soluble in weak alkaline solutions, and which can be precipitated from solution using acid. He named the substance "lignine", which is derived from the Latin word lignum, meaning wood. It is one of the most abundant organic polymers on Earth, exceeded only by cellulose and chitin. Lignin constitutes 30% of terrestrial non-fossil organic carbon on Earth, and 20 to 35% of the dry mass of wood.
Lignin is present in red algae, which suggests that the common ancestor of plants and red algae also synthesised lignin. This finding also suggests that the original function of lignin was structural as it plays this role in the red alga Calliarthron, where it supports joints between calcified segments.
Structure
Lignin is a collection of highly heterogeneous polymers derived from a handful of precursor lignols. Heterogeneity arises from the diversity and degree of crosslinking between these lignols. The lignols that crosslink are of three main types, all derived from phenylpropane: coniferyl alcohol (4-hydroxy-3-methoxyphenylpropane; its radical, G, is sometimes called guaiacyl), sinapyl alcohol (3,5-dimethoxy-4-hydroxyphenylpropane; its radical, S, is sometimes called syringyl), and paracoumaryl alcohol (4-hydroxyphenylpropane; its radical, H, is sometimes called 4-hydroxyphenyl).
The relative amounts of the precursor "monomers" (lignols or monolignols) vary according to the plant source. Lignins are typically classified according to their syringyl/guaiacyl (S/G) ratio |
https://en.wikipedia.org/wiki/Attractor | In the mathematical field of dynamical systems, an attractor is a set of states toward which a system tends to evolve, for a wide variety of starting conditions of the system. System values that get close enough to the attractor values remain close even if slightly disturbed.
In finite-dimensional systems, the evolving variable may be represented algebraically as an n-dimensional vector. The attractor is a region in n-dimensional space. In physical systems, the n dimensions may be, for example, two or three positional coordinates for each of one or more physical entities; in economic systems, they may be separate variables such as the inflation rate and the unemployment rate.
If the evolving variable is two- or three-dimensional, the attractor of the dynamic process can be represented geometrically in two or three dimensions, (as for example in the three-dimensional case depicted to the right). An attractor can be a point, a finite set of points, a curve, a manifold, or even a complicated set with a fractal structure known as a strange attractor (see strange attractor below). If the variable is a scalar, the attractor is a subset of the real number line. Describing the attractors of chaotic dynamical systems has been one of the achievements of chaos theory.
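A small numerical illustration (the logistic map, chosen here for concreteness and not taken from the article): trajectories from different starting conditions approach the same fixed-point attractor.

```python
def iterate_logistic(x, r=2.8, steps=200):
    """Iterate x -> r*x*(1-x); for 1 < r < 3 the map has an attracting fixed point."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

print([round(iterate_logistic(x0), 4) for x0 in (0.1, 0.5, 0.9)])
# all ~= 0.6429, i.e. 1 - 1/r: nearby states converge onto the attractor
```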
A trajectory of the dynamical system in the attractor does not have to satisfy any special constraints except for remaining on the attractor, forward in time. The trajectory may be periodic or chaotic. If a set of points is periodic or chaotic, but the flow in the neighborhood is away from the set, the set is not an attractor, but instead is called a repeller (or repellor).
Motivation of attractors
A dynamical system is generally described by one or more differential or difference equations. The equations of a given dynamical system specify its behavior over any given short period of time. To determine the system's behavior for a longer period, it is often necessary to integrate the equations, either throu |
https://en.wikipedia.org/wiki/Phase%20space | In dynamical systems theory and control theory, a phase space or state space is a space in which all possible "states" of a dynamical system or a control system are represented, with each possible state corresponding to one unique point in the phase space. For mechanical systems, the phase space usually consists of all possible values of position and momentum variables. It is the direct product of direct space and reciprocal space. The concept of phase space was developed in the late 19th century by Ludwig Boltzmann, Henri Poincaré, and Josiah Willard Gibbs.
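For a concrete sketch (an idealized frictionless harmonic oscillator with unit mass and frequency; example mine): each state is a point (q, p) of position and momentum, and the trajectory traces a circle in phase space.

```python
import math

# Exact solution q(t) = cos(t), p(t) = -sin(t): sample a few points on the trajectory.
for t in (0.0, 0.5, 1.0, 1.5):
    q, p = math.cos(t), -math.sin(t)
    print(round(q, 3), round(p, 3), round(q * q + p * p, 3))  # q^2 + p^2 stays 1.0
```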
Principles
In a phase space, every degree of freedom or parameter of the system is represented as an axis of a multidimensional space; a one-dimensional system is called a phase line, while a two-dimensional system is called a phase plane. For every possible state of the system or allowed combination of values of the system's parameters, a point is included in the multidimensional space. The system's evolving state over time traces a path (a phase-space trajectory for the system) through the high-dimensional space. The phase-space trajectory represents the set of states compatible with starting from one particular initial condition, located in the full phase space that represents the set of states compatible with starting from any initial condition. As a whole, the phase diagram represents all that the system can be, and its shape can easily elucidate qualities of the system that might not be obvious otherwise. A phase space may contain a great number of dimensions. For instance, a gas containing many molecules may require a separate dimension for each particle's x, y and z positions and momenta (6 dimensions for an idealized monatomic gas), and for more complex molecular systems additional dimensions are required to describe vibrational modes of the molecular bonds, as well as spin around 3 axes. Phase spaces are easier to use when analyzing the behavior of mechanical systems restricted to motion around and al |
https://en.wikipedia.org/wiki/42%20%28number%29 | 42 (forty-two) is the natural number that follows 41 and precedes 43.
Mathematics
Forty-two (42) is a pronic number and an abundant number; its prime factorization (2 × 3 × 7) makes it the second sphenic number and also the second of the form (2 × 3 × r).
Additional properties of the number 42 include:
It is the number of isomorphism classes of all simple and oriented directed graphs on four vertices. In other words, it is the number of all possible outcomes (up to isomorphism) of a tournament consisting of four teams, where the game between any pair of teams has one of three possible outcomes: the first team wins, the second team wins, or there is a draw. The group stage of the FIFA World Cup is a good example.
It is the third primary pseudoperfect number.
It is a Catalan number. Consequently, 42 is the number of noncrossing partitions of a set of five elements, the number of triangulations of a heptagon, the number of rooted ordered binary trees with six leaves, the number of ways in which five pairs of nested parentheses can be arranged, etc.
It is an alternating sign matrix number, that is, the number of 4-by-4 alternating sign matrices.
It is the smallest number n that is equal to the sum of the nonprime proper divisors of n, i.e., 42 = 1 + 6 + 14 + 21.
It is the number of partitions of 10—the number of ways of expressing 10 as a sum of positive integers (note a different sense of partition from that above).
The partition 1 + 1 + 1 + 1 + 1 + 2 + 3, one of the 42 unordered integer partitions of 10, itself has 42 ordered compositions, since 7!/(5! × 1! × 1!) = 42 (see the sketch after this list).
The angle of 42 degrees can be constructed with only a compass and straightedge, as the difference between the constructible angles of 60 and 18 degrees, the latter arising from the golden ratio.
Given 27 same-size cubes whose nominal values progress from 1 to 27, a 3 × 3 × 3 magic cube can be constructed such that every row, column, and corridor, and every diagonal passing through the center, is composed of three numbers whose sum of values is 42.
It is the third pentadecagonal number. It is a meandric |
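Several of the properties above are small enough to verify directly; the following Python sketch (ours, not the article's) checks the Catalan, partition-count, and composition claims.

from math import comb, factorial

print(comb(10, 5) // 6)                 # the 5th Catalan number: 42

def partitions(n, max_part=None):
    # Count the partitions of n into parts of size at most max_part.
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    return sum(partitions(n - k, k) for k in range(1, min(n, max_part) + 1))

print(partitions(10))                   # 42 partitions of 10
print(factorial(7) // factorial(5))     # 42 orderings of 1+1+1+1+1+2+3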
https://en.wikipedia.org/wiki/Mortality%20rate | Mortality rate, or death rate, is a measure of the number of deaths (in general, or due to a specific cause) in a particular population, scaled to the size of that population, per unit of time. Mortality rate is typically expressed in units of deaths per 1,000 individuals per year; thus, a mortality rate of 9.5 (out of 1,000) in a population of 1,000 would mean 9.5 deaths per year in that entire population, or 0.95% out of the total. It is distinct from "morbidity", which is either the prevalence or incidence of a disease, and also from the incidence rate (the number of newly appearing cases of the disease per unit of time).
An important specific mortality rate measure is the crude death rate, which looks at mortality from all causes in a given time interval for a given population; the CIA, for instance, estimates the global crude death rate at 7.7 deaths per 1,000 people in a population per year. In a generic form, mortality rates can be calculated as (d / p) × 10^n, where d represents the deaths from whatever cause of interest is specified that occur within a given time period, p represents the size of the population in which the deaths occur (however this population is defined or limited), and 10^n is the conversion factor from the resulting fraction to another unit (e.g., multiplying by 10^3 to get the mortality rate per 1,000 individuals).
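A minimal sketch of the generic formula just described (the function name is ours):

def mortality_rate(deaths, population, per=1000):
    # (d / p) * 10^n, with `per` playing the role of 10^n
    return deaths / population * per

print(round(mortality_rate(2419900, 290810000, per=100000)))  # ~832, the 2003 U.S. example below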
Crude death rate, globally
The crude death rate is defined as "the mortality rate from all causes of death for a population," calculated as the "[t]otal number of deaths during a given time interval" divided by the "[m]id-interval population", per 1,000 or 100,000; for instance, the population of the U.S. was around 290,810,000 in 2003, and in that year, approximately 2,419,900 deaths occurred in total, giving a crude death (mortality) rate of 832 deaths per 100,000. The CIA estimates the U.S. crude death rate at 8.3 per 1,000, while it estimates the global rate at 7.7 per 1,000.
According to the World Health |
https://en.wikipedia.org/wiki/Multiple%20dispatch | Multiple dispatch or multimethods is a feature of some programming languages in which a function or method can be dynamically dispatched based on the run-time (dynamic) type or, in the more general case, some other attribute of more than one of its arguments. This is a generalization of single-dispatch polymorphism where a function or method call is dynamically dispatched based on the derived type of the object on which the method has been called. Multiple dispatch routes the dynamic dispatch to the implementing function or method using the combined characteristics of one or more arguments.
Understanding dispatch
Developers of computer software typically organize source code into named blocks variously called subroutines, procedures, subprograms, functions, or methods. The code in the function is executed by calling it – executing a piece of code that references its name. This transfers control temporarily to the called function; when the function's execution has completed, control is typically transferred back to the instruction in the caller that follows the reference.
Function names are usually selected so as to be descriptive of the function's purpose. It is sometimes desirable to give several functions the same name, often because they perform conceptually similar tasks, but operate on different types of input data. In such cases, the name reference at the function call site is not sufficient for identifying the block of code to be executed. Instead, the number and type of the arguments to the function call are also used to select among several function implementations.
In more conventional, i.e., single-dispatch object-oriented programming languages, when invoking a method (sending a message in Smalltalk, calling a member function in C++), one of its arguments is treated specially and used to determine which of the (potentially many) classes of methods of that name is to be applied. In many languages, the special argument is indicated syntactically; for ex |
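A minimal multiple-dispatch sketch in Python (illustrative only; the registry and decorator are ours, and exact runtime types are matched with no subtype resolution):

registry = {}

def defmethod(*types):
    # Register an implementation under the tuple of argument types.
    def register(fn):
        registry[types] = fn
        return fn
    return register

def collide(a, b):
    # Dispatch on the runtime types of BOTH arguments.
    return registry[(type(a), type(b))](a, b)

class Asteroid: pass
class Spaceship: pass

@defmethod(Asteroid, Spaceship)
def _(a, b): return "asteroid hits spaceship"

@defmethod(Spaceship, Asteroid)
def _(a, b): return "spaceship hits asteroid"

print(collide(Spaceship(), Asteroid()))  # selected by both argument types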
https://en.wikipedia.org/wiki/Generic%20function | In computer programming, a generic function is a function defined for polymorphism.
In statically typed languages
In statically typed languages (such as C++ and Java), the term generic functions refers to a mechanism for compile-time polymorphism (static dispatch), specifically parametric polymorphism. These are functions defined with type parameters, intended to be resolved with compile-time type information. The compiler uses these types to instantiate suitable versions, resolving any function overloading appropriately.
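A short sketch of a generic (parametrically polymorphic) function, shown in Python for consistency with the other examples here; the type variable T is resolved by a static type checker rather than by a compiler as in C++ or Java:

from typing import TypeVar

T = TypeVar("T")

def first(items: list[T]) -> T:
    # One definition serves every element type.
    return items[0]

print(first([1, 2, 3]))        # T is int here
print(first(["a", "b", "c"]))  # T is str here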
In Common Lisp Object System
In some systems for object-oriented programming such as the Common Lisp Object System (CLOS) and Dylan, a generic function is an entity made up of all methods having the same name. Typically a generic function is an instance of a class that inherits both from function and standard-object. Thus generic functions are both functions (that can be called with and applied to arguments) and ordinary objects. The book The Art of the Metaobject Protocol explains the implementation and use of CLOS generic functions in detail.
One of the early object-oriented programming extensions to Lisp is Flavors. It used the usual message sending paradigm influenced by Smalltalk. The Flavors syntax to send a message is:
(send object :message)
With New Flavors, it was decided the message should be a real function and the usual function calling syntax should be used:
(message object)
message now is a generic function, an object and function in its own right. Individual implementations of the message are called methods.
The same idea was implemented in CommonLoops. New Flavors and CommonLoops were the main influence for the Common Lisp Object System.
Example
Common Lisp
Define a generic function with two parameters object-1 and object-2. The name of the generic function is collide.
(defgeneric collide (object-1 object-2))
Methods belonging to the generic function are defined outside of classes.
Here we define a method for the gen |
https://en.wikipedia.org/wiki/List%20of%20statistics%20articles |
0–9
1.96
2SLS (two-stage least squares) redirects to instrumental variable
3SLS – see three-stage least squares
68–95–99.7 rule
100-year flood
A
A priori probability
Abductive reasoning
Absolute deviation
Absolute risk reduction
Absorbing Markov chain
ABX test
Accelerated failure time model
Acceptable quality limit
Acceptance sampling
Accidental sampling
Accuracy and precision
Accuracy paradox
Acquiescence bias
Actuarial science
Adapted process
Adaptive estimator
Additive Markov chain
Additive model
Additive smoothing
Additive white Gaussian noise
Adjusted Rand index – see Rand index (subsection)
ADMB software
Admissible decision rule
Age adjustment
Age-standardized mortality rate
Age stratification
Aggregate data
Aggregate pattern
Akaike information criterion
Algebra of random variables
Algebraic statistics
Algorithmic inference
Algorithms for calculating variance
All models are wrong
All-pairs testing
Allan variance
Alignments of random points
Almost surely
Alpha beta filter
Alternative hypothesis
Analyse-it – software
Analysis of categorical data
Analysis of covariance
Analysis of molecular variance
Analysis of rhythmic variance
Analysis of variance
Analytic and enumerative statistical studies
Ancestral graph
Anchor test
Ancillary statistic
ANCOVA redirects to Analysis of covariance
Anderson–Darling test
ANOVA
ANOVA on ranks
ANOVA–simultaneous component analysis
Anomaly detection
Anomaly time series
Anscombe transform
Anscombe's quartet
Antecedent variable
Antithetic variates
Approximate Bayesian computation
Approximate entropy
Arcsine distribution
Area chart
Area compatibility factor
ARGUS distribution
Arithmetic mean
Armitage–Doll multistage model of carcinogenesis
Arrival theorem
Artificial neural network
Ascertainment bias
ASReml software
Association (statistics)
Association mapping
Association scheme
Assumed mean
Astrostatistics
Asymptotic distribution
Asymptotic equipartition property (information theory)
Asymptotic normality redirects to Asymptotic dis |
https://en.wikipedia.org/wiki/Machine%20tool | A machine tool is a machine for handling or machining metal or other rigid materials, usually by cutting, boring, grinding, shearing, or other forms of deformation. Machine tools employ some sort of tool that does the cutting or shaping. All machine tools have some means of constraining the workpiece and provide a guided movement of the parts of the machine. Thus, the relative movement between the workpiece and the cutting tool (which is called the toolpath) is controlled or constrained by the machine to at least some extent, rather than being entirely "offhand" or "freehand". It is a power-driven metal-cutting machine which assists in managing the needed relative motion between the cutting tool and the job, changing the size and shape of the job material.
The precise definition of the term machine tool varies among users, as discussed below. While all machine tools are "machines that help people to make things", not all factory machines are machine tools.
Today machine tools are typically powered by means other than human muscle (e.g., electrically, hydraulically, or via line shaft), and are used to make manufactured parts (components) in various ways that include cutting or certain other kinds of deformation.
With their inherent precision, machine tools enabled the economical production of interchangeable parts.
Nomenclature and key concepts, interrelated
Many historians of technology consider that true machine tools were born when the toolpath first became guided by the machine itself in some way, at least to some extent, so that direct, freehand human guidance of the toolpath (with hands, feet, or mouth) was no longer the only guidance used in the cutting or forming process. In this view of the definition, the term, arising at a time when all tools up till then had been hand tools, simply provided a label for "tools that were machines instead of hand tools". Early lathes, those prior to the late medieval period, and modern woodworking lathes and potter's wheels m |
https://en.wikipedia.org/wiki/86-DOS | 86-DOS (known internally as QDOS, for Quick and Dirty Operating System) is a discontinued operating system developed and marketed by Seattle Computer Products (SCP) for its Intel 8086-based computer kit.
86-DOS shared a few of its commands with other operating systems like OS/8 and CP/M, which made it easy to port programs from the latter. Its application programming interface was very similar to that of CP/M. The system was licensed and then purchased by Microsoft and developed further as MS-DOS and PC DOS.
History
Origins
86-DOS was created because sales of the Seattle Computer Products 8086 computer kit, demonstrated in June 1979 and shipped in November, were languishing due to the absence of an operating system. The only software that SCP could sell with the board was Microsoft's Standalone Disk BASIC-86, which Microsoft had developed on a prototype of SCP's hardware. SCP wanted to offer the 8086-version of CP/M that Digital Research had initially announced for November 1979, but it was delayed and its release date was uncertain. This was not the first time Digital Research had lagged behind hardware developments; two years earlier it had been slow to adapt CP/M for new floppy disk formats and hard disk drives. In April 1980, SCP assigned 24-year-old Tim Paterson to develop a substitute for CP/M-86.
Using a CP/M-80 manual as reference, Paterson modeled 86-DOS after its architecture and interfaces, but adapted them to meet the requirements of Intel's 8086 16-bit processor, for easy (and partially automated) source-level translatability of the many existing 8-bit CP/M programs; porting them to either DOS or CP/M-86 was about equally difficult, and eased by the fact that Intel had already published a method that could be used to automatically translate software from the Intel 8080 processor, for which CP/M had been designed, to the new 8086 instruction set. At the same time he made a number of changes and enhancements to address what he saw as CP/M's shortcomings. CP |
https://en.wikipedia.org/wiki/H%C3%B6lder%27s%20inequality | In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of L^p spaces: if (S, Σ, μ) is a measure space and p, q ∈ [1, ∞] with 1/p + 1/q = 1, then for all measurable real- or complex-valued functions f and g on S, ‖fg‖_1 ≤ ‖f‖_p ‖g‖_q.
The numbers p and q above are said to be Hölder conjugates of each other. The special case p = q = 2 gives a form of the Cauchy–Schwarz inequality. Hölder's inequality holds even if ‖fg‖_1 is infinite, the right-hand side also being infinite in that case. Conversely, if f is in L^p(μ) and g is in L^q(μ), then the pointwise product fg is in L^1(μ).
Hölder's inequality is used to prove the Minkowski inequality, which is the triangle inequality in the space L^p(μ), and also to establish that L^q(μ) is the dual space of L^p(μ) for p ∈ [1, ∞).
Hölder's inequality (in a slightly different form) was first found by Leonard James Rogers (1888). Inspired by Rogers' work, Otto Hölder (1889) gave another proof as part of a work developing the concept of convex and concave functions and introducing Jensen's inequality, which was in turn named for work of Johan Jensen building on Hölder's work.
Remarks
Conventions
The brief statement of Hölder's inequality uses some conventions.
In the definition of Hölder conjugates, 1/∞ means zero.
If 1 ≤ p, q < ∞, then ‖f‖_p and ‖g‖_q stand for the (possibly infinite) expressions (∫_S |f|^p dμ)^(1/p) and (∫_S |g|^q dμ)^(1/q).
If p = ∞, then ‖f‖_∞ stands for the essential supremum of |f|, and similarly for ‖g‖_∞.
The notation ‖f‖_p with 1 ≤ p ≤ ∞ is a slight abuse, because in general it is only a norm of f if ‖f‖_p is finite and f is considered as an equivalence class of μ-almost everywhere equal functions. If f ∈ L^p(μ) and g ∈ L^q(μ), then the notation is adequate.
On the right-hand side of Hölder's inequality, 0 × ∞ as well as ∞ × 0 means 0. Multiplying a > 0 by ∞ gives ∞.
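A numeric sanity check of the inequality (the values are ours) using the counting measure on a finite set, where the integrals become sums:

p, q = 3.0, 1.5                 # Hölder conjugates: 1/3 + 1/1.5 = 1
x = [1.0, -2.0, 3.0]
y = [0.5, 4.0, -1.0]

def norm(v, r):
    return sum(abs(a) ** r for a in v) ** (1 / r)

lhs = sum(abs(a * b) for a, b in zip(x, y))
print(lhs, "<=", norm(x, p) * norm(y, q))  # 11.5 <= ~14.66, as required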
Estimates for integrable products
As above, let f and g denote measurable real- or complex-valued functions defined on S. If ‖fg‖_1 is finite, then the pointwise products of f with g and with its complex conjugate function ḡ are μ-integrable, the estimate |∫_S f ḡ dμ| ≤ ∫_S |fg| dμ
and the similar one for fg hold, and Hölder's inequality can be applied to the right-hand side. In particular, if f and g are in the Hilbert space L^2(μ), then Hölder's inequality for p = q = 2 implies |⟨f, g⟩| ≤ ‖f‖_2 ‖g‖_2,
where the angle brack |
https://en.wikipedia.org/wiki/OLED | The organic light-emitting diode (OLED), also known as organic electroluminescent (organic EL) diode, is a light-emitting diode (LED) in which the emissive electroluminescent layer is a film of organic compound that emits light in response to an electric current. This organic layer is situated between two electrodes; typically, at least one of these electrodes is transparent. OLEDs are used to create digital displays in devices such as television screens, computer monitors, and portable systems such as smartphones and handheld game consoles. A major area of research is the development of white OLED devices for use in solid-state lighting applications.
There are two main families of OLED: those based on small molecules and those employing polymers. Adding mobile ions to an OLED creates a light-emitting electrochemical cell (LEC) which has a slightly different mode of operation. An OLED display can be driven with a passive-matrix (PMOLED) or active-matrix (AMOLED) control scheme. In the PMOLED scheme, each row and line in the display is controlled sequentially, one by one, whereas AMOLED control uses a thin-film transistor (TFT) backplane to directly access and switch each individual pixel on or off, allowing for higher resolution and larger display sizes.
OLED is fundamentally different from LED, which is based on a p-n diode structure. In LEDs, doping is used to create p- and n-regions by changing the conductivity of the host semiconductor. OLEDs do not employ a p-n structure. Doping of OLEDs is used to increase radiative efficiency by direct modification of the quantum-mechanical optical recombination rate. Doping is additionally used to determine the wavelength of photon emission.
An OLED display works without a backlight because it emits its own visible light. Thus, it can display deep black levels and can be thinner and lighter than a liquid crystal display (LCD). In low ambient light conditions (such as a dark room), an OLED screen can achieve a higher cont |
https://en.wikipedia.org/wiki/Covariance%20matrix | In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector.
Intuitively, the covariance matrix generalizes the notion of variance to multiple dimensions. As an example, the variation in a collection of random points in two-dimensional space cannot be characterized fully by a single number, nor would the variances in the x and y directions contain all of the necessary information; a 2 × 2 matrix would be necessary to fully characterize the two-dimensional variation.
Any covariance matrix is symmetric and positive semi-definite and its main diagonal contains variances (i.e., the covariance of each element with itself).
The covariance matrix of a random vector X is typically denoted by K_XX, Σ, or cov(X).
Definition
Throughout this article, boldfaced unsubscripted X and Y are used to refer to random vectors, and Roman subscripted X_i and Y_i are used to refer to scalar random variables.
If the entries in the column vector X = (X_1, X_2, ..., X_n)^T are random variables, each with finite variance and expected value, then the covariance matrix K_XX is the matrix whose (i, j) entry is the covariance K_{X_i X_j} = cov[X_i, X_j] = E[(X_i − E[X_i])(X_j − E[X_j])], where the operator E denotes the expected value (mean) of its argument.
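As a quick numerical illustration of the definition (assuming NumPy; the numbers are ours):

import numpy as np

rng = np.random.default_rng(0)
true_cov = [[2.0, 0.8], [0.8, 1.0]]
samples = rng.multivariate_normal(mean=[0, 0], cov=true_cov, size=100000)

est = np.cov(samples, rowvar=False)  # (i, j) entry estimates cov(X_i, X_j)
print(est)                           # close to true_cov; the diagonal holds the variances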
Conflicting nomenclatures and notations
Nomenclatures differ. Some statisticians, following the probabilist William Feller in his two-volume book An Introduction to Probability Theory and Its Applications, call the matrix the variance of the random vector , because it is the natural generalization to higher dimensions of the 1-dimensional variance. Others call it the covariance matrix, because it is the matrix of covariances between the scalar components of the vector .
Both forms are quite standard, and there is no ambiguity between them. The matrix is also often called the variance-covariance matrix, since the diagonal terms are in fact variances.
By comparis |
https://en.wikipedia.org/wiki/Bioregionalism | Bioregionalism is a philosophy that suggests that political, cultural, and economic systems are more sustainable and just if they are organized around naturally defined areas called bioregions, similar to ecoregions. Bioregions are defined through physical and environmental features, including watershed boundaries and soil and terrain characteristics. Bioregionalism stresses that the determination of a bioregion is also a cultural phenomenon, and emphasizes local populations, knowledge, and solutions.
Bioregionalism asserts "that a bioregion's environmental components (geography, climate, plant life, animal life, etc.) directly influence ways for human communities to act and interact with each other which are, in turn, optimal for those communities to thrive in their environment. As such, those ways to thrive in their totality—be they economic, cultural, spiritual, or political—will be distinctive in some capacity as being a product of their bioregional environment."
Bioregionalism is a concept that goes beyond national boundaries—an example is the concept of Cascadia, a region that is sometimes considered to consist of most of Oregon and Washington, the Alaska Panhandle, the far north of California and the West Coast of Canada, sometimes also including some or all of Idaho and western Montana. Another example of a bioregion, which does not cross national boundaries, but does overlap state lines, is the Ozarks, a bioregion also referred to as the Ozarks Plateau, which consists of southern Missouri, northwest Arkansas, the northeast corner of Oklahoma, and the southeast corner of Kansas.
Bioregions are not synonymous with ecoregions as defined by bodies such as the World Wildlife Fund or the Commission for Environmental Cooperation; the latter are scientifically based and focused on wildlife and vegetation. Bioregions, by contrast, are human regions, informed by nature but with a social and political element. In this way bioregionalism is simply political localism with an |
https://en.wikipedia.org/wiki/Handheld%20Device%20Markup%20Language | The Handheld Device Markup Language (HDML) is a markup language intended for display on handheld computers, information appliances, smartphones, etc. It is similar to HTML, but designed for wireless and handheld devices with small displays, such as PDAs and mobile phones.
It was originally developed in about 1996 by Unwired Planet, the company that became Phone.com and then Openwave. HDML was submitted to W3C for standardization, but was not turned into a standard. Instead it became an important influence on the development and standardization of WML, which then replaced HDML in practice. Unlike WML, HDML has no support for scripts.
See also
Wireless Application Protocol
List of document markup languages
Comparison of document markup languages
https://en.wikipedia.org/wiki/RADIUS | Remote Authentication Dial-In User Service (RADIUS) is a networking protocol that provides centralized authentication, authorization, and accounting (AAA) management for users who connect and use a network service. RADIUS was developed by Livingston Enterprises in 1991 as an access server authentication and accounting protocol. It was later brought into IEEE 802 and IETF standards.
RADIUS is a client/server protocol that runs in the application layer, and can use either TCP or UDP. Network access servers, which control access to a network, usually contain a RADIUS client component that communicates with the RADIUS server. RADIUS is often the back-end of choice for 802.1X authentication. A RADIUS server is usually a background process running on UNIX or Microsoft Windows.
Protocol components
RADIUS is an AAA (authentication, authorization, and accounting) protocol that manages network access. RADIUS uses two types of packets to manage the full AAA process: Access-Request, which manages authentication and authorization; and Accounting-Request, which manages accounting. Authentication and authorization are defined in RFC 2865 while accounting is described by RFC 2866.
Authentication and authorization
The user or machine sends a request to a Network Access Server (NAS) to gain access to a particular network resource using access credentials. The credentials are passed to the NAS device via the link-layer protocol—for example, Point-to-Point Protocol (PPP) in the case of many dialup or DSL providers, or posted in an HTTPS secure web form.
In turn, the NAS sends a RADIUS Access Request message to the RADIUS server, requesting authorization to grant access via the RADIUS protocol.
This request includes access credentials, typically in the form of username and password or security certificate provided by the user. Additionally, the request may contain other information which the NAS knows about the user, such as its network address or phone number, and information regar |
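A toy model of the two request types described above (a schematic sketch only, not the RADIUS wire format; the field names are ours):

from dataclasses import dataclass, field

@dataclass
class AccessRequest:                 # authentication and authorization
    username: str
    password: str                    # real RADIUS never sends this in the clear
    nas_ip: str
    attributes: dict = field(default_factory=dict)

@dataclass
class AccountingRequest:             # accounting
    username: str
    session_id: str
    status: str                      # e.g. "Start" or "Stop"

req = AccessRequest("alice", "secret", "192.0.2.1",
                    {"Calling-Station-Id": "00-11-22-33-44-55"})
print(req.username, "via NAS", req.nas_ip)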
https://en.wikipedia.org/wiki/Headphones | Headphones are a pair of small loudspeaker drivers worn on or around the head over a user's ears. They are electroacoustic transducers, which convert an electrical signal to a corresponding sound. Headphones let a single user listen to an audio source privately, in contrast to a loudspeaker, which emits sound into the open air for anyone nearby to hear. Headphones are also known as earphones or, colloquially, cans. Circumaural ('around the ear') and supra-aural ('over the ear') headphones use a band over the top of the head to hold the speakers in place. Another type, known as earbuds or earpieces, consists of individual units that plug into the user's ear canal. A third type are bone conduction headphones, which typically wrap around the back of the head and rest in front of the ear canal, leaving the ear canal open. In the context of telecommunication, a headset is a combination of headphone and microphone.
Headphones connect to a signal source such as an audio amplifier, radio, CD player, portable media player, mobile phone, video game console, or electronic musical instrument, either directly using a cord, or using wireless technology such as Bluetooth, DECT or FM radio. The first headphones were developed in the late 19th century for use by telephone operators, to keep their hands free. Initially the audio quality was mediocre and a step forward was the invention of high fidelity headphones.
Headphones exhibit a range of different audio reproduction quality capabilities. Headsets designed for telephone use typically cannot reproduce sound with the high fidelity of expensive units designed for music listening by audiophiles. Headphones that use cables typically have either a 6.35 mm (1/4 inch) or 3.5 mm (1/8 inch) phone jack for plugging the headphones into the audio source. Some stereo earbuds are wireless, using Bluetooth connectivity to transmit the audio signal by radio waves from source devices like cellphones and digital players. As a result of the Walkman effect, beginning in the 1980s, he |
https://en.wikipedia.org/wiki/Timeline%20of%20quantum%20computing%20and%20communication | This is a timeline of quantum computing.
1960s
1968
Stephen Wiesner invented conjugate coding (published in ACM SIGACT News 15(1):78–88).
1970s
1970
James Park articulated the no-cloning theorem.
1973
Alexander Holevo published a paper showing that n qubits can carry more than n classical bits of information, but at most n classical bits are accessible (a result known as "Holevo's theorem" or "Holevo's bound").
Charles H. Bennett showed that computation can be done reversibly.
1975
R. P. Poplavskii published "Thermodynamical models of information processing" (in Russian) which showed the computational infeasibility of simulating quantum systems on classical computers, due to the superposition principle.
1976
Polish mathematical physicist Roman Stanisław Ingarden published the paper "Quantum Information Theory" in Reports on Mathematical Physics, vol. 10, 43–72, 1976 (The paper was submitted in 1975). It is one of the first attempts at creating a quantum information theory, showing that Shannon information theory cannot directly be generalized to the quantum case, but rather that it is possible to construct a quantum information theory, which is a generalization of Shannon's theory, within the formalism of a generalized quantum mechanics of open systems and a generalized concept of observables (the so-called semi-observables).
1980s
1980
Paul Benioff described the first quantum mechanical model of a computer. In this work, Benioff showed that a computer could operate under the laws of quantum mechanics by describing a Schrödinger equation description of Turing machines, laying a foundation for further work in quantum computing. The paper was submitted in June 1979 and published in April 1980.
Yuri Manin briefly motivated the idea of quantum computing.
Tommaso Toffoli introduced the reversible Toffoli gate, which (together with initialized ancilla bits) is functionally complete for reversible classical computation.
1981
At the First Conference on |
https://en.wikipedia.org/wiki/Exponential%20growth | Exponential growth is a process that increases quantity over time. It occurs when the instantaneous rate of change (that is, the derivative) of a quantity with respect to time is proportional to the quantity itself. Described as a function, a quantity undergoing exponential growth is an exponential function of time, that is, the variable representing time is the exponent (in contrast to other types of growth, such as quadratic growth). Exponential growth is the inverse of logarithmic growth.
If the constant of proportionality is negative, then the quantity decreases over time, and is said to be undergoing exponential decay instead. In the case of a discrete domain of definition with equal intervals, it is also called geometric growth or geometric decay since the function values form a geometric progression.
The formula for exponential growth of a variable x at the growth rate r, as time t goes on in discrete intervals (that is, at integer times 0, 1, 2, 3, ...), is x_t = x_0 (1 + r)^t,
where x_0 is the value of x at time 0. The growth of a bacterial colony is often used to illustrate it. One bacterium splits itself into two, each of which splits itself resulting in four, then eight, 16, 32, and so on. The amount of increase keeps increasing because it is proportional to the ever-increasing number of bacteria. Growth like this is observed in real-life activity or phenomena, such as the spread of virus infection, the growth of debt due to compound interest, and the spread of viral videos. In real cases, initial exponential growth often does not last forever, instead slowing down eventually due to upper limits caused by external factors and turning into logistic growth.
Terms like "exponential growth" are sometimes incorrectly interpreted as "rapid growth". Indeed, something that grows exponentially can in fact be growing slowly at first.
Examples
Biology
The number of microorganisms in a culture will increase exponentially until an essential nutrient is exhausted, so there is no more of t |
https://en.wikipedia.org/wiki/Minkowski%20inequality | In mathematical analysis, the Minkowski inequality establishes that the L^p spaces are normed vector spaces. Let (S, Σ, μ) be a measure space, let 1 ≤ p ≤ ∞, and let f and g be elements of L^p(S). Then f + g is in L^p(S), and we have the triangle inequality ‖f + g‖_p ≤ ‖f‖_p + ‖g‖_p
with equality for 1 < p < ∞ if and only if f and g are positively linearly dependent; that is, f = λg for some λ ≥ 0 or g = 0. Here, the norm is given by:
‖f‖_p = (∫ |f|^p dμ)^(1/p) if p < ∞, or in the case p = ∞ by the essential supremum ‖f‖_∞ = ess sup_{x ∈ S} |f(x)|.
The Minkowski inequality is the triangle inequality in L^p(S). In fact, it is a special case of the more general fact ‖f‖_p = sup {∫ |fg| dμ : ‖g‖_q = 1}, where 1/p + 1/q = 1, and it is easy to see that the right-hand side satisfies the triangle inequality.
Like Hölder's inequality, the Minkowski inequality can be specialized to sequences and vectors by using the counting measure: (∑_{k=1}^n |x_k + y_k|^p)^(1/p) ≤ (∑_{k=1}^n |x_k|^p)^(1/p) + (∑_{k=1}^n |y_k|^p)^(1/p)
for all real (or complex) numbers x_1, ..., x_n, y_1, ..., y_n, where n is the cardinality of S (the number of elements in S).
The inequality is named after the German mathematician Hermann Minkowski.
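A numeric spot check of the sequence form above (the values are ours):

p = 3.0
x = [1.0, -2.0, 3.0]
y = [0.5, 4.0, -1.0]

def p_norm(v):
    return sum(abs(a) ** p for a in v) ** (1 / p)

lhs = p_norm([a + b for a, b in zip(x, y)])
print(lhs, "<=", p_norm(x) + p_norm(y))  # ~2.69 <= ~7.33: the triangle inequality holds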
Proof
First, we prove that f + g has finite p-norm if f and g both do, which follows by |f + g|^p ≤ 2^(p−1) (|f|^p + |g|^p).
Indeed, here we use the fact that h(x) = x^p is convex over R^+ (for p > 1) and so, by the definition of convexity, |(1/2) f + (1/2) g|^p ≤ ((1/2) |f| + (1/2) |g|)^p ≤ (1/2) |f|^p + (1/2) |g|^p.
This means that |f + g|^p ≤ (1/2) |2f|^p + (1/2) |2g|^p = 2^(p−1) (|f|^p + |g|^p).
Now, we can legitimately talk about ‖f + g‖_p. If it is zero, then Minkowski's inequality holds. We now assume that ‖f + g‖_p is not zero. Using the triangle inequality and then Hölder's inequality (with conjugate exponent q = p/(p − 1)), we find that
‖f + g‖_p^p = ∫ |f + g|^p dμ ≤ ∫ (|f| + |g|) |f + g|^(p−1) dμ ≤ (‖f‖_p + ‖g‖_p) ‖f + g‖_p^(p−1).
We obtain Minkowski's inequality by multiplying both sides by ‖f + g‖_p^(1−p).
Minkowski's integral inequality
Suppose that (S_1, μ_1) and (S_2, μ_2) are two σ-finite measure spaces and F : S_1 × S_2 → R is measurable. Then Minkowski's integral inequality is, for p ≥ 1:
[∫_{S_2} |∫_{S_1} F(x, y) dμ_1(x)|^p dμ_2(y)]^(1/p) ≤ ∫_{S_1} [∫_{S_2} |F(x, y)|^p dμ_2(y)]^(1/p) dμ_1(x),
with obvious modifications in the case p = ∞. If p > 1 and both sides are finite, then equality holds only if |F(x, y)| = φ(x) ψ(y) a.e. for some non-negative measurable functions φ and ψ.
If μ_1 is the counting measure on a two-point set S_1 = {1, 2}, then Minkowski's integral inequality gives the usual Minkowski inequality as a special case: for putting f_i(y) = F(i, y) for i = 1, 2, the integral inequality gives ‖f_1 + f_2‖_p ≤ ‖f_1‖_p + ‖f_2‖_p.
If the measurable function is non-negative then for all
This notation has been generalized to
for with Using this notation, manipulation of the expon |
https://en.wikipedia.org/wiki/Green%20roof | A green roof or living roof is a roof of a building that is partially or completely covered with vegetation and a growing medium, planted over a waterproofing membrane. It may also include additional layers such as a root barrier and drainage and irrigation systems. Container gardens on roofs, where plants are maintained in pots, are not generally considered to be true green roofs, although this is debated. Rooftop ponds are another form of green roofs which are used to treat greywater. Vegetation, soil, a drainage layer, a root barrier and an irrigation system together constitute a green roof.
Green roofs serve several purposes for a building, such as absorbing rainwater, providing insulation, creating a habitat for wildlife, increasing benevolence and decreasing stress of the people around the roof by providing a more aesthetically pleasing landscape, and helping to lower urban air temperatures and mitigate the heat island effect. Green roofs are suitable for retrofit or redevelopment projects as well as new buildings and can be installed on small garages or larger industrial, commercial and municipal buildings. They effectively use the natural functions of plants to filter water and treat air in urban and suburban landscapes. There are two types of green roof: intensive roofs, which are thicker and can support a wider variety of plants but are heavier and require more maintenance, and extensive roofs, which are shallower and lighter and require minimal maintenance.
The term green roof may also be used to indicate roofs that use some form of green technology, such as a cool roof, a roof with solar thermal collectors or photovoltaic panels. Green roofs are also referred to as eco-roofs, oikosteges, vegetated roofs, living roofs, greenroofs and VCPH (Horizontal Vegetated Complex Partitions)
Environmental benefits
Thermal reduction and energy conservation
Green roofs improve a roof's thermal performance and reduce energy consumption. |
https://en.wikipedia.org/wiki/CD%20player | A CD player is an electronic device that plays audio compact discs, which are a digital optical disc data storage format. CD players were first sold to consumers in 1982. CDs typically contain recordings of audio material such as music or audiobooks. CD players may be part of home stereo systems, car audio systems, personal computers, or portable CD players such as CD boomboxes. Most CD players produce an output signal via a headphone jack or RCA jacks. To use a CD player in a home stereo system, the user connects an RCA cable from the RCA jacks to a hi-fi (or other amplifier) and loudspeakers for listening to music. To listen to music using a CD player with a headphone output jack, the user plugs headphones or earphones into the headphone jack.
Modern units can play audio formats other than the original CD PCM audio coding, such as MP3, AAC and WMA. DJs playing dance music at clubs often use specialized players with an adjustable playback speed to alter the pitch and tempo of the music. Audio engineers using CD players to play music for an event through a sound reinforcement system use professional audio-grade CD players. CD playback functionality is also available on CD-ROM/DVD-ROM drive equipped computers as well as on DVD players and most optical disc-based home video game consoles.
History
American inventor James T. Russell is known for inventing the first system to record digital video information on an optical transparent foil that is lit from behind by a high-power halogen lamp. Russell's patent application was first filed in 1966, and he was granted a patent in 1970. Following litigation, Sony and Philips licensed Russell's recording patents (then held by a Canadian company, Optical Recording Corp.) in the 1980s.
The compact disc is not based on Russell's invention; rather, it is an evolution of LaserDisc technology, where a focused laser beam is used that enables the high information density required for high-quality digital audio signals.
Prototypes were deve |
https://en.wikipedia.org/wiki/Index%20of%20accounting%20articles | This page is an index of accounting topics.
A
Accounting ethics - Accounting information system - Accounting research - Activity-Based Costing - Assets
B
Balance sheet
- Big Four auditors
- Bond
- Bookkeeping
- Book value
C
Cash-basis accounting
- Cash-basis versus accrual-basis accounting
- Cash flow statement
- Certified General Accountant
- Certified Management Accountants
- Certified Public Accountant
- Chartered accountant
- Chart of accounts
- Common stock
- Comprehensive income
- Construction accounting
- Convention of conservatism
- Convention of disclosure
- Cost accounting
- Cost of capital
- Cost of goods sold
- Creative accounting
- Credit
- Credit note
- Current asset
- Current liability
D
Debit
- Debit note
- Debt
- Deficit (disambiguation)
- Depreciation
- Diluted earnings per share
- Dividend
- Double-entry bookkeeping system
- Dual aspect
E
E-accounting
- EBIT
- EBITDA
- Earnings per share
- Engagement Letter
- Entity concept
- Environmental accounting
- Expense
- Equity
- Equivalent Annual Cost
F
Financial Accounting Standards Board
- Financial accountancy
- Financial audit
- Financial reports
- Financial statements
- Fixed assets
- Fixed assets management
- Forensic accounting
- Fraud deterrence
- Free cash flow
- Fund accounting
G
Gain
- General ledger
- Generally Accepted Accounting Principles
- Going concern
- Goodwill
- Governmental Accounting Standards Board
H
Historical cost - History of accounting
I
Income
- Income statement
- Institute of Chartered Accountants in England and Wales
- Institute of Chartered Accountants of Scotland
- Institute of Management Accountants
- Intangible asset
- Interest
- Internal audit
- International Accounting Standards Board
- International Accounting Standards Committee
- International Accounting Standards
- International Federation of Accountants
- International Financial Reporting Standards
- Inventory
- Investment
- Invoices
- Indian Accounting Standards
J
Job costing
- Journal
L
|
https://en.wikipedia.org/wiki/Distributed%20hash%20table | A distributed hash table (DHT) is a distributed system that provides a lookup service similar to a hash table. Key–value pairs are stored in a DHT, and any participating node can efficiently retrieve the value associated with a given key. The main advantage of a DHT is that nodes can be added or removed with minimum work around re-distributing keys. Keys are unique identifiers which map to particular values, which in turn can be anything from addresses, to documents, to arbitrary data. Responsibility for maintaining the mapping from keys to values is distributed among the nodes, in such a way that a change in the set of participants causes a minimal amount of disruption. This allows a DHT to scale to extremely large numbers of nodes and to handle continual node arrivals, departures, and failures.
DHTs form an infrastructure that can be used to build more complex services, such as anycast, cooperative web caching, distributed file systems, domain name services, instant messaging, multicast, and also peer-to-peer file sharing and content distribution systems. Notable distributed networks that use DHTs include BitTorrent's distributed tracker, the Kad network, the Storm botnet, the Tox instant messenger, Freenet, the YaCy search engine, and the InterPlanetary File System.
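A minimal consistent-hashing sketch (illustrative only; the node names and hash choice are ours) of how a DHT can assign each key to a node: nodes and keys are hashed onto one ring, and a key belongs to the first node clockwise from it.

import bisect, hashlib

def ring_position(name):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

nodes = ["node-a", "node-b", "node-c"]
ring = sorted((ring_position(n), n) for n in nodes)

def lookup(key):
    pos = ring_position(key)
    i = bisect.bisect(ring, (pos, "")) % len(ring)  # wrap around the ring
    return ring[i][1]

print(lookup("some-file.txt"))  # adding or removing a node remaps only nearby keys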
History
DHT research was originally motivated, in part, by peer-to-peer (P2P) systems such as Freenet, Gnutella, BitTorrent and Napster, which took advantage of resources distributed across the Internet to provide a single useful application. In particular, they took advantage of increased bandwidth and hard disk capacity to provide a file-sharing service.
These systems differed in how they located the data offered by their peers. Napster, the first large-scale P2P content delivery system, required a central index server: each node, upon joining, would send a list of locally held files to the server, which would perform searches and refer the queries to the nodes that held the results. This centra |
https://en.wikipedia.org/wiki/Runtime%20%28program%20lifecycle%20phase%29 | In computer science, runtime, run time, or execution time is the final phase of a computer program's life cycle, in which the code is being executed on the computer's central processing unit (CPU) as machine code. In other words, "runtime" is the running phase of a program.
A runtime error is detected after or during the execution (running state) of a program, whereas a compile-time error is detected by the compiler before the program is ever executed. Type checking, register allocation, code generation, and code optimization are typically done at compile time, but may be done at runtime depending on the particular language and compiler. Many runtime errors exist and are handled differently by different programming languages: division by zero, domain errors, array subscript out of bounds, arithmetic underflow, several types of overflow, and other errors generally considered software bugs, which may or may not be caught and handled by any particular computer language.
Implementation details
When a program is to be executed, a loader first performs the necessary memory setup and links the program with any dynamically linked libraries it needs, and then the execution begins starting from the program's entry point. In some cases, a language or implementation will have these tasks done by the language runtime instead, though this is unusual in mainstream languages on common consumer operating systems.
Some program debugging can only be performed (or is more efficient or accurate when performed) at runtime. Logic errors and array bounds checking are examples. For this reason, some programming bugs are not discovered until the program is tested in a production environment with real data, despite sophisticated compile-time checking and pre-release testing. In this case, the end-user may encounter a "runtime error" message.
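A minimal example of such a runtime error (ours): the code loads without complaint, and the out-of-bounds access only fails, and is handled, during execution.

values = [1, 2, 3]
try:
    print(values[10])          # array subscript out of bounds
except IndexError as err:      # caught and handled at runtime
    print("runtime error:", err)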
Application errors (exceptions)
Exception handling is one language feature |
https://en.wikipedia.org/wiki/The%20Library%20of%20Babel | "The Library of Babel" () is a short story by Argentine author and librarian Jorge Luis Borges (1899–1986), conceiving of a universe in the form of a vast library containing all possible 410-page books of a certain format and character set.
The story was originally published in Spanish in Borges' 1941 collection of stories El jardín de senderos que se bifurcan (The Garden of Forking Paths). That entire book was, in turn, included within his much-reprinted Ficciones (1944). Two English-language translations appeared approximately simultaneously in 1962, one by James E. Irby in a diverse collection of Borges's works titled Labyrinths and the other by Anthony Kerrigan as part of a collaborative translation of the entirety of Ficciones.
Plot
Borges' narrator describes how his universe consists of an enormous expanse of adjacent hexagonal rooms. In each room, there is an entrance on one wall, the bare necessities for human survival on another wall, and four walls of bookshelves. Though the order and content of the books are random and apparently completely meaningless, the inhabitants believe that the books contain every possible ordering of just 25 basic characters (22 letters, the period, the comma, and space). Though the vast majority of the books in this universe are pure gibberish, the library also must contain, somewhere, every coherent book ever written, or that might ever be written, and every possible permutation or slightly erroneous version of every one of those books. The narrator notes that the library must contain all useful information, including predictions of the future, biographies of any person, and translations of every book in all languages. Conversely, for many of the texts, some language could be devised that would make it readable with any of a vast number of different contents.
Despite—indeed, because of—this glut of information, all books are totally useless to the reader, leaving the librarians in a state of suicidal despair. This leads some |
https://en.wikipedia.org/wiki/Wireless%20access%20point | In computer networking, a wireless access point, or more generally just access point (AP), is a networking hardware device that allows other Wi-Fi devices to connect to a wired network. As a standalone device, the AP may have a wired connection to a router, but, in a wireless router, it can also be an integral component of the router itself. An AP is differentiated from a hotspot, which is a physical location where Wi-Fi access is available.
Although WAP has been used incorrectly to describe an access point, WAP properly stands for Wireless Application Protocol, which describes a protocol rather than a physical device.
Connections
An AP connects directly to a wired local area network, typically Ethernet, and the AP then provides wireless connections using wireless LAN technology, typically Wi-Fi, for other devices to use that wired connection. APs support the connection of multiple wireless devices through their one wired connection.
Wireless data standards
There are many wireless data standards that have been introduced for wireless access point and wireless router technology. New standards have been created to accommodate the increasing need for faster wireless connections. Some wireless routers provide backward compatibility with older Wi-Fi technologies as many devices were manufactured for use with older standards.
802.11a
802.11b
802.11g
802.11n (Wi-Fi 4)
802.11ac (Wi-Fi 5)
802.11ax, (Wi-Fi 6)
Wireless access point vs. ad hoc network
Some people confuse wireless access points with wireless ad hoc networks. An ad hoc network uses a connection between two or more devices without using a wireless access point; the devices communicate directly when in range. Because setup is easy and does not require an access point, an ad hoc network is used in situations such as a quick data exchange or a multiplayer video game. Due to its peer-to-peer layout, ad hoc Wi-Fi connections are similar to connections available using Bluetooth.
Ad hoc connections are gener |
https://en.wikipedia.org/wiki/Deprecation | In several fields, especially computing, deprecation is the discouragement of use of some terminology, feature, design, or practice, typically because it has been superseded or is no longer considered efficient or safe, without completely removing it or prohibiting its use. Typically, deprecated materials are not completely removed to ensure legacy compatibility or back up practice in case new methods are not functional in an odd scenario.
It can also imply that a feature, design, or practice will be removed or discontinued entirely in the future.
Etymology
In general English usage, the infinitive "to deprecate" means "to express disapproval of (something)". It derives from the Latin verb deprecari, meaning "to ward off (a disaster) by prayer".
An early documented usage of "deprecate" in this sense is in Usenet posts in 1984, referring to obsolete features in 4.2BSD and the C programming language. An expanded definition of "deprecate" was cited in the Jargon File in its 1991 revision, and similar definitions are found in commercial software documentation from 2014 and 2023.
Software
While a deprecated software feature remains in the software, its use may raise warning messages recommending alternative practices. Deprecated status may also indicate the feature will be removed in the future. Features are deprecated, rather than immediately removed, to provide backward compatibility and to give programmers time to bring affected code into compliance with the new standard.
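A minimal sketch of this pattern (the function names are ours), using Python's standard warnings module:

import warnings

def new_api(x):
    return x * 2

def old_api(x):
    # Deprecated, but kept working for backward compatibility.
    warnings.warn("old_api() is deprecated; use new_api() instead",
                  DeprecationWarning, stacklevel=2)
    return new_api(x)

warnings.simplefilter("always")   # DeprecationWarning is hidden by default
print(old_api(21))                # warns, but still returns 42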
Among the most common reasons for deprecation are:
The feature has been replaced by a more powerful alternative feature. For instance, the Linux kernel contains two modules to communicate with Windows networks: smbfs and cifs. The latter provides better security, supports more protocol features, and integrates better with the rest of the kernel. Since the inclusion of cifs, smbfs has been deprecated.
The feature contains a design flaw, frequently a security flaw, and so should be avoided, but exi |
https://en.wikipedia.org/wiki/XMPP | Extensible Messaging and Presence Protocol (XMPP, originally named Jabber) is an open communication protocol designed for instant messaging (IM), presence information, and contact list maintenance. Based on XML (Extensible Markup Language), it enables the near-real-time exchange of structured data between two or more network entities. Designed to be extensible, the protocol offers a multitude of applications beyond traditional IM in the broader realm of message-oriented middleware, including signalling for VoIP, video, file transfer, gaming and other uses.
Unlike most commercial instant messaging protocols, XMPP is defined in an open standard in the application layer. The architecture of the XMPP network is similar to email; anyone can run their own XMPP server and there is no central master server. This federated open system approach allows users to interoperate with others on any server using a 'JID' user account, similar to an email address. XMPP implementations can be developed using any software license and many server, client, and library implementations are distributed as free and open-source software. Numerous freeware and commercial software implementations also exist.
Originally developed by the open-source community, the protocols were formalized as an approved instant messaging standard in 2004 and have been continuously developed with new extensions and features. Various XMPP client software is available on both desktop and mobile platforms and devices; by 2003 the protocol was used by over ten million people worldwide on the network, according to the XMPP Standards Foundation.
Protocol characteristics
Decentralization
The XMPP network architecture is reminiscent of the Simple Mail Transfer Protocol (SMTP), a client–server model; clients do not talk directly to one another as it is decentralized - anyone can run a server. By design, there is no central authoritative server as there is with messaging services such as AIM, WLM, WhatsApp or Telegra |
https://en.wikipedia.org/wiki/Satellite%20dish | A satellite dish is a dish-shaped type of parabolic antenna designed to receive or transmit information by radio waves to or from a communication satellite. The term most commonly means a dish which receives direct-broadcast satellite television from a direct broadcast satellite in geostationary orbit.
History
Parabolic antennas referred to as "dish" antennas had been in use long before satellite television. The term satellite dish was coined in 1978 during the beginning of the satellite television industry, and came to refer to dish antennas that send and/or receive signals from communications satellites. Taylor Howard of San Andreas, California, adapted an ex-military dish in 1976 and became the first person to receive satellite television signals using it.
The first satellite television dishes were built to receive signals on the C-band analog, and were very large. The front cover of the 1979 Neiman-Marcus Christmas catalog featured the first home satellite TV stations on sale. The dishes were nearly in diameter. The satellite dishes of the early 1980s were in diameter and made of fiberglass with an embedded layer of wire mesh or aluminium foil, or solid aluminium or steel.
Satellite dishes made of wire mesh first came out in the early 1980s, and were at first in diameter. As the front-end technology improved and the noise figure of the LNBs fell, the size shrank to a few years later, and continued to get smaller reducing to feet by the late 1980s and by the early 1990s. Larger dishes continued to be used, however. In December 1988, Luxembourg's Astra 1A satellite began transmitting analog television signals on the Ku band for the European market. This allowed small dishes (90 cm) to be used reliably for the first time.
In the early 1990s, four large American cable companies founded PrimeStar, a direct broadcasting company using medium power satellites. The relatively strong Ku band transmissions allowed the use of dishes as small as 90 cm for the fir |
https://en.wikipedia.org/wiki/IBM%20701 | The IBM 701 Electronic Data Processing Machine, known as the Defense Calculator while in development, was IBM’s first commercial scientific computer and its first series production mainframe computer, which was announced to the public on May 21, 1952. It was invented and developed by Jerrier Haddad and Nathaniel Rochester based on the IAS machine at Princeton.
The IBM 701 was the first computer in the IBM 700/7000 series, which were IBM’s high-end computers until the arrival of the IBM System/360 in 1964.
The business-oriented sibling of the 701 was the IBM 702 and a lower-cost general-purpose sibling was the IBM 650, which gained fame as the first mass-produced computer.
History
The IBM 701 competed with Remington Rand's UNIVAC 1103 in the scientific computation market. The UNIVAC 1103 had been developed for the NSA, so it was held secret until permission to market it was obtained in 1951. In early 1954, a committee of the Joint Chiefs of Staff requested that the two machines be compared for the purpose of using them for a Joint Numerical Weather Prediction project. Based on the trials, the two machines had comparable computational speed, with a slight advantage for IBM's machine, but the UNIVAC was favored unanimously for its significantly faster input-output equipment.
Nineteen IBM 701 systems were installed. The first 701 was delivered to IBM's world headquarters in New York. Eight went to aircraft companies. At the Lawrence Livermore National Laboratory, having an IBM 701 meant that scientists could run nuclear explosives computations faster.
"I think there is a world market for maybe five computers" is often attributed to Thomas Watson Sr., chairman and CEO of IBM, in 1943. This misquote may stem from a statement by his son, Thomas Watson Jr. at the 1953 IBM annual stockholders' meeting. Watson Jr. was describing the market acceptance of the IBM 701 computer. Before production began, Watson visited with 20 companies that were potential customers. This is what he said a |
https://en.wikipedia.org/wiki/Luck | Luck is the phenomenon and belief that defines the experience of improbable events, especially improbably positive or negative ones. The naturalistic interpretation is that positive and negative events may happen at any time, both due to random and non-random natural and artificial processes, and that even improbable events can happen by random chance. In this view, the epithet "lucky" or "unlucky" is a descriptive label that refers to an event's positivity, negativity, or improbability.
Supernatural interpretations of luck consider it to be an attribute of a person or object, or the result of a favorable or unfavorable view of a deity upon a person. These interpretations often prescribe how luckiness or unluckiness can be obtained, such as by carrying a lucky charm or offering sacrifices or prayers to a deity. Saying someone is "born lucky" may hold different meanings, depending on the interpretation: it could simply mean that they have been born into a good family or circumstance; or that they habitually experience improbably positive events, due to some inherent property, or due to the lifelong favor of a god or goddess in a monotheistic or polytheistic religion.
Many superstitions are related to luck, though these are often specific to a given culture or set of related cultures, and sometimes contradictory. For example, lucky symbols include the number 7 in Christian-influenced cultures and the number 8 in Chinese-influenced cultures. Unlucky symbols and events include entering and leaving a house by different doors or breaking a mirror in Greek culture, throwing rocks into the wind in Navajo culture, and ravens in Western culture. Some of these associations may derive from related facts or desires. For example, in Western culture opening an umbrella indoors might be considered unlucky partly because it could poke someone in the eye, whereas shaking hands with a chimney sweep might be considered lucky partly because it is a kind but unpleasant thing to do g |
https://en.wikipedia.org/wiki/Seven%20Bridges%20of%20K%C3%B6nigsberg | The Seven Bridges of Königsberg is a historically notable problem in mathematics. Its negative resolution by Leonhard Euler in 1736 laid the foundations of graph theory and prefigured the idea of topology.
The city of Königsberg in Prussia (now Kaliningrad, Russia) was set on both sides of the Pregel River, and included two large islands—Kneiphof and Lomse—which were connected to each other, and to the two mainland portions of the city, by seven bridges. The problem was to devise a walk through the city that would cross each of those bridges once and only once.
By way of specifying the logical task unambiguously, solutions involving either
reaching an island or mainland bank other than via one of the bridges, or
accessing any bridge without crossing to its other end
are explicitly unacceptable.
Euler proved that the problem has no solution. The difficulty he faced was the development of a suitable technique of analysis, and of subsequent tests that established this assertion with mathematical rigor.
Euler's analysis
Euler first pointed out that the choice of route inside each land mass is irrelevant and that the only important feature of a route is the sequence of bridges crossed. This allowed him to reformulate the problem in abstract terms (laying the foundations of graph theory), eliminating all features except the list of land masses and the bridges connecting them. In modern terms, one replaces each land mass with an abstract "vertex" or node, and each bridge with an abstract connection, an "edge", which only serves to record which pair of vertices (land masses) is connected by that bridge. The resulting mathematical structure is a graph.
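In modern terms, Euler's impossibility argument reduces to a parity count: a walk crossing every edge exactly once can exist only if zero or two vertices have odd degree. A minimal Python sketch (the labels N, S, Kneiphof, Lomse are this example's own shorthand for the two banks and the two islands):

```python
# The seven bridges as edges of a multigraph (pure Python, no libraries).
bridges = [
    ("N", "Kneiphof"), ("N", "Kneiphof"),   # two bridges to the north bank
    ("S", "Kneiphof"), ("S", "Kneiphof"),   # two bridges to the south bank
    ("N", "Lomse"), ("S", "Lomse"),         # one bridge from each bank to Lomse
    ("Kneiphof", "Lomse"),                  # bridge between the two islands
]

# Count bridge endpoints (vertex degrees).
degree = {}
for a, b in bridges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

odd = [v for v, d in degree.items() if d % 2 == 1]
print(degree)      # {'N': 3, 'Kneiphof': 5, 'S': 3, 'Lomse': 3}
print(len(odd))    # 4 -> no walk can cross each bridge exactly once
```

All four land masses have odd degree, so no such walk exists, matching Euler's conclusion.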
Since only the connection information is relevant, the shape of pictorial representations of a graph may be distorted in any way, without changing the graph itself. Only the existence (or absence) of an edge between each pair of nodes is significant. For example, it does not matter whether the edges drawn are st |
https://en.wikipedia.org/wiki/UNIVAC%201102 | The UNIVAC 1102 or ERA 1102 was designed by Engineering Research Associates for the United States Air Force's Arnold Engineering Development Center in Tullahoma, Tennessee in response to a request for proposal issued in 1950. The Air Force needed three computers to do data reduction for two wind tunnels and an engine test facility.
The 1102 was a variant of the UNIVAC 1101, using its 24-bit word and a smaller (only 8,192 words) drum memory. The machine had 2,700 vacuum tubes, weighed , and occupied of floor area.
The computers were connected to data channels coming from the wind tunnels and the engine facility. There were five typewriters for printed output, five paper tape punches, and four pen plotters to produce graphs.
The three computers and related peripherals were delivered between July 1954 and July 1956 at a total price of $1,400,000. Software for the computers was developed entirely at the Arnold Engineering Development Center. All programming was done in machine code (assemblers and compilers were never developed).
See also
List of UNIVAC products
History of computing hardware
List of vacuum tube computers
https://en.wikipedia.org/wiki/Rate%E2%80%93distortion%20theory | Rate–distortion theory is a major branch of information theory which provides the theoretical foundations for lossy data compression; it addresses the problem of determining the minimal number of bits per symbol, as measured by the rate R, that should be communicated over a channel, so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding an expected distortion D.
Introduction
Rate–distortion theory gives an analytical expression for how much compression can be achieved using lossy compression methods. Many of the existing audio, speech, image, and video compression techniques have transforms, quantization, and bit-rate allocation procedures that capitalize on the general shape of rate–distortion functions.
Rate–distortion theory was created by Claude Shannon in his foundational work on information theory.
In rate–distortion theory, the rate is usually understood as the number of bits per data sample to be stored or transmitted. The notion of distortion is a subject of on-going discussion. In the most simple case (which is actually used in most cases), the distortion is defined as the expected value of the square of the difference between input and output signal (i.e., the mean squared error). However, since we know that most lossy compression techniques operate on data that will be perceived by human consumers (listening to music, watching pictures and video) the distortion measure should preferably be modeled on human perception and perhaps aesthetics: much like the use of probability in lossless compression, distortion measures can ultimately be identified with loss functions as used in Bayesian estimation and decision theory. In audio compression, perceptual models (and therefore perceptual distortion measures) are relatively well developed and routinely used in compression techniques such as MP3 or Vorbis, but are often not easy to include in rate–distortion theory. In image and video compressi |
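A classic case with a simple closed form is a memoryless Gaussian source under squared-error distortion, for which R(D) = ½ log₂(σ²/D) when 0 < D ≤ σ², and R(D) = 0 otherwise. A minimal sketch (function and variable names are this example's own):

```python
import numpy as np

# Rate-distortion function of a memoryless Gaussian source with variance
# sigma2 under mean-squared-error distortion.
def gaussian_rate_distortion(sigma2: float, D: float) -> float:
    if D >= sigma2:
        return 0.0               # this distortion is achievable at zero rate
    return 0.5 * np.log2(sigma2 / D)

sigma2 = 1.0
for D in (0.01, 0.1, 0.5, 1.0):
    print(f"D = {D:4.2f}: R(D) = {gaussian_rate_distortion(sigma2, D):.3f} bits/sample")
```

As expected, demanding a smaller distortion D drives the required rate up, and allowing D to reach the source variance drives it to zero.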
https://en.wikipedia.org/wiki/Maher%20Arar | Maher Arar () (born 1970) is a telecommunications engineer with dual Syrian and Canadian citizenship who has resided in Canada since 1987.
Arar was detained during a layover at John F. Kennedy International Airport in September 2002 on his way home to Canada from a family vacation in Tunis. He was held without charges in solitary confinement in the United States for nearly two weeks, questioned, and denied meaningful access to a lawyer. The US government suspected him of being a member of Al Qaeda and deported him, not to Canada, where he lived and on whose passport he was travelling, but to Syria. He was detained in Syria for almost a year, during which time he was tortured, according to the findings of a commission of inquiry ordered by the Canadian government, until his release to Canada. The Syrian government later stated that Arar was "completely innocent." A Canadian commission publicly cleared Arar of any links to terrorism, and the government of Canada later settled out of court with Arar. He received C$10.5 million and Prime Minister Stephen Harper formally apologized to Arar for Canada's role in his "terrible ordeal." Arar's story is frequently referred to as "extraordinary rendition," but the US government insisted it was a case of deportation.
Arar, represented by lawyers from the Center for Constitutional Rights, filed a lawsuit in the Eastern District of New York, Arar v. Ashcroft, seeking compensatory damages and a declaration that the actions of the US government were illegal and violated his constitutional, civil, and international human rights. After the lawsuit was dismissed by the Federal District Court, the Second Circuit Court of Appeals upheld the dismissal on November 2, 2009. The Supreme Court of the United States declined to review the case on June 14, 2010.
Early life
Maher Arar was born in Syria in 1970 and moved to Canada with his parents at the age of 17 in 1987 to avoid mandatory military service. In 1991, Arar became a Canadia |
https://en.wikipedia.org/wiki/Triviality%20%28mathematics%29 | In mathematics, the adjective trivial is often used to refer to a claim or a case which can be readily obtained from context, or an object which possesses a simple structure (e.g., groups, topological spaces). The noun triviality usually refers to a simple technical aspect of some proof or definition. The origin of the term in mathematical language comes from the medieval trivium curriculum, which was distinguished from the more difficult quadrivium curriculum. The opposite of trivial is nontrivial, which is commonly used to indicate that an example or a solution is not simple, or that a statement or a theorem is not easy to prove.
Whether a situation is trivial depends on who considers it: it may be obviously true to someone with sufficient knowledge or experience of it, while to someone who has never encountered it, it may be hard even to understand, and so not trivial at all. There can also be disagreement about how quickly and easily a problem should be recognized for it to be treated as trivial. Triviality is therefore not a universally agreed property in mathematics and logic.
Trivial and nontrivial solutions
In mathematics, the term "trivial" is often used to refer to objects (e.g., groups, topological spaces) with a very simple structure. These include, among others:
Empty set: the set containing no members
Trivial group: the mathematical group containing only the identity element
Trivial ring: a ring defined on a singleton set
"Trivial" can also be used to describe solutions to an equation that have a very simple structure, but for the sake of completeness cannot be omitted. These solutions are called the trivial solutions. For example, consider the differential equation
where is a function whose derivative is . The trivial solution is the zero function
while a nontrivial solution is the exponential function
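A minimal SymPy sketch confirming the general solution, of which the trivial solution is the C1 = 0 case:

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

# Solve y' = y symbolically: the general solution is y(x) = C1*exp(x).
sol = sp.dsolve(sp.Eq(y(x).diff(x), y(x)), y(x))
print(sol)   # Eq(y(x), C1*exp(x))

# C1 = 0 yields the trivial (zero) solution; C1 = 1 the nontrivial exp(x).
```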
The differential equation with boundary conditions is important in mathematics and |
https://en.wikipedia.org/wiki/PowerPC%207xx | The PowerPC 7xx is a family of third-generation 32-bit PowerPC microprocessors designed and manufactured by IBM and Motorola (whose semiconductor division was later spun off as Freescale Semiconductor and acquired by NXP Semiconductors). This family is called the PowerPC G3 by Apple Computer (later Apple Inc.), which introduced it on November 10, 1997. The term "PowerPC G3" is often, and incorrectly, assumed to refer to a single microprocessor, when in fact a number of microprocessors from different vendors have carried the designation. It was applied to Mac computers such as the PowerBook G3, the multicolored iMacs, iBooks and several desktops, including both the Beige and the Blue and White Power Macintosh G3s. The low power requirements and small size made the processors ideal for laptops, and the name lived out its last days at Apple in the iBook.
The 7xx family is also widely used in embedded devices like printers, routers, storage devices, spacecraft, and video game consoles. The 7xx family had its shortcomings, namely a lack of SMP support, no SIMD capabilities, and a relatively weak FPU. Motorola's 74xx range of processors picked up where the 7xx left off.
Processors
PowerPC 740/750
The PowerPC 740 and 750 (codename Arthur) were introduced in late 1997 as an evolutionary replacement for the PowerPC 603e. Enhancements included a faster 60x system bus (66 MHz), larger L1 caches (32 KB instruction and 32 KB data), a second integer unit, an enhanced floating point unit, and higher core frequency. The 750 had support for an optional 256, 512 or 1024 KB external unified L2 cache. The cache controller and cache tags are on-die. The cache was accessed via a dedicated 64-bit bus.
The 740 and 750 added dynamic branch prediction and a 64-entry branch target instruction cache (BTIC). Dynamic branch prediction uses the recorded outcome of a branch stored in a 512-entry by 2-bit branch history table (BHT) to predict its outcome. The BTIC caches the first two instructions at a branch target.
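The 2-bit entries in such a branch history table implement saturating counters. A schematic Python sketch of that general scheme (table size aside, the indexing and update timing here are illustrative, not a model of the 750's actual pipeline):

```python
# A 2-bit saturating-counter branch history table (BHT). Each entry moves one
# step toward "taken" or "not taken" on each outcome and saturates at the ends.
STRONG_NOT, WEAK_NOT, WEAK_TAKEN, STRONG_TAKEN = 0, 1, 2, 3

class BHT:
    def __init__(self, entries: int = 512):
        self.table = [WEAK_NOT] * entries

    def predict(self, pc: int) -> bool:
        # Predict "taken" if the counter is in either taken state.
        return self.table[pc % len(self.table)] >= WEAK_TAKEN

    def update(self, pc: int, taken: bool) -> None:
        i = pc % len(self.table)
        if taken:
            self.table[i] = min(self.table[i] + 1, STRONG_TAKEN)
        else:
            self.table[i] = max(self.table[i] - 1, STRONG_NOT)

bht = BHT()
bht.update(0x1000, taken=True)
bht.update(0x1000, taken=True)
print(bht.predict(0x1000))   # True: the counter has saturated toward "taken"
```

The two-bit hysteresis means a single mispredicted iteration (such as a loop exit) does not immediately flip the prediction.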
The 740/750 models had 6.35 million |
https://en.wikipedia.org/wiki/Mixing%20console | A mixing console or mixing desk is an electronic device for mixing audio signals, used in sound recording and reproduction and sound reinforcement systems. Inputs to the console include microphones, signals from electric or electronic instruments, or recorded sounds. Mixers may control analog or digital signals. The modified signals are summed to produce the combined output signals, which can then be broadcast, amplified through a sound reinforcement system or recorded.
Mixing consoles are used for applications including recording studios, public address systems, sound reinforcement systems, nightclubs, broadcasting, and post-production. A typical, simple application combines signals from microphones on stage into an amplifier that drives one set of loudspeakers for the audience. A DJ mixer may have only two channels, for mixing two record players. A coffeehouse's tiny stage might only have a six-channel mixer, enough for two singer-guitarists and a percussionist. A nightclub stage's mixer for rock music shows may have 24 channels for mixing the signals from a rhythm section, lead guitar and several vocalists. A mixing console in a professional recording studio may have as many as 96 channels.
In practice, mixers do more than simply mix signals. They can provide phantom power for condenser microphones; pan control, which changes a sound's apparent position in the stereo soundfield; filtering and equalization, which enables sound engineers to boost or cut selected frequencies to improve the sound; dynamic range compression, which allows engineers to increase the overall gain of the system or channel without exceeding the dynamic limits of the system; routing facilities, to send the signal from the mixer to another device, such as a sound recording system or a control room; and monitoring facilities, whereby one of a number of sources can be routed to loudspeakers or headphones for listening, often without affecting the mixer's main output. Some mixers have onboard |
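At the core of all of this is weighted summing of input signals onto output buses. A hypothetical Python sketch of a two-channel stereo mix (the constant-power pan law and all names here are illustrative assumptions, not any particular console's design):

```python
import numpy as np

# Sum mono input channels onto a stereo bus with per-channel gain and
# constant-power panning (pan 0.0 = hard left, 1.0 = hard right).
def mix(channels, gains, pans):
    out = np.zeros((len(channels[0]), 2))        # stereo output bus
    for sig, g, p in zip(channels, gains, pans):
        theta = p * np.pi / 2
        out[:, 0] += g * np.cos(theta) * sig     # left bus
        out[:, 1] += g * np.sin(theta) * sig     # right bus
    return out

t = np.linspace(0.0, 1.0, 48000)
vocal = np.sin(2 * np.pi * 440 * t)              # stand-ins for mic signals
guitar = np.sin(2 * np.pi * 220 * t)
stereo = mix([vocal, guitar], gains=[0.8, 0.5], pans=[0.5, 0.2])
```

Real consoles insert EQ, dynamics, and routing between the input and the summing stage, but the gain-pan-sum structure is the common core.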
https://en.wikipedia.org/wiki/Offshore%20construction | Offshore construction is the installation of structures and facilities in a marine environment, usually for the production and transmission of electricity, oil, gas and other resources. It is also called maritime engineering.
Construction and pre-commissioning are typically performed onshore as far as possible. To reduce the costs and risks of installing large offshore platforms, different construction strategies have been developed.
One strategy is to fully construct the offshore facility onshore, and tow the installation to site floating on its own buoyancy. Bottom-founded structures are lowered to the seabed by de-ballasting (see for instance Condeep or Cranefree), whilst floating structures are held in position with substantial mooring systems.
The size of offshore lifts can be reduced by making the construction modular, with each module being constructed onshore and then lifted using a crane vessel into place onto the platform. A number of very large crane vessels were built in the 1970s which allow very large single modules weighing up to 14,000 tonnes to be fabricated and then lifted into place.
Specialist floating hotel vessels known as flotels or accommodation rigs are used to accommodate workers during the construction and hook-up phases. This is a high cost activity due to the limited space and access to materials.
Oil platforms are key fixed installations from which drilling and production activity is carried out. Drilling rigs are either floating vessels for deeper water or jack-up designs which are a barge with liftable legs. Both of these types of vessel are constructed in marine yards but are often involved during the construction phase to pre-drill some production wells.
Other key factors in offshore construction are the weather windows, which define periods of relatively light weather during which continuous construction or other offshore activity can take place. Safety of personnel is another key construction parameter, an obvious hazard b |
https://en.wikipedia.org/wiki/Law%20of%20the%20instrument | The law of the instrument, law of the hammer, Maslow's hammer (or gavel), or golden hammer is a cognitive bias that involves an over-reliance on a familiar tool. Abraham Maslow wrote in 1966, "If the only tool you have is a hammer, it is tempting to treat everything as if it were a nail."
The concept is attributed both to Maslow and to Abraham Kaplan, although the hammer and nail line may not be original to either of them.
History
The English expression "a Birmingham screwdriver", meaning a hammer, refers to the practice of using the one tool for all purposes, and predates both Kaplan and Maslow by at least a century.
In 1868, a London periodical, Once a Week, contained this observation: "Give a boy a hammer and chisel; show him how to use them; at once he begins to hack the doorposts, to take off the corners of shutter and window frames, until you teach him a better use for them, and how to keep his activity within bounds."
Kaplan
The first recorded statement of the concept was Abraham Kaplan's, in 1964: "I call it the law of the instrument, and it may be formulated as follows: Give a small boy a hammer, and he will find that everything he encounters needs pounding."
In February 1962 Kaplan, then a professor of philosophy, gave a banquet speech at a conference of the American Educational Research Association that was being held at UCLA. An article in the June 1962 issue of the Journal of Medical Education stated that "the highlight of the 3-day meeting ... was to be found in Kaplan's comment on the choice of methods for research. He urged that scientists exercise good judgment in the selection of appropriate methods for their research. That certain methods happen to be handy, or that a given individual has been trained to use a specific method, is no assurance that the method is appropriate for all problems. He cited Kaplan’s Law of the Instrument: 'Give a boy a hammer and everything he meets has to be pounded.'"
In The Conduct of Inquiry: Methodology for Behav |
https://en.wikipedia.org/wiki/Poisson%27s%20equation | Poisson's equation is an elliptic partial differential equation of broad utility in theoretical physics. For example, the solution to Poisson's equation is the potential field caused by a given electric charge or mass density distribution; with the potential field known, one can then calculate the corresponding electrostatic or gravitational (force) field. It is a generalization of Laplace's equation, which is also frequently seen in physics. The equation is named after French mathematician and physicist Siméon Denis Poisson.
Statement of the equation
Poisson's equation is
Δφ = f,
where Δ is the Laplace operator, and f and φ are real or complex-valued functions on a manifold. Usually, f is given, and φ is sought. When the manifold is Euclidean space, the Laplace operator is often denoted as ∇², and so Poisson's equation is frequently written as
∇²φ = f.
In three-dimensional Cartesian coordinates, it takes the form
(∂²/∂x² + ∂²/∂y² + ∂²/∂z²) φ(x, y, z) = f(x, y, z).
When f = 0 identically, we obtain Laplace's equation.
Poisson's equation may be solved using a Green's function:
φ(r) = −(1/4π) ∫ f(r′) / |r − r′| d³r′,
where the integral is over all of space. A general exposition of the Green's function for Poisson's equation is given in the article on the screened Poisson equation. There are various methods for numerical solution, such as the relaxation method, an iterative algorithm.
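A minimal sketch of the relaxation method for the two-dimensional case, iterating the discrete form of ∇²φ = f on a unit square with φ = 0 on the boundary (grid size, source term, and iteration count are arbitrary example choices):

```python
import numpy as np

n = 65
h = 1.0 / (n - 1)
f = np.zeros((n, n))
f[n // 2, n // 2] = 1.0 / h**2      # an approximate point source at the center

phi = np.zeros((n, n))              # boundary values stay fixed at zero
for _ in range(5000):               # Jacobi sweeps until roughly converged
    # Each interior value becomes the average of its four neighbors,
    # corrected by the source term: phi_ij = (sum of neighbors - h^2 f_ij)/4.
    phi[1:-1, 1:-1] = 0.25 * (
        phi[2:, 1:-1] + phi[:-2, 1:-1] + phi[1:-1, 2:] + phi[1:-1, :-2]
        - h**2 * f[1:-1, 1:-1]
    )
```

Each sweep reduces the residual of the discrete equation; in practice one would iterate until the residual falls below a tolerance rather than for a fixed count.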
Newtonian gravity
In the case of a gravitational field g due to an attracting massive object of density ρ, Gauss's law for gravity in differential form can be used to obtain the corresponding Poisson equation for gravity:
∇ · g = −4πGρ.
Since the gravitational field is conservative (and irrotational), it can be expressed in terms of a scalar potential φ:
g = −∇φ.
Substituting this into Gauss's law,
∇ · (−∇φ) = −4πGρ,
yields Poisson's equation for gravity:
∇²φ = 4πGρ.
If the mass density is zero, Poisson's equation reduces to Laplace's equation. The corresponding Green's function can be used to calculate the potential at distance r from a central point mass m (i.e., the fundamental solution). In three dimensions the potential is
φ(r) = −Gm / r,
which is equivalent to Newton's law of |
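A small numerical sanity check that g = −∇φ reproduces the inverse-square field for this potential (the values and helper names are this sketch's own):

```python
import numpy as np

G, m = 6.674e-11, 1.0e6
phi = lambda r: -G * m / np.linalg.norm(r)      # point-mass potential

def g_numeric(r, h=1e-6):
    # Central finite differences for -grad(phi), one axis at a time.
    return -np.array([(phi(r + h * e) - phi(r - h * e)) / (2 * h)
                      for e in np.eye(3)])

r = np.array([3.0, 4.0, 0.0])                   # |r| = 5
g_exact = -G * m * r / np.linalg.norm(r) ** 3   # -G m r_hat / r^2
print(np.allclose(g_numeric(r), g_exact))       # True
```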
https://en.wikipedia.org/wiki/Integration%20by%20substitution | In calculus, integration by substitution, also known as u-substitution, reverse chain rule or change of variables, is a method for evaluating integrals and antiderivatives. It is the counterpart to the chain rule for differentiation, and can loosely be thought of as using the chain rule "backwards."
Substitution for a single variable
Introduction (indefinite integrals)
Before stating the result rigorously, consider a simple case using indefinite integrals.
Compute ∫ 2x cos(x²) dx.
Set u = x². This means du/dx = 2x, or in differential form, du = 2x dx. Now:
∫ 2x cos(x²) dx = ∫ cos u du = sin u + C = sin(x²) + C,
where C is an arbitrary constant of integration.
This procedure is frequently used, but not all integrals are of a form that permits its use. In any event, the result should be verified by differentiating and comparing to the original integrand.
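For the worked example above, such a check can be done symbolically, e.g. with SymPy:

```python
import sympy as sp

x = sp.symbols("x")
result = sp.sin(x**2)                 # candidate antiderivative from above
integrand = 2 * x * sp.cos(x**2)

# Differentiate the result and compare to the original integrand.
print(sp.simplify(sp.diff(result, x) - integrand) == 0)   # True
```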
For definite integrals, the limits of integration must also be adjusted, but the procedure is mostly the same.
Statement for definite integrals
Let g : [a, b] → I be a differentiable function with a continuous derivative, where I ⊆ ℝ is an interval. Suppose that f : I → ℝ is a continuous function. Then:
∫_a^b f(g(x)) g′(x) dx = ∫_{g(a)}^{g(b)} f(u) du.
In Leibniz notation, the substitution u = g(x) yields:
du/dx = g′(x).
Working heuristically with infinitesimals yields the equation
du = g′(x) dx,
which suggests the substitution formula above. (This equation may be put on a rigorous foundation by interpreting it as a statement about differential forms.) One may view the method of integration by substitution as a partial justification of Leibniz's notation for integrals and derivatives.
The formula is used to transform one integral into another integral that is easier to compute. Thus, the formula can be read from left to right or from right to left in order to simplify a given integral. When used in the former manner, it is sometimes known as u-substitution or w-substitution in which a new variable is defined to be a function of the original variable found inside the composite function multiplied by the derivative of the inner function. The latter manner is commonly used in trigonometric substitution, replacing t |
https://en.wikipedia.org/wiki/List%20of%20named%20matrices | This article lists some important classes of matrices used in mathematics, science and engineering. A matrix (plural matrices, or less commonly matrixes) is a rectangular array of numbers called entries. Matrices have a long history of both study and application, leading to diverse ways of classifying matrices. A first group is matrices satisfying concrete conditions of the entries, including constant matrices. Important examples include the identity matrix I_n, with entries 1 on the main diagonal and 0 elsewhere,
and the zero matrix of dimension m × n, whose entries are all 0. For example:
I_2 = [[1, 0], [0, 1]],  0_{2×3} = [[0, 0, 0], [0, 0, 0]].
Further ways of classifying matrices are according to their eigenvalues, or by imposing conditions on the product of the matrix with other matrices. Finally, many domains, both in mathematics and other sciences including physics and chemistry, have particular matrices that are applied chiefly in these areas.
Constant matrices
The list below comprises matrices whose elements are constant for any given dimension (size) of matrix. The matrix entries will be denoted aij. The table below uses the Kronecker delta δij for two integers i and j, which is 1 if i = j and 0 otherwise.
Specific patterns for entries
The following lists matrices whose entries are subject to certain conditions. Many of them apply to square matrices only, that is matrices with the same number of columns and rows. The main diagonal of a square matrix is the diagonal joining the upper left corner and the lower right one or equivalently the entries ai,i. The other diagonal is called anti-diagonal (or counter-diagonal).
Matrices satisfying some equations
A number of matrix-related notions are about properties of products or inverses of the given matrix. The matrix product of an m-by-n matrix A and an n-by-k matrix B is the m-by-k matrix C given by
c_ij = Σ_{l=1}^{n} a_il b_lj.
This matrix product is denoted AB. Unlike the product of numbers, matrix products are not commutative, that is to say AB need not be equal to BA. A number of notions are concerned with the failure of this commutativity. An inverse of square matrix |
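A quick numerical illustration of this non-commutativity (a minimal NumPy sketch):

```python
import numpy as np

# Two 2x2 matrices whose products differ depending on the order.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

print(A @ B)   # [[2 1]
               #  [4 3]]
print(B @ A)   # [[3 4]
               #  [1 2]]
```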
https://en.wikipedia.org/wiki/ISO/IEC%20646 | ISO/IEC 646 is a set of ISO/IEC standards, described as Information technology — ISO 7-bit coded character set for information interchange and developed in cooperation with ASCII at least since 1964. Since its first edition in 1967 it has specified a 7-bit character code from which several national standards are derived.
ISO/IEC 646 was also ratified by ECMA as ECMA-6. The first version of ECMA-6 had been published in 1965, based on work the ECMA's Technical Committee TC1 had carried out since December 1960.
Characters in the ISO/IEC 646 Basic Character Set are invariant characters. Since this invariant character set, shared by all countries, specified only those letters used in the ISO basic Latin alphabet, countries using additional letters needed to create national variants of ISO/IEC 646 to be able to write their native languages. Since transmission and storage of 8-bit codes was not standard at the time, the national characters had to fit within the constraints of 7 bits, meaning that some characters that appear in ASCII do not appear in other national variants of ISO/IEC 646.
History
ISO/IEC 646 and its predecessor ASCII (ASA X3.4) largely endorsed existing practice regarding character encodings in the telecommunications industry.
As ASCII did not provide a number of characters needed for languages other than English, a number of national variants were made that substituted some less-used characters with needed ones. Due to the incompatibility of the various national variants, an International Reference Version (IRV) of ISO/IEC 646 was introduced, in an attempt to at least restrict the replaced set to the same characters in all variants. The original version (ISO 646 IRV) differed from ASCII only in that code point 0x24, ASCII's dollar sign ($) was replaced by the international currency symbol (¤). The final 1991 version of the code ISO/IEC 646:1991 is also known as ITU T.50, International Reference Alphabet or I |
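Conceptually, each variant can be treated as ASCII with a handful of code points overridden. A hypothetical sketch (the decoder and override table below are this example's own construction, not a standard library API), using the documented IRV change at 0x24:

```python
# The original IRV differed from ASCII only at code point 0x24 ($ -> ¤).
IRV_OVERRIDES = {0x24: "¤"}

def decode_iso646(data: bytes, overrides: dict[int, str]) -> str:
    # Fall back to the ASCII interpretation for every non-overridden byte.
    return "".join(overrides.get(b, chr(b)) for b in data)

print(decode_iso646(b"PRICE: $10", IRV_OVERRIDES))   # PRICE: ¤10
```

A national variant would simply supply a larger override table for its replaced code points.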