source | text |
|---|---|
https://en.wikipedia.org/wiki/FR-4 | FR-4 (or FR4) is a NEMA grade designation for glass-reinforced epoxy laminate material. FR-4 is a composite material composed of woven fiberglass cloth with an epoxy resin binder that is flame resistant (self-extinguishing).
"FR" stands for "flame retardant", and does not denote that the material complies with the standard UL94V-0 unless testing is performed to UL 94, Vertical Flame testing in Section 8 at a compliant lab. The designation FR-4 was created by NEMA in 1968.
FR-4 glass epoxy is a popular and versatile high-pressure thermoset plastic laminate grade with a good strength-to-weight ratio. With near-zero water absorption, FR-4 is most commonly used as an electrical insulator possessing considerable mechanical strength. The material is known to retain its high mechanical values and electrical insulating qualities in both dry and humid conditions. These attributes, along with good fabrication characteristics, lend utility to this grade for a wide variety of electrical and mechanical applications.
Grade designations for glass epoxy laminates are: G-10, G-11, FR-4, FR-5 and FR-6. Of these, FR-4 is the grade most widely in use today. G-10, the predecessor to FR-4, lacks FR-4's self-extinguishing flammability characteristics. Hence, FR-4 has since replaced G-10 in most applications.
FR-4 epoxy resin systems typically employ bromine, a halogen, to achieve flame-resistant properties in FR-4 glass epoxy laminates. Some applications in which thermal destruction of the material is a desirable trait will still use non-flame-resistant G-10.
Properties
Which materials fall into the "FR-4" category is defined in the NEMA LI 1-1998 standard. Typical physical and electrical properties of FR-4 are as follows. The abbreviations LW (lengthwise, warp yarn direction) and CW (crosswise, fill yarn direction) refer to the conventional perpendicular fiber orientations in the XY plane of the board (in-plane). In terms of Cartesian coordinates, lengthwise is along the x-axis, cr |
https://en.wikipedia.org/wiki/Self-Monitoring%2C%20Analysis%20and%20Reporting%20Technology | Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T., often written as SMART) is a monitoring system included in computer hard disk drives (HDDs) and solid-state drives (SSDs). Its primary function is to detect and report various indicators of drive reliability with the intent of anticipating imminent hardware failures.
When S.M.A.R.T. data indicates a possible imminent drive failure, software running on the host system may notify the user so action can be taken to prevent data loss, and the failing drive can be replaced and data integrity maintained.
Background
Hard disk and other storage drives are subject to failures (see hard disk drive failure) which can be classified within two basic classes:
Predictable failures which result from slow processes such as mechanical wear and gradual degradation of storage surfaces. Monitoring can determine when such failures are becoming more likely.
Unpredictable failures which occur without warning due to anything from electronic components becoming defective to a sudden mechanical failure, including failures related to improper handling.
Mechanical failures account for about 60% of all drive failures.
While the eventual failure may be catastrophic, most mechanical failures result from gradual wear and there are usually certain indications that failure is imminent. These may include increased heat output, increased noise level, problems with reading and writing of data, or an increase in the number of damaged disk sectors.
PCTechGuide's page on S.M.A.R.T. (2003) comments that the technology has gone through three phases:
Accuracy
A field study at Google covering over 100,000 consumer-grade drives from December 2005 to August 2006 found correlations between certain S.M.A.R.T. information and annualized failure rates:
In the 60 days following the first uncorrectable error on a drive (S.M.A.R.T. attribute 0xC6 or 198) detected as a result of an offline scan, the drive was, on average, 39 times more likely to |
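The uncorrectable-sector attribute mentioned above can be inspected on a Linux host with smartmontools. A minimal sketch (an assumption of mine, not from the article: it assumes smartmontools is installed, uses the placeholder device path /dev/sda, and relies on the common ten-column layout of the `smartctl -A` attribute table, which can vary by drive and version):

```python
# Minimal sketch: read a S.M.A.R.T. attribute (here 198, the uncorrectable
# sector count, 0xC6) by parsing the table printed by `smartctl -A`.
# /dev/sda is a placeholder; the standard table has ten columns with the
# raw value last.
import subprocess

def read_smart_attribute(device: str, attr_id: int):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows begin with the numeric attribute ID.
        if len(fields) >= 10 and fields[0].isdigit() and int(fields[0]) == attr_id:
            return int(fields[9])      # RAW_VALUE column (leading number)
    return None

raw = read_smart_attribute("/dev/sda", 198)
if raw:
    print(f"warning: {raw} uncorrectable sectors reported")
```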
https://en.wikipedia.org/wiki/LexisNexis | LexisNexis is a part of the RELX corporation (formerly Reed Elsevier) that sells data analytics products and various databases that are accessed through online portals, including portals for computer-assisted legal research (CALR), newspaper search, and consumer information. During the 1970s, LexisNexis began to make legal and journalistic documents more accessible electronically. The company came to hold the world's largest electronic database for legal and public-records–related information.
History
LexisNexis is owned by RELX (formerly known as Reed Elsevier).
According to Trudi Bellardo Hahn and Charles P. Bourne, LexisNexis (originally founded as LEXIS) is historically significant because it was the first of the early information services to both envision and actually bring about a future in which large populations of end users would directly interact with computer databases, rather than going through professional intermediaries like librarians. The developers of several other early information services in the 1970s harbored similar ambitions (e.g., OCLC's WorldCat), but met with financial, structural, and technological constraints and were forced to retreat to the professional intermediary model until the early 1990s.
The LexisNexis story begins in western Pennsylvania in 1956, when attorney John Horty began to explore the use of CALR technology in support of his work on comparative hospital law at the University of Pittsburgh Health Law Center. Horty was surprised to discover the extent to which the laws governing hospital administration varied from one state to another across the United States and began building a computer database to help him keep track of it all.
In 1965, Horty's work inspired the Ohio State Bar Association (OSBA) to independently develop its own CALR system, Ohio Bar Automated Research (OBAR). In 1967, the OSBA signed a contract with Data Corporation, a local defense contractor, to build OBAR based on the OSBA's written specifications. Data |
https://en.wikipedia.org/wiki/Bendixson%E2%80%93Dulac%20theorem | In mathematics, the Bendixson–Dulac theorem on dynamical systems states that if there exists a $C^1$ function $\varphi(x, y)$ (called the Dulac function) such that the expression
$$\frac{\partial (\varphi f)}{\partial x} + \frac{\partial (\varphi g)}{\partial y}$$
has the same sign ($\neq 0$) almost everywhere in a simply connected region of the plane, then the plane autonomous system
$$\frac{dx}{dt} = f(x, y), \qquad \frac{dy}{dt} = g(x, y)$$
has no nonconstant periodic solutions lying entirely within the region. "Almost everywhere" means everywhere except possibly in a set of measure 0, such as a point or line.
The theorem was first established by Swedish mathematician Ivar Bendixson in 1901 and further refined by French mathematician Henri Dulac in 1923 using Green's theorem.
Proof
Without loss of generality, let there exist a function $\varphi$ such that
$$\frac{\partial (\varphi f)}{\partial x} + \frac{\partial (\varphi g)}{\partial y} > 0$$
in simply connected region $R$. Let $\gamma$ be a closed trajectory of the plane autonomous system in $R$, and let $D$ be the interior of $\gamma$. Then by Green's theorem,
$$\iint_D \left( \frac{\partial (\varphi f)}{\partial x} + \frac{\partial (\varphi g)}{\partial y} \right) dx\,dy = \oint_\gamma \varphi\,(f\,dy - g\,dx).$$
Because of the constant sign, the left-hand integral must evaluate to a positive number. But on $\gamma$, $dx = f\,dt$ and $dy = g\,dt$, so the integrand on the right is $\varphi(fg - gf)\,dt = 0$ everywhere, and for this reason the right-hand integral evaluates to 0. This is a contradiction, so there can be no such closed trajectory $\gamma$.
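A worked example (my own illustration, not from the article): take the Dulac function $\varphi \equiv 1$, so the criterion reduces to the sign of the divergence. For the damped system
$$\frac{dx}{dt} = y, \qquad \frac{dy}{dt} = -x - y + x^2,$$
the expression in the theorem is
$$\frac{\partial(\varphi f)}{\partial x} + \frac{\partial(\varphi g)}{\partial y} = \frac{\partial y}{\partial x} + \frac{\partial}{\partial y}\left(-x - y + x^2\right) = 0 - 1 = -1 < 0$$
everywhere in the plane, so the system admits no nonconstant periodic solution in any simply connected region.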
References
Henri Dulac (1870-1955) was a French mathematician from Fayence
Differential equations
Theorems in dynamical systems |
https://en.wikipedia.org/wiki/List%20of%20partial%20differential%20equation%20topics | This is a list of partial differential equation topics.
General topics
Partial differential equation
Nonlinear partial differential equation
list of nonlinear partial differential equations
Boundary condition
Boundary value problem
Dirichlet problem, Dirichlet boundary condition
Neumann boundary condition
Stefan problem
Wiener–Hopf problem
Separation of variables
Green's function
Elliptic partial differential equation
Singular perturbation
Cauchy–Kovalevskaya theorem
H-principle
Atiyah–Singer index theorem
Bäcklund transform
Viscosity solution
Weak solution
Loewy decomposition of linear differential equations
Specific partial differential equations
Broer–Kaup equations
Burgers' equation
Euler equations
Fokker–Planck equation
Hamilton–Jacobi equation, Hamilton–Jacobi–Bellman equation
Heat equation
Laplace's equation
Laplace operator
Harmonic function
Spherical harmonic
Poisson integral formula
Klein–Gordon equation
Korteweg–de Vries equation
Modified KdV–Burgers equation
Maxwell's equations
Navier–Stokes equations
Poisson's equation
Primitive equations (hydrodynamics)
Schrödinger equation
Wave equation
Numerical methods for PDEs
Finite difference
Finite element method
Finite volume method
Boundary element method
Multigrid
Spectral method
Computational fluid dynamics
Alternating direction implicit
Related areas of mathematics
Calculus of variations
Harmonic analysis
Ordinary differential equation
Sobolev space
Partial differential equations |
https://en.wikipedia.org/wiki/Hausdorff%20paradox | The Hausdorff paradox is a paradox in mathematics named after Felix Hausdorff. It involves the sphere $S^2$ (the sphere in 3-dimensional space $\mathbb{R}^3$). It states that if a certain countable subset is removed from $S^2$, then the remainder can be divided into three disjoint subsets $A$, $B$ and $C$ such that $A$, $B$, $C$ and $B \cup C$ are all congruent. In particular, it follows that on $S^2$ there is no finitely additive measure defined on all subsets such that the measure of congruent sets is equal (because this would imply that the measure of $A$ is simultaneously $1/3$ and $1/2$ of the non-zero measure of the whole sphere).
The paradox was published in Mathematische Annalen in 1914 and also in Hausdorff's book, Grundzüge der Mengenlehre, the same year. The proof of the much more famous Banach–Tarski paradox uses Hausdorff's ideas. The proof of this paradox relies on the axiom of choice.
This paradox shows that there is no finitely additive measure on a sphere defined on all subsets which is equal on congruent pieces. (Hausdorff first showed in the same paper the easier result that there is no countably additive measure defined on all subsets.) The structure of the group of rotations on the sphere plays a crucial role here: the statement is not true on the plane or the line. In fact, as was later shown by Banach, it is possible to define an "area" for all bounded subsets in the Euclidean plane (as well as "length" on the real line) in such a way that congruent sets will have equal "area". (This Banach measure, however, is only finitely additive, so it is not a measure in the full sense, but it equals the Lebesgue measure on sets for which the latter exists.) This implies that if two open subsets of the plane (or the real line) are equi-decomposable then they have equal area.
See also
References
Further reading
(Original article; in German)
External links
Hausdorff Paradox on ProofWiki
Mathematical paradoxes
Theorems in analysis
Measure theory |
https://en.wikipedia.org/wiki/Ecotype | In evolutionary ecology, an ecotype, sometimes called ecospecies, describes a genetically distinct geographic variety, population, or race within a species, which is genotypically adapted to specific environmental conditions.
Typically, though ecotypes exhibit phenotypic differences (such as in morphology or physiology) stemming from environmental heterogeneity, they are capable of interbreeding with other geographically adjacent ecotypes without loss of fertility or vigor.
Definition
An ecotype is a variant in which the phenotypic differences are too few or too subtle to warrant being classified as a subspecies. These different variants can occur in the same geographic region where distinct habitats such as meadow, forest, swamp, and sand dunes provide ecological niches. Where similar ecological conditions occur in widely separated places, it is possible for a similar ecotype to occur in the separated locations. An ecotype is different from a subspecies, which may exist across a number of different habitats. In animals, ecotypes owe their differing characteristics to the effects of a very local environment. Therefore, ecotypes have no taxonomic rank.
Terminology
Ecotypes are closely related to morphs. In the context of evolutionary biology, genetic polymorphism is the occurrence, in equilibrium, of two or more distinctly different phenotypes within a population of a species, in other words, the occurrence of more than one form or morph. The frequency of these discontinuous forms (even that of the rarest) is too high to be explained by mutation. In order to be classified as such, morphs must occupy the same habitat at the same time and belong to a panmictic population (whose members can all potentially interbreed). Polymorphism is actively and steadily maintained in populations of species by natural selection (most famously sexual dimorphism in humans) in contrast to transient polymorphisms where conditions in a habitat change in such a way that a "form" is be |
https://en.wikipedia.org/wiki/Dissipation%20factor | In physics, the dissipation factor (DF) is a measure of loss-rate of energy of a mode of oscillation (mechanical, electrical, or electromechanical) in a dissipative system. It is the reciprocal of quality factor, which represents the "quality" or durability of oscillation.
Explanation
Electrical potential energy is dissipated in all dielectric materials, usually in the form of heat. In a capacitor made of a dielectric placed between conductors, the typical lumped element model includes a lossless ideal capacitor in series with a resistor termed the equivalent series resistance (ESR), as shown below. The ESR represents losses in the capacitor. In a good capacitor the ESR is very small, and in a poor capacitor the ESR is large. However, some applications actually require a minimum ESR. Note that the ESR is not simply the resistance that would be measured across a capacitor by an ohmmeter. The ESR is a derived quantity with physical origins in both the dielectric's conduction electrons and dipole relaxation phenomena. In a given dielectric, typically either the conduction electrons or the dipole relaxation dominates the loss. For the case of the conduction electrons being the dominant loss,
$$\mathrm{ESR} = \frac{\sigma}{\varepsilon \omega^2 C}$$
where
$\sigma$ is the dielectric's bulk conductivity,
$\varepsilon$ is the lossless permittivity of the dielectric,
$\omega$ is the angular frequency of the AC current $i$, and
$C$ is the lossless capacitance.
If the capacitor is used in an AC circuit, the dissipation factor due to the non-ideal capacitor is expressed as the ratio of the resistive power loss in the ESR to the reactive power oscillating in the capacitor, or
$$\mathrm{DF} = \frac{i^2\,\mathrm{ESR}}{i^2\,|X_C|} = \omega C \cdot \mathrm{ESR} = \frac{\sigma}{\varepsilon \omega} = \frac{1}{Q}.$$
When representing the electrical circuit parameters as vectors in a complex plane, known as phasors, a capacitor's dissipation factor is equal to the tangent of the angle between the capacitor's impedance vector and the negative reactive axis, as shown in the adjacent diagram. This gives rise to the parameter known as the loss tangent $\tan \delta$, where
$$\mathrm{DF} = \tan \delta = \frac{\mathrm{ESR}}{|X_C|}.$$
Alternatively, can b |
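Putting the relations above together numerically (a sketch with arbitrary illustrative component values, not figures from the article):

```python
# Numeric sketch of the relations above: DF = ESR / |Xc| = omega * C * ESR,
# and Q = 1 / DF. All component values below are assumed for illustration.
import math

f = 100e3      # frequency of the AC signal, Hz (assumed)
C = 10e-6      # lossless capacitance, F (assumed)
esr = 0.05     # equivalent series resistance, ohms (assumed)

omega = 2 * math.pi * f
xc = 1 / (omega * C)            # magnitude of the capacitive reactance
df = esr / xc                   # dissipation factor, equal to tan(delta)
print(f"|Xc| = {xc * 1e3:.2f} mohm, DF = {df:.3f}, Q = {1 / df:.1f}")
```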
https://en.wikipedia.org/wiki/Vacuum%20chamber | A vacuum chamber is a rigid enclosure from which air and other gases are removed by a vacuum pump. This results in a low-pressure environment within the chamber, commonly referred to as a vacuum. A vacuum environment allows researchers to conduct physical experiments or to test mechanical devices which must operate in outer space (for example) or for processes such as vacuum drying or vacuum coating. Chambers are typically made of metals which may or may not shield applied external magnetic fields depending on wall thickness, frequency, resistivity, and permeability of the material used. Only some materials are suitable for vacuum use.
Chambers often have multiple ports, covered with vacuum flanges, to allow instruments or windows to be installed in the walls of the chamber. In low to medium-vacuum applications, these are sealed with elastomer o-rings. In higher vacuum applications, the flanges have knife edges machined onto them, which cut into a copper gasket when the flange is bolted on.
A type of vacuum chamber frequently used in the field of spacecraft engineering is a thermal vacuum chamber, which provides a thermal environment representing what a spacecraft would experience in space.
Vacuum chamber materials
Vacuum chambers can be constructed of many materials. "Metals are arguably the most prevalent vacuum chamber materials." The strength, pressure, and permeability are considerations for selecting chamber material.
Common materials are:
Stainless Steel
Aluminum
Mild Steel
Brass
High density ceramic
Glass
Acrylic
Hardened steel
Vacuum degassing
"Vacuum is the process of using vacuum to remove gases from compounds which become entrapped in the
mixture when mixing the components." To assure a bubble-free mold when mixing resin and silicone rubbers and slower-setting harder resins, a vacuum chamber is required. A small vacuum chamber is needed for de-airing (eliminating air bubbles) for materials prior to their setting. The process is fairly strai |
https://en.wikipedia.org/wiki/Index%20of%20evolutionary%20biology%20articles | This is a list of topics in evolutionary biology.
A
abiogenesis – adaptation – adaptive mutation – adaptive radiation – allele – allele frequency – allochronic speciation – allopatric speciation – altruism – : anagenesis – anti-predator adaptation – applications of evolution – aposematism – Archaeopteryx – aquatic adaptation – artificial selection – atavism
B
Henry Walter Bates – biological organisation – Brassica oleracea – breed
C
Cambrian explosion – camouflage – Sean B. Carroll – catagenesis – gene-centered view of evolution – cephalization – Sergei Chetverikov – chronobiology – chronospecies – clade – cladistics – climatic adaptation – coalescent theory – co-evolution – co-operation – coefficient of relationship – common descent – convergent evolution – creation–evolution controversy – cultivar – conspecific song preference
D
Darwin (unit) – Charles Darwin – Darwinism – Darwin's finches – Richard Dawkins – directed mutagenesis – Directed evolution – directional selection – Theodosius Dobzhansky – dog breeding – domestication – domestication of the horse
E
E. coli long-term evolution experiment – ecological genetics – ecological selection – ecological speciation – Endless Forms Most Beautiful – endosymbiosis – error threshold (evolution) – evidence of common descent – evolution – evolutionary arms race – evolutionary capacitance
Evolution: of ageing – of the brain – of cetaceans – of complexity – of dinosaurs – of the eye – of fish – of the horse – of insects – of human intelligence – of mammalian auditory ossicles – of mammals – of monogamy – of sex – of sirenians – of tetrapods – of the wolf
evolutionary developmental biology – evolutionary dynamics – evolutionary game theory – evolutionary history of life – evolutionary history of plants – evolutionary medicine – evolutionary neuroscience – evolutionary psychology – evolutionary radiation – evolutionarily stable strategy – evolutionary taxonomy – evolutionary tree – evolvability – experimental evol |
https://en.wikipedia.org/wiki/Green%20Hills%20Software | Green Hills Software is a privately owned company that builds operating systems and programming tools for embedded systems. The firm was founded in 1982 by Dan O'Dowd and Carl Rosenberg. Its headquarters are in Santa Barbara, California.
History
Green Hills Software and Wind River Systems enacted a 99-year contract as cooperative peers in the embedded software engineering market throughout the 1990s, with their relationship ending in a series of lawsuits throughout the early 2000s. The two companies then parted ways in opposite directions: Wind River publicly embraced Linux and open-source software, while Green Hills initiated a public relations campaign decrying their use in issues of national security.
In 2008, the Green Hills real-time operating system (RTOS) named Integrity-178 was the first system to be certified by the National Information Assurance Partnership (NIAP), composed of National Security Agency (NSA) and National Institute of Standards and Technology (NIST), to Evaluation Assurance Level (EAL) 6+.
In November 2008, it was announced that a commercialized version of Integrity-178B would be made available to the private sector by Integrity Global Security, a subsidiary of Green Hills Software.
On March 27, 2012, a contract was announced between Green Hills Software and Nintendo. This designates MULTI as the official integrated development environment and toolchain for Nintendo and its licensed developers to program the Wii U video game console.
On February 25, 2014, it was announced that the operating system Integrity had been chosen by Urban Aeronautics for their AirMule flying car unmanned aerial vehicle (UAV), since renamed the Tactical Robotics Cormorant.
Selected products
Real-time operating systems
Integrity is a POSIX real-time operating system (RTOS). An Integrity variant, named Integrity-178B, was certified to Common Criteria Evaluation Assurance Level (EAL) 6+, High Robustness in November 2008.
Micro Velosity (styli |
https://en.wikipedia.org/wiki/Pidgin%20code | In computer programming, pidgin code is a mixture of several programming languages in the same program, or pseudocode that is a mixture of a programming language with natural language descriptions. Hence the name: the mixture is a programming language analogous to a pidgin in natural languages.
In numerical computation, mathematical style pseudocode is sometimes called pidgin code, for example pidgin ALGOL (the origin of the concept), pidgin Fortran, pidgin BASIC, pidgin Pascal, and pidgin C. It is a compact and often informal notation that blends syntax taken from a conventional programming language with mathematical notation, typically using set theory and matrix operations, and perhaps also natural language descriptions.
It can be understood by a wide range of mathematically trained people, and is used as a way to describe algorithms where the control structure is made explicit at a rather high level of detail, while some data structures are still left at an abstract level, independent of any specific programming language.
Normally non-ASCII typesetting is used for the mathematical equations, for example by means of TeX or MathML markup, or proprietary Formula editor formats.
These are examples of articles that contain mathematical style pseudo code:
Algorithm
Conjugate gradient method
Ford-Fulkerson algorithm
Gauss–Seidel method
Generalized minimal residual method
Jacobi eigenvalue algorithm
Jacobi method
Karmarkar's algorithm
Particle swarm optimization
Stone method
Successive over-relaxation
Symbolic Cholesky decomposition
Tridiagonal matrix algorithm
See also
Pseudocode
Algorithm description languages |
https://en.wikipedia.org/wiki/Might%20and%20Magic%20Book%20One%3A%20The%20Secret%20of%20the%20Inner%20Sanctum | Might and Magic Book One: Secret of the Inner Sanctum (also known as simply Might and Magic) is an early role-playing video game, first in the popular and influential Might and Magic franchise. It was released in 1986 as New World Computing's debut, ported to numerous platforms and re-released continuously through the early 1990s.
Plot and setting
The game is set on the world of VARN which features expansive outdoor terrain, castles, caves, underground cities and an Astral Plane.
The game centers on six adventurers who are trying to discover the secret of the Inner Sanctum: a kind of "holy grail" quest. While trying to discover the Inner Sanctum, the heroes discover information about a mysterious character named Corak and his hunt for the missing villain Sheltem. They end up unmasking Sheltem, who had been masquerading as the King, and defeating his evil machinations. At the end of the game they go through the "Gates to Another World" and travel to CRON, not knowing that Sheltem has also escaped to that world.
Although it appears to take place in a straightforward medieval fantasy setting of knights in armor, mythical monsters and magicians, a number of science fiction elements are revealed later in the game, down to the actual meaning of VARN (Vehicular Astropod Research Nacelle). This was a relatively common trait of early CRPGs, as also seen in the oldest Ultima and Wizardry titles. For example, the Sheltem plot is first introduced when the adventurers visit the site of a crashed space ship and are told by aliens that their prisoner is at large in the world.
Game mechanics
Characters
The characters in Might and Magic and its successors are defined by a number of rules, conforming loosely to the fantasy role-playing archetypes.
Characters have "statistics" (analogous to Dungeons and Dragons Ability scores) of Might, Endurance, Accuracy, Personality, Intelligence and Luck.
There are six character classes:
Knight characters are based on the Dungeons and Dra |
https://en.wikipedia.org/wiki/Binary%20delta%20compression | Binary delta compression is a technology used in software deployment for distributing patches.
Explanation
Downloading large amounts of data over the Internet for software updates can cause heavy network traffic, especially when a network of computers is involved. Binary delta compression technology allows a major reduction of download size by only transferring the difference between the old and the new files during the update process.
Implementation
In real-world implementations, it is common to also apply standard compression techniques (such as Lempel–Ziv coding) when encoding the delta. This makes sense because Lempel–Ziv compression itself works by referring back to re-used strings. zdelta is a good example of this, as it is built from zlib. The algorithm works by referring to common patterns not only in the file to be compressed, but also in a source file. The benefit of this is that even if there are few similarities between the original and the new file, a good data compression ratio is attained.
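A toy sketch of the idea (my own illustration, not the algorithm of zdelta or any other product): derive copy/insert operations against the old file with Python's difflib, then deflate the operation stream with zlib.

```python
# Toy binary-delta sketch: encode the new file as COPY (ranges of the old
# file) and INSERT (literal new bytes) operations, then deflate the op
# stream with zlib. Real tools are far more sophisticated; this only shows
# that shared content need not be retransmitted.
import difflib, pickle, zlib

def make_delta(old: bytes, new: bytes) -> bytes:
    ops = []
    sm = difflib.SequenceMatcher(None, old, new, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2 - i1))      # reuse bytes from old
        elif j2 > j1:
            ops.append(("insert", new[j1:j2]))     # ship literal new bytes
    return zlib.compress(pickle.dumps(ops))

def apply_delta(old: bytes, delta: bytes) -> bytes:
    out = bytearray()
    for op in pickle.loads(zlib.decompress(delta)):
        if op[0] == "copy":
            _, start, length = op
            out += old[start:start + length]
        else:
            out += op[1]
    return bytes(out)

old = b"the quick brown fox jumps over the lazy dog" * 100
new = old.replace(b"lazy", b"sleepy")
delta = make_delta(old, new)
assert apply_delta(old, delta) == new
print(len(new), "bytes ->", len(delta), "byte patch")
```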
See also
Delta encoding
Remote Differential Compression
External links
White paper for Microsoft's implementation of BDC technology
A binary delta compression scheme used by Google to roll out its updates at a smaller download size
Software distribution |
https://en.wikipedia.org/wiki/Transduction%20%28machine%20learning%29 | In logic, statistical inference, and supervised learning,
transduction or transductive inference is reasoning from
observed, specific (training) cases to specific (test) cases. In contrast,
induction is reasoning from observed training cases
to general rules, which are then applied to the test cases. The distinction is
most interesting in cases where the predictions of the transductive model are
not achievable by any inductive model. Note that this is caused by transductive
inference on different test sets producing mutually inconsistent predictions.
Transduction was introduced by Vladimir Vapnik in the 1990s, motivated by
his view that transduction is preferable to induction since, according to him, induction requires
solving a more general problem (inferring a function) before solving a more
specific problem (computing outputs for new cases): "When solving a problem of
interest, do not solve a more general problem as an intermediate step. Try to
get the answer that you really need but not a more general one." A similar
observation had been made earlier by Bertrand Russell:
"we shall reach the conclusion that Socrates is mortal with a greater approach to
certainty if we make our argument purely inductive than if we go by way of 'all men are mortal' and then use
deduction" (Russell 1912, chap VII).
An example of learning which is not inductive would be in the case of binary
classification, where the inputs tend to cluster in two groups. A large set of
test inputs may help in finding the clusters, thus providing useful information
about the classification labels. The same predictions would not be obtainable
from a model which induces a function based only on the training cases. Some
people may call this an example of the closely related semi-supervised learning, though Vapnik's motivation is quite different. An example of an algorithm in this category is the Transductive Support Vector Machine (TSVM).
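As a runnable illustration of transductive output (my own sketch using a graph-based method from scikit-learn, not Vapnik's TSVM): the model labels exactly the unlabeled points it is handed, rather than returning a reusable function.

```python
# Transductive labeling sketch with scikit-learn's LabelSpreading. Points
# marked -1 are the unlabeled "test" cases; transduction_ holds the labels
# inferred for exactly those points. Illustrative only, not a TSVM.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.semi_supervised import LabelSpreading

X, y_true = make_blobs(n_samples=200, centers=2, random_state=0)
y = np.full_like(y_true, -1)                  # everything starts unlabeled
for c in (0, 1):                              # reveal one example per cluster
    y[np.where(y_true == c)[0][0]] = c

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y)
pred = model.transduction_                    # labels for the given points only
mask = y == -1
print("accuracy on unlabeled points:", (pred[mask] == y_true[mask]).mean())
```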
A third possible motivation which leads to transduction arise |
https://en.wikipedia.org/wiki/Semantic%20security | In cryptography, a semantically secure cryptosystem is one where only negligible information about the plaintext can be feasibly extracted from the ciphertext. Specifically, any probabilistic, polynomial-time algorithm (PPTA) that is given the ciphertext of a certain message (taken from any distribution of messages), and the message's length, cannot determine any partial information on the message with probability non-negligibly higher than all other PPTA's that only have access to the message length (and not the ciphertext). This concept is the computational complexity analogue to Shannon's concept of perfect secrecy. Perfect secrecy means that the ciphertext reveals no information at all about the plaintext, whereas semantic security implies that any information revealed cannot be feasibly extracted.
History
The notion of semantic security was first put forward by Goldwasser and Micali in 1982. However, the definition they initially proposed offered no straightforward means to prove the security of practical cryptosystems. Goldwasser and Micali subsequently demonstrated that semantic security is equivalent to another definition of security called ciphertext indistinguishability under chosen-plaintext attack. This latter definition is more common than the original definition of semantic security because it better facilitates proving the security of practical cryptosystems.
Symmetric-key cryptography
In the case of symmetric-key algorithm cryptosystems, an adversary must not be able to compute any information about a plaintext from its ciphertext. This may be posited as an adversary, given two plaintexts of equal length and their two respective ciphertexts, cannot determine which ciphertext belongs to which plaintext.
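A minimal sketch of that guessing game (my own illustration; `encrypt` here is a stand-in one-time-pad cipher, not a scheme named in the article):

```python
# Sketch of the symmetric indistinguishability game: the adversary submits
# two equal-length messages, the challenger encrypts one chosen by a secret
# coin, and the adversary must guess the coin. A scheme is secure if no
# efficient adversary wins non-negligibly more often than half the time.
import secrets

def encrypt(key: bytes, msg: bytes) -> bytes:
    # Placeholder one-time-pad-style cipher, for illustration only.
    return bytes(m ^ k for m, k in zip(msg, key))

def ind_experiment(adversary) -> bool:
    m0, m1 = adversary.choose_messages()
    assert len(m0) == len(m1)                  # messages must be equal length
    b = secrets.randbits(1)                    # challenger's secret coin
    key = secrets.token_bytes(len(m0))         # fresh random key
    guess = adversary.guess(encrypt(key, (m0, m1)[b]))
    return guess == b                          # True = adversary wins

class RandomGuesser:
    def choose_messages(self):
        return b"attack at dawn", b"retreat at 6am"
    def guess(self, ciphertext: bytes) -> int:
        return secrets.randbits(1)

wins = sum(ind_experiment(RandomGuesser()) for _ in range(10_000))
print("win rate:", wins / 10_000)   # hovers near 0.5
```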
Public-key cryptography
For an asymmetric key encryption algorithm cryptosystem to be semantically secure, it must be infeasible for a computationally bounded adversary to derive significant information about a message (plaintext) when given only i |
https://en.wikipedia.org/wiki/Schoenflies%20notation | The Schoenflies (or Schönflies) notation, named after the German mathematician Arthur Moritz Schoenflies, is a notation primarily used to specify point groups in three dimensions. Because a point group alone is completely adequate to describe the symmetry of a molecule, the notation is often sufficient and commonly used for spectroscopy. However, in crystallography, there is additional translational symmetry, and point groups are not enough to describe the full symmetry of crystals, so the full space group is usually used instead. The naming of full space groups usually follows another common convention, the Hermann–Mauguin notation, also known as the international notation.
Although Schoenflies notation without superscripts is a pure point group notation, optionally, superscripts can be added to further specify individual space groups. However, for space groups, the connection to the underlying symmetry elements is much more clear in Hermann–Mauguin notation, so the latter notation is usually preferred for space groups.
Symmetry elements
Symmetry elements are denoted by i for centers of inversion, C for proper rotation axes, σ for mirror planes, and S for improper rotation axes (rotation-reflection axes). C and S are usually followed by a subscript number (abstractly denoted n) denoting the order of rotation possible.
By convention, the axis of proper rotation of greatest order is defined as the principal axis. All other symmetry elements are described in relation to it. A vertical mirror plane (containing the principal axis) is denoted σv; a horizontal mirror plane (perpendicular to the principal axis) is denoted σh.
Point groups
In three dimensions, there are an infinite number of point groups, but all of them can be classified by several families.
Cn (for cyclic) has an n-fold rotation axis.
Cnh is Cn with the addition of a mirror (reflection) plane perpendicular to the axis of rotation (horizontal plane).
Cnv is Cn with the addition of n mirror pla |
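A toy illustration (my own sketch, not from the article) of how the cyclic-family symbols compose from the pieces just described:

```python
# Toy formatter for cyclic-family Schoenflies symbols: Cn for an n-fold axis
# alone, Cnh with a horizontal mirror plane, Cnv with n vertical planes.
# Illustrative only; assigning a real molecule's point group requires the
# full geometry, and combining both mirror types leaves the Cn family.
def cyclic_symbol(n: int, horizontal_mirror: bool = False,
                  vertical_mirrors: bool = False) -> str:
    if horizontal_mirror and vertical_mirrors:
        raise ValueError("both mirror types present: no longer a Cn-family group")
    suffix = "h" if horizontal_mirror else "v" if vertical_mirrors else ""
    return f"C{n}{suffix}"

print(cyclic_symbol(2, vertical_mirrors=True))   # C2v, e.g. the water molecule
print(cyclic_symbol(6, horizontal_mirror=True))  # C6h
```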
https://en.wikipedia.org/wiki/TransUnion | TransUnion is an American consumer credit reporting agency. TransUnion collects and aggregates information on over one billion individual consumers in over thirty countries including "200 million files profiling nearly every credit-active consumer in the United States". Its customers include over 65,000 businesses. Based in Chicago, Illinois, TransUnion's 2014 revenue was US$1.3 billion. It is the smallest of the three largest credit agencies, along with Experian and Equifax (known as the "Big Three").
TransUnion also markets credit reports and other credit and fraud-protection products directly to consumers. Like all credit reporting agencies, the company is required by U.S. law to provide consumers with one free credit report every year.
Additionally, a growing segment of TransUnion's business is its offerings that use advanced big data, particularly its TLOxp product.
History
TransUnion was originally formed in 1968 as a holding company for Union Tank Car Company, making TransUnion a descendant of Standard Oil through Union Tank Car Company. The following year, it acquired the Credit Bureau of Cook County, which possessed and maintained 3.6 million credit accounts. In 1981, a Chicago-based holding company, The Marmon Group, acquired TransUnion for approximately $688 million.
In 2010, Goldman Sachs Capital Partners and Advent International acquired it from Madison Dearborn Partners. In 2014, TransUnion acquired Hank Asher's data company TLO. On June 25, 2015, TransUnion became a publicly traded company for the first time, trading under the symbol TRU.
TransUnion eventually began to offer products and services for both businesses and consumers. For businesses, TransUnion updated its traditional credit score offering to include trended data that helps predict consumer repayment and debt behavior. This product, referred to as CreditVision, launched in October 2013.
Its SmartMove™ service facilitates credit and background checks for landlords. Th |
https://en.wikipedia.org/wiki/Equifax | Equifax Inc. is an American multinational consumer credit reporting agency headquartered in Atlanta, Georgia and is one of the three largest consumer credit reporting agencies, along with Experian and TransUnion (together known as the "Big Three"). Equifax collects and aggregates information on over 800 million individual consumers and more than 88 million businesses worldwide. In addition to credit and demographic data and services to business, Equifax sells credit monitoring and fraud prevention services directly to consumers.
Equifax operates or has investments in 24 countries in the Americas, Europe, and Asia Pacific. With over 14,000 employees worldwide, Equifax has nearly US$5 billion in annual revenue and is traded on the New York Stock Exchange (NYSE) under the symbol EFX.
History
Equifax was founded by Cator and Guy Woolford in Atlanta, Georgia, as Retail Credit Company in 1899. By 1920, the company had offices throughout the United States and Canada. By the 1960s, Retail Credit Company was one of the nation's largest credit bureaus, holding files on millions of American and Canadian citizens. Even though the company continued to do credit reporting, the majority of its business was making reports to insurance companies when people applied for new insurance policies, such as life, auto, fire and medical insurance. RCC also investigated insurance claims and made employment reports when people were seeking new jobs. Most of the credit work was then being done by a subsidiary, Retailers Commercial Agency.
Retail Credit Company's information holdings and willingness to sell its information attracted criticism in the 1960s and 1970s. These included that it collected "...facts, statistics, inaccuracies and rumors ... about virtually every phase of a person's life; his marital troubles, jobs, school history, childhood, sex life, and political activities." The company was also alleged to reward its employees for collecting derogatory information on consumers. |
https://en.wikipedia.org/wiki/Glycocalyx | The glycocalyx (plural: glycocalyces or glycocalyxes), also known as the pericellular matrix and sometimes the cell coat, is a glycoprotein and glycolipid covering that surrounds the cell membranes of bacteria, epithelial cells, and other cells. It was described in a review article in 1970.
Animal epithelial cells have a fuzz-like coating on the external surface of their plasma membranes. This viscous coating is the glycocalyx that consists of several carbohydrate moieties of membrane glycolipids and glycoproteins, which serve as backbone molecules for support. Generally, the carbohydrate portion of the glycolipids found on the surface of plasma membranes helps these molecules contribute to cell–cell recognition, communication, and intercellular adhesion.
The glycocalyx is a type of identifier that the body uses to distinguish between its own healthy cells and transplanted tissues, diseased cells, or invading organisms. Included in the glycocalyx are cell-adhesion molecules that enable cells to adhere to each other and guide the movement of cells during embryonic development. The glycocalyx plays a major role in regulation of endothelial vascular tissue, including the modulation of red blood cell volume in capillaries.
The term was initially applied to the polysaccharide matrix coating epithelial cells, but its functions have been discovered to go well beyond that.
In vascular endothelial tissue
The glycocalyx is located on the apical surface of vascular endothelial cells which line the lumen. When vessels are stained with cationic dyes such as Alcian blue stain, transmission electron microscopy shows a small, irregularly shaped layer extending approximately 50–100 nm into the lumen of a blood vessel. Another study used osmium tetroxide staining during freeze substitution, and showed that the endothelial glycocalyx could be up to 11 μm thick. It is present throughout a diverse range of microvascular beds (capillaries) and macrovessels (arteries and veins). The glycocaly |
https://en.wikipedia.org/wiki/196%20%28number%29 | 196 (one hundred [and] ninety-six) is the natural number following 195 and preceding 197.
In mathematics
196 is a square number, the square of 14. As the square of a Catalan number, it counts the number of walks of length 8 in the positive quadrant of the integer grid that start and end at the origin, moving diagonally at each step. It is part of a sequence of square numbers beginning 0, 1, 4, 25, 196, ... in which each number is the smallest square that differs from the previous number by a triangular number.
There are 196 one-sided heptominoes, the polyominoes made from 7 squares. Here, one-sided means that asymmetric polyominoes are considered to be distinct from their mirror images.
A Lychrel number is a natural number which cannot form a palindromic number through the iterative process of repeatedly reversing its digits and adding the resulting numbers. 196 is the smallest number conjectured to be a Lychrel number in base 10; the process has been carried out for over a billion iterations without finding a palindrome, but no one has ever proven that it will never produce one.
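The reverse-and-add process is easy to state in code; a short sketch (capped at a fixed number of iterations, since for 196 the conjecture is that it never halts):

```python
# Reverse-and-add iteration from the Lychrel description above: 196 is the
# smallest base-10 candidate, and empirically no palindrome ever appears.
def reverse_and_add_steps(n: int, max_iters: int = 1000):
    for step in range(1, max_iters + 1):
        n += int(str(n)[::-1])
        if str(n) == str(n)[::-1]:
            return step            # palindrome reached after `step` additions
    return None                    # no palindrome within the cap

print(reverse_and_add_steps(59))   # 3: 59 -> 154 -> 605 -> 1111
print(reverse_and_add_steps(196))  # None: still no palindrome after 1000 steps
```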
See also
196 (disambiguation)
References
Arithmetic dynamics
Integers |
https://en.wikipedia.org/wiki/151%20%28number%29 | 151 (one hundred [and] fifty-one) is a natural number. It follows 150 and precedes 152.
In mathematics
151 is the 36th prime number; the previous prime is 149, with which it forms a twin prime pair. 151 is also a palindromic prime, a centered decagonal number, and a lucky number.
151 appears in the Padovan sequence, preceded by the terms 65, 86, 114; it is the sum of the first two of these.
151 is a unique prime in base 2, since it is the only prime with period 15 in base 2.
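The period-15 claim can be checked directly: the period of the binary expansion of $1/p$ equals the multiplicative order of 2 modulo $p$. A quick sketch:

```python
# The period of 1/p in base 2 equals the multiplicative order of 2 mod p;
# for p = 151 the loop below returns 15, matching the claim in the text.
def order_of_2_mod(p: int) -> int:
    k, x = 1, 2 % p
    while x != 1:
        x = (x * 2) % p
        k += 1
    return k

print(order_of_2_mod(151))  # 15
```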
There are 151 4-uniform tilings: tilings by regular polygons whose symmetry groups have four orbits of vertices.
151 is the number of uniform paracompact honeycombs with infinite facets and vertex figures in three dimensions, which stem from 23 different Coxeter groups. Split into two whole numbers, 151 is the sum of 75 and 76, both relevant numbers in Euclidean and hyperbolic 3-space:
75 is the total number of non-prismatic uniform polyhedra, which incorporate regular polyhedra, semiregular polyhedra, and star polyhedra,
75 uniform compound polyhedra, inclusive of seven types of families of prisms and antiprisms,
76 is the number of unique uniform compact hyperbolic honeycombs that are solely generated from Wythoff constructions.
While 151 is the 36th indexed prime, its twin prime 149 has a reciprocal whose repeating decimal expansion has a digit sum of 666, which is the magic constant in a prime reciprocal magic square equal to the sum of the first 36 non-zero integers, or equivalently the 36th triangular number.
Furthermore, the sum between twin primes (149, 151) is 300, which in turn is the 24th triangular number.
In music
The song "151" by Slick Shoes
The song "Cell 151" from Steve Hackett's Highly Strung album
The song "Psalm 151" by the Polish band Kult
The song "Oddfellows Local 151" by R.E.M. from the album Document
The song "151" by Armando from A Bugged Out Mix
The song "151" by Jedi Mind Tricks
The song "151 Rum (J.I.D song)" by J.I.D
The song "Hyakugojyuuichi" (151 in J |
https://en.wikipedia.org/wiki/Television%20channel%20frequencies | The following tables show the frequencies assigned to broadcast television channels in various regions of the world, along with the ITU letter designator for the system used. The frequencies shown are for the analogue video and audio carriers. The channel itself occupies several megahertz of bandwidth. For example, North American channel 1 occupies the spectrum from 44 to 50 MHz. See Broadcast television systems for a table of signal characteristics, including bandwidth, by ITU letter designator.
International normalization for analog TV systems
International broadcast television frequencies are divided into two parts of the spectrum: the very high frequency ("VHF") band and the ultra high frequency ("UHF") band.
VHF
Americas (most countries), South Korea, Taiwan, Myanmar, and the Philippines
During World War II, the frequencies originally assigned as channels 13 to 18 were appropriated by the U.S. military, which still uses them to this day. It was also decided to move the allocation for FM radio from the 42-50 MHz band to a larger 88-106 MHz band (later extended to the current 88-108 MHz FM band). This required a reassignment of the VHF channels to the plan currently in use.
Assignments since February 25, 1946
System M 525 lines (most countries in the Americas and Caribbean, South Korea, Taiwan and the Philippines)
System N 625 lines (used in Argentina, Paraguay and Uruguay)
FM channel 200, 87.9 MHz, overlaps TV 6. This is used only by K200AA. Channel 6A is only used in South Korea and the Philippines.
TV 6 analog audio can be heard on FM 87.75 on most broadcast radio receivers as well as on a European TV tuned to channel 4A or channel C, but at lower volume than wideband FM broadcast stations, because of the lower deviation.
Channel 1 audio is the same as European Channel 2 audio and the video is the same as European Channel 2A. Channel 2 video is the same as European Channel 3 video.
Japan
The frequency spacing for each channel is 6 MHz as th |
https://en.wikipedia.org/wiki/Goal%20difference | Goal difference, goal differential or points difference is a form of tiebreaker used to rank sport teams which finish on equal points in a league competition. Either "goal difference" or "points difference" is used, depending on whether matches are scored by goals (as in ice hockey and association football) or by points (as in rugby union and basketball).
Goal difference is calculated as the number of goals scored in all league matches minus the number of goals conceded, and is sometimes known simply as plus–minus. Goal difference was first introduced as a tiebreaker in association football, at the 1970 FIFA World Cup, and was adopted by the Football League in England five years later. It has since spread to many other competitions, where it is typically used as either the first or, after tying teams' head-to-head records, second tiebreaker. Goal difference is zero sum, in that a gain for one team (+1) is exactly balanced by the loss for their opponent (–1). Therefore, the sum of the goal differences in a league table is always zero (provided the teams have only played each other).
Goal difference has often replaced the older goal average, or goal ratio. Goal average is the number of goals scored divided by the number of goals conceded, and is therefore a dimensionless quantity. It was replaced by goal difference, which was thought to encourage more attacking play, rewarding teams for scoring more goals (or points) rather than for defending against conceding. However, goal average is still used as a tiebreaker in Australia, where it is referred to as "percentage". This is calculated as points scored divided by points conceded, and then multiplied by 100.
If two or more teams' total points scored and goal differences are both equal, then often goals scored is used as a further tiebreaker, with the team scoring the most goals winning. After this a variety of other tiebreakers may be used.
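A small numeric illustration (the team records are invented): two sides with identical goal difference can still be separated by goal average or the Australian percentage.

```python
# Tiebreaker arithmetic from the text: goal difference = GF - GA,
# goal average = GF / GA, Australian "percentage" = 100 * GF / GA.
# Records chosen so goal difference ties but goal average does not.
teams = {"Reds": (58, 30), "Blues": (44, 16)}

for name, (gf, ga) in teams.items():
    print(f"{name}: GD={gf - ga:+d}, "
          f"goal average={gf / ga:.2f}, percentage={100 * gf / ga:.1f}")
```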
Goal difference v. goal average
The different schemes can lead to strikingly di |
https://en.wikipedia.org/wiki/SimLife | SimLife: The Genetic Playground is a video game produced by Maxis in 1992. The concept of the game is to simulate an ecosystem; players may modify the genetics of the plants and animals that inhabit the virtual world. The point of this game is to experiment and create a self-sustaining ecosystem. SimLife was re-released in 1993 as part of the SimClassics Volume 1 compilation, alongside SimCity Classic and SimAnt for PC, Mac and Amiga.
Development
The producers of SimLife refer to it as "The Genetic Playground". The game allows users to explore the interaction of life-forms and environments. Users can manipulate the genetics of both plants and animals to determine whether these new species could survive in the Earth's various environments. Players can also create new worlds with distinctive environments to see how certain species (earth's species or their own) fare within them.
SimLife gives players the power to:
Create and modify worlds.
Create and modify plants and animals at the genetic level. Exclusive animals appearing in this game are the Killer Penguin, the Monkeyphant, and the Orgot.
Design environments and ecosystems.
Study genetics in action.
Simulate and control evolution.
Change the physics of the universe in your computer.
Reception
Computer Gaming World in 1993 praised SimLife, stating that "By neatly bridging the gap between entertainment and education, SL brings the engrossing science of genetics within reach of any interested person".
Games Finder gave SimLife a score of 7 out of 10.
In 1993, SimLife received a Codie award from the Software Publishers Association for Best Simulation.
See also
Spore
Impossible Creatures
L.O.L.: Lack of Love
Evolution: The Game of Intelligent Life
E.V.O.: Search for Eden
Creatures
SimEarth
Unnatural Selection (video game)
References
External links
SimLife : Maxis at the Internet Archive
Manual
1992 video games
Amiga games
Amiga 1200 games
Classic Mac OS games
DOS games
God games
Biologica |
https://en.wikipedia.org/wiki/Algebra%20of%20sets | In mathematics, the algebra of sets, not to be confused with the mathematical structure of an algebra of sets, defines the properties and laws of sets, the set-theoretic operations of union, intersection, and complementation and the relations of set equality and set inclusion. It also provides systematic procedures for evaluating expressions, and performing calculations, involving these operations and relations.
Any set of sets closed under the set-theoretic operations forms a Boolean algebra with the join operator being union, the meet operator being intersection, the complement operator being set complement, the bottom being the empty set $\varnothing$, and the top being the universe set under consideration.
Fundamentals
The algebra of sets is the set-theoretic analogue of the algebra of numbers. Just as arithmetic addition and multiplication are associative and commutative, so are set union and intersection; just as the arithmetic relation "less than or equal" is reflexive, antisymmetric and transitive, so is the set relation of "subset".
It is the algebra of the set-theoretic operations of union, intersection and complementation, and the relations of equality and inclusion. For a basic introduction to sets see the article on sets, for a fuller account see naive set theory, and for a full rigorous axiomatic treatment see axiomatic set theory.
The fundamental properties of set algebra
The binary operations of set union ($\cup$) and intersection ($\cap$) satisfy many identities. Several of these identities or "laws" have well established names.
Commutative property:
$A \cup B = B \cup A$
$A \cap B = B \cap A$
Associative property:
$(A \cup B) \cup C = A \cup (B \cup C)$
$(A \cap B) \cap C = A \cap (B \cap C)$
Distributive property:
$A \cup (B \cap C) = (A \cup B) \cap (A \cup C)$
$A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$
The union and intersection of sets may be seen as analogous to the addition and multiplication of numbers. Like addition and multiplication, the operations of union and intersection are commutative and associative, and intersection distributes over union. However, unlike addition and multiplication, union also distributes over intersection.
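The distributive pair is easy to spot-check on finite sets:

```python
# Spot-check of the distributive laws on concrete sets (they hold for all
# sets; this only illustrates the identities listed above).
A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}
assert A | (B & C) == (A | B) & (A | C)   # union distributes over intersection
assert A & (B | C) == (A & B) | (A & C)   # intersection distributes over union
print("distributive laws hold on the example sets")
```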
Two additional pairs of properties involve th |
https://en.wikipedia.org/wiki/Gene%20gun | In genetic engineering, a gene gun or biolistic particle delivery system is a device used to deliver exogenous DNA (transgenes), RNA, or protein to cells. By coating particles of a heavy metal with a gene of interest and firing these micro-projectiles into cells using mechanical force, an integration of desired genetic information can be introduced into desired cells. The technique involved with such micro-projectile delivery of DNA is often referred to as biolistics, short for "biological ballistics".
This device is able to transform almost any type of cell and is not limited to the transformation of the nucleus; it can also transform organelles, including plastids and mitochondria.
Gene gun design
The gene gun was originally a Crosman air pistol modified to fire dense tungsten particles. It was invented by John C Sanford, Ed Wolf, and Nelson Allen at Cornell University along with Ted Klein of DuPont between 1983 and 1986. The original target was onions (chosen for their large cell size), and the device was used to deliver particles coated with a marker gene which would relay a signal if proper insertion of the DNA transcript occurred. Genetic transformation was demonstrated upon observed expression of the marker gene within onion cells.
The earliest custom manufactured gene guns (fabricated by Nelson Allen) used a 22 caliber nail gun cartridge to propel a polyethylene cylinder (bullet) down a 22 caliber Douglas barrel. A droplet of the tungsten powder coated with genetic material was placed onto the bullet and shot down into a Petri dish below. The bullet welded to the disk below the Petri plate, and the genetic material blasted into the sample with a doughnut effect involving devastation in the middle of the sample with a ring of good transformation around the periphery. The gun was connected to a vacuum pump and was placed under a vacuum while firing. The early design was put into limited production by a Rumsey-Loomis (a local machine shop then at Mecklenb |
https://en.wikipedia.org/wiki/Superframe | In telecommunications, superframe (SF) is a T1 framing standard. In the 1970s it replaced the original T1/D1 framing scheme of the 1960s in which the framing bit simply alternated between 0 and 1.
Superframe is sometimes called D4 Framing to avoid confusion with single-frequency signaling. It was first supported by the D2 channel bank, but it was first widely deployed with the D4 channel bank.
In order to determine where each channel is located in the stream of data being received, each set of 24 channels is aligned in a frame. The frame is 192 bits long (8 * 24), and is terminated with a 193rd bit, the framing bit, which is used to find the end of the frame.
In order for the framing bit to be located by receiving equipment, a predictable pattern is sent on this bit. Equipment will search for a bit which has the correct pattern, and will align its framing based on that bit. The pattern sent is 12 bits long, so every group of 12 frames is called a superframe. The pattern used in the 193rd bit is 100011 011100.
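A simplified sketch of the alignment search just described (my own illustration: it assumes an error-free stream containing at least one full superframe whose pattern starts at the first checked F-bit; real equipment must also tolerate bit errors and pattern rotation).

```python
# Find the superframe alignment in a T1 bitstream: every 193rd bit, taken
# across 12 consecutive frames, must spell the pattern 100011011100.
import random

FRAME = 193                                   # 24 channels * 8 bits + 1 F-bit
PATTERN = [1,0,0,0,1,1,0,1,1,1,0,0]           # the 12-bit superframe pattern

def find_alignment(bits):
    for offset in range(FRAME):               # candidate F-bit positions
        if bits[offset::FRAME][:len(PATTERN)] == PATTERN:
            return offset                     # framing bit found at this offset
    return None

# Synthetic stream: 40 junk bits, then 12 frames carrying the F-bit pattern.
frames = [[PATTERN[i]] + [random.randint(0, 1) for _ in range(192)]
          for i in range(12)]
stream = [0] * 40 + [b for f in frames for b in f]
print(find_alignment(stream))                 # typically prints 40
```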
Each channel sends two bits of call supervision data during each superframe using robbed-bit signaling during frames 6 and 12 of the superframe.
More specifically, after the 6th and 12th bit in the superframe pattern, the least significant data bit of each channel (bit 8; T1 data is sent big-endian and uses 1-origin numbering) is replaced by a "channel-associated signalling" bit (bits A and B, respectively).
Superframe remained in service in many places through the turn of the century, replaced by the improved extended superframe (ESF) of the 1980s in applications where its additional features were desired.
Extended superframe
In telecommunications, extended superframe (ESF) is a T1 framing standard. ESF is sometimes called D5 Framing because it was first used in the D5 channel bank, invented in the 1980s.
It is preferred to its predecessor, superframe, because it includes a cyclic redundancy check (CRC) and 4000 bit/s channel capacity for a data |
https://en.wikipedia.org/wiki/Solitude | Solitude, also known as social withdrawal, is a state of seclusion or isolation, meaning lack of socialisation. Effects can be either positive or negative, depending on the situation. Short-term solitude is often valued as a time when one may work, think, or rest without disturbance. It may be desired for the sake of privacy. Undesirable long-term solitude may stem from soured relationships, loss of loved ones, deliberate choice, infectious disease, mental disorders, neurological disorders such as circadian rhythm sleep disorder, or circumstances of employment or situation.
A distinction has been made between solitude and loneliness. In this sense, these two words refer, respectively, to the joy and the pain of being alone.
Health effects
Symptoms from complete isolation, called sensory deprivation, may include anxiety, sensory illusions, or distortions of time and perception. However, this is the case when there is no stimulation of the sensory systems at all and not just lack of contact with people. Thus, this can be avoided by having other things to keep one's mind busy.
Long-term solitude is often seen as undesirable, causing loneliness or reclusion resulting from inability to establish relationships. Furthermore, it might lead to clinical depression, although some people do not react to it negatively. Monks regard long-term solitude as a means of spiritual enlightenment. Marooned people have been left in solitude for years without any report of psychological symptoms afterwards. Some psychological conditions (such as schizophrenia and schizoid personality disorder) are strongly linked to a tendency to seek solitude.
Enforced loneliness (solitary confinement) has been a punishment method throughout history. It is often considered a form of torture.
Emotional isolation is a state of isolation where one feels emotionally separated from others despite having a well-functioning social network.
Researchers, including Robert J. Coplan and Julie C. Bowker, have r |
https://en.wikipedia.org/wiki/Netcat | netcat (often abbreviated to nc) is a computer networking utility for reading from and writing to network connections using TCP or UDP. The command is designed to be a dependable back-end that can be used directly or easily driven by other programs and scripts. At the same time, it is a feature-rich network debugging and investigation tool, since it can produce almost any kind of connection its user could need and has a number of built-in capabilities.
It is able to perform port scanning, file transferring and port listening.
Features
The original netcat's features include:
Outbound or inbound connections, TCP or UDP, to or from any ports
Full DNS forward/reverse checking, with appropriate warnings
Ability to use any local source port
Ability to use any locally configured network source address
Built-in port-scanning capabilities, with randomization
Built-in loose source-routing capability
Can read command line arguments from standard input
Slow-send mode, one line every N seconds
Hex dump of transmitted and received data
Optional ability to let another program service establish connections
Optional telnet-options responder
Rewrites like GNU's and OpenBSD's support additional features. For example, OpenBSD's nc supports TLS, and GNU netcat natively supports a tunneling mode supporting UDP and TCP (optionally allowing one to be tunneled over the other) in a single command, where other versions may require piping data from one netcat instance to another.
Ports and reimplementations
The original version of netcat was a Unix program. The last version (1.10) was released in March 1996.
There are several implementations on POSIX systems, including rewrites from scratch like GNU netcat or OpenBSD netcat, the latter of which supports IPv6 and TLS. The OpenBSD version has been ported to the FreeBSD base, Windows/Cygwin, and Linux.
Mac OS X comes with netcat installed as of OS X 10.13, or users can use MacPorts to install a variant.
A DOS version of netcat cal |
https://en.wikipedia.org/wiki/Line%2010%20%28Beijing%20Subway%29 | Line 10 of the Beijing Subway is the second loop line in Beijing's rapid transit network, as well as its second-longest and most heavily used line. The line is 57.1 km long and runs entirely underground through Haidian, Chaoyang and Fengtai Districts, either directly underneath or just beyond the 3rd Ring Road. The Line 10 loop is situated outside the Line 2 loop, which circumnavigates Beijing's old Inner City. Every subway line through the city centre intersects with Line 10, which has 24 transfer stations along its route and 45 stations in all. Line 10's color is capri.
Line 10 was the world's longest rapid transit loop line from its completion in May 2013 until March 2023, and it is one of the longest entirely underground subway lines, requiring 104 minutes to complete one full loop in either direction.
History
Planning
The Beijing Subway network was originally conceived to have only one loop line. The booming economy and explosive population growth of Beijing put huge demand on Line 2, surpassing its designed capacity. In 2001 and 2002, the China Academy of Urban Planning and Design proposed two "L-shaped" lines, named Lines 10 and 11. Together they would form a second loop around Beijing and relieve pressure on Line 2.
Phase I
On December 27, 2003, in preparation for the 2008 Summer Olympics in Beijing, Phase I of Line 10 started construction. On July 19, 2008, Phase I of Line 10 entered operation ahead of the opening of the Olympic Games. It had 22 stations and consisted of the northern and eastern sides of Line 10's rectangular loop, forming an inverted "L"-shaped line.
Phase II
Construction on Phase II began on December 28, 2007. The original plan for Line 11 was not incorporated into the final network design; it was instead absorbed into Line 10, which thus formed the second full loop around Beijing. In 2010, the Ministry of Railways proposed that Fengtai Railway Station be renovated and expanded to |
https://en.wikipedia.org/wiki/ESET%20NOD32 | ESET NOD32 Antivirus, commonly known as NOD32, is an antivirus software package made by the Slovak company ESET. ESET NOD32 Antivirus is sold in two editions, Home Edition and Business Edition. The Business Edition packages add ESET Remote Administrator allowing for server deployment and management, mirroring of threat signature database updates and the ability to install on Microsoft Windows Server operating systems.
History
NOD32
The acronym NOD stands for Nemocnica na Okraji Disku ("Hospital at the End of the Disk"), a pun related to the Czechoslovak medical drama series Nemocnice na kraji města (Hospital at the End of the City). The first version of NOD32, called NOD-ICE, was a DOS-based program. It was created in 1987 by Miroslav Trnka and Peter Paško, at a time when computer viruses were becoming increasingly prevalent on PCs running DOS. Due to the limitations of the OS (lack of multitasking, among others), it did not feature any on-demand/on-access protection nor most of the other features of current versions. Besides virus scanning and cleaning, it only featured heuristic analysis. With the increasing popularity of the Windows environment, the advent of 32-bit CPUs, a shift in the PC market and the increasing popularity of the Internet came the need for a completely different antivirus approach as well. Thus the original program was re-written and christened "NOD32" to emphasize both the radical shift from the previous version and its Win32 system compatibility.
Initially the program gained popularity with IT workers in Eastern European countries, as ESET was based in Slovakia. Though the program's abbreviation was originally pronounced as individual letters, the worldwide use of the program led to the more common single-word pronunciation, sounding like the English word nod. Additionally, the "32" portion of the name was added with the release of a 32-bit version in the Windows 9x era. The company reached its 10000th update to virus defi |
https://en.wikipedia.org/wiki/Finitely%20generated%20group | In algebra, a finitely generated group is a group G that has some finite generating set S so that every element of G can be written as the combination (under the group operation) of finitely many elements of S and of inverses of such elements.
By definition, every finite group is finitely generated, since S can be taken to be G itself. Every infinite finitely generated group must be countable but countable groups need not be finitely generated. The additive group of rational numbers Q is an example of a countable group that is not finitely generated.
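A standard one-line argument (sketched here; not spelled out in the text above) shows why Q fails to be finitely generated:

```latex
% Any finite set S = { a_1/b_1, ..., a_n/b_n } of rationals generates,
% additively, only fractions whose denominators divide N = b_1 b_2 \cdots b_n:
\langle S \rangle \subseteq \tfrac{1}{N}\,\mathbb{Z},
\qquad\text{so}\qquad
\tfrac{1}{p} \notin \langle S \rangle \ \text{ for any prime } p \nmid N.
```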
Examples
Every quotient of a finitely generated group G is finitely generated; the quotient group is generated by the images of the generators of G under the canonical projection.
A group that is generated by a single element is called cyclic. Every infinite cyclic group is isomorphic to the additive group of the integers Z.
A locally cyclic group is a group in which every finitely generated subgroup is cyclic.
The free group on a finite set is finitely generated by the elements of that set.
A fortiori, every finitely presented group is finitely generated.
Finitely generated abelian groups
Every abelian group can be seen as a module over the ring of integers Z, and in a finitely generated abelian group with generators x1, ..., xn, every group element x can be written as a linear combination of these generators,
x = α1⋅x1 + α2⋅x2 + ... + αn⋅xn
with integers α1, ..., αn.
Subgroups of a finitely generated abelian group are themselves finitely generated.
The fundamental theorem of finitely generated abelian groups states that a finitely generated abelian group is the direct sum of a free abelian group of finite rank and a finite abelian group, each of which are unique up to isomorphism.
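In symbols, the theorem says (standard statement, with r the rank and each q_i a prime power):

```latex
G \;\cong\; \mathbb{Z}^{r} \,\oplus\, \mathbb{Z}/q_{1}\mathbb{Z} \,\oplus\, \cdots \,\oplus\, \mathbb{Z}/q_{t}\mathbb{Z},
% where r >= 0 and the prime powers q_1, ..., q_t are unique up to reordering.
```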
Subgroups
A subgroup of a finitely generated group need not be finitely generated. The commutator subgroup of the free group on two generators is an example of a subgroup of a finitely generated group that is not itself finitely generated. |
https://en.wikipedia.org/wiki/Register-transfer%20level | In digital circuit design, register-transfer level (RTL) is a design abstraction which models a synchronous digital circuit in terms of the flow of digital signals (data) between hardware registers, and the logical operations performed on those signals.
Register-transfer-level abstraction is used in hardware description languages (HDLs) like Verilog and VHDL to create high-level representations of a circuit, from which lower-level representations and ultimately actual wiring can be derived. Design at the RTL level is typical practice in modern digital design.
Unlike in software compiler design, where the register-transfer level is an intermediate representation at the lowest level, in circuit design the RTL is the usual input that designers operate on. In fact, in circuit synthesis an intermediate language between the input register-transfer-level representation and the target netlist is sometimes used. Unlike a netlist, it provides constructs such as cells, functions, and multi-bit registers. Examples include FIRRTL and RTLIL.
Transaction-level modeling is a higher level of electronic system design.
RTL description
A synchronous circuit consists of two kinds of elements: registers (Sequential logic) and combinational logic. Registers (usually implemented as D flip-flops) synchronize the circuit's operation to the edges of the clock signal, and are the only elements in the circuit that have memory properties. Combinational logic performs all the logical functions in the circuit and it typically consists of logic gates.
For example, a very simple synchronous circuit is shown in the figure. The inverter is connected from the output, Q, of a register to the register's input, D, to create a circuit that changes its state on each rising edge of the clock, clk. In this circuit, the combinational logic consists of the inverter.
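A minimal Python sketch of that toggle circuit, modelling the register/combinational split described above (the function and variable names are ours, and the simulation is only a conceptual stand-in for an HDL description):

```python
# RTL-style model of the example circuit: one register (D flip-flop) whose
# next-state input is its inverted current output. Illustrative only.
def simulate_toggle(cycles: int, initial_q: int = 0) -> list[int]:
    q = initial_q                # register state: the only memory element
    trace = []
    for _ in range(cycles):      # each iteration models one rising clock edge
        d = 1 - q                # combinational logic: the inverter
        q = d                    # the register captures D on the clock edge
        trace.append(q)
    return trace

print(simulate_toggle(6))  # [1, 0, 1, 0, 1, 0] -- the state flips every cycle
```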
When designing digital integrated circuits with a hardware description language (HDL), the designs are usually engineered at a higher |
https://en.wikipedia.org/wiki/Hemp | Hemp, or industrial hemp, is a botanical class of Cannabis sativa cultivars grown specifically for industrial or medicinal use. The main difference between hemp and marijuana is their THC content. In legal terms, hemp or hemp products must have less than 0.2–0.3% THC, depending on the country's laws. It can be used to make a wide range of products. Along with bamboo, hemp is among the fastest-growing plants on Earth. It was also one of the first plants to be spun into usable fiber 50,000 years ago. It can be refined into a variety of commercial items, including paper, rope, textiles, clothing, biodegradable plastics, paint, insulation, biofuel, food, and animal feed.
Although chemotype I cannabis and hemp (types II, III, IV, V) are both Cannabis sativa and contain the psychoactive component tetrahydrocannabinol (THC), they represent distinct cultivar groups, typically with unique phytochemical compositions and uses. Hemp typically has lower concentrations of total THC and may have higher concentrations of cannabidiol (CBD), which potentially mitigates the psychoactive effects of THC. The legality of hemp varies widely among countries. Some governments regulate the concentration of THC and permit only hemp that is bred with an especially low THC content into commercial production.
Etymology
The etymology is uncertain, but there appears to be no common Proto-Indo-European source for the various forms of the word; the Greek term kánnabis is the oldest attested form, which may have been borrowed from an earlier Scythian or Thracian word. It then appears to have been borrowed into Latin, and separately into Slavic, and from there into Baltic, Finnish, and Germanic languages.
In the Germanic languages, following Grimm's law, the "k" would have changed to "h" with the first Germanic sound shift, giving Proto-Germanic *hanapiz, after which it may have been adapted into the Old English form hænep. Barber (1991), however, argued that the spread of the name "kannabis" was due to |
https://en.wikipedia.org/wiki/Flags%20of%20the%20World%20%28website%29 | Flags of the World (abbreviated FOTW or FotW) is an Internet-based vexillological association and resource. Its principal project is the Internet's largest website devoted to vexillology, containing comprehensive information about various flags, and an associated mailing list. The mailing list began as a discussion group in about September 1993, while the website was founded by Giuseppe Bottasini in late 1994, and Rob Raeside took over as director in 1998. Flags of the World became the 56th member of the FIAV in 2001.
Flags of the World describes itself as "...an Internet group, the sole purpose of which is the advancement of the pursuit of vexillology, that is the creation and development of a body of knowledge about flags and flag usage of all types".
Both the website and the mailing list operate in the English language, though there are members from around the world and as such information from many languages is translated and included. The mailing list is monitored by the FOTW Listmaster, while work on the website is coordinated by the FOTW Editorial Director.
Website
An editorial staff of 21 unpaid volunteers manages and edits the FOTW website, which (as of 2013) contains more than 81,000 pages about flags and more than 179,000 images of flags, and also includes an extensive online dictionary of vexillology.
The website is updated once a week with fresh material. Due to the large amount of material there is an editing backlog, causing some areas of FOTW to contain outdated information.
Disclaimer
By its own admission, the quality of the information published on the website varies greatly: it contains not only well-known and official flags, but also drawings and contributions based only on rumors, which are documented accordingly. The published contributions are explicitly declared as private opinions. FOTW declines any responsibility for the truthfulness and accuracy of the published information.
Mailing list
The source for material on the FOTW website |
https://en.wikipedia.org/wiki/Modulus%20of%20continuity | In mathematical analysis, a modulus of continuity is a function ω : [0, ∞] → [0, ∞] used to measure quantitatively the uniform continuity of functions. So, a function f : I → R admits ω as a modulus of continuity if and only if
|f(x) − f(y)| ≤ ω(|x − y|) for all x and y in the domain of f. Since moduli of continuity are required to be infinitesimal at 0, a function turns out to be uniformly continuous if and only if it admits a modulus of continuity. The notion's relevance is also shown by the fact that sets of functions sharing the same modulus of continuity are exactly the equicontinuous families. For instance, the modulus ω(t) := kt describes the k-Lipschitz functions, the moduli ω(t) := k t^α describe the Hölder continuity, the modulus ω(t) := kt(|log t|+1) describes the almost Lipschitz class, and so on. In general, the role of ω is to fix some explicit functional dependence of ε on δ in the (ε, δ) definition of uniform continuity. The same notions generalize naturally to functions between metric spaces. Moreover, a suitable local version of these notions allows one to describe quantitatively the continuity at a point in terms of moduli of continuity.
A special role is played by concave moduli of continuity, especially in connection with extension properties, and with approximation of uniformly continuous functions. For a function between metric spaces, it is equivalent to admit a modulus of continuity that is either concave, or subadditive, or uniformly continuous, or sublinear (in the sense of growth). Actually, the existence of such special moduli of continuity for a uniformly continuous function is always ensured whenever the domain is either a compact, or a convex subset of a normed space. However, a uniformly continuous function on a general metric space admits a concave modulus of continuity if and only if the ratios
d_Y(f(x), f(x′)) / d_X(x, x′) are uniformly bounded for all pairs (x, x′) bounded away from the diagonal of X × X. The functions with the latter property constitute a special subclass of the unifor |
https://en.wikipedia.org/wiki/Maurer%E2%80%93Cartan%20form | In mathematics, the Maurer–Cartan form for a Lie group G is a distinguished differential one-form on G that carries the basic infinitesimal information about the structure of G. It was much used by Élie Cartan as a basic ingredient of his method of moving frames, and bears his name together with that of Ludwig Maurer.
As a one-form, the Maurer–Cartan form is peculiar in that it takes its values in the Lie algebra g associated to the Lie group G. The Lie algebra is identified with the tangent space of G at the identity, denoted T_eG. The Maurer–Cartan form ω is thus a one-form defined globally on G which, at each g, is a linear mapping of the tangent space T_gG into g. It is given as the pushforward of a vector in T_gG along left-translation by g⁻¹ in the group: ω(v) = (L_{g⁻¹})_* v for v ∈ T_gG.
Motivation and interpretation
A Lie group G acts on itself by multiplication under the mapping G × G → G, (g, h) ↦ gh.
A question of importance to Cartan and his contemporaries was how to identify a principal homogeneous space of G. That is, a manifold identical to the group G, but without a fixed choice of unit element. This motivation came, in part, from Felix Klein's Erlangen programme, where one was interested in a notion of symmetry on a space, where the symmetries of the space were transformations forming a Lie group. The geometries of interest were homogeneous spaces G/H, but usually without a fixed choice of origin corresponding to the coset eH.
A principal homogeneous space of G is a manifold P abstractly characterized by having a free and transitive action of G on P. The Maurer–Cartan form gives an appropriate infinitesimal characterization of the principal homogeneous space. It is a one-form defined on P satisfying an integrability condition known as the Maurer–Cartan equation. Using this integrability condition, it is possible to define the exponential map of the Lie algebra and in this way obtain, locally, a group action on P.
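For reference, the Maurer–Cartan equation mentioned here is the standard structure equation:

```latex
% The integrability (structure) equation satisfied by the form \omega:
d\omega + \tfrac{1}{2}[\omega, \omega] = 0 .
```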
Construction
Intrinsic construction
Let g be the tangent space of a Lie group G at the identity (its Lie algebra). G acts on |
https://en.wikipedia.org/wiki/Protoplast | Protoplast is a biological term coined by Hanstein in 1880 to refer to the entire cell, excluding the cell wall. Protoplasts can be generated by stripping the cell wall from plant, bacterial, or fungal cells by mechanical, chemical or enzymatic means.
Protoplasts differ from spheroplasts in that their cell wall has been completely removed. Spheroplasts retain part of their cell wall. In the case of Gram-negative bacterial spheroplasts, for example, the peptidoglycan component of the cell wall has been removed but the outer membrane component has not.
Enzymes for the preparation of protoplasts
Cell walls are made of a variety of polysaccharides. Protoplasts can be made by degrading cell walls with a mixture of the appropriate polysaccharide-degrading enzymes; for example, cellulases and pectinases are used for plant cells, lysozyme for bacteria, and chitinases for fungal cells.
During and subsequent to digestion of the cell wall, the protoplast becomes very sensitive to osmotic stress. This means cell wall digestion and protoplast storage must be done in an isotonic solution to prevent rupture of the plasma membrane.
Uses for protoplasts
Protoplasts can be used to study membrane biology, including the uptake of macromolecules and viruses. They are also used in somaclonal variation.
Protoplasts are widely used for DNA transformation (for making genetically modified organisms), since the cell wall would otherwise block the passage of DNA into the cell. In the case of plant cells, protoplasts may be regenerated into whole plants first by growing into a group of plant cells that develops into a callus and then by regeneration of shoots (caulogenesis) from the callus using plant tissue culture methods. Growth of protoplasts into callus and regeneration of shoots requires the proper balance of plant growth regulators in the tissue culture medium that must be customized for each species of plant. Unlike protoplasts from vascular plants, protoplasts from mosses, such as Physcomitrella patens, do not need phytohormones for regeneration, nor do they form a callus during regenera |
https://en.wikipedia.org/wiki/Enterprise%20value | Enterprise value (EV), total enterprise value (TEV), or firm value (FV) is an economic measure reflecting the market value of a business (i.e. as distinct from market price). It is a sum of claims by all claimants: creditors (secured and unsecured) and shareholders (preferred and common). Enterprise value is one of the fundamental metrics used in business valuation, financial analysis, accounting, portfolio analysis, and risk analysis.
Enterprise value is more comprehensive than market capitalization, which only reflects common equity. Importantly, EV reflects the opportunistic nature of business and may change substantially over time because of both external and internal conditions. Therefore, financial analysts often use a range of EV values in their calculations.
EV equation
For detailed information on the valuation process see Valuation (finance).
Enterprise value =
common equity at market value (this line item is also known as "market cap")
+ debt at market value (here debt refers to interest-bearing liabilities, both long-term and short-term)
+ preferred equity at market value
+ unfunded pension liabilities and other debt-deemed provisions
– value of associate companies
– cash and cash equivalents.
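A minimal Python sketch of the formula above, with purely hypothetical figures (all amounts in millions; every input value here is made up):

```python
# Computes EV from the components listed above. Illustrative only.
def enterprise_value(market_cap, debt, preferred_equity,
                     pension_liabilities, associates, cash):
    return (market_cap + debt + preferred_equity
            + pension_liabilities - associates - cash)

ev = enterprise_value(market_cap=500.0, debt=200.0, preferred_equity=50.0,
                      pension_liabilities=30.0, associates=20.0, cash=60.0)
print(ev)  # 700.0
```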
Understanding
A simplified way to understand the EV concept is to envision purchasing an entire business. If you settle with all the security holders, you pay EV. Counterintuitively, increases or decreases in enterprise value do not necessarily correspond to "value creation" or "value destruction". Any acquisition of assets (whether paid for in cash or through share issues) will increase EV, whether or not those assets are productive. Similarly, reductions in capital intensity (for example by reducing working capital) will reduce EV.
EV can be negative if the company, for example, holds abnormally high amounts of cash that are not reflected in the market value of the stock and total capitalization.
All the components are relevant in liquidation a |
https://en.wikipedia.org/wiki/Ramachandran%20plot | In biochemistry, a Ramachandran plot (also known as a Rama plot, a Ramachandran diagram or a [φ,ψ] plot), originally developed in 1963 by G. N. Ramachandran, C. Ramakrishnan, and V. Sasisekharan, is a way to visualize energetically allowed regions for backbone dihedral angles ψ against φ of amino acid residues in protein structure. The figure on the left illustrates the definition of the φ and ψ backbone dihedral angles (called φ and φ' by Ramachandran). The ω angle at the peptide bond is normally 180°, since the partial-double-bond character keeps the peptide bond planar. The figure in the top right shows the allowed φ,ψ backbone conformational regions from the Ramachandran et al. 1963 and 1968 hard-sphere calculations: full radius in solid outline, reduced radius in dashed, and relaxed tau (N-Cα-C) angle in dotted lines. Because dihedral angle values are circular and 0° is the same as 360°, the edges of the Ramachandran plot "wrap" right-to-left and bottom-to-top. For instance, the small strip of allowed values along the lower-left edge of the plot are a continuation of the large, extended-chain region at upper left.
Uses
A Ramachandran plot can be used in two somewhat different ways. One is to show in theory which values, or conformations, of the ψ and φ angles are possible for an amino-acid residue in a protein (as at top right). A second is to show the empirical distribution of datapoints observed in a single structure (as at right, here) in usage for structure validation, or else in a database of many structures (as in the lower 3 plots at left). Either case is usually shown against outlines for the theoretically favored regions.
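A minimal Python sketch of the second usage, plotting observed (φ, ψ) pairs with matplotlib; the angle list below is a made-up stand-in for values measured from a real protein model:

```python
# Scatter a set of backbone dihedral angles in the conventional
# Ramachandran layout (phi on x, psi on y, both -180..180 degrees).
import matplotlib.pyplot as plt

phi_psi = [(-60, -45), (-65, -40), (-120, 130), (-140, 150)]  # hypothetical
phi, psi = zip(*phi_psi)

plt.scatter(phi, psi, s=10)
plt.xlim(-180, 180)
plt.ylim(-180, 180)
plt.xlabel("phi (degrees)")
plt.ylabel("psi (degrees)")
plt.title("Ramachandran plot (sketch)")
plt.show()
```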
Amino-acid preferences
One might expect that larger side chains would result in more restrictions and consequently a smaller allowable region in the Ramachandran plot, but the effect of side chains is small. In practice, the major effect seen is that of the presence or absence of the methylene group at Cβ. Glycine has only a hydroge |
https://en.wikipedia.org/wiki/CATH%20database | The CATH Protein Structure Classification database is a free, publicly available online resource that provides information on the evolutionary relationships of protein domains. It was created in the mid-1990s by Professor Christine Orengo and colleagues including Janet Thornton and David Jones, and continues to be developed by the Orengo group at University College London. CATH shares many broad features with the SCOP resource, however there are also many areas in which the detailed classification differs greatly.
Hierarchical organization
Experimentally determined protein three-dimensional structures are obtained from the Protein Data Bank and split into their consecutive polypeptide chains, where applicable. Protein domains are identified within these chains using a mixture of automatic methods and manual curation.
The domains are then classified within the CATH structural hierarchy: at the Class (C) level, domains are assigned according to their secondary structure content, i.e. all alpha, all beta, a mixture of alpha and beta, or little secondary structure; at the Architecture (A) level, information on the secondary structure arrangement in three-dimensional space is used for assignment; at the Topology/fold (T) level, information on how the secondary structure elements are connected and arranged is used; assignments are made to the Homologous superfamily (H) level if there is good evidence that the domains are related by evolution i.e. they are homologous.
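CATH labels each homologous superfamily with a four-part numeric code, one number per level of the hierarchy just described; a trivial parse in Python (the example identifier 3.40.50.300 is illustrative):

```python
# Split a CATH superfamily identifier into its C/A/T/H levels.
def parse_cath_id(cath_id: str) -> dict:
    c, a, t, h = (int(part) for part in cath_id.split("."))
    return {"Class": c, "Architecture": a, "Topology": t,
            "Homologous superfamily": h}

print(parse_cath_id("3.40.50.300"))
```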
Additional sequence data for domains with no experimentally determined structures are provided by CATH's sister resource, Gene3D, which are used to populate the homologous superfamilies. Protein sequences from UniProtKB and Ensembl are scanned against CATH HMMs to predict domain sequence boundaries and make homologous superfamily assignments.
Releases
The CATH team aim to provide official releases of the CATH classification every 12 months. This release process is important because it allows for the provi |
https://en.wikipedia.org/wiki/Automatic%20double%20tracking | Automatic double-tracking or artificial double-tracking (ADT) is an analogue recording technique designed to enhance the sound of voices or instruments during the mixing process. It uses tape delay to create a delayed copy of an audio signal which is then played back at slightly varying speed controlled by an oscillator and combined with the original. The effect is intended to simulate the sound of the natural doubling of voices or instruments achieved by double tracking. The technique was developed in 1966 by engineers at Abbey Road Studios in London at the request of the Beatles.
Background
As early as the 1950s, it was discovered that double-tracking the lead vocal in a song gave it a richer, more appealing sound, especially for singers with weak or light voices. Use of this technique became possible with the advent of magnetic tape for use in sound recording. Originally, a pair of single-track tape recorders were used to produce the effect; later, multitrack tape machines were used. Early pioneers of this technique were Les Paul and Buddy Holly.
Before the development of ADT, it was necessary to record and mix multiple takes of the vocal track. Because it is nearly impossible for a performer to sing or play the same part in exactly the same way twice, a recording and blending of two different performances of the same part will create a fuller, "chorused" effect with double tracking. But if one simply plays back two copies of the same performance in perfect sync, the two sound images become one and no double tracking effect is produced.
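A minimal numerical sketch of the ADT idea: mix a signal with a copy whose delay is slowly swept by an oscillator, so the copy never sits in perfect sync with the original. All parameter values here are illustrative, not the ones Abbey Road used:

```python
# Delay a signal by an LFO-modulated amount and blend it with the original,
# imitating the "never quite in sync" double-track effect. Sketch only.
import numpy as np

def adt(signal, rate=44100, base_delay=0.025, depth=0.003, lfo_hz=0.5):
    n = np.arange(len(signal))
    # time-varying delay in samples, swept by a slow sine oscillator
    delay = (base_delay + depth * np.sin(2 * np.pi * lfo_hz * n / rate)) * rate
    src = np.clip(n - delay, 0, len(signal) - 1)
    delayed = np.interp(src, n, signal)   # fractional-delay read (linear)
    return 0.5 * (signal + delayed)       # blend original and delayed copy
```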
Invention
ADT was invented specially for the Beatles during the spring of 1966 by Ken Townsend, a recording engineer employed at EMI's Abbey Road Studios, mainly at the request of John Lennon, who despised the tedium of double tracking during sessions and regularly expressed a desire for a technical alternative.
Townsend came up with a system using tape delay. Townsend's system added a second tape recorder to the regular se |
https://en.wikipedia.org/wiki/Cable%20modem%20termination%20system | A cable modem termination system (CMTS, also called a CMTS Edge Router) is a piece of equipment, typically located in a cable company's headend or hubsite, which is used to provide high speed data services, such as cable Internet or Voice over Internet Protocol, to cable subscribers. A CMTS provides many of the same functions provided by the DSLAM in a DSL system.
Connections
In order to provide high speed data services, a cable company will connect its headend to the Internet via very high capacity data links to a network service provider. On the subscriber side of the headend, the CMTS enables communication with subscribers' cable modems. Different CMTSs are capable of serving different cable modem population sizes, ranging from 4,000 cable modems to 150,000 or more, depending in part on traffic; it is recommended, for example, that a single I-CMTS service about 30,000 subscribers (cable modems). A given headend may have between 1 and 12 CMTSs to service the cable modem population served by that headend or HFC hub.
One way to think of a CMTS is to imagine a router with Ethernet interfaces (connections) on one side and coaxial cable RF interfaces on the other side. The Ethernet side is known as the Network Side Interface or NSI.
A CMTS has separate RF interfaces and connectors for downlink and uplink signals. The RF/coax interfaces carry RF signals to and from coaxial "trunks" connected to subscribers' cable modems, using one pair of connectors per trunk, one for downlink and the other for uplink. In other words, there can be a pair of RF connectors for every service group, although it is possible to configure a network with different numbers of connectors that service a set of service groups, based on the number of downstream and upstream channels the cable modems in every service group use. Every connector has a finite number of channels it can carry, such as 16 channels per downstream connector, and 4 channels per upstream connector, depending on the CMTS. For exam |
https://en.wikipedia.org/wiki/Antimicrobial | An antimicrobial is an agent that kills microorganisms (microbicide) or stops their growth (bacteriostatic agent). Antimicrobial medicines can be grouped according to the microorganisms they act primarily against. For example, antibiotics are used against bacteria, and antifungals are used against fungi. They can also be classified according to their function. The use of antimicrobial medicines to treat infection is known as antimicrobial chemotherapy, while the use of antimicrobial medicines to prevent infection is known as antimicrobial prophylaxis.
The main classes of antimicrobial agents are disinfectants (non-selective agents, such as bleach), which kill a wide range of microbes on non-living surfaces to prevent the spread of illness, antiseptics (which are applied to living tissue and help reduce infection during surgery), and antibiotics (which destroy microorganisms within the body). The term antibiotic originally described only those formulations derived from living microorganisms but is now also applied to synthetic agents, such as sulfonamides or fluoroquinolones. Though the term used to be restricted to antibacterials (and is often used as a synonym for them by medical professionals and in medical literature), its context has broadened to include all antimicrobials. Antibacterial agents can be further subdivided into bactericidal agents, which kill bacteria, and bacteriostatic agents, which slow down or stall bacterial growth. In response, further advancements in antimicrobial technologies have resulted in solutions that can go beyond simply inhibiting microbial growth. Instead, certain types of porous media have been developed to kill microbes on contact. Overuse or misuse of antimicrobials can lead to the development of antimicrobial resistance.
History
Antimicrobial use has been common practice for at least 2000 years. Ancient Egyptians and ancient Greeks used specific molds and plant extracts to treat infection.
In the 19th century, microbiologist |
https://en.wikipedia.org/wiki/Minimal%20counterexample | In mathematics, a minimal counterexample is the smallest example which falsifies a claim, and a proof by minimal counterexample is a method of proof which combines the use of a minimal counterexample with the ideas of proof by induction and proof by contradiction. More specifically, in trying to prove a proposition P, one first assumes by contradiction that it is false, and that therefore there must be at least one counterexample. With respect to some idea of size (which may need to be chosen carefully), one then concludes that there is such a counterexample C that is minimal. In regard to the argument, C is generally something quite hypothetical (since the truth of P excludes the possibility of C), but it may be possible to argue that if C existed, then it would have some definite properties which, after applying some reasoning similar to that in an inductive proof, would lead to a contradiction, thereby showing that the proposition P is indeed true.
If the form of the contradiction is that we can derive a further counterexample D that is smaller than C in the sense of the working hypothesis of minimality, then this technique is traditionally called proof by infinite descent. In that case, there may be multiple and more complex ways to structure the argument of the proof.
The assumption that if there is a counterexample, there is a minimal counterexample, is based on a well-ordering of some kind. The usual ordering on the natural numbers is clearly possible, by the most usual formulation of mathematical induction; but the scope of the method can include well-ordered induction of any kind.
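A textbook illustration of the method (a standard example, not drawn from the article itself):

```latex
% Claim: every integer $n \ge 2$ has a prime divisor.
% Suppose not, and let $C \ge 2$ be the \emph{least} integer with no prime
% divisor. $C$ is not prime (a prime divides itself), so $C = ab$ with
% $1 < a < C$. By minimality of $C$, some prime $p$ divides $a$, hence
% $p \mid C$ -- a contradiction. So no counterexample exists.
```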
Examples
The minimal counterexample method has been much used in the classification of finite simple groups. The Feit–Thompson theorem, that finite simple groups that are not cyclic groups have even order, was based on the hypothesis that there exists some, and therefore some minimal, simple group G of odd order. Every proper subgroup of G can be assumed a solvable group, meaning that m |
https://en.wikipedia.org/wiki/HD-MAC | HD-MAC (High Definition Multiplexed Analogue Components) was a broadcast television standard proposed by the European Commission in 1986, as part of Eureka 95 project. It belongs to the MAC - Multiplexed Analogue Components standard family. It is an early attempt by the EEC to provide High-definition television (HDTV) in Europe. It is a complex mix of analogue signal (based on the Multiplexed Analogue Components standard), multiplexed with digital sound, and assistance data for decoding (DATV). The video signal (1250 lines/50 fields per second in 16:9 aspect ratio, with 1152 visible lines) was encoded with a modified D2-MAC encoder.
HD-MAC could be decoded by normal D2-MAC standard definition receivers, but no extra resolution was obtained and certain artifacts were visible. To decode the signal in full resolution, a specific HD-MAC tuner was required.
Naming convention
The European Broadcasting Union video format description is as follows: width x height [scan type: i or p] / number of full frames per second
European standard definition digital broadcasts use 720×576i/25, meaning 25 interlaced frames per second, each 720 pixels wide and 576 pixels high: odd lines (1, 3, 5 ...) are grouped to build the odd field, which is transmitted first, followed by the even field containing lines 2, 4, 6... Thus, there are two fields in a frame, resulting in a field frequency of 25 × 2 = 50 Hz.
The visible part of the video signal provided by an HD-MAC receiver was 1152i/25, which exactly doubles the vertical resolution of standard definition. The amount of information is multiplied by 4, considering the encoder started its operations from a 1440x1152i/25 sampling grid.
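The factor of four quoted above, made explicit:

```latex
\frac{1440 \times 1152}{720 \times 576} \;=\; 2 \times 2 \;=\; 4 .
```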
Standard history
Work on HD-MAC specification started officially in May 1986. The purpose was to react against a Japanese proposal, supported by the US, which aimed to establish the NHK-designed Hi-Vision (also known as MUSE) system as a world standard. Besides preservation of the European electronic industr |
https://en.wikipedia.org/wiki/Stochastic%20resonance | Stochastic resonance (SR) is a phenomenon in which a signal that is normally too weak to be detected by a sensor can be boosted by adding white noise, which contains a wide spectrum of frequencies, to the signal. The components of the white noise corresponding to the original signal's frequencies resonate with it, amplifying the original signal without amplifying the rest of the noise, thereby increasing the signal-to-noise ratio and making the original signal more prominent. Further, the added white noise can be strong enough to be detectable by the sensor, which can then filter it out to effectively detect the original, previously undetectable signal.
This phenomenon of boosting undetectable signals by resonating with added white noise extends to many other systems – whether electromagnetic, physical or biological – and is an active area of research.
Stochastic resonance was first proposed by the Italian physicists Roberto Benzi, Alfonso Sutera and Angelo Vulpiani in 1981, and the first application they proposed (together with Giorgio Parisi) was in the context of climate dynamics.
Technical description
Stochastic resonance (SR) is observed when noise added to a system changes the system's behaviour in some fashion. More technically, SR occurs if the signal-to-noise ratio of a nonlinear system or device increases for moderate values of noise intensity. It often occurs in bistable systems or in systems with a sensory threshold and when the input signal to the system is "sub-threshold." For lower noise intensities, the signal does not cause the device to cross threshold, so little signal is passed through it. For large noise intensities, the output is dominated by the noise, also leading to a low signal-to-noise ratio. For moderate intensities, the noise allows the signal to reach threshold, but the noise intensity is not so large as to swamp it. Thus, a plot of signal-to-noise ratio as a function of noise intensity contains a peak.
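A minimal Python sketch of this threshold picture: a sub-threshold sine plus Gaussian noise is passed through a hard threshold, and the output's spectral peak at the signal frequency is compared with the noise floor. All parameter values are illustrative:

```python
# Threshold stochastic resonance sketch: measure how prominent the signal
# frequency is in the thresholded output at different noise intensities.
import numpy as np

rng = np.random.default_rng(0)
n, f_sig, threshold = 4096, 8, 1.0
t = np.arange(n) / n
signal = 0.6 * np.sin(2 * np.pi * f_sig * t)   # peak 0.6: never crosses alone

def snr_at_signal(noise_std: float) -> float:
    out = (signal + rng.normal(0.0, noise_std, n) > threshold).astype(float)
    spectrum = np.abs(np.fft.rfft(out - out.mean()))
    return spectrum[f_sig] / (np.median(spectrum) + 1e-12)

for sigma in (0.05, 0.4, 5.0):
    print(f"noise std {sigma}: SNR ~ {snr_at_signal(sigma):.1f}")
# The ratio typically rises and then falls as the noise grows: a peak at
# moderate noise intensity is the signature of stochastic resonance.
```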
Strictly sp |
https://en.wikipedia.org/wiki/Module%20file | Module file (MOD music, tracker music) is a family of music file formats originating from the MOD file format on Amiga systems used in the late 1980s. Those who produce these files (using the software called music trackers) and listen to them form the worldwide MOD scene, a part of the demoscene subculture.
The mass interchange of "MOD music" or "tracker music" (music stored in module files created with trackers) evolved from early FIDO networks. Many websites host large numbers of these files, the most comprehensive of them being the Mod Archive.
Nowadays, most module files, including ones in compressed form, are supported by most popular media players such as VLC, Foobar2000, Exaile and many others (mainly due to inclusion of common playback libraries such as libmodplug for gstreamer).
Structure
Module files store digitally recorded samples and several "patterns" or "pages" of music data in a form similar to that of a spreadsheet. These patterns contain note numbers, instrument numbers, and controller messages. The number of notes that can be played simultaneously depends on how many "tracks" there are per pattern. The song itself is built from a pattern list, which specifies the order in which the patterns are played, as sketched below.
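A schematic Python sketch of that layout; it mirrors the general structure (patterns of rows holding per-track cells, plus a pattern order list), not any one real module format:

```python
# Schematic module-file structure: samples, patterns of rows of per-track
# cells, and a play-order list over the patterns. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Cell:
    note: int | None = None        # note number, or None for "no new note"
    instrument: int | None = None  # sample/instrument index
    effect: int = 0                # controller/effect command

@dataclass
class Module:
    samples: list[bytes] = field(default_factory=list)  # recorded audio data
    patterns: list[list[list[Cell]]] = field(default_factory=list)  # [pattern][row][track]
    order: list[int] = field(default_factory=list)      # pattern play order

    def playback_sequence(self):
        """Yield the patterns in song order."""
        for idx in self.order:
            yield self.patterns[idx]
```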
A disadvantage of module files is that there is no real standard specification in how the modules should be played back properly, which may result in modules sounding different in different players, sometimes quite significantly so. This is mostly due to effects that can be applied to the samples in the module file and how the authors of different players choose to implement them. However, tracker music has the advantage of requiring very little CPU overhead for playback, and is executed in real-time.
Popular formats
Each module file format builds on concepts introduced in its predecessors.
The MOD format (.MOD)
The MOD format was the first file format for tracked music. A very basic version of this format (with only very few pattern co |
https://en.wikipedia.org/wiki/Center%20tap | In electronics, a center tap (CT) is a contact made to a point halfway along a winding of a transformer or inductor, or along the element of a resistor or a potentiometer.
Taps are sometimes used on inductors for the coupling of signals, and may not necessarily be at the half-way point, but rather, closer to one end. A common application of this is in the Hartley oscillator. Inductors with taps also permit the transformation of the amplitude of alternating current (AC) voltages for the purpose of power conversion, in which case, they are referred to as autotransformers, since there is only one winding. An example of an autotransformer is an automobile ignition coil.
Potentiometer tapping provides one or more connections along the device's element, along with the usual connections at each of the two ends of the element, and the slider connection. Potentiometer taps allow for circuit functions that would otherwise not be available with the usual construction of just the two end connections and one slider connection.
Volts center tapped
Volts center tapped (VCT) describes the voltage output of a center tapped transformer. For example, a 24 VCT transformer will measure 24 VAC across the outer two taps (winding as a whole), and 12 VAC from each outer tap to the center-tap (half winding). These two 12 VAC supplies are 180 degrees out of phase with each other, measured with respect to the tap, thus making it easy to derive positive and negative 12 volt DC power supplies from them.
Applications and history
In vacuum tube audio amplifiers, center-tapped transformers were sometimes used as the phase inverter to drive the two output tubes of a push-pull stage. The technique is nearly as old as electronic amplification and is well documented, for example, in The Radiotron Designer's Handbook, Third Edition of 1940. This technique was carried over into transistor designs also, part of the reason for which was that capacitors were large, expensive and unreliable. However, si |
https://en.wikipedia.org/wiki/Burroughs%20MCP | The MCP (Master Control Program) is the operating system of the Burroughs B5000/B5500/B5700 and the B6500 and successors, including the Unisys Clearpath/MCP systems.
MCP was originally written in 1961 in ESPOL (Executive Systems Problem Oriented Language). In the 1970s, MCP was converted to NEWP, a better-structured, more robust, and more secure form of ESPOL.
The MCP was a leader in many areas, including: the first operating system to manage multiple processors, the first commercial implementation of virtual memory, and the first OS written exclusively in a high-level language.
History
In 1961, the MCP was the first OS written exclusively in a high-level language (HLL). The Burroughs Large Systems (B5000 and successors) were unique in that they were designed with the expectation that all software, including system software, would be written in an HLL rather than in assembly language, an innovative approach at the time.
Unlike IBM, which faced hardware competition after the departure of Gene Amdahl, Burroughs software only ever ran on Burroughs hardware due to a lack of compatible third party hardware. For this reason, Burroughs was free to distribute the source code of all software it sold, including the MCP, which was designed with this openness in mind. For example, upgrading required the user to recompile the system software and apply any needed local patches. At the time, this was common practice, and was necessary as it was not unusual for customers (especially large ones, such as the Federal Reserve) to modify the program to fit their specific needs. As a result, a Burroughs Users Group was formed, which held annual meetings and allowed users to exchange their own extensions to the OS and other parts of the system software suite. Many such extensions have found their way into the base OS code over the years, and are now available to all customers. As such, the MCP could be considered one of the earliest open-source projects.
Burro |
https://en.wikipedia.org/wiki/Leafnode | Leafnode is a store-and-forward NNTP (or Usenet) proxy server designed for small sites with just a few active newsgroups, but very easy to set up and maintain, when compared to INN. Originally created by Arnt Gulbrandsen in 1995 while he was working at Trolltech, it is currently maintained by Matthias Andree and Ralf Wildenhues.
The term leaf node can also describe a node of a binary tree (or any other tree structure) that has no child nodes.
https://en.wikipedia.org/wiki/DNA-binding%20protein | DNA-binding proteins are proteins that have DNA-binding domains and thus have a specific or general affinity for single- or double-stranded DNA. Sequence-specific DNA-binding proteins generally interact with the major groove of B-DNA, because it exposes more functional groups that identify a base pair.
Examples
DNA-binding proteins include transcription factors which modulate the process of transcription, various polymerases, nucleases which cleave DNA molecules, and histones which are involved in chromosome packaging and transcription in the cell nucleus. DNA-binding proteins can incorporate such domains as the zinc finger, the helix-turn-helix, and the leucine zipper (among many others) that facilitate binding to nucleic acid. There are also more unusual examples such as transcription activator like effectors.
Non-specific DNA-protein interactions
Structural proteins that bind DNA are well-understood examples of non-specific DNA-protein interactions. Within chromosomes, DNA is held in complexes with structural proteins. These proteins organize the DNA into a compact structure called chromatin. In eukaryotes, this structure involves DNA binding to a complex of small basic proteins called histones. In prokaryotes, multiple types of proteins are involved. The histones form a disk-shaped complex called a nucleosome, which contains two complete turns of double-stranded DNA wrapped around its surface. These non-specific interactions are formed through basic residues in the histones making ionic bonds to the acidic sugar-phosphate backbone of the DNA, and are therefore largely independent of the base sequence. Chemical modifications of these basic amino acid residues include methylation, phosphorylation and acetylation. These chemical changes alter the strength of the interaction between the DNA and the histones, making the DNA more or less accessible to transcription factors and changing the rate of transcription. Other non-specific DNA-binding proteins in chromatin i |
https://en.wikipedia.org/wiki/Isoelectric%20focusing | Isoelectric focusing (IEF), also known as electrofocusing, is a technique for separating different molecules by differences in their isoelectric point (pI). It is a type of zone electrophoresis usually performed on proteins in a gel that takes advantage of the fact that overall charge on the molecule of interest is a function of the pH of its surroundings.
Procedure
IEF involves adding an ampholyte solution into immobilized pH gradient (IPG) gels. IPGs consist of an acrylamide gel matrix co-polymerized with a pH gradient, which results in completely stable gradients except at the most alkaline (>12) pH values. The immobilized pH gradient is obtained by the continuous change in the ratio of immobilines. An immobiline is a weak acid or base defined by its pK value.
A protein that is in a pH region below its isoelectric point (pI) will be positively charged and so will migrate toward the cathode (negatively charged electrode). As it migrates through a gradient of increasing pH, however, the protein's overall charge will decrease until the protein reaches the pH region that corresponds to its pI. At this point it has no net charge and so migration ceases (as there is no electrical attraction toward either electrode). As a result, the proteins become focused into sharp stationary bands with each protein positioned at a point in the pH gradient corresponding to its pI. The technique is capable of extremely high resolution with proteins differing by a single charge being fractionated into separate bands.
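A toy Python sketch of this focusing behaviour: a particle drifts with velocity proportional to its net charge, which vanishes where the local pH equals its pI. The gradient endpoints and mobility constant are made up:

```python
# Toy model of isoelectric focusing in a linear pH gradient: the particle
# drifts until pH == pI, where its net charge (and velocity) is zero.
def focus_position(pI: float, start_x: float, steps: int = 1000) -> float:
    x = start_x                       # position along the gel, 0..1
    for _ in range(steps):
        pH = 3.0 + 7.0 * x            # hypothetical linear pH 3 -> 10 gradient
        charge = pI - pH              # net charge: positive below pI, negative above
        x += 0.01 * charge            # drift toward the point where pH == pI
        x = min(max(x, 0.0), 1.0)
    return x

print(round(focus_position(pI=7.0, start_x=0.1), 3))  # ~0.571, where pH == 7
```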
Molecules to be focused are distributed over a medium that has a pH gradient (usually created by aliphatic ampholytes). An electric current is passed through the medium, creating a "positive" anode and "negative" cathode end. Negatively charged molecules migrate through the pH gradient in the medium toward the "positive" end while positively charged molecules move toward the "negative" end. As a particle moves toward the pole opposite of its charge it moves through the changin |
https://en.wikipedia.org/wiki/Valuation%20ring | In abstract algebra, a valuation ring is an integral domain D such that for every element x of its field of fractions F, at least one of x or x−1 belongs to D.
Given a field F, if D is a subring of F such that either x or x−1 belongs to D for every nonzero x in F, then D is said to be a valuation ring for the field F or a place of F. Since F in this case is indeed the field of fractions of D, a valuation ring for a field is a valuation ring. Another way to characterize the valuation rings of a field F is that valuation rings D of F have F as their field of fractions, and their ideals are totally ordered by inclusion; or equivalently their principal ideals are totally ordered by inclusion. In particular, every valuation ring is a local ring.
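A standard concrete example (not spelled out in the text above) is the localization of the integers at a prime:

```latex
% Fix a prime p and set
D = \mathbb{Z}_{(p)} = \{\, a/b \in \mathbb{Q} : p \nmid b \,\}.
% Writing a nonzero x \in \mathbb{Q} as x = p^{k} a/b with p \nmid ab,
% either k \ge 0 (so x \in D) or k < 0 (so x^{-1} \in D);
% the value group is \mathbb{Z}, with \nu(x) = k the p-adic valuation.
```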
The valuation rings of a field are the maximal elements of the set of the local subrings of the field partially ordered by dominance (or refinement), where (S, m_S) dominates (R, m_R) if S ⊇ R and m_S ∩ R = m_R.
Every local ring in a field K is dominated by some valuation ring of K.
An integral domain whose localization at any prime ideal is a valuation ring is called a Prüfer domain.
Definitions
There are several equivalent definitions of valuation ring (see below for the characterization in terms of dominance). For an integral domain D and its field of fractions K, the following are equivalent:
For every nonzero x in K, either x is in D or x−1 is in D.
The ideals of D are totally ordered by inclusion.
The principal ideals of D are totally ordered by inclusion (i.e. the elements in D are, up to units, totally ordered by divisibility.)
There is a totally ordered abelian group Γ (called the value group) and a valuation ν: K → Γ ∪ {∞} with D = { x ∈ K | ν(x) ≥ 0 }.
The equivalence of the first three definitions follows easily. A classical theorem states that any ring satisfying the first three conditions satisfies the fourth: take Γ to be the quotient K×/D× of the unit group of K by the unit group of D, and take ν to be the natural projection. We can turn Γ into |
https://en.wikipedia.org/wiki/Edman%20degradation | Edman degradation, developed by Pehr Edman, is a method of sequencing amino acids in a peptide. In this method, the amino-terminal residue is labeled and cleaved from the peptide without disrupting the peptide bonds between other amino acid residues.
Mechanism
Phenyl isothiocyanate is reacted with an uncharged N-terminal amino group, under mildly alkaline conditions, to form a cyclical phenylthiocarbamoyl derivative. Then, under acidic conditions, this derivative of the terminal amino acid is cleaved as a thiazolinone derivative. The thiazolinone amino acid is then selectively extracted into an organic solvent and treated with acid to form the more stable phenylthiohydantoin (PTH)-amino acid derivative, which can be identified by chromatography or electrophoresis. This procedure can then be repeated to identify the next amino acid. A major drawback of this technique is that the peptides being sequenced in this manner cannot have more than 50 to 60 residues (and in practice, under 30). The peptide length is limited because the cyclical derivatization does not always go to completion. The derivatization problem can be resolved by cleaving large peptides into smaller peptides before proceeding with the reaction. Modern machines can accurately sequence up to 30 amino acids, with over 99% efficiency per amino acid. An advantage of the Edman degradation is that it uses only 10–100 picomoles of peptide for the sequencing process. The Edman degradation reaction was automated in 1967 by Edman and Beggs to speed up the process, and 100 automated devices were in use worldwide by 1973.
Limitations
Because the Edman degradation proceeds from the N-terminus of the protein, it will not work if the N-terminus has been chemically modified (e.g. by acetylation or formation of pyroglutamic acid). Sequencing will stop if a non-α-amino acid is encountered (e.g. isoaspartic acid), since the favored five-membered ring intermediate is unable to be formed. |
https://en.wikipedia.org/wiki/Bottling%20line | Bottling lines are production lines that fill a product, generally a beverage, into bottles on a large scale. Many prepared foods are also bottled, such as sauces, syrups, marinades, oils and vinegars.
Beer bottling process
Packaging of bottled beer typically involves drawing the product from a holding tank and filling it into bottles in a filling machine (filler), which are then capped, labeled and packed into cases or cartons. Many smaller breweries send their bulk beer to large facilities for contract bottling—though some will bottle by hand. Virtually all beer bottles are glass.
The first step in bottling beer is depalletising, where the empty bottles are removed from the original pallet packaging delivered from the manufacturer, so that individual bottles may be handled. The bottles may then be rinsed with filtered water or air, and may have carbon dioxide injected into them in an attempt to reduce the level of oxygen within the bottle. The bottle then enters a "filler", which fills the bottle with beer and may also inject a small amount of inert gas (usually carbon dioxide or nitrogen) on top of the beer to disperse the oxygen, as oxygen can ruin the quality of the product via oxidation. Finally, the bottles go through a "capper", which applies a bottle cap, sealing the bottle. A few beers are bottled with a cork and cage.
Next the bottle enters a labelling machine ("labeller") where a label is applied. To ensure traceability of the product, a lot number, generally the date and time of bottling, may also be printed on the bottle. The product is then packed into boxes and warehoused, ready for sale.
Depending on the magnitude of the bottling endeavor, there are many different types of bottling machinery available. Liquid level machines fill bottles so they appear to be filled to the same line on every bottle, while volumetric filling machines fill each bottle with exactly the same amount of liquid. Overflow pressure fillers are the most popular machines with b |
https://en.wikipedia.org/wiki/Tine%20test | The tine test is a multiple-puncture tuberculin skin test used to aid in the medical diagnosis of tuberculosis (TB). The tine test is similar to the Heaf test, although the Mantoux test is usually used instead. There are multiple forms of the tine tests which usually fall into two categories: the old tine test (OT) and the purified protein derivative (PPD) tine test. Common brand names of the test include Aplisol, Aplitest, Tuberculin PPD TINE TEST, and Tubersol.
Procedure
This test uses a small "button" that has four to six short needles coated with TB antigens (tuberculin), either an old tuberculin or a PPD-tuberculin. The needles are pressed into the skin (usually on the inner side of the forearm), forcing the antigens into the skin. The test is then read 48 to 72 hours later by measuring the size of the largest papule or induration. Indications are usually classified as positive, negative, or doubtful.
Because it is not possible to control precisely the amount of tuberculin used in the tine test, a positive test should be verified using the Mantoux test.
PPD
Tuberculin is a glycerol extract of the tubercle bacillus. Purified protein derivative (PPD) tuberculin is a precipitate of non-species-specific molecules obtained from filtrates of sterilized, concentrated cultures. It was first described by Robert Koch in 1890 and later by Giovanni Petragnani. A batch of PPD created in 1939, called PPD-S, serves as the US and international standard. PPD-S concentration is not standardized for multiple-puncture techniques and should be designed for the specific multiple-puncture system.
Comparison to Mantoux test
Neither the American Thoracic Society nor the Centers for Disease Control and Prevention (CDC) recommends the tine test, since the amount of tuberculin that enters the skin cannot be measured. For this reason, the tine test is often considered less reliable.
Contrary to this, however, studies have shown that the tine test can give results that correlate well to t |
https://en.wikipedia.org/wiki/Prehensility | Prehensility is the quality of an appendage or organ that has adapted for grasping or holding. The word is derived from the Latin term prehendere, meaning "to grasp". The ability to grasp is likely derived from a number of different origins. The most common are tree-climbing and the need to manipulate food.
Examples
Appendages that can become prehensile include:
Uses
Prehensility affords animals a great natural advantage in manipulating their environment for feeding, climbing, digging, and defense. It enables many animals, such as primates, to use tools to complete tasks that would otherwise be impossible without highly specialized anatomy. For example, chimpanzees have the ability to use sticks to obtain termites and grubs in a manner similar to human fishing. However, not all prehensile organs are applied to tool use; the giraffe tongue, for instance, is instead used in feeding and self-cleaning.
References
Animal anatomy |
https://en.wikipedia.org/wiki/Sol%20LeWitt | Solomon "Sol" LeWitt (September 9, 1928 – April 8, 2007) was an American artist linked to various movements, including conceptual art and minimalism.
LeWitt came to fame in the late 1960s with his wall drawings and "structures" (a term he preferred instead of "sculptures") but was prolific in a wide range of media including drawing, printmaking, photography, painting, installation, and artist's books. He has been the subject of hundreds of solo exhibitions in museums and galleries around the world since 1965. The first biography of the artist, Sol LeWitt: A Life of Ideas, by Lary Bloom, was published by Wesleyan University Press in the spring of 2019.
Life
LeWitt was born in Hartford, Connecticut, to a family of Jewish immigrants from Russia. His father died when he was 6. His mother took him to art classes at the Wadsworth Atheneum in Hartford. After receiving a BFA from Syracuse University in 1949, LeWitt traveled to Europe where he was exposed to Old Master paintings. Shortly thereafter, he served in the Korean War, first in California, then Japan, and finally Korea. LeWitt moved to New York City in 1953 and set up a studio on the Lower East Side, in the old Ashkenazi Jewish settlement on Hester Street. During this time he studied at the School of Visual Arts while also pursuing his interest in design at Seventeen magazine, where he did paste-ups, mechanicals, and photostats. In 1955, he was a graphic designer in the office of architect I.M. Pei for a year. Around that time, LeWitt also discovered the work of the late 19th-century photographer Eadweard Muybridge, whose studies in sequence and locomotion were an early influence for him. These experiences, combined with an entry-level job as a night receptionist and clerk he took in 1960 at the Museum of Modern Art (MoMA) in New York, would influence LeWitt's later work.
At MoMA, LeWitt's co-workers included fellow artists Robert Ryman, Dan Flavin, Gene Beery, and Robert Mangold, and the future art critic and |
https://en.wikipedia.org/wiki/Vortex%20tube | The vortex tube, also known as the Ranque-Hilsch vortex tube, is a mechanical device that separates a compressed gas into hot and cold streams. The gas emerging from the hot end can be substantially hotter, and the gas emerging from the cold end substantially colder, than the supply gas. It has no moving parts and is considered an environmentally friendly technology because it can work solely on compressed air and does not use Freon. Its efficiency is low, however, counteracting its other environmental advantages.
Pressurised gas is injected tangentially into a swirl chamber near one end of a tube, leading to a rapid rotation—the first vortex—as it moves along the inner surface of the tube to the far end. A conical nozzle allows gas specifically from this outer layer to escape at that end through a valve. The remainder of the gas is forced to return in an inner vortex of reduced diameter within the outer vortex. Gas from the inner vortex transfers heat to the gas in the outer vortex, so the outer layer is hotter at the far end than it was initially. The gas in the central vortex is likewise cooler upon its return to the starting-point, where it is released from the tube.
Method of operation
To explain the temperature separation in a vortex tube, there are two main approaches:
Fundamental approach: the physics
This approach is based on first-principles physics alone and is not limited to vortex tubes only, but applies to moving gas in general. It shows that temperature separation in a moving gas is due only to enthalpy conservation in a moving frame of reference.
The thermal process in the vortex tube can be estimated in the following way:
The main physical phenomenon of the vortex tube is the temperature separation between the cold vortex core and the warm vortex periphery. The "vortex tube effect" is fully explained with the work equation of Euler, also known as Euler's turbine equation, which can be written in its most general vectorial form as:
$$T + \frac{\vec{v}^{\,2}}{2 c_p} - \frac{\vec{\omega} \cdot (\vec{r} \times \vec{v})}{c_p} = T_0,$$
where $T_0$ is the total, or stagnation t |
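As a concrete check on Euler's turbine equation, the sketch below evaluates the stagnation temperature of a gas parcel in solid-body rotation. The parcel values and the use of air's specific heat (c_p of about 1005 J/(kg·K)) are illustrative assumptions, not figures from the article.

```python
import numpy as np

def stagnation_temperature(T, v, omega, r, c_p=1005.0):
    """Euler's turbine equation solved for the stagnation temperature:
    T0 = T + |v|^2/(2*c_p) - omega . (r x v)/c_p
    with T in K, v in m/s, omega in rad/s, r in m; c_p defaults to air."""
    return T + np.dot(v, v) / (2.0 * c_p) - np.dot(omega, np.cross(r, v)) / c_p

# Illustrative parcel: solid-body rotation at 3000 rad/s on a 5 mm radius,
# giving a tangential speed |omega x r| of 15 m/s.
omega = np.array([0.0, 0.0, 3000.0])
r = np.array([0.005, 0.0, 0.0])
v = np.cross(omega, r)

# For solid-body rotation omega.(r x v) equals |v|^2, so T0 < T: the rotating
# core has a lower stagnation temperature, consistent with the cold core above.
print(stagnation_temperature(293.0, v, omega, r))  # about 292.9 K
```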
https://en.wikipedia.org/wiki/Temporal%20Key%20Integrity%20Protocol | Temporal Key Integrity Protocol (TKIP) is a security protocol used in the IEEE 802.11 wireless networking standard. TKIP was designed by the IEEE 802.11i task group and the Wi-Fi Alliance as an interim solution to replace WEP without requiring the replacement of legacy hardware. This was necessary because the breaking of WEP had left Wi-Fi networks without viable link-layer security, and a solution was required for already deployed hardware. However, TKIP itself is no longer considered secure, and was deprecated in the 2012 revision of the 802.11 standard.
Background
On October 31, 2002, the Wi-Fi Alliance endorsed TKIP under the name Wi-Fi Protected Access (WPA). The IEEE endorsed the final version of TKIP, along with more robust solutions such as 802.1X and the AES-based CCMP, when they published IEEE 802.11i-2004 on 23 July 2004. The Wi-Fi Alliance soon afterwards adopted the full specification under the marketing name WPA2.
TKIP was resolved to be deprecated by the IEEE in January 2009.
Technical details
TKIP and the related WPA standard implement three new security features to address security problems encountered in WEP protected networks. First, TKIP implements a key mixing function that combines the secret root key with the initialization vector before passing it to the RC4 cipher initialization. WEP, in comparison, merely concatenated the initialization vector to the root key, and passed this value to the RC4 routine. This permitted the vast majority of the RC4-based WEP-related key attacks. Second, WPA implements a sequence counter to protect against replay attacks. Packets received out of order will be rejected by the access point. Finally, TKIP implements a 64-bit Message Integrity Check (MIC) and re-initializes the sequence number each time a new key (Temporal Key) is used.
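The sequence-counter idea above is easy to sketch: a receiver accepts a packet only when its counter is strictly greater than the last accepted value, and the counter restarts on rekey. This is a toy model only, not the actual TKIP TSC handling or frame format; the class and method names are invented for the illustration.

```python
class ReplaySequenceCheck:
    """Toy monotonic-counter replay check in the spirit of TKIP's
    sequence counter. Real TKIP tracks a 48-bit TSC per temporal key
    inside the frame format; this model keeps only the core rule."""

    def __init__(self):
        self.last_seq = -1

    def rekey(self):
        # Mirrors re-initializing the sequence number when a new key is used.
        self.last_seq = -1

    def accept(self, seq):
        if seq <= self.last_seq:
            return False          # replayed or out-of-order: reject
        self.last_seq = seq
        return True

rx = ReplaySequenceCheck()
print([rx.accept(s) for s in (0, 1, 1, 5, 3)])  # [True, True, False, True, False]
rx.rekey()
print(rx.accept(0))                              # True again after the rekey
```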
To be able to run on legacy WEP hardware with minor upgrades, TKIP uses RC4 as its cipher. TKIP also provides a rekeying mechanism. TKIP ensures that every data packet |
https://en.wikipedia.org/wiki/Preimage%20attack | In cryptography, a preimage attack on cryptographic hash functions tries to find a message that has a specific hash value. A cryptographic hash function should resist attacks on its preimage (set of possible inputs).
In the context of attack, there are two types of preimage resistance:
preimage resistance: for essentially all pre-specified outputs, it is computationally infeasible to find any input that hashes to that output; i.e., given $y$, it is difficult to find an $x$ such that $h(x) = y$.
second-preimage resistance: for a specified input, it is computationally infeasible to find another input which produces the same output; i.e., given $x$, it is difficult to find a second input $x' \ne x$ such that $h(x) = h(x')$.
These can be compared with collision resistance, in which it is computationally infeasible to find any two distinct inputs $x$, $x'$ that hash to the same output; i.e., such that $h(x) = h(x')$.
Collision resistance implies second-preimage resistance, but does not guarantee preimage resistance. Conversely, a second-preimage attack implies a collision attack (trivially, since, in addition to $x'$, $x$ is already known right from the start).
Applied preimage attacks
By definition, an ideal hash function is such that the fastest way to compute a first or second preimage is through a brute-force attack. For an $n$-bit hash, this attack has a time complexity $2^n$, which is considered too high for a typical output size of $n = 128$ bits. If such complexity is the best that can be achieved by an adversary, then the hash function is considered preimage-resistant. However, there is a general result that quantum computers perform a structured preimage attack in $\sqrt{2^n} = 2^{n/2}$, which also implies second preimage and thus a collision attack.
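The $2^n$ figure can be made concrete by truncating a real hash to a small $n$, at which point brute force becomes feasible. A toy sketch with invented helper names; this is not an attack on full SHA-256, whose 256-bit preimage space is far out of reach.

```python
import hashlib
from itertools import count

def h(msg, n_bits=20):
    """SHA-256 truncated to n_bits: a deliberately weak toy hash."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") >> (256 - n_bits)

def brute_force_preimage(target, n_bits=20):
    """Try candidate inputs until one hashes to the target: about
    2**n_bits trials on average, so easy at n = 20, hopeless at n = 128."""
    for i in count():
        candidate = i.to_bytes(8, "big")
        if h(candidate, n_bits) == target:
            return candidate

target = h(b"secret message")
print(brute_force_preimage(target))  # finds *a* preimage, not necessarily the original
```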
Faster preimage attacks can be found by cryptanalysing certain hash functions, and are specific to that function. Some significant preimage attacks have already been discovered, but they are not yet practical. If a practical preimage attack is discovered, it would drastically affect many Internet protocol |
https://en.wikipedia.org/wiki/Collision%20attack | In cryptography, a collision attack on a cryptographic hash tries to find two inputs producing the same hash value, i.e. a hash collision. This is in contrast to a preimage attack where a specific target hash value is specified.
There are roughly two types of collision attacks:
Classical collision attack: find two different messages m1 and m2 such that hash(m1) = hash(m2).
More generally:
Chosen-prefix collision attack: given two different prefixes p1 and p2, find two suffixes s1 and s2 such that hash(p1 ∥ s1) = hash(p2 ∥ s2), where ∥ denotes the concatenation operation.
Classical collision attack
Much like symmetric-key ciphers are vulnerable to brute force attacks, every cryptographic hash function is inherently vulnerable to collisions using a birthday attack. Due to the birthday problem, these attacks are much faster than a brute force would be. A hash of n bits can be broken in $2^{n/2}$ time steps (evaluations of the hash function).
Mathematically stated, a collision attack finds two different messages m1 and m2, such that hash(m1) = hash(m2). In a classical collision attack, the attacker has no control over the content of either message, but they are arbitrarily chosen by the algorithm.
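The $2^{n/2}$ birthday bound can be demonstrated on a deliberately truncated hash: at $n = 32$ a collision typically appears after roughly $2^{16}$ evaluations. A sketch with invented helper names, not an attack on full SHA-256.

```python
import hashlib

def h(msg, n_bits=32):
    """SHA-256 truncated to n_bits, so collisions are reachable quickly."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") >> (256 - n_bits)

def birthday_collision(n_bits=32):
    """Hash distinct messages, remembering each output, until one repeats.
    By the birthday problem this takes about 2**(n_bits/2) evaluations."""
    seen = {}
    i = 0
    while True:
        msg = i.to_bytes(8, "big")
        v = h(msg, n_bits)
        if v in seen:
            return seen[v], msg      # two different messages, same hash value
        seen[v] = msg
        i += 1

m1, m2 = birthday_collision()
print(m1 != m2, h(m1) == h(m2))      # True True
```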
More efficient attacks are possible by employing cryptanalysis to specific hash functions. When a collision attack is discovered and is found to be faster than a birthday attack, a hash function is often denounced as "broken". The NIST hash function competition was largely induced by published collision attacks against two very commonly used hash functions, MD5 and SHA-1. The collision attacks against MD5 have improved so much that, as of 2007, it takes just a few seconds on a regular computer. Hash collisions created this way are usually constant length and largely unstructured, so cannot directly be applied to attack widespread document formats or protocols.
However, workarounds are possible by abusing dynamic constructs present in many formats. In this way, two documents woul |
https://en.wikipedia.org/wiki/Livestock%20branding | Livestock branding is a technique for marking livestock so as to identify the owner. Originally, livestock branding only referred to hot branding large stock with a branding iron, though the term now includes alternative techniques. Other forms of livestock identification include freeze branding, inner lip or ear tattoos, earmarking, ear tagging, and radio-frequency identification (RFID), which is tagging with a microchip implant. The semi-permanent paint markings used to identify sheep are called a paint or color brand. In the American West, branding evolved into a complex marking system still in use today.
History
The act of marking livestock with fire-heated marks to identify ownership has origins in ancient times, with use dating back to the ancient Egyptians around 2,700 BCE. Among the ancient Romans, the symbols used for brands were sometimes chosen as part of a magic spell aimed at protecting animals from harm.
In the English lexicon, the word "brand", common to most Germanic languages (from which root also comes "burn", cf. German Brand "burning, fire"), originally meant anything hot or burning, such as a "firebrand", a burning stick. By the European Middle Ages, it commonly identified the process of burning a mark into stock animals with thick hides, such as cattle, so as to identify ownership under animus revertendi. The practice became particularly widespread in nations with large cattle grazing regions, such as Spain.
These European customs were imported to the Americas and were further refined by the vaquero tradition in what today is the southwestern United States and northern Mexico. In the American West, a "branding iron" consisted of an iron rod with a simple symbol or mark, which cowboys heated in a fire. After the branding iron turned red hot, the cowboy pressed the branding iron against the hide of the cow. The unique brand meant that cattle owned by multiple ranches could then graze freely together on the open range. Cowboys could then separate t |
https://en.wikipedia.org/wiki/Pit%20sword | The pit sword (also known as a rodmeter) is a blade of metal or plastic that extends into the water beneath the hull of a ship. It is part of the pitometer log, a device for measuring the ship's speed through the water.
See also
Electromagnetic log
Pitot tube
References
External links
Historical site with information on US Navy submarine pitometer log systems during World War II. This site shows the pit sword or rodmeter as it is deployed.
Measuring instruments |
https://en.wikipedia.org/wiki/Cyrix%20Cx486SLC | The Cyrix Cx486SLC is an x86 microprocessor that was developed by Cyrix. It was one of Cyrix's first CPU offerings, released after years of selling math coprocessors that competed with Intel's units and offered better performance at a comparable or lower price. It was announced in March 1992 and released two months later in May, with a price of $119. It was priced competitively against the Intel 486SX, causing Intel to lower the price of their chip from $286 to $119 in just days.
Specifications
The 486SLC is based on the i386SX bus, and was intended as an entry-level chip to compete with the Intel 386SX and 486SX. SGS-Thomson and Texas Instruments manufactured the 486SLC for Cyrix. Texas Instruments also sold it under its own name as the TX486SLC. Later, Texas Instruments also released their own version of the chip, the TI486SXLC, which featured an 8 KB internal cache versus the 1 KB of the original Cyrix design. These chips went under the name Potomac, and Cyrix would receive full royalties for them. The similarly named IBM 486SLC, 486SLC2, 486SLC3 (16-bit external bus version of IBM 486DLC aka Blue Lightning) and IBM 386SLC are often confused with the Cyrix chips, but are not related and instead based on an Intel CPU core.
Introduced in May 1992, like the later and more famous Cyrix Cx5x86 it was a hybrid CPU, incorporating features of a new CPU (in this case the Intel 80486) while having a pinout similar to the existing 386SX, enabling existing board designs to be easily modified for the new chip. It ran at speeds of 20, 25, 33, and 40 MHz, although it had difficulty running reliably at 40 MHz using some operating systems. In December 1992 Cyrix also released the Cyrix Cx486SLC/e (25, 33 MHz), which offered power management, and a low-voltage laptop version, the 3.3 V Cyrix Cx486SLC/e-V (20, 25 MHz). In December 1993 Cyrix released a clock-doubled version (25/50 MHz), the Cx486SLC2, as well as a clip-on upgrade for 386SX systems, the Cx486SRx2 (16/33, 20/40 & 25/50 MHz).
The 486SLC |
https://en.wikipedia.org/wiki/Byzantine%20fault | A Byzantine fault (also Byzantine generals problem, interactive consistency, source congruency, error avalanche, Byzantine agreement problem, and Byzantine failure) is a condition of a computer system, particularly distributed computing systems, where components may fail and there is imperfect information on whether a component has failed. The term takes its name from an allegory, the "Byzantine generals problem", developed to describe a situation in which, to avoid catastrophic failure of the system, the system's actors must agree on a concerted strategy, but some of these actors are unreliable.
In a Byzantine fault, a component such as a server can inconsistently appear both failed and functioning to failure-detection systems, presenting different symptoms to different observers. It is difficult for the other components to declare it failed and shut it out of the network, because they need to first reach a consensus regarding which component has failed in the first place. Byzantine fault tolerance (BFT) is the resilience of a fault-tolerant computer system to such conditions.
Definition
A Byzantine fault is any fault presenting different symptoms to different observers. A Byzantine failure is the loss of a system service due to a Byzantine fault in systems that require consensus among distributed nodes.
As an analogy of the fault's simplest form, consider a number of generals who are attacking a fortress. The generals must decide as a group whether to attack or retreat; some may prefer to attack, while others prefer to retreat. The important thing is that all generals agree on a common decision, for a halfhearted attack by a few generals would become a rout, and would be worse than either a coordinated attack or a coordinated retreat.
The problem is complicated by the presence of treacherous generals who may not only cast a vote for a suboptimal strategy, they may do so selectively. For instance, if nine generals are voting, four of whom support attacking wh |
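The excerpt breaks off mid-example, but the selective-voting scenario it sets up (nine generals, a traitor telling each camp what it wants to hear) is easy to simulate. A toy sketch, assuming each loyal general simply follows the majority of the votes it personally received:

```python
def decide(votes):
    """A loyal general's rule: follow the strict majority of received votes."""
    return "attack" if votes.count("attack") > len(votes) / 2 else "retreat"

# Nine generals: four honestly vote attack, four honestly vote retreat, and
# one traitor sends a different vote to each camp, as the passage describes.
honest = ["attack"] * 4 + ["retreat"] * 4

seen_by_attackers = honest + ["attack"]    # traitor tells this camp "attack"
seen_by_retreaters = honest + ["retreat"]  # ...and tells the other "retreat"

print(decide(seen_by_attackers))   # attack
print(decide(seen_by_retreaters))  # retreat: the loyal generals split,
                                   # producing the worst-case halfhearted attack
```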
https://en.wikipedia.org/wiki/Specific%20absorption%20rate | Specific absorption rate (SAR) is a measure of the rate at which energy is absorbed per unit mass by a human body when exposed to a radio frequency (RF) electromagnetic field. It is defined as the power absorbed per mass of tissue and has units of watts per kilogram (W/kg).
SAR is usually averaged either over the whole body, or over a small sample volume (typically 1 g or 10 g of tissue). The value cited is then the maximum level measured in the body part studied over the stated volume or mass.
Calculation
SAR for electromagnetic energy can be calculated from the electric field within the tissue as

$$\mathrm{SAR} = \frac{1}{V} \int_{\text{sample}} \frac{\sigma(\mathbf{r})\,|\mathbf{E}(\mathbf{r})|^2}{\rho(\mathbf{r})}\, d\mathbf{r},$$

where
$\sigma$ is the sample electrical conductivity,
$\mathbf{E}$ is the RMS electric field,
$\rho$ is the sample density,
$V$ is the volume of the sample.
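A direct discretization of this integral sums σ|E|²/ρ over sample voxels and divides by the sample volume. The tissue-like numbers below are illustrative placeholders, not regulatory test values.

```python
import numpy as np

def sar(sigma, e_rms, rho, cell_volume):
    """Discrete SAR = (1/V) * sum(sigma * |E|^2 / rho * dV) over voxels.
    sigma [S/m], e_rms [V/m], rho [kg/m^3], cell_volume [m^3]; returns W/kg."""
    sigma, e_rms, rho = map(np.asarray, (sigma, e_rms, rho))
    total_volume = cell_volume * sigma.size
    return float(np.sum(sigma * e_rms**2 / rho) * cell_volume / total_volume)

# Uniform placeholder tissue: sigma = 0.8 S/m, E = 50 V/m RMS, rho = 1000 kg/m^3
# -> SAR = 0.8 * 50**2 / 1000 = 2.0 W/kg, independent of the voxel count.
print(sar(np.full(10, 0.8), np.full(10, 50.0), np.full(10, 1000.0), 1e-9))
```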
SAR measures exposure to fields between 100 kHz and 10 GHz (known as radio waves). It is commonly used to measure power absorbed from mobile phones and during MRI scans. The value depends heavily on the geometry of the part of the body that is exposed to the RF energy and on the exact location and geometry of the RF source. Thus tests must be made with each specific source, such as a mobile-phone model and at the intended position of use.
Mobile phone SAR testing
When measuring the SAR due to a mobile phone the phone is placed against a representation of a human head (a "SAR Phantom") in a talk position. The SAR value is then measured at the location that has the highest absorption rate in the entire head, which in the case of a mobile phone is often as close to the phone's antenna as possible. Measurements are made for different positions on both sides of the head and at different frequencies representing the frequency bands at which the device can transmit. Depending on the size and capabilities of the phone, additional testing may also be required to represent usage of the device while placed close to the user's body and/or extremities. Various governments have defined maximum SAR levels for RF energy emitted by mobile devices:
United |
https://en.wikipedia.org/wiki/Aspic | Aspic or meat jelly is a savory gelatin made with a meat stock or broth, set in a mold to encase other ingredients. These often include pieces of meat, seafood, vegetable, or eggs. Aspic is also sometimes referred to as aspic gelée or aspic jelly. In its simplest form, aspic is essentially a gelatinous version of conventional soup.
History
The 10th-century Kitab al-Tabikh, the earliest known Arabic cookbook, contains a recipe for a fish aspic. This dish was made by boiling several large fish heads with vinegar, parsley, cassia, whole onions, rue, black pepper, ginger, spikenard, galangal, clove, coriander seeds, and long pepper. The resulting dish was then colored with saffron to give it a "radiant red" color. The cooked fish heads and seasonings were then removed from the cooking liquid before the tongues and the lips were returned to steep until the liquid and everything in it had cooled and gelatinized.
According to one poetic reference by Ibrahim ibn al-Mahdi, who described a version of the dish prepared with Iraqi carp, it was "like ruby on the platter, set in a pearl ... steeped in saffron thus, like garnet it looks, vibrantly red, shimmering on silver".
Historically, meat aspics were made even before fruit- and vegetable-flavoured aspics. By the Middle Ages, cooks had discovered that a thickened meat broth could be made into a jelly. A detailed recipe for aspic is found in Le Viandier, written in or around 1375.
In the early 19th century, the French chef Marie-Antoine Carême created chaudfroid. The term chaudfroid means "hot cold" in French, referring to foods that were prepared hot and served cold. Aspic was used as a chaudfroid sauce in many cold fish and poultry meals, where it added moisture and flavour to the food. Carême also invented various types of aspic and ways of preparing it.
Aspic came into prominence in America in the early 20th century. By the 1950s, meat aspic was a popular dinner staple, as were other gelatin-based dishes suc |
https://en.wikipedia.org/wiki/Total%20least%20squares | In applied statistics, total least squares is a type of errors-in-variables regression, a least squares data modeling technique in which observational errors on both dependent and independent variables are taken into account. It is a generalization of Deming regression and also of orthogonal regression, and can be applied to both linear and non-linear models.
The total least squares approximation of the data is generically equivalent to the best, in the Frobenius norm, low-rank approximation of the data matrix.
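This equivalence yields the classical SVD-based solution: append y to X, take the right singular vector belonging to the smallest singular value, and read the coefficients off that null direction. A minimal sketch on synthetic data (the noise levels and seed are arbitrary choices for the illustration):

```python
import numpy as np

def tls(X, y):
    """Classical total least squares via the SVD of the augmented matrix
    [X | y]: the smallest right singular vector gives the best (Frobenius)
    low-rank perturbation, and the coefficients follow from it."""
    Z = np.column_stack([X, y])
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1]                   # right singular vector, smallest singular value
    return -v[:-1] / v[-1]

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
X = (t + 0.05 * rng.standard_normal(50)).reshape(-1, 1)  # errors in x too
y = 2.0 * t + 0.05 * rng.standard_normal(50)             # errors in y
print(tls(X, y))  # close to [2.0]; ordinary least squares would be biased low
```

Unlike ordinary least squares, which attributes all error to y, this treatment perturbs X and y jointly, which is exactly the errors-in-variables setting described above.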
Linear model
Background
In the least squares method of data modeling, the objective function, S,

$$S = \mathbf{r}^\top \mathbf{W} \mathbf{r},$$

is minimized, where r is the vector of residuals and W is a weighting matrix. In linear least squares the model contains equations which are linear in the parameters appearing in the parameter vector $\boldsymbol{\beta}$, so the residuals are given by

$$\mathbf{r} = \mathbf{y} - \mathbf{X}\boldsymbol{\beta}.$$

There are m observations in y and n parameters in β with m > n. X is an m×n matrix whose elements are either constants or functions of the independent variables, x. The weight matrix W is, ideally, the inverse of the variance-covariance matrix of the observations y. The independent variables are assumed to be error-free. The parameter estimates are found by setting the gradient equations to zero, which results in the normal equations

$$\mathbf{X}^\top \mathbf{W} \mathbf{X} \boldsymbol{\beta} = \mathbf{X}^\top \mathbf{W} \mathbf{y}.$$
Allowing observation errors in all variables
Now, suppose that both x and y are observed subject to error, with variance-covariance matrices $\mathbf{M}_x$ and $\mathbf{M}_y$ respectively. In this case the objective function can be written as

$$S = \mathbf{r}_x^\top \mathbf{M}_x^{-1} \mathbf{r}_x + \mathbf{r}_y^\top \mathbf{M}_y^{-1} \mathbf{r}_y,$$

where $\mathbf{r}_x$ and $\mathbf{r}_y$ are the residuals in x and y respectively. Clearly these residuals cannot be independent of each other, but they must be constrained by some kind of relationship. Writing the model function as $f(\mathbf{x}, \boldsymbol{\beta})$, the constraints are expressed by m condition equations.
Thus, the problem is to minimize the objective function subject to the m constraints. It is solved by the use of Lagrange multipliers. After some algebraic manipulations, the result is obtained.
or alternatively
where M is the v |
https://en.wikipedia.org/wiki/Mathematical%20Alphanumeric%20Symbols | Mathematical Alphanumeric Symbols is a Unicode block comprising styled forms of Latin and Greek letters and decimal digits that enable mathematicians to denote different notions with different letter styles. The letters in various fonts often have specific, fixed meanings in particular areas of mathematics. By providing uniformity over numerous mathematical articles and books, these conventions help to read mathematical formulas. These also may be used to differentiate between concepts that share a letter in a single problem.
Unicode now includes many such symbols (in the range U+1D400–U+1D7FF). The rationale behind this is that it enables design and usage of special mathematical characters (fonts) that include all necessary properties to differentiate from other alphanumerics, e.g. in mathematics an italic "𝐴" can have a different meaning from a roman letter "A". Unicode originally included a limited set of such letter forms in its Letterlike Symbols block before completing the set of Latin and Greek letter forms in this block beginning in version 3.1.
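Because the styled Latin letters sit at fixed offsets inside U+1D400–U+1D7FF, simple codepoint arithmetic maps plain capitals onto, say, the mathematical bold capitals. A sketch under that assumption; note that a few styled letters (for example italic h, which is U+210E PLANCK CONSTANT in the Letterlike Symbols block) break the pattern, so robust code should consult Unicode data rather than raw offsets.

```python
# U+1D400 is MATHEMATICAL BOLD CAPITAL A; plain "A" is U+0041.
BOLD_OFFSET = 0x1D400 - ord("A")

def to_math_bold_caps(s):
    """Map ASCII capitals to their MATHEMATICAL BOLD counterparts,
    leaving every other character untouched."""
    return "".join(chr(ord(c) + BOLD_OFFSET) if "A" <= c <= "Z" else c
                   for c in s)

print(to_math_bold_caps("ABC"))  # prints the bold-capital forms of A, B, C
```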
Unicode expressly recommends that these characters not be used in general text as a substitute for presentational markup; the letters are specifically designed to be semantically different from each other. Unicode does include a set of normal serif letters in the set. Still they have found some usage on social media, for example by people who want a stylized user name, and in email spam, in an attempt to bypass filters.
All these letter shapes may be manipulated with MathML's attribute mathvariant.
The introduction date of some of the more commonly used symbols can be found in the Table of mathematical symbols by introduction date.
Tables of styled letters and digits
These tables show all styled forms of Latin and Greek letters, symbols and digits in the Unicode Standard, with the normal unstyled forms of these characters shown with a cyan background (the basic unstyled letters may be serif or sans-serif depen |
https://en.wikipedia.org/wiki/Reductive%20dechlorination | In organochlorine chemistry, reductive dechlorination describes any chemical reaction which cleaves the covalent bond between carbon and chlorine via reductants, to release chloride ions. Many modalities have been implemented, depending on the application. Reductive dechlorination is often applied to remediation of chlorinated pesticides or dry cleaning solvents. It is also used occasionally in the synthesis of organic compounds, e.g. as pharmaceuticals.
Chemical
Dechlorination is a well-researched reaction in organic synthesis, although it is not often used. Usually stoichiometric amounts of dechlorinating agent are required. In one classic application, the Ullmann reaction, chloroarenes are coupled to biphenyls. For example, the activated substrate 2-chloronitrobenzene is converted into 2,2'-dinitrobiphenyl with a copper-bronze alloy.
Zerovalent iron effects similar reactions. Organophosphorus(III) compounds effect gentle dechlorinations. The products are alkenes and phosphorus(V).
Alkaline earth metals and zinc are used for more difficult dechlorinations. The side product is zinc chloride.
Biological
Vicinal reduction involves the removal of two halogen atoms that are adjacent on the same alkane or alkene, leading to the formation of an additional carbon-carbon bond.
Biological reductive dechlorination is often effected by certain species of bacteria. Sometimes the bacterial species are highly specialized for organochlorine respiration and even a particular electron donor, as in the case of Dehalococcoides and Dehalobacter. In other examples, such as Anaeromyxobacter, bacteria have been isolated that are capable of using a variety of electron donors and acceptors, with a subset of possible electron acceptors being organochlorines. These reactions depend on vitamin B12, a molecule that some microbes scavenge very aggressively.
Bioremediation using reductive dechlorination
Reductive dechlorination of chlorinated organic molecule |
https://en.wikipedia.org/wiki/Iptables | iptables is a user-space utility program that allows a system administrator to configure the IP packet filter rules of the Linux kernel firewall, implemented as different Netfilter modules. The filters are organized in different tables, which contain chains of rules for how to treat network traffic packets. Different kernel modules and programs are currently used for different protocols; iptables applies to IPv4, ip6tables to IPv6, arptables to ARP, and ebtables to Ethernet frames.
iptables requires elevated privileges to operate and must be executed by user root, otherwise it fails to function. On most Linux systems, iptables is installed as /usr/sbin/iptables and documented in its man pages, which can be opened using man iptables when installed. It may also be found in /sbin/iptables, but since iptables is more like a service rather than an "essential binary", the preferred location remains /usr/sbin/iptables.
The term iptables is also commonly used to inclusively refer to the kernel-level components. x_tables is the name of the kernel module carrying the shared code portion used by all four modules that also provides the API used for extensions; subsequently, Xtables is more or less used to refer to the entire firewall (v4, v6, arp, and eb) architecture.
iptables superseded ipchains; the successor of iptables is nftables, which was released on 19 January 2014 and was merged into the Linux kernel mainline in kernel version 3.13.
Overview
iptables allows the system administrator to define tables containing chains of rules for the treatment of packets. Each table is associated with a different kind of packet processing. Packets are processed by sequentially traversing the rules in chains. A rule in a chain can cause a goto or jump to another chain, and this can be repeated to whatever level of nesting is desired. (A jump is like a “call”, i.e. the point that was jumped from is remembered.) Every network packet arriving at or leaving from the computer traverses at least one chain.
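The traversal semantics just described (ordered rules, jumps that behave like calls, and a policy as the last resort) can be modelled in a few lines. This sketches the logic only; it is not how Netfilter is implemented, and the chain and rule names are invented for the example.

```python
def traverse(chains, chain, packet, policy="ACCEPT"):
    """Walk one chain's (match, target) rules in order. A target naming
    another chain is a jump: a call whose return point is remembered by
    this recursion. RETURN resumes in the caller; in a built-in chain it
    (like falling off the end) lets the chain policy decide."""
    for match, target in chains[chain]:
        if not match(packet):
            continue
        if target in ("ACCEPT", "DROP"):
            return target                      # terminal verdict: stop here
        if target == "RETURN":
            break                              # back to the calling chain
        verdict = traverse(chains, target, packet, policy)   # jump = call
        if verdict is not None:
            return verdict
    return policy if chain in ("INPUT", "OUTPUT", "FORWARD") else None

chains = {
    "INPUT": [(lambda p: p["proto"] == "tcp", "tcp_rules")],
    "tcp_rules": [(lambda p: p["dport"] == 22, "ACCEPT"),
                  (lambda p: True, "DROP")],
}
print(traverse(chains, "INPUT", {"proto": "tcp", "dport": 22}))  # ACCEPT
print(traverse(chains, "INPUT", {"proto": "tcp", "dport": 80}))  # DROP
print(traverse(chains, "INPUT", {"proto": "udp", "dport": 53}))  # ACCEPT (policy)
```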
The origin of the p |
https://en.wikipedia.org/wiki/Web%20Proxy%20Auto-Discovery%20Protocol | The Web Proxy Auto-Discovery (WPAD) Protocol is a method used by clients to locate the URL of a configuration file using DHCP and/or DNS discovery methods. Once detection and download of the configuration file is complete, it can be executed to determine the proxy for a specified URL.
History
The WPAD protocol only outlines the mechanism for discovering the location of this file, but the most commonly deployed configuration file format is the proxy auto-config format originally designed by Netscape in 1996 for Netscape Navigator 2.0.
The WPAD protocol was drafted by a consortium of companies including Inktomi Corporation, Microsoft Corporation, RealNetworks, Inc., and Sun Microsystems, Inc. (now Oracle Corp.). WPAD is documented in an INTERNET-DRAFT which expired in December 1999. However, WPAD is still supported by all major browsers. WPAD was first included with Internet Explorer 5.0.
Context
In order for all browsers in an organization to be supplied the same proxy policy, without configuring each browser manually, both of the technologies below are required:
Proxy auto-config (PAC) standard: create and publish one central proxy configuration file. Details are discussed in a separate article.
Web Proxy Auto-Discovery Protocol (WPAD) standard: ensure that an organization's browsers will find this file without manual configuration. This is the topic of this article.
The WPAD standard defines two alternative methods the system administrator can use to publish the location of the proxy configuration file, using the Dynamic Host Configuration Protocol (DHCP) or the Domain Name System (DNS):
Before fetching its first page, a web browser implementing this method sends a DHCPINFORM query to the local DHCP server, and uses the URL from the WPAD option in the server's reply. If the DHCP server does not provide the desired information, DNS is used. If, for example, the network name of the user's computer is pc.department.branch.example.com, the browser will try the fol |
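The excerpt cuts off before listing the candidate names, but the DNS walk it describes is mechanical: drop the host's own label, then query wpad under successively shorter parent domains. A sketch of that devolution; the stopping rule (never descending to the bare top-level domain, a well-known WPAD hazard) is an assumption here, since real clients apply their own site-boundary logic.

```python
def wpad_candidates(fqdn, min_labels=2):
    """Candidate WPAD URLs for a client whose FQDN is `fqdn`, stopping
    before the bare TLD so the proxy configuration cannot be served by
    whoever registers wpad.<tld>."""
    labels = fqdn.split(".")[1:]          # drop the host's own label
    urls = []
    while len(labels) >= min_labels:
        urls.append("http://wpad." + ".".join(labels) + "/wpad.dat")
        labels = labels[1:]               # devolve one level
    return urls

for url in wpad_candidates("pc.department.branch.example.com"):
    print(url)
# http://wpad.department.branch.example.com/wpad.dat
# http://wpad.branch.example.com/wpad.dat
# http://wpad.example.com/wpad.dat
```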
https://en.wikipedia.org/wiki/Primordial%20sandwich | The concept of the primordial sandwich was proposed by the chemist Günter Wächtershäuser to describe the possible origins of the first cell membranes, and, therefore, the first cell.
According to the two main models of abiogenesis, RNA world and iron-sulfur world, prebiotic processes existed before the development of the cell membrane. The difficulty with this idea, however, is that it is almost impossible to create a complex molecule such as RNA (or even its molecular precursor, pre-RNA) directly from simple organic molecules dissolved in a global ocean (Joyce, 1991), because without some mechanism to concentrate these organic molecules, they would be too dilute to generate the necessary chemical reactions to transform them from simple organic molecules into genuine prebiotic molecules.
To address this problem, Wächtershäuser proposed that concentration might occur by concentration upon ("adsorption to") the surfaces of minerals. With the accumulation of enough amphipathic molecules (such as phospholipids), a bilayer will self-organize, and any molecules caught inside will become the contents of a liposome, and would be concentrated enough to allow chemical reactions to transform organic molecules into prebiotic molecules.
Although developed for his own iron-sulfur world model, the idea of the primordial sandwich has also been adopted by some adherents of the RNA world model.
See also
Primordial sea
Primordial soup
Notes
External links
Minerals and the Origin of Life
Astrobiology, Volume 2, Number 4, "The First Cell Membranes"
Origin of life |
https://en.wikipedia.org/wiki/Zigzag | A zigzag is a pattern made up of small corners at variable angles, though constant within the zigzag, tracing a path between two parallel lines; it can be described as both jagged and fairly regular.
In geometry, this pattern is described as a skew apeirogon. From the point of view of symmetry, a regular zigzag can be generated from a simple motif like a line segment by repeated application of a glide reflection.
Although the origin of the word is unclear, its first printed appearances were in French-language books and ephemera of the late 17th century.
Examples of zigzags
The trace of a triangle wave or a sawtooth wave is a zigzag.
Pinking shears are designed to cut cloth or paper with a zigzag edge, to lessen fraying.
In sewing, a zigzag stitch is a machine stitch in a zigzag pattern.
The zigzag arch is an architectural embellishment used in Islamic, Byzantine, Norman and Romanesque architecture.
In seismology, earthquakes are recorded as zigzag lines by seismographs.
See also
Serpentine shape
Infinite skew polygon
References
Bibliography
Patterns |
https://en.wikipedia.org/wiki/Wedderburn%E2%80%93Etherington%20number | The Wedderburn–Etherington numbers are an integer sequence named for Ivor Malcolm Haddon Etherington and Joseph Wedderburn that can be used to count certain kinds of binary trees. The first few numbers in the sequence are
0, 1, 1, 1, 2, 3, 6, 11, 23, 46, 98, 207, 451, 983, 2179, 4850, 10905, 24631, 56011, ...
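These values can be reproduced from the standard recurrence for the Wedderburn–Etherington numbers, which this excerpt does not quote: a(0) = 0, a(1) = 1, a(2k-1) is the sum over i < k of a(i)a(2k-1-i), and a(2k) is a(k)(a(k)+1)/2 plus the sum over i < k of a(i)a(2k-i). A short memoized sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def a(n):
    """Wedderburn-Etherington numbers via the standard recurrence."""
    if n <= 1:
        return n                              # a(0) = 0, a(1) = 1
    if n % 2:                                 # odd index: n = 2k - 1
        k = (n + 1) // 2
        return sum(a(i) * a(n - i) for i in range(1, k))
    k = n // 2                                # even index: n = 2k
    return a(k) * (a(k) + 1) // 2 + sum(a(i) * a(n - i) for i in range(1, k))

print([a(n) for n in range(13)])
# [0, 1, 1, 1, 2, 3, 6, 11, 23, 46, 98, 207, 451]
```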
Combinatorial interpretation
These numbers can be used to solve several problems in combinatorial enumeration. The nth number in the sequence (starting with the number 0 for n = 0) counts:
The number of unordered rooted trees with n leaves in which all nodes including the root have either zero or exactly two children. These trees have been called Otter trees, after the work of Richard Otter on their combinatorial enumeration. They can also be interpreted as unlabeled and unranked dendrograms with the given number of leaves.
The number of unordered rooted trees with n nodes in which the root has degree zero or one and all other nodes have at most two children. Trees in which the root has at most one child are called planted trees, and the additional condition that the other nodes have at most two children defines the weakly binary trees. In chemical graph theory, these trees can be interpreted as isomers of polyenes with a designated leaf atom chosen as the root.
The number of different ways of organizing a single-elimination tournament for n players (with the player names left blank, prior to seeding players into the tournament). The pairings of such a tournament may be described by an Otter tree.
The number of different results that could be generated by different ways of grouping the expression for a binary multiplication operation that is assumed to be commutative but neither associative nor idempotent. For instance can be grouped into binary multiplications in three ways, as , , or . This was the interpretation originally considered by both Etherington and Wedderburn. An Otter tree can be interpreted as a grouped expression in which each leaf node cor |
https://en.wikipedia.org/wiki/Highly%20totient%20number | A highly totient number $k$ is an integer that has more solutions to the equation $\phi(x) = k$, where $\phi$ is Euler's totient function, than any integer below it. The first few highly totient numbers are
1, 2, 4, 8, 12, 24, 48, 72, 144, 240, 432, 480, 576, 720, 1152, 1440, with 2, 3, 4, 5, 6, 10, 11, 17, 21, 31, 34, 37, 38, 49, 54, and 72 totient solutions respectively. The sequence of highly totient numbers is a subset of the sequence of the smallest number $k$ with exactly $n$ solutions to $\phi(x) = k$.
The totient of a number $x$, with prime factorization $x = \prod_i p_i^{e_i}$, is the product:

$$\phi(x) = \prod_i p_i^{e_i - 1}(p_i - 1).$$
Thus, a highly totient number is a number that has more ways of being expressed as a product of this form than does any smaller number.
The concept is somewhat analogous to that of highly composite numbers, and in the same way that 1 is the only odd highly composite number, it is also the only odd highly totient number (indeed, the only odd number to not be a nontotient). And just as there are infinitely many highly composite numbers, there are also infinitely many highly totient numbers, though the highly totient numbers get tougher to find the higher one goes, since calculating the totient function involves factorization into primes, something that becomes extremely difficult as the numbers get larger.
Example
There are five numbers (15, 16, 20, 24, and 30) whose totient is 8. No positive integer smaller than 8 has as many such solutions, so 8 is highly totient.
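The example generalizes directly: sieve φ(x) up to a generous bound, count the solutions of φ(x) = k for each k, and report the record-setters. A sketch; the search bound of 1000 is an assumption that comfortably covers all solutions for k ≤ 50.

```python
def phi_sieve(n):
    """Euler's totient for 0..n via the standard in-place sieve."""
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:                  # p is prime (still untouched)
            for m in range(p, n + 1, p):
                phi[m] -= phi[m] // p
    return phi

limit, search = 50, 1000                 # assumed ample search bound for k <= 50
phi = phi_sieve(search)
counts = [0] * (limit + 1)
for x in range(1, search + 1):
    if phi[x] <= limit:
        counts[phi[x]] += 1

best = 0
for k in range(1, limit + 1):            # record-setters are highly totient
    if counts[k] > best:
        best = counts[k]
        print(k, counts[k])              # 1:2, 2:3, 4:4, 8:5, 12:6, 24:10, 48:11
```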
Table
See also
Highly cototient number
References
Integer sequences |
https://en.wikipedia.org/wiki/British%20National%20Vegetation%20Classification |
The British National Vegetation Classification or NVC is a system of classifying natural habitat types in Great Britain according to the vegetation they contain.
A large scientific meeting of ecologists, botanists, and other related professionals in the United Kingdom resulted in the publication of a compendium of five books: British Plant Communities, edited by John S. Rodwell, which detail the incidence of plant species in twelve major habitat types in the British natural environment. They are the first systematic and comprehensive account of the vegetation types of the country. They cover all natural, semi-natural and major artificial habitats in Great Britain (not Northern Ireland) and represent fifteen years of research by leading plant ecologists.
From the data collated from the books, commercial software products have been developed to help to classify vegetation identified into one of the many habitat types found in Great Britain – these include MATCH, TABLEFIT and MAVIS.
Terminology used in connection with the British National Vegetation Classification
The following is a list of terms used in connection with the British National Vegetation Classification, together with their meanings:
Communities, subcommunities and variants
A community is the fundamental unit of categorisation for vegetation.
A subcommunity is a distinct recognisable subdivision of a community.
A variant is a further subdivision of a subcommunity.
Constant species
A constant species in a community is a species that is always present in any given stand of vegetation belonging to that community.
For a list of the constant species, and the NVC communities in which they are present, see List of constant species in the British National Vegetation Classification.
Rare species
A rare species is a species which is associated with a particular community and is rare nationally.
The sources used by the authors of British Plant Communities for assessing rarity were as follows.
a) fo |
https://en.wikipedia.org/wiki/Elementary%20equivalence | In model theory, a branch of mathematical logic, two structures M and N of the same signature σ are called elementarily equivalent if they satisfy the same first-order σ-sentences.
If N is a substructure of M, one often needs a stronger condition. In this case N is called an elementary substructure of M if every first-order σ-formula φ(a1, …, an) with parameters a1, …, an from N is true in N if and only if it is true in M.
If N is an elementary substructure of M, then M is called an elementary extension of N. An embedding h: N → M is called an elementary embedding of N into M if h(N) is an elementary substructure of M.
A substructure N of M is elementary if and only if it passes the Tarski–Vaught test: every first-order formula φ(x, b1, …, bn) with parameters in N that has a solution in M also has a solution in N when evaluated in M. One can prove that two structures are elementarily equivalent with the Ehrenfeucht–Fraïssé games.
Elementary embeddings are used in the study of large cardinals, including rank-into-rank.
Elementarily equivalent structures
Two structures M and N of the same signature σ are elementarily equivalent if every first-order sentence (formula without free variables) over σ is true in M if and only if it is true in N, i.e. if M and N have the same complete first-order theory.
If M and N are elementarily equivalent, one writes M ≡ N.
A first-order theory is complete if and only if any two of its models are elementarily equivalent.
For example, consider the language with one binary relation symbol '<'. The model R of real numbers with its usual order and the model Q of rational numbers with its usual order are elementarily equivalent, since they both interpret '<' as an unbounded dense linear ordering. This is sufficient to ensure elementary equivalence, because the theory of unbounded dense linear orderings is complete, as can be shown by the Łoś–Vaught test.
More generally, any first-order theory with an infinite model has non-isomorphi |
https://en.wikipedia.org/wiki/Single-photon%20avalanche%20diode | A single-photon avalanche diode (SPAD) is a solid-state photodetector within the same family as photodiodes and avalanche photodiodes (APDs), while also being fundamentally linked with basic diode behaviours. As with photodiodes and APDs, a SPAD is based around a semiconductor p-n junction that can be illuminated with ionizing radiation such as gamma rays, x-rays, beta and alpha particles, along with a wide portion of the electromagnetic spectrum from ultraviolet (UV) through the visible wavelengths and into the infrared (IR).
In a photodiode, with a low reverse bias voltage, the leakage current changes linearly with absorption of photons, i.e. the liberation of current carriers (electrons and/or holes) due to the internal photoelectric effect. However, in a SPAD, the reverse bias is so high that a phenomenon called impact ionisation occurs, which is able to cause an avalanche current to develop. Put simply, a photo-generated carrier is accelerated by the electric field in the device to a kinetic energy which is enough to overcome the ionisation energy of the bulk material, knocking electrons out of an atom. A large avalanche of current carriers grows exponentially and can be triggered from as few as a single photon-initiated carrier. A SPAD is able to detect single photons, providing short-duration trigger pulses that can be counted. However, they can also be used to obtain the time of arrival of the incident photon due to the high speed at which the avalanche builds up and the device's low timing jitter.
The fundamental difference between SPADs and APDs or photodiodes, is that a SPAD is biased well above its reverse-bias breakdown voltage and has a structure that allows operation without damage or undue noise. While an APD is able to act as a linear amplifier, the level of impact ionisation and avalanche within the SPAD has prompted researchers to liken the device to a Geiger-counter in which output pulses indicate a trigger or "click" event. The diode bias region that gives r |
https://en.wikipedia.org/wiki/CCMP%20%28cryptography%29 | Counter Mode Cipher Block Chaining Message Authentication Code Protocol (Counter Mode CBC-MAC Protocol) or CCM mode Protocol (CCMP) is an encryption protocol designed for Wireless LAN products that implements the standards of the IEEE 802.11i amendment to the original IEEE 802.11 standard. CCMP is an enhanced data cryptographic encapsulation mechanism designed for data confidentiality and based upon the Counter Mode with CBC-MAC (CCM mode) of the Advanced Encryption Standard (AES) standard. It was created to address the vulnerabilities presented by Wired Equivalent Privacy (WEP), a dated, insecure protocol.
Technical details
CCMP uses CCM, which combines CTR mode for data confidentiality with cipher block chaining message authentication code (CBC-MAC) for authentication and integrity. CCM protects the integrity of both the MPDU data field and selected portions of the IEEE 802.11 MPDU header. CCMP is based on AES processing and uses a 128-bit key and a 128-bit block size. CCMP uses CCM with the following two parameters:
M = 8; indicating that the MIC is 8 octets (eight bytes).
L = 2; indicating that the Length field is 2 octets.
A CCMP Medium Access Control Protocol Data Unit (MPDU) comprises five sections. The first is the MAC header which contains the destination and source address of the data packet. The second is the CCMP header which is composed of 8 octets and consists of the packet number (PN), the Ext IV, and the key ID. The packet number is a 48-bit number stored across 6 octets. The PN codes are the first two and last four octets of the CCMP header and are incremented for each subsequent packet. Between the PN codes are a reserved octet and a Key ID octet. The Key ID octet contains the Ext IV (bit 5), Key ID (bits 6–7), and a reserved subfield (bits 0–4). CCMP uses these values to encrypt the data unit and the MIC. The third section is the data unit which is the data being sent in the packet. The fourth is the message integrity code (MIC) which protects t |
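The header layout just described can be packed mechanically: PN0 and PN1 first, a reserved octet, the key-ID octet carrying Ext IV (bit 5) and Key ID (bits 6-7), then PN2 through PN5. A sketch of the framing only, assuming PN0 is the least significant octet; encryption and the MIC are omitted entirely.

```python
def ccmp_header(pn, key_id=0):
    """Pack the 8-octet CCMP header described above for a 48-bit packet
    number `pn` and a 2-bit `key_id`. Framing sketch only."""
    assert 0 <= pn < 2**48 and 0 <= key_id < 4
    p = pn.to_bytes(6, "little")            # p[0] = PN0 (least significant octet)
    key_octet = (1 << 5) | (key_id << 6)    # Ext IV bit set, Key ID in bits 6-7
    return bytes([p[0], p[1], 0x00, key_octet]) + p[2:]

print(ccmp_header(0x0000AABBCCDD).hex())    # ddcc0020bbaa0000
```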
https://en.wikipedia.org/wiki/PC%20Weasel%202000 | PC Weasel 2000 was a line of graphics cards designed by Middle Digital Incorporated (Herb Peyerl and Jonathan Levine) which output to a serial port instead of a monitor. This allows servers using PC hardware with conventional BIOSes or operating systems lacking serial capability to be administered remotely.
The product was introduced in 1999 and began shipping January 20, 2000.
PCI and ISA models are available. The PC Weasel is also connected to a PS/2-compatible keyboard port and effectively emulates a keyboard by converting characters obtained from the serial port to keyboard scancodes.
The card may also be connected to the reset pins of the motherboard and reboot the machine on command.
The PC Weasel is an open-source product. Every purchaser of the board is granted a license for the card's onboard microcontroller. The microcode, stored in flash memory, is modifiable using a gcc-based toolchain.
See also
Lights out management (LOM)
Network Console on Acid (NCA)
coreboot
External links
PC Weasel 2000 web site at archive.org
release announcement at archive.org
Graphics cards
System administration
Out-of-band management |
https://en.wikipedia.org/wiki/List%20of%20sensors | This is a list of sensors sorted by sensor type.
Acoustic, sound, vibration
Geophone
Hydrophone
Microphone
Pickup
Seismometer
Sound locator
Automotive
Air flow meter
AFR sensor
Air–fuel ratio meter
Blind spot monitor
Crankshaft position sensor (CKP)
Curb feeler
Defect detector
Engine coolant temperature sensor
Hall effect sensor
Wheel speed sensor
Airbag sensors
Automatic transmission speed sensor
Brake fluid pressure sensor
Camshaft position sensor (CMP)
Cylinder Head Temperature gauge
Engine crankcase pressure sensor
Exhaust gas temperature sensor
Fuel level sensor
Fuel pressure sensor
Knock sensor
Light sensor
MAP sensor
Mass airflow sensor
Oil level sensor
Oil pressure sensor
Omniview technology
Oxygen sensor (O2)
Parking sensor
Radar gun
Radar sensor
Speed sensor
Throttle position sensor
Tire pressure sensor
Torque sensor
Transmission fluid temperature sensor
Turbine speed sensor
Variable reluctance sensor
Vehicle speed sensor
Water-in-fuel sensor
Wheel speed sensor
ABS sensors
Chemical
Breathalyzer
Carbon dioxide sensor
Carbon monoxide detector
Catalytic bead sensor
Chemical field-effect transistor
Chemiresistor
Electrochemical gas sensor
Electronic nose
Electrolyte–insulator–semiconductor sensor
Energy-dispersive X-ray spectroscopy
Fluorescent chloride sensors
Holographic sensor
Hydrocarbon dew point analyzer
Hydrogen sensor
Hydrogen sulfide sensor
Infrared point sensor
Ion-selective electrode
ISFET
Nondispersive infrared sensor
Microwave chemistry sensor
Morphix Chameleon
Nitrogen oxide sensor
Nondispersive infrared sensor
Olfactometer
Optode
Oxygen sensor
Ozone monitor
Pellistor
pH glass electrode
Potentiometric sensor
Redox electrode
Smoke detector
Zinc oxide nanorod sensor
Electric current, electric potential, magnetic, radio
Current sensor
Daly detector
Electroscope
Electron multiplier
Faraday cup
Galvanometer
Hall effect sensor
Hall probe
Magnetic anomaly detector
Magnetometer
Magnetoresistance
MEMS magnetic field sensor
Metal detector
Planar |
https://en.wikipedia.org/wiki/USDA%20soil%20taxonomy | USDA soil taxonomy (ST) developed by the United States Department of Agriculture and the National Cooperative Soil Survey provides an elaborate classification of soil types according to several parameters (most commonly their properties) and in several levels: Order, Suborder, Great Group, Subgroup, Family, and Series. The classification was originally developed by Guy Donald Smith, former director of the U.S. Department of Agriculture's soil survey investigations.
Discussion
A taxonomy is an arrangement in a systematic manner; the USDA soil taxonomy has six levels of classification. They are, from most general to specific: order, suborder, great group, subgroup, family and series. Soil properties that can be measured quantitatively are used in this classification system – they include: depth, moisture, temperature, texture, structure, cation exchange capacity, base saturation, clay mineralogy, organic matter content and salt content. There are 12 soil orders (the top hierarchical level) in soil taxonomy. The names of the orders end with the suffix -sol. The criteria for the different soil orders include properties that reflect major differences in the genesis of soils. The orders are:
Alfisol – soils with aluminium and iron. They have horizons of clay accumulation, and form where there is enough moisture and warmth for at least three months of plant growth. They constitute 10% of soils worldwide.
Andisol – volcanic ash soils. They are young soils. They cover 1% of the world's ice-free surface.
Aridisol – dry soils forming under desert conditions which have fewer than 90 consecutive days of moisture during the growing season and are nonleached. They include nearly 12% of soils on Earth. Soil formation is slow, and accumulated organic matter is scarce. They may have subsurface zones of caliche or duripan. Many aridisols have well-developed Bt horizons showing clay movement from past periods of greater moisture.
Entisol – recently formed soils that lack well-d |
https://en.wikipedia.org/wiki/Tong%20Dizhou | Tong Dizhou (; May 28, 1902 – March 30, 1979) was a Chinese embryologist known for his contributions to the field of cloning. He was a vice president of Chinese Academy of Science.
Biography
Born in Yinxian, Zhejiang province, Tong graduated from Fudan University in 1924 with a degree in biology, and received a PhD in zoology in 1930 from the Free University of Brussels (ULB).
In 1963, Tong inserted DNA of a male carp into the egg of a female carp and became the first to successfully clone a fish. He is regarded as "the father of China's clone".
Tong was also an academician at the Chinese Academy of Sciences and the first director of its Institute of Oceanology from its founding in 1950 until 1978.
Tong died on 30 March 1979 at Beijing Hospital in Beijing.
References
1902 births
1979 deaths
20th-century biologists
20th-century Chinese scientists
Biologists from Zhejiang
Cloning
Educators from Ningbo
Free University of Brussels (1834–1969) alumni
Fudan University alumni
Academic staff of Fudan University
Members of Academia Sinica
Members of the Chinese Academy of Sciences
Academic staff of the National Central University
Scientists from Ningbo
Academic staff of Tongji University
Vice Chairpersons of the National Committee of the Chinese People's Political Consultative Conference |
https://en.wikipedia.org/wiki/IBM%20Future%20Systems%20project | The Future Systems project (FS) was a research and development project undertaken in IBM in the early 1970s, aiming to develop a revolutionary line of computer products, including new software models which would simplify software development by exploiting modern powerful hardware.
History
Background
Until the end of the 1960s, IBM had been making most of its profit on hardware, bundling support software and services along with its systems to make them more attractive. Only hardware carried a price tag, but those prices included an allocation for software and services.
Other manufacturers had started to market compatible hardware, mainly peripherals such as tape and disk drives, at a price significantly lower than IBM, thus shrinking the possible base for recovering the cost of software and services. This changed more dramatically when Gene Amdahl left IBM and set up a company making System/370 compatible systems that were both faster and less expensive than IBM's offerings. In early 1971, an internal IBM task force, Project Counterpoint, concluded that the compatible mainframe business was indeed a viable business and that the basis for charging for software and services as part of the hardware price would quickly vanish.
Another strategic issue was that the cost of computing was steadily going down while the costs of programming and operations, being made of personnel costs, were steadily going up. Therefore, the part of the customer's IT budget available for hardware vendors would be significantly reduced in the coming years, and with it the base for IBM revenue. It was imperative that IBM, by addressing the cost of application development and operations in its future products, would at the same time reduce the total cost of IT to the customers and capture a larger portion of that cost.
At the same time, IBM was under legal attack for its dominant position and its policy of bundling software and services in the hardware price, so that any attempt at "re-bundl |
https://en.wikipedia.org/wiki/Pseudo-differential%20operator | In mathematical analysis a pseudo-differential operator is an extension of the concept of differential operator. Pseudo-differential operators are used extensively in the theory of partial differential equations and quantum field theory, e.g. in mathematical models that include ultrametric pseudo-differential equations in a non-Archimedean space.
History
The study of pseudo-differential operators began in the mid 1960s with the work of Kohn, Nirenberg, Hörmander, Unterberger and Bokobza.
They played an influential role in the second proof of the Atiyah–Singer index theorem via K-theory. Atiyah and Singer thanked Hörmander for assistance with understanding the theory of pseudo-differential operators.
Motivation
Linear differential operators with constant coefficients
Consider a linear differential operator with constant coefficients,

$$P(D) = \sum_\alpha a_\alpha \, D^\alpha,$$

which acts on smooth functions $u$ with compact support in $\mathbf{R}^n$.
This operator can be written as a composition of a Fourier transform, a simple multiplication by the polynomial function (called the symbol)

$$P(\xi) = \sum_\alpha a_\alpha \, \xi^\alpha,$$

and an inverse Fourier transform, in the form:

$$P(D) u(x) = \frac{1}{(2\pi)^n} \int_{\mathbf{R}^n} \int_{\mathbf{R}^n} e^{i(x - y)\cdot\xi} P(\xi) u(y) \, dy \, d\xi. \qquad (1)$$
Here, $\alpha = (\alpha_1, \ldots, \alpha_n)$ is a multi-index, $a_\alpha$ are complex numbers, and

$$D^\alpha = (-i \partial_1)^{\alpha_1} \cdots (-i \partial_n)^{\alpha_n}$$

is an iterated partial derivative, where $\partial_j$ means differentiation with respect to the $j$-th variable. We introduce the constants $-i$ to facilitate the calculation of Fourier transforms.
Derivation of formula (1)
The Fourier transform of a smooth function u, compactly supported in Rn, is

\hat{u}(\xi) = \int e^{-i y \cdot \xi} \, u(y) \, dy,

and Fourier's inversion formula gives

u(x) = \frac{1}{(2\pi)^n} \int e^{i x \cdot \xi} \, \hat{u}(\xi) \, d\xi = \frac{1}{(2\pi)^n} \iint e^{i (x - y) \cdot \xi} \, u(y) \, dy \, d\xi.

By applying P(D) to this representation of u and using

P(D) \, e^{i (x - y) \cdot \xi} = e^{i (x - y) \cdot \xi} \, P(\xi),

one obtains formula (1).
Representation of solutions to partial differential equations
To solve the partial differential equation

P(D) \, u = f,

we (formally) apply the Fourier transform on both sides and obtain the algebraic equation

P(\xi) \, \hat{u}(\xi) = \hat{f}(\xi).

If the symbol P(ξ) is never zero when ξ ∈ Rn, then it is possible to divide by P(ξ):

\hat{u}(\xi) = \frac{1}{P(\xi)} \, \hat{f}(\xi).

By Fourier's inversion formula, a solution is

u(x) = \frac{1}{(2\pi)^n} \int e^{i x \cdot \xi} \, \frac{1}{P(\xi)} \, \hat{f}(\xi) \, d\xi.
Here it is assumed that:
P(D) is a linear differential operator with constant coefficients,
its symbol P(ξ) |
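To make the division-by-symbol recipe concrete, here is a minimal numerical sketch (an illustration of mine, not part of the article): it solves (1 − d²/dx²)u = f on a periodic grid by dividing the Fourier transform of f by the symbol P(ξ) = 1 + ξ², which is never zero; the grid size and the test function are arbitrary choices.

```python
# Minimal sketch: solve (1 - d^2/dx^2) u = f on a periodic grid by dividing
# by the symbol P(xi) = 1 + xi^2 in Fourier space. Since P never vanishes,
# the division is always well defined. Grid size and f are arbitrary choices.
import numpy as np

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.exp(np.cos(x))                                # arbitrary smooth periodic f

xi = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)  # angular frequencies (integers here)
P = 1 + xi**2                                        # symbol of P(D) = 1 - d^2/dx^2

u = np.fft.ifft(np.fft.fft(f) / P).real              # u = F^{-1}[ f_hat / P ]

# Check: applying P(D) in Fourier space recovers f up to round-off.
recovered = np.fft.ifft(np.fft.fft(u) * P).real
assert np.allclose(recovered, f)
```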
https://en.wikipedia.org/wiki/Addition%20theorem | In mathematics, an addition theorem is a formula such as that for the exponential function:
e^{x+y} = e^x \cdot e^y,
that expresses, for a particular function f, f(x + y) in terms of f(x) and f(y). Slightly more generally, as is the case with the trigonometric functions sin and cos, several functions may be involved; this is more apparent than real, in that case, since cos is an algebraic function of sin (in other words, we usually take both functions as defined on the unit circle).
The scope of the idea of an addition theorem was fully explored in the nineteenth century, prompted by the discovery of the addition theorem for elliptic functions. To "classify" addition theorems it is necessary to put some restriction on the type of function G admitted, such that
F(x + y) = G(F(x), F(y)).
In this identity one can assume that F and G are vector-valued (have several components). An algebraic addition theorem is one in which G can be taken to be a vector of polynomials, in some set of variables. The conclusion of the mathematicians of the time was that the theory of abelian functions essentially exhausted the interesting possibilities: considered as a functional equation to be solved with polynomials, or indeed rational functions or algebraic functions, there were no further types of solution.
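As a concrete check (an illustration of mine, not part of the original text), the vector-valued function F = (sin, cos) admits an algebraic addition theorem in which G is the pair of polynomials given by the angle-sum formulas; the snippet below verifies the identity symbolically with sympy.

```python
# Verify that F = (sin, cos) satisfies F(x + y) = G(F(x), F(y)) with G a
# pair of polynomials: G((s1, c1), (s2, c2)) = (s1*c2 + c1*s2, c1*c2 - s1*s2).
import sympy as sp

x, y = sp.symbols("x y")
s1, c1, s2, c2 = sp.sin(x), sp.cos(x), sp.sin(y), sp.cos(y)

assert sp.simplify(sp.sin(x + y) - (s1 * c2 + c1 * s2)) == 0
assert sp.simplify(sp.cos(x + y) - (c1 * c2 - s1 * s2)) == 0
print("F = (sin, cos) has an algebraic addition theorem")
```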
In more contemporary language this appears as part of the theory of algebraic groups, dealing with commutative groups. The connected, projective variety examples are indeed exhausted by abelian functions, as is shown by a number of results characterising an abelian variety by rather weak conditions on its group law. The so-called quasi-abelian functions are all known to come from extensions of abelian varieties by commutative affine group varieties. Therefore, the old conclusions about the scope of global algebraic addition theorems can be said to hold. A more modern aspect is the theory of formal groups.
See also
Timeline of abelian varieties
Addition theorem for spherical harmonic |
https://en.wikipedia.org/wiki/Security%20Assertion%20Markup%20Language | Security Assertion Markup Language (SAML, pronounced SAM-el) is an open standard for exchanging authentication and authorization data between parties, in particular, between an identity provider and a service provider. SAML is an XML-based markup language for security assertions (statements that service providers use to make access-control decisions). SAML is also:
A set of XML-based protocol messages
A set of protocol message bindings
A set of profiles (utilizing all of the above)
An important use case that SAML addresses is web-browser single sign-on (SSO). Single sign-on is relatively easy to accomplish within a security domain (using cookies, for example) but extending SSO across security domains is more difficult and resulted in the proliferation of non-interoperable proprietary technologies. The SAML Web Browser SSO profile was specified and standardized to promote interoperability.
Overview
The SAML specification defines three roles: the principal (typically a human user), the identity provider (IdP) and the service provider (SP). In the primary use case addressed by SAML, the principal requests a service from the service provider. The service provider requests and obtains an authentication assertion from the identity provider. On the basis of this assertion, the service provider can make an access control decision, that is, it can decide whether to perform the service for the connected principal.
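The flow just described can be caricatured in a few lines of Python. This is a toy sketch only: real SAML exchanges signed XML assertions over defined protocol bindings, and every name below (IdentityProvider, ServiceProvider, the shared key, the credentials) is invented for illustration.

```python
# Toy sketch of the SAML trust flow: the IdP authenticates the principal and
# issues a signed assertion; the SP verifies it before granting the service.
# Real SAML uses signed XML, not JSON/HMAC; all names here are invented.
import hashlib
import hmac
import json
import time
from typing import Optional

SHARED_KEY = b"idp-sp-trust-key"   # stand-in for the IdP's signing key

class IdentityProvider:
    def authenticate(self, username: str, password: str) -> Optional[dict]:
        if password != "secret":   # toy credential check on the principal
            return None
        assertion = {"subject": username, "issued_at": time.time()}
        payload = json.dumps(assertion, sort_keys=True).encode()
        sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
        return {"assertion": assertion, "signature": sig}

class ServiceProvider:
    def grant_access(self, token: dict) -> bool:
        # The access-control decision rests on validating the IdP's assertion.
        payload = json.dumps(token["assertion"], sort_keys=True).encode()
        expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, token["signature"])

idp, sp = IdentityProvider(), ServiceProvider()
token = idp.authenticate("alice", "secret")
print(sp.grant_access(token))      # True: the SP trusts the IdP's assertion
```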
At the heart of the SAML assertion is a subject (a principal within the context of a particular security domain) about which something is being asserted. The subject is usually (but not necessarily) a human. As in the SAML 2.0 Technical Overview, the terms subject and principal are used interchangeably in this document.
Before delivering the subject-based assertion from IdP to the SP, the IdP may request some information from the principal—such as a user name and password—in order to authenticate the principal. SAML specifies the content of the assertion |
https://en.wikipedia.org/wiki/TickIT | TickIT is a certification program for companies in the software development and computer industries, supported primarily by the United Kingdom and Swedish industries through UKAS and SWEDAC respectively. Its general objective is to improve software quality.
History
In the 1980s, the UK government's CCTA organisation promoted the use of IT standards in the UK public sector. Its work on BS 5750 (quality management) led to the publishing of the Quality Management Library and the inception of the TickIT assessment scheme, with the participation of the DTI, the MoD and software development companies.
The TickIT Guide
TickIT also includes a guide. This provides guidance in understanding and applying ISO 9001 in the IT industry. It gives a background to the TickIT scheme, including its origins and objectives. Furthermore, it provides detailed information on how to implement a Quality System and the expected structure and content relevant to software activities. The TickIT guide also assists in defining appropriate measures and/or metrics. Various TickIT Guides have been issued, including "Guide to Software Quality Management and Certification using EN29001".
References
Bamford, Robert; Deibler, William (2003). ISO 9001: 2000 for Software and Systems Providers: An Engineering Approach (1st ed.). CRC-Press.
External links
TickITplus
Information assurance standards
Information technology governance
Information technology organisations based in the United Kingdom
Software quality |
https://en.wikipedia.org/wiki/Clerk%20of%20works | A clerk of works or clerk of the works (CoW) is employed by an architect or a client on a construction site. The role is primarily to represent the interests of the client in regard to ensuring that the quality of both materials and workmanship are in accordance with the design information such as specification and engineering drawings, in addition to recognized quality standards. The role is defined in standard forms of contract such as those published by the Joint Contracts Tribunal. Clerks of works are also the most highly qualified non-commissioned tradesmen in the Royal Engineers. The qualification can be held in three specialisms: electrical, mechanical and construction.
Historically, the clerk of works was employed by the architect on behalf of a client, or by local authorities to oversee public works. A clerk of works can also be employed by the client (state body, local authority or private client) to monitor design-and-build projects, where the traditional role of the architect sits within the design-and-build project team.
Maître d'oeuvre (master of work) is a term used in many Francophone jurisdictions for the office that carries out this job in major projects; the Channel Tunnel project had such an office. In Italy, the term used is direttore dei lavori (manager of the works).
Origins of the title
The job title is believed to derive from the 13th century when monks and priests (i.e. "clerics" or "clerks") were accepted as being more literate than the builders of the age and took on the responsibility of supervising the works associated with the erection of churches and other religious property. As craftsmen and masons became more educated they in turn took on the role, but the title did not change. By the 19th century the role had expanded to cover the majority of building works, and the clerk of works was drawn from experienced tradesmen who had wide knowledge and understanding of the building process.
The role
The role, to this day, is based on the im |
https://en.wikipedia.org/wiki/List%20of%20manifolds | This is a list of particular manifolds, by Wikipedia page. See also list of geometric topology topics. For categorical listings see :Category:Manifolds and its subcategories.
Generic families of manifolds
Euclidean space, Rn
n-sphere, Sn
n-torus, Tn
Real projective space, RPn
Complex projective space, CPn
Quaternionic projective space, HPn
Flag manifold
Grassmann manifold
Stiefel manifold
Lie groups provide several interesting families. See Table of Lie groups for examples. See also: List of simple Lie groups and List of Lie group topics.
Manifolds of a specific dimension
1-manifolds
Circle, S1
Long line
Real line, R
Real projective line, RP1 ≅ S1
2-manifolds
Cylinder, S1 × R
Klein bottle, RP2 # RP2
Klein quartic (a genus 3 surface)
Möbius strip
Real projective plane, RP2
Sphere, S2
Surface of genus g
Torus
Double torus
3-manifolds
3-sphere, S3
3-torus, T3
Poincaré homology sphere
SO(3) ≅ RP3
Solid Klein bottle
Solid torus
Whitehead manifold
Meyerhoff manifold
Weeks manifold
For more examples see 3-manifold.
4-manifolds
Complex projective plane
Del Pezzo surface
E8 manifold
Enriques surface
Exotic R4
Hirzebruch surface
K3 surface
For more examples see 4-manifold.
Special types of manifolds
Manifolds related to spheres
Brieskorn manifold
Exotic sphere
Homology sphere
Homotopy sphere
Lens space
Spherical 3-manifold
Special classes of Riemannian manifolds
Einstein manifold
Ricci-flat manifold
G2 manifold
Kähler manifold
Calabi–Yau manifold
Hyperkähler manifold
Quaternionic Kähler manifold
Riemannian symmetric space
Spin(7) manifold
Categories of manifolds
Manifolds definable by a particular choice of atlas
Affine manifold
Analytic manifold
Complex manifold
Differentiable (smooth) manifold
Piecewise linear manifold
Lipschitz manifold
Topological manifold
Manifolds with additional structure
Almost complex manifold
Almost symplectic manifold
Calibrated manifold
Complex manifold
Contac |
https://en.wikipedia.org/wiki/Backstaff | The backstaff is a navigational instrument that was used to measure the altitude of a celestial body, in particular the Sun or Moon. When observing the Sun, users kept the Sun to their back (hence the name) and observed the shadow cast by the upper vane on a horizon vane. It was invented by the English navigator John Davis, who described it in his book Seaman's Secrets in 1594.
Types of backstaffs
Backstaff is the name given to any instrument that measures the altitude of the sun by the projection of a shadow. It appears that the idea for measuring the sun's altitude using back observations originated with Thomas Harriot. Many types of instruments evolved from the cross-staff that can be classified as backstaffs. Only the Davis quadrant remains dominant in the history of navigation instruments. Indeed, the Davis quadrant is essentially synonymous with backstaff. However, Davis was neither the first nor the last to design such an instrument and others are considered here as well.
Davis quadrant
Captain John Davis invented a version of the backstaff in 1594. Davis was a navigator who was quite familiar with the instruments of the day such as the mariner's astrolabe, the quadrant and the cross-staff. He recognized the inherent drawbacks of each and endeavoured to create a new instrument that could reduce those problems and increase the ease and accuracy of obtaining solar elevations.
One early version of the quadrant staff is shown in Figure 1. It had an arc affixed to a staff so that it could slide along the staff (the shape is not critical, though the curved shape was chosen). The arc (A) was placed so that it would cast its shadow on the horizon vane (B). The navigator would look along the staff and observe the horizon through a slit in the horizon vane. By sliding the arc so that the shadow aligned with the horizon, the angle of the sun could be read on the graduated staff. This was a simple quadrant, but it was not as accurate as one might like. The |
https://en.wikipedia.org/wiki/Algebraic%20function | In mathematics, an algebraic function is a function that can be defined
as the root of an irreducible polynomial equation. Quite often algebraic functions are algebraic expressions using a finite number of terms, involving only the algebraic operations addition, subtraction, multiplication, division, and raising to a fractional power. Examples of such functions are:

f(x) = \frac{1}{x}, \qquad f(x) = \sqrt{x}, \qquad f(x) = \frac{\sqrt{1 + x^3}}{x^{3/2}}.
Some algebraic functions, however, cannot be expressed by such finite expressions (this is the Abel–Ruffini theorem). This is the case, for example, for the Bring radical, which is the function f implicitly defined by

f(x)^5 + f(x) + x = 0.
In more precise terms, an algebraic function of degree n in one variable x is a function y = f(x) that is continuous in its domain and satisfies a polynomial equation

a_n(x) \, y^n + a_{n-1}(x) \, y^{n-1} + \cdots + a_0(x) = 0,

where the coefficients a_i(x) are polynomial functions of x, with integer coefficients. It can be shown that the same class of functions is obtained if algebraic numbers are accepted for the coefficients of the a_i(x)'s. If transcendental numbers occur in the coefficients the function is, in general, not algebraic, but it is algebraic over the field generated by these coefficients.
The value of an algebraic function at a rational number, and more generally, at an algebraic number is always an algebraic number.
Sometimes, coefficients a_i(x) that are polynomial over a ring R are considered, and one then talks about "functions algebraic over R".
A function which is not algebraic is called a transcendental function, as is for example the case of exp(x), tan(x), ln(x), and Γ(x). A composition of transcendental functions can give an algebraic function: f(x) = cos(arcsin(x)) = \sqrt{1 - x^2}.
As a polynomial equation of degree n has up to n roots (and exactly n roots over an algebraically closed field, such as the complex numbers), a polynomial equation does not implicitly define a single function, but up to n functions, sometimes also called branches. Consider for example the equation of the unit circle:

y^2 + x^2 = 1.

This determines y, except only up to an overall sign; accordingly, it has two branches:

y = \pm\sqrt{1 - x^2}.
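A quick symbolic check (my own illustration, not part of the article) recovers both branches by solving the circle equation for y:

```python
# Solving the unit-circle equation for y exhibits the two branches
# y = -sqrt(1 - x^2) and y = +sqrt(1 - x^2).
import sympy as sp

x, y = sp.symbols("x y")
branches = sp.solve(sp.Eq(x**2 + y**2, 1), y)
print(branches)  # [-sqrt(1 - x**2), sqrt(1 - x**2)]
```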
An algebraic fu |
https://en.wikipedia.org/wiki/Ring%20circuit | In electricity supply design, a ring circuit is an electrical wiring technique in which sockets and the distribution point are connected in a ring. It is contrasted with the usual radial circuit, in which sockets and the distribution point are connected in a line with the distribution point at one end.
Ring circuits are also known as ring final circuits and often incorrectly as ring mains, a term used historically, or informally simply as rings.
It is used primarily in the United Kingdom, where it was developed, and to a lesser extent in Ireland and Hong Kong.
This design enables the use of smaller-diameter wire than would be used in a radial circuit of equivalent total current capacity. The reduced-diameter conductors in the flexible cord connecting an appliance to its plug are individually protected by a fuse in the plug, which is why plugs intended for use with sockets on a ring circuit are fused. Its advantages over radial circuits are therefore a reduced quantity of copper and greater flexibility in the appliances and equipment that can be connected.
Ideally, the ring circuit acts like two radial circuits proceeding in opposite directions around the ring, the dividing point between them dependent on the distribution of load in the ring. If the load is evenly split across the two directions, the current in each direction is half of the total, allowing the use of wire with half the total current-carrying capacity. In practice, the load does not always split evenly, so thicker wire is used.
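As a simplified model of that split (my own illustration: it assumes uniform cable, purely resistive legs, and a single load), the current divides between the two legs in inverse proportion to each leg's length around the ring:

```python
# Simplified model: one load drawing i_total amps, connected a fraction p of
# the way around the ring (by cable length). The legs are in parallel and
# each leg's resistance is proportional to its length, so the current in a
# leg is inversely proportional to that leg's length.
def ring_leg_currents(i_total: float, p: float) -> tuple[float, float]:
    """Return (current via the leg of length p, current via the leg of 1 - p)."""
    return i_total * (1 - p), i_total * p

# Load at the midpoint: each leg carries half the total current.
print(ring_leg_currents(32.0, 0.5))  # (16.0, 16.0)
# Load near one end: the short leg carries most of the current, which is why
# ring cable must be rated well above half the protective device's rating.
print(ring_leg_currents(32.0, 0.1))  # (28.8, 3.2)
```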
Description
The ring starts at the consumer unit (also known as fuse box, distribution board, or breaker box), visits each socket in turn, and then returns to the consumer unit. The ring is fed from a fuse or circuit breaker in the consumer unit.
Ring circuits are commonly used in British wiring with socket-outlets taking fused plugs to BS 1363. Because the breaker rating is much higher than that of any one socket outlet, the system can only be used with fused plugs or fused appliance outlets. Th |
https://en.wikipedia.org/wiki/Bulkhead%20line | A bulkhead line is an officially established line along a shoreline, usually beyond the dry land, that demarcates the territory that may be treated as dry land. It separates the jurisdictions of dry-land and water authorities, governs construction and riparian activities, and establishes limits on allowable obstructions to navigation and other waterfront uses.
In particular, it may limit the construction of piers in the absence of an official pierhead line.
Various jurisdictions may define it in different ways. A formal definition may read as follows: A geographic line along a reach of navigable water that has been adopted by a municipal ordinance and approved by the Department of Natural Resources, and which allows limited filling between this bulkhead line and the original ordinary high water mark, except where such filling is prohibited by the floodway provisions. (Several municipalities in Wisconsin use wording closely approximating this sample.)
References
Coastal geography |
https://en.wikipedia.org/wiki/Exterior%20Gateway%20Protocol | The Exterior Gateway Protocol (EGP) was a routing protocol used to connect different autonomous systems on the Internet from the mid-1980s until the mid-1990s, when it was replaced by Border Gateway Protocol (BGP).
History
EGP was developed by Bolt, Beranek and Newman in the early 1980s. It was first described in RFC 827 and formally specified in RFC 904.
RFC 1772 outlined a migration path from EGP to BGP.
References
See also
Interior gateway protocol
Internet protocols
Internet Standards
Routing protocols |
https://en.wikipedia.org/wiki/Data%20manipulation%20language | A data manipulation language (DML) is a computer programming language used for adding (inserting), deleting, and modifying (updating) data in a database. A DML is often a sublanguage of a broader database language such as SQL, with the DML comprising some of the operators in the language. Read-only selecting of data is sometimes distinguished as being part of a separate data query language (DQL), but it is closely related and sometimes also considered a component of a DML; some operators may perform both selecting (reading) and writing.
A popular data manipulation language is that of Structured Query Language (SQL), which is used to retrieve and manipulate data in a relational database. Other forms of DML include those used by IMS/DLI and by CODASYL databases such as IDMS.
SQL
In SQL, the data manipulation language comprises the SQL-data change statements, which modify stored data but not the schema or database objects. Manipulation of persistent database objects, e.g., tables or stored procedures, via the SQL schema statements, rather than the data stored within them, is considered to be part of a separate data definition language (DDL). In SQL these two categories are similar in their detailed syntax, data types, expressions etc., but distinct in their overall function.
The SQL-data change statements are a subset of the SQL-data statements; this also contains the SELECT query statement, which strictly speaking is part of the DQL, not the DML. In common practice though, this distinction is not made and SELECT is widely considered to be part of DML, so the DML consists of all SQL-data statements, not only the SQL-data change statements. The SELECT ... INTO ... form combines both selection and manipulation, and thus is strictly considered to be DML because it manipulates (i.e. modifies) data.
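As an illustration of the distinctions just drawn (using Python's built-in sqlite3 module; the table and rows are invented for the example), the statements below separate DDL, DML, and the read-only SELECT:

```python
# DDL vs. DML vs. DQL, illustrated with Python's built-in sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: manipulates schema objects, not stored data, so it is not DML.
cur.execute("CREATE TABLE staff (id INTEGER PRIMARY KEY, name TEXT)")

# DML: the SQL-data change statements modify stored data.
cur.execute("INSERT INTO staff (name) VALUES ('Ada')")
cur.execute("UPDATE staff SET name = 'Ada Lovelace' WHERE id = 1")
cur.execute("DELETE FROM staff WHERE name IS NULL")

# DQL: read-only SELECT, widely grouped with DML in practice.
print(cur.execute("SELECT id, name FROM staff").fetchall())  # [(1, 'Ada Lovelace')]
conn.commit()
```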
Data manipulation languages have their functional capability organized by the initial word in a statement, which is almost always a verb. In the case of SQL, these verbs |