https://en.wikipedia.org/wiki/Aeronautical%20Fixed%20Telecommunication%20Network
The Aeronautical Fixed Telecommunication Network (AFTN) is a worldwide system of aeronautical fixed circuits provided, as part of the Aeronautical Fixed Service, for the exchange of messages and/or digital data between aeronautical fixed stations having the same or compatible communications characteristics. AFTN connects aviation entities including ANS (Air Navigation Services) providers, aviation service providers, airport authorities and government agencies. It exchanges vital information for aircraft operations such as distress messages, urgency messages, flight safety messages, meteorological messages, flight regularity messages and aeronautical administrative messages. Communications infrastructure The original AFTN infrastructure consisted of landline teleprinter links between the major centers. Some long distance and international links were based on duplex radioteletype transmissions and leased lines. With the upgrade to CIDIN (Common ICAO Data Interchange Network), the network moved to X.25 links at much higher data rates. As the Aeronautical Message Handling System (AMHS) comes online over the next decade, it will switch to X.400 links, with either dedicated lines or tunneled through IP. AFTN Station address format An AFTN address is an eight-letter-group composed of a four-letter ICAO Location Indicator plus a three-letter-group identifying an organization or service addressed and an additional letter. The additional letter represents a department, division or process within the organization/function addressed. The letter X is used to complete the address when an explicit identification of the department, division or process is not required. For instance: LEBBYNYX. Location Indicator - A four-letter code group formulated in accordance with rules prescribed by ICAO and assigned to the location of an aeronautical fixed station. In the ICAO DOC7910, location indicators that are assigned to locations to which messages can not be addres
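The eight-letter address format described above can be split mechanically. A minimal Python sketch (the function name and dictionary keys are illustrative, not part of any AFTN specification):

```python
import re

def parse_aftn_address(addr: str) -> dict:
    """Split an eight-letter AFTN address into its documented parts."""
    if not re.fullmatch(r"[A-Z]{8}", addr):
        raise ValueError("an AFTN address is exactly eight uppercase letters")
    return {
        "location_indicator": addr[:4],  # four-letter ICAO location indicator
        "organization": addr[4:7],       # three-letter organization/service code
        "department": addr[7],           # department/division letter, 'X' if unused
    }

# The article's example address: LEBB (location) + YNY (organization) + X (filler)
print(parse_aftn_address("LEBBYNYX"))
```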
https://en.wikipedia.org/wiki/Categorical%20logic
Categorical logic is the branch of mathematics in which tools and concepts from category theory are applied to the study of mathematical logic. It is also notable for its connections to theoretical computer science. In broad terms, categorical logic represents both syntax and semantics by a category, and an interpretation by a functor. The categorical framework provides a rich conceptual background for logical and type-theoretic constructions. The subject has been recognisable in these terms since around 1970. Overview There are three important themes in the categorical approach to logic: Categorical semantics Categorical logic introduces the notion of structure valued in a category C with the classical model theoretic notion of a structure appearing in the particular case where C is the category of sets and functions. This notion has proven useful when the set-theoretic notion of a model lacks generality and/or is inconvenient. R.A.G. Seely's modeling of various impredicative theories, such as System F, is an example of the usefulness of categorical semantics. It was found that the connectives of pre-categorical logic were more clearly understood using the concept of adjoint functor, and that the quantifiers were also best understood using adjoint functors. Internal languages This can be seen as a formalization and generalization of proof by diagram chasing. One defines a suitable internal language naming relevant constituents of a category, and then applies categorical semantics to turn assertions in a logic over the internal language into corresponding categorical statements. This has been most successful in the theory of toposes, where the internal language of a topos together with the semantics of intuitionistic higher-order logic in a topos enables one to reason about the objects and morphisms of a topos "as if they were sets and functions". This has been successful in dealing with toposes that have "sets" with properties incompatible with classical lo
https://en.wikipedia.org/wiki/Occurs%20check
In computer science, the occurs check is a part of algorithms for syntactic unification. It causes unification of a variable V and a structure S to fail if S contains V. Application in theorem proving In theorem proving, unification without the occurs check can lead to unsound inference. For example, the Prolog goal X = f(X) will succeed, binding X to a cyclic structure which has no counterpart in the Herbrand universe. As another example, without occurs-check, a resolution proof can be found for the non-theorem (∀x∃y. p(x,y)) → (∃y∀x. p(x,y)): the negation of that formula has the conjunctive normal form p(X, f(X)) ∧ ¬p(g(Y), Y), with f and g denoting the Skolem functions for the first and second existential quantifier, respectively; the literals p(X, f(X)) and ¬p(g(Y), Y) are unifiable without occurs check, producing the refuting empty clause. Rational tree unification Prolog implementations usually omit the occurs check for reasons of efficiency, which can lead to circular data structures and looping. By not performing the occurs check, the worst case complexity of unifying a term t1 with a term t2 is reduced in many cases from O(size(t1) + size(t2)) to O(min(size(t1), size(t2))); in the particular, frequent case of variable-term unifications, runtime shrinks to O(1). Modern implementations, based on Colmerauer's Prolog II, use rational tree unification to avoid looping. However it is difficult to keep the unification time linear in the presence of cyclic terms. Examples where Colmerauer's algorithm becomes quadratic can be readily constructed, but refinement proposals exist. See image for an example run of the unification algorithm given in Unification (computer science)#A unification algorithm, trying to solve the goal, however without the occurs check rule (named "check" there); applying rule "eliminate" instead leads to a cyclic graph (i.e. an infinite term) in the last step. Sound unification ISO Prolog implementations have the built-in predicate unify_with_occurs_check/2 for sound unification but are free to use unsound or even looping algorithms when unification is invoked otherwise,
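The effect of omitting the check can be demonstrated with a small syntactic unifier. The sketch below (an illustrative Python encoding, not from any Prolog implementation: uppercase strings are variables, tuples are compound terms with the functor first) shows X = f(X) failing with the occurs check and producing a cyclic binding without it:

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    # Resolve a variable through the current substitution.
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    # Does variable v occur anywhere inside term t (under substitution s)?
    t = walk(t, s)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, s) for a in t[1:])
    return False

def unify(t1, t2, s=None, occurs_check=True):
    s = {} if s is None else s
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        if occurs_check and occurs(t1, t2, s):
            return None          # X = f(X) is rejected here
        return {**s, t1: t2}
    if is_var(t2):
        return unify(t2, t1, s, occurs_check)
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        for a, b in zip(t1[1:], t2[1:]):
            s = unify(a, b, s, occurs_check)
            if s is None:
                return None
        return s
    return None

print(unify('X', ('f', 'X')))                      # None: occurs check fails
print(unify('X', ('f', 'X'), occurs_check=False))  # {'X': ('f', 'X')}: cyclic term
```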
https://en.wikipedia.org/wiki/OBJ%20%28programming%20language%29
OBJ is a programming language family introduced by Joseph Goguen in 1976, and further worked on by Jose Meseguer. Overview It is a family of declarative "ultra high-level" languages. It features abstract types, generic modules, subsorts (subtypes with multiple inheritance), pattern-matching modulo equations, E-strategies (user control over laziness), module expressions (for combining modules), theories and views (for describing module interfaces) for the massively parallel RRM (rewrite rule machine). Members of the OBJ family of languages include CafeOBJ, Eqlog, FOOPS, Kumo, Maude, OBJ2, and OBJ3. OBJ2 OBJ2 is a programming language with Clear-like parametrised modules and a functional system based on equations. OBJ3 OBJ3 is a version of OBJ based on order-sorted rewriting. OBJ3 is agent-oriented and runs on Kyoto Common Lisp AKCL. See also Automated theorem proving Comparison of programming languages Formal methods References J. A. Goguen, Higher-Order Functions Considered Unnecessary for Higher-Order Programming. In Research Topics in Functional Programming (June 1990). pp. 309–351. "Principles of OBJ2", K. Futatsugi et al., 12th POPL, ACM 1985, pp. 52–66. External links The OBJ archive The OBJ family Information and OBJ3 manual, PostScript format Academic programming languages Functional languages Logic in computer science Formal specification languages Theorem proving software systems Term-rewriting programming languages
https://en.wikipedia.org/wiki/Cairo%20%28operating%20system%29
Cairo was the codename for a project at Microsoft from 1991 to 1996. Its charter was to build technologies for a next-generation operating system that would fulfill Bill Gates's vision of "information at your fingertips." Cairo never shipped, although portions of its technologies have since appeared in other products. Overview Cairo was announced at the 1991 Microsoft Professional Developers Conference by Jim Allchin. It was demonstrated publicly (including a demo system for all attendees to use) at the 1993 Cairo/Win95 PDC. Microsoft changed stance on Cairo several times, sometimes calling it a product, other times referring to it as a collection of technologies. Features Cairo used distributed computing concepts to make information available quickly and seamlessly across a worldwide network of computers. The Windows 95 user interface was based on the initial design work that was done on the Cairo user interface. DCE/RPC shipped in Windows NT 3.1. Content Indexing is now a part of Internet Information Server and Windows Desktop Search. The remaining component is the object file system. It was once planned to be implemented in the form of WinFS as part of Windows Vista but development was cancelled in June 2006, with some of its technologies merged into other Microsoft products such as Microsoft SQL Server 2008, also known under the codename "Katmai". See also History of Microsoft Windows List of Microsoft codenames References Notes Distributed operating systems Microsoft Windows Microsoft operating systems Object-oriented operating systems Uncompleted Microsoft initiatives
https://en.wikipedia.org/wiki/CEN/XFS
CEN/XFS or XFS (extensions for financial services) provides a client-server architecture for financial applications on the Microsoft Windows platform, especially peripheral devices such as EFTPOS terminals and ATMs which are unique to the financial industry. It is an international standard promoted by the European Committee for Standardization (known by the acronym CEN, hence CEN/XFS). The standard is based on the WOSA Extensions for Financial Services or WOSA/XFS developed by Microsoft. With the move to a more standardized software base, financial institutions have been increasingly interested in the ability to pick and choose the application programs that drive their equipment. XFS provides a common API for accessing and manipulating various financial services devices regardless of the manufacturer. History Chronology: 1991 - Microsoft forms "Banking Solutions Vendor Council" 1995 - WOSA/XFS 1.11 released 1997 - WOSA/XFS 2.0 released - additional support for 24 hours-a-day unattended operation 1998 - adopted by European Committee for Standardization as an international standard. 2000 - XFS 3.0 released by CEN 2008 - XFS 3.10 released by CEN 2011 - XFS 3.20 released by CEN 2015 - XFS 3.30 released by CEN 2020 - XFS 3.40 released by CEN WOSA/XFS changed name to simply XFS when the standard was adopted by the international CEN/ISSS standards body. However, it is most commonly called CEN/XFS by the industry participants. XFS middleware While the perceived benefit of XFS is similar to Java's "write once, run anywhere" mantra, often different hardware vendors have different interpretations of the XFS standard. The result of these differences in interpretation means that applications typically use a middleware to even out the differences between various platforms implementation of XFS. Notable XFS middleware platforms include: F1 Solutions - F1 TPS (multi-vendor ATM & POS solution) Serquo - Dwide (REST API middleware for XFS) Nexus Software LLC - Nexu
https://en.wikipedia.org/wiki/Bigram
A bigram or digram is a sequence of two adjacent elements from a string of tokens, which are typically letters, syllables, or words. A bigram is an n-gram for n=2. The frequency distribution of every bigram in a string is commonly used for simple statistical analysis of text in many applications, including in computational linguistics, cryptography, and speech recognition. Gappy bigrams or skipping bigrams are word pairs which allow gaps (perhaps avoiding connecting words, or allowing some simulation of dependencies, as in a dependency grammar). Applications Bigrams, along with other n-grams, are used in most successful language models for speech recognition. Bigram frequency attacks can be used in cryptography to solve cryptograms. See frequency analysis. Bigram frequency is one approach to statistical language identification. Some activities in logology or recreational linguistics involve bigrams. These include attempts to find English words beginning with every possible bigram, or words containing a string of repeated bigrams, such as logogogue. Bigram frequency in the English language The frequency of the most common letter bigrams in a large English corpus is:

th 3.56%    of 1.17%    io 0.83%
he 3.07%    ed 1.17%    le 0.83%
in 2.43%    is 1.13%    ve 0.83%
er 2.05%    it 1.12%    co 0.79%
an 1.99%    al 1.09%    me 0.79%
re 1.85%    ar 1.07%    de 0.76%
on 1.76%    st 1.05%    hi 0.76%
at 1.49%    to 1.05%    ri 0.73%
en 1.45%    nt 1.04%    ro 0.73%
nd 1.35%    ng 0.95%    ic 0.70%
ti 1.34%    se 0.93%    ne 0.69%
es 1.34%    ha 0.93%    ea 0.69%
or 1.28%    as 0.87%    ra 0.69%
te 1.20%    ou 0.87%    ce 0.65%

See also Digraph (orthography) Letter frequency Sørensen–Dice coefficient References Formal languages Classical cryptography Natural language processing
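Letter-bigram frequencies of the kind tabulated above are straightforward to compute. A minimal Python sketch (as a simplifying assumption, non-letters are dropped and bigrams are allowed to span word boundaries; a corpus study would make its own choice here):

```python
from collections import Counter

def bigram_frequencies(text: str) -> dict:
    """Relative frequency of each adjacent letter pair in text."""
    letters = [c for c in text.lower() if c.isalpha()]
    pairs = [a + b for a, b in zip(letters, letters[1:])]
    total = len(pairs)
    return {bg: n / total for bg, n in Counter(pairs).items()}

freqs = bigram_frequencies("the theory of the thing")
# Most common bigrams first
print(sorted(freqs.items(), key=lambda kv: -kv[1])[:3])
```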
https://en.wikipedia.org/wiki/Bose%20gas
An ideal Bose gas is a quantum-mechanical phase of matter, analogous to a classical ideal gas. It is composed of bosons, which have an integer value of spin and abide by Bose–Einstein statistics. The statistical mechanics of bosons were developed by Satyendra Nath Bose for a photon gas and extended to massive particles by Albert Einstein, who realized that an ideal gas of bosons would form a condensate at a low enough temperature, unlike a classical ideal gas. This condensate is known as a Bose–Einstein condensate. Introduction and examples Bosons are quantum mechanical particles that follow Bose–Einstein statistics, or equivalently, that possess integer spin. These particles can be classified as elementary: these are the Higgs boson, the photon, the gluon, the W/Z and the hypothetical graviton; or composite, like the atom of hydrogen, the atom of 16O, the nucleus of deuterium, mesons etc. Additionally, some quasiparticles in more complex systems can also be considered bosons, like the plasmons (quanta of charge density waves). The first model that treated a gas with several bosons was the photon gas, a gas of photons, developed by Bose. This model leads to a better understanding of Planck's law and black-body radiation. The photon gas can easily be extended to any kind of ensemble of massless non-interacting bosons. The phonon gas, also known as the Debye model, is an example where the normal modes of vibration of the crystal lattice of a metal can be treated as effective massless bosons. Peter Debye used the phonon gas model to explain the behaviour of the heat capacity of metals at low temperature. An interesting example of a Bose gas is an ensemble of helium-4 atoms. When a system of 4He atoms is cooled down to a temperature near absolute zero, many quantum mechanical effects are present. Below 2.17 kelvins, the ensemble starts to behave as a superfluid, a fluid with almost zero viscosity. The Bose gas is the simplest quantitative model that explains this phas
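The condensation Einstein predicted follows from the Bose–Einstein distribution, which gives the mean occupation number of a single-particle state of energy ε at temperature T and chemical potential μ:

```latex
\bar{n}(\varepsilon) \;=\; \frac{1}{e^{(\varepsilon-\mu)/k_{\mathrm{B}}T} - 1}
```

Because the denominator must stay positive, μ is bounded above by the ground-state energy; as T decreases at fixed particle number, μ approaches that bound and the occupation of the lowest state grows macroscopically large, which is the condensate.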
https://en.wikipedia.org/wiki/Worldspan
Worldspan is a provider of travel technology and content and a part of the Travelport GDS business. It offers worldwide electronic distribution of travel information, Internet products and connectivity, and e-commerce capabilities for travel agencies, travel service providers and corporations. Its primary system is commonly known as a Global Distribution System (GDS), which is used by travel agents and travel related websites to book airline tickets, hotel rooms, rental cars, tour packages and associated products. Worldspan also hosts IT services and product solutions for major airlines. Recent events In December, 2006, Travelport, owner of the Galileo GDS, Gullivers Travel Associates (GTA) and a controlling share in Orbitz, agreed to acquire Worldspan. However, at the time, management of Travelport did not commit to the eventual merging of the two GDS systems, saying that they were considering all options, including running both systems in parallel. On August 21, 2007, the acquisition was completed for $1.4 billion and Worldspan became a part of Travelport GDS, which also includes Galileo and other related businesses. On September 28, 2008, the Galileo and Apollo GDS were moved from the Travelport datacenter in Denver, Colorado to the Worldspan datacenter in Atlanta, Georgia (although they continue to be run as separate systems from the Worldspan GDS). In 2012, Worldspan customers were migrated from the TPF-based FareSource pricing engine to Travelport's Linux-based 360 Fares pricing engine already used by Galileo and Apollo. Although the three systems share a common pricing platform, they continue to operate as separate GDS. History Worldspan was formed in early 1990 by Delta Air Lines, Northwest Airlines, and TWA to operate and sell its GDS services to travel agencies worldwide. Worldspan operated very effectively and profitably, successfully expanding its business in markets throughout North America, South America, Europe, and Asia. As a result, in mid-
https://en.wikipedia.org/wiki/Link%20encryption
Link encryption is an approach to communications security that encrypts and decrypts all network traffic at each network routing point (e.g. network switch, or node through which it passes) until arrival at its final destination. This repeated decryption and encryption is necessary to allow the routing information contained in each transmission to be read and employed further to direct the transmission toward its destination, before which it is re-encrypted. This contrasts with end-to-end encryption where internal information, but not the header/routing information, is encrypted by the sender at the point of origin and only decrypted by the intended recipient. Link encryption offers two main advantages: encryption is automatic so there is less opportunity for human error. if the communications link operates continuously and carries an unvarying level of traffic, link encryption defeats traffic analysis. On the other hand, end-to-end encryption ensures only the intended recipient has access to the plaintext. Link encryption can be used with end-to-end systems by superencrypting the messages. Bulk encryption refers to encrypting a large number of circuits at once, after they have been multiplexed. References Cryptography
https://en.wikipedia.org/wiki/End-to-end%20encryption
End-to-end encryption (E2EE) is a private communication system in which only communicating users can participate. As such, no one, including the communication system provider, telecom providers, Internet providers or malicious actors, can access the cryptographic keys needed to converse. End-to-end encryption is intended to prevent data being read or secretly modified, other than by the true sender and recipient(s). The messages are encrypted by the sender but the third party does not have a means to decrypt them, and stores them encrypted. The recipients retrieve the encrypted data and decrypt it themselves. Because no third parties can decipher the data being communicated or stored, for example, companies that provide end-to-end encryption are unable to hand over texts of their customers' messages to the authorities. In 2022, the UK's Information Commissioner's Office, the government body responsible for enforcing online data standards, stated that opposition to E2EE was misinformed and the debate too unbalanced, with too little focus on benefits, since E2EE "helped keep children safe online" and law enforcement access to stored data on servers was "not the only way" to find abusers. E2EE and privacy In many messaging systems, including email and many chat networks, messages pass through intermediaries and are stored by a third party, from which they are retrieved by the recipient. Even if the messages are encrypted, they are only encrypted 'in transit', and are thus accessible by the service provider, regardless of whether server-side disk encryption is used. Server-side disk encryption simply prevents unauthorized users from viewing this information. It does not prevent the company itself from viewing the information, as they have the key and can simply decrypt this data. This allows the third party to provide search and other features, or to scan for illegal and unacceptable content, but also means they can be read and misused by anyone who has acces
https://en.wikipedia.org/wiki/Code%20injection
Code injection is the exploitation of a computer bug that is caused by processing invalid data. The injection is used by an attacker to introduce (or "inject") code into a vulnerable computer program and change the course of execution. The result of successful code injection can be disastrous, for example, by allowing computer viruses or computer worms to propagate. Code injection vulnerabilities occur when an application sends untrusted data to an interpreter. Injection flaws are most often found in SQL, LDAP, XPath, NoSQL queries, OS commands, XML parsers, SMTP headers, program arguments, etc. Injection flaws tend to be easier to discover when examining source code than via testing. Scanners and fuzzers can help find injection flaws. Injection can result in data loss or corruption, lack of accountability, or denial of access. Injection can sometimes lead to complete host takeover. Certain types of code injection are errors in interpretation, giving special meaning to user input. Similar interpretation errors exist outside the world of computer science such as the comedy routine Who's on First?. In the routine, there is a failure to distinguish proper names from regular words. Likewise, in some types of code injection, there is a failure to distinguish user input from system commands. Code injection techniques are popular in system hacking or cracking to gain information, privilege escalation or unauthorized access to a system. Code injection can be used malevolently for many purposes, including: Arbitrarily modifying values in a database through SQL injection. The impact of this can range from website defacement to serious compromise of sensitive data. Installing malware or executing malevolent code on a server by injecting server scripting code (such as PHP or ASP). Privilege escalation to root permissions by exploiting Shell Injection vulnerabilities in a setuid root binary on UNIX, or Local System by exploiting a service on Microsoft Windows. Attacking
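The failure to distinguish user input from commands can be shown concretely with SQL injection, the first item in the list above. A minimal sketch using Python's sqlite3 module (the table, payload, and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"

# Vulnerable: the input is spliced into the SQL text, so the quoted
# payload escapes the string literal and rewrites the WHERE clause.
unsafe = "SELECT secret FROM users WHERE name = '%s'" % user_input
print(conn.execute(unsafe).fetchall())   # leaks every row

# Safe: a parameterized query keeps data separate from the command,
# so the payload is matched literally as a (nonexistent) name.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # []
```

Parameterized queries (or prepared statements) are the standard mitigation precisely because the interpreter never re-parses the data as code.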
https://en.wikipedia.org/wiki/Megascale%20engineering
Megascale engineering (or macro-engineering) is a form of exploratory engineering concerned with the construction of structures on an enormous scale. Typically these structures are at least 1,000 km (one megameter) in length, hence the name. Such large-scale structures are termed megastructures. In addition to large-scale structures, megascale engineering is also defined as including the transformation of entire planets into a human-habitable environment, a process known as terraforming or planetary engineering. This might also include transformation of the surface conditions, changes in the planetary orbit, and structures in orbit intended to modify the energy balance. Astroengineering is the extension of megascale engineering to megastructures on a stellar scale or larger, such as Dyson spheres, Ringworlds, and Alderson disks. Several megascale structure concepts, such as Dyson spheres, Dyson swarms, and Matrioshka brains, would likely be built upon space-based solar power satellites. Other planetary engineering or interstellar transportation concepts would likely require space-based solar power satellites and the accompanying space logistics infrastructure for their power or construction. Megascale engineering often plays a major part in the plot of science fiction movies and books. The micro-gravity environment of outer space provides several potential benefits for the engineering of these structures. These include minimizing the loads on the structure, the availability of large quantities of raw materials in the form of asteroids, and an ample supply of energy from the Sun. The capabilities to employ these advantages are not yet available, however, so they provide material for science fiction themes. Quite a few megastructures have been designed on paper as exploratory engineering. However, the list of existing and planned megastructures is complicated by classifying what exactly constitutes a megastructure. By strict definition, no megastru
https://en.wikipedia.org/wiki/TPS%20report
A TPS report ("test procedure specification") is a document used by a quality assurance group or individual, particularly in software engineering, that describes the testing procedures and the testing process. Definition The official definition and creation is provided by the Institute of Electrical and Electronics Engineers (IEEE) as follows: In popular culture Office Space Its use in popular culture increased after the comedic 1999 film Office Space. In the movie, multiple managers and coworkers inquire about an error that protagonist Peter Gibbons (played by Ron Livingston) makes in omitting a cover sheet to send with his "TPS reports". It is used by Gibbons as an example that he has eight different bosses to whom he directly reports. According to the film's writer and director Mike Judge, the abbreviation stood for "Test Program Set" in the movie. After Office Space, "TPS report" has come to connote pointless, mindless paperwork, and an example of "literacy practices" in the work environment that are "meaningless exercises imposed upon employees by an inept and uncaring management" and "relentlessly mundane and enervating". Other references and allusions In King of the Hill (also produced by Mike Judge), Kahn is being chewed out, then remarks to his boss "No sir, I filed my TPS report yesterday." The 2015 puzzle video game Please, Don't Touch Anything featured the question "What is a TPS Report?" as one of many hidden clues that lead to a unique ending. In Lost season 1, episode 4, John Locke's boss says "Locke, I told you I need those TPS reports done by noon today." In Ralph Breaks the Internet, a TPS report is visibly hanging in one of the cubicles seen during Ralph's viral video montage. However, it was incorrectly placed in a cubicle in the accounting department, where TPS reports are not functionally relevant. In Borderlands 2, a legendary weapon is named the "Actualizer" with a flavor text description of "We need to talk about your DPS reports",
https://en.wikipedia.org/wiki/Red/black%20concept
The red/black concept, sometimes called the red–black architecture or red/black engineering, refers to the careful segregation in cryptographic systems of signals that contain sensitive or classified plaintext information (red signals) from those that carry encrypted information, or ciphertext (black signals). Therefore, the red side is usually considered the internal side, and the black side the more public side, with often some sort of guard, firewall or data-diode between the two. In NSA jargon, encryption devices are often called blackers, because they convert red signals to black. TEMPEST standards spelled out in Tempest/2-95 specify shielding or a minimum physical distance between wires or equipment carrying or processing red and black signals. Different organizations have differing requirements for the separation of red and black fiber optic cables. Red/black terminology is also applied to cryptographic keys. Black keys have themselves been encrypted with a "key encryption key" (KEK) and are therefore benign. Red keys are not encrypted and must be treated as highly sensitive material. Red/Gray/Black The NSA's Commercial Solutions for Classified (CSfC) program, which uses two layers of independent, commercial off-the-shelf cryptographic products to protect classified information, includes a red/gray/black concept. In this extension of the red/black concept, the separated gray compartment handles data that has been encrypted only once, which happens at the red/gray boundary. The gray/black interface adds or removes a second layer of encryption. See also Computer security Secure by design Security engineering References Cryptography Secure communication Security engineering
https://en.wikipedia.org/wiki/Superreal%20number
In abstract algebra, the superreal numbers are a class of extensions of the real numbers, introduced by H. Garth Dales and W. Hugh Woodin as a generalization of the hyperreal numbers and primarily of interest in non-standard analysis, model theory, and the study of Banach algebras. The field of superreals is itself a subfield of the surreal numbers. Dales and Woodin's superreals are distinct from the super-real numbers of David O. Tall, which are lexicographically ordered fractions of formal power series over the reals. Formal definition Suppose X is a Tychonoff space and C(X) is the algebra of continuous real-valued functions on X. Suppose P is a prime ideal in C(X). Then the factor algebra A = C(X)/P is by definition an integral domain that is a real algebra and that can be seen to be totally ordered. The field of fractions F of A is a superreal field if F strictly contains the real numbers , so that F is not order isomorphic to . If the prime ideal P is a maximal ideal, then F is a field of hyperreal numbers (Robinson's hyperreals being a very special case). References Bibliography Field (mathematics) Real closed field Infinity
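The construction in the prose can be summarized symbolically (same notation as above):

```latex
A \;=\; C(X)/P \quad\text{(a totally ordered integral domain)}, \qquad
F \;=\; \operatorname{Frac}(A),
```

and F is a superreal field precisely when \(\mathbb{R} \subsetneq F\), i.e. F is not order isomorphic to \(\mathbb{R}\); in the special case where P is a maximal ideal, F is a field of hyperreal numbers.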
https://en.wikipedia.org/wiki/Germ%20layer
A germ layer is a primary layer of cells that forms during embryonic development. The three germ layers in vertebrates are particularly pronounced; however, all eumetazoans (animals that are sister taxa to the sponges) produce two or three primary germ layers. Some animals, like cnidarians, produce two germ layers (the ectoderm and endoderm) making them diploblastic. Other animals such as bilaterians produce a third layer (the mesoderm) between these two layers, making them triploblastic. Germ layers eventually give rise to all of an animal's tissues and organs through the process of organogenesis. History Caspar Friedrich Wolff observed organization of the early embryo in leaf-like layers. In 1817, Heinz Christian Pander discovered three primordial germ layers while studying chick embryos. Between 1850 and 1855, Robert Remak had further refined the germ cell layer (Keimblatt) concept, stating that the external, internal and middle layers form respectively the epidermis, the gut, and the intervening musculature and vasculature. The term "mesoderm" was introduced into English by Huxley in 1871, and "ectoderm" and "endoderm" by Lankester in 1873. Evolution Among animals, sponges show the least amount of compartmentalization, having a single germ layer. Although they have differentiated cells (e.g. collar cells), they lack true tissue coordination. Diploblastic animals, Cnidaria and Ctenophora, show an increase in compartmentalization, having two germ layers, the endoderm and ectoderm. Diploblastic animals are organized into recognisable tissues. All bilaterian animals (from flatworms to humans) are triploblastic, possessing a mesoderm in addition to the germ layers found in Diploblasts. Triploblastic animals develop recognizable organs. Development Fertilization leads to the formation of a zygote. During the next stage, cleavage, mitotic cell divisions transform the zygote into a hollow ball of cells, a blastula. This early embryonic form undergoes gastrulati
https://en.wikipedia.org/wiki/Cephalization
Cephalization is an evolutionary trend in which, over many generations, the mouth, sense organs, and nerve ganglia become concentrated at the front end of an animal, producing a head region. This is associated with movement and bilateral symmetry, such that the animal has a definite head end. This led to the formation of a highly sophisticated brain in three groups of animals, namely the arthropods, cephalopod molluscs, and vertebrates. Animals without bilateral symmetry Cnidaria, such as the radially symmetrical Hydrozoa, show some degree of cephalization. The Anthomedusae have a head end with their mouth, photoreceptive cells, and a concentration of neural cells. Bilateria Cephalization is a characteristic feature of the Bilateria, a large group containing the majority of animal phyla. These have the ability to move, using muscles, and a body plan with a front end that encounters stimuli first as the animal moves forwards, and accordingly has evolved to contain many of the body's sense organs, able to detect light, chemicals, and gravity. There is often also a collection of nerve cells able to process the information from these sense organs, forming a brain in several phyla and one or more ganglia in others. Acoela The Acoela are basal bilaterians, part of the Xenacoelomorpha. They are small and simple animals, and have very slightly more nerve cells at the head end than elsewhere, not forming a distinct and compact brain. This represents an early stage in cephalization. Flatworms The Platyhelminthes (flatworms) have a more complex nervous system than the Acoela, and are lightly cephalized, for instance having an eyespot above the brain, near the front end. Complex active bodies The philosopher Michael Trestman noted that three bilaterian phyla, namely the arthropods, the molluscs in the shape of the cephalopods, and the chordates, were distinctive in having "complex active bodies", something that the acoels and flatworms did not have. Any such animal, whe
https://en.wikipedia.org/wiki/Computer%20network%20naming%20scheme
In computing, naming schemes are often used for objects connected into computer networks. Naming schemes in computing Server naming is a common tradition: it is more convenient to refer to a machine by name than by its IP address. The CIA named their servers after states. Servers may be named by their role or follow a common theme, such as colors, countries, cities, planets, chemical elements, scientists, etc., for example web-01, web-02, web-03, mail-01, db-01, db-02. If servers are in multiple geographical locations, they may be named by the closest airport code. Airport code example: lax-001 lax-002 arn-001 City-State-Nation example: 3-character unique number 2-character production/development classifier 3-character city ID 2-character state/province/region ID 2-character nation ID Thus, a production server in Minneapolis, Minnesota would be nnn.ps.min.mn.us.example.com, and a development server in Vancouver, BC, would be nnn.ds.van.bc.ca.example.com. Large networks often use a systematic naming scheme, such as using a location (e.g. a department) plus a purpose to generate a name for a computer. For example, a web server in New York may be called "nyc-www-04.xyz.net". However, smaller networks will frequently use a more personalized naming scheme to keep track of their many hosts. Popular naming schemes include trees, planets, rocks, etc. Network naming can be hierarchical in nature, such as the Internet's Domain Name System. Indeed, the Internet employs several universally applicable naming methods: uniform resource name (URN), uniform resource locator (URL), and uniform resource identifier (URI). See also Systematic name Geospatial network Naming convention References External links - "Choosing a Name for Your Computer" - "The Naming of Hosts" Naming schemes Naming conventions in Active Directory URIs, URLs, and URNs: Clarifications and Recommendations 1.0 Naming conventions Network addressing Servers (computing)
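The City-State-Nation scheme above is mechanical enough to generate by code. A minimal Python sketch; the helper name, the zero-padded sequence format, and the example.com domain are illustrative choices, not part of any standard:

```python
def server_name(seq, env, city, region, nation, domain="example.com"):
    """Compose a City-State-Nation hostname such as nnn.ps.min.mn.us.example.com.

    seq    -- unique number, rendered as a 3-character zero-padded field
    env    -- production/development classifier, e.g. 'ps' or 'ds'
    city   -- 3-character city ID; region -- 2-character state/province/region ID
    nation -- 2-character nation ID
    """
    return f"{seq:03d}.{env}.{city}.{region}.{nation}.{domain}"

print(server_name(7, "ps", "min", "mn", "us"))   # a production server in Minneapolis
print(server_name(12, "ds", "van", "bc", "ca"))  # a development server in Vancouver
```

Because the name encodes location and role positionally, sorting a host list groups machines by site and environment for free.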
https://en.wikipedia.org/wiki/Characteristic%20%28algebra%29
In mathematics, the characteristic of a ring R, often denoted char(R), is defined to be the smallest positive number of copies of the ring's multiplicative identity (1) that will sum to the additive identity (0). If no such number exists, the ring is said to have characteristic zero. That is, char(R) is the smallest positive number n such that 1 + ⋯ + 1 (n summands) = 0 if such a number exists, and 0 otherwise. Motivation The special definition of the characteristic zero is motivated by the equivalent definitions characterized in the next section, where the characteristic zero is not required to be considered separately. The characteristic may also be taken to be the exponent of the ring's additive group, that is, the smallest positive integer n such that a + ⋯ + a (n summands) = 0 for every element a of the ring (again, if n exists; otherwise zero). This definition applies in the more general class of rngs; for (unital) rings the two definitions are equivalent due to the distributive law. Equivalent characterizations The characteristic is the natural number n such that nZ is the kernel of the unique ring homomorphism from Z to R. The characteristic is the natural number n such that R contains a subring isomorphic to the factor ring Z/nZ, which is the image of the above homomorphism. When the non-negative integers are partially ordered by divisibility, then 1 is the smallest and 0 is the largest. Then the characteristic of a ring is the smallest value of n for which n · 1 = 0. If nothing "smaller" (in this ordering) than 0 will suffice, then the characteristic is 0. This is the appropriate partial ordering because of such facts as that char(A × B) is the least common multiple of char A and char B, and that no ring homomorphism f : A → B exists unless char B divides char A. The characteristic of a ring R is precisely n if the statement ka = 0 for all a ∈ R implies that k is a multiple of n. Case of rings If R and S are rings and there exists a ring homomorphism R → S, then the characteristic of S divides the characteristic of R. This can sometimes be used to exclude the possibility of certain ring h
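The defining property, the smallest positive n with n · 1 = 0, can be checked directly for small finite rings. A brute-force Python sketch (the function name and search bound are illustrative, and it only applies to rings whose elements support equality comparison):

```python
def characteristic(one, zero, add, limit=10_000):
    """Smallest positive n with 1 + ... + 1 (n copies) == 0, searching up to
    `limit`; returns 0 (characteristic zero) if the bound is never reached.
    A brute-force sketch for finite examples, not a general algorithm.
    """
    total = zero
    for n in range(1, limit + 1):
        total = add(total, one)
        if total == zero:
            return n
    return 0

# Z/12Z: adding the unit 1 to itself returns to 0 after 12 steps.
print(characteristic(1, 0, lambda a, b: (a + b) % 12))  # 12
# The integers never cycle back, so the search reports characteristic 0.
print(characteristic(1, 0, lambda a, b: a + b))         # 0
```

Passing the ring's addition as a function keeps the sketch generic: the same loop verifies char(Z/nZ) = n for any modulus n.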
https://en.wikipedia.org/wiki/Modus%20vivendi
Modus vivendi (plural modi vivendi) is a Latin phrase that means "mode of living" or "way of life". In international relations, it is often used to mean an arrangement or agreement that allows conflicting parties to coexist in peace. In science, it is used to describe lifestyles. Modus means "mode", "way", "method", or "manner". Vivendi means "of living". The phrase is often used to describe informal and temporary arrangements in political affairs. For example, if two sides reach a modus vivendi regarding disputed territories, despite political, historical or cultural incompatibilities, an accommodation of their respective differences is established for the sake of contingency. In diplomacy, a modus vivendi is an instrument for establishing an international accord of a temporary or provisional nature, intended to be replaced by a more substantial and thorough agreement, such as a treaty. Armistices and instruments of surrender are intended to achieve a modus vivendi. Examples The term often refers to Anglo-French relations from the 1815 end of the Napoleonic Wars to the 1904 Entente Cordiale. On 7 January 1948, the United States, Britain and Canada concluded an agreement known as the modus vivendi, which allowed for limited sharing of technical information on nuclear weapons and officially repealed the Quebec Agreement. See also References External links Definition of key terms used in the UN Treaty Collection Behavior Latin political words and phrases
https://en.wikipedia.org/wiki/ReplayGain
ReplayGain is a proposed technical standard published by David Robinson in 2001 to measure and normalize the perceived loudness of audio in computer audio formats such as MP3 and Ogg Vorbis. It allows media players to normalize loudness for individual tracks or albums. This avoids the common problem of having to manually adjust volume levels between tracks when playing audio files from albums that have been mastered at different loudness levels. Although this de facto standard is now formally known as ReplayGain, it was originally known as Replay Gain and is sometimes abbreviated RG. ReplayGain is supported in a large number of media software programs and portable devices. Operation ReplayGain works by first performing a psychoacoustic analysis of an entire audio track or album to measure peak level and perceived loudness. Equal-loudness contours are used to compensate for frequency effects, and statistical analysis is used to account for effects related to time. The difference between the measured perceived loudness and the desired target loudness is calculated; this is considered the ideal replay gain value. Typically, the replay gain and peak level values are then stored as metadata in the audio file. ReplayGain-capable audio players use the replay gain metadata to automatically attenuate or amplify the signal on a per-track or per-album basis such that tracks or albums play at a similar loudness level. The peak level metadata can be used to prevent gain adjustments from inducing clipping in the playback device. Metadata The original ReplayGain proposal specified an 8-byte field in the header of any file. Most implementations now use tags for ReplayGain information. FLAC and Ogg Vorbis use the REPLAYGAIN_* Vorbis comment fields. MP3 files usually use ID3v2. Other formats such as AAC and WMA use their native tag formats with a specially formatted tag entry listing the track's replay gain and peak loudness. ReplayGain utilities usually add metadata to the audio fil
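The playback side described above amounts to converting the stored gain from decibels to a linear factor and, optionally, capping it with the stored peak. A Python sketch; the function name and the specific capping policy are illustrative choices, not taken from the ReplayGain specification:

```python
def apply_replaygain(samples, gain_db, peak):
    """Scale float PCM samples (in [-1.0, 1.0]) by a track's replay gain.

    gain_db and peak stand in for the stored metadata values. Capping the
    scale at 1.0 / peak is one simple way to use the peak field to prevent
    clipping; the exact policy is a player's choice, not part of the spec.
    """
    scale = 10.0 ** (gain_db / 20.0)      # decibels -> linear amplitude factor
    if peak * scale > 1.0:
        scale = 1.0 / peak                # largest clip-free gain for this track
    return [s * scale for s in samples]

quiet_track = [0.25, -0.5]                          # stored peak level: 0.5
louder = apply_replaygain(quiet_track, 20.0, 0.5)   # +20 dB requested, capped
```

In the example the requested +20 dB (a factor of 10) would push the 0.5 peak past full scale, so the cap reduces the factor to 2 and the output just touches, but never exceeds, the digital ceiling.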
https://en.wikipedia.org/wiki/Norton%20AntiVirus
Norton AntiVirus is an anti-virus or anti-malware software product founded by Peter Norton, developed and distributed by Symantec (now Gen Digital) since 1990 as part of its Norton family of computer security products. It uses signatures and heuristics to identify viruses. Other features included in it are e-mail spam filtering and phishing protection. Symantec distributes the product as a download, a boxed copy, and as OEM software. Norton AntiVirus and Norton Internet Security, a related product, held a 61% US retail market share for security suites as of the first half of 2007. Competitors, in terms of market share in this study, include antivirus products from CA, Trend Micro, and Kaspersky Lab. Norton AntiVirus runs on Microsoft Windows, Linux, and macOS. Windows 7 support was in development for versions 2006 through 2008; version 2009 received Windows 7 support through an update, and versions 2010, 2011, and 2012 all natively support Windows 7 without needing an update. Version 12 is the only version fully compatible with Mac OS X Lion. With the 2015 series of products, Symantec made changes in its portfolio and briefly discontinued Norton AntiVirus. This action was later reversed with the introduction of Norton AntiVirus Basic. Origins In May 1989, Symantec launched Symantec Antivirus for the Macintosh (SAM). SAM 2.0, released March 1990, incorporated technology allowing users to easily update SAM to intercept and eliminate new viruses, including many that didn't exist at the time of the program's release. In August 1990 Symantec acquired Peter Norton Computing from Peter Norton. Norton and his company developed various DOS utilities including the Norton Utilities, which did not include antivirus features. Symantec continued the development of the acquired technologies, which are marketed under the name of "Norton", with the tagline "from Symantec". Norton's crossed-arm pose, a registered U.S. trademark, was traditionally featured on Norton product packagin
https://en.wikipedia.org/wiki/Cell%20death
Cell death is the event of a biological cell ceasing to carry out its functions. This may be the result of the natural process of old cells dying and being replaced by new ones, as in programmed cell death, or may result from factors such as diseases, localized injury, or the death of the organism of which the cells are part. Apoptosis or Type I cell-death, and autophagy or Type II cell-death are both forms of programmed cell death, while necrosis is a non-physiological process that occurs as a result of infection or injury. Programmed cell death Programmed cell death (PCD) is cell death mediated by an intracellular program. PCD is carried out in a regulated process, which usually confers advantage during an organism's life-cycle. For example, the differentiation of fingers and toes in a developing human embryo occurs because cells between the fingers apoptose; the result is that the digits separate. PCD serves fundamental functions during both plant and metazoan (multicellular animal) tissue development. Apoptosis Apoptosis is the process of programmed cell death (PCD) that may occur in multicellular organisms. Biochemical events lead to characteristic cell changes (morphology) and death. These changes include blebbing, cell shrinkage, nuclear fragmentation, chromatin condensation, and chromosomal DNA fragmentation. It is now thought that, in a developmental context, cells are induced to positively commit suicide, whilst in a homeostatic context the absence of certain survival factors may provide the impetus for suicide. There appears to be some variation in the morphology and indeed the biochemistry of these suicide pathways; some treading the path of "apoptosis", others following a more generalized pathway to deletion, but both usually being genetically and synthetically motivated. There is some evidence that certain symptoms of "apoptosis" such as endonuclease activation can be spuriously induced without engaging a genetic cascade, however, presumably
https://en.wikipedia.org/wiki/DOSEMU
DOSEMU, stylized as dosemu, is a compatibility layer software package that enables DOS operating systems (e.g., MS-DOS, DR-DOS, FreeDOS) and application software to run atop Linux on x86-based PCs (IBM PC compatible computers). Features It uses a combination of hardware-assisted virtualization features and high-level emulation. It can thus achieve nearly native speed for 8086-compatible DOS operating systems and applications on x86 compatible processors, and for DOS Protected Mode Interface (DPMI) applications on x86 compatible processors as well as on x86-64 processors. DOSEMU includes an 8086 processor emulator for use with real-mode applications in x86-64 long mode. DOSEMU is only available for x86 and x86-64 Linux systems (Linux 3.15 x86-64 systems cannot enter DPMI by default. This is fixed in 3.16). DOSEMU is an option for people who need or want to continue to use legacy DOS software; in some cases virtualisation is good enough to drive external hardware such as device programmers connected to the parallel port. According to its manual, "dosemu" is a user-level program which uses certain special features of the Linux kernel and the 80386 processor to run DOS in a DOS box. The DOS box, relying on a combination of hardware and software, has these abilities: Virtualize all input-output and processor control instructions Supports the word size and addressing modes of the iAPX86 processor family's "real mode", while still running within the full protected mode environment Trap all DOS and BIOS system calls and emulate such calls as needed for proper operation and good performance Simulate a hardware environment over which DOS programs are accustomed to having control. Provide DOS services through native Linux services; for example, dosemu can provide a virtual hard disk drive which is actually a Linux directory hierarchy. API-level support for Packet driver, IPX, Berkeley sockets (dosnet). See also Comparison of platform virtualization software Vir
https://en.wikipedia.org/wiki/Formally%20real%20field
In mathematics, in particular in field theory and real algebra, a formally real field is a field that can be equipped with a (not necessarily unique) ordering that makes it an ordered field. Alternative definitions The definition given above is not a first-order definition, as it requires quantifiers over sets. However, the following criteria can be coded as (infinitely many) first-order sentences in the language of fields and are equivalent to the above definition. A formally real field F is a field that also satisfies one of the following equivalent properties: −1 is not a sum of squares in F. In other words, the Stufe of F is infinite. (In particular, such a field must have characteristic 0, since in a field of characteristic p the element −1 is a sum of 1s.) This can be expressed in first-order logic by ∀x1 (−1 ≠ x1²), ∀x1 ∀x2 (−1 ≠ x1² + x2²), etc., with one sentence for each number of variables. There exists an element of F that is not a sum of squares in F, and the characteristic of F is not 2. If any sum of squares of elements of F equals zero, then each of those elements must be zero. It is easy to see that these three properties are equivalent. It is also easy to see that a field that admits an ordering must satisfy these three properties. A proof that if F satisfies these three properties, then F admits an ordering uses the notion of prepositive cones and positive cones. Suppose −1 is not a sum of squares; then a Zorn's lemma argument shows that the prepositive cone of sums of squares can be extended to a positive cone P ⊆ F. One uses this positive cone P to define an ordering: a ≤ b if and only if b − a belongs to P. Real closed fields A formally real field with no formally real proper algebraic extension is a real closed field. If K is formally real and Ω is an algebraically closed field containing K, then there is a real closed subfield of Ω containing K. A real closed field can be ordered in a unique way, and the non-negative elements are exactly the squares. Notes References Field (ma
https://en.wikipedia.org/wiki/CherryOS
CherryOS was a PowerPC G4 processor emulator for x86 Microsoft Windows platforms, which allowed various Apple Inc. programs to be operated on Windows XP. Announced and made available for pre-orders on October 12, 2004, it was developed by Maui X-Stream (MXS), a startup company based in Lahaina, Hawaii and a subsidiary of Paradise Television. The program encountered a number of launch difficulties its first year, including a poorly-reviewed soft launch in October 2004, wherein Wired Magazine argued that CherryOS used code grafted directly from PearPC, an older open-source emulator. Lead developer Arben Kryeziu subsequently stated that PearPC had provided the inspiration for CherryOS, but "not the work, not the architecture. With their architecture I'd never get the speed." After further development, CherryOS 1.0 was released in its final form on March 8, 2005, with support for CD, DVD, USB, FireWire, and Ethernet. It was described as automatically detecting "hardware and network connections" and allowing "for the use of virtually any OS X-ready application," including Safari and Mail. Though CherryOS 1.0 was estimated to be compatible with approximately 70 percent of PCs, MXS again fielded accusations that it incorporated code from PearPC. MXS argued that CherryOS was "absolutely not" a knockoff, and that though "certain generic code strings and screen verbiage used in Pear PC are also used in CherryOS... they are not proprietary to the Pear PC product." Shortly afterwards the creators of PearPC were reported to be "contemplating" litigation against Maui X-Stream, and on April 6, 2005, CherryOS was announced to be on hold. A day later, CherryOS announced that "due to overwhelming demand, Cherry open source project launches May 1, 2005." History Background and development On October 12, 2004, the emulator CherryOS was announced by Maui X-Stream (MXS), a startup company based in Lahaina, Hawaii and a subsidiary of Paradise Television. At the time MXS was best known for developi
https://en.wikipedia.org/wiki/L%C3%A9vy%27s%20constant
In mathematics Lévy's constant (sometimes known as the Khinchin–Lévy constant) occurs in an expression for the asymptotic behaviour of the denominators of the convergents of continued fractions. In 1935, the Soviet mathematician Aleksandr Khinchin showed that the denominators qn of the convergents of the continued fraction expansions of almost all real numbers satisfy lim n→∞ qn^(1/n) = γ for some constant γ. Soon afterward, in 1936, the French mathematician Paul Lévy found the explicit expression for the constant, namely γ = e^(π²/(12 ln 2)) ≈ 3.275822… The term "Lévy's constant" is sometimes used to refer to π²/(12 ln 2) (the logarithm of the above expression), which is approximately equal to 1.1865691104… The value derives from the asymptotic expectation of the logarithm of the ratio of successive denominators, using the Gauss–Kuzmin distribution. In particular, the ratio x = qn/qn+1 has the asymptotic density function ρ(x) = 1/((1 + x) ln 2) for 0 ≤ x ≤ 1 and zero otherwise. This gives ln γ = ∫₀¹ ln(1/x) ρ(x) dx = π²/(12 ln 2), and hence Lévy's constant γ = e^(π²/(12 ln 2)). The base-10 logarithm of Lévy's constant, which is approximately 0.51532041…, is half of the reciprocal of the limit in Lochs' theorem. See also Khinchin's constant References Further reading External links Continued fractions Mathematical constants Paul Lévy (mathematician)
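Khinchin's theorem can be illustrated numerically: computing the convergent denominators of many random reals and averaging (ln qn)/n approaches π²/(12 ln 2). A Python sketch, where the helper names are illustrative and floating-point precision limits the usable depth n:

```python
import math
import random

LEVY = math.exp(math.pi ** 2 / (12 * math.log(2)))   # Lévy's constant, about 3.2758

def denominators(x, n):
    """Denominators q_1..q_n of the continued-fraction convergents of x in (0, 1)."""
    q_prev, q = 0, 1                 # q_{-1} and q_0 of the standard recurrence
    qs = []
    for _ in range(n):
        if x == 0.0:                 # rational input (or precision exhausted)
            break
        a = int(1.0 / x)             # next partial quotient
        x = 1.0 / x - a
        q_prev, q = q, a * q + q_prev
        qs.append(q)
    return qs

# Average of (ln q_n)/n over many random reals approximates ln(LEVY).
random.seed(0)
depth, trials = 25, 400
acc = 0.0
for _ in range(trials):
    qs = denominators(random.random(), depth)
    acc += math.log(qs[-1]) / len(qs)
estimate = math.exp(acc / trials)
print(estimate, "vs", LEVY)
```

With only 25 levels of the recurrence (a depth chosen to stay within double precision) the estimate already lands close to 3.2758, consistent with the almost-everywhere limit.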
https://en.wikipedia.org/wiki/Ernst%20Alexanderson
Ernst Frederick Werner Alexanderson (January 25, 1878 – May 14, 1975) was a Swedish-American electrical engineer who was a pioneer in radio and television development. He invented the Alexanderson alternator, an early radio transmitter used between 1906 and the 1930s for longwave long-distance radio transmission. Alexanderson also created the amplidyne, a direct current amplifier used during the Second World War for controlling anti-aircraft guns. Background Alexanderson was born in Uppsala, Sweden. He studied at the University of Lund (1896–97) and was educated at the Royal Institute of Technology in Stockholm and the Technische Hochschule in Berlin, Germany. He emigrated to the United States in 1902 and spent much of his life working for General Electric and the Radio Corporation of America. Engineering work Alexanderson designed the Alexanderson alternator, an early longwave radio transmitter, one of the first devices which could transmit modulated audio (sound) over radio waves. He had been employed at General Electric for only a short time when GE received an order from Canadian-born professor and researcher Reginald Fessenden, then working for the US Weather Bureau, for a specialized alternator with a much higher frequency than others in existence at that time, for use as a radio transmitter. Fessenden had been working on the problem of transmitting sound by radio waves, and had concluded that a new type of radio transmitter was needed, a continuous wave transmitter. Designing a machine that would rotate fast enough to produce radio waves proved a formidable challenge. Alexanderson's family were convinced the huge spinning rotors would fly apart and kill him, and he set up a sandbagged bunker from which to test them. In the summer of 1906, Alexanderson's first effort, a 50 kHz alternator, was installed in Fessenden's radio station in Brant Rock, Massachusetts. By fall its output had been improved to 500 watts and 75 kHz. On Christmas Eve, 1906, Fessende
https://en.wikipedia.org/wiki/Phytoremediation
Phytoremediation technologies use living plants to clean up soil, air and water contaminated with hazardous contaminants. It is defined as "the use of green plants and the associated microorganisms, along with proper soil amendments and agronomic techniques to either contain, remove or render toxic environmental contaminants harmless". The term is an amalgam of the Greek phyto (plant) and Latin remedium (restoring balance). Although attractive for its cost, phytoremediation has not been demonstrated to redress any significant environmental challenge to the extent that contaminated space has been reclaimed. Phytoremediation is proposed as a cost-effective plant-based approach to environmental remediation that takes advantage of the ability of plants to concentrate elements and compounds from the environment and to detoxify various compounds. The concentrating effect results from the ability of certain plants called hyperaccumulators to bioaccumulate chemicals. The remediation effect is quite different. Toxic heavy metals cannot be degraded, but organic pollutants can be, and are generally the major targets for phytoremediation. Several field trials confirmed the feasibility of using plants for environmental cleanup. Background Soil remediation is an expensive and complicated process. Traditional methods involve removal of the contaminated soil followed by treatment and return of the treated soil. Phytoremediation could in principle be a more cost-effective solution. Phytoremediation may be applied to polluted soil or static water environments. This technology has been increasingly investigated and employed at sites with soils contaminated with heavy metals such as cadmium, lead, aluminum, arsenic and antimony. These metals can cause oxidative stress in plants, destroy cell membrane integrity, interfere with nutrient uptake, inhibit photosynthesis and decrease plant chlorophyll. Successful uses of phytoremediation include the restoration of abandoned metal mine
https://en.wikipedia.org/wiki/KSD-64
The KSD-64[A] Crypto Ignition Key (CIK) is an NSA-developed EEPROM chip packed in a plastic case that looks like a toy key. The model number is due to its storage capacity — 64 kibibits (65,536 bits, or 8 KiB), enough to store multiple encryption keys. Most frequently it was used in key-splitting applications: either the encryption device or the KSD-64 alone is worthless, but together they can be used to make encrypted connections. It was also used alone as a fill device for transfer of key material, as for the initial seed key loading of an STU-III secure phone. Newer systems, such as the Secure Terminal Equipment, use the Fortezza PC card as a security token instead of the KSD-64. The KSD-64 was withdrawn from the market in 2014. Over one million were produced in its 30-year life. Operation The CIK is a small device which can be loaded with a 128-bit sequence which is different for each user. When the device is removed from the machine, that sequence is automatically added (mod 2) to the unique key in the machine, thus leaving it stored in encrypted form. When it is reattached, the unique key in the machine is decrypted, and it is now ready to operate in the normal way. The analogy with an automobile ignition key is close, thus the name. If the key is lost, the user is still safe unless the finder or thief can match it with the user's machine. In case of loss, the user gets a new CIK, effectively changing the lock in the cipher machine, and gets back in business. The ignition key sequence can be provided in several ways. In the first crypto-equipment to use the idea (the KY-70), the CIK is loaded with its sequence at NSA and supplied to each user like any other item of keying material. Follow-on applications (as in the STU-II) use an even more clever scheme. The CIK device is simply an empty register which can be supplied with its unique sequence from the randomizer function of the parent machine itself. Not only that, each time the device is removed and re-in
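The key-splitting step, adding the CIK sequence to the machine's unique key mod 2, is ordinary bitwise XOR and is easy to sketch. In this illustrative Python sketch the function and variable names are invented for the example, not NSA terminology:

```python
import secrets

def split_key(machine_key: bytes):
    """Split a machine's unique key into a CIK sequence and a stored residue.

    XOR is addition mod 2, bit for bit, as in the scheme described above.
    Either part alone is statistically independent of the key; both together
    recover it exactly.
    """
    cik = secrets.token_bytes(len(machine_key))        # per-user random sequence
    stored = bytes(m ^ c for m, c in zip(machine_key, cik))
    return cik, stored

def rejoin(cik: bytes, stored: bytes) -> bytes:
    """Reinserting the CIK XORs its sequence back in, recovering the key."""
    return bytes(c ^ s for c, s in zip(cik, stored))

machine_key = secrets.token_bytes(16)                  # a 128-bit unique key
cik, stored = split_key(machine_key)
assert rejoin(cik, stored) == machine_key
```

Because the CIK sequence is uniformly random, the stored residue is a one-time-pad encryption of the machine key, which is why losing either half on its own is harmless.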
https://en.wikipedia.org/wiki/Sign%20bit
In computer science, the sign bit is a bit in a signed number representation that indicates the sign of a number. Although only signed numeric data types have a sign bit, it is invariably located in the most significant bit position, so the term may be used interchangeably with "most significant bit" in some contexts. Almost always, if the sign bit is 0, the number is non-negative (positive or zero). If the sign bit is 1 then the number is negative, although formats other than two's complement integers allow a signed zero: distinct "positive zero" and "negative zero" representations, the latter of which does not correspond to the mathematical concept of a negative number. In the two's complement representation, the sign bit has the weight −2^(w−1) where w is the number of bits. In the ones' complement representation, the most negative value is −(2^(w−1) − 1), but there are two representations of zero, one for each value of the sign bit. In a sign-and-magnitude representation of numbers, the value of the sign bit determines whether the numerical value is positive or negative. Floating-point numbers, such as IEEE format, IBM format, VAX format, and even the format used by the Zuse Z1 and Z3 use a sign-and-magnitude representation. When using a complement representation, to convert a signed number to a wider format the additional bits must be filled with copies of the sign bit in order to preserve its numerical value, a process called sign extension or sign propagation. References Binary arithmetic Computer arithmetic Sign (mathematics)
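Sign extension of a two's-complement value reduces to a two-line bit manipulation: the magnitude bits count positively and the sign bit counts with its negative weight. A Python sketch (the helper name is illustrative):

```python
def sign_extend(value, bits):
    """Interpret the low `bits` bits of `value` as a two's-complement integer.

    The sign bit carries weight -(2 ** (bits - 1)); the remaining bits carry
    their ordinary positive weights, so subtracting the sign bit's
    contribution yields the widened (Python arbitrary-precision) value.
    """
    sign = 1 << (bits - 1)
    return (value & (sign - 1)) - (value & sign)

print(sign_extend(0b1111, 4))   # -1: sign bit set
print(sign_extend(0b0111, 4))   # 7: sign bit clear
print(sign_extend(0x80, 8))     # -128, the most negative 8-bit value
```

This mirrors what hardware does when moving, say, an 8-bit register into a 32-bit one: the high bits are filled with copies of the sign bit, which is exactly the arithmetic above.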
https://en.wikipedia.org/wiki/Exponent%20bias
In IEEE 754 floating-point numbers, the exponent is biased in the engineering sense of the word – the value stored is offset from the actual value by the exponent bias, also called a biased exponent. Biasing is done because exponents have to be signed values in order to be able to represent both tiny and huge values, but two's complement, the usual representation for signed values, would make comparison harder. To solve this problem the exponent is stored as an unsigned value which is suitable for comparison, and when being interpreted it is converted into an exponent within a signed range by subtracting the bias. By arranging the fields such that the sign bit takes the most significant bit position, the biased exponent takes the middle position, then the significand will be the least significant bits and the resulting value will be ordered properly. This is the case whether or not it is interpreted as a floating-point or integer value. The purpose of this is to enable high speed comparisons between floating-point numbers using fixed-point hardware. To calculate the bias for an arbitrarily sized floating-point number, apply the formula 2^(k−1) − 1, where k is the number of bits in the exponent. When interpreting the floating-point number, the bias is subtracted to retrieve the actual exponent. For a half-precision number, the exponent is stored in the range 1 .. 30 (0 and 31 have special meanings), and is interpreted by subtracting the bias for a 5-bit exponent (15) to get an exponent value in the range −14 .. +15. For a single-precision number, the exponent is stored in the range 1 .. 254 (0 and 255 have special meanings), and is interpreted by subtracting the bias for an 8-bit exponent (127) to get an exponent value in the range −126 .. +127. For a double-precision number, the exponent is stored in the range 1 .. 2046 (0 and 2047 have special meanings), and is interpreted by subtracting the bias for an 11-bit exponent (1023) to get an exponent value in the ra
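The double-precision layout described above can be inspected directly by reinterpreting a float's bits as an integer. A Python sketch using the standard struct module (the function name is illustrative):

```python
import struct

def double_fields(x):
    """Unpack sign, stored (biased) exponent, and fraction of an IEEE 754 double."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]   # reinterpret as uint64
    sign = bits >> 63
    biased = (bits >> 52) & 0x7FF            # 11-bit stored exponent, range 0..2047
    fraction = bits & ((1 << 52) - 1)
    actual = biased - 1023                   # bias = 2**(11 - 1) - 1 = 1023
    return sign, biased, actual, fraction

print(double_fields(1.0))    # (0, 1023, 0, 0): actual exponent 0
print(double_fields(-2.0))   # (1, 1024, 1, 0): actual exponent 1
```

Printing the fields for a few values shows the ordering property in action: larger magnitudes produce larger biased-exponent fields, so the raw bit patterns of positive doubles sort the same way the numbers do.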
https://en.wikipedia.org/wiki/Aerial%20survey
Aerial survey is a method of collecting geomatics or other imagery by using airplanes, helicopters, UAVs, balloons or other aerial methods. Typical types of data collected include aerial photography, Lidar, remote sensing (using various visible and invisible bands of the electromagnetic spectrum, such as infrared, gamma, or ultraviolet) and also geophysical data (such as aeromagnetic surveys and gravity). It can also refer to the chart or map made by analysing a region from the air. Aerial survey should be distinguished from satellite imagery technologies because of its better resolution and quality, and its independence from atmospheric conditions that can negatively impact and obscure satellite observation. Today, aerial survey is sometimes recognized as a synonym for aerophotogrammetry, part of photogrammetry where the camera is placed in the air. Measurements on aerial images are provided by photogrammetric technologies and methods. Aerial surveys can provide information on many things not visible from the ground. Terms used in aerial survey exposure station or air station: the position of the optical center of the camera at the moment of exposure. flying height: the elevation of the exposure station above the datum (usually mean sea level). altitude: the vertical distance of the aircraft above the Earth's surface. tilt: the angle between the aerial camera and the horizontal axis perpendicular to the line of flight. tip: the angle between the aerial camera and the line of flight. principal point: the point of intersection of the optical axis of the aerial camera with the photographical plane. isocentre: the point on the aerial photograph in which the bisector of the angle of tilt meets the photograph. nadir point: the image of the nadir, i.e. the point on the aerial photograph where a plumbline dropped from the front nodal point pierces the photograph. scale: the ratio of the focal length of the camera objective and the distance of the exposure station from the ground. azimuth: the clockwi
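The glossary's definition of scale, the focal length divided by the distance of the exposure station above the ground, translates into a one-line calculation. A Python sketch (the function name and the metre-based units are illustrative):

```python
def photo_scale_denominator(focal_length_m, flying_height_m, terrain_elev_m=0.0):
    """Nominal scale of a vertical aerial photograph as a 1:n denominator.

    Scale = f / (H - h), where f is the camera focal length, H the flying
    height above the datum, and h the terrain elevation; the 1:n denominator
    is therefore (H - h) / f.
    """
    return (flying_height_m - terrain_elev_m) / focal_length_m

# A 152 mm mapping camera flown 1520 m above the terrain yields 1:10000 imagery.
print(round(photo_scale_denominator(0.152, 1520.0)))
```

The same camera flown at 2280 m above a 760 m plateau gives the identical 1:10000 scale, which is why flying height is always quoted relative to the terrain when planning photo coverage.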
https://en.wikipedia.org/wiki/Lordosis%20behavior
Lordosis behavior (), also known as mammalian lordosis (Greek lordōsis, from lordos "bent backward") or presenting, is the naturally occurring body posture for sexual receptivity to copulation present in females of most mammals including rodents, elephants, and cats. The primary characteristics of the behavior are a lowering of the forelimbs but with the rear limbs extended and hips raised, ventral arching of the spine and a raising, or sideward displacement, of the tail. During lordosis, the spine curves dorsoventrally so that its apex points towards the abdomen. Description Lordosis is a reflex action that causes many non-primate female mammals to adopt a body position that is often crucial to reproductive behavior. The posture moves the pelvic tilt in an anterior direction, with the posterior pelvis rising up, the bottom angling backward and the front angling downward. Lordosis aids in copulation as it elevates the hips, thereby facilitating penetration by the penis. It is commonly seen in female mammals during estrus (being "in heat"). Lordosis occurs during copulation itself and in some species, like the cat, during pre-copulatory behavior. Neurobiology The lordosis reflex arc is hardwired in the spinal cord, at the level of the lumbar and sacral vertebrae (L1, L2, L5, L6 and S1). In the brain, several regions modulate the lordosis reflex. The vestibular nuclei and the cerebellum, via the vestibular tract, send information which makes it possible to coordinate the lordosis reflex with postural balance. More importantly, the ventromedial hypothalamus sends projections that inhibit the reflex at the spinal level, so it is not activated at all times. Sex hormones control reproduction and coordinate sexual activity with the physiological state. Schematically, at the breeding season, and when an ovum is available, hormones (especially estrogen) simultaneously induce ovulation and estrus (heat). Under the action of estrogen in the hypothalamus, the lordosis reflex
https://en.wikipedia.org/wiki/Slotket
In computer hardware terminology, slotkets, also known as slockets, (both short for slot to socket adapter) are adapters that allow socket-based microprocessors to be used on slot-based motherboards. Slotkets were first created to allow the use of Socket 8 Pentium Pro processors on Slot 1 motherboards. Later, they became more popular for inserting Socket 370 Intel Celerons into Slot 1 based motherboards. This lowered costs for computer builders, especially with dual processor machines. High-end motherboards accepting two Slot 1 processors (usually Pentium 2) were widely available, but double-socketed motherboards for the less expensive Socket 370 Celerons were not. The slotkets remained popular in the transition period from Slot to Socket-based Pentium III processors by allowing CPU upgrades in existing Slot 1 motherboards. Slotkets were never introduced to take advantage of the AMD Athlon processors' transition from the Slot A form factor to the Socket A form factor. Adapters that go the other way around (from socket-based motherboards to slot-based CPUs) have never been introduced, because Socket 8 based motherboards do not support the higher clock frequencies of Slot 1 based processors. Today, slotkets have largely disappeared, as Intel and AMD have not manufactured CPUs in slot form factors since 1999. See also CPU socket External links How to Install a Slocket CPU sockets
https://en.wikipedia.org/wiki/Virtual%20team
A virtual team (also known as a geographically dispersed team, distributed team, or remote team) usually refers to a group of individuals who work together from different geographic locations and rely on communication technology such as email, instant messaging, and video or voice conferencing services in order to collaborate. The term can also refer to groups or teams that work together asynchronously or across organizational levels. Powell, Piccoli and Ives (2004) define virtual teams as "groups of geographically, organizationally and/or time dispersed workers brought together by information and telecommunication technologies to accomplish one or more organizational tasks." As documented by Gibson (2020), virtual teams grew in importance and number during 2000-2020, particularly in light of the 2020 COVID-19 pandemic which forced many workers to collaborate remotely with each other as they worked from home. As the proliferation of fiber optic technology has significantly increased the scope of off-site communication, there has been a tremendous increase in both the use of virtual teams and scholarly attention devoted to understanding how to make virtual teams more effective (see Stanko & Gibson, 2009; Hertel, Geister & Konradt, 2005; and Martins, Gilson & Maynard, 2004 for reviews). When utilized successfully, virtual teams allow companies to procure the best expertise without geographical restrictions, to integrate information, knowledge, and resources from a broad variety of contexts within the same team, and to acquire and apply knowledge to critical tasks in global firms. According to Hambley, O'Neil, & Kline (2007), "virtual teams require new ways of working across boundaries through systems, processes, technology, and people, which requires effective leadership." Such work often involves learning processes such as integrating and sharing different location-specific knowledge and practices, which must work in concert for the multi-unit firm to be aligned.
https://en.wikipedia.org/wiki/Total%20derivative
In mathematics, the total derivative of a function f at a point a is the best linear approximation near this point of the function with respect to its arguments. Unlike partial derivatives, the total derivative approximates the function with respect to all of its arguments, not just a single one. In many situations, this is the same as considering all partial derivatives simultaneously. The term "total derivative" is primarily used when f is a function of several variables, because when f is a function of a single variable, the total derivative is the same as the ordinary derivative of the function. The total derivative as a linear map Let U ⊆ R^n be an open subset. Then a function f : U → R^m is said to be (totally) differentiable at a point a ∈ U if there exists a linear transformation df_a : R^n → R^m such that lim_{x → a} ‖f(x) − f(a) − df_a(x − a)‖ / ‖x − a‖ = 0. The linear map df_a is called the (total) derivative or (total) differential of f at a. Other notations for the total derivative include Df(a) and f′(a). A function is (totally) differentiable if its total derivative exists at every point in its domain. Conceptually, the definition of the total derivative expresses the idea that df_a is the best linear approximation to f at the point a. This can be made precise by quantifying the error in the linear approximation determined by df_a. To do so, write f(a + h) = f(a) + df_a(h) + ε(h), where ε(h) equals the error in the approximation. To say that the derivative of f at a is df_a is equivalent to the statement ε(h) = o(‖h‖), where o is little-o notation and indicates that ε(h) is much smaller than ‖h‖ as h → 0. The total derivative df_a is the unique linear transformation for which the error term is this small, and this is the sense in which it is the best linear approximation to f. The function f is differentiable if and only if each of its components f_i : U → R is differentiable, so when studying total derivatives, it is often possible to work one coordinate at a time in the codomain. However, the same is not true of the coordinates in the domain. It is true that if f is differentiable at a, then each partial derivative ∂f/∂x_i exists at a. The converse
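The defining limit can be checked numerically for a concrete map. In the sketch below, the function f(x, y) = (xy, x + y²), the point a = (1, 2), and the hand-computed Jacobian are all chosen purely for illustration: the ratio of the approximation error to ‖h‖ should shrink toward zero as h does.

```python
import math

def f(x, y):
    # An example map from R^2 to R^2 (chosen for illustration).
    return (x * y, x + y * y)

# Jacobian of f at a = (1, 2), computed by hand: rows are gradients.
J = [[2.0, 1.0],   # d(x*y)   = (y, x)   evaluated at (1, 2)
     [1.0, 4.0]]   # d(x+y^2) = (1, 2y)  evaluated at (1, 2)

a = (1.0, 2.0)
for t in (1e-2, 1e-4, 1e-6):
    h = (t, -t)                                  # shrinking displacement
    fx = f(a[0] + h[0], a[1] + h[1])
    lin = (f(*a)[0] + J[0][0] * h[0] + J[0][1] * h[1],
           f(*a)[1] + J[1][0] * h[0] + J[1][1] * h[1])
    err = math.hypot(fx[0] - lin[0], fx[1] - lin[1])
    print(err / math.hypot(*h))                  # ratio tends to 0
```

For this f the error is exactly quadratic in t, so each printed ratio is about t itself, illustrating that the Jacobian is the (unique) best linear approximation.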
https://en.wikipedia.org/wiki/Glossary%20of%20cryptographic%20keys
This glossary lists types of keys as the term is used in cryptography, as opposed to door locks. Terms that are primarily used by the U.S. National Security Agency are marked (NSA). For classification of keys according to their usage see cryptographic key types. 40-bit key - key with a length of 40 bits, once the upper limit of what could be exported from the U.S. and other countries without a license. Considered very insecure. See key size for a discussion of this and other lengths. authentication key - Key used in a keyed-hash message authentication code, or HMAC. benign key - (NSA) a key that has been protected by encryption or other means so that it can be distributed without fear of its being stolen. Also called BLACK key. content-encryption key (CEK) a key that may be further encrypted using a KEK, where the content may be a message, audio, image, video, executable code, etc. crypto ignition key An NSA key storage device (KSD-64) shaped to look like an ordinary physical key. cryptovariable - NSA calls the output of a stream cipher a key or key stream. It often uses the term cryptovariable for the bits that control the stream cipher, what the public cryptographic community calls a key. data encryption key (DEK) used to encrypt the underlying data. derived key - keys computed by applying a predetermined hash algorithm or key derivation function to a password or, better, a passphrase. DRM key - A key used in Digital Rights Management to protect media electronic key - (NSA) key that is distributed in electronic (as opposed to paper) form. See EKMS. ephemeral key - A key that only exists within the lifetime of a communication session. expired key - Key that was issued for a use in a limited time frame (cryptoperiod in NSA parlance) which has passed and, hence, the key is no longer valid. FIREFLY key - (NSA) keys used in an NSA system based on public key cryptography. Key derivation function (KDF) - function used to derive a key from a secret value,
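Two of the entries above, the authentication key used with an HMAC and the derived key produced by a key derivation function, can be illustrated with Python's standard library. The key, message, salt, and iteration count below are illustrative values only, not recommendations.

```python
import hashlib
import hmac

# Authentication key: used in a keyed-hash message authentication code.
auth_key = b"example-authentication-key"         # illustrative key
tag = hmac.new(auth_key, b"message to authenticate",
               hashlib.sha256).hexdigest()
print(tag)            # 64 hex characters (SHA-256 output)

# Derived key: computed from a passphrase via a key derivation function
# (PBKDF2 here); the salt and iteration count are illustrative.
derived = hashlib.pbkdf2_hmac("sha256",
                              b"correct horse battery staple",  # passphrase
                              b"example-salt", 100_000)
print(len(derived))   # 32 bytes, the SHA-256 digest size
```

As the glossary notes, deriving from a passphrase rather than a short password gives the KDF more entropy to work with.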
https://en.wikipedia.org/wiki/Finite-state%20transducer
A finite-state transducer (FST) is a finite-state machine with two memory tapes, following the terminology for Turing machines: an input tape and an output tape. This contrasts with an ordinary finite-state automaton, which has a single tape. An FST is a type of finite-state automaton (FSA) that maps between two sets of symbols. An FST is more general than an FSA. An FSA defines a formal language by defining a set of accepted strings, while an FST defines relations between sets of strings. An FST reads a set of strings on the input tape and generates a set of relations on the output tape. An FST can be thought of as a translator or relater between strings in a set. In morphological parsing, an example would be inputting a string of letters into the FST; the FST would then output a string of morphemes. Overview An automaton can be said to recognize a string if we view the content of its tape as input. In other words, the automaton computes a function that maps strings into the set {0,1}. Alternatively, we can say that an automaton generates strings, which means viewing its tape as an output tape. On this view, the automaton generates a formal language, which is a set of strings. The two views of automata are equivalent: the function that the automaton computes is precisely the indicator function of the set of strings it generates. The class of languages generated by finite automata is known as the class of regular languages. The two tapes of a transducer are typically viewed as an input tape and an output tape. On this view, a transducer is said to transduce (i.e., translate) the contents of its input tape to its output tape, by accepting a string on its input tape and generating another string on its output tape. It may do so nondeterministically and it may produce more than one output for each input string. A transducer may also produce no output for a given input string, in which case it is said to reject the input. In general, a transducer c
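As a concrete sketch of the morphological-parsing example, here is a tiny deterministic transducer in Python. The state names, the "+PL" plural morpheme tag, and the transition table are invented for illustration; they are not a standard analyzer.

```python
def transduce(transitions, finals, string, state="q0"):
    """Run a deterministic FST: consume one input symbol per step and
    emit that transition's output; return None if the input is rejected."""
    out = []
    for sym in string:
        if (state, sym) not in transitions:
            return None                          # no transition: reject
        state, emitted = transitions[(state, sym)]
        out.append(emitted)
    return "".join(out) if state in finals else None

# Toy lexicon: copy every letter, but rewrite a trailing "s" as the
# plural morpheme tag "+PL".
copy_letters = [c for c in "abcdefghijklmnopqrstuvwxyz" if c != "s"]
t = {("q0", c): ("q0", c) for c in copy_letters}
t[("q0", "s")] = ("q1", "+PL")                   # q1 has no outgoing arcs
finals = {"q0", "q1"}

print(transduce(t, finals, "cats"))              # cat+PL
print(transduce(t, finals, "cat"))               # cat
```

The same machinery, read relationally, defines the relation {(cats, cat+PL), (cat, cat), …} between input and output strings.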
https://en.wikipedia.org/wiki/Median%20%28geometry%29
In geometry, a median of a triangle is a line segment joining a vertex to the midpoint of the opposite side, thus bisecting that side. Every triangle has exactly three medians, one from each vertex, and they all intersect each other at the triangle's centroid. In the case of isosceles and equilateral triangles, a median bisects any angle at a vertex whose two adjacent sides are equal in length. The concept of a median extends to tetrahedra. Relation to center of mass Each median of a triangle passes through the triangle's centroid, which is the center of mass of an infinitely thin object of uniform density coinciding with the triangle. Thus the object would balance on the intersection point of the medians. The centroid is twice as close along any median to the side that the median intersects as it is to the vertex it emanates from. Equal-area division Each median divides the area of the triangle in half; hence the name, and hence a triangular object of uniform density would balance on any median. (Any other lines which divide the area of the triangle into two equal parts do not pass through the centroid.) The three medians divide the triangle into six smaller triangles of equal area. Proof of equal-area property Consider a triangle ABC. Let D be the midpoint of BC, E be the midpoint of CA, F be the midpoint of AB, and O be the centroid (most commonly denoted G). By definition, BD = DC. Thus [ABD] = [ACD] and [OBD] = [OCD], where [XYZ] represents the area of triangle XYZ; these hold because in each case the two triangles have bases of equal length and share a common altitude from the (extended) base, and a triangle's area equals one-half its base times its height. We have: [AOB] = [ABD] − [OBD] and [AOC] = [ACD] − [OCD]. Thus, [AOB] = [AOC], and the same argument applied to the other medians gives [AOB] = [BOC]. Since [AOB] + [AOC] + [BOC] = [ABC], therefore, [AOB] = [AOC] = [BOC] = [ABC]/3. Using the same method, one can show that [AOF] = [FOB], [BOD] = [DOC] and [COE] = [EOA], so each of the six smaller triangles has area [ABC]/6. Three congruent triangles In 2014 Lee Sallows discovered the following theorem: The medians of any triangle dissect it into six equal area smaller triangles as in the figure above where three adjacent pairs of triangles meet at the midpoints D, E and F. If the
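The equal-area property is easy to verify numerically. The sketch below uses the shoelace formula for triangle area; the example triangle is arbitrary.

```python
def area(p, q, r):
    """Area of triangle pqr via the shoelace (cross-product) formula."""
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (r[0] - p[0]) * (q[1] - p[1])) / 2

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)        # an arbitrary triangle
D = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)          # midpoint of BC
E = ((A[0] + C[0]) / 2, (A[1] + C[1]) / 2)          # midpoint of CA
F = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)          # midpoint of AB
G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)  # centroid

# The three medians cut ABC into six triangles of equal area:
parts = [area(A, F, G), area(F, B, G), area(B, D, G),
         area(D, C, G), area(C, E, G), area(E, A, G)]
print(parts)  # each equals area(A, B, C) / 6
```

For this triangle, area(A, B, C) = 6 and each of the six parts comes out to 1, up to floating-point rounding.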
https://en.wikipedia.org/wiki/Grothendieck%20universe
In mathematics, a Grothendieck universe is a set U with the following properties: If x is an element of U and if y is an element of x, then y is also an element of U. (U is a transitive set.) If x and y are both elements of U, then {x, y} is an element of U. If x is an element of U, then P(x), the power set of x, is also an element of U. If {x_α}_{α∈I} is a family of elements of U, and if I is an element of U, then the union ⋃_{α∈I} x_α is an element of U. A Grothendieck universe is meant to provide a set in which all of mathematics can be performed. (In fact, uncountable Grothendieck universes provide models of set theory with the natural ∈-relation, natural powerset operation etc.). Elements of a Grothendieck universe are sometimes called small sets. The idea of universes is due to Alexander Grothendieck, who used them as a way of avoiding proper classes in algebraic geometry. The existence of a nontrivial Grothendieck universe goes beyond the usual axioms of Zermelo–Fraenkel set theory; in particular it would imply the existence of strongly inaccessible cardinals. Tarski–Grothendieck set theory is an axiomatic treatment of set theory, used in some automatic proof systems, in which every set belongs to a Grothendieck universe. The concept of a Grothendieck universe can also be defined in a topos. Properties As an example, we will prove an easy proposition. Proposition. If x ∈ U and y ⊆ x, then y ∈ U. Proof. y ∈ P(x) because y ⊆ x. P(x) ∈ U because x ∈ U, so y ∈ U by transitivity. It is similarly easy to prove that any Grothendieck universe U contains: All singletons of each of its elements, All products of all families of elements of U indexed by an element of U, All disjoint unions of all families of elements of U indexed by an element of U, All intersections of all families of elements of U indexed by an element of U, All functions between any two elements of U, and All subsets of U whose cardinal is an element of U. In particular, it follows from the last axiom that if U is non-empty, it must contain all of its finite subsets and a
https://en.wikipedia.org/wiki/Place%20and%20route
Place and route is a stage in the design of printed circuit boards, integrated circuits, and field-programmable gate arrays. As implied by the name, it is composed of two steps, placement and routing. The first step, placement, involves deciding where to place all electronic components, circuitry, and logic elements in a generally limited amount of space. This is followed by routing, which decides the exact design of all the wires needed to connect the placed components. This step must implement all the desired connections while following the rules and limitations of the manufacturing process. Place and route is used in several contexts: Printed circuit boards, during which components are graphically placed on the board and the wires drawn between them Integrated circuits, during which a layout of a larger block of the circuit or the whole circuit is created from layouts of smaller sub-blocks FPGAs, during which logic elements are placed and interconnected on the grid of the FPGA These processes are similar at a high level, but the actual details are very different. With the large sizes of modern designs, this operation is usually performed by electronic design automation (EDA) tools. In all these contexts, the final result when placing and routing is finished is the "layout", a geometric description of the location and rotation of each part, and the exact path of each wire connecting them. Occasionally some people call the entire place-and-route process "layout". Printed circuit board The design of a printed circuit board comes after the creation of a schematic and generation of a netlist. The generated netlist is then read into a layout tool and associated with the footprints of the devices from a library. Placing and routing the devices can now start. Placing and routing is generally done in two steps. Placing the components comes first, then routing the connections between the components. The placement of components is not absolute during the routing
https://en.wikipedia.org/wiki/Remote%20access%20service
A remote access service (RAS) is any combination of hardware and software that enables remote access to tools or information that typically reside on a network of IT devices. A remote access service connects a client to a host computer, known as a remote access server. The most common approach to this service is remote control of a computer by using another device over the internet or another network connection. Here are the connection steps: The user dials into a PC at the office. Then the office PC logs into a file server where the needed information is stored. The remote PC takes control of the office PC's monitor and keyboard, allowing the remote user to view and manipulate information, execute commands, and exchange files. Many computer manufacturers and large businesses' help desks use this service widely for technical troubleshooting of their customers' problems. As a result, a variety of professional first-party, third-party, open-source, and freeware remote desktop applications are available, some of which are cross-platform across various versions of Windows, macOS, UNIX, and Linux. Examples of remote desktop programs include LogMeIn and TeamViewer. To use RAS from a remote node, a RAS client program is needed, or any PPP client software. Most remote control programs work with RAS. PPP is a set of industry standard framing and authentication protocols that enable remote access. Microsoft Remote Access Server (RAS) is the predecessor to Microsoft Routing and Remote Access Server (RRAS). RRAS is a Microsoft Windows Server feature that allows Microsoft Windows clients to remotely access a Microsoft Windows network. History The term was originally coined by Microsoft when referring to their built-in Windows NT remote access tools. RAS is a service provided by Windows NT which allows most of the services which would be available on a network to be accessed over a modem link. The service includes support for dialup and logon, presents the same network interface as
https://en.wikipedia.org/wiki/Voice%20over%20WLAN
Voice over Wireless LAN (VoWLAN), also Voice over WiFi (VoWiFi), is the use of a wireless broadband network according to the IEEE 802.11 standards for the purpose of vocal conversation. In essence, it is Voice over IP (VoIP) over a Wi-Fi network. In most cases, the Wi-Fi network and voice components supporting the voice system are privately owned. VoWLAN can be conducted over any Internet accessible device, including a laptop, PDA or VoWLAN units which look and function like DECT and cellphones. Just like for IP-DECT, the VoWLAN's main advantages to consumers are cheaper local and international calls, free calls to other VoWLAN units and a simplified integrated billing of both phone and Internet service providers. Although VoWLAN and 3G have certain feature similarities, VoWLAN is different in the sense that it uses a wireless internet network (typically 802.11) rather than a cellular network. Both VoWLAN and 3G are used in different ways, although with a femtocell the two can deliver similar service to users and can be considered alternatives. Applications For a single-location organisation, VoWLAN enables use of an existing Wi-Fi network for low-cost (or free) VoIP communication, in a similar manner to land mobile radio or walkie-talkie systems, with push to talk and emergency broadcast channels. It is also used across multiple locations for mobile workers such as delivery drivers; these workers rely on 3G-type services whereby a cellular company provides data access between the handheld device and the company's back-end network. Benefits A voice over WLAN system offers several benefits to organizations, such as hospitals and warehouses. Such advantages include increased mobility and cost savings. For instance, nurses and doctors within a hospital can maintain voice communications at any time at less cost, compared to cellular service. Types as an extension to cellular network using Generic Access Network or Unlicensed Mob
https://en.wikipedia.org/wiki/Safety%20life%20cycle
The safety life cycle is the series of phases from initiation and specifications of safety requirements, covering design and development of safety features in a safety-critical system, and ending in decommissioning of that system. This article uses software as the context, but the safety life cycle applies to other areas as well, such as the construction of buildings. In software development, a defined process (the software life cycle) is used, and this process consists of a few phases, typically covering initiation, analysis, design, programming, testing and implementation. The focus is to build the software. Some software has safety concerns while other software does not. For example, a leave application system has no safety requirements, but safety is a central concern for software used to control the components in a plane, where failure can be catastrophic. For such safety-critical software, the question is how safety, being so important, should be managed within the software life cycle. What is the Safety Life Cycle? The basic concept in building software safety, i.e. safety features in software, is that safety characteristics and behaviour of the software and system must be specified and designed into the system. The problem for any systems designer lies in reducing the risk to an acceptable level and of course, the risk tolerated will vary between applications. When a software application is to be used in a safety-related system, then this must be borne in mind at all stages in the software life cycle. The process of safety specification and assurance throughout the development and operational phases is sometimes called the ‘safety life cycle’. Phases in the Safety Life Cycle The first stages of the life cycle involve assessing the potential system hazards and estimating the risk they pose. One such method is fault tree analysis. This is followed by a safety requirements specification which is concerned with identifying safety-critical functions (functional requirements specification) and the safety
https://en.wikipedia.org/wiki/SIPRNet
The Secret Internet Protocol Router Network (SIPRNet) is "a system of interconnected computer networks used by the U.S. Department of Defense and the U.S. Department of State to transmit classified information (up to and including information classified SECRET) by packet switching over the 'completely secure' environment". It also provides services such as hypertext document access and electronic mail. As such, SIPRNet is the DoD's classified version of the civilian Internet. SIPRNet is the secret component of the Defense Information Systems Network. Other components handle communications with other security needs, such as the NIPRNet, which is used for nonsecure communications, and the Joint Worldwide Intelligence Communications System (JWICS), which is used for Top Secret communications. Access According to the U.S. Department of State Web Development Handbook, domain structure and naming conventions are the same as for the open internet, except for the addition of a second-level domain, like, e.g., "sgov" between state and gov: openforum.state.sgov.gov. Files originating from SIPRNet are marked by a header tag "SIPDIS" (SIPrnet DIStribution). A corresponding second-level domain smil.mil exists for DoD users. Access is also available to a "...small pool of trusted allies, including Australia, Canada, the United Kingdom and New Zealand...". This group (including the US) is known as the Five Eyes. SIPRNet was one of the networks accessed by Chelsea Manning, convicted of leaking the video used in WikiLeaks' "Collateral Murder" release as well as the source of the US diplomatic cables published by WikiLeaks in November 2010. Alternate names SIPRNet and NIPRNet are referred to colloquially as SIPPERnet and NIPPERnet (or simply sipper and nipper), respectively. See also CAVNET Classified website NIPRNet RIPR Intellipedia Protective distribution system NATO CRONOS References External links DISA Secret Internet Protocol Router Network (SIPRNET) by th
https://en.wikipedia.org/wiki/Biosignature
A biosignature (sometimes called chemical fossil or molecular fossil) is any substance – such as an element, isotope, or molecule – or phenomenon that provides scientific evidence of past or present life. Measurable attributes of life include its complex physical or chemical structures and its use of free energy and the production of biomass and wastes. A biosignature can provide evidence for living organisms outside the Earth and can be directly or indirectly detected by searching for their unique byproducts. Types In general, biosignatures can be grouped into ten broad categories: Isotope patterns: Isotopic evidence or patterns that require biological processes. Chemistry: Chemical features that require biological activity. Organic matter: Organics formed by biological processes. Minerals: Minerals or biomineral-phases whose composition and/or morphology indicate biological activity (e.g., biomagnetite). Microscopic structures and textures: Biologically formed cements, microtextures, microfossils, and films. Macroscopic physical structures and textures: Structures that indicate microbial ecosystems, biofilms (e.g., stromatolites), or fossils of larger organisms. Temporal variability: Variations in time of atmospheric gases, reflectivity, or macroscopic appearance that indicate life's presence. Surface reflectance features: Large-scale reflectance features due to biological pigments that could be detected remotely. Atmospheric gases: Gases formed by metabolic and/or aqueous processes, which may be present on a planet-wide scale. Technosignatures: Signatures that indicate a technologically advanced civilization. Viability Determining whether a potential biosignature is worth investigating is a fundamentally complicated process. Scientists must consider any and every possible alternate explanation before concluding that something is a true biosignature. This includes investigating the minute details that make other planets unique and understanding when there is a deviat
https://en.wikipedia.org/wiki/Kernel%20%28linear%20algebra%29
In mathematics, the kernel of a linear map, also known as the null space or nullspace, is the linear subspace of the domain of the map which is mapped to the zero vector. That is, given a linear map L : V → W between two vector spaces V and W, the kernel of L is the vector space of all elements v of V such that L(v) = 0, where 0 denotes the zero vector in W, or more symbolically: ker(L) = {v ∈ V : L(v) = 0}. Properties The kernel of L is a linear subspace of the domain V. In the linear map L : V → W, two elements of V have the same image in W if and only if their difference lies in the kernel of L, that is, L(v₁) = L(v₂) if and only if L(v₁ − v₂) = 0. From this, it follows that the image of L is isomorphic to the quotient of V by the kernel: im(L) ≅ V / ker(L). In the case where V is finite-dimensional, this implies the rank–nullity theorem: dim(ker L) + dim(im L) = dim(V), where the term rank refers to the dimension of the image of L, while nullity refers to the dimension of the kernel of L. That is, rank(L) = dim(im L) and nullity(L) = dim(ker L), so that the rank–nullity theorem can be restated as rank(L) + nullity(L) = dim(V). When V is an inner product space, the quotient V / ker(L) can be identified with the orthogonal complement in V of ker(L). This is the generalization to linear operators of the row space, or coimage, of a matrix. Application to modules The notion of kernel also makes sense for homomorphisms of modules, which are generalizations of vector spaces where the scalars are elements of a ring, rather than a field. The domain of the mapping is a module, with the kernel constituting a submodule. Here, the concepts of rank and nullity do not necessarily apply. In functional analysis If V and W are topological vector spaces such that W is finite-dimensional, then a linear operator L: V → W is continuous if and only if the kernel of L is a closed subspace of V. Representation as matrix multiplication Consider a linear map represented as an m × n matrix A with coefficients in a field K (typically R or C), that is operating on column vectors x with n components over K. The kernel of this linear map is the set of solutions to the equation Ax = 0, where 0 is understood as the zero vector. The dimension of the kernel of A is ca
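A quick numeric illustration of the definition and of rank–nullity; the matrix and the kernel vectors below are chosen for the example.

```python
def mat_vec(A, x):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# A represents a linear map R^3 -> R^2. Its second row is twice the
# first, so rank(A) = 1 and, by rank-nullity, nullity(A) = 3 - 1 = 2.
A = [[1, 2, 3],
     [2, 4, 6]]

# Two linearly independent kernel vectors, found by inspection:
v1 = [-2, 1, 0]
v2 = [-3, 0, 1]
print(mat_vec(A, v1), mat_vec(A, v2))  # both give the zero vector [0, 0]
```

Every solution of Ax = 0 here is a linear combination of v1 and v2, so they form a basis of the kernel.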
https://en.wikipedia.org/wiki/Frigyes%20Riesz
Frigyes Riesz (sometimes spelled as Frederic; 22 January 1880 – 28 February 1956) was a Hungarian mathematician who made fundamental contributions to functional analysis, as did his younger brother Marcel Riesz. Life and career He was born into a Jewish family in Győr, Austria-Hungary and died in Budapest, Hungary. Between 1911 and 1919 he was a professor at the Franz Joseph University in Kolozsvár, Austria-Hungary. The post-WW1 Treaty of Trianon transferred former Austro-Hungarian territory including Kolozsvár to the Kingdom of Romania, whereupon Kolozsvár's name changed to Cluj and the University of Kolozsvár moved to Szeged, Hungary, becoming the University of Szeged. Riesz was then the rector and a professor at the University of Szeged, as well as a member of the Hungarian Academy of Sciences and the Polish Academy of Learning. He was the older brother of the mathematician Marcel Riesz. Riesz did some of the fundamental work in developing functional analysis and his work has had a number of important applications in physics. He established the spectral theory for bounded symmetric operators in a form very much like that now regarded as standard. He also made many contributions to other areas including ergodic theory and topology, and he gave an elementary proof of the mean ergodic theorem. Together with Alfréd Haar, Riesz founded the Acta Scientiarum Mathematicarum journal. He had an uncommon method of giving lectures: he entered the lecture hall with an assistant and a docent. The docent then began reading the proper passages from Riesz's handbook and the assistant wrote the appropriate equations on the blackboard—while Riesz himself stood aside, nodding occasionally. The Swiss-American mathematician Edgar Lorch spent 1934 in Szeged working under Riesz and wrote a reminiscence about his time there, including his collaboration with Riesz. The corpus of his bibliography was compiled by the mathematician Pál Medgyessy. Publications See also Proximit
https://en.wikipedia.org/wiki/Marcel%20Riesz
Marcel Riesz (16 November 1886 – 4 September 1969) was a Hungarian mathematician, known for work on summation methods, potential theory, and other parts of analysis, as well as number theory, partial differential equations, and Clifford algebras. He spent most of his career in Lund (Sweden). Marcel was the younger brother of Frigyes Riesz, who was also an important mathematician, and at times they worked together (see F. and M. Riesz theorem). Biography Marcel Riesz was born in Győr, Austria-Hungary. He was the younger brother of the mathematician Frigyes Riesz. In 1904, he won the Loránd Eötvös competition. After entering Budapest University, he also studied in Göttingen, and he spent the academic year 1910–11 in Paris. In 1908, he attended the International Congress of Mathematicians in Rome. There he met Gösta Mittag-Leffler; three years later, Mittag-Leffler would invite Riesz to come to Sweden. Riesz obtained his PhD at Eötvös Loránd University under the supervision of Lipót Fejér. In 1911, he moved to Sweden, where from 1911 to 1925 he taught at Stockholm University. From 1926 to 1952, he was a professor at Lund University. According to Lars Gårding, Riesz arrived in Lund as a renowned star of mathematics, and for a time his appointment may have seemed like an exile. Indeed, there was no established school of mathematics in Lund at the time. However, Riesz managed to turn the tide and make the academic atmosphere more active. After retiring from Lund University, he spent 10 years at universities in the United States. As a visiting research professor, he worked in Maryland, Chicago, etc. After ten years of intense work with little rest, he suffered a breakdown. Riesz returned to Lund in 1962. After a long illness, he died there in 1969. Riesz was elected a member of the Royal Swedish Academy of Sciences in 1936. Mathematical work Classical analysis The work of Riesz as a student of Fejér in Budapest was devoted to trigonometric series:
https://en.wikipedia.org/wiki/Brown%27s%20representability%20theorem
In mathematics, Brown's representability theorem in homotopy theory gives necessary and sufficient conditions for a contravariant functor F on the homotopy category Hotc of pointed connected CW complexes, to the category of sets Set, to be a representable functor. More specifically, we are given F: Hotcop → Set, and there are certain obviously necessary conditions for F to be of type Hom(—, C), with C a pointed connected CW-complex that can be deduced from category theory alone. The statement of the substantive part of the theorem is that these necessary conditions are then sufficient. For technical reasons, the theorem is often stated for functors to the category of pointed sets; in other words the sets are also given a base point. Brown representability theorem for CW complexes The representability theorem for CW complexes, due to Edgar H. Brown, is the following. Suppose that: The functor F maps coproducts (i.e. wedge sums) in Hotc to products in Set: The functor F maps homotopy pushouts in Hotc to weak pullbacks. This is often stated as a Mayer–Vietoris axiom: for any CW complex W covered by two subcomplexes U and V, and any elements u ∈ F(U), v ∈ F(V) such that u and v restrict to the same element of F(U ∩ V), there is an element w ∈ F(W) restricting to u and v, respectively. Then F is representable by some CW complex C, that is to say there is an isomorphism F(Z) ≅ HomHotc(Z, C) for any CW complex Z, which is natural in Z in that for any morphism from Z to another CW complex Y the induced maps F(Y) → F(Z) and HomHot(Y, C) → HomHot(Z, C) are compatible with these isomorphisms. The converse statement also holds: any functor represented by a CW complex satisfies the above two properties. This direction is an immediate consequence of basic category theory, so the deeper and more interesting part of the equivalence is the other implication. The representing object C above can be shown to depend functorially on F: any natural transformation from F to
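Written out symbolically, the coproduct axiom says (a standard formulation, reconstructed here rather than quoted from the source):

```latex
% Wedge axiom: F turns wedge sums (coproducts in Hotc) into products of sets
F\Bigl(\bigvee_{\alpha} X_{\alpha}\Bigr) \;\cong\; \prod_{\alpha} F(X_{\alpha})
```

Together with the Mayer–Vietoris (weak pullback) axiom described above, this is exactly the pair of conditions Brown's theorem shows to be sufficient for representability.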
https://en.wikipedia.org/wiki/Tomato%20paste
Tomato paste is a thick paste made from tomatoes, which are cooked for several hours to reduce water content, straining out seeds and skins, and cooking the liquid again to reduce the base to a thick, rich concentrate. It is used to impart an intense tomato flavour to a variety of dishes, such as pasta, soups and braised meat. It is used as an ingredient in many world cuisines. By contrast, tomato purée is a liquid with a thinner consistency than tomato paste, while tomato sauce is even thinner in consistency. History and traditions Tomato paste is traditionally made in parts of Sicily, southern Italy and Malta by spreading out a much-reduced tomato sauce on wooden boards that are set outdoors under the hot August sun to dry the paste until it is thick enough, when it is scraped up and held together in a richly-colored dark ball. Today, this artisan product is harder to find than the thinner industrial version. Commercial production uses tomatoes with thick pericarp walls and lower overall moisture; these are very different from tomatoes typically found in a supermarket. Tomato paste became commercially available in the early 20th century. Regional differences In the UK, tomato paste is also referred to as concentrate. In the US, tomato paste is simply concentrated tomato solids (no seeds or skin), sometimes with added sweetener (high fructose corn syrup), and with a standard of identity (in the Code of Federal Regulations, see 21 CFR 155.191). Tomato purée has a lower tomato soluble solids requirement, the cutoff being 24%. For comparison, typical fresh round tomatoes have a soluble solid content of 3.5–5.5% (refractometric Brix), while cherry tomatoes have double the amount. Uses Tomato paste is added to dishes to impart an intense flavour, particularly the natural umami flavour found in tomatoes. Examples of dishes in which tomato paste may be commonly used include pasta sauces, soups, and braised meat. The paste is typically added early in the cooking
https://en.wikipedia.org/wiki/Cotton%20tensor
In differential geometry, the Cotton tensor on a (pseudo)-Riemannian manifold of dimension n is a third-order tensor concomitant of the metric. The vanishing of the Cotton tensor for n = 3 is a necessary and sufficient condition for the manifold to be conformally flat. By contrast, in dimensions n ≥ 4, the vanishing of the Cotton tensor is necessary but not sufficient for the metric to be conformally flat; instead, the corresponding necessary and sufficient condition in these higher dimensions is the vanishing of the Weyl tensor, while the Cotton tensor just becomes a constant times the divergence of the Weyl tensor. For n < 3 the Cotton tensor is identically zero. The concept is named after Émile Cotton. The proof of the classical result that for n = 3 the vanishing of the Cotton tensor is equivalent to the metric being conformally flat is given by Eisenhart using a standard integrability argument. This tensor density is uniquely characterized by its conformal properties coupled with the demand that it be differentiable for arbitrary metrics. Recently, the study of three-dimensional spaces has become of great interest, because the Cotton tensor restricts the relation between the Ricci tensor and the energy–momentum tensor of matter in the Einstein equations and plays an important role in the Hamiltonian formalism of general relativity. Definition In coordinates, and denoting the Ricci tensor by Rij and the scalar curvature by R, the components of the Cotton tensor are Cijk = ∇k Rij − ∇j Rik + (∇j R gik − ∇k R gij)/(2(n − 1)). The Cotton tensor can be regarded as a vector valued 2-form, and for n = 3 one can use the Hodge star operator to convert this into a second order trace free tensor density sometimes called the Cotton–York tensor. Properties Conformal rescaling Under a conformal rescaling of the metric, g → e2ω g for some scalar function ω, the Christoffel symbols transform as Γ̃kij = Γkij + δki ∂jω + δkj ∂iω − gij gkl ∂lω, and the Riemann curvature tensor transforms accordingly. In n-dimensional manifolds, we obtain the Ricci tensor by contracting th
https://en.wikipedia.org/wiki/Parallel%20motion%20linkage
In kinematics, the parallel motion linkage is a six-bar mechanical linkage invented by the Scottish engineer James Watt in 1784 for the double-acting Watt steam engine. It allows a rod moving practically straight up and down to transmit motion to a beam moving in an arc, without putting significant sideways strain on the rod. Description In previous engines built by Newcomen and Watt, the piston pulled one end of the walking beam downwards during the power stroke using a chain, and the weight of the pump pulled the other end of the beam downwards during the recovery stroke using a second chain, the alternating forces producing the rocking motion of the beam. In Watt's new double-acting engine, the piston produced power on both the upward and downward strokes, so a chain could not be used to transmit the force to the beam. Watt designed the parallel motion to transmit force in both directions whilst keeping the piston rod very close to vertical. He called it "parallel motion" because both the piston and the pump rod were required to move vertically, parallel to one another. In a letter to his son in 1808 describing how he arrived at the design, James Watt wrote "I am more proud of the parallel motion than of any other invention I have ever made." The sketch he included actually shows what is now known as Watt's linkage which was a linkage described in Watt's 1784 patent but it was immediately superseded by the parallel motion. The parallel motion differed from Watt's linkage by having an additional pantograph linkage incorporated in the design. This did not affect the fundamental principle but it allowed the engine room to be smaller because the linkage was more compact. The Newcomen engine's piston was propelled downward by the atmospheric pressure. Watt's device allowed live steam to be used for direct work on both sides of the piston, thus almost doubling the power, and also delivering the power more evenly through the cycle, an advantage when converting th
https://en.wikipedia.org/wiki/Logistello
Logistello is a computer program that plays the game Othello, also known as Reversi. Logistello was written by Michael Buro and is regarded as a strong player, having beaten the human world champion Takeshi Murakami six games to none in 1997 — the best Othello programs are now much stronger than any human player. Logistello's evaluation function is based on disc patterns and features over a million numerical parameters which were tuned using linear regression. See also Computer Othello External links Game artificial intelligence Reversi software
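The description above — a linear evaluation over pattern features, with weights tuned by linear regression — can be sketched in a few lines. This is a toy illustration under stated assumptions, not Logistello's actual code: the feature vectors, scores, and function names are invented, and a real engine would use millions of features and a sparse solver.

```python
# Toy sketch of pattern-weight tuning by linear regression, in the spirit of
# Logistello's evaluation function.  All names and data are illustrative.
# Each position is reduced to counts of pattern instances ("features");
# the evaluation is a dot product with weights fitted by least squares.

def fit_two_weights(X, y):
    """Solve the 2-parameter normal equations (X^T X) w = X^T y by Cramer's rule."""
    a = sum(x[0] * x[0] for x in X)
    b = sum(x[0] * x[1] for x in X)
    d = sum(x[1] * x[1] for x in X)
    p = sum(x[0] * t for x, t in zip(X, y))
    q = sum(x[1] * t for x, t in zip(X, y))
    det = a * d - b * b
    return ((d * p - b * q) / det, (a * q - b * p) / det)

def evaluate(features, weights):
    """Linear evaluation: weighted sum of pattern-feature counts."""
    return sum(f * w for f, w in zip(features, weights))

# Hypothetical training data: (corner-pattern count, edge-pattern count)
# per position, with final-score targets consistent with weights (2, 3).
X = [(1, 0), (0, 1), (1, 1)]
y = [2.0, 3.0, 5.0]
weights = fit_two_weights(X, y)
```

The least-squares fit recovers the weights exactly here because the toy data are noise-free; with real game data the regression averages out noise across millions of positions.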
https://en.wikipedia.org/wiki/Dedekind-infinite%20set
In mathematics, a set A is Dedekind-infinite (named after the German mathematician Richard Dedekind) if some proper subset B of A is equinumerous to A. Explicitly, this means that there exists a bijective function from A onto some proper subset B of A. A set is Dedekind-finite if it is not Dedekind-infinite (i.e., no such bijection exists). Proposed by Dedekind in 1888, Dedekind-infiniteness was the first definition of "infinite" that did not rely on the definition of the natural numbers. A simple example is ℕ, the set of natural numbers. From Galileo's paradox, there exists a bijection that maps every natural number n to its square n². Since the set of squares is a proper subset of ℕ, ℕ is Dedekind-infinite. Until the foundational crisis of mathematics showed the need for a more careful treatment of set theory, most mathematicians assumed that a set is infinite if and only if it is Dedekind-infinite. In the early twentieth century, Zermelo–Fraenkel set theory, today the most commonly used form of axiomatic set theory, was proposed as an axiomatic system to formulate a theory of sets free of paradoxes such as Russell's paradox. Using the axioms of Zermelo–Fraenkel set theory with the originally highly controversial axiom of choice included (ZFC) one can show that a set is Dedekind-finite if and only if it is finite in the usual sense. However, there exists a model of Zermelo–Fraenkel set theory without the axiom of choice (ZF) in which there exists an infinite, Dedekind-finite set, showing that the axioms of ZF are not strong enough to prove that every set that is Dedekind-finite is finite. There are definitions of finiteness and infiniteness of sets besides the one given by Dedekind that do not depend on the axiom of choice. A vaguely related notion is that of a Dedekind-finite ring. Comparison with the usual definition of infinite set This definition of "infinite set" should be compared with the usual definition: a set A is infinite when it cannot be put in bije
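The squaring example can be demonstrated mechanically. Code can only inspect a finite prefix of ℕ, so this is an illustration of the two properties (injectivity, proper image) rather than a proof:

```python
# Finite-prefix illustration of the Dedekind-infiniteness of the naturals:
# n -> n*n is injective, and its image (the perfect squares) misses elements
# such as 2, so the map is a bijection onto a *proper* subset.
prefix = range(1000)
squares = [n * n for n in prefix]

is_injective = len(set(squares)) == len(squares)  # distinct inputs, distinct outputs
proper_image = 2 not in set(squares)              # 2 is never hit by the map
```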
https://en.wikipedia.org/wiki/Clebsch%E2%80%93Gordan%20coefficients
In physics, the Clebsch–Gordan (CG) coefficients are numbers that arise in angular momentum coupling in quantum mechanics. They appear as the expansion coefficients of total angular momentum eigenstates in an uncoupled tensor product basis. In more mathematical terms, the CG coefficients are used in representation theory, particularly of compact Lie groups, to perform the explicit direct sum decomposition of the tensor product of two irreducible representations (i.e., a reducible representation into irreducible representations, in cases where the numbers and types of irreducible components are already known abstractly). The name derives from the German mathematicians Alfred Clebsch and Paul Gordan, who encountered an equivalent problem in invariant theory. From a vector calculus perspective, the CG coefficients associated with the SO(3) group can be defined simply in terms of integrals of products of spherical harmonics and their complex conjugates. The addition of spins in quantum-mechanical terms can be read directly from this approach as spherical harmonics are eigenfunctions of total angular momentum and projection thereof onto an axis, and the integrals correspond to the Hilbert space inner product. From the formal definition of angular momentum, recursion relations for the Clebsch–Gordan coefficients can be found. There also exist complicated explicit formulas for their direct calculation. The formulas below use Dirac's bra–ket notation and the Condon–Shortley phase convention is adopted. Review of the angular momentum operators Angular momentum operators are self-adjoint operators , , and that satisfy the commutation relations where is the Levi-Civita symbol. Together the three operators define a vector operator, a rank one Cartesian tensor operator, It is also known as a spherical vector, since it is also a spherical tensor operator. It is only for rank one that spherical tensor operators coincide with the Cartesian tensor operators. By developing
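The "complicated explicit formulas" mentioned above can be coded directly. The sketch below implements the standard Racah-type factorial formula in the Condon–Shortley convention, restricted to integer spins so that plain factorials suffice (half-integer spins would need rational arithmetic); it is illustrative code under those assumptions, not a library API:

```python
from math import factorial, sqrt

def cg(j1, m1, j2, m2, j, m):
    """Clebsch-Gordan coefficient <j1 m1; j2 m2 | j m> for integer spins,
    Condon-Shortley convention, via the explicit factorial formula."""
    if m != m1 + m2 or not (abs(j1 - j2) <= j <= j1 + j2):
        return 0.0
    pref = sqrt((2 * j + 1) * factorial(j1 + j2 - j) * factorial(j + j1 - j2)
                * factorial(j + j2 - j1) / factorial(j1 + j2 + j + 1))
    pref *= sqrt(factorial(j + m) * factorial(j - m) * factorial(j1 - m1)
                 * factorial(j1 + m1) * factorial(j2 - m2) * factorial(j2 + m2))
    total = 0.0
    for k in range(j1 + j2 - j + 1):
        args = [k, j1 + j2 - j - k, j1 - m1 - k, j2 + m2 - k,
                j - j2 + m1 + k, j - j1 - m2 + k]
        if min(args) < 0:        # terms with any negative factorial argument vanish
            continue
        denom = 1
        for a in args:
            denom *= factorial(a)
        total += (-1) ** k / denom
    return pref * total
```

For example, coupling two spin-1 states gives the familiar values ⟨1,0;1,0|2,0⟩ = √(2/3) and ⟨1,1;1,−1|1,0⟩ = 1/√2, while ⟨1,0;1,0|1,0⟩ vanishes by symmetry.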
https://en.wikipedia.org/wiki/Lamb%20shift
In physics the Lamb shift, named after Willis Lamb, refers to an anomalous difference in energy between two electron orbitals in a hydrogen atom. The difference was not predicted by theory and it cannot be derived from the Dirac equation, which predicts identical energies. Hence the Lamb shift refers to a deviation from theory seen in the differing energies contained by the 2S1/2 and 2P1/2 orbitals of the hydrogen atom. The Lamb shift is caused by interactions between the virtual photons created through vacuum energy fluctuations and the electron as it moves around the hydrogen nucleus in each of these two orbitals. The Lamb shift has since played a significant role through vacuum energy fluctuations in theoretical prediction of Hawking radiation from black holes. This effect was first measured in 1947 in the Lamb–Retherford experiment on the hydrogen microwave spectrum and this measurement provided the stimulus for renormalization theory to handle the divergences. It was the harbinger of modern quantum electrodynamics developed by Julian Schwinger, Richard Feynman, Ernst Stueckelberg, Sin-Itiro Tomonaga and Freeman Dyson. Lamb won the Nobel Prize in Physics in 1955 for his discoveries related to the Lamb shift. Importance In 1978, on Lamb's 65th birthday, Freeman Dyson addressed him as follows: "Those years, when the Lamb shift was the central theme of physics, were golden years for all the physicists of my generation. You were the first to see that this tiny shift, so elusive and hard to measure, would clarify our thinking about particles and fields." Derivation This heuristic derivation of the electrodynamic level shift follows Theodore A. Welton's approach. The fluctuations in the electric and magnetic fields associated with the QED vacuum perturbs the electric potential due to the atomic nucleus. This perturbation causes a fluctuation in the position of the electron, which explains the energy shift. The difference of potential energy is given by Since
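Welton's heuristic can be summarized in formulas. This is the standard textbook sketch, reconstructed here rather than quoted from the source:

```latex
% Expand the perturbed potential to second order in the fluctuation \delta r:
\Delta V = V(\mathbf{r}+\delta\mathbf{r}) - V(\mathbf{r})
         \approx \delta\mathbf{r}\cdot\nabla V
         + \tfrac{1}{2}\,(\delta\mathbf{r}\cdot\nabla)^{2} V .
% Averaging over isotropic vacuum fluctuations,
% \langle\delta\mathbf{r}\rangle = 0 and
% \langle\delta r_i\,\delta r_j\rangle = \tfrac{1}{3}\langle(\delta\mathbf{r})^2\rangle\,\delta_{ij},
% only the second-order term survives:
\langle \Delta V \rangle
   = \tfrac{1}{6}\,\langle (\delta\mathbf{r})^{2} \rangle\, \nabla^{2} V .
% For the Coulomb potential, \nabla^2 V is a delta function at the nucleus,
% so the shift is proportional to |\psi(0)|^2: it raises S states such as
% 2S_{1/2}, whose wavefunction is nonzero at the origin, but not P states.
```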
https://en.wikipedia.org/wiki/Interactome
In molecular biology, an interactome is the whole set of molecular interactions in a particular cell. The term specifically refers to physical interactions among molecules (such as those among proteins, also known as protein–protein interactions, PPIs; or between small molecules and proteins) but can also describe sets of indirect interactions among genes (genetic interactions). The word "interactome" was originally coined in 1999 by a group of French scientists headed by Bernard Jacq. Mathematically, interactomes are generally displayed as graphs. Though interactomes may be described as biological networks, they should not be confused with other networks such as neural networks or food webs. Molecular interaction networks Molecular interactions can occur between molecules belonging to different biochemical families (proteins, nucleic acids, lipids, carbohydrates, etc.) and also within a given family. Whenever such molecules are connected by physical interactions, they form molecular interaction networks that are generally classified by the nature of the compounds involved. Most commonly, interactome refers to a protein–protein interaction (PPI) network (PIN) or subsets thereof. For instance, the Sirt-1 protein interactome is the network involving Sirt-1 and its directly interacting proteins, whereas the Sirt family second-order interactome illustrates interactions up to the second order of neighbours (neighbours of neighbours). Another extensively studied type of interactome is the protein–DNA interactome, also called a gene-regulatory network, a network formed by transcription factors, chromatin regulatory proteins, and their target genes. Even metabolic networks can be considered as molecular interaction networks: metabolites, i.e. chemical compounds in a cell, are converted into each other by enzymes, which have to bind their substrates physically. In fact, all interactome types are interconnected. For instance, protein interactomes contain ma
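The graph representation and the first-order/second-order distinction can be made concrete with a toy adjacency map. The edges below are illustrative (some, like SIRT1–TP53, are reported interactions, but this is not a curated interactome):

```python
# A toy protein-protein interaction (PPI) network as an undirected adjacency map.
from collections import defaultdict

edges = [("SIRT1", "TP53"), ("SIRT1", "FOXO3"),
         ("TP53", "MDM2"), ("FOXO3", "AKT1")]

graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

# First-order interactome of SIRT1: its direct interaction partners.
first_order = set(graph["SIRT1"])

# Second-order interactome: neighbours of neighbours, excluding SIRT1 itself.
second_order = set()
for partner in first_order:
    second_order |= graph[partner]
second_order -= {"SIRT1"}
```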
https://en.wikipedia.org/wiki/Regulome
Regulome refers to the whole set of regulatory components in a cell. Those components can be regulatory elements, genes, mRNAs, proteins, and metabolites. The description includes the interplay of regulatory effects between these components, and their dependence on variables such as subcellular localization, tissue, developmental stage, and pathological state. Components One of the major players in cellular regulation are transcription factors, proteins that regulate the expression of genes. Other proteins that bind to transcription factors to form transcriptional complexes might modify the activity of transcription factors, for example blocking their capacity to bind to a promoter. Signaling pathways are groups of proteins that produce an effect in a chain that transmit a signal from one part of the cell to another part, for example, linking the presence of substance at the exterior of the cell to the activation of the expression of a gene. Measuring High-throughput technologies for the analysis of biological samples (for example, DNA microarrays, proteomics analysis) allow the measurement of thousands of biological components such as mRNAs, proteins, or metabolites. Chromatin immunoprecipitation of transcription factors can be used to map transcription factor binding sites in the genome. Such techniques allow researchers to study the effects of particular substances and/or situations on a cellular sample at a genomic level (for example, by addition of a drug, or by placing cells in a situation of stress). The information obtained allows parts of the regulome to be inferred. Modeling One of the objectives of systems biology is the modeling of biological processes using mathematics and computer simulation. The production of data from techniques of genomic analysis is not always amenable to interpretation mainly due to the complexity of the data and the large number of data points. Modeling can handle the data and allow to test a hypothesis (for example, g
https://en.wikipedia.org/wiki/Metabolome
The metabolome refers to the complete set of small-molecule chemicals found within a biological sample. The biological sample can be a cell, a cellular organelle, an organ, a tissue, a tissue extract, a biofluid or an entire organism. The small molecule chemicals found in a given metabolome may include both endogenous metabolites that are naturally produced by an organism (such as amino acids, organic acids, nucleic acids, fatty acids, amines, sugars, vitamins, co-factors, pigments, antibiotics, etc.) as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism. In other words, there is both an endogenous metabolome and an exogenous metabolome. The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics. Origins The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the m
https://en.wikipedia.org/wiki/Mitsubishi%20Electric
is a Japanese multinational electronics and electrical equipment manufacturing company headquartered in Tokyo, Japan. It was established in 1921 as a spin-off from the electrical machinery manufacturing business of Mitsubishi Shipbuilding (current Mitsubishi Heavy Industries) at the Kobe Shipyard. The products from MELCO include elevators and escalators, high-end home appliances, air conditioning, factory automation systems, train systems, electric motors, pumps, semiconductors, digital signage, and satellites. History MELCO was established as a spin-off from the Mitsubishi Group's other core company Mitsubishi Heavy Industries, then Mitsubishi Shipbuilding, as the latter divested a marine electric motor factory in Kobe, Nagasaki. It has since diversified to become the major electronics company. MELCO held the record for the fastest elevator in the world, in the 70-story Yokohama Landmark Tower, from 1993 to 2005. The company acquired Nihon Kentetsu, a Japanese home appliance manufacturer, in 2005. In 2015 the company acquired DeLclima, an Italian company that designs and produces HVAC and HPAC units, renamed Mitsubishi Electric Hydronics & IT Cooling Systems SpA in 2017. In early 2020, MELCO was identified as a victim of the year-long cyberattacks perpetrated by the Chinese hackers. In 2023, MELCO announced its plans to spend 100 billion yen to build a new semiconductor factory in Kumamoto Prefecture, with a target date of April 2026 to begin production. Products In 2021, the World Intellectual Property Organization (WIPO)’s annual World Intellectual Property Indicators report ranked Mitsubishi Electric's number of patent applications published under the PCT System as 3rd in the world, with 2,661 patent applications being published during 2020. This position is down from their previous ranking as 2nd in 2019 with 2,334 applications. Some product lines of MELCO, such as air conditioners, overlap with the products from Mitsubishi Heavy Industries partly becau
https://en.wikipedia.org/wiki/NSA%20cryptography
The vast majority of the National Security Agency's work on encryption is classified, but from time to time NSA participates in standards processes or otherwise publishes information about its cryptographic algorithms. The NSA has categorized encryption items into four product types, and algorithms into two suites. The following is a brief and incomplete summary of public knowledge about NSA algorithms and protocols. Type 1 Product A Type 1 Product refers to an NSA endorsed classified or controlled cryptographic item for classified or sensitive U.S. government information, including cryptographic equipment, assembly or component classified or certified by NSA for encrypting and decrypting classified and sensitive national security information when appropriately keyed. Type 2 Product A Type 2 Product refers to an NSA endorsed unclassified cryptographic equipment, assemblies or components for sensitive but unclassified U.S. government information. Type 3 Product Unclassified cryptographic equipment, assembly, or component used, when appropriately keyed, for encrypting or decrypting unclassified sensitive U.S. Government or commercial information, and to protect systems requiring protection mechanisms consistent with standard commercial practices. A Type 3 Algorithm refers to NIST endorsed algorithms, registered and FIPS published, for sensitive but unclassified U.S. government and commercial information. Type 4 Product A Type 4 Algorithm refers to algorithms that are registered by the NIST but are not FIPS published. Unevaluated commercial cryptographic equipment, assemblies, or components that are neither NSA nor NIST certified for any Government usage. Algorithm Suites Suite A A set of NSA unpublished algorithms that is intended for highly sensitive communication and critical authentication systems. Suite B A set of NSA endorsed cryptographic algorithms for use as an interoperable cryptographic base for both unclassified information and most classifie
https://en.wikipedia.org/wiki/Hahn%20embedding%20theorem
In mathematics, especially in the area of abstract algebra dealing with ordered structures on abelian groups, the Hahn embedding theorem gives a simple description of all linearly ordered abelian groups. It is named after Hans Hahn. Overview The theorem states that every linearly ordered abelian group G can be embedded as an ordered subgroup of the additive group ℝΩ endowed with a lexicographical order, where ℝ is the additive group of real numbers (with its standard order), Ω is the set of Archimedean equivalence classes of G, and ℝΩ is the set of all functions from Ω to ℝ which vanish outside a well-ordered set. Let 0 denote the identity element of G. For any nonzero element g of G, exactly one of the elements g or −g is greater than 0; denote this element by |g|. Two nonzero elements g and h of G are Archimedean equivalent if there exist natural numbers N and M such that N|g| > |h| and M|h| > |g|. Intuitively, this means that neither g nor h is "infinitesimal" with respect to the other. The group G is Archimedean if all nonzero elements are Archimedean-equivalent. In this case, Ω is a singleton, so ℝΩ is just the group of real numbers. Then Hahn's Embedding Theorem reduces to Hölder's theorem (which states that a linearly ordered abelian group is Archimedean if and only if it is a subgroup of the ordered additive group of the real numbers). A clear statement and proof of the theorem, as well as alternative proofs, can be found in the references. See also Archimedean group References Ordered groups Theorems in group theory
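The definition of Archimedean equivalence can be illustrated on integer pairs with the lexicographic order (a finite-range check, not a proof; the example group and bounds are chosen for illustration):

```python
# Lexicographic order on pairs, illustrating Archimedean equivalence classes.
# g = (0, 1) is "infinitesimal" relative to h = (1, 0): no natural multiple
# N*g exceeds h in the lex order.  Code can only verify a finite range of N.
def n_times(n, g):
    return (n * g[0], n * g[1])

g, h = (0, 1), (1, 0)
# Python tuples compare lexicographically, so `<` is already the lex order.
g_stays_below_h = all(n_times(n, g) < h for n in range(1, 10000))

# By contrast, (0, 1) and (0, 3) are Archimedean-equivalent:
# 4*(0,1) = (0,4) > (0,3) and 1*(0,3) > (0,1).
equivalent_pair = n_times(4, (0, 1)) > (0, 3) and (0, 3) > (0, 1)
```

So in ℤ × ℤ with the lexicographic order there are two Archimedean equivalence classes, matching the two factors of the lexicographic product in Hahn's theorem.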
https://en.wikipedia.org/wiki/Generalized%20signal%20averaging
Within signal processing, in many cases only one image with noise is available, and averaging is then realized in a local neighbourhood. Results are acceptable if the noise is smaller in size than the smallest objects of interest in the image, but blurring of edges is a serious disadvantage. In the case of smoothing within a single image, one has to assume that there are no changes in the gray levels of the underlying image data. This assumption is clearly violated at locations of image edges, and edge blurring is a direct consequence of violating the assumption. Description Averaging is a special case of discrete convolution. For a 3 by 3 neighbourhood, the convolution mask M is: M = (1/9) · [[1, 1, 1], [1, 1, 1], [1, 1, 1]]. The significance of the central pixel may be increased, so that the mask better approximates the properties of noise with a Gaussian probability distribution, for example: M = (1/16) · [[1, 2, 1], [2, 4, 2], [1, 2, 1]]. A suitable page for beginners about matrices is at: https://web.archive.org/web/20060819141930/http://www.gamedev.net/reference/programming/features/imageproc/page2.asp The whole article starts on page: https://web.archive.org/web/20061019072001/http://www.gamedev.net/reference/programming/features/imageproc/ References Signal processing Noise (graphics) Radio technology
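Local averaging with the 3×3 mean mask can be sketched directly (a minimal illustration for interior pixels only; real code would also handle image borders):

```python
# Local averaging as discrete convolution with the 3x3 mean mask (1/9 in every cell).
def mean_filter_at(img, r, c):
    """Average of the 3x3 neighbourhood centred at interior pixel (r, c)."""
    total = 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            total += img[r + dr][c + dc]
    return total / 9.0

# A single bright noise pixel on a dark background is strongly attenuated.
image = [[0, 0, 0],
         [0, 9, 0],
         [0, 0, 0]]
smoothed_centre = mean_filter_at(image, 1, 1)
```

The isolated spike of value 9 is reduced to 1 — which is exactly the edge-blurring trade-off described above, since a genuine one-pixel-wide detail would be attenuated in the same way.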
https://en.wikipedia.org/wiki/Angel%20problem
The angel problem is a question in combinatorial game theory proposed by John Horton Conway. The game is commonly referred to as the angels and devils game. The game is played by two players called the angel and the devil. It is played on an infinite chessboard (or equivalently the points of a 2D lattice). The angel has a power k (a natural number 1 or higher), specified before the game starts. The board starts empty with the angel in one square. On each turn, the angel jumps to a different empty square which could be reached by at most k moves of a chess king, i.e. the distance from the starting square is at most k in the infinity norm. The devil, on its turn, may add a block on any single square not containing the angel. The angel may leap over blocked squares, but cannot land on them. The devil wins if the angel is unable to move. The angel wins by surviving indefinitely. The angel problem is: can an angel with high enough power win? There must exist a winning strategy for one of the players. If the devil can force a win then it can do so in a finite number of moves. If the devil cannot force a win then there is always an action that the angel can take to avoid losing and a winning strategy for it is always to pick such a move. More abstractly, the "pay-off set" (i.e., the set of all plays in which the angel wins) is a closed set (in the natural topology on the set of all plays), and it is known that such games are determined. Of course, for any infinite game, if player 2 doesn't have a winning strategy, player 1 can always pick a move that leads to a position where player 2 doesn't have a winning strategy, but in some games, simply playing forever doesn't confer a win to player 1, so undetermined games may exist. Conway offered a reward for a general solution to this problem ($100 for a winning strategy for an angel of sufficiently high power, and $1000 for a proof that the devil can win irrespective of the angel's power). Progress was made first in higher
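The angel's move set — all squares within infinity-norm distance k — is easy to enumerate (a small sketch on an empty board; function names are illustrative):

```python
# Squares a power-k angel can jump to in one turn on an empty board: every
# square within Chebyshev (infinity-norm) distance k of its current square,
# other than the square it stands on -- (2k+1)^2 - 1 of them.
def reachable(square, k):
    x0, y0 = square
    return {(x, y)
            for x in range(x0 - k, x0 + k + 1)
            for y in range(y0 - k, y0 + k + 1)
            if (x, y) != (x0, y0)}

moves = reachable((0, 0), 2)
```

For k = 2 this gives 24 candidate squares; the devil's job is to block enough of these rings, far ahead of the angel, to close off every escape.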
https://en.wikipedia.org/wiki/Protein%20sequencing
Protein sequencing is the practical process of determining the amino acid sequence of all or part of a protein or peptide. This may serve to identify the protein or characterize its post-translational modifications. Typically, partial sequencing of a protein provides sufficient information (one or more sequence tags) to identify it with reference to databases of protein sequences derived from the conceptual translation of genes. The two major direct methods of protein sequencing are mass spectrometry and Edman degradation using a protein sequenator (sequencer). Mass spectrometry methods are now the most widely used for protein sequencing and identification but Edman degradation remains a valuable tool for characterizing a protein's N-terminus. Determining amino acid composition It is often desirable to know the unordered amino acid composition of a protein prior to attempting to find the ordered sequence, as this knowledge can be used to facilitate the discovery of errors in the sequencing process or to distinguish between ambiguous results. Knowledge of the frequency of certain amino acids may also be used to choose which protease to use for digestion of the protein. The misincorporation of low levels of non-standard amino acids (e.g. norleucine) into proteins may also be determined. A generalized method often referred to as amino acid analysis for determining amino acid frequency is as follows: Hydrolyse a known quantity of protein into its constituent amino acids. Separate and quantify the amino acids in some way. Hydrolysis Hydrolysis is done by heating a sample of the protein in 6 M hydrochloric acid to 100–110 °C for 24 hours or longer. Proteins with many bulky hydrophobic groups may require longer heating periods. However, these conditions are so vigorous that some amino acids (serine, threonine, tyrosine, tryptophan, glutamine, and cysteine) are degraded. To circumvent this problem, Biochemistry Online suggests heating separate samples for different
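The "separate and quantify" step yields an unordered composition, which is just a frequency table over residues. A minimal sketch (the peptide sequence is hypothetical, chosen only to illustrate the calculation):

```python
# Amino acid composition (unordered frequencies) of a peptide, the kind of
# summary produced by amino acid analysis after hydrolysis.
from collections import Counter

def composition(seq):
    """Map each one-letter residue code to its relative frequency in seq."""
    counts = Counter(seq)
    n = len(seq)
    return {aa: c / n for aa, c in counts.items()}

peptide = "MKTAYIAKQR"   # hypothetical one-letter sequence
freqs = composition(peptide)
```

Comparing such a table against the composition implied by a candidate database sequence is one way the text's error-checking use of amino acid analysis works in practice.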
https://en.wikipedia.org/wiki/Price%20equation
In the theory of evolution and natural selection, the Price equation (also known as Price's equation or Price's theorem) describes how a trait or allele changes in frequency over time. The equation uses a covariance between a trait and fitness, to give a mathematical description of evolution and natural selection. It provides a way to understand the effects that gene transmission and natural selection have on the frequency of alleles within each new generation of a population. The Price equation was derived by George R. Price, working in London to re-derive W.D. Hamilton's work on kin selection. Examples of the Price equation have been constructed for various evolutionary cases. The Price equation also has applications in economics. It is important to note that the Price equation is not a physical or biological law. It is not a concise or general expression of experimentally validated results. It is rather a purely mathematical relationship between various statistical descriptors of population dynamics. It is mathematically valid, and therefore not subject to experimental verification. In simple terms, it is a mathematical restatement of the expression "survival of the fittest" which is actually self-evident, given the mathematical definitions of "survival" and "fittest". Statement The Price equation shows that a change in the average amount of a trait in a population from one generation to the next (Δz) is determined by the covariance between the amounts z_i of the trait for subpopulation i and the fitnesses w_i of the subpopulations, together with the expected change in the amount of the trait value due to fitness, namely E(w_i Δz_i): w Δz = cov(w_i, z_i) + E(w_i Δz_i). Here w is the average fitness over the population, and E and cov represent the population mean and covariance respectively. 'Fitness' is the ratio of the average number of offspring for the whole population per the number of adult individuals in the population, and w_i is that same ratio only for subpopulation i. If the covariance between fitness (w_i) an
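Because the Price equation is a mathematical identity, it can be verified numerically. The sketch below (equal-sized subpopulations, made-up fitness and trait numbers) checks that w_bar * dz equals cov(w_i, z_i) + E(w_i * dz_i), where w_bar is mean fitness:

```python
def price_equation(w, z, z_next):
    """Check w_bar * dz = cov(w_i, z_i) + E(w_i * dz_i) numerically.

    w[i]      : fitness of subpopulation i (offspring per adult)
    z[i]      : mean trait value of subpopulation i (parent generation)
    z_next[i] : mean trait value of subpopulation i's offspring
    Subpopulations are assumed equal-sized, so plain means suffice.
    """
    mean = lambda xs: sum(xs) / len(xs)
    w_bar = mean(w)
    z_bar = mean(z)
    # Offspring-generation mean trait: each subpopulation's offspring
    # are weighted by that subpopulation's fitness.
    z_bar_next = sum(wi * zn for wi, zn in zip(w, z_next)) / sum(w)
    dz = z_bar_next - z_bar
    cov = mean([wi * zi for wi, zi in zip(w, z)]) - w_bar * z_bar
    e_w_dz = mean([wi * (zn - zi) for wi, zi, zn in zip(w, z, z_next)])
    return w_bar * dz, cov + e_w_dz

# Toy example: two equal subpopulations, no transmission bias (z_next == z).
lhs, rhs = price_equation(w=[1.0, 2.0], z=[0.0, 1.0], z_next=[0.0, 1.0])
print(abs(lhs - rhs) < 1e-12)  # True: both sides agree
```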
https://en.wikipedia.org/wiki/AMPL
AMPL (A Mathematical Programming Language) is an algebraic modeling language to describe and solve high-complexity problems for large-scale mathematical computing (i.e., large-scale optimization and scheduling-type problems). It was developed by Robert Fourer, David Gay, and Brian Kernighan at Bell Laboratories. AMPL supports dozens of solvers, both open source and commercial software, including CBC, CPLEX, FortMP, MOSEK, MINOS, IPOPT, SNOPT, KNITRO, and LGO. Problems are passed to solvers as nl files. AMPL is used by more than 100 corporate clients, and by government agencies and academic institutions. One advantage of AMPL is the similarity of its syntax to the mathematical notation of optimization problems. This allows for a very concise and readable definition of problems in the domain of optimization. Many modern solvers available on the NEOS Server (formerly hosted at the Argonne National Laboratory, currently hosted at the University of Wisconsin, Madison) accept AMPL input. According to NEOS statistics, AMPL is the most popular format for representing mathematical programming problems. Features AMPL features a mix of declarative and imperative programming styles. Formulating optimization models occurs via declarative language elements such as sets, scalar and multidimensional parameters, decision variables, objectives and constraints, which allow for concise description of most problems in the domain of mathematical optimization. Procedures and control flow statements are available in AMPL for the exchange of data with external data sources such as spreadsheets, databases, XML and text files; for data pre- and post-processing tasks around optimization models; and for the construction of hybrid algorithms for problem types for which no direct efficient solvers are available. To support re-use and simplify construction of large-scale optimization problems, AMPL allows separation of model and data. AMPL supports a wide range of problem types, among them: Linear p
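To illustrate the kind of problem such a model describes, here is a deliberately tiny linear program (hypothetical data) checked by brute-force grid search in Python; an AMPL model would state roughly the same objective and constraints declaratively (var x >= 0; maximize Obj: 3*x + 2*y; subject to C1: x + y <= 4;):

```python
from itertools import product

def solve_toy_lp(step=0.5):
    """Brute-force a toy LP: maximize 3x + 2y subject to
    x + y <= 4, x <= 2, x >= 0, y >= 0 (all numbers made up).
    Enumeration over a grid is only viable because the problem is tiny;
    real solvers (the kind AMPL drives) use the simplex or interior-point
    methods instead."""
    best = (float("-inf"), None)
    grid = [i * step for i in range(int(4 / step) + 1)]
    for x, y in product(grid, grid):
        if x + y <= 4 and x <= 2:          # feasibility check
            obj = 3 * x + 2 * y            # objective value
            if obj > best[0]:
                best = (obj, (x, y))
    return best

print(solve_toy_lp())  # (10.0, (2.0, 2.0))
```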
https://en.wikipedia.org/wiki/List%20of%20homological%20algebra%20topics
This is a list of homological algebra topics, by Wikipedia page. Basic techniques Cokernel Exact sequence Chain complex Differential module Five lemma Short five lemma Snake lemma Nine lemma Extension (algebra) Central extension Splitting lemma Projective module Injective module Projective resolution Injective resolution Koszul complex Exact functor Derived functor Ext functor Tor functor Filtration (abstract algebra) Spectral sequence Abelian category Triangulated category Derived category Applications Group cohomology Galois cohomology Lie algebra cohomology Sheaf cohomology Whitehead problem Homological conjectures in commutative algebra Homological algebra
https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3%20M%C3%A9r%C5%91
László Mérő (born Budapest, 11 December 1949) is a Hungarian research psychologist and popular science author. He has Jewish ancestry. He is a lecturer at the Experimental Psychology Department of Eötvös Loránd University and at the business school Kürt Academy. He is also a founder and leader of a software company producing computer games. One of his projects is a computer game he is developing with Ernő Rubik, the inventor of the Rubik's Cube. He is also the leader of the Hungarian team at the World Puzzle Championship. His son is Csaba Mérő, an 8-time Hungarian go champion. His daughter, Vera Mérő, is a human rights activist and author. He represented Hungary in the Tenth International Mathematical Olympiad held in Moscow in 1968, and was awarded a Bronze Medal. He graduated from Eötvös Loránd University with a degree in Mathematics in 1974. He spent the next ten years at the Computer and Automation Institute of the Hungarian Academy of Sciences, working on various pattern recognition and artificial intelligence projects. Recognizing the limitations of artificial intelligence, he began investigating human cognition. Since 1984 he has been at the Experimental Psychology Department of Eötvös Loránd University, studying cognitive psychology and psychophysics. He has written two books, Ways of Thinking (newer translation: Habits of Mind) and Moral Calculations, that aroused the interest of the wider, non-professional public. His books analyze the quasi-rational mechanisms of people and the nature of rationality in general, undermining some common beliefs about our minds' functioning. He has been publishing in Magyar Narancs a series titled Are you the dance instructor here? (The title refers to a joke: A client enters the dancing school and asks a well-dressed man: "Are you the dance instructor here?" "Fuck no, I'm the etiquette instructor!") Several of these essays were collected in a book in 2005 (see below). Volumes published in English Ways of Thin
https://en.wikipedia.org/wiki/Flor
Flor (Spanish and Portuguese for flower) in winemaking, is a film of yeast on the surface of wine, important in the manufacture of some styles of sherry. The flor is formed naturally under certain winemaking conditions, from indigenous yeasts found in the region of Andalucía in southern Spain. Normally in winemaking, it is essential to keep young wines away from exposure to air by sealing them in airtight barrels, to avoid contamination by bacteria and yeasts that tend to spoil it. However, in the manufacture of sherries, the slightly porous oak barrels are deliberately filled only about five-sixths full with the young wine, leaving "the space of two fists" empty to allow the flor yeast to take form and the bung is not completely sealed. The flor favors cooler climates and higher humidity, so the sherries produced in the coastal Sanlúcar de Barrameda and El Puerto de Santa María have a thicker cap of flor than those produced inland in Jerez. The yeast gives the resulting sherry its distinctive fresh taste, with residual flavors of fresh bread. Depending on the development of the wine, it may be aged entirely under the veil of flor to produce a fino or manzanilla sherry, or it may be fortified to limit the growth of flor and undergo oxidative aging to produce an amontillado or oloroso sherry. During the fermentation phase of sherry production, the flor yeast works anaerobically, converting sugar into ethanol. When all the sugar has been consumed, the physiology of the yeast changes to where it begins an aerobic process of breaking down and converting the acids into other compounds such as acetaldehyde. A waxy coating appears on the cells' exterior, causing the yeast to float to the surface and form a protective "blanket" thick enough to shield the wine from oxygen. This process drastically lowers the acidity of the wine and makes sherry one of the most aldehydic wines in the world. Studies have shown that for the flor to thrive, the wine must stay in a narrow alcoh
https://en.wikipedia.org/wiki/The%20Road%20to%20Reality
The Road to Reality: A Complete Guide to the Laws of the Universe is a book on modern physics by the British mathematical physicist Roger Penrose, published in 2004. It covers the basics of the Standard Model of particle physics, discussing general relativity and quantum mechanics, and discusses the possible unification of these two theories. Overview The book discusses the physical world. Many fields that 19th century scientists believed were separate, such as electricity and magnetism, are aspects of more fundamental properties. Some texts, both popular and university level, introduce these topics as separate concepts, and then reveal their combination much later. The Road to Reality reverses this process, first expounding the underlying mathematics of space–time, then showing how electromagnetism and other phenomena fall out fully formed. The book is just over 1100 pages, of which the first 383 are dedicated to mathematics—Penrose's goal is to acquaint inquisitive readers with the mathematical tools needed to understand the remainder of the book in depth. Physics enters the discussion on page 383 with the topic of spacetime. From there it moves on to fields in spacetime, deriving the classical electrical and magnetic forces from first principles; that is, if one lives in spacetime of a particular sort, these fields develop naturally as a consequence. Energy and conservation laws appear in the discussion of Lagrangians and Hamiltonians, before moving on to a full discussion of quantum physics, particle theory and quantum field theory. A discussion of the measurement problem in quantum mechanics is given a full chapter; superstrings are given a chapter near the end of the book, as are loop gravity and twistor theory. The book ends with an exploration of other theories and possible ways forward. The final chapters reflect Penrose's personal perspective, which differs in some respects from what he regards as the current fashion among theoretical physicists. He is
https://en.wikipedia.org/wiki/Wireless%20sensor%20network
Wireless sensor networks (WSNs) refer to networks of spatially dispersed and dedicated sensors that monitor and record the physical conditions of the environment and forward the collected data to a central location. WSNs can measure environmental conditions such as temperature, sound, pollution levels, humidity and wind. These are similar to wireless ad hoc networks in the sense that they rely on wireless connectivity and spontaneous formation of networks so that sensor data can be transported wirelessly. WSNs monitor physical conditions, such as temperature, sound, and pressure. Modern networks are bi-directional, both collecting data and enabling control of sensor activity.  The development of these networks was motivated by military applications such as battlefield surveillance. Such networks are used in industrial and consumer applications, such as industrial process monitoring and control and machine health monitoring and agriculture. A WSN is built of "nodes" – from a few to hundreds or thousands, where each node is connected to other sensors. Each such node typically has several parts: a radio transceiver with an internal antenna or connection to an external antenna, a microcontroller, an electronic circuit for interfacing with the sensors and an energy source, usually a battery or an embedded form of energy harvesting. A sensor node might vary in size from a shoebox to (theoretically) a grain of dust, although microscopic dimensions have yet to be realized. Sensor node cost is similarly variable, ranging from a few to hundreds of dollars, depending on node sophistication. Size and cost constraints constrain resources such as energy, memory, computational speed and communications bandwidth. The topology of a WSN can vary from a simple star network to an advanced multi-hop wireless mesh network. Propagation can employ routing or flooding. In computer science and telecommunications, wireless sensor networks are an active research area supporting many worksho
https://en.wikipedia.org/wiki/Internet%20Protocol%20television
Internet Protocol television (IPTV) is the delivery of television content over Internet Protocol (IP) networks. This is in contrast to delivery through traditional terrestrial, satellite, and cable television formats. Unlike downloaded media, IPTV offers the ability to stream the source media continuously. As a result, a client media player can begin playing the content (such as a TV channel) almost immediately. This is known as streaming media. Although IPTV uses the Internet protocol it is not limited to television streamed from the Internet (Internet television). IPTV is widely deployed in subscriber-based telecommunications networks with high-speed access channels into end-user premises via set-top boxes or other customer-premises equipment. IPTV is also used for media delivery around corporate and private networks. IPTV in the telecommunications arena is notable for its ongoing standardisation process (e.g., European Telecommunications Standards Institute). IPTV services may be classified into live television and live media, with or without related interactivity; time shifting of media, e.g., catch-up TV (replays a TV show that was broadcast hours or days ago), start-over TV (replays the current TV show from its beginning); and video on demand (VOD) which involves browsing and viewing items of a media catalogue. Definition Historically, many different definitions of IPTV have appeared, including elementary streams over IP networks, MPEG transport streams over IP networks and a number of proprietary systems. One official definition approved by the International Telecommunication Union focus group on IPTV (ITU-T FG IPTV) is: IPTV is defined as multimedia services such as television/video/audio/text/graphics/data delivered over IP-based networks managed to provide the required level of quality of service and experience, security, interactivity and reliability. Another definition of IPTV, relating to the telecommunications industry, is the one given by Allianc
https://en.wikipedia.org/wiki/G.%20N.%20Watson
George Neville Watson (31 January 1886 – 2 February 1965) was an English mathematician, who applied complex analysis to the theory of special functions. His collaboration on the 1915 second edition of E. T. Whittaker's A Course of Modern Analysis (1902) produced the classic "Whittaker and Watson" text. In 1918 he proved a significant result known as Watson's lemma, that has many applications in the theory on the asymptotic behaviour of exponential integrals. Life He was born in Westward Ho! in Devon the son of George Wentworth Watson, a schoolmaster and genealogist, and his wife, Mary Justina Griffith. He was educated at St Paul's School in London, as a pupil of F. S. Macaulay. He then studied Mathematics at Trinity College, Cambridge. There he encountered E. T. Whittaker, though their overlap was only two years. From 1914 to 1918 he lectured in Mathematics at University College, London. He became Professor of Pure Mathematics at the University of Birmingham in 1918, replacing Prof R S Heath, and remained in this role until 1951. He was awarded an honorary MSc Pure Science in 1919 by Birmingham University. He was President of the London Mathematical Society 1933/35. He died at Leamington Spa on 2 February 1965. Works His Treatise on the theory of Bessel functions (1922) also became a classic, in particular in regard to the asymptotic expansions of Bessel functions. He subsequently spent many years on Ramanujan's formulae in the area of modular equations, mock theta functions and q-series, and for some time looked after Ramanujan's lost notebook. Ramanujan discovered many more modular equations than all of his mathematical predecessors combined. Watson provided proofs for most of Ramanujan's modular equations. Bruce C. Berndt completed the project begun by Watson and Wilson. Much of Berndt's book Ramanujan's Notebooks, Part 3 (1998) is based upon the prior work of Watson. Watson's interests included solvable cases of the quintic equation. He introduced Wa
https://en.wikipedia.org/wiki/BEST%20Robotics
BEST (Boosting Engineering, Science, and Technology) is a national six-week robotics competition in the United States held each fall, designed to help interest middle school and high school students in possible engineering careers. The games are similar in scale to those of the FIRST Tech Challenge. History The idea for a BEST (Boosting Engineering, Science, and Technology) competition originated in 1993 when two Texas Instruments (TI) engineers, Ted Mahler and Steve Marum, were serving as guides for Engineering Day at their company site in Sherman, Texas. Together with a group of high school students, they watched a video of freshmen building a robot in Woodie Flowers's class at Massachusetts Institute of Technology. The high school students were so interested that Mahler and Marum said, "Why don't we do this?" With enthusiastic approval from TI management, North Texas BEST was born. The first competition was held in 1993 with 14 schools and 221 students (including one team from San Antonio). After learning that a San Antonio group had formed a non-profit organization to support a BEST event, North Texas BEST mentored them in providing their own BEST competition. Thus, San Antonio BEST, the second BEST competition site (or "hub"), was started in 1994. The two groups - North Texas and San Antonio - decided to meet for Texas BEST, a state playoff at Howard Payne University in Brownwood, Texas. The competition has also been held at Texas A&M University, Southern Methodist University (SMU), Texas Tech, University of North Texas (in Denton) and more recently it was hosted by the University of Texas at Dallas with the competition being held in Frisco, TX. The number of SABEST teams invited to Texas BEST is based on the ratio of schools participating at SA BEST to the total number participating at all the BEST hubs that feed Texas BEST multiplied by the total number of teams invited to Texas BEST. The number of San Antonio teams varies from year to year but is typical
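The allocation rule described above is a straight proportion. With hypothetical numbers (20 SA BEST schools out of 160 participating schools, 48 teams invited to Texas BEST overall):

```python
def texas_best_slots(sa_schools, total_schools, total_invited):
    """Proportional allocation as described in the text.
    All inputs below are invented for illustration."""
    return round(sa_schools / total_schools * total_invited)

print(texas_best_slots(20, 160, 48))  # 6
```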
https://en.wikipedia.org/wiki/Central%20Authentication%20Service
The Central Authentication Service (CAS) is a single sign-on protocol for the web. Its purpose is to permit a user to access multiple applications while providing their credentials (such as user ID and password) only once. It also allows web applications to authenticate users without gaining access to a user's security credentials, such as a password. The name CAS also refers to a software package that implements this protocol. Description The CAS protocol involves at least three parties: a client web browser, the web application requesting authentication, and the CAS server. It may also involve a back-end service, such as a database server, that does not have its own HTTP interface but communicates with a web application. When the client visits an application requiring authentication, the application redirects it to CAS. CAS validates the client's authenticity, usually by checking a username and password against a database (such as Kerberos, LDAP or Active Directory). If the authentication succeeds, CAS returns the client to the application, passing along a service ticket. The application then validates the ticket by contacting CAS over a secure connection and providing its own service identifier and the ticket. CAS then gives the application trusted information about whether a particular user has successfully authenticated. CAS allows multi-tier authentication via proxy address. A cooperating back-end service, like a database or mail server, can participate in CAS, validating the authenticity of users via information it receives from web applications. Thus, a webmail client and a webmail server can all implement CAS. History CAS was conceived and developed by Shawn Bayern of Yale University Technology and Planning. It was later maintained by Drew Mazurek at Yale. CAS 1.0 implemented single-sign-on. CAS 2.0 introduced multi-tier proxy authentication. Several other CAS distributions have been developed with new features. In December 2004, CAS became a project
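The handshake described above can be modeled in a few lines. This is a toy in-memory sketch, not the real CAS wire protocol (which uses HTTP redirects and XML or plain-text validation responses); the user names, passwords, and service URLs are invented:

```python
import secrets

class MiniCAS:
    """Toy model of the CAS handshake: the server issues a one-time
    service ticket after checking credentials, and the application
    redeems it exactly once."""

    def __init__(self, users):
        self.users = users    # username -> password (stand-in for LDAP/Kerberos)
        self.tickets = {}     # ticket -> (username, service)

    def login(self, username, password, service):
        """Client authenticates; CAS returns a service ticket, or None."""
        if self.users.get(username) != password:
            return None
        ticket = "ST-" + secrets.token_hex(8)
        self.tickets[ticket] = (username, service)
        return ticket

    def validate(self, ticket, service):
        """The application redeems the ticket over a back channel.
        Tickets are single-use and bound to one service identifier;
        here the ticket is consumed even when validation fails."""
        entry = self.tickets.pop(ticket, None)
        if entry and entry[1] == service:
            return entry[0]   # authenticated username
        return None

cas = MiniCAS({"alice": "s3cret"})
t = cas.login("alice", "s3cret", service="https://mail.example.edu")
print(cas.validate(t, "https://mail.example.edu"))  # alice
print(cas.validate(t, "https://mail.example.edu"))  # None (single-use)
```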
https://en.wikipedia.org/wiki/Aliivibrio%20fischeri
Aliivibrio fischeri (also called Vibrio fischeri) is a Gram-negative, rod-shaped bacterium found globally in marine environments. This species has bioluminescent properties, and is found predominantly in symbiosis with various marine animals, such as the Hawaiian bobtail squid. It is heterotrophic, oxidase-positive, and motile by means of a single polar flagellum. Free-living A. fischeri cells survive on decaying organic matter. The bacterium is a key research organism for examination of microbial bioluminescence, quorum sensing, and bacterial-animal symbiosis. It is named after Bernhard Fischer, a German microbiologist. Ribosomal RNA comparison led to the reclassification of this species from genus Vibrio to the newly created Aliivibrio in 2007. However, the name change is not generally accepted by most researchers, who still publish Vibrio fischeri (see Google Scholar for 2018–2019). Genome The genome for A. fischeri was completely sequenced in 2004 and consists of two chromosomes, one smaller and one larger. Chromosome 1 has 2.9 million base pairs (Mbp) and chromosome 2 has 1.3 Mbp, bringing the total genome to 4.2 Mbp. A. fischeri has the lowest G+C content of 27 Vibrio species, but is still most closely related to the higher-pathogenicity species such as V. cholerae. The genome for A. fischeri also carries mobile genetic elements. Ecology A. fischeri are globally distributed in temperate and subtropical marine environments. They can be found free-floating in oceans, as well as associated with marine animals, sediment, and decaying matter. A. fischeri have been most studied as symbionts of marine animals, including squids in the genera Euprymna and Sepiola, where A. fischeri can be found in the squids' light organs. This relationship has been best characterized in the Hawaiian bobtail squid (Euprymna scolopes), where A. fischeri is the only species of bacteria inhabiting the squid's light organ. Symbiosis with the Hawaiian bobtail squid A. fischeri coloniz
https://en.wikipedia.org/wiki/Morpholino
A Morpholino, also known as a Morpholino oligomer and as a phosphorodiamidate Morpholino oligomer (PMO), is a type of oligomer molecule (colloquially, an oligo) used in molecular biology to modify gene expression. Its molecular structure contains DNA bases attached to a backbone of methylenemorpholine rings linked through phosphorodiamidate groups. Morpholinos block access of other molecules to small (~25 base) specific sequences of the base-pairing surfaces of ribonucleic acid (RNA). Morpholinos are used as research tools for reverse genetics by knocking down gene function. This article discusses only the Morpholino antisense oligomers, which are nucleic acid analogs. The word "Morpholino" can occur in other chemical names, referring to chemicals containing a six-membered morpholine ring. To help avoid confusion with other morpholine-containing molecules, when describing oligos "Morpholino" is often capitalized as a trade name, but this usage is not consistent across scientific literature. Morpholino oligos are sometimes referred to as PMO (for phosphorodiamidate morpholino oligomer), especially in medical literature. Vivo-Morpholinos and PPMO are modified forms of Morpholinos with chemical groups covalently attached to facilitate entry into cells. Gene knockdown is achieved by reducing the expression of a particular gene in a cell. In the case of protein-coding genes, this usually leads to a reduction in the quantity of the corresponding protein in the cell. Knocking down gene expression is a method for learning about the function of a particular protein; in a similar manner, causing a specific exon to be spliced out of the RNA transcript encoding a protein can help to determine the function of the protein moiety encoded by that exon or can sometimes knock down the protein activity altogether. These molecules have been applied to studies in several model organisms, including mice, zebrafish, frogs and sea urchins. Morpholinos can also modify the splicing of p
https://en.wikipedia.org/wiki/ABA%20digital%20signature%20guidelines
The ABA digital signature guidelines are a set of guidelines published on 1 August 1996 by the American Bar Association (ABA) Section of Science and Technology Law. The authors are members of the Section's Information Security Committee. The document was the first overview of principles and a framework for the use of digital signatures and authentication in electronic commerce from a legal viewpoint, including technologies such as certificate authorities and public key infrastructure (PKI). The guidelines were a product of a four-year collaboration by 70 lawyers and technical experts from a dozen countries, and have been adopted as the model for legislation by some states in the US, including Florida and Utah. The Digital Signature Guidelines were followed by the Public Key Infrastructure Assessment Guidelines published by the ABA in 2003. A similar effort was undertaken in Slovenia by the Digital Signature Working Group (within the Chamber of Commerce and Industry of Slovenia (CCIS)).
https://en.wikipedia.org/wiki/Complex%20programmable%20logic%20device
A complex programmable logic device (CPLD) is a programmable logic device with complexity between that of PALs and FPGAs, and architectural features of both. The main building block of the CPLD is a macrocell, which contains logic implementing disjunctive normal form expressions and more specialized logic operations. Features Some of the CPLD features are in common with PALs: Non-volatile configuration memory. Unlike many FPGAs, an external configuration ROM isn't required, and the CPLD can function immediately on system start-up. For many legacy CPLD devices, routing constrains most logic blocks to have input and output signals connected to external pins, reducing opportunities for internal state storage and deeply layered logic. This is usually not a factor for larger CPLDs and newer CPLD product families. Other features are in common with FPGAs: Large number of gates available. CPLDs typically have the equivalent of thousands to tens of thousands of logic gates, allowing implementation of moderately complicated data processing devices. PALs typically have a few hundred gate equivalents at most, while FPGAs typically range from tens of thousands to several million. Some provisions for logic more flexible than sum-of-product expressions, including complicated feedback paths between macro cells, and specialized logic for implementing various commonly used functions, such as integer arithmetic. The most noticeable difference between a large CPLD and a small FPGA is the presence of on-chip non-volatile memory in the CPLD, which allows CPLDs to be used for "boot loader" functions, before handing over control to other devices not having their own permanent program storage. A good example is where a CPLD is used to load configuration data for an FPGA from non-volatile memory. Distinctions CPLDs were an evolutionary step from even smaller devices that preceded them, PLAs (first shipped by Signetics), and PALs. These in turn were preceded by standard logic products
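A macrocell's core job, evaluating a sum-of-products (disjunctive normal form) expression, can be sketched in software. The signal names and terms below are invented for illustration:

```python
def macrocell(product_terms, inputs):
    """Evaluate a sum-of-products (disjunctive normal form) expression,
    the basic operation of a CPLD macrocell. Each product term is a list
    of (signal_name, required_level) literals; the output is the OR of
    the ANDs of the terms."""
    return any(
        all(inputs[name] == level for name, level in term)
        for term in product_terms
    )

# f = (a AND NOT b) OR (b AND c) -- a made-up two-term expression
terms = [[("a", 1), ("b", 0)], [("b", 1), ("c", 1)]]
print(macrocell(terms, {"a": 1, "b": 0, "c": 0}))  # True  (first term fires)
print(macrocell(terms, {"a": 0, "b": 1, "c": 0}))  # False (no term satisfied)
```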
https://en.wikipedia.org/wiki/Carnivore%20%28software%29
Carnivore, later renamed DCS1000, was a system implemented by the Federal Bureau of Investigation (FBI) that was designed to monitor email and electronic communications. It used a customizable packet sniffer that could monitor all of a target user's Internet traffic. Carnivore was implemented in October 1997. By 2005 it had been replaced with improved commercial software. Development Carnivore grew out of an earlier FBI project called "Omnivore", which itself replaced an older undisclosed (at the time) surveillance tool migrated from the US Navy by FBI Director of Integrity and Compliance, Patrick W. Kelley. In September 1998, the FBI's Data Intercept Technology Unit (DITU) in Quantico, Virginia, launched a project to migrate Omnivore from Sun's Solaris operating system to a Windows NT platform. This was done to facilitate the miniaturization of the system and support a wider range of personal computer (PC) equipment. The migration project was called "Triple Phoenix" and the resulting system was named "Carnivore." Configuration The Carnivore system was a Microsoft Windows-based workstation with packet-sniffing software and a removable Jaz disk drive. This computer must be physically installed at an Internet service provider (ISP) or other location where it can "sniff" traffic on a LAN segment to look for email messages in transit. The technology itself was not highly advanced—it used a standard packet sniffer and straightforward filtering. No monitor or keyboard was present at the ISP. The critical components of the operation were the filtering criteria. Copies of every packet were made, and required filtering at a later time. To accurately match the appropriate subject, an elaborate content model was developed. An independent technical review of Carnivore for the Justice Department was prepared in 2000. Controversy Several groups and scholars expressed concern regarding the implementation, usage, and possible abuses of Carnivore. In July 2000, the Electronic F
https://en.wikipedia.org/wiki/Isozyme
In biochemistry, isozymes (also known as isoenzymes or more generally as multiple forms of enzymes) are enzymes that differ in amino acid sequence but catalyze the same chemical reaction. Isozymes usually have different kinetic parameters (e.g. different KM values), or are regulated differently. They permit the fine-tuning of metabolism to meet the particular needs of a given tissue or developmental stage. In many cases, isozymes are encoded by homologous genes that have diverged over time. Strictly speaking, enzymes with different amino acid sequences that catalyse the same reaction are isozymes if encoded by different genes, or allozymes if encoded by different alleles of the same gene; the two terms are often used interchangeably. Introduction Isozymes were first described by R. L. Hunter and Clement Markert (1957) who defined them as different variants of the same enzyme having identical functions and present in the same individual. This definition encompasses (1) enzyme variants that are the product of different genes and thus represent different loci (described as isozymes) and (2) enzymes that are the product of different alleles of the same gene (described as allozymes). Isozymes are usually the result of gene duplication, but can also arise from polyploidisation or nucleic acid hybridization. Over evolutionary time, if the function of the new variant remains identical to the original, then it is likely that one or the other will be lost as mutations accumulate, resulting in a pseudogene. However, if the mutations do not immediately prevent the enzyme from functioning, but instead modify either its function, or its pattern of expression, then the two variants may both be favoured by natural selection and become specialised to different functions. For example, they may be expressed at different stages of development or in different tissues. Allozymes may result from point mutations or from insertion-deletion (indel) events that affect the coding seque
https://en.wikipedia.org/wiki/Dataflow%20programming
In computer programming, dataflow programming is a programming paradigm that models a program as a directed graph of the data flowing between operations, thus implementing dataflow principles and architecture. Dataflow programming languages share some features of functional languages, and were generally developed in order to bring some functional concepts to a language more suitable for numeric processing. Some authors use the term datastream instead of dataflow to avoid confusion with dataflow computing or dataflow architecture, based on an indeterministic machine paradigm. Dataflow programming was pioneered by Jack Dennis and his graduate students at MIT in the 1960s. Considerations Traditionally, a program is modelled as a series of operations happening in a specific order; this may be referred to as sequential, procedural, control flow (indicating that the program chooses a specific path), or imperative programming. The program focuses on commands, in line with the von Neumann vision of sequential programming, where data is normally "at rest". In contrast, dataflow programming emphasizes the movement of data and models programs as a series of connections. Explicitly defined inputs and outputs connect operations, which function like black boxes. An operation runs as soon as all of its inputs become valid. Thus, dataflow languages are inherently parallel and can work well in large, decentralized systems. State One of the key concepts in computer programming is the idea of state, essentially a snapshot of various conditions in the system. Most programming languages require a considerable amount of state information, which is generally hidden from the programmer. Often, the computer itself has no idea which piece of information encodes the enduring state. This is a serious problem, as the state information needs to be shared across multiple processors in parallel processing machines. Most languages force the programmer to add extra code to indicate which data an
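The firing rule described above ("an operation runs as soon as all of its inputs become valid") can be sketched in a few lines. The Node class and the example graph below are my own illustration, not part of any particular dataflow language:

```python
# Illustrative sketch of a tiny dataflow graph: each node is a black box
# that fires as soon as all of its inputs become valid, so independent
# nodes could in principle run in parallel.

class Node:
    def __init__(self, func, n_inputs):
        self.func = func
        self.inputs = [None] * n_inputs
        self.ready = [False] * n_inputs
        self.outputs = []          # (target node, input slot) connections

    def connect(self, target, slot):
        self.outputs.append((target, slot))

    def receive(self, slot, value):
        self.inputs[slot] = value
        self.ready[slot] = True
        if all(self.ready):        # fire only when every input is valid
            self.result = self.func(*self.inputs)
            for target, s in self.outputs:
                target.receive(s, self.result)

# Build the graph (a + b) * (a - b) without any explicit sequencing.
add = Node(lambda x, y: x + y, 2)
sub = Node(lambda x, y: x - y, 2)
mul = Node(lambda x, y: x * y, 2)
add.connect(mul, 0)
sub.connect(mul, 1)

for node in (add, sub):            # feed the same source data to both nodes
    node.receive(0, 7)
    node.receive(1, 3)

print(mul.result)  # (7+3) * (7-3) = 40
```

Note that nothing in the graph construction fixes an execution order; the order in which `add` and `sub` fire is irrelevant to the result, which is exactly why such graphs parallelize naturally.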
https://en.wikipedia.org/wiki/Dirac%20operator
In mathematics and quantum mechanics, a Dirac operator is a differential operator that is a formal square root, or half-iterate, of a second-order operator such as a Laplacian. The original case which concerned Paul Dirac was to factorise formally an operator for Minkowski space, to get a form of quantum theory compatible with special relativity; to get the relevant Laplacian as a product of first-order operators he introduced spinors. It was first published in 1928. Formal definition In general, let D be a first-order differential operator acting on a vector bundle V over a Riemannian manifold M. If D2 = ∆, where ∆ is the Laplacian of V, then D is called a Dirac operator. In high-energy physics, this requirement is often relaxed: only the second-order part of D2 must equal the Laplacian. Examples Example 1 D = −i ∂x is a Dirac operator on the tangent bundle over a line. Example 2 Consider a simple bundle of notable importance in physics: the configuration space of a particle with spin confined to a plane, which is also the base manifold. It is represented by a wavefunction ψ(x, y) = (χ(x, y), η(x, y)), where x and y are the usual coordinate functions on R2. χ specifies the probability amplitude for the particle to be in the spin-up state, and similarly for η. The so-called spin-Dirac operator can then be written D = −i σx ∂x − i σy ∂y, where σi are the Pauli matrices. Note that the anticommutation relations for the Pauli matrices make the proof of the above defining property trivial. Those relations define the notion of a Clifford algebra. Solutions to the Dirac equation for spinor fields are often called harmonic spinors. Example 3 Feynman's Dirac operator describes the propagation of a free fermion in three dimensions and is elegantly written using the Feynman slash notation. In introductory textbooks to quantum field theory, this will appear in the form iħ γμ ∂μ − mc, where the γμ are the off-diagonal Dirac matrices, built from the Pauli matrices, and the remaining constants are c, the speed of light, ħ, Planck's constant, and m, the mass of a fermion.
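For the spin-Dirac operator of Example 2, the Pauli anticommutation relations {σi, σj} = 2δij I do indeed make the defining property a one-line check (a sketch in standard notation; the sign convention takes ∆ = −(∂x² + ∂y²), the geometer's Laplacian):

```latex
\begin{aligned}
D^2 &= \bigl(-i\sigma_x\partial_x - i\sigma_y\partial_y\bigr)^2 \\
    &= -\sigma_x^2\,\partial_x^2 - \sigma_y^2\,\partial_y^2
       - (\sigma_x\sigma_y + \sigma_y\sigma_x)\,\partial_x\partial_y \\
    &= -\bigl(\partial_x^2 + \partial_y^2\bigr) I ,
\end{aligned}
```

since σx² = σy² = I and σxσy + σyσx = 0.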
https://en.wikipedia.org/wiki/F%C3%B6rster%20resonance%20energy%20transfer
Förster resonance energy transfer (FRET), fluorescence resonance energy transfer, resonance energy transfer (RET) or electronic energy transfer (EET) is a mechanism describing energy transfer between two light-sensitive molecules (chromophores). A donor chromophore, initially in its electronic excited state, may transfer energy to an acceptor chromophore through nonradiative dipole–dipole coupling. The efficiency of this energy transfer is inversely proportional to the sixth power of the distance between donor and acceptor, making FRET extremely sensitive to small changes in distance. Measurements of FRET efficiency can be used to determine if two fluorophores are within a certain distance of each other. Such measurements are used as a research tool in fields including biology and chemistry. FRET is analogous to near-field communication, in that the radius of interaction is much smaller than the wavelength of light emitted. In the near-field region, the excited chromophore emits a virtual photon that is instantly absorbed by a receiving chromophore. These virtual photons are undetectable, since their existence violates the conservation of energy and momentum, and hence FRET is known as a radiationless mechanism. Quantum electrodynamical calculations have been used to determine that radiationless (FRET) and radiative energy transfer are the short- and long-range asymptotes of a single unified mechanism. Terminology Förster resonance energy transfer is named after the German scientist Theodor Förster. When both chromophores are fluorescent, the term "fluorescence resonance energy transfer" is often used instead, although the energy is not actually transferred by fluorescence. In order to avoid an erroneous interpretation of the phenomenon that is always a nonradiative transfer of energy (even when occurring between two fluorescent chromophores), the name "Förster resonance energy transfer" is preferred to "fluorescence resonance energy transfer"; however, the latt
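The distance dependence stated above is usually written as the Förster relation E = 1 / (1 + (r/R0)^6), where R0 is the Förster radius at which efficiency is 50%. The excerpt does not spell the formula out, but it is standard; the R0 value below is purely illustrative:

```python
def fret_efficiency(r, r0):
    """Standard Förster relation: E = 1 / (1 + (r/R0)^6)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

r0 = 5.0  # illustrative Förster radius in nm (typical values are a few nm)
print(fret_efficiency(5.0, r0))             # 0.5 exactly at r = R0
print(round(fret_efficiency(2.5, r0), 3))   # near 1 at half the radius
print(round(fret_efficiency(10.0, r0), 3))  # near 0 at twice the radius
```

The steep swing from near 1 to near 0 over a factor of four in distance is what makes FRET such a sensitive "spectroscopic ruler".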
https://en.wikipedia.org/wiki/Java%20syntax
The syntax of Java is the set of rules defining how a Java program is written and interpreted. The syntax is mostly derived from C and C++. Unlike C++, Java has no global functions or variables; the closest analogue is static data members (class variables), which are accessible throughout a program via their class. All code belongs to classes and all values are objects. The only exceptions are the primitive types, which are not represented by a class instance for performance reasons (though they can be automatically converted to objects and vice versa via autoboxing). Some features like operator overloading or unsigned integer types are omitted to simplify the language and to avoid possible programming mistakes. The Java syntax has been gradually extended in the course of numerous major JDK releases, and now supports capabilities such as generic programming and function literals (called lambda expressions in Java). Since 2017, a new JDK version has been released twice a year, with each release bringing incremental improvements to the language. Basics Identifier An identifier is the name of an element in the code. There are certain standard naming conventions to follow when selecting names for elements. Identifiers in Java are case-sensitive. An identifier can contain: Any Unicode character that is a letter (including numeric letters like Roman numerals) or digit. A currency sign (such as ¥). A connecting punctuation character (such as _). An identifier cannot: Start with a digit. Be equal to a reserved keyword, the null literal or a boolean literal. Keywords Literals Integer literals are of int type by default unless long type is specified by appending the L or l suffix to the literal, e.g. 367L. Since Java SE 7, it is possible to include underscores between the digits of a number to increase readability; for example, one million can be written as 1_000_000. Variables Variables are identifiers associated with values. They are declared by writing the variable's type and name, and are optionally initialized in the same statemen
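The identifier and literal rules above can be condensed into a short sketch (the class and variable names are my own, chosen for illustration):

```java
// Illustrative sketch of Java's identifier and integer-literal rules.
public class SyntaxDemo {
    static long total() {
        int small = 367;            // int literal: the default integer type
        long big = 367L;            // the L suffix makes the literal a long
        long readable = 1_000_000L; // underscores between digits, since Java SE 7
        int $price = 100;           // currency signs are legal in identifiers
        int _count = 42;            // so is connecting punctuation such as '_'
        // int 1st = 0;             // illegal: an identifier cannot start with a digit
        return small + big + readable + $price + _count;
    }

    public static void main(String[] args) {
        System.out.println(total()); // 367 + 367 + 1000000 + 100 + 42 = 1000876
    }
}
```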
https://en.wikipedia.org/wiki/WildCRU
The Wildlife Conservation Research Unit (WildCRU) is part of the Department of Zoology at the University of Oxford in England. Its mission is to achieve practical solutions to conservation problems through original scientific research, training conservation scientists to conduct research, putting scientific knowledge into practice, and educating and involving the public to achieve lasting solutions. The Unit was founded in 1986 by Professor David W. Macdonald. In 2022 Professor Amy Dickman took over from David W. Macdonald as Director. Members come from more than 30 countries and many have returned to hold influential roles in conservation. WildCRU research has been used to advise policy-makers worldwide. More than 300 scientific papers and 25 reports have been published, over a hundred fruitful collaborations have been fostered, and over 45 students have completed doctoral theses. WildCRU projects use all four elements of their Conservation Quartet: research to understand the problem, education to explain it, community involvement to ensure participation and acceptance, and implementation of a solution. The approach is interdisciplinary, linking to public health, community development and animal welfare. In a new initiative concerning ‘biodiversity and business’, WildCRU is working directly to influence policy making processes in industry. Current project areas include saving endangered species, resolving conflict, reconciling farming and wildlife, researching fundamental ecology, and managing wildlife diseases, pests and invasive species. Specific projects include protecting the Ethiopian wolf, Grevy's zebra and endemic birds in the Galapagos Islands, finding solutions to bushmeat exploitation in West Africa, community conservation education in Africa, sustainable farming, badger ecology and behavior, and the impact of American mink on native wildlife in Britain, Belarus, and Argentina. WildCRU is located in Tubney House, Abingdon Road, Tubney, Oxfordshire.
https://en.wikipedia.org/wiki/RT.X100
The RT.X100 Pro Suite was a real-time PCI video editing card manufactured by Matrox Corporation. Used with Adobe Premiere, it enabled real-time preview on a TV or video monitor. It was generally bundled with Adobe Premiere Pro (video editing software), Adobe Audition (digital audio editor), and Adobe Encore DVD (for the creation of DVDs). The RT.X100 Pro Collection added a copy of Adobe After Effects (special effects software). It was released in 2003 and was meant to replace the Matrox RT2500. See also AMD FirePro External links RT.X100 Home Page Graphics hardware Graphics processing units
https://en.wikipedia.org/wiki/FLEX%20%28protocol%29
FLEX is a communications protocol developed by Motorola and used in many pagers. FLEX provides one-way communication only (from the provider to the pager device), but a related protocol called ReFLEX provides two-way messaging. Protocol Transmission of message data occurs in one of four modes: 1600/2, 3200/2, 3200/4, or 6400/4. All modes use FSK modulation. At 1600/2, this is a 2-level FSK signal transmitted at 1600 bits per second. At 3200/2, this is a 2-level FSK signal transmitted at 3200 bits per second. At 3200/4, this is a 4-level FSK signal transmitted at 1600 symbols per second; each 4-level symbol represents two bits, for a bit rate of 3200 bits per second. At 6400/4, this is a 4-level FSK signal transmitted at 3200 symbols per second, or 6400 bits per second. Data is transmitted in a set of 128 frames that takes 4 minutes to complete. Each frame contains a sync sequence followed by 11 data blocks. The data blocks contain 256, 512 or 1024 bits at 1600, 3200 or 6400 bits per second, respectively. The standard has been designed to allow the pager's receiver to be turned off for a high percentage of the time and therefore save on battery usage. Security Data transmitted over FLEX is not encrypted. A BCH error-correcting code is used to improve the integrity of the data, although this is not cryptographically secure. There have been reported instances of individuals actively listening to pager traffic (private investigators, news organizations, etc.). Usage In the Netherlands, the emergency services use the FLEX protocol in the nationwide P2000 network for pagers. The traffic on this network can be monitored online. In South Australia, the state's SAGRN network for the emergency services paging system (CFS, SES, MFS and SAAS) runs on the FLEX 1600 protocol and can be monitored online. See also ReFLEX Mobitex DataTAC External links FLEX: The New Edge For The Paging Industry Design And Implementation Of A Practical FLEX Paging Decoder US6396411B1: Relia
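The mode arithmetic above can be checked in a few lines (the constant names are mine; the figures come from the article):

```python
# FLEX timing arithmetic: 128 frames every 4 minutes -> 1.875 s per frame.
FRAME_SECONDS = 4 * 60 / 128

# mode name: (bits per second, FSK levels)
MODES = {"1600/2": (1600, 2), "3200/2": (3200, 2),
         "3200/4": (3200, 4), "6400/4": (6400, 4)}

for name, (bps, levels) in MODES.items():
    bits_per_symbol = levels.bit_length() - 1   # 2-level -> 1 bit, 4-level -> 2 bits
    symbol_rate = bps // bits_per_symbol
    raw_bits = bps * FRAME_SECONDS              # raw channel bits per frame
    print(f"{name}: {symbol_rate} symbols/s, {raw_bits:.0f} raw bits per frame")
```

At 1600/2, for instance, a frame carries 1600 × 1.875 = 3000 raw bits, of which 11 × 256 = 2816 are data-block bits; the remainder covers the sync sequence and framing overhead.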
https://en.wikipedia.org/wiki/List%20of%20PHP%20editors
This article contains a list of text editors with features specific to the PHP scripting language.

Free editors

Cross-platform
Aptana Studio – Eclipse-based IDE, able to use PDT plugins, visual JS editor. Open-source, free project (Community edition merged in).
Atom – free and open-source text editor with out-of-the-box PHP support.
Bluefish – a multipurpose editor with PHP syntax support, in-line PHP documentation, etc. With GVfs, supports SFTP, FTP, WebDAV, and SMB.
Brackets – free and open-source editor built with HTML5/Node.js by Adobe, aimed primarily at front-end work.
CodeLite – an open-source, cross-platform IDE for C/C++ and PHP. The built-in plugins support SVN, SSH/SFTP access, Git database browsing and others.
Eclipse – PHP Development Tools (PDT) and PHPEclipse projects. With additional plugins, supports SVN, CVS, database modelling, SSH/FTP access, database navigation, Trac integration, and others.
Editra – open-source editor. Syntax highlighting and (partial) code completion for PHP + HTML, and other IDE-like features such as a code browser.
Emacs – advanced text editor. The nXhtml addon has special support for PHP (and other template languages). The major mode web-mode.el is designed for editing mixed HTML templates.
Geany – syntax highlighting for HTML + PHP. Provides a PHP function list.
jEdit – free/open-source editor. Supports SFTP and FTP.
Komodo Edit – general-purpose scripting-language editor with support for PHP. Free version of the commercial ActiveState Komodo IDE.
NetBeans – IDE with PHP support and integration with web standards. Supports SFTP and FTP. Full support for SVN and Git since 7.2, and powerful plugin support for added functionality.
SciTE – PHP syntax highlighting, compiler integration, powerful configuration via a Lua API.
Vim – provides PHP syntax highlighting and debugging.

Windows
ConTEXT – freeware editor with syntax highlighting; no longer under development.
Crimson Editor – lightweight editor. Supports FTP.
Microsoft WebM
https://en.wikipedia.org/wiki/Query%20by%20humming
Query by humming (QbH) is a music retrieval system that branches off the original classification systems of title, artist, composer, and genre. It normally applies to songs or other music with a distinct single theme or melody. The system involves taking a user-hummed melody (input query) and comparing it to an existing database. The system then returns a ranked list of music closest to the input query. One example of this would be a system involving a portable media player with a built-in microphone that allows for faster searching through media files. The MPEG-7 standard includes provisions for QbH music searches. Examples of QbH systems include ACRCloud, SoundHound, Musipedia, and Tunebot. External links Query By Humming – Musical Information Retrieval in an Audio Database, paper by Asif Ghias, Jonathan Logan, David Chamberlin, Brian C. Smith; ACM Multimedia 1995 A survey presentation of QBH by Eugene Weinstein, 2006 The New Zealand Digital Library MELody inDEX, article by Rodger J. McNab, Lloyd A. Smith, David Bainbridge and Ian H. Witten; D-Lib Magazine 1997 Name that Tune: A Pilot Study in Finding a Melody from a Sung Query, article by Bryan Pardo, Jonah Shifrin, and William Birmingham, Journal of the American Society for Information Science and Technology, vol. 55 (4), pp. 283-300, 2004 Acoustic fingerprinting Music search engines Voice technology
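A toy version of the pipeline (hummed pitches → melodic contour → ranked matches) can be sketched as follows. The Parsons-code contour and edit-distance ranking used here are common textbook choices, not necessarily what any of the listed systems uses, and the database entries are invented:

```python
# Hypothetical QbH sketch: reduce each melody to its pitch contour
# (Parsons code: U = up, D = down, R = repeat) and rank database entries
# by edit distance from the hummed query's contour.

def contour(pitches):
    return "".join("U" if b > a else "D" if b < a else "R"
                   for a, b in zip(pitches, pitches[1:]))

def edit_distance(s, t):
    prev = list(range(len(t) + 1))          # classic dynamic-programming table
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[-1] + 1,            # insertion
                           prev[j - 1] + (cs != ct)))  # substitution
        prev = cur
    return prev[-1]

def rank(query_pitches, database):
    q = contour(query_pitches)
    return sorted(database, key=lambda item: edit_distance(q, contour(item[1])))

# Toy database of (title, MIDI-like pitch sequence); values are invented.
db = [("Ode to Joy",    [64, 64, 65, 67, 67, 65, 64, 62]),
      ("Frere Jacques", [60, 62, 64, 60, 60, 62, 64, 60])]
hummed = [52, 52, 53, 55, 55, 53, 52, 50]  # same shape as Ode to Joy, lower key
print(rank(hummed, db)[0][0])
```

Because the contour discards absolute pitch, a query hummed in the wrong key (as above) still matches the melody with the same shape, which is the whole point of contour-based retrieval.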
https://en.wikipedia.org/wiki/PALASM
PALASM is an early hardware description language, used to translate Boolean functions and state transition tables into a fuse map for use with Programmable Array Logic (PAL) devices introduced by Monolithic Memories, Inc. (MMI). The language was developed by John Birkner in the early 1980s. It is not case-sensitive. The PALASM compiler was written by MMI in FORTRAN IV on an IBM 370/168. MMI made the source code available to users at no cost. By 1983, MMI customers ran versions on the DEC PDP-11, Data General NOVA, Hewlett-Packard HP 2100, MDS800 and others. A widely used MS-DOS port was produced by MMI. There was a Windows front-end written sometime later. See also Advanced Boolean Expression Language (ABEL) References PALASM 4 V1.5 download External links brouhaha.com - MMI PALASM notes with FORTRAN Source Code "MMI Datebook" with PALASM examples and users guide Hardware description languages
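As a loose illustration of what "translating Boolean functions into a fuse map" means, the sketch below renders each product term of a sum-of-products equation as a row of fuse bits, one AND row per term. This is my own simplification for exposition, not PALASM's actual output format:

```python
# Hypothetical fuse-map sketch: in a PAL, each product term is an AND row;
# a 1 here marks a connection left intact for an input or its complement.
# (PALASM-style notation: '*' is AND, '+' is OR, '/' is NOT.)

INPUTS = ["A", "B", "C"]

def fuse_row(term):
    """term maps input name -> True (uncomplemented) / False (complemented)."""
    row = []
    for name in INPUTS:
        row.append(1 if term.get(name) is True else 0)   # fuse for the input
        row.append(1 if term.get(name) is False else 0)  # fuse for its complement
    return row

# O = A*B + /C, written as two product terms:
terms = [{"A": True, "B": True}, {"C": False}]
for row in [fuse_row(t) for t in terms]:
    print(row)
```

The two printed rows, [1, 0, 1, 0, 0, 0] and [0, 0, 0, 0, 0, 1], correspond to the product terms A*B and /C; a real PAL device then ORs the selected rows together.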
https://en.wikipedia.org/wiki/Point-set%20triangulation
A triangulation of a set of points P in the Euclidean space R^d is a simplicial complex that covers the convex hull of P, and whose vertices belong to P. In the plane (when P is a set of points in R^2), triangulations are made up of triangles, together with their edges and vertices. Some authors require that all the points of P are vertices of its triangulations. In this case, a triangulation of a set of points P in the plane can alternatively be defined as a maximal set of non-crossing edges between points of P. In the plane, triangulations are special cases of planar straight-line graphs. A particularly interesting kind of triangulations are the Delaunay triangulations. They are the geometric duals of Voronoi diagrams. The Delaunay triangulation of a set of points P in the plane contains the Gabriel graph, the nearest neighbor graph and the minimal spanning tree of P. Triangulations have a number of applications, and there is an interest in finding "good" triangulations of a given point set under some criteria, for instance minimum-weight triangulations. Sometimes it is desirable to have a triangulation with special properties, e.g., one in which all triangles have large angles (long and narrow ("splinter") triangles are avoided). Given a set of edges that connect points of the plane, the problem of determining whether they contain a triangulation is NP-complete. Regular triangulations Some triangulations of a set of points P in R^d can be obtained by lifting the points into R^(d+1) (which amounts to adding a coordinate to each point of P), by computing the convex hull of the lifted set of points, and by projecting the lower faces of this convex hull back onto R^d. The triangulations built this way are referred to as the regular triangulations of P. When the points are lifted to the paraboloid of equation z = x_1^2 + ... + x_d^2, this construction results in the Delaunay triangulation of P. Note that, in order for this construction to provide a triangulation, the lower convex hull of the lifted set of points needs to b
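The paraboloid lifting has a well-known computational counterpart: the empty-circumcircle ("in-circle") test that characterizes the Delaunay triangulation reduces to the sign of a determinant of lifted coordinates. A self-contained sketch (the function name is mine):

```python
# A point d lies inside the circumcircle of the counter-clockwise triangle
# (a, b, c) exactly when the lifted point (dx, dy, dx^2 + dy^2) lies below
# the plane through the lifted a, b, c; this is the sign of a 3x3 determinant.

def in_circle(a, b, c, d):
    m = [[p[0] - d[0],
          p[1] - d[1],
          (p[0] - d[0]) ** 2 + (p[1] - d[1]) ** 2] for p in (a, b, c)]
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return det > 0   # assumes (a, b, c) given in counter-clockwise order

a, b, c = (0, 0), (1, 0), (0, 1)
print(in_circle(a, b, c, (0.25, 0.25)))  # True: inside the circumcircle
print(in_circle(a, b, c, (2, 2)))        # False: outside
```

With exact arithmetic this predicate is all a Delaunay algorithm needs; robust implementations add careful handling of near-zero determinants.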
https://en.wikipedia.org/wiki/Chisini%20mean
In mathematics, a function f of n variables x1, ..., xn leads to a Chisini mean M if, for every vector ⟨x1, ..., xn⟩, there exists a unique M such that f(M,M, ..., M) = f(x1,x2, ..., xn). The arithmetic, harmonic, geometric, generalised, Heronian and quadratic means are all Chisini means, as are their weighted variants. While Oscar Chisini was arguably the first to deal with "substitution means" in some depth in 1929, the idea of defining a mean as above is quite old, appearing (for example) in early works of Augustus De Morgan. See also Fréchet mean Generalized mean Jensen's inequality Quasi-arithmetic mean Stolarsky mean References Mathematical analysis Means
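The defining equation f(M, ..., M) = f(x1, ..., xn) can be checked numerically. The bisection helper below is my own sketch; it assumes f([M]*n) is increasing in M, which holds for the sum (arithmetic mean) and, on positive inputs, the product (geometric mean):

```python
import math

def chisini_mean(f, xs):
    """Solve f(M, ..., M) = f(xs) for M by bisection (f assumed increasing)."""
    target = f(xs)
    lo, hi = min(xs), max(xs)        # the mean must lie between min and max
    for _ in range(200):
        mid = (lo + hi) / 2
        if f([mid] * len(xs)) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

xs = [1.0, 4.0, 16.0]
print(chisini_mean(sum, xs))         # arithmetic mean, approximately 7.0
print(chisini_mean(math.prod, xs))   # geometric mean, approximately 4.0
```

Choosing f as the sum recovers the arithmetic mean (M = 7 since 3M = 21), and choosing the product recovers the geometric mean (M = 4 since M^3 = 64), illustrating why both are Chisini means.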
https://en.wikipedia.org/wiki/Freebox
The Freebox is an ADSL/VDSL/FTTH modem and set-top box that the French Internet service provider Free (part of the Iliad group) provides to its DSL and FTTH subscribers. Its main use is as a high-end fixed and wireless modem (802.11g MIMO), but it also allows Free to offer additional services over ADSL, such as IPTV including high definition (1080p), video recording with time-shifting capabilities, digital radio, and VoIP telephone service via one RJ-11 connector (the first version came with two such jacks, but only one was ever activated). The Freebox is provided free to subscribers, its value being 190 euros according to the operator. It is delivered with a remote control, a multimedia box equipped with a 250 GB hard drive, and accessories (cables and filters). At the end of Q2 2005, more than 1.1 million subscribers were equipped with the Freebox. According to the company's official results publications, the two-million mark was reached in September 2006. V6 generation, Freebox Révolution The sixth-generation device is called the Freebox Révolution or V6 (Version 6). It was launched in early 2011. It is composed of a pair of devices: the ADSL modem/router and the IPTV set-top box/media player. The boxes were designed by Philippe Starck. The Freebox Server device The Freebox Server is a DSL modem, a router, a Wi-Fi hotspot, a NAS (250 GB hard drive), a DECT base supporting up to 8 connected DECT phone sets, and a digital video recorder for TNT (also known as DVB-T) and IPTV. As the firmware is updated, its functionality increases. Most notably: An external hard drive can be connected to its USB and/or eSATA port. However, some TV channels cannot be recorded to an external hard drive due to copyright policy limitations. Such limitations do not apply to channels recorded from TNT. The supported video formats are quite wide in range, including MP4, H.264, MP2, MKV, AVI and others. Some formats are not supported, though firmware updates may increase the numbe
https://en.wikipedia.org/wiki/Single%20program%2C%20multiple%20data
In computing, single program, multiple data (SPMD) is a term that has been used to refer to computational models for exploiting parallelism whereby multiple processors cooperate in the execution of a program in order to obtain results faster. The term SPMD was introduced in 1983 and was used to denote two different computational models: by Michel Auguin (University of Nice Sophia-Antipolis) and François Larbey (Thomson/Sintra), as a “fork-and-join” and data-parallel approach where the parallel tasks (“single program”) are split up and run simultaneously in lockstep on multiple SIMD processors with different inputs, and by Frederica Darema (IBM), where “all (processors) processes begin executing the same program... but through synchronization directives ... self-schedule themselves to execute different instructions and act on different data”, enabling MIMD parallelization of a given program; this is a more general approach than data-parallel and more efficient than fork-and-join for parallel execution on general-purpose multiprocessors. The (IBM) SPMD is the most common style of parallel programming and can be considered a subcategory of MIMD in that it refers to MIMD execution of a given (“single”) program. It is also a prerequisite for research concepts such as active messages and distributed shared memory. SPMD vs SIMD In SPMD parallel execution, multiple autonomous processors simultaneously execute the same program at independent points, rather than in the lockstep that SIMD or SIMT imposes on different data. With SPMD, tasks can be executed on general-purpose CPUs. In SIMD the same operation (instruction) is applied on multiple data to manipulate data streams (a version of SIMD is vector processing, where the data are organized as vectors). Another class of processors, GPUs, encompasses multiple SIMD stream processing. Note that SPMD and SIMD are not mutually exclusive; SPMD parallel execution can include SIMD, or vector, or GPU sub-processing.
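Darema's model can be sketched with a process pool: every worker runs the same function, but uses its own rank to self-schedule its data and even take a different branch. The code below is my own illustration, with a pool of OS processes standing in for the processors:

```python
# SPMD sketch: one program, run by several workers; each worker picks its
# own slice of the data by rank, and ranks may follow different branches
# (which is what makes this MIMD-style rather than lockstep SIMD).

from multiprocessing import Pool

DATA = list(range(16))
NPROCS = 4

def program(rank):
    chunk = DATA[rank::NPROCS]      # self-schedule: take every NPROCS-th item
    if rank == 0:                   # ranks may execute different instructions
        return ("sum", sum(chunk))
    return ("max", max(chunk))

if __name__ == "__main__":
    with Pool(NPROCS) as pool:
        print(pool.map(program, range(NPROCS)))
```

The `if __name__ == "__main__"` guard matters on platforms where `multiprocessing` spawns fresh interpreters; in an MPI program the same structure appears as branching on the process rank returned by the communicator.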
https://en.wikipedia.org/wiki/List%20of%20system%20quality%20attributes
Within systems engineering, quality attributes are realized non-functional requirements used to evaluate the performance of a system. These are sometimes named architecture characteristics, or "ilities" after the suffix many of the words share. They are usually architecturally significant requirements that require architects' attention. Quality attributes Notable quality attributes include: accessibility, accountability, accuracy, adaptability, administrability, affordability, agility, auditability, autonomy, availability, compatibility, composability, confidentiality, configurability, correctness, credibility, customizability, debuggability, degradability, determinability, demonstrability, dependability, deployability, discoverability, distributability, durability, effectiveness, efficiency, evolvability, extensibility, failure transparency, fault-tolerance, fidelity, flexibility, inspectability, installability, integrity, interchangeability, interoperability, learnability, localizability, maintainability, manageability, mobility, modifiability, modularity, observability, operability, orthogonality, portability, precision, predictability, process capabilities, producibility, provability, recoverability, redundancy, relevance, reliability, repeatability, reproducibility, resilience, responsiveness, reusability, robustness, safety, scalability, seamlessness, self-sustainability, serviceability (a.k.a. supportability), securability, simplicity, stability, standards compliance, survivability, sustainability, tailorability, testability, timeliness, traceability, transparency, ubiquity, understandability, upgradability, usability, vulnerability. Many of these quality attributes can also be applied to data quality. Common subsets Together, reliability, availability, serviceability, usability and installability are referred to as RASUI. Functionality, usability, reliability, performance and supportability are together referred to as FURPS in relation to software