https://en.wikipedia.org/wiki/Sound%20quality
Sound quality is typically an assessment of the accuracy, fidelity, or intelligibility of audio output from an electronic device. Quality can be measured objectively, such as when tools are used to gauge the accuracy with which the device reproduces an original sound; or it can be measured subjectively, such as when human listeners respond to the sound or gauge its perceived similarity to another sound. The sound quality of a reproduction or recording depends on a number of factors, including the equipment used to make it, processing and mastering done to the recording, the equipment used to reproduce it, as well as the listening environment used to reproduce it. In some cases, processing such as equalization, dynamic range compression or stereo processing may be applied to a recording to create audio that is significantly different from the original but may be perceived as more agreeable to a listener. In other cases, the goal may be to reproduce audio as closely as possible to the original. When applied to specific electronic devices, such as loudspeakers, microphones, amplifiers or headphones, sound quality usually refers to accuracy, with higher-quality devices providing higher-accuracy reproduction. When applied to processing steps such as mastering recordings, absolute accuracy may be secondary to artistic or aesthetic concerns. In still other situations, such as recording a live musical performance, audio quality may refer to proper placement of microphones around a room to optimally use room acoustics. Digital audio Digital audio is stored in many formats. The simplest form is uncompressed PCM, where audio is stored as a series of quantized audio samples spaced at regular intervals in time. As samples are placed closer together in time, higher frequencies can be reproduced. According to the sampling theorem, any bandwidth-limited signal (that does not contain a pure sinusoidal component) with bandwidth B can be perfectly described by more than 2B samples per second.
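As a rough illustration of the PCM description above, the sketch below (not from the article; the tone frequency, bit depth, rates and durations are arbitrary choices of mine) samples and quantizes a sine tone in plain Python. A tone of frequency f is only captured faithfully when the sample rate exceeds 2f, per the sampling theorem quoted above.

# Minimal PCM sketch: sample a sine tone and quantize it to signed integers.
import math

def pcm_samples(freq_hz, sample_rate_hz, duration_s=0.01, bits=16):
    """Sample a sine tone and quantize it, as in uncompressed PCM audio."""
    n = int(duration_s * sample_rate_hz)
    full_scale = 2 ** (bits - 1) - 1          # e.g. 32767 for 16-bit audio
    return [round(full_scale * math.sin(2 * math.pi * freq_hz * i / sample_rate_hz))
            for i in range(n)]

# A 1 kHz tone at the CD rate of 44,100 samples/s is well above the 2*f limit;
# the same tone sampled at 1,500 samples/s violates it and would alias.
ok = pcm_samples(1000, 44100)
aliased = pcm_samples(1000, 1500)
print(len(ok), len(aliased))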
https://en.wikipedia.org/wiki/Dynamic%20positioning
Dynamic positioning (DP) is a computer-controlled system to automatically maintain a vessel's position and heading by using its own propellers and thrusters. Position reference sensors, combined with wind sensors, motion sensors and gyrocompasses, provide information to the computer pertaining to the vessel's position and the magnitude and direction of environmental forces affecting its position. Examples of vessel types that employ DP include ships and semi-submersible mobile offshore drilling units (MODU), oceanographic research vessels, cable layer ships and cruise ships. The computer program contains a mathematical model of the vessel that includes information pertaining to the wind and current drag of the vessel and the location of the thrusters. This knowledge, combined with the sensor information, allows the computer to calculate the required steering angle and thruster output for each thruster. This allows operations at sea where mooring or anchoring is not feasible due to deep water, congestion on the sea bottom (pipelines, templates) or other problems. Dynamic positioning may either be absolute in that the position is locked to a fixed point over the bottom, or relative to a moving object like another ship or an underwater vehicle. One may also position the ship at a favorable angle towards wind, waves and current, called weathervaning. Dynamic positioning is used by much of the offshore oil industry, for example in the North Sea, Persian Gulf, Gulf of Mexico, West Africa, and off the coast of Brazil. There are currently more than 1800 DP ships. History Dynamic positioning began in the 1960s for offshore drilling. With drilling moving into ever deeper waters, Jack-up barges could not be used any more, and anchoring in deep water was not economical. As part of Project Mohole, in 1961 the drillship Cuss 1 was fitted with four steerable propellers. The Mohole project was attempting to drill to the Moho, which required a solution for deep water drilling.
https://en.wikipedia.org/wiki/Content%20moderation
On Internet websites that invite users to post comments, content moderation is the process of detecting contributions that are irrelevant, obscene, illegal, harmful, or insulting, in contrast to useful or informative contributions, frequently for censorship or suppression of opposing viewpoints. The purpose of content moderation is to remove or apply a warning label to problematic content or allow users to block and filter content themselves. Various types of Internet sites permit user-generated content such as comments, including Internet forums, blogs, and news sites powered by scripts such as phpBB, a wiki, or PHP-Nuke. Depending on the site's content and intended audience, the site's administrators will decide what kinds of user comments are appropriate, then delegate the responsibility of sifting through comments to lesser moderators. Most often, they will attempt to eliminate trolling, spamming, or flaming, although this varies widely from site to site. Major platforms use a combination of algorithmic tools, user reporting and human review. Social media sites may also employ content moderators to manually inspect or remove content flagged for hate speech or other objectionable content. Other content issues include revenge porn, graphic content, child abuse material and propaganda. Some websites must also make their content hospitable to advertisements. Supervisor moderation Also known as unilateral moderation, this kind of moderation system is often seen on Internet forums. A group of people are chosen by the site's administrators (usually on a long-term basis) to act as delegates, enforcing the community rules on their behalf. These moderators are given special privileges to delete or edit others' contributions and/or exclude people based on their e-mail address or IP address, and generally attempt to remove negative contributions throughout the community. They act as an invisible backbone, underpinning the social web in a crucial but undervalue
https://en.wikipedia.org/wiki/Molybdenum%20disulfide
Molybdenum disulfide (or moly) is an inorganic compound composed of molybdenum and sulfur. Its chemical formula is MoS2. The compound is classified as a transition metal dichalcogenide. It is a silvery black solid that occurs as the mineral molybdenite, the principal ore for molybdenum. MoS2 is relatively unreactive. It is unaffected by dilute acids and oxygen. In appearance and feel, molybdenum disulfide is similar to graphite. It is widely used as a dry lubricant because of its low friction and robustness. Bulk MoS2 is a diamagnetic, indirect bandgap semiconductor similar to silicon, with a bandgap of 1.23 eV. Production MoS2 is naturally found as either molybdenite, a crystalline mineral, or jordisite, a rare low-temperature form of molybdenite. Molybdenite ore is processed by flotation to give relatively pure MoS2. The main contaminant is carbon. MoS2 also arises by thermal treatment of virtually all molybdenum compounds with hydrogen sulfide or elemental sulfur and can be produced by metathesis reactions from molybdenum pentachloride. Structure and physical properties Crystalline phases All forms of MoS2 have a layered structure, in which a plane of molybdenum atoms is sandwiched by planes of sulfide ions. These three strata form a monolayer of MoS2. Bulk MoS2 consists of stacked monolayers, which are held together by weak van der Waals interactions. Crystalline MoS2 exists in one of two phases, 2H-MoS2 and 3R-MoS2, where the "H" and the "R" indicate hexagonal and rhombohedral symmetry, respectively. In both of these structures, each molybdenum atom exists at the center of a trigonal prismatic coordination sphere and is covalently bonded to six sulfide ions. Each sulfur atom has pyramidal coordination and is bonded to three molybdenum atoms. Both the 2H- and 3R-phases are semiconducting. A third, metastable crystalline phase known as 1T-MoS2 was discovered by intercalating 2H-MoS2 with alkali metals. This phase has trigonal symmetry and is metallic. The 1T-phase can be stabili
https://en.wikipedia.org/wiki/Reflection%20%28mathematics%29
In mathematics, a reflection (also spelled reflexion) is a mapping from a Euclidean space to itself that is an isometry with a hyperplane as a set of fixed points; this set is called the axis (in dimension 2) or plane (in dimension 3) of reflection. The image of a figure by a reflection is its mirror image in the axis or plane of reflection. For example the mirror image of the small Latin letter p for a reflection with respect to a vertical axis (a vertical reflection) would look like q. Its image by reflection in a horizontal axis (a horizontal reflection) would look like b. A reflection is an involution: when applied twice in succession, every point returns to its original location, and every geometrical object is restored to its original state. The term reflection is sometimes used for a larger class of mappings from a Euclidean space to itself, namely the non-identity isometries that are involutions. Such isometries have a set of fixed points (the "mirror") that is an affine subspace, but is possibly smaller than a hyperplane. For instance a reflection through a point is an involutive isometry with just one fixed point; the image of the letter p under it would look like a d. This operation is also known as a central inversion , and exhibits Euclidean space as a symmetric space. In a Euclidean vector space, the reflection in the point situated at the origin is the same as vector negation. Other examples include reflections in a line in three-dimensional space. Typically, however, unqualified use of the term "reflection" means reflection in a hyperplane. Some mathematicians use "flip" as a synonym for "reflection". Construction In a plane (or, respectively, 3-dimensional) geometry, to find the reflection of a point drop a perpendicular from the point to the line (plane) used for reflection, and extend it the same distance on the other side. To find the reflection of a figure, reflect each point in the figure. To reflect point through the line using compass
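The construction just described translates directly into coordinates. The following sketch (the function name and the restriction to two dimensions are my own; the article's construction itself is dimension-agnostic) reflects a point across the line through two given points by finding the foot of the perpendicular and going the same distance beyond it.

def reflect_point(p, a, b):
    """Reflect point p across the line through points a and b (2-D tuples)."""
    px, py = p
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay                      # direction of the mirror line
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy              # foot of the perpendicular
    return (2 * fx - px, 2 * fy - py)              # same distance on the other side

# Reflecting (2, 3) across the x-axis (the line through (0, 0) and (1, 0))
# prints (2.0, -3.0); applying the function twice returns the original point,
# illustrating that a reflection is an involution.
print(reflect_point((2, 3), (0, 0), (1, 0)))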
https://en.wikipedia.org/wiki/Pitch%20control
A variable speed pitch control (or vari-speed) is a control on an audio device such as a turntable, tape recorder, or CD player that allows the operator to deviate from a standard speed (such as 33, 45 or even 78 rpm on a turntable), resulting in adjustments in pitch. The latter term "vari-speed" is more commonly used for tape decks, particularly in the UK. Analog pitch controls vary the voltage being used by the playback device; digital controls use digital signal processing to change the playback speed or pitch. A typical DJ deck allows the pitch to be increased or reduced by up to 8%, which is achieved by increasing or reducing the speed at which the platter rotates. Turntable or CD playing speed may be changed for beatmatching and other DJ techniques, while pitch shift using a pitch control has myriad uses in sound recording. Vari-speed in consumer cassette decks Superscope, Inc. of Sun Valley added vari-speed as a feature of portable cassette decks in 1975. The C-104 and C-105 models incorporated this feature. Superscope trademarked the name Vari-Speed in 1974. The trademark category was Computer & Software Products & Electrical & Scientific Products, and the goods and services covered were magnetic tape recorders and reproducers. The trademark expired in 1995. DJ pitch usage For DJs, the ability to alter the pitch on their turntables or CD players is standard. Technics SL-1200 turntables have offered a pitch range of +/− 8%, and some models allow +/− 16%. This has become the standard amongst DJs, and most software that emulates a DJ setup defaults its pitch control to +/− 8%, with the option to change it to +/− 16% or other values. By changing the pitch the DJ can alter the BPM of a record within this limited range. This is a key component for beatmatching, enabling the DJ to get two records to play at the same BPM, thereby synchronising the beat for the dancers. Re
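The arithmetic behind beatmatching is simple: the pitch fader scales playback speed, so the BPM scales by the same factor. The sketch below is illustrative only; the 120 BPM record is an assumed example, not taken from the article.

def adjusted_bpm(original_bpm, pitch_percent):
    """BPM of a record played with the pitch fader offset by pitch_percent."""
    return original_bpm * (1 + pitch_percent / 100)

print(adjusted_bpm(120, +8))    # 129.6 BPM at +8%
print(adjusted_bpm(120, -8))    # 110.4 BPM at -8%
print(adjusted_bpm(120, +16))   # 139.2 BPM on a +/- 16% deck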
https://en.wikipedia.org/wiki/Without%20loss%20of%20generality
Without loss of generality (often abbreviated to WOLOG, WLOG or w.l.o.g.; less commonly stated as without any loss of generality or with no loss of generality) is a frequently used expression in mathematics. The term is used to indicate that the assumption that follows is chosen arbitrarily, narrowing the premise to a particular case, but does not affect the validity of the proof in general. The other cases are sufficiently similar to the one presented that proving them follows by essentially the same logic. As a result, once a proof is given for the particular case, it is trivial to adapt it to prove the conclusion in all other cases. In many scenarios, the use of "without loss of generality" is made possible by the presence of symmetry. For example, if some property P(x,y) of real numbers is known to be symmetric in x and y, namely that P(x,y) is equivalent to P(y,x), then in proving that P(x,y) holds for every x and y, one may assume "without loss of generality" that x ≤ y. There is no loss of generality in this assumption, since once the case x ≤ y ⇒ P(x,y) has been proved, the other case follows by interchanging x and y : y ≤ x ⇒ P(y,x), and by symmetry of P, this implies P(x,y), thereby showing that P(x,y) holds for all cases. On the other hand, if neither such a symmetry nor another form of equivalence can be established, then the use of "without loss of generality" is incorrect and can amount to an instance of proof by example – a logical fallacy of proving a claim by proving a non-representative example. Example Consider the following theorem (which is a case of the pigeonhole principle): if three objects are each painted either red or blue, then there must be at least two objects of the same colour. A proof: Assume, without loss of generality, that the first object is red. If at least one of the other two objects is red, then we are finished; if not, then the other two objects must both be blue, and we are still finished. The above argument works because the exact same reasoning could be applied if the alternative assumption, namely, that the first object is blue, were made, or, similarly, because the words 'red' and 'blue' can be freely exchanged in the wording of the proof. As a result, the use of "without loss of generality" is valid in this case. See also Up to Mat
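For a concrete instance of the symmetric pattern P(x,y) described above, here is a short LaTeX-typeset argument (my own illustrative example, not from the article) proving that min(x, y) ≤ (x + y)/2 for all real x and y.

\begin{proof}
Without loss of generality, assume $x \le y$; the claim
$\min(x,y) \le \tfrac{x+y}{2}$ is symmetric in $x$ and $y$, so the other case
follows by swapping the names of $x$ and $y$. Then $\min(x,y) = x$, and
\[
  \min(x,y) = x = \frac{x + x}{2} \le \frac{x + y}{2},
\]
since $x \le y$.
\end{proof}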
https://en.wikipedia.org/wiki/Coherer
The coherer was a primitive form of radio signal detector used in the first radio receivers during the wireless telegraphy era at the beginning of the 20th century. Its use in radio was based on the 1890 findings of French physicist Édouard Branly and adapted by other physicists and inventors over the next ten years. The device consists of a tube or capsule containing two electrodes spaced a small distance apart with loose metal filings in the space between. When a radio frequency signal is applied to the device, the metal particles would cling together or "cohere", reducing the initial high resistance of the device, thereby allowing a much greater direct current to flow through it. In a receiver, the current would activate a bell, or a Morse paper tape recorder to make a record of the received signal. The metal filings in the coherer remained conductive after the signal (pulse) ended so that the coherer had to be "decohered" by tapping it with a clapper actuated by an electromagnet, each time a signal was received, thereby restoring the coherer to its original state. Coherers remained in widespread use until about 1907, when they were replaced by more sensitive electrolytic and crystal detectors. History The behavior of particles or metal filings in the presence of electricity or electric sparks was noticed in many experiments well before Édouard Branly's 1890 paper and even before there was proof of the theory of electromagnetism. In 1835 Swedish scientist Peter Samuel Munk noticed a change of resistance in a mixture of metal filings in the presence of spark discharge from a Leyden jar. In 1850 Pierre Guitard found that when dusty air was electrified, the particles would tend to collect in the form of strings. The idea that particles could react to electricity was used in English engineer Samuel Alfred Varley's 1866 lightning bridge, a lightning arrester attached to telegraph lines consisting of a piece of wood with two metal spikes extending into a chamber. The
https://en.wikipedia.org/wiki/Yagi%E2%80%93Uda%20antenna
A Yagi–Uda antenna, or simply Yagi antenna, is a directional antenna consisting of two or more parallel resonant antenna elements in an end-fire array; these elements are most often metal rods acting as half-wave dipoles. Yagi–Uda antennas consist of a single driven element connected to a radio transmitter and/or receiver through a transmission line, and additional "passive radiators" with no electrical connection, usually including one so-called reflector and any number of directors. It was invented in 1926 by Shintaro Uda of Tohoku Imperial University, Japan, with a lesser role played by his boss Hidetsugu Yagi. Reflector elements (usually only one is used) are slightly longer than the driven dipole and placed behind the driven element, opposite the direction of intended transmission. Directors, on the other hand, are a little shorter and placed in front of the driven element in the intended direction. These parasitic elements are typically off-tuned short-circuited dipole elements, that is, instead of a break at the feedpoint (like the driven element) a solid rod is used. They receive and reradiate the radio waves from the driven element but in a different phase determined by their exact lengths. Their effect is to modify the driven element's radiation pattern. The waves from the multiple elements superpose and interfere to enhance radiation in a single direction, increasing the antenna's gain in that direction. Also called a beam antenna and parasitic array, the Yagi is very widely used as a directional antenna on the HF, VHF and UHF bands. It has moderate to high gain of up to 20 dBi, depending on the number of elements used, and a front-to-back ratio of up to 20 dB. It radiates linearly polarized radio waves, and can be mounted for either horizontal or vertical polarization. It is relatively lightweight, inexpensive and simple to construct. The bandwidth of a Yagi antenna, the frequency range over which it maintains its gain and feedpoint impedance, is narro
https://en.wikipedia.org/wiki/Specific%20Area%20Message%20Encoding
Specific Area Message Encoding (SAME) is a protocol used for framing and classification of broadcasting emergency warning messages. It was developed by the United States National Weather Service for use on its NOAA Weather Radio (NWR) network, and was later adopted by the Federal Communications Commission for the Emergency Alert System, then subsequently by Environment Canada for use on its Weatheradio Canada service. It is also used to set off receivers in Mexico City and surrounding areas as part of the Mexican Seismic Alert System (SASMEX). History From the 1960s to the 1980s, a special feature of the NOAA Weather Radio (NWR) system was the transmission of a single attention tone prior to the broadcast of any message alerting the general public of significant weather events. This became known as the Warning Alarm Tone (WAT). Although it served NWR well, there were many drawbacks. Without staff at media facilities to manually evaluate the need to rebroadcast an NWR message using the Emergency Broadcast System (EBS), automatic rebroadcasting of all messages preceded by just the WAT was unacceptable and impractical. Even if stations and others with the need were willing to allow for this type of automatic capture, assuming the events for activation were critical, there was no way for automated equipment at the station to know when the message was complete and restore it back to normal operation. SAME had its beginnings in the early 1980s when NOAA's National Weather Service (NWS) began experimenting with system using analog tones in a dual-tone multi-frequency (DTMF) format to transmit data with radio broadcasts. In 1985, the NWS forecast offices began experimenting with placing special digital codes at the beginning and end of every message concerning life- or property-threatening weather conditions targeting a specific area. The intent of what became SAME was to ultimately transmit a code with the initial broadcast of all NWR messages. However, the roll-out mo
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Ko%E2%80%93Rado%20theorem
In mathematics, the Erdős–Ko–Rado theorem limits the number of sets in a family of sets for which every two sets have at least one element in common. Paul Erdős, Chao Ko, and Richard Rado proved the theorem in 1938, but did not publish it until 1961. It is part of the field of combinatorics, and one of the central results of extremal set theory. The theorem applies to families of sets that all have the same size r and are all subsets of some larger set of size n. One way to construct a family of sets with these parameters, each two sharing an element, is to choose a single element to belong to all the subsets, and then form all of the subsets that contain the chosen element. The Erdős–Ko–Rado theorem states that when n is large enough for the problem to be nontrivial this construction produces the largest possible intersecting families. When n = 2r there are other equally large families, but for larger values of n only the families constructed in this way can be largest. The Erdős–Ko–Rado theorem can also be described in terms of hypergraphs or independent sets in Kneser graphs. Several analogous theorems apply to other kinds of mathematical objects than sets, including linear subspaces, permutations, and strings. They again describe the largest possible intersecting families as being formed by choosing an element and forming the family of all objects that contain the chosen element. Statement Suppose that A is a family of distinct r-element subsets of an n-element set and that each two subsets share at least one element. Then the theorem states that, when n ≥ 2r, the number of sets in A is at most the binomial coefficient (n − 1 choose r − 1). The requirement that n ≥ 2r is necessary for the problem to be nontrivial: when n < 2r, all r-element sets intersect, and the largest intersecting family consists of all r-element sets, with size (n choose r). The same result can be formulated as part of the theory of hypergraphs. A family of sets may also be called a hypergraph, and when all the sets (which are called "hyperedges" in this context) are the same size r it is called an r-uniform hypergraph. The theorem thus giv
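Stated with explicit symbols (the standard formulation; the symbols r and n are supplied here because the excerpt's inline formulas were lost in extraction), the theorem reads:

% Erdős–Ko–Rado theorem, standard statement
Let $n \ge 2r$ and let $\mathcal{A}$ be a family of $r$-element subsets of an
$n$-element set such that every two members of $\mathcal{A}$ intersect. Then
\[
  |\mathcal{A}| \;\le\; \binom{n-1}{r-1},
\]
and for $n > 2r$ equality holds only for the families consisting of all
$r$-element subsets that contain one fixed element.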
https://en.wikipedia.org/wiki/Regnal%20number
Regnal numbers are ordinal numbers used to distinguish among persons with the same name who held the same office. Most importantly, they are used to distinguish monarchs. An ordinal is the number placed after a monarch's regnal name to differentiate between a number of kings, queens or princes reigning the same territory with the same regnal name. It is common to start counting either since the beginning of the monarchy, or since the beginning of a particular line of state succession. For example, Boris III of Bulgaria and his son Simeon II were given their regnal numbers because the medieval rulers of the First and Second Bulgarian Empire were counted as well even if the present Bulgarian state dated only back to 1878 and were only distantly related to the monarchs of previous Bulgarian states. On the other hand, the kings of England and kings of Great Britain and the United Kingdom are counted starting with the Norman Conquest. That is why the son of Henry III of England is counted as Edward I, even though there were three English monarchs named Edward before the Conquest (they were distinguished by epithets instead). Sometimes legendary or fictional persons are included. For example, the Swedish kings Eric XIV (reigned 1560–68) and Charles IX (1604–11) took ordinals based on a fanciful 1544 history by Johannes Magnus, which invented six kings of each name before those accepted by later historians. A list of Swedish monarchs, represented on the map of the Estates of the Swedish Crown, produced by French engraver (1673–1721) and published in Paris in 1719, starts with Canute I and shows Eric XIV and Charles IX as Eric IV and Charles II respectively; the only Charles holding his traditional ordinal in the list is Charles XII. Also, in the case of Emperor Menelik II of Ethiopia, he chose his regnal number with reference to a mythical ancestor and first sovereign of his country (a supposed son of biblical King Solomon) to underline his legitimacy into the so-calle
https://en.wikipedia.org/wiki/X.509
In cryptography, X.509 is an International Telecommunication Union (ITU) standard defining the format of public key certificates. X.509 certificates are used in many Internet protocols, including TLS/SSL, which is the basis for HTTPS, the secure protocol for browsing the web. They are also used in offline applications, like electronic signatures. An X.509 certificate binds an identity to a public key using a digital signature. A certificate contains an identity (a hostname, or an organization, or an individual) and a public key (RSA, DSA, ECDSA, ed25519, etc.), and is either signed by a certificate authority or is self-signed. When a certificate is signed by a trusted certificate authority, or validated by other means, someone holding that certificate can use the public key it contains to establish secure communications with another party, or validate documents digitally signed by the corresponding private key. X.509 also defines certificate revocation lists, which are a means to distribute information about certificates that have been deemed invalid by a signing authority, as well as a certification path validation algorithm, which allows for certificates to be signed by intermediate CA certificates, which are, in turn, signed by other certificates, eventually reaching a trust anchor. X.509 is defined by the International Telecommunication Union's Telecommunication Standardization Sector (ITU-T), in ITU-T Study Group 17 (SG17), and is based on ASN.1, another ITU-T standard. History and usage X.509 was initially issued on July 3, 1988, and was begun in association with the X.500 standard. Its first tasks were providing users with secure access to information resources and avoiding cryptographic man-in-the-middle attacks. It assumes a strict hierarchical system of certificate authorities (CAs) for issuing the certificates. This contrasts with web of trust models, like PGP, where anyone (not just special CAs) may sign and thus attest to the validity of others' key certifica
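As a small illustration of what such a certificate binds together, the sketch below uses the third-party pyca/cryptography package to print the identity and key fields of a PEM-encoded certificate. The file name server.pem is a placeholder of mine, and this only inspects a single certificate; it does not perform certification path validation.

# Sketch: inspect the identity/public-key binding in an X.509 certificate.
from cryptography import x509

with open("server.pem", "rb") as f:          # placeholder path
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:", cert.subject.rfc4514_string())   # the certified identity
print("Issuer: ", cert.issuer.rfc4514_string())    # the signing CA (or self, if self-signed)
print("Valid:  ", cert.not_valid_before, "to", cert.not_valid_after)
print("Serial: ", hex(cert.serial_number))
print("Key:    ", cert.public_key().__class__.__name__)  # e.g. RSAPublicKey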
https://en.wikipedia.org/wiki/Web%20of%20trust
In cryptography, a web of trust is a concept used in PGP, GnuPG, and other OpenPGP-compatible systems to establish the authenticity of the binding between a public key and its owner. Its decentralized trust model is an alternative to the centralized trust model of a public key infrastructure (PKI), which relies exclusively on a certificate authority (or a hierarchy of such). As with computer networks, there are many independent webs of trust, and any user (through their public key certificate) can be a part of, and a link between, multiple webs. The web of trust concept was first put forth by PGP creator Phil Zimmermann in 1992 in the manual for PGP version 2.0, where he described how a decentralized, fault-tolerant web of confidence for public keys would gradually emerge as users sign one another's keys. Note the use of the word emergence in this context: the web of trust makes use of the concept of emergence. Operation of a web of trust All OpenPGP-compliant implementations include a certificate vetting scheme to assist with this; its operation has been termed a web of trust. OpenPGP certificates (which include one or more public keys along with owner information) can be digitally signed by other users who, by that act, endorse the association of that public key with the person or entity listed in the certificate. This is commonly done at key signing parties. OpenPGP-compliant implementations also include a vote counting scheme which can be used to determine which public key – owner association a user will trust while using PGP. For instance, if three partially trusted endorsers have vouched for a certificate (and so its included public key – owner binding), or if one fully trusted endorser has done so, the association between owner and public key in that certificate will be trusted to be correct. The parameters are user-adjustable (e.g., no partials at all, or perhaps six partials) and can be completely bypassed if desired. The scheme is flexible, unlike most public key infrastructure designs, and leaves trust decisions in the hands of individual users. It is not perfect and requires both caution and
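The endorsement-counting rule described above can be sketched as follows. The thresholds (one full endorsement or three partial ones) follow the example in the text and are user-adjustable parameters here as well, but the function and data layout are purely illustrative, not the actual OpenPGP trust algorithm.

# Illustrative sketch of the vote-counting idea only; real OpenPGP trust
# models are more involved. Inputs are endorsement counts for one key binding.

def binding_is_trusted(full_endorsements, partial_endorsements,
                       fulls_needed=1, partials_needed=3):
    """Trust a key-owner binding if enough trusted users have signed it."""
    return (full_endorsements >= fulls_needed
            or partial_endorsements >= partials_needed)

print(binding_is_trusted(full_endorsements=1, partial_endorsements=0))  # True
print(binding_is_trusted(full_endorsements=0, partial_endorsements=2))  # False
print(binding_is_trusted(full_endorsements=0, partial_endorsements=3))  # True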
https://en.wikipedia.org/wiki/Root%20certificate
In cryptography and computer security, a root certificate is a public key certificate that identifies a root certificate authority (CA). Root certificates are self-signed: either the certificate's Authority Key Identifier matches its Subject Key Identifier, or, in cases where there is no Authority Key Identifier, the Issuer string matches the Subject string. It is also possible for a certificate to have multiple trust paths, say if the certificate was issued by a root that was cross-signed. Root certificates form the basis of an X.509-based public key infrastructure (PKI). For instance, the PKIs supporting HTTPS for secure web browsing and electronic signature schemes depend on a set of root certificates. A certificate authority can issue multiple certificates in the form of a tree structure. A root certificate is the top-most certificate of the tree, the private key of which is used to "sign" other certificates. All certificates signed by the root certificate, with the "CA" field set to true, inherit the trustworthiness of the root certificate—a signature by a root certificate is somewhat analogous to "notarizing" identity in the physical world. Such a certificate is called an intermediate certificate or subordinate CA certificate. Certificates further down the tree also depend on the trustworthiness of the intermediates. The root certificate is usually made trustworthy by some mechanism other than a certificate, such as by secure physical distribution. For example, some of the best-known root certificates are distributed in operating systems by their manufacturers. Microsoft distributes root certificates belonging to members of the Microsoft Root Certificate Program to Windows desktops and Windows Phone 8. Apple distributes root certificates belonging to members of its own root program. Incidents of root certificate misuse DigiNotar hack of 2011 In 2011, the Dutch certificate authority DigiNotar suffered a security breach. This led to the issuing of various fraudulent certificates
https://en.wikipedia.org/wiki/Certificate%20revocation%20list
In cryptography, a certificate revocation list (or CRL) is "a list of digital certificates that have been revoked by the issuing certificate authority (CA) before their scheduled expiration date and should no longer be trusted". CRLs are no longer required by the CA/Browser forum, as alternate certificate revocation technologies (such as OCSP) are increasingly used instead. Nevertheless, CRLs are still widely used by the CAs. Revocation states There are two different states of revocation defined in RFC 5280: Revoked A certificate is irreversibly revoked if, for example, it is discovered that the certificate authority (CA) had improperly issued a certificate, or if a private-key is thought to have been compromised. Certificates may also be revoked for failure of the identified entity to adhere to policy requirements, such as publication of false documents, misrepresentation of software behaviour, or violation of any other policy specified by the CA operator or its customer. The most common reason for revocation is the user no longer being in sole possession of the private key (e.g., the token containing the private key has been lost or stolen). Hold This reversible status can be used to note the temporary invalidity of the certificate (e.g., if the user is unsure if the private key has been lost). If, in this example, the private key was found and nobody had access to it, the status could be reinstated, and the certificate is valid again, thus removing the certificate from future CRLs. Reasons for revocation Reasons to revoke, hold, or unlist a certificate according to RFC 5280 are: unspecified (0), keyCompromise (1), cACompromise (2), affiliationChanged (3), superseded (4), cessationOfOperation (5), certificateHold (6), removeFromCRL (8), privilegeWithdrawn (9), and aACompromise (10). Note that value 7 is not used. Publishing revocation lists A CRL is generated and published periodically, often at a defined interval. A CRL can also be published immediately afte
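The reason codes listed above map naturally onto an enumeration; the sketch below is only an illustration (the names follow the RFC 5280 identifiers quoted in the text, the class itself is mine) and makes the gap at value 7 explicit.

# Illustrative enumeration of the RFC 5280 CRL reason codes listed above.
from enum import IntEnum

class CRLReason(IntEnum):
    UNSPECIFIED = 0
    KEY_COMPROMISE = 1
    CA_COMPROMISE = 2
    AFFILIATION_CHANGED = 3
    SUPERSEDED = 4
    CESSATION_OF_OPERATION = 5
    CERTIFICATE_HOLD = 6
    # value 7 is not used
    REMOVE_FROM_CRL = 8
    PRIVILEGE_WITHDRAWN = 9
    AA_COMPROMISE = 10

print(CRLReason.KEY_COMPROMISE, int(CRLReason.KEY_COMPROMISE))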
https://en.wikipedia.org/wiki/Certificate%20authority
In cryptography, a certificate authority or certification authority (CA) is an entity that stores, signs, and issues digital certificates. A digital certificate certifies the ownership of a public key by the named subject of the certificate. This allows others (relying parties) to rely upon signatures or on assertions made about the private key that corresponds to the certified public key. A CA acts as a trusted third party—trusted both by the subject (owner) of the certificate and by the party relying upon the certificate. The format of these certificates is specified by the X.509 or EMV standard. One particularly common use for certificate authorities is to sign certificates used in HTTPS, the secure browsing protocol for the World Wide Web. Another common use is in issuing identity cards by national governments for use in electronically signing documents. Overview Trusted certificates can be used to create secure connections to a server via the Internet. A certificate is essential in order to circumvent a malicious party which happens to be on the route to a target server which acts as if it were the target. Such a scenario is commonly referred to as a man-in-the-middle attack. The client uses the CA certificate to authenticate the CA signature on the server certificate, as part of the authorizations before launching a secure connection. Usually, client software—for example, browsers—include a set of trusted CA certificates. This makes sense, as many users need to trust their client software. A malicious or compromised client can skip any security check and still fool its users into believing otherwise. The clients of a CA are server supervisors who call for a certificate that their servers will bestow to users. Commercial CAs charge money to issue certificates, and their customers anticipate the CA's certificate to be contained within the majority of web browsers, so that safe connections to the certified servers work efficiently out-of-the-box. The q
https://en.wikipedia.org/wiki/Balun
A balun (from "balanced to unbalanced", originally, but now dated from "balancing unit") is an electrical device that allows balanced and unbalanced lines to be interfaced without disturbing the impedance arrangement of either line. A balun can take many forms and may include devices that also transform impedances but need not do so. Sometimes, in the case of transformer baluns, they use magnetic coupling but need not do so. Common-mode chokes are also used as baluns and work by eliminating, rather than rejecting, common mode signals. Types of balun Classical transformer type In classical transformers, there are two electrically separate windings of wire coils around the transformer's core. The advantage of transformer-type over other types of balun is that the electrically separate windings for input and output allow these baluns to connect circuits whose ground-level voltages are subject to ground loops or are otherwise electrically incompatible; for that reason they are often called isolation transformers. This type is sometimes called a voltage balun. The primary winding receives the input signal, and the secondary winding puts out the converted signal. The core that they are wound on may either be empty (air core) or, equivalently, a magnetically neutral material like a porcelain support, or it may be a material which is good magnetic conductor like ferrite in modern high-frequency (HF) baluns, or soft iron as in the early days of telegraphy. The electrical signal in the primary coil is converted into a magnetic field in the transformer's core. When the electrical current through the primary reverses, it causes the established magnetic field to collapse. The collapsing magnetic field then induces an electric field in the secondary winding. The ratio of loops in each winding and the efficiency of the coils' magnetic coupling determines the ratio of electrical potential (voltage) to electrical current and the total power of the output. For idealized trans
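For the idealized transformer the excerpt is leading up to (the excerpt is cut off mid-sentence), the standard textbook relations between turns ratio, voltage, current and impedance are:

\[
  \frac{V_s}{V_p} = \frac{N_s}{N_p}, \qquad
  \frac{I_s}{I_p} = \frac{N_p}{N_s}, \qquad
  Z_{\text{in}} = \left(\frac{N_p}{N_s}\right)^{2} Z_{\text{load}},
\]
% N_p and N_s are the turns on the primary and secondary windings; for example,
% a 4:1 impedance-transforming balun therefore needs a 2:1 turns ratio.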
https://en.wikipedia.org/wiki/Admittance
In electrical engineering, admittance is a measure of how easily a circuit or device will allow a current to flow. It is defined as the reciprocal of impedance, analogous to how conductance and resistance are defined. The SI unit of admittance is the siemens (symbol S); the older, synonymous unit is mho, and its symbol is ℧ (an upside-down uppercase omega Ω). Oliver Heaviside coined the term admittance in December 1887. Heaviside used Y to represent the magnitude of admittance, but it quickly became the conventional symbol for admittance itself through the publications of Charles Proteus Steinmetz. Heaviside probably chose Y simply because it is next to Z in the alphabet, Z being the conventional symbol for impedance. Admittance is defined as Y = 1/Z, where Y is the admittance, measured in siemens, and Z is the impedance, measured in ohms. Resistance is a measure of the opposition of a circuit to the flow of a steady current, while impedance takes into account not only the resistance but also dynamic effects (known as reactance). Likewise, admittance is not only a measure of the ease with which a steady current can flow, but also the dynamic effects of the material's susceptance to polarization: Y = G + jB, where Y is the admittance, measured in siemens, G is the conductance, measured in siemens, and B is the susceptance, measured in siemens. The dynamic effects of the material's susceptance relate to the universal dielectric response, the power law scaling of a system's admittance with frequency under alternating current conditions. Conversion from impedance to admittance The impedance, Z, is composed of real and imaginary parts, Z = R + jX, where R is the resistance, measured in ohms, and X is the reactance, measured in ohms. Admittance, just like impedance, is a complex number, made up of a real part (the conductance, G) and an imaginary part (the susceptance, B), thus Y = G + jB, where G (conductance) and B (susceptance) are given by G = R/(R² + X²) and B = −X/(R² + X²). The magnitude and phase of the admittance are given by |Y| = √(G² + B²) = 1/√(R² + X²) and ∠Y = arctan(B/G), where G is the conductance
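The impedance-to-admittance conversion above is easy to check numerically. The sketch below (the resistance and reactance values are arbitrary examples of mine) computes G and B both from the formulas and from Python's built-in complex arithmetic.

# Verify the impedance-to-admittance conversion for an example value.
import cmath

R, X = 3.0, 4.0                 # resistance and reactance in ohms (example values)
Z = complex(R, X)               # impedance Z = R + jX

Y = 1 / Z                       # admittance Y = 1/Z, in siemens
G = R / (R**2 + X**2)           # conductance from the formula above
B = -X / (R**2 + X**2)          # susceptance from the formula above

print(Y)                        # (0.12-0.16j): G = 0.12 S, B = -0.16 S
print(G, B)                     # 0.12 -0.16, matching the complex result
print(abs(Y), cmath.phase(Y))   # magnitude and phase of the admittance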
https://en.wikipedia.org/wiki/Arrow%20of%20time
The arrow of time, also called time's arrow, is the concept positing the "one-way direction" or "asymmetry" of time. It was developed in 1927 by the British astrophysicist Arthur Eddington, and is an unsolved general physics question. This direction, according to Eddington, could be determined by studying the organization of atoms, molecules, and bodies, and might be drawn upon a four-dimensional relativistic map of the world ("a solid block of paper"). The Arrow of Time paradox was originally recognized in the 1800s for gases (and other substances) as a discrepancy between microscopic and macroscopic description of thermodynamics / statistical Physics: at the microscopic level physical processes are believed to be either entirely or mostly time-symmetric: if the direction of time were to reverse, the theoretical statements that describe them would remain true. Yet at the macroscopic level it often appears that this is not the case: there is an obvious direction (or flow) of time. Overview The symmetry of time (T-symmetry) can be understood simply as the following: if time were perfectly symmetrical, a video of real events would seem realistic whether played forwards or backwards. Gravity, for example, is a time-reversible force. A ball that is tossed up, slows to a stop, and falls is a case where recordings would look equally realistic forwards and backwards. The system is T-symmetrical. However, the process of the ball bouncing and eventually coming to a stop is not time-reversible. While going forward, kinetic energy is dissipated and entropy is increased. Entropy may be one of the few processes that is not time-reversible. According to the statistical notion of increasing entropy, the "arrow" of time is identified with a decrease of free energy. In his book The Big Picture, physicist Sean M. Carroll compares the asymmetry of time to the asymmetry of space: While physical laws are in general isotropic, near Earth there is an obvious distinction between "up" a
https://en.wikipedia.org/wiki/Construction%20of%20the%20real%20numbers
In mathematics, there are several equivalent ways of defining the real numbers. One of them is that they form a complete ordered field that does not contain any smaller complete ordered field. Such a definition does not prove that such a complete ordered field exists, and the existence proof consists of constructing a mathematical structure that satisfies the definition. The article presents several such constructions. They are equivalent in the sense that, given the result of any two such constructions, there is a unique isomorphism of ordered field between them. This results from the above definition and is independent of particular constructions. These isomorphisms allow identifying the results of the constructions, and, in practice, to forget which construction has been chosen. Axiomatic definitions An axiomatic definition of the real numbers consists of defining them as the elements of a complete ordered field. This means the following. The real numbers form a set, commonly denoted , containing two distinguished elements denoted 0 and 1, and on which are defined two binary operations and one binary relation; the operations are called addition and multiplication of real numbers and denoted respectively with and ; the binary relation is inequality, denoted Moreover, the following properties called axioms must be satisfied. The existence of such a structure is a theorem, which is proved by constructing such a structure. A consequence of the axioms is that this structure is unique up to an isomorphism, and thus, the real numbers can be used and manipulated, without referring to the method of construction. Axioms is a field under addition and multiplication. In other words, For all x, y, and z in , x + (y + z) = (x + y) + z and x × (y × z) = (x × y) × z. (associativity of addition and multiplication) For all x and y in , x + y = y + x and x × y = y × x. (commutativity of addition and multiplication) For all x, y, and z in , x × (y + z) = (x × y) + (x
https://en.wikipedia.org/wiki/Contract%20theory
From a legal point of view, a contract is an institutional arrangement for the way in which resources flow, which defines the various relationships between the parties to a transaction or limits the rights and obligations of the parties. From an economic perspective, contract theory studies how economic actors can and do construct contractual arrangements, generally in the presence of information asymmetry. Because of its connections with both agency and incentives, contract theory is often categorized within a field known as law and economics. One prominent application of it is the design of optimal schemes of managerial compensation. In the field of economics, the first formal treatment of this topic was given by Kenneth Arrow in the 1960s. In 2016, Oliver Hart and Bengt R. Holmström both received the Nobel Memorial Prize in Economic Sciences for their work on contract theory, covering many topics from CEO pay to privatizations. Holmström focused more on the connection between incentives and risk, while Hart on the unpredictability of the future that creates holes in contracts. A standard practice in the microeconomics of contract theory is to represent the behaviour of a decision maker under certain numerical utility structures, and then apply an optimization algorithm to identify optimal decisions. Such a procedure has been used in the contract theory framework to several typical situations, labeled moral hazard, adverse selection and signalling. The spirit of these models lies in finding theoretical ways to motivate agents to take appropriate actions, even under an insurance contract. The main results achieved through this family of models involve: mathematical properties of the utility structure of the principal and the agent, relaxation of assumptions, and variations of the time structure of the contract relationship, among others. It is customary to model people as maximizers of some von Neumann–Morgenstern utility functions, as stated by expected utility
https://en.wikipedia.org/wiki/Universe%20%28mathematics%29
In mathematics, and particularly in set theory, category theory, type theory, and the foundations of mathematics, a universe is a collection that contains all the entities one wishes to consider in a given situation. In set theory, universes are often classes that contain (as elements) all sets for which one hopes to prove a particular theorem. These classes can serve as inner models for various axiomatic systems such as ZFC or Morse–Kelley set theory. Universes are of critical importance to formalizing concepts in category theory inside set-theoretical foundations. For instance, the canonical motivating example of a category is Set, the category of all sets, which cannot be formalized in a set theory without some notion of a universe. In type theory, a universe is a type whose elements are types. In a specific context Perhaps the simplest version is that any set can be a universe, so long as the object of study is confined to that particular set. If the object of study is formed by the real numbers, then the real line R, which is the real number set, could be the universe under consideration. Implicitly, this is the universe that Georg Cantor was using when he first developed modern naive set theory and cardinality in the 1870s and 1880s in applications to real analysis. The only sets that Cantor was originally interested in were subsets of R. This concept of a universe is reflected in the use of Venn diagrams. In a Venn diagram, the action traditionally takes place inside a large rectangle that represents the universe U. One generally says that sets are represented by circles; but these sets can only be subsets of U. The complement of a set A is then given by that portion of the rectangle outside of As circle. Strictly speaking, this is the relative complement U \ A of A relative to U; but in a context where U is the universe, it can be regarded as the absolute complement AC of A. Similarly, there is a notion of the nullary intersection, that is the intersect
https://en.wikipedia.org/wiki/Windows%20Update
Windows Update is a Microsoft service for the Windows 9x and Windows NT families of the Microsoft Windows operating system, which automates downloading and installing Microsoft Windows software updates over the Internet. The service delivers software updates for Windows, as well as the various Microsoft antivirus products, including Windows Defender and Microsoft Security Essentials. Since its inception, Microsoft has introduced two extensions of the service: Microsoft Update and Windows Update for Business. The former expands the core service to include other Microsoft products, such as Microsoft Office and Microsoft Expression Studio. The latter is available to business editions of Windows 10 and permits postponing updates or receiving updates only after they have undergone rigorous testing. As the service has evolved over the years, so has its client software. For a decade, the primary client component of the service was the Windows Update web app that could only be run on Internet Explorer. Starting with Windows Vista, the primary client component became Windows Update Agent, an integral component of the operating system. The service provides several kinds of updates. Security updates or critical updates mitigate vulnerabilities against security exploits against Microsoft Windows. Cumulative updates are updates that bundle multiple updates, both new and previously released updates. Cumulative updates were introduced with Windows 10 and have been backported to Windows 7 and Windows 8.1. Microsoft routinely releases updates on the second Tuesday of each month (known as the Patch Tuesday), but can provide them whenever a new update is urgently required to prevent a newly discovered or prevalent exploit. System administrators can configure Windows Update to install critical updates for Microsoft Windows automatically, so long as the computer has an Internet connection. In Windows 10 and Windows 11, the use of Windows Update is mandatory, however, the software ag
https://en.wikipedia.org/wiki/Double%20counting%20%28proof%20technique%29
In combinatorics, double counting, also called counting in two ways, is a combinatorial proof technique for showing that two expressions are equal by demonstrating that they are two ways of counting the size of one set. In this technique, which has been called "one of the most important tools in combinatorics", one describes a finite set from two perspectives leading to two distinct expressions for the size of the set. Since both expressions equal the size of the same set, they equal each other. Examples Multiplication (of natural numbers) commutes This is a simple example of double counting, often used when teaching multiplication to young children. In this context, multiplication of natural numbers is introduced as repeated addition, and is then shown to be commutative by counting, in two different ways, a number of items arranged in a rectangular grid. Suppose the grid has n rows and m columns. We first count the items by summing n rows of m items each, then a second time by summing m columns of n items each, thus showing that, for these particular values of n and m, n × m = m × n. Forming committees One example of the double counting method counts the number of ways in which a committee can be formed from n people, allowing any number of the people (even zero of them) to be part of the committee. That is, one counts the number of subsets that an n-element set may have. One method for forming a committee is to ask each person to choose whether or not to join it. Each person has two choices – yes or no – and these choices are independent of those of the other people. Therefore there are 2^n possibilities. Alternatively, one may observe that the size of the committee must be some number k between 0 and n. For each possible size k, the number of ways in which a committee of k people can be formed from n people is the binomial coefficient (n choose k). Therefore the total number of possible committees is the sum of binomial coefficients (n choose k) over k = 0, 1, ..., n. Equating the two expressions gives the identity (n choose 0) + (n choose 1) + ... + (n choose n) = 2^n, a special case of the binomial theorem.
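Written out in standard notation (the excerpt's own formula was lost in extraction), the identity obtained by equating the two counts is:

\[
  \sum_{k=0}^{n} \binom{n}{k} \;=\; 2^{n}.
\]
% The left side counts committees by their size k; the right side counts them by
% the independent yes/no choice of each of the n people. It is the x = y = 1 case
% of the binomial theorem (x + y)^n = \sum_k \binom{n}{k} x^k y^{n-k}.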
https://en.wikipedia.org/wiki/Necklace%20problem
The necklace problem is a problem in recreational mathematics concerning the reconstruction of necklaces (cyclic arrangements of binary values) from partial information. Formulation The necklace problem involves the reconstruction of a necklace of n beads, each of which is either black or white, from partial information. The information specifies how many copies the necklace contains of each possible arrangement of k black beads. For instance, for k = 2, the specified information gives the number of pairs of black beads that are separated by i positions, for each possible distance i. This can be made formal by defining a k-configuration to be a necklace of k black beads and n − k white beads, and counting the number of ways of rotating a k-configuration so that each of its black beads coincides with one of the black beads of the given necklace. The necklace problem asks: if n is given, and the numbers of copies of each k-configuration are known up to some threshold k ≤ K, how large does the threshold K need to be before this information completely determines the necklace that it describes? Equivalently, if the information about k-configurations is provided in stages, where the kth stage provides the numbers of copies of each k-configuration, how many stages are needed (in the worst case) in order to reconstruct the precise pattern of black and white beads in the original necklace? Upper bounds Alon, Caro, Krasikov and Roditty showed that 1 + log2(n) is sufficient, using a cleverly enhanced inclusion–exclusion principle. Radcliffe and Scott showed that if n is prime, 3 is sufficient, and for any n, 9 times the number of prime factors of n is sufficient. Pebody showed that for any n, 6 is sufficient and, in a follow-up paper, that for odd n, 4 is sufficient. He conjectured that 4 is again sufficient for even n greater than 10, but this remains unproven. See also Necklace (combinatorics) Bracelet (combinatorics) Moreau's necklace-counting function Necklace splitting problem References Combinato
https://en.wikipedia.org/wiki/Word%20problem%20%28mathematics%20education%29
In science education, a word problem is a mathematical exercise (such as in a textbook, worksheet, or exam) where significant background information on the problem is presented in ordinary language rather than in mathematical notation. As most word problems involve a narrative of some sort, they are sometimes referred to as story problems and may vary in the amount of technical language used. Example A typical word problem: Tess paints two boards of a fence every four minutes, but Allie can paint three boards every two minutes. If there are 240 boards total, how many hours will it take them to paint the fence, working together? Solution process Word problems such as the above can be examined through five stages: 1. Problem Comprehension 2. Situational Solution Visualization 3. Mathematical Solution Planning 4. Solving for Solution 5. Situational Solution Visualization The linguistic properties of a word problem need to be addressed first. To begin the solution process, one must first understand what the problem is asking and what type of solution the answer will be. In the problem above, the words "minutes", "total", "hours", and "together" need to be examined. The next step is to visualize what the solution to this problem might mean. For our stated problem, the solution might be visualized by examining if the total number of hours will be greater or smaller than if it were stated in minutes. Also, it must be determined whether or not the two girls will finish at a faster or slower rate if they are working together. After this, one must plan a solution method using mathematical terms. One scheme to analyze the mathematical properties is to classify the numerical quantities in the problem into known quantities (values given in the text), wanted quantities (values to be found), and auxiliary quantities (values found as intermediate stages of the problem). This is found in the "Variables" and "Equations" sections above. Next, the mathematical processes m
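A worked solution to the fence problem above, following the stages just described: the arithmetic is implied by the problem statement itself, and the small script below is only an illustration of carrying it out.

# Solve the fence word problem by converting both rates to boards per minute.
tess_rate = 2 / 4          # Tess: 2 boards every 4 minutes = 0.5 boards/minute
allie_rate = 3 / 2         # Allie: 3 boards every 2 minutes = 1.5 boards/minute
combined_rate = tess_rate + allie_rate   # 2.0 boards/minute working together

boards = 240
minutes = boards / combined_rate         # 120 minutes
hours = minutes / 60                     # 2.0 hours

print(hours)   # 2.0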
https://en.wikipedia.org/wiki/Transcendental%20function
In mathematics, a transcendental function is an analytic function that does not satisfy a polynomial equation, in contrast to an algebraic function. In other words, a transcendental function "transcends" algebra in that it cannot be expressed algebraically using a finite amount of terms. Examples of transcendental functions include the exponential function, the logarithm, and the trigonometric functions. Definition Formally, an analytic function of one real or complex variable is transcendental if it is algebraically independent of that variable. This can be extended to functions of several variables. History The transcendental functions sine and cosine were tabulated from physical measurements in antiquity, as evidenced in Greece (Hipparchus) and India (jya and koti-jya). In describing Ptolemy's table of chords, an equivalent to a table of sines, Olaf Pedersen wrote: A revolutionary understanding of these circular functions occurred in the 17th century and was explicated by Leonhard Euler in 1748 in his Introduction to the Analysis of the Infinite. These ancient transcendental functions became known as continuous functions through quadrature of the rectangular hyperbola by Grégoire de Saint-Vincent in 1647, two millennia after Archimedes had produced The Quadrature of the Parabola. The area under the hyperbola was shown to have the scaling property of constant area for a constant ratio of bounds. The hyperbolic logarithm function so described was of limited service until 1748 when Leonhard Euler related it to functions where a constant is raised to a variable exponent, such as the exponential function where the constant base is e. By introducing these transcendental functions and noting the bijection property that implies an inverse function, some facility was provided for algebraic manipulations of the natural logarithm even if it is not an algebraic function. The exponential function is written Euler identified it with the infinite series where deno
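The series Euler used for the exponential function (the excerpt is cut off just as it is introduced) is the standard one:

\[
  e^{x} \;=\; \sum_{n=0}^{\infty} \frac{x^{n}}{n!}
        \;=\; 1 + x + \frac{x^{2}}{2!} + \frac{x^{3}}{3!} + \cdots ,
\]
% e^x is transcendental in the sense defined above: no nonzero polynomial
% P(x, y) satisfies P(x, e^x) = 0 identically.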
https://en.wikipedia.org/wiki/Cascading%20failure
A cascading failure is a failure in a system of interconnected parts in which the failure of one or few parts leads to the failure of other parts, growing progressively as a result of positive feedback. This can occur when a single part fails, increasing the probability that other portions of the system fail. Such a failure may happen in many types of systems, including power transmission, computer networking, finance, transportation systems, organisms, the human body, and ecosystems. Cascading failures may occur when one part of the system fails. When this happens, other parts must then compensate for the failed component. This in turn overloads these nodes, causing them to fail as well, prompting additional nodes to fail one after another. In power transmission Cascading failure is common in power grids when one of the elements fails (completely or partially) and shifts its load to nearby elements in the system. Those nearby elements are then pushed beyond their capacity so they become overloaded and shift their load onto other elements. Cascading failure is a common effect seen in high voltage systems, where a single point of failure (SPF) on a fully loaded or slightly overloaded system results in a sudden spike across all nodes of the system. This surge current can induce the already overloaded nodes into failure, setting off more overloads and thereby taking down the entire system in a very short time. This failure process cascades through the elements of the system like a ripple on a pond and continues until substantially all of the elements in the system are compromised and/or the system becomes functionally disconnected from the source of its load. For example, under certain conditions a large power grid can collapse after the failure of a single transformer. Monitoring the operation of a system, in real-time, and judicious disconnection of parts can help stop a cascade. Another common technique is to calculate a safety margin for the system by comp
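A minimal toy model of the load-shifting cascade described above (an illustration added here, not a model from the source; the loads and capacities are invented):

```python
# Each element carries a load and has a capacity. When elements fail, their
# load is shared equally by the survivors; any survivor pushed past its
# capacity fails in the next round, and the process repeats.

def cascade(loads, capacities, first_failure):
    failed = {first_failure}
    while True:
        shed = sum(loads[i] for i in failed)
        survivors = [i for i in range(len(loads)) if i not in failed]
        if not survivors:
            return failed
        extra = shed / len(survivors)
        newly_failed = {i for i in survivors if loads[i] + extra > capacities[i]}
        if not newly_failed:
            return failed
        failed |= newly_failed

loads = [0.8, 0.9, 0.7, 0.85, 0.9]     # per-unit loading (illustrative)
capacities = [1.0] * 5                  # identical ratings (illustrative)
print(sorted(cascade(loads, capacities, first_failure=1)))   # here every element fails
```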
https://en.wikipedia.org/wiki/Aisle
An aisle is, in general, a space for walking with rows of non-walking spaces on both sides. Aisles with seating on both sides can be seen in airplanes, certain types of buildings, such as churches, cathedrals, synagogues, meeting halls, parliaments and legislatures, courtrooms, theatres, and in certain types of passenger vehicles. Their floors may be flat or, as in theatres, stepped upwards from a stage. Aisles can also be seen in shops, warehouses, and factories, where rather than seats, they have shelving to either side. In warehouses and factories, aisles may be defined by storage pallets, and in factories, aisles may separate work areas. In health clubs, exercise equipment is normally arranged in aisles. Aisles are distinguished from corridors, hallways, walkways, footpaths, pavements (American English sidewalks), trails, paths and (enclosed) "open areas" by lying between other open spaces or areas of seating, but enclosed within a structure. Typical physical characteristics Aisles have certain general physical characteristics: They are virtually always straight, not curved. They are usually fairly long. An open space that had three rows of chairs to the right of it and three to the left generally would not be considered an "aisle". Width of various types of aisles Theatres, meeting halls, shops, etc., usually have aisles wide enough for 2–3 strangers to walk past each other without feeling uncomfortably close. In such facilities, anything that could comfortably accommodate more than four people side-by-side would generally be considered an "open area", rather than an "aisle". Factory work area aisles are usually wide enough for workers to comfortably sit or stand at their work area, while allowing safe and efficient movement of persons, equipment and/or materials. Passage aisles usually are quite narrow—wide enough for a large person to carry a suitcase in each hand but not wide enough for two people to pass side-by-side without touching. Usually, even wit
https://en.wikipedia.org/wiki/Transistor%20radio
A transistor radio is a small portable radio receiver that uses transistor-based circuitry. Following the invention of the transistor in 1947—which revolutionized the field of consumer electronics by introducing small but powerful, convenient hand-held devices—the Regency TR-1 was released in 1954 becoming the first commercial transistor radio. The mass-market success of the smaller and cheaper Sony TR-63, released in 1957, led to the transistor radio becoming the most popular electronic communication device of the 1960s and 1970s. Transistor radios are still commonly used as car radios. Billions of transistor radios are estimated to have been sold worldwide between the 1950s and 2012. The pocket size of transistor radios sparked a change in popular music listening habits, allowing people to listen to music anywhere they went. Beginning around 1980, however, cheap AM transistor radios were superseded initially by the boombox and the Sony Walkman, and later on by digitally-based devices with higher audio quality such as portable CD players, personal audio players, MP3 players and (eventually) by smartphones, many of which contain FM radios. A transistor is a semiconductor device that amplifies and acts as an electronic switch. Background Before the transistor was invented, radios used vacuum tubes. Although portable vacuum tube radios were produced, they were typically bulky and heavy. The need for a low voltage high current source to power the filaments of the tubes and high voltage for the anode potential typically required two batteries. Vacuum tubes were also inefficient and fragile compared to transistors and had a limited lifetime. Bell Laboratories demonstrated the first transistor on December 23, 1947. The scientific team at Bell Laboratories responsible for the solid-state amplifier included William Shockley, Walter Houser Brattain, and John Bardeen After obtaining patent protection, the company held a news conference on June 30, 1948, at which a prototy
https://en.wikipedia.org/wiki/Perturbation%20theory%20%28quantum%20mechanics%29
In quantum mechanics, perturbation theory is a set of approximation schemes directly related to mathematical perturbation for describing a complicated quantum system in terms of a simpler one. The idea is to start with a simple system for which a mathematical solution is known, and add an additional "perturbing" Hamiltonian representing a weak disturbance to the system. If the disturbance is not too large, the various physical quantities associated with the perturbed system (e.g. its energy levels and eigenstates) can be expressed as "corrections" to those of the simple system. These corrections, being small compared to the size of the quantities themselves, can be calculated using approximate methods such as asymptotic series. The complicated system can therefore be studied based on knowledge of the simpler one. In effect, it is describing a complicated unsolved system using a simple, solvable system. Approximate Hamiltonians Perturbation theory is an important tool for describing real quantum systems, as it turns out to be very difficult to find exact solutions to the Schrödinger equation for Hamiltonians of even moderate complexity. The Hamiltonians to which we know exact solutions, such as the hydrogen atom, the quantum harmonic oscillator and the particle in a box, are too idealized to adequately describe most systems. Using perturbation theory, we can use the known solutions of these simple Hamiltonians to generate solutions for a range of more complicated systems. Applying perturbation theory Perturbation theory is applicable if the problem at hand cannot be solved exactly, but can be formulated by adding a "small" term to the mathematical description of the exactly solvable problem. For example, by adding a perturbative electric potential to the quantum mechanical model of the hydrogen atom, tiny shifts in the spectral lines of hydrogen caused by the presence of an electric field (the Stark effect) can be calculated. This is only approximate because th
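For reference, the standard non-degenerate, time-independent corrections that the passage describes in words are (a textbook statement, not reproduced from the excerpt):

```latex
\[
  H = H_{0} + \lambda V, \qquad
  E_{n} \approx E_{n}^{(0)}
        + \lambda \,\langle n^{(0)} | V | n^{(0)} \rangle
        + \lambda^{2} \sum_{m \neq n}
          \frac{\bigl|\langle m^{(0)} | V | n^{(0)} \rangle\bigr|^{2}}
               {E_{n}^{(0)} - E_{m}^{(0)}}
\]
% H_0 is the solvable Hamiltonian, V the weak perturbation, and the
% superscript (0) marks unperturbed eigenvalues and eigenstates.
```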
https://en.wikipedia.org/wiki/Creatures%20%28video%20game%20series%29
Creatures is an artificial life video game series created in the mid-1990s by English computer scientist Steve Grand while working for the Cambridge video game developer Millennium Interactive. The gameplay focuses on raising alien creatures known as Norns, teaching them to survive, helping them explore their world, defending them against other species, and breeding them. Words can be taught to the creatures by a learning computer (for verbs) or by repeating the name of the object while the creature looks at it. Once a creature understands language, the player can instruct their creature by typing in instructions, which the creature can choose to obey. A complete life cycle is modeled for the creatures - childhood, adolescence, adulthood, and senescence, each with its own unique needs. The gameplay is designed to foster an emotional bond between the player and their creatures. Rather than taking a scripted approach, the games in the Creatures series were driven by detailed biological and neurological simulation and its unexpected results. There have been six major Creatures releases from Creature Labs: between 1996 and 2001 there were three main games, the Docking Station add-on (generally referred to as a separate game) and two children's games, and there were three games created for console systems. Overview The program was one of the first commercial titles to code artificial life organisms from the genetic level upwards using a sophisticated biochemistry and neural network brains, including simulated senses of sight, hearing and touch. This meant that the Norns and their DNA could develop and "evolve" in increasingly diverse ways, unpredicted by the makers. By breeding certain Norns with others, some traits could be passed on to following generations. The Norns turned out to behave similarly to living creatures. Norns respond to external stimuli, such as interaction with the player, and internal stimuli, such as changes in chemical concentrations or neural a
https://en.wikipedia.org/wiki/Intranet%20strategies
In business, an intranet strategy is the use of an intranet and associated hardware and software to obtain one or more organizational objectives. An intranet is an access-restricted network used internally in an organization. An intranet uses the same concepts and technologies as the World Wide Web and Internet. This includes web browsers and servers running on the internet protocol suite and using Internet protocols such as FTP, TCP/IP, HTML, and Simple Mail Transfer Protocol (SMTP). Role of intranets Intranets are generally used for four types of applications: 1) Communication and collaboration send and receive e-mail, faxes, voice mail, and paging discussion rooms and chat rooms audio and video conferencing virtual team meetings and project collaboration online company discussions as events (e.g., IBM Jams) inhouse blogs 2) Web publishing develop and publish hyperlinked multi-media documents such as: policy manuals company newsletters product catalogs technical drawings training material telephone directories 3) Business operations and management order processing inventory control production setup and control management information systems database access 4) Intranet portal management centrally administer all network functions including servers, clients, security, directories, and traffic give users access to a variety of internal and external business tools/applications integrate different technologies conduct regular user research to identify and confirm strategy (random sample surveys, usability testing, focus groups, in-depth interviews with wireframes, etc.) Why have an intranet strategy? Having a strategy pre-supposes a planned, orderly process with proper costing and budgeting, it involves consulting with the parties who are going to be using the intranet, allows for an efficient integration with existing systems and phasing-out of older ones, has long term benefits when the intranet needs to be scaled or made more secure, maintains control and qu
https://en.wikipedia.org/wiki/Pharmacy
Pharmacy is the science and practice of discovering, producing, preparing, dispensing, reviewing and monitoring medications, aiming to ensure the safe, effective, and affordable use of medicines. It is a miscellaneous science as it links health sciences with pharmaceutical sciences and natural sciences. The professional practice is becoming more clinically oriented as most of the drugs are now manufactured by pharmaceutical industries. Based on the setting, pharmacy practice is either classified as community or institutional pharmacy. Providing direct patient care in the community or institutional pharmacies is considered clinical pharmacy. The scope of pharmacy practice includes more traditional roles such as compounding and dispensing of medications. It also includes more modern services related to health care including clinical services, reviewing medications for safety and efficacy, and providing drug information. Pharmacists, therefore, are experts on drug therapy and are the primary health professionals who optimize the use of medication for the benefit of the patients. An establishment in which pharmacy (in the first sense) is practiced is called a pharmacy (this term is more common in the United States) or chemists (which is more common in Great Britain, though pharmacy is also used). In the United States and Canada, drugstores commonly sell medicines, as well as miscellaneous items such as confectionery, cosmetics, office supplies, toys, hair care products and magazines, and occasionally refreshments and groceries. In its investigation of herbal and chemical ingredients, the work of the apothecary may be regarded as a precursor of the modern sciences of chemistry and pharmacology, prior to the formulation of the scientific method. Disciplines The field of pharmacy can generally be divided into three primary disciplines: Pharmaceutics Pharmacokinetics Medicinal Chemistry and Pharmacognosy Pharmacy Practice The boundaries between these disciplines and
https://en.wikipedia.org/wiki/Phylogeography
Phylogeography is the study of the historical processes that may be responsible for the past to present geographic distributions of genealogical lineages. This is accomplished by considering the geographic distribution of individuals in light of genetics, particularly population genetics. This term was introduced to describe geographically structured genetic signals within and among species. An explicit focus on a species' biogeography/biogeographical past sets phylogeography apart from classical population genetics and phylogenetics. Past events that can be inferred include population expansion, population bottlenecks, vicariance, dispersal, and migration. Recently developed approaches integrating coalescent theory or the genealogical history of alleles and distributional information can more accurately address the relative roles of these different historical forces in shaping current patterns. Development The term phylogeography was first used by John Avise in his 1987 work Intraspecific Phylogeography: The Mitochondrial DNA Bridge Between Population Genetics and Systematics. Historical biogeography is a synthetic discipline that addresses how historical, geological, climatic and ecological conditions influenced the past and current distribution of species. As part of historical biogeography, researchers had been evaluating the geographical and evolutionary relationships of organisms years before. Two developments during the 1960s and 1970s were particularly important in laying the groundwork for modern phylogeography; the first was the spread of cladistic thought, and the second was the development of plate tectonics theory. The resulting school of thought was vicariance biogeography, which explained the origin of new lineages through geological events like the drifting apart of continents or the formation of rivers. When a continuous population (or species) is divided by a new river or a new mountain range (i.e., a vicariance event), two populations (or spec
https://en.wikipedia.org/wiki/Spontaneous%20symmetry%20breaking
Spontaneous symmetry breaking is a spontaneous process of symmetry breaking, by which a physical system in a symmetric state spontaneously ends up in an asymmetric state. In particular, it can describe systems where the equations of motion or the Lagrangian obey symmetries, but the lowest-energy vacuum solutions do not exhibit that same symmetry. When the system goes to one of those vacuum solutions, the symmetry is broken for perturbations around that vacuum even though the entire Lagrangian retains that symmetry. Overview By definition, spontaneous symmetry breaking requires the existence of physical laws (e.g. quantum mechanics) which are invariant under a symmetry transformation (such as translation or rotation), so that any pair of outcomes differing only by that transformation have the same probability distribution. For example if measurements of an observable at any two different positions have the same probability distribution, the observable has translational symmetry. Spontaneous symmetry breaking occurs when this relation breaks down, while the underlying physical laws remain symmetrical. Conversely, in explicit symmetry breaking, if two outcomes are considered, the probability distributions of a pair of outcomes can be different. For example in an electric field, the forces on a charged particle are different in different directions, so the rotational symmetry is explicitly broken by the electric field which does not have this symmetry. Phases of matter, such as crystals, magnets, and conventional superconductors, as well as simple phase transitions can be described by spontaneous symmetry breaking. Notable exceptions include topological phases of matter like the fractional quantum Hall effect. Typically, when spontaneous symmetry breaking occurs, the observable properties of the system change in multiple ways. For example the density, compressibility, coefficient of thermal expansion, and specific heat will be expected to change when a liquid bec
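A standard textbook illustration of this situation (added here as a hedged example, not quoted from the excerpt) is the double-well potential:

```latex
% A potential invariant under phi -> -phi whose minima are not.
\[
  V(\phi) = -\mu^{2}\,\phi^{2} + \lambda\,\phi^{4}, \qquad \mu^{2}, \lambda > 0
\]
% V(-phi) = V(phi), yet the lowest-energy states sit at
% phi = \pm\mu/\sqrt{2\lambda}; settling into either one spontaneously
% breaks the symmetry of the underlying law.
```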
https://en.wikipedia.org/wiki/Composition%20series
In abstract algebra, a composition series provides a way to break up an algebraic structure, such as a group or a module, into simple pieces. The need for considering composition series in the context of modules arises from the fact that many naturally occurring modules are not semisimple, hence cannot be decomposed into a direct sum of simple modules. A composition series of a module M is a finite increasing filtration of M by submodules such that the successive quotients are simple and serves as a replacement of the direct sum decomposition of M into its simple constituents. A composition series may not exist, and when it does, it need not be unique. Nevertheless, a group of results known under the general name Jordan–Hölder theorem asserts that whenever composition series exist, the isomorphism classes of simple pieces (although, perhaps, not their location in the composition series in question) and their multiplicities are uniquely determined. Composition series may thus be used to define invariants of finite groups and Artinian modules. A related but distinct concept is a chief series: a composition series is a maximal subnormal series, while a chief series is a maximal normal series. For groups If a group G has a normal subgroup N, then the factor group G/N may be formed, and some aspects of the study of the structure of G may be broken down by studying the "smaller" groups G/N and N. If G has no normal subgroup that is different from G and from the trivial group, then G is a simple group. Otherwise, the question naturally arises as to whether G can be reduced to simple "pieces", and if so, are there any unique features of the way this can be done? More formally, a composition series of a group G is a subnormal series of finite length with strict inclusions, such that each Hi is a maximal proper normal subgroup of Hi+1. Equivalently, a composition series is a subnormal series such that each factor group Hi+1 / Hi is simple. The factor groups are called
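A small standard example of the group-theoretic definition above (added for illustration): two composition series of the cyclic group of order 12.

```latex
% <k> denotes the subgroup of Z/12Z generated by k.
\[
  \{0\} \;\triangleleft\; \langle 6 \rangle \;\triangleleft\; \langle 2 \rangle \;\triangleleft\; \mathbb{Z}/12\mathbb{Z},
  \qquad
  \{0\} \;\triangleleft\; \langle 4 \rangle \;\triangleleft\; \langle 2 \rangle \;\triangleleft\; \mathbb{Z}/12\mathbb{Z}.
\]
% Successive factors: Z/2, Z/3, Z/2 for the first series and Z/3, Z/2, Z/2 for
% the second; by Jordan-Holder the multiset of simple factors is the same.
```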
https://en.wikipedia.org/wiki/Jensen%27s%20inequality
In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. It was proved by Jensen in 1906, building on an earlier proof of the same inequality for doubly-differentiable functions by Otto Hölder in 1889. Given its generality, the inequality appears in many forms depending on the context, some of which are presented below. In its simplest form the inequality states that the convex transformation of a mean is less than or equal to the mean applied after convex transformation; it is a simple corollary that the opposite is true of concave transformations. Jensen's inequality generalizes the statement that the secant line of a convex function lies above the graph of the function, which is Jensen's inequality for two points: the secant line consists of weighted means of the convex function (for t ∈ [0,1]), while the graph of the function is the convex function of the weighted means, Thus, Jensen's inequality is In the context of probability theory, it is generally stated in the following form: if X is a random variable and is a convex function, then The difference between the two sides of the inequality, , is called the Jensen gap. Statements The classical form of Jensen's inequality involves several numbers and weights. The inequality can be stated quite generally using either the language of measure theory or (equivalently) probability. In the probabilistic setting, the inequality can be further generalized to its full strength. Finite form For a real convex function , numbers in its domain, and positive weights , Jensen's inequality can be stated as: and the inequality is reversed if is concave, which is Equality holds if and only if or is linear on a domain containing . As a particular case, if the weights are all equal, then () and () become For instance, the function is concave, so substituting in the previous formula () estab
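Written out explicitly (standard statements of the forms sketched in the excerpt, whose displayed formulas were lost in extraction):

```latex
\[
  \varphi\Bigl(\sum_{i=1}^{n} a_{i} x_{i}\Bigr) \;\le\; \sum_{i=1}^{n} a_{i}\,\varphi(x_{i})
  \qquad\text{for convex } \varphi,\ a_{i} > 0,\ \textstyle\sum_{i} a_{i} = 1,
\]
\[
  \varphi\bigl(\mathbb{E}[X]\bigr) \;\le\; \mathbb{E}\bigl[\varphi(X)\bigr]
  \qquad\text{for a random variable } X \text{ and convex } \varphi.
\]
% Both inequalities reverse when varphi is concave.
```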
https://en.wikipedia.org/wiki/Chord%20%28geometry%29
A chord (from the Latin chorda, meaning "bowstring") of a circle is a straight line segment whose endpoints both lie on a circular arc. If a chord were to be extended infinitely on both directions into a line, the object is a secant line. The perpendicular line passing through the chord's midpoint is called sagitta (Latin for "arrow"). More generally, a chord is a line segment joining two points on any curve, for instance, on an ellipse. A chord that passes through a circle's center point is the circle's diameter. In circles Among properties of chords of a circle are the following: Chords are equidistant from the center if and only if their lengths are equal. Equal chords are subtended by equal angles from the center of the circle. A chord that passes through the center of a circle is called a diameter and is the longest chord of that specific circle. If the line extensions (secant lines) of chords AB and CD intersect at a point P, then their lengths satisfy AP·PB = CP·PD (power of a point theorem). In conics The midpoints of a set of parallel chords of a conic are collinear (midpoint theorem for conics). In trigonometry Chords were used extensively in the early development of trigonometry. The first known trigonometric table, compiled by Hipparchus, tabulated the value of the chord function for every degrees. In the second century AD, Ptolemy of Alexandria compiled a more extensive table of chords in his book on astronomy, giving the value of the chord for angles ranging from to 180 degrees by increments of degree. The circle was of diameter 120, and the chord lengths are accurate to two base-60 digits after the integer part. The chord function is defined geometrically as shown in the picture. The chord of an angle is the length of the chord between two points on a unit circle separated by that central angle. The angle θ is taken in the positive sense and must lie in the interval (radian measure). The chord function can be related to the modern s
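The chord formulas alluded to above, written out (a hedged reconstruction using standard notation):

```latex
\[
  \operatorname{crd}\theta \;=\; 2r\,\sin\frac{\theta}{2}
  \quad\text{for a chord subtending central angle } \theta \text{ in a circle of radius } r,
\]
\[
  \text{so on a unit circle } \operatorname{crd}\theta = 2\sin(\theta/2);
  \quad
  \text{a chord at distance } d \text{ from the centre has length } 2\sqrt{r^{2}-d^{2}}.
\]
```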
https://en.wikipedia.org/wiki/Incompatible%20Timesharing%20System
Incompatible Timesharing System (ITS) is a time-sharing operating system developed principally by the MIT Artificial Intelligence Laboratory, with help from Project MAC. The name is the jocular complement of the MIT Compatible Time-Sharing System (CTSS). ITS, and the software developed on it, were technically and culturally influential far beyond their core user community. Remote "guest" or "tourist" access was easily available via the early ARPAnet, allowing many interested parties to informally try out features of the operating system and application programs. The wide-open ITS philosophy and collaborative online community were a major influence on the hacker culture, as described in Steven Levy's book Hackers, and were the direct forerunners of the free and open-source software, open-design, and Wiki movements. History ITS development was initiated in the late 1960s by those (the majority of the MIT AI Lab staff at that time) who disagreed with the direction taken by Project MAC's Multics project (which had started in the mid-1960s), particularly such decisions as the inclusion of powerful system security. The name was chosen by Tom Knight as a joke on the name of the earliest MIT time-sharing operating system, the Compatible Time-Sharing System, which dated from the early 1960s. By simplifying their system compared to Multics, ITS's authors were able to quickly produce a functional operating system for their lab. ITS was written in assembly language, originally for the Digital Equipment Corporation PDP-6 computer, but the majority of ITS development and use was on the later, largely compatible, PDP-10. Although not used as intensively after about 1986, ITS continued to operate on original hardware at MIT until 1990, and then until 1995 at Stacken Computer Club in Sweden. Today, some ITS implementations continue to be remotely accessible, via emulation of PDP-10 hardware running on modern, low-cost computers supported by interested hackers. Significant techn
https://en.wikipedia.org/wiki/Obfuscated%20Perl%20Contest
The Obfuscated Perl Contest was a competition for programmers of Perl which was held annually between 1996 and 2000. Entrants to the competition aimed to write "devious, inhuman, disgusting, amusing, amazing, and bizarre Perl code". It was run by The Perl Journal and took its name from the International Obfuscated C Code Contest. Contest The entries were judged on aesthetics, output and incomprehensibility. One entrant per year received the Best of Show award. Entrants were advised to try and demonstrate a range of Perl knowledge, while being humorous, surprising and deceitful. Code which purposely crashed the judges' machines was discouraged. The competition was typically divided into four categories, which, in the last contest, included: Create a Diversion (limit of 2048 bytes if using Perl/Tk, 512 bytes otherwise) World Wide Wasteland (limit of 512 bytes) Inner Beauty (limit of 512 bytes) Best The Perl Journal (code which generated the words The Perl Journal, limit of 256 bytes) See also Obfuscated code Just another Perl hacker References Further reading — reprints of the announcements, made in The Perl Journal by Felix S. Gallo, of the Zeroth, First, Third, Fourth, and Fifth contests Perl Computer humor Ironic and humorous awards Programming contests Recurring events established in 1996 Recurring events disestablished in 2000 Software obfuscation
https://en.wikipedia.org/wiki/Brownian%20tree
In probability theory, the Brownian tree, or Aldous tree, or Continuum Random Tree (CRT) is a random real tree that can be defined from a Brownian excursion. The Brownian tree was defined and studied by David Aldous in three articles published in 1991 and 1993. This tree has since then been generalized. This random tree has several equivalent definitions and constructions: using sub-trees generated by finitely many leaves, using a Brownian excursion, Poisson separating a straight line or as a limit of Galton-Watson trees. Intuitively, the Brownian tree is a binary tree whose nodes (or branching points) are dense in the tree; which is to say that for any distinct two points of the tree, there will always exist a node between them. It is a fractal object which can be approximated with computers or by physical processes with dendritic structures. Definitions The following definitions are different characterisations of a Brownian tree, they are taken from Aldous's three articles. The notions of leaf, node, branch, root are the intuitive notions on a tree (for details, see real trees). Finite-dimensional laws This definition gives the finite-dimensional laws of the subtrees generated by finitely many leaves. Let us consider the space of all binary trees with leaves numbered from to . These trees have edges with lengths . A tree is then defined by its shape (which is to say the order of the nodes) and the edge lengths. We define a probability law of a random variable on this space by: where . In other words, depends not on the shape of the tree but rather on the total sum of all the edge lengths. In other words, the Brownian tree is defined from the laws of all the finite sub-trees one can generate from it. Continuous tree The Brownian tree is a real tree defined from a Brownian excursion (see characterisation 4 in Real tree). Let be a Brownian excursion. Define a pseudometric on with for any We then define an equivalence relation, noted on
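The displayed formulas in this excerpt did not survive extraction. The sketch below is an addition: it assumes the standard pseudometric d(x, y) = e(x) + e(y) - 2 * (minimum of e between x and y), which is my reading of the dropped formula, and approximates a Brownian excursion by a rescaled random-walk bridge rotated at its minimum.

```python
# Rough numerical sketch (not from the article): build an excursion-like path
# and evaluate the tree pseudometric that defines the continuum random tree.

import random
import math

def excursion(n):
    steps = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean = sum(steps) / n
    bridge = [0.0]
    for s in steps:
        bridge.append(bridge[-1] + s - mean)      # random-walk bridge, ends near 0
    k = bridge.index(min(bridge))                 # rotate at the minimum (Vervaat-style)
    rotated = bridge[k:] + bridge[:k]
    base = rotated[0]
    return [(v - base) / math.sqrt(n) for v in rotated]   # nonnegative, Brownian scaling

def tree_distance(e, i, j):
    lo, hi = min(i, j), max(i, j)
    return e[i] + e[j] - 2.0 * min(e[lo:hi + 1])

e = excursion(10_000)
print(tree_distance(e, 2_500, 7_500))   # distance between two points of the approximate tree
```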
https://en.wikipedia.org/wiki/Maximum%20and%20minimum
In mathematical analysis, the maximum and minimum of a function are, respectively, the largest and smallest value taken by the function. Known generically as extremum, they may be defined either within a given range (the local or relative extrema) or on the entire domain (the global or absolute extrema) of a function. Pierre de Fermat was one of the first mathematicians to propose a general technique, adequality, for finding the maxima and minima of functions. As defined in set theory, the maximum and minimum of a set are the greatest and least elements in the set, respectively. Unbounded infinite sets, such as the set of real numbers, have no minimum or maximum. In statistics, the corresponding concept is the sample maximum and minimum. Definition A real-valued function f defined on a domain X has a global (or absolute) maximum point at x∗, if for all x in X. Similarly, the function has a global (or absolute) minimum point at x∗, if for all x in X. The value of the function at a maximum point is called the maximum value of the function, denoted , and the value of the function at a minimum point is called the minimum value of the function. Symbolically, this can be written as follows: is a global maximum point of function if The definition of global minimum point also proceeds similarly. If the domain X is a metric space, then f is said to have a local (or relative) maximum point at the point x∗, if there exists some ε > 0 such that for all x in X within distance ε of x∗. Similarly, the function has a local minimum point at x∗, if f(x∗) ≤ f(x) for all x in X within distance ε of x∗. A similar definition can be used when X is a topological space, since the definition just given can be rephrased in terms of neighbourhoods. Mathematically, the given definition is written as follows: Let be a metric space and function . Then is a local maximum point of function if such that The definition of local minimum point can also proceed similarly. In both the global and local cases, the
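Restated symbolically (standard notation, filling in the symbols dropped during extraction):

```latex
\[
  x^{*} \text{ is a global maximum point of } f : X \to \mathbb{R}
  \iff f(x^{*}) \ge f(x) \text{ for all } x \in X,
\]
\[
  x^{*} \text{ is a local maximum point}
  \iff \exists\, \varepsilon > 0 \text{ such that } f(x^{*}) \ge f(x)
  \text{ for all } x \in X \text{ with } d(x, x^{*}) < \varepsilon.
\]
% Minimum points are defined with the inequalities reversed; the values
% f(x*) at such points are the maximum and minimum values of f.
```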
https://en.wikipedia.org/wiki/Identity%20%28mathematics%29
In mathematics, an identity is an equality relating one mathematical expression A to another mathematical expression B, such that A and B (which might contain some variables) produce the same value for all values of the variables within a certain range of validity. In other words, A = B is an identity if A and B define the same functions, and an identity is an equality between functions that are differently defined. For example, and are identities. Identities are sometimes indicated by the triple bar symbol instead of , the equals sign. Formally, an identity is a universally quantified equality. Common identities Algebraic identities Certain identities, such as and , form the basis of algebra, while other identities, such as and , can be useful in simplifying algebraic expressions and expanding them. Trigonometric identities Geometrically, trigonometric identities are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities involving both angles and side lengths of a triangle. Only the former are covered in this article. These identities are useful whenever expressions involving trigonometric functions need to be simplified. Another important application is the integration of non-trigonometric functions: a common technique which involves first using the substitution rule with a trigonometric function, and then simplifying the resulting integral with a trigonometric identity. One of the most prominent examples of trigonometric identities involves the equation which is true for all real values of . On the other hand, the equation is only true for certain values of , not all. For example, this equation is true when but false when . Another group of trigonometric identities concerns the so-called addition/subtraction formulas (e.g. the double-angle identity , the addition formula for ), which can be used to break down expressions of larger angles into those with smaller constituents
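Standard instances of the examples the excerpt cites, reconstructed here because the original formulas were dropped during extraction:

```latex
\[
  (a+b)^{2} = a^{2} + 2ab + b^{2}
  \qquad\text{and}\qquad
  \cos^{2}\theta + \sin^{2}\theta = 1
\]
% are identities: true for every value of the variables. By contrast,
% sin(theta) = 1/2 is an equation, not an identity: it holds for theta = pi/6
% but fails for theta = 0.
```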
https://en.wikipedia.org/wiki/CompactFlash
CompactFlash (CF) is a flash memory mass storage device used mainly in portable electronic devices. The format was specified and the devices were first manufactured by SanDisk in 1994. CompactFlash became one of the most successful of the early memory card formats, surpassing Miniature Card and SmartMedia. Subsequent formats, such as MMC/SD, various Memory Stick formats, and xD-Picture Card offered stiff competition. Most of these cards are smaller than CompactFlash while offering comparable capacity and speed. Proprietary memory card formats for use in professional audio and video, such as P2 and SxS, are faster, but physically larger and more costly. CompactFlash's popularity is declining as CFexpress is taking over. As of 2022, both Canon and Nikon's newest high end cameras, e.g. the Canon EOS R5, Canon EOS R3, and Nikon Z 9 use CFexpress cards for the higher performance required to record 8K video. Traditional CompactFlash cards use the Parallel ATA interface, but in 2008, a variant of CompactFlash, CFast was announced. CFast (also known as CompactFast) is based on the Serial ATA interface. In November 2010, SanDisk, Sony and Nikon presented a next generation card format to the CompactFlash Association. The new format has a similar form factor to CF/CFast but is based on the PCI Express interface instead of Parallel ATA or Serial ATA. With potential read and write speeds of 1 Gbit/s (125 MB/s) and storage capabilities beyond 2 TiB, the new format is aimed at high-definition camcorders and high-resolution digital cameras, but the new cards are not backward compatible with either CompactFlash or CFast. The XQD card format was officially announced by the CompactFlash Association in December 2011. Description There are two main subdivisions of CF cards, 3.3 mm-thick type I and 5 mm-thick type II (CF2). The type II slot is used by miniature hard drives and some other devices, such as the Hasselblad CFV Digital Back for the Hasselblad series of medium format ca
https://en.wikipedia.org/wiki/Downstream%20%28networking%29
In a telecommunications network or computer network, downstream refers to data sent from a network service provider to a customer. One process sending data primarily in the downstream direction is downloading. However, the overall download speed depends on the downstream speed of the user, the upstream speed of the server, and the network between them. In the client–server model, downstream can refer to the direction from the server to the client. References Data transmission Orientation (geometry)
https://en.wikipedia.org/wiki/HP%20Multi-Programming%20Executive
MPE (Multi-Programming Executive) is a discontinued business-oriented mainframe computer real-time operating system made by Hewlett-Packard. While initially a mini-mainframe, the final high-end systems supported 12 CPUs and over 2000 simultaneous users. Description It runs on the HP 3000 family of computers, which originally used HP custom 16-bit stack architecture CISC CPUs and were later migrated to PA-RISC where the operating system was called MPE XL. In 1983, the original version of MPE was written in a language called SPL (System Programming Language). MPE XL was written primarily in Pascal, with some assembly language and some of the old SPL code. In 1992, the OS name was changed to MPE/iX to indicate Unix interoperability with the addition of POSIX compatibility. The discontinuance of the product line was announced in late 2001, with support from HP terminating at the end of 2010. A number of 3rd party companies still support both the hardware and software. In 2002 HP released the last version MPE/iX 7.5. Commands Among others, MPE/iX supports the following list of common commands and programs. =SHUTDOWN BASIC CHDIR COPY DEBUG ECHO ELSE EXIT FORTRAN HELP IF PASCAL PRINT RENAME SH WHILE See also HP 3000 References External links Allegro Consultants, Inc. Free HP 3000 Software, MPE Software Support Beechglen Development Inc. MPE Software Support HP MPE/iX homepage HP MPE/iX Command reference openMPE Advocates of continued MPE and IMAGE source code access beyond 2010 Discontinued operating systems Multi-Programming Executive Proprietary operating systems 1974 software
https://en.wikipedia.org/wiki/Complete%20contract
A complete contract is an important concept from contract theory. If the parties to an agreement could specify their respective rights and duties for every possible future state of the world, their contract would be complete. There would be no gaps in the terms of the contract. However, because it would be prohibitively expensive to write a complete contract, contracts in the real world are usually incomplete. When a dispute arises and the case falls into a gap in the contract, either the parties must engage in bargaining or the courts must step in and fill in the gap. The idea of a complete contract is closely related to the notion of default rules, e.g. legal rules that will fill the gap in a contract in the absence of an agreed upon provision. In economics, the field of contract theory can be subdivided into the theory of complete contracts and the theory of incomplete contracts. Complete contracting theory is also called agency theory (or principal-agent theory) and closely related to (Bayesian) mechanism design and implementation theory. The two most important classes of models in complete contracting theory are adverse selection and moral hazard models. In this part of contract theory, every conceivable contractual arrangement between the contractual parties is allowed, provided it is feasible given the relevant technological and information constraints. In the presence of asymmetric information, the optimization problems can be handled due to the revelation principle. A leading textbook exposition of complete contract theory is Laffont and Martimort (2002). In contrast, incomplete contracting models consider situations in which only a restricted class of contracts is allowed, e.g. only simple ownership structures can be contractually specified in the Grossman-Hart-Moore theory of the firm. References External links Laffont, Jean-Jacques and David Martimort. The theory of incentives: The principal-agent model. Princeton University Press, 2009. Lawrence
https://en.wikipedia.org/wiki/Affine%20group
In mathematics, the affine group or general affine group of any affine space is the group of all invertible affine transformations from the space into itself. In the case of a Euclidean space (where the associated field of scalars is the real numbers), the affine group consists of those functions from the space to itself such that the image of every line is a line. Over any field, the affine group may be viewed as a matrix group in a natural way. If the associated field of scalars the real or complex field, then the affine group is a Lie group. Relation to general linear group Construction from general linear group Concretely, given a vector space , it has an underlying affine space obtained by "forgetting" the origin, with acting by translations, and the affine group of can be described concretely as the semidirect product of by , the general linear group of : The action of on is the natural one (linear transformations are automorphisms), so this defines a semidirect product. In terms of matrices, one writes: where here the natural action of on is matrix multiplication of a vector. Stabilizer of a point Given the affine group of an affine space , the stabilizer of a point is isomorphic to the general linear group of the same dimension (so the stabilizer of a point in is isomorphic to ); formally, it is the general linear group of the vector space : recall that if one fixes a point, an affine space becomes a vector space. All these subgroups are conjugate, where conjugation is given by translation from to (which is uniquely defined), however, no particular subgroup is a natural choice, since no point is special – this corresponds to the multiple choices of transverse subgroup, or splitting of the short exact sequence In the case that the affine group was constructed by starting with a vector space, the subgroup that stabilizes the origin (of the vector space) is the original . Matrix representation Representing the affine group as a semidirec
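A hedged reconstruction, in standard notation, of the matrix display referred to by "In terms of matrices, one writes:":

```latex
% An affine map x -> Ax + b written as an augmented matrix acting on (x, 1).
\[
  \begin{pmatrix} A & b \\ 0 & 1 \end{pmatrix}
  \begin{pmatrix} x \\ 1 \end{pmatrix}
  =
  \begin{pmatrix} Ax + b \\ 1 \end{pmatrix},
  \qquad A \in \mathrm{GL}(V), \; b \in V.
\]
```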
https://en.wikipedia.org/wiki/Affine%20space
In mathematics, an affine space is a geometric structure that generalizes some of the properties of Euclidean spaces in such a way that these are independent of the concepts of distance and measure of angles, keeping only the properties related to parallelism and ratio of lengths for parallel line segments. Affine space is the setting for affine geometry. As in Euclidean space, the fundamental objects in an affine space are called points, which can be thought of as locations in the space without any size or shape: zero dimensional. Through any pair of points an infinite straight line can be drawn, a one-dimensional set of points; through any three points which are not collinear, a two-dimensional plane can be drawn; and in general through points in general position a -dimensional flat or affine subspace can be drawn. Affine space is characterized by a notion of pairs of parallel lines which lie within the same plane but never meet each-other (non-parallel lines within the same plane intersect in a point). Given any line, a line parallel to it can be drawn through any point in the space, and the equivalence class of parallel lines are said to share a direction. Unlike for vectors in a vector space, in an affine space there is no distinguished point that serves as an origin. There is no predefined concept of adding or multiplying points together, or multiplying a point by a scalar number. However, for any affine space, an associated vector space can be constructed from the differences between start and end points, which are called displacement vectors, translation vectors or simply translations. Likewise, it makes sense to add a displacement vector to a point of an affine space, resulting in a new point translated from the starting point by that vector. While points cannot be arbitrarily added together, it is meaningful to take affine combinations of points: weighted sums with numerical coefficients summing to 1, resulting in another point. These coefficients defin
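Written out in standard notation (an illustrative addition), the point and vector operations described above are:

```latex
\[
  q - p \in \vec{A} \quad\text{(displacement vector between points)}, \qquad
  p + v \in A \quad\text{(translation of a point by a vector)},
\]
\[
  \sum_{i=1}^{n} \lambda_{i}\, p_{i} \quad\text{is defined whenever}\quad \sum_{i=1}^{n} \lambda_{i} = 1
  \quad\text{(an affine combination, e.g. the midpoint } \tfrac{1}{2}p + \tfrac{1}{2}q\text{)}.
\]
```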
https://en.wikipedia.org/wiki/Percolation
In physics, chemistry, and materials science, percolation () refers to the movement and filtering of fluids through porous materials. It is described by Darcy's law. Broader applications have since been developed that cover connectivity of many systems modeled as lattices or graphs, analogous to connectivity of lattice components in the filtration problem that modulates capacity for percolation. Background During the last decades, percolation theory, the mathematical study of percolation, has brought new understanding and techniques to a broad range of topics in physics, materials science, complex networks, epidemiology, and other fields. For example, in geology, percolation refers to filtration of water through soil and permeable rocks. The water flows to recharge the groundwater in the water table and aquifers. In places where infiltration basins or septic drain fields are planned to dispose of substantial amounts of water, a percolation test is needed beforehand to determine whether the intended structure is likely to succeed or fail. In two dimensional square lattice percolation is defined as follows. A site is "occupied" with probability p or "empty" (in which case its edges are removed) with probability 1 – p; the corresponding problem is called site percolation, see Fig. 2. Percolation typically exhibits universality. Statistical physics concepts such as scaling theory, renormalization, phase transition, critical phenomena and fractals are used to characterize percolation properties. Combinatorics is commonly employed to study percolation thresholds. Due to the complexity involved in obtaining exact results from analytical models of percolation, computer simulations are typically used. The current fastest algorithm for percolation was published in 2000 by Mark Newman and Robert Ziff. Examples Coffee percolation (see Fig. 1), where the solvent is water, the permeable substance is the coffee grounds, and the soluble constituents are the chemical compoun
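A minimal simulation sketch of square-lattice site percolation as defined above (an illustrative addition; lattice size, probabilities, and trial counts are arbitrary):

```python
import random
from collections import deque

def spans(p, n, seed=0):
    """Occupy each site with probability p; return True if an occupied
    cluster connects the top row to the bottom row."""
    rng = random.Random(seed)
    occupied = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    queue = deque((0, j) for j in range(n) if occupied[0][j])
    seen = set(queue)
    while queue:
        i, j = queue.popleft()
        if i == n - 1:
            return True                      # reached the bottom row
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and occupied[ni][nj] and (ni, nj) not in seen:
                seen.add((ni, nj))
                queue.append((ni, nj))
    return False

for p in (0.4, 0.55, 0.6, 0.7):              # the site threshold is near p ~ 0.593
    hits = sum(spans(p, 50, seed=s) for s in range(20))
    print(p, hits / 20)
```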
https://en.wikipedia.org/wiki/Probabilistic%20context-free%20grammar
Grammar theory to model symbol strings originated from work in computational linguistics aiming to understand the structure of natural languages. Probabilistic context free grammars (PCFGs) have been applied in probabilistic modeling of RNA structures almost 40 years after they were introduced in computational linguistics. PCFGs extend context-free grammars similarly to how hidden Markov models extend regular grammars. Each production is assigned a probability. The probability of a derivation (parse) is the product of the probabilities of the productions used in that derivation. These probabilities can be viewed as parameters of the model, and for large problems it is convenient to learn these parameters via machine learning. A probabilistic grammar's validity is constrained by the context of its training dataset. PCFGs have applications in areas as diverse as natural language processing, the study of the structure of RNA molecules, and the design of programming languages. Designing efficient PCFGs requires weighing scalability and generality. Issues such as grammar ambiguity must be resolved. The grammar design affects the accuracy of results. Grammar parsing algorithms have various time and memory requirements. Definitions Derivation: The process of recursive generation of strings from a grammar. Parsing: Finding a valid derivation using an automaton. Parse Tree: The alignment of the grammar to a sequence. An example of a parser for PCFG grammars is the pushdown automaton. The algorithm parses grammar nonterminals from left to right in a stack-like manner. This brute-force approach is not very efficient. In RNA secondary structure prediction, variants of the Cocke–Younger–Kasami (CYK) algorithm provide more efficient alternatives to grammar parsing than pushdown automata. Another example of a PCFG parser is the Stanford Statistical Parser which has been trained using Treebank. Formal definition Similar to a CFG, a probabilisti
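A toy illustration (added here; the grammar and its probabilities are invented) of a PCFG and of the probability of one derivation computed as the product of the probabilities of the rules it uses:

```python
# Productions per nonterminal, each with a probability; probabilities for a
# given left-hand side sum to 1.
grammar = {
    "S":  [(("NP", "VP"), 1.0)],
    "NP": [(("Det", "N"), 0.6), (("N",), 0.4)],
    "VP": [(("V", "NP"), 0.7), (("V",), 0.3)],
}

def rule_probability(lhs, rhs):
    for candidate, prob in grammar[lhs]:
        if candidate == rhs:
            return prob
    raise KeyError((lhs, rhs))

# One parse of "Det N V N": S -> NP VP, NP -> Det N, VP -> V NP, NP -> N
derivation = [("S", ("NP", "VP")), ("NP", ("Det", "N")),
              ("VP", ("V", "NP")), ("NP", ("N",))]

p = 1.0
for lhs, rhs in derivation:
    p *= rule_probability(lhs, rhs)
print(p)   # 1.0 * 0.6 * 0.7 * 0.4 = 0.168
```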
https://en.wikipedia.org/wiki/Multicellular%20organism
A multicellular organism is an organism that consists of more than one cell, in contrast to unicellular organism. All species of animals, land plants and most fungi are multicellular, as are many algae, whereas a few organisms are partially uni- and partially multicellular, like slime molds and social amoebae such as the genus Dictyostelium. Multicellular organisms arise in various ways, for example by cell division or by aggregation of many single cells. Colonial organisms are the result of many identical individuals joining together to form a colony. However, it can often be hard to separate colonial protists from true multicellular organisms, because the two concepts are not distinct; colonial protists have been dubbed "pluricellular" rather than "multicellular". There are also macroscopic organisms that are multinucleate though technically unicellular, such as the Xenophyophorea that can reach 20 cm. Evolutionary history Occurrence Multicellularity has evolved independently at least 25 times in eukaryotes, and also in some prokaryotes, like cyanobacteria, myxobacteria, actinomycetes, Magnetoglobus multicellularis or Methanosarcina. However, complex multicellular organisms evolved only in six eukaryotic groups: animals, symbiomycotan fungi, brown algae, red algae, green algae, and land plants. It evolved repeatedly for Chloroplastida (green algae and land plants), once for animals, once for brown algae, three times in the fungi (chytrids, ascomycetes, and basidiomycetes) and perhaps several times for slime molds and red algae. The first evidence of multicellular organization, which is when unicellular organisms coordinate behaviors and may be an evolutionary precursor to true multicellularity, is from cyanobacteria-like organisms that lived 3.0–3.5 billion years ago. To reproduce, true multicellular organisms must solve the problem of regenerating a whole organism from germ cells (i.e., sperm and egg cells), an issue that is studied in evolutionary develop
https://en.wikipedia.org/wiki/Structural%20engineer
Structural engineers analyze, design, plan, and research structural components and structural systems to achieve design goals and ensure the safety and comfort of users or occupants. Their work takes account mainly of safety, technical, economic, and environmental concerns, but they may also consider aesthetic and social factors. Structural engineering is usually considered a specialty discipline within civil engineering, but it can also be studied in its own right. In the United States, most practicing structural engineers are currently licensed as civil engineers, but the situation varies from state to state. Some states have a separate license for structural engineers who are required to design special or high-risk structures such as schools, hospitals, or skyscrapers. In the United Kingdom, most structural engineers in the building industry are members of the Institution of Structural Engineers or the Institution of Civil Engineers. Typical structures designed by a structural engineer include buildings, towers, stadiums, and bridges. Other structures such as oil rigs, space satellites, aircraft, and ships may also be designed by a structural engineer. Most structural engineers are employed in the construction industry, however, there are also structural engineers in the aerospace, automobile, and shipbuilding industries. In the construction industry, they work closely with architects, civil engineers, mechanical engineers, electrical engineers, quantity surveyors, and construction managers. Structural engineers ensure that buildings and bridges are built to be strong enough and stable enough to resist all appropriate structural loads (e.g., gravity, wind, snow, rain, seismic (earthquake), earth pressure, temperature, and traffic) to prevent or reduce the loss of life or injury. They also design structures to be stiff enough to not deflect or vibrate beyond acceptable limits. Human comfort is an issue that is regularly considered limited. Fatigue is also an im
https://en.wikipedia.org/wiki/RC%20circuit
A resistor–capacitor circuit (RC circuit), or RC filter or RC network, is an electric circuit composed of resistors and capacitors. It may be driven by a voltage or current source and these will produce different responses. A first order RC circuit is composed of one resistor and one capacitor and is the simplest type of RC circuit. RC circuits can be used to filter a signal by blocking certain frequencies and passing others. The two most common RC filters are the high-pass filters and low-pass filters; band-pass filters and band-stop filters usually require RLC filters, though crude ones can be made with RC filters. Introduction There are three basic, linear passive lumped analog circuit components: the resistor (R), the capacitor (C), and the inductor (L). These may be combined in the RC circuit, the RL circuit, the LC circuit, and the RLC circuit, with the acronyms indicating which components are used. These circuits, among them, exhibit a large number of important types of behaviour that are fundamental to much of analog electronics. In particular, they are able to act as passive filters. This article considers the RC circuit, in both series and parallel forms, as shown in the diagrams below. Natural response The simplest RC circuit consists of a resistor and a charged capacitor connected to one another in a single loop, without an external voltage source. Once the circuit is closed, the capacitor begins to discharge its stored energy through the resistor. The voltage across the capacitor, which is time-dependent, can be found by using Kirchhoff's current law. The current through the resistor must be equal in magnitude (but opposite in sign) to the time derivative of the accumulated charge on the capacitor. This results in the linear differential equation where is the capacitance of the capacitor. Solving this equation for yields the formula for exponential decay: where is the capacitor voltage at time . The time required for the voltage to fall
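A reconstruction, in standard form, of the natural-response equations whose displays were lost from the excerpt:

```latex
\[
  C\,\frac{dV}{dt} \;=\; -\,\frac{V}{R}
  \qquad\Longrightarrow\qquad
  V(t) \;=\; V_{0}\, e^{-t/(RC)},
\]
% where V_0 is the capacitor voltage at t = 0 and tau = RC is the time
% constant; after one time constant the voltage has fallen to roughly 36.8% of V_0.
```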
https://en.wikipedia.org/wiki/Acoustical%20engineering
Acoustical engineering (also known as acoustic engineering) is the branch of engineering dealing with sound and vibration. It includes the application of acoustics, the science of sound and vibration, in technology. Acoustical engineers are typically concerned with the design, analysis and control of sound. One goal of acoustical engineering can be the reduction of unwanted noise, which is referred to as noise control. Unwanted noise can have significant impacts on animal and human health and well-being, reduce attainment by students in schools, and cause hearing loss. Noise control principles are implemented into technology and design in a variety of ways, including control by redesigning sound sources, the design of noise barriers, sound absorbers, suppressors, and buffer zones, and the use of hearing protection (earmuffs or earplugs). Besides noise control, acoustical engineering also covers positive uses of sound, such as the use of ultrasound in medicine, programming digital synthesizers, designing concert halls to enhance the sound of orchestras and specifying railway station sound systems so that announcements are intelligible. Acoustic engineer (professional) Acoustic engineers usually possess a bachelor's degree or higher qualification in acoustics, physics or another engineering discipline. Practicing as an acoustic engineer usually requires a bachelor's degree with significant scientific and mathematical content. Acoustic engineers might work in acoustic consultancy, specializing in particular fields, such as architectural acoustics, environmental noise or vibration control. In other industries, acoustic engineers might: design automobile sound systems; investigate human response to sounds, such as urban soundscapes and domestic appliances; develop audio signal processing software for mixing desks, and design loudspeakers and microphones for mobile phones. Acousticians are also involved in researching and understanding sound scientifically. Some pos
https://en.wikipedia.org/wiki/Structural%20information%20theory
Structural information theory (SIT) is a theory about human perception and in particular about visual perceptual organization, which is a neuro-cognitive process. It has been applied to a wide range of research topics, mostly in visual form perception but also in, for instance, visual ergonomics, data visualization, and music perception. SIT began as a quantitative model of visual pattern classification. Nowadays, it includes quantitative models of symmetry perception and amodal completion, and is theoretically sustained by a perceptually adequate formalization of visual regularity, a quantitative account of viewpoint dependencies, and a powerful form of neurocomputation. SIT has been argued to be the best defined and most successful extension of Gestalt ideas. It is the only Gestalt approach providing a formal calculus that generates plausible perceptual interpretations. The simplicity principle A simplest code is a code with minimum information load, that is, a code that enables a reconstruction of the stimulus using a minimum number of descriptive parameters. Such a code is obtained by capturing a maximum amount of visual regularity and yields a hierarchical organization of the stimulus in terms of wholes and parts. The assumption that the visual system prefers simplest interpretations is called the simplicity principle. Historically, the simplicity principle is an information-theoretical translation of the Gestalt law of Prägnanz, which was inspired by the natural tendency of physical systems to settle into relatively stable states defined by a minimum of free-energy. Furthermore, just as the later-proposed minimum description length principle in algorithmic information theory (AIT), a.k.a. the theory of Kolmogorov complexity, it can be seen as a formalization of Occam's Razor, according to which the simplest interpretation of data is the best one. Simplicity versus likelihood Crucial to the latter finding is the distinction between, and integration of,
https://en.wikipedia.org/wiki/Optical%20tweezers
Optical tweezers (originally called single-beam gradient force trap) are scientific instruments that use a highly focused laser beam to hold and move microscopic and sub-microscopic objects like atoms, nanoparticles and droplets, in a manner similar to tweezers. If the object is held in air or vacuum without additional support, it can be called optical levitation. The laser light provides an attractive or repulsive force (typically on the order of piconewtons), depending on the relative refractive index between particle and surrounding medium. Levitation is possible if the force of the light counters the force of gravity. The trapped particles are usually micron-sized, or even smaller. Dielectric and absorbing particles can be trapped, too. Optical tweezers are used in biology and medicine (for example to grab and hold a single bacterium, a cell like a sperm cell or a blood cell, or a molecule like DNA), nanoengineering and nanochemistry (to study and build materials from single molecules), quantum optics and quantum optomechanics (to study the interaction of single particles with light). The development of optical tweezing by Arthur Ashkin was lauded with the 2018 Nobel Prize in Physics. History and development The detection of optical scattering and the gradient forces on micron sized particles was first reported in 1970 by Arthur Ashkin, a scientist working at Bell Labs. Years later, Ashkin and colleagues reported the first observation of what is now commonly referred to as an optical tweezer: a tightly focused beam of light capable of holding microscopic particles stable in three dimensions. In 2018, Ashkin was awarded the Nobel Prize in Physics for this development. One author of this seminal 1986 paper, Steven Chu, would go on to use optical tweezing in his work on cooling and trapping neutral atoms. This research earned Chu the 1997 Nobel Prize in Physics along with Claude Cohen-Tannoudji and William D. Phillips. In an interview, Steven Chu described how
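As a rough quantitative sketch (the standard Rayleigh-regime treatment, not specific to this article): for a dielectric sphere of radius $a$ much smaller than the wavelength, in a medium of refractive index $n_m$, the gradient force pulling the particle toward regions of high intensity $I(\mathbf{r})$ is often written as

$$\mathbf{F}_{\mathrm{grad}} = \frac{2\pi n_m a^3}{c} \left( \frac{m^2 - 1}{m^2 + 2} \right) \nabla I(\mathbf{r}), \qquad m = \frac{n_p}{n_m},$$

where $n_p$ is the particle's refractive index. For $m > 1$ the force points up the intensity gradient, toward the beam focus (attractive); for $m < 1$ it points away (repulsive), matching the dependence on relative refractive index described above.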
https://en.wikipedia.org/wiki/Use%20case
In software and systems engineering, the phrase use case is a polyseme with two senses: A usage scenario for a piece of software; often used in the plural to suggest situations where a piece of software may be useful. A potential scenario in which a system receives an external request (such as user input) and responds to it. This article discusses the latter sense. (For more on the other sense, see for example user persona). A use case is a list of actions or event steps typically defining the interactions between a role (known in the Unified Modeling Language (UML) as an actor) and a system to achieve a goal. The actor can be a human or another external system. In systems engineering, use cases are used at a higher level than within software engineering, often representing missions or stakeholder goals. The detailed requirements may then be captured in the Systems Modeling Language (SysML) or as contractual statements. History In 1987, Ivar Jacobson presented the first article on use cases at the OOPSLA'87 conference. He described how this technique was used at Ericsson to capture and specify requirements of a system using textual, structural, and visual modeling techniques to drive object-oriented analysis and design. Originally he had used the terms usage scenarios and usage case – the latter a direct translation of his Swedish term användningsfall – but found that neither of these terms sounded natural in English, and eventually he settled on use case. In 1992 he co-authored the book Object-Oriented Software Engineering - A Use Case Driven Approach, which laid the foundation of the OOSE system engineering method and helped to popularize use cases for capturing functional requirements, especially in software development. In 1994 he published a book about use cases and object-oriented techniques applied to business models and business process reengineering. At the same time, Grady Booch and James Rumbaugh worked at unifying their object-oriented analysis a
https://en.wikipedia.org/wiki/Puyo%20Puyo%20%28video%20game%29
Puyo Puyo is a puzzle video game released in 1991 by Compile for the MSX2. Since its creation, it has used characters from Madō Monogatari. It was created by Masamitsu "Moo" Niitani, the founder of Compile, who was inspired by certain elements from the Tetris and Dr. Mario series of games. The game was released by Tokuma Shoten on the same day as the MSX2 release under the name and as part of the Famimaga Disk series for the Family Computer Disk System. A year after the MSX2 and FDS versions, Sega released an arcade version, which heavily expanded the previous versions by including a one-player story mode and a two-player competitive mode. Gameplay The main game of Puyo Puyo is played against at least one opponent, computer or human. The game itself has three modes, Single Puyo Puyo, Double Puyo Puyo, and Endless Puyo Puyo. In Single mode, the player takes on the role of Arle Nadja, a 16-year-old spellcaster who takes pleasure in foiling the Dark Prince's plans. The Dark Prince wishes to take over the world, and Arle stands in his way. To reach him, however, Arle must first battle her way through 12 opponents. With the exception of Rulue, who is in love with the Dark Prince, these opponents are not sent by him; mostly they just want to pull shenanigans with Arle. Once Arle has beaten the Dark Prince, the world is saved and she can return home. As in all main Puyo games, the story mode consists of playing Puyo matches against a fixed sequence of characters in one of three courses. In Double mode, two players play against each other, each trying to out-chain the other and fill up the opponent's grid. Because sending large amounts of garbage blocks made games short-lived no matter how many chains were played, Compile added the rule of Offsetting in Puyo Puyo 2 and onwards. This lets players counter opponents' attacks with chains of their own, sending any garbage blocks back to
https://en.wikipedia.org/wiki/Year%202038%20problem
The year 2038 problem (also known as Y2038, Y2K38, Y2K38 superbug or the Epochalypse) is a time formatting bug in computer systems when representing times after 03:14:07 UTC on 19 January 2038. The problem exists in systems which measure Unix time – the number of seconds elapsed since the Unix epoch (00:00:00 UTC on 1 January 1970) – and store it in a signed 32-bit integer. The data type is only capable of representing integers between −2^31 and 2^31 − 1, meaning the latest time that can be properly encoded is 2^31 − 1 seconds after epoch (03:14:07 UTC on 19 January 2038). Attempting to increment to the following second (03:14:08) will cause the integer to overflow, setting its value to −2^31, which systems will interpret as 2^31 seconds before epoch (20:45:52 UTC on 13 December 1901). The problem is similar in nature to the year 2000 problem. Computer systems that use time for critical computations may encounter fatal errors if the Y2038 problem is not addressed. Some applications that use future dates have already encountered the bug. The most vulnerable systems are those which are infrequently or never updated, such as legacy and embedded systems. There is no universal solution to the problem, though many modern systems have been upgraded to measure Unix time with signed 64-bit integers which will not overflow for 292 billion years, approximately 21 times the estimated age of the universe. Cause Many computer systems measure time and date as Unix time, an international standard for digital timekeeping. Unix time is defined as the number of seconds elapsed since 00:00:00 UTC on 1 January 1970 (an arbitrarily chosen time based on the creation of the first Unix system), which has been dubbed the Unix epoch. Unix time has historically been encoded as a signed 32-bit integer, a data type composed of 32 binary digits (bits) which represent an integer value, with 'signed' meaning that the number is stored in two's complement format. Thus, a signed 32-bit integer can only rep
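A minimal sketch of the failure mode in Python; Python's own integers are arbitrary-precision, so the two's-complement truncation to 32 bits is emulated explicitly (printing the 1901 date requires a platform whose C library accepts negative timestamps):

from datetime import datetime, timezone

def as_int32(n: int) -> int:
    """Emulate two's-complement truncation to a signed 32-bit integer."""
    n &= 0xFFFFFFFF            # keep only the low 32 bits
    return n - 0x100000000 if n >= 0x80000000 else n

last_ok = 2**31 - 1            # 03:14:07 UTC, 19 January 2038
print(datetime.fromtimestamp(as_int32(last_ok), tz=timezone.utc))
# 2038-01-19 03:14:07+00:00

wrapped = as_int32(last_ok + 1)  # one more second overflows the counter
print(wrapped)                   # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))
# 1901-12-13 20:45:52+00:00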
https://en.wikipedia.org/wiki/Tomsrtbt
tomsrtbt (pronounced: Tom's Root Boot) is a very small Linux distribution. It is short for "Tom's floppy which has a root filesystem and is also bootable." Its author, Tom Oehser, touts it as "The most GNU/Linux on one floppy disk", containing many common Linux command-line tools useful for system recovery (Linux and other operating systems.) It also features drivers for many types of hardware, and network connectivity. It could be created from within Linux or earlier versions of Windows running in MS-DOS mode, either by formatting a standard 1.44MB floppy disk as a higher density 1.722MB disk and writing the tomsrtbt image to the disk, or by burning it as a bootable CD. It is capable of reading and writing the filesystems of many operating systems of its era, including ext2/ext3 (used in Linux), FAT (used by DOS and Windows), NTFS (used in Windows NT, 2000, and XP) and Minix (used by the Minix operating system). Windows NT and most later versions of Windows, including 2000, XP, and Vista, cannot create a tomsrtbt floppy as their floppy driver does not allow the extended format. A few of the utilities on tomsrtbt are written in the Lua programming language, and many more use BusyBox. Space saving compiler options were used throughout, the kernel was patched to support loading an image compressed with Bzip2, and in many cases older or alternate versions of programs were selected due to their smaller size.
https://en.wikipedia.org/wiki/Integrated%20receiver/decoder
An integrated receiver/decoder (IRD) is an electronic device used to pick up a radio-frequency signal and convert the digital information transmitted in it. Consumer IRDs Consumer IRDs, commonly called set-top boxes, are used by end users and are much cheaper compared to professional IRDs. To curb content piracy, they also lack many features and interfaces found in professional IRDs, such as outputting uncompressed SDI video or ASI transport stream dumps. They are also designed to be more aesthetically pleasing. Professional IRDs Commonly found in radio, television, cable and satellite broadcasting facilities, the IRD is generally used for the reception of contribution feeds that are intended for re-broadcasting. The IRD is the interface between a receiving satellite dish or telco networks and a broadcasting facility's video/audio infrastructure. Professional IRDs have various features that consumer IRDs lack, such as: SDI outputs; ASI inputs/outputs; TSoIP inputs; AES/EBU audio decoding; VBI reinsertion; WSS data and pass-through; transport stream demultiplexing; genlock input; frame synchronization of digital video output to analogue input; closed captions and VITS/ITS/VITC insertion; video test pattern generation; remote management over LAN/WAN; a GPI interface for sending external alarm triggers; and rack mountability. Uses Direct broadcast satellite (DBS) television applications like DirecTV, Astra or DishTV; fixed service satellite (FSS) applications like VideoCipher, DigiCipher, or PowerVu; digital audio radio satellite (DARS) applications like XM Satellite Radio and Sirius Satellite Radio; digital audio broadcasting (DAB) applications like Eureka 147 and IBOC; digital video broadcasting (DVB) applications like DVB-T and ATSC
https://en.wikipedia.org/wiki/Liskov%20substitution%20principle
The Liskov substitution principle (LSP) is a particular definition of a subtyping relation, called strong behavioral subtyping, that was initially introduced by Barbara Liskov in a 1987 conference keynote address titled Data abstraction and hierarchy. It is based on the concept of "substitutability", a principle in object-oriented programming stating that an object (such as a class) may be replaced by a sub-object (such as a class that extends the first class) without breaking the program. It is a semantic rather than merely syntactic relation, because it intends to guarantee semantic interoperability of types in a hierarchy, object types in particular. Barbara Liskov and Jeannette Wing described the principle succinctly in a 1994 paper as follows: Subtype Requirement: Let φ(x) be a property provable about objects x of type T. Then φ(y) should be true for objects y of type S where S is a subtype of T. Symbolically: S ≤ T → (∀x:T. φ(x) → ∀y:S. φ(y)). That is, if S subtypes T, what holds for T-objects holds for S-objects. In the same paper, Liskov and Wing detailed their notion of behavioral subtyping in an extension of Hoare logic, which bears a certain resemblance to Bertrand Meyer's design by contract in that it considers the interaction of subtyping with preconditions, postconditions and invariants. Principle Liskov's notion of a behavioural subtype defines a notion of substitutability for objects; that is, if S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program (e.g. correctness). Behavioural subtyping is a stronger notion than typical subtyping of functions defined in type theory, which relies only on the contravariance of parameter types and covariance of the return type. Behavioural subtyping is undecidable in general: if q is the property "method for x always terminates", then it is impossible for a program (e.g. a compiler) to verify that it holds true for some subtype S of T, even if q does ho
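The textbook rectangle/square illustration of a violation (a common teaching example, not taken from the Liskov–Wing paper): Square silently strengthens an invariant of Rectangle, so a property provable for Rectangle objects fails for Square objects:

class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def set_width(self, w):
        self.w = w

    def area(self):
        return self.w * self.h

class Square(Rectangle):
    """Subtype that strengthens an invariant: width must equal height."""
    def set_width(self, w):
        self.w = self.h = w   # keeps the square square, changing behaviour

def stretch(r: Rectangle):
    # A property provable for Rectangle: after set_width(10), area == 10 * h.
    h = r.h
    r.set_width(10)
    assert r.area() == 10 * h, "behavioural subtyping violated"

stretch(Rectangle(2, 3))  # passes
stretch(Square(5, 5))     # fails: Square is not a behavioural subtype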
https://en.wikipedia.org/wiki/End-of-life%20product
An end-of-life product (EOL product) is a product that has reached the end of the product lifecycle, after which users no longer receive updates, indicating that the product is at the end of its useful life (from the vendor's point of view). At this stage, a vendor stops the marketing, selling, or provisioning of parts, services, or software updates for the product. The vendor may simply intend to limit or end support for the product. In the specific case of product sales, a vendor may employ the more specific term "end-of-sale" ("EOS"). All users can continue to access discontinued products, but cannot receive security updates and technical support. The time-frame after the last production date depends on the product and relates to the expected product lifetime from a customer's point of view. Different lifetime examples include toys from fast food chains (weeks or months), mobile phones (3 years) and cars (10 years). Product support Product support during EOL varies by product. For hardware with an expected lifetime of 10 years after production ends, the support includes spare parts, technical support and service. Spare-part lifetimes are price-driven due to increasing production costs, as high-volume production sites are often closed when series production ends. Manufacturers may also continue to offer parts and services even when it is not profitable, to demonstrate good faith and to retain a reputation of durability. Minimum service lifetimes are also mandated by law for some products in some jurisdictions. Alternatively, some producers may discontinue maintenance of a product in order to force customers to upgrade to newer products. Computing In the computing field, the concept of end-of-life has significance in the production, supportability and purchase of software and hardware products. For example, Microsoft marked Windows 98 for end-of-life on June 30, 2006. Software produced after that date may not work with it. Microsoft's product Office 2007 (released on November 3
https://en.wikipedia.org/wiki/Internet%20access
Internet access is a facility or service that provides connectivity for a computer, a computer network, or other network device to the Internet, and for individuals or organizations to access or use applications such as email and the World Wide Web. Internet access is offered for sale by an international hierarchy of Internet service providers (ISPs) using various networking technologies. At the retail level, many organizations, including municipal entities, also provide cost-free access to the general public. Availability of Internet access to the general public began with the commercialization of the early Internet in the early 1990s, and has grown with the availability of useful applications, such as the World Wide Web. In 1995, only 0.04 percent of the world's population had access, with well over half of those living in the United States, and consumer use was through dial-up. By the first decade of the 21st century, many consumers in developed nations used faster broadband technology, and by 2014, 41 percent of the world's population had access, broadband was almost ubiquitous worldwide, and global average connection speeds exceeded one megabit per second. History The Internet developed from the ARPANET, which was funded by the US government to support projects within the government and at universities and research laboratories in the US – but grew over time to include most of the world's large universities and the research arms of many technology companies. Use by a wider audience only came in 1995 when restrictions on the use of the Internet to carry commercial traffic were lifted. In the early to mid-1980s, most Internet access was from personal computers and workstations directly connected to local area networks (LANs) or from dial-up connections using modems and analog telephone lines. LANs typically operated at 10 Mbit/s, while modem data-rates grew from 1200 bit/s in the early 1980s, to 56 kbit/s by the late 1990s. Initially, dial-up connections were mad
https://en.wikipedia.org/wiki/Mel-frequency%20cepstrum
In sound processing, the mel-frequency cepstrum (MFC) is a representation of the short-term power spectrum of a sound, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency. Mel-frequency cepstral coefficients (MFCCs) are coefficients that collectively make up an MFC. They are derived from a type of cepstral representation of the audio clip (a nonlinear "spectrum-of-a-spectrum"). The difference between the cepstrum and the mel-frequency cepstrum is that in the MFC, the frequency bands are equally spaced on the mel scale, which approximates the human auditory system's response more closely than the linearly-spaced frequency bands used in the normal spectrum. This frequency warping can allow for better representation of sound, for example, in audio compression that might potentially reduce the transmission bandwidth and the storage requirements of audio signals. MFCCs are commonly derived as follows: Take the Fourier transform of (a windowed excerpt of) a signal. Map the powers of the spectrum obtained above onto the mel scale, using triangular overlapping windows or alternatively, cosine overlapping windows. Take the logs of the powers at each of the mel frequencies. Take the discrete cosine transform of the list of mel log powers, as if it were a signal. The MFCCs are the amplitudes of the resulting spectrum. There can be variations on this process, for example: differences in the shape or spacing of the windows used to map the scale, or addition of dynamics features such as "delta" and "delta-delta" (first- and second-order frame-to-frame difference) coefficients. The European Telecommunications Standards Institute in the early 2000s defined a standardised MFCC algorithm to be used in mobile phones. Applications MFCCs are commonly used as features in speech recognition systems, such as the systems which can automatically recognize numbers spoken into a telephone. MFCCs are also increasingly finding uses in mus
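A compact sketch of the four steps above for a single windowed excerpt, using NumPy and SciPy; the filter and coefficient counts (26 filters, 13 coefficients) are common defaults for illustration, not mandated by the definition:

import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(frame, sr, n_filters=26, n_coeffs=13):
    """MFCCs of one windowed excerpt, following the four steps above."""
    # 1. Fourier transform of the windowed excerpt -> power spectrum.
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    # 2. Map powers onto the mel scale with triangular overlapping windows.
    mel_points = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bin_freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    fbank = np.zeros((n_filters, len(bin_freqs)))
    for i in range(n_filters):
        lo, ctr, hi = mel_to_hz(mel_points[i:i + 3])
        rising = (bin_freqs - lo) / (ctr - lo)
        falling = (hi - bin_freqs) / (hi - ctr)
        fbank[i] = np.maximum(0.0, np.minimum(rising, falling))
    mel_powers = fbank @ spectrum
    # 3. Take the logs of the powers at each mel frequency.
    log_powers = np.log(mel_powers + 1e-10)
    # 4. DCT of the mel log powers; the MFCCs are the low-order amplitudes.
    return dct(log_powers, type=2, norm='ortho')[:n_coeffs]

sr = 16000
t = np.arange(1024) / sr
print(mfcc(np.sin(2 * np.pi * 440 * t), sr).shape)  # (13,)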
https://en.wikipedia.org/wiki/Lorenz%20cipher
The Lorenz SZ40, SZ42a and SZ42b were German rotor stream cipher machines used by the German Army during World War II. They were developed by C. Lorenz AG in Berlin. The model name SZ was derived from Schlüssel-Zusatz, meaning cipher attachment. The instruments implemented a Vernam stream cipher. British cryptanalysts, who referred to encrypted German teleprinter traffic as Fish, dubbed the machine and its traffic Tunny (meaning tunafish) and deduced its logical structure three years before they saw such a machine. The SZ machines were in-line attachments to standard teleprinters. An experimental link using SZ40 machines was started in June 1941. The enhanced SZ42 machines were brought into substantial use from mid-1942 onwards for high-level communications between the German High Command in Wünsdorf close to Berlin, and Army Commands throughout occupied Europe. The more advanced SZ42A came into routine use in February 1943 and the SZ42B in June 1944. Radioteletype (RTTY) rather than land-line circuits was used for this traffic. These non-Morse (NoMo) messages were picked up by Britain's Y-stations at Knockholt in Kent and Denmark Hill in south London, and sent to the Government Code and Cypher School at Bletchley Park (BP). Some were deciphered using hand methods before the process was partially automated, first with Robinson machines and then with the Colossus computers. The deciphered Lorenz messages made one of the most significant contributions to British Ultra military intelligence and to Allied victory in Europe, due to the high-level strategic nature of the information that was gained from Lorenz decrypts. History After the Second World War a group of British and US cryptanalysts entered Germany with the front-line troops to capture the documents, technology and personnel of the various German signal intelligence organizations before these secrets could be destroyed, looted, or captured by the Soviets. They were called the Target Intelligence Committee:
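The underlying Vernam operation is modulo-2 (XOR) addition of each 5-bit teleprinter character with a key character, which is why the same machine both enciphers and deciphers; a toy sketch in Python (the bit patterns below are made up for illustration, not real ITA2 codes):

def vernam(chars, key):
    """Combine 5-bit teleprinter characters with key characters by XOR.

    Because XOR is its own inverse, running ciphertext through the same
    keystream recovers the plaintext: c = p ^ k and p = c ^ k.
    """
    return [p ^ k for p, k in zip(chars, key)]

plain = [0b10001, 0b01010, 0b11100]   # three made-up 5-bit characters
key   = [0b00111, 0b11011, 0b01101]   # keystream, e.g. from the SZ wheels

cipher = vernam(plain, key)
assert vernam(cipher, key) == plain   # the same operation deciphers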
https://en.wikipedia.org/wiki/RSA%20Security
RSA Security LLC, formerly RSA Security, Inc. and trade name RSA, is an American computer and network security company with a focus on encryption and encryption standards. RSA was named after the initials of its co-founders, Ron Rivest, Adi Shamir and Leonard Adleman, after whom the RSA public key cryptography algorithm was also named. Among its products is the SecurID authentication token. The BSAFE cryptography libraries were also initially owned by RSA. RSA is known for incorporating backdoors developed by the NSA in its products. It also organizes the annual RSA Conference, an information security conference. Founded as an independent company in 1982, RSA Security was acquired by EMC Corporation in 2006 for US$2.1 billion and operated as a division within EMC. When EMC was acquired by Dell Technologies in 2016, RSA became part of the Dell Technologies family of brands. On 10 March 2020, Dell Technologies announced that it would sell RSA Security to a consortium led by Symphony Technology Group (STG), Ontario Teachers’ Pension Plan Board (Ontario Teachers’) and AlpInvest Partners (AlpInvest) for US$2.1 billion, the same price EMC had paid for it in 2006. RSA is based in Bedford, Massachusetts, with regional headquarters in Bracknell (UK) and Singapore, and numerous international offices. History Ron Rivest, Adi Shamir and Leonard Adleman, who developed the RSA encryption algorithm in 1977, founded RSA Data Security in 1982. The company acquired a "worldwide exclusive license" from the Massachusetts Institute of Technology to a patent on the RSA cryptosystem technology granted in 1983. In 1994, RSA campaigned against the Clipper chip during the Crypto Wars. In 1995, RSA sent a handful of people across the hall to found Digital Certificates International, better known as VeriSign. The company then called Security Dynamics acquired RSA Data Security in July 1996 and DynaSoft AB in 1997. In January 1997, it proposed the first of the DES Challenges
https://en.wikipedia.org/wiki/Liquid%20War
Liquid War is a free software multiplayer action game based on a particle-flow mechanic. Thomas Colcombet developed the core concept and the original shortest path algorithm; the software was programmed by Christian Mauduit. Liquid War 6 is a GNU package distributed as free software and part of the GNU project. Gameplay Gameplay takes place on a 2D battlefield, usually with some obstacles. Each player (2 to 6, computer or human) has an army of particles and a cursor. The objective of the game is to assimilate all enemy particles. The players can only move their cursors and cannot directly control the particles. Each particle follows the shortest path around the obstacles to its team's cursor. A player may have several thousand particles at a time, giving the collection of particles the look of a liquid blob. When a particle moves into a particle from a different team, it fights it; if the opponent particle fails to fight back (that is, it is not moving in the opposite direction), it is eventually assimilated by its attacker. As particles cannot die but only change teams, the total number of particles on the map remains constant. Since a particle can only fight in one direction at a time (towards its team's cursor), a player that surrounds its opponents will have a distinct advantage. The game ends when one player controls all of the particles or when the time runs out. When the time runs out, the player with the most particles wins. There are multiple maps which affect the obstacles in the battlefield. These obstacles may affect the strategies of the game. Liquid War is a multiplayer game and can be played by up to 6 people on one computer, or over the Internet or a LAN. A single player mode is available in which the opponents are controlled by the computer. The computer AI's "strategy" is to constantly choose a random point within the enemy army and move its cursor to it. History The shortest path algorithm was invented by Thomas Colcombet before the game itself in Spring 1995. The game came as a
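One common way to implement "each particle follows the shortest path to its cursor" for thousands of particles is a breadth-first-search distance field computed once per cursor, which every particle then descends; this is an illustrative sketch under that assumption, not Colcombet's original algorithm:

from collections import deque

def distance_field(grid, cursor):
    """BFS from the cursor over a 2D battlefield; '#' cells are obstacles.

    Every walkable cell gets its shortest-path distance to the cursor, so
    a particle can move by stepping to any neighbour with a smaller value;
    thousands of particles share one field per cursor.
    """
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in grid]
    cy, cx = cursor
    dist[cy][cx] = 0
    queue = deque([cursor])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < rows and 0 <= nx < cols
                    and grid[ny][nx] != '#' and dist[ny][nx] is None):
                dist[ny][nx] = dist[y][x] + 1
                queue.append((ny, nx))
    return dist

field = distance_field(["....", ".##.", "...."], (0, 3))
print(field[2][0])  # shortest path length from the bottom-left cell: 5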
https://en.wikipedia.org/wiki/Lozenge%20%28shape%29
A lozenge (symbol: ◊), often referred to as a diamond, is a form of rhombus. The definition of lozenge is not strictly fixed, and the word is sometimes used simply as a synonym for rhombus. Most often, though, lozenge refers to a thin rhombus—a rhombus with two acute and two obtuse angles, especially one with acute angles of 45°. The lozenge shape is often used in parquetry (with acute angles that are 360°/n with n being an integer higher than 4, because they can be used to form a set of tiles of the same shape and size, reusable to cover the plane in various geometric patterns as the result of a tiling process called tessellation in mathematics) and as decoration on ceramics, silverware and textiles. It also features in heraldry and playing cards. Symbolism The lozenge motif dates from the Neolithic and Paleolithic period in Eastern Europe and represents a sown field and female fertility. The ancient lozenge pattern often shows up in Diamond vault architecture, in traditional dress patterns of Slavic peoples, and in traditional Ukrainian embroidery. The lozenge pattern also appears extensively in Celtic art, art from the Ottoman Empire, and ancient Phrygian art. The lozenge symbolism is one of the main symbols for women in Berber carpets. Common Berber jewelry from the Aurès Mountains or Kabylie in Algeria also uses this pattern as a female fertility sign. In 1658, the English philosopher Sir Thomas Browne published The Garden of Cyrus, subtitled The Quincunciall Lozenge, or Network Plantations of the Ancients, in which he outlined the mystical interconnection of art, nature and the universe via the quincunx pattern. He also suggested therein that ancient plantations were laid out in a lozenge pattern. Lozenges appear as symbols in ancient classic element systems, in amulets, and in religious symbolism. In playing cards, the symbol for the suit of diamonds is a lozenge. Encodings In Unicode, the lozenge is encoded in multiple variants:
https://en.wikipedia.org/wiki/Functional%20derivative
In the calculus of variations, a field of mathematical analysis, the functional derivative (or variational derivative) relates a change in a functional (a functional in this sense is a function that acts on functions) to a change in a function on which the functional depends. In the calculus of variations, functionals are usually expressed in terms of an integral of functions, their arguments, and their derivatives. In an integrand $L$ of a functional, if a function $f$ is varied by adding to it another function $\delta f$ that is arbitrarily small, and the resulting integrand is expanded in powers of $\delta f$, the coefficient of $\delta f$ in the first order term is called the functional derivative. For example, consider the functional $J[f] = \int_a^b L(x, f(x), f'(x))\, dx$, where $f'(x) \equiv df/dx$. If $f$ is varied by adding to it a function $\delta f$, and the resulting integrand $L(x, f + \delta f, f' + \delta f')$ is expanded in powers of $\delta f$, then the change in the value of $J$ to first order in $\delta f$ can be expressed as follows: $$\delta J = \int_a^b \left( \frac{\partial L}{\partial f} \delta f(x) + \frac{\partial L}{\partial f'} \frac{d}{dx} \delta f(x) \right) dx = \int_a^b \left( \frac{\partial L}{\partial f} - \frac{d}{dx} \frac{\partial L}{\partial f'} \right) \delta f(x)\, dx + \left. \frac{\partial L}{\partial f'} \delta f \right|_a^b,$$ where the variation in the derivative, $\delta f'$, was rewritten as the derivative of the variation, $(\delta f)'$, and integration by parts was used in these derivatives. Definition In this section, the functional differential (also called the variation or first variation) is defined. Then the functional derivative is defined in terms of the functional differential. Functional differential Suppose $B$ is a Banach space and $F$ is a functional defined on $B$. The differential of $F$ at a point $\rho \in B$ is the linear functional $\delta F[\rho; \cdot]$ on $B$ defined by the condition that, for all $\phi \in B$, $$F[\rho + \phi] - F[\rho] = \delta F[\rho; \phi] + \varepsilon \|\phi\|,$$ where $\varepsilon$ is a real number that depends on $\|\phi\|$ in such a way that $\varepsilon \to 0$ as $\|\phi\| \to 0$. This means that $\delta F[\rho; \cdot]$ is the Fréchet derivative of $F$ at $\rho$. However, this notion of functional differential is so strong it may not exist, and in those cases a weaker notion, like the Gateaux derivative, is preferred. In many practical cases, the functional differential is defined as the directional derivative $$\delta F[\rho; \phi] = \left. \frac{d}{d\varepsilon} F[\rho + \varepsilon \phi] \right|_{\varepsilon = 0}.$$ Note that this notion of the functional differential can even be defined without a n
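A worked instance of the definition (a standard example, not taken from this article): for $J[f] = \int_a^b f(x)^2\, dx$,

$$J[f + \delta f] - J[f] = \int_a^b \bigl( 2 f(x)\, \delta f(x) + \delta f(x)^2 \bigr)\, dx,$$

so the coefficient of $\delta f$ in the first-order term gives the functional derivative $\delta J / \delta f(x) = 2 f(x)$.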
https://en.wikipedia.org/wiki/Fork%20%28software%20development%29
In software engineering, a project fork happens when developers take a copy of source code from one software package and start independent development on it, creating a distinct and separate piece of software. The term often implies not merely a development branch, but also a split in the developer community; as such, it is a form of schism. Grounds for forking are varying user preferences and stagnated or discontinued development of the original software. Free and open-source software is that which, by definition, may be forked from the original development team without prior permission, and without violating copyright law. However, licensed forks of proprietary software (e.g. Unix) also happen. Etymology The word "fork" has been used to mean "to divide in branches, go separate ways" as early as the 14th century. In the software environment, the word evokes the fork system call, which causes a running process to split itself into two (almost) identical copies that (typically) diverge to perform different tasks. In the context of software development, "fork" was used in the sense of creating a revision control "branch" by Eric Allman as early as 1980, in the context of Source Code Control System. The term was in use on Usenet by 1983 for the process of creating a subgroup to move topics of discussion to. "Fork" is not known to have been used in the sense of a community schism during the origins of Lucid Emacs (now XEmacs) (1991) or the Berkeley Software Distributions (BSDs) (1993–1994); Russ Nelson used the term "shattering" for this sort of fork in 1993, attributing it to John Gilmore. However, "fork" was in use in the present sense by 1995 to describe the XEmacs split, and was an understood usage in the GNU Project by 1996. Forking of free and open-source software Free and open-source software may be legally forked without prior approval of those currently developing, managing, or distributing the software per both The Free Software Definition and The Open
https://en.wikipedia.org/wiki/Fork%20%28system%20call%29
In computing, particularly in the context of the Unix operating system and its workalikes, fork is an operation whereby a process creates a copy of itself. It is an interface which is required for compliance with the POSIX and Single UNIX Specification standards. It is usually implemented as a C standard library wrapper to the fork, clone, or other system calls of the kernel. Fork is the primary method of process creation on Unix-like operating systems. Overview In multitasking operating systems, processes (running programs) need a way to create new processes, e.g. to run other programs. Fork and its variants are typically the only way of doing so in Unix-like systems. For a process to start the execution of a different program, it first forks to create a copy of itself. Then, the copy, called the "child process", calls the exec system call to overlay itself with the other program: it ceases execution of its former program in favor of the other. The fork operation creates a separate address space for the child. The child process has an exact copy of all the memory segments of the parent process. In modern UNIX variants that follow the virtual memory model from SunOS-4.0, copy-on-write semantics are implemented and the physical memory need not be actually copied. Instead, virtual memory pages in both processes may refer to the same pages of physical memory until one of them writes to such a page: then it is copied. This optimization is important in the common case where fork is used in conjunction with exec to execute a new program: typically, the child process performs only a small set of actions before it ceases execution of its program in favour of the program to be started, and it requires very few, if any, of its parent's data structures. When a process calls fork, it is deemed the parent process and the newly created process is its child. After the fork, both processes not only run the same program, but they resume execution as though both had called the sy
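A minimal sketch of the fork-then-exec idiom described above, using Python's thin wrappers over the same system calls (POSIX-only; the program run by the child, echo here, is just an example):

import os
import sys

pid = os.fork()                  # both processes resume from here
if pid == 0:
    # Child: overlay this copy with a different program via exec.
    os.execvp("echo", ["echo", "hello from the child process"])
    sys.exit(1)                  # reached only if exec fails
else:
    # Parent: wait for the child to finish and collect its exit status.
    _, status = os.waitpid(pid, 0)
    print(f"parent {os.getpid()} reaped child {pid}, status {status}")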
https://en.wikipedia.org/wiki/Functional%20integration
Functional integration is a collection of results in mathematics and physics where the domain of an integral is no longer a region of space, but a space of functions. Functional integrals arise in probability, in the study of partial differential equations, and in the path integral approach to the quantum mechanics of particles and fields. In an ordinary integral (in the sense of Lebesgue integration) there is a function to be integrated (the integrand) and a region of space over which to integrate the function (the domain of integration). The process of integration consists of adding up the values of the integrand for each point of the domain of integration. Making this procedure rigorous requires a limiting procedure, where the domain of integration is divided into smaller and smaller regions. For each small region, the value of the integrand cannot vary much, so it may be replaced by a single value. In a functional integral the domain of integration is a space of functions. For each function, the integrand returns a value to add up. Making this procedure rigorous poses challenges that continue to be topics of current research. Functional integration was developed by Percy John Daniell in an article of 1919 and Norbert Wiener in a series of studies culminating in his articles of 1921 on Brownian motion. They developed a rigorous method (now known as the Wiener measure) for assigning a probability to a particle's random path. Richard Feynman developed another functional integral, the path integral, useful for computing the quantum properties of systems. In Feynman's path integral, the classical notion of a unique trajectory for a particle is replaced by an infinite sum of classical paths, each weighted differently according to its classical properties. Functional integration is central to quantization techniques in theoretical physics. The algebraic properties of functional integrals are used to develop series used to calculate properties in quantum elec
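Feynman's path integral, mentioned above, is conventionally written as a functional integral over all paths $x(t)$ between fixed endpoints, each weighted by its classical action $S$:

$$\langle x_b, t_b \mid x_a, t_a \rangle = \int \mathcal{D}[x(t)]\, e^{\,i S[x]/\hbar}, \qquad S[x] = \int_{t_a}^{t_b} L\bigl(x(t), \dot{x}(t)\bigr)\, dt,$$

where the integral runs over every path from $(x_a, t_a)$ to $(x_b, t_b)$ rather than over a region of ordinary space.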
https://en.wikipedia.org/wiki/ICalendar
The Internet Calendaring and Scheduling Core Object Specification (iCalendar) is a media type which allows users to store and exchange calendaring and scheduling information such as events, to-dos, journal entries, and free/busy information, and together with its associated standards has been a cornerstone of the standardization and interoperability of digital calendars across different vendors. Files formatted according to the specification usually have an extension of .ics. With supporting software, such as an email reader or calendar application, recipients of an iCalendar data file can respond to the sender easily or counter-propose another meeting date/time. The file format is specified in a proposed Internet standard (RFC 5545) for calendar data exchange. The standard and file type are sometimes referred to as "iCal", which was the name of the Apple Inc. calendar program until 2012 (see iCal), one of the implementations of the standard. iCalendar is used and supported by many products, including Google Calendar, Apple Calendar (formerly iCal), HCL Domino (formerly IBM Notes and Lotus Notes), Yahoo! Calendar, GNU Emacs, GNOME Evolution, eM Client, the Lightning extension for Mozilla Thunderbird and SeaMonkey, and partially by Microsoft Outlook and Novell GroupWise. iCalendar is designed to be independent of the transport protocol. For example, certain events can be sent by traditional email, or whole calendar files can be shared and edited by using a WebDav server or SyncML. Simple web servers (using just the HTTP protocol) are often used to distribute iCalendar data about an event and to publish busy times of an individual. Publishers can embed iCalendar data in web pages using hCalendar, a 1:1 microformat representation of iCalendar in semantic (X)HTML. History iCalendar was first created in 1998 by the Calendaring and Scheduling Working Group of the Internet Engineering Task Force, chaired by Anik Ganguly of Open Text Corporation, and was authore
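A minimal sketch of a single-event iCalendar object emitted from Python; RFC 5545 requires VERSION, PRODID, and a per-event UID and DTSTAMP, and specifies CRLF line endings (the UID, PRODID and event details below are placeholders):

from datetime import datetime, timezone

def make_event(uid, start_utc, summary):
    """Emit a minimal single-event iCalendar object as a string."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//sketch//EN",   # placeholder product identifier
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"DTSTAMP:{stamp}",
        f"DTSTART:{start_utc.strftime('%Y%m%dT%H%M%SZ')}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ]
    return "\r\n".join(lines) + "\r\n"

ics = make_event("1234@example.com",
                 datetime(2025, 7, 1, 9, 0, tzinfo=timezone.utc),
                 "Project kickoff")
open("kickoff.ics", "w", newline="").write(ics)  # keep the CRLF endings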
https://en.wikipedia.org/wiki/Robert%20W.%20Floyd
Robert W Floyd (June 8, 1936 – September 25, 2001) was a computer scientist. His contributions include the design of the Floyd–Warshall algorithm (independently of Stephen Warshall), which efficiently finds all shortest paths in a graph, and his work on parsing; Floyd's cycle-finding algorithm for detecting cycles in a sequence is also attributed to him. In one isolated paper he introduced the important concept of error diffusion for rendering images, also called Floyd–Steinberg dithering (though he distinguished dithering from diffusion). He pioneered the field of program verification using logical assertions with the 1967 paper Assigning Meanings to Programs. This was a contribution to what later became Hoare logic. Floyd received the Turing Award in 1978. Life Born in New York City, Floyd finished high school at age 14. At the University of Chicago, he received a Bachelor of Arts (B.A.) in liberal arts in 1953 (when still only 17) and a second bachelor's degree in physics in 1958. Floyd was a college roommate of Carl Sagan. Floyd became a staff member of the Armour Research Foundation (now IIT Research Institute) at Illinois Institute of Technology in the 1950s. Becoming a computer operator in the early 1960s, he began publishing many papers, including on compilers (particularly parsing). He was a pioneer of operator-precedence grammars, and is credited with initiating the field of programming language semantics. He was appointed an associate professor at Carnegie Mellon University by the time he was 27 and became a full professor at Stanford University six years later. He obtained this position without a Doctor of Philosophy (Ph.D.) degree. He was a member of the International Federation for Information Processing (IFIP) IFIP Working Group 2.1 on Algorithmic Languages and Calculi, which specified, maintains, and supports the programming languages ALGOL 60 and ALGOL 68. He was elected a Fellow of the American Academy of Arts and Sciences in 1974.
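For reference, the Floyd–Warshall algorithm named above, in its standard textbook form; this sketch uses an adjacency matrix with float('inf') marking missing edges:

INF = float("inf")

def floyd_warshall(w):
    """All-pairs shortest paths on an n x n weight matrix (INF = no edge).

    After the iteration for each k, dist[i][j] holds the shortest path
    from i to j using only intermediate vertices from {0, ..., k}.
    """
    n = len(w)
    dist = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

w = [[0, 3, INF, 7],
     [8, 0, 2, INF],
     [5, INF, 0, 1],
     [2, INF, INF, 0]]
print(floyd_warshall(w)[1][3])  # 1 -> 2 -> 3 costs 2 + 1 = 3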
https://en.wikipedia.org/wiki/John%20Cocke%20%28computer%20scientist%29
John Cocke (May 30, 1925 – July 16, 2002) was an American computer scientist recognized for his large contribution to computer architecture and optimizing compiler design. He is considered by many to be "the father of RISC architecture." Biography He was born in Charlotte, North Carolina, US. He attended Duke University, where he received his bachelor's degree in mechanical engineering in 1946 and his Ph.D. in mathematics in 1956. Cocke spent his entire career as an industrial researcher for IBM, from 1956 to 1992. Perhaps the project where his innovations were most noted was in the IBM 801 minicomputer, where he realized that matching the design of the architecture's instruction set to the relatively simple instructions actually emitted by compilers could allow high performance at a low cost. He is one of the inventors of the CYK algorithm (C for Cocke). He was also involved in the pioneering speech recognition and machine translation work at IBM in the 1970s and 1980s, and is credited by Frederick Jelinek with originating the idea of using a trigram language model for speech recognition. Cocke was appointed IBM Fellow in 1972. He won the IEEE John von Neumann Medal in 1984, the Eckert–Mauchly Award in 1985, the ACM Turing Award in 1987, the National Medal of Technology in 1991, the National Medal of Science in 1994, The Franklin Institute's Certificate of Merit in 1996, the Seymour Cray Computer Engineering Award in 1999, and The Benjamin Franklin Medal in 2000. He was a member of the American Academy of Arts and Sciences, the American Philosophical Society, and the National Academy of Sciences. In 2002, he was made a Fellow of the Computer History Museum "for his development and implementation of reduced instruction set computer architecture and program optimization technology." He died in Valhalla, New York, US.
https://en.wikipedia.org/wiki/Tote%20board
A tote board (or totalisator/totalizator) is a numeric or alphanumeric display used to convey information, typically at a race track (to display the odds or payoffs for each horse) or at a telethon (to display the total amount donated to the charitable organization sponsoring the event). The term "tote board" comes from the colloquialism for "totalizator" (or "totalisator"), the name for the automated system which runs parimutuel betting, calculating payoff odds, displaying them, and producing tickets based on incoming bets. Parimutuel systems had used totalisator boards since the 1860s and they were often housed in substantial buildings. However the manual systems often resulted in substantial delays in calculations of clients' payouts. The first all-mechanical totalisator was invented by George Julius. Julius was a consulting engineer, based in Sydney. His father, Churchill Julius, an Anglican Bishop, had campaigned, in the early years of the twentieth century, against the iniquities of gambling using totalisators and its damage to New Zealand society. That attitude had changed by late 1907 when he argued that the totalisator removed much of the evil of gambling with bookmakers. Bishop Churchill was himself an amateur mechanic with a reputation for fixing clocks and organs in parishes he visited. Initially, George Julius was attempting to develop a voting calculating machine for the Australian government, to automatically reduce the instances of voter fraud and create a cheat-free political environment. He went on to present his unique invention, only to have his design rejected as it was deemed to be excessive. The first all-mechanical machine was installed at Ellerslie Racecourse in New Zealand in 1913 (first used on the Holy Saturday races on 22 March 1913), and the second was installed at Gloucester Park Racetrack in Western Australia in 1917. George Julius founded Automatic Totalisators Limited (ATL) in 1917, which supplied the "Premier Totalisator: now
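The core totalizator calculation is simple: all bets go into one pool, the operator removes its takeout, and the remainder is divided among the winning tickets; a sketch with made-up pools and an assumed 15% takeout (real takeout rates vary by jurisdiction):

def win_odds(pools, takeout=0.15):
    """Parimutuel payoff per unit staked on each runner.

    The odds a tote board displays are this payoff, recomputed
    continuously as bets arrive.
    """
    total = sum(pools.values())
    net = total * (1.0 - takeout)
    return {runner: net / staked for runner, staked in pools.items()}

pools = {"Runner 1": 6000.0, "Runner 2": 2500.0, "Runner 3": 1500.0}
for runner, payoff in win_odds(pools).items():
    print(f"{runner}: pays {payoff:.2f} per 1.00 bet")
# Runner 1: pays 1.42 per 1.00 bet  (8500 net / 6000 staked)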
https://en.wikipedia.org/wiki/Lumped-element%20model
The lumped-element model (also called lumped-parameter model, or lumped-component model) is a simplified representation of a physical system or circuit that assumes all components are concentrated at a single point and their behavior can be described by idealized mathematical models. The lumped-element model simplifies the system or circuit behavior description into a topology. It is useful in electrical systems (including electronics), mechanical multibody systems, heat transfer, acoustics, etc. This is in contrast to distributed parameter systems or models in which the behaviour is distributed spatially and cannot be considered as localized into discrete entities. The simplification reduces the state space of the system to a finite dimension, and the partial differential equations (PDEs) of the continuous (infinite-dimensional) time and space model of the physical system into ordinary differential equations (ODEs) with a finite number of parameters. Electrical systems Lumped-matter discipline The lumped-matter discipline is a set of imposed assumptions in electrical engineering that provides the foundation for lumped-circuit abstraction used in network analysis. The self-imposed constraints are: The change of the magnetic flux in time outside a conductor is zero. The change of the charge in time inside conducting elements is zero. Signal timescales of interest are much larger than propagation delay of electromagnetic waves across the lumped element. The first two assumptions result in Kirchhoff's circuit laws when applied to Maxwell's equations and are only applicable when the circuit is in steady state. The third assumption is the basis of the lumped-element model used in network analysis. Less severe assumptions result in the distributed-element model, while still not requiring the direct application of the full Maxwell equations. Lumped-element model The lumped-element model of electronic circuits makes the simplifying assumption that the attribut
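As an illustration of the PDE-to-ODE reduction (the standard lumped-capacitance example from heat transfer, not specific to this article): when internal conduction is fast enough that a body's temperature field is nearly uniform, the spatial heat equation collapses to a single ordinary differential equation for one lumped temperature $T(t)$,

$$C \frac{dT}{dt} = -\frac{T - T_{\mathrm{env}}}{R},$$

with a single lumped heat capacity $C$ and thermal resistance $R$ replacing the distributed field.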
https://en.wikipedia.org/wiki/Distributed-element%20model
In electrical engineering, the distributed-element model or transmission-line model of electrical circuits assumes that the attributes of the circuit (resistance, capacitance, and inductance) are distributed continuously throughout the material of the circuit. This is in contrast to the more common lumped-element model, which assumes that these values are lumped into electrical components that are joined by perfectly conducting wires. In the distributed-element model, each circuit element is infinitesimally small, and the wires connecting elements are not assumed to be perfect conductors; that is, they have impedance. Unlike the lumped-element model, it assumes nonuniform current along each branch and nonuniform voltage along each wire. The distributed model is used where the wavelength becomes comparable to the physical dimensions of the circuit, making the lumped model inaccurate. This occurs at high frequencies, where the wavelength is very short, or on low-frequency, but very long, transmission lines such as overhead power lines. Applications The distributed-element model is more accurate but more complex than the lumped-element model. The use of infinitesimals will often require the application of calculus, whereas circuits analysed by the lumped-element model can be solved with linear algebra. The distributed model is consequently usually only applied when accuracy calls for its use. The location of this point is dependent on the accuracy required in a specific application, but essentially, it needs to be used in circuits where the wavelengths of the signals have become comparable to the physical dimensions of the components. An often-quoted engineering rule of thumb (not to be taken too literally because there are many exceptions) is that parts larger than one-tenth of a wavelength will usually need to be analysed as distributed elements. Transmission lines Transmission lines are a common example of the use of the distributed model. Its use is d
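The canonical distributed-element description of a transmission line is the pair of telegrapher's equations, with resistance $R$, inductance $L$, conductance $G$ and capacitance $C$ taken per unit length:

$$\frac{\partial v}{\partial x} = -R\, i - L \frac{\partial i}{\partial t}, \qquad \frac{\partial i}{\partial x} = -G\, v - C \frac{\partial v}{\partial t},$$

so voltage and current vary continuously along the position $x$ instead of being lumped at discrete components.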
https://en.wikipedia.org/wiki/Original%20programming
Original programming (also called originals or original programs, and subcategorized as "original series", "original movies", "original documentaries" and "original specials") is a term used for in-house television, film or web series productions to which the exclusive domestic and, if the originating service operates non-domestic versions of the service outside of their home country, international broadcast rights are held by traditional and over-the-top content providers. The term was coined by HBO in 1983 when the premium service began producing its slate of in-house series and film productions. HBO initially branded the original series on the network under "HBOriginal" until 1986, and by 1993, the "originals" term had expanded to encompass most of its original productions. The term eventually expanded into use by various cable-originated television networks (including, among others, Disney Channel, TNT and USA Network) to identify their in-house productions. It also advertises them as being distinct from the acquired content offered to fill out the remainder of their programming schedule. Most original made-for-cable or made-for-streaming productions are produced solely in conjunction with independent production companies that also hold day-to-day management responsibilities for the program, although some series (such as The Larry Sanders Show, Queer as Folk, The Leftovers and Power) share financial interests with major television studios—such as 20th Television, Warner Bros. Television and Lionsgate Television—that may also handle distribution responsibilities for domestic and international syndication on behalf of the originating network. Television networks and digital content providers that produce original programming include Cinemax, Netflix, Showtime, Amazon Prime, and YouTube. In October 2020, CBS became the first U.S. broadcast network to identify all of its entertainment programming under the term, branding them as "CBS Originals"; however, the netwo
https://en.wikipedia.org/wiki/Frequency%20response
In signal processing and electronics, the frequency response of a system is the quantitative measure of the magnitude and phase of the output as a function of input frequency. The frequency response is widely used in the design and analysis of systems, such as audio and control systems, where it simplifies mathematical analysis by converting governing differential equations into algebraic equations. In an audio system, it may be used to minimize audible distortion by designing components (such as microphones, amplifiers and loudspeakers) so that the overall response is as flat (uniform) as possible across the system's bandwidth. In control systems, such as a vehicle's cruise control, it may be used to assess system stability, often through the use of Bode plots. Systems with a specific frequency response can be designed using analog and digital filters. The frequency response characterizes systems in the frequency domain, just as the impulse response characterizes systems in the time domain. In linear systems (or as an approximation to a real system neglecting second order non-linear properties), either response completely describes the system, and the two are in one-to-one correspondence: the frequency response is the Fourier transform of the impulse response. The frequency response allows simpler analysis of cascaded systems such as multistage amplifiers, as the response of the overall system can be found through multiplication of the individual stages' frequency responses (as opposed to convolution of the impulse response in the time domain). The frequency response is closely related to the transfer function in linear systems, which is the Laplace transform of the impulse response. They are equivalent when the real part of the transfer function's complex variable is zero. Measurement and plotting Measuring the frequency response typically involves exciting the system with an input signal and measuring the resulting output signal, calculating the frequency spectra
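A small sketch of the two relations just stated, that the frequency response is the Fourier transform of the impulse response and that cascaded stages multiply, using NumPy and an arbitrary 5-point moving-average filter:

import numpy as np

# Impulse response of a 5-point moving-average filter (an arbitrary example).
h = np.ones(5) / 5.0

# Frequency response = Fourier transform of the impulse response.
H = np.fft.rfft(h, n=512)
freqs = np.fft.rfftfreq(512)   # in cycles per sample

print(f"gain at DC: {abs(H[0]):.3f}")   # 1.000: the filter passes constants
k = np.argmin(np.abs(freqs - 0.2))
print(f"gain at 0.2 cycles/sample: {abs(H[k]):.3f}")   # ~0: a null of this filter

# Cascaded stages multiply: the filter applied twice has response H * H,
# whereas in the time domain the same cascade is a convolution.
assert np.allclose(np.fft.rfft(np.convolve(h, h), n=512), H * H)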
https://en.wikipedia.org/wiki/Audio%20system%20measurements
Audio system measurements are a means of quantifying system performance. These measurements are made for several purposes. Designers take measurements so that they can specify the performance of a piece of equipment. Maintenance engineers make them to ensure equipment is still working to specification, or to ensure that the cumulative defects of an audio path are within limits considered acceptable. Audio system measurements often accommodate psychoacoustic principles to measure the system in a way that relates to human hearing. Subjectivity and frequency weighting Subjectively valid methods came to prominence in consumer audio in the UK and Europe in the 1970s, when the introduction of compact cassette tape, dbx and Dolby noise reduction techniques revealed the unsatisfactory nature of many basic engineering measurements. The specification of weighted CCIR-468 quasi-peak noise, and weighted quasi-peak wow and flutter became particularly widely used and attempts were made to find more valid methods for distortion measurement. Measurements based on psychoacoustics, such as the measurement of noise, often use a weighting filter. It is well established that human hearing is more sensitive to some frequencies than others, as demonstrated by equal-loudness contours, but it is not well appreciated that these contours vary depending on the type of sound. The measured curves for pure tones, for instance, are different from those for random noise. The ear also responds less well to short bursts, below 100 to 200 ms, than to continuous sounds, such that a quasi-peak detector has been found to give the most representative results when noise contains clicks or bursts, as is often the case for noise in digital systems. For these reasons, a set of subjectively valid measurement techniques have been devised and incorporated into BS, IEC, EBU and ITU standards. These methods of audio quality measurement are used by broadcast engineers throughout most of the world, as well as by so
https://en.wikipedia.org/wiki/Index%20of%20logic%20articles
A A System of Logic -- A priori and a posteriori -- Abacus logic -- Abduction (logic) -- Abductive validation -- Academia Analitica -- Accuracy and precision -- Ad captandum -- Ad hoc hypothesis -- Ad hominem -- Affine logic -- Affirming the antecedent -- Affirming the consequent -- Algebraic logic -- Ambiguity -- Analysis -- Analysis (journal) -- Analytic reasoning -- Analytic–synthetic distinction -- Anangeon -- Anecdotal evidence -- Antecedent (logic) -- Antepredicament -- Anti-psychologism -- Antinomy -- Apophasis -- Appeal to probability -- Appeal to ridicule -- Archive for Mathematical Logic -- Arché -- Argument -- Argument by example -- Argument form -- Argument from authority -- Argument map -- Argumentation theory -- Argumentum ad baculum -- Argumentum e contrario -- Ariadne's thread (logic) -- Aristotelian logic -- Aristotle -- Association for Informal Logic and Critical Thinking -- Association for Logic, Language and Information -- Association for Symbolic Logic -- Attacking Faulty Reasoning -- Australasian Association for Logic -- Axiom -- Axiom independence -- Axiom of reducibility -- Axiomatic system -- Axiomatization -- B Backward chaining -- Barcan formula -- Begging the question -- Begriffsschrift -- Belief -- Belief bias -- Belief revision -- Benson Mates -- Bertrand Russell Society -- Biconditional elimination -- Biconditional introduction -- Bivalence and related laws -- Blue and Brown Books -- Boole's syllogistic -- Boolean algebra (logic) -- Boolean algebra (structure) -- Boolean network -- C Canonical form -- Canonical form (Boolean algebra) -- Cartesian circle -- Case-based reasoning -- Categorical logic -- Categories (Aristotle) -- Categories (Peirce) -- Category mistake -- Catuṣkoṭi -- Circular definition -- Circular reasoning -- Circular reference -- Circular reporting -- Circumscription (logic) -- Circumscription (taxonomy) -- Classical logic -- Clocked logic -- Cognitive bias -- Cointerpretability -- Colorless green ideas sleep fu
https://en.wikipedia.org/wiki/Bernard%20Bolzano
Bernard Bolzano (born Bernardus Placidus Johann Nepomuk Bolzano; 5 October 1781 – 18 December 1848) was a Bohemian mathematician, logician, philosopher, theologian and Catholic priest of Italian extraction, also known for his liberal views. Bolzano wrote in German, his native language. For the most part, his work came to prominence posthumously. Family Bolzano was the son of two pious Catholics. His father, Bernard Pompeius Bolzano, was an Italian who had moved to Prague, where he married Maria Cecilia Maurer, who came from Prague's German-speaking Maurer family. Only two of their twelve children lived to adulthood. Career Bolzano entered the University of Prague in 1796 and studied mathematics, philosophy and physics. Starting in 1800, he also began studying theology, becoming a Catholic priest in 1804. He was appointed to the new chair of philosophy of religion at Prague University in 1805. He proved to be a popular lecturer not only in religion but also in philosophy, and he was elected Dean of the Philosophical Faculty in 1818. Bolzano alienated many faculty and church leaders with his teachings on the social waste of militarism and the needlessness of war. He urged a total reform of the educational, social and economic systems that would direct the nation's interests toward peace rather than toward armed conflict between nations. His political convictions, which he was inclined to share with others with some frequency, eventually proved to be too liberal for the Austrian authorities. On December 24, 1819, he was removed from his professorship (upon his refusal to recant his beliefs) and exiled to the countryside; he then devoted his energies to his writings on social, religious, philosophical, and mathematical matters. Although forbidden to publish in mainstream journals as a condition of his exile, Bolzano continued to develop his ideas and publish them either on his own or in obscure Eastern European journals. In 1842 he moved back to Prague,
https://en.wikipedia.org/wiki/Self-adjoint
In mathematics, and more specifically in abstract algebra, an element x of a *-algebra is self-adjoint if x* = x. A self-adjoint element is also Hermitian, though the reverse doesn't necessarily hold. A collection C of elements of a star-algebra is self-adjoint if it is closed under the involution operation. For example, if C = {x, y} with x* = y, then since y* = x** = x in a star-algebra, the set {x, y} is a self-adjoint set even though x and y need not be self-adjoint elements. In functional analysis, a linear operator A on a Hilbert space is called self-adjoint if it is equal to its own adjoint A*. See self-adjoint operator for a detailed discussion. If the Hilbert space is finite-dimensional and an orthonormal basis has been chosen, then the operator A is self-adjoint if and only if the matrix describing A with respect to this basis is Hermitian, i.e. if it is equal to its own conjugate transpose. Hermitian matrices are also called self-adjoint. In a dagger category, a morphism f is called self-adjoint if f = f†; this is possible only for an endomorphism f : A → A. See also Hermitian matrix Normal element Symmetric matrix Self-adjoint operator Unitary element
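As a concrete illustration of the finite-dimensional case, the following minimal sketch (using NumPy; the example matrices are invented for illustration, not taken from the source) tests whether a matrix equals its own conjugate transpose:

```python
import numpy as np

# An example matrix equal to its own conjugate transpose, hence Hermitian
# (self-adjoint). The specific entries are illustrative only.
A = np.array([[2.0,    1 - 1j],
              [1 + 1j, 3.0   ]])

def is_self_adjoint(M, tol=1e-12):
    """Return True if M equals its conjugate transpose within tolerance."""
    return np.allclose(M, M.conj().T, atol=tol)

print(is_self_adjoint(A))                   # True
print(is_self_adjoint(np.array([[0, 1],
                                [0, 0]])))  # False: not equal to its adjoint
```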
https://en.wikipedia.org/wiki/Chipset
In a computer system, a chipset is a set of electronic components on one or more ULSI integrated circuits, known as a "Data Flow Management System", that manages the data flow between the processor, memory and peripherals. It is usually found on the motherboard of computers. Chipsets are usually designed to work with a specific family of microprocessors. Because it controls communications between the processor and external devices, the chipset plays a crucial role in determining system performance. Computers In computing, the term chipset commonly refers to a set of specialized chips on a computer's motherboard or an expansion card. In personal computers, the first chipset for the IBM PC AT of 1984 was the NEAT chipset developed by Chips and Technologies for the Intel 80286 CPU. In home computers, game consoles, and arcade hardware of the 1980s and 1990s, the term chipset was used for the custom audio and graphics chips. Examples include the Original Amiga chipset and Sega's System 16 chipset. In x86-based personal computers, the term chipset often refers to a specific pair of chips on the motherboard: the northbridge and the southbridge. The northbridge links the CPU to very high-speed devices, especially RAM and graphics controllers, and the southbridge connects to lower-speed peripheral buses (such as PCI or ISA). In many modern chipsets, the southbridge contains some on-chip integrated peripherals, such as Ethernet, USB, and audio devices. Motherboards and their chipsets often come from different manufacturers. As of the early 2020s, manufacturers of chipsets for x86 motherboards include AMD, Intel, VIA Technologies and Zhaoxin. In the 1990s, a major designer and manufacturer of chipsets was VLSI Technology in Tempe, Arizona. The early Apple Macintosh computers that used the Motorola 68030 and 68040 had chipsets from VLSI Technology. Some of their innovations included the integration of PCI bridge logic, the GraphiCore 2D graphics accelerator and direct support for synchronous DRAM.
https://en.wikipedia.org/wiki/MOS%20Technology%20VIC
The VIC (Video Interface Chip), specifically known as the MOS Technology 6560 (NTSC version) / 6561 (PAL version), is the integrated circuit chip responsible for generating video graphics and sound in the VIC-20 home computer from Commodore. It was originally designed for applications such as low-cost CRT terminals, biomedical monitors, control system displays and arcade or home video game consoles. The chip was designed by Al Charpentier in 1977, but Commodore could not find a market for it. In 1979, MOS Technology began work on a video chip named MOS Technology 6564, intended for the TOI computer, and had also done some work on another chip, the MOS 6562, intended for a color version of the Commodore PET. Both of these chips failed due to memory timing constraints (both required very fast and thus expensive SRAM, making them unsuitable for mass production). Before the VIC finally found use in the VIC-20, chip designer Robert Yannes fed features from the 6562 (a better sound generator) and the 6564 (more colors) back into the 6560, so that by the start of mass production for the VIC-20 it had been thoroughly revised. Its features include: a 16 kB address space for screen, character and color memory (only 5 kB points to RAM on the VIC-20 without a hardware modification); 16 colors (the upper 8 can only be used in the global background and auxiliary colors); two selectable character sizes (8×8 or 8×16 bits; the pixel width is 1 bit for "hires" characters and 2 bits for "multicolor" characters); a maximum video resolution that depends on the television system (176 × 184 is the standard for the VIC-20 firmware, although up to 248 × 232p/464i is possible on the NTSC machine and up to 256 × 280 is possible on the PAL machine); a 4-channel sound system (3 square waves + "white" noise + global volume setting); on-chip DMA; two 8-bit analog-to-digital converters; and light pen support. Unlike many other video circuits of the era, it does not offer dynamic RAM refresh capabilities; the VIC-20 instead uses static RAM, which needs no refresh.
https://en.wikipedia.org/wiki/Color%20vision
Color vision, a feature of visual perception, is an ability to perceive differences between light composed of different frequencies independently of light intensity. Color perception is a part of the larger visual system and is mediated by a complex process between neurons that begins with differential stimulation of different types of photoreceptors by light entering the eye. Those photoreceptors then emit outputs that are propagated through many layers of neurons and ultimately to the brain. Color vision is found in many animals and is mediated by similar underlying mechanisms with common types of biological molecules and a complex history of evolution in different animal taxa. In primates, color vision may have evolved under selective pressure for a variety of visual tasks, including foraging for nutritious young leaves, ripe fruit, and flowers, as well as detecting predator camouflage and emotional states in other primates. Wavelength Isaac Newton discovered that white light, after being split into its component colors when passed through a dispersive prism, could be recombined to make white light by passing the colors through a second prism. The visible light spectrum ranges from about 380 to 740 nanometers. Spectral colors (colors that are produced by a narrow band of wavelengths) such as red, orange, yellow, green, cyan, blue, and violet can be found in this range. These spectral colors do not refer to a single wavelength but rather to a set of wavelengths: red, 625–740 nm; orange, 590–625 nm; yellow, 565–590 nm; green, 500–565 nm; cyan, 485–500 nm; blue, 450–485 nm; violet, 380–450 nm. Wavelengths longer or shorter than this range are called infrared or ultraviolet, respectively. Humans cannot generally see these wavelengths, but other animals may. Hue detection Sufficient differences in wavelength cause a difference in the perceived hue; the just-noticeable difference in wavelength varies from about 1 nm in the blue-green and yellow wavelengths to 10 nm and more in the longer red and shorter blue wavelengths.
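The spectral ranges above translate directly into a lookup table. A minimal sketch, assuming only the approximate band boundaries quoted in the text (boundary wavelengths are assigned to the first matching band):

```python
# Classify a wavelength (in nanometers) into the spectral-color bands listed
# above. The band boundaries are the approximate ranges given in the text.
BANDS = [
    (380, 450, "violet"),
    (450, 485, "blue"),
    (485, 500, "cyan"),
    (500, 565, "green"),
    (565, 590, "yellow"),
    (590, 625, "orange"),
    (625, 740, "red"),
]

def spectral_color(nm: float) -> str:
    if nm < 380:
        return "ultraviolet (not generally visible to humans)"
    if nm > 740:
        return "infrared (not generally visible to humans)"
    for lo, hi, name in BANDS:
        if lo <= nm <= hi:
            return name
    return "unclassified"

print(spectral_color(532))  # green (a common laser-pointer wavelength)
print(spectral_color(800))  # infrared
```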
https://en.wikipedia.org/wiki/No%20Silver%20Bullet
"No Silver Bullet—Essence and Accident in Software Engineering" is a widely discussed paper on software engineering written by Turing Award winner Fred Brooks in 1986. Brooks argues that "there is no single development, in either technology or management technique, which by itself promises even one order of magnitude [tenfold] improvement within a decade in productivity, in reliability, in simplicity." He also states that "we cannot expect ever to see two-fold gains every two years" in software development, as there is in hardware development (Moore's law). Summary Brooks distinguishes between two different types of complexity: accidental complexity and essential complexity. This is related to Aristotle's classification. Accidental complexity relates to problems which engineers create and can fix. For example, modern programming languages have abstracted away the details of writing and optimizing assembly code, and eliminated the delays caused by batch processing, though other sources of accidental complexity remain. Essential complexity is caused by the problem to be solved, and nothing can remove it; if users want a program to do 30 different things, then those 30 things are essential and the program must do those 30 different things. Brooks claims that accidental complexity has decreased substantially, and today's programmers spend most of their time addressing essential complexity. Brooks argues that this means shrinking all the accidental activities to zero will not give the same order-of-magnitude improvement as attempting to decrease essential complexity. While Brooks insists that there is no one silver bullet, he believes that a series of innovations attacking essential complexity could lead to significant improvements. One technology that had made significant improvement in the area of accidental complexity was the invention of high-level programming languages, such as Ada. Brooks advocates "growing" software organically through incremental development
https://en.wikipedia.org/wiki/Neon%20lamp
A neon lamp (also neon glow lamp) is a miniature gas-discharge lamp. The lamp typically consists of a small glass capsule that contains a mixture of neon and other gases at a low pressure and two electrodes (an anode and a cathode). When sufficient voltage is applied and sufficient current is supplied between the electrodes, the lamp produces an orange glow discharge. The glowing portion in the lamp is a thin region near the cathode; the larger and much longer neon signs are also glow discharges, but they use the positive column, which is not present in the ordinary neon lamp. Neon glow lamps were widely used as indicator lamps in the displays of electronic instruments and appliances. They are still sometimes used for their electrical simplicity in high-voltage circuits. History Neon was discovered in 1898 by William Ramsay and Morris Travers. The characteristic, brilliant red color that is emitted by gaseous neon when excited electrically was noted immediately; Travers later wrote, "the blaze of crimson light from the tube told its own story and was a sight to dwell upon and never forget." Neon's scarcity precluded its prompt application for electrical lighting along the lines of Moore tubes, which used electric discharges in nitrogen. Moore tubes were commercialized by their inventor, Daniel McFarlan Moore, in the early 1900s. After 1902, Georges Claude's company, Air Liquide, was producing industrial quantities of neon as a byproduct of his air liquefaction business, and in December 1910 Claude demonstrated modern neon lighting based on a sealed tube of neon. In 1915 a U.S. patent was issued to Claude covering the design of the electrodes for neon tube lights; this patent became the basis for the monopoly held in the U.S. by his company, Claude Neon Lights, through the early 1930s. Around 1917, Daniel Moore developed the neon lamp while working at the General Electric Company. The lamp has a very different design from the much larger neon tubes used for neon lighting.
https://en.wikipedia.org/wiki/Neon%20sign
In the signage industry, neon signs are electric signs lighted by long luminous gas-discharge tubes that contain rarefied neon or other gases. They are the most common use for neon lighting, which was first demonstrated in a modern form in December 1910 by Georges Claude at the Paris Motor Show. While they are used worldwide, neon signs were popular in the United States from about the 1920s to the 1950s. The installations in Times Square, many originally designed by Douglas Leigh, were famed, and there were nearly 2,000 small shops producing neon signs by 1940. In addition to signage, neon lighting is used frequently by artists and architects, and (in a modified form) in plasma display panels and televisions. The signage industry has declined in the past several decades, and cities are now concerned with preserving and restoring their antique neon signs. Light-emitting diode arrays can be formed to simulate the appearance of neon lamps. History The neon sign is an evolution of the earlier Geissler tube, which is a sealed glass tube containing a "rarefied" gas (the gas pressure in the tube is well below atmospheric pressure). When a voltage is applied to electrodes inserted through the glass, an electrical glow discharge results. Geissler tubes were popular in the late 19th century, and the different colors they emitted were characteristic of the gases within. They were unsuitable for general lighting, as the pressure of the gas inside typically declined with use. The direct predecessor of neon tube lighting was the Moore tube, which used nitrogen or carbon dioxide as the luminous gas and a patented mechanism for maintaining pressure. Moore tubes were sold for commercial lighting for a number of years in the early 1900s. The discovery of neon in 1898 by British scientists William Ramsay and Morris W. Travers included the observation of a brilliant red glow in Geissler tubes. Travers wrote, "the blaze of crimson light from the tube told its own story and was a sight to dwell upon and never forget."
https://en.wikipedia.org/wiki/Structural%20genomics
Structural genomics seeks to describe the 3-dimensional structure of every protein encoded by a given genome. This genome-based approach allows for a high-throughput method of structure determination by a combination of experimental and modeling approaches. The principal difference between structural genomics and traditional structural prediction is that structural genomics attempts to determine the structure of every protein encoded by the genome, rather than focusing on one particular protein. With full-genome sequences available, structure prediction can be done more quickly through a combination of experimental and modeling approaches, especially because the availability of a large number of sequenced genomes and previously solved protein structures allows scientists to model protein structure on the structures of previously solved homologs. Because protein structure is closely linked with protein function, structural genomics has the potential to inform knowledge of protein function. In addition to elucidating protein functions, structural genomics can be used to identify novel protein folds and potential targets for drug discovery. Structural genomics involves a large number of approaches to structure determination, including experimental methods using genomic sequences, modeling-based approaches based on sequence or structural homology to a protein of known structure, or approaches based on chemical and physical principles for a protein with no homology to any known structure. As opposed to traditional structural biology, the determination of a protein structure through a structural genomics effort often (but not always) comes before anything is known regarding the protein function. This raises new challenges in structural bioinformatics, i.e. determining protein function from its 3D structure. Structural genomics emphasizes high-throughput determination of protein structures, performed in dedicated centers of structural genomics. While most structural biology efforts focus on particular proteins of interest, structural genomics centers aim for broad, genome-wide coverage.
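The triage between modeling and experiment described above can be sketched schematically. This is an illustrative outline only; choose_strategy, find_homolog, and the example sequences are hypothetical placeholders, not a real pipeline or API:

```python
# Schematic sketch: model a protein on a solved homolog when one exists,
# otherwise queue it for experimental determination (or ab initio prediction).
# find_homolog stands in for a search against databases of solved structures.
def choose_strategy(sequence, find_homolog):
    homolog = find_homolog(sequence)
    if homolog is not None:
        return ("comparative modeling", homolog)  # homology-based model
    return ("experimental determination", None)   # no usable homolog found

# Dummy stand-in: pretend only sequences containing "HEME" have a solved homolog.
print(choose_strategy("MKHEMEAL", lambda s: "1abc" if "HEME" in s else None))
print(choose_strategy("MKTAYIAK", lambda s: None))
```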
https://en.wikipedia.org/wiki/IP%20address%20blocking
IP address blocking or IP banning is a configuration of a network service that blocks requests from hosts with certain IP addresses. IP address blocking is commonly used to protect against brute force attacks and to prevent access by a disruptive address. It can also be used to restrict access to or from a particular geographic area, for example, syndicating content to a specific region through the use of Internet geolocation. IP address blocking can be implemented with a hosts file (e.g., on Windows, macOS, or Android) or with a TCP wrapper (for Unix-like operating systems). It can be bypassed by methods such as proxy servers, or by obtaining a new address through DHCP lease renewal. How it works Every device connected to the Internet is assigned a unique IP address, which is needed to enable devices to communicate with each other. With appropriate software on the host website, the IP address of visitors to the site can be logged and can also be used to determine the visitor's geographical location. Logging the IP address can show, for example, whether a person has visited the site before (perhaps in order to vote more than once), as well as their viewing pattern and how long it has been since they performed any activity on the site (allowing a time-out limit to be set), among other things. Knowing the visitor's geolocation indicates, among other things, the visitor's country. In some cases, requests from or responses to a certain country may be blocked entirely. Geo-blocking has been used, for example, to block shows in certain countries, such as censoring shows deemed inappropriate; this is especially frequent in places such as China. Internet users may circumvent geo-blocking and censorship and protect their personal identity using a Virtual Private Network. On a website, an IP address block can prevent a disruptive address from access, though a warning and/or account block may be used first. Dynamic allocation of IP addresses by ISPs can complicate IP address blocking, as a blocked address may later be reassigned to an innocent user while the offender returns on a new one.
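A minimal sketch of the address check at the core of IP banning, using Python's standard ipaddress module; the blocked ranges are documentation examples (RFC 5737), not a real blocklist:

```python
import ipaddress

# Illustrative blocklist: one blocked range and one blocked host.
BLOCKED = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.7/32"),
]

def is_blocked(addr: str) -> bool:
    """Return True if the address falls inside any blocked network."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLOCKED)

print(is_blocked("203.0.113.42"))  # True: falls in the blocked /24
print(is_blocked("192.0.2.1"))     # False
```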
https://en.wikipedia.org/wiki/Gudermannian%20function
In mathematics, the Gudermannian function relates a hyperbolic angle measure ψ to a circular angle measure φ called the gudermannian of ψ and denoted gd ψ. The Gudermannian function reveals a close relationship between the circular functions and hyperbolic functions. It was introduced in the 1760s by Johann Heinrich Lambert, and later named for Christoph Gudermann, who also described the relationship between circular and hyperbolic functions in 1830. The gudermannian is sometimes called the hyperbolic amplitude as a limiting case of the Jacobi elliptic amplitude am(ψ, m) when parameter m = 1. The real Gudermannian function is typically defined for −∞ < ψ < ∞ to be the integral of the hyperbolic secant, φ = gd ψ = ∫₀^ψ sech t dt. The real inverse Gudermannian function can be defined for −π/2 < φ < π/2 as the integral of the (circular) secant, ψ = gd⁻¹ φ = ∫₀^φ sec t dt. The hyperbolic angle measure ψ = gd⁻¹ φ is called the anti-gudermannian of φ or sometimes the lambertian of φ, denoted ψ = lam φ. In the context of geodesy and navigation for latitude φ, the quantity k gd⁻¹ φ (scaled by arbitrary constant k) was historically called the meridional part of φ (French: latitude croissante). It is the vertical coordinate of the Mercator projection. The two angle measures φ and ψ are related by a common stereographic projection, tan ½φ = tanh ½ψ, and this identity can serve as an alternative definition for gd and gd⁻¹ valid throughout the complex plane: gd ψ = 2 arctan(tanh ½ψ), gd⁻¹ φ = 2 artanh(tan ½φ). Circular–hyperbolic identities We can evaluate the integral of the hyperbolic secant using the stereographic projection (hyperbolic half-tangent) u = tanh ½t as a change of variables. Letting φ = gd ψ, we can derive a number of identities between hyperbolic functions of ψ and circular functions of φ, such as sin φ = tanh ψ, tan φ = sinh ψ, and sec φ = cosh ψ. These are commonly used as expressions for gd and gd⁻¹ for real values of ψ and φ with |φ| < π/2; for example, the numerically well-behaved formulas gd ψ = arctan(sinh ψ) and gd⁻¹ φ = arsinh(tan φ). (Note, for complex arguments, care must be taken choosing branches of the inverse functions.) We can also express ψ and φ in terms of the half-tangents tanh ½ψ and tan ½φ. If we expand these in terms of the exponential, then we can see that they are all Möbius transformations of each other (specifically, rotations of the Riemann sphere).
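A quick numerical check of the identities just listed; a minimal sketch in Python (the function names and test value are mine, not from the source):

```python
import math

# Verify that the two "numerically well-behaved" forms of gd agree, that
# sin(gd psi) = tanh(psi), and that gd followed by its inverse round-trips.
def gd(psi: float) -> float:
    return 2 * math.atan(math.tanh(psi / 2))

def gd_inv(phi: float) -> float:
    return 2 * math.atanh(math.tan(phi / 2))

psi = 1.5
phi = gd(psi)
print(math.isclose(phi, math.atan(math.sinh(psi))))  # gd(psi) = arctan(sinh psi)
print(math.isclose(math.sin(phi), math.tanh(psi)))   # sin(gd psi) = tanh psi
print(math.isclose(gd_inv(phi), psi))                # round trip to identity
```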
https://en.wikipedia.org/wiki/Magic%20%28software%29
Magic is an electronic design automation (EDA) layout tool for very-large-scale integration (VLSI) integrated circuits (ICs), originally written by John Ousterhout and his graduate students at UC Berkeley. Work began on the project in February 1983. A primitive version was operational by April 1983, when Joan Pendleton, Shing Kong and other graduate-student chip designers suffered through many fast revisions devised to meet their needs in designing the SOAR CPU chip, a follow-on to Berkeley RISC. Fearing that Ousterhout was going to propose another name that started with "C" to match his previous projects Cm*, Caesar, and Crystal, Gordon Hamachi proposed the name Magic because he liked the idea of being able to say that people used magic to design chips. The rest of the development team enthusiastically agreed to this proposal after he devised the backronym Manhattan Artwork Generator for Integrated Circuits. The Magic software developers called themselves magicians, while the chip designers were Magic users. As free and open-source software, subject to the requirements of the BSD license, Magic continues to be popular because it is easy to use and easy to expand for specialized tasks. Differences The main difference between Magic and other VLSI design tools is its use of "corner-stitched" geometry, in which all layout is represented as a stack of planes, and each plane consists entirely of "tiles" (rectangles). The tiles must cover the entire plane. Each tile consists of an (X, Y) coordinate of its lower left-hand corner, and links to four tiles: the right-most neighbor on the top, the top-most neighbor on the right, the bottom-most neighbor on the left, and the left-most neighbor on the bottom. With the addition of the type of material represented by the tile, the layout geometry in the plane is exactly specified, as shown in the sketch below. The corner-stitched geometry representation leads to the concept of layout as "paint" to be applied to, or erased from, a canvas. This is considerably different from the object-based approach of most other layout tools.
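A minimal sketch of the tile record just described, in Python for illustration; Magic itself is written in C, and this is not its actual data structure, only the shape of the idea:

```python
from dataclasses import dataclass
from typing import Optional

# A corner-stitched tile: lower-left corner, material type, and four stitches
# to neighbors. Width and height are implicit: a tile's extent is bounded by
# the corners of its stitched neighbors.
@dataclass
class Tile:
    x: int                                 # lower-left corner, X
    y: int                                 # lower-left corner, Y
    material: str                          # material (or empty space) held
    top_right: Optional["Tile"] = None     # right-most neighbor on the top
    right_top: Optional["Tile"] = None     # top-most neighbor on the right
    left_bottom: Optional["Tile"] = None   # bottom-most neighbor on the left
    bottom_left: Optional["Tile"] = None   # left-most neighbor on the bottom

# Two horizontally adjacent tiles, stitched to each other.
a = Tile(0, 0, "metal1")
b = Tile(4, 0, "space", left_bottom=a)
a.right_top = b
```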
https://en.wikipedia.org/wiki/Quintic%20function
In mathematics, a quintic function is a function of the form g(x) = ax⁵ + bx⁴ + cx³ + dx² + ex + f, where a, b, c, d, e and f are members of a field, typically the rational numbers, the real numbers or the complex numbers, and a is nonzero. In other words, a quintic function is defined by a polynomial of degree five. Because they have an odd degree, normal quintic functions appear similar to normal cubic functions when graphed, except they may possess one additional local maximum and one additional local minimum. The derivative of a quintic function is a quartic function. Setting g(x) = 0 and assuming a ≠ 0 produces a quintic equation of the form: ax⁵ + bx⁴ + cx³ + dx² + ex + f = 0. Solving quintic equations in terms of radicals (nth roots) was a major problem in algebra from the 16th century, when cubic and quartic equations were solved, until the first half of the 19th century, when the impossibility of such a general solution was proved with the Abel–Ruffini theorem. Finding roots of a quintic equation Finding the roots (zeros) of a given polynomial has been a prominent mathematical problem. Solving linear, quadratic, cubic and quartic equations by factorization into radicals can always be done, no matter whether the roots are rational or irrational, real or complex; there are formulae that yield the required solutions. However, there is no algebraic expression (that is, in terms of radicals) for the solutions of general quintic equations over the rationals; this statement is known as the Abel–Ruffini theorem, first asserted in 1799 and completely proved in 1824. This result also holds for equations of higher degree. An example of a quintic whose roots cannot be expressed in terms of radicals is x⁵ − x + 1 = 0. Some quintics may be solved in terms of radicals. However, the solution is generally too complicated to be used in practice. Instead, numerical approximations are calculated using a root-finding algorithm for polynomials, as in the sketch below. Solvable quintics Some quintic equations can be solved in terms of radicals. These include the quintic equations defined by a polynomial that is reducible, for example x⁵ − x⁴ − x + 1 = (x² + 1)(x + 1)(x − 1)².
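As an illustration of the numerical route mentioned above, the following sketch approximates the roots of the unsolvable-by-radicals example x⁵ − x + 1 with NumPy's polynomial root finder (which computes eigenvalues of the companion matrix):

```python
import numpy as np

# Coefficients of x^5 + 0x^4 + 0x^3 + 0x^2 - x + 1, highest degree first.
coeffs = [1, 0, 0, 0, -1, 1]
roots = np.roots(coeffs)

# Print each approximate root and confirm the polynomial nearly vanishes there.
for r in roots:
    print(r, "residual:", np.polyval(coeffs, r))
```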
https://en.wikipedia.org/wiki/Apoplexy
Apoplexy is rupture of an internal organ and the accompanying symptoms. The term formerly referred to what is now called a hemorrhagic stroke, which is the result of a ruptured blood vessel in the brain. Today health care professionals do not use the term, but instead specify the anatomic location of the bleeding, such as cerebral, ovarian or pituitary. Informally or metaphorically, the term apoplexy is associated with being furious, especially in the adjective "apoplectic". Historical meaning From the late 14th to the late 19th century, apoplexy referred to any sudden death that began with a sudden loss of consciousness, especially one in which the victim died within a matter of seconds after losing consciousness. The word apoplexy was sometimes used to refer to the symptom of sudden loss of consciousness immediately preceding death. Ruptured aortic aneurysms, and even heart attacks and strokes, were referred to as apoplexy in the past, because before the advent of modern medicine there was limited ability to differentiate abnormal conditions and diseased states. Although physiology as a medical field dates back at least to the time of Hippocrates, until the late 19th century physicians often had inadequate or inaccurate understandings of many of the human body's normal functions and abnormal presentations. Hence, identifying a specific cause of a symptom or of death often proved difficult or impossible. Hemorrhage Because the term by itself is now ambiguous, it is often coupled with a descriptive adjective to indicate the site of bleeding. For example, bleeding within the pituitary gland is called pituitary apoplexy, and bleeding within the adrenal glands can be called adrenal apoplexy. Such apoplexy includes hemorrhaging within the gland and accompanying neurological problems such as confusion, headache, and impairment of consciousness. See also Transient ischemic attack