| source | text |
|---|---|
https://en.wikipedia.org/wiki/Platinum%20silicide
|
Platinum silicide, also known as platinum monosilicide, is the inorganic compound with the formula PtSi. It is a semiconductor that turns into a superconductor when cooled to 0.8 K.
Structure and bonding
The crystal structure of PtSi is orthorhombic, with each silicon atom having six neighboring platinum atoms: one at a distance of 2.41 angstroms, two at 2.43 angstroms, one at 2.52 angstroms, and the final two at 2.64 angstroms. Each platinum atom has six silicon neighbors at the same distances, as well as two platinum neighbors at distances of 2.87 and 2.90 angstroms. All of the distances over 2.50 angstroms are considered too long to be involved in the bonding interactions of the compound. As a result, it has been shown that two sets of covalent bonds form the compound: one set is the three-center Pt–Si–Pt bond, and the other the two-center Pt–Si bond. Each silicon atom in the compound has one three-center bond and two two-center bonds. The thinnest possible film of PtSi would consist of two alternating planes of atoms, a single sheet of the orthorhombic structure; thicker layers are formed by stacking pairs of the alternating sheets. The bonding in PtSi is more similar to that of pure silicon than to that of pure platinum, though experimentation has revealed a metallic bonding character in PtSi that pure silicon lacks.
Synthesis
Methods
PtSi can be synthesized in several ways. The standard method involves depositing a thin film of pure platinum onto silicon wafers and heating it in a conventional furnace at 450–600 °C for half an hour in an inert ambient. The process cannot be carried out in an oxygen-containing environment, as this results in the formation of an oxide layer on the silicon, preventing PtSi from forming.
A secondary technique for synthesis requires a sputtered platinum film deposited on a silicon substrate. Due to
|
https://en.wikipedia.org/wiki/Engineering%20bill%20of%20materials
|
An engineering bill of materials (EBOM) is a type of bill of materials (BOM) reflecting the product as designed by engineering, referred to as the "as-designed" bill of materials.
The EBOM is not related to modular BOM or configurable BOM (CBOM) concepts, as modular and configurable BOMs are used to reflect selection of items to create saleable end-products.
The EBOM concept is one of several lifecycle views of a product, alongside the sales BOM (the product as sold) and the service BOM (the product as changed through field service).
This BOM includes all substitute and alternate part numbers, and includes parts that are contained in drawing notes.
See also
Bill of materials
Configurable BOM (CBOM)
Material Requirements Planning (MRP)
Manufacturing resource planning (MRP II)
Enterprise resource planning (ERP)
Manufacturing
Enterprise resource planning terminology
|
https://en.wikipedia.org/wiki/Electronic%20Frontier%20Finland
|
Electronic Frontier Finland – Effi ry (Effi) is a Finnish on-line civil rights organization founded in 2001 by Herkko Hietanen, Ville Oksanen and Mikko Välimäki. It had about 1,600 members at the end of 2014. While not formally affiliated with the U.S.-based Electronic Frontier Foundation, the two organizations share many of their goals. Effi is a member of the Global Internet Liberty Campaign and a founding member of European Digital Rights (EDRi).
Effi's stated aim is to protect and promote freedom of speech and privacy on the Internet as well as in Finnish society in general. Among other things, Effi has lobbied for effective anti-spam legislation and against software patents. Effi has also assumed a leading role on certain consumer rights issues such as CD copy protection, in part due to the reluctance of traditional Finnish consumer protection agencies to address them.
Effi presents the annual Finnish Big Brother Awards in cooperation with Privacy International.
Board members include Tapani Tarvainen, Timo Karjalainen and Leena Romppainen as chairperson.
References
External links
Electronic Frontier Finland website
Politics and technology
Computer law organizations
Internet privacy organizations
Foundations based in Finland
Privacy organizations
Organizations established in 2001
2001 establishments in Finland
|
https://en.wikipedia.org/wiki/Morita%20equivalence
|
In abstract algebra, Morita equivalence is a relationship defined between rings that preserves many ring-theoretic properties. More precisely, two rings R and S are Morita equivalent if their categories of modules are additively equivalent. It is named after the Japanese mathematician Kiiti Morita, who defined equivalence and a similar notion of duality in 1958.
Motivation
Rings are commonly studied in terms of their modules, as modules can be viewed as representations of rings. Every ring R has a natural R-module structure on itself where the module action is defined as the multiplication in the ring, so the approach via modules is more general and gives useful information. Because of this, one often studies a ring by studying the category of modules over that ring. Morita equivalence takes this viewpoint to a natural conclusion by defining rings to be Morita equivalent if their module categories are equivalent. This notion is of interest only when dealing with noncommutative rings, since it can be shown that two commutative rings are Morita equivalent if and only if they are isomorphic.
Definition
Two rings R and S (associative, with 1) are said to be (Morita) equivalent if there is an equivalence of the category of (left) modules over R, R-Mod, and the category of (left) modules over S, S-Mod. It can be shown that the left module categories R-Mod and S-Mod are equivalent if and only if the right module categories Mod-R and Mod-S are equivalent. Further it can be shown that any functor from R-Mod to S-Mod that yields an equivalence is automatically additive.
Examples
Any two isomorphic rings are Morita equivalent.
The ring of n-by-n matrices with elements in R, denoted Mn(R), is Morita-equivalent to R for any n > 0. Notice that this generalizes the classification of simple artinian rings given by Artin–Wedderburn theory. To see the equivalence, notice that if X is a left R-module then Xn is an Mn(R)-module where the module structure
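The equivalence in the matrix-ring example can be made explicit (a standard sketch; the bimodule P and the functor notation are textbook conventions, not taken from the truncated text above). Viewing the column space P = Rⁿ as an (Mₙ(R), R)-bimodule, the two functors

```latex
P \otimes_R - \;:\; R\text{-Mod} \longrightarrow M_n(R)\text{-Mod},
\qquad
\operatorname{Hom}_{M_n(R)}(P, -) \;:\; M_n(R)\text{-Mod} \longrightarrow R\text{-Mod}
```

are additive and mutually quasi-inverse (up to natural isomorphism), exhibiting the Morita equivalence of R and Mₙ(R).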
|
https://en.wikipedia.org/wiki/DO-254
|
RTCA DO-254 / EUROCAE ED-80, Design Assurance Guidance for Airborne Electronic Hardware is a document providing guidance for the development of airborne electronic hardware, published by RTCA, Incorporated and EUROCAE. The DO-254/ED-80 standard was formally recognized by the FAA in 2005 via AC 20-152 as a means of compliance for the design assurance of electronic hardware in airborne systems. The guidance in this document is applicable, but not limited, to such electronic hardware items as
Line Replaceable Units (quickly replaceable components)
Circuit board assemblies (CBA)
Custom micro-coded components such as field programmable gate arrays (FPGA), programmable logic devices (PLD), and application-specific integrated circuits (ASIC), including any associated macro functions
Integrated technology components such as hybrid integrated circuits and multi-chip modules
Commercial off-the-shelf (COTS) components
The document classifies electronic hardware items into simple or complex categories. An item is simple "if a comprehensive combination of deterministic tests and analyses appropriate to the design assurance level can ensure correct functional performance under all foreseeable operating conditions with no anomalous behavior." Conversely, a complex item is one that cannot have correct functional performance ensured by tests and analyses alone; so, assurance must be accomplished by additional means. The body of DO-254/ED-80 establishes objectives and activities for the systematic design assurance of complex electronic hardware, generally presumed to be complex custom micro-coded components, as listed above. However, simple electronic hardware is within the scope of DO-254/ED-80 and applicants propose and use the guidance in this standard to obtain certification approval of simple custom micro-coded components, especially devices that support higher level (A/B) aircraft functions.
The DO-254/ED-80 standard is the counterpart to the well-established software s
|
https://en.wikipedia.org/wiki/Arboreal%20theory
|
The arboreal theory claims that primates evolved from their ancestors by adapting to arboreal life. It was proposed by Grafton Elliot Smith (1912), a neuroanatomist who was chiefly concerned with the emergence of the primate brain.
Summary
Primates are thought to have developed several of their traits and habits initially while living in trees. One key component to this argument is that primates relied on sight over smell. They were able to develop a keen sense of depth perception, perhaps because of the constant leaping that was necessary to move about the trees. Primates also developed hands and feet that were capable of grasping. This was also a result of arboreal life, which required a great deal of crawling along branches, and reaching out for fruit and other food. These early primates were likely to have eaten foods found in trees, such as flowers, fruits, berries, gums, leaves, and insects. They are thought to have shifted their diets towards insects in the early Cenozoic era, when insects became more numerous.
References
Theories
Evolutionary biology
|
https://en.wikipedia.org/wiki/Pelvic%20thrust
|
The pelvic thrust is the thrusting motion of the pelvic region, which is used for a variety of activities, such as dance, exercise, or sexual activity.
Sexual activity
The pelvic thrust is used during copulation by many species of mammals, including humans, or for other sexual activities (such as non-penetrative sex). In 2007, German scientists noted that female monkeys could increase the vigor and amount of pelvic thrusts made by the male by shouting during intercourse. In whitetail deer, copulation consists of a single pelvic thrust.
Dance
One of the first performers to use this move on stage was Elvis Presley. It was quite controversial due to its obvious sexual connotations; because of this, he was sometimes shown only from the waist up on TV (as on his third appearance on The Ed Sullivan Show). The pelvic thrust later became one of the signature moves of Michael Jackson. It is also mentioned in "Time Warp", a song from The Rocky Horror Show, as part of the choreography associated with the warp itself. Twerking, a reverse and sometimes passive form of the pelvic thrust, is also a very popular hip-hop dance move. The sideways pelvic thrust is a famous female dance move in India and Bangladesh, known as the thumka, and appears in the lyrics of various Bollywood songs.
Exercise
Hip thrusts can be used as an exercise to train the gluteus maximus muscle. The athlete will get into a reclined position and thrust their hips upward to lift weights balanced on their lap.
Infants
Pelvic thrusting is observed in infant monkeys, apes, and humans. These observations led ethologist John Bowlby (1969) to suggest that infantile sexual behavior may be the rule in mammals, not the exception. Thrusting has been observed in humans at eight to 10 months of age and may be an expression of affection. Typically, the infant clings to the parent, then nuzzles, thrusts, and rotates the pelvis for several seconds.
See also
Lordosis behavior
Twerking
Refe
|
https://en.wikipedia.org/wiki/P-compact%20group
|
In mathematics, in particular algebraic topology, a p-compact group is a homotopical version of a compact Lie group, but with all the local structure concentrated at a single prime p. This concept was introduced in , making precise earlier notions of a mod p finite loop space. A p-compact group has many Lie-like properties like maximal tori and Weyl groups, which are defined purely homotopically in terms of the classifying space, but with the important difference that the Weyl group, rather than being a finite reflection group over the integers, is now a finite p-adic reflection group. They admit a classification in terms of root data, which mirrors the classification of compact Lie groups, but with the integers replaced by the p-adic integers.
Definition
A p-compact group is a pointed space BG, which is local with respect to mod p homology, and such that the pointed loop space G = ΩBG has finite mod p homology. One sometimes also refers to the p-compact group by G, but then one needs to keep in mind that the loop space structure is part of the data (which then allows one to recover BG).
A p-compact group is said to be connected if G is a connected space (in general the group of components of G will be a finite p-group). The rank of a p-compact group is the rank of its maximal torus.
Examples
The p-completion, in the sense of homotopy theory, of (the classifying space of) a compact connected Lie group defines a connected p-compact group. (The Weyl group is just its ordinary Weyl group, now viewed as a p-adic reflection group by tensoring the coweight lattice with the p-adic integers Zp.)
More generally, the p-completion of a connected finite loop space defines a p-compact group. (Here the Weyl group will be a Zp-reflection group that need not come from a Z-reflection group.)
A rank 1 connected 2-compact group is either the 2-completion of SU(2) or of SO(3). A rank 1 connected p-compact group, for p odd, is a "Sullivan sphere", i.e., the p-completion of a (2n − 1)-sphere S^(2n−1), where n divides p − 1.
|
https://en.wikipedia.org/wiki/Placard
|
A placard is a notice installed in a public place, like a small card, sign, or plaque. It can be attached to or hung from a vehicle or building to indicate information about the vehicle's operator or the contents of a vehicle or building. It can also refer to paperboard signs or notices carried by picketers or demonstrators.
Buildings
A placard is posted on buildings to communicate a wide variety of information, such as fire safety policies or the location of emergency shelters.
The International Building Code requires that doors in some public and commercial structures fitted with an internal key lock have a notice reading "This door to remain unlocked when this space is occupied" posted beside or above the door in a minimum text size. Some state and local building codes modify this text; the California fire code, for example, specifies "This door to remain unlocked during business hours".
Temporary placards may be placed on buildings such as warning signs when a structure is being fumigated, or has been condemned by building inspectors or the fire department and is unsafe to enter.
Fallout shelters
As part of civil defense preparations in the event of a nuclear attack, in 1961 the United States began establishing fallout shelters in communities across the country. The shelters were marked by an orange-yellow and black trefoil symbol designed by Robert W. Blakeley.
In 1962, 1.4 million metal signs and 1 million adhesive stickers were manufactured and distributed across the country at a total cost of $700,500.
Two standard signs were widely used: an aluminum sign for posting on the exterior of buildings, identifying the building as having a fallout shelter, and a steel sign, intended for interior use, to direct occupants to the shelter and mark its actual location within the building.
The sign system included 'overlays' that were designed to be added to signs for conveying additional information about the specific shelter and its location.
Exterior sign overlays:
Numbers - for Capacity
|
https://en.wikipedia.org/wiki/Graph%20factorization
|
In graph theory, a factor of a graph G is a spanning subgraph, i.e., a subgraph that has the same vertex set as G. A k-factor of a graph is a spanning k-regular subgraph, and a k-factorization partitions the edges of the graph into disjoint k-factors. A graph G is said to be k-factorable if it admits a k-factorization. In particular, a 1-factor is a perfect matching, and a 1-factorization of a k-regular graph is a proper edge coloring with k colors. A 2-factor is a collection of cycles that spans all vertices of the graph.
1-factorization
If a graph is 1-factorable (i.e., has a 1-factorization), then it has to be a regular graph. However, not all regular graphs are 1-factorable. A k-regular graph is 1-factorable if it has chromatic index k; examples of such graphs include:
Any regular bipartite graph. Hall's marriage theorem can be used to show that a k-regular bipartite graph contains a perfect matching. One can then remove the perfect matching to obtain a (k − 1)-regular bipartite graph, and apply the same reasoning repeatedly.
Any complete graph with an even number of nodes (see below).
However, there are also k-regular graphs that have chromatic index k + 1, and these graphs are not 1-factorable; examples of such graphs include:
Any regular graph with an odd number of nodes.
The Petersen graph.
Complete graphs
A 1-factorization of a complete graph corresponds to pairings in a round-robin tournament. The 1-factorization of complete graphs is a special case of Baranyai's theorem concerning the 1-factorization of complete hypergraphs.
One method for constructing a 1-factorization of a complete graph on an even number of vertices involves placing all but one of the vertices on a circle, forming a regular polygon, with the remaining vertex at the center of the circle. With this arrangement of vertices, one way of constructing a 1-factor of the graph is to choose an edge e from the center to a single polygon vertex together with all possible edges that lie on
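The circle construction just described can be sketched in Python (an illustrative implementation of the standard round-robin method; the function name and representation are my own, with each 1-factor given as a list of vertex pairs):

```python
def one_factorization(n):
    """Return a 1-factorization of the complete graph K_n (n even) as a
    list of n-1 perfect matchings, using the circle method: vertices
    0..n-2 sit on a circle and vertex n-1 sits at the center."""
    assert n % 2 == 0, "K_n is 1-factorable only for even n"
    m = n - 1  # number of vertices on the circle
    factors = []
    for r in range(m):
        # the spoke from the center to circle vertex r ...
        matching = [(n - 1, r)]
        # ... plus the chords perpendicular to that spoke
        for i in range(1, n // 2):
            matching.append(((r + i) % m, (r - i) % m))
        factors.append(matching)
    return factors

factors = one_factorization(6)  # 5 matchings, each covering all 6 vertices
```

Rotating the spoke through all n − 1 positions produces n − 1 pairwise edge-disjoint perfect matchings, which together use every one of the n(n − 1)/2 edges of the complete graph exactly once.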
|
https://en.wikipedia.org/wiki/Program%20status%20word
|
The program status word (PSW) is a register that performs the function of a status register and program counter, and sometimes more. The term is also applied to a copy of the PSW in storage. This article only discusses the PSW in the IBM System/360 and its successors, and follows the IBM convention of numbering bits starting with 0 as the leftmost (most significant) bit.
Although certain fields within the PSW may be tested or set by using non-privileged instructions, testing or setting the remaining fields may only be accomplished by using privileged instructions.
Contained within the PSW is the two-bit condition code, representing zero, positive, negative, overflow, and similar flags of other architectures' status registers. Conditional branch instructions test the condition code against a four-bit mask, with each mask bit (weights 2^3, 2^2, 2^1, 2^0) representing a test for one of the four condition-code values. (Since IBM uses big-endian bit numbering, mask value 8 selects code 0, mask value 4 selects code 1, mask value 2 selects code 2, and mask value 1 selects code 3.)
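The mask-to-condition-code mapping can be modeled in a few lines (an illustrative sketch, not IBM code; the function name is invented):

```python
def branch_taken(mask, cc):
    """Model of BRANCH ON CONDITION: the instruction's 4-bit mask selects
    which of the four condition-code values (0-3) cause the branch to be
    taken. With IBM's big-endian bit numbering, the mask bit of weight 8
    tests CC 0, 4 tests CC 1, 2 tests CC 2, and 1 tests CC 3."""
    return bool(mask & (8 >> cc))

branch_taken(8, 0)   # True: mask 8 tests condition code 0
branch_taken(15, 2)  # True: mask 15 branches on every condition code
branch_taken(0, 1)   # False: mask 0 never branches (a no-op)
```

A mask of 15 thus acts as an unconditional branch and a mask of 0 as a no-op, exactly as on the real machines.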
The 64-bit PSW describes (among other things)
Interrupt masks
Privilege states
Condition code
Instruction address
In the early instances of the architecture (System/360 and early System/370), the instruction address was 24 bits; in later instances (S/370-XA), the instruction address was 31 bits plus a mode bit (24-bit addressing mode if zero; 31-bit addressing mode if one) for a total of 32 bits.
In the present instances of the architecture (z/Architecture), the instruction address is 64 bits and the PSW itself is 128 bits.
The PSW may be loaded by the LOAD PSW instruction (LPSW or LPSWE). Its contents may be examined with the Extract PSW instruction (EPSW).
Format
S/360
On all but 360/20, the PSW has the following formats. S/360 Extended PSW format only applies to the 360/67 with bit 8 of control register 6 set.
S/370
S/370 Extended Architecture (S/370-XA)
Enterprise Systems Architecture (ESA)
z/Archi
|
https://en.wikipedia.org/wiki/Transversality%20%28mathematics%29
|
In mathematics, transversality is a notion that describes how spaces can intersect; transversality can be seen as the "opposite" of tangency, and plays a role in general position. It formalizes the idea of a generic intersection in differential topology. It is defined by considering the linearizations of the intersecting spaces at the points of intersection.
Definition
Two submanifolds of a given finite-dimensional smooth manifold are said to intersect transversally if at every point of intersection, their separate tangent spaces at that point together generate the tangent space of the ambient manifold at that point. Manifolds that do not intersect are vacuously transverse. If the manifolds are of complementary dimension (i.e., their dimensions add up to the dimension of the ambient space), the condition means that the tangent space to the ambient manifold is the direct sum of the two smaller tangent spaces. If an intersection is transverse, then the intersection will be a submanifold whose codimension is equal to the sum of the codimensions of the two manifolds. In the absence of the transversality condition the intersection may fail to be a submanifold, having some sort of singular point.
In particular, this means that transverse submanifolds of complementary dimension intersect in isolated points (i.e., a 0-manifold). If both submanifolds and the ambient manifold are oriented, their intersection is oriented. When the intersection is zero-dimensional, the orientation is simply a plus or minus for each point.
One notation for the transverse intersection of two submanifolds A and B of a given manifold M is A ⋔ B. This notation can be read in two ways: either as "A and B intersect transversally" or as an alternative notation for the set-theoretic intersection of A and B when that intersection is transverse. In this notation, the definition of transversality reads: A ⋔ B if and only if, for every point p in A ∩ B, the tangent spaces satisfy T_pA + T_pB = T_pM.
Transversality of maps
The notion of transversality of a pair of submanifolds is easily extended to tran
|
https://en.wikipedia.org/wiki/Sukkur%20Barrage
|
Sukkur Barrage is a barrage on the River Indus near the city of Sukkur in the Sindh province of Pakistan. The barrage was built during the British Raj from 1923 to 1932 and was originally named Lloyd Barrage. The Sukkur Barrage is the pride of Pakistan's irrigation system, as it is the largest single irrigation network of its kind in the world. It irrigates almost all parts of the province, from Sukkur district in the north to the Mirpurkhas/Tharparkar and Hyderabad districts in the south of Sindh. It is situated northeast of Karachi, a short distance below the railway bridge at the Sukkur Gorge. The introduction of the barrage-controlled irrigation system resulted in more timely water supplies for the existing cultivated areas of Sindh.
History
Sindh survives almost entirely on the water of the River Indus, as there is very limited groundwater available. Rainfall in the province averages between 100 and 200 mm per year, while the evaporation rate is between 1,000 and 2,000 mm. Thus Sindh is arid, and it is only the Indus that irrigates the otherwise barren lands of the province. Regular surveys have not been carried out to assess the availability of groundwater in the province. Various sources estimate its volume at between three and five MAF, scattered across 28 per cent of the geographical area of Sindh, though some experts suggest it is less than these estimates. This water is found mainly along the Indus water channels and in the few natural underground streams.
The idea of the Sukkur Barrage was conceived by C. A. Fife in 1868, but the project was not finally sanctioned until 1923. It was constructed under the overall direction of Sir Charlton Harrison, CIE, as chief engineer, while Sir Arnold Musto, CIE, was the architect and engineer of the scheme. The head works and canals were completed by 1932, and on its completion the barrage was opened by The 1st Earl of Willingdon, Viceroy of India. The scheme had been launched by the Governor of Bombay, Sir George Lloyd (later known as
|
https://en.wikipedia.org/wiki/WinWAP
|
WinWAP was a web browser for Windows CE mobile devices. It was first released in 1999.
See also
Article about Winwap in Finnish IT Magazine Tietoviikko
Windows web browsers
Mobile web browsers
1999 software
Discontinued web browsers
|
https://en.wikipedia.org/wiki/Collection%20of%20Computer%20Science%20Bibliographies
|
The Collection of Computer Science Bibliographies (1993–2023) was one of the oldest (if not the oldest) bibliography collections freely accessible on the Internet; it ceased operations in July 2023. It was a collection of bibliographies of scientific literature in computer science and (computational) mathematics from various sources, covering most aspects of computer science. The bibliographies were updated weekly from their original locations.
As of 2009 the collection contained more than 2.8 million unique references (mostly to journal articles, conference papers, and technical reports), clustered in about 1,700 bibliographies, and consisted of more than 4.4 GB (950 MB gzipped) of BibTeX entries. More than 600,000 references contained cross-references to citing or cited publications.
More than 1 million references contained URLs to online versions of the papers, and abstracts were available for more than 1 million entries. There were more than 2,000 links to other sites carrying bibliographic information.
Duplicates and links
As the Collection of Computer Science Bibliographies consists of many subcollections there is a substantial overlap (roughly 1/3). At the end of 2008 there were more than 4.2 million records which represent about 2.8 million unique (in terms of normalized title and authors' last names) bibliographic entries.
The number of duplicates may be seen as an advantage, because there is a greater chance for finding a freely available full text PDF of a searched publication. Publications are clustered by title and last names of authors, so it is possible to find an extended version (e.g. Technical Report or Thesis) of an article.
There are also generated links to Google Scholar and IEEE Xplore in cases where no full text link was available directly. Almost every bibliographic query may be served in RSS format.
Major subcollections
arXiv
Bibliography Network Project
CiteSeerX
DBLP
LEABib
Networked Computer Science Technical Reference Library
Histo
|
https://en.wikipedia.org/wiki/Nyquist%20ISI%20criterion
|
In communications, the Nyquist ISI criterion describes the conditions which, when satisfied by a communication channel (including responses of transmit and receive filters), result in no intersymbol interference or ISI. It provides a method for constructing band-limited functions to overcome the effects of intersymbol interference.
When consecutive symbols are transmitted over a channel by a linear modulation (such as ASK, QAM, etc.), the impulse response (or equivalently the frequency response) of the channel causes a transmitted symbol to be spread in the time domain. This causes intersymbol interference because the previously transmitted symbols affect the currently received symbol, thus reducing tolerance for noise. The Nyquist theorem relates this time-domain condition to an equivalent frequency-domain condition.
The Nyquist criterion is closely related to the Nyquist–Shannon sampling theorem, with only a differing point of view.
Nyquist criterion
If we denote the channel impulse response as h(t), then the condition for an ISI-free response can be expressed as:

h(nT_s) = 1 if n = 0, and h(nT_s) = 0 if n ≠ 0,

for all integers n, where T_s is the symbol period. The Nyquist theorem says that this is equivalent to the frequency-domain condition:

(1/T_s) Σ_k H(f − k/T_s) = 1 for all f, the sum running over all integers k,

where H(f) is the Fourier transform of h(t). This is the Nyquist ISI criterion.
This criterion can be intuitively understood in the following way: frequency-shifted replicas of H(f) must add up to a constant value. This condition is satisfied when the spectrum H(f) has even symmetry, has bandwidth less than or equal to 1/T_s, and its single sideband has odd symmetry about the cutoff frequency 1/(2T_s).
In practice this criterion is applied to baseband filtering by regarding the symbol sequence as weighted impulses (Dirac delta function). When the baseband filters in the communication system satisfy the Nyquist criterion, symbols can be transmitted over a channel with flat response within a limited frequency band, without ISI. Examples of such baseband filters are the raised-cosine filter, or the sinc filter as the ideal case.
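As a quick numerical illustration of the time-domain condition (a sketch using the raised-cosine pulse mentioned above; the function name and the choice of roll-off are my own), one can check that the pulse equals 1 at t = 0 and vanishes at every other symbol instant:

```python
import math

def raised_cosine(t, Ts=1.0, beta=0.35):
    """Raised-cosine impulse response h(t) with symbol period Ts and
    roll-off factor beta; a standard pulse satisfying the Nyquist ISI
    criterion: h(0) = 1 and h(n*Ts) = 0 for every nonzero integer n."""
    if t == 0:
        return 1.0
    x = t / Ts
    denom = 1.0 - (2.0 * beta * x) ** 2
    if denom == 0.0:
        # removable singularity of the closed form at t = +/- Ts/(2*beta)
        return (beta / 2.0) * math.sin(math.pi / (2.0 * beta))
    sinc = math.sin(math.pi * x) / (math.pi * x)
    return sinc * math.cos(math.pi * beta * x) / denom

# Samples at the symbol instants: 1 at n = 0, (numerically) 0 elsewhere
samples = [raised_cosine(n * 1.0) for n in range(-4, 5)]
```

Between the symbol instants the pulse is nonzero, but since every other symbol's pulse crosses zero at the current sampling instant, the received samples carry no intersymbol interference.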
Der
|
https://en.wikipedia.org/wiki/Gun.Smoke
|
Gun.Smoke is a vertically scrolling run and gun video game designed by Yoshiki Okamoto and released in arcades in 1985. Gun.Smoke centers on a character named Billy Bob, a bounty hunter going after the criminals of the Wild West.
Gameplay
Gun.Smoke is a run and gun video game in which the screen automatically scrolls upward. Players use three buttons to shoot left, right, and center, and can change the way Billy shoots through button combinations. The player dies by getting shot, being struck by enemies, or getting caught between an obstacle and the bottom of the screen. The player can collect various items, including a horse for extra protection, boots for increased movement speed, bullets for faster shots, a yashichi for an extra life, and a rifle for longer shot range. Other items, such as stars, bottles, bags, and dragonflies, add points to the player's score.
Two versions of Gun.Smoke were released in North America by Romstar.
Ports
Gun.Smoke was ported to these systems:
The MSX
The PlayStation and Sega Saturn as a part of Capcom Generation 4
The PlayStation 2, PlayStation Portable and Xbox as a part of Capcom Classics Collection
The PlayStation 3 and Xbox 360 as a part of Capcom Arcade Cabinet
The Nintendo Switch, PlayStation 4, Xbox One, and Microsoft Windows as part of Capcom Arcade 2nd Stadium, where it is referred to as Gan Sumoku.
Windows 98 and Windows XP as a part of Capcom Arcade Hits Volume 3
The Amstrad CPC as Desperado – Gun.Smoke; this platform received a sequel called Desperado 2
The ZX Spectrum
NES version
The game was later ported to the Nintendo Entertainment System (NES) and Family Computer Disk System (FDS) in 1988. This version has a new storyline: in 1849, a gang known as the Wingates attacks the town of Hicksville, kills the sheriff, and causes trouble every day until Billy, the main character, comes to town to free it from the gang. The NES version also has different music.
Soundtrack
The soundtrack for the arcade version was composed by Ayako Mori. On August 25, 19
|
https://en.wikipedia.org/wiki/Weighing%20matrix
|
In mathematics, a weighing matrix of order n and weight w is an n × n matrix W with entries from the set {0, 1, −1} such that:

W W^T = w I_n

where W^T is the transpose of W and I_n is the identity matrix of order n. The weight w is also called the degree of the matrix. For convenience, a weighing matrix of order n and weight w is often denoted by W(n, w).
Weighing matrices are so called because of their use in optimally measuring the individual weights of multiple objects. When the weighing device is a balance scale, the statistical variance of the measurement can be minimized by weighing multiple objects at once, including some objects in the opposite pan of the scale where they subtract from the measurement.
Properties
Some properties are immediate from the definition. If W is a W(n, w), then:

The rows of W are pairwise orthogonal (that is, every pair of rows you pick from W will be orthogonal). Similarly, the columns are pairwise orthogonal.
Each row and each column of W has exactly w non-zero elements.
W^T W = w I_n, since the definition means that W^−1 = w^−1 W^T, where W^−1 is the inverse of W.
det(W) = ±w^(n/2), where det(W) is the determinant of W.
A weighing matrix is a generalization of the Hadamard matrix, which does not allow zero entries. As two special cases, a W(n, n) is a Hadamard matrix and a W(n, n − 1) is equivalent to a conference matrix.
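A small concrete check (an illustrative example; this particular W(4, 3) and the helper function are my own) verifies the defining identity W W^T = w I_n:

```python
# A W(4, 3): order-4 weighing matrix of weight 3, entries from {0, 1, -1},
# with exactly one zero in each row and column
W = [
    [ 0,  1,  1,  1],
    [ 1,  0,  1, -1],
    [ 1, -1,  0,  1],
    [ 1,  1, -1,  0],
]

def gram(W):
    """Compute W * W^T for a square matrix given as nested lists."""
    n = len(W)
    return [[sum(W[i][k] * W[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

gram(W)  # == 3 * I_4, confirming W is a W(4, 3)
```

Every pair of distinct rows is orthogonal and each row has exactly 3 non-zero entries, matching the properties listed above.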
Applications
Experiment design
Weighing matrices take their name from the problem of measuring the weight of multiple objects. If a measuring device has a statistical variance of σ², then measuring the weights of N objects and subtracting the (equally imprecise) tare weight will result in a final measurement with a variance of 2σ². It is possible to increase the accuracy of the estimated weights by measuring different subsets of the objects, especially when using a balance scale where objects can be put on the opposite measuring pan where they subtract their weight from the measurement.
An order-n matrix W can be used to represent the placement of n objects—including the tare weight—in n trials. Suppose the left pan of the balance scale adds to the meas
|
https://en.wikipedia.org/wiki/Charactron
|
Charactron was a U.S. registered trademark (number 0585950, 23 February 1954) of Consolidated Vultee Aircraft Corporation (Convair) for its shaped electron beam cathode ray tube. Charactron CRTs performed functions of both a display device and a read-only memory storing multiple characters and fonts. The similar Typotron was a U.S. registered trademark (23 November 1953) of Hughes Aircraft Corporation for its type of shaped electron beam storage tube with a direct-view bistable storage screen.
The Charactron CRT used an electron beam to flood a specially patterned perforated anode that contained the stencil patterns for each of the characters that it could form. The first deflection positioning of the electron beam steered the beam to pass through one of the (typically 64 or 116) characters and symbols that could be formed. The beam, which then had the cross-section of the desired character, was re-centered along the axis of the tube and deflected to the desired position on the screen for display. Alternatively, as in the accompanying image, the entire matrix was filled with the electron beam, which was then deflected through a selection aperture to isolate one character.
The term Charactron is sometimes mistakenly applied to another type of CRT properly called a monoscope, which generates an electrical signal by scanning an electron beam of uniform cross-section across a printed pattern on an internal target electrode.
Applications
There were two basic types/uses of Charactrons:
Direct view — where the intended user watched the face of the tube. An example was the tube of the AN/FSQ-7 SAGE Semi Automatic Ground Environment computer console.
Photographic output — where the display screen was photographed by a microfilm camera for recording of computer generated data. The Stromberg-Carlson SC-4000 series system was a typical use of the tube
The technical expertise, and trademarks, for the Charactron ultimately passed to Stromberg-Carlson, General Dynamics, Stromberg Da
|
https://en.wikipedia.org/wiki/Graphics%20hardware
|
Graphics hardware is computer hardware that generates computer graphics and allows them to be shown on a display, usually using a graphics card (video card) in combination with a device driver to create the images on the screen.
Types
Graphics cards
The most important piece of graphics hardware is the graphics card, which is the piece of equipment that renders out all images and sends them to a display. There are two types of graphics cards: integrated and dedicated.
An integrated graphics card, such as those built by Intel for use in their computers, is bound to the motherboard and shares RAM (random-access memory) with the CPU, reducing the total amount of RAM available. This is undesirable for running programs and applications that use a large amount of video memory.
A dedicated graphics card has its own RAM and processor for generating its images and does not slow down the rest of the computer. Dedicated graphics cards also have higher performance than integrated graphics cards. It is possible to have both dedicated and integrated graphics; however, once a dedicated graphics card is installed, the integrated card will no longer function until the dedicated card is removed.
Parts of a graphics card
The GPU, or graphics processing unit, is the unit that allows the graphics card to function. It performs a large amount of the work given to the card. The majority of video playback on a computer is controlled by the GPU. Once again, a GPU can be either integrated or dedicated.
Video Memory is built-in RAM on the graphics card, which provides it with its own memory, allowing it to run smoothly without taking resources intended for general use by the rest of the computer. The term "Video" here is an informal designation and is not intended in a narrow sense. In particular, it does not imply exclusively video data. The data in this form of memory comprises all manner of graphical data including those for still images, icons, fonts, and generally anything that is displayed on the screen.
|
https://en.wikipedia.org/wiki/Domains%20by%20Proxy
|
Domains by Proxy (DBP) is an Internet company started by the founder of GoDaddy, Bob Parsons. Domains by Proxy offers domain privacy services through partner domain registrars such as GoDaddy and Wild West Domains.
Subscribers list Domains by Proxy as their administrative and technical contacts in the Internet's WHOIS database, thereby delegating responsibility for managing unsolicited contacts from third parties and keeping the domain owners' personal information secret. However, the company will release a registrant's personal information in some cases, such as by court order or for other reasons as deemed appropriate by the company per its Domain Name Proxy Agreement.
As of 2014, over 9,850,000 domain names use the Domains by Proxy service.
Political usage
In the run-up to the 2012 United States presidential primaries, numerous domain names with derogatory expressions have been registered through Domains by Proxy by both Republicans and Democrats.
Domains by Proxy has allegedly been a target of the Internet organization Anonymous over business practices perceived as malicious, including inducements to join its service, privacy claims that are not fulfilled, and the lowering of the Google PageRank of the sites it links to.
Controversy
Fraudsters
Controversially, Domains By Proxy is also used by organizations that target vulnerable individuals by sending threatening psychic letters, and by fake drug companies.
It is also used by fake anti-spyware and anti-malware sites to hide their real ownership of the software that they promote.
Advance-fee fraudsters also use Domains By Proxy. On 5 February 2016, the Artists Against 419 database showed that 1,124 of its 108,684 entries abused the services of Domains By Proxy, a figure of slightly over one percent of the entries.
Privacy
In 2014, Domains by Proxy handed over personal details of a site owner to Motion Picture Association due to potential copyright infringement despite the website
|
https://en.wikipedia.org/wiki/Sintran%20III
|
Sintran III is a real-time, multitasking, multi-user operating system used with Norsk Data minicomputers from 1974. Unlike its predecessors Sintran I and II, it was written entirely by Norsk Data, in Nord Programming Language (Nord PL, NPL), an intermediate language for Norsk Data computers.
Overview
Sintran was mainly a command-line interface based operating system, though there were several shells which could be installed to control the user environment more strictly, by far the most popular of which was USER-ENVIRONMENT.
One of the clever features was the ability to abbreviate commands and file names between hyphens. For example, typing LIST-FILES would give users several prompts, including for print, paging, etc. Users could override this by typing LI-FI ,,n, which abbreviated the LIST-FILES command and bypassed the prompts. One could also refer to files this way: for example, PED H-W: would refer to HELLO-WORLD:SYMB if this was the only file having H, any number of characters, a hyphen -, a W, any number of characters, and any file ending.
This saved many keystrokes and gave users a gentle learning curve, from complete and self-explanatory commands like LIST-ALL-FILES down to L-A-F for an advanced user. (The hyphen key on Norwegian keyboards resides where the slash key does on U.S. ones.)
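The hyphen-abbreviation rule can be sketched as follows. This is a toy reconstruction, not Sintran's actual matcher: the real system also resolved ambiguity against the full set of installed commands and files, and handled file-type suffixes like :SYMB, which this sketch ignores.

```python
# Sintran III-style abbreviation: each hyphen-separated component of the
# abbreviation must be a prefix of the corresponding component of the
# full name, so "L-A-F" matches "LIST-ALL-FILES".

def matches(abbrev, full):
    a_parts = abbrev.upper().split("-")
    f_parts = full.upper().split("-")
    if len(a_parts) != len(f_parts):
        return False
    return all(f.startswith(a) for a, f in zip(a_parts, f_parts))

assert matches("LI-FI", "LIST-FILES")
assert matches("L-A-F", "LIST-ALL-FILES")
assert matches("H-W", "HELLO-WORLD")
assert not matches("LI-FI", "LIST-ALL-FILES")   # component counts differ
```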
Now that Sintran has mostly disappeared as an operating system, there are few references to it. However, a job control (batch processing) language named JEC, believed to stand for Job Execution Controller, was available; it could be used to set up batch jobs to compile COBOL programs, etc.
References
Discontinued operating systems
Norsk Data software
Proprietary operating systems
Real-time operating systems
1974 software
|
https://en.wikipedia.org/wiki/Microsoft%20CryptoAPI
|
The Microsoft Windows platform-specific Cryptographic Application Programming Interface (also known variously as CryptoAPI, Microsoft Cryptography API, MS-CAPI or simply CAPI) is an application programming interface included with Microsoft Windows operating systems that provides services to enable developers to secure Windows-based applications using cryptography. It is a set of dynamically linked libraries that provides an abstraction layer which isolates programmers from the code used to encrypt the data. The Crypto API was first introduced in Windows NT 4.0 and enhanced in subsequent versions.
CryptoAPI supports both public-key and symmetric key cryptography, though persistent symmetric keys are not supported. It includes functionality for encrypting and decrypting data and for authentication using digital certificates. It also includes a cryptographically secure pseudorandom number generator function CryptGenRandom.
CryptoAPI works with a number of CSPs (Cryptographic Service Providers) installed on the machine. CSPs are the modules that do the actual work of encoding and decoding data by performing the cryptographic functions. Vendors of HSMs may supply a CSP which works with their hardware.
Cryptography API: Next Generation
Windows Vista features an update to the Crypto API known as Cryptography API: Next Generation (CNG). It has better API factoring to allow the same functions to work using a wide range of cryptographic algorithms, and includes a number of newer algorithms that are part of the National Security Agency (NSA) Suite B. It is also flexible, featuring support for plugging custom cryptographic APIs into the CNG runtime. However, CNG Key Storage Providers still do not support symmetric keys. CNG works in both user and kernel mode, and also supports all of the algorithms from the CryptoAPI. The Microsoft provider that implements CNG is housed in Bcrypt.dll.
CNG also supports elliptic curve cryptography which, because it uses shorter keys for the
|
https://en.wikipedia.org/wiki/Hardbass
|
Hardbass or hard bass () is a subgenre of pumping house that originated in Saint Petersburg, Russia during the late 1990s, drawing inspiration from bouncy techno, hardstyle, as well as local Russian influences. Hardbass is characterized by its fast tempo (usually 150–175 BPM), donks, distinctive basslines (commonly known as "hard bounce"), distorted sounds, heavy kicks and occasional chants or rapping. In several European countries, so-called "hardbass scenes" have sprung up, which are events related to the genre that involve multiple people dancing in public while masked, sometimes with moshing involved.
History
Late 1990s–mid 2000s: Saint Petersburg, metal shade, drug raves
Hardbass first began to emerge in the late 1990s, mainly in the Saint Petersburg electronic dance music underground, when the pumping house genre, built around the bamboo bass, or donk bass (a type of metallic bass synthesizer sound, first invented by Klubbheads in 1997), became a staple in local raves. Eventually, party nights dedicated solely to pumping house were held in Saint Petersburg and to a lesser extent, in Moscow. The most famous venues for pumping raves in Saint Petersburg included those held in the "Rassvet" (Dawn) club and forest raves in a quarry near , an artificial lake not far from Saint Petersburg. Among the DJs kickstarting the domestic pumping house production in Russia were DJ Tolstyak, DJ 8088, DJ Yurbanoid, DJ Solovey, Dj Glyuk, and many others.
This raving scene was markedly different from its later offshoots. It formed a distinct subculture, mostly catering to the lower and middle class youth of Saint Petersburg. Drug use (especially barbiturate, xyrem and amphetamine use) became prevalent in the scene.
To increase the energy of the parties, Saint Petersburg producers and DJs started to increase the BPM of the pumping house they played and produced, eventually reaching 150 BPM and beyond. Saint Petersburg producers would include distinct whistles and other samples
|
https://en.wikipedia.org/wiki/Robinson%E2%80%93Dadson%20curves
|
The Robinson–Dadson curves are one of many sets of equal-loudness contours for the human ear, determined experimentally by D. W. Robinson and R. S. Dadson.
Until recently, it was common to see the term 'Fletcher–Munson' used to refer to equal-loudness contours generally, even though the re-determination carried out by Robinson and Dadson in 1956 became the basis for the ISO 226 standard, which was only recently revised.
It is now better to use 'equal-loudness contours' as the generic term, especially as a recent ISO survey redefined the curves in a new standard, ISO 226:2003.
According to the ISO report, the Robinson–Dadson results were the odd one out, differing more from the current standard than the Fletcher–Munson curves did. It comments that it is fortunate that the 40-phon Fletcher–Munson curve, on which the A-weighting standard was based, turns out to be in good agreement with modern determinations.
The article also comments on the large differences apparent in the low-frequency region, which remain unexplained. Possible explanations are:
The equipment used was not properly calibrated.
The criteria used for judging equal loudness (which is tricky) differed.
Different races actually vary greatly in this respect (possible, and most recent determinations were by the Japanese).
Subjects were not properly rested for days in advance, or were exposed to loud noise in travelling to the tests, which tensed the tensor tympani and stapedius muscles controlling low-frequency mechanical coupling.
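The A-weighting curve mentioned above has a closed-form definition in the IEC 61672 standard. The following sketch computes the weighting in decibels; the pole constants are the standard published values, and the +2.0 dB term normalizes the curve to 0 dB at 1 kHz.

```python
import math

# A-weighting magnitude response (IEC 61672), in dB relative to 1 kHz.

def a_weighting_db(f):
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.0  # normalized to 0 dB at 1 kHz

assert abs(a_weighting_db(1000.0)) < 0.1   # 0 dB at 1 kHz by construction
assert a_weighting_db(100.0) < -15         # strong low-frequency attenuation
```

The strong attenuation below a few hundred hertz is exactly the region where the report notes the largest unexplained differences between determinations.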
See also
A-weighting
ITU-R 468 noise weighting
References
External links
ISO Standard
Fletcher–Munson is not Robinson–Dadson
Full Revision of International Standards for Equal-Loudness Level Contours (ISO 226)
Hearing curves and on-line hearing test
Equal-loudness contours by Robinson and Dadson
Acoustics
Hearing
Audio engineering
Sound
Psychoacoustics
|
https://en.wikipedia.org/wiki/Key%20Code%20Qualifier
|
Key Code Qualifier is an error-code returned by a SCSI device.
When a SCSI target device returns a check condition in response to a command, the initiator usually then issues a SCSI Request Sense command. This process is part of a SCSI protocol called Contingent Allegiance Condition. The target will respond to the Request Sense command with a set of SCSI sense data which includes three fields giving increasing levels of detail about the error:
K - sense key - 4 bits, (byte 2 of Fixed sense data format)
C - additional sense code (ASC) - 8 bits, (byte 12 of Fixed sense data format)
Q - additional sense code qualifier (ASCQ) - 8 bits, (byte 13 of Fixed sense data format)
The initiator can take action based on just the K field, which indicates whether the error is minor or major. However, all three fields are usually logically combined into a 20-bit field called the Key Code Qualifier or KCQ. The specification for the target device will define the list of possible KCQ values. In practice there are many KCQ values which are common between different SCSI device types and different SCSI device vendors. Common values are listed below; consult your hardware-specific documentation as well.
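The 20-bit combination can be sketched as a simple bit-packing of the three fields (4-bit key, 8-bit ASC, 8-bit ASCQ). The example value used below, MEDIUM ERROR (key 3h) with UNRECOVERED READ ERROR (ASC 11h, ASCQ 00h), is one of the common codes.

```python
# Pack/unpack the 20-bit KCQ: 4-bit sense key, 8-bit ASC, 8-bit ASCQ.

def make_kcq(key, asc, ascq):
    assert 0 <= key < 16 and 0 <= asc < 256 and 0 <= ascq < 256
    return (key << 16) | (asc << 8) | ascq

def split_kcq(kcq):
    return (kcq >> 16) & 0xF, (kcq >> 8) & 0xFF, kcq & 0xFF

# MEDIUM ERROR (key 3h), unrecovered read error (ASC 11h, ASCQ 00h)
kcq = make_kcq(0x3, 0x11, 0x00)
assert kcq == 0x31100
assert split_kcq(kcq) == (0x3, 0x11, 0x00)
```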
List of common SCSI KCQs
References
T10: SCSI ASC/ASCQ Assignments
SCSI
|
https://en.wikipedia.org/wiki/Causal%20filter
|
In signal processing, a causal filter is a linear and time-invariant causal system. The word causal indicates that the filter output depends only on past and present inputs. A filter whose output also depends on future inputs is non-causal, whereas a filter whose output depends only on future inputs is anti-causal. Systems (including filters) that are realizable (i.e. that operate in real time) must be causal because such systems cannot act on a future input. In effect that means the output sample that best represents the input at time comes out slightly later. A common design practice for digital filters is to create a realizable filter by shortening and/or time-shifting a non-causal impulse response. If shortening is necessary, it is often accomplished as the product of the impulse-response with a window function.
An example of an anti-causal filter is a maximum phase filter, which can be defined as a stable, anti-causal filter whose inverse is also stable and anti-causal.
Example
The following definition is a sliding or moving average of input data s(x). A constant factor of 1/2 is omitted for simplicity:
f(x) = ∫ from x−1 to x+1 of s(τ) dτ
where x could represent a spatial coordinate, as in image processing. But if x represents time t, then a moving average defined that way is non-causal (also called non-realizable), because f(t) depends on future inputs, such as s(t+1). A realizable output is
f(t−1) = ∫ from t−2 to t of s(τ) dτ
which is a delayed version of the non-realizable output.
Any linear filter (such as a moving average) can be characterized by a function h(t) called its impulse response. Its output is the convolution
y(t) = ∫ from −∞ to ∞ of h(τ) x(t−τ) dτ.
In those terms, causality requires
y(t) = ∫ from 0 to ∞ of h(τ) x(t−τ) dτ
and general equality of these two expressions requires h(t) = 0 for all t < 0.
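The delayed moving-average idea can be sketched in discrete time. The window half-width N below is an arbitrary illustrative choice; the point is that delaying the centered (non-causal) average by N samples yields an output that depends only on past and present inputs.

```python
# Centered moving average (non-causal) vs. its delayed, realizable version.

N = 2  # half-width: average over samples t-N .. t+N in the centered form

def centered_avg(x, t):
    # Non-realizable: uses the future samples x[t+1] .. x[t+N].
    return sum(x[t - N:t + N + 1]) / (2 * N + 1)

def delayed_avg(x, t):
    # Realizable: same output delayed by N samples, uses only x[<= t].
    return sum(x[t - 2 * N:t + 1]) / (2 * N + 1)

x = [float(i % 7) for i in range(20)]   # arbitrary test signal
for t in range(4, 16):
    # The realizable output at time t equals the ideal output at time t - N.
    assert delayed_avg(x, t) == centered_avg(x, t - N)
```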
Characterization of causal filters in the frequency domain
Let h(t) be a causal filter with corresponding Fourier transform H(ω). Define the function
g(t) = (h(t) + h̄(−t)) / 2
which is non-causal. On the other hand, g(t) is Hermitian and, consequently, its Fourier transform G(ω) is real-valued. We now have the following relat
|
https://en.wikipedia.org/wiki/Quadrature%20filter
|
In signal processing, a quadrature filter q(t) is the analytic representation of the impulse response f(t) of a real-valued filter:
q(t) = f(t) + i f̂(t)
where f̂ denotes the Hilbert transform of f. If the quadrature filter q(t) is applied to a signal s(t), the result is
(q ∗ s)(t) = (f ∗ s)(t) + i (f̂ ∗ s)(t)
which implies that (q ∗ s)(t) is the analytic representation of (f ∗ s)(t).
Since q(t) is an analytic signal, it is either zero or complex-valued. In practice, therefore, q(t) is often implemented as two real-valued filters, which correspond to the real and imaginary parts of the filter, respectively.
An ideal quadrature filter cannot have a finite support. It has single sided support, but by choosing the (analog) function carefully, it is possible to design quadrature filters which are localized such that they can be approximated by means of functions of finite support. A digital realization without feedback (FIR) has finite support.
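One common way to realize the analytic-signal operation in discrete time is in the frequency domain: keep DC and the positive-frequency bins, zero the negative ones. The sketch below is only meant to illustrate that property; it uses a naive O(N²) DFT to stay dependency-free, and the test signal is an arbitrary cosine whose analytic representation should be the corresponding complex exponential.

```python
import cmath
import math

def dft(x, sign):
    # Naive DFT; sign = -1 for forward, +1 for inverse (unscaled).
    n = len(x)
    return [sum(x[t] * cmath.exp(sign * 2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def analytic(x):
    n = len(x)
    X = dft(x, -1)
    # Keep DC and Nyquist bins, double positive bins, zero negative bins.
    for k in range(1, n // 2):
        X[k] *= 2
    for k in range(n // 2 + 1, n):
        X[k] = 0
    return [v / n for v in dft(X, +1)]

n = 16
s = [math.cos(2 * math.pi * t / n) for t in range(n)]
z = analytic(s)
for t in range(n):
    # Analytic representation of cos(wt) is exp(iwt): imag part is sin(wt).
    assert abs(z[t].real - math.cos(2 * math.pi * t / n)) < 1e-9
    assert abs(z[t].imag - math.sin(2 * math.pi * t / n)) < 1e-9
```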
Applications
This construction simply assembles an analytic signal with a starting point, finally creating a causal signal with finite energy. The two delta distributions perform this operation, which imposes an additional constraint on the filter.
Single frequency signals
For single-frequency signals (in practice, narrow-bandwidth signals) with frequency ω, the magnitude of the response of a quadrature filter equals the signal's amplitude A times the magnitude of the frequency function of the filter at frequency ω.
This property can be useful when the signal s is a narrow-bandwidth signal of unknown frequency. By choosing a suitable frequency function Q of the filter, we may generate known functions of the unknown frequency, which can then be estimated.
See also
Analytic signal
Hilbert transform
Signal processing
|
https://en.wikipedia.org/wiki/MOSE
|
MOSE () is a project intended to protect the city of Venice, Italy, and the Venetian Lagoon from flooding.
The project is an integrated system consisting of rows of mobile gates installed at the Lido, Malamocco, and Chioggia inlets that are able to isolate the Venetian Lagoon temporarily from the Adriatic Sea during acqua alta high tides. Together with other measures, such as coastal reinforcement, the raising of quaysides, and the paving and improvement of the lagoon, MOSE is designed to protect Venice and the lagoon from tides of up to . Currently it is raised for tides of more than 110 centimetres.
The Consorzio Venezia Nuova is responsible for the work on behalf of the Ministry of Infrastructure and Transport – Venice Water Authority. Construction began simultaneously in 2003. On 10 July 2020, the first full test was successfully completed. Multiple delays, cost overruns, and scandals caused the project to miss both its 2018 completion deadline (originally a 2011 deadline) and its 2021 deadline; it is now to be finished in 2025. On 3 October 2020, the MOSE was activated for the first time during a high-tide event, preventing some of the low-lying parts of the city (in particular Piazza San Marco) from being flooded. In 2020, the experts who had conceived the set of three floodgates separating the Adriatic Sea from Venice estimated that they would have to raise the floodgates about 5 times each year. Within two years after the inaugural raising of the floodgates, MOSE was activated 49 times.
Origin of the name
Before the acronym was used to describe the entire flood protection system, MOSE referred to the 1:1 scale prototype of a gate that had been tested between 1988 and 1992 at the Lido inlet.
The name also holds a secondary meaning: "MOSE" alludes to the biblical character Moses ("Mosè" in Italian), who is remembered for parting the Red Sea.
Context
MOSE is part of a General Plan of Interventions to safeguard Venice and the lagoon
|
https://en.wikipedia.org/wiki/CrypTool
|
CrypTool is an open-source project that develops free e-learning software for illustrating cryptographic and cryptanalytic concepts.
According to "Hakin9", CrypTool is worldwide the most widespread e-learning software in the field of cryptology.
CrypTool implements more than 400 algorithms. Users can adjust these with their own parameters. To introduce users to the field of cryptography, the organization created multiple graphical-interface programs containing online documentation, analytic tools, and algorithms. They contain most classical ciphers, as well as modern symmetric and asymmetric cryptography including RSA, ECC, digital signatures, hybrid encryption, homomorphic encryption, and Diffie–Hellman key exchange. Methods from the area of quantum cryptography (like the BB84 key exchange protocol) and the area of post-quantum cryptography (like McEliece, WOTS, Merkle signature scheme, XMSS, XMSS_MT, and SPHINCS) are implemented. In addition to the algorithms, solvers (analyzers) are included, especially for classical ciphers. Other methods (for instance Huffman code, AES, Keccak, MSS) are visualized.
In addition it contains: didactical games (like Number Shark, Divider Game, or Zudo-Ku) and interactive tutorials about primes, elementary number theory, and lattice-based cryptography.
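As a minimal flavor of the kind of classical-cipher solver CrypTool includes, the sketch below brute-forces a Caesar shift by scoring candidate decryptions with a crude letter-frequency heuristic. This is purely illustrative; CrypTool's actual analyzers use far more robust statistics, and the plaintext here is a contrived example chosen so the crude score is unambiguous.

```python
from string import ascii_uppercase as ABC

def shift(text, k):
    # Caesar shift by k; non-letters pass through unchanged.
    return "".join(ABC[(ABC.index(c) + k) % 26] if c in ABC else c
                   for c in text)

def score(text):
    # Crude fitness: count occurrences of the most common English letters.
    return sum(text.count(c) for c in "ETAOINSHR")

def crack_caesar(ciphertext):
    # Try all 26 shifts, keep the one whose decryption scores highest.
    return max(range(26), key=lambda k: score(shift(ciphertext, k)))

pt = "SEE THE TREE AT THE EAST SEA"
ct = shift(pt, 3)                 # encrypt with key 3
k = crack_caesar(ct)              # recovered decryption shift (23 = -3 mod 26)
assert shift(ct, k) == pt
```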
Development, history and roadmap
The development of CrypTool started in 1998. Originally developed by German companies and universities, it has been an open-source project since 2001. More than sixty people worldwide contribute regularly to the project. Contributions as software plugins came from universities or schools in the following towns: Belgrade, Berlin, Bochum, Brisbane, Darmstadt, Dubai, Duisburg-Essen, Eindhoven, Hagenberg, Jena, Kassel, Klagenfurt, Koblenz, London, Madrid, Mannheim, San Jose, Siegen, Utrecht, Warsaw.
Currently 4 versions of CrypTool are maintained and developed: The CrypTool 1 (CT1) software is available in 6 languages (English, German, Polish, Spanish, Serbian, and Fren
|
https://en.wikipedia.org/wiki/Arcus%20II%3A%20Silent%20Symphony
|
is a computer game developed and released in Japan by Wolf Team. Narumi Kakinouchi, co-creator of Vampire Princess Miyu, was the art director for this game. The music for the game was composed by Masaaki Uno, Motoi Sakuraba, and Yasunori Shiono.
See also
Arcus Odyssey
References
External links
1989 video games
Adventure games
Arcus (video game series)
Japan-exclusive video games
MSX2 games
Telenet Japan games
X68000 games
NEC PC-8801 games
NEC PC-9801 games
Video games developed in Japan
Video games scored by Motoi Sakuraba
|
https://en.wikipedia.org/wiki/Simson%20line
|
In geometry, given a triangle ABC and a point P on its circumcircle, the three closest points to P on lines AB, AC, and BC are collinear. The line through these points is the Simson line of P, named for Robert Simson. The concept was first published, however, by William Wallace in 1799, and is sometimes called the Wallace line.
The converse is also true; if the three closest points to P on three lines are collinear, and no two of the lines are parallel, then P lies on the circumcircle of the triangle formed by the three lines. In other words, the Simson line of a triangle ABC and a point P is just the pedal triangle of ABC and P that has degenerated into a straight line, and this condition constrains the locus of P to trace the circumcircle of triangle ABC.
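The collinearity claim (and its converse) can be verified numerically with complex coordinates. The foot-of-perpendicular formula used below, (a + b + p − a·b·conj(p))/2, is the standard one for projecting a point onto a chord of the unit circle; the particular angles are arbitrary illustrative choices.

```python
import cmath

def foot(p, a, b):
    # Foot of the perpendicular from p to the line through a and b,
    # valid when a and b lie on the unit circle.
    return (a + b + p - a * b * p.conjugate()) / 2

def collinear(u, v, w, eps=1e-9):
    # Three points are collinear iff the "cross product" imaginary part vanishes.
    return abs(((v - u) * (w - u).conjugate()).imag) < eps

# Triangle with vertices on the unit circle, and P on the circumcircle.
a, b, c = (cmath.exp(1j * t) for t in (0.3, 2.1, 4.4))
p = cmath.exp(1j * 5.0)
feet = [foot(p, a, b), foot(p, b, c), foot(p, c, a)]
assert collinear(*feet)            # Simson's theorem: the feet are collinear

# A point NOT on the circumcircle gives a non-degenerate pedal triangle.
q = 0.5 * cmath.exp(1j * 1.0)
feet_q = [foot(q, a, b), foot(q, b, c), foot(q, c, a)]
assert not collinear(*feet_q)
```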
Equation
Placing the triangle in the complex plane, let the triangle with unit circumcircle have vertices whose locations have complex coordinates a, b, c, and let P with complex coordinate p be a point on the circumcircle. The Simson line is the set of points z satisfying
2abc z̄ − 2pz = ab + bc + ca + abc/p − p(a + b + c) − p²
where an overbar indicates complex conjugation.
Properties
The Simson line of a vertex of the triangle is the altitude of the triangle dropped from that vertex, and the Simson line of the point diametrically opposite to the vertex is the side of the triangle opposite to that vertex.
If P and Q are points on the circumcircle, then the angle between the Simson lines of P and Q is half the angle of the arc PQ. In particular, if the points are diametrically opposite, their Simson lines are perpendicular, and in this case the intersection of the lines lies on the nine-point circle.
Letting H denote the orthocenter of the triangle ABC, the Simson line of P bisects the segment PH in a point that lies on the nine-point circle.
Given two triangles with the same circumcircle, the angle between the Simson lines of a point P on the circumcircle for both triangles does not depend on P.
The set of all Simson lines, when drawn, form an envelope in the shape of a deltoid known as the Steiner del
|
https://en.wikipedia.org/wiki/Molybdenum%20disilicide
|
Molybdenum disilicide (MoSi2, or molybdenum silicide), an intermetallic compound and a silicide of molybdenum, is a refractory ceramic with primary use in heating elements. It has moderate density, a melting point of 2030 °C, and is electrically conductive. At high temperatures it forms a passivation layer of silicon dioxide, protecting it from further oxidation. The thermal stability of MoSi2, alongside its high emissivity, makes this material, together with WSi2, attractive for use as a high-emissivity coating in heat shields for atmospheric entry.
MoSi2 is a gray metallic-looking material with tetragonal crystal structure (alpha-modification); its beta-modification is hexagonal and unstable. It is insoluble in most acids but soluble in nitric acid and hydrofluoric acid.
While MoSi2 has excellent resistance to oxidation and a high Young's modulus at temperatures above 1000 °C, it is brittle at lower temperatures. Also, above 1200 °C it loses creep resistance. These properties limit its use as a structural material, but they may be offset by using it together with another material in a composite.
Molybdenum disilicide and MoSi2-based materials are usually made by sintering. Plasma spraying can be used for producing its dense monolithic and composite forms; material produced this way may contain a proportion of β-MoSi2 due to its rapid cooling.
Molybdenum disilicide heating elements can be used for temperatures up to 1800 °C in electric furnaces used in laboratory and production environments for the production of glass, steel, electronics, and ceramics, and in the heat treatment of materials. While the elements are brittle, they can operate at high power without aging, and their electrical resistivity does not increase with operation time. Their maximum operating temperature has to be lowered in atmospheres with low oxygen content due to breakdown of the passivation layer.
Other ceramic materials used for heating elements include silicon carbide, barium titanate, and lead t
|
https://en.wikipedia.org/wiki/Plugboard
|
A plugboard or control panel (the term used depends on the application area) is an array of jacks or sockets (often called hubs) into which patch cords can be inserted to complete an electrical circuit. Control panels are sometimes used to direct the operation of unit record equipment, cipher machines, and early computers.
Unit record equipment
Main article: Unit record equipment
The earliest machines were hardwired for specific applications. Control panels were introduced in 1906 for the Hollerith Type 1 Tabulator (photo of Type 3 with built-in control panel here). Removable control panels were introduced with the Hollerith (IBM) type 3-S tabulator in the 1920s. Applications then could be wired on separate control panels, and inserted into tabulators as needed. Removable control panels came to be used in all unit record machines where the machine's use for different applications required rewiring.
IBM removable control panels ranged in size from 6 1/4" by 10 3/4" (for machines such as the IBM 077, IBM 550, IBM 514) to roughly one to two feet (300 to 600 mm) on a side and had a rectangular array of hubs. Plugs at each end of a single-conductor patch cord were inserted into hubs, making a connection between two contacts on the machine when the control panel was placed in the machine, thereby connecting an emitting hub to an accepting or entry hub. For example, in a card duplicator application a card column reading (emitting) hub might be connected to a punch magnet entry hub. It was a relatively simple matter to copy some fields, perhaps to different columns, and ignore other columns by suitable wiring. Tabulator control panels could require dozens of patch cords for some applications.
Tabulator functions were implemented with both mechanical and electrical components. Control panels simplified the changing of electrical connections for different applications, but changing most tabulator's use still required mechanical changes. The IBM 407 was the first IBM t
|
https://en.wikipedia.org/wiki/International%20Technology%20Roadmap%20for%20Semiconductors
|
The International Technology Roadmap for Semiconductors (ITRS) is a set of documents produced by a group of semiconductor industry experts. These experts are representative of the sponsoring organisations which include the Semiconductor Industry Associations of Taiwan, South Korea, the United States, Europe, Japan, and China. As of 2017, ITRS is no longer being updated. Its successor is the International Roadmap for Devices and Systems.
The documents carried the disclaimer: "The ITRS is devised and intended for technology assessment only and is without regard to any commercial considerations pertaining to individual products or equipment".
The documents represent best opinion on the directions of research and time-lines up to about 15 years into the future for the following areas of technology:
History
Constructing an integrated circuit, or any semiconductor device, requires a series of operations—photolithography, etching, metal deposition, and so on. As the industry evolved, each of these operations was typically performed by specialized machines built by a variety of commercial companies. This specialization can make it difficult for the industry to advance, since in many cases it does no good for one company to introduce a new product if the other needed steps are not available around the same time. A technology roadmap helps by giving an idea of when a certain capability will be needed. Then each supplier can target this date for their piece of the puzzle.
With the progressive externalization of production tools to the suppliers of specialized equipment, participants identified a need for a clear roadmap to anticipate the evolution of the market and to plan and control the technological needs of IC production. For several years, the Semiconductor Industry Association (SIA) gave this responsibility of coordination to the United States, which led to the creation of an American style roadmap, the National Technology Roadmap for Semiconductors (
|
https://en.wikipedia.org/wiki/Can%20seamer
|
A can seamer is a machine used to seal the lid to the can body. The lid or "end" is usually tinplated steel (food) or aluminum (drinks) while the body can be of metal (such as cans for beverages and soups), paperboard (whisky cans) or plastic.
The seam formed is generally leak proof, but this depends on the product being canned. The seam is made by mechanically overlapping the two layers to form a hook.
Different parameters of the hook are measured and monitored to check the integrity of the seam under different conditions.
The shape of the double seam is determined by the shape of the seamer roll profile and its position relative to the chuck. During the can seaming process, the seamer chuck holds the can while the rolls rotate around it. First, the first operation roll folds the lid (end), and then the second operation roll tightens the resulting seam.
The first operation seam is critical to avoid problems like wrinkles (tightness issues) and leaks. The shape of the seam is determined by the shape of the first operation roll, the second operation roll, their relative positions and distances, the lifter height, and the lifter pressure. Any damage to, or problems with, the seamer or seamer tooling can cause severe double-seam defects such as seam bumps, wrinkles, sharp seams, and open seams.
Can seamer setup
Seamer setup is usually done by an experienced individual, typically using a lifter height gauge, a lifter pressure gauge, and feeler gauges (small pieces of metal for go/no-go testing of the distances between the roll and chuck toolings). Newer products like the clearance gauge let even novice users adjust and optimize seamers, as well as locate problems caused by broken or damaged tooling, shank/bushing issues, seamer misadjustment, or broken bearings.
Applications
Some common can seamer applications include, but are not limited to:
Cans
Automotive filters (oil and fuel)
Capacitors
Certain automotive mufflers (silencers)
Drums
References
Yam, K. L., E
|
https://en.wikipedia.org/wiki/Fallacies%20of%20distributed%20computing
|
The fallacies of distributed computing are a set of assertions made by L Peter Deutsch and others at Sun Microsystems describing false assumptions that programmers new to distributed applications invariably make.
The fallacies
The fallacies are:
The network is reliable;
Latency is zero;
Bandwidth is infinite;
The network is secure;
Topology doesn't change;
There is one administrator;
Transport cost is zero;
The network is homogeneous.
The effects of the fallacies
Software applications are often written with little error handling for networking errors. During a network outage, such applications may stall or wait indefinitely for an answer packet, permanently consuming memory or other resources. When the failed network becomes available again, those applications may also fail to retry any stalled operations or may require a (manual) restart.
Ignorance of network latency, and of the packet loss it can cause, induces application- and transport-layer developers to allow unbounded traffic, greatly increasing dropped packets and wasting bandwidth.
Ignorance of bandwidth limits on the part of traffic senders can result in bottlenecks.
Complacency regarding network security results in being blindsided by malicious users and programs that continually adapt to security measures.
Changes in network topology can have effects on both bandwidth and latency issues, and therefore can have similar problems.
Multiple administrators, as with subnets for rival companies, may institute conflicting policies of which senders of network traffic must be aware in order to complete their desired paths.
The "hidden" costs of building and maintaining a network or subnet are non-negligible and must consequently be noted in budgets to avoid vast shortfalls.
If a system assumes a homogeneous network, then it can lead to the same problems that result from the first three fallacies.
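These failure modes motivate a defensive client pattern: every remote call gets an explicit timeout, a bounded retry count, and backoff between attempts. A minimal sketch (the function names and parameters are illustrative, not from the source):

```python
import time

def call_with_retries(operation, max_attempts=3, timeout_s=2.0, base_delay_s=0.1):
    """Invoke a remote operation without assuming the network is reliable or fast:
    every attempt has a timeout, retries are bounded, and backoff avoids flooding."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return operation(timeout=timeout_s)  # bounded wait: never block forever
        except Exception as error:  # network errors are expected, not exceptional
            last_error = error
            if attempt < max_attempts:
                time.sleep(base_delay_s * 2 ** (attempt - 1))  # exponential backoff
    raise RuntimeError(f"operation failed after {max_attempts} attempts") from last_error
```

A caller that wraps every remote call this way degrades gracefully during an outage instead of stalling indefinitely or retrying without bound.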
History
The list of fallacies generally came about at Sun Microsystems. L. Peter Deutsch, one of the original Sun "Fel
|
https://en.wikipedia.org/wiki/Exterior%20insulation%20finishing%20system
|
Exterior insulation and finish system (EIFS) is a general class of non-load bearing building cladding systems that provides exterior walls with an insulated, water-resistant, finished surface in an integrated composite material system.
EIFS has been in use since the 1960s in North America and was first used on masonry buildings. Since the 1990s, the majority of wood-framed buildings have used EIFS.
History of EIFS
EIFS was developed in Europe after World War II and was initially used to retrofit masonry walls. EIFS started to be used in North America in the 1960s, at first on commercial masonry buildings. EIFS became popular in the mid-1970s due to the oil embargo and the resultant surge in interest in insulating wall systems that conserve energy used for heating and cooling.
In the late 1980s problems started developing due to water leakage in EIFS-clad buildings. This led to international controversy and lawsuits. EIFS installation was found to be a contributing factor in the multibillion-dollar problem known as the "Leaky condo crisis" in southwestern British Columbia and the "Leaky homes" issue in New Zealand that emerged separately in the 1980s and 1990s.
Critics argue that, while not inherently more prone to water penetration than other exterior finishes, barrier-type EIFS systems (non-water-managed systems) do not allow water that does penetrate the building to escape. The EIFS industry has consistently maintained that poor craftsmanship and bad architectural detailing at the perimeter of the EIFS was the problem. As a result, building codes began mandating a drainage system for EIFS systems on wood-frame buildings and additional on-site inspection.
Though there are some cases where insurance companies may not offer coverage for EIFS, several companies do. EIFS systems installed at lower building levels are subject to vandalism, as the material is soft and can be chipped or carved resulting in significant damage. In these cases, heavier ounce reinforci
|
https://en.wikipedia.org/wiki/Architecture%20of%20macOS
|
The architecture of macOS describes the layers of the operating system that is the culmination of Apple Inc.'s decade-long research and development process to replace the classic Mac OS.
After the failures of their previous attempts—Pink, which started as an Apple project but evolved into a joint venture with IBM called Taligent, and Copland, which started in 1994 and was cancelled two years later—Apple began development of Mac OS X, later renamed OS X and then macOS, with the acquisition of NeXT's NeXTSTEP in 1997.
Development
NeXTSTEP
NeXTSTEP used a hybrid kernel that combined the Mach 2.5 kernel developed at Carnegie Mellon University with subsystems from 4.3BSD. NeXTSTEP also introduced a new windowing system based on Display PostScript, intended to achieve better WYSIWYG behavior by using the same language to draw content on monitors that was used to draw content on printers. NeXT also included object-oriented programming tools based on the Objective-C language that it had acquired from Stepstone, and a collection of Frameworks (or Kits) intended to speed software development. NeXTSTEP originally ran on Motorola's 68k processors, but was later ported to Intel's x86, Hewlett-Packard's PA-RISC and Sun Microsystems' SPARC processors. Later on, the developer tools and frameworks were released, as OpenStep, as a development platform that would run on other operating systems.
Rhapsody
On February 4, 1997, Apple acquired NeXT and began development of the Rhapsody operating system. Rhapsody built on NeXTSTEP, porting the core system to the PowerPC architecture and adding a redesigned user interface based on the Platinum user interface from Mac OS 8. An emulation layer called Blue Box allowed Mac OS applications to run within an actual instance of the Mac OS, and the system also integrated a Java platform. The Objective-C developer tools and Frameworks were referred to as the Yellow Box and also made available separately for Microsoft Windows. The Rhapsody project eventually b
|
https://en.wikipedia.org/wiki/Axiomatic%20design
|
Axiomatic design is a systems design methodology using matrix methods to systematically analyze the transformation of customer needs into functional requirements, design parameters, and process variables. Specifically, a set of functional requirements (FRs) is related to a set of design parameters (DPs) by a design matrix A: {FR} = [A]{DP}.
The method gets its name from its use of design principles or design Axioms (i.e., given without proof) governing the analysis and decision making process in developing high quality product or system designs. The two axioms used in Axiomatic Design (AD) are:
Axiom 1: The Independence Axiom. Maintain the independence of the functional requirements (FRs).
Axiom 2: The Information Axiom. Minimize the information content of the design.
Axiomatic design is considered to be a design method that addresses fundamental issues in Taguchi methods.
Coupling is the term Axiomatic Design uses to describe a lack of independence between the FRs of the system as determined by the DPs. That is, if varying one DP has a significant impact on two separate FRs, the FRs are said to be coupled. Axiomatic Design introduces matrix analysis of the design matrix to both assess and mitigate the effects of coupling.
Axiom 2, the Information Axiom, provides a metric of the probability that a specific DP will deliver the functional performance required to satisfy the FR. The metric is normalized to be summed up for the entire system being modeled. Systems with less functional performance risk (minimal information content) are preferred over alternative systems with higher information content.
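The Independence Axiom is commonly checked from the structure of the design matrix: in the axiomatic design literature, a diagonal matrix indicates an uncoupled design, a triangular matrix a decoupled design (the FRs can be satisfied in sequence), and a full matrix a coupled design. A small illustrative sketch (the helper below is hypothetical, not part of any standard AD toolkit):

```python
def classify_design(matrix, tol=1e-9):
    """Classify a square design matrix A (with {FR} = [A]{DP}) per the Independence Axiom.

    Diagonal -> "uncoupled", triangular -> "decoupled", otherwise -> "coupled".
    """
    n = len(matrix)
    upper = any(abs(matrix[i][j]) > tol for i in range(n) for j in range(i + 1, n))
    lower = any(abs(matrix[i][j]) > tol for i in range(n) for j in range(i))
    if not upper and not lower:
        return "uncoupled"
    if not upper or not lower:   # one triangle empty: FRs can be satisfied in order
        return "decoupled"
    return "coupled"             # varying one DP disturbs several FRs at once

print(classify_design([[1, 0], [0, 2]]))  # uncoupled
print(classify_design([[1, 0], [3, 2]]))  # decoupled
print(classify_design([[1, 2], [3, 2]]))  # coupled
```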
The methodology was developed by Dr. Suh Nam Pyo at the MIT Department of Mechanical Engineering, beginning in the 1990s. A series of academic conferences have been held to present current developments of the methodology.
See also
Design structure matrix (DSM)
New product development (NPD)
Design for Six Sigma
Six Sigma
Taguchi methods
Axiomatic product development life
|
https://en.wikipedia.org/wiki/Ferranti-Packard%206000
|
The FP-6000 was a second-generation mainframe computer developed and built by Ferranti-Packard, the Canadian division of Ferranti, in the early 1960s. It is particularly notable for supporting multitasking, being one of the first commercial machines to do so. Only six FP-6000s were sold before the computer division of Ferranti-Packard was sold off by Ferranti's UK headquarters in 1963, the FP-6000 becoming the basis for the mid-range machines of the ICT 1900, which sold into the thousands in Europe.
Background
What was to become the FP-6000 had its genesis in a Royal Canadian Navy project starting in 1949 called DATAR. For DATAR, Ferranti-Packard (then still known as Ferranti Canada) built an experimental computer to share information among ships in a convoy. Although the prototype was a success, the failure rate of the vacuum tubes was a concern to everyone and Ferranti suggested they re-build the machine using transistors instead. DATAR ran out of funds before this conversion could take place, but Ferranti put the experience to good use in a series of one-off transistorized machines. One such example was a cheque sorting system built for the Federal Reserve Bank, itself a modification of a system developed to sort mail for the Canadian Post Office.
The developmental series eventually culminated in ReserVec. ReserVec was the first computerized reservation system to enter service when it took over all bookings for Air Canada in 1961. Ferranti initially had high hopes for the machine, thinking that it would be successful in Europe if sold by the UK headquarters' sales staff. As had happened many times in the past, however, the UK computer team suffered from a terminal case of not invented here, and decided it was better if they designed their own instead. Their project was never delivered, and ReserVec withered.
Ferranti-Packard was unwilling to simply let the development effort go to waste, and started looking for ways to commercialize the ReserVec hardware into
|
https://en.wikipedia.org/wiki/Impact%20ionization
|
Impact ionization is the process in a material by which one energetic charge carrier can lose energy by the creation of other charge carriers. For example, in semiconductors, an electron (or hole) with enough kinetic energy can knock a bound electron out of its bound state (in the valence band) and promote it to a state in the conduction band, creating an electron-hole pair. For carriers to have sufficient kinetic energy a sufficiently large electric field must be applied, in essence requiring a sufficiently large voltage but not necessarily a large current.
If this occurs in a region of high electrical field then it can result in avalanche breakdown. This process is exploited in avalanche diodes, by which a small optical signal is amplified before entering an external electronic circuit. In an avalanche photodiode the original charge carrier is created by the absorption of a photon.
The impact ionization process is used in modern cosmic dust detectors like the Galileo Dust Detector and dust analyzers Cassini CDA, Stardust CIDA and the Surface Dust Analyser for the identification of dust impacts and the compositional analysis of cosmic dust particles.
In some sense, impact ionization is the reverse process to Auger recombination.
Avalanche photodiodes (APDs) are used in optical receivers: before the signal is passed to the receiver circuitry, the photocurrent is multiplied, which increases the sensitivity of the receiver, since the photocurrent is amplified before it encounters the thermal noise associated with the receiver circuit.
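The avalanche gain of such a diode is often approximated by Miller's empirical formula, M = 1 / (1 − (V/V_B)^n), where V_B is the breakdown voltage and n is a material-dependent exponent. A sketch with purely illustrative parameter values:

```python
def avalanche_multiplication(v, v_breakdown, n=3.0):
    """Miller's empirical formula M = 1 / (1 - (V / V_B) ** n), valid for 0 <= V < V_B."""
    if not 0 <= v < v_breakdown:
        raise ValueError("formula applies only for 0 <= V < V_B")
    return 1.0 / (1.0 - (v / v_breakdown) ** n)

# Gain is unity at zero bias and diverges as the bias approaches breakdown:
for v in (0.0, 20.0, 35.0, 39.0):
    print(f"V = {v:4.1f} V -> M = {avalanche_multiplication(v, v_breakdown=40.0):6.2f}")
```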
See also
Multiphoton ionization
Avalanche breakdown
Avalanche diode
Avalanche photodiode
References
External links
Animation showing impact ionization in a semiconductor
Semiconductors
Ionization
|
https://en.wikipedia.org/wiki/C.%20Thomas%20Elliott
|
Charles Thomas Elliott (known as Tom Elliott; born 16 January 1939) is a scientist in the fields of narrow-gap semiconductor and infrared detector research.
Early life
Hailing from County Durham, he attended Washington Grammar Technical School. After gaining his Ph.D. he worked at the University of Manchester.
Career
He joined RRE in Malvern, Worcestershire in the late 1960s. In the 1970s he invented the SPRITE detector (Signal PRocessing In The Element) which was also known as the TED (Tom Elliott's Detector). This was a photoconductor device in which the infrared scene was scanned across the detector (made from HgCdTe) at the same rate as the carriers drifted under an applied controlled constant bias current. This device became part of TICM - the standard UK thermal imaging common module used since the 1980s by UK armed forces. Tom Elliott received a Rank Prize in 1982 for this work and was elected a Fellow of the Royal Society in 1988. He was appointed CBE in the 1994 Birthday Honours.
He won the Clifford Paterson Medal and Prize in 1997.
Tom Elliott also contributed to the development of the semiconductor indium antimonide (InSb) as an infrared detector, magnetic sensor and fast, low voltage transistor material. He was involved in the exploitation of negative luminescence in diode structures.
He retired from DERA, the successor to RRE, in 1999 and is an honorary professor at Heriot-Watt University.
Personal life
A conference centre at DERA Malvern (by 2007 QinetiQ) was named 'The Tom Elliott Centre' in his honour when opened by the Princess Royal in 2007. He lives in Malvern.
Bibliography
Infrared Detectors and Emitters: Materials and Devices, edited by Peter Capper and C T Elliott, Springer (2000)
An infrared detector with integrated signal processing, C. T. Elliott, Electron Devices Meeting, 1982 International, Vol. 28 Page(s): 132 - 135 (1982)
Uncooled InSb/In1–xAlxSb mid-infrared emitter, T. Ashley, C. T. Elliott, N. T. Gordon, R. S. Hall, A
|
https://en.wikipedia.org/wiki/Society%20of%20Professional%20Engineering%20Employees%20in%20Aerospace
|
The Society of Professional Engineering Employees in Aerospace (SPEEA), IFPTE Local 2001 is a professional labor union representing more than 24,000 engineers, technical workers and other professionals in the aerospace industry. SPEEA represents employees at The Boeing Company, Spirit AeroSystems, BAE Systems and Triumph Composite Systems. Members work in Washington, Kansas, Oregon, Utah, Texas, California, and Florida.
SPEEA is governed by an elected Executive Board and Council, but daily operations are handled by a professional staff and Executive Director. Union headquarters are in Seattle, with branch offices and union halls in Everett, Washington and Wichita, Kansas.
History
According to Richard Henning, SPEEA's co-founder and engineer, SPEEA's earliest beginnings were meetings at the Seattle YMCA in 1945 to frame its first constitution.
SPEEA was formed in 1946 by a group of Boeing engineers in Seattle, Washington and is an affiliated local union of the International Federation of Professional and Technical Engineers (IFPTE). On behalf of its members, SPEEA negotiates contracts with employers; it also provides assistance with resolving workplace and benefit issues. SPEEA originally stood for Seattle Professional Engineering Employees Association.
In an interview in February 2015, the centenarian co-founder Richard Henning, who had retired from Boeing as an executive in 1979, remarked:
“It’s a smart idea to have an organization speak for an individual instead of one on one. In the old days, if you sat down to negotiate with the chief of engineering it was pretty difficult to express ideas. Whereas in a union, there was discussion to bring out points and that happened on a daily basis.”
References
External links
Society of Professional Engineering Employees in Aerospace website
Aerospace engineering organizations
Engineering societies based in the United States
Trade unions in the United States
Organizations based in Seattle
Trade unions established in 1946
|
https://en.wikipedia.org/wiki/Dear%20Doctor
|
"Dear Doctor" is the thirteenth episode of the first season of the American science fiction television series Star Trek: Enterprise, and originally aired on January 23, 2002, on UPN. The episode was written by Maria and Andre Jacquemetton, and was directed by James A. Contner.
Set in the 22nd century, the series follows the adventures of the first Starfleet starship Enterprise, registration NX-01. In "Dear Doctor", Dr. Phlox (John Billingsley) faces a serious dilemma as a dying race begs for help from the crew of the Enterprise. The culture consists of two related races, but only the more genetically advanced race has been stricken by a planet-wide plague.
The episode is significant for introducing the concepts and motivations of the Prime Directive just prior to the founding of the United Federation of Planets.
UPN requested that the ending of the episode be changed, something that Billingsley did not like. However, he and other members of the cast and crew approved of the final episode. Due to its subject matter and ending, the episode is considered controversial by critics and audiences alike. Although "Dear Doctor" received the same audience share as the previous episode, viewership dropped 6.6% to 5.7 million viewers for its first broadcast.
Plot
Doctor Phlox receives a letter from his Interspecies Medical Exchange counterpart, Doctor Jeremy Lucas, who is serving a term on Denobula. He begins to compose a letter back, describing his experiences with the crew, and the ways in which humans are different. Meanwhile, on the Bridge, the crew are discussing a pre-warp vessel they have encountered. The alien they speak with, a Valakian, begs them to assist with a medical emergency their species is facing. Sub-Commander T'Pol reveals that the Vulcans are unaware of the species, but she agrees with Captain Jonathan Archer to help them. Phlox continues his letter, describing the challenges of treating the disease – with over fifty million lives at stak
|
https://en.wikipedia.org/wiki/Single-ended%20triode
|
A single-ended triode (SET) is a vacuum tube electronic amplifier that uses a single triode to produce an output, in contrast to a push-pull amplifier which uses a pair of devices with antiphase inputs to generate an output with the wanted signals added and the distortion components subtracted. Single-ended amplifiers normally operate in Class A; push-pull amplifiers can also operate in Classes AB or B without excessive net distortion, due to cancellation.
The term single-ended triode amplifier is mainly used for output stages of audio power amplifiers.
The phrase directly heated triode single-ended triode amplifier (abbreviated to DHT SET) is used when directly heated triodes are used.
There are also single-ended tetrode, beam tetrode/beam power tube/kinkless tetrode, and pentode amplifiers with the same functionality and similar circuitry; e.g. this Mullard design.
Audio power amplifiers
A typical triode audio power amplifier will have a driver that provides voltage gain, coupled to a triode (like 2A3 and 300B) or a pentode or kinkless tetrode such as EL34 or KT88 connected as a triode, connected to the loudspeaker through an audio transformer in a common cathode arrangement. The triode is biased to Class A operation by applying a suitable negative bias voltage to its input control grid (see diagram), or by raising the cathode potential with biasing components.
In a traditional SET amp, the direct current through the output triode (from 30 mA for a triode-strapped 6V6 to 250 mA for a 6C33C) flows continuously through the primary winding of the output transformer. This requires inserting a gap in the transformer core to prevent core saturation by the DC current; adding a gap decreases primary inductance and limits bass response. The inductance and bass response can be restored by using a larger transformer than would be needed if the DC were not present.
An alternative schematic, the parafeed amplifier, solves the bandwidth problem by blocking direct current from the output transformer (which does not need to be
|
https://en.wikipedia.org/wiki/Pilot%20chute
|
A pilot chute is a small auxiliary parachute used to deploy the main or reserve parachute. The pilot chute is connected by a bridle to the deployment bag containing the parachute. Pilot chutes are a critical component of all modern skydiving and BASE jumping gear. Pilot chutes are also used as a component of spacecraft such as NASA's Orion.
Deployment methods
Spring-loaded
The spring-loaded pilot chute is used in conjunction with a ripcord. When the user pulls the ripcord, the container opens, allowing the pilot chute, compressed inside and loaded with a large spring, to jump out. Spring-loaded pilot chutes are mainly used to deploy reserve parachutes. They are often also used to deploy the main parachute on skydiving students' equipment. They are also commonly used with drogue parachutes in cars and in aircraft such as the B-52 bomber.
Pull-out
The pull-out and throw-out pilot chutes are identical in construction; the difference is in their connection to the handle and the bridle, and in the way they are packed.
With the pull-out system, the pilot chute is packed inside the container. The activation handle is attached to a lanyard, which in turn is attached to the closing pin. The lanyard is also attached to the base of the pilot chute, at the point of connection to the bridle. When the user pulls the handle, the closing pin is pulled, opening the container. Continuing the pull, the user pulls the pilot chute out of the container and into the airstream, at which point the pilot chute inflates and pulls the main parachute out of the container.
Throw-out
The throw-out pilot chute is the most popular type in use today. The pilot chute is packed in a pouch at the bottom of the container (often called BOC for short). The handle is attached to the apex of the pilot chute. When the user grabs the handle and throws the pilot chute into the airstream, the bridle extends, pulling the closing pin and opening the container, as the pilot chute continues in the ai
|
https://en.wikipedia.org/wiki/Exponential%20factorial
|
The exponential factorial is a positive integer n raised to the power of n − 1, which in turn is raised to the power of n − 2, and so on in a right-grouping manner. That is,
n^((n−1)^((n−2)^(⋯^(2^1))))
The exponential factorial can also be defined with the recurrence relation
a_1 = 1, a_n = n^(a_(n−1)).
The first few exponential factorials are 1, 2, 9, 262144, ... For example, 262144 is an exponential factorial since
262144 = 4^(3^(2^1)).
Using the recurrence relation, the first exponential factorials are:
1
2^1 = 2
3^2 = 9
4^9 = 262144
5^262144 = 6206069878...8212890625 (183231 digits)
The exponential factorials grow much more quickly than regular factorials or even hyperfactorials. The number of digits in the exponential factorial of 6 is approximately 5 × 10^183230.
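The values listed above can be computed directly from the recurrence using Python's arbitrary-precision integers; the digit count of the fifth value matches the 183231 quoted above. (The helper name is illustrative.)

```python
import math

def exponential_factorial(n):
    """Evaluate the recurrence a_1 = 1, a_k = k ** a_(k-1)."""
    a = 1
    for k in range(2, n + 1):
        a = k ** a
    return a

print([exponential_factorial(n) for n in range(1, 5)])  # [1, 2, 9, 262144]

# a_5 = 5 ** 262144 is huge; count its digits analytically instead of via str():
digits = math.floor(262144 * math.log10(5)) + 1
print(digits)  # 183231
```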
The sum of the reciprocals of the exponential factorials from 1 onwards is the following transcendental number:
1/1 + 1/2 + 1/9 + 1/262144 + ⋯ ≈ 1.6111149258...
This sum is transcendental because it is a Liouville number.
Like tetration, there is currently no accepted method of extending the exponential factorial function to real and complex values of its argument, unlike the factorial function, for which such an extension is provided by the gamma function. However, it is possible to extend it if it is defined on a strip of width 1.
Similarly, there is disagreement about the appropriate value at 0; any value would be consistent with the recursive definition. A smooth extension to the reals would satisfy f(x) = x^(f(x − 1)), which suggests a value strictly between 0 and 1.
Related functions, notation and conventions
References
Jonathan Sondow, "Exponential Factorial" From Mathworld, a Wolfram Web resource
Factorial and binomial topics
Integer sequences
Large integers
Exponentials
|
https://en.wikipedia.org/wiki/X-Video%20Motion%20Compensation
|
X-Video Motion Compensation (XvMC) is an extension of the X video extension (Xv) for the X Window System. The XvMC API allows video programs to offload portions of the video decoding process to the GPU video hardware. In theory this process should also reduce bus bandwidth requirements. Currently, the portions that can be offloaded by XvMC onto the GPU are motion compensation (mo comp) and inverse discrete cosine transform (iDCT) for MPEG-2 video. XvMC also supports offloading decoding of mo comp, iDCT, and VLD ("Variable-Length Decoding", more commonly known as "slice level acceleration") for not only MPEG-2 but also MPEG-4 ASP video on VIA Unichrome (S3 Graphics Chrome Series) hardware.
XvMC was the first UNIX equivalent of the Microsoft Windows DirectX Video Acceleration (DxVA) API. Popular software applications known to take advantage of XvMC include MPlayer, MythTV, and xine.
Device drivers
Each hardware video GPU capable of XvMC video acceleration requires an X11 software device driver to enable these features.
Hardware manufacturers
Nvidia
There are currently three X11 Nvidia drivers available: a 2D-only open source but obfuscated driver maintained by Nvidia called nv, a proprietary binary driver by Nvidia, and an open source driver based on reverse engineering of the binary driver developed by the Linux community called Nouveau. Nouveau is not pursuing XvMC support, the 2D nv driver does not support XvMC, and the official proprietary binary driver by Nvidia only supports MPEG-2 offloading (mo comp and iDCT) on hardware up to and including the GeForce 7000 series.
VIA
VIA provides open source device drivers for some of its VIA Unichrome (S3 Graphics Chrome Series) hardware, supporting offloading of MPEG-2 and MPEG-4 ASP video.
Thanks to its VLD level of decoding, VIA hardware offloads many more decoding tasks from the CPU than GPUs that support only the iDCT or mo comp levels. Note that not all devices are supported and there are some other caveats.
Intel
Intel pr
|
https://en.wikipedia.org/wiki/Pythagorean%20means
|
In mathematics, the three classical Pythagorean means are the arithmetic mean (AM), the geometric mean (GM), and the harmonic mean (HM). These means were studied with proportions by Pythagoreans and later generations of Greek mathematicians because of their importance in geometry and music.
Definition
They are defined by:
AM(x_1, ..., x_n) = (x_1 + x_2 + ... + x_n) / n
GM(x_1, ..., x_n) = (x_1 · x_2 · ... · x_n)^(1/n)
HM(x_1, ..., x_n) = n / (1/x_1 + 1/x_2 + ... + 1/x_n)
Properties
Each mean, M, has the following properties:
First-order homogeneity: M(b·x_1, ..., b·x_n) = b·M(x_1, ..., x_n)
Invariance under exchange: M(..., x_i, ..., x_j, ...) = M(..., x_j, ..., x_i, ...) for any i and j.
Monotonicity: if x_i ≤ y_i for all i, then M(x_1, ..., x_n) ≤ M(y_1, ..., y_n)
Idempotence: M(x, x, ..., x) = x
Monotonicity and idempotence together imply that a mean of a set always lies between the extremes of the set:
The harmonic and arithmetic means are reciprocal duals of each other for positive arguments, HM(x_1, ..., x_n) = 1 / AM(1/x_1, ..., 1/x_n), while the geometric mean is its own reciprocal dual: GM(x_1, ..., x_n) = 1 / GM(1/x_1, ..., 1/x_n).
Inequalities among means
There is an ordering to these means (if all of the x_i are positive):
min ≤ HM ≤ GM ≤ AM ≤ max,
with equality holding if and only if the x_i are all equal.
This is a generalization of the inequality of arithmetic and geometric means and a special case of an inequality for generalized means. The proof follows from the arithmetic–geometric mean inequality, AM ≤ max, and reciprocal duality (min and max are also reciprocal dual to each other).
The study of the Pythagorean means is closely related to the study of majorization and Schur-convex functions. The harmonic and geometric means are concave symmetric functions of their arguments, and hence Schur-concave, while the arithmetic mean is a linear function of its arguments and hence is both concave and convex.
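The definitions and the ordering HM ≤ GM ≤ AM are easy to verify numerically; a minimal sketch (the function names are illustrative):

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    return math.prod(xs) ** (1.0 / len(xs))

def harmonic_mean(xs):
    return len(xs) / sum(1.0 / x for x in xs)

xs = [1.0, 4.0, 4.0]
hm, gm, am = harmonic_mean(xs), geometric_mean(xs), arithmetic_mean(xs)

assert hm <= gm <= am  # HM <= GM <= AM for positive arguments
# Reciprocal duality: HM of xs equals the reciprocal of the AM of the reciprocals.
assert math.isclose(hm, 1.0 / arithmetic_mean([1.0 / x for x in xs]))
print(hm, gm, am)  # 2.0, ~2.52, 3.0
```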
History
Almost everything that we know about the Pythagorean means came from arithmetic handbooks written in the first and second century. Nicomachus of Gerasa says that they were “acknowledged by all the ancients, Pythagoras, Plato and Aristotle.” Their earliest known use is a fragment of the Pythagorean philosopher Archytas of Tarentum:
The name "harmonic mean", according to Iamblichus, was coined by Archytas and Hippasus. The Pythagorean means also appear in Plato's Timaeus. Another evidence of t
|
https://en.wikipedia.org/wiki/POWER3
|
The POWER3 is a microprocessor, designed and exclusively manufactured by IBM, that implemented the 64-bit version of the PowerPC instruction set architecture (ISA), including all of the optional instructions of the ISA (at the time) such as instructions present in the POWER2 version of the POWER ISA but not in the PowerPC ISA. It was introduced on 5 October 1998, debuting in the RS/6000 43P Model 260, a high-end graphics workstation. The POWER3 was originally supposed to be called the PowerPC 630 but was renamed, probably to differentiate the server-oriented POWER processors it replaced from the more consumer-oriented 32-bit PowerPCs. The POWER3 was the successor of the P2SC derivative of the POWER2 and completed IBM's long-delayed transition from POWER to PowerPC, which was originally scheduled to conclude in 1995. The POWER3 was used in IBM RS/6000 servers and workstations at 200 MHz. It competed with the Digital Equipment Corporation (DEC) Alpha 21264 and the Hewlett-Packard (HP) PA-8500.
Description
The POWER3 was based on the PowerPC 620, an earlier 64-bit PowerPC implementation that was late, under-performing and commercially unsuccessful. Like the PowerPC 620, the POWER3 has three fixed-point units, but the single floating-point unit (FPU) was replaced with two floating-point fused multiply–add units, and an extra load-store unit was added (for a total of two) to improve floating-point performance. The POWER3 is a superscalar design that executed instructions out of order. It has a seven-stage integer pipeline, a minimal eight-stage load/store pipeline and a ten-stage floating-point pipeline.
The front end consists of two stages: fetch and decode. During the first stage, eight instructions were fetched from a 32 KB instruction cache and placed in a 12-entry instruction buffer. During the second stage, four instructions were taken from the instruction buffer, decoded, and issued to instruction queues. Restrictions on instruction issue are few: of the two i
|
https://en.wikipedia.org/wiki/List%20of%20cohomology%20theories
|
This is a list of some of the ordinary and generalized (or extraordinary) homology and cohomology theories in algebraic topology that are defined on the categories of CW complexes or spectra. For other sorts of homology theories see the links at the end of this article.
Notation
S = π = S^0 is the sphere spectrum.
S^n is the spectrum of the n-dimensional sphere.
S^nY = S^n∧Y is the nth suspension of a spectrum Y.
[X,Y] is the abelian group of morphisms from the spectrum X to the spectrum Y, given (roughly) as homotopy classes of maps.
[X,Y]_n = [S^nX, Y]
[X,Y]_* is the graded abelian group given as the sum of the groups [X,Y]_n.
π_n(X) = [S^n, X] = [S, X]_n is the nth stable homotopy group of X.
π_*(X) is the sum of the groups π_n(X), and is called the coefficient ring of X when X is a ring spectrum.
X∧Y is the smash product of two spectra.
If X is a spectrum, then it defines generalized homology and cohomology theories on the category of spectra as follows.
X_n(Y) = [S, X∧Y]_n = [S^n, X∧Y] is the generalized homology of Y,
X^n(Y) = [Y, X]_{-n} = [S^{-n}Y, X] is the generalized cohomology of Y
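In displayed form, the two defining formulas are:

```latex
X_n(Y) = [S, X \wedge Y]_n = [S^n, X \wedge Y], \qquad
X^n(Y) = [Y, X]_{-n} = [S^{-n} Y, X].
```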
Ordinary homology theories
These are the theories satisfying the "dimension axiom" of the Eilenberg–Steenrod axioms that the homology of a point vanishes in dimension other than 0. They are determined by an abelian coefficient group G, and denoted by H(X, G) (where
G is sometimes omitted, especially if it is Z). Usually G is the integers, the rationals, the reals, the complex numbers, or the integers mod a prime p.
The cohomology functors of ordinary cohomology theories are represented by Eilenberg–MacLane spaces.
On simplicial complexes, these theories coincide with singular homology and cohomology.
Homology and cohomology with integer coefficients.
Spectrum: H (Eilenberg–MacLane spectrum of the integers.)
Coefficient ring: π_n(H) = Z if n = 0, 0 otherwise.
The original homology theory.
Homology and cohomology with rational (or real or complex) coefficients.
Spectrum: HQ (Eilenberg–Mac
|
https://en.wikipedia.org/wiki/Mechanical%20engineering%20technology
|
Mechanical engineering technology is the application of engineering principles and technological developments for the creation of useful products and production machinery.
Technologists
Mechanical engineering technologists are expected to apply current technologies and principles from machine and product design, production and material and manufacturing processes.
Expandable specialties may include aerospace, automotive, energy, nuclear, petroleum, manufacturing, product development, and industrial design.
Mechanical engineering technologists can have many different titles, including in the United States:
Mechanical Engineering Technologist
Mechanical Engineer
Product Engineering Technologist
Mechanical Designer
Product Development Engineering Technologist
Manufacturing Engineering Technologist
Training
Mechanical Engineering Technology coursework is less theoretical and more application-based than that of a mechanical engineering degree. This is evident in the additional laboratory coursework required for the degree. The ability to apply concepts from the chemical engineering and electrical engineering fields is important.
Some university Mechanical Engineering Technology degree programs require mathematics through differential equations and statistics. Most courses involve algebra and calculus.
Oftentimes, an MET graduate may be hired as an engineer; job titles may include Mechanical Engineer and Manufacturing Engineer.
In the U.S. it is possible to earn an associate or bachelor's degree. Individuals with a bachelor's degree in engineering technology may, if the program is ABET-accredited, take the E.I.T. (Engineer in Training) examination and eventually become Professional Engineers.
Applications
Software tools such as Finite Element Analysis (FEA) and Computational Fluid Dynamics (CFD) are often used to analyze parts and assemblies. 3D models can be made to represent parts and assemblies with computer-aided design (CAD) software
|
https://en.wikipedia.org/wiki/CRI%20Middleware
|
CRI Middleware (formerly CSK Research Institute Corp.) is a Japanese developer providing middleware for use in the video game industry. From the early nineties, CRI was a video game developer, but shifted focus in 2001.
History
CRI started out as CSK Research Institute, subsidiary of CSK, producing video games for the Mega Drive/Genesis. It went on to develop games for the Sega Saturn and Dreamcast before it was incorporated as CRI Middleware in 2001. In 2006, CRI Middleware introduced the CRIWARE brand.
Games
Developed
Published
CRIWARE
CRI ADX
CRI ADX is a streamed audio format which allows for multiple audio streams, seamless looping and continuous playback (allowing two or more files to be crossfaded or played in sequence) with low, predictable CPU usage. The format uses the ADPCM framework.
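ADX's exact predictor coefficients and bit layout are proprietary, but the underlying ADPCM idea — quantize the difference between each sample and a prediction, so the stream stays cheap to decode — can be sketched with toy parameters:

```python
def adpcm_roundtrip(samples, step=64):
    """Encode then decode with a toy 4-bit differential quantizer.

    Illustrates the ADPCM framework generically; the step size and
    predictor here are hypothetical, not ADX's actual codec parameters.
    """
    prev, codes = 0, []
    for s in samples:
        # Quantize the prediction error to a 4-bit code in [-8, 7].
        code = max(-8, min(7, round((s - prev) / step)))
        codes.append(code)
        prev += code * step          # encoder tracks the decoder's state
    prev, decoded = 0, []
    for c in codes:
        prev += c * step             # decoder rebuilds samples additively
        decoded.append(prev)
    return decoded
```

As long as the signal's slew stays within the quantizer's range, the reconstruction error is bounded by roughly half a step.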
CRI Sofdec
CRI Sofdec is a streamed video format supporting up to 24-bit color, with multistreaming and seamless playback at frame rates of up to 60 frames per second. It is essentially a repackaging of MPEG-1/MPEG-2 video with CRI's proprietary ADX codec for audio playback.
CRI Clipper
CRI Clipper is an automated lip-syncing program which analyzes waveforms and outputs an appropriate lip pattern into a text file, for later substitution into the facial animations of the (in-game) speaker.
CRI ROFS
CRI ROFS is a file management system for handling a virtual disc image, an extension of the CD-ROM standard. It has no limitations on file name format, or number of directories or files, and has been designed with compatibility with ADX and Sofdec in mind.
CRI Sound Factory
CRI Sound Factory is a GUI-based video game audio tool for effective sound design without input from programmers. It has support for the previewing and playback of generated audio.
CRI Movie Encode
CRI Movie Encode is a video encoding service by which CRI generates Sofdec or MPEG files from other media. For a fee (designated by the length of the file to be encoded), files are converted to the desired fo
|
https://en.wikipedia.org/wiki/Inclusion%20%28taxonomy%29
|
In taxonomy, inclusion is the process whereby two species that were believed to be distinct are found in fact to be the same and are thus combined as one species. Which name is kept for this unified species is sometimes a cause of debate, but generally it is the earlier-named one, and the other species is said to be "included" within this one.
Inclusion is far more common in paleontology than in the biology of living species, although it is not unheard of in the latter. When it occurs with recent or modern species, it is usually the result of a species having a wide geographical dispersion.
References
Taxonomy (biology)
|
https://en.wikipedia.org/wiki/Advanced%20process%20control
|
In control theory, advanced process control (APC) refers to a broad range of techniques and technologies implemented within industrial process control systems. Advanced process controls are usually deployed optionally and in addition to basic process controls. Basic process controls are designed and built with the process itself, to facilitate basic operation, control and automation requirements. Advanced process controls are typically added subsequently, often over the course of many years, to address particular performance or economic improvement opportunities in the process.
Process control (basic and advanced) normally implies the process industries, which includes chemicals, petrochemicals, oil and mineral refining, food processing, pharmaceuticals, power generation, etc. These industries are characterized by continuous processes and fluid processing, as opposed to discrete parts manufacturing, such as automobile and electronics manufacturing. The term process automation is essentially synonymous with process control.
Process controls (basic as well as advanced) are implemented within the process control system, which may mean a distributed control system (DCS), programmable logic controller (PLC), and/or a supervisory control computer. DCSs and PLCs are typically industrially hardened and fault-tolerant. Supervisory control computers are often not hardened or fault-tolerant, but they bring a higher level of computational capability to the control system, to host valuable, but not critical, advanced control applications. Advanced controls may reside in either the DCS or the supervisory computer, depending on the application. Basic controls reside in the DCS and its subsystems, including PLCs.
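The division of labor can be pictured schematically — a basic PI loop as it would run in a DCS or PLC, with a supervisory layer periodically moving its setpoint. All gains and numbers here are hypothetical:

```python
class BasicPI:
    """Basic regulatory (PI) loop, as would run in a DCS or PLC."""
    def __init__(self, kp, ki, setpoint):
        self.kp, self.ki, self.setpoint = kp, ki, setpoint
        self.integral = 0.0

    def update(self, measurement, dt=1.0):
        error = self.setpoint - measurement
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

# Supervisory (advanced) layer: every 50 ticks it moves the basic loop's
# setpoint toward a hypothetically better operating point, up to a limit.
loop = BasicPI(kp=4.0, ki=0.5, setpoint=50.0)
level = 40.0                      # process variable of a toy integrating plant
for tick in range(200):
    if tick % 50 == 0:
        loop.setpoint = min(55.0, loop.setpoint + 1.0)
    level += 0.1 * loop.update(level)   # plant integrates the control output
```

The basic loop keeps tracking whatever setpoint it is given; the advanced layer only decides what that setpoint should be — which is why it can safely live on a less fault-tolerant supervisory computer.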
Types of Advanced Process Control
The following is a list of well-known types of advanced process control:
Advanced regulatory control (ARC) refers to several proven advanced control techniques, such as override or adaptive gain (but in all cases, "regulating o
|
https://en.wikipedia.org/wiki/Reinnervation
|
Reinnervation is the restoration, either by spontaneous cellular regeneration or by surgical grafting, of nerve supply to a body part from which it has been lost or damaged.
See also
Denervation
Neuroregeneration
Targeted reinnervation
References
Neuroscience
|
https://en.wikipedia.org/wiki/List%20of%20SAP%20products
|
This article presents a partial list of products of the enterprise software company SAP SE.
Major units
SAP S/4HANA (Enterprise Resource Planning on-premise and cloud)
SAP Business ByDesign (SME Cloud Enterprise Resource Planning)
SAP Business One (B1 on HANA) (Small enterprise Enterprise Resource Planning)
SAP CRM (Customer Relationship Management)
SAP ERP (Enterprise Resource Planning)
SAP PLM (Product Lifecycle Management)
SAP SCM (Supply Chain Management)
SAP SRM (Supplier Relationship Management)
Business software
SAP Advanced Data Migration (ADP)
SAP Advanced Planner and Optimizer
SAP Analytics Cloud (SAC)
SAP Advanced Business Application Programming (ABAP)
SAP Apparel and Footwear Solution (AFS)
SAP Business Information Warehouse (BW)
SAP Business ByDesign (ByD)
SAP Business Explorer (Bex)
SAP BusinessObjects Lumira
SAP BusinessObjects Web Intelligence (Webi)
SAP Business One
SAP Business Partner Screening
SAP Business Intelligence (BI)
SAP Business Workflow
SAP Catalog Content Management ()
SAP Cloud for Customer (C4C)
SAP Cost Center Accounting (CCA)
SAP Convergent Charging (CC)
SAP Converged Cloud
SAP Data Warehouse Cloud (DWC)
SAP Design Studio
SAP PRD2(P2)
SAP Enterprise Buyer Professional (EBP)
SAP Enterprise Learning
SAP Portal (EP)
SAP Exchange Infrastructure (XI) (From release 7.0 onwards, SAP XI has been renamed as SAP Process Integration (SAP PI))
SAP Extended Warehouse Management (EWM)
SAP FICO
SAP BPC (Business Planning and Consolidation, formerly OutlookSoft)
SAP GRC (Governance, Risk and Compliance)
SAP EHSM (Environment Health & Safety Management)
Enterprise Central Component (ECC)
SAP ERP
SAP HANA (formerly known as High-performance Analytics Appliance)
SAP Human Resource Management Systems (HRMS)
SAP SuccessFactors
SAP Litmos Training Cloud
SAP Information Design Tool (IDT)
SAP Integrated Business Planning (IBP)
SAP Internet Transaction Server (ITS)
SAP Incentive and Commission Management (ICM)
|
https://en.wikipedia.org/wiki/Signal-to-quantization-noise%20ratio
|
Signal-to-quantization-noise ratio (SQNR or SNqR) is a widely used quality measure for analysing digitizing schemes such as pulse-code modulation (PCM). The SQNR reflects the relationship between the maximum nominal signal strength and the quantization error (also known as quantization noise) introduced in the analog-to-digital conversion.
The SQNR formula is derived from the general signal-to-noise ratio (SNR) formula:

SNR = [3 · 2^(2n) / (1 + 4 P_e (2^(2n) − 1))] · [m_m(t)² / m_p(t)²]

where:
P_e is the probability of received bit error
m_p(t) is the peak message signal level
m_m(t) is the mean message signal level
As SQNR applies to quantized signals, the formulae for SQNR refer to discrete-time digital signals. Instead of m(t), the digitized signal x(n) will be used. For N quantization steps, each sample x requires ν = log₂ N bits. The probability distribution function (PDF) represents the distribution of values in x and can be denoted as p(x). The maximum magnitude value of any x is denoted by x_max.
As SQNR, like SNR, is a ratio of signal power to some noise power, it can be calculated as:

SQNR = P_signal / P_noise = E[x²] / E[x̃²]

The signal power is:

P_signal = E[x²] = ∫ x² p(x) dx

The quantization noise power can be expressed as:

E[x̃²] = x_max² / (3 · 4^ν)

where x_max is the full-scale (maximum magnitude) signal level. Giving:

SQNR = 3 · 4^ν · E[x²] / x_max²

When the SQNR is desired in terms of decibels (dB), a useful approximation to SQNR is:

SQNR|dB ≈ 6.02 ν + 10 log₁₀(3 E[x²] / x_max²)

where ν is the number of bits in a quantized sample, and E[x²] is the signal power calculated above. Note that for each bit added to a sample, the SQNR goes up by approximately 6 dB (20 log₁₀ 2 ≈ 6.02).
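The approximation can be checked numerically by quantizing a full-scale sine wave (a sketch; for a sinusoid, E[x²] = 1/2 and x_max = 1, giving the classical 6.02ν + 1.76 dB):

```python
import math

def sqnr_db(nbits, samples=100000):
    """Quantize a full-scale sine wave to nbits and measure SQNR in dB."""
    step = 2.0 / (2 ** nbits)            # uniform steps over [-1, 1)
    sig_pow = err_pow = 0.0
    for k in range(samples):
        x = math.sin(2 * math.pi * 17 * k / samples)  # 17 full cycles
        q = (math.floor(x / step) + 0.5) * step       # mid-riser quantizer
        sig_pow += x * x
        err_pow += (x - q) ** 2
    return 10 * math.log10(sig_pow / err_pow)
```

Each extra bit raises the measured SQNR by roughly 6 dB, matching the rule of thumb above.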
References
B. P. Lathi, Modern Digital and Analog Communication Systems (3rd edition), Oxford University Press, 1998
External links
Signal to quantization noise in quantized sinusoidal - Analysis of quantization error on a sine wave
Digital audio
Engineering ratios
Noise (electronics)
|
https://en.wikipedia.org/wiki/Igor%20Pak
|
Igor Pak (born 1971, Moscow, Soviet Union) is a professor of mathematics at the University of California, Los Angeles, working in combinatorics and discrete probability. He formerly taught at the Massachusetts Institute of Technology and the University of Minnesota, and he is best known for his bijective proof of the hook-length formula for the number of Young tableaux, and his work on random walks. He was a keynote speaker alongside George Andrews and Doron Zeilberger at the 2006 Harvey Mudd College Mathematics Conference on Enumerative Combinatorics.
Pak is an Associate Editor for the journal Discrete Mathematics. He gave a Fejes Tóth Lecture at the University of Calgary in February 2009.
In 2018, he was an invited speaker at the International Congress of Mathematicians in Rio de Janeiro.
Background
Pak went to Moscow High School № 57. After graduating, he worked for a year at Bank Menatep.
He did his undergraduate studies at Moscow State University. He was a PhD student of Persi Diaconis at Harvard University, where he received a doctorate in Mathematics in 1997, with a thesis titled Random Walks on Groups: Strong Uniform Time Approach. Afterwards, he worked with László Lovász as a postdoc at Yale University. He was a fellow at the Mathematical Sciences Research Institute and a long-term visitor at the Hebrew University of Jerusalem.
References
External links
Personal site.
List of published papers, with abstracts.
MIT Mathematics Department website.
MathSciNet: "Items authored by Pak, Igor."
DBLP: Igor Pak.
1971 births
20th-century American mathematicians
21st-century American mathematicians
Combinatorialists
Harvard University alumni
Living people
Massachusetts Institute of Technology School of Science faculty
Moscow State University alumni
Mathematicians from Moscow
Russian emigrants to the United States
University of Minnesota faculty
University of California, Los Angeles faculty
|
https://en.wikipedia.org/wiki/OCCRA
|
OCCRA stands for Oakland County Competitive Robotics Association. OCCRA is an organized competition between the robotics teams of about 30 different high schools in Oakland County, Michigan, United States, that takes place each fall, beginning in early September and ending in early December.
OCCRA vs. FIRST Robotics
Although inspired by FIRST Robotics, OCCRA differs from FIRST in several key ways. Firstly, the student members of the robotics teams are expected to design and build the robots without direct assistance from their adult mentors. This gives students more responsibility and allows them to develop leadership skills.
In OCCRA, teams are also forbidden from having corporate sponsorships. Each team is responsible for raising its own money to promote teamwork and to teach students to work within a budget. The league as a whole has corporate sponsors.
Furthermore, "heavy machinery" is restricted. Lathes and other types of precision machinery are not to be used in the construction of OCCRA-bound robots. Instead, students build their robots with rulers, hacksaws, and cordless drills. This rule is intended to ensure equality among teams with varying resources (e.g. having a machine shop in the team's high school or in a team member's garage).
One key way in which OCCRA does emulate FIRST is that OCCRA maintains a policy of gracious professionalism.
See also
FIRST
External links
Official OCCRA website
ChiefDelphi Forums - OCCRA
AdamBots (Team 245) - OCCRA
Monsters308
PDF file released by Oakland Schools.
OCCRA
|
https://en.wikipedia.org/wiki/Big%20Muskie
|
Big Muskie was a coal mining Bucyrus-Erie dragline excavator owned by the Central Ohio Coal Company (formerly a division of American Electric Power), weighing and standing nearly 22 stories tall. It operated in the U.S. state of Ohio from 1969 to 1991.
Design specifications and service
The Big Muskie was a model 4250-W dragline and was the only one ever built by Bucyrus-Erie. With a bucket, it was the largest single-bucket digging machine ever created and one of the world's largest mobile earth-moving machines, alongside the Ohio-based Marion 6360 stripping shovel called The Captain and the German bucket-wheel excavators of the Bagger 288 and Bagger 293 family. The bucket alone could hold two Greyhound buses side by side. It took over 200,000 man-hours to construct over a period of about two years and cost $25 million in 1969, the equivalent of $ today adjusted for inflation.
Big Muskie was powered by electricity supplied at 13,800 volts via a trailing cable, which had its own transporter/coiling units to move it. The electricity powered the main drives, eighteen and ten DC electric motors. Some systems in Big Muskie were electro-hydraulic, but the main drives were all electric. While working, Big Muskie used the equivalent of the power for 27,500 homes, costing tens of thousands of dollars an hour just in power costs and necessitating special agreements with local Ohio power companies to accommodate the extra load. The machine had a crew of five, and worked around the clock, with special emphasis on night work since the per kilowatt-hour rate was much cheaper.
Once it had stripped all the overburden in one area of the pit, it could move itself short distances (usually less than ) to another pre-prepared digging position using massive hydraulic walker feet, although due to its weight it traveled very slowly () and required a carefully graded travelway with a roadbed of heavy wooden beams to avoid sinking into the soil and tipping over or getting stuck.
|
https://en.wikipedia.org/wiki/Two-point%20tensor
|
Two-point tensors, or double vectors, are tensor-like quantities which transform as Euclidean vectors with respect to each of their indices. They are used in continuum mechanics to transform between reference ("material") and present ("configuration") coordinates. Examples include the deformation gradient and the first Piola–Kirchhoff stress tensor.
As with many applications of tensors, Einstein summation notation is frequently used. To clarify this notation, capital indices are often used to indicate reference coordinates and lowercase for present coordinates. Thus, a two-point tensor will have one capital and one lower-case index; for example, A_{jM}.
Continuum mechanics
A conventional tensor can be viewed as a transformation of vectors in one coordinate system to other vectors in the same coordinate system. In contrast, a two-point tensor transforms vectors from one coordinate system to another. That is, a conventional tensor,
,
actively transforms a vector u to a vector v such that
where v and u are measured in the same space and their coordinates representation is with respect to the same basis (denoted by the "e").
In contrast, a two-point tensor, G will be written as
and will transform a vector, U, in E system to a vector, v, in the e system as
.
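In index notation, this action can be sketched as follows (standard continuum-mechanics conventions; the deformation gradient F is the canonical instance):

```latex
\mathbf{v} = \mathbf{G}\,\mathbf{U}, \qquad
v_i = G_{iJ}\, U_J, \qquad
\mathbf{G} = G_{iJ}\, \mathbf{e}_i \otimes \mathbf{E}_J .
```

For the deformation gradient this reads F_{iJ} = ∂x_i/∂X_J, mapping a material line element dX_J to its spatial image dx_i = F_{iJ} dX_J.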
The transformation law for two-point tensor
Suppose we have two coordinate systems, one primed and another unprimed, and that a vector's components transform between them as

v'_p = Q_{pq} v_q .
For tensors, suppose we then have

T_{pq} e_p ⊗ e_q ,

a tensor in the system e_i. In another system, let the same tensor be given by

T'_{pq} e'_p ⊗ e'_q .

We can say

T'_{ij} = Q_{ip} Q_{jq} T_{pq} .

Then

T' = Q T Q^T

is the routine tensor transformation. But a two-point tensor between these systems is just

F_{pq} e'_p ⊗ e_q ,

which transforms as

F' = Q F .
Simple example
The most mundane example of a two-point tensor is the transformation tensor, the Q in the above discussion. Note that
.
Now, writing out in full,
and also
.
This then requires Q to be of the form
.
By definition of tensor product,
So we can write
Thus
Incorporating (),
|
https://en.wikipedia.org/wiki/Board%20support%20package
|
In embedded systems, a board support package (BSP) is the layer of software containing hardware-specific boot firmware, device drivers, and other routines that allows a given embedded operating system, for example a real-time operating system (RTOS), to function in a given hardware environment (a motherboard), integrated with the embedded operating system.
Software
Third-party hardware developers who wish to support a given embedded operating system must create a BSP that allows that embedded operating system to run on their platform. In most cases, the embedded operating system image and software license, the BSP containing it, and the hardware are bundled together by the hardware vendor.
BSPs are typically customizable, allowing the user to specify which drivers and routines should be included in the build based on their selection of hardware and software options. For instance, a particular single-board computer might be paired with several peripheral chips; in that case the BSP might include drivers for peripheral chips supported; when building the BSP image the user would specify which peripheral drivers to include based on their choice of hardware.
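The selection step can be pictured as a small build-time lookup. In this schematic Python sketch, the board name and driver names are hypothetical, and real BSPs express the choice through build-system configuration rather than code like this:

```python
# Hypothetical BSP build: the user picks a board and the peripheral
# drivers to include; the "image" is just the resolved component list.
SUPPORTED_DRIVERS = {
    "acme-sbc-1": {"uart0", "eth0", "i2c0", "spi0"},   # hypothetical board
}

def build_bsp_image(board, selected_drivers):
    available = SUPPORTED_DRIVERS[board]
    unknown = set(selected_drivers) - available
    if unknown:
        raise ValueError(f"no driver for: {sorted(unknown)}")
    # Boot firmware is always included; drivers only if selected.
    return ["boot-firmware"] + sorted(selected_drivers)
```

Selecting only the UART and Ethernet drivers yields an image without the unused I2C and SPI support.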
Some suppliers also provide a root file system, a toolchain for building programs to run on the embedded system, and utilities to configure the device (while running) along with the BSP. Many embedded operating system providers offer template BSPs, developer assistance, and test suites to help BSP developers bring up an embedded operating system on a new hardware platform.
History
The term BSP has been in use since 1981 when Hunter & Ready, the developers of the Versatile Real-Time Executive (VRTX), first coined the term to describe the hardware-dependent software needed to run VRTX on a specific hardware platform. Since the 1980s, it has been in wide use throughout the industry. Virtually all RTOS providers now use the term BSP.
Example
The Wind River Systems board support package for the ARM Integrator 92
|
https://en.wikipedia.org/wiki/Entropy%20in%20thermodynamics%20and%20information%20theory
|
The mathematical expressions for thermodynamic entropy in the statistical thermodynamics formulation established by Ludwig Boltzmann and J. Willard Gibbs in the 1870s are similar to those for the information entropy of Claude Shannon and Ralph Hartley, developed in the 1940s.
Equivalence of form of the defining expressions
The defining expression for entropy in the theory of statistical mechanics established by Ludwig Boltzmann and J. Willard Gibbs in the 1870s is of the form:

S = −k_B Σ_i p_i ln p_i

where p_i is the probability of the microstate i taken from an equilibrium ensemble, and k_B is Boltzmann's constant.
The defining expression for entropy in the theory of information established by Claude E. Shannon in 1948 is of the form:

H = −Σ_i p_i log_b p_i

where p_i is the probability of the message m_i taken from the message space M, and b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10, and the unit of entropy is shannon (or bit) for b = 2, nat for b = e, and hartley for b = 10.
Mathematically H may also be seen as an average information, taken over the message space, because when a certain message occurs with probability pi, the information quantity −log(pi) (called information content or self-information) will be obtained.
If all the microstates are equiprobable (a microcanonical ensemble), the statistical thermodynamic entropy reduces to the form, as given by Boltzmann,

S = k_B ln W

where W is the number of microstates that corresponds to the macroscopic thermodynamic state. Therefore S depends on temperature.
If all the messages are equiprobable, the information entropy reduces to the Hartley entropy

H = log_b |M|

where |M| is the cardinality of the message space M.
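A quick numerical check (Python, base-2 logarithms) that the equiprobable case reduces to the Hartley value:

```python
import math

def shannon_entropy(probs, b=2):
    """H = -sum_i p_i log_b p_i, skipping zero-probability messages."""
    return -sum(p * math.log(p, b) for p in probs if p > 0)
```

For four equiprobable messages the entropy is log₂ 4 = 2 shannons, while any biased distribution over the same four messages carries strictly less.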
The logarithm in the thermodynamic definition is the natural logarithm. It can be shown that the Gibbs entropy formula, with the natural logarithm, reproduces all of the properties of the macroscopic classical thermodynamics of Rudolf Clausius. (See article: Entropy (statistical views)).
The logarithm can also be taken to the natural base
|
https://en.wikipedia.org/wiki/Bondy%27s%20theorem
|
In mathematics, Bondy's theorem is a bound on the number of elements needed to distinguish the sets in a family of sets from each other. It belongs to the field of combinatorics, and is named after John Adrian Bondy, who published it in 1972.
Statement
The theorem is as follows:
Let X be a set with n elements and let A1, A2, ..., An be distinct subsets of X. Then there exists a subset S of X with n − 1 elements such that the sets Ai ∩ S are all distinct.
In other words, if we have a 0-1 matrix with n rows and n columns such that each row is distinct, we can remove one column such that the rows of the resulting n × (n − 1) matrix are distinct.
Example
Consider the 4 × 4 matrix
where all rows are pairwise distinct. If we delete, for example, the first column, the resulting matrix
no longer has this property: the first row is identical to the second row. Nevertheless, by Bondy's theorem we know that we can always find a column that can be deleted without introducing any identical rows. In this case, we can delete the third column: all rows of the resulting 4 × 3 matrix
are distinct. Another possibility would have been deleting the fourth column.
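Since the matrices are small, the theorem is easy to verify by brute force — here on a hypothetical 4 × 4 0–1 matrix with pairwise-distinct rows:

```python
# A hypothetical 4 x 4 0-1 matrix with pairwise-distinct rows;
# any such matrix works as an illustration.
rows = [
    (0, 1, 1, 0),
    (1, 1, 1, 0),
    (0, 1, 0, 1),
    (1, 1, 0, 1),
]

def deletable_columns(rows):
    """Indices of columns whose removal keeps all rows pairwise distinct."""
    n = len(rows[0])
    result = []
    for col in range(n):
        reduced = [tuple(r[c] for c in range(n) if c != col) for r in rows]
        if len(set(reduced)) == len(rows):
            result.append(col)
    return result
```

For this matrix, deleting the first column collapses the first two rows, but Bondy's theorem guarantees (and the search confirms) that at least one other column is deletable.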
Learning theory application
From the perspective of computational learning theory, Bondy's theorem can be rephrased as follows:
Let C be a concept class over a finite domain X. Then there exists a subset S of X with the size at most |C| − 1 such that S is a witness set for every concept in C.
This implies that every finite concept class C has its teaching dimension bounded by |C| − 1.
Notes
Computational learning theory
Theorems in combinatorics
|
https://en.wikipedia.org/wiki/SCSI%20command
|
In SCSI computer storage, computers and storage devices use a client-server model of communication. The computer is a client which requests the storage device to perform a service, e.g., to read or write data. The SCSI command architecture was originally defined for parallel SCSI buses but has been carried forward with minimal change for use with Fibre Channel, iSCSI, Serial Attached SCSI, and other transport layers.
In the SCSI protocol, the initiator sends a SCSI command information unit to the target device. Data information units may then be transferred between the computer and device. Finally, the device sends a response information unit to the computer.
SCSI commands are sent in a command descriptor block (CDB), which consists of a one byte operation code (opcode) followed by five or more bytes containing command-specific parameters. Upon receiving and processing the CDB the device will return a status code byte and other information.
The rest of this article contains a list of SCSI commands, sortable in opcode or description alphabetical order. In the published SCSI standards, commands are designated as "mandatory," "optional" or "vendor-unique." Only the mandatory commands are required of all devices. There are links to detailed descriptions for the more common SCSI commands. Some opcodes produce different, though usually comparable, effects in different device types; for example, opcode recalibrates a disk drive by seeking back to physical sector zero, but rewinds the medium in a tape drive.
SCSI command lengths
Originally the most significant 3 bits of a SCSI opcode specified the length of the CDB. However, when variable-length CDBs were created this correspondence was changed, and the entire opcode must be examined to determine the CDB length.
The lengths are as follows:
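In code, the classic fixed-length groups (taken from the opcode's top three bits) can be tabulated and checked against a READ(10) CDB. As a sketch: groups 3, 6 and 7 are reserved or vendor-specific, and opcode 0x7F introduces the variable-length format:

```python
import struct

# Classic fixed-length CDB groups, keyed by the opcode's top 3 bits.
CDB_GROUP_LEN = {0: 6, 1: 10, 2: 10, 4: 16, 5: 12}

def read10_cdb(lba, blocks):
    """Build a READ(10) CDB: opcode 0x28, flags, 4-byte big-endian LBA,
    group byte, 2-byte transfer length, control byte."""
    return struct.pack(">BBIBHB", 0x28, 0x00, lba, 0x00, blocks, 0x00)

cdb = read10_cdb(0x12345678, 8)
```

The opcode 0x28 has group code 1 (0x28 >> 5 == 1), so the CDB must be 10 bytes long — which the packed structure is.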
List of SCSI commands
When a command is defined in multiple CDB sizes, the length of the CDB is given in parentheses after the command name, e.g., READ(6) and READ(10).
E
|
https://en.wikipedia.org/wiki/Invariant%20basis%20number
|
In mathematics, more specifically in the field of ring theory, a ring has the invariant basis number (IBN) property if all finitely generated free left modules over R have a well-defined rank. In the case of fields, the IBN property becomes the statement that finite-dimensional vector spaces have a unique dimension.
Definition
A ring R has invariant basis number (IBN) if for all positive integers m and n, Rm isomorphic to Rn (as left R-modules) implies that m = n.
Equivalently, this means there do not exist distinct positive integers m and n such that Rm is isomorphic to Rn.
Rephrasing the definition of invariant basis number in terms of matrices, it says that, whenever A is an m-by-n matrix over R and B is an n-by-m matrix over R such that AB = Im and BA = In (the m-by-m and n-by-n identity matrices), then m = n. This form reveals that the definition is left–right symmetric, so it makes no difference whether we define IBN in terms of left or right modules; the two definitions are equivalent.
Note that the isomorphisms in the definitions are not ring isomorphisms, they are module isomorphisms, even when one of n or m is 1.
Properties
The main purpose of the invariant basis number condition is that free modules over an IBN ring satisfy an analogue of the dimension theorem for vector spaces: any two bases for a free module over an IBN ring have the same cardinality. Assuming the ultrafilter lemma (a strictly weaker form of the axiom of choice), this result is actually equivalent to the definition given here, and can be taken as an alternative definition.
The rank of a free module Rn over an IBN ring R is defined to be the cardinality of the exponent m of any (and therefore every) R-module Rm isomorphic to Rn. Thus the IBN property asserts that every isomorphism class of free R-modules has a unique rank. The rank is not defined for rings not satisfying IBN. For vector spaces, the rank is also called the dimension. Thus the result above is in short: the rank is uniquely defined for all free R-modules iff it is uniquely def
|
https://en.wikipedia.org/wiki/GPFS
|
GPFS (General Parallel File System, brand name IBM Storage Scale and previously IBM Spectrum Scale) is high-performance clustered file system software developed by IBM. It can be deployed in shared-disk or shared-nothing distributed parallel modes, or a combination of these. It is used by many of the world's largest commercial companies, as well as some of the supercomputers on the Top 500 List.
For example, it is the filesystem of Summit at Oak Ridge National Laboratory, which was the #1 fastest supercomputer in the world in the November 2019 TOP500 list of supercomputers. Summit is a 200-petaflops system composed of more than 9,000 POWER9 processors and 27,000 NVIDIA Volta GPUs. The storage filesystem, called Alpine, has 250 PB of storage using Spectrum Scale on IBM ESS storage hardware, capable of approximately 2.5 TB/s of sequential I/O and 2.2 TB/s of random I/O.
Like typical cluster filesystems, GPFS provides concurrent high-speed file access to applications executing on multiple nodes of clusters. It can be used with AIX clusters, Linux clusters, on Microsoft Windows Server, or a heterogeneous cluster of AIX, Linux and Windows nodes running on x86, Power or IBM Z processor architectures. In addition to providing filesystem storage capabilities, it provides tools for management and administration of the GPFS cluster and allows for shared access to file systems from remote clusters.
History
GPFS began as the Tiger Shark file system, a research project at IBM's Almaden Research Center as early as 1993. Tiger Shark was initially designed to support high throughput multimedia applications. This design turned out to be well suited to scientific computing.
Another ancestor is IBM's Vesta filesystem, developed as a research project at IBM's Thomas J. Watson Research Center between 1992 and 1995. Vesta introduced the concept of file partitioning to accommodate the needs of parallel applications that run on high-performance multicomputers with parallel I/O subsystem
|
https://en.wikipedia.org/wiki/Higher-order%20differential%20cryptanalysis
|
In cryptography, higher-order differential cryptanalysis is a generalization of differential cryptanalysis, an attack used against block ciphers. While standard differential cryptanalysis uses the difference between only two texts, higher-order differential cryptanalysis studies the propagation of a set of differences between a larger set of texts. Xuejia Lai, in 1994, laid the groundwork by showing that differentials are a special case of the more general notion of higher-order derivatives. Lars Knudsen, in the same year, showed how the concept of higher-order derivatives can be used to mount attacks on block ciphers. These attacks can be superior to standard differential cryptanalysis. Higher-order differential cryptanalysis has notably been used to break the KN-Cipher, a cipher which had previously been proved to be immune against standard differential cryptanalysis.
Higher-order derivatives
A block cipher which maps n-bit strings to n-bit strings can, for a fixed key, be thought of as a function f : GF(2)^n → GF(2)^n. In standard differential cryptanalysis, one is interested in finding a pair of an input difference α and an output difference β such that two input texts with difference α are likely to result in output texts with difference β, i.e., such that f(x ⊕ α) ⊕ f(x) = β is true for many x. Note that the difference used here is the XOR, which is the usual case, though other definitions of difference are possible.
This motivates defining the derivative of a function f at a point α as
Δ_α f(x) = f(x ⊕ α) ⊕ f(x).
Using this definition, the i-th derivative at (α_1, …, α_i) can recursively be defined as
Δ_{α_1, …, α_i} f(x) = Δ_{α_i}( Δ_{α_1, …, α_{i−1}} f(x) ).
Thus for example Δ_{α_1, α_2} f(x) = f(x) ⊕ f(x ⊕ α_1) ⊕ f(x ⊕ α_2) ⊕ f(x ⊕ α_1 ⊕ α_2).
Higher-order derivatives as defined here have many properties in common with ordinary derivatives, such as the sum rule and the product rule. Importantly also, taking the derivative reduces the algebraic degree of the function.
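The XOR derivative and its recursive higher-order form can be sketched in a few lines of Python; the 4-bit S-box below is an arbitrary illustrative permutation, not a component of any real cipher:

```python
# Sketch of XOR-based derivatives of a function on 4-bit values.
def derivative(f, a):
    """First-order derivative of f at a: x -> f(x XOR a) XOR f(x)."""
    return lambda x: f(x ^ a) ^ f(x)

def higher_derivative(f, alphas):
    """i-th derivative at (alphas), built by taking first-order derivatives in turn."""
    for a in alphas:
        f = derivative(f, a)
    return f

# Arbitrary 4-bit S-box used purely for illustration
sbox = [0x6, 0xB, 0x5, 0x4, 0x2, 0xE, 0x7, 0xA,
        0x9, 0xD, 0xF, 0xC, 0x3, 0x1, 0x0, 0x8]
f = lambda x: sbox[x]

# The 2nd derivative matches its expanded four-term XOR form at every point.
d2 = higher_derivative(f, [1, 2])
for x in range(16):
    assert d2(x) == f(x) ^ f(x ^ 1) ^ f(x ^ 2) ^ f(x ^ 3)
```

The loop at the end checks, for every 4-bit input, that the recursive definition agrees with the expanded four-term XOR sum of the second derivative.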
Higher-order differential attacks
To implement an attack using higher order derivatives, knowledge about the probability distribution of the derivative of the cipher is needed. Calculating or est
|
https://en.wikipedia.org/wiki/Impossible%20differential%20cryptanalysis
|
In cryptography, impossible differential cryptanalysis is a form of differential cryptanalysis for block ciphers. While ordinary differential cryptanalysis tracks differences that propagate through the cipher with greater than expected probability, impossible differential cryptanalysis exploits differences that are impossible (having probability 0) at some intermediate state of the cipher algorithm.
Lars Knudsen appears to be the first to use a form of this attack, in the 1998 paper where he introduced his AES candidate, DEAL. The first presentation to attract the attention of the cryptographic community was later the same year at the rump session of CRYPTO '98, in which Eli Biham, Alex Biryukov, and Adi Shamir introduced the name "impossible differential" and used the technique to break 4.5 out of 8.5 rounds of IDEA and 31 out of 32 rounds of the NSA-designed cipher Skipjack. This development led cryptographer Bruce Schneier to speculate that the NSA had no previous knowledge of impossible differential cryptanalysis. The technique has since been applied to many other ciphers: Khufu and Khafre, E2, variants of Serpent, MARS, Twofish, Rijndael (AES), CRYPTON, Zodiac, Hierocrypt-3, TEA, XTEA, Mini-AES, ARIA, Camellia, and SHACAL-2.
Biham, Biryukov and Shamir also presented a relatively efficient specialized method for finding impossible differentials that they called a miss-in-the-middle attack. This consists of finding "two events with probability one, whose conditions cannot be met together."
|
https://en.wikipedia.org/wiki/Interpolation%20attack
|
In cryptography, an interpolation attack is a type of cryptanalytic attack against block ciphers.
After the two attacks, differential cryptanalysis and linear cryptanalysis, were presented on block ciphers, some new block ciphers were introduced, which were proven secure against differential and linear attacks. Among these there were some iterated block ciphers such as the KN-Cipher and the SHARK cipher. However, Thomas Jakobsen and Lars Knudsen showed in the late 1990s that these ciphers were easy to break by introducing a new attack called the interpolation attack.
In the attack, an algebraic function is used to represent an S-box. This may be a simple quadratic, or a polynomial or rational function over a Galois field. Its coefficients can be determined by standard Lagrange interpolation techniques, using known plaintexts as data points. Alternatively, chosen plaintexts can be used to simplify the equations and optimize the attack.
In its simplest version, an interpolation attack expresses the ciphertext as a polynomial of the plaintext. If the polynomial has a relatively low number of unknown coefficients, then with a collection of plaintext/ciphertext (p/c) pairs, the polynomial can be reconstructed. With the polynomial reconstructed, the attacker then has a representation of the encryption without exact knowledge of the secret key.
The interpolation attack can also be used to recover the secret key.
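The polynomial-reconstruction step can be illustrated concretely. The toy sketch below works over a prime field GF(p) rather than GF(2^n) to keep the arithmetic simple; the two-round cipher, round function f(t) = t^3, and key values are all illustrative assumptions:

```python
# Toy interpolation attack over a prime field GF(p).
p = 10007  # a prime modulus

def encrypt(x, k1=1234, k2=5678):
    y1 = pow((x + k1) % p, 3, p)        # round 1
    return pow((y1 + k2) % p, 3, p)     # round 2 -> ciphertext

def lagrange_eval(points, x):
    """Evaluate the unique interpolating polynomial through `points` at x (mod p)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p  # Fermat inverse
    return total

# The ciphertext is a degree-9 polynomial in the plaintext, so 10 known
# plaintext/ciphertext pairs determine the encryption function completely.
pairs = [(x, encrypt(x)) for x in range(10)]
assert lagrange_eval(pairs, 42) == encrypt(42)  # encrypt without the key
```

Since the two-round cipher here is a degree-9 polynomial in the plaintext, ten known pairs pin it down exactly, after which the attacker can encrypt any plaintext without the key.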
It is easiest to describe the method with an example.
Example
Let an iterated cipher be given by
y_i = f(y_{i−1} ⊕ k_i),
where y_0 = x is the plaintext, y_i the output of the i-th round, k_i the secret round key (derived from the secret key K by some key schedule), and for an r-round iterated cipher, y_r = c is the ciphertext.
Consider the 2-round cipher. Let x denote the message, and c denote the ciphertext.
Then the output of round 1 becomes
y_1 = f(x ⊕ k_1)
and the output of round 2 becomes
y_2 = c = f(y_1 ⊕ k_2).
Expressing the ciphertext as a polynomial of the plaintext yields an expression of the form
c = p(x) = a_d x^d + ⋯ + a_1 x + a_0,
where the coefficients a_i are key-dependent constants.
Using as many plainte
|
https://en.wikipedia.org/wiki/Opt-in%20email
|
Opt-in email describes the practice in which someone is not initially added to an emailing list and is instead given the option to join it. Typically, this is some sort of mailing list, newsletter, or advertising. Opt-out emails, by contrast, are sent without asking for permission; such emails are typically criticized as unsolicited bulk email, better known as spam.
Forms
There are several common forms of opt-in email:
Unconfirmed opt-in/single opt-in
Someone first gives an email address to the list software (for instance, on a Web page), but no steps are taken to make sure that this address belongs to the person submitting it. This can cause email from the mailing list to be considered spam, because simple typos of the email address can cause the email to be sent to someone else. Malicious subscriptions are also possible, as are subscriptions caused by spammers forging the email addresses of others and submitting them to the list.
Confirmed opt-in (COI)/double opt-in (DOI)
A new subscriber asks to be subscribed to the mailing list, but unlike unconfirmed or single opt-in, a confirmation email is sent to verify that the request really came from the address's owner. Generally, unless the explicit step is taken to verify the end-subscriber's e-mail address, such as clicking a special web link or sending back a reply email, it is difficult to establish that the e-mail address in question indeed belongs to the person who submitted the request to receive the e-mail. Using a confirmed opt-in (COI) (also known as double opt-in) procedure helps to ensure that a third party is not able to subscribe someone else accidentally, or out of malice, since if no action is taken on the part of the e-mail recipient, they will simply not receive any messages from the list operator. Mail system administrators and non-spam mailing list operators refer to this as confirmed subscription or closed-loop opt-in.
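One possible shape of a confirmed opt-in flow is sketched below; the token scheme, names, secret, and in-memory stores are illustrative assumptions, not a standard implementation:

```python
# Sketch of a confirmed (double) opt-in flow: a signed token is mailed to
# the submitted address, and the subscription is only activated when that
# token comes back.
import hashlib
import hmac

SECRET = b"replace-with-a-real-secret"

def confirmation_token(email):
    return hmac.new(SECRET, email.lower().encode(), hashlib.sha256).hexdigest()

pending, subscribed = {}, set()

def request_subscription(email):
    pending[email] = confirmation_token(email)  # this token is sent by email

def confirm(email, token):
    # Constant-time compare; only the mailbox owner has seen the token.
    if hmac.compare_digest(pending.get(email, ""), token):
        subscribed.add(email)
        del pending[email]
        return True
    return False  # no action taken -> the address never receives list mail

request_subscription("user@example.com")
confirm("user@example.com", confirmation_token("user@example.com"))
# subscribed == {"user@example.com"}
```

An address that never returns its token simply stays in the pending store, which is exactly the closed-loop property described above.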
Some marketers call closed-loop opt-in "double opt-in". This term was
|
https://en.wikipedia.org/wiki/Cryptocurrency%20exchange
|
A cryptocurrency exchange, or a digital currency exchange (DCE), is a business that allows customers to trade cryptocurrencies or digital currencies for other assets, such as conventional fiat money or other digital currencies. Exchanges may accept credit card payments, wire transfers or other forms of payment in exchange for digital currencies or cryptocurrencies. A cryptocurrency exchange can be a market maker that typically takes the bid–ask spreads as a transaction commission for its service or, as a matching platform, simply charges fees.
Some brokerages which also focus on other assets such as stocks, like Robinhood and eToro, let users purchase but not withdraw cryptocurrencies to cryptocurrency wallets. Dedicated cryptocurrency exchanges such as Binance and Coinbase do allow cryptocurrency withdrawals, however.
Operation
A cryptocurrency exchange can typically send cryptocurrency to a user's personal cryptocurrency wallet. Some can convert digital currency balances into anonymous prepaid cards which can be used to withdraw funds from ATMs worldwide while other digital currencies are backed by real-world commodities such as gold.
The creators of digital currencies are typically independent of the digital currency exchanges that facilitate trading in the currency. In one type of system, digital currency providers (DCP) are businesses that keep and administer accounts for their customers, but generally do not issue digital currency to those customers directly. Customers buy or sell digital currency from digital currency exchanges, who transfer the digital currency into or out of the customer's DCP account. Some exchanges are subsidiaries of DCP, but many are legally independent businesses. The denomination of funds kept in DCP accounts may be of a real or fictitious currency.
A digital currency exchange can be a brick-and-mortar business or a strictly online business. As a brick-and-mortar business, it exchanges traditional payment methods and digital curr
|
https://en.wikipedia.org/wiki/Odell%20Lake%20%28video%20game%29
|
Odell Lake is a 1986 educational life simulation game produced by MECC for the Apple II and Commodore 64. The player is a fish living in Odell Lake, a real-world lake in Oregon. It is based on a 1980 BASIC program of the same name. It was followed by Odell Down Under.
Gameplay
As a fish, the player could "go exploring" or "play for points". The object was to decide which fish to eat, while trying to survive and avoid other enemies such as otters, ospreys, and bait from fishermen. When simply exploring, the player could select from six different species of fish, such as Mackinaw Trout, Whitefish, or Rainbow Trout; however, when playing for points, the computer randomly assigned the type of fish that the player would play as. In addition, the titles for each of the types of fish and other creatures are removed when playing for points, forcing the player to rely on memory; the game was also timed. After every five moves, the player played as a different type of fish.
When playing for points, the best decision netted the player the most points, with less intelligent decisions earning the player fewer or no points, or in the case of the fish eating something disagreeable, actually taking them away. If no decision was made when time ran out, it counted as "Ignore". If at any time the player's fish was attacked by an enemy, or the player got caught by an angler, the game ended immediately.
In Israel, the game was published in Hebrew in 1987 for the Apple II.
Main fish
The species of fish found in Odell Lake included the following:
Rainbow trout
Dolly Varden trout
Mackinaw trout, the largest fish in the game
Blueback salmon
Whitefish
Chub, the smallest fish in the game
The game is heavily random; the same situation played in the same way can have different outcomes. For the most points, players must play the game safely, choosing the action that has the greatest chance of leading to a positive outcome. It's helpful to remember the typical locations of food and predator
|
https://en.wikipedia.org/wiki/Service%20Profile%20Identifier
|
A Service Profile Identifier (SPID) is a number issued by ISDN service providers in North America that identifies the services and features of an ISDN circuit. Service providers typically assign each B channel a unique SPID. A SPID is derived from the telephone number assigned to the circuit, and in the U.S. it typically follows a generic, 14-digit format.
A SPID is a number assigned by a phone company to a terminal on an Integrated Services Digital Network B-channel. A SPID tells equipment at the phone company's central office about the capabilities of each terminal (computer or phone) on the B-channels. A Basic Rate home or business user may divide service into two B-channels, with one used for normal phone service and the other for computer data. The SPID tells the phone company whether the terminal accepts voice or data information.
The SPID is a numeric string from 3 to 20 digits in length. A SPID (or more than one, if necessary) is assigned when the ISDN Basic Rate Interface (BRI) is obtained from the phone company. Starting in 1998, most phone companies began to use a generic SPID format. In this format, the SPID is a 14-digit number that includes a 10-digit telephone number (which includes the 3-digit Numbering Plan Area code), a 2-digit Sharing Terminal Identifier, and a 2-digit Terminal Identifier (TID). The generic SPID format makes it easier to tell users what to specify when installing an ISDN line and simplifies corporate installation procedures.
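A small sketch of the generic 14-digit format described above: a 10-digit phone number followed by a 2-digit Sharing Terminal Identifier and a 2-digit Terminal Identifier. The helper names and the default identifier values are illustrative, not part of any standard API:

```python
# Build and parse a generic-format SPID (14 digits: phone + SHID + TID).
def build_spid(phone, sharing_id="01", tid="01"):
    digits = "".join(ch for ch in phone if ch.isdigit())
    if len(digits) != 10:
        raise ValueError("expected a 10-digit phone number (including NPA)")
    return digits + sharing_id + tid

def parse_spid(spid):
    if len(spid) != 14 or not spid.isdigit():
        raise ValueError("not a generic 14-digit SPID")
    return {
        "phone": spid[:10],                  # 10-digit number, NPA first
        "sharing_terminal_id": spid[10:12],  # 2-digit Sharing Terminal ID
        "tid": spid[12:14],                  # 2-digit Terminal Identifier
    }

spid = build_spid("(919) 555-0123")
# parse_spid(spid) -> {'phone': '9195550123', 'sharing_terminal_id': '01', 'tid': '01'}
```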
Back in 1998, some ISDN manufacturers began to provide non-initializing terminals (NITs) that do not require the entering of a SPID. Manufacturers also are delivering terminals with automated SPID selection in which the correct SPID is downloaded to the terminal rather than having to be specified by the user.
|
https://en.wikipedia.org/wiki/Speech%20analytics
|
Speech analytics is the process of analyzing recorded calls to gather customer information to improve communication and future interaction. The process is primarily used by customer contact centers to extract information buried in client interactions with an enterprise. Although speech analytics includes elements of automatic speech recognition, it is known for analyzing the topic being discussed, which is weighed against the emotional character of the speech and the amount and locations of speech versus non-speech during the interaction. Speech analytics in contact centers can be used to mine recorded customer interactions to surface the intelligence essential for building effective cost containment and customer service strategies. The technology can pinpoint cost drivers, perform trend analysis, identify strengths and weaknesses in processes and products, and help clarify how the marketplace perceives offerings.
Definition
Speech analytics provides a complete analysis of recorded phone conversations between a company and its customers. It provides advanced functionality and valuable intelligence from customer calls. This information can be used to discover information relating to strategy, product, process, operational issues and contact center agent performance. In addition, speech analytics can automatically identify areas in which contact center agents may need additional training or coaching, and can automatically monitor the customer service provided on calls.
The process can isolate the words and phrases used most frequently within a given time period, as well as indicate whether usage is trending up or down. This information is useful for supervisors, analysts, and others in an organization to spot changes in consumer behavior and take action to reduce call volumes—and increase customer satisfaction. It allows insight into a customer's thought process, which in turn creates an opportunity for companies to make adjustments.
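The frequency-and-trend idea can be illustrated with a toy sketch over two months of hypothetical transcripts; a real system would work from speech-recognition output and proper phrase extraction rather than hand-written strings:

```python
# Count terms per period in call transcripts and compare periods.
from collections import Counter

transcripts = {  # toy data, purely illustrative
    "2024-01": ["cancel my plan", "billing question", "cancel service"],
    "2024-02": ["cancel my plan", "cancel service", "cancel account",
                "billing question"],
}

def term_counts(period):
    words = " ".join(transcripts[period]).split()
    return Counter(words)

jan, feb = term_counts("2024-01"), term_counts("2024-02")
# Positive values are trending up, negative down, zero flat.
trend = {w: feb[w] - jan[w] for w in set(jan) | set(feb)}
# trend["cancel"] == 1: "cancel" is trending up month over month
```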
Usability
Speech analytics applic
|
https://en.wikipedia.org/wiki/Cadinenes
|
Cadinenes are a group of isomeric hydrocarbons that occur in a wide variety of essential oil-producing plants. The name is derived from that of the Cade juniper (Juniperus oxycedrus L.), the wood of which yields an oil from which cadinene isomers were first isolated.
Chemically, the cadinenes are bicyclic sesquiterpenes. The term cadinene has sometimes also been used in a broad sense to refer to any sesquiterpene with the so-called cadalane (4-isopropyl-1,6-dimethyldecahydronaphthalene) carbon skeleton. Because of the large number of known double-bond and stereochemical isomers, this class of compounds has been subdivided into four subclasses based on the relative stereochemistry at the isopropyl group and the two bridgehead carbon atoms. The name cadinene is now properly used only for the first subclass below, which includes the compounds originally isolated from cade oil. Only one enantiomer of each subclass is depicted, with the understanding that the other enantiomer bears the same subclass name.
|
https://en.wikipedia.org/wiki/Logical%20form
|
In logic, the logical form of a statement is a precisely specified semantic version of that statement in a formal system. Informally, the logical form attempts to formalize a possibly ambiguous statement into a statement with a precise, unambiguous logical interpretation with respect to a formal system. In an ideal formal language, the meaning of a logical form can be determined unambiguously from syntax alone. Logical forms are semantic, not syntactic constructs; therefore, there may be more than one string that represents the same logical form in a given language.
The logical form of an argument is called the argument form of the argument.
History
The importance of the concept of form to logic was already recognized in ancient times. Aristotle, in the Prior Analytics, was probably the first to employ variable letters to represent valid inferences. On this basis, Jan Łukasiewicz claims that the introduction of variables was "one of Aristotle's greatest inventions."
According to the followers of Aristotle like Ammonius, only the logical principles stated in schematic terms belong to logic, and not those given in concrete terms. The concrete terms man, mortal, and so forth are analogous to the substitution values of the schematic placeholders A, B, C, which were called the "matter" (Greek hyle, Latin materia) of the argument.
The term "logical form" itself was introduced by Bertrand Russell in 1914, in the context of his program to formalize natural language and reasoning, which he called philosophical logic. Russell wrote: "Some kind of knowledge of logical forms, though with most people it is not explicit, is involved in all understanding of discourse. It is the business of philosophical logic to extract this knowledge from its concrete integuments, and to render it explicit and pure."
Example of argument form
To demonstrate the important notion of the form of an argument, substitute letters for similar items throughout the sentences in the original argument.
Origin
|
https://en.wikipedia.org/wiki/Flashing%20%28weatherproofing%29
|
Flashing refers to thin pieces of impervious material installed to prevent the passage of water into a structure from a joint or as part of a weather resistant barrier system. In modern buildings, flashing is intended to decrease water penetration at objects such as chimneys, vent pipes, walls, windows and door openings to make buildings more durable and to reduce indoor mold problems. Metal flashing materials include lead, aluminium, copper, stainless steel, zinc alloy, and other materials.
Etymology and related terms
The origin of the term flash and flashing are uncertain, but may come from the Middle English verb flasshen, 'to sprinkle, splash', related to flask.
Counter-flashing (or cover flashing, cap flashing) is a term used when there are two parallel pieces of flashing employed together such as on a chimney, where the counter-flashing is built into the chimney and overlaps a replaceable piece of base flashing. Strips of lead used for flashing an edge were sometimes called an apron, and the term is still used for the piece of flashing below a chimney. The up-hill side of a chimney may have a small gable-like assembly called a cricket with cricket flashing or on narrow chimneys with no cricket a back flashing or back pan flashing. Flashing may be let into a groove in a wall or chimney called a reglet.
Purpose
Before the availability of sheet products for flashing, builders used creative methods to minimize water penetration. These methods included angling roof shingles away from the joint, placing chimneys at the ridge, building steps into the sides of chimneys to throw off water and covering seams between roofing materials with mortar. The introduction of manufactured flashing decreased water penetration at obstacles such as chimneys, vent pipes, walls which abut roofs, window and door openings, etc., thus making buildings more durable and reducing indoor mold problems. It is also essential to prevent leaks around skylights or roof windows. Moreover, fla
|
https://en.wikipedia.org/wiki/Thin%20provisioning
|
In computing, thin provisioning involves using virtualization technology to give the appearance of having more physical resources than are actually available. If a system always has enough resource to simultaneously support all of the virtualized resources, then it is not thin provisioned. The term thin provisioning is applied to the disk layer in this article, but could refer to an allocation scheme for any resource. For example, real memory in a computer is typically thin-provisioned to running tasks with some form of address translation technology doing the virtualization. Each task acts as if it has real memory allocated. The sum of the allocated virtual memory assigned to tasks typically exceeds the total of real memory.
The efficiency of thin or thick/fat provisioning is a function of the use case, not of the technology. Thick provisioning is typically more efficient when the amount of resource used very closely approximates to the amount of resource allocated. Thin provisioning offers more efficiency where the amount of resource used is much smaller than allocated, so that the benefit of providing only the resource needed exceeds the cost of the virtualization technology used.
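A minimal sketch of the thin-provisioning idea, with volumes that advertise more space than exists and draw physical blocks from a shared pool only on first write (class and parameter names are illustrative, not a real storage API):

```python
# Thin-provisioned volumes backed by a shared physical pool.
class ThinPool:
    def __init__(self, physical_blocks):
        self.free = physical_blocks

    def allocate(self):
        if self.free == 0:
            raise RuntimeError("pool exhausted: writes exceeded physical space")
        self.free -= 1

class ThinVolume:
    def __init__(self, pool, virtual_blocks):
        self.pool = pool
        self.virtual_blocks = virtual_blocks  # advertised (virtual) size
        self.mapped = set()                   # blocks actually backed

    def write(self, block):
        if block >= self.virtual_blocks:
            raise IndexError("write past end of volume")
        if block not in self.mapped:          # allocate on first write only
            self.pool.allocate()
            self.mapped.add(block)

pool = ThinPool(physical_blocks=100)
# 3 x 80 = 240 virtual blocks advertised against 100 physical: overcommitted.
vols = [ThinVolume(pool, virtual_blocks=80) for _ in range(3)]
vols[0].write(0)
vols[1].write(5)
# pool.free == 98: physical space is consumed only as data is written
```

The overcommit is safe exactly as long as actual writes stay within the pool; once they do not, the `allocate` failure models the real operational risk of thin provisioning.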
Just-in-time allocation differs from thin provisioning. Most file systems back files just-in-time but are not thin provisioned. Overallocation also differs from thin provisioning; resources can be over-allocated / oversubscribed without using virtualization technology, for example overselling seats on a flight without allocating actual seats at time of sale, avoiding each consumer having a claim on a specific seat number.
Thin provisioning is a mechanism that applies to large-scale centralized computer disk-storage systems, SANs, and storage virtualization systems. Thin provisioning allows space to be easily allocated to servers, on a just-enough and just-in-time basis. Thin provisioning is called "sparse volumes" in some contexts.
Overview
Thin provisioning, in a shared-s
|
https://en.wikipedia.org/wiki/SCinet
|
SCinet is the high-performance network built annually by volunteers in support of SC (formerly Supercomputing, the International Conference for High Performance Computing, Networking, Storage and Analysis).
SCinet is the primary network for the yearly conference and is used by attendees and exhibitors to demonstrate and test high-performance computing and networking applications.
International Community
SCinet is also a hub for the international networking community. It provides a platform to share the latest research, technologies, and demonstrations for networks, network technology providers, and even software developers who are in charge of supporting HPC communities at their own institutions or organizations.
Volunteers
Nearly 200 volunteers from educational institutions, high performance computing sites, equipment vendors, research and education networks, government agencies and telecommunications carriers collaborate via technology and in-person to design, build and operate SCinet.
While many of these credentialed individuals have volunteered at SCinet for years, first timers join the team each year. They include international students and participants in the National Science Foundation-funded Women in IT Networking at SC (WINS) program. The 2017 SCinet team included women and men from high performance computing institutions in the U.S. and throughout the world.
History
Originating in 1991 as an initiative within the SC conference to provide networking to attendees, SCinet has grown to become the "World's Fastest Network" for the duration of the conference. For 29 years, SCinet has provided SC attendees and the high performance computing (HPC) community with the innovative network platform necessary to internationally interconnect, transport, and display HPC research during SC.
Historically, SCinet has been used as a platform to test networking technology and applications which have found their way into common use.
Research and development
In
|
https://en.wikipedia.org/wiki/186%20%28number%29
|
186 (one hundred [and] eighty-six) is the natural number following 185 and preceding 187.
In mathematics
There is no integer with exactly 186 integers below it that are coprime to it, so 186 is a nontotient. It is also never the difference between an integer and the count of coprimes below it, so it is a noncototient.
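Both claims can be checked by brute force. The sketch below computes Euler's totient φ by trial division; since φ(n) ≥ √(n/2) for all n, and n − φ(n) ≥ √n for composite n (primes have cototient 1), checking n up to 70,000 covers every candidate for φ(n) = 186 or n − φ(n) = 186:

```python
# Brute-force verification that 186 is a nontotient and a noncototient.
def totient(n):
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p  # multiply by (1 - 1/p)
        p += 1
    if m > 1:
        result -= result // m      # one remaining prime factor
    return result

totients = {totient(n) for n in range(1, 70001)}
cototients = {n - totient(n) for n in range(1, 70001)}
assert 186 not in totients      # no n has exactly 186 coprimes below it
assert 186 not in cototients    # no n satisfies n - phi(n) = 186
```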
There are 186 different pentahexes, shapes formed by gluing together five regular hexagons, when rotations of shapes are counted as distinct from each other.
186 is a Fine number.
See also
The year AD 186 or 186 BC
List of highways numbered 186
|
https://en.wikipedia.org/wiki/Tragacanth
|
Tragacanth is a natural gum obtained from the dried sap of several species of Middle Eastern legumes of the genus Astragalus, including A. adscendens, A. gummifer, A. brachycalyx, and A. tragacantha. Some of these species are known collectively under the common names "goat's thorn" and "locoweed". The gum is sometimes called Shiraz gum, shiraz, gum elect or gum dragon. The name derives from the Greek words tragos (meaning "goat") and akantha ("thorn"). Iran is the biggest producer of this gum.
Gum tragacanth is a viscous, odorless, tasteless, water-soluble mixture of polysaccharides obtained from sap that is drained from the root of the plant and dried. The gum seeps from the plant in twisted ribbons or flakes that can be powdered. It absorbs water to become a gel, which can be stirred into a paste. The major fractions are known as tragacanthin, highly water-soluble as a mucilaginous colloid, and the chemically related bassorin, which is far less soluble but swells in water to form a gel. The gum is used in vegetable-tanned leatherworking as an edge slicking and burnishing compound, and is occasionally used as a stiffener in textiles. The gum has been used historically as a herbal remedy for such conditions as cough and diarrhea. Powders using tragacanth as a basis were sometimes called diatragacanth. As a mucilage or paste, it has been used as a topical treatment for burns. It is used in pharmaceuticals and foods as an emulsifier, thickener, stabilizer, and texturant additive (E number E413). It is the traditional binder used in the making of artists' pastels, as it does not adhere to itself the same way other gums (such as gum arabic) do when dry. Gum tragacanth is also used to make a paste used in floral sugarcraft to create lifelike flowers on wires used as decorations for cakes, which air-dries brittle and can take colorings. It enables users to get a very fine, delicate finish to their work. It has traditionally been used as an adhesive in the cigar-rolling p
|
https://en.wikipedia.org/wiki/Wireless%20Datagram%20Protocol
|
Wireless Datagram Protocol (WDP) defines the movement of information from receiver to sender and resembles the User Datagram Protocol in the Internet protocol suite.
The Wireless Datagram Protocol (WDP), a protocol in the WAP architecture, covers the transport layer protocols in the Internet model. As a general transport service, WDP offers to the upper layers an invisible interface independent of the underlying network technology used. Because this interface is common to all the transport protocols, the upper-layer protocols of the WAP architecture can operate independently of the underlying wireless network. By letting only the transport layer deal with physical-network-dependent issues, global interoperability can be achieved using mediating gateways.
See also
Wireless Application Protocol
Wireless Session Protocol
Wireless transaction protocol
|
https://en.wikipedia.org/wiki/Division%20algorithm
|
A division algorithm is an algorithm which, given two integers N and D (respectively the numerator and the denominator), computes their quotient and/or remainder, the result of Euclidean division. Some are applied by hand, while others are employed by digital circuit designs and software.
Division algorithms fall into two main categories: slow division and fast division. Slow division algorithms produce one digit of the final quotient per iteration. Examples of slow division include restoring, non-performing restoring, non-restoring, and SRT division. Fast division methods start with a close approximation to the final quotient and produce twice as many digits of the final quotient on each iteration. Newton–Raphson and Goldschmidt algorithms fall into this category.
Variants of these algorithms allow the use of fast multiplication algorithms. As a result, for large integers, the computer time needed for a division is the same, up to a constant factor, as the time needed for a multiplication, whichever multiplication algorithm is used.
Discussion will refer to the form N/D = (Q, R), where
N = numerator (dividend)
D = denominator (divisor)
is the input, and
Q = quotient
R = remainder
is the output.
Division by repeated subtraction
The simplest division algorithm, historically incorporated into a greatest common divisor algorithm presented in Euclid's Elements, Book VII, Proposition 1, finds the remainder given two positive integers using only subtractions and comparisons:
R := N
Q := 0
while R ≥ D do
R := R − D
Q := Q + 1
end
return (Q,R)
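The pseudocode above is a direct transcription into runnable Python (restricted, like the original, to nonnegative N and positive D):

```python
# Division by repeated subtraction, for N >= 0 and D > 0.
def divide(N, D):
    if D <= 0 or N < 0:
        raise ValueError("this simple form requires N >= 0 and D > 0")
    Q, R = 0, N
    while R >= D:       # subtract D until the remainder drops below D
        R -= D
        Q += 1
    return Q, R

# divide(17, 5) -> (3, 2), since 17 = 3*5 + 2
```

Each loop iteration contributes one to the quotient, so the running time is proportional to the quotient itself, which is why this is the slowest of the slow-division methods.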
The proof that the quotient and remainder exist and are unique (described at Euclidean division) gives rise to a complete division algorithm, applicable to both negative and positive numbers, using additions, subtractions, and comparisons:
function divide(N, D)
if D = 0 then error(DivisionByZero) end
if D < 0 then (Q, R) := divide(N, −D); return (−Q, R) end
if N < 0 then
(Q,R) := divide(−N, D)
if R = 0 then return (−Q
|
https://en.wikipedia.org/wiki/FIRST%20Robotics%20Competition
|
FIRST Robotics Competition (FRC) is an international high school robotics competition. Each year, teams of high school students, coaches, and mentors work during a six-week period to build robots, up to a specified maximum weight, capable of competing in that year's game. Robots complete tasks such as scoring balls into goals, placing inner tubes onto racks, hanging on bars, and balancing robots on balance beams. The game, along with the required set of tasks, changes annually. While teams are given a kit of a standard set of parts during the annual Kickoff, they are also allowed and encouraged to buy or fabricate specialized parts. FIRST Robotics Competition is one of five robotics competition programs organized by FIRST, the other four being FIRST LEGO League Discover, FIRST LEGO League Explore, FIRST LEGO League Challenge, and FIRST Tech Challenge.
The culture of FIRST Robotics Competition is built around two values. "Gracious Professionalism" embraces the competition inherent in the program but rejects trash talk and chest-thumping, instead embracing empathy and respect for other teams. "Coopertition" emphasizes that teams can cooperate and compete at the same time. The goal of the program is to inspire students to be science and technology leaders.
2022 was the 31st year of the competition. 3,225 teams, including more than 80,000 students and 25,000 mentors from 26 countries, built robots. The 2022 season included 58 Regional Competitions, 90 District Qualifying Competitions, and 11 District Championships. In 2022, over 450 teams won slots to attend the FIRST Championship event, where they competed in a tournament. In addition to on-field competition, teams and team members competed for awards recognizing entrepreneurship, creativity, engineering, industrial design, safety, controls, media, quality, and exemplifying the core values of the program. As a result of COVID-19, the number of active teams decreased during the 2021 season; however, numbers began to increase during
|
https://en.wikipedia.org/wiki/Block-stacking%20problem
|
In statics, the block-stacking problem (sometimes known as The Leaning Tower of Lire, also the book-stacking problem, or a number of other similar terms) is a puzzle concerning the stacking of blocks at the edge of a table.
Statement
The block-stacking problem is the following puzzle:
Place identical rigid rectangular blocks in a stable stack on a table edge in such a way as to maximize the overhang.
The literature on this problem includes a long list of references going back to mechanics texts from the middle of the 19th century.
Variants
Single-wide
The single-wide problem involves having only one block at any given level. In the ideal case of perfectly rectangular blocks, the solution to the single-wide problem is that the maximum overhang for n blocks is (1/2)(1 + 1/2 + 1/3 + ⋯ + 1/n) times the width of a block. This sum is one half of the corresponding partial sum of the harmonic series. Because the harmonic series diverges, the maximal overhang tends to infinity as n increases, meaning that it is possible to achieve an arbitrarily large overhang, with sufficient blocks.
The number of blocks required to reach at least k block-lengths past the edge of the table is 4, 31, 227, 1674, 12367, 91380, ... for k = 1, 2, 3, ....
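These counts follow directly from the harmonic-sum formula: the overhang of n blocks is half the nth harmonic number, so the condition "overhang ≥ k" becomes "H_n ≥ 2k". A minimal sketch:

```python
def min_blocks_for_overhang(k):
    """Smallest n such that the ideal single-wide overhang,
    (1/2) * (1 + 1/2 + ... + 1/n) block widths, is at least k."""
    harmonic, n = 0.0, 0
    while harmonic < 2 * k:   # overhang >= k  <=>  H_n >= 2k
        n += 1
        harmonic += 1.0 / n
    return n
```

Running this for k = 1, 2, 3 reproduces the first terms 4, 31, 227 of the sequence above.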
Multi-wide
Multi-wide stacks using counterbalancing can give larger overhangs than a single-width stack. Even for three blocks, stacking two counterbalanced blocks on top of another block can give an overhang of 1, while the overhang in the simple ideal case is at most 11/12. As has been shown, asymptotically the maximum overhang that can be achieved by multi-wide stacks is proportional to the cube root of the number of blocks, in contrast to the single-wide case, in which the overhang is proportional to the logarithm of the number of blocks. However, it has been shown that in reality this is impossible and the number of blocks that we can move to the right, due to block stress, is not more than a specified number. For example, for a special brick with = , Young's modulus = and density = and limiting compress
|
https://en.wikipedia.org/wiki/Definite%20clause%20grammar
|
A definite clause grammar (DCG) is a way of expressing grammar, either for natural or formal languages, in a logic programming language such as Prolog. It is closely related to the concept of attribute grammars / affix grammars from which Prolog was originally developed.
DCGs are usually associated with Prolog, but similar languages such as Mercury also include DCGs. They are called definite clause grammars because they represent a grammar as a set of definite clauses in first-order logic.
The term DCG refers to the specific type of expression in Prolog and other similar languages; not all ways of expressing grammars using definite clauses are considered DCGs. However, all of the capabilities or properties of DCGs will be the same for any grammar that is represented with definite clauses in essentially the same way as in Prolog.
The definite clauses of a DCG can be considered a set of axioms where the validity of a sentence, and the fact that it has a certain parse tree can be considered theorems that follow from these axioms. This has the advantage of making it so that recognition and parsing of expressions in a language becomes a general matter of proving statements, such as statements in a logic programming language.
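In Prolog, each DCG rule is translated into a clause with two extra arguments forming a difference list: the tokens before and after the nonterminal consumes its part of the input. The same idea can be sketched in Python, where each "nonterminal" takes a token list and returns the unconsumed remainder (the toy grammar and function names here are hypothetical, not from any particular DCG source):

```python
# Each nonterminal consumes a prefix of the token list and returns
# the remaining tokens (or None on failure), mirroring the
# difference-list translation of DCG rules.
def det(tokens):
    return tokens[1:] if tokens[:1] == ["the"] else None

def noun(tokens):
    return tokens[1:] if tokens[:1] in (["cat"], ["dog"]) else None

def verb(tokens):
    return tokens[1:] if tokens[:1] == ["sees"] else None

def noun_phrase(tokens):            # noun_phrase --> det, noun.
    rest = det(tokens)
    return noun(rest) if rest is not None else None

def verb_phrase(tokens):            # verb_phrase --> verb, noun_phrase.
    rest = verb(tokens)
    return noun_phrase(rest) if rest is not None else None

def sentence(tokens):               # sentence --> noun_phrase, verb_phrase.
    rest = noun_phrase(tokens)
    return verb_phrase(rest) if rest is not None else None
```

A token list is a valid sentence exactly when every token is consumed, i.e. the remainder is the empty list; in Prolog this corresponds to proving `sentence(Tokens, [])`.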
History
The history of DCGs is closely tied to the history of Prolog, and the history of Prolog revolves around several researchers in both Marseille, France, and Edinburgh, Scotland. According to Robert Kowalski, an early developer of Prolog, the first Prolog system was developed in 1972 by Alain Colmerauer and Phillipe Roussel. The first program written in the language was a large natural-language processing system. Fernando Pereira and David Warren at the University of Edinburgh were also involved in the early development of Prolog.
Colmerauer had previously worked on a language processing system called Q-systems that was used to translate between English and French. In 1978, Colmerauer wrote a paper about a way of representing grammars call
|
https://en.wikipedia.org/wiki/Euler%27s%20theorem%20in%20geometry
|
In geometry, Euler's theorem states that the distance d between the circumcenter and incenter of a triangle is given by
d² = R(R − 2r)
or equivalently
1/(R − d) + 1/(R + d) = 1/r,
where R and r denote the circumradius and inradius respectively (the radii of the circumscribed circle and inscribed circle respectively). The theorem is named for Leonhard Euler, who published it in 1765. However, the same result was published earlier by William Chapple in 1746.
From the theorem follows the Euler inequality:
R ≥ 2r,
which holds with equality only in the equilateral case.
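The identity d² = R(R − 2r) can be checked numerically. The sketch below (helper name illustrative) takes a triangle's vertex coordinates, computes R and r via Heron's formula, the incenter as the side-length-weighted average of the vertices, and the circumcenter from the standard perpendicular-bisector formula, then compares both sides:

```python
import math

def euler_check(A, B, C):
    # side lengths opposite each vertex
    a, b, c = math.dist(B, C), math.dist(A, C), math.dist(A, B)
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    R = a * b * c / (4 * area)                          # circumradius
    r = area / s                                        # inradius
    # incenter: vertices weighted by opposite side lengths
    ix = (a * A[0] + b * B[0] + c * C[0]) / (a + b + c)
    iy = (a * A[1] + b * B[1] + c * C[1]) / (a + b + c)
    # circumcenter from the perpendicular-bisector formula
    d2 = 2 * (A[0] * (B[1] - C[1]) + B[0] * (C[1] - A[1]) + C[0] * (A[1] - B[1]))
    ux = ((A[0]**2 + A[1]**2) * (B[1] - C[1]) + (B[0]**2 + B[1]**2) * (C[1] - A[1])
          + (C[0]**2 + C[1]**2) * (A[1] - B[1])) / d2
    uy = ((A[0]**2 + A[1]**2) * (C[0] - B[0]) + (B[0]**2 + B[1]**2) * (A[0] - C[0])
          + (C[0]**2 + C[1]**2) * (B[0] - A[0])) / d2
    # left side: squared circumcenter-incenter distance; right side: R(R - 2r)
    return (ux - ix)**2 + (uy - iy)**2, R * (R - 2 * r)
```

For the 3-4-5 right triangle with vertices (0, 0), (4, 0), (0, 3), both sides come out to 1.25.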
Stronger version of the inequality
A stronger version is
R/r ≥ (abc + a³ + b³ + c³) / (2abc) ≥ 2,
where a, b, and c are the side lengths of the triangle.
Euler's theorem for the escribed circle
If rₐ and dₐ denote respectively the radius of the escribed circle opposite to the vertex A and the distance between its center and the center of
the circumscribed circle, then dₐ² = R² + 2Rrₐ.
Euler's inequality in absolute geometry
Euler's inequality, in the form stating that, for all triangles inscribed in a given circle, the maximum of the radius of the inscribed circle is reached for the equilateral triangle and only for it, is valid in absolute geometry.
See also
Fuss' theorem for the relation among the same three variables in bicentric quadrilaterals
Poncelet's closure theorem, showing that there is an infinity of triangles with the same two circles (and therefore the same R, r, and d)
List of triangle inequalities
References
External links
Articles containing proofs
Triangle inequalities
Theorems about triangles and circles
|
https://en.wikipedia.org/wiki/Noise%20%28video%29
|
Noise, commonly known as static, white noise, static noise, or snow, in analog video and television, is a random dot pixel pattern of static displayed when no transmission signal is obtained by the antenna receiver of television sets and other display devices.
Description
The random pattern superimposed on the picture, visible as a random flicker of "dots", "snow" or "fuzzy zig-zags" on some television sets, is the result of electronic noise and of radiated electromagnetic noise accidentally picked up by the antenna or cable input. This effect is most commonly seen with analog TV sets, blank VHS tapes, or other display devices.
There are many sources of electromagnetic noise which cause the characteristic display patterns of static. Atmospheric sources of noise are the most ubiquitous, and include electromagnetic signals prompted by cosmic microwave background radiation, or more localized radio wave noise from nearby electronic devices.
The display device itself is also a source of noise, due in part to thermal noise produced by the inner electronics. Most of this noise comes from the first transistor the antenna is attached to.
Names
UK viewers used to see "snow" on black after sign-off, instead of "bugs" on white, a purely technical artifact due to old 405-line British transmitters using positive rather than the negative video modulation used in Canada, the US, and (currently) the UK as well. Since one impression of the "snow" is of fast-flickering black bugs on a white background, the phenomenon is often called myrornas krig in Swedish, myrekrig in Danish, hangyák háborúja in Hungarian, Ameisenkrieg in German, and semut bertengkar in Indonesian, which all translate to 'war of the ants'.
It is also known as ekran karıncalanması in Turkish, meaning 'ants on the screen', hangyafoci 'ant football' in Hungarian, and purici 'fleas' in Romanian. In French however, this phenomenon is mostly called neige 'snow', just like in Dutc
|
https://en.wikipedia.org/wiki/Radio%20noise
|
In radio reception, radio noise (commonly referred to as radio static) is unwanted random radio frequency electrical signals, fluctuating voltages, always present in a radio receiver in addition to the desired radio signal. Radio noise near in frequency to the radio signal being received (in the receiver's passband) interferes with it in the receiver's circuits. Radio noise is a combination of natural electromagnetic atmospheric noise ("spherics", static) created by electrical processes in the atmosphere like lightning, manmade radio frequency interference (RFI) from other electrical devices picked up by the receiver's antenna, and thermal noise present in the receiver input circuits, caused by the random thermal motion of molecules.
The level of noise determines the maximum sensitivity and reception range of a radio receiver; if no noise were picked up with radio signals, even weak transmissions could be received at virtually any distance by making a radio receiver that was sensitive enough. With noise present, if a radio source is so weak and far away that the radio signal in the receiver has a lower amplitude than the average noise, the noise will drown out the signal. The level of noise in a communications circuit is measured by the signal-to-noise ratio (S/N), the ratio of the average amplitude of the signal voltage to the average amplitude of the noise voltage. When this ratio is below one (0 dB) the noise is greater than the signal, requiring special processing to recover the information.
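Since the S/N ratio is defined on voltage amplitudes, expressing it in decibels uses a factor of 20 (power scales as the square of voltage). A minimal sketch:

```python
import math

def snr_db(signal_rms, noise_rms):
    # Signal-to-noise ratio in decibels, from RMS voltage amplitudes.
    # The factor is 20 (not 10) because power goes as voltage squared.
    return 20 * math.log10(signal_rms / noise_rms)
```

Equal signal and noise amplitudes give 0 dB, the threshold below which the text notes that special processing is needed; a 10:1 voltage ratio corresponds to 20 dB.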
The limiting noise source in a receiver depends on the frequency range in use. At frequencies below about 40 MHz, particularly in the mediumwave and longwave bands and below, atmospheric noise and nearby radio frequency interference from electrical switches, motors, vehicle ignition circuits, computers, and other man-made sources tend to be above the thermal noise floor in the receiver's circuits.
These noises are often referred to as static. Conversely, at very high fr
|
https://en.wikipedia.org/wiki/Automatic%20switched-transport%20network
|
Automatic Switched Transport Network (ASTN) allows traffic paths to be set up through a switched network automatically. The term ASTN replaces the term ASON (Automatically Switched Optical Network) and is often used interchangeably with GMPLS (Generalized MPLS). This is not completely correct as GMPLS is a family of protocols, but ASON/ASTN is an optical/transport network architecture. The requirements of the ASON/ASTN architecture can be satisfied using GMPLS protocols developed by the IETF or by GMPLS protocols that have been modified by the ITU. Furthermore, the GMPLS protocols are applicable to optical and non-optical (e.g., packet and frame) networks, and can be used in transport or client networks. Thus, GMPLS is a wider concept than ASTN.
Traditionally, creating traffic paths through a series of Network Elements has involved configuration of individual cross-connects on each Network Element. ASTN allows the user to specify the start point, end point and bandwidth required, and the ASTN agent on the Network Elements will allocate the path through the network, provisioning the traffic path, setting up cross-connects, and allocating bandwidth from the paths for the user requested service. The actual path that the traffic will take through the network is not specified by the user.
Changes to the network (adding/removing nodes) will be taken into account by the ASTN agents in the network, but do not need to be considered by the user. This gives the user far more flexibility when allocating user bandwidth to provide services demanded by the customer.
GMPLS consists of several protocols, including routing protocols (OSPF-TE or ISIS-TE), link management protocols (LMP), and a reservation/label distribution protocol (RSVP-TE). The reservation/label distribution protocol CR-LDP has been deprecated by the IETF in RFC 3468 (February 2003), and the IETF GMPLS working group decided to focus purely on RSVP-TE.
The GMPLS architecture is defined in RFC 3945.
References
|
https://en.wikipedia.org/wiki/Signcryption
|
In cryptography, signcryption is a public-key primitive that simultaneously performs the functions of both digital signature and encryption.
Encryption and digital signature are two fundamental cryptographic tools that can guarantee confidentiality, integrity, and non-repudiation. Until 1997, they were viewed as important but distinct building blocks of various cryptographic systems. In public key schemes, the traditional method is to digitally sign a message and then encrypt it (signature-then-encryption), which has two problems: the low efficiency and high cost of the combined operation, and the fact that an arbitrary combination of a secure signature scheme and a secure encryption scheme is not guaranteed to be secure. Signcryption is a relatively new cryptographic technique that performs the functions of digital signature and encryption in a single logical step and can effectively decrease the computational costs and communication overheads in comparison with the traditional signature-then-encryption schemes.
Signcryption provides the properties of both digital signatures and encryption schemes in a way that is more efficient than signing and encrypting separately. This means that at least some aspect of its efficiency (for example, the computation time) is better than any hybrid of digital signature and encryption schemes, under a particular model of security. Note that hybrid encryption can sometimes be employed instead of simple encryption, with a single session key reused for several encryptions to achieve better overall efficiency across many signature-encryptions than a signcryption scheme; however, the session-key reuse causes the system to lose security under even the relatively weak CPA model. This is why a fresh random session key is used for each message in a hybrid encryption scheme; for a given level of security (i.e., a given model, say CPA), a signcryption scheme should be more efficient than any simple signature-hybrid encryption combination.
History
The first signcryption scheme was introd
|
https://en.wikipedia.org/wiki/Medial%20triangle
|
In Euclidean geometry, the medial triangle or midpoint triangle of a triangle is the triangle with vertices at the midpoints of the triangle's sides. It is the n = 3 case of the midpoint polygon of a polygon with n sides. The medial triangle is not the same thing as the median triangle, which is the triangle whose sides have the same lengths as the medians of the original triangle.
Each side of the medial triangle is called a midsegment (or midline). In general, a midsegment of a triangle is a line segment which joins the midpoints of two sides of the triangle. It is parallel to the third side and has a length equal to half the length of the third side.
Properties
The medial triangle can also be viewed as the image of the triangle transformed by a homothety centered at the centroid with ratio −1/2. Thus, the sides of the medial triangle are half the length of, and parallel to, the corresponding sides of the original triangle. Hence, the medial triangle is inversely similar to and shares the same centroid and medians with the original triangle. It also follows from this that the perimeter of the medial triangle equals the semiperimeter of the original triangle, and that the area is one quarter of the area of the original triangle. Furthermore, the four triangles that the original triangle is subdivided into by the medial triangle are all mutually congruent by SSS, so their areas are equal, and thus the area of each is 1/4 the area of the original triangle.
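The area property is easy to verify numerically; the sketch below (with an arbitrary example triangle) computes the medial triangle from midpoints and compares areas via the shoelace formula:

```python
def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def area(p, q, r):
    # shoelace formula for the area of a triangle
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2

A, B, C = (0, 0), (6, 0), (1, 5)   # an arbitrary example triangle
# medial triangle: midpoints of the three sides
medial = (midpoint(B, C), midpoint(A, C), midpoint(A, B))
```

For this triangle the original area is 15 and the medial triangle's area is 3.75, one quarter of it, as expected.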
The orthocenter of the medial triangle coincides with the circumcenter of triangle . This fact provides a tool for proving collinearity of the circumcenter, centroid and orthocenter. The medial triangle is the pedal triangle of the circumcenter. The nine-point circle circumscribes the medial triangle, and so the nine-point center is the circumcenter of the medial triangle.
The Nagel point of the medial triangle is the incenter of its reference triangle.
A reference triangle's medial triangle is congruent to the triangle whose vertices are the midpoints between the reference triangle's orthocenter a
|
https://en.wikipedia.org/wiki/Raptor%20code
|
In computer science, Raptor codes (rapid tornado; see Tornado codes) are the first known class of fountain codes with linear time encoding and decoding. They were invented by Amin Shokrollahi in 2000/2001 and were first published in 2004 as an extended abstract. Raptor codes are a significant theoretical and practical improvement over LT codes, which were the first practical class of fountain codes.
Raptor codes, as with fountain codes in general, encode a given source block of data consisting of a number k of equal-size source symbols into a potentially limitless sequence of encoding symbols such that reception of any k or more encoding symbols allows the source block to be recovered with some non-zero probability. The probability that the source block can be recovered increases with the number of encoding symbols received beyond k, becoming very close to 1 once the number of received encoding symbols is only very slightly larger than k. For example, with the latest generation of Raptor codes, the RaptorQ codes, the chance of decoding failure when k encoding symbols have been received is less than 1%, and the chance of decoding failure when k+2 encoding symbols have been received is less than one in a million. (See Recovery probability and overhead section below for more discussion on this.) A symbol can be any size, from a single byte to hundreds or thousands of bytes.
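The "any ≈k symbols suffice" property can be illustrated with the simplest fountain construction: a dense random linear fountain over GF(2), where each encoding symbol is the XOR of a subset of source symbols and decoding is Gaussian elimination. This toy sketch is not a Raptor or LT code (those use sparse structure to reach linear-time encoding and decoding), but it shows the principle:

```python
def encode_symbol(source, mask):
    # XOR of the source symbols selected by the bits of mask
    value = 0
    for i, s in enumerate(source):
        if mask >> i & 1:
            value ^= s
    return mask, value

def decode(received, k):
    """Recover k source symbols from (mask, value) pairs by
    Gaussian elimination over GF(2); returns None if rank < k."""
    pivot = {}                          # bit position -> (mask, value)
    for mask, value in received:
        while mask:                     # reduce against existing pivot rows
            low = (mask & -mask).bit_length() - 1
            if low not in pivot:
                pivot[low] = (mask, value)
                break
            pm, pv = pivot[low]
            mask ^= pm
            value ^= pv
    if len(pivot) < k:
        return None                     # not yet enough independent symbols
    source = [0] * k
    for low in sorted(pivot, reverse=True):
        mask, value = pivot[low]
        for i in range(low + 1, k):     # substitute already-solved symbols
            if mask >> i & 1:
                value ^= source[i]
        source[low] = value
    return source
```

With k linearly independent received symbols the block is recovered exactly; with fewer, decoding fails, matching the description above of recovery probability growing with the number of received symbols.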
Raptor codes may be systematic or non-systematic. In the systematic case, the symbols of the original source block, i.e. the source symbols, are included within the set of encoding symbols. Examples of systematic Raptor codes include their use by the 3rd Generation Partnership Project in mobile cellular wireless broadcasting and multicasting, and by the DVB-H standards for IP datacast to handheld devices (see external links). The Raptor codes used in these standards are also defined in IETF RFC 5053.
Online codes are an example of a non-systematic fountain code.
RaptorQ code
The most advance
|
https://en.wikipedia.org/wiki/Adaptive%20value
|
The adaptive value represents the combined influence of all characters which affect the fitness of an individual or population.
Definition
Adaptive value is an essential concept of population genetics. It represents the usefulness of a trait that can help an organism survive in its environment. This heritable trait, which can help offspring cope with new surroundings or conditions, is a measurable quantity. Measuring adaptive value increases our understanding of how a trait affects an individual's or population's chances of survival in a particular set of conditions.
Measurement
The adaptive value can be measured by the contribution of an individual to the gene pool of its offspring. Adaptive values are approximately calculated from rates of change in frequency and from the mutation–selection balance.
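A standard way this contribution is quantified in population genetics is relative fitness: each genotype's mean offspring number divided by the maximum observed mean. A minimal sketch with hypothetical data:

```python
def relative_fitness(mean_offspring):
    # Relative fitness w: each genotype's mean offspring count
    # scaled by the maximum observed mean (data here is hypothetical).
    w_max = max(mean_offspring.values())
    return {g: n / w_max for g, n in mean_offspring.items()}
```

For example, if genotypes AA, Aa, and aa leave on average 10, 8, and 2 offspring, their relative fitnesses are 1.0, 0.8, and 0.2.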
Examples
Avoiding predators
Some plants use indirect plant defenses to protect themselves against their herbivorous consumers. One defensive mechanism that plants employ is to release volatile chemicals when herbivores feed on them. The odor of the volatile chemicals attracts carnivores, which remove the herbivores by eating them.
Sexual reproduction advantages
Sexual mimicry is common among animals. Male cuttlefish use this strategy to gain an advantage over competing males. They mimic female cuttlefish markings to fool a guarding male and fertilize the guarded females. This strategy has a higher success rate than normal courtship.
See also
Adaptation
Evolution
External links
http://www.talkorigins.org/indexcc/CB/CB950.html
References
Evolutionary biology terminology
|
https://en.wikipedia.org/wiki/Maltol
|
Maltol is a naturally occurring organic compound that is used primarily as a flavor enhancer. It is found in nature in the bark of larch trees and in the needles of pine trees, and is produced during the roasting of malt (from which it gets its name) and in the baking of bread.
It has the odor of caramel and is used to impart a pleasant aroma to foods and fragrances.
It is used as a flavor enhancer, is designated in the U.S. as INS number 636,
and is known in the European E number food additive series as E636.
Chemistry
Maltol is a white crystalline powder that is soluble in hot water and other polar solvents.
Like related 3-hydroxy-4-pyrones such as kojic acid, it binds to hard metal centers such as Fe3+, Ga3+, Al3+, and VO2+.
Related to this property, maltol has been reported to greatly increase aluminium uptake in the body and to increase the oral bioavailability of gallium and iron.
See also
Ethyl maltol
Ferric maltol
Gallium maltolate
5-Hydroxymaltol
Isomaltol
References
Flavors
Food additives
Flavor enhancers
4-Pyrones
Enols
Sweet-smelling chemicals
|
https://en.wikipedia.org/wiki/Open%20Mobile%20Terminal%20Platform
|
The Open Mobile Terminal Platform (OMTP) was a forum created by mobile network operators to discuss standards with manufacturers of mobile phones and other mobile devices. During its lifetime, the OMTP included manufacturers such as Huawei, LG Electronics, Motorola, Nokia, Samsung and Sony Ericsson.
Membership
OMTP was originally set up by leading mobile operators. At the time it transitioned into the Wholesale Applications Community at the end of June 2010, there were nine full members: AT&T, Deutsche Telekom AG, KT, Orange, Smart Communications, Telecom Italia, Telefónica, Telenor and Vodafone. OMTP also had the support of two sponsors, Ericsson and Nokia.
Activities
OMTP recommendations helped to standardise mobile operator terminal requirements, reducing fragmentation and optional variation in operators' requirements. OMTP's focus was on gathering and driving mobile terminal requirements, and publishing their findings in their Recommendations. OMTP was technology neutral, with its recommendations intended for deployment across the range of technology platforms, operating systems (OS) and middleware layers.
OMTP is perhaps best known for its work in the field of mobile security, but its work encompassed the full range of mobile device capabilities. OMTP published recommendations in 2007 and early 2008 on areas such as Positioning Enablers, Advanced Device Management, IMS and Mobile VoIP. Later, the Advanced Trusted Environment: OMTP TR1 and its supporting document, 'Security Threats on Embedded Consumer Devices' were released, with the endorsement of the UK Home Secretary, Jacqui Smith.
OMTP also published a requirements document addressing support for advanced SIM cards. This document also defines advanced profiles for the Smart Card Web Server, High Speed Protocol, Mobile TV and contactless features.
OMTP has also made significant progress in getting support for the use of micro-USB as a standard connector for data and power. A full
|
https://en.wikipedia.org/wiki/Spatial%E2%80%93temporal%20reasoning
|
Spatial–temporal reasoning is an area of artificial intelligence that draws from the fields of computer science, cognitive science, and cognitive psychology. The theoretic goal, on the cognitive side, involves representing and reasoning about spatial-temporal knowledge in the mind. The applied goal, on the computing side, involves developing high-level control systems of automata for navigating and understanding time and space.
Influence from cognitive psychology
A convergent result in cognitive psychology is that the connection relation is the first spatial relation that human babies acquire, followed by understanding orientation relations and distance relations. Internal relations among the three kinds of spatial relations can be computationally and systematically explained within the theory of cognitive prism as follows: (1) the connection relation is primitive; (2) an orientation relation is a distance comparison relation: you being in front of me can be interpreted as you are nearer to my front side than my other sides; (3) a distance relation is a connection relation using a third object: you being one meter away from me can be interpreted as a one meter long object connected with you and me simultaneously.
Fragmentary representations of temporal calculi
Without addressing internal relations among spatial relations, AI researchers contributed many fragmentary representations. Examples of temporal calculi include Allen's interval algebra, and Vilain's & Kautz's point algebra. The most prominent spatial calculi are mereotopological calculi, Frank's cardinal direction calculus, Freksa's double cross calculus, Egenhofer and Franzosa's 4- and 9-intersection calculi, Ligozat's flip-flop calculus, various region connection calculi (RCC), and the Oriented Point Relation Algebra. Recently, spatio-temporal calculi have been designed that combine spatial and temporal information. For example, the spatiotemporal constraint calculus (STCC) by Gerevini and Nebel combines Allen's inte
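Allen's interval algebra, mentioned above, classifies every pair of time intervals into one of 13 basic relations. A toy sketch, assuming intervals are represented as (start, end) pairs with start < end:

```python
def allen_relation(x, y):
    """One of Allen's 13 basic interval relations between
    intervals x and y, each a (start, end) pair with start < end."""
    (xs, xe), (ys, ye) = x, y
    if xe < ys: return "before"
    if ye < xs: return "after"
    if xe == ys: return "meets"          # x ends exactly where y starts
    if ye == xs: return "met-by"
    if (xs, xe) == (ys, ye): return "equal"
    if xs == ys: return "starts" if xe < ye else "started-by"
    if xe == ye: return "finishes" if xs > ys else "finished-by"
    if ys < xs and xe < ye: return "during"
    if xs < ys and ye < xe: return "contains"
    return "overlaps" if xs < ys else "overlapped-by"
```

Qualitative temporal reasoning then works by constraint propagation over these relations (composing them via Allen's composition table), rather than over numeric timestamps.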
|