| source | text |
|---|---|
https://en.wikipedia.org/wiki/600%20%28number%29 | 600 (six hundred) is the natural number following 599 and preceding 601.
Mathematical properties
Six hundred is a composite number, an abundant number, a pronic number and a Harshad number.
Credit and cars
In the United States, a credit score of 600 or below is considered poor, limiting available credit at a normal interest rate.
NASCAR runs 600 advertised miles in the Coca-Cola 600, its longest race.
The Fiat 600 is a car, the SEAT 600 its Spanish version.
Integers from 601 to 699
600s
601 = prime number, centered pentagonal number
602 = 2 × 7 × 43, nontotient, number of cubes of edge length 1 required to make a hollow cube of edge length 11, area code for Phoenix, AZ along with 480 and 623
603 = 3² × 67, Harshad number, Riordan number, area code for New Hampshire
604 = 2² × 151, nontotient, totient sum for first 44 integers, area code for southwestern British Columbia (Lower Mainland, Fraser Valley, Sunshine Coast and Sea to Sky)
605 = 5 × 11², Harshad number, sum of the nontriangular numbers between the two successive triangular numbers 55 and 66, number of non-isomorphic set-systems of weight 9.
606 = 2 × 3 × 101, sphenic number, sum of six consecutive primes (89 + 97 + 101 + 103 + 107 + 109), admirable number
607 – prime number, sum of three consecutive primes (197 + 199 + 211), Mertens function(607) = 0, balanced prime, strictly non-palindromic number, Mersenne prime exponent
608 = 2⁵ × 19, Mertens function(608) = 0, nontotient, happy number, number of regions formed by drawing the line segments connecting any two of the perimeter points of a 3 times 4 grid of squares.
609 = 3 × 7 × 29, sphenic number, strobogrammatic number
610s
610 = 2 × 5 × 61, sphenic number, nontotient, Fibonacci number, Markov number. Also a kind of telephone wall socket used in Australia.
611 = 13 × 47, sum of the three standard board sizes in Go (9² + 13² + 19²), the 611th tribonacci number is prime
612 = 2² × 3² × 17, Harshad number, Zuckerman number, untouchable |
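The divisibility claims in the list above are easy to check mechanically. The sketch below (plain Python with naive trial division, which is fine for numbers this small) verifies the abundant, pronic, and Harshad properties stated for 600:

```python
# Verify the properties claimed for 600: abundant, pronic, Harshad.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def is_abundant(n):
    # the sum of proper divisors exceeds n itself
    return sum(divisors(n)) - n > n

def is_pronic(n):
    # n = k * (k + 1) for some integer k
    k = int(n ** 0.5)
    return k * (k + 1) == n

def is_harshad(n):
    # divisible by the sum of its decimal digits
    return n % sum(int(c) for c in str(n)) == 0

print(is_abundant(600), is_pronic(600), is_harshad(600))  # True True True
```

For 600 the proper-divisor sum is 1260 > 600, 600 = 24 × 25, and 600 is divisible by its digit sum 6.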
https://en.wikipedia.org/wiki/Eddy%20current | In electromagnetism, eddy currents (also called Foucault's currents) are loops of electric current induced within conductors by a changing magnetic field in the conductor according to Faraday's law of induction or by the relative motion of a conductor in a magnetic field. Eddy currents flow in closed loops within conductors, in planes perpendicular to the magnetic field. They can be induced within nearby stationary conductors by a time-varying magnetic field created by an AC electromagnet or transformer, for example, or by relative motion between a magnet and a nearby conductor. The magnitude of the current in a given loop is proportional to the strength of the magnetic field, the area of the loop, and the rate of change of flux, and inversely proportional to the resistivity of the material. When graphed, these circular currents within a piece of metal look vaguely like eddies or whirlpools in a liquid.
By Lenz's law, an eddy current creates a magnetic field that opposes the change in the magnetic field that created it, and thus eddy currents react back on the source of the magnetic field. For example, a nearby conductive surface will exert a drag force on a moving magnet that opposes its motion, due to eddy currents induced in the surface by the moving magnetic field. This effect is employed in eddy current brakes which are used to stop rotating power tools quickly when they are turned off. The current flowing through the resistance of the conductor also dissipates energy as heat in the material. Thus eddy currents are a cause of energy loss in alternating current (AC) inductors, transformers, electric motors and generators, and other AC machinery, requiring special construction such as laminated magnetic cores or ferrite cores to minimize them. Eddy currents are also used to heat objects in induction heating furnaces and equipment, and to detect cracks and flaws in metal parts using eddy-current testing instruments.
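The proportionalities described above (field strength, rate of change of flux, resistivity) are often collected into the classical thin-lamination loss estimate, which motivates the laminated cores mentioned. A minimal sketch, assuming the standard thin-sheet formula and illustrative silicon-steel values:

```python
import math

# Classical thin-sheet estimate of eddy-current loss per unit mass:
#   P = pi^2 * Bp^2 * d^2 * f^2 / (6 * k * rho * D)
# valid for thin laminations in a uniform sinusoidal field; k = 1 for a thin sheet.
def eddy_loss_per_kg(Bp, d, f, rho, D, k=1.0):
    """Bp: peak flux density (T), d: lamination thickness (m), f: frequency (Hz),
    rho: resistivity (ohm*m), D: mass density (kg/m^3)."""
    return (math.pi ** 2) * Bp ** 2 * d ** 2 * f ** 2 / (6 * k * rho * D)

# Illustrative values roughly typical of silicon steel (assumptions for this sketch):
print(eddy_loss_per_kg(Bp=1.5, d=0.35e-3, f=50, rho=4.7e-7, D=7650))  # W/kg
```

Note the d² dependence: halving lamination thickness cuts this loss by a factor of four, which is why cores are built from thin insulated sheets.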
Origin of term
The term eddy current co |
https://en.wikipedia.org/wiki/Molybdenite | Molybdenite is a mineral of molybdenum disulfide, MoS2. Similar in appearance and feel to graphite, molybdenite has a lubricating effect that is a consequence of its layered structure. The atomic structure consists of a sheet of molybdenum atoms sandwiched between sheets of sulfur atoms. The Mo-S bonds are strong, but the interaction between the sulfur atoms at the top and bottom of separate sandwich-like tri-layers is weak, resulting in easy slippage as well as cleavage planes.
Molybdenite crystallizes in the hexagonal crystal system as the common polytype 2H and also in the trigonal system as the 3R polytype.
Description
Occurrence
Molybdenite occurs in high temperature hydrothermal ore deposits.
Its associated minerals include pyrite, chalcopyrite, quartz, anhydrite, fluorite, and scheelite. Important deposits include the disseminated porphyry molybdenum deposits at Questa, New Mexico and the Henderson and Climax mines in Colorado. Molybdenite also occurs in porphyry copper deposits of Arizona, Utah, and Mexico.
The element rhenium is always present in molybdenite as a substitute for molybdenum, usually in the parts per million (ppm) range, but often up to 1–2%. High rhenium content results in a structural variety detectable by X-ray diffraction techniques. Molybdenite ores are essentially the only source for rhenium. The presence of the radioactive isotope rhenium-187 and its daughter isotope osmium-187 provides a useful geochronologic dating technique.
Features
Molybdenite is extremely soft with a metallic luster, and is superficially almost identical to graphite, to the point where it is not possible to positively distinguish between the two minerals without scientific equipment. It marks paper in much the same way as graphite. Its distinguishing feature from graphite is its higher specific gravity, as well as its tendency to occur in a matrix.
Uses
Molybdenite is an important ore of molybdenum, and is the most common source of the metal. While |
https://en.wikipedia.org/wiki/Sperner%27s%20lemma | In mathematics, Sperner's lemma is a combinatorial result on colorings of triangulations, analogous to the Brouwer fixed point theorem, which is equivalent to it. It states that every Sperner coloring (described below) of a triangulation of an n-dimensional simplex contains a cell whose vertices all have different colors.
The initial result of this kind was proved by Emanuel Sperner, in relation with proofs of invariance of domain. Sperner colorings have been used for effective computation of fixed points and in root-finding algorithms, and are applied in fair division (cake cutting) algorithms.
According to the Soviet Mathematical Encyclopaedia (ed. I.M. Vinogradov), a related 1929 theorem (of Knaster, Borsuk and Mazurkiewicz) had also become known as the Sperner lemma – this point is discussed in the English translation (ed. M. Hazewinkel). It is now commonly known as the Knaster–Kuratowski–Mazurkiewicz lemma.
Statement
One-dimensional case
In one dimension, Sperner's Lemma can be regarded as a discrete version of the intermediate value theorem. In this case, it essentially says that if a discrete function takes only the values 0 and 1, begins at the value 0 and ends at the value 1, then it must switch values an odd number of times.
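The one-dimensional statement above can be checked directly by counting the value switches of a 0/1 sequence that starts at 0 and ends at 1. A minimal sketch:

```python
# 1-D Sperner: a 0/1 sequence beginning at 0 and ending at 1
# must change value an odd number of times.
def switch_count(seq):
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

seq = [0, 0, 1, 1, 0, 1]   # starts at 0, ends at 1
print(switch_count(seq))   # 3 -- odd, as the lemma guarantees
```

Each switch pairs off with a return switch except the final one, which is why the count must be odd.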
Two-dimensional case
The two-dimensional case is the one referred to most frequently. It is stated as follows:
Subdivide a triangle arbitrarily into a triangulation consisting of smaller triangles meeting edge to edge. Then a Sperner coloring of the triangulation is defined as an assignment of three colors to the vertices of the triangulation such that
Each of the three vertices A, B, and C of the initial triangle has a distinct color
The vertices that lie along any edge of triangle ABC have only two colors, the two colors at the endpoints of the edge. For example, each vertex on AC must have the same color as A or C.
Then every Sperner coloring of every triangulation has at least one "rainbow triangle", a smaller triangle in the triangulation |
https://en.wikipedia.org/wiki/Dipole%20antenna | In radio and telecommunications a dipole antenna or doublet is the simplest and most widely used class of antenna. The dipole is any one of a class of antennas producing a radiation pattern approximating that of an elementary electric dipole with a radiating structure supporting a line current so energized that the current has only one node at each end. A dipole antenna commonly consists of two identical conductive elements such as metal wires or rods. The driving current from the transmitter is applied, or for receiving antennas the output signal to the receiver is taken, between the two halves of the antenna. Each side of the feedline to the transmitter or receiver is connected to one of the conductors. This contrasts with a monopole antenna, which consists of a single rod or conductor with one side of the feedline connected to it, and the other side connected to some type of ground. A common example of a dipole is the "rabbit ears" television antenna found on broadcast television sets.
The dipole is the simplest type of antenna from a theoretical point of view. Most commonly it consists of two conductors of equal length oriented end-to-end with the feedline connected between them. Dipoles are frequently used as resonant antennas. If the feedpoint of such an antenna is shorted, then it will be able to resonate at a particular frequency, just like a guitar string that is plucked. Using the antenna at around that frequency is advantageous in terms of feedpoint impedance (and thus standing wave ratio), so its length is determined by the intended wavelength (or frequency) of operation. The most commonly used is the center-fed half-wave dipole which is just under a half-wavelength long. The radiation pattern of the half-wave dipole is maximum perpendicular to the conductor, falling to zero in the axial direction, thus implementing an omnidirectional antenna if installed vertically, or (more commonly) a weakly directional antenna if horizontal.
Although they may be us |
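The relationship between operating wavelength and element length described above can be sketched numerically. The ~5% shortening factor used below is a typical assumed value, not a universal constant; the exact figure depends on conductor diameter and installation:

```python
# Approximate length of a center-fed half-wave dipole.
# In free space the element would be exactly half a wavelength; a practical
# wire antenna is usually cut a few percent shorter.  The shortening factor
# k ~ 0.95 is an assumption here (it varies with conductor thickness).
C = 299_792_458  # speed of light, m/s

def half_wave_dipole_length(freq_hz, k=0.95):
    return k * C / freq_hz / 2

# e.g. a dipole for 100 MHz, mid FM broadcast band:
print(round(half_wave_dipole_length(100e6), 3), "m")  # 1.424 m
```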
https://en.wikipedia.org/wiki/Henry%20Dudeney | Henry Ernest Dudeney (10 April 1857 – 23 April 1930) was an English author and mathematician who specialised in logic puzzles and mathematical games. He is known as one of the country's foremost creators of mathematical puzzles.
Early life
Dudeney was born in the village of Mayfield, East Sussex, England, one of six children of Gilbert and Lucy Dudeney. His grandfather, John Dudeney, was well known as a self-taught mathematician and shepherd; his initiative was much admired by his grandson. Dudeney learned to play chess at an early age, and continued to play frequently throughout his life. This led to a marked interest in mathematics and the composition of puzzles. Chess problems in particular fascinated him during his early years.
Career
Although Dudeney spent his career in the Civil Service, he continued to devise various problems and puzzles. Dudeney's first puzzle contributions were submissions to newspapers and magazines, often under the pseudonym of "Sphinx." Much of this earlier work was a collaboration with American puzzlist Sam Loyd; in 1890, they published a series of articles in the English penny weekly Tit-Bits.
Dudeney later contributed puzzles under his real name to publications such as The Weekly Dispatch, The Queen, Blighty, and Cassell's Magazine. For twenty years, he had a successful column, "Perplexities", in The Strand Magazine, edited by the former editor of Tit-Bits, George Newnes. Dudeney continued to exchange puzzles with fellow recreational mathematician Sam Loyd for a while, but broke off the correspondence and accused Loyd of stealing his puzzles and publishing them under his own name.
Some of Dudeney's most famous innovations were his 1903 success at solving the Haberdasher's Puzzle (Cut an equilateral triangle into four pieces that can be rearranged to make a square) and publishing the first known crossnumber puzzle, in 1926. He has also been credited with discovering new applications of digital roots. Dudeney was a leading exponent |
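The digital roots mentioned above are simple to compute: repeatedly sum the decimal digits until a single digit remains. A minimal sketch:

```python
# Digital root: repeatedly sum decimal digits until one digit remains.
# For positive n this equals the closed form 1 + (n - 1) % 9.
def digital_root(n):
    while n >= 10:
        n = sum(int(c) for c in str(n))
    return n

print(digital_root(65536))  # 6+5+5+3+6 = 25 -> 2+5 = 7
```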
https://en.wikipedia.org/wiki/Opto-isolator | An opto-isolator (also called an optocoupler, photocoupler, or optical isolator) is an electronic component that transfers electrical signals between two isolated circuits by using light. Opto-isolators prevent high voltages from affecting the system receiving the signal. Commercially available opto-isolators withstand input-to-output voltages up to 10 kV and voltage transients with speeds up to 25 kV/μs.
A common type of opto-isolator consists of an LED and a phototransistor in the same opaque package. Other types of source-sensor combinations include LED-photodiode, LED-LASCR, and lamp-photoresistor pairs. Usually opto-isolators transfer digital (on-off) signals and can act as an electronic switch, but some techniques allow them to be used with analog signals.
History
The value of optically coupling a solid state light emitter to a semiconductor detector for the purpose of electrical isolation was recognized in 1963 by Akmenkalns, et al. (US patent 3,417,249). Photoresistor-based opto-isolators were introduced in 1968. They are the slowest, but also the most linear isolators and still retain a niche market in the audio and music industries. Commercialization of LED technology in 1968–1970 caused a boom in optoelectronics, and by the end of the 1970s the industry developed all principal types of opto-isolators. The majority of opto-isolators on the market use bipolar silicon phototransistor sensors. They attain medium data transfer speed, sufficient for applications like electroencephalography. The fastest opto-isolators use PIN diodes in photoconductive mode.
Operation
An opto-isolator contains a source (emitter) of light, almost always a near infrared light-emitting diode (LED), that converts electrical input signal into light, a closed optical channel (also called dielectrical channel), and a photosensor, which detects incoming light and either generates electric energy directly, or modulates electric current flowing from an external power supply. The sensor |
https://en.wikipedia.org/wiki/Online%20public%20access%20catalog | The online public access catalog (OPAC), now frequently synonymous with library catalog, is an online database of materials held by a library or group of libraries. Online catalogs have largely replaced the analog card catalogs previously used in libraries.
History
Early online
Although a handful of experimental systems existed as early as the 1960s, the first large-scale online catalogs were developed at Ohio State University in 1975 and the Dallas Public Library in 1978.
These and other early online catalog systems tended to closely reflect the card catalogs that they were intended to replace. Using a dedicated terminal or telnet client, users could search a handful of pre-coordinate indexes and browse the resulting display in much the same way they had previously navigated the card catalog.
Throughout the 1980s, the number and sophistication of online catalogs grew. The first commercial systems appeared, and would by the end of the decade largely replace systems built by libraries themselves. Library catalogs began providing improved search mechanisms, including Boolean and keyword searching, as well as ancillary functions, such as the ability to place holds on items that had been checked-out.
At the same time, libraries began to develop applications to automate the purchase, cataloging, and circulation of books and other library materials. These applications, collectively known as an integrated library system (ILS) or library management system, included an online catalog as the public interface to the system's inventory. Most library catalogs are closely tied to their underlying ILS system.
Stagnation and dissatisfaction
The 1990s saw a relative stagnation in the development of online catalogs. Although the earlier character-based interfaces were replaced with ones for the Web, both the design and the underlying search technology of most systems did not advance much beyond that developed in the late 1980s.
At the same time, organizations outside of libr |
https://en.wikipedia.org/wiki/Active%20filter | An active filter is a type of analog circuit implementing an electronic filter using active components, typically an amplifier. Amplifiers included in a filter design can be used to improve the cost, performance and predictability of a filter.
An amplifier prevents the load impedance of the following stage from affecting the characteristics of the filter. An active filter can have complex poles and zeros without using a bulky or expensive inductor. The shape of the response, the Q (quality factor), and the tuned frequency can often be set with inexpensive variable resistors. In some active filter circuits, one parameter can be adjusted without affecting the others.
Types
Using active elements has some limitations. Basic filter design equations neglect the finite bandwidth of amplifiers. Available active devices have limited bandwidth, so they are often impractical at high frequencies. Amplifiers consume power and inject noise into a system. Certain circuit topologies may be impractical if no DC path is provided for bias current to the amplifier elements. Power handling capability is limited by the amplifier stages.
Active filter circuit configurations (electronic filter topology) include:
Sallen-Key, and VCVS filters (low sensitivity to component tolerance)
State variable filters and biquadratic or biquad filters
Dual amplifier bandpass (DABP)
Wien notch
Multiple feedback filters
Fliege (lowest component count for 2 opamp but with good controllability over frequency and type)
Akerberg Mossberg (one of the topologies that offer complete and independent control over gain, frequency, and type)
Active filters can implement the same transfer functions as passive filters. Common transfer functions are:
High-pass filter – attenuation of frequencies below their cut-off points.
Low-pass filter – attenuation of frequencies above their cut-off points.
Band-pass filter – attenuation of frequencies both above and below those they allow to pass.
Band-stop fil |
https://en.wikipedia.org/wiki/Stub%20network | A stub network, or pocket network, is an informal term for a computer network, or part of an internetwork, that has no knowledge of other networks and typically sends much or all of its non-local traffic out via a single path, aware only of a default route to non-local destinations. As a practical analogy, think of an island connected to the rest of the world by a single bridge, with no other path available by air or sea. Continuing the analogy, the island might have more than one physical bridge to the mainland, but the set of bridges still represents only one logical path.
Examples include: an enterprise that connects to the corporate network by only one router, or by multiple default routers connected to the same logical upstream destination.
A single LAN which never carries packets between multiple routers connected to it. All traffic is to and/or from local hosts. The routers will route packets into the LAN only if they are destined for the LAN, and out of the LAN only if they originated on it.
A person, or workgroup, connected to an ISP by only one router is a stub network with respect to the ISP. This stub network is part of the ISP's autonomous system, discussed below.
For each interface on which no default route (also called the gateway of last resort) has been elected, some routing implementations refer to the directly connected subnets as stub networks.
An OSPF stubby area is one which receives routes from other areas in the OSPF domain, but for external routes, which are communicated via Type 5 link-state advertisements, the stubby area is aware only of a default route.
An OSPF totally stubby area is one which only has a default route to the rest of the OSPF routing domain. Such an area may have more than one router, but these routers will only know about the default route to the outside.
A stub autonomous system that is connected to only one other autonomous system, through which it gains access to the Internet. This is also called a stub AS, which characterize the great |
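The OSPF stub-area behaviors above might be configured along these lines (a Cisco IOS-style sketch; the process ID and area number are illustrative only):

```
router ospf 1
 area 1 stub             ! stubby area: Type 5 external LSAs replaced by a default route
! on the area border router only, to make area 1 totally stubby:
 area 1 stub no-summary  ! also suppress inter-area summary routes
```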
https://en.wikipedia.org/wiki/Sallen%E2%80%93Key%20topology | The Sallen–Key topology is an electronic filter topology used to implement second-order active filters that is particularly valued for its simplicity. It is a degenerate form of a voltage-controlled voltage-source (VCVS) filter topology. It was introduced by R. P. Sallen and E. L. Key of MIT Lincoln Laboratory in 1955.
Explanation of operation
A VCVS filter uses a voltage amplifier with practically infinite input impedance and zero output impedance to implement a 2-pole low-pass, high-pass, bandpass, bandstop, or allpass response. The VCVS filter allows high Q factor and passband gain without the use of inductors. A VCVS filter also has the advantage of independence: VCVS filters can be cascaded without the stages affecting each other's tuning. A Sallen–Key filter is a variation on a VCVS filter that uses a unity-voltage-gain amplifier (i.e., a pure buffer amplifier).
History and implementation
In 1955, Sallen and Key used vacuum tube cathode follower amplifiers; the cathode follower is a reasonable approximation to an amplifier with unity voltage gain. Modern analog filter implementations may use operational amplifiers (also called op amps). Because of its high input impedance and easily selectable gain, an operational amplifier in a conventional non-inverting configuration is often used in VCVS implementations. Implementations of Sallen–Key filters often use an op amp configured as a voltage follower; however, emitter or source followers are other common choices for the buffer amplifier.
Sensitivity to component tolerances
VCVS filters are relatively resilient to component tolerance, but obtaining high Q factor may require extreme component value spread or high amplifier gain. Higher-order filters can be obtained by cascading two or more stages.
Generic Sallen–Key topology
The generic unity-gain Sallen–Key filter topology implemented with a unity-gain operational amplifier is shown in Figure 1. The following analysis is based on the assumption that the operat |
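The tuned frequency and Q of the unity-gain low-pass follow from the standard second-order analysis. A sketch, assuming one common component labeling (R1 and R2 in series from the input, C1 feeding back from the output to the R1/R2 junction, C2 from the non-inverting input to ground):

```python
import math

# Unity-gain Sallen-Key low-pass, with the labeling assumed above:
#   f0 = 1 / (2*pi*sqrt(R1*R2*C1*C2))
#   Q  = sqrt(R1*R2*C1*C2) / (C2*(R1 + R2))
def sallen_key_lp(R1, R2, C1, C2):
    w0 = 1 / math.sqrt(R1 * R2 * C1 * C2)
    Q = math.sqrt(R1 * R2 * C1 * C2) / (C2 * (R1 + R2))
    return w0 / (2 * math.pi), Q

# Equal resistors with C1 = 2*C2 give a Butterworth response (Q ~ 0.707):
f0, Q = sallen_key_lp(R1=10e3, R2=10e3, C1=20e-9, C2=10e-9)
print(round(f0), round(Q, 3))
```

With equal components throughout, Q is fixed at 1/2; higher Q requires spreading component values or adding amplifier gain, which is exactly the tolerance trade-off described in the previous section.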
https://en.wikipedia.org/wiki/Onsager%20reciprocal%20relations | In thermodynamics, the Onsager reciprocal relations express the equality of certain ratios between flows and forces in thermodynamic systems out of equilibrium, but where a notion of local equilibrium exists.
"Reciprocal relations" occur between different pairs of forces and flows in a variety of physical systems. For example, consider fluid systems described in terms of temperature, matter density, and pressure. In this class of systems, it is known that temperature differences lead to heat flows from the warmer to the colder parts of the system; similarly, pressure differences will lead to matter flow from high-pressure to low-pressure regions. What is remarkable is the observation that, when both pressure and temperature vary, temperature differences at constant pressure can cause matter flow (as in convection) and pressure differences at constant temperature can cause heat flow. Perhaps surprisingly, the heat flow per unit of pressure difference and the density (matter) flow per unit of temperature difference are equal. This equality was shown to be necessary by Lars Onsager using statistical mechanics as a consequence of the time reversibility of microscopic dynamics (microscopic reversibility). The theory developed by Onsager is much more general than this example and capable of treating more than two thermodynamic forces at once, with the limitation that "the principle of dynamical reversibility does not apply when (external) magnetic fields or Coriolis forces are present", in which case "the reciprocal relations break down".
Though the fluid system is perhaps described most intuitively, the high precision of electrical measurements makes experimental realisations of Onsager's reciprocity easier in systems involving electrical phenomena. In fact, Onsager's 1931 paper refers to thermoelectricity and transport phenomena in electrolytes as well known from the 19th century, including "quasi-thermodynamic" theories by Thomson and Helmholtz respectively. Onsager' |
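In the standard notation, writing each flux J_i as a linear combination of the thermodynamic forces X_j, the reciprocal relations state that the matrix of phenomenological coefficients is symmetric:

```latex
J_i = \sum_j L_{ij} X_j, \qquad L_{ij} = L_{ji}
```

In an external magnetic field B the symmetry generalizes, for variables even under time reversal, to L_{ij}(B) = L_{ji}(-B) (the Onsager–Casimir form), consistent with the caveat about magnetic fields quoted above.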
https://en.wikipedia.org/wiki/Thermal%20equilibrium | Two physical systems are in thermal equilibrium if there is no net flow of thermal energy between them when they are connected by a path permeable to heat. Thermal equilibrium obeys the zeroth law of thermodynamics. A system is said to be in thermal equilibrium with itself if the temperature within the system is spatially uniform and temporally constant.
Systems in thermodynamic equilibrium are always in thermal equilibrium, but the converse is not always true. If the connection between the systems allows transfer of energy as 'change in internal energy' but does not allow transfer of matter or transfer of energy as work, the two systems may reach thermal equilibrium without reaching thermodynamic equilibrium.
Two varieties of thermal equilibrium
Relation of thermal equilibrium between two thermally connected bodies
The relation of thermal equilibrium is an instance of equilibrium between two bodies, which means that it refers to transfer through a selectively permeable partition of matter or work; it is called a diathermal connection. According to Lieb and Yngvason, the essential meaning of the relation of thermal equilibrium includes that it is reflexive and symmetric. It is not included in the essential meaning whether it is or is not transitive. After discussing the semantics of the definition, they postulate a substantial physical axiom, that they call the "zeroth law of thermodynamics", that thermal equilibrium is a transitive relation. They comment that the equivalence classes of systems so established are called isotherms.
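Because the zeroth law makes thermal equilibrium transitive (and the relation is already reflexive and symmetric), pairwise equilibrium observations partition systems into the equivalence classes called isotherms. A toy sketch using union-find; the system names and equilibrium pairs are illustrative only:

```python
# Merge pairwise thermal-equilibrium observations into equivalence
# classes (isotherms) using a tiny union-find structure.
def isotherms(systems, in_equilibrium):
    parent = {s: s for s in systems}
    def find(x):                      # path-halving find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in in_equilibrium:       # union each observed pair
        parent[find(a)] = find(b)
    classes = {}
    for s in systems:
        classes.setdefault(find(s), set()).add(s)
    return list(classes.values())

# A~B and B~C observed; transitivity puts A, B, C on one isotherm:
print(isotherms(["A", "B", "C", "D"], [("A", "B"), ("B", "C")]))
```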
Internal thermal equilibrium of an isolated body
Thermal equilibrium of a body in itself refers to the body when it is isolated. The background is that no heat enters or leaves it, and that it is allowed unlimited time to settle under its own intrinsic characteristics. When it is completely settled, so that macroscopic change is no longer detectable, it is in its own thermal equilibrium. It is not implied that it is necessarily in othe |
https://en.wikipedia.org/wiki/St.%20Petersburg%20paradox | The St. Petersburg paradox or St. Petersburg lottery is a paradox involving the game of flipping a coin where the expected payoff of the theoretical lottery game approaches infinity but nevertheless seems to be worth only a very small amount to the participants. The St. Petersburg paradox is a situation where a naïve decision criterion that takes only the expected value into account predicts a course of action that presumably no actual person would be willing to take. Several resolutions to the paradox have been proposed, including the impossible amount of money a casino would need to continue the game indefinitely.
The problem was invented by Nicolas Bernoulli, who stated it in a letter to Pierre Raymond de Montmort on September 9, 1713. However, the paradox takes its name from its analysis by Nicolas' cousin Daniel Bernoulli, one-time resident of Saint Petersburg, who in 1738 published his thoughts about the problem in the Commentaries of the Imperial Academy of Science of Saint Petersburg.
The St. Petersburg game
A casino offers a game of chance for a single player in which a fair coin is tossed at each stage. The initial stake begins at 2 dollars and is doubled every time tails appears. The first time heads appears, the game ends and the player wins whatever is the current stake. Thus the player wins 2 dollars if heads appears on the first toss, 4 dollars if tails appears on the first toss and heads on the second, 8 dollars if tails appears on the first two tosses and heads on the third, and so on. Mathematically, the player wins 2^(k+1) dollars, where k is the number of consecutive tails tosses. What would be a fair price to pay the casino for entering the game?
To answer this, one needs to consider what would be the expected payout at each stage: with probability 1/2, the player wins 2 dollars; with probability 1/4 the player wins 4 dollars; with probability 1/8 the player wins 8 dollars, and so on. Assuming the game can continue as long as the coin toss results in tails a
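The divergence can be made concrete: stage k pays 2^k dollars and is reached with probability 1/2^k, so every stage adds exactly one dollar to the expectation. A minimal sketch:

```python
import random

# Each stage contributes (1/2**k) * 2**k = 1 dollar, so truncating the
# game at n stages caps the expected payout at exactly n dollars:
def truncated_expectation(n):
    return sum((1 / 2 ** k) * 2 ** k for k in range(1, n + 1))

print(truncated_expectation(30))   # 30.0 -- grows without bound as n grows

# A single play, for contrast: despite the infinite expectation,
# typical payouts are modest (the stake doubles only on tails).
def play():
    stake = 2
    while random.random() < 0.5:   # tails: double the stake and continue
        stake *= 2
    return stake
```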
https://en.wikipedia.org/wiki/List%20of%20human%20blood%20components | In blood banking, the fractions of Whole Blood used for transfusion are also called components.
See also
Reference ranges for common blood tests
References
Blood
Human blood components |
https://en.wikipedia.org/wiki/POKEY | POKEY, an acronym for Pot Keyboard Integrated Circuit, is a digital I/O chip designed by Doug Neubauer at Atari, Inc. for the Atari 8-bit family of home computers. It was first released with the Atari 400 and Atari 800 in 1979 and is included in all later models and the Atari 5200 console. POKEY combines functions for reading paddle controllers (potentiometers) and computer keyboards as well as sound generation and a source for pseudorandom numbers. It produces four voices of distinctive square wave audio, either as clear tones or modified with distortion settings. Neubauer also developed the Atari 8-bit killer application Star Raiders which makes use of POKEY features.
POKEY chips are used for audio in many arcade video games of the 1980s including Centipede, Missile Command, Asteroids Deluxe, and Gauntlet. Some of Atari's arcade systems use multi-core versions with 2 or 4 POKEYs in a single package for more audio channels. The Atari 7800 console allows a game cartridge to contain a POKEY, providing better sound than the system's audio chip. Only two licensed games make use of this: the ports of Ballblazer and Commando.
The LSI chip has 40 pins and is identified as C012294. The USPTO granted U.S. Patent 4,314,236 to Atari on February 2, 1982 for an "Apparatus for producing a plurality of audio sound effects". The inventors listed are Steven T. Mayer and Ronald E. Milner.
No longer manufactured, POKEY is emulated in software by arcade and Atari 8-bit emulators and also via the Atari SAP music format and associated player.
Features
Audio
4 semi-independent audio channels
Channels may be configured as one of:
Four 8-bit channels
Two 16-bit channels
One 16-bit channel and two 8-bit channels
Per-channel volume, frequency, and waveform (square wave with variable duty cycle or pseudorandom noise)
15 kHz or 64 kHz frequency divider.
Two channels may be driven at the CPU clock frequency.
High-pass filter
Keyboard scan (up to 64 keys) + 2 modifier bits (Shift |
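As an illustration of the frequency dividers listed above, the commonly documented relation between an 8-bit divider value and the output tone is sketched below. The formula and the base-clock figure are assumptions for this sketch; exact values vary by machine and clocking mode:

```python
# POKEY tone frequency from an 8-bit divider value (AUDFn), using the
# commonly documented relation for pure square waves:
#   f_out = f_clock / (2 * (AUDF + 1))
# The ~64 kHz base clock below is an assumed nominal figure.
def pokey_freq(audf, base_clock=63_921):
    return base_clock / (2 * (audf + 1))

print(round(pokey_freq(72)))   # ~438 Hz, near A4 concert pitch
```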
https://en.wikipedia.org/wiki/FileVault | FileVault is a disk encryption program in Mac OS X 10.3 Panther (2003) and later. It performs on-the-fly encryption of volumes on Mac computers.
Versions and key features
FileVault was introduced with Mac OS X 10.3 Panther, and could only be applied to a user's home directory, not the startup volume. The operating system uses an encrypted sparse disk image (a large single file) to present a volume for the home directory. Mac OS X 10.5 Leopard and Mac OS X 10.6 Snow Leopard use more modern sparse bundle disk images which spread the data over 8 MB files (called bands) within a bundle. Apple refers to this original iteration of FileVault as "legacy FileVault".
OS X 10.7 Lion and newer versions offer FileVault 2, which is a significant redesign. This encrypts the entire OS X startup volume and typically includes the home directory, abandoning the disk image approach. For this approach to disk encryption, authorised users' information is loaded from a separate non-encrypted boot volume (partition/slice type Apple_Boot).
FileVault
The original version of FileVault was added in Mac OS X Panther to encrypt a user's home directory.
Master passwords and recovery keys
When FileVault is enabled, the system invites the user to create a master password for the computer. If a user password is forgotten, the master password or recovery key may be used to decrypt the files instead. The FileVault recovery key is different from a Mac recovery key, which is a 28-character code used to reset the login password or regain access to the associated Apple ID.
Migration
Migration of FileVault home directories is subject to two limitations:
there must be no prior migration to the target computer
the target must have no existing user accounts.
If Migration Assistant has already been used or if there are user accounts on the target:
before migration, FileVault must be disabled at the source.
If transferring FileVault data from a previous Mac that uses 10.4 using the built-in utility to move data to a new |
https://en.wikipedia.org/wiki/Programming%20Perl | Programming Perl, best known as the Camel Book among programmers, is a book about writing programs using the Perl programming language, revised as several editions (1991–2012) to reflect major language changes since Perl version 4. Editions have been co-written by the creator of Perl, Larry Wall, along with Randal L. Schwartz, then Tom Christiansen and then Jon Orwant. Published by O'Reilly Media, the book is considered the canonical reference work for Perl programmers. With over 1,000 pages, the various editions contain complete descriptions of each Perl language version and its interpreter. Examples range from trivial code snippets to the highly complex expressions for which Perl is widely known. The Camel Book editions are also noted for being written in an approachable and humorous style.
History
The first edition, which gained the nickname "the pink camel" due to its pink spine, was originally published in January 1991 and covered version 4 of the Perl language. It was the work of Larry Wall and Randal L. Schwartz. The second edition, published in August 1996, included updates for the release of Perl 5, among them references, objects, packages and other modern programming constructs. This edition was written from scratch by the original authors and Tom Christiansen. In July 2000, the third edition of Programming Perl was published. This version was again rewritten, this time by Wall, Christiansen and Jon Orwant, and covered the Perl 5.6 language. The fourth edition constitutes a major update and rewrite of the book for Perl version 5.14, and improves the coverage of Unicode usage in Perl. The fourth edition was published in February 2012. This edition is written by Tom Christiansen, brian d foy, Larry Wall and Jon Orwant.
Programming Perl has also been made available electronically by O'Reilly, both through its inclusion in various editions of The Perl CD Bookshelf and through the "Safari" service (a subscription-based website containing technical ebooks). Th |
https://en.wikipedia.org/wiki/Abstraction%20inversion | In computer programming, abstraction inversion is an anti-pattern arising when users of a construct need functions implemented within it but not exposed by its interface. The result is that the users re-implement the required functions in terms of the interface, which in its turn uses the internal implementation of the same functions. This may result in implementing lower-level features in terms of higher-level ones, thus the term 'abstraction inversion'.
Possible ill-effects are:
The user of such a re-implemented function may seriously underestimate its running-costs.
The user of the construct is forced to obscure their implementation with complex mechanical details.
Many users attempt to solve the same problem, increasing the risk of error.
Examples
Alleged examples from professional programming circles include:
In Ada, the choice of the rendezvous construct as a synchronisation primitive forced programmers to implement simpler constructs, such as semaphores, on this more complex basis.
In Applesoft BASIC, integer arithmetic was implemented on top of floating-point arithmetic, and there were no bitwise operators and no support for blitting of raster graphics (even though the language supported vector graphics on the Apple II's raster hardware). This caused games and other programs written in BASIC to run slower.
Like Applesoft BASIC, Lua has a floating-point type as its sole numeric type when configured for desktop computers, and it had no bitwise operators prior to Lua 5.2.
Creating an object to represent a function is cumbersome in object-oriented languages such as Java and C++ (especially prior to C++11 and Java 8), in which functions are not first-class objects. In C++ it is possible to make an object 'callable' by overloading the () operator, but it is still often necessary to implement a new class, such as the Functors in the STL. (C++11's lambda function makes it much easier to create an object representing a function.)
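The wrapper-class pattern can be sketched in Python. This is a hypothetical illustration, not from the article: Python functions are already first-class, so the class below mimics the overhead that overloading operator() imposes in C++, while the closure shows the first-class alternative.

```python
# Callable object: the Python analogue of overloading operator() in C++.
class Adder:
    def __init__(self, increment):
        self.increment = increment

    def __call__(self, x):
        return x + self.increment

add_three = Adder(3)          # an object standing in for a function
print(add_three(10))          # -> 13

# The first-class-function alternative: no wrapper class at all.
def make_adder(increment):
    return lambda x: x + increment

print(make_adder(3)(10))      # -> 13
```

The class version is exactly the boilerplate that C++11 lambdas (and Java 8 method references) were introduced to eliminate.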
Tom Lord has suggested that Subv |
https://en.wikipedia.org/wiki/HACEK%20organisms | The HACEK organisms are a group of fastidious Gram-negative bacteria that are an unusual cause of infective endocarditis, which is an inflammation of the heart due to bacterial infection. HACEK is an abbreviation of the initials of the genera of this group of bacteria: Haemophilus, Aggregatibacter (previously Actinobacillus), Cardiobacterium, Eikenella, Kingella. The HACEK organisms are a normal part of the human microbiota, living in the oral-pharyngeal region.
The bacteria were originally grouped because they were thought to be a significant cause of infective endocarditis, but recent research has shown that they are a rare cause, responsible for only 1.4–3.0% of all cases of this disease.
Organisms
HACEK originally referred to Haemophilus parainfluenzae, Haemophilus aphrophilus, Actinobacillus actinomycetemcomitans, Cardiobacterium hominis, Eikenella corrodens, and Kingella kingae. However, taxonomic rearrangements have changed the A to Aggregatibacter species and the H to Haemophilus species to reflect the recategorization and novel identification of many of the species in these genera. Some reviews of medical literature on HACEK organisms use the older classification, but recent papers are using the new classification.
A list of HACEK organisms:
Haemophilus species
Haemophilus haemolyticus
Haemophilus influenzae: The incidence of endocarditis due to H. influenzae declined after the introduction of the Hib vaccine.
Haemophilus parahaemolyticus
Haemophilus parainfluenzae
Aggregatibacter
Aggregatibacter actinomycetemcomitans (previously Actinobacillus actinomycetemcomitans)
Aggregatibacter aphrophilus (previously Haemophilus aphrophilus)
Aggregatibacter paraphrophilus (previously Haemophilus paraphrophilus)
Aggregatibacter segnis
Cardiobacterium
Cardiobacterium hominis: This is the most common species in the genus Cardiobacterium.
Cardiobacterium valvarum
Eikenella
Eikenella corrodens
Kingella
Kingella denitrificans
Kingella kingae: This is the most common species |
https://en.wikipedia.org/wiki/169%20%28number%29 | 169 (one hundred [and] sixty-nine) is the natural number following 168 and preceding 170.
In mathematics
169 is an odd number, a composite number, and a deficient number.
169 is a square number: 13 × 13 = 169, and if each number is reversed the equation is still true: 31 × 31 = 961. 144 shares this property: 12 × 12 = 144, 21 × 21 = 441.
169 is one of the few squares to also be a centered hexagonal number. Like all odd squares, it is a centered octagonal number. 169 is an odd-indexed Pell number, thus it is also a Markov number, appearing in the solutions (2, 169, 985), (2, 29, 169), (29, 169, 14701), etc. 169 is the sum of seven consecutive primes: 13 + 17 + 19 + 23 + 29 + 31 + 37. 169 is a difference of consecutive cubes, equaling 8³ − 7³ = 512 − 343.
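These properties are cheap to verify directly; a quick Python check (the Markov condition x² + y² + z² = 3xyz used below is the standard defining equation):

```python
# Verify the stated properties of 169 with plain integer arithmetic.

assert 13 * 13 == 169 and 31 * 31 == 961      # square, and reversed square
assert 12 * 12 == 144 and 21 * 21 == 441      # 144 shares the property

primes = [13, 17, 19, 23, 29, 31, 37]         # seven consecutive primes
assert sum(primes) == 169

assert 8 ** 3 - 7 ** 3 == 169                 # difference of consecutive cubes

# Markov triples (x, y, z) satisfy x^2 + y^2 + z^2 = 3xyz.
for x, y, z in [(2, 169, 985), (2, 29, 169), (29, 169, 14701)]:
    assert x * x + y * y + z * z == 3 * x * y * z
```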
In astronomy
169 Zelia is a bright main belt asteroid
Gliese 169 is an orange, main sequence (K7 V) star in the constellation Taurus
QSO B0307+169 is a quasar in the constellation Aries
Sayh al Uhaymir 169 is a 206 g lunar meteorite found in the Sultanate of Oman
In the military
was a United States Navy technical research ship during the 1960s
was a United States Navy during World War II
was a United States Navy during World War II
was a United States Navy following World War I
was a United States Navy during World War II
was a United States Navy submarine during World War II
169th Battalion, CEF, a unit of the Canadian Expeditionary Force during World War I
169th Fires Brigade, a US Army National Guard artillery brigade, part of the Colorado Army National Guard
169th Fighter Wing, a United States Air Force fighter unit at McEntire Joint National Guard Station, South Carolina
169 or 169th Squadrons
169th Airlift Squadron, a unit of the U.S. Air Force
Marine Light Attack Helicopter Squadron 169, United States Marine Corps Light Attack Helicopter Squadron
No. 169 Squadron RAF, a unit of the United Kingdom Royal Air Force
In transportation
Metro Transit Route 169 in Seattle
169th Street station on the I |
https://en.wikipedia.org/wiki/Software%20inspection | Inspection in software engineering refers to peer review of any work product by trained individuals who look for defects using a well-defined process. An inspection might also be referred to as a Fagan inspection after Michael Fagan, the creator of a very popular software inspection process.
Introduction
An inspection is one of the most common sorts of review practices found in software projects. The goal of the inspection is to identify defects. Commonly inspected work products include software requirements specifications and test plans. In an inspection, a work product is selected for review and a team is gathered for an inspection meeting to review the work product. A moderator is chosen to moderate the meeting. Each inspector prepares for the meeting by reading the work product and noting each defect. In an inspection, a defect is any part of the work product that will keep an inspector from approving it. For example, if the team is inspecting a software requirements specification, each defect will be text in the document which an inspector disagrees with.
Inspection process
The inspection process was developed in the mid-1970s and has since been extended and modified.
The process should have entry criteria that determine if the inspection process is ready to begin. This prevents unfinished work products from entering the inspection process. The entry criteria might be a checklist including items such as "The document has been spell-checked".
The stages in the inspections process are: Planning, Overview meeting, Preparation, Inspection meeting, Rework and Follow-up. The Preparation, Inspection meeting and Rework stages might be iterated.
Planning: The inspection is planned by the moderator.
Overview meeting: The author describes the background of the work product.
Preparation: Each inspector examines the work product to identify possible defects.
Inspection meeting: During this meeting the reader reads through the work product, part by part and t |
https://en.wikipedia.org/wiki/Systems%20biology | Systems biology is the computational and mathematical analysis and modeling of complex biological systems. It is a biology-based interdisciplinary field of study that focuses on complex interactions within biological systems, using a holistic approach (holism instead of the more traditional reductionism) to biological research.
Particularly from the year 2000 onwards, the concept has been used widely in biology in a variety of contexts. The Human Genome Project is an example of applied systems thinking in biology which has led to new, collaborative ways of working on problems in the biological field of genetics. One of the aims of systems biology is to model and discover emergent properties, properties of cells, tissues and organisms functioning as a system whose theoretical description is only possible using techniques of systems biology. These typically involve metabolic networks or cell signaling networks.
Overview
Systems biology can be considered from a number of different aspects.
As a field of study, particularly, the study of the interactions between the components of biological systems, and how these interactions give rise to the function and behavior of that system (for example, the enzymes and metabolites in a metabolic pathway or the heart beats).
As a paradigm, systems biology is usually defined in antithesis to the so-called reductionist paradigm (biological organisation), although it is consistent with the scientific method. The distinction between the two paradigms is referred to in these quotations: "the reductionist approach has successfully identified most of the components and many of the interactions but, unfortunately, offers no convincing concepts or methods to understand how system properties emerge ... the pluralism of causes and effects in biological networks is better addressed by observing, through quantitative measures, multiple components simultaneously and by rigorous data integration with mathematical models." (Sauer et al.) "Sy |
https://en.wikipedia.org/wiki/Protein%20family | A protein family is a group of evolutionarily related proteins. In many cases, a protein family has a corresponding gene family, in which each gene encodes a corresponding protein with a 1:1 relationship. The term "protein family" should not be confused with family as it is used in taxonomy.
Proteins in a family descend from a common ancestor and typically have similar three-dimensional structures, functions, and significant sequence similarity. The most important of these is sequence similarity (usually amino-acid sequence), since it is the strictest indicator of homology and therefore the clearest indicator of common ancestry. A fairly well developed framework exists for evaluating the significance of similarity between a group of sequences using sequence alignment methods. Proteins that do not share a common ancestor are very unlikely to show statistically significant sequence similarity, making sequence alignment a powerful tool for identifying the members of protein families. Families are sometimes grouped together into larger clades called superfamilies based on structural and mechanistic similarity, even if no identifiable sequence homology is seen.
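As a toy illustration of sequence similarity (not the statistical alignment methods the text refers to, such as those used to define families in practice), percent identity between two already-aligned sequences can be computed directly; the sequences below are made-up example fragments:

```python
# Percent identity between two pre-aligned amino-acid sequences of equal
# length ('-' marks an alignment gap). Real family assignment relies on
# statistical alignment tools; this only shows the underlying idea.

def percent_identity(seq_a, seq_b):
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b and a != '-' for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Two hypothetical aligned fragments differing at one position:
print(percent_identity("MKTAYIAKQR", "MKTAYLAKQR"))   # -> 90.0
```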
Currently, over 60,000 protein families have been defined, although ambiguity in the definition of "protein family" leads different researchers to highly varying numbers.
Terminology and usage
As with many biological terms, the use of protein family is somewhat context dependent; it may indicate large groups of proteins with the lowest possible level of detectable sequence similarity, or very narrow groups of proteins with almost identical sequence, function, and three-dimensional structure, or any kind of group in between. To distinguish between these situations, the term protein superfamily is often used for distantly related proteins whose relatedness is not detectable by sequence similarity, but only from shared structural features. Other terms, such as protein class, group, clan, and subfamily, have been |
https://en.wikipedia.org/wiki/Mask%20%28computing%29 | In computer science, a mask or bitmask is data that is used for bitwise operations, particularly in a bit field. Using a mask, multiple bits in a byte, nibble, word, etc. can be set either on or off, or inverted from on to off (or vice versa) in a single bitwise operation. An additional use of masking involves predication in vector processing, where the bitmask is used to select which element operations in the vector are to be executed (mask bit is enabled) and which are not (mask bit is clear).
Common bitmask functions
Masking bits to 1
To turn certain bits on, the bitwise OR operation can be used, following the principle that Y OR 1 = 1 and Y OR 0 = Y. Therefore, to make sure a bit is on, OR can be used with a 1. To leave a bit unchanged, OR is used with a 0.
Example: Masking on the higher nibble (bits 4, 5, 6, 7) while leaving the lower nibble (bits 0, 1, 2, 3) unchanged.
10010101 10100101
OR 11110000 11110000
= 11110101 11110101
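The same OR-masking example can be checked in a short Python sketch using the document's bit patterns:

```python
# Masking bits to 1: OR with the mask forces the high nibble on
# while leaving the low nibble unchanged.

value1 = 0b10010101
value2 = 0b10100101
mask   = 0b11110000

assert value1 | mask == 0b11110101
assert value2 | mask == 0b11110101
```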
Masking bits to 0
More often in practice, bits are "masked off" (or masked to 0) than "masked on" (or masked to 1). When a bit is ANDed with a 0, the result is always 0, i.e. Y AND 0 = 0. To leave the other bits as they were originally, they can be ANDed with 1, as Y AND 1 = Y.
Example: Masking off the higher nibble (bits 4, 5, 6, 7) while leaving the lower nibble (bits 0, 1, 2, 3) unchanged.
10010101 10100101
AND 00001111 00001111
= 00000101 00000101
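The AND-masking example above can be checked the same way in Python:

```python
# Masking bits to 0: AND with the mask clears the high nibble
# while preserving the low nibble.

value1 = 0b10010101
value2 = 0b10100101
mask   = 0b00001111

assert value1 & mask == 0b00000101
assert value2 & mask == 0b00000101
```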
Querying the status of a bit
It is possible to use bitmasks to easily check the state of individual bits regardless of the other bits. To do this, all the other bits are turned off using the bitwise AND, as discussed above, and the value is compared with 0. If the result is 0, then the bit was off; if it is any other value, the bit was on. What makes this convenient is that it is not necessary to work out what the value actually is, just that it is not 0.
Example: Querying the status of the 4th bit
10011101 10010101
AND 0 |
https://en.wikipedia.org/wiki/List%20of%20moments%20of%20inertia | Moment of inertia, denoted by I, measures the extent to which an object resists rotational acceleration about a particular axis; it is the rotational analogue of mass (which determines an object's resistance to linear acceleration). The moments of inertia of a mass have units of dimension ML² ([mass] × [length]²). It should not be confused with the second moment of area, which has units of dimension L⁴ ([length]⁴) and is used in beam calculations. The mass moment of inertia is often also known as the rotational inertia, and sometimes as the angular mass.
For simple objects with geometric symmetry, one can often determine the moment of inertia in an exact closed-form expression. Typically this occurs when the mass density is constant, but in some cases the density can vary throughout the object as well. In general, it may not be straightforward to symbolically express the moment of inertia of shapes with more complicated mass distributions and lacking symmetry. When calculating moments of inertia, it is useful to remember that it is an additive function and exploit the parallel axis and perpendicular axis theorems.
This article mainly considers symmetric mass distributions, with constant density throughout the object, and the axis of rotation is taken to be through the center of mass unless otherwise specified.
Moments of inertia
Following are scalar moments of inertia. In general, the moment of inertia is a tensor; see below.
List of 3D inertia tensors
This list of moment of inertia tensors is given for principal axes of each object.
To obtain the scalar moments of inertia I above, the tensor moment of inertia I is projected along some axis defined by a unit vector n according to the formula: I = n · I · n = n_i I_ij n_j,
where the dots indicate tensor contraction and the Einstein summation convention is used. In the above table, n would be the unit Cartesian basis e_x, e_y, e_z to obtain I_x, I_y, I_z respectively.
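As a sketch, this projection can be computed directly for a diagonal (principal-axis) tensor. The cuboid tensor below is the standard closed form for a solid cuboid of mass m and side lengths (w, h, d); the numeric inputs are arbitrary example values, not from the article.

```python
# Principal-axis inertia tensor of a solid cuboid (standard closed form).
def cuboid_inertia_tensor(m, w, h, d):
    return [
        [m * (h * h + d * d) / 12, 0, 0],
        [0, m * (w * w + d * d) / 12, 0],
        [0, 0, m * (w * w + h * h) / 12],
    ]

def project(tensor, n):
    """Scalar moment about unit axis n: sum_ij n_i I_ij n_j."""
    return sum(n[i] * tensor[i][j] * n[j] for i in range(3) for j in range(3))

I = cuboid_inertia_tensor(m=2.0, w=1.0, h=2.0, d=3.0)
# Projecting along a Cartesian basis vector recovers the diagonal entry:
print(project(I, (0.0, 0.0, 1.0)))   # I_z = 2*(1+4)/12
```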
See also
List of second moments of area
Parallel axis theorem
Perpendicula |
https://en.wikipedia.org/wiki/List%20of%20curves | This is a list of Wikipedia articles about curves used in different fields: mathematics (including geometry, statistics, and applied mathematics), physics, engineering, economics, medicine, biology, psychology, ecology, etc.
Mathematics (Geometry)
Algebraic curves
Rational curves
Rational curves are subdivided according to the degree of the polynomial.
Degree 1
Line
Degree 2
Plane curves of degree 2 are known as conics or conic sections and include
Circle
Unit circle
Ellipse
Parabola
Hyperbola
Unit hyperbola
Degree 3
Cubic plane curves include
Cubic parabola
Folium of Descartes
Cissoid of Diocles
Conchoid of de Sluze
Right strophoid
Semicubical parabola
Serpentine curve
Trident curve
Trisectrix of Maclaurin
Tschirnhausen cubic
Witch of Agnesi
Degree 4
Quartic plane curves include
Ampersand curve
Bean curve
Bicorn
Bow curve
Bullet-nose curve
Cartesian oval
Cruciform curve
Deltoid curve
Devil's curve
Hippopede
Kampyle of Eudoxus
Kappa curve
Lemniscate
Lemniscate of Booth
Lemniscate of Gerono
Lemniscate of Bernoulli
Limaçon
Cardioid
Limaçon trisectrix
Ovals of Cassini
Squircle
Trifolium curve
Degree 5
Degree 6
Astroid
Atriphtaloid
Nephroid
Quadrifolium
Curve families of variable degree
Epicycloid
Epispiral
Epitrochoid
Hypocycloid
Lissajous curve
Poinsot's spirals
Rational normal curve
Rose curve
Curves with genus 1
Bicuspid curve
Cassinoide
Cubic curve
Elliptic curve
Watt's curve
Curves with genus > 1
Bolza surface (genus 2)
Klein quartic (genus 3)
Bring's curve (genus 4)
Macbeath surface (genus 7)
Butterfly curve (algebraic) (genus 7)
Curve families with variable genus
Polynomial lemniscate
Fermat curve
Sinusoidal spiral
Superellipse
Hurwitz surface
Elkies trinomial curves
Hyperelliptic curve
Classical modular curve
Cassini oval
Transcendental curves
Bowditch curve
Brachistochrone
Butterfly curve (transcendental)
Catenary
Clélies
Cochleoid
Cycloid
Horopter
Isochrone
Isochrone of Huygens (Tautochrone)
Isochrone of Leibniz
Isochrone of Varignon
Lamé |
https://en.wikipedia.org/wiki/Fisher%20equation | In financial mathematics and economics, the Fisher equation expresses the relationship between nominal interest rates, real interest rates, and inflation. Named after Irving Fisher, an American economist, it can be expressed as real interest rate ≈ nominal interest rate − inflation rate.
In more formal terms, where r equals the real interest rate, i equals the nominal interest rate, and π equals the inflation rate, then (1 + i) = (1 + r)(1 + π). The approximation r ≈ i − π is often used instead, since the nominal interest rate, real interest rate, and inflation rate are usually close to zero.
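A minimal Python comparison of the exact Fisher relation and the linear approximation, using assumed example rates (the 5% and 2% figures are illustrative only):

```python
# Exact:        (1 + i) = (1 + r)(1 + pi)  =>  r = (1 + i)/(1 + pi) - 1
# Approximate:  r ≈ i - pi   (good when the rates are close to zero)

nominal = 0.05     # i: 5% nominal interest rate (assumed example value)
inflation = 0.02   # pi: 2% inflation rate (assumed example value)

exact_real = (1 + nominal) / (1 + inflation) - 1
approx_real = nominal - inflation

print(round(exact_real, 6))    # 0.029412
print(round(approx_real, 6))   # 0.03
```

Even at these modest rates the two answers already differ in the fourth decimal place, which is why the exact form matters for long-horizon calculations such as the cost-benefit analyses discussed below.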
Applications
Borrowing, lending and the time value of money
When loans are made, the amount borrowed and the repayments due to the lender are normally stated in nominal terms, before inflation. However, when inflation occurs, a dollar repaid in the future is worth less than a dollar borrowed today. To calculate the true economics of the loan, it is necessary to adjust the nominal cash flows to account for future inflation.
Inflation-indexed bonds
The Fisher equation can be used in the analysis of bonds. The real return on a bond is roughly equivalent to the nominal interest rate minus the expected inflation rate. But if actual inflation exceeds expected inflation during the life of the bond, the bondholder's real return will suffer. This risk is one of the reasons inflation-indexed bonds such as U.S. Treasury Inflation-Protected Securities were created to eliminate inflation uncertainty. Holders of indexed bonds are assured that the real cash flow of the bond (principal plus interest) will not be affected by inflation.
Cost–benefit analysis
As detailed by Steve Hanke, Philip Carver, and Paul Bugg (1975), cost–benefit analysis can be greatly distorted if the exact Fisher equation is not applied. Prices and interest rates must both be projected in either real or nominal terms.
Monetary policy
The Fisher equation plays a key role in the Fisher hypothesis, which asserts that the real interest rate is unaffec |
https://en.wikipedia.org/wiki/Geometric%20dimensioning%20and%20tolerancing | Geometric dimensioning and tolerancing (GD&T) is a system for defining and communicating engineering tolerances via a symbolic language on engineering drawings and computer-generated 3D models that describes a physical object's nominal geometry and the permissible variation thereof. GD&T is used to define the nominal (theoretically perfect) geometry of parts and assemblies, the allowable variation in size, form, orientation, and location of individual features, and how features may vary in relation to one another such that a component is considered satisfactory for its intended use. Dimensional specifications define the nominal, as-modeled or as-intended geometry, while tolerance specifications define the allowable physical variation of individual features of a part or assembly.
There are several standards available worldwide that describe the symbols and define the rules used in GD&T. One such standard is American Society of Mechanical Engineers (ASME) Y14.5. This article is based on that standard. Other standards, such as those from the International Organization for Standardization (ISO) describe a different system which has very different interpretation rules (see GPS&V). The Y14.5 standard provides a fairly complete set of rules for GD&T in one document. The ISO standards, in comparison, typically only address a single topic at a time. There are separate standards that provide the details for each of the major symbols and topics below (e.g. position, flatness, profile, etc.). BS 8888 provides a self-contained document taking into account a lot of GPS&V standards.
Origin
The origin of GD&T is credited to Stanley Parker, who developed the concept of "true position". While little is known about Parker's life, it is known that he worked at the Royal Torpedo Factory in Alexandria, West Dunbartonshire, Scotland. His work increased production of naval weapons by new contractors.
In 1940, Parker published Notes on Design and Inspection of Mass Production Engineer |
https://en.wikipedia.org/wiki/Exponential%20sum | In mathematics, an exponential sum may be a finite Fourier series (i.e. a trigonometric polynomial), or other finite sum formed using the exponential function, usually expressed by means of the function e(x) = exp(2πix).
Therefore, a typical exponential sum may take the form Σ e(x_n), summed over a finite sequence of real numbers x_n.
Formulation
If we allow some real coefficients a_n, to get the form Σ a_n e(x_n),
it is the same as allowing exponents that are complex numbers. Both forms are certainly useful in applications. A large part of twentieth century analytic number theory was devoted to finding good estimates for these sums, a trend started by basic work of Hermann Weyl in diophantine approximation.
Estimates
The main thrust of the subject is that a sum S = e(x_1) + e(x_2) + ... + e(x_N)
is trivially estimated by the number N of terms. That is, the absolute value satisfies |S| ≤ N by the triangle inequality, since each summand has absolute value 1. In applications one would like to do better. That involves proving that some cancellation takes place, or in other words that this sum of complex numbers on the unit circle is not a sum of numbers all with the same argument. The best that is reasonable to hope for is an estimate of the form S = O(√N),
which signifies, up to the implied constant in the big O notation, that the sum resembles a random walk in two dimensions.
Such an estimate can be considered ideal; it is unattainable in many of the major problems, and estimates of the form S = o(N)
have to be used, where the o(N) function represents only a small saving on the trivial estimate. A typical 'small saving' may be a factor of log(N), for example. Even such a minor-seeming result in the right direction has to be referred all the way back to the structure of the initial sequence xn, to show a degree of randomness. The techniques involved are ingenious and subtle.
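The cancellation being described can be seen numerically. The sketch below uses the assumed example sequence x_n = √2·n² (any equidistributed sequence behaves similarly); it only illustrates the phenomenon and proves nothing.

```python
import cmath
import math

def exponential_sum(xs):
    """S = sum over the sequence of e(x) = exp(2*pi*i*x)."""
    return sum(cmath.exp(2j * math.pi * x) for x in xs)

N = 1000
# Quadratic example sequence x_n = sqrt(2) * n^2 (an assumed choice).
S = exponential_sum(math.sqrt(2) * n * n for n in range(1, N + 1))

# |S| is bounded by the trivial estimate N, and in practice sits far
# below it, near the random-walk scale sqrt(N).
print(f"|S| = {abs(S):.1f}, trivial bound N = {N}, sqrt(N) = {math.sqrt(N):.1f}")
```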
A variant of 'Weyl differencing' investigated by Weyl involving a generating exponential sum
was previously studied by Weyl himself, who developed a method to express the sum as the value , where 'G' can be defined via a |
https://en.wikipedia.org/wiki/PETSCII | PETSCII (PET Standard Code of Information Interchange), also known as CBM ASCII, is the character set used in Commodore Business Machines' 8-bit home computers, starting with the PET from 1977 and including the CBM-II, VIC-20, Commodore 64, Commodore 16, Commodore 116, Plus/4, and Commodore 128.
History
The character set was largely designed by Leonard Tramiel (the son of Commodore CEO Jack Tramiel) and PET designer Chuck Peddle. The graphic characters of PETSCII were one of the extensions Commodore specified for Commodore BASIC when laying out desired changes to Microsoft's existing 6502 BASIC to Microsoft's Ric Weiland in 1977. The VIC-20 used the same pixel-for-pixel font as the PET, although the characters appeared wider due to the VIC's 22-column screen. The Commodore 64, however, used a slightly re-designed, heavy upper-case font, essentially a thicker version of the PET's, in order to avoid color artifacts created by the machine's higher resolution screen. The C64's lowercase characters are identical to the lowercase characters in the Atari 800's system font (released several years earlier).
Peddle claims the inclusion of card suit symbols was spurred by the demand that it should be easy to write card games on the PET (as part of the specification list he received).
Specifications
"Unshifted" PETSCII is based on the 1963 version of ASCII (rather than the 1967 version, which most if not all other computer character sets based on ASCII use). It has only uppercase letters, an up-arrow instead of caret at $5E and a left-arrow instead of an underscore at $5F, and in the VIC-20 and C64 version, a British pound sign instead of the backslash at $5C. Other characters added in ASCII-1967: lowercase letters, the grave accent, curly braces, vertical bar, and tildedo not exist in PETSCII. Codes $60–$7F and $A0–$BF are allotted to CBM-specific block graphics characters (horizontal and vertical lines, hatches, shades, triangles, circles and card suits).
PETSCI |
https://en.wikipedia.org/wiki/Synaptic%20%28software%29 | Synaptic is a GTK-based graphical user interface for the APT package manager used by the Debian Linux distribution and its derivatives. Synaptic is usually used on systems based on deb packages but can also be used on systems based on RPM packages. It can be used to install, remove and upgrade software packages and to add repositories.
Features
Install, remove, upgrade and downgrade single and multiple packages
System-wide upgrade
Package search utility
Manage package repositories
Find packages by name, description and several other attributes
Select packages by status, section, name or a custom filter
Sort packages by name, status, size or version
Browse available online documentation related to a package
Download the latest changelog of a package
Lock packages to the current version
Force the installation of a specific package version
Undo/Redo of selections
Built-in terminal emulator for the package manager
Allows creation of download scripts (see Usage for more details)
It also has the following features:
Configure packages through the debconf system
Xapian-based fast search
Get screenshots from screenshots.debian.net
Usage
The package manager enables the user to install, to upgrade or to remove software packages. To install or remove a package a user must search or navigate to the package, then mark it for installation or removal. Changes are not applied instantly; the user must first mark all changes and then apply them.
History
Synaptic development was funded by the Brazilian company Conectiva, which asked Alfredo Kojima, then an employee, to write a graphical front-end for APT, continuing the work initiated with the creation of the APT RPM back-end, apt-rpm.
See also
Aptitude (software), an ncurses interface for APT
Debian
Kali Linux
Tails
FreeBSD
OpenBSD
NetBSD
GNOME Software
PackageKit
References
External links
Synaptic in help.ubuntu.com, including the improvements
Synaptic GitHub page
Dpkg
Free package managemen |
https://en.wikipedia.org/wiki/Lattice%20%28order%29 | A lattice is an abstract structure studied in the mathematical subdisciplines of order theory and abstract algebra. It consists of a partially ordered set in which every pair of elements has a unique supremum (also called a least upper bound or join) and a unique infimum (also called a greatest lower bound or meet). An example is given by the power set of a set, partially ordered by inclusion, for which the supremum is the union and the infimum is the intersection. Another example is given by the natural numbers, partially ordered by divisibility, for which the supremum is the least common multiple and the infimum is the greatest common divisor.
Lattices can also be characterized as algebraic structures satisfying certain axiomatic identities. Since the two definitions are equivalent, lattice theory draws on both order theory and universal algebra. Semilattices include lattices, which in turn include Heyting and Boolean algebras. These lattice-like structures all admit order-theoretic as well as algebraic descriptions.
The sub-field of abstract algebra that studies lattices is called lattice theory.
Definition
A lattice can be defined either order-theoretically as a partially ordered set, or as an algebraic structure.
As partially ordered set
A partially ordered set (poset) is called a lattice if it is both a join- and a meet-semilattice, i.e. each two-element subset {a, b} has a join (i.e. least upper bound, denoted by a ∨ b) and dually a meet (i.e. greatest lower bound, denoted by a ∧ b). This definition makes ∨ and ∧ binary operations. Both operations are monotone with respect to the given order: a1 ≤ a2 and b1 ≤ b2 implies that a1 ∨ b1 ≤ a2 ∨ b2 and a1 ∧ b1 ≤ a2 ∧ b2.
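Taking the natural numbers ordered by divisibility, as in the introduction, the two operations become the lcm (join) and gcd (meet). A small illustrative sketch (function names invented for the example) that checks the monotonicity property:

```python
from math import gcd

def join(a, b):
    """Least upper bound under divisibility: the lcm."""
    return a * b // gcd(a, b)

def meet(a, b):
    """Greatest lower bound under divisibility: the gcd."""
    return gcd(a, b)

def leq(a, b):
    """The partial order itself: a <= b iff a divides b."""
    return b % a == 0

# Monotonicity as stated above: from a1 <= a2 and b1 <= b2 it follows
# that the joins and the meets compare the same way.
a1, a2, b1, b2 = 2, 4, 3, 9
assert leq(a1, a2) and leq(b1, b2)
assert leq(join(a1, b1), join(a2, b2))   # lcm(2,3)=6 divides lcm(4,9)=36
assert leq(meet(a1, b1), meet(a2, b2))   # gcd(2,3)=1 divides gcd(4,9)=1
```

The absorption laws join(a, meet(a, b)) == a and meet(a, join(a, b)) == a hold as well, which is what makes the algebraic and order-theoretic definitions equivalent.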
It follows by an induction argument that every non-empty finite subset of a lattice has a least upper bound and a greatest lower bound. With additional assumptions, further conclusions may be possible; see Completeness (order theory) for more discussion of this subject. That article also discusses how one may rephrase the above definition in terms of the |
https://en.wikipedia.org/wiki/Volumetric%20efficiency | Volumetric efficiency (VE) in internal combustion engine engineering is defined as the ratio of the equivalent volume of the fresh air drawn into the cylinder during the intake stroke (if the gases were at the reference condition for density) to the volume of the cylinder itself. The term is also used in other engineering contexts, such as hydraulic pumps and electronic components.
Internal combustion engines
Volumetric efficiency in internal combustion engine design refers to the efficiency with which the engine can move the charge of fresh air into and out of the cylinders. It also denotes the ratio of equivalent air volume drawn into the cylinder to the cylinder's swept volume. This equivalent volume is commonly inserted into a mass estimation equation based upon the ideal gas law. When VE is multiplied by the cylinder volume, an accurate estimate of cylinder air mass (charge) can be made for use in determining the required fuel delivery and spark timing for the engine.
The flow restrictions in the intake and exhaust systems create a reduction in the inlet flow which reduces the total mass delivery to the cylinder. Under some conditions, ram tuning may either increase or decrease the pumping efficiency of the engine. This happens when a favorable alignment of the pressure wave in the inlet (or exhaust) plumbing improves the flow through the valve. Increasing the pressure differential across the inlet valve typically increases VE throughout the naturally aspirated range. Adding intake manifold boost pressure from a supercharger or turbocharger can increase the VE, but the final calculation for cylinder airmass takes most of this benefit into account with the pressure term in n=PV/RT which is taken from the intake manifold pressure.
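As an illustrative sketch of the mass estimate described above — all names and constant values here are assumptions for the example, not from the article — the trapped charge follows from n = PV/RT scaled by VE and the molar mass of air:

```python
R = 8.314          # J/(mol*K), universal gas constant
M_AIR = 0.028964   # kg/mol, molar mass of dry air (assumed value)

def cylinder_air_mass(ve, p_manifold, v_cylinder, t_intake):
    """Estimate trapped air mass (kg) for one cylinder.

    ve          volumetric efficiency (e.g. 0.85 for 85%)
    p_manifold  intake manifold absolute pressure, Pa
    v_cylinder  swept cylinder volume, m^3
    t_intake    intake charge temperature, K
    """
    n = p_manifold * v_cylinder / (R * t_intake)  # moles of air via n = PV/RT
    return ve * n * M_AIR

# 0.5 L cylinder at ~1 atm manifold pressure, 300 K, 85% VE:
mass = cylinder_air_mass(0.85, 101325, 0.0005, 300.0)  # roughly half a gram of air
```

An engine controller would use such a mass figure to size the fuel pulse for a target air–fuel ratio.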
Many high performance cars use carefully arranged air intakes and tuned exhaust systems that use pressure waves to push air into and out of the cylinders, making use of the resonance of the system. Two-stroke engines are very sensi |
https://en.wikipedia.org/wiki/Fundamental%20domain | Given a topological space and a group acting on it, the images of a single point under the group action form an orbit of the action. A fundamental domain or fundamental region is a subset of the space which contains exactly one point from each of these orbits. It serves as a geometric realization for the abstract set of representatives of the orbits.
There are many ways to choose a fundamental domain. Typically, a fundamental domain is required to be a connected subset with some restrictions on its boundary, for example, smooth or polyhedral. The images of a chosen fundamental domain under the group action then tile the space. One general construction of fundamental domains uses Voronoi cells.
Hints at a general definition
Given an action of a group G on a topological space X by homeomorphisms, a fundamental domain for this action is a set D of representatives for the orbits. It is usually required to be a reasonably nice set topologically, in one of several precisely defined ways. One typical condition is that D is almost an open set, in the sense that D is the symmetric difference of an open set in X with a set of measure zero, for a certain (quasi)invariant measure on X. A fundamental domain always contains a free regular set U, an open set moved around by G into disjoint copies, and nearly as good as D in representing the orbits. Frequently D is required to be a complete set of coset representatives with some repetitions, but the repeated part has measure zero. This is a typical situation in ergodic theory. If a fundamental domain is used to calculate an integral on X/G, sets of measure zero do not matter.
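As a concrete instance of choosing orbit representatives: for the lattice Z^n acting on R^n by translations, reduction modulo 1 maps each point to its unique representative in the half-open cube [0, 1)^n. A minimal sketch (function names invented for illustration):

```python
def representative(point):
    """Orbit representative in [0, 1)^n for Z^n acting on R^n by translation."""
    # Python's % returns a value in [0, 1) even for negative inputs.
    return tuple(x % 1.0 for x in point)

def same_orbit(p, q):
    """Two points lie in one orbit exactly when they differ by an integer vector."""
    return all(abs((a - b) - round(a - b)) < 1e-12 for a, b in zip(p, q))

p = (2.25, -0.75)
r = representative(p)                      # → (0.25, 0.25)
assert same_orbit(p, r)                    # r represents the orbit of p
assert all(0.0 <= x < 1.0 for x in r)      # and lies in the fundamental domain
```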
For example, when X is Euclidean space Rn of dimension n, and G is the lattice Zn acting on it by translations, the quotient X/G is the n-dimensional torus. A fundamental domain D here can be taken to be [0,1)n, which differs from the open set (0,1)n by a set of measure zero, or the closed unit cube [0,1]n, whose boundary consists of the points whose orbi |
https://en.wikipedia.org/wiki/List%20of%20curves%20topics | This is an alphabetical index of articles related to curves used in mathematics.
Acnode
Algebraic curve
Arc
Asymptote
Asymptotic curve
Barbier's theorem
Bézier curve
Bézout's theorem
Birch and Swinnerton-Dyer conjecture
Bitangent
Bitangents of a quartic
Cartesian coordinate system
Caustic
Cesàro equation
Chord (geometry)
Cissoid
Circumference
Closed timelike curve
Concavity
Conchoid (mathematics)
Confocal
Contact (mathematics)
Contour line
Crunode
Cubic Hermite curve
Curvature
Curve orientation
Curve fitting
Curve-fitting compaction
Curve of constant width
Curve of pursuit
Curves in differential geometry
Cusp
Cyclogon
De Boor algorithm
Differential geometry of curves
Eccentricity (mathematics)
Elliptic curve cryptography
Envelope (mathematics)
Fenchel's theorem
Genus (mathematics)
Geodesic
Geometric genus
Great-circle distance
Harmonograph
Hedgehog (curve)
Hilbert's sixteenth problem
Hyperelliptic curve cryptography
Inflection point
Inscribed square problem
Intercept, y-intercept, x-intercept
Intersection number
Intrinsic equation
Isoperimetric inequality
Jordan curve
Jordan curve theorem
Knot
Limit cycle
Linking coefficient
List of circle topics
Loop (knot)
M-curve
Mannheim curve
Meander (mathematics)
Mordell conjecture
Natural representation
Opisometer
Orbital elements
Osculating circle
Osculating plane
Osgood curve
Parallel (curve)
Parallel transport
Parametric curve
Bézier curve
Spline (mathematics)
Hermite spline
Beta spline
B-spline
Higher-order spline
NURBS
Perimeter
Pi
Plane curve
Pochhammer contour
Polar coordinate system
Prime geodesic
Projective line
Ray
Regular parametric representation
Reuleaux triangle
Ribaucour curve
Riemann–Hurwitz formula
Riemann–Roch theorem
Riemann surface
Road curve
Sato–Tate conjecture
Secant
Singular solution
Sinuosity
Slope
Space curve
Spinode
Square wheel
Subtangent
Tacnode
Tangent
Tangent space
Tangential angle
Tor |
https://en.wikipedia.org/wiki/Desktop%20communication%20protocol | Desktop Communication Protocol (DCOP) was an inter-process communication (IPC) daemon by KDE used in K Desktop Environment 3. The design goal for the protocol was to allow applications to interoperate, and share complex tasks. Essentially, DCOP was a ‘remote control’ system, which allowed applications or scripts to enlist the help of other applications. DCOP is built on top of the X11 Inter-Client Exchange protocol.
DCOP continues to be used by the K Desktop Environment 3-fork Trinity Desktop Environment. DCOP was replaced by D-Bus, a message bus system heavily influenced by the DCOP and standardized by freedesktop.org, in KDE Software Compilation 4 and later.
DCOP model
DCOP implements the client–server model, where each application using DCOP is a client and communicates with other clients through the DCOP server. The DCOP server functions as a traffic director, dispatching messages and calls to their proper destinations. All clients are peers of each other.
Two types of actions are possible with DCOP: "send and forget" messages, which do not block, and "calls," which block waiting for some data to be returned.
Any data that will be sent is serialized (also referred to as marshalling in CORBA speak) using the built-in QDataStream operators available in all of the Qt classes. There is also a simple IDL-like compiler available (dcopidl and dcopidl2cpp) that generates stubs and skeletons. Using the dcopidl compiler has the additional benefit of type safety.
There is a command-line tool called ‘dcop’ (note the lower-case letters) that can be used for communication with the applications from the shell. ‘kdcop’ is a GUI tool to explore the interfaces of an application.
See also
KDELibs – predecessor of KDE Platform 4
External links
DCOP Documentation
Inter-process communication
KDE Platform
Software that uses Qt |
https://en.wikipedia.org/wiki/Desiccation | Desiccation () is the state of extreme dryness, or the process of extreme drying. A desiccant is a hygroscopic (attracts and holds water) substance that induces or sustains such a state in its local vicinity in a moderately sealed container.
Industry
Desiccation is widely employed in the oil and gas industry. Oil and gas are obtained in a hydrated state, but the water content leads to corrosion or is incompatible with downstream processing. Removal of water is achieved by cryogenic condensation, absorption into glycols, and adsorption onto desiccants such as silica gel.
Laboratory
A desiccator is a heavy glass or plastic container, now somewhat antiquated, used in practical chemistry for drying or keeping small amounts of materials very dry. The material is placed on a shelf, and a drying agent or desiccant, such as dry silica gel or anhydrous sodium hydroxide, is placed below the shelf.
Often some sort of humidity indicator is included in the desiccator to show, by color changes, the level of humidity. These indicators are in the form of indicator plugs or indicator cards. The active chemical is cobalt chloride (CoCl2). Anhydrous cobalt chloride is blue. When it bonds with two water molecules, (CoCl2•2H2O), it turns purple. Further hydration results in the pink hexaaquacobalt(II) chloride complex [Co(H2O)6]2+.
Biology and ecology
In biology and ecology, desiccation refers to the drying out of a living organism, such as when aquatic animals are taken out of water, slugs are exposed to salt, or when plants are exposed to sunlight or drought. Ecologists frequently study and assess various organisms' susceptibility to desiccation. For example, in one study the investigators found that Caenorhabditis elegans dauer is a true anhydrobiote that can withstand extreme desiccation and that the basis of this ability is founded in the metabolism of trehalose.
DNA damage and repair
Several bacterial species have been shown to accumulate DNA damages upon desicc |
https://en.wikipedia.org/wiki/Game%20Developers%20Conference | The Game Developers Conference (GDC) is an annual conference for video game developers. The event includes an expo, networking events, and awards shows like the Game Developers Choice Awards and Independent Games Festival, and a variety of tutorials, lectures, and roundtables by industry professionals on game-related topics covering programming, design, audio, production, business and management, and visual arts.
History
Originally called the Computer Game Developers Conference, the first conference was organized in April 1988 by Chris Crawford in his San Jose, California-area living room. About twenty-seven designers attended, including Don Daglow, Brenda Laurel, Brian Moriarty, Gordon Walton, Tim Brengle, Cliff Johnson, Dave Menconi, and Carol and Ivan Manley. The second conference, held that same year at a Holiday Inn at Milpitas, attracted about 125 developers. Early conference directors included Brenda Laurel, Tim Brengle, Sara Reeder, Dave Menconi, Jeff Johannigman, Stephen Friedman, Chris Crawford, and Stephanie Barrett. Later directors include John Powers, Nicky Robinson, Anne Westfall, Susan Lee-Merrow, and Ernest W. Adams. In the early years the conference changed venue each year to accommodate its increases in size. Attendance in this period grew from 525 to 2,387. By 1994 the CGDC could afford to sponsor the creation of the Computer Game Developers Association with Adams as its founding director. Miller Freeman, Inc. took on the running of the conference in 1996, nearly doubling attendance to 4,000 that year. In 2005, the GDC moved to the new Moscone Center West, in the heart of San Francisco's SOMA district, and reported over 12,000 attendees. The GDC returned to San Jose in 2006, reporting over 12,500 attendees, and moved to San Francisco in 2007 – where the organizers expect it will stay for the foreseeable future. Attendance figures continued to rise in following years, with 18,000 attendees in the 2008 event. The 2009 Game Developers Conference wa |
https://en.wikipedia.org/wiki/Land%20rehabilitation | Land rehabilitation as a part of environmental remediation is the process of returning the land in a given area to some degree of its former state, after some process (industry, natural disasters, etc.) has resulted in its damage. Many projects and developments will result in the land becoming degraded, for example mining, farming and forestry.
Mine rehabilitation
Modern mine rehabilitation aims to minimize and mitigate the environmental effects of modern mining, which may in the case of open pit mining involve movement of significant volumes of rock. Rehabilitation management is an ongoing process, often resulting in open pit mines being backfilled.
After mining finishes, the mine area must undergo rehabilitation.
Waste dumps are contoured to flatten them out, to further stabilize them against erosion.
If the ore contains sulfides it is usually covered with a layer of clay to prevent access of rain and oxygen from the air, which can oxidize the sulfides to produce sulfuric acid.
Landfills are covered with topsoil, and vegetation is planted to help consolidate the material.
Dumps are usually fenced off to prevent livestock denuding them of vegetation.
The open pit is then surrounded with a fence, to prevent access, and it generally eventually fills up with groundwater.
Tailings dams are left to evaporate, then covered with waste rock, clay if need be, and soil, which is planted to stabilize it.
For underground mines, rehabilitation is not always a significant problem or cost. This is because of the higher grade of the ore and lower volumes of waste rock and tailings. In some situations, stopes are backfilled with concrete slurry using waste, so that minimal waste is left at surface.
The removal of plant and infrastructure is not always part of a rehabilitation programme, as many old mine plants have cultural heritage and cultural value. Often in gold mines, rehabilitation is performed by scavenger operations which treat the soil within the plant area for |
https://en.wikipedia.org/wiki/Spooling | In computing, spooling is a specialized form of multi-programming for the purpose of copying data between different devices. In contemporary systems, it is usually used for mediating between a computer application and a slow peripheral, such as a printer. Spooling allows programs to "hand off" work to be done by the peripheral and then proceed to other tasks, or to not begin until input has been transcribed. A dedicated program, the spooler, maintains an orderly sequence of jobs for the peripheral and feeds it data at its own rate. Conversely, for slow input peripherals, such as a card reader, a spooler can maintain a sequence of computational jobs waiting for data, starting each job when all of the relevant input is available; see batch processing. The spool itself refers to the sequence of jobs, or the storage area where they are held. In many cases, the spooler is able to drive devices at their full rated speed with minimal impact on other processing.
Spooling is a combination of buffering and queueing.
Print spooling
Nowadays, the most common use of spooling is printing: documents formatted for printing are stored in a queue at the speed of the computer, then retrieved and printed at the speed of the printer. Multiple processes can write documents to the spool without waiting, and can then perform other tasks, while the "spooler" process operates the printer.
For example, when a large organization prepares payroll cheques, the computation takes only a few minutes or even seconds, but the printing process might take hours. If the payroll program printed cheques directly, it would be unable to proceed to other computations until all the cheques were printed. Similarly, before spooling was added to PC operating systems, word processors were unable to do anything else, including interact with the user, while printing.
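The hand-off described above can be sketched with a thread-safe queue: producers enqueue jobs at full speed and move on, while a single spooler thread feeds the slow device in order. A minimal illustrative sketch (all names invented):

```python
import queue
import threading
import time

spool = queue.Queue()   # the spool: an ordered sequence of pending jobs
printed = []            # what the slow "printer" has actually produced

def spooler():
    """Feed jobs to the slow peripheral one at a time, in arrival order."""
    while True:
        job = spool.get()
        if job is None:          # sentinel: no more jobs
            break
        time.sleep(0.01)         # the peripheral runs at its own (slow) rate
        printed.append(job)
        spool.task_done()

worker = threading.Thread(target=spooler)
worker.start()

# Producers "hand off" work instantly and could proceed to other tasks.
for doc in ["payroll-1", "payroll-2", "report"]:
    spool.put(doc)

spool.put(None)                  # signal shutdown
worker.join()
```

Because the queue is FIFO and there is a single consumer, jobs come out in the order they were submitted — the "orderly sequence" the spooler maintains.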
Spooler or print management software often includes a variety of related features, such as allowing priorities to be assigned to print jobs |
https://en.wikipedia.org/wiki/Whirlpool | A whirlpool is a body of rotating water produced by opposing currents or a current running into an obstacle. Small whirlpools form when a bath or a sink is draining. More powerful ones formed in seas or oceans may be called maelstroms ( ). Vortex is the proper term for a whirlpool that has a downdraft.
In narrow ocean straits with fast flowing water, whirlpools are often caused by tides. Many stories tell of ships being sucked into a maelstrom, although only smaller craft are actually in danger. Smaller whirlpools appear at river rapids and can be observed downstream of artificial structures such as weirs and dams. Large cataracts, such as Niagara Falls, produce strong whirlpools.
Notable whirlpools
Saltstraumen
Saltstraumen is a narrow strait located close to the Arctic Circle, south-east of the city of Bodø, Norway.
It has one of the strongest tidal currents in the world. Whirlpools up to in diameter and in depth are formed when the current is at its strongest.
Moskstraumen
Moskstraumen or Moske-stroom is an unusual system of whirlpools in the open seas in the Lofoten Islands off the Norwegian coast. It is the second strongest whirlpool in the world with flow currents reaching speeds as high as . This is supposedly the whirlpool depicted in Olaus Magnus's map, labeled as "Horrenda Caribdis" (Charybdis).
The Moskstraumen is formed by the combination of powerful semi-diurnal tides and the unusual shape of the seabed, with a shallow ridge between the Moskenesøya and Værøy islands which amplifies and whirls the tidal currents.
The fictional depictions of the Moskstraumen by Edgar Allan Poe, Jules Verne, and Cixin Liu describe it as a gigantic circular vortex that reaches the bottom of the ocean, when in fact it is a set of currents and crosscurrents with a rate of . Poe described this phenomenon in his short story "A Descent into the Maelström", which in 1841 was the first to use the word maelstrom in the English language; in this story related to the Lof |
https://en.wikipedia.org/wiki/SECIOP | In distributed computing, SECIOP (SECure Inter-ORB Protocol) is a protocol for secure inter-ORB communication.
References
Inter-process communication |
https://en.wikipedia.org/wiki/1942%20%28video%20game%29 | 1942 is a vertically scrolling shooter by Capcom that was released as an arcade video game in 1984. Designed by Yoshiki Okamoto, it was the first game in the 194X series, and was followed by 1943: The Battle of Midway.
1942 is set in the Pacific Theater of World War II, and is loosely based on the Battle of Midway. Despite the game being created by Japanese developers, the goal is to reach Tokyo and destroy the Japanese air fleet; this was because it was the first Capcom game designed with Western markets in mind. It went on to be a commercial success in arcades, becoming Japan's fifth highest-grossing table arcade game of 1986 and one of the top five highest-grossing arcade conversion kits that year in the United States. It was ported to the Nintendo Entertainment System, selling over copies worldwide, along with other home systems.
Gameplay
The player pilots a Lockheed P-38 Lightning dubbed the "Super Ace". The player has to shoot down enemy planes; to avoid enemy fire, the player can perform a roll or vertical loop. During the game, the player may collect a series of power-ups, one of them allowing the plane to be escorted by two other smaller fighters in a Tip Tow formation. Enemies include Kawasaki Ki-61s, Mitsubishi A6M Zeros, and Kawasaki Ki-48s. The boss plane is a Nakajima G10N.
The game has "a special roll button that allows players to avoid dangerous situations by temporarily looping out of" the playfield. In addition to the standard high score, it also has a separate percentage high score, recording the best ratio of enemy fighters to enemies shot down.
Development
The game was designed by Yoshiki Okamoto. The game's main goal was to be easily accessible for players. This is why they decided to use a World War II theme. 1942 was also the first Capcom game designed with Western markets in mind. That was why they decided to have the player pilot an American P-38 fighter plane, to appeal to the American market. The game is loosely based on the Battle of |
https://en.wikipedia.org/wiki/Logarithmic%20derivative | In mathematics, specifically in calculus and complex analysis, the logarithmic derivative of a function f is defined by the formula
where is the derivative of f. Intuitively, this is the infinitesimal relative change in f; that is, the infinitesimal absolute change in f, namely scaled by the current value of f.
When f is a function f(x) of a real variable x, and takes real, strictly positive values, this is equal to the derivative of ln(f), or the natural logarithm of f. This follows directly from the chain rule:
Basic properties
Many properties of the real logarithm also apply to the logarithmic derivative, even when the function does not take values in the positive reals. For example, since the logarithm of a product is the sum of the logarithms of the factors, we have (ln(uv))′ = (ln u + ln v)′ = (ln u)′ + (ln v)′.
So for positive-real-valued functions, the logarithmic derivative of a product is the sum of the logarithmic derivatives of the factors. But we can also use the Leibniz law for the derivative of a product to get (uv)′/(uv) = (u′v + uv′)/(uv) = u′/u + v′/v.
Thus, it is true for any function that the logarithmic derivative of a product is the sum of the logarithmic derivatives of the factors (when they are defined).
A corollary to this is that the logarithmic derivative of the reciprocal of a function is the negation of the logarithmic derivative of the function: (1/u)′/(1/u) = −u′/u,
just as the logarithm of the reciprocal of a positive real number is the negation of the logarithm of the number.
More generally, the logarithmic derivative of a quotient is the difference of the logarithmic derivatives of the dividend and the divisor: (u/v)′/(u/v) = u′/u − v′/v,
just as the logarithm of a quotient is the difference of the logarithms of the dividend and the divisor.
Generalising in another direction, the logarithmic derivative of a power (with constant real exponent k) is the product of the exponent and the logarithmic derivative of the base: (u^k)′/(u^k) = k·u′/u,
just as the logarithm of a power is the product of the exponent and the logarithm of the base.
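These identities are easy to check numerically with a finite-difference approximation of the logarithmic derivative — an illustrative sketch, with the test point and step size chosen arbitrarily:

```python
import math

def logderiv(f, x, h=1e-6):
    """Central-difference approximation of the logarithmic derivative f'(x)/f(x)."""
    return (f(x + h) - f(x - h)) / (2 * h * f(x))

f = lambda x: x ** 2
g = lambda x: math.exp(x)
x0 = 1.7

# Product rule: L(fg) = L(f) + L(g)
assert abs(logderiv(lambda t: f(t) * g(t), x0)
           - (logderiv(f, x0) + logderiv(g, x0))) < 1e-5

# Quotient rule: L(f/g) = L(f) - L(g)
assert abs(logderiv(lambda t: f(t) / g(t), x0)
           - (logderiv(f, x0) - logderiv(g, x0))) < 1e-5

# Power rule (constant exponent k): L(f**k) = k * L(f)
k = 3
assert abs(logderiv(lambda t: f(t) ** k, x0) - k * logderiv(f, x0)) < 1e-5
```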
In summary, both derivatives and logarithms have a |
https://en.wikipedia.org/wiki/Constantin%20Carath%C3%A9odory | Constantin Carathéodory (; 13 September 1873 – 2 February 1950) was a Greek mathematician who spent most of his professional career in Germany. He made significant contributions to real and complex analysis, the calculus of variations, and measure theory. He also created an axiomatic formulation of thermodynamics. Carathéodory is considered one of the greatest mathematicians of his era and the most renowned Greek mathematician since antiquity.
Origins
Constantin Carathéodory was born in 1873 in Berlin to Greek parents and grew up in Brussels. His father, Stephanos, a lawyer, served as the Ottoman ambassador to Belgium, St. Petersburg and Berlin. His mother, Despina, née Petrokokkinos, was from the island of Chios. The Carathéodory family, originally from Bosnochori or Vyssa, was well established and respected in Constantinople, and its members held many important governmental positions.
The Carathéodory family spent 1874–75 in Constantinople, where Constantin's paternal grandfather lived, while his father Stephanos was on leave. Then in 1875 they went to Brussels when Stephanos was appointed there as Ottoman Ambassador. In Brussels, Constantin's younger sister Julia was born. The year 1879 was a tragic one for the family since Constantin's paternal grandfather died in that year, but much more tragically, Constantin's mother Despina died of pneumonia in Cannes. Constantin's maternal grandmother took on the task of bringing up Constantin and Julia in his father's home in Belgium. They employed a German maid who taught the children to speak German. Constantin was already bilingual in French and Greek by this time.
Constantin began his formal schooling at a private school in Vanderstock in 1881. He left after two years and then spent time with his father on a visit to Berlin, and also spent the winters of 1883–84 and 1884–85 on the Italian Riviera. Back in Brussels in 1885 he attended a grammar school for a year where he first began to become interested in mathematics. In 18 |
https://en.wikipedia.org/wiki/Doxygen | Doxygen ( ) is a documentation generator and static analysis tool for software source trees. When used as a documentation generator, Doxygen extracts information from specially-formatted comments within the code. When used for analysis, Doxygen uses its parse tree to generate diagrams and charts of the code structure. Doxygen can cross reference documentation and code, so that the reader of a document can easily refer to the actual code.
Doxygen is free software, released under the terms of the GNU General Public License version 2 (GPLv2).
Design
Like Javadoc, Doxygen extracts documentation from source file comments. In addition to the Javadoc syntax, Doxygen supports the documentation tags used in the Qt toolkit and can generate output in HyperText Markup Language (HTML) as well as in Microsoft Compiled HTML Help (CHM), Rich Text Format (RTF), Portable Document Format (PDF), LaTeX, PostScript or man pages.
Uses
Programming languages supported by Doxygen include C, C++, C#, D, Fortran, IDL, Java, Objective-C, Perl, PHP, Python, and VHDL. Other languages can be supported with additional code.
Doxygen runs on most Unix-like systems, macOS, and Windows.
The first version of Doxygen borrowed code from an early version of DOC++, developed by Roland Wunderling and Malte Zöckler at Zuse Institute Berlin. Later, the Doxygen code was rewritten by Dimitri van Heesch.
Doxygen has built-in support to generate inheritance diagrams for C++ classes. For more advanced diagrams and graphs, Doxygen can use the "dot" tool from Graphviz.
Example code
The generic syntax of documentation comments is to start a comment with an extra asterisk after the leading comment delimiter '/*':
/**
<A short one line description>
<Longer description>
<May span multiple lines or paragraphs as needed>
@param Description of method's or function's input parameter
@param ...
@return Description of the return value
*/
Many programmers like to mark the start of each line with space-asterisk |
https://en.wikipedia.org/wiki/Liquid%20crystal%20on%20silicon | Liquid crystal on silicon (LCoS or LCOS) is a miniaturized reflective active-matrix liquid-crystal display or "microdisplay" using a liquid crystal layer on top of a silicon backplane. It is also referred to as a spatial light modulator. LCoS was initially developed for projection televisions but is now used for wavelength selective switching, structured illumination, near-eye displays and optical pulse shaping. By way of comparison, some LCD projectors use transmissive LCD, allowing light to pass through the liquid crystal.
In an LCoS display, a complementary metal–oxide–semiconductor (CMOS) chip controls the voltage on square reflective aluminium electrodes buried just below the chip surface, each controlling one pixel. For example, a chip with XGA resolution will have 1024×768 plates, each with an independently addressable voltage. Typical cells are about 1–3 centimeters square and about 2 mm thick, with pixel pitch as small as 2.79 μm. A common voltage for all the pixels is supplied by a transparent conductive layer made of indium tin oxide on the cover glass.
Displays
History
The history of LCoS projectors dates back to the late 1980s, when the technology was first developed. At the time, the primary use of LCoS projectors was in the military and scientific fields due to their large and bulky size. However, in the late 1990s, companies like JVC and Hughes Electronics began developing smaller and more affordable LCoS projectors for commercial use.
The early LCoS projectors had their challenges. They suffered from a phenomenon called "image sticking," where the image would remain on the screen after it was supposed to be gone. This was due to the mirrors sticking in their positions, which resulted in ghosting on the screen. However, manufacturers continued to refine the technology, and today's LCoS projectors have largely overcome this issue.
One of the biggest milestones in the history of LCoS projectors came in 2004 when Sony introduced its SXRD (Silicon X
https://en.wikipedia.org/wiki/Pick%20operating%20system | The Pick Operating System, also known as the Pick System or simply Pick, is a demand-paged, multi-user, virtual memory, time-sharing computer operating system based around a MultiValue database. Pick is used primarily for business data processing. It is named after one of its developers, Dick Pick.
The term "Pick system" has also come to be used as the general name of all operating environments which employ this multivalued database and have some implementation of Pick/BASIC and ENGLISH/Access queries. Although Pick started on a variety of minicomputers, the system and its various implementations eventually spread to a large assortment of microcomputers, personal computers, and mainframe computers.
Overview
The Pick Operating System consists of a database, dictionary, query language, procedural language (PROC), peripheral management, multi-user management, and a compiled BASIC Programming language.
The database is a "hash-file" data management system. A hash-file system is a collection of dynamic associative arrays that are organized, linked, and controlled using associative files as a database management system. Being hash-file oriented, Pick provides efficient data access times. Originally, all data structures in Pick were hash-files (at the lowest level), meaning records are stored as associated couplets of a primary key to a set of values. Today a Pick system can also natively access host files in Windows or Unix in any format.
A Pick database is divided into one or more accounts, master dictionaries, dictionaries, files, and sub-files, each of which is a hash-table oriented file. These files contain records made up of fields, sub-fields, and sub-sub-fields. In Pick, records are called items, fields are called attributes, and sub-fields are called values or sub-values (hence the present-day label "multivalued database"). All elements are variable-length, with field and values marked off by special delimiters, so that any file, record, or fie |
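The item/attribute/value/sub-value nesting can be sketched in code. Classic Pick implementations are commonly described as using the high-ASCII marks 0xFE (attribute), 0xFD (value) and 0xFC (sub-value); those byte values, and all names below, are assumptions for illustration:

```python
# Assumed delimiter bytes, as commonly cited for classic Pick systems.
AM, VM, SVM = chr(0xFE), chr(0xFD), chr(0xFC)   # attribute / value / sub-value marks

def parse_item(item):
    """Split a Pick-style item into attributes -> values -> sub-values."""
    return [
        [value.split(SVM) for value in attribute.split(VM)]
        for attribute in item.split(AM)
    ]

# A hypothetical customer item: a name attribute, then a phone attribute
# holding two values, the second of which carries a sub-value.
item = "ACME Corp" + AM + "555-0100" + VM + "555-0199" + SVM + "ext 12"
parsed = parse_item(item)
# parsed == [[['ACME Corp']], [['555-0100'], ['555-0199', 'ext 12']]]
```

Because every level is delimited rather than fixed-width, all elements remain variable-length, which is the property the paragraph above describes.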
https://en.wikipedia.org/wiki/Cryptographie%20ind%C3%A9chiffrable | Cryptographie indéchiffrable (subtitle: basée sur de nouvelles combinaisons rationelles) is a French book on cryptography written by Émile Victor Théodore Myszkowski (a retired French colonel) and published in 1902.
The book described a cipher that Myszkowski had invented and claimed (incorrectly) was "undecipherable" (i.e. secure against unauthorised attempts to read it). It was based on a form of repeated-key transposition.
See also
Books on cryptography
Transposition cipher
Cryptography books
1902 non-fiction books |
https://en.wikipedia.org/wiki/Mark-8 | The Mark-8 is a microcomputer design from 1974, based on the Intel 8008 CPU (which was the world's first 8-bit microprocessor). The Mark-8 was designed by Jonathan Titus, a Virginia Tech graduate student in chemistry. After building the machine, Titus decided to share its design with the community and reached out to Radio-Electronics and Popular Electronics. He was turned down by Popular Electronics, but Radio-Electronics was interested and announced the Mark-8 as a 'loose kit' in the July 1974 issue of Radio-Electronics magazine.
Project kit
The Mark-8 was introduced as a 'build it yourself' project in Radio-Electronics's July 1974 cover article, offering a US$5 booklet containing circuit board layouts and DIY construction project descriptions, with Titus himself arranging for $50 circuit board sets to be made by a New Jersey company for delivery to hobbyists. Prospective Mark-8 builders had to gather the various electronic parts themselves from various sources. A couple of thousand booklets and some one hundred circuit board sets were eventually sold.
The Mark-8 was introduced in R-E as "Your Personal Minicomputer", as the word 'microcomputer' was still far from being commonly used for microprocessor-based computers. In announcing the computer kit, the editors placed the Mark-8 in the same category as the era's other 'minisize' computers. As quoted in an official Intel publication, "The Mark-8 is known as one of the first computers for the home."
Influences
Although not very commercially successful, the Mark-8 prompted the editors of Popular Electronics magazine to consider publishing a similar but more easily accessible microcomputer project, and just six months later, in January 1975, they went through with their plans, announcing the Altair 8800. According to a 1998 Virginia Tech University article, Titus' Mark-8 microcomputer now resides in the Smithsonian Institution's "Information Age" display.
See also
Microcomputer
Minicomputer
SC |
https://en.wikipedia.org/wiki/Partition%20of%20an%20interval | In mathematics, a partition of an interval [a, b] on the real line is a finite sequence x_0, x_1, ..., x_n of real numbers such that
a = x_0 < x_1 < x_2 < ... < x_n = b.
In other terms, a partition of a compact interval I is a strictly increasing sequence of numbers (belonging to the interval I itself) starting from the initial point of I and arriving at the final point of I.
Every interval of the form [x_{i−1}, x_i] is referred to as a subinterval of the partition x.
Refinement of a partition
Another partition Q of the given interval [a, b] is defined as a refinement of the partition P if Q contains all the points of P and possibly some other points as well; the partition Q is said to be "finer" than P. Given two partitions, P and Q, one can always form their common refinement, denoted P ∨ Q, which consists of all the points of P and Q, in increasing order.
Norm of a partition
The norm (or mesh) of the partition
a = x_0 < x_1 < x_2 < ... < x_n = b
is the length of the longest of these subintervals:
max{ x_i − x_{i−1} : i = 1, ..., n }.
Applications
Partitions are used in the theory of the Riemann integral, the Riemann–Stieltjes integral and the regulated integral. Specifically, as finer partitions of a given interval are considered, their mesh approaches zero and the Riemann sum based on a given partition approaches the Riemann integral.
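As an illustration of the mesh-to-zero behaviour (the function and interval here are chosen for the example, not taken from the article), a midpoint-tagged Riemann sum for f(x) = x² on [0, 1] approaches the integral 1/3 as the partition is refined:

```python
# Riemann sums over uniform partitions of [0, 1] for f(x) = x^2.
# As the mesh shrinks, the sum approaches the Riemann integral 1/3.
def riemann_sum(f, partition, tags):
    return sum(f(t) * (partition[i + 1] - partition[i])
               for i, t in enumerate(tags))

def mesh(partition):
    return max(partition[i + 1] - partition[i]
               for i in range(len(partition) - 1))

f = lambda x: x * x
for n in (10, 100, 1000):
    xs = [i / n for i in range(n + 1)]                   # uniform partition
    tags = [(xs[i] + xs[i + 1]) / 2 for i in range(n)]   # midpoint tags
    print(n, mesh(xs), riemann_sum(f, xs, tags))
```

Running this shows the sum converging toward 1/3 while the mesh falls as 1/n.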
Tagged partitions
A tagged partition is a partition of a given interval together with a finite sequence of numbers t_0, ..., t_{n−1} subject to the conditions that for each i,
x_i ≤ t_i ≤ x_{i+1}.
In other words, a tagged partition is a partition together with a distinguished point of every subinterval: its mesh is defined in the same way as for an ordinary partition. It is possible to define a partial order on the set of all tagged partitions by saying that one tagged partition is bigger than another if the bigger one is a refinement of the smaller one.
Suppose that x_0 < ... < x_n together with t_0, ..., t_{n−1} is a tagged partition of [a, b], and that y_0 < ... < y_m together with s_0, ..., s_{m−1} is another tagged partition of [a, b]. We say that the second tagged partition is a refinement of the first if for each integer i with 0 ≤ i ≤ n, there
https://en.wikipedia.org/wiki/Fuchsian%20group | In mathematics, a Fuchsian group is a discrete subgroup of PSL(2,R). The group PSL(2,R) can be regarded equivalently as a group of orientation-preserving isometries of the hyperbolic plane, or conformal transformations of the unit disc, or conformal transformations of the upper half plane, so a Fuchsian group can be regarded as a group acting on any of these spaces. There are some variations of the definition: sometimes the Fuchsian group is assumed to be finitely generated, sometimes it is allowed to be a subgroup of PGL(2,R) (so that it contains orientation-reversing elements), and sometimes it is allowed to be a Kleinian group (a discrete subgroup of PSL(2,C)) which is conjugate to a subgroup of PSL(2,R).
Fuchsian groups are used to create Fuchsian models of Riemann surfaces. In this case, the group may be called the Fuchsian group of the surface. In some sense, Fuchsian groups do for non-Euclidean geometry what crystallographic groups do for Euclidean geometry. Some Escher graphics are based on them (for the disc model of hyperbolic geometry).
General Fuchsian groups were first studied by Henri Poincaré, who was motivated by the paper of Lazarus Fuchs, and therefore named them after Fuchs.
Fuchsian groups on the upper half-plane
Let H = {z in C : Im(z) > 0} be the upper half-plane. Then H is a model of the hyperbolic plane when endowed with the metric
ds = |dz| / Im(z).
The group PSL(2,R) acts on H by linear fractional transformations (also known as Möbius transformations):
z ↦ (az + b) / (cz + d), where a, b, c, d are real numbers and ad − bc = 1.
This action is faithful, and in fact PSL(2,R) is isomorphic to the group of all orientation-preserving isometries of H.
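A quick numerical sketch of this action (the matrix entries below are an arbitrary determinant-1 example, not taken from the article) confirms that it preserves the upper half-plane, via the identity Im(γz) = Im(z)/|cz + d|²:

```python
# An element of SL(2,R) acting on the upper half-plane H by
# z -> (az + b)/(cz + d).  The matrix is an arbitrary example
# with determinant 1.
a, b, c, d = 2.0, 1.0, 1.0, 1.0
assert abs(a * d - b * c - 1.0) < 1e-12      # ad - bc = 1

def mobius(z: complex) -> complex:
    return (a * z + b) / (c * z + d)

z = 0.5 + 2.0j                                # a point in H
w = mobius(z)
print(w.imag > 0)                             # image stays in H
# Im(w) equals Im(z) / |cz + d|^2:
print(z.imag / abs(c * z + d) ** 2, w.imag)
```

The printed imaginary parts agree, which is exactly why the action restricts to H.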
A Fuchsian group Γ may be defined to be a subgroup of PSL(2,R) that acts discontinuously on H. That is,
For every z in H, the orbit Γz = {γz : γ in Γ} has no accumulation point in H.
An equivalent definition for Γ to be Fuchsian is that Γ be a discrete group, which means that:
Every sequence {γn} of elements of Γ converging to the identity in the usual topology of point-wise convergence is eventually |
https://en.wikipedia.org/wiki/Marker%20interface%20pattern | The marker interface pattern is a design pattern in computer science, used with languages that provide run-time type information about objects. It provides a means to associate metadata with a class where the language does not have explicit support for such metadata.
To use this pattern, a class implements a marker interface (also called tagging interface) which is an empty interface, and methods that interact with instances of that class test for the existence of the interface. Whereas a typical interface specifies functionality (in the form of method declarations) that an implementing class must support, a marker interface need not do so. The mere presence of such an interface indicates specific behavior on the part of the implementing class. Hybrid interfaces, which both act as markers and specify required methods, are possible but may prove confusing if improperly used.
Example
An example of the application of marker interfaces from the Java programming language is the java.io.Serializable interface:
package java.io;
public interface Serializable {
}
A class implements this interface to indicate that its non-transient data members can be written to an ObjectOutputStream. The ObjectOutputStream private method writeObject0(Object,boolean) contains a series of instanceof tests to determine writeability, one of which looks for the Serializable interface. If any of these tests fails, the method throws a NotSerializableException.
Critique
A major problem with marker interfaces is that an interface defines a contract for implementing classes, and that contract is inherited by all subclasses. This means that you cannot "unimplement" a marker. In the example given, if you create a subclass that you do not want to serialize (perhaps because it depends on transient state), you must resort to explicitly throwing NotSerializableException (per the ObjectOutputStream documentation).
Another solution is for the language to support metadata directly:
Both the .NET Framework and Java (as of Java 5 (1.5)) provide support for s |
https://en.wikipedia.org/wiki/%C3%89l%C3%A9ments%20de%20g%C3%A9om%C3%A9trie%20alg%C3%A9brique | The Éléments de géométrie algébrique ("Elements of Algebraic Geometry") by Alexander Grothendieck (assisted by Jean Dieudonné), or EGA for short, is a rigorous treatise, in French, on algebraic geometry that was published (in eight parts or fascicles) from 1960 through 1967 by the Institut des Hautes Études Scientifiques. In it, Grothendieck established systematic foundations of algebraic geometry, building upon the concept of schemes, which he defined. The work is now considered the foundation stone and basic reference of modern algebraic geometry.
Editions
Initially thirteen chapters were planned, but only the first four (making a total of approximately 1500 pages) were published. Much of the material which would have been found in the following chapters can be found, in a less polished form, in the Séminaire de géométrie algébrique (known as SGA). Indeed, as explained by Grothendieck in the preface of the published version of SGA, by 1970 it had become clear that incorporating all of the planned material in EGA would require significant changes in the earlier chapters already published, and that therefore the prospects of completing EGA in the near term were limited. An obvious example is provided by derived categories, which became an indispensable tool in the later SGA volumes, but was not yet used in EGA III as the theory was not yet developed at the time. Considerable effort was therefore spent to bring the published SGA volumes to a high degree of completeness and rigour. Before work on the treatise was abandoned, there were plans in 1966–67 to expand the group of authors to include Grothendieck's students Pierre Deligne and Michel Raynaud, as evidenced by published correspondence between Grothendieck and David Mumford. Grothendieck's letter of 4 November 1966 to Mumford also indicates that the second-edition revised structure was in place by that time, with Chapter VIII already intended to cover the Picard scheme. In that letter he estimated that at the |
https://en.wikipedia.org/wiki/Melioidosis | Melioidosis is an infectious disease caused by a gram-negative bacterium called Burkholderia pseudomallei. Most people exposed to B. pseudomallei experience no symptoms; however, those who do experience symptoms have signs and symptoms that range from mild, such as fever and skin changes, to severe with pneumonia, abscesses, and septic shock that could cause death. Approximately 10% of people with melioidosis develop symptoms that last longer than two months, termed "chronic melioidosis".
Humans are infected with B. pseudomallei by contact with contaminated soil or water. The bacteria enter the body through wounds, inhalation, or ingestion. Person-to-person or animal-to-human transmission is extremely rare. The infection is constantly present in Southeast Asia, particularly in northeast Thailand, and in northern Australia. In temperate regions such as Europe and the United States, melioidosis cases are usually imported from countries where melioidosis is endemic. The signs and symptoms of melioidosis resemble those of tuberculosis, and misdiagnosis is common. Diagnosis is usually confirmed by the growth of B. pseudomallei from an infected person's blood or other bodily fluid such as pus, sputum, and urine. Those with melioidosis are treated first with an "intensive phase" course of intravenous antibiotics (most commonly ceftazidime) followed by a several-month treatment course of co-trimoxazole. In countries with advanced healthcare systems, approximately 10% of people with melioidosis die from the disease. In less developed countries, the death rate could reach 40%.
Efforts to prevent melioidosis include: wearing protective gear while handling contaminated water or soil, practising hand hygiene, drinking boiled water, and avoiding direct contact with soil, water, or heavy rain. There is little evidence supporting the use of melioidosis prophylaxis in humans. The antibiotic co-trimoxazole is used as a preventative only for individuals at high risk for getting the diseas
https://en.wikipedia.org/wiki/Thermal-transfer%20printing | Thermal-transfer printing is a digital printing method in which material is applied to paper (or some other material) by melting a coating of ribbon so that it stays glued to the material on which the print is applied. It contrasts with direct thermal printing, where no ribbon is present in the process.
Thermal transfer is preferred over direct thermal printing on surfaces that are heat-sensitive or when higher durability of printed matter (especially against heat) is desired. Thermal transfer is a popular print process particularly used for the printing of identification labels. It is the most widely used printing process in the world for the printing of high-quality barcodes. Printers like label makers can laminate the print for added durability.
Thermal transfer printing was invented by the SATO Corporation. The world's first thermal-transfer label printer, the SATO M-2311, was produced in 1981.
Thermal-transfer printing process
Thermal-transfer printing is done by melting wax within the print heads of a specialized printer. The thermal-transfer print process utilises three main components: a non-movable print head, a carbon ribbon (the ink), and a substrate to be printed, which would typically be paper, synthetics, card or textile materials. These three components effectively form a sandwich with the ribbon in the middle. A thermally compliant print head, the electrical properties of the ribbon, and the correct rheological properties of the ribbon ink are all essential in producing a high-quality printed image.
Print heads are available in 203 dpi, 300 dpi and 600 dpi resolution options. Each dot is addressed independently, and when a dot is electronically addressed, it immediately heats up to a pre-set (adjustable) temperature. The heated element immediately melts the wax- or resin-based ink on the side of the ribbon film facing the substrate, and this process, in combination with the constant pressure being applied by the print-head locking mechan |
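As a numeric aside, the physical dot pitch implied by each of the print-head resolutions listed above follows directly from the 25.4 mm/inch conversion:

```python
# Dot pitch implied by common thermal print-head resolutions.
for dpi in (203, 300, 600):
    pitch_mm = 25.4 / dpi          # 25.4 mm per inch
    print(f"{dpi} dpi -> {pitch_mm:.4f} mm per dot")
```

At 203 dpi each heating element covers roughly an eighth of a millimetre, which is why that resolution is adequate for most barcode work while 600 dpi is reserved for fine graphics.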
https://en.wikipedia.org/wiki/Vesica%20piscis | The vesica piscis is a type of lens, a mathematical shape formed by the intersection of two disks with the same radius, intersecting in such a way that the center of each disk lies on the perimeter of the other. In Latin, "vesica piscis" literally means "bladder of a fish", reflecting the shape's resemblance to the conjoined dual air bladders (swim bladder) found in most fish. In Italian, the shape's name is mandorla ("almond"). A similar shape in three dimensions is the lemon.
This figure appears in the first proposition of Euclid's Elements, where it forms the first step in constructing an equilateral triangle using a compass and straightedge. The triangle has as its vertices the two disk centers and one of the two sharp corners of the vesica piscis.
Mathematical description
Mathematically, the vesica piscis is a special case of a lens, the shape formed by the intersection of two disks.
The mathematical ratio of the height of the vesica piscis to the width across its center is the square root of 3, or 1.7320508... (since if straight lines are drawn connecting the centers of the two circles with each other and with the two points where the circles intersect, two equilateral triangles join along an edge). The ratios 265:153 = 1.7320261... and 1351:780 = 1.7320513... are two of a series of approximations to this value, each with the property that no better approximation can be obtained with smaller whole numbers. Archimedes of Syracuse, in his Measurement of a Circle, uses these ratios as upper and lower bounds:
Area
The area of the vesica piscis is formed by two equilateral triangles and four equal circular segments. In the drawing, one triangle and one segment appear in blue.
One triangle and one segment form a sector of one sixth of the circle (60°).
The area of the sector is then: (π/6) r².
Since the side of the equilateral triangle has length r, its area is (√3/4) r².
The area of the segment is the difference between those two areas: (π/6 − √3/4) r².
By summing the areas of two triangles and four segments, the total area of the vesica piscis is found to be 2·(√3/4) r² + 4·(π/6 − √3/4) r² = (2π/3 − √3/2) r² ≈ 1.2284 r².
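The derivation above can be checked numerically (radius r = 1 is assumed here for illustration):

```python
import math

# Numerical check of the vesica piscis area, following the
# sector/triangle/segment decomposition, with radius r = 1.
r = 1.0
sector = math.pi / 6 * r**2               # one sixth of the circle (60 degrees)
triangle = math.sqrt(3) / 4 * r**2        # equilateral triangle with side r
segment = sector - triangle               # circular segment
area = 2 * triangle + 4 * segment         # two triangles + four segments

closed_form = (2 * math.pi / 3 - math.sqrt(3) / 2) * r**2
print(area, closed_form)                  # both ≈ 1.2284
```

The two values agree, confirming that the decomposition sums to (2π/3 − √3/2) r².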
https://en.wikipedia.org/wiki/WiMAX | Worldwide Interoperability for Microwave Access (WiMAX) is a family of wireless broadband communication standards based on the IEEE 802.16 set of standards, which provide physical layer (PHY) and media access control (MAC) options.
The WiMAX Forum was formed in June 2001 to promote conformity and interoperability, including the definition of system profiles for commercial vendors. The forum describes WiMAX as "a standards-based technology enabling the delivery of last mile wireless broadband access as an alternative to cable and DSL". IEEE 802.16m or WirelessMAN-Advanced was a candidate for 4G, in competition with the LTE Advanced standard.
WiMAX was initially designed to provide 30 to 40 megabit-per-second data rates, with the 2011 update providing up to 1 Gbit/s for fixed stations.
WiMAX release 2.1, popularly branded as WiMAX 2+, is a backwards-compatible transition from previous WiMAX generations. It is compatible and interoperable with TD-LTE. Newer versions, still backward compatible, include WiMAX release 2.2 (2014) and WiMAX release 3 (2021, adds interoperation with 5G NR).
Terminology
WiMAX refers to interoperable implementations of the IEEE 802.16 family of wireless-networks standards ratified by the WiMAX Forum. (Similarly, Wi-Fi refers to interoperable implementations of the IEEE 802.11 Wireless LAN standards certified by the Wi-Fi Alliance.) WiMAX Forum certification allows vendors to sell fixed or mobile products as WiMAX certified, thus ensuring a level of interoperability with other certified products, as long as they fit the same profile.
The original IEEE 802.16 standard (now called "Fixed WiMAX") was published in 2001.
WiMAX adopted some of its technology from WiBro, a service marketed in Korea.
Mobile WiMAX (originally based on 802.16e-2005) is the revision that was deployed in many countries and is the basis for future revisions such as 802.16m-2011.
WiMAX was sometimes referred to as "Wi-Fi on steroids" and can be used for a number of |
https://en.wikipedia.org/wiki/List%20of%20symbols | Many (but not all) graphemes that are part of a writing system that encodes a full spoken language are included in the Unicode standard, which also includes graphical symbols. See:
Language code
List of Unicode characters
List of writing systems
Punctuation
:Category:Typographical symbols
The remainder of this list focuses on graphemes not part of spoken language-encoding systems.
Basic communication
— No symbol
Arrow (symbol)
Character
Emoji
☺ — Smiley
✓ — checkmark (UK: tick)
Harvey balls
☆ — Star (polygon)
I - signal
0 - lack of signal, example: [ ],[0]
Scientific and engineering symbols
Alchemical symbols
Astronomical symbols
Planet symbols
Chemical symbols
Electronic symbol (for circuit diagrams, etc.)
Engineering drawing symbols
Energy Systems Language
Hazard symbols
List of mathematical constants (typically letters and compound symbols)
Glossary of mathematical symbols
List of physical constants (typically letters and compound symbols)
List of common physics notations (typically letters used as variable names in equations)
Rod of Asclepius / Caduceus as a symbol of medicine
Consumer symbols
Various currency signs (sublist)
Navigational symbols
Traffic signs, including warning signs contain many specialized symbols (see article for list)
DOT pictograms
ISO 7001
Exit sign, "running man"
Gender symbols for public toilets
Map symbol
Japanese map symbols
International Breastfeeding Symbol
International Symbol of Access
Barber's pole
Food
EC identification and health marks, for animal products
Food safe symbol marking food contact materials in the European Union
British Egg Industry Council lion
Kosher symbols
Star-K Kosher Certification
OK Kosher Certification
EarthKosher Kosher Certification
General consumer products
Recycling symbol
Recycling codes
Japanese recycling symbols
Green Dot (symbol)
Laundry symbol
Period-after-opening symbol (on cosmetics as 6M, 12M, 18M, etc.)
- keep dry |
https://en.wikipedia.org/wiki/Kaprekar%20number | In mathematics, a natural number n in a given number base b is a p-Kaprekar number if the representation of its square in that base can be split into two parts, where the second part has p digits, that add up to the original number. The numbers are named after D. R. Kaprekar.
Definition and properties
Let n be a natural number. We define the Kaprekar function for base b > 1 and power p > 0, F_{p,b}, to be the following:
F_{p,b}(n) = α + β,
where β = n² mod b^p and α = (n² − β) / b^p.
A natural number n is a p-Kaprekar number if it is a fixed point for F_{p,b}, which occurs if F_{p,b}(n) = n. 0 and 1 are trivial Kaprekar numbers for all b and p; all other Kaprekar numbers are nontrivial Kaprekar numbers.
For example, in base 10, 45 is a 2-Kaprekar number, because 45² = 2025 and 20 + 25 = 45.
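The fixed-point definition above can be searched directly; this sketch (base 10, p = 2, with an arbitrary search bound) recovers 45 along with the other small 2-Kaprekar numbers. Note that n = 10² also appears, since the fixed-point definition used here does not exclude powers of the base:

```python
# Search for p-Kaprekar numbers in base b using the fixed-point
# definition: F(n) = n**2 // b**p + n**2 % b**p, fixed when F(n) == n.
def kaprekar_map(n: int, p: int, b: int = 10) -> int:
    return n * n // b**p + n * n % b**p

def kaprekar_numbers(p: int, limit: int, b: int = 10):
    return [n for n in range(1, limit + 1) if kaprekar_map(n, p, b) == n]

print(kaprekar_numbers(2, 100))
# [1, 45, 55, 99, 100]
```

For 45, the map splits 2025 into 20 and 25, exactly as in the worked example.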
A natural number n is a sociable Kaprekar number if it is a periodic point for F_{p,b}, where F^k_{p,b}(n) = n for a positive integer k (where F^k_{p,b} is the kth iterate of F_{p,b}), and n forms a cycle of period k. A Kaprekar number is a sociable Kaprekar number with k = 1, and an amicable Kaprekar number is a sociable Kaprekar number with k = 2.
The number of iterations i needed for F^i_{p,b}(n) to reach a fixed point is the Kaprekar function's persistence of n, and is undefined if it never reaches a fixed point.
There are only a finite number of p-Kaprekar numbers and cycles for a given base b, because if n > b^p, then n²/b^p > n, so α = ⌊n²/b^p⌋ ≥ n and F_{p,b}(n) = α + β > n (equality would force β = 0 and n = b^p); iterating F_{p,b} from such an n therefore increases without ever returning. Only when n ≤ b^p do Kaprekar numbers and cycles exist.
If is any divisor of , then is also a -Kaprekar number for base .
In base 2, all even perfect numbers are Kaprekar numbers. More generally, any numbers of the form 2^(n−1)(2^n − 1) or 2^(n−1)(2^n + 1) for natural number n are Kaprekar numbers in base 2.
Set-theoretic definition and unitary divisors
We can define the set K(N) for a given integer N as the set of integers X for which there exist natural numbers A and B satisfying the Diophantine equation
X² = AN + B, where 0 ≤ B < N and X = A + B.
An n-Kaprekar number for base b is then one which lies in the set K(b^n).
It was shown in 2000 that there is a bijection between the unitary divisors of N − 1 and the set K(N) defined above. For a unitary divisor d of N − 1, let d* denote the multiplicative inverse of d modulo (N − 1)/d, namely the least positive int
https://en.wikipedia.org/wiki/Non-equilibrium%20thermodynamics | Non-equilibrium thermodynamics is a branch of thermodynamics that deals with physical systems that are not in thermodynamic equilibrium but can be described in terms of macroscopic quantities (non-equilibrium state variables) that represent an extrapolation of the variables used to specify the system in thermodynamic equilibrium. Non-equilibrium thermodynamics is concerned with transport processes and with the rates of chemical reactions.
Almost all systems found in nature are not in thermodynamic equilibrium, for they are changing or can be triggered to change over time, and are continuously and discontinuously subject to flux of matter and energy to and from other systems and to chemical reactions. Some systems and processes are, however, in a useful sense, near enough to thermodynamic equilibrium to allow description with useful accuracy by currently known non-equilibrium thermodynamics. Nevertheless, many natural systems and processes will always remain far beyond the scope of non-equilibrium thermodynamic methods due to the existence of non-variational dynamics, where the concept of free energy is lost.
The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. One fundamental difference between equilibrium thermodynamics and non-equilibrium thermodynamics lies in the behaviour of inhomogeneous systems, which require for their study knowledge of rates of reaction which are not considered in equilibrium thermodynamics of homogeneous systems. This is discussed below. Another fundamental and very important difference is the difficulty or impossibility, in general, in defining entropy at an instant of time in macroscopic terms for systems not in thermodynamic equilibrium; it can be done, to useful approximation, only in carefully chosen special cases, namely those that are throughout in local thermodynamic equilibrium.
Scope
Difference between equilibrium and non-equilibrium thermodynami |
https://en.wikipedia.org/wiki/Maurice%20Wilkes | Sir Maurice Vincent Wilkes (26 June 1913 – 29 November 2010) was an English computer scientist who designed and helped build the Electronic Delay Storage Automatic Calculator (EDSAC), one of the earliest stored program computers, and who invented microprogramming, a method for using stored-program logic to operate the control unit of a central processing unit's circuits. At the time of his death, Wilkes was an Emeritus Professor at the University of Cambridge.
Early life, education, and military service
Wilkes was born in Dudley, Worcestershire, England, the only child of Ellen (Helen), née Malone (1885–1968) and Vincent Joseph Wilkes (1887–1971), an accounts clerk at the estate of the Earl of Dudley. He grew up in Stourbridge, West Midlands, and was educated at King Edward VI College, Stourbridge. During his school years he was introduced to amateur radio by his chemistry teacher.
He studied the Mathematical Tripos at St John's College, Cambridge from 1931 to 1934, and in 1936 completed his PhD in physics on the propagation of very long radio waves in the ionosphere. He was appointed to a junior faculty position at the University of Cambridge, through which he was involved in the establishment of a computing laboratory. He was called up for military service during World War II and worked on radar at the Telecommunications Research Establishment (TRE) and in operational research.
Research and career
In 1945, Wilkes was appointed as the second director of the University of Cambridge Mathematical Laboratory (later known as the Computer Laboratory).
The Cambridge laboratory initially had many different computing devices, including a differential analyser. One day Leslie Comrie visited Wilkes and lent him a copy of John von Neumann's prepress description of the EDVAC, a successor to the ENIAC under construction by Presper Eckert and John Mauchly at the Moore School of Electrical Engineering. He had to read it overnight because he had to return it an |
https://en.wikipedia.org/wiki/Active%20matrix | Active matrix is a type of addressing scheme used in flat panel displays. In this method of switching individual elements (pixels), each pixel is attached to a transistor and capacitor actively maintaining the pixel state while other pixels are being addressed, in contrast with the older passive matrix technology in which each pixel must maintain its state passively, without being driven by circuitry.
Active matrix technology was invented by Bernard J. Lechner at RCA, using MOSFETs (metal–oxide–semiconductor field-effect transistors). Active matrix technology was first demonstrated as a feasible device using thin-film transistors (TFTs) by T. Peter Brody, Fang Chen Luo and their team at the Thin-Film Devices department of Westinghouse Electric Corporation in 1974, and the term was introduced into the literature in 1975.
Given an m × n matrix, the number of connectors needed to address the display is m + n (just like in passive matrix technology). Each pixel is attached to a switch-device, which actively maintains the pixel state while other pixels are being addressed, also preventing crosstalk from inadvertently changing the state of an unaddressed pixel. The most common switching devices use TFTs, i.e. a FET based on either the cheaper non-crystalline thin-film silicon (a-Si), polycrystalline silicon (poly-Si), or CdSe semiconductor material.
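To make the connector arithmetic concrete, consider a hypothetical 1920 × 1080 panel (the panel size is an assumed example, not from the article):

```python
# Row/column (matrix) addressing needs m + n connectors, while
# addressing every pixel individually would need m * n lines.
m, n = 1920, 1080          # hypothetical example panel
print(m + n)               # address lines with matrix addressing
print(m * n)               # pixels that would otherwise each need a line
```

Three thousand connectors drive over two million pixels, which is what makes both passive and active matrix addressing practical.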
Another variant is to use diodes or resistors, but neither diodes (e.g. metal–insulator–metal diodes) nor non-linear voltage-dependent resistors (i.e. varistors) are currently used, the latter not yet being economical compared to TFTs.
The Macintosh Portable (1989) was perhaps the first consumer laptop to employ an active matrix panel. Since the decline of cathode ray tubes as a consumer display technology, virtually all TVs, computer monitors and smartphone screens that use LCD or OLED technology employ active matrix technology.
See also
AMLCD
AMOLED
QLED
TFT-LCD
Passive matrix addressing
Pixel geometry
Compariso |
https://en.wikipedia.org/wiki/Serre%27s%20multiplicity%20conjectures | In mathematics, Serre's multiplicity conjectures, named after Jean-Pierre Serre, are certain purely algebraic problems, in commutative algebra, motivated by the needs of algebraic geometry. Since André Weil's initial definition of intersection numbers, around 1949, there had been a question of how to provide a more flexible and computable theory.
Let R be a (Noetherian, commutative) regular local ring and P and Q be prime ideals of R. In 1958, Serre realized that classical algebraic-geometric ideas of multiplicity could be generalized using the concepts of homological algebra. Serre defined the intersection multiplicity of R/P and R/Q by means of the Tor functors of homological algebra, as
χ(R/P, R/Q) = Σ_{i≥0} (−1)^i ℓ_R(Tor_i^R(R/P, R/Q)).
This requires the concept of the length of a module, denoted here by ℓ_R, and the assumption that the length ℓ_R((R/P) ⊗_R (R/Q)) is finite.
If this idea were to work, however, certain classical relationships would presumably have to continue to hold. Serre singled out four important properties. These then became conjectures, challenging in the general case. (There are more general statements of these conjectures where R/P and R/Q are replaced by finitely generated modules: see Serre's Local Algebra for more details.)
Dimension inequality
dim(R/P) + dim(R/Q) ≤ dim(R).
Serre proved this for all regular local rings. He established the following three properties when R is either of equal characteristic or of mixed characteristic and unramified (which in this case means that the characteristic of the residue field is not an element of the square of the maximal ideal of the local ring), and conjectured that they hold in general.
Nonnegativity
χ(R/P, R/Q) ≥ 0.
This was proven by Ofer Gabber in 1995.
Vanishing
If dim(R/P) + dim(R/Q) < dim(R),
then χ(R/P, R/Q) = 0.
This was proven in 1985 by Paul C. Roberts, and independently by Henri Gillet and Christophe Soulé.
Positivity
If dim(R/P) + dim(R/Q) = dim(R),
then χ(R/P, R/Q) > 0.
This remains open.
See also
Homological conjectures in commutative algebra
References
Commutative algebra
Intersection theory
Conjectures
Unsolved problems in mathematics |
https://en.wikipedia.org/wiki/System%20programming%20language | A system programming language is a programming language used for system programming; such languages are designed for writing system software, which usually requires different development approaches when compared with application software. Edsger Dijkstra refers to these languages as machine oriented high order languages, or mohol.
General-purpose programming languages tend to focus on generic features to allow programs written in the language to use the same code on different platforms. Examples of such languages include ALGOL and Pascal. This generic quality typically comes at the cost of denying direct access to the machine's internal workings, and this often has negative effects on performance.
System languages, in contrast, are designed not for compatibility, but for performance and ease of access to the underlying hardware while still providing high-level programming concepts like structured programming. Examples include SPL and ESPOL, both of which are similar to ALGOL in syntax but tuned to their respective platforms. Others are cross-platform but designed to work close to the hardware, like BLISS, JOVIAL and BCPL.
Some languages straddle the system and application domains, bridging the gap between these uses. The canonical example is C, which is used widely for both system and application programming. Some modern languages also do this such as Rust and Swift.
Features
In contrast with application languages, system programming languages typically offer more-direct access to the physical hardware of the machine: an archetypical system programming language in this sense was BCPL. System programming languages often lack built-in input/output (I/O) facilities because a system-software project usually develops its own I/O mechanisms or builds on basic monitor I/O or screen management facilities. The distinction between languages used for system programming and application programming became blurred over time with the widespread popularity of PL/I, C and Pascal. |
https://en.wikipedia.org/wiki/Syskey | The SAM Lock Tool, better known as Syskey (the name of its executable file), is a discontinued component of Windows NT that encrypts the Security Account Manager (SAM) database using a 128-bit RC4 encryption key.
Introduced in the Q143475 hotfix for Windows NT 4.0 SP3, the tool was removed in Windows 10's Fall Creators Update in 2017, both because its method of cryptography is considered insecure by modern standards and because the tool had been widely employed in scams as a form of ransomware. Microsoft officially recommended use of BitLocker disk encryption as an alternative.
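Syskey's use of 128-bit RC4 can be illustrated with a from-scratch sketch of the RC4 cipher itself. This is the textbook algorithm with a published test vector, not Microsoft's actual Syskey code; function and variable names are this sketch's own.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """RC4 stream cipher: key scheduling (KSA) then keystream XOR (PRGA)."""
    # Key-scheduling algorithm: permute the state array S under the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation: XOR one keystream byte into each data byte.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Standard published RC4 test vector (not Syskey data):
ct = rc4(b"Key", b"Plaintext")
assert ct.hex().upper() == "BBF316E8D940AF0AD3"
```

Because RC4 is a symmetric stream cipher, applying `rc4` with the same key twice recovers the plaintext, which is also why key reuse weaknesses made schemes built on it easy to attack offline.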
History
Introduced in the Q143475 hotfix included in Windows NT 4.0 SP3, Syskey was intended to protect against offline password cracking attacks by preventing the possessor of an unauthorized copy of the SAM file from extracting useful information from it.
Syskey can optionally be configured to require the user to enter the key during boot (as a startup password) or to load the key onto removable storage media (e.g., a floppy disk or USB flash drive).
In mid-2017, Microsoft removed syskey.exe from future versions of Windows. Microsoft recommends using "BitLocker or similar technologies instead of the syskey.exe utility."
Security issues
The "Syskey Bug"
In December 1999, a security team from BindView found a security hole in Syskey that left it open to a certain form of offline cryptanalytic attack, making a brute-force attack feasible. Microsoft later issued a fix for the problem (dubbed the "Syskey Bug"). The bug affected both Windows NT 4.0 and pre-RC3 versions of Windows 2000.
Use as ransomware
Syskey is commonly abused by "tech support" scammers to lock victims out of their own computers in order to coerce them into paying a ransom.
See also
LM hash
pwdump
References
Cryptographic software
Microsoft Windows security technology
Windows administration |
https://en.wikipedia.org/wiki/After%20Burner | is a rail shooter arcade video game developed and released by Sega in 1987. The player controls an American F-14 Tomcat fighter jet and must clear each of the game's eighteen unique stages by destroying incoming enemies. The plane is equipped with a machine gun and a limited supply of heat-seeking missiles. The game uses a third-person perspective, as in Sega's earlier Space Harrier (1985) and Out Run (1986). It runs on the Sega X Board arcade system which is capable of surface and sprite rotation. It is the fourth Sega game to use a hydraulic "taikan" motion simulator arcade cabinet, one that is more elaborate than their earlier "taikan" simulator games. The cabinet simulates an aircraft cockpit, with flight stick controls, a chair with seatbelt, and hydraulic motion technology that moves, tilts, rolls and rotates the cockpit in sync with the on-screen action.
Designed by Sega veteran Yu Suzuki and the Sega AM2 division, After Burner was intended to be Sega's first "true blockbuster" video game. Development began in December 1986, shortly after the completion of Out Run, and was kept as a closely guarded secret within the company. Suzuki was inspired by the 1986 films Top Gun and Laputa: Castle in the Sky; he originally planned for the game to have a steampunk aesthetic similar to Laputa, but instead went with a Top Gun look to make the game approachable for worldwide audiences. It was designed outside the company in a building named "Studio 128", due to Sega adopting a flextime schedule to allow for games to be worked on outside company headquarters. An updated version with the addition of throttle controls, After Burner II, was released later the same year.
After Burner was a worldwide commercial success, becoming Japan's second highest-grossing large arcade game of 1987 and overall arcade game of 1988 as well as among America's top five highest-grossing dedicated arcade games of 1988. It was acclaimed by critics for its impressive visuals, gameplay and over |
https://en.wikipedia.org/wiki/Surrogate%20key | A surrogate key (or synthetic key, pseudokey, entity identifier, factless key, or technical key) in a database is a unique identifier for either an entity in the modeled world or an object in the database. The surrogate key is not derived from application data, unlike a natural (or business) key.
Definition
There are at least two definitions of a surrogate:
Surrogate (1) – Hall, Owlett and Todd (1976) A surrogate represents an entity in the outside world. The surrogate is internally generated by the system but is nevertheless visible to the user or application.
Surrogate (2) – Wieringa and De Jonge (1991) A surrogate represents an object in the database itself. The surrogate is internally generated by the system and is invisible to the user or application.
The Surrogate (1) definition relates to a data model rather than a storage model and is used throughout this article. See Date (1998).
An important distinction between a surrogate and a primary key depends on whether the database is a current database or a temporal database. Since a current database stores only currently valid data, there is a one-to-one correspondence between a surrogate in the modeled world and the primary key of the database. In this case the surrogate may be used as a primary key, resulting in the term surrogate key. In a temporal database, however, there is a many-to-one relationship between primary keys and the surrogate. Since there may be several objects in the database corresponding to a single surrogate, we cannot use the surrogate as a primary key; another attribute is required, in addition to the surrogate, to uniquely identify each object.
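The many-to-one situation in a temporal database can be sketched in a few lines. The table layout, entity, and names below are hypothetical; the point is that several row versions share one surrogate, so the surrogate alone cannot serve as the primary key.

```python
import itertools

# System-generated surrogate values: never reused, no semantic meaning.
surrogate_seq = itertools.count(1)

rows = []  # each row version: (surrogate, valid_from, name)

alice = next(surrogate_seq)
rows.append((alice, "2021-01-01", "Alice Smith"))
rows.append((alice, "2023-06-01", "Alice Jones"))  # new version, same entity

# Two rows share one surrogate, so a validity attribute must be added
# to the surrogate to form a unique primary key.
primary_keys = {(s, valid_from) for s, valid_from, _ in rows}
assert len(primary_keys) == len(rows)          # (surrogate, valid_from) is unique
assert len({s for s, _, _ in rows}) == 1       # the surrogate alone is not
```

In a current (non-temporal) database the second assertion's set would have one row per surrogate, which is why the surrogate can then double as the primary key.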
Although Hall et al. (1976) say nothing about this, others have argued that a surrogate should have the following characteristics:
the value is never reused
the value is system generated
the value is not manipulable by the user or application
the value contains no semantic meaning
the value is not visible to the user or application
t |
https://en.wikipedia.org/wiki/Marco%20Marra | Marco A. Marra is a Distinguished Scientist and Director of Canada's Michael Smith Genome Sciences Centre at the BC Cancer Research Centre and Professor of Medical Genetics at the University of British Columbia (UBC). He also serves as UBC Canada Research Chair in Genome Science for the Canadian Institutes of Health Research and is an inductee in the Canadian Medical Hall of Fame. Marra has been instrumental in bringing genome science to Canada by demonstrating the pivotal role that genomics can play in human health and disease research.
Education and Early Life
Canadian born and educated, Dr. Marco Marra received a B.Sc. in Molecular & Cell Biology and a PhD in Genetics from Simon Fraser University. His PhD thesis was titled “Genome analysis in Caenorhabditis elegans: Genetic and molecular identification of genes tightly linked to unc-22(IV)”.
Marra trained as a post-doctoral fellow at the Washington University School of Medicine in St Louis, Missouri. He went on to become Group Leader of both the EST (Expressed Sequence Tag) Sequencing Team and the Genome Fingerprinting and Mapping Teams at Washington University’s Genome Sequence Center (renamed the McDonnell Genome Institute), one of the top two sequencing centers in the world at that time.
In 1998, Nobel Laureate Dr. Michael Smith and Dr. Victor Ling set out to establish the Genome Sequence Centre in Vancouver. At their request, Marra returned to British Columbia to head the Mapping and Sequencing teams.
Career and Research
During his first two years with British Columbia’s Genome Sequence Center (renamed Canada's Michael Smith Genome Sciences Centre), Marra served as head of the Mapping and Sequencing teams, Associate Director and Scientific Co-Director. He also held the position of Senior Scientist at BC Cancer Research and Adjunct Professor for the Department of Medical Genetics. Marra subsequently became Professor and Head of the Department of Medical Genetics in the Faculty of Medicine at UBC.
From 2011 |
https://en.wikipedia.org/wiki/Runtime%20library | In computer programming, a runtime library is a set of low-level routines used by a compiler to invoke some of the behaviors of a runtime environment, by inserting calls to the runtime library into compiled executable binary. The runtime environment implements the execution model, built-in functions, and other fundamental behaviors of a programming language. During execution (run time) of that computer program, execution of those calls to the runtime library cause communication between the executable binary and the runtime environment. A runtime library often includes built-in functions for memory management or exception handling. Therefore, a runtime library is always specific to the platform and compiler.
The runtime library may implement a portion of the runtime environment's behavior, but if one reads the code of the available calls, they are typically only thin wrappers that simply package information and send it to the runtime environment or operating system. However, sometimes the term runtime library is meant to include the code of the runtime environment itself, even though much of that code cannot be directly reached via a library call.
For example, some language features that can be performed only (or are more efficient or accurate) at runtime are implemented in the runtime environment and may be invoked via the runtime library API, e.g. some logic errors, array bounds checking, dynamic type checking, exception handling, and possibly debugging functionality. For this reason, some programming bugs are not discovered until the program is tested in a "live" environment with real data, despite sophisticated compile-time checking and testing performed during development.
As another example, a runtime library may contain code of built-in low-level operations too complicated for their inlining during compilation, such as implementations of arithmetic operations not directly supported by the targeted CPU, or various miscellaneous compiler-specific oper |
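One example of such an operation is integer division on a CPU with no hardware divide instruction, where the compiler emits a call to a library helper instead. The sketch below (hypothetical helper name) shows the classic restoring shift-and-subtract algorithm such a helper might use, written in Python for clarity.

```python
def udiv32(n: int, d: int) -> tuple[int, int]:
    """Unsigned 32-bit division by restoring shift-and-subtract.

    Sketch of the kind of software-division helper a compiler's runtime
    library supplies when the target CPU lacks a divide instruction.
    Returns (quotient, remainder).
    """
    if d == 0:
        raise ZeroDivisionError("division by zero")
    q = r = 0
    for i in reversed(range(32)):      # from the most significant bit down
        r = (r << 1) | ((n >> i) & 1)  # bring down the next dividend bit
        if r >= d:                     # does the divisor fit into the partial remainder?
            r -= d
            q |= 1 << i                # set the corresponding quotient bit
    return q, r

assert udiv32(100, 7) == (14, 2)
```

A real runtime routine would be written in assembly or C for the target, but the bit-by-bit structure is the same.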
https://en.wikipedia.org/wiki/ISO%2031 | ISO 31 (Quantities and units, International Organization for Standardization, 1992) is a superseded international standard concerning physical quantities, units of measurement, their interrelationships and their presentation. It was revised and replaced by ISO/IEC 80000.
Parts
The standard comes in 14 parts:
ISO 31-0: General principles (replaced by ISO/IEC 80000-1:2009)
ISO 31-1: Space and time (replaced by ISO/IEC 80000-3:2007)
ISO 31-2: Periodic and related phenomena (replaced by ISO/IEC 80000-3:2007)
ISO 31-3: Mechanics (replaced by ISO/IEC 80000-4:2006)
ISO 31-4: Heat (replaced by ISO/IEC 80000-5)
ISO 31-5: Electricity and magnetism (replaced by ISO/IEC 80000-6)
ISO 31-6: Light and related electromagnetic radiations (replaced by ISO/IEC 80000-7)
ISO 31-7: Acoustics (replaced by ISO/IEC 80000-8:2007)
ISO 31-8: Physical chemistry and molecular physics (replaced by ISO/IEC 80000-9)
ISO 31-9: Atomic and nuclear physics (replaced by ISO/IEC 80000-10)
ISO 31-10: Nuclear reactions and ionizing radiations (replaced by ISO/IEC 80000-10)
ISO 31-11: Mathematical signs and symbols for use in the physical sciences and technology (replaced by ISO 80000-2:2009)
ISO 31-12: Characteristic numbers (replaced by ISO/IEC 80000-11)
ISO 31-13: Solid state physics (replaced by ISO/IEC 80000-12)
A second international standard on quantities and units was IEC 60027. The ISO 31 and IEC 60027 standards were revised by the two standardization organizations in collaboration to integrate both standards into a joint standard ISO/IEC 80000 - Quantities and Units, in which the quantities and equations used with the SI are referred to as the International System of Quantities (ISQ). ISO/IEC 80000 supersedes both ISO 31 and part of IEC 60027.
Coined words
ISO 31-0 introduced several new words into the English language that are direct spelling-calques from the French. Some of these words have been used in scientific literature.
Related national standards
Canada: CAN/CSA-Z234-1- |
https://en.wikipedia.org/wiki/RSSOwl | RSSOwl is a news aggregator for RSS and Atom news feeds. It is written in Java and built on the Eclipse Rich Client Platform which uses SWT as a widget toolkit to allow it to fit in with the look and feel of different operating systems while remaining cross-platform. Released under the EPL-1.0 license, RSSOwl is free software.
In addition to its full text searches, saved searches, notifications and filters, RSSOwl v2.1 synchronized with the now discontinued Google Reader.
History
RSSOwl began as a small project on SourceForge at the end of July 2003. The first public version was 0.3a.
Version 1.0
RSSOwl 1.0 was released on December 19, 2004. It was released with support for RSS and Atom news feeds. The initial release also supported exporting feeds to PDF, RTF, and HTML. This release was available for Windows, Mac, Linux, and Solaris.
RSSOwl 1.1 added support for toolbars and quick search in news feeds. Version 1.2 improved toolbar customization and added support for Atom 1.0 news feeds. Versions 1.2.1 and 1.2.2 added universal binary support for Mac as well as drag and drop for tabs and a built-in feed validator. RSSOwl was the SourceForge Project of the Month for January 2005.
Version 2.0
RSSOwl 2.0 was announced on March 7, 2007, at EclipseCon 2007. Version 2.0 was rebuilt on the Eclipse Rich Client Platform and used db4o for database storage and Lucene for text searching. Several milestone versions were released before the final 2.0 version that added labeling of news feeds, pop-up notification of new feeds and storage of news articles in news bins. The final 2.0 version was released as milestone 9 and added support for secure password and credential storage, news filters, support for embedding Firefox 3.0 XULRunner to render news feeds, and proxy support for Windows. Version 2.1, released July 15, 2011, added Google Reader synchronization support and new layouts.
Forks
RSSOwl is no longer maintained by its original developer. However, a maintained fork of i |
https://en.wikipedia.org/wiki/Fast%20user%20switching | Fast user switching is a feature of a multi-user operating system which allows users to switch between user accounts without quitting applications and logging out.
In Linux
The Linux kernel's VT subsystem dates back to 1993 and does not understand the concept of multiple "seats", meaning that of up to 63 VTs, only one VT can be active at any given time. Despite this kernel limitation, multi-seat is supported on Linux. The feature of "fast user switching" has less demanding requirements than multi-seat does, because the multiple users are not working simultaneously.
The most straightforward route to elegant multi-seat is kmscon/systemd-consoled in combination with systemd-logind. The available desktop environments such as GNOME or KDE Software Compilation adapt their graphical login and session managers (e.g. GDM, SDDM, LightDM, etc.) to the underlying solution, and have to be configured to implement fast user switching that way.
For installations with older environments, the functionality must be enabled in the appropriate configuration files; then a hot-key sequence such as Ctrl-Alt-F8 is pressed. A separate login window will then appear and a second user can log in (or even the first user again). Alternatively, in the default install, new X sessions can be started at will by using different display parameters to have them run in different virtual terminals (e.g. "startx -- :1" or "X :1 -query localhost"). Again, hot-key sequences allow the user switching to take place.
Fast user switching may potentially introduce various security-related complications, and is handled differently among operating systems, each having its advantages and disadvantages. One possibility, simple and secure, is that only the first user gets ownership of resources. A second option is to grant ownership of resources to each new user. The last one to log in takes ownership. A third is to allow all users access to shared resources. This is easier and more intuitive, but allows ( |
https://en.wikipedia.org/wiki/List%20of%20variational%20topics | This is a list of variational topics in from mathematics and physics. See calculus of variations for a general introduction.
Action (physics)
Averaged Lagrangian
Brachistochrone curve
Calculus of variations
Catenoid
Cycloid
Dirichlet principle
Euler–Lagrange equation cf. Action (physics)
Fermat's principle
Functional (mathematics)
Functional derivative
Functional integral
Geodesic
Isoperimetry
Lagrangian
Lagrangian mechanics
Legendre transformation
Luke's variational principle
Minimal surface
Morse theory
Noether's theorem
Path integral formulation
Plateau's problem
Prime geodesic
Principle of least action
Soap bubble
Soap film
Tautochrone curve
Variations |
https://en.wikipedia.org/wiki/Service%20pack | In computing, a service pack comprises a collection of updates, fixes, or enhancements to a software program delivered in the form of a single installable package. Companies often release a service pack when the number of individual patches to a given program reaches a certain (arbitrary) limit, or the software release has shown to be stabilized with a limited number of remaining issues based on users' feedback and bug reports. In large software applications such as office suites, operating systems, database software, or network management, it is not uncommon to have a service pack issued within the first year or two of a product's release. Installing a service pack is easier and less error-prone than installing many individual patches, even more so when updating multiple computers over a network, where service packs are common.
Service packs are usually numbered, and thus referred to in short as SP1, SP2, SP3, etc. They may also bring, besides bug fixes, entirely new features, as is the case with SP2 of Windows XP (e.g. Windows Security Center), or SP3 and SP4 of the heavily database-dependent Trainz 2009: World Builder Edition.
Incremental and cumulative SPs
Service Packs for Microsoft Windows were cumulative through Windows XP. This means that the problems that are fixed in a service pack are also fixed in later service packs. For example, Windows XP SP3 contains all the fixes that are included in Windows XP Service Pack 2 (SP2). Windows Vista SP2 was not cumulative, however, but incremental, requiring that SP1 be installed first.
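The cumulative/incremental distinction above can be modeled as sets of fix identifiers. The IDs below are hypothetical; the point is that a cumulative pack ships every earlier fix, while an incremental one assumes the previous pack is already installed.

```python
# Toy model: a service pack is a set of fix IDs (IDs are made up).
sp1 = {"fix-001", "fix-002"}
sp2_cumulative = sp1 | {"fix-003"}   # cumulative: carries all of SP1 too
sp2_incremental = {"fix-003"}        # incremental: requires SP1 first

all_fixes = {"fix-001", "fix-002", "fix-003"}
base_install = set()                 # a machine with no service pack applied

# Cumulative: one package brings the base install fully up to date.
assert base_install | sp2_cumulative == all_fixes
# Incremental: applied alone, earlier fixes are still missing.
assert base_install | sp2_incremental != all_fixes
# Incremental works only on top of SP1.
assert base_install | sp1 | sp2_incremental == all_fixes
```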
Office XP, Office 2003, Office 2007, Office 2010 and Office 2013 service packs have been cumulative.
Impact on installation of additional software components
Application service packs replace existing files with updated versions that typically fix bugs or close security holes. If, at a later time, additional components are added to the software using the original media, there is a risk of accidentally mixing older and updated compone |
https://en.wikipedia.org/wiki/Generalized%20coordinates | In analytical mechanics, generalized coordinates are a set of parameters used to represent the state of a system in a configuration space. These parameters must uniquely define the configuration of the system relative to a reference state. The generalized velocities are the time derivatives of the generalized coordinates of the system. The adjective "generalized" distinguishes these parameters from the traditional use of the term "coordinate" to refer to Cartesian coordinates.
An example of a generalized coordinate would be to describe the position of a pendulum using the angle of the pendulum relative to vertical, rather than by the x and y position of the pendulum.
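The pendulum example can be made concrete with a short sketch (the function name and the choice of axes are this sketch's own assumptions: pivot at the origin, y-axis up, angle θ measured from the downward vertical).

```python
import math

def pendulum_xy(theta: float, length: float = 1.0) -> tuple[float, float]:
    """Map the single generalized coordinate theta back to the
    constrained Cartesian position of the pendulum bob."""
    return (length * math.sin(theta), -length * math.cos(theta))

x, y = pendulum_xy(math.pi / 2, length=2.0)   # bob swung out horizontally

# The rod-length constraint x^2 + y^2 = L^2 holds automatically for every
# theta, so theta alone fixes the configuration: one degree of freedom
# instead of two constrained Cartesian coordinates.
assert math.isclose(x * x + y * y, 4.0)
```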
Although there may be many possible choices for generalized coordinates for a physical system, they are generally selected to simplify calculations, such as the solution of the equations of motion for the system. If the coordinates are independent of one another, the number of independent generalized coordinates is defined by the number of degrees of freedom of the system.
Generalized coordinates are paired with generalized momenta to provide canonical coordinates on phase space.
Constraints and degrees of freedom
Generalized coordinates are usually selected to provide the minimum number of independent coordinates that define the configuration of a system, which simplifies the formulation of Lagrange's equations of motion. However, it can also occur that a useful set of generalized coordinates may be dependent, which means that they are related by one or more constraint equations.
Holonomic constraints
For a system of N particles in 3D real coordinate space, the position vector of each particle can be written as a 3-tuple in Cartesian coordinates:
r_k = (x_k, y_k, z_k)
Any of the position vectors can be denoted r_k, where k = 1, 2, …, N labels the particles. A holonomic constraint is a constraint equation of the form f(r_k, t) = 0 for particle k,
which connects all the 3 spatial coordinates of that particle together, so they are not independent. The |
https://en.wikipedia.org/wiki/Video%20game%20design | Video game design is the process of designing the content and rules of video games in the pre-production stage and designing the gameplay, environment, storyline and characters in the production stage. Some common video game design subdisciplines are world design, level design, system design, content design, and user interface design. Within the video game industry, video game design is usually just referred to as "game design", which is a more general term elsewhere.
The video game designer is very much like the director of a film; the designer is the visionary of the game and controls the artistic and technical elements of the game in fulfillment of their vision. However, with very complex games, such as MMORPGs or a big-budget action or sports title, designers may number in the dozens. In these cases, there are generally one or two principal designers and many junior designers who specify subsets or subsystems of the game. As the industry has aged and embraced alternative production methodologies such as agile, the role of a principal game designer has begun to diverge: some studios emphasize the auteur model while others emphasize a more team-oriented model. In larger companies like Electronic Arts, each aspect of the game (control, level design) may have a separate producer, lead designer and several general designers.
Video game design requires artistic and technical competence as well as, sometimes, writing skills. Historically, video game programmers have sometimes comprised the entire design team. This was the case for such noted designers as Sid Meier, John Romero, Chris Sawyer and Will Wright. A notable exception to this policy was Coleco, which from its very start separated the functions of design and programming. As video games became more complex and computers and consoles became more powerful, the job of the game designer became separate from that of the lead programmer. Soon, game complexity demanded team members focused on game design. Many early |
https://en.wikipedia.org/wiki/Global%20Multimedia%20Protocols%20Group | The Global Multimedia Protocols Group (GMPG) was founded in March 2003 by Tantek Çelik, Eric A. Meyer, and Matt Mullenweg. The group has developed methods to represent human relationships using XHTML called XHTML Friends Network (XFN) and XHTML Meta Data Profiles (XMDP), for use in weblogs.
It is an informal organization that engages in experiments in metamemetics.
It was first mentioned in 1992 by author Neal Stephenson in his novel Snow Crash.
GMPG was founded to develop the initial principles for XFN, the XHTML Friends Network, as an attempt to create a simple, machine-readable way to express human relationships on the Web within HTML.
An analysis of the network of pages collected by Common Crawl found that the web host gmpg.org had the highest PageRank and the third-highest in-degree of all the hosts in the network.
XFN - XHTML Friends Network
XFN provides a list of non-standard attribute values for the HTML attribute "rel", which is used within the "A" element for hyperlinks.
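For illustration, a hypothetical blogroll link using actual XFN 1.1 rel values ("friend", "met" and "colleague" are defined by the XFN profile; the URL and name are made up):

```html
<!-- rel values "friend", "met", "colleague" come from XFN 1.1;
     the link target is illustrative only. -->
<a href="https://example.com/jane" rel="friend met colleague">Jane</a>
```

Multiple space-separated values may be combined in one rel attribute, which is how XFN layers relationship facets onto an ordinary hyperlink.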
See also
Microformats
References
External links
GMPG.org official website
GMPG Principles
Internet-related organizations
Organizations established in 2003
2003 establishments in the United States |
https://en.wikipedia.org/wiki/Transformation%20geometry | In mathematics, transformation geometry (or transformational geometry) is the name of a mathematical and pedagogic take on the study of geometry by focusing on groups of geometric transformations, and properties that are invariant under them. It is opposed to the classical synthetic geometry approach of Euclidean geometry, that focuses on proving theorems.
For example, within transformation geometry, the properties of an isosceles triangle are deduced from the fact that it is mapped to itself by a reflection about a certain line. This contrasts with the classical proofs by the criteria for congruence of triangles.
The first systematic effort to use transformations as the foundation of geometry was made by Felix Klein in the 19th century, under the name Erlangen programme. For nearly a century this approach remained confined to mathematics research circles. In the 20th century efforts were made to exploit it for mathematical education. Andrei Kolmogorov included this approach (together with set theory) as part of a proposal for geometry teaching reform in Russia. These efforts culminated in the 1960s with the general reform of mathematics teaching known as the New Math movement.
Pedagogy
An exploration of transformation geometry often begins with a study of reflection symmetry as found in daily life. The first real transformation is reflection in a line or reflection against an axis. The composition of two reflections results in a rotation when the lines intersect, or a translation when they are parallel. Thus through transformations students learn about Euclidean plane isometry. For instance, consider reflection in a vertical line and a line inclined at 45° to the horizontal. One can observe that one composition yields a counter-clockwise quarter-turn (90°) while the reverse composition yields a clockwise quarter-turn. Such results show that transformation geometry includes non-commutative processes.
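The quarter-turn compositions described above can be checked with 2×2 reflection matrices. This is a sketch under the usual conventions (column vectors, rotation matrix R(θ) = [[cos θ, −sin θ], [sin θ, cos θ]]); the helper name is this sketch's own.

```python
def matmul(a, b):
    """Product of two 2x2 matrices (plain lists, no dependencies)."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

refl_vertical = [[-1, 0], [0, 1]]  # reflection in the vertical axis: x -> -x
refl_diag45   = [[0, 1], [1, 0]]   # reflection in the 45-degree line y = x

# Matrices act on the right-hand factor first:
cw  = matmul(refl_diag45, refl_vertical)   # vertical reflection applied first
ccw = matmul(refl_vertical, refl_diag45)   # 45-degree reflection applied first

assert cw  == [[0, 1], [-1, 0]]   # rotation by -90 deg: clockwise quarter-turn
assert ccw == [[0, -1], [1, 0]]   # rotation by +90 deg: counter-clockwise
assert cw != ccw                  # composing reflections is not commutative
```

The two results are inverse rotations of each other, matching the text: reversing the order of the same two reflections flips the sense of the quarter-turn.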
An entertaining application of reflection in a line oc |
https://en.wikipedia.org/wiki/Injective%20module | In mathematics, especially in the area of abstract algebra known as module theory, an injective module is a module Q that shares certain desirable properties with the Z-module Q of all rational numbers. Specifically, if Q is a submodule of some other module, then it is already a direct summand of that module; also, given a submodule of a module Y, any module homomorphism from this submodule to Q can be extended to a homomorphism from all of Y to Q. This concept is dual to that of projective modules. Injective modules were introduced in and are discussed in some detail in the textbook .
Injective modules have been heavily studied, and a variety of additional notions are defined in terms of them: Injective cogenerators are injective modules that faithfully represent the entire category of modules. Injective resolutions measure how far from injective a module is in terms of the injective dimension and represent modules in the derived category. Injective hulls are maximal essential extensions, and turn out to be minimal injective extensions. Over a Noetherian ring, every injective module is uniquely a direct sum of indecomposable modules, and their structure is well understood. An injective module over one ring may not be injective over another, but there are well-understood methods of changing rings which handle special cases. Rings which are themselves injective modules have a number of interesting properties and include rings such as group rings of finite groups over fields. Injective modules include divisible groups and are generalized by the notion of injective objects in category theory.
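One classical way to verify the extension property described above is Baer's criterion, a standard fact (not stated in this excerpt) that reduces the test from arbitrary module inclusions to ideals of the ring:

```latex
% Baer's criterion: injectivity can be tested on left ideals alone.
Q \text{ is injective} \iff
\text{for every left ideal } I \subseteq R
\text{ and every homomorphism } f\colon I \to Q,\
\exists\, h\colon R \to Q \ \text{with}\ h\big|_{I} = f .
```

For R = Z this specializes to the statement that an abelian group is injective exactly when it is divisible, which is how the rational numbers Q arise as the motivating example.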
Definition
A left module Q over the ring R is injective if it satisfies one (and therefore all) of the following equivalent conditions:
If Q is a submodule of some other left R-module M, then there exists another submodule K of M such that M is the internal direct sum of Q and K, i.e. Q + K = M and Q ∩ K = {0}.
Any short exact sequence 0 → Q → M → K → 0 of left R-modules splits. |
https://en.wikipedia.org/wiki/Injective%20object | In mathematics, especially in the field of category theory, the concept of injective object is a generalization of the concept of injective module. This concept is important in cohomology, in homotopy theory and in the theory of model categories. The dual notion is that of a projective object.
Definition
An object Q in a category C is said to be injective if for every monomorphism f : X → Y and every morphism g : X → Q there exists a morphism h : Y → Q extending g to Y, i.e. such that h ∘ f = g.
That is, every morphism g : X → Q factors through every monomorphism f : X → Y.
The morphism h in the above definition is not required to be uniquely determined by f and g.
In a locally small category, it is equivalent to require that the hom functor Hom_C(–, Q) carries monomorphisms in C to surjective set maps.
In Abelian categories
The notion of injectivity was first formulated for abelian categories, and this is still one of its primary areas of application. When C is an abelian category, an object Q of C is injective if and only if its hom functor HomC(–, Q) is exact.
If 0 → Q → M → K → 0 is an exact sequence in C such that Q is injective, then the sequence splits.
Enough injectives and injective hulls
The category C is said to have enough injectives if for every object X of C, there exists a monomorphism from X to an injective object.
A monomorphism g in C is called an essential monomorphism if for any morphism f, the composite fg is a monomorphism only if f is a monomorphism.
If g is an essential monomorphism with domain X and an injective codomain G, then G is called an injective hull of X. The injective hull is then uniquely determined by X up to a non-canonical isomorphism.
Examples
In the category of abelian groups and group homomorphisms, Ab, an injective object is necessarily a divisible group. Assuming the axiom of choice, the notions are equivalent.
In the category of (left) modules and module homomorphisms, R-Mod, an injective object is an injective module. R-Mod has injective hulls (as a consequence, R-Mod has enough injectives).
In the cate |
https://en.wikipedia.org/wiki/800%20%28number%29 | 800 (eight hundred) is the natural number following 799 and preceding 801.
It is the sum of four consecutive primes (193 + 197 + 199 + 211). It is a Harshad number, an Achilles number and the area of a square with diagonal 40.
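Several of these claims are easy to verify mechanically; a quick sketch (the Achilles-number check is left as the factorization 2^5 × 5^2, i.e. powerful but not a perfect power):

```python
# Quick verification of the stated properties of 800.
assert 193 + 197 + 199 + 211 == 800           # sum of four consecutive primes
assert 40 ** 2 // 2 == 800                    # area of a square with diagonal d is d^2 / 2
assert 2 ** 5 * 5 ** 2 == 800                 # 800 = 2^5 * 5^2: every prime factor's
                                              # square divides it, but it is not a
                                              # perfect power, hence an Achilles number
assert 800 % sum(int(d) for d in "800") == 0  # Harshad: divisible by its digit sum (8)
```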
Integers from 801 to 899
800s
801 = 3² × 89, Harshad number, number of clubs patterns appearing in 50 × 50 coins
802 = 2 × 401, sum of eight consecutive primes (83 + 89 + 97 + 101 + 103 + 107 + 109 + 113), nontotient, happy number, sum of 4 consecutive triangular numbers (171 + 190 + 210 + 231)
803 = 11 × 73, sum of three consecutive primes (263 + 269 + 271), sum of nine consecutive primes (71 + 73 + 79 + 83 + 89 + 97 + 101 + 103 + 107), Harshad number, number of partitions of 34 into Fibonacci parts
804 = 2² × 3 × 67, nontotient, Harshad number, refactorable number
"The 804" is a local nickname for the Greater Richmond Region of the U.S. state of Virginia, derived from its telephone area code (although the area code covers a larger area).
805 = 5 × 7 × 23, sphenic number, number of partitions of 38 into nonprime parts
806 = 2 × 13 × 31, sphenic number, nontotient, totient sum for first 51 integers, happy number, Phi(51)
807 = 3 × 269, antisigma(42)
808 = 2³ × 101, refactorable number, strobogrammatic number
809 = prime number, Sophie Germain prime, Chen prime, Eisenstein prime with no imaginary part
810s
810 = 2 × 3⁴ × 5, Harshad number, number of distinct reduced words of length 5 in the Coxeter group of "Apollonian reflections" in three dimensions, number of non-equivalent ways of expressing 100,000 as the sum of two prime numbers
811 = prime number, sum of five consecutive primes (151 + 157 + 163 + 167 + 173), Chen prime, happy number, largest minimal prime in base 9, the Mertens function of 811 returns 0
812 = 2² × 7 × 29, admirable number, pronic number, balanced number, the Mertens function of 812 returns 0
813 = 3 × 271, Blum integer
814 = 2 × 11 × 37, sphenic number, the Mertens function of 814 returns 0, nontotie |
https://en.wikipedia.org/wiki/900%20%28number%29 | 900 (nine hundred) is the natural number following 899 and preceding 901. It is the square of 30 and the sum of Euler's totient function for the first 54 positive integers. In base 10 it is a Harshad number. It is also the first number to be the square of a sphenic number.
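The properties stated here are also easy to check directly; the sketch below (with ad hoc helpers) verifies the square, the totient sum for the first 54 integers, the Harshad property, and that 30 is sphenic:

```python
from math import gcd

def phi(n):
    """Euler's totient, by direct count."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def prime_factors(n):
    f, d = [], 2
    while d * d <= n:
        while n % d == 0:
            f.append(d)
            n //= d
        d += 1
    if n > 1:
        f.append(n)
    return f

assert 30 ** 2 == 900
assert sum(phi(k) for k in range(1, 55)) == 900    # totient sum for 1..54
assert 900 % (9 + 0 + 0) == 0                      # Harshad in base 10
assert prime_factors(30) == [2, 3, 5]              # 30 is sphenic, so 900 = 30^2
print("all checks pass")
```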
In other fields
900 is also:
A telephone area code for "premium" telephone calls in the North American Numbering Plan (900 number)
In Greek number symbols, the sign Sampi ("ϡ", literally "like a pi")
A skateboarding trick in which the skateboarder spins two and a half times (360 degrees times 2.5 is 900)
A 900 series refers to three consecutive perfect games in bowling
Yoda's age in Star Wars
Integers from 901 to 999
900s
901 = 17 × 53, centered triangular number, happy number
902 = 2 × 11 × 41, sphenic number, nontotient, Harshad number
903 = 3 × 7 × 43, sphenic number, triangular number, Schröder–Hipparchus number, Mertens function(903) returns 0, little Schröder number
904 = 2³ × 113 or 113 × 8, refactorable number, Mertens function(904) returns 0, lazy caterer number, number of 1's in all partitions of 26 into odd parts
905 = 5 × 181, sum of seven consecutive primes (109 + 113 + 127 + 131 + 137 + 139 + 149), smallest composite de Polignac number
"The 905" is a common nickname for the suburban portions of the Greater Toronto Area in Canada, a region whose telephones used area code 905 before overlay plans added two more area codes.
906 = 2 × 3 × 151, strobogrammatic, sphenic number, Mertens function(906) returns 0
907 = prime number
908 = 2² × 227, nontotient, number of primitive sorting networks on 6 elements, number of rhombic tilings of a 12-gon
909 = 3² × 101, number of non-isomorphic aperiodic multiset partitions of weight 7
910s
910 = 2 × 5 × 7 × 13, Mertens function(910) returns 0, Harshad number, happy number, balanced number, number of polynomial symmetric functions of matrix of order 7 under separate row and column permutations
911 = Sophie Germain prime numbe |
https://en.wikipedia.org/wiki/Software%20engineering%20professionalism | Software engineering professionalism is a movement to make software engineering a profession, with aspects such as degree and certification programs, professional associations, professional ethics, and government licensing. The field is a licensed discipline in Texas in the United States (Texas Board of Professional Engineers, since 2013), under Engineers Australia (course accreditation since 2001, not licensing), and in many provinces in Canada.
History
In 1993 the IEEE and ACM began a joint effort called JCESEP, which evolved into SWECC in 1998, to explore making software engineering into a profession. The ACM withdrew from SWECC in May 1999, objecting to its support for the Texas professionalization effort of establishing state licenses for software engineers. The ACM had determined that the state of knowledge and practice in software engineering was too immature to warrant licensing,
and that licensing would give false assurances of competence even if the body of knowledge were mature.
The IEEE continued to support making software engineering a branch of traditional engineering.
In Canada the Canadian Information Processing Society established the Information Systems Professional certification process. Also, by the late 1990s (1999 in British Columbia) the discipline of software engineering as a professional engineering discipline was officially created. This has caused some disputes between the provincial engineering associations and companies who call their developers software engineers, even though these developers have not been licensed by any engineering association.
In 1999, the Panel of Software Engineering was formed as part of the settlement between Engineers Canada and the Memorial University of Newfoundland over the school's use of the term "software engineering" in the name of a computer science program. Concerns were raised that inappropriate use of the name "software engineering" to describe non-engineering programs could lead to student and public confusion, a |
https://en.wikipedia.org/wiki/Mutually%20orthogonal%20Latin%20squares | In combinatorial mathematics, two Latin squares of the same size (order) are said to be orthogonal if, when superimposed, the ordered pairs of entries in the cells are all distinct. A set of Latin squares, all of the same order, all pairs of which are orthogonal, is called a set of mutually orthogonal Latin squares. This concept of orthogonality in combinatorics is strongly related to the concept of blocking in statistics, which ensures that independent variables are truly independent, with no hidden confounding correlations. "Orthogonal" is thus synonymous with "independent" in that knowing one variable's value gives no further information about another variable's likely value.
An outdated term for a pair of orthogonal Latin squares, found in older literature, is Graeco-Latin square.
Graeco-Latin squares
A Graeco-Latin square, Euler square, or pair of orthogonal Latin squares of order n over two sets S and T (which may be the same), each consisting of n symbols, is an n × n arrangement of cells, each cell containing an ordered pair (s, t), where s is in S and t is in T, such that every row and every column contains each element of S and each element of T exactly once, and no two cells contain the same ordered pair.
The arrangement of the s-coordinates by themselves (which may be thought of as Latin characters) and of the t-coordinates (the Greek characters) each forms a Latin square. A Graeco-Latin square can therefore be decomposed into two orthogonal Latin squares. Orthogonality here means that every pair (s, t) from the Cartesian product S × T occurs exactly once.
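The defining property is easy to check directly. Below is a small sketch (the function names and the two order-3 squares are illustrative) verifying that two Latin squares are each Latin and mutually orthogonal:

```python
def is_latin(sq):
    """Each symbol appears exactly once in every row and every column."""
    n = len(sq)
    return (all(sorted(row) == list(range(n)) for row in sq)
            and all(sorted(col) == list(range(n)) for col in zip(*sq)))

def are_orthogonal(a, b):
    """Superimposing a and b yields all n^2 ordered pairs, each exactly once."""
    n = len(a)
    pairs = {(a[i][j], b[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

A = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
B = [[0, 1, 2], [2, 0, 1], [1, 2, 0]]
print(is_latin(A) and is_latin(B) and are_orthogonal(A, B))  # True
```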
Orthogonal Latin squares were studied in detail by Leonhard Euler, who took the two sets to be S = {A, B, C, …}, the first n upper-case letters of the Latin alphabet, and T = {α, β, γ, …}, the first n lower-case letters of the Greek alphabet; hence the name Graeco-Latin square.
Existence
When a Graeco-Latin square is viewed as a pair of orthogonal Latin squares, each of the Latin squares is said to have an orthogonal mate. In an arbitrary Latin square, a sel |
https://en.wikipedia.org/wiki/Gaston%20Tarry | Gaston Tarry (27 September 1843 – 21 June 1913) was a French mathematician. Born in Villefranche de Rouergue, Aveyron, he studied mathematics at high school before joining the civil service in Algeria. He pursued mathematics as an amateur.
In 1901 Tarry confirmed Leonhard Euler's conjecture that no 6×6 Graeco-Latin square was possible (the 36 officers problem).
See also
List of amateur mathematicians
Prouhet-Tarry-Escott problem
Tarry point
Tetramagic square
References
External links
People from Villefranche-de-Rouergue
1843 births
1913 deaths
Combinatorialists
19th-century French mathematicians
20th-century French mathematicians |
https://en.wikipedia.org/wiki/Module%20homomorphism | In algebra, a module homomorphism is a function between modules that preserves the module structures. Explicitly, if M and N are left modules over a ring R, then a function f : M → N is called an R-module homomorphism or an R-linear map if, for any x, y in M and r in R, f(x + y) = f(x) + f(y) and f(rx) = rf(x).
In other words, f is a group homomorphism (for the underlying additive groups) that commutes with scalar multiplication. If M, N are right R-modules, then the second condition is replaced with f(xr) = f(x)r.
The preimage of the zero element under f is called the kernel of f. The set of all module homomorphisms from M to N is denoted by Hom_R(M, N). It is an abelian group (under pointwise addition) but is not necessarily a module unless R is commutative.
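As a concrete sketch (the map and moduli are chosen purely for illustration), the reduction map from Z/6 to Z/3, both viewed as Z-modules, can be checked against both conditions by brute force:

```python
# View Z/6 and Z/3 as Z-modules; reduction mod 3 is a candidate Z-linear map.
M, N = 6, 3
f = lambda x: x % N

# f(x + y) = f(x) + f(y), computed in Z/6 on the left and Z/3 on the right:
add_ok = all(f((x + y) % M) == (f(x) + f(y)) % N
             for x in range(M) for y in range(M))
# f(r x) = r f(x) for integer scalars r:
lin_ok = all(f(r * x % M) == r * f(x) % N
             for r in range(10) for x in range(M))
print(add_ok and lin_ok)  # True
```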
The composition of module homomorphisms is again a module homomorphism, and the identity map on a module is a module homomorphism. Thus, all the (say left) modules together with all the module homomorphisms between them form the category of modules.
Terminology
A module homomorphism is called a module isomorphism if it admits an inverse homomorphism; in particular, it is a bijection. Conversely, one can show a bijective module homomorphism is an isomorphism; i.e., the inverse is a module homomorphism. In particular, a module homomorphism is an isomorphism if and only if it is an isomorphism between the underlying abelian groups.
The isomorphism theorems hold for module homomorphisms.
A module homomorphism from a module M to itself is called an endomorphism and an isomorphism from M to itself an automorphism. One writes End_R(M) for the set of all endomorphisms of a module M. It is not only an abelian group but is also a ring with multiplication given by function composition, called the endomorphism ring of M. The group of units of this ring is the automorphism group of M.
Schur's lemma says that a homomorphism between simple modules (modules with no non-trivial submodules) must be either zero or an isomorphism. In particular, the endomorphism ring of a simple module is a division ring. |
https://en.wikipedia.org/wiki/Intel%20iAPX%20432 | The iAPX 432 (Intel Advanced Performance Architecture) is a discontinued computer architecture introduced in 1981. It was Intel's first 32-bit processor design. The main processor of the architecture, the general data processor, is implemented as a set of two separate integrated circuits, due to technical limitations at the time. Although some early 8086, 80186 and 80286-based systems and manuals also used the iAPX prefix for marketing reasons, the iAPX 432 and the 8086 processor lines are completely separate designs with completely different instruction sets.
The project started in 1975 as the 8800 (after the 8008 and the 8080) and was intended to be Intel's major design for the 1980s. Unlike the 8086, which was designed the following year as a successor to the 8080, the iAPX 432 was a radical departure from Intel's previous designs meant for a different market niche, and completely unrelated to the 8080 or x86 product lines.
The iAPX 432 project is considered a commercial failure for Intel, and was discontinued in 1986.
Description
The iAPX 432 was referred to as a "micromainframe", designed to be programmed entirely in high-level languages. The instruction set architecture was also entirely new and a significant departure from Intel's previous 8008 and 8080 processors as the iAPX 432 programming model is a stack machine with no visible general-purpose registers. It supports object-oriented programming, garbage collection and multitasking as well as more conventional memory management directly in hardware and microcode. Direct support for various data structures is also intended to allow modern operating systems to be implemented using far less program code than for ordinary processors. Intel iMAX 432 is a discontinued operating system for the 432, written entirely in Ada, and Ada was also the intended primary language for application programming. In some aspects, it may be seen as a high-level language computer architecture.
These properties and features resu |
https://en.wikipedia.org/wiki/Color%20Graphics%20Adapter | The Color Graphics Adapter (CGA), originally also called the Color/Graphics Adapter or IBM Color/Graphics Monitor Adapter, introduced in 1981, was IBM's first color graphics card for the IBM PC and established a de facto computer display standard.
Hardware design
The original IBM CGA graphics card was built around the Motorola 6845 display controller, came with 16 kilobytes of video memory built in, and featured several graphics and text modes. The highest display resolution of any mode was 640×200, and the highest color depth supported was 4-bit (16 colors).
The CGA card could be connected either to a direct-drive CRT monitor using a 4-bit digital (TTL) RGBI interface, such as the IBM 5153 color display, or to an NTSC-compatible television or composite video monitor via an RCA connector. The RCA connector provided only baseband video, so to connect the CGA card to a television set without a composite video input required a separate RF modulator.
IBM produced the 5153 Personal Computer Color Display for use with the CGA, but this was not available at release and would not be released until March 1983.
Although IBM's own color display was not available, customers could either use the composite output (with an RF modulator if needed), or the direct-drive output with available third-party monitors that supported the RGBI format and scan rate. Some third-party displays lacked the intensity input, reducing the number of available colors to eight, and many also lacked IBM's unique circuitry which rendered the dark-yellow color as brown, so any software which used brown would be displayed incorrectly.
Output capabilities
CGA offered several video modes.
Graphics modes:
160×100 in 16 colors, chosen from a 16-color palette, utilizing a specific configuration of the 80×25 text mode.
320×200 in 4 colors, chosen from 3 fixed palettes, with high- and low-intensity variants, with color 1 chosen from a 16-color palette.
640×200 in 2 colors, one black, one chosen from a 1 |
https://en.wikipedia.org/wiki/Glycosylation | Glycosylation is the reaction in which a carbohydrate (or 'glycan'), i.e. a glycosyl donor, is attached to a hydroxyl or other functional group of another molecule (a glycosyl acceptor) in order to form a glycoconjugate. In biology (but not always in chemistry), glycosylation usually refers to an enzyme-catalysed reaction, whereas glycation (also 'non-enzymatic glycation' and 'non-enzymatic glycosylation') may refer to a non-enzymatic reaction.
Glycosylation is a form of co-translational and post-translational modification. Glycans serve a variety of structural and functional roles in membrane and secreted proteins. The majority of proteins synthesized in the rough endoplasmic reticulum undergo glycosylation. Glycosylation is also present in the cytoplasm and nucleus as the O-GlcNAc modification. Aglycosylation is a feature of engineered antibodies to bypass glycosylation. Five classes of glycans are produced:
N-linked glycans attached to a nitrogen of asparagine or arginine side-chains. N-linked glycosylation requires participation of a special lipid called dolichol phosphate.
O-linked glycans attached to the hydroxyl oxygen of serine, threonine, tyrosine, hydroxylysine, or hydroxyproline side-chains, or to oxygens on lipids such as ceramide.
Phosphoglycans linked through the phosphate of a phosphoserine.
C-linked glycans, a rare form of glycosylation where a sugar is added to a carbon on a tryptophan side-chain. Aloin is one of the few naturally occurring C-glycosides.
Glypiation, which is the addition of a GPI anchor that links proteins to lipids through glycan linkages.
Purpose
Glycosylation is the process by which a carbohydrate is covalently attached to a target macromolecule, typically proteins and lipids. This modification serves various functions. For instance, some proteins do not fold correctly unless they are glycosylated. In other cases, proteins are not stable unless they contain oligosaccharides linked at the amide nitrogen of certain asparagine |
https://en.wikipedia.org/wiki/Data%20striping | In computer data storage, data striping is the technique of segmenting logically sequential data, such as a file, so that consecutive segments are stored on different physical storage devices.
Striping is useful when a processing device requests data more quickly than a single storage device can provide it. By spreading segments across multiple devices which can be accessed concurrently, total data throughput is increased. It is also a useful method for balancing I/O load across an array of disks. Striping is used across disk drives in redundant array of independent disks (RAID) storage, network interface controllers, disk arrays, different computers in clustered file systems and grid-oriented storage, and RAM in some systems.
Method
One method of striping is done by interleaving sequential segments on storage devices in a round-robin fashion from the beginning of the data sequence. This works well for streaming data, but subsequent random accesses will require knowledge of which device contains the data. If the data is stored such that the physical address of each data segment is assigned a one-to-one mapping to a particular device, the device to access each segment requested can be calculated from the address without knowing the offset of the data within the full sequence.
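The round-robin mapping described above amounts to a small address calculation. A sketch (the `locate` helper and the stripe size and device count are illustrative):

```python
def locate(offset, stripe_size, n_devices):
    """Map a logical byte offset to (device index, offset on that device)
    under round-robin striping with fixed-size stripe units."""
    segment = offset // stripe_size                  # which stripe unit
    device = segment % n_devices                     # round-robin device choice
    local = (segment // n_devices) * stripe_size + offset % stripe_size
    return device, local

print(locate(0, 4096, 4))      # (0, 0)
print(locate(5000, 4096, 4))   # (1, 904)
print(locate(20000, 4096, 4))  # (0, 7712)
```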
Other methods might be employed in which sequential segments are not stored on sequential devices. Such non-sequential interleaving can have benefits in some error correction schemes.
Advantages and disadvantages
Advantages of striping include performance and throughput. Sequential time interleaving of data accesses allows the lesser data access throughput of each storage device to be cumulatively multiplied by the number of storage devices employed. Increased throughput allows the data processing device to continue its work without interruption, and thereby finish its procedures more quickly. This is manifested in improved performance of the data processing.
Because different segments |
https://en.wikipedia.org/wiki/Program%20derivation | In computer science, program derivation is the derivation of a program from its specification, by mathematical means.
To derive a program means to write a formal specification, which is usually non-executable, and then apply mathematically correct rules in order to obtain an executable program satisfying that specification. The program thus obtained is then correct by construction. Program and correctness proof are constructed together.
The approach usually taken in formal verification is to first write a program, and then provide a proof that it conforms to a given specification. The main problems with this are that:
the resulting proof is often long and cumbersome;
no insight is given as to how the program was developed; it appears "like a rabbit out of a hat";
should the program happen to be incorrect in some subtle way, the attempt to verify it is likely to be long and certain to be fruitless.
Program derivation tries to remedy these shortcomings by:
keeping proofs shorter, by development of appropriate mathematical notations;
making design decisions through formal manipulation of the specification.
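As a toy illustration of the flavour of the approach (not any specific formalism), here is an integer-division loop developed from the specification n == q*d + r with 0 <= r < d, with the derived invariant checked at runtime:

```python
def divmod_derived(n, d):
    """Quotient and remainder developed from the specification
    n == q*d + r and 0 <= r < d, for n >= 0 and d > 0."""
    assert n >= 0 and d > 0
    q, r = 0, n                        # establishes the invariant n == q*d + r, r >= 0
    while r >= d:                      # guard: negation of the postcondition r < d
        q, r = q + 1, r - d            # preserves the invariant
        assert n == q * d + r and r >= 0
    return q, r                        # invariant plus negated guard give the spec

print(divmod_derived(17, 5))  # (3, 2)
```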
Terms that are roughly synonymous with program derivation are: transformational programming, algorithmics, deductive programming.
The Bird-Meertens Formalism is an approach to program derivation.
Approaches to achieving correctness in distributed computing include research languages such as the P programming language.
See also
Automatic programming
Hoare logic
Program refinement
Design by contract
Program synthesis
Proof-carrying code
References
Edsger W. Dijkstra, Wim H. J. Feijen, A Method of Programming, Addison-Wesley, 1988, 188 pages
Edward Cohen, Programming in the 1990s, Springer-Verlag, 1990
Anne Kaldewaij, Programming: The Derivation of Algorithms, Prentice-Hall, 1990, 216 pages
David Gries, The Science of Programming, Springer-Verlag, 1981, 350 pages
Carroll Morgan (computer scientist), Programming from Specifications, Internati |
https://en.wikipedia.org/wiki/Skipjack%20%28cipher%29 | In cryptography, Skipjack is a block cipher—an algorithm for encryption—developed by the U.S. National Security Agency (NSA). Initially classified, it was originally intended for use in the controversial Clipper chip. Subsequently, the algorithm was declassified.
History of Skipjack
Skipjack was proposed as the encryption algorithm in a US government-sponsored scheme of key escrow, and the cipher was provided for use in the Clipper chip, implemented in tamperproof hardware. Skipjack is used only for encryption; the key escrow is achieved through the use of a separate mechanism known as the Law Enforcement Access Field (LEAF).
The algorithm was initially secret, and was regarded with considerable suspicion by many for that reason. It was declassified on 24 June 1998, shortly after its basic design principle had been discovered independently by the public cryptography community.
To ensure public confidence in the algorithm, several academic researchers from outside the government were called in to evaluate the algorithm. The researchers found no problems with either the algorithm itself or the evaluation process. Moreover, their report gave some insight into the (classified) history and development of Skipjack:
In March 2016, NIST published a draft of its cryptographic standard which no longer certifies Skipjack for US government applications.
Description
Skipjack uses an 80-bit key to encrypt or decrypt 64-bit data blocks. It is an unbalanced Feistel network with 32 rounds. It was designed to be used in secured phones.
Cryptanalysis
Eli Biham and Adi Shamir discovered an attack against 16 of the 32 rounds within one day of declassification, and (with Alex Biryukov) extended this to 31 of the 32 rounds (but with an attack only slightly faster than exhaustive search) within months using impossible differential cryptanalysis.
A truncated differential attack was also published against 28 rounds of Skipjack cipher.
A claimed attack against the full cipher was publ |
https://en.wikipedia.org/wiki/Legendre%20transform%20%28integral%20transform%29 | In mathematics, Legendre transform is an integral transform named after the mathematician Adrien-Marie Legendre, which uses Legendre polynomials as kernels of the transform. Legendre transform is a special case of Jacobi transform.
The Legendre transform of a function f(x) is f̃(n) = ∫_{−1}^{1} P_n(x) f(x) dx, where P_n is the Legendre polynomial of degree n.
The inverse Legendre transform is given by f(x) = Σ_{n=0}^{∞} ((2n + 1)/2) f̃(n) P_n(x).
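A numerical sketch of this transform pair, assuming the standard kernel ∫_{−1}^{1} P_n(x) f(x) dx and using NumPy's Gauss–Legendre quadrature (the function names are ad hoc):

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_transform(f, n_max, quad_deg=50):
    """f_tilde(n) = integral over [-1, 1] of P_n(x) f(x) dx, via Gauss-Legendre quadrature."""
    x, w = L.leggauss(quad_deg)
    return np.array([np.sum(w * L.legval(x, [0] * n + [1]) * f(x))
                     for n in range(n_max + 1)])

def inverse_transform(f_tilde, x):
    """f(x) = sum over n of (2n + 1)/2 * f_tilde(n) * P_n(x)."""
    scaled = [(2 * n + 1) / 2 * c for n, c in enumerate(f_tilde)]
    return L.legval(x, scaled)

f = lambda x: x ** 2
ft = legendre_transform(f, 4)        # only n = 0 and n = 2 are nonzero: 2/3 and 4/15
x = np.linspace(-1, 1, 9)
print(np.allclose(inverse_transform(ft, x), f(x)))  # True
```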
Associated Legendre transform
Associated Legendre transform is defined as
The inverse Legendre transform is given by
Some Legendre transform pairs
References
Integral transforms
Mathematical physics |
https://en.wikipedia.org/wiki/Color%20histogram | In image processing and photography, a color histogram is a representation of the distribution of colors in an image. For digital images, a color histogram represents the number of pixels that have colors in each of a fixed list of color ranges, that span the image's color space, the set of all possible colors.
The color histogram can be built for any kind of color space, although the term is more often used for three-dimensional spaces such as RGB or HSV. For monochromatic images, the term intensity histogram may be used instead. For multi-spectral images, where each pixel is represented by an arbitrary number of measurements (for example, beyond the three measurements in RGB), the color histogram is N-dimensional, with N being the number of measurements taken. Each measurement has its own wavelength range of the light spectrum, some of which may be outside the visible spectrum.
If the set of possible color values is sufficiently small, each of those colors may be placed on a range by itself; then the histogram is merely the count of pixels that have each possible color. Most often, the space is divided into an appropriate number of ranges, often arranged as a regular grid, each containing many similar color values. The color histogram may also be represented and displayed as a smooth function defined over the color space that approximates the pixel counts.
Like other kinds of histograms, the color histogram is a statistic that can be viewed as an approximation of an underlying continuous distribution of color values.
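A minimal sketch of the binning step for an 8-bit RGB image (the helper name and bin count are illustrative):

```python
import numpy as np

def color_histogram(image, bins_per_channel=4):
    """3-D RGB histogram: count of pixels falling into each (r, g, b) bin."""
    binned = image // (256 // bins_per_channel)     # quantize each 8-bit channel
    hist = np.zeros((bins_per_channel,) * 3, dtype=int)
    for r, g, b in binned.reshape(-1, 3):
        hist[r, g, b] += 1
    return hist

img = np.zeros((2, 2, 3), dtype=np.uint8)           # 2x2 image, all black...
img[0, 0] = [255, 0, 0]                             # ...except one red pixel
h = color_histogram(img)
print(h[3, 0, 0], h[0, 0, 0])  # 1 3
```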
Overview
Color histograms are flexible constructs that can be built from images in various color spaces, whether RGB, rg chromaticity or any other color space of any dimension. A histogram of an image is produced first by discretization of the colors in the image into a number of bins, and counting the number of image pixels in each bin. For example, a Red–Blue chromaticity histogram can be formed by first normalizing color pixel values by |
https://en.wikipedia.org/wiki/Self-oscillation | Self-oscillation is the generation and maintenance of a periodic motion by a source of power that lacks any corresponding periodicity. The oscillator itself controls the phase with which the external power acts on it. Self-oscillators are therefore distinct from forced and parametric resonators, in which the power that sustains the motion must be modulated externally.
In linear systems, self-oscillation appears as an instability associated with a negative damping term, which causes small perturbations to grow exponentially in amplitude. This negative damping is due to a positive feedback between the oscillation and the modulation of the external source of power. The amplitude and waveform of steady self-oscillations are determined by the nonlinear characteristics of the system.
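The Van der Pol oscillator is a standard illustration of both points: its damping term is negative for small |x| (the power source feeds the motion, so a tiny perturbation grows), while the nonlinearity fixes the final amplitude. A rough explicit-Euler sketch (step size, duration and initial seed are illustrative):

```python
# Van der Pol oscillator: x'' - mu*(1 - x^2)*x' + x = 0.
# For |x| < 1 the effective damping is negative, so a tiny perturbation grows
# exponentially; the nonlinear term then saturates the growth and the motion
# settles onto a limit cycle whose amplitude is set by the nonlinearity.
def van_der_pol_amplitude(mu=1.0, x0=0.01, dt=1e-3, t_end=60.0):
    x, v = x0, 0.0
    amp = 0.0
    steps = int(t_end / dt)
    for i in range(steps):
        a = mu * (1 - x * x) * v - x     # acceleration from the equation of motion
        x, v = x + v * dt, v + a * dt    # explicit Euler step
        if i > steps // 2:               # measure only after transients decay
            amp = max(amp, abs(x))
    return amp

print(van_der_pol_amplitude())  # settles near 2, independent of the tiny seed
```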
Self-oscillations are important in physics, engineering, biology, and economics.
History of the subject
The study of self-oscillators dates back to Robert Willis, George Biddell Airy, James Clerk Maxwell, and Lord Rayleigh in the 19th century. The term itself (also translated as "auto-oscillation") was coined by the Soviet physicist Aleksandr Andronov, who studied them in the context of the mathematical theory of the structural stability of dynamical systems. Other important work on the subject, both theoretical and experimental, was due to André Blondel, Balthasar van der Pol, Alfred-Marie Liénard, and Philippe Le Corbeiller in the 20th century.
The same phenomenon is sometimes labelled as "maintained", "sustained", "self-exciting", "self-induced", "spontaneous", or "autonomous" oscillation. Unwanted self-oscillations are known in the mechanical engineering literature as hunting, and in electronics as parasitic oscillations. Important early studied examples of self-oscillation include the centrifugal governor and railroad wheels.
Mathematical basis
Self-oscillation is manifested as a linear instability of a dynamical system's static equilibrium. Two mathematical tests that can be |
https://en.wikipedia.org/wiki/Geometric%20hashing | In computer science, geometric hashing is a method for efficiently finding two-dimensional objects represented by discrete points that have undergone an affine transformation, though extensions exist to other object representations and transformations. In an off-line step, the objects are encoded by treating each pair of points as a geometric basis. The remaining points can be represented in an invariant fashion with respect to this basis using two parameters. For each point, its quantized transformed coordinates are stored in the hash table as a key, and indices of the basis points as a value. Then a new pair of basis points is selected, and the process is repeated. In the on-line (recognition) step, randomly selected pairs of data points are considered as candidate bases. For each candidate basis, the remaining data points are encoded according to the basis and possible correspondences from the object are found in the previously constructed table. The candidate basis is accepted if a sufficiently large number of the data points index a consistent object basis.
Geometric hashing was originally suggested in computer vision for object recognition in 2D and 3D, but later was applied to different problems such as structural alignment of proteins.
Geometric hashing in computer vision
Geometric hashing is a method used for object recognition. Let's say that we want to check whether a model image can be seen in an input image. This can be accomplished with geometric hashing. The method can also be used to recognize one of several objects in a database; in that case the hash table should store not only the pose information but also the index of the object model in the database.
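A compressed sketch of the two phases, restricted to similarity transformations (all names are ad hoc; real systems quantize more carefully and index with local descriptors):

```python
from collections import defaultdict

def encode(p, b1, b2):
    """Coordinates of p in the frame where b1 -> 0 and b2 -> 1 (invariant
    under translation, rotation and scaling, via complex arithmetic)."""
    z, z1, z2 = complex(*p), complex(*b1), complex(*b2)
    u = (z - z1) / (z2 - z1)
    return round(u.real, 1), round(u.imag, 1)   # quantize into hash bins

def build_table(model):
    """Off-line phase: store every point against every ordered basis pair."""
    table = defaultdict(list)
    for i, b1 in enumerate(model):
        for j, b2 in enumerate(model):
            if i == j:
                continue
            for p in model:
                if p not in (b1, b2):
                    table[encode(p, b1, b2)].append((i, j))
    return table

def vote(table, scene, b1, b2):
    """On-line phase: encode scene points against a candidate basis and vote."""
    votes = defaultdict(int)
    for p in scene:
        if p in (b1, b2):
            continue
        for basis in table.get(encode(p, b1, b2), []):
            votes[basis] += 1
    return max(votes.items(), key=lambda kv: kv[1]) if votes else None

model = [(0, 0), (2, 0), (1, 2), (3, 1)]
# The same shape rotated 90 degrees, scaled by 2 and translated:
scene = [(-2 * y + 5, 2 * x + 1) for (x, y) in model]
table = build_table(model)
print(vote(table, scene, scene[0], scene[1]))   # ((0, 1), 2): model basis found
```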
Example
For simplicity, this example will not use too many point features and assume that their descriptors are given by their coordinates only (in practice local descriptors such as SIFT could be used for indexing).
Training Phase
Find the model's feature points. Assume that 5 feature points are found in |
https://en.wikipedia.org/wiki/Stiffness | Stiffness is the extent to which an object resists deformation in response to an applied force.
The complementary concept is flexibility or pliability: the more flexible an object is, the less stiff it is.
Calculations
The stiffness, k, of a body is a measure of the resistance offered by an elastic body to deformation. For an elastic body with a single degree of freedom (DOF) (for example, stretching or compression of a rod), the stiffness is defined as
k = F / δ
where,
F is the force on the body
δ is the displacement produced by the force along the same degree of freedom (for instance, the change in length of a stretched spring)
Stiffness is usually defined under quasi-static conditions, but sometimes under dynamic loading.
In the International System of Units, stiffness is typically measured in newtons per meter (N/m). In Imperial units, stiffness is typically measured in pounds (lbs) per inch.
Generally speaking, deflections (or motions) of an infinitesimal element (which is viewed as a point) in an elastic body can occur along multiple DOF (maximum of six DOF at a point). For example, a point on a horizontal beam can undergo both a vertical displacement and a rotation relative to its undeformed axis. When there are M degrees of freedom, an M × M matrix must be used to describe the stiffness at the point. The diagonal terms in the matrix are the direct-related stiffnesses (or simply stiffnesses) along the same degree of freedom and the off-diagonal terms are the coupling stiffnesses between two different degrees of freedom (either at the same or different points) or the same degree of freedom at two different points. In industry, the term influence coefficient is sometimes used to refer to the coupling stiffness.
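A small numerical sketch of such a stiffness matrix (a hypothetical 2-DOF chain of two springs; the stiffness and force values are illustrative), showing how the off-diagonal coupling terms make a force at one DOF deflect the other:

```python
import numpy as np

# Two springs in a chain: k1 grounds node 1, k2 connects node 1 to node 2.
k1, k2 = 1000.0, 500.0               # illustrative stiffnesses, N/m
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])       # off-diagonal entries are coupling stiffnesses

F = np.array([0.0, 10.0])            # 10 N applied at node 2 only
d = np.linalg.solve(K, F)            # displacements from F = K d
print(d)                             # node 1 moves too, via the coupling terms
```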
It is noted that for a body with multiple DOF, the equation above generally does not apply, since the applied force generates not only the deflection along its own direction (or degree of freedom) but also deflections along other directions.
For a body with multiple DOF, to |
https://en.wikipedia.org/wiki/Pseudoscalar | In linear algebra, a pseudoscalar is a quantity that behaves like a scalar, except that it changes sign under a parity inversion while a true scalar does not.
A pseudoscalar, when multiplied by an ordinary vector, becomes a pseudovector (or axial vector); a similar construction creates the pseudotensor.
A pseudoscalar also results from any scalar product between a pseudovector and an ordinary vector. The prototypical example of a pseudoscalar is the scalar triple product, which can be written as the scalar product between one of the vectors in the triple product and the cross product between the two other vectors, where the latter is a pseudovector.
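This sign behaviour can be demonstrated directly (a short sketch; the three vectors are arbitrary):

```python
import numpy as np

def triple(a, b, c):
    """Scalar triple product a . (b x c), a pseudoscalar."""
    return np.dot(a, np.cross(b, c))

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
c = np.array([0.0, 0.0, 1.0])
s = triple(a, b, c)
s_parity = triple(-a, -b, -c)   # parity inversion flips every vector
print(s, s_parity)              # 1.0 -1.0 : the product changes sign
```

The cross product of two flipped vectors is unchanged, so the dot with the third flipped vector negates the result, exactly the pseudoscalar behaviour described above.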
In physics
In physics, a pseudoscalar denotes a physical quantity analogous to a scalar. Both are physical quantities which assume a single value which is invariant under proper rotations. However, under the parity transformation, pseudoscalars flip their signs while scalars do not. As reflections through a plane are the combination of a rotation with the parity transformation, pseudoscalars also change signs under reflections.
Motivation
One of the most powerful ideas in physics is that physical laws do not change when one changes the coordinate system used to describe these laws. That a pseudoscalar reverses its sign when the coordinate axes are inverted suggests that it is not the best object to describe a physical quantity. In 3D-space, quantities described by a pseudovector are anti-symmetric tensors of order 2, which are invariant under inversion. The pseudovector may be a simpler representation of that quantity, but suffers from the change of sign under inversion. Similarly, in 3D-space, the Hodge dual of a scalar is equal to a constant times the 3-dimensional Levi-Civita pseudotensor (or "permutation" pseudotensor); whereas the Hodge dual of a pseudoscalar is an anti-symmetric (pure) tensor of order three. The Levi-Civita pseudotensor is a completely anti-symmetric pseudotensor of order 3. Since the dual of t |