source | text
|---|---|
https://en.wikipedia.org/wiki/Seagate%20Technology | Seagate Technology Holdings plc is an American data storage company. It was incorporated in 1978 as Shugart Technology and commenced business in 1979. Since 2010, the company has been incorporated in Dublin, Ireland, with operational headquarters in Fremont, California, United States.
Seagate developed the first 5.25-inch hard disk drive (HDD), the 5-megabyte ST-506, in 1980. They were a major supplier in the microcomputer market during the 1980s, especially after the introduction of the IBM XT in 1983. Much of their growth has come through their acquisition of competitors. In 1989, Seagate acquired Control Data Corporation's Imprimis division, the makers of CDC's HDD products. Seagate acquired Conner Peripherals in 1996, Maxtor in 2006, and Samsung's HDD business in 2011. Today, Seagate, along with its competitor Western Digital, dominates the HDD market.
History
Founding as Shugart Technology
Seagate Technology (then called Shugart Technology) was incorporated on November 1, 1978, and commenced operations with co-founders Al Shugart, Tom Mitchell, Doug Mahon, Finis Conner, and Syed Iftikar in October 1979. The company came into being when Conner approached Shugart with the idea of starting a new company to develop 5.25-inch HDDs, a product Conner predicted would ride a coming economic boom in the disk drive market. The name was changed to Seagate Technology to avoid a lawsuit from Xerox's subsidiary Shugart Associates (also founded by Shugart).
Early history and Tom Mitchell era
The company's first product, the ST-506, with a storage capacity of 5 megabytes (MB), was released in 1980. It was the first hard disk to fit the 5.25-inch form factor of the Shugart mini-floppy drive. It used Modified Frequency Modulation (MFM) encoding and was later released in a 10 MB version, the ST-412. With this, Seagate secured a contract as a major OEM supplier for the IBM XT, IBM's first personal computer to contain a hard disk. The large volumes of units sold to IBM fueled Seaga |
https://en.wikipedia.org/wiki/Pick%27s%20theorem | In geometry, Pick's theorem provides a formula for the area of a simple polygon with integer vertex coordinates, in terms of the number of integer points within it and on its boundary. The result was first described by Georg Alexander Pick in 1899. It was popularized in English by Hugo Steinhaus in the 1950 edition of his book Mathematical Snapshots. It has multiple proofs, and can be generalized to formulas for certain kinds of non-simple polygons.
Formula
Suppose that a polygon has integer coordinates for all of its vertices. Let i be the number of integer points interior to the polygon, and let b be the number of integer points on its boundary (including both vertices and points along the sides). Then the area A of this polygon is:
A = i + b/2 − 1
For example, a polygon with i = 7 interior points and b = 8 boundary points has area A = 7 + 8/2 − 1 = 10 square units.
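As an illustrative sketch (our own, not part of the article), the formula can be checked in Python: the shoelace formula gives the area directly, a gcd per edge counts the boundary points b, and Pick's theorem then recovers the interior count i. The function name and the rectangle example are hypothetical.

```python
from math import gcd

def pick_check(vertices):
    """Lattice polygon: area via shoelace, boundary count via per-edge gcd,
    interior count recovered from Pick's theorem: i = A - b/2 + 1."""
    edges = list(zip(vertices, vertices[1:] + vertices[:1]))
    twice_area = sum(x1 * y2 - x2 * y1 for (x1, y1), (x2, y2) in edges)
    area = abs(twice_area) / 2
    b = sum(gcd(abs(x2 - x1), abs(y2 - y1)) for (x1, y1), (x2, y2) in edges)
    i = area - b / 2 + 1
    return area, int(i), b

# A 3x2 lattice rectangle: area 6, with b = 10 boundary and i = 2 interior points.
print(pick_check([(0, 0), (3, 0), (3, 2), (0, 2)]))  # (6.0, 2, 10)
```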
Proofs
Via Euler's formula
One proof of this theorem involves subdividing the polygon into triangles with three integer vertices and no other integer points. One can then prove that each subdivided triangle has area exactly 1/2. Therefore, the area of the whole polygon equals half the number of triangles in the subdivision. After relating area to the number of triangles in this way, the proof concludes by using Euler's polyhedral formula to relate the number of triangles to the number of grid points in the polygon.
The first part of this proof shows that a triangle with three integer vertices and no other integer points has area exactly 1/2, as Pick's formula states. The proof uses the fact that all triangles tile the plane, with adjacent triangles rotated by 180° from each other around their shared edge. For tilings by a triangle with three integer vertices and no other integer points, each point of the integer grid is a vertex of six tiles. Because the number of triangles per grid point (six) is twice the number of grid points per triangle (three), the triangles are twice as dense in the plane as the grid points. Any scaled region of the plane con |
https://en.wikipedia.org/wiki/Palynology | Palynology is the study of microorganisms and microscopic fragments of mega-organisms that are composed of acid-resistant organic material and occur in sediments, sedimentary rocks, and even some metasedimentary rocks. Palynomorphs are the microscopic, acid-resistant organic remains and debris produced by a wide variety of plants, animals, and Protista that have existed since the late Proterozoic.
It is the science that studies contemporary and fossil palynomorphs (paleopalynology), including pollen, spores, orbicules, dinocysts, acritarchs, chitinozoans and scolecodonts, together with particulate organic matter (POM) and kerogen found in sedimentary rocks and sediments. Palynology does not include diatoms, foraminiferans or other organisms with siliceous or calcareous tests. The name of the science and of the organisms it studies is derived from the Greek palynō, "strew, sprinkle", and -logy — that is, the study of "particles that are strewn".
Palynology is an interdisciplinary science that stands at the intersection of earth science (geology or geological science) and biological science (biology), particularly plant science (botany). In biostratigraphy, a branch of paleontology and paleobotany, it involves the study of fossil palynomorphs from the Precambrian to the Holocene for their usefulness in the relative dating and correlation of sedimentary strata. Palynology is also used to date and understand the evolution of many kinds of plants and animals. In paleoclimatology, fossil palynomorphs are studied for their usefulness in understanding ancient Earth history in terms of reconstructing paleoenvironments and paleoclimates.
Palynology is quite useful in disciplines such as archaeology, honey production, and criminal and civil law. In archaeology, palynology is widely used to reconstruct ancient paleoenvironments and the environmental shifts that significantly influenced past human societies, and to reconstruct the diet of prehistoric and historic humans. Melissopalynology, the study of pollen and other palynomorphs in h |
https://en.wikipedia.org/wiki/Anastomosis | An anastomosis (, : anastomoses) is a connection or opening between two things (especially cavities or passages) that are normally diverging or branching, such as between blood vessels, leaf veins, or streams. Such a connection may be normal (such as the foramen ovale in a fetus' heart) or abnormal (such as the patent foramen ovale in an adult's heart); it may be acquired (such as an arteriovenous fistula) or innate (such as the arteriovenous shunt of a metarteriole); and it may be natural (such as the aforementioned examples) or artificial (such as a surgical anastomosis). The reestablishment of an anastomosis that had become blocked is called a reanastomosis. Anastomoses that are abnormal, whether congenital or acquired, are often called fistulas.
The term is used in medicine, biology, mycology, geology, and geography.
Etymology
Anastomosis: medical or Modern Latin, from Greek ἀναστόμωσις (anastomōsis), "outlet, opening", from ana- "up, on, upon" and stoma "mouth" — literally, "to furnish with a mouth". Thus the -stom- syllable is cognate with that of stoma in botany or stoma in medicine.
Medical anatomy
An anastomosis is the connection of two normally divergent structures. It refers to connections between blood vessels or between other tubular structures such as loops of intestine.
Circulatory
In circulatory anastomoses, many arteries naturally anastomose with each other; for example, the inferior epigastric artery and superior epigastric artery, or the anterior and/or posterior communicating arteries in the Circle of Willis in the brain. The circulatory anastomosis is further divided into arterial and venous anastomosis. Arterial anastomosis includes actual arterial anastomosis (e.g., palmar arch, plantar arch) and potential arterial anastomosis (e.g. coronary arteries and cortical branch of cerebral arteries). Anastomoses also form alternative routes around capillary beds in areas that do not need a large blood supply, thus helping regulate systemic blood flow.
Surgical
Su |
https://en.wikipedia.org/wiki/Annihilator%20%28ring%20theory%29 | In mathematics, the annihilator of a subset S of a module over a ring is the ideal formed by the elements of the ring that always give zero when multiplied by each element of S.
Over an integral domain, a module that has a nonzero annihilator is a torsion module, and a finitely generated torsion module has a nonzero annihilator.
The above definition also applies in the case of noncommutative rings, where the left annihilator of a left module is a left ideal, and the right annihilator of a right module is a right ideal.
Definitions
Let R be a ring, and let M be a left R-module. Choose a non-empty subset S of M. The annihilator of S, denoted AnnR(S), is the set of all elements r in R such that, for all s in S, rs = 0. In set notation,
AnnR(S) = {r ∈ R : rs = 0 for all s ∈ S}.
It is the set of all elements of R that "annihilate" S (the elements for which S is a torsion set). Subsets of right modules may be used as well, after the modification of "rs = 0" to "sr = 0" in the definition.
The annihilator of a single element x is usually written AnnR(x) instead of AnnR({x}). If the ring R can be understood from the context, the subscript R can be omitted.
Since R is a module over itself, S may be taken to be a subset of R itself, and since R is both a right and a left R module, the notation must be modified slightly to indicate the left or right side. Usually ℓ.AnnR(S) and r.AnnR(S) or some similar subscript scheme are used to distinguish the left and right annihilators, if necessary.
If M is an R-module and AnnR(M) = 0, then M is called a faithful module.
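As a concrete sketch (ours, not from the article), consider Z/nZ as a module over the integers: the annihilator of a subset S is the ideal dZ generated by the least positive d killing every element of S. The helper name below is hypothetical.

```python
from math import gcd

def ann_generator(n, S):
    """Generator d of Ann_Z(S) = dZ for a subset S of the Z-module Z/nZ.
    A single class s is killed exactly by the multiples of n // gcd(n, s)."""
    d = 1
    for s in S:
        s %= n
        ds = n // gcd(n, s) if s else 1   # the zero class is killed by all of Z
        d = d * ds // gcd(d, ds)          # lcm of the per-element annihilators
    return d

print(ann_generator(12, [4]))     # 3, since Ann({4}) = 3Z in Z/12Z
print(ann_generator(12, [4, 6]))  # 6; and Ann(Z/12Z) = 12Z != 0, so Z/12Z is not faithful
```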
Properties
If S is a subset of a left R module M, then Ann(S) is a left ideal of R.
If S is a submodule of M, then AnnR(S) is even a two-sided ideal: (ac)s = a(cs) = 0, since cs is another element of S.
If S is a subset of M and N is the submodule of M generated by S, then in general AnnR(N) is a subset of AnnR(S), but they are not necessarily equal. If R is commutative, then the equality holds.
M may also be viewed as an R/AnnR(M)-module using the action (r + AnnR(M))·m := rm. Incidentally, it is not always p |
https://en.wikipedia.org/wiki/Tierra%20%28computer%20simulation%29 | Tierra is a computer simulation developed by ecologist Thomas S. Ray in the early 1990s in which computer programs compete for time (central processing unit (CPU) time) and space (access to main memory). In this context, the computer programs in Tierra are considered to be evolvable and can mutate, self-replicate and recombine. Tierra's virtual machine is written in C. It operates on a custom instruction set designed to facilitate code changes and reordering, including features such as jump to template (as opposed to the relative or absolute jumps common to most instruction sets).
Simulations
The basic Tierra model has been used to experimentally explore in silico the basic processes of evolutionary and ecological dynamics. Processes such as the dynamics of punctuated equilibrium, host-parasite co-evolution and density-dependent natural selection are amenable to investigation within the Tierra framework. A notable difference between Tierra and more conventional models of evolutionary computation, such as genetic algorithms, is that there is no explicit, or exogenous fitness function built into the model. Often in such models there is the notion of a function being "optimized"; in the case of Tierra, the fitness function is endogenous: there is simply survival and death.
According to Thomas S. Ray and others, this may allow for more "open-ended" evolution, in which the dynamics of the feedback between evolutionary and ecological processes can itself change over time (see evolvability), although this claim has not been realized – like other digital evolution systems, it eventually reaches a point where novelty ceases to be created, and the system at large begins either looping or ceases to 'evolve'. The issue of how true open-ended evolution can be implemented in an artificial system is still an open question in the field of artificial life.
Mark Bedau and Norman Packard developed a statistical method of classifying evolutionary systems and in 1997, Bedau et al. |
https://en.wikipedia.org/wiki/Emblem | An emblem is an abstract or representational pictorial image that represents a concept, like a moral truth, or an allegory, or a person, like a monarch or saint.
Emblems vs. symbols
Although the words emblem and symbol are often used interchangeably, an emblem is a pattern that is used to represent an idea or an individual. An emblem develops in concrete, visual terms some abstraction: a deity, a tribe or nation, or a virtue or vice.
An emblem may be worn or otherwise used as an identifying badge or patch. For example, in America, police officers' badges refer to their personal metal emblem whereas their woven emblems on uniforms identify members of a particular unit. A real or metal cockle shell, the emblem of James the Great, sewn onto the hat or clothes, identified a medieval pilgrim to his shrine at Santiago de Compostela. In the Middle Ages, many saints were given emblems, which served to identify them in paintings and other images: St. Catherine of Alexandria had a wheel, or a sword, St. Anthony the Abbot, a pig and a small bell. These are also called attributes, especially when shown carried by or close to the saint in art. Monarchs and other grand persons increasingly adopted personal devices or emblems that were distinct from their family heraldry. The most famous include Louis XIV of France's sun, the salamander of Francis I of France, the boar of Richard III of England and the armillary sphere of Manuel I of Portugal. In the fifteenth and sixteenth centuries, there was a fashion, started in Italy, for making large medals with a portrait head on the obverse and the emblem on the reverse; these would be given to friends and as diplomatic gifts. Pisanello produced many of the earliest and finest of these.
A symbol, on the other hand, substitutes one thing for another, in a more concrete fashion:
The Christian cross is a symbol of the crucifixion of Jesus; it is an emblem of sacrifice.
The Red Cross is one of three symbols representing the International R |
https://en.wikipedia.org/wiki/Computer%20scientist | A computer scientist is a scholar who specializes in the academic study of computer science.
Computer scientists typically work on the theoretical side of computation. Although computer scientists can also focus their work and research on specific areas (such as algorithm and data structure development and design, software engineering, information theory, database theory, theoretical computer science, numerical analysis, programming language theory, compiler, computer graphics, computer vision, robotics, computer architecture, operating system), their foundation is the theoretical study of computing from which these other fields derive.
A primary goal of computer scientists is to develop or validate models, often mathematical, to describe the properties of computational systems (processors, programs, computers interacting with people, computers interacting with other computers, etc.) with an overall objective of discovering designs that yield useful benefits (faster, smaller, cheaper, more precise, etc.).
Education
Most computer scientist positions require a PhD, an M.S., or a bachelor's degree in computer science, in a similar field such as information and computer science (CIS), or in a closely related discipline such as mathematics or physics.
Areas of specialization
Theoretical computer science – including data structures and algorithms, theory of computation, information theory and coding theory, programming language theory, and formal methods
Computer systems – including computer architecture and computer engineering, computer performance analysis, concurrency, and distributed computing, computer networks, computer security and cryptography, and databases.
Computer applications – including computer graphics and visualization, human–computer interaction, scientific computing, and artificial intelligence.
Software engineering – the application of engineering to software development in a systematic method
Employment
Computer scientists are often hired by sof |
https://en.wikipedia.org/wiki/MCM/70 | The MCM/70 was a pioneering microcomputer first built in 1973 in Toronto, Ontario, Canada and released the next year. This makes it one of the first microcomputers in the world, the second to be shipped in completed form, and the first portable computer. The MCM/70 was the product of Micro Computer Machines, one of three related companies set up in Toronto in 1971 by Mers Kutt. It is considered by some historians to be the first usable personal microcomputer system.
Early history
Kutt, a professor of mathematics at Queen's University in Kingston, Ontario during the late 1960s, noted that the efficiency of computer users there was hampered by the long wait times involved in submitting programs in punched card form for batch processing by a shared mainframe computer. In 1968, Kutt and Donald Pamenter started a firm, Consolidated Computer Inc., and began to produce a data-entry device named Key-Edit. This was a low-cost terminal, with a one-line display device, which bypassed the need for keypunching.
In 1971, Kutt, no longer part of CCI, began planning a machine to support software development in the recently developed programming language APL. APL was best programmed using a custom keyboard and these were very rare at the time. He initially named his design the Key-Cassette; similar in design and concept to Key-Edit, it would offer editing ability and support for either two cassette decks or one cassette and an acoustic coupler to upload programs to other machines.
The original design resembled a desktop electronic calculator. Kutt's notes of the era showed his intent to use the cover and display from an extant calculator with a modified power supply, to include a small keyboard with 32 keys, and a display made of either 13 or 15 segmented LEDs. Kutt also created a company, Micro Computer Machines, which would later manufacture the devices.
Development
Through his acquaintance with Intel founder Robert Noyce, Kutt had been following Intel's work on the 1201, a |
https://en.wikipedia.org/wiki/Danny%20Hillis | William Daniel Hillis (born September 25, 1956) is an American inventor, entrepreneur, and computer scientist, who pioneered parallel computers and their use in artificial intelligence. He founded Thinking Machines Corporation, a parallel supercomputer manufacturer, and subsequently was Vice President of Research and Disney Fellow at Walt Disney Imagineering.
Hillis was elected a member of the National Academy of Engineering in 2001 for advances in parallel computers, parallel software, and parallel storage.
More recently, Hillis co-founded Applied Minds, and Applied Invention, an interdisciplinary group of engineers, scientists, and artists. He is a visiting professor at the MIT Media Lab.
Biography
Early life and academic work
Born September 25, 1956, in Baltimore, Maryland, Danny Hillis spent much of his childhood living overseas, in Europe, Africa, and Asia.
He attended the Massachusetts Institute of Technology (MIT) and received his bachelor of science in mathematics in 1978. As an undergraduate, he worked at the MIT Logo Laboratory under the tutelage of Seymour Papert, developing computer hardware and software for children. During this time, he also designed computer-oriented toys and games for the Milton Bradley Company. While still in college, he co-founded Terrapin Inc., a producer of computer software, including Logo, for elementary schools.
As a graduate student at the MIT Computer Science and Artificial Intelligence Laboratory, Hillis designed tendon-controlled robot arms and a touch-sensitive robot "skin".
During his college years, Hillis was part of the team that built a computer composed entirely of Tinkertoys, currently at the Computer History Museum in Mountain View, California.
At MIT, Hillis began to study Artificial Intelligence under Marvin Minsky. In 1981, he proposed building a massively parallel computer for Artificial Intelligence, consisting of a million processors, each similar to a modern Graphics Processing Unit. This work culmi |
https://en.wikipedia.org/wiki/Edge%20connector | An edge connector is the portion of a printed circuit board (PCB) consisting of traces leading to the edge of the board that are intended to plug into a matching socket. The edge connector is a money-saving device because it only requires a single discrete female connector (the male connector is formed out of the edge of the PCB), and they also tend to be fairly robust and durable. They are commonly used in computers for expansion slots for peripheral cards, such as PCI, PCI Express, and AGP cards.
Socket design
Edge connector sockets consist of a plastic "box" open on one side, with pins on one or both side(s) of the longer edges, sprung to push into the middle of the open center. Connectors are often keyed to ensure the correct polarity, and may contain bumps or notches both for polarity and to ensure that the wrong type of device is not inserted. The socket's width is chosen to fit to the thickness of the connecting PCB.
The opposite side of the socket is often an insulation-piercing connector which is clamped onto a ribbon cable. Alternatively, the other side may be soldered to a motherboard or daughtercard.
Uses
Edge connectors are commonly used in personal computers for connecting expansion cards and computer memory to the system bus. Example expansion peripheral technologies which use edge connectors include PCI, PCI Express, and AGP. Slot 1 and Slot A also used edge connectors, with the processor mounted on a card with an edge connector instead of directly on the motherboard as before and since.
IBM PCs used edge connector sockets attached to ribbon cables to connect 5.25" floppy disk drives. 3.5" drives use a pin connector instead.
Video game cartridges typically take the form of a PCB with an edge connector: the socket is located within the console itself. The Nintendo Entertainment System was unusual in that it was designed to use a zero insertion force edge connector: instead of the user forcing the cartridge into the socket directly, the cartr |
https://en.wikipedia.org/wiki/Secret%20decoder%20ring | A secret decoder ring (or secret decoder) is a device that allows one to decode a simple substitution cipher—or to encrypt a message by working in the opposite direction.
As inexpensive toys, secret decoders have often been used as promotional items by retailers, as well as radio and television programs, from the 1930s through to the current day.
Decoders, whether badges or rings, are an entertaining way for children to tap into a common fascination with encryption, ciphers, and secret codes, and are used to send hidden messages back and forth to one another.
History
Secret decoders are generally circular scales, descendants of the cipher disk developed in the 15th century by Leon Battista Alberti. Rather than the complex polyalphabetic Alberti cipher method, the decoders for children invariably use simple Caesar cipher substitutions.
The most well-known example started in 1934 with the Ovaltine company's sponsored radio program Little Orphan Annie. The show's fan club, "Radio Orphan Annie's Secret Society", distributed a member's handbook that included a simple substitution cipher with a resulting numeric cipher text. This was followed the next year with a membership pin that included a cipher disk—enciphering the letters A–Z to numbers 1–26. From 1935 to 1940, metal decoders were produced for the promotion. From 1941 on, paper decoders were produced. Similar metal badges and pocket decoders continued with the Captain Midnight radio and television programs.
None of these early decoders were in the form of finger rings; however, "secret compartment" rings were common radio program premiums. In the early 1960s, secret decoder rings appeared—notably in conjunction with the Jonny Quest television program sponsored by PF Shoes. A later, less ornate, decoder ring was offered by Kix Cereals.
Today, high quality, stainless steel decoder rings for children and adults are being produced by companies such as Retroworks and ThinkGeek.
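As a toy sketch (ours, not a historical artifact), the scheme described above — the letters A–Z enciphered to the numbers 1–26, offset by a Caesar-style key set on the disk — can be written in a few lines; the function names and key value are our own.

```python
import string

def encode(message, key):
    """Cipher-disk-style substitution: each letter A-Z becomes a number 1-26,
    shifted by the disk setting `key` (key = 0 is the plain A=1 ... Z=26 pin)."""
    table = {c: (i + key) % 26 + 1 for i, c in enumerate(string.ascii_uppercase)}
    return [table[c] for c in message.upper() if c in table]

def decode(numbers, key):
    return "".join(chr((n - 1 - key) % 26 + ord("A")) for n in numbers)

nums = encode("Drink your Ovaltine", key=3)
print(nums)                 # [7, 21, 12, 17, 14, ...]
print(decode(nums, key=3))  # DRINKYOUROVALTINE
```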
Messages
Ovaltine and other companie |
https://en.wikipedia.org/wiki/Solid%20of%20revolution | In geometry, a solid of revolution is a solid figure obtained by rotating a plane figure around some straight line (the axis of revolution), which may not intersect the generatrix (except at its boundary). The surface created by this revolution and which bounds the solid is the surface of revolution.
Assuming that the curve does not cross the axis, the solid's volume is equal to the length of the circle described by the figure's centroid multiplied by the figure's area (Pappus's second centroid theorem).
A representative disc is a three-dimensional volume element of a solid of revolution. The element is created by rotating a line segment of length w around some axis located r units away, so that a cylindrical volume of πr²w units is enclosed.
Finding the volume
Two common methods for finding the volume of a solid of revolution are the disc method and the shell method of integration. To apply these methods, it is easiest to draw the graph in question; identify the area that is to be revolved about the axis of revolution; determine the volume of either a disc-shaped slice of the solid, with thickness δx, or a cylindrical shell of width δx; and then find the limiting sum of these volumes as δx approaches 0, a value which may be found by evaluating a suitable integral. A more rigorous justification can be given by attempting to evaluate a triple integral in cylindrical coordinates with two different orders of integration.
Disc method
The disc method is used when the slice that was drawn is perpendicular to the axis of revolution; i.e. when integrating parallel to the axis of revolution.
The volume of the solid formed by rotating the area between the curves of y = f(x) and y = g(x) and the lines x = a and x = b about the x-axis is given by
V = π ∫_a^b |f(x)² − g(x)²| dx.
If g(x) = 0 (e.g. revolving an area between the curve and the x-axis), this reduces to:
V = π ∫_a^b f(x)² dx.
The method can be visualized by considering a thin vertical rectangle at x between y = f(x) on top and y = g(x) on the bottom, and revolving it about the x-axis; it forms a ring (or disc in the case |
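A numerical sketch (our own, not from the article) of the disc method: approximate π ∫ f(x)² dx with a trapezoidal sum and check it on the half-circle y = √(1 − x²), whose revolution about the x-axis is the unit sphere of volume 4π/3. The function name is hypothetical.

```python
import numpy as np

def disc_volume(f, a, b, n=100_000):
    """Disc method: V = pi * integral of f(x)^2 over [a, b] (trapezoidal rule)."""
    x = np.linspace(a, b, n + 1)
    y = f(x) ** 2
    dx = (b - a) / n
    return np.pi * dx * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

v = disc_volume(lambda x: np.sqrt(np.clip(1 - x**2, 0.0, None)), -1.0, 1.0)
print(v, 4 * np.pi / 3)  # both ~ 4.18879
```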
https://en.wikipedia.org/wiki/Disk%20%28mathematics%29 | In geometry, a disk (also spelled disc) is the region in a plane bounded by a circle. A disk is said to be closed if it contains the circle that constitutes its boundary, and open if it does not.
For a radius r, an open disk is usually denoted D_r and a closed disk D̄_r. However, in the field of topology the closed disk is usually denoted D² while the open disk is int D².
Formulas
In Cartesian coordinates, the open disk of center (a, b) and radius R is given by the formula:
D = {(x, y) ∈ ℝ² : (x − a)² + (y − b)² < R²},
while the closed disk of the same center and radius is given by:
D̄ = {(x, y) ∈ ℝ² : (x − a)² + (y − b)² ≤ R²}.
The area of a closed or open disk of radius R is πR² (see area of a disk).
Properties
The disk has circular symmetry.
The open disk and the closed disk are not topologically equivalent (that is, they are not homeomorphic), as they have different topological properties from each other. For instance, every closed disk is compact whereas every open disk is not compact. However from the viewpoint of algebraic topology they share many properties: both of them are contractible and so are homotopy equivalent to a single point. This implies that their fundamental groups are trivial, and all homology groups are trivial except the 0th one, which is isomorphic to Z. The Euler characteristic of a point (and therefore also that of a closed or open disk) is 1.
Every continuous map from the closed disk to itself has at least one fixed point (we don't require the map to be bijective or even surjective); this is the case n=2 of the Brouwer fixed point theorem. The statement is false for the open disk:
Consider for example the function
f(x, y) = ((x + √(1 − y²))/2, y),
which maps every point of the open unit disk to another point of the open unit disk to the right of the given one. But for the closed unit disk it fixes every point on the half circle x² + y² = 1, x ≥ 0.
As a statistical distribution
A uniform distribution on a unit circular disk is occasionally encountered in statistics. It most commonly occurs in operations research in the mathematics of urban planning, where it may be used to model a pop |
https://en.wikipedia.org/wiki/Surface%20of%20revolution | A surface of revolution is a surface in Euclidean space created by rotating a curve (the generatrix) one full revolution around an axis of rotation (normally not intersecting the generatrix, except at its endpoints).
The volume bounded by the surface created by this revolution is the solid of revolution.
Examples of surfaces of revolution generated by a straight line are cylindrical and conical surfaces depending on whether or not the line is parallel to the axis. A circle that is rotated around any diameter generates a sphere of which it is then a great circle, and if the circle is rotated around an axis that does not intersect the interior of the circle, then it generates a torus which does not intersect itself (a ring torus).
Properties
The sections of the surface of revolution made by planes through the axis are called meridional sections. Any meridional section can be considered to be the generatrix in the plane determined by it and the axis.
The sections of the surface of revolution made by planes that are perpendicular to the axis are circles.
Some special cases of hyperboloids (of either one or two sheets) and elliptic paraboloids are surfaces of revolution. These may be identified as those quadratic surfaces all of whose cross sections perpendicular to the axis are circular.
Area formula
If the curve is described by the parametric functions x(t), y(t), with t ranging over some interval [a, b], and the axis of revolution is the y-axis, then the surface area A_y is given by the integral
A_y = 2π ∫_a^b x(t) √((dx/dt)² + (dy/dt)²) dt,
provided that x(t) is never negative between the endpoints a and b. This formula is the calculus equivalent of Pappus's centroid theorem. The quantity
√((dx/dt)² + (dy/dt)²) dt
comes from the Pythagorean theorem and represents a small segment of the arc of the curve, as in the arc length formula. The quantity 2πx(t) is the path of (the centroid of) this small segment, as required by Pappus' theorem.
Likewise, when the axis of rotation is the x-axis and provided that y(t) is never negative, the area is given by
A_x = 2π ∫_a^b y(t) √((dx/dt)² + (dy/dt)²) dt.
If the continuous cu |
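As a numerical sketch (our own), the parametric area formula can be checked on the unit sphere: x(t) = sin t, y(t) = cos t for t in [0, π], revolved about the y-axis, should give 4π. The helper name is hypothetical; derivatives come from finite differences.

```python
import numpy as np

def area_about_y_axis(x, y, t):
    """A_y = 2*pi * integral of x(t) * sqrt(x'(t)^2 + y'(t)^2) dt,
    with finite-difference derivatives and a trapezoidal sum."""
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    g = 2 * np.pi * x * np.sqrt(dx**2 + dy**2)
    dt = t[1] - t[0]
    return dt * (g[0] / 2 + g[1:-1].sum() + g[-1] / 2)

t = np.linspace(0.0, np.pi, 200_001)
print(area_about_y_axis(np.sin(t), np.cos(t), t), 4 * np.pi)  # both ~ 12.566
```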
https://en.wikipedia.org/wiki/Vertical%20blanking%20interval | In a raster scan display, the vertical blanking interval (VBI), also known as the vertical interval or VBLANK, is the time between the end of the final visible line of a frame or field and the beginning of the first visible line of the next frame or field. It is present in analog television, VGA, DVI and other signals.
In raster cathode ray tube displays, the blank level is usually supplied during this period to avoid painting the retrace line — see raster scan for details; signal sources such as television broadcasts do not supply image information during the blanking period. Digital displays usually will not display incoming data stream during the blanking interval even if present.
The VBI was originally needed because of the inductive inertia of the magnetic coils which deflect the electron beam vertically in a CRT; the magnetic field, and hence the position being drawn, cannot change instantly. Additionally, the speed of older circuits was limited. For horizontal deflection, there is also a pause between successive lines, to allow the beam to return from right to left, called the horizontal blanking interval. Modern CRT circuitry does not require such a long blanking interval, and thin panel displays require none, but the standards were established when the delay was needed (and to allow the continued use of older equipment). Blanking of a CRT may not be perfect due to equipment faults or brightness set very high; in this case a white retrace line shows on the screen, often alternating between fairly steep diagonals from right to left and less-steep diagonals back from left to right, starting in the lower right of the display.
In analog television systems the vertical blanking interval can be used for datacasting (to carry digital data), since nothing sent during the VBI is displayed on the screen; various test signals, VITC timecode, closed captioning, teletext, CGMS-A copy-protection indicators, and various data encoded by the XDS protocol (e.g., the conten |
https://en.wikipedia.org/wiki/Taijitu | In Chinese philosophy, a taijitu () is a symbol or diagram () representing Taiji () in both its monist (wuji) and its dualist (yin and yang) aspects, in application a deductive and inductive theoretical model. Such a diagram was first introduced by Neo-Confucian philosopher Zhou Dunyi (; 1017–1073) of the Song Dynasty in his Taijitu shuo ().
The Daozang, a Taoist canon compiled during the Ming era, has at least half a dozen variants of the taijitu. The two most similar are the Taiji Xiantiandao and the diagrams, both of which have been extensively studied during the Qing period for their possible connection with Zhou Dunyi's taijitu.
Ming period author Lai Zhide (1525–1604) simplified the taijitu to a design of two interlocking spirals with two black-and-white dots superimposed on them, a design that became synonymous with the Yellow River Map. This version was represented in Western literature and popular culture in the late 19th century as the "Great Monad", and the depiction has been known as the "yin-yang symbol" since the 1960s. The contemporary Chinese term for the modern symbol is "the two-part Taiji diagram".
Ornamental patterns with visual similarity to the "yin yang symbol" are found in archaeological artefacts of European prehistory; such designs are sometimes descriptively dubbed "yin yang symbols" in archaeological literature by modern scholars.
Structure
The taijitu consists of five parts. Strictly speaking, the "yin and yang symbol", itself popularly called taijitu, represents the second of these five parts of the diagram.
At the top, an empty circle depicts the absolute (wuji). According to Zhou, wuji is also a synonym for taiji.
A second circle represents the Taiji as harboring Dualism, yin and yang, represented by filling the circle in a black-and-white pattern. In some diagrams, there is a smaller empty circle at the center of this, representing Emptiness as the foundation of duality.
Below this second circle is a five-part diagram representing the |
https://en.wikipedia.org/wiki/Stationary%20process | In mathematics and statistics, a stationary process (or a strict/strictly stationary process or strong/strongly stationary process) is a stochastic process whose unconditional joint probability distribution does not change when shifted in time. Consequently, parameters such as mean and variance also do not change over time. If you draw a line through the middle of a stationary process then it should be flat; it may have 'seasonal' cycles around the trend line, but overall it does not trend up or down.
Since stationarity is an assumption underlying many statistical procedures used in time series analysis, non-stationary data are often transformed to become stationary. The most common cause of violation of stationarity is a trend in the mean, which can be due either to the presence of a unit root or of a deterministic trend. In the former case of a unit root, stochastic shocks have permanent effects, and the process is not mean-reverting. In the latter case of a deterministic trend, the process is called a trend-stationary process, and stochastic shocks have only transitory effects after which the variable tends toward a deterministically evolving (non-constant) mean.
A trend stationary process is not strictly stationary, but can easily be transformed into a stationary process by removing the underlying trend, which is solely a function of time. Similarly, processes with one or more unit roots can be made stationary through differencing. An important type of non-stationary process that does not include a trend-like behavior is a cyclostationary process, which is a stochastic process that varies cyclically with time.
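A small sketch (ours, not from the article) of differencing a unit-root process: a random walk wanders without bound, but its first differences are just the white-noise shocks, which are stationary.

```python
import random

random.seed(0)
shocks = [random.gauss(0, 1) for _ in range(10_000)]

walk = [0.0]
for e in shocks:
    walk.append(walk[-1] + e)          # unit root: every shock is permanent

diffs = [b - a for a, b in zip(walk, walk[1:])]  # first difference recovers the shocks

# The level wanders far from 0; the differenced series has a stable mean near 0.
print(round(walk[-1], 2), round(sum(diffs) / len(diffs), 4))
```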
For many applications strict-sense stationarity is too restrictive. Other forms of stationarity such as wide-sense stationarity or N-th-order stationarity are then employed. The definitions for different kinds of stationarity are not consistent among different authors (see Other terminology).
Strict-sense stationarity
Definition
Formally, let {Xₜ} be a |
https://en.wikipedia.org/wiki/Supercentenarian | A supercentenarian (sometimes hyphenated as super-centenarian) is a person who has reached the age of 110 years. This age is achieved by about one in 1,000 centenarians. Supercentenarians typically live a life free of major age-related diseases until shortly before the maximum human lifespan is reached.
Etymology
The term "Supercentenarian", originally hyphenated as Super-centenarian, has existed since 1870.
Norris McWhirter, editor of Guinness World Records, used the term in association with age-claims researcher A. Ross Eckler Jr. in 1976, and the term was further popularised in 1991 by William Strauss and Neil Howe in their book Generations.
The term "semisupercentenarian", has been used to describe someone from 105-109. Originally the term "supercentenarian" was used to mean someone well over the age of 100, but 110 years and over became the cutoff point of accepted criteria for demographers.
Incidence
The Gerontology Research Group maintains a top 30–40 list of oldest verified living people. The researchers estimate, based on a 0.15% to 0.25% survival rate of centenarians until the age of 110, that there should be between 300 and 450 living supercentenarians in the world. A study conducted in 2010 by the Max Planck Institute for Demographic Research found 663 validated supercentenarians, living and dead, and showed that the countries with the highest total number (not frequency) of supercentenarians (in decreasing order) were the United States, Japan, England plus Wales, France, and Italy. The first verified supercentenarian in human history was Dutchman Geert Adriaans Boomgaard (1788–1899), and it was not until the 1980s that the oldest verified age surpassed 115.
History
While claims of extreme age have persisted from the earliest times in history, the earliest supercentenarian accepted by Guinness World Records is Dutchman Thomas Peters (reportedly 1745–1857). Other scholars, such as French demographer Jean-Marie Robine, consider Geert Adriaans Boom |
https://en.wikipedia.org/wiki/Discretization | In applied mathematics, discretization is the process of transferring continuous functions, models, variables, and equations into discrete counterparts. This process is usually carried out as a first step toward making them suitable for numerical evaluation and implementation on digital computers. Dichotomization is the special case of discretization in which the number of discrete classes is 2, which can approximate a continuous variable as a binary variable (creating a dichotomy for modeling purposes, as in binary classification).
Discretization is also related to discrete mathematics, and is an important component of granular computing. In this context, discretization may also refer to modification of variable or category granularity, as when multiple discrete variables are aggregated or multiple discrete categories fused.
Whenever continuous data is discretized, there is always some amount of discretization error. The goal is to reduce the amount to a level considered negligible for the modeling purposes at hand.
The terms discretization and quantization often have the same denotation but not always identical connotations. (Specifically, the two terms share a semantic field.) The same is true of discretization error and quantization error.
Mathematical methods relating to discretization include the Euler–Maruyama method and the zero-order hold.
Discretization of linear state space models
Discretization is also concerned with the transformation of continuous differential equations into discrete difference equations, suitable for numerical computing.
The following continuous-time state space model
ẋ(t) = A x(t) + B u(t) + w(t)
y(t) = C x(t) + D u(t) + v(t),
where v and w are continuous zero-mean white noise sources with power spectral densities
w(t) ~ N(0, Q), v(t) ~ N(0, R),
can be discretized, assuming zero-order hold for the input u and continuous integration for the noise v, to
x[k+1] = A_d x[k] + B_d u[k] + w[k]
y[k] = C_d x[k] + D_d u[k] + v[k],
with covariances
w[k] ~ N(0, Q_d), v[k] ~ N(0, R_d),
where
A_d = e^(AT), B_d = (∫_0^T e^(Aτ) dτ) B = A⁻¹(A_d − I)B, if A is nonsingular,
C_d = C, D_d = D, Q_d = ∫_0^T e^(Aτ) Q e^(Aᵀτ) dτ,
and T is the sample time, while Aᵀ is the transposed matrix of A. The equation for the discretized measurement nois |
https://en.wikipedia.org/wiki/Toom%E2%80%93Cook%20multiplication | Toom–Cook, sometimes known as Toom-3, is a multiplication algorithm for large integers, named after Andrei Toom, who introduced the new algorithm with its low complexity, and Stephen Cook, who cleaned up its description.
Given two large integers, a and b, Toom–Cook splits up a and b into k smaller parts each of length l, and performs operations on the parts. As k grows, one may combine many of the multiplication sub-operations, thus reducing the overall computational complexity of the algorithm. The multiplication sub-operations can then be computed recursively using Toom–Cook multiplication again, and so on. Although the terms "Toom-3" and "Toom–Cook" are sometimes incorrectly used interchangeably, Toom-3 is only a single instance of the Toom–Cook algorithm, where k = 3.
Toom-3 reduces 9 multiplications to 5, and runs in Θ(n^(log 5/log 3)) ≈ Θ(n^1.46). In general, Toom-k runs in Θ(c(k) n^e), where e = log(2k − 1)/log(k), n^e is the time spent on sub-multiplications, and c is the time spent on additions and multiplications by small constants. The Karatsuba algorithm is equivalent to Toom-2, where the number is split into two smaller ones. It reduces 4 multiplications to 3 and so operates at Θ(n^(log 3/log 2)) ≈ Θ(n^1.58). Ordinary long multiplication is equivalent to Toom-1, with complexity Θ(n²).
Although the exponent e can be set arbitrarily close to 1 by increasing k, the function c grows very rapidly. The growth rate for mixed-level Toom–Cook schemes was still an open research problem in 2005. An implementation described by Donald Knuth achieves the time complexity Θ(n 2^√(2 log n) log n).
Due to its overhead, Toom–Cook is slower than long multiplication with small numbers, and it is therefore typically used for intermediate-size multiplications, before the asymptotically faster Schönhage–Strassen algorithm (with complexity Θ(n log n log log n)) becomes practical.
Toom first described this algorithm in 1963, and Cook published an improved (asymptotically equivalent) algorithm in his PhD thesis in 1966.
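For intuition (a sketch of ours, not production code), here is Toom-2 — Karatsuba — which trades four half-size products for three; Toom-3 applies the same evaluate/interpolate idea with five points.

```python
def karatsuba(a, b):
    """Toom-2: split each operand at m bits and form the three products
    z2 = a1*b1, z0 = a0*b0, z1 = (a1+a0)(b1+b0) - z2 - z0."""
    if a < 1024 or b < 1024:                 # small inputs: plain multiplication
        return a * b
    m = max(a.bit_length(), b.bit_length()) // 2
    a1, a0 = a >> m, a & ((1 << m) - 1)      # a = a1*2^m + a0
    b1, b0 = b >> m, b & ((1 << m) - 1)
    z2, z0 = karatsuba(a1, b1), karatsuba(a0, b0)
    z1 = karatsuba(a1 + a0, b1 + b0) - z2 - z0
    return (z2 << (2 * m)) + (z1 << m) + z0

x, y = 3 ** 200, 7 ** 150
assert karatsuba(x, y) == x * y              # matches Python's built-in product
```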
Details
This section discusses |
https://en.wikipedia.org/wiki/Kernel%20%28set%20theory%29 | In set theory, the kernel of a function (or equivalence kernel) may be taken to be either
the equivalence relation on the function's domain that roughly expresses the idea of "equivalent as far as the function can tell", or
the corresponding partition of the domain.
An unrelated notion is that of the kernel of a non-empty family of sets 𝓑, which by definition is the intersection of all its elements:
ker 𝓑 = ⋂_{B ∈ 𝓑} B.
This definition is used in the theory of filters to classify them as being free or principal.
Definition
For the formal definition, let f : X → Y be a function between two sets.
Elements x₁, x₂ ∈ X are equivalent if f(x₁) and f(x₂) are equal, that is, are the same element of Y.
The kernel of f is the equivalence relation thus defined.
The kernel of a family 𝓑 of sets is
ker 𝓑 = ⋂_{B ∈ 𝓑} B.
The kernel of f is also sometimes denoted by ker f. The kernel of the empty set, ker ∅, is typically left undefined.
A family is called fixed and is said to have non-empty intersection if its kernel is not empty.
A family is said to be free if it is not fixed; that is, if its kernel is the empty set.
Quotients
Like any equivalence relation, the kernel can be modded out to form a quotient set, and the quotient set is the partition:
{f⁻¹(y) : y ∈ f(X)}.
This quotient set X / ker f is called the coimage of the function f, and denoted coim f (or a variation).
The coimage is naturally isomorphic (in the set-theoretic sense of a bijection) to the image, im f; specifically, the equivalence class of x in X (which is an element of coim f) corresponds to f(x) in Y (which is an element of im f).
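A finite sketch (our own) of the kernel-as-partition and the coimage/image bijection, for f(x) = x mod 3 on {0, ..., 8}; the helper name is hypothetical.

```python
from collections import defaultdict

def coimage(f, domain):
    """Partition the domain into the fibers f^{-1}(y): the blocks of ker f."""
    fibers = defaultdict(list)
    for x in domain:
        fibers[f(x)].append(x)
    return fibers

blocks = coimage(lambda x: x % 3, range(9))
# One block per image element: the bijection coim f -> im f sends each block to its y.
for y, block in blocks.items():
    print(y, block)   # 0 [0, 3, 6] / 1 [1, 4, 7] / 2 [2, 5, 8]
```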
As a subset of the square
Like any binary relation, the kernel of a function may be thought of as a subset of the Cartesian product X × X.
In this guise, the kernel may be denoted ker f (or a variation) and may be defined symbolically as
ker f := {(x, x′) : f(x) = f(x′)}.
The study of the properties of this subset can shed light on f.
Algebraic structures
If X and Y are algebraic structures of some fixed type (such as groups, rings, or vector spaces), and if the function f : X → Y is a homomorphism, then ker f is a congruence relation (that is, an equivalence relation that is compatible with the algebraic structure), and the coimage |
https://en.wikipedia.org/wiki/Differentiable%20function | In mathematics, a differentiable function of one real variable is a function whose derivative exists at each point in its domain. In other words, the graph of a differentiable function has a non-vertical tangent line at each interior point in its domain. A differentiable function is smooth (the function is locally well approximated as a linear function at each interior point) and does not contain any break, angle, or cusp.
If x₀ is an interior point in the domain of a function f, then f is said to be differentiable at x₀ if the derivative f′(x₀) exists. In other words, the graph of f has a non-vertical tangent line at the point (x₀, f(x₀)). f is said to be differentiable on U if it is differentiable at every point of U. f is said to be continuously differentiable if its derivative is also a continuous function over the domain of the function f. Generally speaking, f is said to be of class Cᵏ if its first k derivatives exist and are continuous over the domain of the function f.
For a multivariable function, differentiability is something more than the mere existence of its partial derivatives.
Differentiability of real functions of one variable
A function f : U → ℝ, defined on an open set U ⊂ ℝ, is said to be differentiable at a ∈ U if the derivative
f′(a) = lim_{h→0} (f(a + h) − f(a)) / h
exists. This implies that the function is continuous at a.
This function f is said to be differentiable on U if it is differentiable at every point of U. In this case, the derivative of f is thus a function from U into ℝ.
A continuous function is not necessarily differentiable, but a differentiable function is necessarily continuous (at every point where it is differentiable), as shown below (in the section Differentiability and continuity). A function is said to be continuously differentiable if its derivative is also a continuous function; there exist functions that are differentiable but not continuously differentiable (an example is given in the section Differentiability classes).
Differentiability and continuity
If is differentiable |
https://en.wikipedia.org/wiki/Zygoma | The term zygoma generally refers to the zygomatic bone, a bone of the human skull commonly referred to as the cheekbone or malar bone, but it may also refer to:
The zygomatic arch, a structure in the human skull formed primarily by parts of the zygomatic bone and the temporal bone
The zygomatic process, a bony protrusion of the human skull, mostly composed of the zygomatic bone but also contributed to by the frontal bone, temporal bone, and maxilla
See also
Zygoma reduction plasty
Anatomy |
https://en.wikipedia.org/wiki/Rank%E2%80%93nullity%20theorem | The rank–nullity theorem is a theorem in linear algebra, which asserts:
the number of columns of a matrix M is the sum of the rank of M and the nullity of M; and
the dimension of the domain of a linear transformation f is the sum of the rank of f (the dimension of the image of f) and the nullity of f (the dimension of the kernel of f).
It follows that for linear transformations of vector spaces of equal finite dimension, either injectivity or surjectivity implies bijectivity.
Stating the theorem
Linear transformations
Let T : V → W be a linear transformation between two vector spaces, where T's domain V is finite-dimensional. Then
rank(T) + nullity(T) = dim V,
where rank(T) is the rank of T (the dimension of its image) and nullity(T) is the nullity of T (the dimension of its kernel). In other words,
dim(im T) + dim(ker T) = dim(domain(T)).
This theorem can be refined via the splitting lemma to be a statement about an isomorphism of spaces, not just dimensions. Explicitly, since T induces an isomorphism from V/ker(T) to im(T), the existence of a basis for V that extends any given basis of ker(T) implies, via the splitting lemma, that V ≅ ker(T) ⊕ im(T). Taking dimensions, the rank–nullity theorem follows.
Matrices
Linear maps can be represented with matrices. More precisely, an m × n matrix M represents a linear map f : Fⁿ → Fᵐ, where F is the underlying field. So, the dimension of the domain of f is n, the number of columns of M, and the rank–nullity theorem for an m × n matrix M is
rank(M) + nullity(M) = n.
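A quick numerical check (ours) with NumPy: a 2 × 3 matrix whose rows are proportional has rank 1, so the theorem forces nullity 2.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])            # proportional rows: rank 1

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank                # rank-nullity: nullity = n - rank
print(rank, nullity)                       # 1 2
print(rank + nullity == A.shape[1])        # True: 1 + 2 = 3 columns
```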
Proofs
Here we provide two proofs. The first operates in the general case, using linear maps. The second proof looks at the homogeneous system Ax = 0, where A is an m × n matrix with rank r, and shows explicitly that there exists a set of n − r linearly independent solutions that span the null space of A.
While the theorem requires that the domain of the linear map be finite-dimensional, there is no such assumption on the codomain. This means that there are linear maps not given by matrices for which the theorem applies. Despite this, the first proof is not actually more general than the second: since the image of the linear map is finite-dimensional, we can represent the map fr |
https://en.wikipedia.org/wiki/Exponential%20decay | A quantity is subject to exponential decay if it decreases at a rate proportional to its current value. Symbolically, this process can be expressed by the following differential equation, where N is the quantity and λ (lambda) is a positive rate called the exponential decay constant, disintegration constant, rate constant, or transformation constant:
dN/dt = −λN
The solution to this equation (see derivation below) is:
N(t) = N₀ e^(−λt),
where N(t) is the quantity at time t, and N₀ = N(0) is the initial quantity, that is, the quantity at time t = 0.
Measuring rates of decay
Mean lifetime
If the decaying quantity, N(t), is the number of discrete elements in a certain set, it is possible to compute the average length of time that an element remains in the set. This is called the mean lifetime (or simply the lifetime), where the exponential time constant, τ, relates to the decay rate constant, λ, in the following way:
τ = 1/λ.
The mean lifetime can be looked at as a "scaling time", because the exponential decay equation can be written in terms of the mean lifetime, τ, instead of the decay constant, λ:
N(t) = N₀ e^(−t/τ),
and τ is the time at which the population of the assembly is reduced to 1/e ≈ 0.367879441 times its initial value.
For example, if the initial population of the assembly, N(0), is 1000, then the population at time τ, N(τ), is 368.
A very similar equation will be seen below, which arises when the base of the exponential is chosen to be 2, rather than e. In that case the scaling time is the "half-life".
Half-life
A more intuitive characteristic of exponential decay for many people is the time required for the decaying quantity to fall to one half of its initial value. (If N(t) is discrete, then this is the median life-time rather than the mean life-time.) This time is called the half-life, and often denoted by the symbol t₁/₂. The half-life can be written in terms of the decay constant, or the mean lifetime, as:
t₁/₂ = ln(2)/λ = τ ln(2).
When this expression is inserted for τ in the exponential equation above, and ln 2 is absorbed into the base, this equat |
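A short sketch (ours, with carbon-14's half-life as an example figure) connecting the three constants: λ, τ = 1/λ, and t₁/₂ = τ ln 2; the function name is hypothetical.

```python
import math

def remaining(N0, t, t_half):
    """N(t) = N0 * 2^(-t / t_half), i.e. N0 * exp(-lambda * t) with
    lambda = ln(2) / t_half."""
    return N0 * 2 ** (-t / t_half)

t_half = 5730.0                         # carbon-14 half-life, in years
lam = math.log(2) / t_half              # decay constant lambda
tau = 1.0 / lam                         # mean lifetime tau = t_half / ln 2
print(remaining(1000, t_half, t_half))  # 500.0 after one half-life
print(remaining(1000, tau, t_half))     # ~367.88 after one mean lifetime (1000/e)
```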
https://en.wikipedia.org/wiki/Disc%20integration | Disc integration, also known in integral calculus as the disc method, is a method for calculating the volume of a solid of revolution of a solid-state material when integrating along an axis "parallel" to the axis of revolution. This method models the resulting three-dimensional shape as a stack of an infinite number of discs of varying radius and infinitesimal thickness. It is also possible to use the same principles with rings instead of discs (the "washer method") to obtain hollow solids of revolutions. This is in contrast to shell integration, which integrates along an axis perpendicular to the axis of revolution.
Definition
Function of x
If the function to be revolved is a function of x, the following integral represents the volume of the solid of revolution:
V = π ∫_a^b R(x)² dx,
where R(x) is the distance between the function and the axis of rotation. This works only if the axis of rotation is horizontal (example: y = 0 or some other constant).
Function of y
If the function to be revolved is a function of y, the following integral will obtain the volume of the solid of revolution:
V = π ∫_c^d R(y)² dy,
where R(y) is the distance between the function and the axis of rotation. This works only if the axis of rotation is vertical (example: x = 0 or some other constant).
Washer method
To obtain a hollow solid of revolution (the “washer method”), the procedure would be to take the volume of the inner solid of revolution and subtract it from the volume of the outer solid of revolution. This can be calculated in a single integral similar to the following:
V = π ∫_a^b (R_O(x)² − R_I(x)²) dx,
where R_O(x) is the function that is farthest from the axis of rotation and R_I(x) is the function that is closest to the axis of rotation. For example, the next figure shows the rotation along the x-axis of the red "leaf" enclosed between the square-root and quadratic curves:
The volume of this solid is:
V = π ∫_0^1 ((√x)² − (x²)²) dx = π ∫_0^1 (x − x⁴) dx = 3π/10.
One should take caution not to evaluate the square of the difference of the two functions, but to evaluate the difference of the squares of the two functions.
(This formula only wo |
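A numerical sketch (our own) of the washer computation for the leaf above — squaring each function first and then differencing, exactly as the caution requires; the function name is hypothetical.

```python
import numpy as np

def washer_volume(outer, inner, a, b, n=100_000):
    """V = pi * integral of (outer(x)^2 - inner(x)^2) dx (trapezoidal rule)."""
    x = np.linspace(a, b, n + 1)
    y = outer(x) ** 2 - inner(x) ** 2   # difference of squares, not square of difference
    dx = (b - a) / n
    return np.pi * dx * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

v = washer_volume(np.sqrt, np.square, 0.0, 1.0)
print(v, 3 * np.pi / 10)   # both ~ 0.94248
```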
https://en.wikipedia.org/wiki/Shell%20integration | Shell integration (the shell method in integral calculus) is a method for calculating the volume of a solid of revolution, when integrating along an axis perpendicular to the axis of revolution. This is in contrast to disc integration which integrates along the axis parallel to the axis of revolution.
Definition
The shell method goes as follows: Consider a volume in three dimensions obtained by rotating a cross-section in the xy-plane around the y-axis. Suppose the cross-section is defined by the graph of the positive function f(x) on the interval [a, b]. Then the formula for the volume will be:
V = 2π ∫_a^b x f(x) dx.
If the function is of the y coordinate and the axis of rotation is the x-axis then the formula becomes:
V = 2π ∫_c^d y f(y) dy.
If the function is rotating around the line x = h then the formula becomes:
V = 2π ∫_a^b |x − h| f(x) dx,
and for rotations around y = k it becomes
V = 2π ∫_c^d |y − k| f(y) dy.
The formula is derived by computing the double integral in polar coordinates.
Derivation of the formula
Example
Consider the volume, depicted below, whose cross section on the interval [1, 2] is defined by:
y = (x − 1)²(x − 2)².
In the case of disc integration we would need to solve for given and because the volume is hollow in the middle we would find two functions, one that defined the inner solid and one that defined the outer solid. After integrating these two functions with the disk method we would subtract them to yield the desired volume.
With the shell method all we need is the following formula:
V = 2π ∫_1^2 x (x − 1)²(x − 2)² dx.
By expanding the polynomial, the integration becomes very simple. In the end we find the volume is π/10 cubic units.
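A numerical sketch (ours) of the same shell computation, avoiding the polynomial expansion entirely; the function name is hypothetical.

```python
import numpy as np

def shell_volume(f, a, b, n=100_000):
    """V = 2*pi * integral of x * f(x) dx: each strip at x becomes a shell of
    radius x and height f(x) when revolved about the y-axis."""
    x = np.linspace(a, b, n + 1)
    y = 2 * np.pi * x * f(x)
    dx = (b - a) / n
    return dx * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

v = shell_volume(lambda x: (x - 1) ** 2 * (x - 2) ** 2, 1.0, 2.0)
print(v, np.pi / 10)   # both ~ 0.31416
```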
See also
Solid of revolution
Disc integration
References
Frank Ayres, Elliott Mendelson. Schaum's Outlines: Calculus. McGraw-Hill Professional, 2008. pp. 244–248.
Integral calculus |
https://en.wikipedia.org/wiki/Anthropometry | Anthropometry () refers to the measurement of the human individual. An early tool of physical anthropology, it has been used for identification, for the purposes of understanding human physical variation, in paleoanthropology and in various attempts to correlate physical with racial and psychological traits. Anthropometry involves the systematic measurement of the physical properties of the human body, primarily dimensional descriptors of body size and shape. Because commonly used methods of analysing living standards proved insufficient, anthropometric history has become very useful to historians in answering questions that interest them.
Today, anthropometry plays an important role in industrial design, clothing design, ergonomics and architecture where statistical data about the distribution of body dimensions in the population are used to optimize products. Changes in lifestyles, nutrition, and ethnic composition of populations lead to changes in the distribution of body dimensions (e.g. the rise in obesity) and require regular updating of anthropometric data collections.
History
The history of anthropometry includes and spans various concepts, both scientific and pseudoscientific, such as craniometry, paleoanthropology, biological anthropology, phrenology, physiognomy, forensics, criminology, phylogeography, human origins, and cranio-facial description, as well as correlations between various anthropometrics and personal identity, mental typology, personality, cranial vault and brain size, and other factors.
At various times in history, applications of anthropometry have ranged from accurate scientific description and epidemiological analysis to rationales for eugenics and overtly racist social movements. One of its misuses was the discredited pseudoscience, phrenology.
Individual variation
Auxologic
Auxology is a broad term covering the study of all aspects of human physical growth.
Height
Human height varies greatly between ind |
https://en.wikipedia.org/wiki/DIN%20connector | The DIN connector is an electrical connector that was standardized by the Deutsches Institut für Normung (DIN), the German Institute for Standards, in the mid-1950s, initially with three pins for mono audio; when stereo connections and equipment appeared in the late 1950s (around 1959), versions with five or more pins were introduced. The male DIN connectors (plugs) feature a 13.2 mm diameter metal shield with a notch that limits the orientation in which plug and socket can mate. The range of DIN connectors, differing only in the configuration of the pins, has been standardized as DIN 41524 / IEC/DIN EN 60130-9 (3-pin at 90° and 5-pin at 45°); DIN 45322 (5-pin and 6-pin at 60°); DIN 45329 / IEC/DIN EN 60130-9 (7-pin at 45°); and DIN 45326 / IEC/DIN EN 60130-9 (8-pin at 45°).
In consumer electronics, the term "DIN connector" identifies types of cylindrical connectors that the German Institute for Standards (DIN) had initially standardised for analog audio signals. Some DIN connectors have been used in analog video applications, for power connections, and for digital interfaces such as MIDI (DIN 41524) and the IBM PC and IBM AT keyboard connectors (DIN 41524). The original DIN standards for these connectors are no longer in print and have been replaced with equivalent international standards, such as IEC 60130-9.
Standards
The term "DIN connector" alone does not unambiguously identify any particular type of connector unless the document number of the relevant DIN standard is added (e.g., "DIN 45322 connector"). Some DIN connector standards are:
DIN 41524, for circular connectors often used for audio signals or some digital signals like MIDI
DIN 41612, rectangular connectors used to connect plug-in cards to a back plane or motherboard
DIN 41652 D-subminiature connectors used for computer data and video
DIN 41585 automotive coaxial connectors
Circular connectors
The plugs consist of a circular shielding metal skirt protecting a number of straight round pins. The pins a |
https://en.wikipedia.org/wiki/Squeeze%20theorem | In calculus, the squeeze theorem (also known as the sandwich theorem, among other names) is a theorem regarding the limit of a function that is trapped between two other functions.
The squeeze theorem is used in calculus and mathematical analysis, typically to confirm the limit of a function via comparison with two other functions whose limits are known. It was first used geometrically by the mathematicians Archimedes and Eudoxus in an effort to compute π, and was formulated in modern terms by Carl Friedrich Gauss.
Statement
The squeeze theorem is formally stated as follows.
The functions and are said to be lower and upper bounds (respectively) of .
Here, is not required to lie in the interior of . Indeed, if is an endpoint of , then the above limits are left- or right-hand limits.
A similar statement holds for infinite intervals: for example, if , then the conclusion holds, taking the limits as .
This theorem is also valid for sequences. Let be two sequences converging to , and a sequence. If we have , then also converges to .
Proof
According to the above hypotheses we have, taking the limit inferior and superior:
so all the inequalities are indeed equalities, and the conclusion immediately follows.
A direct proof, using the (ε, δ)-definition of limit, would be to prove that for every real ε > 0 there exists a real δ > 0 such that for all x with 0 < |x − a| < δ we have |f(x) − L| < ε. Symbolically,
As
means that
and
means that
then we have
We can choose . Then, if , combining () and (), we have
which completes the proof. Q.E.D.
The proof for sequences is very similar, using the ε-definition of the limit of a sequence.
Examples
First example
The limit
cannot be determined through the limit law
because
does not exist.
However, by the definition of the sine function,
It follows that
Since , by the squeeze theorem, must also be 0.
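A brief numerical check, assuming the classic squeezed function f(x) = x² sin(1/x) with bounds −x² ≤ f(x) ≤ x², illustrates how the two bounds force the limit to 0:

import numpy as np

# Hedged numerical illustration of the squeeze theorem, assuming the classic
# example f(x) = x**2 * sin(1/x): since -x**2 <= f(x) <= x**2 and both bounds
# tend to 0 as x -> 0, the squeezed limit is 0.
for x in [0.1, 0.01, 0.001, 1e-6]:
    f = x**2 * np.sin(1.0 / x)
    print(f"x={x:>8}:  -x^2={-x**2:.3e} <= f(x)={f:.3e} <= x^2={x**2:.3e}")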
Second example
Probably the best-known examples of finding a limit by squeezing are the proofs of the equalities
The first limit follows by means of the |
https://en.wikipedia.org/wiki/Matryoshka%20doll | Matryoshka dolls, also known as stacking dolls, nesting dolls, Russian tea dolls, or Russian dolls, are a set of wooden dolls of decreasing size placed one inside another. The name matryoshka, meaning "little matron", is a diminutive form of Matryosha, in turn a diminutive of the Russian female first name Matryona.
A set of matryoshkas consists of a wooden figure, which separates at the middle, top from bottom, to reveal a smaller figure of the same sort inside, which has, in turn, another figure inside of it, and so on.
The first Russian nested doll set was made in 1890 by wood turning craftsman and wood carver Vasily Zvyozdochkin from a design by Sergey Malyutin, who was a folk crafts painter at Abramtsevo. Traditionally the outer layer is a woman, dressed in a sarafan, a long and shapeless traditional Russian peasant jumper dress. The figures inside may be of any gender; the smallest, innermost doll is typically a baby turned from a single piece of wood. Much of the artistry is in the painting of each doll, which can be very elaborate. The dolls often follow a theme; the themes may vary, from fairy tale characters to Soviet leaders. In some countries, matryoshka dolls are often referred to as babushka dolls, though they are not known by this name in Russian; babushka () means "grandmother" or "old woman".
History
The first Russian nested doll set was carved in 1890 at the Children's Education Workshop by Vasily Zvyozdochkin and designed by Sergey Malyutin, who was a folk crafts painter in the Abramtsevo estate of Savva Mamontov, a Russian industrialist and patron of arts. Mamontov's brother, Anatoly Ivanovich Mamontov (1839–1905), created the Children's Education Workshop to make and sell children's toys. The doll set was painted by Malyutin. Malyutin's doll set consisted of eight dolls—the outermost was a mother in a traditional dress holding a red-combed rooster. The inner dolls were her children, girls and a boy, and the innermost a ba |
https://en.wikipedia.org/wiki/Contig | A contig (from contiguous) is a set of overlapping DNA segments that together represent a consensus region of DNA.
In bottom-up sequencing projects, a contig refers to overlapping sequence data (reads); in top-down sequencing projects, contig refers to the overlapping clones that form a physical map of the genome that is used to guide sequencing and assembly. Contigs can thus refer both to overlapping DNA sequences and to overlapping physical segments (fragments) contained in clones depending on the context.
Original definition of contig
In 1980, Staden wrote: In order to make it easier to talk about our data gained by the shotgun method of sequencing we have invented the word "contig". A contig is a set of gel readings that are related to one another by overlap of their sequences. All gel readings belong to one and only one contig, and each contig contains at least one gel reading. The gel readings in a contig can be summed to form a contiguous consensus sequence and the length of this sequence is the length of the contig.
Sequence contigs
A sequence contig is a continuous (not contiguous) sequence resulting from the reassembly of the small DNA fragments generated by bottom-up sequencing strategies. This meaning of contig is consistent with the original definition by Rodger Staden (1979). The bottom-up DNA sequencing strategy involves shearing genomic DNA into many small fragments ("bottom"), sequencing these fragments, reassembling them back into contigs and eventually the entire genome ("up"). Because current technology allows for the direct sequencing of only relatively short DNA fragments (300–1000 nucleotides), genomic DNA must be fragmented into small pieces prior to sequencing. In bottom-up sequencing projects, amplified DNA is sheared randomly into fragments appropriately sized for sequencing. The subsequent sequence reads, which are the data that contain the sequences of the small fragments, are put into a database. The assembly software then searches |
https://en.wikipedia.org/wiki/Live%20CD | A live CD (also live DVD, live disc, or live operating system) is a complete bootable computer installation including operating system which runs directly from a CD-ROM or similar storage device into a computer's memory, rather than loading from a hard disk drive. A live CD allows users to run an operating system for any purpose without installing it or making any changes to the computer's configuration. Live CDs can run on a computer without secondary storage, such as a hard disk drive, or with a corrupted hard disk drive or file system, allowing data recovery.
As CD and DVD drives have been steadily phased-out, live CDs have become less popular, being replaced by live USBs, which are equivalent systems written onto USB flash drives, which have the added benefit of having writeable storage. The functionality of a live CD is also available with an external hard disk drive connected by USB. Many live CDs offer the option of persistence by writing files to a hard drive or USB flash drive.
Many Linux distributions make ISO images available for burning to CD or DVD. While open source operating systems can be used for free, some commercial software, such as Windows To Go, requires a license to use. Many live CDs are used for data recovery, computer forensics, disk imaging, system recovery and malware removal. The Tails operating system is aimed at preserving privacy and anonymity of its users, allowing them to work with sensitive documents without leaving a record on a computer's hard drive.
History
All computers except the earliest digital computers are built with some form of minimal built-in loader, which loads a program or succession of programs from a storage medium, which then operate the computer. Initially a read-only medium such as punched tape or punched cards was used for initial program load. With the introduction of inexpensive read-write storage, read-write floppy disks and hard disks were used as boot media.
After the introduction of the audio compact |
https://en.wikipedia.org/wiki/Color%20depth | Color depth or colour depth (see spelling differences), also known as bit depth, is either the number of bits used to indicate the color of a single pixel, or the number of bits used for each color component of a single pixel. When referring to a pixel, the concept can be defined as bits per pixel (bpp). When referring to a color component, the concept can be defined as bits per component, bits per channel, bits per color (all three abbreviated bpc), and also bits per pixel component, bits per color channel or bits per sample (bps). Modern standards tend to use bits per component, but historical lower-depth systems used bits per pixel more often.
Color depth is only one aspect of color representation, expressing the precision with which the amount of each primary can be expressed; the other aspect is how broad a range of colors can be expressed (the gamut). The definition of both color precision and gamut is accomplished with a color encoding specification which assigns a digital code value to a location in a color space.
The number of bits of resolved intensity in a color channel is also known as radiometric resolution, especially in the context of satellite images.
Comparison
Indexed color
With the relatively low color depth, the stored value is typically a number representing the index into a color map or palette (a form of vector quantization). The colors available in the palette itself may be fixed by the hardware or modifiable by software. Modifiable palettes are sometimes referred to as pseudocolor palettes.
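A minimal NumPy sketch (with a made-up two-bit palette, not any particular hardware's) of how an indexed-color image expands to RGB through a palette lookup:

import numpy as np

# Illustrative sketch: a 2-bit indexed image stores one small palette index per
# pixel; the display hardware or software expands each index into a full RGB
# value through the palette (color map).
palette = np.array([[0, 0, 0],        # index 0: black
                    [255, 0, 0],      # index 1: red
                    [0, 255, 0],      # index 2: green
                    [255, 255, 255]], # index 3: white
                   dtype=np.uint8)

indexed_image = np.array([[0, 1, 1],
                          [2, 3, 0]], dtype=np.uint8)   # 2 bits per pixel suffice

rgb_image = palette[indexed_image]        # palette lookup -> shape (2, 3, 3)
print(rgb_image.shape, rgb_image[0, 1])   # (2, 3, 3) [255 0 0]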
Old graphics chips, particularly those used in home computers and video game consoles, often have the ability to use a different palette per sprite and tile in order to increase the maximum number of simultaneously displayed colors, while minimizing use of then-expensive memory (and bandwidth). For example, in the ZX Spectrum the picture is stored in a two-color format, but these two colors can be separately defined for each rectangular block of |
https://en.wikipedia.org/wiki/Mealy%20machine | In the theory of computation, a Mealy machine is a finite-state machine whose output values are determined both by its current state and the current inputs. This is in contrast to a Moore machine, whose output values are determined solely by its current state. A Mealy machine is a deterministic finite-state transducer: for each state and input, at most one transition is possible.
History
The Mealy machine is named after George H. Mealy, who presented the concept in a 1955 paper, "A Method for Synthesizing Sequential Circuits".
Formal definition
A Mealy machine is a 6-tuple consisting of the following:
a finite set of states
a start state (also called initial state) which is an element of
a finite set called the input alphabet
a finite set called the output alphabet
a transition function mapping pairs of a state and an input symbol to the corresponding next state.
an output function mapping pairs of a state and an input symbol to the corresponding output symbol.
In some formulations, the transition and output functions are coalesced into a single function .
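As an informal sketch of the 6-tuple definition (the notation and the example machine below are illustrative assumptions, not taken from a specific source), a Mealy machine can be written directly as a table-driven Python class; the example emits 1 exactly when the current input bit differs from the previous one:

class MealyMachine:
    """Minimal sketch of a Mealy machine (S, s0, Sigma, Lambda, T, G)."""

    def __init__(self, states, start, inputs, outputs, transition, output):
        self.states = states          # finite set of states S
        self.start = start            # start state s0 in S
        self.inputs = inputs          # input alphabet Sigma
        self.outputs = outputs        # output alphabet Lambda
        self.transition = transition  # T: S x Sigma -> S
        self.output = output          # G: S x Sigma -> Lambda

    def run(self, word):
        """Feed an input word; one output symbol is produced per transition."""
        state, out = self.start, []
        for symbol in word:
            out.append(self.output[state, symbol])
            state = self.transition[state, symbol]
        return out


# Hypothetical example: emit 1 whenever the input bit differs from the previous bit.
states = {"seen0", "seen1"}
T = {("seen0", 0): "seen0", ("seen0", 1): "seen1",
     ("seen1", 0): "seen0", ("seen1", 1): "seen1"}
G = {("seen0", 0): 0, ("seen0", 1): 1,
     ("seen1", 0): 1, ("seen1", 1): 0}
m = MealyMachine(states, "seen0", {0, 1}, {0, 1}, T, G)
print(m.run([0, 1, 1, 0, 1]))  # [0, 1, 0, 1, 1]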
Comparison of Mealy machines and Moore machines
Mealy machines tend to have fewer states:
Different outputs on arcs (n²) rather than states (n).
Moore machines are safer to use:
Outputs change at the clock edge (always one cycle later).
In Mealy machines, a change in input can cause a change in output as soon as the logic settles; this is a significant problem when two machines are interconnected, since asynchronous feedback may occur if one is not careful.
Mealy machines react faster to inputs:
React in the same cycle—they don't need to wait for the clock.
In Moore machines, more logic may be necessary to decode state into outputs—more gate delays after clock edge.
Diagram
The state diagram for a Mealy machine associates an output value with each transition edge, in contrast to the state diagram for a Moore machine, which associates an output value with each state.
When the input and output alphabet are both , one can als |
https://en.wikipedia.org/wiki/Nucleic%20acid%20sequence | A nucleic acid sequence is a succession of bases within the nucleotides forming alleles within a DNA (using GACT) or RNA (GACU) molecule. This succession is denoted by a series of a set of five different letters that indicate the order of the nucleotides. By convention, sequences are usually presented from the 5' end to the 3' end. For DNA, with its double helix, there are two possible directions for the notated sequence; of these two, the sense strand is used. Because nucleic acids are normally linear (unbranched) polymers, specifying the sequence is equivalent to defining the covalent structure of the entire molecule. For this reason, the nucleic acid sequence is also termed the primary structure.
The sequence represents biological information. Biological deoxyribonucleic acid represents the information which directs the functions of an organism.
Nucleic acids also have a secondary structure and tertiary structure. Primary structure is sometimes mistakenly referred to as "primary sequence". However there is no parallel concept of secondary or tertiary sequence.
Nucleotides
Nucleic acids consist of a chain of linked units called nucleotides. Each nucleotide consists of three subunits: a phosphate group and a sugar (ribose in the case of RNA, deoxyribose in DNA) make up the backbone of the nucleic acid strand, and attached to the sugar is one of a set of nucleobases. The nucleobases are important in base pairing of strands to form higher-level secondary and tertiary structures such as the famed double helix.
The possible letters are A, C, G, and T, representing the four nucleotide bases of a DNA strand – adenine, cytosine, guanine, thymine – covalently linked to a phosphodiester backbone. In the typical case, the sequences are printed abutting one another without gaps, as in the sequence AAAGTCTGAC, read left to right in the 5' to 3' direction. With regards to transcription, a sequence is on the coding strand if it has the same order as the transcribed RNA.
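As a small illustrative sketch (assuming standard Watson–Crick pairing, and reusing the example sequence above), the 5'→3' convention and base pairing can be made concrete by computing a reverse complement in Python:

# Hedged sketch: computing the reverse complement of a DNA sequence written
# 5' -> 3'. The pairing A-T and C-G follows Watson-Crick base pairing.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement, also written 5' -> 3'."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

print(reverse_complement("AAAGTCTGAC"))  # GTCAGACTTT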
|
https://en.wikipedia.org/wiki/Digital%20microfluidics | Digital microfluidics (DMF) is a platform for lab-on-a-chip systems that is based upon the manipulation of microdroplets. Droplets are dispensed, moved, stored, mixed, reacted, or analyzed on a platform with a set of insulated electrodes. Digital microfluidics can be used together with analytical analysis procedures such as mass spectrometry, colorimetry, electrochemical, and electrochemiluminescense.
Overview
In analogy to digital microelectronics, digital microfluidic operations can be combined and reused within hierarchical design structures so that complex procedures (e.g. chemical synthesis or biological assays) can be built up step-by-step. And in contrast to continuous-flow microfluidics, digital microfluidics works much the same way as traditional bench-top protocols, only with much smaller volumes and much higher automation. Thus a wide range of established chemical procedures and protocols can be seamlessly transferred to a nanoliter droplet format. Electrowetting, dielectrophoresis, and immiscible-fluid flows are the three most commonly used principles, which have been used to generate and manipulate microdroplets in a digital microfluidic device.
A digital microfluidic (DMF) device set-up depends on the substrates used, the electrodes, the configuration of those electrodes, the use of a dielectric material, the thickness of that dielectric material, the hydrophobic layers, and the applied voltage.
A common substrate used in this type of system is glass. Depending if the system is open or closed, there would be either one or two layers of glass. The bottom layer of the device contains a patterned array of individually controllable electrodes. When looking at a closed system, there is usually a continuous ground electrode found through the top layer made usually of indium tin oxide (ITO). The dielectric layer is found around the electrodes in the bottom layer of the device and is important for building up charges and electrical field gradients on the |
https://en.wikipedia.org/wiki/Kuratowski%20closure%20axioms | In topology and related branches of mathematics, the Kuratowski closure axioms are a set of axioms that can be used to define a topological structure on a set. They are equivalent to the more commonly used open set definition. They were first formalized by Kazimierz Kuratowski, and the idea was further studied by mathematicians such as Wacław Sierpiński and António Monteiro, among others.
A similar set of axioms can be used to define a topological structure using only the dual notion of interior operator.
Definition
Kuratowski closure operators and weakenings
Let X be an arbitrary set and ℘(X) its power set. A Kuratowski closure operator is a unary operation c : ℘(X) → ℘(X) with the following properties:
[K1] It preserves the empty set: c(∅) = ∅.
[K2] It is extensive: for all A ⊆ X, A ⊆ c(A).
[K3] It is idempotent: for all A ⊆ X, c(c(A)) = c(A).
[K4] It preserves binary unions: for all A, B ⊆ X, c(A ∪ B) = c(A) ∪ c(B).
A consequence of c preserving binary unions is the following condition [K4'] (isotonicity): if A ⊆ B ⊆ X, then c(A) ⊆ c(B).
In fact, if we rewrite the equality in [K4] as an inclusion, giving the weaker axiom [K4''] (subadditivity): c(A ∪ B) ⊆ c(A) ∪ c(B),
then it is easy to see that axioms [K4'] and [K4''] together are equivalent to [K4] (see the next-to-last paragraph of Proof 2 below).
Some authors include a fifth (optional) axiom requiring that singleton sets be stable under closure: for all x ∈ X, c({x}) = {x}. Topological spaces which satisfy all five axioms are called T1-spaces, in contrast to the more general spaces which only satisfy the four listed axioms. Indeed, these spaces correspond exactly to the topological T1-spaces via the usual correspondence (see below).
If requirement [K3] is omitted, then the axioms define a Čech closure operator. If [K1] is omitted instead, then an operator satisfying [K2], [K3] and [K4'] is said to be a Moore closure operator. A pair is called Kuratowski, Čech or Moore closure space depending on the axioms satisfied by .
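As a minimal sketch (the topology chosen is a hypothetical example), the axioms [K1]–[K4] can be checked mechanically for an operator on the power set of a small finite set:

from itertools import combinations

def powerset(X):
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def is_kuratowski(X, c):
    """Check axioms [K1]-[K4] for a map c on the power set of a finite set X."""
    P = powerset(X)
    k1 = c(frozenset()) == frozenset()                       # preserves the empty set
    k2 = all(A <= c(A) for A in P)                           # extensive
    k3 = all(c(c(A)) == c(A) for A in P)                     # idempotent
    k4 = all(c(A | B) == c(A) | c(B) for A in P for B in P)  # preserves binary unions
    return k1 and k2 and k3 and k4

# Hypothetical topology on {1, 2, 3} with open sets {}, {1}, {1,2}, {1,2,3};
# the closure of A is the smallest closed set containing A.
X = frozenset({1, 2, 3})
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), X]
closed = [X - U for U in opens]

def closure(A):
    return min((C for C in closed if A <= C), key=len)

print(is_kuratowski(X, closure))  # True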
Alternative axiomatizations
The four Kuratowski closure axioms can be replaced by a single condition, given by Pervin:
Axioms [K1]–[K4] can be derived as a consequence of this requirement:
Choose . Then , or . This immediately implies [K1].
Choose an arbitrary and . Then, applying axiom [K1], , implying [ |
https://en.wikipedia.org/wiki/Edge%20detection | Edge detection includes a variety of mathematical methods that aim at identifying edges, defined as curves in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The same problem of finding discontinuities in one-dimensional signals is known as step detection and the problem of finding signal discontinuities over time is known as change detection. Edge detection is a fundamental tool in image processing, machine vision and computer vision, particularly in the areas of feature detection and feature extraction.
Motivations
The purpose of detecting sharp changes in image brightness is to capture important events and changes in properties of the world.
It can be shown that under rather general assumptions for an image formation model, discontinuities in image brightness are likely to correspond to:
discontinuities in depth,
discontinuities in surface orientation,
changes in material properties and
variations in scene illumination.
In the ideal case, the result of applying an edge detector to an image may lead to a set of connected curves that indicate the boundaries of objects, the boundaries of surface markings as well as curves that correspond to discontinuities in surface orientation.
Thus, applying an edge detection algorithm to an image may significantly reduce the amount of data to be processed and may therefore filter out information that may be regarded as less relevant, while preserving the important structural properties of an image.
If the edge detection step is successful, the subsequent task of interpreting the information contents in the original image may therefore be substantially simplified.
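A minimal sketch of one common gradient-based approach (Sobel filtering with a magnitude threshold; the kernels, threshold and toy image are illustrative assumptions, not a prescription from this article):

import numpy as np

def sobel_edges(image, threshold=1.0):
    """Minimal gradient-based edge detector: Sobel filters + magnitude threshold."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = image.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = image[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)   # horizontal brightness change
            gy[i, j] = np.sum(ky * patch)   # vertical brightness change
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold            # boolean edge map

# Toy image: dark left half, bright right half -> a vertical edge down the middle.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(sobel_edges(img).astype(int))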
However, it is not always possible to obtain such ideal edges from real life images of moderate complexity.
Edges extracted from non-trivial images are often hampered by fragmentation, meaning that the edge curves are not connected, missing edge segments as well as false edges not corresponding to interesting |
https://en.wikipedia.org/wiki/Transitional%20fossil | A transitional fossil is any fossilized remains of a life form that exhibits traits common to both an ancestral group and its derived descendant group. This is especially important where the descendant group is sharply differentiated by gross anatomy and mode of living from the ancestral group. These fossils serve as a reminder that taxonomic divisions are human constructs that have been imposed in hindsight on a continuum of variation. Because of the incompleteness of the fossil record, there is usually no way to know exactly how close a transitional fossil is to the point of divergence. Therefore, it cannot be assumed that transitional fossils are direct ancestors of more recent groups, though they are frequently used as models for such ancestors.
In 1859, when Charles Darwin's On the Origin of Species was first published, the fossil record was poorly known. Darwin described the perceived lack of transitional fossils as "the most obvious and gravest objection which can be urged against my theory," but he explained it by relating it to the extreme imperfection of the geological record. He noted the limited collections available at the time but described the available information as showing patterns that followed from his theory of descent with modification through natural selection. Indeed, Archaeopteryx was discovered just two years later, in 1861, and represents a classic transitional form between earlier, non-avian dinosaurs and birds. Many more transitional fossils have been discovered since then, and there is now abundant evidence of how all classes of vertebrates are related, including many transitional fossils. Specific examples of class-level transitions are: tetrapods and fish, birds and dinosaurs, and mammals and "mammal-like reptiles".
The term "missing link" has been used extensively in popular writings on human evolution to refer to a perceived gap in the hominid evolutionary record. It is most commonly used to refer to any new transitional fossil fi |
https://en.wikipedia.org/wiki/Common%20name | In biology, a common name of a taxon or organism (also known as a vernacular name, English name, colloquial name, country name, popular name, or farmer's name) is a name that is based on the normal language of everyday life; and is often contrasted with the scientific name for the same organism, which is often based in Latin. A common name is sometimes frequently used, but that is not always the case.
In chemistry, IUPAC defines a common name as one that, although it unambiguously defines a chemical, does not follow the current systematic naming convention, such as acetone, systematically 2-propanone, while a vernacular name describes one used in a lab, trade or industry that does not unambiguously describe a single chemical, such as copper sulfate, which may refer to either copper(I) sulfate or copper(II) sulfate.
Sometimes common names are created by authorities on one particular subject, in an attempt to make it possible for members of the general public (including such interested parties as fishermen, farmers, etc.) to be able to refer to one particular species of organism without needing to be able to memorise or pronounce the scientific name. Creating an "official" list of common names can also be an attempt to standardize the use of common names, which can sometimes vary a great deal between one part of a country and another, as well as between one country and another country, even where the same language is spoken in both places.
Use as part of folk taxonomy
A common name intrinsically plays a part in a classification of objects, typically an incomplete and informal classification, in which some names are degenerate examples in that they are unique and lack reference to any other name, as is the case with say, ginkgo, okapi, and ratel. Folk taxonomy, which is a classification of objects using common names, has no formal rules and need not be consistent or logical in its assignment of names, so that say, not all flies are called flies (for example Brauli |
https://en.wikipedia.org/wiki/Keyboard%20buffer | A keyboard buffer is a section of computer memory used to hold keystrokes before they are processed.
Keyboard buffers have long been used in command-line processing. As a user enters a command, they see it echoed on their terminal and can edit it before it is processed by the computer.
In time-sharing systems, the location of the buffer depends on whether communication is full-duplex or half-duplex. In full-duplex systems, keystrokes are transmitted one by one. As the main computer receives each keystroke, it ordinarily appends the character which it represents to the end of the keyboard buffer. The exception is control characters, such as "delete" or "backspace", which correct typing mistakes by deleting the character at the end of the buffer.
In half-duplex systems, keystrokes are echoed locally on a computer terminal. The user can see the command line on his terminal and edit it before it is transmitted to the main computer. Thus the buffer is local.
On some early home computers, to minimize the necessary hardware, a CPU interrupt checked the keyboard's switches for key presses multiple times each second, and recorded the key presses in a keyboard buffer for the operating system or application software to read.
On some systems, if the user presses too many keys at once, the keyboard buffer overflows and will emit a beep from the computer's internal speaker.
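A minimal Python sketch of the idea (not any particular operating system's implementation): the buffer is a small ring buffer, the interrupt handler appends key codes, the system reads them out in order, and a full buffer rejects further keystrokes.

class KeyboardBuffer:
    """Tiny ring-buffer sketch of a keyboard buffer (capacity fixed at creation)."""

    def __init__(self, capacity=16):
        self.slots = [None] * capacity
        self.head = 0   # next free slot (written by the interrupt handler)
        self.tail = 0   # next keystroke to hand to the OS/application
        self.count = 0

    def key_pressed(self, code):
        """Called from the keyboard interrupt; returns False (beep) when full."""
        if self.count == len(self.slots):
            return False
        self.slots[self.head] = code
        self.head = (self.head + 1) % len(self.slots)
        self.count += 1
        return True

    def read_key(self):
        """Called by the OS/application; returns None when the buffer is empty."""
        if self.count == 0:
            return None
        code = self.slots[self.tail]
        self.tail = (self.tail + 1) % len(self.slots)
        self.count -= 1
        return code

buf = KeyboardBuffer(capacity=4)
for ch in "hello":            # the fifth key press overflows the 4-slot buffer
    print(ch, buf.key_pressed(ch))
print(buf.read_key())         # 'h'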
Other uses
The use of keyboard buffers is sometimes known from the user experience side as typeahead.
References
Computer keyboards
Computer memory |
https://en.wikipedia.org/wiki/Computably%20enumerable%20set | In computability theory, a set S of natural numbers is called computably enumerable (c.e.), recursively enumerable (r.e.), semidecidable, partially decidable, listable, provable or Turing-recognizable if:
There is an algorithm such that the set of input numbers for which the algorithm halts is exactly S.
Or, equivalently,
There is an algorithm that enumerates the members of S. That means that its output is simply a list of all the members of S: s1, s2, s3, ... . If S is infinite, this algorithm will run forever.
The first condition suggests why the term semidecidable is sometimes used. More precisely, if a number is in the set, one can decide this by running the algorithm, but if the number is not in the set, the algorithm runs forever, and no information is returned. A set that is "completely decidable" is a computable set. The second condition suggests why computably enumerable is used. The abbreviations c.e. and r.e. are often used, even in print, instead of the full phrase.
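A minimal Python sketch of both characterizations, using as a hypothetical example the set of numbers whose Collatz sequence reaches 1 (membership is verified by an unbounded search, so the procedure halts only for members; the dovetailed enumerator lists members, possibly with repeats):

from itertools import count

# Semidecision procedure: halts (returning True) exactly when n is in the set;
# if n were not in the set, the loop would run forever and give no answer.
def semidecide(n):
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return True

# Enumerator: dovetail over all candidates, running each for a bounded number of
# steps, so every member is eventually listed (possibly more than once).
def enumerate_members():
    for bound in count(1):
        for n in range(1, bound + 1):
            m, steps = n, 0
            while m != 1 and steps < bound:
                m = m // 2 if m % 2 == 0 else 3 * m + 1
                steps += 1
            if m == 1:
                yield n

gen = enumerate_members()
print([next(gen) for _ in range(10)])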
In computational complexity theory, the complexity class containing all computably enumerable sets is RE. In recursion theory, the lattice of c.e. sets under inclusion is denoted .
Formal definition
A set S of natural numbers is called computably enumerable if there is a partial computable function whose domain is exactly S, meaning that the function is defined if and only if its input is a member of S.
Equivalent formulations
The following are all equivalent properties of a set S of natural numbers:
Semidecidability:
The set S is computably enumerable. That is, S is the domain (co-range) of a partial computable function.
The set S is Σ⁰₁ (referring to the arithmetical hierarchy).
There is a partial computable function f such that:
Enumerability:
The set S is the range of a partial computable function.
The set S is the range of a total computable function, or empty. If S is infinite, the function can be chosen to be injective.
The set S is the range of a primitive recursive funct |
https://en.wikipedia.org/wiki/Ian%20Stewart%20%28mathematician%29 | Ian Nicholas Stewart (born 24 September 1945) is a British mathematician and a popular-science and science-fiction writer. He is Emeritus Professor of Mathematics at the University of Warwick, England.
Education and early life
Stewart was born in 1945 in Folkestone, England. While in the sixth form at Harvey Grammar School in Folkestone he came to the attention of the mathematics teacher. The teacher had Stewart sit mock A-level examinations without any preparation along with the upper-sixth students; Stewart was placed first in the examination. He was awarded a scholarship to study at the University of Cambridge as an undergraduate student of Churchill College, Cambridge, where he studied the Mathematical Tripos and obtained a first-class Bachelor of Arts degree in mathematics in 1966. Stewart then went to the University of Warwick where his PhD on Lie algebras was supervised by Brian Hartley and completed in 1969.
Career and research
After his PhD, Stewart was offered an academic position at Warwick. He is well known for his popular expositions of mathematics and his contributions to catastrophe theory.
While at Warwick, Stewart edited the mathematical magazine Manifold. He also wrote a column called "Mathematical Recreations" for Scientific American magazine from 1991 to 2001. This followed the work of past columnists like Martin Gardner, Douglas Hofstadter, and A. K. Dewdney. Altogether, he wrote 96 columns for Scientific American, which were later reprinted in the books "Math Hysteria", "How to Cut a Cake: And Other Mathematical Conundrums" and "Cows in the Maze".
Stewart has held visiting academic positions in Germany (1974), New Zealand (1976), and the US (University of Connecticut 1977–78, University of Houston 1983–84).
Stewart has published more than 140 scientific papers, including a series of influential papers co-authored with Jim Collins on coupled oscillators and the symmetry of animal gaits.
Stewart has collaborated with Jack Cohen and Ter |
https://en.wikipedia.org/wiki/Phantom%20power | Phantom power, in the context of professional audio equipment, is DC electric power equally applied to both signal wires in balanced microphone cables, forming a phantom circuit, to operate microphones that contain active electronic circuitry.
It is best known as a convenient power source for condenser microphones, though many active direct boxes also use it. The technique is also used in other applications where power supply and signal communication take place over the same wires.
Phantom power supplies are often built into mixing consoles, microphone preamplifiers and similar equipment. In addition to powering the circuitry of a microphone, traditional condenser microphones also use phantom power for polarizing the microphone's transducer element.
History
Phantom powering has been used in copper wire-based landline telephone systems since the introduction of the rotary-dial telephone in 1919, and it is still used today. One such application in the telephone system was to provide a DC signalling path around transformer-connected amplifiers such as analogue line transmission systems.
The first known commercially available phantom-powered microphone was the Schoeps model CMT 20, which came out in 1964, built to the specifications of French radio with 9–12 volt DC phantom power; the positive pole of this powering was grounded. Microphone preamplifiers of the Nagra IV-series tape recorders offered this type of powering as an option for many years and Schoeps continued to support "negative phantom" until the CMT series was discontinued in the mid-1970s, but it is obsolete now.
In 1966, Neumann GmbH presented a new type of transistorized microphone to the Norwegian Broadcasting Corporation, NRK. Norwegian Radio had requested phantom-powered operation. Since NRK already had 48-volt power available in their studios for their emergency lighting systems, this voltage was used for powering the new microphones (model KM 84), and is the origin of 48-volt phantom power. This arra |
https://en.wikipedia.org/wiki/Hushmail | Hushmail is an encrypted proprietary web-based email service offering PGP-encrypted e-mail and vanity domain service. Hushmail uses OpenPGP standards. If public encryption keys are available to both recipient and sender (either both are Hushmail users or have uploaded PGP keys to the Hush keyserver), Hushmail can convey authenticated, encrypted messages in both directions. For recipients for whom no public key is available, Hushmail will allow a message to be encrypted by a password (with a password hint) and stored for pickup by the recipient, or the message can be sent in cleartext. In July, 2016, the company launched an iOS app that offers end-to-end encryption and full integration with the webmail settings. The company is located in Vancouver, British Columbia, Canada.
History
Hushmail was founded by Cliff Baltzley in 1999 after he left Ultimate Privacy.
Accounts
Individuals
There is one type of paid account, Hushmail Premium, which provides 10GB of storage, as well as IMAP and POP3 service. Hushmail offers a two-week free trial of this account.
Businesses
The standard business account provides the same features as the paid individual account, plus other features like vanity domain, email forwarding, catch-all email and user admin. A standard business plan with email archiving is also available. Features like secure forms and email archiving can be found in the healthcare and legal industry-specific plans.
Additional security features include hidden IP addresses in e-mail headers, two-step verification and HIPAA compliant encryption.
Instant messaging
An instant messaging service, Hush Messenger, was offered until July 1, 2011.
Compromises to email privacy
Hushmail received favorable reviews in the press. It was believed that possible threats, such as demands from the legal system to reveal the content of traffic through the system, were not imminent in Canada unlike the United States and that if data were to be handed over, encrypted messages would be |
https://en.wikipedia.org/wiki/Canary%20trap | A canary trap is a method for exposing an information leak by giving different versions of a sensitive document to each of several suspects and seeing which version gets leaked. It could be one false statement, to see whether sensitive information gets out to other people as well. Special attention is paid to the quality of the prose of the unique language, in the hopes that the suspect will repeat it verbatim in the leak, thereby identifying the version of the document.
The term was coined by Tom Clancy in his novel Patriot Games, although Clancy did not invent the technique. The actual method (usually referred to as a barium meal test in espionage circles) has been used by intelligence agencies for many years. The fictional character Jack Ryan describes the technique he devised for identifying the sources of leaked classified documents:
Each summary paragraph has six different versions, and the mixture of those paragraphs is unique to each numbered copy of the paper. There are over a thousand possible permutations, but only ninety-six numbered copies of the actual document. The reason the summary paragraphs are so lurid is to entice a reporter to quote them verbatim in the public media. If he quotes something from two or three of those paragraphs, we know which copy he saw and, therefore, who leaked it.
A refinement of this technique uses a thesaurus program to shuffle through synonyms, thus making every copy of the document unique.
Known canary trap cases
Following the troubled production of Star Trek: The Motion Picture in the late 1970s, Paramount Pictures effectively replaced Gene Roddenberry as producer of further movies in the franchise with Harve Bennett. Roddenberry was retained as an "executive consultant", due to the high regard the series' fans held him in; while he had little real authority he was still kept involved in the creative process. The fans often complained about particular plot developments proposed for the films, such as the death of S |
https://en.wikipedia.org/wiki/Additive%20group | An additive group is a group of which the group operation is to be thought of as addition in some sense. It is usually abelian, and typically written using the symbol + for its binary operation.
This terminology is widely used with structures equipped with several operations for specifying the structure obtained by forgetting the other operations. Examples include the additive group of the integers, of a vector space and of a ring. This is particularly useful with rings and fields to distinguish the additive underlying group from the multiplicative group of the invertible elements.
References
Algebraic structures
Group theory |
https://en.wikipedia.org/wiki/Gabriel%27s%20horn | A Gabriel's horn (also called Torricelli's trumpet) is a type of geometric figure that has infinite surface area but finite volume. The name refers to the Christian tradition where the archangel Gabriel blows the horn to announce Judgment Day. The properties of this figure were first studied by Italian physicist and mathematician Evangelista Torricelli in the 17th century.
These colourful informal names and the allusion to religion came along later.
Torricelli's own name for it is to be found in the Latin title of his paper, written in 1643: a truncated acute hyperbolic solid, cut by a plane.
Volume 1, part 1 of his Opera geometrica, published the following year, included that paper and a second, more orthodox (for the time) Archimedean proof of its theorem about the volume of a truncated acute hyperbolic solid.
This name was used in mathematical dictionaries of the 18th century, including "Hyperbolicum Acutum" in Harris' 1704 dictionary and in Stone's 1726 one, and the French translation in d'Alembert's 1751 one.
Although credited with primacy by his contemporaries, Torricelli was not the first to describe an infinitely long shape with a finite volume or area.
The work of Nicole Oresme in the 14th century had either been forgotten by, or was unknown to them.
Oresme had posited such things as an infinitely long shape constructed by subdividing two squares of finite total area 2 using a geometric series and rearranging the parts into a figure, infinitely long in one dimension, comprising a series of rectangles.
Mathematical definition
Gabriel's horn is formed by taking the graph of y = 1/x,
with the domain x ≥ 1 and rotating it in three dimensions about the x-axis. The discovery was made using Cavalieri's principle before the invention of calculus, but today, calculus can be used to calculate the volume and surface area of the horn between x = 1 and x = a, where a > 1. Using integration (see Solid of revolution and Surface of revolution for details), it is possible to find the volume V and the surface area A:
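A short SymPy check of both claims, assuming the standard parameterization y = 1/x for x ≥ 1 (a sketch, not a substitute for the derivation):

import sympy as sp

x, a = sp.symbols('x a', positive=True)

# Volume of the horn between x = 1 and x = a (disc method applied to y = 1/x):
V = sp.pi * sp.integrate((1 / x)**2, (x, 1, a))
print(sp.simplify(V), sp.limit(V, a, sp.oo))   # pi*(1 - 1/a), limit pi (finite)

# The surface-area integrand 2*pi*(1/x)*sqrt(1 + 1/x**4) is at least 2*pi/x,
# and the integral of that lower bound already diverges:
A_lower = 2 * sp.pi * sp.integrate(1 / x, (x, 1, a))
print(A_lower, sp.limit(A_lower, a, sp.oo))    # 2*pi*log(a), limit oo (infinite)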
|
https://en.wikipedia.org/wiki/Computable%20set | In computability theory, a set of natural numbers is called computable, recursive, or decidable if there is an algorithm which takes a number as input, terminates after a finite amount of time (possibly depending on the given number) and correctly decides whether the number belongs to the set or not.
A set which is not computable is called noncomputable or undecidable.
A more general class of sets than the computable ones consists of the computably enumerable (c.e.) sets, also called semidecidable sets. For these sets, it is only required that there is an algorithm that correctly decides when a number is in the set; the algorithm may give no answer (but not the wrong answer) for numbers not in the set.
Formal definition
A subset S of the natural numbers is called computable if there exists a total computable function f such that f(x) = 1 if x ∈ S and f(x) = 0 if x ∉ S. In other words, the set S is computable if and only if its indicator function is computable.
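A minimal Python sketch of the definition, taking S to be the set of prime numbers (listed among the examples below): the indicator function is total and always returns 0 or 1.

# Indicator function of the primes: a total procedure that terminates on every
# input and answers 1 (member) or 0 (non-member), so the set is computable.
def indicator_of_primes(n: int) -> int:
    if n < 2:
        return 0
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return 0
    return 1

print([indicator_of_primes(n) for n in range(10)])  # [0, 0, 1, 1, 0, 1, 0, 1, 0, 0]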
Examples and non-examples
Examples:
Every finite or cofinite subset of the natural numbers is computable. This includes these special cases:
The empty set is computable.
The entire set of natural numbers is computable.
Each natural number (as defined in standard set theory) is computable; that is, the set of natural numbers less than a given natural number is computable.
The subset of prime numbers is computable.
A recursive language is a computable subset of a formal language.
The set of Gödel numbers of arithmetic proofs described in Kurt Gödel's paper "On formally undecidable propositions of Principia Mathematica and related systems I" is computable; see Gödel's incompleteness theorems.
Non-examples:
The set of Turing machines that halt is not computable.
The isomorphism class of two finite simplicial complexes is not computable.
The set of busy beaver champions is not computable.
Hilbert's tenth problem is not computable.
Properties
If A is a computable set then the complement of A is a computable set. If A and B are comput |
https://en.wikipedia.org/wiki/Sociable%20number | In mathematics, sociable numbers are numbers whose aliquot sums form a periodic sequence. They are generalizations of the concepts of perfect numbers and amicable numbers. The first two sociable sequences, or sociable chains, were discovered and named by the Belgian mathematician Paul Poulet in 1918. In a sociable sequence, each number is the sum of the proper divisors of the preceding number, i.e., the sum excludes the preceding number itself. For the sequence to be sociable, the sequence must be cyclic and return to its starting point.
The period of the sequence, or order of the set of sociable numbers, is the number of numbers in this cycle.
If the period of the sequence is 1, the number is a sociable number of order 1, or a perfect number—for example, the proper divisors of 6 are 1, 2, and 3, whose sum is again 6. A pair of amicable numbers is a set of sociable numbers of order 2. There are no known sociable numbers of order 3, and searches for them have been made up to as of 1970.
It is an open question whether all numbers end up at either a sociable number or at a prime (and hence 1), or, equivalently, whether there exist numbers whose aliquot sequence never terminates, and hence grows without bound.
Example
As an example, the number 1,264,460 is a sociable number whose cyclic aliquot sequence has a period of 4:
The sum of the proper divisors of 1264460 is
1 + 2 + 4 + 5 + 10 + 17 + 20 + 34 + 68 + 85 + 170 + 340 + 3719 + 7438 + 14876 + 18595 + 37190 + 63223 + 74380 + 126446 + 252892 + 316115 + 632230 = 1547860,
the sum of the proper divisors of 1547860 is
1 + 2 + 4 + 5 + 10 + 20 + 193 + 386 + 401 + 772 + 802 + 965 + 1604 + 1930 + 2005 + 3860 + 4010 + 8020 + 77393 + 154786 + 309572 + 386965 + 773930 = 1727636,
the sum of the proper divisors of 1727636 is
1 + 2 + 4 + 521 + 829 + 1042 + 1658 + 2084 + 3316 + 431909 + 863818 = 1305184, and
the sum of the proper divisors of 1305184 is
1 + 2 + 4 + 8 + 16 + 32 + 40787 + 81574 + 163148 + 326296 + 652592 = 1264460.
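A short Python sketch (illustrative only) that recomputes this cycle by repeatedly applying the aliquot sum:

# Verifying the period-4 sociable cycle starting at 1264460 by repeatedly
# applying the aliquot sum (the sum of proper divisors).
def aliquot_sum(n: int) -> int:
    total = 1  # 1 divides every n > 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

n = 1264460
cycle = [n]
while True:
    n = aliquot_sum(n)
    if n == cycle[0]:
        break
    cycle.append(n)

print(cycle)       # [1264460, 1547860, 1727636, 1305184]
print(len(cycle))  # 4, the order of the sociable cycle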
List of k |
https://en.wikipedia.org/wiki/Industrial%20data%20processing | Industrial data processing is a branch of applied computer science that covers the area of design and programming of computerized systems which are not computers as such — often referred to as embedded systems (PLCs, automated systems, intelligent instruments, etc.). The products concerned contain at least one microprocessor or microcontroller, as well as couplers (for I/O).
Another current definition of industrial data processing is that it concerns those computer programs whose variables in some way represent physical quantities; for example the temperature and pressure of a tank, the position of a robot arm, etc.
Computer engineering |
https://en.wikipedia.org/wiki/Rate%20%28mathematics%29 | In mathematics, a rate is the quotient of two quantities in different units of measurement, often represented as a fraction. If the divisor (or fraction denominator) in the rate is equal to one expressed as a single unit, and if it is assumed that this quantity can be changed systematically (i.e., is an independent variable), then the dividend (the fraction numerator) of the rate expresses the corresponding rate of change in the other (dependent) variable.
Temporal rate is a common type of rate ("per unit of time"), such as speed, heart rate, and flux.
In fact, often rate is a synonym of rhythm or frequency, a count per second (i.e., hertz); e.g., radio frequencies or sample rates.
In describing the units of a rate, the word "per" is used to separate the units of the two measurements used to calculate the rate; for example, a heart rate is expressed as "beats per minute".
Rates that have a non-time divisor or denominator include exchange rates, literacy rates, and electric field (in volts per meter).
A rate defined using two numbers of the same units will result in a dimensionless quantity, also known as ratio or simply as a rate (such as tax rates) or counts (such as literacy rate). Dimensionless rates can be expressed as a percentage (for example, the global literacy rate in 1998 was 80%), fraction, or multiple.
Properties and examples
Rates and ratios often vary with time, location, particular element (or subset) of a set of objects, etc. Thus they are often mathematical functions.
A rate (or ratio) may often be thought of as an output-input ratio, benefit-cost ratio, all considered in the broad sense. For example, miles per hour in transportation is the output (or benefit) in terms of miles of travel, which one gets from spending an hour (a cost in time) of traveling (at this velocity).
A set of sequential indices may be used to enumerate elements (or subsets) of a set of ratios under study. For example, in finance, one could define I by assigning con |
https://en.wikipedia.org/wiki/Preamplifier | A preamplifier, also known as a preamp, is an electronic amplifier that converts a weak electrical signal into an output signal strong enough to be noise-tolerant and strong enough for further processing, or for sending to a power amplifier and a loudspeaker. Without this, the final signal would be noisy or distorted. They are typically used to amplify signals from analog sensors such as microphones and pickups. Because of this, the preamplifier is often placed close to the sensor to reduce the effects of noise and interference.
Description
An ideal preamp will be linear (have a constant gain through its operating range), have high input impedance (requiring only a minimal amount of current to sense the input signal) and a low output impedance (when current is drawn from the output there is minimal change in the output voltage). It is used to boost the signal strength to drive the cable to the main instrument without significantly degrading the signal-to-noise ratio (SNR). The noise performance of a preamplifier is critical. According to Friis's formula, when the gain of the preamplifier is high, the SNR of the final signal is determined by the SNR of the input signal and the noise figure of the preamplifier.
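A small numerical sketch of Friis's formula with made-up stage figures (not values from this article) shows why a high-gain, low-noise first stage dominates the cascade's noise figure:

import math

def friis_noise_figure(stages):
    """Cascade noise figure via Friis's formula.

    stages: list of (noise_figure_dB, gain_dB) tuples, first stage first.
    Returns the total noise figure in dB.
    """
    total_factor, gain_product = 0.0, 1.0
    for i, (nf_db, gain_db) in enumerate(stages):
        factor = 10 ** (nf_db / 10)           # noise factor (linear)
        if i == 0:
            total_factor = factor
        else:
            total_factor += (factor - 1) / gain_product
        gain_product *= 10 ** (gain_db / 10)  # linear gain accumulated so far
    return 10 * math.log10(total_factor)

# Hypothetical chain: low-noise preamp (NF 2 dB, gain 40 dB) followed by a
# noisier later stage (NF 10 dB, gain 20 dB).
print(friis_noise_figure([(2, 40), (10, 20)]))   # ~2.0 dB: the preamp sets the noise
print(friis_noise_figure([(10, 20), (2, 40)]))   # ~10.0 dB: reversed order is much worse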
Three basic types of preamplifiers are available:
current-sensitive preamplifier
parasitic-capacitance preamplifier
charge-sensitive preamplifier.
Audio systems
In an audio system, they are typically used to amplify signals from analog sensors to line level. The second amplifier is typically a power amplifier (power amp). The preamplifier provides voltage gain (e.g., from 10 mV to 1 V) but no significant current gain. The power amplifier provides the higher current necessary to drive loudspeakers. For these systems, some common sensors are microphones, instrument pickups, and phonographs. Preamplifiers are often integrated into the audio inputs on mixing consoles, DJ mixers, and sound cards. They can also be stand-alone devices
Examples
The integrat |
https://en.wikipedia.org/wiki/Crystal%20Space | Crystal Space is an unmaintained framework for developing 3D applications written in C++ by Jorrit Tyberghein and others. The first public release was on August 26, 1997. It is typically used as a game engine but the framework is more general and can be used for any kind of 3D visualization. It is very portable and runs on Microsoft Windows, Linux, UNIX, and Mac OS X. It is also free and open-source software, licensed under the GNU LGPL-2.0-or-later, and was SourceForge.net's Project of the Month for February 2003. In 2019, one of the project's main developers described it as "effectively dead and has been for a good number of years".
Engine design
Crystal Space is programmed in object oriented C++. It is very modularly built with a number of more or less independent plugins. The client programs use the plugins, such as the OpenGL 3D renderer, by registering them via Crystal Space's Shared Class Facility (SCF).
Features
Crystal Space has modules for 2D and 3D graphics, sound, collision detection and physics through ODE and Bullet.
Graphics:
OpenGL rendering
Supports hardware acceleration from all major card vendors
Allows use of shaders
Library of common shaders like normal mapping, parallax mapping and hardware skinning
Supports software rendering with limited features
Mesh objects:
Plugin-based mesh system
Triangle-based meshes with frame and bone animation support
Collision detection and dynamics:
ODE and Bullet dynamics
Simplified collision detection when full dynamic simulation is not needed
Reception and usage
The engine was for instance used for the OpenOutcast and PlaneShift projects. It was the Project of the Month on SourceForge in February 2003.
References
External links
at SourceForge
Crystal Space engine details and reviews at the Internet Archive
1997 software
Cross-platform software
Free game engines
Free software programmed in C++
Game engines for Linux
Python (programming language)-scriptable game engines |
https://en.wikipedia.org/wiki/Hereditarily%20finite%20set | In mathematics and set theory, hereditarily finite sets are defined as finite sets whose elements are all hereditarily finite sets. In other words, the set itself is finite, and all of its elements are finite sets, recursively all the way down to the empty set.
Formal definition
A recursive definition of well-founded hereditarily finite sets is as follows:
Base case: The empty set is a hereditarily finite set.
Recursion rule: If a1,...,ak are hereditarily finite, then so is {a1,...,ak}.
and only sets that can be built by a finite number of applications of these two rules are hereditarily finite.
The set is an example of such a hereditarily finite set, and so is the empty set ∅.
On the other hand, the sets or are examples of finite sets that are not hereditarily finite. For example, the first cannot be hereditarily finite since it contains at least one infinite set as an element, when .
Discussion
The class of hereditarily finite sets is denoted by H_ℵ₀, meaning that the cardinality of each member is smaller than ℵ₀. (Analogously, the class of hereditarily countable sets is denoted by H_ℵ₁.)
It can also be denoted by V_ω, which denotes the ω-th stage of the von Neumann universe.
The class is countable.
Ackermann coding
In 1937, Wilhelm Ackermann introduced an encoding of hereditarily finite sets as natural numbers.
It is defined by a function f that maps each hereditarily finite set to a natural number, given by the following recursive definition: f(S) = Σ_{s ∈ S} 2^f(s).
For example, the empty set contains no members, and is therefore mapped to an empty sum, that is, the number zero. On the other hand, a set with distinct members a₁, ..., aₙ is mapped to 2^f(a₁) + ... + 2^f(aₙ).
The inverse of f, which maps natural numbers back to sets, is f⁻¹(n) = { f⁻¹(i) : BIT(n, i) = 1 },
where BIT denotes the BIT predicate.
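A minimal Python sketch of the encoding and its inverse (using frozensets to stand for hereditarily finite sets; the function names are illustrative):

# Ackermann coding: encode a hereditarily finite set as a natural number, and
# decode a natural number back into a set. Bit i of n is 1 exactly when the
# set coded by i is a member of the set coded by n (the BIT predicate).
def encode(s: frozenset) -> int:
    return sum(2 ** encode(member) for member in s)

def decode(n: int) -> frozenset:
    return frozenset(decode(i) for i in range(n.bit_length()) if (n >> i) & 1)

empty = frozenset()
print(encode(empty))                           # 0
print(encode(frozenset({empty})))              # 1   (the set {∅})
print(encode(frozenset({empty,
                        frozenset({empty})}))) # 3   (the set {∅, {∅}})
print(decode(3) == frozenset({empty, frozenset({empty})}))  # True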
The Ackermann coding can be used to construct a model of finitary set theory in the natural numbers. More precisely, (where is the converse relation of BIT, swapping its two arguments) models Zermelo–Fraenkel set theory without the axiom of infinity |
https://en.wikipedia.org/wiki/Quotient | In arithmetic, a quotient (from Latin quotiens, 'how many times') is a quantity produced by the division of two numbers. The quotient has widespread use throughout mathematics. It has two definitions: either the integer part of a division (in the case of Euclidean division), or a fraction or ratio (in the case of a general division). For example, when dividing 20 (the dividend) by 3 (the divisor), the quotient is 6 (with a remainder of 2) in the first sense, and 6.66... (a repeating decimal) in the second sense.
In metrology (International System of Quantities and the International System of Units), "quotient" refers to the general case with respect to the units of measurement of physical quantities.
A ratio is the special case for dimensionless quotients of two quantities of the same kind.
Quotients with a non-trivial dimension and compound units, especially when the divisor is a duration (e.g., "per second"), are known as rates.
For example, density (mass divided by volume, in units of kg/m3) is said to be a "quotient", whereas mass fraction (mass divided by mass, in kg/kg or in percent) is a "ratio".
Specific quantities are intensive quantities resulting from the quotient of a physical quantity by mass, volume, or other measures of the system "size".
Notation
The quotient is most frequently encountered as two numbers, or two variables, divided by a horizontal line. The words "dividend" and "divisor" refer to each individual part, while the word "quotient" refers to the whole.
Integer part definition
The quotient is also less commonly defined as the greatest whole number of times a divisor may be subtracted from a dividend—before making the remainder negative. For example, the divisor 3 may be subtracted up to 6 times from the dividend 20, before the remainder becomes negative:
20 − 3 − 3 − 3 − 3 − 3 − 3 ≥ 0,
while
20 − 3 − 3 − 3 − 3 − 3 − 3 − 3 < 0.
In this sense, a quotient is the integer part of the ratio of two numbers.
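A minimal Python sketch of this definition for positive integers, counting subtractions and comparing with the built-in divmod:

# Integer-part definition: subtract the divisor as many times as possible
# without making the remainder negative, and count the subtractions.
def quotient_by_subtraction(dividend, divisor):
    count, remainder = 0, dividend
    while remainder - divisor >= 0:
        remainder -= divisor
        count += 1
    return count, remainder

print(quotient_by_subtraction(20, 3))  # (6, 2)
print(divmod(20, 3))                   # (6, 2), the built-in equivalent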
Quotient of two integers
A rati |
https://en.wikipedia.org/wiki/Fa%C3%A7ade | A façade or facade is generally the front part or exterior of a building. It is a loanword from the French façade, which means "frontage" or "face".
In architecture, the façade of a building is often the most important aspect from a design standpoint, as it sets the tone for the rest of the building. From the engineering perspective, the façade is also of great importance due to its impact on energy efficiency. For historical façades, many local zoning regulations or other laws greatly restrict or even forbid their alteration.
Etymology
The word is a loanword from the French façade, which in turn comes from the Italian facciata, from faccia meaning 'face', ultimately from post-classical Latin facia. The earliest usage recorded by the Oxford English Dictionary is 1656.
Façades added to earlier buildings
It was quite common in the Georgian period for existing houses in English towns to be given a fashionable new façade. For example, in the city of Bath, The Bunch of Grapes in Westgate Street appears to be a Georgian building, but the appearance is only skin deep and some of the interior rooms still have Jacobean plasterwork ceilings.
This new construction has happened also in other places: in Santiago de Compostela the three-metre-deep Casa do Cabido was built to match the architectural order of the square, and the main Churrigueresque façade of the Santiago de Compostela Cathedral, facing the Plaza del Obradoiro, is actually encasing and concealing the older Portico of Glory.
High rise façades
In modern high-rise building, the exterior walls are often suspended from the concrete floor slabs. Examples include curtain walls and precast concrete walls. The façade can at times be required to have a fire-resistance rating, for instance, if two buildings are very close together, to lower the likelihood of fire spreading from one building to another.
In general, the façade systems that are suspended or attached to the precast concrete slabs will be made from aluminum (powder coated or anodi |
https://en.wikipedia.org/wiki/Burroughs%20Large%20Systems | The Burroughs Large Systems Group produced a family of large 48-bit mainframes using stack machine instruction sets with dense syllables. The first machine in the family was the B5000 in 1961, which was optimized for compiling ALGOL 60 programs extremely well, using single-pass compilers. The B5000 evolved into the B5500 (disk rather than drum) and the B5700 (up to four systems running as a cluster). Subsequent major redesigns include the B6500/B6700 line and its successors, as well as the separate B8500 line.
In the 1970s, the Burroughs Corporation was organized into three divisions with very different product line architectures for high-end, mid-range, and entry-level business computer systems. Each division's product line grew from a different concept for how to optimize a computer's instruction set for particular programming languages. "Burroughs Large Systems" referred to all of these large-system product lines together, in contrast to the COBOL-optimized Medium Systems (B2000, B3000, and B4000) or the flexible-architecture Small Systems (B1000).
Background
Founded in the 1880s, Burroughs was the oldest continuously operating company in computing (Elliott Brothers was founded before Burroughs, but did not make computing devices in the 19th century). By the late 1950s its computing equipment was still limited to electromechanical accounting machines such as the Sensimatic. It had nothing to compete with its traditional rivals IBM and NCR, who had started to produce larger-scale computers, or with recently founded Univac. In 1956, they purchased ElectroData Corporation and rebranded its design as the B205.
Burroughs' first internally developed machine, the B5000, was designed in 1961 and Burroughs sought to address its late entry in the market with the strategy of a completely different design based on the most advanced computing ideas available at the time. While the B5000 architecture is dead, it inspired the B6500 (and subsequent B6700 and B7700). Computers |
https://en.wikipedia.org/wiki/List%20of%20writing%20systems | This is a list of writing systems (or scripts), classified according to some common distinguishing features.
The usual name of the script is given first; the name of the language(s) in which the script is written follows (in brackets), particularly in the case where the language name differs from the script name. Other informative or qualifying annotations for the script may also be provided.
Pictographic/ideographic writing systems
Ideographic scripts (in which graphemes are ideograms representing concepts or ideas rather than a specific word in a language) and pictographic scripts (in which the graphemes are iconic pictures) are not thought to be able to express all that can be communicated by language, as argued by the linguists John DeFrancis and J. Marshall Unger. Essentially, they postulate that no full writing system can be completely pictographic or ideographic; it must be able to refer directly to a language in order to have the full expressive capacity of a language. Unger disputes claims made on behalf of Blissymbols in his 2004 book Ideogram.
Although a few pictographic or ideographic scripts exist today, there is no single way to read them because there is no one-to-one correspondence between symbol and language. Hieroglyphs were commonly thought to be ideographic before they were translated, and to this day, Chinese is often erroneously said to be ideographic. In some cases of ideographic scripts, only the author of a text can read it with any certainty, and it may be said that they are interpreted rather than read. Such scripts often work best as mnemonic aids for oral texts or as outlines that will be fleshed out in speech.
Adinkra
Aztec script – Nahuatl (includes syllabic and logographic elements)
Birch-bark glyphs – Anishinaabemowin
Dongba – Naxi (often supplemented with the syllabic Geba script)
Emoji – used in electronic messages and web pages.
Ersu Shābā – Ersu
Kaidā glyphs
Lusona
Nsibidi – Ekoi, Efik/Ibibio, Igbo
Siglas poveiras
Sucke |
https://en.wikipedia.org/wiki/Vegetative%20reproduction | Vegetative reproduction (also known as vegetative propagation, vegetative multiplication or cloning) is any form of asexual reproduction occurring in plants in which a new plant grows from a fragment or cutting of the parent plant or specialized reproductive structures, which are sometimes called vegetative propagules.
Many plants naturally reproduce this way, but it can also be induced artificially. Horticulturists have developed asexual propagation techniques that use vegetative propagules to replicate plants. Success rates and difficulty of propagation vary greatly. Monocotyledons typically lack a vascular cambium, making them more challenging to propagate.
Background
Plant propagation is the process of plant reproduction of a species or cultivar, and it can be sexual or asexual. It can happen through the use of vegetative parts of the plants, such as leaves, stems, and roots to produce new plants or through growth from specialized vegetative plant parts.
While many plants reproduce by vegetative reproduction, they rarely exclusively use that method to reproduce. Vegetative reproduction is not evolutionarily advantageous; it does not allow for genetic diversity and could lead plants to accumulate deleterious mutations. Vegetative reproduction is favored when it allows plants to produce more offspring per unit of resource than reproduction through seed production. In general, juveniles of a plant are easier to propagate vegetatively.
Although most plants normally reproduce sexually, many can reproduce vegetatively, or can be induced to do so via hormonal treatments. This is because meristematic cells capable of cellular differentiation are present in many plant tissues.
Vegetative propagation is usually considered a cloning method. However, root cuttings of thornless blackberries (Rubus fruticosus) will revert to thorny type because the adventitious shoot develops from a cell that is genetically thorny. Thornless blackberry is a chimera, with the epidermal |
https://en.wikipedia.org/wiki/Fluctuation%20theorem | The fluctuation theorem (FT), which originated from statistical mechanics, deals with the relative probability that the entropy of a system which is currently away from thermodynamic equilibrium (i.e., maximum entropy) will increase or decrease over a given amount of time. While the second law of thermodynamics predicts that the entropy of an isolated system should tend to increase until it reaches equilibrium, it became apparent after the discovery of statistical mechanics that the second law is only a statistical one, suggesting that there should always be some nonzero probability that the entropy of an isolated system might spontaneously decrease; the fluctuation theorem precisely quantifies this probability.
Statement
Roughly, the fluctuation theorem relates to the probability distribution of the time-averaged irreversible entropy production, denoted Σ̄_t. The theorem states that, in systems away from equilibrium over a finite time t, the ratio between the probability that Σ̄_t takes on a value A and the probability that it takes the opposite value, −A, will be exponential in At.
In other words, for a finite non-equilibrium system in a finite time, the FT gives a precise mathematical expression for the probability that entropy will flow in a direction opposite to that dictated by the second law of thermodynamics.
Mathematically, the FT is expressed as:
P(Σ̄_t = A) / P(Σ̄_t = −A) = e^{At}.
This means that as the time or system size increases (since Σ̄_t is extensive), the probability of observing an entropy production opposite to that dictated by the second law of thermodynamics decreases exponentially. The FT is one of the few expressions in non-equilibrium statistical mechanics that is valid far from equilibrium.
Note that the FT does not state that the second law of thermodynamics is wrong or invalid. The second law of thermodynamics is a statement about macroscopic systems. The FT is more general. It can be applied to both microscopic and macroscopic systems. When applied to macroscopic systems, the FT |
https://en.wikipedia.org/wiki/Sosumi | Sosumi is an alert sound introduced by Jim Reekes in Apple Inc.'s Macintosh System 7 operating system in 1991. The name is derived from the phrase "so, sue me!" because of a long running court battle with Apple Corps, the similarly named music company, regarding the use of music in Apple Inc.'s computer products.
History
Sosumi is a short xylophone sample, which gained notoriety in computer folklore as a defiant pun name, in response to a long-running Apple Corps v Apple Computer trademark conflict. The sound was long included in subsequent versions of its computer OS releases. However, in 2020 it was replaced in macOS Big Sur.
During the development of System 7, the two companies concluded a settlement agreement from an earlier dispute when Apple added a sound synthesis chip to its Apple IIGS machine. As a result, Apple Computer was prohibited from using its trademark on "creative works whose principal content is music".
When new sounds for System 7 were created, the sounds were reviewed by Apple's Legal Department who objected that the new sound alert "chime" had a name that was "too musical", under a 1991 settlement. Jim Reekes, the creator of the new sound alerts for System 7, had grown frustrated with the legal scrutiny and first quipped it should be named "Let It Beep", a pun on "Let It Be". When someone remarked that that would not pass the Legal Department's approval, he remarked, "so sue me". After a brief reflection, he resubmitted the sound's name as sosumi (a homophone of "so sue me"). Careful to submit it in written form rather than spoken form to avoid pronunciation, he told the Legal Department that the name was Japanese and had nothing to do with music.
In macOS Big Sur, the original chime was replaced with a different sample, named Sonumi (presumably a homophone of "so new me", due to the change in versioning from macOS 10.15 to macOS 11). The original name was retained in the first public version of the OS, and was later changed to "Sonumi" as |
https://en.wikipedia.org/wiki/Modal%20logic | Modal logic is a kind of logic used to represent statements about necessity and possibility. It plays a major role in philosophy and related fields as a tool for understanding concepts such as knowledge, obligation, and causation. For instance, in epistemic modal logic, the formula □P can be used to represent the statement that P is known. In deontic modal logic, that same formula can represent that P is a moral obligation. Modal logic considers the inferences that modal statements give rise to. For instance, most epistemic logics treat the formula □P → P as a tautology, representing the principle that only true statements can count as knowledge.
Modal logics are formal systems that include unary operators such as ◇ and □, representing possibility and necessity respectively. For instance the modal formula ◇P can be read as "possibly P" while □P can be read as "necessarily P". In the standard relational semantics for modal logic, formulas are assigned truth values relative to a possible world. A formula's truth value at one possible world can depend on the truth values of other formulas at other accessible possible worlds. In particular, ◇P is true at a world if P is true at some accessible possible world, while □P is true at a world if P is true at every accessible possible world. A variety of proof systems exist which are sound and complete with respect to the semantics one gets by restricting the accessibility relation. For instance, the deontic modal logic D is sound and complete if one requires the accessibility relation to be serial.
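The relational semantics just described can be sketched in a few lines of Python; the worlds, accessibility relation, and valuation below are made-up illustrative data, not part of any standard system:

```python
worlds = {"w1", "w2", "w3"}
access = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": set()}    # accessibility relation
valuation = {"p": {"w2"}}                                    # p is true exactly at w2

def possibly(p: str, w: str) -> bool:
    """Diamond p holds at w if p is true at some accessible world."""
    return any(v in valuation[p] for v in access[w])

def necessarily(p: str, w: str) -> bool:
    """Box p holds at w if p is true at every accessible world."""
    return all(v in valuation[p] for v in access[w])

assert possibly("p", "w1")          # w2 is accessible from w1 and p holds there
assert not necessarily("p", "w1")   # p fails at the accessible world w3
assert necessarily("p", "w3")       # vacuously true: w3 accesses no worlds
```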
While the intuition behind modal logic dates back to antiquity, the first modal axiomatic systems were developed by C. I. Lewis in 1912. The now-standard relational semantics emerged in the mid twentieth century from work by Arthur Prior, Jaakko Hintikka, and Saul Kripke. Recent developments include alternative topological semantics such as neighborhood semantics as well as applications of the relational semantics beyond its original philosophical motiv |
https://en.wikipedia.org/wiki/List%20of%20content%20management%20systems | Content management systems (CMS) are used to organize and facilitate collaborative content creation. Many of them are built on top of separate content management frameworks. The list is limited to notable services.
Open source software
This section lists free and open-source software that can be installed and managed on a web server.
Systems listed on a light purple background are no longer in active development.
Java
Java packages/bundle
Microsoft ASP.NET
Perl
PHP
Python
Ruby on Rails
ColdFusion Markup Language (CFML)
JavaScript
Others
Software as a service (SaaS)
This section lists proprietary software that includes software, hosting, and support with a single vendor. This section includes free services.
Proprietary software
This section lists proprietary software to be installed and managed on a user's own server. This section includes freeware proprietary software.
Systems listed on a light purple background are no longer in active development.
Other content management frameworks
A content management framework (CMF) is a system that facilitates the use of reusable components or customized software for managing Web content. It shares aspects of a Web application framework and a content management system (CMS).
Below is a list of notable systems that claim to be CMFs.
See also
Comparison of web frameworks
Comparison of wiki software
Comparison of photo gallery software
References
Content management systems |
https://en.wikipedia.org/wiki/Nowhere%20continuous%20function | In mathematics, a nowhere continuous function, also called an everywhere discontinuous function, is a function that is not continuous at any point of its domain. If f is a function from real numbers to real numbers, then f is nowhere continuous if for each point x there is some ε > 0 such that for every δ > 0 we can find a point y such that |x − y| < δ and |f(x) − f(y)| ≥ ε. Therefore, no matter how close we get to any fixed point, there are even closer points at which the function takes not-nearby values.
More general definitions of this kind of function can be obtained, by replacing the absolute value by the distance function in a metric space, or by using the definition of continuity in a topological space.
Examples
Dirichlet function
One example of such a function is the indicator function of the rational numbers, also known as the Dirichlet function. This function is denoted as 1_ℚ and has domain and codomain both equal to the real numbers. By definition, 1_ℚ(x) is equal to 1 if x is a rational number and it is 0 otherwise.
More generally, if A is any subset of a topological space X such that both A and the complement of A are dense in X, then the real-valued function which takes the value 1 on A and 0 on the complement of A will be nowhere continuous. Functions of this type were originally investigated by Peter Gustav Lejeune Dirichlet.
Non-trivial additive functions
A function f : ℝ → ℝ is called an additive function if it satisfies Cauchy's functional equation: f(x + y) = f(x) + f(y).
For example, every map of the form x ↦ cx, where c is some constant, is additive (in fact, it is linear and continuous). Furthermore, every linear map is of this form (by taking c = f(1)).
Although every linear map is additive, not all additive maps are linear. An additive map is linear if and only if there exists a point at which it is continuous, in which case it is continuous everywhere. Consequently, every non-linear additive function is discontinuous at every point of its domain.
Nevertheless, the restriction of any additive function to any real scalar multiple of the rational number |
https://en.wikipedia.org/wiki/Radiation%20protection | Radiation protection, also known as radiological protection, is defined by the International Atomic Energy Agency (IAEA) as "The protection of people from harmful effects of exposure to ionizing radiation, and the means for achieving this". Exposure can be from a source of radiation external to the human body or due to internal irradiation caused by the ingestion of radioactive contamination.
Ionizing radiation is widely used in industry and medicine, and can present a significant health hazard by causing microscopic damage to living tissue. There are two main categories of ionizing radiation health effects. At high exposures, it can cause "tissue" effects, also called "deterministic" effects due to the certainty of them happening, conventionally indicated by the unit gray and resulting in acute radiation syndrome. For low level exposures there can be statistically elevated risks of radiation-induced cancer, called "stochastic effects" due to the uncertainty of them happening, conventionally indicated by the unit sievert.
Fundamental to radiation protection is the avoidance or reduction of dose using the simple protective measures of time, distance and shielding. The duration of exposure should be limited to that necessary, the distance from the source of radiation should be maximised, and the source or the target shielded wherever possible. To measure personal dose uptake in occupational or emergency exposure, for external radiation personal dosimeters are used, and for internal dose due to ingestion of radioactive contamination, bioassay techniques are applied.
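The three measures can be illustrated with a toy dose model (a sketch under simplifying assumptions: a point source, inverse-square fall-off with distance, and shielding described by a half-value layer; all numbers are invented, not regulatory values):

```python
import math

def dose_uSv(rate_at_1m_uSv_per_h: float, hours: float, distance_m: float,
             shield_cm: float, half_value_layer_cm: float) -> float:
    """Illustrative dose: time * rate, with the rate reduced by distance
    (inverse-square law) and by shielding (halved per half-value layer)."""
    rate = rate_at_1m_uSv_per_h / distance_m ** 2
    rate *= 0.5 ** (shield_cm / half_value_layer_cm)
    return rate * hours

base = dose_uSv(100.0, 1.0, 1.0, 0.0, 1.0)
assert math.isclose(dose_uSv(100.0, 0.5, 1.0, 0.0, 1.0), base / 2)  # half the time
assert math.isclose(dose_uSv(100.0, 1.0, 2.0, 0.0, 1.0), base / 4)  # double the distance
```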
For radiation protection and dosimetry assessment the International Commission on Radiological Protection (ICRP) and International Commission on Radiation Units and Measurements (ICRU) publish recommendations and data which are used to calculate the biological effects on the human body of certain levels of radiation, and thereby advise acceptable dose uptake limits.
Principles
The ICRP recommends, devel |
https://en.wikipedia.org/wiki/Horn%20clause | In mathematical logic and logic programming, a Horn clause is a logical formula of a particular rule-like form that gives it useful properties for use in logic programming, formal specification, universal algebra and model theory. Horn clauses are named for the logician Alfred Horn, who first pointed out their significance in 1951.
Definition
A Horn clause is a disjunctive clause (a disjunction of literals) with at most one positive, i.e. unnegated, literal.
Conversely, a disjunction of literals with at most one negated literal is called a dual-Horn clause.
A Horn clause with exactly one positive literal is a definite clause or a strict Horn clause; a definite clause with no negative literals is a unit clause, and a unit clause without variables is a fact.
A Horn clause without a positive literal is a goal clause.
Note that the empty clause, consisting of no literals (which is equivalent to false) is a goal clause.
These three kinds of Horn clauses are illustrated in the following propositional example:
Definite clause: ¬p ∨ ¬q ∨ ¬t ∨ u, equivalently written u ← p ∧ q ∧ t, read as "if p and q and t all hold, then u holds";
Fact: u, equivalently u ← true, read as "u holds";
Goal clause: ¬p ∨ ¬q ∨ ¬t, equivalently false ← p ∧ q ∧ t, read as "show that p and q and t all hold".
All variables in a clause are implicitly universally quantified with the scope being the entire clause. Thus, for example:
¬ human(X) ∨ mortal(X)
stands for:
∀X( ¬ human(X) ∨ mortal(X) ),
which is logically equivalent to:
∀X ( human(X) → mortal(X) ).
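One reason definite clauses are computationally convenient can be sketched with a tiny propositional forward-chaining loop in Python; the rules and facts here are made up for illustration:

```python
# Definite clauses written as (body, head) pairs; facts are heads with empty bodies.
rules = [
    ({"human"}, "mortal"),                  # mortal <- human
    ({"mortal", "greek"}, "tragic_hero"),   # tragic_hero <- mortal, greek
]
facts = {"human", "greek"}

def forward_chain(rules, facts):
    """Repeatedly add the head of any rule whose body is already derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

assert "mortal" in forward_chain(rules, facts)
assert "tragic_hero" in forward_chain(rules, facts)
```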
Significance
Horn clauses play a basic role in constructive logic and computational logic. They are important in automated theorem proving by first-order resolution, because the resolvent of two Horn clauses is itself a Horn clause, and the resolvent of a goal clause and a definite clause is a goal clause. These properties of Horn clauses can lead to greater efficiency of proving a theorem: the goal clause is the negation of this theorem; see the goal clause in the propositional example above. Intuitively, if we wish to prove φ, we assume ¬φ (the goal) and check whether such an assumption leads to a contradiction. If so, then φ must hold. This way, a mechanical proving tool needs to maintain only one set of formulas (assumptions
https://en.wikipedia.org/wiki/Free%20object | In mathematics, the idea of a free object is one of the basic concepts of abstract algebra. Informally, a free object over a set A can be thought of as being a "generic" algebraic structure over A: the only equations that hold between elements of the free object are those that follow from the defining axioms of the algebraic structure. Examples include free groups, tensor algebras, or free lattices.
The concept is a part of universal algebra, in the sense that it relates to all types of algebraic structure (with finitary operations). It also has a formulation in terms of category theory, although this is in yet more abstract terms.
Definition
Free objects are the direct generalization to categories of the notion of basis in a vector space. A linear function between vector spaces is entirely determined by its values on a basis of the vector space. The following definition translates this to any category.
A concrete category is a category that is equipped with a faithful functor to Set, the category of sets. Let C be a concrete category with a faithful functor F : C → Set. Let X be a set (that is, an object in Set), which will be the basis of the free object to be defined. A free object on X is a pair (A, i) consisting of an object A in C and an injection i : X → F(A) (called the canonical injection), that satisfies the following universal property:
For any object B in C and any map between sets f : X → F(B), there exists a unique morphism g : A → B in C such that f = F(g) ∘ i. That is, the following diagram commutes:
If free objects exist in C, the universal property implies every map between two sets induces a unique morphism between the free objects built on them, and this defines a functor Free : Set → C. It follows that, if free objects exist in C, the functor Free, called the free functor, is a left adjoint to the faithful functor F; that is, there is a bijection Hom_C(Free(X), B) ≅ Hom_Set(X, F(B)).
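For the simplest concrete case, the free monoid on a set X (words over X), the universal property can be sketched in Python: any map from the generators into a monoid extends uniquely to a homomorphism on words (the names below are illustrative):

```python
from functools import reduce

def canonical_injection(x):
    return (x,)                      # send a generator x to the one-letter word (x,)

def extend(f, unit, op):
    """Extend f : X -> M to the unique monoid homomorphism from words over X to (M, op, unit)."""
    return lambda word: reduce(op, (f(x) for x in word), unit)

# Target monoid: the integers under addition; f sends every generator to 1,
# so the extension h measures word length.
f = lambda x: 1
h = extend(f, 0, lambda m, n: m + n)
assert h(("a", "b", "b")) == 3                    # homomorphism value on a word
assert h(canonical_injection("a")) == f("a")      # h ∘ i = f, the defining property
assert h(()) == 0                                 # the empty word maps to the unit
```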
Examples
The creation of free objects proceeds in two steps. For algebras that conform to the associative law, the first step is to consider the collection of all possible words formed |
https://en.wikipedia.org/wiki/Nucleomorph | Nucleomorphs are small, vestigial eukaryotic nuclei found between the inner and outer pairs of membranes in certain plastids. They are thought to be vestiges of primitive red and green algal nuclei that were engulfed by a larger eukaryote. Because the nucleomorph lies between two sets of membranes, nucleomorphs support the endosymbiotic theory and are evidence that the plastids containing them are complex plastids. Having two sets of membranes indicates that the plastid, a prokaryote, was engulfed by a eukaryote, an alga, which was then engulfed by another eukaryote, the host cell, making the plastid an example of secondary endosymbiosis.
Organisms with known nucleomorphs
So far, only two monophyletic groups of organisms are known to contain plastids with a vestigial nucleus or nucleomorph: the cryptomonads of the supergroup Chromista and the chlorarachniophytes of the supergroup Rhizaria, both of which have examples of sequenced nucleomorph genomes. Studies of the genomic organization and of the molecular phylogeny have shown that the nucleomorph of the cryptomonads used to be the nucleus of a red alga, whereas the nucleomorph of the chlorarchniophytes was the nucleus of a green alga. In both groups of organisms the plastids originate from engulfed photoautotrophic eukaryotes.
Of the two known plastids that contain nucleomorphs, both have four membranes, the nucleomorph residing in the periplastidial compartment, evidence of being engulfed by a eukaryote through phagocytosis.
In addition, some species within the dinoflagellates that have gone through tertiary endosymbiosis also have endosymbionts with both a nucleus and mitochondria present.
Nucleomorph genome
Nucleomorphs represent some of the smallest genomes ever sequenced. After the red or green alga was engulfed by a cryptomonad or chlorarachniophyte, respectively, its genome was reduced. The nucleomorph genomes of both cryptomonads and chlorarachniophytes converged upon a similar size from larger genomes. |
https://en.wikipedia.org/wiki/Ultrametric%20space | In mathematics, an ultrametric space is a metric space in which the triangle inequality is strengthened to d(x, z) ≤ max{d(x, y), d(y, z)}. Sometimes the associated metric is also called a non-Archimedean metric or super-metric.
Formal definition
An ultrametric on a set M is a real-valued function d : M × M → ℝ
(where ℝ denotes the real numbers), such that for all x, y, z ∈ M:
d(x, y) ≥ 0;
d(x, y) = d(y, x) (symmetry);
d(x, x) = 0;
if d(x, y) = 0 then x = y;
d(x, z) ≤ max{d(x, y), d(y, z)} (strong triangle inequality or ultrametric inequality).
An ultrametric space is a pair (M, d) consisting of a set M together with an ultrametric d on M, which is called the space's associated distance function (also called a metric).
If d satisfies all of the conditions except possibly condition 4 then d is called an ultrapseudometric on M. An ultrapseudometric space is a pair (M, d) consisting of a set M and an ultrapseudometric d on M.
In the case when M is an Abelian group (written additively) and d is generated by a length function ‖·‖ (so that d(x, y) = ‖x − y‖), the last property can be made stronger using the Krull sharpening to:
‖x + y‖ ≤ max{‖x‖, ‖y‖}, with equality if ‖x‖ ≠ ‖y‖.
We want to prove that if ‖x + y‖ ≤ max{‖x‖, ‖y‖}, then the equality occurs if ‖x‖ ≠ ‖y‖. Without loss of generality, let us assume that ‖x‖ > ‖y‖. This implies that ‖x + y‖ ≤ ‖x‖. But we can also compute ‖x‖ = ‖(x + y) − y‖ ≤ max{‖x + y‖, ‖y‖}. Now, the value of max{‖x + y‖, ‖y‖} cannot be ‖y‖, for if that is the case, we have ‖x‖ ≤ ‖y‖ contrary to the initial assumption. Thus, max{‖x + y‖, ‖y‖} = ‖x + y‖, and ‖x‖ ≤ ‖x + y‖. Using the initial inequality, we have ‖x‖ ≤ ‖x + y‖ ≤ ‖x‖ and therefore ‖x + y‖ = ‖x‖.
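A concrete ultrametric to experiment with is the 2-adic metric on the integers; the small Python check below (an illustration, not part of the article) verifies the strong triangle inequality on a sample of points:

```python
def abs_2adic(n: int) -> float:
    """2-adic absolute value: 2**(-v) where 2**v is the largest power of 2 dividing n; |0| = 0."""
    if n == 0:
        return 0.0
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return 2.0 ** -v

def d(x: int, y: int) -> float:
    return abs_2adic(x - y)

# d(x, z) <= max(d(x, y), d(y, z)) for every triple in a small range
for x in range(-8, 9):
    for y in range(-8, 9):
        for z in range(-8, 9):
            assert d(x, z) <= max(d(x, y), d(y, z))
```

Since the distances are exact powers of two (or zero), the floating-point comparison needs no tolerance.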
Properties
From the above definition, one can conclude several typical properties of ultrametrics. For example, for all x, y, z ∈ M, at least one of the three equalities d(x, y) = d(y, z) or d(x, z) = d(y, z) or d(x, y) = d(x, z) holds. That is, every triple of points in the space forms an isosceles triangle, so the whole space is an isosceles set.
Defining the (open) ball of radius r > 0 centred at x ∈ M as B(x; r) = {y ∈ M : d(x, y) < r}, we have the following properties:
Every point inside a ball is its center, i.e. if d(x, y) < r then B(x; r) = B(y; r).
Intersecting balls are contained in each other, i.e. if B(x; r) ∩ B(y; s) is non-empty then either B(x; r) ⊆ B(y; s) or B(y; s) ⊆ B(x; r).
All balls of strictly positive radius are both open and closed sets in the induced topology. That is, open balls are also closed, and closed balls (replace wit |
https://en.wikipedia.org/wiki/Rule%20of%2072 | In finance, the rule of 72, the rule of 70 and the rule of 69.3 are methods for estimating an investment's doubling time. The rule number (e.g., 72) is divided by the interest percentage per period (usually years) to obtain the approximate number of periods required for doubling. Although scientific calculators and spreadsheet programs have functions to find the accurate doubling time, the rules are useful for mental calculations and when only a basic calculator is available.
These rules apply to exponential growth and are therefore used for compound interest as opposed to simple interest calculations. They can also be used for decay to obtain a halving time. The choice of number is mostly a matter of preference: 69 is more accurate for continuous compounding, while 72 works well in common interest situations and is more easily divisible.
There are a number of variations to the rules that improve accuracy. For periodic compounding, the exact doubling time for an interest rate of r percent per period is
t = ln(2) / ln(1 + r/100),
where t is the number of periods required. The formula above can be used for more than calculating the doubling time. If one wants to know the tripling time, for example, replace the constant 2 in the numerator with 3. As another example, if one wants to know the number of periods it takes for the initial value to rise by 50%, replace the constant 2 with 1.5.
Using the rule to estimate compounding periods
To estimate the number of periods required to double an original investment, divide the most convenient "rule-quantity" by the expected growth rate, expressed as a percentage.
For instance, if you were to invest $100 with compounding interest at a rate of 9% per annum, the rule of 72 gives 72/9 = 8 years required for the investment to be worth $200; an exact calculation gives ln(2)/ln(1+0.09) = 8.0432 years.
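A quick comparison of the rule against the exact formula above (a sketch; the function names are illustrative):

```python
import math

def doubling_time_exact(rate_percent: float) -> float:
    """Exact doubling time in periods for periodic compounding."""
    return math.log(2) / math.log(1 + rate_percent / 100)

def doubling_time_rule_of_72(rate_percent: float) -> float:
    return 72 / rate_percent

print(doubling_time_rule_of_72(9))          # 8.0 periods
print(round(doubling_time_exact(9), 4))     # 8.0432, as in the example above
```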
Similarly, to determine the time it takes for the value of money to halve at a given rate, divide the rule quantity by that rate.
To determine the time |
https://en.wikipedia.org/wiki/Return-to-zero | Return-to-zero (RZ or RTZ) describes a line code used in telecommunications signals in which the signal drops (returns) to zero between each pulse. This takes place even if a number of consecutive 0s or 1s occur in the signal. The signal is self-clocking, meaning that a separate clock does not need to be sent alongside the signal, but the scheme uses twice the bandwidth to achieve the same data rate as compared to the non-return-to-zero format.
The "zero" between each bit is a neutral or rest condition, such as a zero amplitude in pulse-amplitude modulation (PAM), zero phase shift in phase-shift keying (PSK), or mid-frequency in frequency-shift keying (FSK).
That "zero" condition is typically halfway between the significant condition representing a 1 bit and the other significant condition representing a 0 bit.
Although return-to-zero (RZ) contains a provision for synchronization, it still has a DC component resulting in “baseline wander” during long strings of 0 or 1 bits, just like the line code non-return-to-zero.
Return-to-zero in optical communication
Return to zero, inverted
Return-to-zero, inverted (RZI) is a method of mapping for transmission. The two-level RZI signal has a pulse (shorter than a clock cycle) if the binary signal is 0, and no pulse if the binary signal is 1. It is used (with a pulse 3/16 of a bit long) by the IrDA serial infrared (SIR) physical layer specification. The required bandwidth for this kind of modulation is BW = R, where R is the data rate.
Bipolar return-to-zero (bipolar RZ)
For bipolar return-to-zero (bipolar RZ), a binary one is encoded as +V volts, a binary zero is encoded as −V volts, and 0 volt is used to provide padding and separation between bits.
Bipolar return-to-zero encoding is used by the ARINC 429 bus.
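A minimal sketch of the encoding described above, using two samples per bit period (a pulse of +V or −V for the first half, then a return to 0 V); the sampling granularity is an assumption made for illustration:

```python
def bipolar_rz_encode(bits, v=1.0):
    """Encode a bit sequence as bipolar RZ samples, two samples per bit period."""
    samples = []
    for bit in bits:
        samples.append(v if bit else -v)   # pulse: +V for a 1, -V for a 0
        samples.append(0.0)                # return to zero for the rest of the period
    return samples

assert bipolar_rz_encode([1, 0, 1]) == [1.0, 0.0, -1.0, 0.0, 1.0, 0.0]
```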
See also
Other line codes that have 3 states:
Hybrid ternary code
Bipolar encoding
MLT-3 encoding
4B3T
References
Further reading
Encodings
Line codes |
https://en.wikipedia.org/wiki/AX.25 | AX.25 (Amateur X.25) is a data link layer protocol originally derived from layer 2 of the X.25 protocol suite and designed for use by amateur radio operators. It is used extensively on amateur packet radio networks.
AX.25 v2.0 and later occupies the data link layer, the second layer of the OSI model. It is responsible for establishing link-layer connections, transferring data encapsulated in frames between nodes, and detecting errors introduced by the communications channel. As AX.25 is a pre-OSI-model protocol, the original specification was not written to cleanly separate into OSI layers. This was rectified with version 2.0 (1984), which assumes compliance with OSI level 2.
AX.25 v2.2 (1998) added features to improve efficiency, especially at higher data rates. Stations can automatically negotiate payload sizes larger than the previous limitation of 256 bytes. Extended sequence numbers (7 vs. 3 bits) allow a larger window size, the number of frames that can be sent before waiting for acknowledgement. "Selective Reject" allows only the missing frames to be resent, rather than having to wastefully resend frames that have already been received successfully. Despite all these advantages, few implementations have been updated to include these improvements published more than 20 years ago. The only known complete implementation of v2.2, at this time (2020), is the Dire Wolf software TNC.
AX.25 is commonly used as the data link layer for network layer protocols such as IPv4, with TCP used on top of that. AX.25 supports a limited form of source routing. Although it is possible to build AX.25 switches similar to the way Ethernet switches work, this has not yet been accomplished.
Specification
AX.25 does not define a physical layer implementation. In practice 1200 baud Bell 202 tones and 9600 baud G3RUH DFSK are almost exclusively used on VHF and UHF. On HF the standard transmission mode is 300 baud Bell 103 tones. At the physical layer, AX.25 defines only a "physical |
https://en.wikipedia.org/wiki/Directory%20service | In computing, a directory service or name service maps the names of network resources to their respective network addresses. It is a shared information infrastructure for locating, managing, administering and organizing everyday items and network resources, which can include volumes, folders, files, printers, users, groups, devices, telephone numbers and other objects. A directory service is a critical component of a network operating system. A directory server or name server is a server which provides such a service. Each resource on the network is considered an object by the directory server. Information about a particular resource is stored as a collection of attributes associated with that resource or object.
A directory service defines a namespace for the network. The namespace is used to assign a name (unique identifier) to each of the objects. Directories typically have a set of rules determining how network resources are named and identified, which usually includes a requirement that the identifiers be unique and unambiguous. When using a directory service, a user does not have to remember the physical address of a network resource; providing a name locates the resource. Some directory services include access control provisions, limiting the availability of directory information to authorized users.
Comparison with relational databases
Several things distinguish a directory service from a relational database. Data can be made redundant if it aids performance (e.g. by repeating values through rows in a table instead of relating them to the contents of a different table through a key, which technique is called denormalization; another technique could be the utilization of replicas for increasing actual throughput).
Directory schemas are object classes, attributes, name bindings and knowledge (namespaces) where an object class has:
Must - attributes that each instance must have
May - attributes which can be defined for an instance but can be omitted, wit |
https://en.wikipedia.org/wiki/CIX%20%28website%29 | CIX (originally Compulink Information eXchange) is an online based conferencing discussion system and was one of the earliest British Internet service providers. Founded in 1983 by Frank and Sylvia Thornley, it began as a FidoNet bulletin board system, but in 1987 was relaunched commercially as CIX. At the core of the service were many thousands of "conferences" - groups established by users to discuss particular topics, conceptually not unlike newsgroups but limited to CIX subscribers (who sometimes describe themselves as 'Cixen'). These conferences still exist today although the CIX service has since expanded to include many other features. The service is funded by a monthly subscription charge rather than by advertising.
In 1988 it provided the first commercial Internet email and Usenet access in the UK. CIX then grew rapidly, reaching a peak of more than 16,000 users in 1994, before starting to lose customers to the newly formed Internet service providers that offered free access to the mass market using 0845 dial-up, such as Demon (which was started by Cixen Cliff Stanford, whose CIX nickname was 'Demon'), Pipex, AOL and Freeserve. In 2011, it still had almost 9,000 users.
In its heyday, CIX was one of the UK's premier online locations for both technical and social interaction. It hosted several official online support areas for companies such as Borland and Novell and counted among its subscribers many of the UK's technology journalists (some of them wooed with free accounts), which ensured regular mention in the computing press.
The Liberal Democrats have used CIX as a conferencing system and a branded version of the off-line reader Ameol (A Most Excellent Offline-reader) is provided for their use.
Later company history
In 1996 the Thornleys decided to expand CIX's services to include full 0845 dialup Internet access known as CIX Internet. However, take up was limited (possibly due to an above-average cost) even though technically it was rated for many y |
https://en.wikipedia.org/wiki/Allopatric%20speciation | Allopatric speciation () – also referred to as geographic speciation, vicariant speciation, or its earlier name the dumbbell model – is a mode of speciation that occurs when biological populations become geographically isolated from each other to an extent that prevents or interferes with gene flow.
Various geographic changes can arise such as the movement of continents, and the formation of mountains, islands, bodies of water, or glaciers. Human activity such as agriculture or developments can also change the distribution of species populations. These factors can substantially alter a region's geography, resulting in the separation of a species population into isolated subpopulations. The vicariant populations then undergo genetic changes as they become subjected to different selective pressures, experience genetic drift, and accumulate different mutations in the separated populations' gene pools. The barriers prevent the exchange of genetic information between the two populations leading to reproductive isolation. If the two populations come into contact they will be unable to reproduce—effectively speciating. Other isolating factors such as population dispersal leading to emigration can cause speciation (for instance, the dispersal and isolation of a species on an oceanic island) and is considered a special case of allopatric speciation called peripatric speciation.
Allopatric speciation is typically subdivided into two major models: vicariance and peripatric. Both models differ from one another by virtue of their population sizes and geographic isolating mechanisms. The terms allopatry and vicariance are often used in biogeography to describe the relationship between organisms whose ranges do not significantly overlap but are immediately adjacent to each other—they do not occur together or only occur within a narrow zone of contact. Historically, the language used to refer to modes of speciation directly reflected biogeographical distributions. As such, allopa |
https://en.wikipedia.org/wiki/Compressive%20strength | In mechanics, compressive strength (or compression strength) is the capacity of a material or structure to withstand loads tending to reduce size (as opposed to tensile strength which withstands loads tending to elongate). In other words, compressive strength resists compression (being pushed together), whereas tensile strength resists tension (being pulled apart). In the study of strength of materials, tensile strength, compressive strength, and shear strength can be analyzed independently.
Some materials fracture at their compressive strength limit; others deform irreversibly, so a given amount of deformation may be considered as the limit for compressive load. Compressive strength is a key value for design of structures.
Compressive strength is often measured on a universal testing machine. Measurements of compressive strength are affected by the specific test method and conditions of measurement. Compressive strengths are usually reported in relationship to a specific technical standard.
Introduction
When a specimen of material is loaded in such a way that it extends it is said to be in tension. On the other hand, if the material compresses and shortens it is said to be in compression.
On an atomic level, the molecules or atoms are forced apart when in tension whereas in compression they are forced together. Since atoms in solids always try to find an equilibrium position and spacing relative to other atoms, forces arise throughout the entire material which oppose both tension and compression. The phenomena prevailing on an atomic level are therefore similar.
The "strain" is the relative change in length under applied stress; positive strain characterizes an object under tension load which tends to lengthen it, and a compressive stress that shortens an object gives negative strain. Tension tends to pull small sideways deflections back into alignment, while compression tends to amplify such deflection into buckling.
Compressive strength is measured on materi |
https://en.wikipedia.org/wiki/Source-code%20compatibility | Source-code compatibility (source-compatible) means that a program can run on computers (or operating systems), independently of binary-code compatibility and that the source code is needed for portability.
The source code must be compiled before running, unless the computer used has an interpreter for the language at hand. The term is also used for assembly language compatibility, where the source is a human-readable form of machine code that must be converted into numerical (i.e. executable) machine code by an assembler. This is different from binary-code compatibility, where no recompilation (or assembly) is needed.
Source compatibility is a major issue in the developing of computer programs. For example, most Unix systems are source-compatible, as long as one uses only standard libraries. Microsoft Windows systems are source-compatible across one major family (the Windows NT family, from NT 3.1 through Windows 11, or the family that includes Windows 95, Windows 98, and Windows Me), with partial source compatibility between the two families.
See also
Backward compatibility
Source upgrade
References
Backward compatibility
Source code |
https://en.wikipedia.org/wiki/RS-422 | RS-422, also known as TIA/EIA-422, is a technical standard originated by the Electronic Industries Alliance, first issued in 1975, that specifies electrical characteristics of a digital signaling circuit. It was meant to be the foundation of a suite of standards that would replace the older RS-232C standard with standards that offered much higher speed, better immunity from noise, and longer cable lengths. RS-422 systems can transmit data at rates as high as 10 Mbit/s, or may be sent on cables as long as at lower rates. It is closely related to RS-423, which uses the same signaling systems but on a different wiring arrangement.
RS-422 specifies differential signaling, with every data line paired with a dedicated return line. It is the voltage difference between these two lines that define the mark and space, rather than, as in RS-232, the difference in voltage between a data line and a local ground. As the ground voltage can differ at either end of the cable, this required RS-232 to use signals with voltage magnitudes greater than 5 volts. Moving to dedicated return lines and always defining ground in reference to the sender allows RS-422 to use 0.4 V, allowing it to run at much higher speeds. RS-423 differs primarily in that it has a single return pin instead of one for each data pin.
Standard scope
RS-422 is the common short form title of American National Standards Institute (ANSI) standard ANSI/TIA/EIA-422-B Electrical Characteristics of Balanced Voltage Differential Interface Circuits and its international equivalent ITU-T Recommendation T-REC-V.11, also known as X.27. These technical standards specify the electrical characteristics of the balanced voltage digital interface circuit. RS-422 provides for data transmission, using balanced, or differential, signaling, with unidirectional/non-reversible, terminated or non-terminated transmission lines, point to point, or multi-drop. In contrast to EIA-485, RS-422/V.11 does not allow multiple drivers but only m |
https://en.wikipedia.org/wiki/Wait%20state | A wait state is a delay experienced by a computer processor when accessing external memory or another device that is slow to respond.
Computer microprocessors generally run much faster than the computer's other subsystems, which hold the data the CPU reads and writes. Even memory, the fastest of these, cannot supply data as fast as the CPU could process it. In an example from 2011, typical PC processors like the Intel Core 2 and the AMD Athlon 64 X2 run with a clock of several GHz, which means that one clock cycle is less than 1 nanosecond (typically about 0.3 ns to 0.5 ns on modern desktop CPUs), while main memory has a latency of about 15–30 ns. Some second-level CPU caches run slower than the processor core.
When the processor needs to access external memory, it starts placing the address of the requested information on the address bus. It then must wait for the answer, which may come back tens if not hundreds of cycles later. Each of the cycles spent waiting is called a wait state.
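A back-of-the-envelope model of what wait states cost (all numbers are illustrative, not taken from any datasheet):

```python
def bus_access_time_ns(clock_ghz: float, base_cycles: int, wait_states: int) -> float:
    """Time for one external access: base bus cycles plus inserted wait states,
    all counted at the CPU clock."""
    cycle_ns = 1.0 / clock_ghz
    return (base_cycles + wait_states) * cycle_ns

# A 2 GHz core (0.5 ns per cycle) reaching ~30 ns main memory must insert dozens
# of wait states per access:
print(bus_access_time_ns(2.0, 2, 0))    # 1.0 ns with no wait states
print(bus_access_time_ns(2.0, 2, 58))   # 30.0 ns once 58 wait states are inserted
```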
Wait states are a pure waste of a processor's performance. Modern designs try to eliminate or hide them using a variety of techniques: CPU caches, instruction pipelines, instruction prefetch, branch prediction, simultaneous multithreading and others. No single technique is 100% successful, but together they can significantly reduce the problem.
Energy conservation
Wait states can be used to reduce the energy consumption of a processor, by allowing the main processor clock to either slow down or temporarily pause during the wait state if the CPU has no other work to do. Rather than spinning uselessly in a tight loop waiting for data, sporadically reducing the clock speed in this manner helps to keep the processor core cool and to extend battery life in portable computing devices.
Alternative meaning on IBM mainframes
On IBM mainframes, the term wait state is used with a different meaning. A wait state refers to a CPU being halted, possibly due to some kind of serious error condition ( |
https://en.wikipedia.org/wiki/Adaptive%20Communication%20Environment | The Adaptive Communication Environment (ACE) is an open source software framework used for network programming. It provides a set of object-oriented C++ classes designed to help address the inherent complexities and challenges in network programming by preventing common errors.
History
ACE was initially developed by Douglas C. Schmidt during his graduate work at the University of California, Irvine. Development followed him to the Washington University in St. Louis, where he was employed. ACE is open-source software released by WU's Distributed Object Computing (DOC) Group. Its development continued in the Institute for Software Integrated Systems (ISIS) at Vanderbilt University.
Features
ACE provides a standardized usage for operating system/machine specific features. It provides common data types and methods to access the powerful but complex features of modern operating systems. These include: inter-process communication, thread management, efficient memory management, etc.
It was designed to be portable and provide a common framework. The same code will work on most Unixes, Windows, VxWorks, QNX, OpenVMS, etc., with minimal changes. Due to this cross-platform support, it has been widely used in the development of communication software. Some of the successful projects that have used ACE include: Motorola Iridium satellites, the Boeing Wedgetail Australian airborne early warning & control (AEW&C) system, and others.
ACE makes use of software design patterns.
See also
Communication software
Component-integrated ACE ORB (CIAO, a CORBA implementation)
Cross-platform support middleware
TAO (software)
References
External links
Distributed Object Computing (DOC) Group website
Institute for Software Integrated Systems (ISIS) website
ACE Doxygen reference
ACE github code repository
Application programming interfaces
C++ libraries
Cross-platform software |
https://en.wikipedia.org/wiki/Mathematics%20and%20God | Connections between mathematics and God include the use of mathematics in arguments about the existence of God and about whether belief in God is beneficial.
Mathematical arguments for God's existence
In the 1070s, Anselm of Canterbury, an Italian medieval philosopher and theologian, created an ontological argument which sought to use logic to prove the existence of God. A more elaborate version was given by Gottfried Leibniz in the early eighteenth century. Kurt Gödel created a formalization of Leibniz' version, known as Gödel's ontological proof.
A more recent argument was made by Stephen D. Unwin in 2003, who suggested the use of Bayesian probability to estimate the probability of God's existence.
Mathematical arguments for belief
A common application of decision theory to the belief in God is Pascal's wager, published by Blaise Pascal in his 1669 work Pensées. The application was a defense of Christianity stating that "If God does not exist, the Atheist loses little by believing in him and gains little by not believing. If God does exist, the Atheist gains eternal life by believing and loses an infinite good by not believing". The atheist's wager has been proposed as a counterargument to Pascal's Wager.
See also
Existence of God
Further reading
Cohen, Daniel J., Equations from God: Pure Mathematics and Victorian Faith, Johns Hopkins University Press, 2007 .
Livio, Mario, Is God a Mathematician?, Simon & Schuster, 2011 .
Ransford, H. Chris, God and the Mathematics of Infinity: What Irreducible Mathematics Says about Godhood, Columbia University Press, 2017 .
References
Mathematics and culture
God
Arguments against the existence of God
Arguments for the existence of God |
https://en.wikipedia.org/wiki/Ferdinand%20von%20Lindemann | Carl Louis Ferdinand von Lindemann (12 April 1852 – 6 March 1939) was a German mathematician, noted for his proof, published in 1882, that π (pi) is a transcendental number, meaning it is not a root of any polynomial with rational coefficients.
Life and education
Lindemann was born in Hanover, the capital of the Kingdom of Hanover. His father, Ferdinand Lindemann, taught modern languages at a Gymnasium in Hanover. His mother, Emilie Crusius, was the daughter of the Gymnasium's headmaster. The family later moved to Schwerin, where young Ferdinand attended school.
He studied mathematics at Göttingen, Erlangen, and Munich. At Erlangen he received a doctorate, supervised by Felix Klein, on non-Euclidean geometry. Lindemann subsequently taught in Würzburg and at the University of Freiburg. During his time in Freiburg, Lindemann devised his proof that π is a transcendental number (see Lindemann–Weierstrass theorem). After his time in Freiburg, Lindemann transferred to the University of Königsberg. While a professor in Königsberg, Lindemann acted as supervisor for the doctoral theses of the mathematicians David Hilbert, Hermann Minkowski, and Arnold Sommerfeld.
Transcendence proof
In 1882, Lindemann published the result for which he is best known, the transcendence of π. His methods were similar to those used nine years earlier by Charles Hermite to show that e, the base of natural logarithms, is transcendental. Before the publication of Lindemann's proof, it was known that π was irrational, as Johann Heinrich Lambert had proved in the 1760s.
References
External links
Lindemann, F. "Über die Zahl π", Mathematische Annalen 20 (1882): pp. 213–225.
1852 births
1939 deaths
19th-century German mathematicians
20th-century German mathematicians
Squaring the circle
Number theorists
Scientists from Hanover
People from the Kingdom of Hanover
University of Göttingen alumni
University of Erlangen-Nuremberg alumni
Ludwig Maximilian University of Munich alumni
Aca |
https://en.wikipedia.org/wiki/Software%20bloat | Software bloat is a process whereby successive versions of a computer program become perceptibly slower, use more memory, disk space or processing power, or have higher hardware requirements than the previous version, while making only dubious user-perceptible improvements or suffering from feature creep. The term is not applied consistently; it is often used as a pejorative by end users (bloatware) to describe undesired user interface changes even if those changes had little or no effect on the hardware requirements. In long-lived software, perceived bloat can occur from the software servicing a large, diverse marketplace with many differing requirements. Most end users will feel they only need some limited subset of the available functions, and will regard the others as unnecessary bloat, even if end users with different requirements require those functions.
Actual (measurable) bloat can occur due to de-emphasising algorithmic efficiency in favour of other concerns like developer productivity, or possibly through the introduction of new layers of abstraction like a virtual machine or other scripting engine for the purposes of convenience when developer constraints are reduced. The perception of improved developer productivity, in the case of practising development within virtual machine environments, comes from the developers no longer taking resource constraints and usage into consideration during design and development; this allows the product to be completed faster but it results in increases to the end user's hardware requirements to compensate.
The term "bloatware" is also used to describe unwanted pre-installed software or bundled programs.
Types of bloat
Program bloat
In computer programming, code bloat refers to the presence of program code (source code or machine code) that is perceived as unnecessarily long, slow, or otherwise wasteful of resources.
Causes
Software inefficiency
Software developers involved in the industry during the 1970s had sev |
https://en.wikipedia.org/wiki/Restriction%20modification%20system | The restriction modification system (RM system) is found in bacteria and other prokaryotic organisms, and provides a defense against foreign DNA, such as that borne by bacteriophages.
Bacteria have restriction enzymes, also called restriction endonucleases, which cleave double stranded DNA at specific points into fragments, which are then degraded further by other endonucleases. This prevents infection by effectively destroying the foreign DNA introduced by an infectious agent (such as a bacteriophage). Approximately one-quarter of known bacteria possess RM systems and of those about one-half have more than one type of system.
As the sequences recognized by the restriction enzymes are very short, the bacterium itself will almost certainly contain some within its genome. In order to prevent destruction of its own DNA by the restriction enzymes, methyl groups are added. These modifications must not interfere with the DNA base-pairing, and therefore, usually only a few specific bases are modified on each strand.
Endonucleases cleave internal/non-terminal phosphodiester bonds. They do so only after recognising specific sequences in DNA which are usually 4–6 base pairs long, and often palindromic.
History
The RM system was first discovered by Salvatore Luria and Mary Human in 1952 and 1953. They found that bacteriophage growing within an infected bacterium could be modified, so that upon their release and re-infection of a related bacterium the bacteriophage's growth is restricted (inhibited; also described by Luria in his autobiography on pages 45 and 99 in 1984). In 1953, Jean Weigle and Giuseppe Bertani reported similar examples of host-controlled modification using different bacteriophage system. Later work by Daisy Roulland-Dussoix and Werner Arber in 1962 and many other subsequent workers led to the understanding that restriction was due to attack and breakdown of the modified bacteriophage's DNA by specific enzymes of the recipient bacteria. Further work by H |
https://en.wikipedia.org/wiki/DCF77 | DCF77 is a German longwave time signal and standard-frequency radio station. It started service as a standard-frequency station on 1 January 1959. In June 1973 date and time information was added. Its primary and backup transmitters are located in Mainflingen, about 25 km south-east of Frankfurt am Main, Germany. The transmitter generates a nominal power of 50 kW, of which about 30 to 35 kW can be radiated via a T-antenna.
DCF77 is controlled by the Physikalisch-Technische Bundesanstalt (PTB), Germany's national physics laboratory, and transmits in continuous operation (24 hours). It is operated by Media Broadcast GmbH (previously a subsidiary of Deutsche Telekom AG) on behalf of the PTB. A transmission availability of at least 99.7% per year, i.e. under 26.28 hours of annual downtime, has been agreed upon with Media Broadcast GmbH. Most service interruptions are short-term disconnections of under two minutes. Longer lasting transmission service interruptions are generally caused by strong winds, freezing rain or snow-induced T-antenna movement, which electrically detunes the antenna resonance circuit and hence produces a measurable phase modulation of the received signal. When the maladjustment is too large, the transmitter is taken out of service temporarily. In 2002, almost 99.95% availability, or just over 4.38 hours of downtime, was realized. The timestamp sent is either in Coordinated Universal Time (UTC)+1 or UTC+2 depending on daylight saving time.
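The downtime figures quoted above follow directly from the availability percentages; a quick check, assuming a 365-day year of 8,760 hours:

```python
# Quick check of the downtime figures implied by the quoted availabilities,
# assuming a 365-day year (8,760 hours).
hours_per_year = 365 * 24
print((1 - 0.997) * hours_per_year)    # ≈ 26.28 hours of permitted annual downtime
print((1 - 0.9995) * hours_per_year)   # ≈ 4.38 hours, matching the 2002 figure
```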
The highly accurate 77.5 kHz carrier signal (corresponding to a wavelength of roughly 3.9 km) is generated from local atomic clocks that are linked with the German master clocks at the PTB in Braunschweig. The DCF77 time signal is used for the dissemination of the German national legal time to the public.
Radio clocks and watches have been very popular in Europe since the late 1980s and, in mainland Europe, most of them use the DCF77 signal to set their time automatically. The DCF77 longwave radio emission offers pen |
https://en.wikipedia.org/wiki/Approximation | An approximation is anything that is intentionally similar but not exactly equal to something else.
Etymology and usage
The word approximation is derived from Latin approximatus, from proximus meaning very near and the prefix ad- (ad- before p becomes ap- by assimilation) meaning to. Words like approximate, approximately and approximation are used especially in technical or scientific contexts. In everyday English, words such as roughly or around are used with a similar meaning. It is often found abbreviated as approx.
The term can be applied to various properties (e.g., value, quantity, image, description) that are nearly, but not exactly correct; similar, but not exactly the same (e.g., the approximate time was 10 o'clock).
Although approximation is most often applied to numbers, it is also frequently applied to such things as mathematical functions, shapes, and physical laws.
In science, approximation can refer to using a simpler process or model when the correct model is difficult to use. An approximate model is used to make calculations easier. Approximations might also be used if incomplete information prevents use of exact representations.
The type of approximation used depends on the available information, the degree of accuracy required, the sensitivity of the problem to this data, and the savings (usually in time and effort) that can be achieved by approximation.
Mathematics
Approximation theory is a branch of mathematics, a quantitative part of functional analysis. Diophantine approximation deals with approximations of real numbers by rational numbers.
Approximation usually occurs when an exact form or an exact numerical value is unknown or difficult to obtain. However, some known form may exist and may be able to represent the real form so closely that no significant deviation is found. For example, 1.5 × 10⁶ means that the true value of something being measured is 1,500,000 to the nearest hundred thousand (so the actual value is somewhere between |
https://en.wikipedia.org/wiki/Egyptian%20fraction | An Egyptian fraction is a finite sum of distinct unit fractions, such as
That is, each fraction in the expression has a numerator equal to 1 and a denominator that is a positive integer, and all the denominators differ from each other. The value of an expression of this type is a positive rational number ; for instance the Egyptian fraction above sums to . Every positive rational number can be represented by an Egyptian fraction. Sums of this type, and similar sums also including and as summands, were used as a serious notation for rational numbers by the ancient Egyptians, and continued to be used by other civilizations into medieval times. In modern mathematical notation, Egyptian fractions have been superseded by vulgar fractions and decimal notation. However, Egyptian fractions continue to be an object of study in modern number theory and recreational mathematics, as well as in modern historical studies of ancient mathematics.
Applications
Beyond their historical use, Egyptian fractions have some practical advantages over other representations of fractional numbers.
For instance, Egyptian fractions can help in dividing food or other objects into equal shares. For example, if one wants to divide 5 pizzas equally among 8 diners, the Egyptian fraction
means that each diner gets half a pizza plus another eighth of a pizza, for example by splitting 4 pizzas into 8 halves, and the remaining pizza into 8 eighths. Exercises in performing this sort of fair division of food are a standard classroom example in teaching students to work with unit fractions.
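One simple way to produce such expansions, not the method attested in Egyptian sources but the classical greedy algorithm usually credited to Fibonacci, is to repeatedly subtract the largest unit fraction that still fits; a minimal Python sketch, reproducing the 5/8 pizza split above:

```python
from fractions import Fraction
from math import ceil

def egyptian_greedy(x: Fraction) -> list[Fraction]:
    """Greedy expansion of a rational 0 < x < 1 into distinct unit fractions."""
    terms = []
    while x > 0:
        n = ceil(1 / x)               # smallest n with 1/n <= x
        terms.append(Fraction(1, n))
        x -= Fraction(1, n)
    return terms

# The 5 pizzas / 8 diners example: 5/8 = 1/2 + 1/8.
print(egyptian_greedy(Fraction(5, 8)))   # [Fraction(1, 2), Fraction(1, 8)]
```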
Egyptian fractions can provide a solution to rope-burning puzzles, in which a given duration is to be measured by igniting non-uniform ropes which burn out after a unit time. Any rational fraction of a unit of time can be measured by expanding the fraction into a sum of unit fractions and then, for each unit fraction , burning a rope so that it always has simultaneously lit points where it is burning. For this ap |
https://en.wikipedia.org/wiki/Jughead%20%28search%20engine%29 | Jughead is a search engine system for the Gopher protocol. It is distinct from Veronica in that it searches a single server at a time.
Jughead was developed by Rhett Jones in 1993 at the University of Utah.
The name "Jughead" was originally chosen to match the Archie search engine, as Jughead Jones is Archie's friend in Archie Comics. Later a backronym was developed: Jonzy's Universal Gopher Hierarchy Excavation And Display.
It was released by the original author under the GNU General Public License in 2006, and its source code has been modernized to better run on current POSIX systems.
Due to trademark issues, the modified version was called Jugtail, and has been made available for download on GNU Savannah.
External links
Jughead Source
Jugtail Project
https://en.wikipedia.org/wiki/Classical%20orthogonal%20polynomials | In mathematics, the classical orthogonal polynomials are the most widely used orthogonal polynomials: the Hermite polynomials, Laguerre polynomials, and Jacobi polynomials (which include as special cases the Gegenbauer, Chebyshev, and Legendre polynomials).
They have many important applications in such areas as mathematical physics (in particular, the theory of random matrices), approximation theory, numerical analysis, and many others.
Classical orthogonal polynomials appeared in the early 19th century in the works of Adrien-Marie Legendre, who introduced the Legendre polynomials. In the late 19th century, the study of continued fractions to solve the moment problem by P. L. Chebyshev and then A.A. Markov and T.J. Stieltjes led to the general notion of orthogonal polynomials.
For given polynomials $Q$ and $L$, the classical orthogonal polynomials are characterized by being solutions of the differential equation
$$Q(x)\,f'' + L(x)\,f' + \lambda f = 0$$
with constants $\lambda$ that are to be determined.
There are several more general definitions of classical orthogonal polynomials; for example, some authors use the term for all polynomials in the Askey scheme.
Definition
In general, the orthogonal polynomials $P_n$ with respect to a weight $W(x)$ satisfy
$$\int P_m(x)\,P_n(x)\,W(x)\,dx = 0, \qquad m \neq n.$$
The relations above define $P_n$ only up to multiplication by a number. Various normalisations are used to fix the constant, e.g.
The classical orthogonal polynomials correspond to the following three families of weights: the Jacobi weight $W(x) = (1-x)^{\alpha}(1+x)^{\beta}$ on $[-1, 1]$ (with $\alpha, \beta > -1$), the Laguerre weight $W(x) = x^{\alpha}e^{-x}$ on $[0, \infty)$ (with $\alpha > -1$), and the Hermite weight $W(x) = e^{-x^{2}}$ on $(-\infty, \infty)$.
The standard normalisation (also called standardization) is detailed below.
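As a purely numerical illustration of the orthogonality condition, the sketch below evaluates Legendre polynomials via Bonnet's three-term recurrence and approximates the inner product on [−1, 1] with weight 1 using a simple midpoint rule (the grid size is an arbitrary choice):

```python
import numpy as np

def legendre(n: int, x: np.ndarray) -> np.ndarray:
    """Evaluate the Legendre polynomial P_n(x) via Bonnet's recurrence:
    (k + 1) P_{k+1}(x) = (2k + 1) x P_k(x) - k P_{k-1}(x)."""
    p_prev, p = np.ones_like(x), x.copy()
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

# Midpoint-rule grid on [-1, 1] with N cells; the weight function is W(x) = 1.
N = 200_000
x = -1.0 + (np.arange(N) + 0.5) * (2.0 / N)

def inner(m: int, n: int) -> float:
    return float(np.sum(legendre(m, x) * legendre(n, x)) * (2.0 / N))

print(round(inner(2, 3), 6))   # ≈ 0.0      (distinct degrees are orthogonal)
print(round(inner(3, 3), 6))   # ≈ 0.285714 (equals 2 / (2n + 1) = 2/7 for n = 3)
```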
Jacobi polynomials
For $\alpha, \beta > -1$ the Jacobi polynomials are given by the formula
They are normalised (standardized) by
and satisfy the orthogonality condition
The Jacobi polynomials are solutions to the differential equation
Important special cases
The Jacobi polynomials with $\alpha = \beta$ are called the Gegenbauer polynomials (with parameter $\lambda = \alpha + \tfrac{1}{2}$)
For $\alpha = \beta = 0$, these are called the Legendre polynomials (for which the interval of orthogonality is [−1, 1] and the weight function is simply 1):
For , one obtains the |
https://en.wikipedia.org/wiki/List%20of%20conjectures%20by%20Paul%20Erd%C5%91s | The prolific mathematician Paul Erdős and his various collaborators made many famous mathematical conjectures, over a wide field of subjects, and in many cases Erdős offered monetary rewards for solving them.
Unsolved
The Erdős–Gyárfás conjecture on cycles with lengths equal to a power of two in graphs with minimum degree 3.
The Erdős–Hajnal conjecture that in a family of graphs defined by an excluded induced subgraph, every graph has either a large clique or a large independent set.
The Erdős–Mollin–Walsh conjecture on consecutive triples of powerful numbers.
The Erdős–Selfridge conjecture that a covering system with distinct moduli contains at least one even modulus.
The Erdős–Straus conjecture on the Diophantine equation 4/n = 1/x + 1/y + 1/z.
The Erdős conjecture on arithmetic progressions in sequences with divergent sums of reciprocals.
The Erdős–Szekeres conjecture on the number of points needed to ensure that a point set contains a large convex polygon.
The Erdős–Turán conjecture on additive bases of natural numbers.
A conjecture on quickly growing integer sequences with rational reciprocal series.
A conjecture with Norman Oler on circle packing in an equilateral triangle with a number of circles one less than a triangular number.
The minimum overlap problem to estimate the limit of M(n).
A conjecture that the ternary expansion of $2^n$ contains at least one digit 2 for every $n > 8$.
Solved
The Erdős–Faber–Lovász conjecture on coloring unions of cliques, proved (for all large n) by Dong Yeap Kang, Tom Kelly, Daniela Kühn, Abhishek Methuku, and Deryk Osthus.
The Erdős sumset conjecture on sets, proven by Joel Moreira, Florian Karl Richter, and Donald Robertson in 2018; the proof appeared in the Annals of Mathematics in March 2019.
The Burr–Erdős conjecture on Ramsey numbers of graphs, proved by Choongbum Lee in 2015.
A conjecture on equitable colorings proven in 1970 by András Hajnal and Endre Szemerédi and now known as the Hajnal–Szemerédi theorem.
A co |
https://en.wikipedia.org/wiki/Girsanov%20theorem | In probability theory, the Girsanov theorem tells how stochastic processes change under changes in measure. The theorem is especially important in the theory of financial mathematics as it tells how to convert from the physical measure, which describes the probability that an underlying instrument (such as a share price or interest rate) will take a particular value or values, to the risk-neutral measure which is a very useful tool for evaluating the value of derivatives on the underlying.
History
Results of this type were first proved by Cameron and Martin in the 1940s and by Igor Girsanov in 1960. They have subsequently been extended to more general classes of processes, culminating in the general form of Lenglart (1977).
Significance
Girsanov's theorem is important in the general theory of stochastic processes since it enables the key result that if Q is a measure that is absolutely continuous with respect to P then every P-semimartingale is a Q-semimartingale.
Statement of theorem
We state the theorem first for the special case when the underlying stochastic process is a Wiener process. This special case is sufficient for risk-neutral pricing in the Black–Scholes model.
Let $\{W_t\}$ be a Wiener process on the Wiener probability space $\{\Omega,\mathcal{F},P\}$. Let $X_t$ be a measurable process adapted to the natural filtration of the Wiener process $\{\mathcal{F}^W_t\}$; we assume that the usual conditions have been satisfied.
Given an adapted process $X_t$, define
$$Z_t = \mathcal{E}(X)_t,$$
where $\mathcal{E}(X)$ is the stochastic exponential of $X$ with respect to $W$, i.e.
$$\mathcal{E}(X)_t = \exp\left(X_t - \tfrac{1}{2}[X]_t\right),$$
and $[X]_t$ denotes the quadratic variation of the process $X$.
If $\mathcal{E}(X)$ is a martingale then a probability measure $Q$ can be defined on $\{\Omega,\mathcal{F}\}$ such that the Radon–Nikodym derivative satisfies
$$\left.\frac{dQ}{dP}\right|_{\mathcal{F}_t} = \mathcal{E}(X)_t.$$
Then for each $t$ the measure $Q$ restricted to the unaugmented sigma fields $\mathcal{F}^W_t$ is equivalent to $P$ restricted to $\mathcal{F}^W_t$.
Furthermore, if $Y$ is a local martingale under $P$, then the process
$$\tilde{Y}_t = Y_t - \left[Y,X\right]_t$$
is a $Q$ local martingale on the filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\},Q)$.
Corollary
If $X$ is a continuous process and $W$ is Brownian motion under measure $P$ then
$$\tilde{W}_t = W_t - \left[W,X\right]_t$$
is Brownian motion |
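A small Monte Carlo sketch can make the corollary concrete: sample a Brownian terminal value under P, reweight by the stochastic exponential for the constant-drift case X_t = θW_t, and check that the reweighted expectation of W_T equals the drift θT predicted under Q. The value of θ and the sample size are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, T, n_paths = 0.5, 1.0, 500_000

# Terminal value of a standard Brownian motion under P: W_T ~ N(0, T).
W_T = rng.normal(0.0, np.sqrt(T), size=n_paths)

# Radon-Nikodym density dQ/dP on F_T for the choice X_t = theta * W_t:
# Z_T = E(X)_T = exp(theta * W_T - 0.5 * theta**2 * T).
Z_T = np.exp(theta * W_T - 0.5 * theta**2 * T)

print(Z_T.mean())          # ≈ 1.0: the stochastic exponential is a P-martingale, Z_0 = 1
print((Z_T * W_T).mean())  # ≈ theta * T = 0.5: under Q, W_t acquires drift theta,
                           # so W_t - theta * t is a Q-Brownian motion (the corollary).
```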
https://en.wikipedia.org/wiki/Candidate%20key | A candidate key, or simply a key, of a relational database is a minimal superkey. In other words, it is any set of columns that have a unique combination of values in each row (which makes it a superkey), with the additional constraint that removing any column could produce duplicate combinations of values (which makes it a minimal superkey). Because a candidate key is a superkey that doesn't contain a smaller one, a relation can have multiple candidate keys, each with a different number of attributes.
Specific candidate keys are sometimes called primary keys, secondary keys or alternate keys.
The columns in a candidate key are called prime attributes, and a column that does not occur in any candidate key is called a non-prime attribute.
Every relation without NULL values will have at least one candidate key: Since there cannot be duplicate rows, the set of all columns is a superkey, and if that isn't minimal, some subset of that will be minimal.
There is a functional dependency from the candidate key to all the attributes in the relation.
The superkeys of a relation are all the possible ways we can identify a row. The candidate keys are the minimal subsets of each superkey and as such, they are an important concept for the design of database schema.
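As an illustrative sketch of these definitions (the relation instance and attribute names below are made up), the following Python snippet enumerates the superkeys of a small relation instance and keeps only the minimal ones, i.e. its candidate keys:

```python
from itertools import combinations

# A made-up relation instance over attributes A, B, C, D (rows as tuples).
attributes = ("A", "B", "C", "D")
rows = [
    ("a1", "b1", "c1", "d1"),
    ("a1", "b2", "c2", "d1"),
    ("a2", "b1", "c2", "d1"),
]

def is_superkey(attrs) -> bool:
    """A set of attributes is a superkey if its projection has no duplicate rows."""
    idx = [attributes.index(a) for a in attrs]
    projection = [tuple(row[i] for i in idx) for row in rows]
    return len(set(projection)) == len(projection)

# Every non-empty attribute subset that identifies rows uniquely is a superkey ...
superkeys = [set(c)
             for r in range(1, len(attributes) + 1)
             for c in combinations(attributes, r)
             if is_superkey(c)]

# ... and the candidate keys are the minimal superkeys: no proper subset is a superkey.
candidate_keys = [k for k in superkeys
                  if not any(other < k for other in superkeys)]

print(candidate_keys)   # three candidate keys {A,B}, {A,C}, {B,C} for this instance
                        # (display order of elements inside each set may vary)
```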
Example
The definition of candidate keys can be illustrated with the following (abstract) example. Consider a relation variable (relvar) R with attributes (A, B, C, D) that has only the following two legal values r1 and r2:
Here r2 differs from r1 only in the A and D values of the last tuple.
For r1 the following sets have the uniqueness property, i.e., there are no two distinct tuples in the instance with the same attribute values in the set:
{A,B}, {A,C}, {B,C}, {A,B,C}, {A,B,D}, {A,C,D}, {B,C,D}, {A,B,C,D}
For r2 the uniqueness property holds for the following sets:
{B,C}, {B,D}, {C,D}, {A,B,C}, {A,B,D}, {A,C,D}, {B,C,D}, {A,B,C,D}
Since superkeys of a relvar are those sets of attributes that have the uniqu |
https://en.wikipedia.org/wiki/Armstrong%20Flight%20Research%20Center | The NASA Neil A. Armstrong Flight Research Center (AFRC) is an aeronautical research center operated by NASA. Its primary campus is located inside Edwards Air Force Base in California and is considered NASA's premier site for aeronautical research. AFRC operates some of the most advanced aircraft in the world and is known for many aviation firsts, including supporting the first crewed airplane to exceed the speed of sound in level flight (Bell X-1), highest speed by a crewed, powered aircraft (North American X-15), the first pure digital fly-by-wire aircraft (F-8 DFBW), and many others. AFRC operates a second site next to Air Force Plant 42 in Palmdale, California, known as Building 703, once the former Rockwell International/North American Aviation production facility. There, AFRC houses and operates several of NASA's Science Mission Directorate aircraft including SOFIA (Stratospheric Observatory For Infrared Astronomy), a DC-8 Flying Laboratory, a Gulfstream C-20A UAVSAR and ER-2 High Altitude Platform. As of 2023, Bradley Flick is the center's director.
Established as the National Advisory Committee for Aeronautics Muroc Flight Test Unit (1946), the center was subsequently known as the NACA High-Speed Flight Research Station (1949), the NACA High-Speed Flight Station (1954), the NASA High-Speed Flight Station (1958) and the NASA Flight Research Center (1959). On 26 March 1976, the center was renamed the NASA Ames-Dryden Flight Research Center (DFRC) after Hugh L. Dryden, a prominent aeronautical engineer who died in office as NASA's deputy administrator in 1965, and Joseph Sweetman Ames, an eminent physicist who served as president of Johns Hopkins University. The facility took its current name on 1 March 2014, honoring Neil Armstrong, a former test pilot at the center and the first human being to walk on the Moon.
AFRC was the home of the Shuttle Carrier Aircraft (SCA), a modified Boeing 747 designed to carry a Space Shuttle orbiter back to Kennedy Sp |
https://en.wikipedia.org/wiki/Distributed.net | Distributed.net is a volunteer computing effort that is attempting to solve large scale problems using otherwise idle CPU or GPU time. It is governed by Distributed Computing Technologies, Incorporated (DCTI), a non-profit organization under U.S. tax code 501(c)(3).
Distributed.net is working on RC5-72 (breaking RC5 with a 72-bit key). The RC5-72 project is on pace to exhaust the keyspace in just over 44 years as of September 2023, although the project will end whenever the required key is found. RC5 has eight unsolved challenges from RSA Security, although in May 2007, RSA Security announced that they would no longer be providing prize money for a correct key to any of their secret key challenges. As a result, distributed.net has decided to sponsor the original prize offer for finding the key itself.
In 2001, distributed.net was estimated to have a throughput of over 30 TFLOPS. More recently, the throughput was estimated to be on par with a Cray XC40, as used in the Lonestar 5 supercomputer, or around 1.25 petaFLOPS.
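A back-of-the-envelope calculation shows what aggregate key-testing rate the quoted 44-year pace would imply if one simply divided the full 2^72-key space by that duration (the project's real pace statistics account for the keys already searched, which this ignores):

```python
# Back-of-the-envelope: the aggregate key-testing rate that would exhaust the
# full 2**72 RC5-72 keyspace in roughly 44 years. (Illustrative only; the
# project's published pace accounts for keys already searched.)
keyspace = 2 ** 72
seconds = 44 * 365.25 * 24 * 3600
print(f"{keyspace / seconds:.2e} keys/second")   # ≈ 3.4e+12 keys per second
```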
History
A coordinated effort was started in February 1997 by Earle Ady and Christopher G. Stach II of Hotjobs.com and New Media Labs, as an effort to break the RC5-56 portion of the RSA Secret-Key Challenge, a 56-bit encryption algorithm that had a $10,000 USD prize available to anyone who could find the key. Unfortunately, this initial effort had to be suspended as the result of SYN flood attacks by participants upon the server.
A new independent effort, named distributed.net, was coordinated by Jeffrey A. Lawson, Adam L. Beberg, and David C. McNett along with several others who would serve on the board and operate infrastructure. By late March 1997 new proxies were released to resume RC5-56 and work began on enhanced clients. A cow head was selected as the icon of the application and the project's mascot.
The RC5-56 challenge was solved on October 19, 1997 after 250 days. The correct key was "0x532B744CC20999" and the plaintext message read "The unknown message is |
https://en.wikipedia.org/wiki/Particle%20swarm%20optimization | In computational science, particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formula over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
PSO is originally attributed to Kennedy, Eberhart and Shi and was first intended for simulating social behaviour, as a stylized representation of the movement of organisms in a bird flock or fish school. The algorithm was simplified and it was observed to be performing optimization. The book by Kennedy and Eberhart describes many philosophical aspects of PSO and swarm intelligence. An extensive survey of PSO applications is made by Poli. Recently, a comprehensive review on theoretical and experimental works on PSO has been published by Bonyadi and Michalewicz.
PSO is a metaheuristic as it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. Also, PSO does not use the gradient of the problem being optimized, which means PSO does not require that the optimization problem be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods. However, metaheuristics such as PSO do not guarantee that an optimal solution is ever found.
Algorithm
A basic variant of the PSO algorithm works by having a population (called a swarm) of candidate solutions (called particles). These particles are moved around in the search-space according to a few simple formulae. The move |
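A minimal sketch of that basic variant, global-best PSO with the usual inertia weight and acceleration coefficients; the objective function, box bounds, and parameter values are arbitrary choices for illustration rather than anything prescribed by the article:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iters=200, w=0.72, c1=1.5, c2=1.5, seed=0):
    """Minimise `objective` over a box using a basic global-best PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size

    pos = rng.uniform(lo, hi, size=(n_particles, dim))               # positions
    vel = rng.uniform(-(hi - lo), hi - lo, size=(n_particles, dim))  # velocities
    pbest = pos.copy()                                               # personal bests
    pbest_val = np.apply_along_axis(objective, 1, pos)
    gbest = pbest[np.argmin(pbest_val)].copy()                       # global best

    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + pull toward personal best + pull toward global best.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    return gbest, float(pbest_val.min())

# Example: minimise the sphere function on [-5, 5]^3 (true optimum at the origin).
best_x, best_f = pso(lambda x: float(np.sum(x ** 2)), bounds=([-5, -5, -5], [5, 5, 5]))
print(best_x, best_f)
```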
https://en.wikipedia.org/wiki/Personal%20identification%20number | A personal identification number (PIN), or sometimes redundantly a PIN number or PIN code, is a numeric (sometimes alpha-numeric) passcode used in the process of authenticating a user accessing a system.
The PIN has been the key to facilitating the private data exchange between different data-processing centers in computer networks for financial institutions, governments, and enterprises. PINs may be used to authenticate banking systems with cardholders, governments with citizens, enterprises with employees, and computers with users, among other uses.
In common usage, PINs are used in ATM or POS transactions, secure access control (e.g. computer access, door access, car access), internet transactions, or to log into a restricted website.
History
The PIN originated with the introduction of the automated teller machine (ATM) in 1967, as an efficient way for banks to dispense cash to their customers. The first ATM system was that of Barclays in London, in 1967; it accepted cheques with machine-readable encoding, rather than cards, and matched the PIN to the cheque. In 1972, Lloyds Bank issued the first bank card to feature an information-encoding magnetic strip, using a PIN for security. James Goodfellow, the inventor who patented the first personal identification number, was awarded an OBE in the 2006 Queen's Birthday Honours.
Mohamed M. Atalla invented the first PIN-based hardware security module (HSM), dubbed the "Atalla Box," a security system that encrypted PIN and ATM messages and protected offline devices with an un-guessable PIN-generating key. In 1972, Atalla filed a patent for his PIN verification system, which included an encoded card reader and described a system that utilized encryption techniques to assure telephone link security while entering personal ID information that was transmitted to a remote location for verification.
He founded Atalla Corporation (now Utimaco Atalla) in 1972, and commercially launched the "Atalla Box" in 1973. The product was release |
https://en.wikipedia.org/wiki/National%20Ignition%20Facility | The National Ignition Facility (NIF) is a laser-based inertial confinement fusion (ICF) research device, located at Lawrence Livermore National Laboratory in Livermore, California, United States. NIF's mission is to achieve fusion ignition with high energy gain. It achieved the first instance of scientific breakeven controlled fusion in an experiment on December 5, 2022, with an energy gain factor of 1.5. It supports nuclear weapon maintenance and design by studying the behavior of matter under the conditions found within nuclear explosions.
NIF is the largest and most powerful ICF device built to date. The basic ICF concept is to squeeze a small amount of fuel to reach pressure and temperature necessary for fusion. NIF hosts the world's most energetic laser. The laser heats the outer layer of a small sphere. The energy is so intense that it causes the sphere to implode, squeezing the fuel inside. The implosion reaches a peak speed of , raising the fuel density from about that of water to about 100 times that of lead. The delivery of energy and the adiabatic process during implosion raises the temperature of the fuel to hundreds of millions of degrees. At these temperatures, fusion processes occur in the tiny interval before the fuel explodes outward.
Construction on the NIF began in 1997. NIF was completed five years behind schedule and cost almost four times its original budget. Construction was certified complete on March 31, 2009, by the U.S. Department of Energy. The first large-scale experiments were performed in June 2009 and the first "integrated ignition experiments" (which tested the laser's power) were declared completed in October 2010.
From 2009 to 2012 experiments were conducted under the National Ignition Campaign, with the goal of reaching ignition just after the laser reached full power, some time in the second half of 2012. The campaign officially ended in September 2012, at about the conditions needed for ignition. Thereafter NIF has been used |