| source | text |
|---|---|
https://en.wikipedia.org/wiki/Schmitt%20trigger | In electronics, a Schmitt trigger is a comparator circuit with hysteresis implemented by applying positive feedback to the noninverting input of a comparator or differential amplifier. It is an active circuit which converts an analog input signal to a digital output signal. The circuit is named a trigger because the output retains its value until the input changes sufficiently to trigger a change. In the non-inverting configuration, when the input is higher than a chosen threshold, the output is high. When the input is below a different (lower) chosen threshold the output is low, and when the input is between the two levels the output retains its value. This dual threshold action is called hysteresis and implies that the Schmitt trigger possesses memory and can act as a bistable multivibrator (latch or flip-flop). There is a close relation between the two kinds of circuits: a Schmitt trigger can be converted into a latch and a latch can be converted into a Schmitt trigger.
Schmitt trigger devices are typically used in signal conditioning applications to remove noise from signals used in digital circuits, particularly mechanical contact bounce in switches. They are also used in closed loop negative feedback configurations to implement relaxation oscillators, used in function generators and switching power supplies.
In signal theory, a Schmitt trigger is essentially a one-bit quantizer.
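A minimal software model of the dual-threshold action described above (a hypothetical sketch; the threshold values and sample data are arbitrary, not taken from any particular device):

```c
#include <stdio.h>
#include <stdbool.h>

#define V_HIGH 2.0   /* upper threshold (arbitrary illustrative value) */
#define V_LOW  1.0   /* lower threshold (arbitrary illustrative value) */

/* Non-inverting Schmitt trigger model: high above V_HIGH, low below
 * V_LOW; between the thresholds the output retains its state. */
bool schmitt_trigger(double input, bool previous_output)
{
    if (input > V_HIGH) return true;
    if (input < V_LOW)  return false;
    return previous_output;           /* hysteresis: remember last state */
}

int main(void)
{
    double samples[] = {0.5, 1.5, 2.5, 1.5, 0.5};  /* rising then falling */
    bool out = false;
    for (int i = 0; i < 5; i++) {
        out = schmitt_trigger(samples[i], out);
        printf("in=%.1f out=%d\n", samples[i], out);
    }
    return 0;
}
```

Running it on the sample ramp shows the memory effect: the output changes only when a threshold is actually crossed, so input wobble between the two thresholds cannot cause spurious transitions.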
Invention
The Schmitt trigger was invented by American scientist Otto H. Schmitt in 1934 while he was a graduate student, later described in his doctoral dissertation (1937) as a thermionic trigger. It was a direct result of Schmitt's study of the neural impulse propagation in squid nerves.
Implementation
Fundamental idea
Circuits with hysteresis are based on positive feedback. Any active circuit can be made to behave as a Schmitt trigger by applying a positive feedback so that the loop gain is more than one. The positive feedback is introduced by adding a part of the output |
https://en.wikipedia.org/wiki/Aquatic%20respiration | Aquatic respiration is the process whereby an aquatic organism exchanges respiratory gases with water, obtaining oxygen from oxygen dissolved in water and excreting carbon dioxide and some other metabolic waste products into the water.
Unicellular and simple small organisms
In very small animals, plants and bacteria, simple diffusion of gaseous metabolites is sufficient for respiratory function and no special adaptations are found to aid respiration. Passive diffusion or active transport are also sufficient mechanisms for many larger aquatic animals such as many worms, jellyfish, sponges, bryozoans and similar organisms. In such cases, no specific respiratory organs or organelles are found.
Higher plants
Although higher plants typically use carbon dioxide and excrete oxygen during photosynthesis, they also respire and, particularly during darkness, many plants excrete carbon dioxide and require oxygen to maintain normal functions. Fully submerged aquatic higher plants have specialised structures, such as stomata on leaf surfaces, to control gas interchange. In many species, these structures can be controlled to be open or closed depending on environmental conditions. In conditions of high light intensity and relatively high carbonate ion concentrations, oxygen may be produced in sufficient quantities to form gaseous bubbles on the surface of leaves and may produce oxygen super-saturation in the surrounding water body.
Animals
All animals that practice truly aquatic respiration are poikilothermic. All aquatic homeothermic mammals and birds, including cetaceans and penguins, breathe air despite a fully aquatic lifestyle.
Echinoderms
Echinoderms have a specialised water vascular system which provides a number of functions including providing the hydraulic power for tube feet but also serves to convey oxygenated sea water into the body and carry waste water out again. In many genera, the water enters through a madreporite, a sieve like structure on the upper surfac |
https://en.wikipedia.org/wiki/Gobiidae | Gobiidae or gobies is a family of bony fish in the order Gobiiformes, one of the largest fish families, comprising more than 2,000 species in more than 200 genera. Most gobiid fish are relatively small, and the family includes some of the smallest vertebrates in the world, such as Trimmatom nanus and Pandaka pygmaea, both of which are fully grown at around a centimetre in length. A few gobies can grow much larger, but that is exceptional. Generally, they are benthic, or bottom-dwellers. Although few are important as food fish for humans, they are of great significance as prey species for other commercially important fish such as cod, haddock, sea bass and flatfish. Several gobiids are also of interest as aquarium fish, such as the dartfish of the genus Ptereleotris. Phylogenetic relationships of gobiids have been studied using molecular data.
Description
The most distinctive aspects of gobiid morphology are the fused pelvic fins that form a disc-shaped sucker. This sucker is functionally analogous to the dorsal fin sucker possessed by the remoras or the pelvic fin sucker of the lumpsuckers, but is anatomically distinct; these similarities are the product of convergent evolution. The species in this family can often be seen using the sucker to adhere to rocks and corals, and in aquariums they will stick to glass walls of the tank, as well.
Distribution and habitat
Gobiidae are spread all over the world in tropical and temperate near shore-marine, brackish, and freshwater environments. Their range extends from the Old World coral reefs to the seas of the New World, and includes the rivers and near-shore habitats of Europe and Asia. Gobies are generally bottom-dwellers. Although many live in burrows, a few species (e.g. in the genus Glossogobius) are true cavefish. On coral reefs, species of gobiids constitute 35% of the total number of fishes and 20% of the s |
https://en.wikipedia.org/wiki/Gas%20exchange | Gas exchange is the physical process by which gases move passively by diffusion across a surface. For example, this surface might be the air/water interface of a water body, the surface of a gas bubble in a liquid, a gas-permeable membrane, or a biological membrane that forms the boundary between an organism and its extracellular environment.
Gases are constantly consumed and produced by cellular and metabolic reactions in most living things, so an efficient system for gas exchange between, ultimately, the interior of the cell(s) and the external environment is required. Small, particularly unicellular organisms, such as bacteria and protozoa, have a high surface-area to volume ratio. In these creatures the gas exchange membrane is typically the cell membrane. Some small multicellular organisms, such as flatworms, are also able to perform sufficient gas exchange across the skin or cuticle that surrounds their bodies. However, in most larger organisms, which have small surface-area to volume ratios, specialised structures with convoluted surfaces such as gills, pulmonary alveoli and spongy mesophylls provide the large area needed for effective gas exchange. These convoluted surfaces may sometimes be internalised into the body of the organism. This is the case with the alveoli, which form the inner surface of the mammalian lung, the spongy mesophyll, which is found inside the leaves of some kinds of plant, or the gills of those molluscs that have them, which are found in the mantle cavity.
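A quick geometric check of the surface-area-to-volume argument, idealising a cell as a sphere of radius $r$ (a standard back-of-the-envelope calculation, not taken from the article):

```latex
\frac{A}{V} = \frac{4\pi r^2}{\tfrac{4}{3}\pi r^3} = \frac{3}{r}
```

The ratio shrinks as $r$ grows, which is why diffusion across the outer surface alone stops being sufficient for larger organisms.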
In aerobic organisms, gas exchange is particularly important for respiration, which involves the uptake of oxygen (O2) and release of carbon dioxide (CO2). Conversely, in oxygenic photosynthetic organisms such as most land plants, uptake of carbon dioxide and release of both oxygen and water vapour are the main gas-exchange processes occurring during the day. Other gas-exchange processes are important in less familiar organisms: e.g. carbon dioxide, methane and hydrogen are exchanged a |
https://en.wikipedia.org/wiki/35%20%28number%29 | 35 (thirty-five) is the natural number following 34 and preceding 36.
In mathematics
35 is the sum of the first five triangular numbers, making it a tetrahedral number.
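Written out, with $T_n = n(n+1)/2$ denoting the $n$th triangular number:

```latex
T_1 + T_2 + T_3 + T_4 + T_5 = 1 + 3 + 6 + 10 + 15 = 35
```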
35 is the 10th discrete semiprime (5 × 7) and the first with 5 as the lowest non-unitary factor, thus being the first of the form 5 × q, where q is a higher prime.
35 has two prime factors, 5 and 7, which also form its main factor pair (5 × 7) and comprise the second twin-prime distinct semiprime pair.
The aliquot sum of 35 is 13, within an aliquot sequence of only one composite number (35, 13, 1, 0), descending to the prime 13 in the 13-aliquot tree. 35 is the second composite number with the aliquot sum 13, the first being the cube 27.
35 is the last member of the first triple cluster of semiprimes: 33, 34, 35. The second such triple cluster of distinct semiprimes is 85, 86, and 87.
35 is the number of ways that three things can be selected from a set of seven unique things, also known as the "combination of seven things taken three at a time".
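As a quick check of that count:

```latex
\binom{7}{3} = \frac{7!}{3!\,4!} = \frac{7 \cdot 6 \cdot 5}{3 \cdot 2 \cdot 1} = 35
```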
35 is a centered cube number, a centered tetrahedral number, a pentagonal number, and a pentatope number.
35 is a highly cototient number, since there are more solutions to the equation x − φ(x) = 35 (where φ is Euler's totient function) than there are for any other integer below it except 1.
There are 35 free hexominoes, the polyominoes made from six squares.
Since the greatest prime factor of 35² + 1 = 1226 is 613, which is more than twice 35, 35 is a Størmer number.
35 is the highest number one can count to on one's fingers using senary (base six): one hand counts units from 0 to 5 while the other counts sixes from 0 to 5, giving at most 5 × 6 + 5 = 35.
35 is the number of quasigroups of order 4.
35 is the smallest composite number of the form , where is a non-negative integer.
In science
The atomic number of bromine
In other fields
35 mm film is the basic film gauge most commonly used for both analog photography and motion pictures.
The minimum age for election as president in the United States, Ireland, Poland, Russia, Trinidad and Tobago, and Uruguay.
For Social Security in the United States, the 35 highest years of earn |
https://en.wikipedia.org/wiki/Stone%20duality | In mathematics, there is an ample supply of categorical dualities between certain categories of topological spaces and categories of partially ordered sets. Today, these dualities are usually collected under the label Stone duality, since they form a natural generalization of Stone's representation theorem for Boolean algebras. These concepts are named in honor of Marshall Stone. Stone-type dualities also provide the foundation for pointless topology and are exploited in theoretical computer science for the study of formal semantics.
This article gives pointers to special cases of Stone duality and explains a very general instance thereof in detail.
Overview of Stone-type dualities
Probably the most general duality that is classically referred to as "Stone duality" is the duality between the category Sob of sober spaces with continuous functions and the category SFrm of spatial frames with appropriate frame homomorphisms. The dual category of SFrm is the category of spatial locales denoted by SLoc. The categorical equivalence of Sob and SLoc is the basis for the mathematical area of pointless topology, which is devoted to the study of Loc—the category of all locales, of which SLoc is a full subcategory. The involved constructions are characteristic for this kind of duality, and are detailed below.
Now one can easily obtain a number of other dualities by restricting to certain special classes of sober spaces:
The category CohSp of coherent sober spaces (and coherent maps) is equivalent to the category CohLoc of coherent (or spectral) locales (and coherent maps), on the assumption of the Boolean prime ideal theorem (in fact, this statement is equivalent to that assumption). The significance of this result stems from the fact that CohLoc in turn is dual to the category DLat01 of bounded distributive lattices. Hence, DLat01 is dual to CohSp—one obtains Stone's representation theorem for distributive lattices.
When restricting further to coherent sober spaces that |
https://en.wikipedia.org/wiki/The%20C%20Programming%20Language | The C Programming Language (sometimes termed K&R, after its authors' initials) is a computer programming book written by Brian Kernighan and Dennis Ritchie, the latter of whom originally designed and implemented the C programming language, as well as co-designed the Unix operating system with which development of the language was closely intertwined. The book was central to the development and popularization of C and is still widely read and used today. Because the book was co-authored by the original language designer, and because the first edition of the book served for many years as the de facto standard for the language, the book was regarded by many to be the authoritative reference on C.
History
C was created by Dennis Ritchie at Bell Labs in the early 1970s as an augmented version of Ken Thompson's B.
Another Bell Labs employee, Brian Kernighan, had written the first C tutorial,
and he persuaded Ritchie to coauthor a book on the language.
Kernighan would write most of the book's "expository" material, and Ritchie's reference manual became its appendices.
The first edition, published February 22, 1978, was the first widely available book on the C programming language. Its version of C is sometimes termed K&R C (after the book's authors), often to distinguish this early version from the later version of C standardized as ANSI C.
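The first edition also popularized the "hello, world" program as the canonical first example. A rendering in present-day C is below; note this is a modernized sketch, not the book's exact text (the K&R original predates function prototypes and the void parameter list):

```c
#include <stdio.h>

/* The canonical first program associated with the book,
 * lightly modernized for current compilers. */
int main(void)
{
    printf("hello, world\n");
    return 0;
}
```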
In April 1988, the second edition of the book was published, updated to cover the changes to the language resulting from the then-new ANSI C standard, particularly with the inclusion of reference material on standard libraries. The second edition of the book (and, to date, the most recent) has since been translated into over 20 languages. In 2012, an eBook version of the second edition was published in ePub, Mobi, and PDF formats.
ANSI C, first standardized in 1989 (as ANSI X3.159-1989), has since undergone several revisions, the most recent of which is ISO/IEC 9899:2018 (also termed C17 or C18), adopted as an ANSI standard in June 2018. |
https://en.wikipedia.org/wiki/TRIAC | A TRIAC (triode for alternating current; also bidirectional triode thyristor or bilateral triode thyristor) is a three-terminal electronic component that conducts current in either direction when triggered. The term TRIAC is a genericised trademark.
TRIACs are a subset of thyristors (analogous to a relay in that a small voltage and current can control a much larger voltage and current) and are related to silicon controlled rectifiers (SCRs). TRIACs differ from SCRs in that they allow current flow in both directions, whereas an SCR can only conduct current in a single direction. Most TRIACs can be triggered by applying either a positive or negative voltage to the gate (an SCR requires a positive voltage). Once triggered, SCRs and TRIACs continue to conduct, even if the gate current ceases, until the main current drops below a certain level called the holding current.
Gate turn-off thyristors (GTOs) are similar to TRIACs but provide more control by turning off when the gate signal ceases.
The bidirectionality of TRIACs makes them convenient switches for alternating current (AC). In addition, applying a trigger at a controlled phase angle of the AC in the main circuit allows control of the average current flowing into a load (phase control). This is commonly used for controlling the speed of a universal motor, dimming lamps, and controlling electric heaters. TRIACs are bipolar devices.
Operation
To understand how TRIACs work, consider the triggering in each of the four possible combinations of gate and MT2 voltages with respect to MT1. The four separate cases (quadrants) are illustrated in Figure 1. Main Terminal 1 (MT1) and Main Terminal 2 (MT2) are also referred to as Anode 1 (A1) and Anode 2 (A2) respectively.
The relative sensitivity depends on the physical structure of a particular TRIAC, but as a rule, quadrant 1 is the most sensitive (least gate current required), and quadrant 4 is the least sensitive (most gate current required).
In quadrants 1 and 2, MT2 is |
https://en.wikipedia.org/wiki/Clock%20generator | A clock generator is an electronic oscillator that produces a clock signal for use in synchronizing a circuit's operation. The signal can range from a simple symmetrical square wave to more complex arrangements. The basic parts that all clock generators share are a resonant circuit and an amplifier.
The resonant circuit is usually a quartz piezo-electric oscillator, although simpler tank circuits and even RC circuits may be used.
The amplifier circuit usually inverts the signal from the oscillator and feeds a portion back into the oscillator to maintain oscillation.
The generator may have additional sections to modify the basic signal. The Intel 8088, for example, used a 2/3 duty cycle clock, which required the clock generator to incorporate logic to convert the 50/50 duty cycle typical of raw oscillators into the required 2/3 duty cycle.
Other such optional sections include frequency divider or clock multiplier sections.
Programmable clock generators allow the number used in the divider or multiplier to be changed, allowing any of a wide variety of output frequencies to be selected without modifying the hardware.
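The arithmetic being programmed is simple; a toy model follows (a hypothetical sketch; real parts implement this with PLLs and counters, and the reference frequency and multiplier/divider values here are illustrative only):

```c
#include <stdio.h>

/* Hypothetical model of a programmable clock generator:
 * output = reference * multiplier / divider. */
double output_freq_hz(double ref_hz, unsigned mult, unsigned div)
{
    return ref_hz * (double)mult / (double)div;
}

int main(void)
{
    /* e.g. a 14.318 MHz reference crystal, x20 multiplier, /2 divider */
    printf("%.3f MHz\n", output_freq_hz(14.318e6, 20, 2) / 1e6);
    return 0;
}
```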
The clock generator in a motherboard is often changed by computer enthusiasts to control the speed of their CPU, FSB, GPU and RAM.
Typically the programmable clock generator is set by the BIOS at boot time to the selected value, although some systems have dynamic frequency scaling, which frequently re-programs the clock generator.
Timing-signal generators (TSGs)
TSGs are clocks that are used throughout service-provider networks, frequently as the building integrated timing supply (BITS) for a central office.
Digital switching systems and some transmission systems (e.g., SONET) depend on reliable, high-quality synchronization (or timing) to prevent impairments. To provide this, most service providers utilize interoffice synchronization distribution networks based on the stratum hierarchy and implement the BITS concept to meet intraoffice synchronization needs.
A TSG is clock equ |
https://en.wikipedia.org/wiki/Geographic%20coordinate%20conversion | In geodesy, conversion among different geographic coordinate systems is made necessary by the different geographic coordinate systems in use across the world and over time. Coordinate conversion is composed of a number of different types of conversion: format change of geographic coordinates, conversion of coordinate systems, or transformation to different geodetic datums. Geographic coordinate conversion has applications in cartography, surveying, navigation and geographic information systems.
In geodesy, geographic coordinate conversion is defined as translation among different coordinate formats or map projections all referenced to the same geodetic datum. A geographic coordinate transformation is a translation among different geodetic datums. Both geographic coordinate conversion and transformation will be considered in this article.
This article assumes readers are already familiar with the content in the articles geographic coordinate system and geodetic datum.
Change of units and format
Informally, specifying a geographic location usually means giving the location's latitude and longitude. The numerical values for latitude and longitude can occur in a number of different units or formats:
sexagesimal degree: degrees, minutes, and seconds : 40° 26′ 46″ N 79° 58′ 56″ W
degrees and decimal minutes: 40° 26.767′ N 79° 58.933′ W
decimal degrees: +40.446 -79.982
There are 60 minutes in a degree and 60 seconds in a minute. Therefore, to convert from a degrees minutes seconds format to a decimal degrees format, one may use the formula
$$\text{decimal degrees} = \text{degrees} + \frac{\text{minutes}}{60} + \frac{\text{seconds}}{3600}.$$
To convert back from decimal degree format to degrees minutes seconds format,
$$\text{degrees} = \operatorname{trunc}(\text{dd}), \qquad \text{minutes} = \operatorname{trunc}(|\text{dd}| \cdot 60) \bmod 60, \qquad \text{seconds} = (|\text{dd}| \cdot 3600) \bmod 60,$$
where dd is the coordinate in decimal degrees; the truncations and absolute values are just temporary devices to handle both positive and negative values properly (the sign is carried by the degrees term).
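A direct translation of these formulas (a minimal sketch; the struct and function names are invented for illustration, and the sign is carried in a separate flag rather than in the degrees field):

```c
#include <stdio.h>
#include <math.h>

struct dms { int degrees, minutes; double seconds; int negative; };

/* DMS -> decimal degrees, per the first formula above. */
double dms_to_decimal(struct dms d)
{
    double dd = d.degrees + d.minutes / 60.0 + d.seconds / 3600.0;
    return d.negative ? -dd : dd;
}

/* Decimal degrees -> DMS, handling the sign separately. */
struct dms decimal_to_dms(double dd)
{
    struct dms d;
    d.negative = dd < 0;
    double a = fabs(dd);                  /* temporary: magnitude of dd */
    d.degrees = (int)a;
    double rem = (a - d.degrees) * 60.0;  /* minutes plus a fraction */
    d.minutes = (int)rem;
    d.seconds = (rem - d.minutes) * 60.0;
    return d;
}

int main(void)
{
    struct dms w = {79, 58, 56.0, 1};     /* 79 deg 58' 56" W */
    double dd = dms_to_decimal(w);
    printf("%.6f\n", dd);                 /* prints -79.982222 */
    struct dms back = decimal_to_dms(dd); /* round trip */
    printf("%dd %d' %.0f\"\n", back.degrees, back.minutes, back.seconds);
    return 0;
}
```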
Coordinate system conversion
A coordinate system conversion is a conversion from one coordinate system to another, with both coordinate systems based on the same geodetic datum. Common conversion tasks include conversion between geodetic and earth-ce |
https://en.wikipedia.org/wiki/One-sided%20limit | In calculus, a one-sided limit refers to either one of the two limits of a function f(x) of a real variable x as x approaches a specified point either from the left or from the right.
The limit as x decreases in value approaching a (x approaches a "from the right" or "from above") can be denoted $\lim_{x \to a^{+}} f(x)$.
The limit as x increases in value approaching a (x approaches a "from the left" or "from below") can be denoted $\lim_{x \to a^{-}} f(x)$.
If the limit of f(x) as x approaches a exists, then the limits from the left and from the right both exist and are equal. In some cases in which the limit
$$\lim_{x \to a} f(x)$$
does not exist, the two one-sided limits nonetheless exist. Consequently, the limit as x approaches a is sometimes called a "two-sided limit".
It is possible for exactly one of the two one-sided limits to exist (while the other does not exist). It is also possible for neither of the two one-sided limits to exist.
Formal definition
Definition
If I represents some interval that is contained in the domain of f and if a is a point in I, then the right-sided limit as x approaches a can be rigorously defined as the value L that satisfies:
$$\text{for all } \varepsilon > 0 \text{ there exists some } \delta > 0 \text{ such that } |f(x) - L| < \varepsilon \text{ whenever } 0 < x - a < \delta,$$
and the left-sided limit as x approaches a can be rigorously defined as the value L that satisfies:
$$\text{for all } \varepsilon > 0 \text{ there exists some } \delta > 0 \text{ such that } |f(x) - L| < \varepsilon \text{ whenever } 0 < a - x < \delta.$$
We can represent the same thing more symbolically, as follows.
Let I represent an interval with $I \subseteq \operatorname{dom}(f)$, and let $a \in I$.
Intuition
In comparison to the formal definition for the limit of a function at a point, the one-sided limit (as the name would suggest) only deals with input values to one side of the approached input value.
For reference, the formal definition for the limit of a function at a point is as follows: $\lim_{x \to a} f(x) = L$ if and only if for all $\varepsilon > 0$ there exists some $\delta > 0$ such that $|f(x) - L| < \varepsilon$ whenever $0 < |x - a| < \delta$.
To define a one-sided limit, we must modify this inequality. Note that the absolute distance between x and a is $|x - a|$.
For the limit from the right, we want x to be to the right of a, which means that $x > a$, so $x - a$ is positive. From above, $x - a$ is the distance between x and a. We want to bound this distance by our value of $\delta$, giving the inequality $x - a < \delta$. Putting together the inequalities $0 < x - a$ and $x - a < \delta$ and using the transitivity property o |
https://en.wikipedia.org/wiki/Inflection%20point | In differential calculus and differential geometry, an inflection point, point of inflection, flex, or inflection (rarely inflexion) is a point on a smooth plane curve at which the curvature changes sign. In particular, in the case of the graph of a function, it is a point where the function changes from being concave (concave downward) to convex (concave upward), or vice versa.
For the graph of a function f of differentiability class C² (meaning that f, its first derivative f′, and its second derivative f″ all exist and are continuous), the condition f″ = 0 can also be used to find an inflection point, since a point of f″ = 0 must be passed to change f″ from a positive value (concave upward) to a negative value (concave downward) or vice versa, as f″ is continuous; an inflection point of the curve is where f″ = 0 and changes its sign at the point (from positive to negative or from negative to positive). A point where the second derivative vanishes but does not change its sign is sometimes called a point of undulation or undulation point.
In algebraic geometry an inflection point is defined slightly more generally, as a regular point where the tangent meets the curve to order at least 3, and an undulation point or hyperflex is defined as a point where the tangent meets the curve to order at least 4.
Definition
Inflection points in differential geometry are the points of the curve where the curvature changes its sign.
For example, the graph of the differentiable function f has an inflection point at (x, f(x)) if and only if its first derivative f′ has an isolated extremum at x (this is not the same as saying that f has an extremum). That is, in some neighborhood, x is the one and only point at which f′ has a (local) minimum or maximum. If all extrema of f′ are isolated, then an inflection point is a point on the graph of f at which the tangent crosses the curve.
A falling point of inflection is an inflection point where the derivative is negative on both sides of the point; in other words, it is |
https://en.wikipedia.org/wiki/In%20situ | In situ (; often not italicized in English) is a Latin phrase that translates literally to "on site" or "in position." It can mean "locally", "on site", "on the premises", or "in place" to describe where an event takes place and is used in many different contexts. For example, in fields such as physics, geology, chemistry, or biology, in situ may describe the way a measurement is taken, that is, in the same place the phenomenon is occurring without isolating it from other systems or altering the original conditions of the test. The opposite of in situ is ex situ.
Aerospace
In the aerospace industry, equipment on-board aircraft must be tested in situ, or in place, to confirm everything functions properly as a system. Individually, each piece may work but interference from nearby equipment may create unanticipated problems. Special test equipment is available for this in situ testing. It can also refer to repairs made to the aircraft structure or flight controls while still in place.
Archaeology
In archaeology, in situ refers to an artifact that has not been moved from its original place of deposition. In other words, it is stationary, meaning "still." An artifact being in situ is critical to the interpretation of that artifact and, consequently, of the culture which formed it. Once an artifact's 'find-site' has been recorded, the artifact can then be moved for conservation, further interpretation and display. An artifact that is not discovered in situ is considered out of context and as not providing an accurate picture of the associated culture. However, the out-of-context artifact can provide scientists with an example of types and locations of in situ artifacts yet to be discovered. When excavating a burial site or surface deposit "in situ" refers to cataloging, recording, mapping, photographing human remains in the position they are discovered.
The label in situ indicates only that the object has not been "newly" moved. Thus, an archaeological in situ find ma |
https://en.wikipedia.org/wiki/Evolution%20%28Baxter%20novel%29 | Evolution is a collection of short stories that work together to form an episodic science fiction novel by author Stephen Baxter. It follows 565 million years of human evolution, from shrewlike mammals 65 million years in the past to the ultimate fate of humanity (and its descendants, both biological and non-biological) 500 million years in the future.
Plot summary
The book follows the evolution of mankind as it shapes surviving Purgatorius into tree dwellers, remoulds a group that drifts from Africa to a (then much closer) New World on a raft formed out of debris, and confronts others with a terrible dead end as ice clamps down on Antarctica.
The stream of DNA runs on elsewhere, where ape-like creatures in North Africa are forced out of their diminishing forests to come across grasslands where their distant descendants will later run joyously. At one point, hominids become sapient, and go on to develop technology, including an evolving universal constructor machine that goes to Mars and multiplies, and in an act of global ecophagy consumes Mars by converting the planet into a mass of machinery that leaves the Solar system in search of new planets to assimilate. Human extinction (or the extinction of human culture) also occurs in the book, as well as the end of planet Earth and the rebirth of life on another planet. (The extinction-level event that causes the human extinction is, indirectly, an eruption of the Rabaul caldera, coupled with various actions of humans themselves, some of which are only vaguely referred to, but implied to be a form of genetic engineering which removed the ability to reproduce with non-engineered humans.) Also to be found in Evolution are ponderous Romans, sapient dinosaurs, the last of the wild Neanderthals, a primate who witnesses the extinction of the dinosaurs, symbiotic primate-tree relationships, mole people, and primates who live on a Mars-like Earth. The final chapter witnesses the final fate of the last primate and the des |
https://en.wikipedia.org/wiki/Crypto%20%28book%29 | Crypto: How the Code Rebels Beat the Government Saving Privacy in the Digital Age is a book about cryptography written by Steven Levy, published in 2001. Levy details the emergence of public key cryptography, digital signatures and the struggle between the National Security Agency (NSA) and the "cypherpunks". The book details the creation of Data Encryption Standard (DES), RSA and the Clipper chip.
Summary
See also
Books on cryptography
Crypto wars
References
External links
Presentation on Crypto by Levy, January 22, 2001, C-SPAN |
https://en.wikipedia.org/wiki/Load%20testing | Load testing is the process of putting demand on a structure or system and measuring its response.
Software load testing
Physical load testing
Many types of machinery, engines, structures, and motors are load tested. The load may be at a designated safe working load (SWL), full load, or at an aggravated level of load. The governing contract, technical specification or test method contains the details of conducting the test. The purpose of a mechanical load test is to verify that all the component parts of a structure, including materials and base-fixings, are fit for the task and loading it is designed for.
Several types of load testing are employed:
Static testing is when a designated constant load is applied for a specified time.
Dynamic testing is when a variable or moving load is applied.
Cyclical testing consists of repeated loading and unloading for specified cycles, durations and conditions.
The Supply of Machinery (Safety) Regulations 1992 (UK) state that load testing is undertaken before the equipment is put into service for the first time. Performance testing applies a safe working load (SWL), or other specified load, for a designated time in a governing test method, specification, or contract. Under the Lifting Operations and Lifting Equipment Regulations 1998 (UK), load testing after the initial test is required if a major component is replaced, if the item is moved from one location to another, or as dictated by the competent person.
Car charging system
A load test can be used to evaluate the health of a car's battery. The tester consists of a large resistor that has a resistance similar to a car's starter motor and a meter to read the battery's output voltage both in the unloaded and loaded state. When the tester is used, the battery's open circuit voltage is checked first. If the open circuit voltage is below spec (12.6 volts for a fully charged battery), the battery is charged first. After reading the battery's open circuit voltage, the load is applied. |
https://en.wikipedia.org/wiki/Stone%20space | In topology and related areas of mathematics, a Stone space, also known as a profinite space or profinite set, is a compact totally disconnected Hausdorff space. Stone spaces are named after Marshall Harvey Stone who introduced and studied them in the 1930s in the course of his investigation of Boolean algebras, which culminated in his representation theorem for Boolean algebras.
Equivalent conditions
The following conditions on the topological space are equivalent:
is a Stone space;
is homeomorphic to the projective limit (in the category of topological spaces) of an inverse system of finite discrete spaces;
is compact and totally separated;
is compact, T0, and zero-dimensional (in the sense of the small inductive dimension);
is coherent and Hausdorff.
Examples
Important examples of Stone spaces include finite discrete spaces, the Cantor set and the space $\mathbb{Z}_p$ of $p$-adic integers, where $p$ is any prime number. Generalizing these examples, any product of finite discrete spaces is a Stone space, and the topological space underlying any profinite group is a Stone space. The Stone–Čech compactification of the natural numbers with the discrete topology, or indeed of any discrete space, is a Stone space.
Stone's representation theorem for Boolean algebras
To every Boolean algebra $B$ we can associate a Stone space $S(B)$ as follows: the elements of $S(B)$ are the ultrafilters on $B$, and the topology on $S(B)$, called the Stone topology, is generated by the sets of the form $\{F \in S(B) : b \in F\}$, where $b \in B$.
Stone's representation theorem for Boolean algebras states that every Boolean algebra $B$ is isomorphic to the Boolean algebra of clopen sets of its Stone space $S(B)$; and furthermore, every Stone space $X$ is homeomorphic to the Stone space belonging to the Boolean algebra of clopen sets of $X$. These assignments are functorial, and we obtain a category-theoretic duality between the category of Boolean algebras (with homomorphisms as morphisms) and the category of Stone spaces (with continuous maps as morphisms).
Stone's theorem gave r |
https://en.wikipedia.org/wiki/Perspective%20%28graphical%29 | Linear or point-projection perspective is one of two types of graphical projection perspective in the graphic arts; the other is parallel projection. Linear perspective is an approximate representation, generally on a flat surface, of an image as it is seen by the eye. Perspective drawing is useful for representing a three-dimensional scene in a two-dimensional medium, like paper.
The most characteristic features of linear perspective are that objects appear smaller as their distance from the observer increases, and that they are subject to foreshortening, meaning that an object's dimensions along the line of sight appear shorter than its dimensions across the line of sight. All objects will recede to points in the distance, usually along the horizon line, but also above and below the horizon line depending on the view used.
Italian Renaissance painters and architects including Filippo Brunelleschi, Leon Battista Alberti, Masaccio, Paolo Uccello, Piero della Francesca and Luca Pacioli studied linear perspective, wrote treatises on it, and incorporated it into their artworks.
Overview
Perspective works by representing the light that passes from a scene through an imaginary rectangle (the picture plane), to the viewer's eye, as if a viewer were looking through a window and painting what is seen directly onto the windowpane. If viewed from the same spot as the windowpane was painted, the painted image would be identical to what was seen through the unpainted window. Each painted object in the scene is thus a flat, scaled down version of the object on the other side of the window.
Examples of one-point perspective
Examples of two-point perspective
Examples of three-point perspective
Examples of curvilinear perspective
Additionally, a central vanishing point can be used (just as with one-point perspective) to indicate frontal (foreshortened) depth.
History
Early history
The earliest art paintings and drawings typically sized many objects and characters hierarchically a |
https://en.wikipedia.org/wiki/Walsh%20function | In mathematics, more specifically in harmonic analysis, Walsh functions form a complete orthogonal set of functions that can be used to represent any discrete function—just like trigonometric functions can be used to represent any continuous function in Fourier analysis. They can thus be viewed as a discrete, digital counterpart of the continuous, analog system of trigonometric functions on the unit interval. But unlike the sine and cosine functions, which are continuous, Walsh functions are piecewise constant. They take the values −1 and +1 only, on sub-intervals defined by dyadic fractions.
The system of Walsh functions is known as the Walsh system. It is an extension of the Rademacher system of orthogonal functions.
Walsh functions, the Walsh system, the Walsh series, and the fast Walsh–Hadamard transform are all named after the American mathematician Joseph L. Walsh. They find various applications in physics and engineering when analyzing digital signals.
Historically, various numerations of Walsh functions have been used; none of them is particularly superior to another. This article uses the Walsh–Paley numeration.
Definition
We define the sequence of Walsh functions $W_k : [0,1] \rightarrow \{-1, 1\}$, $k \in \mathbb{N}_0$, as follows.
For any natural number k, and real number $x \in [0,1]$, let
$k_j$ be the jth bit in the binary representation of k, starting with $k_0$ as the least significant bit, and
$x_j$ be the jth bit in the fractional binary representation of x, starting with $x_1$ as the most significant fractional bit.
Then, by definition
$$W_k(x) = (-1)^{\sum_{j=0}^{\infty} k_j x_{j+1}}.$$
In particular, $W_0(x) = 1$ everywhere on the interval, since all bits of k are zero.
Notice that $W_{2^m}$ is precisely the Rademacher function $r_m$.
Thus, the Rademacher system is a subsystem of the Walsh system. Moreover, every Walsh function is a product of Rademacher functions:
$$W_k = \prod_{j=0}^{\infty} r_j^{\,k_j}.$$
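A direct numeric sketch of this definition (hypothetical code; it truncates x to 32 fractional bits, which is plenty for illustration):

```c
#include <stdio.h>

/* Evaluate the Walsh function W_k(x) in the Walsh-Paley numeration,
 * pairing bit j of k with fractional bit j+1 of x, per the
 * definition above. */
int walsh(unsigned k, double x)
{
    int parity = 0;
    for (int j = 0; j < 32; j++) {
        x *= 2.0;                /* shift the next fractional bit of x ... */
        int xbit = (int)x;       /* ... into the integer position */
        x -= xbit;
        if (((k >> j) & 1u) && xbit)
            parity ^= 1;         /* contributes k_j * x_{j+1} to the sum */
    }
    return parity ? -1 : 1;
}

int main(void)
{
    /* W_1 is the Rademacher function r_0: +1 on [0, 1/2), -1 on [1/2, 1) */
    printf("%d %d\n", walsh(1, 0.25), walsh(1, 0.75)); /* prints: 1 -1 */
    return 0;
}
```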
Comparison between Walsh functions and trigonometric functions
Walsh functions and trigonometric functions are both systems that form a complete, orthonormal set of functions, an orthonormal basis in Hilbert space of the square-inte |
https://en.wikipedia.org/wiki/Nanobacterium | Nanobacterium (pl. nanobacteria) is the unit or member name of a former proposed class of living organisms, specifically cell-walled microorganisms, now discredited, with a size much smaller than the generally accepted lower limit for life (about 200 nm for bacteria, like mycoplasma). Originally based on observed nano-scale structures in geological formations (including one meteorite), the status of nanobacteria was controversial, with some researchers suggesting they are a new class of living organism capable of incorporating radiolabeled uridine, and others attributing to them a simpler, abiotic nature. One skeptic dubbed them "the cold fusion of microbiology", in reference to a notorious episode of supposed erroneous science. The term "calcifying nanoparticles" (CNPs) has also been used as a conservative name regarding their possible status as a life form.
Research tends to agree that these structures exist, and appear to replicate in some way. However, the idea that they are living entities has now largely been discarded, and the particles are instead thought to be nonliving crystallizations of minerals and organic molecules.
1981–2000
In 1981 Francisco Torella and Richard Y. Morita described very small cells called ultramicrobacteria. Defined as being smaller than 300 nm, by 1982 MacDonell and Hood found that some could pass through a 200 nm membrane. Early in 1989, geologist Robert L. Folk found what he later identified as nannobacteria (written with double "n"), that is, nanoparticles isolated from geological specimens in travertine from hot springs of Viterbo, Italy. Initially searching for a bacterial cause for travertine deposition, scanning electron microscope examination of the mineral where no bacteria were detectable revealed extremely small objects which appeared to be biological. His first oral presentation elicited what he called "mostly a stony silence", at the 1992 Geological Society of America's annual convention. He proposed that nanoba |
https://en.wikipedia.org/wiki/Support%20%28mathematics%29 | In mathematics, the support of a real-valued function is the subset of the function domain containing the elements which are not mapped to zero. If the domain of is a topological space, then the support of is instead defined as the smallest closed set containing all points not mapped to zero. This concept is used very widely in mathematical analysis.
Formulation
Suppose that $f : X \to \mathbb{R}$ is a real-valued function whose domain is an arbitrary set $X$. The support of $f$, written $\operatorname{supp}(f)$, is the set of points in $X$ where $f$ is non-zero:
$$\operatorname{supp}(f) = \{x \in X : f(x) \neq 0\}.$$
The support of $f$ is the smallest subset of $X$ with the property that $f$ is zero on the subset's complement. If $f(x) = 0$ for all but a finite number of points $x \in X$, then $f$ is said to have finite support.
If the set $X$ has an additional structure (for example, a topology), then the support of $f$ is defined in an analogous way as the smallest subset of $X$ of an appropriate type such that $f$ vanishes in an appropriate sense on its complement. The notion of support also extends in a natural way to functions taking values in more general sets than $\mathbb{R}$ and to other objects, such as measures or distributions.
Closed support
The most common situation occurs when $X$ is a topological space (such as the real line or $n$-dimensional Euclidean space) and $f$ is a continuous real- (or complex-) valued function. In this case, the support of $f$, $\operatorname{supp}(f)$, or the closed support of $f$, is defined topologically as the closure (taken in $X$) of the subset of $X$ where $f$ is non-zero; that is,
$$\operatorname{supp}(f) := \overline{\{x \in X : f(x) \neq 0\}}.$$
Since the intersection of closed sets is closed, $\operatorname{supp}(f)$ is the intersection of all closed sets that contain the set-theoretic support of $f$.
For example, if $f : \mathbb{R} \to \mathbb{R}$ is the function defined by
$$f(x) = \begin{cases} 1 - x^2 & \text{if } |x| < 1 \\ 0 & \text{if } |x| \geq 1, \end{cases}$$
then the support of $f$, or the closed support of $f$, is the closed interval $[-1, 1]$, since $f$ is non-zero on the open interval $(-1, 1)$ and the closure of this set is $[-1, 1]$.
The notion of closed support is usually applied to continuous functions, but the definition makes sense for arbitrary real or complex-valued functions on a topological space, and some authors do not require that $f$ be continuous.
Compact support
Functions with |
https://en.wikipedia.org/wiki/Chicken%20%28game%29 | The game of chicken, also known as the hawk-dove game or snowdrift game, is a model of conflict for two players in game theory. The principle of the game is that while the ideal outcome is for one player to yield (to avoid the worst outcome if neither yields), individuals try to avoid it out of pride, not wanting to look like "chickens." Each player taunts the other to increase the risk of shame in yielding. However, when one player yields, the conflict is avoided, and the game essentially ends.
The name "chicken" has its origins in a game in which two drivers drive toward each other on a collision course: one must swerve, or both may die in the crash, but if one driver swerves and the other does not, the one who swerved will be called a "chicken", meaning a coward; this terminology is most prevalent in political science and economics. The name "hawk–dove" refers to a situation in which there is a competition for a shared resource and the contestants can choose either conciliation or conflict; this terminology is most commonly used in biology and evolutionary game theory. From a game-theoretic point of view, "chicken" and "hawk–dove" are identical. The game has also been used to describe the mutual assured destruction of nuclear warfare, especially the sort of brinkmanship involved in the Cuban Missile Crisis.
Popular versions
The game of chicken models two drivers, both headed for a single-lane bridge from opposite directions. The first to swerve away yields the bridge to the other. If neither player swerves, the result is a costly deadlock in the middle of the bridge or a potentially fatal head-on collision. It is presumed that the best thing for each driver is to stay straight while the other swerves (since the other is the "chicken" while a crash is avoided). Additionally, a crash is presumed to be the worst outcome for both players. This yields a situation where each player, in attempting to secure their best outcome, risks the worst.
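An illustrative payoff matrix consistent with that ranking (the numeric values are arbitrary stand-ins, not from the article; each cell lists the row player's payoff, then the column player's):

| | Swerve | Straight |
|---|---|---|
| Swerve | 0, 0 | -1, +1 |
| Straight | +1, -1 | -10, -10 |

Each player's best reply is to drive straight when the other swerves and to swerve when the other drives straight, so the two pure-strategy equilibria are the asymmetric outcomes; this anti-coordination structure is what distinguishes chicken from games like the prisoner's dilemma.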
The phrase game of chi |
https://en.wikipedia.org/wiki/Hilbert%27s%20sixteenth%20problem | Hilbert's 16th problem was posed by David Hilbert at the Paris conference of the International Congress of Mathematicians in 1900, as part of his list of 23 problems in mathematics.
The original problem was posed as the Problem of the topology of algebraic curves and surfaces (Problem der Topologie algebraischer Kurven und Flächen).
Actually the problem consists of two similar problems in different branches of mathematics:
An investigation of the relative positions of the branches of real algebraic curves of degree n (and similarly for algebraic surfaces).
The determination of the upper bound for the number of limit cycles in two-dimensional polynomial vector fields of degree n and an investigation of their relative positions.
The first problem is yet unsolved for n = 8. Therefore, this problem is what usually is meant when talking about Hilbert's sixteenth problem in real algebraic geometry. The second problem also remains unsolved: no upper bound for the number of limit cycles is known for any n > 1, and this is what usually is meant by Hilbert's sixteenth problem in the field of dynamical systems.
The Spanish Royal Society for Mathematics published an explanation of Hilbert's sixteenth problem.
The first part of Hilbert's 16th problem
In 1876, Harnack investigated algebraic curves in the real projective plane and found that curves of degree n could have no more than
$$\frac{(n-1)(n-2)}{2} + 1$$
separate connected components. (For n = 6 this bound is 11.) Furthermore, he showed how to construct curves that attained that upper bound, and thus that it was the best possible bound. Curves with that number of components are called M-curves.
Hilbert had investigated the M-curves of degree 6, and found that the 11 components always were grouped in a certain way. His challenge to the mathematical community now was to completely investigate the possible configurations of the components of the M-curves.
Furthermore, he requested a generalization of Harnack's curve theorem to algebraic surfaces and a similar investigation |
https://en.wikipedia.org/wiki/Adam%27s%20apple | The Adam's apple or laryngeal prominence is the protrusion in the human neck formed by the angle of the thyroid cartilage surrounding the larynx, typically visible in men, less frequently in women. The prominence of the Adam's apple increases as a secondary male sex characteristic in puberty.
Structure
The topographic structure which is externally visible and colloquially called the "Adam's apple" is caused by an anatomical structure of the thyroid cartilage called the laryngeal prominence or laryngeal protuberance protruding and forming a "bump" under the skin at the front of the throat. All human beings with a normal anatomy have a laryngeal protuberance of the thyroid cartilage. This prominence is typically larger and more externally noticeable in adult males. There are two reasons for this phenomenon. Firstly, the structural size of the thyroid cartilage in males tends to increase during puberty, and the laryngeal protuberance becomes more anteriorly focused. Secondly, the larynx, which the thyroid cartilage partially envelops, increases in size in male subjects during adolescence, moving the thyroid cartilage and its laryngeal protuberance towards the front of the neck. The adolescent development of both the larynx and the thyroid cartilage in males occur as a result of hormonal changes, especially the normal increase in testosterone production in adolescent males. In females, the laryngeal protuberance sits on the upper edge of the thyroid cartilage, and the larynx tends to be smaller in size, and so the "bump" caused by protrusion of the laryngeal protuberance is much less visible or not discernible. Even so, many women display an externally visible protrusion of the thyroid cartilage, an "Adam's apple", to varying degrees which are usually minor, and this should not normally be viewed as a medical disorder.
Function
The Adam's apple, in relation with the thyroid cartilage which forms it, helps protect the walls and the frontal part of the larynx, includin |
https://en.wikipedia.org/wiki/Singleton%20%28mathematics%29 | In mathematics, a singleton, also known as a unit set or one-point set, is a set with exactly one element. For example, the set {0} is a singleton whose single element is 0.
Properties
Within the framework of Zermelo–Fraenkel set theory, the axiom of regularity guarantees that no set is an element of itself. This implies that a singleton is necessarily distinct from the element it contains, thus 1 and {1} are not the same thing, and the empty set is distinct from the set containing only the empty set. A set such as is a singleton as it contains a single element (which itself is a set, however, not a singleton).
A set is a singleton if and only if its cardinality is 1. In von Neumann's set-theoretic construction of the natural numbers, the number 1 is defined as the singleton {0}.
In axiomatic set theory, the existence of singletons is a consequence of the axiom of pairing: for any set A, the axiom applied to A and A asserts the existence of {A, A}, which is the same as the singleton {A} (since it contains A, and no other set, as an element).
If A is any set and S is any singleton, then there exists precisely one function from A to S, the function sending every element of A to the single element of S. Thus every singleton is a terminal object in the category of sets.
A singleton has the property that every function from it to any arbitrary set is injective. The only non-singleton set with this property is the empty set.
Every singleton set is an ultra prefilter. If X is a set and x ∈ X, then the upward closure of {x} in X, which is the set {S ⊆ X : x ∈ S}, is a principal ultrafilter on X. Moreover, every principal ultrafilter on X is necessarily of this form. The ultrafilter lemma implies that non-principal ultrafilters exist on every infinite set (these are called free ultrafilters).
Every net valued in a singleton subset of X is an ultranet in X.
The Bell number integer sequence counts the number of partitions of a set; if singletons are excluded, then the numbers are smaller.
In category theory
Structures built on singletons |
https://en.wikipedia.org/wiki/Block%20%28programming%29 | In computer programming, a block or code block or block of code is a lexical structure of source code which is grouped together. Blocks consist of one or more declarations and statements. A programming language that permits the creation of blocks, including blocks nested within other blocks, is called a block-structured programming language. Blocks are fundamental to structured programming, where control structures are formed from blocks.
Blocks have two functions: to group statements so that they can be treated as one statement, and to define scopes for names to distinguish them from the same name used elsewhere. In a block-structured programming language, the objects named in outer blocks are visible inside inner blocks, unless they are masked by an object declared with the same name.
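A small C example of both functions, grouping and name masking (a generic sketch, not drawn from the article):

```c
#include <stdio.h>

int main(void)
{
    int x = 1;                  /* declared in the outer block */
    {                           /* inner block: groups statements, opens a scope */
        int x = 2;              /* masks (shadows) the outer x */
        printf("%d\n", x);      /* prints 2 */
    }
    printf("%d\n", x);          /* the outer x was never touched: prints 1 */
    return 0;
}
```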
History
Ideas of block structure were developed in the 1950s during the development of the first autocodes, and were formalized in the Algol 58 and Algol 60 reports. Algol 58 introduced the notion of the "compound statement", which was related solely to control flow. The subsequent Revised Report which described the syntax and semantics of Algol 60 introduced the notion of a block and block scope, with a block consisting of " A sequence of declarations followed by a sequence of statements and enclosed between begin and end..." in which "[e]very declaration appears in a block in this way and is valid only for that block."
Syntax
Blocks use different syntax in different languages. Two broad families are:
the ALGOL family, in which blocks are delimited by the keywords "begin" and "end" or equivalent. In C, blocks are delimited by curly braces, "{" and "}". ALGOL 68 uses parentheses.
parentheses, "(" and ")", are used in the MS-DOS batch language
indentation, as in Python
s-expressions with a syntactic keyword such as prog or let (as in the Lisp family)
In 1968 (with ALGOL 68), then in Edsger W. Dijkstra's 1974 Guarded Command Language the conditional and iterative code block a |
https://en.wikipedia.org/wiki/Kiyoshi%20Oka | Kiyoshi Oka (1901–1978) was a Japanese mathematician who did fundamental work in the theory of several complex variables.
Biography
Oka was born in Osaka. He went to Kyoto Imperial University in 1919, turning to mathematics in 1923 and graduating in 1924.
He was in Paris for three years from 1929, returning to Hiroshima University. He published solutions to the first and second Cousin problems, and work on domains of holomorphy, in the period 1936–1940. He received his Doctor of Science degree from Kyoto Imperial University in 1940. These results were later taken up by Henri Cartan and his school, playing a basic role in the development of sheaf theory.
The Oka–Weil theorem is due to a work of André Weil in 1935 and Oka's work in 1937.
Oka continued to work in the field, and proved Oka's coherence theorem in 1950. Oka's lemma is also named after him.
He was a professor at Nara Women's University from 1949 until his retirement in 1964. He received many honours in Japan.
Honors
1951 Japan Academy Prize
1954 Asahi Prize
1960 Order of Culture
1973 Order of the Sacred Treasure, 1st class
Bibliography
Kiyoshi Oka: Collected Papers
- Includes bibliographical references.
Selected papers (Sur les fonctions analytiques de plusieurs variables)
References
External links
Oka library at NWU
Photos of Prof. Kiyoshi Oka
Related to Works of Dr. Kiyoshi OKA
Oka Mathematical Institute |
https://en.wikipedia.org/wiki/Medical%20privacy | Medical privacy, or health privacy, is the practice of maintaining the security and confidentiality of patient records. It involves both the conversational discretion of health care providers and the security of medical records. The terms can also refer to the physical privacy of patients from other patients and providers while in a medical facility, and to modesty in medical settings. Modern concerns include the degree of disclosure to insurance companies, employers, and other third parties. The advent of electronic medical records (EMR) and patient care management systems (PCMS) have raised new concerns about privacy, balanced with efforts to reduce duplication of services and medical errors.
Most developed countries, including Australia, Canada, Turkey, the United Kingdom, the United States, New Zealand, and the Netherlands, have enacted laws protecting people's medical privacy. However, many of these privacy laws have proven less effective in practice than in theory. In 1996, the United States passed the Health Insurance Portability and Accountability Act (HIPAA), which aimed to increase privacy precautions within medical institutions.
History of medical privacy
The history of medical privacy traces back to the Hippocratic Oath, which postulates the secrecy of information obtained when helping a patient.
Prior to the technological boom, medical institutions relied on the paper medium to file individual medical data. Nowadays, more and more information is stored within electronic databases. Research shows that it is safer to have information stored within a paper medium as it is harder to physically steal data, whilst digital records are vulnerable to access by hackers.
In order to reform the healthcare privacy issues in the early 1990s, researchers looked into the use of credit cards and smart cards to allow access to their medical information without fear of stolen information. The "smart" card allowed the storage and processing of informati |
https://en.wikipedia.org/wiki/Lexicostatistics | Lexicostatistics is a method of comparative linguistics that involves comparing the percentage of lexical cognates between languages to determine their relationship. Lexicostatistics is related to the comparative method but does not reconstruct a proto-language. It is to be distinguished from glottochronology, which attempts to use lexicostatistical methods to estimate the length of time since two or more languages diverged from a common earlier proto-language. This is merely one application of lexicostatistics, however; other applications of it may not share the assumption of a constant rate of change for basic lexical items.
The term "lexicostatistics" is misleading in that mathematical equations are used but not statistics. Other features of a language may be used other than the lexicon, though this is unusual. Whereas the comparative method used shared identified innovations to determine sub-groups, lexicostatistics does not identify these. Lexicostatistics is a distance-based method, whereas the comparative method considers language characters directly. The lexicostatistics method is a simple and fast technique relative to the comparative method but has limitations (discussed below). It can be validated by cross-checking the trees produced by both methods.
History
Lexicostatistics was developed by Morris Swadesh in a series of articles in the 1950s, based on earlier ideas. The concept's first known use was by Dumont d'Urville in 1834, who compared various "Oceanic" languages and proposed a method for calculating a coefficient of relationship. Hymes (1960) and Embleton (1986) both review the history of lexicostatistics.
Method
Create word list
The aim is to generate a list of universally used meanings (hand, mouth, sky, I). Words are then collected for these meaning slots for each language being considered. Swadesh originally reduced a larger set of meanings down to 200. He later found that it was necessary to reduce it further but that he could include some |
https://en.wikipedia.org/wiki/Proof%20that%2022/7%20exceeds%20%CF%80 | Proofs of the mathematical result that the rational number 22/7 is greater than π (pi) date back to antiquity. One of these proofs, more recently developed but requiring only elementary techniques from calculus, has attracted attention in modern mathematics due to its mathematical elegance and its connections to the theory of Diophantine approximations. Stephen Lucas calls this proof "one of the more beautiful results related to approximating π".
Julian Havil ends a discussion of continued fraction approximations of π with the result, describing it as "impossible to resist mentioning" in that context.
The purpose of the proof is not primarily to convince its readers that 22/7 is indeed bigger than π; systematic methods of computing the value of π exist. If one knows that π is approximately 3.14159, then it trivially follows that π < 22/7, which is approximately 3.142857. But it takes much less work to show that π < 22/7 by the method used in this proof than to show that π is approximately 3.14159.
Background
22/7 is a widely used Diophantine approximation of π. It is a convergent in the simple continued fraction expansion of π. It is greater than π, as can be readily seen in the decimal expansions of these values:
The approximation has been known since antiquity. Archimedes wrote the first known proof that 22/7 is an overestimate in the 3rd century BCE, although he may not have been the first to use that approximation. His proof proceeds by showing that 22/7 is greater than the ratio of the perimeter of a regular polygon with 96 sides to the diameter of a circle it circumscribes.
The proof
The proof can be expressed very succinctly: 0 < ∫₀¹ x⁴(1 − x)⁴/(1 + x²) dx = 22/7 − π.
Therefore, 22/7 > π.
The evaluation of this integral was the first problem in the 1968 Putnam Competition.
It is easier than most Putnam Competition problems, but the competition often features seemingly obscure problems that turn out to refer to something very familiar. This integral has also been used in the entrance examinations for the Indian Institutes of Technology.
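As a quick numerical sanity check (a sketch using only the standard library; not part of the original proof), one can approximate the integral and compare it with 22/7 − π:

```python
import math

# Midpoint-rule approximation of I = integral from 0 to 1 of
# x^4 (1 - x)^4 / (1 + x^2) dx. The proof shows I = 22/7 - pi exactly.
def integrand(x):
    return x**4 * (1 - x)**4 / (1 + x**2)

n = 100_000
I = sum(integrand((k + 0.5) / n) for k in range(n)) / n

print(I)                  # a small positive number, ~0.00126
print(22 / 7 - math.pi)   # the same value, so 22/7 - pi > 0
```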
|
https://en.wikipedia.org/wiki/Sysmex%20XE-2100 | The Sysmex XE-2100 is a haematology automated analyser, used to quickly perform full blood counts and reticulocyte counts. It is made by the Sysmex Corporation.
It can be run on its own, or connected to a blood film making and staining unit. Racks of blood go in on a tray on the right, and come out the left side. The racks hold ten 4.5 mL tubes, and have a notch so they can only go in one way.
As the tubes go through the machine, two are picked up and inverted five times to mix, and the first one is sampled. They are put down again, the rack moves along one space, and two more are picked up and mixed five times; this ensures that each tube is inverted ten times before being sampled.
The caps are left on the tubes as they go through the machine. A piercer takes a sample through the rubber centre while the tube is upside down. EDTA (lavender) tubes are usually used, although citrate (blue top) tubes will also work (though the result must be corrected for dilution).
Paediatric and oversized tubes can be put through manually via a sampler on the left-hand side of the machine.
Data from the XE-2100 can be viewed with a computer program.
Price
This machine can be purchased for around US$107,000.
Principles of measurement
Blood is sampled and diluted, and moves through a tube thin enough that cells pass by one at a time. Characteristics of each cell are measured using lasers (fluorescence flow cytometry) or electrical impedance.
Because not everything about the cells can be measured at the same time, blood is separated into a number of different channels. In the XE-2100 there are five different channels: WBC/BASO, DIFF, IMI, RET and NRBC.
See also
Medical technologist
Laboratory equipment
Measuring instruments |
https://en.wikipedia.org/wiki/Trivial%20topology | In topology, a topological space with the trivial topology is one where the only open sets are the empty set and the entire space. Such spaces are commonly called indiscrete, anti-discrete, concrete or codiscrete. Intuitively, this has the consequence that all points of the space are "lumped together" and cannot be distinguished by topological means. Every indiscrete space is a pseudometric space in which the distance between any two points is zero.
Details
The trivial topology is the topology with the least possible number of open sets, namely the empty set and the entire space, since the definition of a topology requires these two sets to be open. Despite its simplicity, a space X with more than one element and the trivial topology lacks a key desirable property: it is not a T0 space.
Other properties of an indiscrete space X—many of which are quite unusual—include:
The only closed sets are the empty set and X.
The only possible basis of X is {X}.
If X has more than one point, then since it is not T0, it does not satisfy any of the higher T axioms either. In particular, it is not a Hausdorff space. Not being Hausdorff, X is not an order topology, nor is it metrizable.
X is, however, regular, completely regular, normal, and completely normal; all in a rather vacuous way though, since the only closed sets are ∅ and X.
X is compact and therefore paracompact, Lindelöf, and locally compact.
Every function whose domain is a topological space and codomain X is continuous, since the preimages of the only open sets, ∅ and X, are the empty set and the whole domain, both of which are open.
X is path-connected and so connected.
X is second-countable, and therefore is first-countable, separable and Lindelöf.
All subspaces of X have the trivial topology.
All quotient spaces of X have the trivial topology.
Arbitrary products of trivial topological spaces, with either the product topology or box topology, have the trivial topology.
All sequences in X converge to every point of X. In particular, every sequence has a convergent subsequence (the whole sequence or any other subseque |
https://en.wikipedia.org/wiki/GloFish | The GloFish is a patented and trademarked brand of fluorescently colored genetically modified fish. They have been created from several different species of fish: zebrafish (Danio rerio) were the first GloFish available in pet stores, and recently tetra (Gymnocorymbus ternetzi), tiger barbs (Puntius tetrazona), Rainbow Shark (Epalzeorhynchos frenatum), Siamese fighting fish (Betta splendens), and most recently Bronze corydoras (Corydoras aeneus) have been added to the lineup. They are sold in many colors, trademarked as "Starfire Red", "Moonrise Pink", "Sunburst Orange", "Electric Green", "Cosmic Blue", and "Galactic Purple", although not all species are available in all colors. Although not originally developed for the ornamental fish trade, it is one of the first genetically modified animals to become publicly available. The rights to GloFish are owned by Spectrum Brands, Inc., which purchased GloFish from Yorktown Technologies, the original developer of GloFish, in May 2017.
History
Early development
The original zebrafish (or zebra danio, Danio rerio) is a native of rivers in India and Bangladesh. It measures three centimeters long and has gold and dark blue stripes. In 1999, Dr. Zhiyuan Gong and his colleagues at the National University of Singapore were working with a gene that encodes the green fluorescent protein (GFP), originally extracted from a jellyfish, that naturally produced bright green fluorescence. They inserted the gene into a zebrafish embryo, allowing it to integrate into the zebrafish's genome, which caused the fish to be brightly fluorescent under both natural white light and ultraviolet light. Their goal was to develop a fish that could detect pollution by selectively fluorescing in the presence of environmental toxins. The development of the constantly fluorescing fish was the first step in this process, and the National University of Singapore filed a patent application on this work. Shortly thereafter, his team developed a line of re |
https://en.wikipedia.org/wiki/Cofiniteness | In mathematics, a cofinite subset of a set X is a subset A whose complement in X is a finite set. In other words, A contains all but finitely many elements of X. If the complement is not finite, but is countable, then one says the set is cocountable.
These arise naturally when generalizing structures on finite sets to infinite sets, particularly on infinite products, as in the product topology or direct sum.
This use of the prefix "co" to describe a property possessed by a set's complement is consistent with its use in other terms such as "comeagre set".
Boolean algebras
The set of all subsets of X that are either finite or cofinite forms a Boolean algebra, which means that it is closed under the operations of union, intersection, and complementation. This Boolean algebra is the finite–cofinite algebra on X. A Boolean algebra A has a unique non-principal ultrafilter (that is, a maximal filter not generated by a single element of the algebra) if and only if there exists an infinite set X such that A is isomorphic to the finite–cofinite algebra on X. In this case, the non-principal ultrafilter is the set of all cofinite sets.
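The closure properties are easy to see concretely. Below is a minimal sketch (the representation is an assumption for illustration, not from the article) of finite-or-cofinite subsets of an infinite set, closed under complement, union, and intersection:

```python
from dataclasses import dataclass

# Represent a finite-or-cofinite subset of an infinite set X by the finitely
# many elements stored in `support`, plus a flag telling whether the subset
# is `support` itself (cofinite=False) or its complement X \ support.
@dataclass(frozen=True)
class FiniteCofinite:
    support: frozenset
    cofinite: bool

    def complement(self):
        return FiniteCofinite(self.support, not self.cofinite)

    def union(self, other):
        if not self.cofinite and not other.cofinite:   # finite with finite
            return FiniteCofinite(self.support | other.support, False)
        if self.cofinite and other.cofinite:           # cofinite with cofinite
            return FiniteCofinite(self.support & other.support, True)
        fin, cof = (self, other) if not self.cofinite else (other, self)
        return FiniteCofinite(cof.support - fin.support, True)

    def intersection(self, other):
        # De Morgan: the complement of the union of the complements,
        # so intersection is also closed.
        return self.complement().union(other.complement()).complement()

a = FiniteCofinite(frozenset({1, 2}), False)       # the finite set {1, 2}
b = FiniteCofinite(frozenset({2, 3}), True)        # X \ {2, 3}, cofinite
print(a.union(b))          # cofinite: X \ {3}
print(a.intersection(b))   # finite: {1}
```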
Cofinite topology
The cofinite topology (sometimes called the finite complement topology) is a topology that can be defined on every set X. It has precisely the empty set and all cofinite subsets of X as open sets. As a consequence, in the cofinite topology, the only closed subsets are the finite sets and the whole of X. Symbolically, one writes the topology as T = {A ⊆ X : A = ∅ or X ∖ A is finite}.
This topology occurs naturally in the context of the Zariski topology. Since polynomials in one variable over a field K are zero on finite sets, or the whole of K, the Zariski topology on K (considered as the affine line) is the cofinite topology. The same is true for any irreducible algebraic curve; it is not true, for example, for xy = 0 in the plane.
Properties
Subspaces: Every subspace topology of the cofinite topology is also a cofinite topology.
Compactness: Since every open set contains all but finitely many points of the space, the space is co |
https://en.wikipedia.org/wiki/Polydivisible%20number | In mathematics a polydivisible number (or magic number) is a number in a given number base with digits abcde... that has the following properties:
Its first digit a is not 0.
The number formed by its first two digits ab is a multiple of 2.
The number formed by its first three digits abc is a multiple of 3.
The number formed by its first four digits abcd is a multiple of 4.
etc.
Definition
Let n be a positive integer, and let k be the number of digits in n written in base b. The number n is a polydivisible number if for all 1 ≤ i ≤ k,
⌊n / bᵏ⁻ⁱ⌋ ≡ 0 (mod i).
Example
For example, 10801 is a seven-digit polydivisible number in base 4, as its base-4 representation is 2220301 and the prefixes 2, 22, 222, 2220, 22203, 222030 and 2220301 (read in base 4) are divisible by 1, 2, 3, 4, 5, 6 and 7 respectively.
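A direct checker makes the definition concrete (a sketch; the function name and test values are illustrative):

```python
def is_polydivisible(n: int, base: int = 10) -> bool:
    """Return True if every i-digit prefix of n (in `base`) is divisible by i."""
    digits = []
    m = n
    while m:
        digits.append(m % base)
        m //= base
    digits.reverse()
    if not digits or digits[0] == 0:
        return False                      # first digit must be nonzero
    prefix = 0
    for i, d in enumerate(digits, start=1):
        prefix = prefix * base + d        # value of the first i digits
        if prefix % i:
            return False
    return True

assert is_polydivisible(10801, base=4)    # 2220301 in base 4, as above
assert is_polydivisible(102)              # 1 | 1, 2 | 10, 3 | 102
```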
Enumeration
For any given base b, there are only a finite number of polydivisible numbers.
Maximum polydivisible number
The following table lists maximum polydivisible numbers for some bases b, where the letters A to Z represent digit values 10 to 35.
Estimate for Fb(n) and Σ(b)
Let n be the number of digits. The function Fb(n) determines the number of polydivisible numbers that have n digits in base b, and the function Σ(b) is the total number of polydivisible numbers in base b.
If k is a polydivisible number in base b with n digits, then it can be extended to create a polydivisible number with n + 1 digits if there is a number between kb and kb + (b − 1) that is divisible by n + 1. If n + 1 is less than or equal to b, then it is always possible to extend an n-digit polydivisible number to an (n + 1)-digit polydivisible number in this way, and indeed there may be more than one possible extension. If n + 1 is greater than b, it is not always possible to extend a polydivisible number in this way, and as n becomes larger, the chances of being able to extend a given polydivisible number become smaller. On average, each polydivisible number with n digits can be extended to a polydivisible number with n + 1 digits in b/(n + 1) different ways. This leads to the following estimate for Fb(n): Fb(n) ≈ (b − 1) · bⁿ⁻¹ / n!.
Summing over all values of n, this estimate suggests that the total number of polydivisible numbers will be approximately Σ(b) ≈ (b − 1)(eᵇ − 1) / b.
Specific bases
All numbers are represented in bas |
https://en.wikipedia.org/wiki/Seifert%E2%80%93Van%20Kampen%20theorem | In mathematics, the Seifert–Van Kampen theorem of algebraic topology (named after Herbert Seifert and Egbert van Kampen), sometimes just called Van Kampen's theorem, expresses the structure of the fundamental group of a topological space in terms of the fundamental groups of two open, path-connected subspaces that cover . It can therefore be used for computations of the fundamental group of spaces that are constructed out of simpler ones.
Van Kampen's theorem for fundamental groups
Let X be a topological space which is the union of two open and path connected subspaces U1, U2. Suppose U1 ∩ U2 is path connected and nonempty, and let x0 be a point in U1 ∩ U2 that will be used as the base of all fundamental groups. The inclusion maps of U1 and U2 into X induce group homomorphisms k1 : π1(U1, x0) → π1(X, x0) and k2 : π1(U2, x0) → π1(X, x0). Then X is path connected and k1 and k2 form a commutative pushout diagram:
The natural morphism k is an isomorphism. That is, the fundamental group of X is the free product of the fundamental groups of U1 and U2 with amalgamation of π1(U1 ∩ U2, x0).
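In symbols (a standard modern formulation of the conclusion, not quoted verbatim from this article):

```latex
\pi_1(X, x_0) \,\cong\, \pi_1(U_1, x_0) \ast_{\pi_1(U_1 \cap U_2, x_0)} \pi_1(U_2, x_0)
```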
Usually the morphisms induced by inclusion in this theorem are not themselves injective, and the more precise version of the statement is in terms of pushouts of groups.
Van Kampen's theorem for fundamental groupoids
Unfortunately, the theorem as given above does not compute the fundamental group of the circle – which is the most important basic example in algebraic topology – because the circle cannot be realised as the union of two open sets with connected intersection. This problem can be resolved by working with the fundamental groupoid on a set A of base points, chosen according to the geometry of the situation. Thus for the circle, one uses two base points.
This groupoid consists of homotopy classes relative to the end points of paths in X joining points of A ∩ X. In particular, if X is a contractible space, and A consists of two distinct points of X, then the fundamental groupoid π1(X, A) is easily seen to be isomorphic to the groupoid often written 𝓘, with two vertices and ex |
https://en.wikipedia.org/wiki/Electronic%20Recording%20Machine%2C%20Accounting | ERMA (Electronic Recording Machine, Accounting) was a computer technology that automated bank bookkeeping and check processing. Developed at the nonprofit research institution SRI International under contract from Bank of America, the project began in 1950 and was publicly revealed in September 1955.
Payments experts contend that ERMA "established the foundation for computerized banking, magnetic ink character recognition (MICR), and credit-card processing". General Electric (GE) won the production contract, deciding to transistorize the design in the process. Calling the machine the GE-100, a total of 32 ERMA machines were built. GE would use this experience to develop several mainframe computer lines before selling the division to Honeywell in 1970.
History
Background
In 1950, Bank of America (BoA) was the largest bank in California, and led the world in the use of checks. This presented a serious problem because of the time needed to process the growing workload. An experienced bookkeeper could post 245 accounts in an hour, about 2,000 in an eight-hour workday and approximately 10,000 per week. Bank of America's checking accounts were growing at a rate of 23,000 per month and banks were being forced to close their doors by 2 p.m. to finish daily postings.
S. Clark Beise was a senior vice president at BoA who was introduced to Thomas H. Morrin, SRI's Director of Engineering. They formed an alliance under which SRI would essentially act as BoA's research and development arm. In July 1950 they contracted SRI for an initial feasibility study for automating their bookkeeping and check handling. ERMA was under the technical leadership of computer scientist Jerre Noe.
First study
SRI immediately found a problem. Because accounts were kept alphabetically, adding a new account required a reshuffling of the account listings. SRI instead suggested using account numbers, simply adding new ones to the end of the list. In addition these numbers would be pre-printed on checks, thereby dramati |
https://en.wikipedia.org/wiki/Cantor%E2%80%93Bernstein%20theorem | In set theory and order theory, the Cantor–Bernstein theorem states that the cardinality of the second type class, the class of countable order types, equals the cardinality of the continuum. It was used by Felix Hausdorff and named by him after Georg Cantor and Felix Bernstein. Cantor constructed a family of countable order types with the cardinality of the continuum, and in his 1901 inaugural dissertation Bernstein proved that such a family can have no higher cardinality.
|
https://en.wikipedia.org/wiki/Dyne%3Abolic | dyne:bolic GNU/Linux is a Live CD/DVD distribution based on the Linux kernel. It is shaped by the needs of media activists, artists and creators to be a practical tool with a focus on multimedia production, that delivers a large assortment of applications. It allows manipulation and broadcast of both sound and video with tools to record, edit, encode, and stream. In addition to multimedia specific programs, dyne:bolic also provides word processors and common desktop computing tools.
Termed "Rastasoft" by its author, it is based entirely on free software, and as such is recommended and endorsed by the GNU Project. dyne:bolic is created by volunteers and the author and maintainer Jaromil, who also included multimedia tools such as MusE, HasciiCam, and FreeJ in the distribution.
Live CD/DVD
dyne:bolic is intended to be used as Live CD/DVD. It does not require installation to a hard drive, and attempts to recognize most devices and peripherals (sound, video, TV, etc.) automatically. It is designed to work with older and slower computers, its kernel optimized for low latency and for performance, making the distribution suitable for audio and video production and turning PCs into full media stations. For that reason software included is sometimes not at the newest version available.
Modules
dyne:bolic can be extended by downloading extra modules such as development tools or common software like OpenOffice.org. These are SquashFS files placed in the directory of a dock (see below) or a burnt CD and are automatically integrated at boot.
System requirements
Basic system requirements for version 1.x and 2.x were relatively low. A PC with a Pentium or AMD K5 (i586) class CPU, 64 MB of RAM, and an IDE CD-ROM drive is sufficient. Some versions of dyne:bolic 1.x were ported by co-developer Smilzo to be used on the Xbox game console, and multiple Xbox installations could be clustered. Console installation and clustering are currently not supported by version 2.x and up.
Vers |
https://en.wikipedia.org/wiki/Trichophagia | Trichophagia is a form of disordered eating in which persons with the disorder suck on, chew, swallow, or otherwise eat hair. The term is derived from ancient Greek θρίξ, ("hair") and φαγεῖν, ("to eat"). Tricho-phagy refers only to the chewing of hair, whereas tricho-phagia is the ingestion of hair, but many texts refer to both habits simply as trichophagia. It is considered a chronic psychiatric disorder of impulse control. Trichophagia belongs to a subset of pica disorders and is often associated with trichotillomania, the compulsive pulling out of one's own hair. People with trichotillomania often also have trichophagia, with estimates ranging from 48-58% having an oral habit such as biting or chewing (i.e. trichophagy), and 4-20% actually swallowing and ingesting their hair (true trichophagia). In an even smaller subset of people with trichotillomania, the trichophagia can become so severe that they develop a hair ball. Termed a trichobezoar, these masses can be benign, or can cause significant health concerns and require emergency surgery to remove them. Rapunzel syndrome is a further complication whereby the hair ball extends past the stomach and can cause blockages of the gastrointestinal system.
Signs and symptoms
Signs and symptoms of trichophagia are variable depending on the individual's behavior patterns. Trichophagia's loosest definition is the putting of hair in one's mouth, whether that be to chew it or suck on it, with the strictest definition being that the hair is swallowed and ingested. Trichophagia is most closely associated with trichotillomania, the pulling out of one's own hair, and thus any symptoms of trichotillomania could be predictive of trichophagia and must be ruled out. Rarely, persons with trichophagia do not have trichotillomania, and instead will eat the hair of others.
Trichotillomania can be categorized as either "automatic", where the hair pulling is so habitual it is almost unconscious, or "focussed" where the pulling is more d |
https://en.wikipedia.org/wiki/Time%20Cube | Time Cube was a pseudoscientific personal web page founded in 1997 by the self-proclaimed "wisest man on earth," Otis Eugene "Gene" Ray. It was a self-published outlet for Ray's "theory of everything", also called "Time Cube," which polemically claims that all modern sciences are participating in a worldwide conspiracy to teach lies, by omitting his theory's alleged truth that each day actually consists of four days occurring simultaneously. Alongside these statements, Ray described himself as a "godlike being with superior intelligence who has absolute evidence and proof" for his views. Ray asserted repeatedly and variously that the academic world had not taken Time Cube seriously.
Ray died on March 18, 2015, at the age of 87. His website domain names expired in August 2015, and Time Cube was last archived by the Wayback Machine on January 12, 2016 (January 10–14).
Content
Style
The Time Cube website contained no home page. It consisted of a number of web pages that contained a single vertical centre-aligned column of body text in various sizes and colors, resulting in extremely long main pages. Finding any particular passage was almost impossible without manually searching.
A large amount of self-invented jargon is used throughout: some words and phrases are used frequently but never defined; such terms apparently refer to the weaknesses of widely propagated ideas that Ray detests throughout the text, and are usually capitalized even when used as adjectives. In one paragraph, he claimed that his own wisdom "so antiquates known knowledge" that a psychiatrist examining his behavior diagnosed him with schizophrenia.
Various commentators have asserted that it is futile to analyze the text rationally, interpret meaningful proofs from the text, or test any claims.
Time Cube concept
Ray's personal model of reality, called "Time Cube", states that all of modern physics and education is wrong, and argues that, among many other things, Greenwich Time is a global |
https://en.wikipedia.org/wiki/Italian%20school%20of%20algebraic%20geometry | In relation to the history of mathematics, the Italian school of algebraic geometry refers to mathematicians and their work in birational geometry, particularly on algebraic surfaces, centered around Rome roughly from 1885 to 1935. There were 30 to 40 leading mathematicians who made major contributions, about half of those being Italian. The leadership fell to the group in Rome of Guido Castelnuovo, Federigo Enriques and Francesco Severi, who were involved in some of the deepest discoveries, as well as setting the style.
Algebraic surfaces
The emphasis on algebraic surfaces—algebraic varieties of dimension two—followed on from an essentially complete geometric theory of algebraic curves (dimension 1). The position around 1870 was that curve theory had incorporated, with Brill–Noether theory, the Riemann–Roch theorem in all its refinements (via the detailed geometry of the theta-divisor).
The classification of algebraic surfaces was a bold and successful attempt to repeat the division of algebraic curves by their genus g. The division of curves corresponds to the rough classification into the three types: g = 0 (projective line); g = 1 (elliptic curve); and g > 1 (Riemann surfaces with independent holomorphic differentials). In the case of surfaces, the Enriques classification was into five similar big classes, with three of those being analogues of the curve cases, and two more (elliptic fibrations, and K3 surfaces, as they would now be called) being, with the case of two-dimensional abelian varieties, in the 'middle' territory. This was an essentially sound, breakthrough set of insights, recovered in modern complex manifold language by Kunihiko Kodaira in the 1950s, and refined to include mod p phenomena by Zariski, the Shafarevich school and others by around 1960. The form of the Riemann–Roch theorem on a surface was also worked out.
Foundational issues
Some proofs produced by the school are not considered satisfactory because of foundational difficulties. The |
https://en.wikipedia.org/wiki/Moodle | Moodle ( ) is a free and open-source learning management system written in PHP and distributed under the GNU General Public License. Moodle is used for blended learning, distance education, flipped classroom and other online learning projects in schools, universities, workplaces and other sectors.
Moodle is used to create custom websites with online courses and allows for community-sourced plugins.
Overview
Moodle was originally developed by Martin Dougiamas with the goal of helping educators and scholars create online courses and focus on interaction and collaborative construction of content. The first version of Moodle was released on 20 August 2002, and it continues to be actively developed.
The Moodle Project is led and coordinated by Moodle HQ, an Australian company financially supported by a network of eighty-four Moodle Partner service companies worldwide. Development is also assisted by the open-source community.
Moodle is a learning platform used to augment and move existing learning environments online. As an E-learning tool, Moodle developed a number of features now considered standard for learning management systems, such as a calendar and gradebook.
Plugins, custom graphical themes, mobile responsive web design, and a Moodle mobile app are available to customize each individual's experience on the platform. Moodle's mobile app is available on Google Play, App Store (iOS), F-Droid (Android FLOSS repository), and the Windows Phone Store.
E-learning standards support
Moodle has adopted the following e-learning standards:
Sharable Content Object Reference Model (SCORM) is a collection of E-learning standards and specifications that define communications between client side content and a server side learning management system, as well as how externally authored content should be packaged in order to integrate with the LMS effectively. There are two versions: SCORM 1.2 and SCORM 2004. Moodle is SCORM 1.2 compliant, and passes all the tests in the ADL Co |
https://en.wikipedia.org/wiki/Binary%20system | A binary system is a system of two astronomical bodies which are close enough that their gravitational attraction causes them to orbit each other around a barycenter (also see animated examples). More restrictive definitions require that this common center of mass is not located within the interior of either object, in order to exclude the typical planet–satellite systems and planetary systems.
The most common binary systems are binary stars and binary asteroids, but brown dwarfs, planets, neutron stars, black holes and galaxies can also form binaries.
A multiple system is like a binary system but consists of three or more objects such as for trinary stars and trinary asteroids.
Classification
In a binary system, the brighter object is referred to as the primary, and the other as the secondary.
They are also classified based on orbit. Wide binaries are objects with orbits that keep them apart from one another. They evolve separately and have very little effect on each other. Close binaries are close to each other and are able to transfer mass from one to the other.
They can also be classified based on how we observe them. Visual binaries are two stars separated enough that they can be viewed through a telescope or binoculars.
Eclipsing binaries are systems in which the objects' orbits are at such an angle that, when one passes in front of the other, it causes an eclipse as seen from Earth.
Astrometric binaries are objects that seem to move around nothing, as their companion cannot be identified but only inferred. The companion object may not be bright enough or may be hidden in the glare from the primary object.
A related classification, though not a true binary system, is the optical binary, which refers to objects that are so close together in the sky that they appear to be a binary system, but are not. Such objects merely appear to be close together, but lie at different distances from the Solar System.
Binary companion (minor planets)
When binary minor planets are similar in |
https://en.wikipedia.org/wiki/Algorithmic%20learning%20theory | Algorithmic learning theory is a mathematical framework for analyzing
machine learning problems and algorithms. Synonyms include formal learning theory and algorithmic inductive inference. Algorithmic learning theory is different from statistical learning theory in that it does not make use of statistical assumptions and analysis. Both algorithmic and statistical learning theory are concerned with machine learning and can thus be viewed as branches of computational learning theory.
Distinguishing characteristics
Unlike statistical learning theory and most statistical theory in general, algorithmic learning theory does not assume that data are random samples, that is, that data points are independent of each other. This makes the theory suitable for domains where observations are (relatively) noise-free but not random, such as language learning and automated scientific discovery.
The fundamental concept of algorithmic learning theory is learning in the limit: as the number of data points increases, a learning algorithm should converge to a correct hypothesis on every possible data sequence consistent with the problem space. This is a non-probabilistic version of statistical consistency, which also requires convergence to a correct model in the limit, but allows a learner to fail on data sequences with probability measure 0.
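A toy illustration of learning in the limit (invented example, not from the article): a learner that identifies languages of the form L_k = {0, 1, ..., k} from a stream of examples. After finitely many examples, its conjecture stops changing and is correct:

```python
# The learner conjectures L_m where m is the largest element seen so far.
# On any complete presentation of L_k, the conjecture converges to k.
def learner(stream):
    conjecture = 0
    for x in stream:
        conjecture = max(conjecture, x)
        yield conjecture

examples = [3, 1, 4, 1, 5, 5, 2, 5]        # a presentation drawn from L_5
print(list(learner(examples)))              # [3, 3, 4, 4, 5, 5, 5, 5]
```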
Algorithmic learning theory investigates the learning power of Turing machines. Other frameworks consider a much more restricted class of learning algorithms than Turing machines, for example, learners that must compute hypotheses quickly, say in polynomial time. An example of such a framework is probably approximately correct (PAC) learning.
Learning in the limit
The concept was introduced in E. Mark Gold's seminal paper "Language identification in the limit". The objective of language identification is for a machine running one program to be capable of developing another program by which any given sentence can be tested to determin |
https://en.wikipedia.org/wiki/Chaos%20model | In computing, the chaos model is a structure of software development. Its creator, who used the pseudonym L.B.S. Raccoon, noted that project management models such as the spiral model and waterfall model, while good at managing schedules and staff, didn't provide methods to fix bugs or solve other technical problems. At the same time, programming methodologies, while effective at fixing bugs and solving technical problems, do not help in managing deadlines or responding to customer requests. The structure attempts to bridge this gap. Chaos theory was used as a tool to help understand these issues.
Software development life cycle
The chaos model notes that the phases of the life cycle apply to all levels of projects, from the whole project to individual lines of code.
The whole project must be defined, implemented, and integrated.
Systems must be defined, implemented, and integrated.
Modules must be defined, implemented, and integrated.
Functions must be defined, implemented, and integrated.
Lines of code are defined, implemented and integrated.
One important change in perspective is whether projects can be thought of as whole units, or must be thought of in pieces. Nobody writes tens of thousands of lines of code in one sitting. They write small pieces, one line at a time, verifying that the small pieces work. Then they build up from there. The behavior of a complex system emerges from the combined behavior of the smaller building blocks.
Chaos strategy
The chaos strategy is a strategy of software development based on the chaos model. The main rule is to always resolve the most important issue first.
An issue is an incomplete programming task.
The most important issue is a combination of big, urgent, and robust.
Big issues provide value to users as working functionality.
Urgent issues are timely in that they would otherwise hold up other work.
Robust issues are trusted and tested when resolved. Developers can then safely focus their attention elsewhere. |
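A toy sketch of the rule (the scoring and issue data are invented; the model itself does not prescribe numeric weights): repeatedly pick and resolve the issue whose combination of big, urgent, and robust is greatest.

```python
# Invented issue backlog; each attribute is scored 1 (low) to 3 (high).
issues = [
    {"name": "fix login bug",   "big": 2, "urgent": 3, "robust": 1},
    {"name": "add report page", "big": 3, "urgent": 1, "robust": 2},
    {"name": "write tests",     "big": 1, "urgent": 1, "robust": 3},
]

def importance(issue):
    # One possible way to combine the three factors.
    return issue["big"] + issue["urgent"] + issue["robust"]

while issues:
    top = max(issues, key=importance)   # always resolve most important first
    print("resolving:", top["name"])
    issues.remove(top)
```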
https://en.wikipedia.org/wiki/Index%20register | An index register in a computer's CPU is a processor register (or an assigned memory location) used for pointing to operand addresses during the run of a program. It is useful for stepping through strings and arrays. It can also be used for holding loop iterations and counters. In some architectures it is used for read/writing blocks of memory. Depending on the architecture it may be a dedicated index register or a general-purpose register. Some instruction sets allow more than one index register to be used; in that case additional instruction fields may specify which index registers to use.
Generally, the contents of an index register are added to (or in some cases subtracted from) an immediate address (one that can be part of the instruction itself or held in another register) to form the "effective" address of the actual data (operand). Special instructions are typically provided to test the index register and, if the test fails, increment the index register by an immediate constant and branch, typically to the start of the loop. While normally processors that allow an instruction to specify multiple index registers add the contents together, IBM had a line of computers in which the contents were ORed together.
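The sketch below simulates the idea on a made-up machine (no real ISA is being modeled): the effective address is the base address from the instruction plus the index register, which is stepped to walk an array:

```python
# Toy memory and a base address "encoded in the instruction".
memory = list(range(100, 132))    # 32 words of pretend memory
BASE = 8                          # immediate base address
X = 0                             # index register

total = 0
for _ in range(4):
    effective = BASE + X          # effective address = base + index register
    total += memory[effective]    # load the operand at the effective address
    X += 1                        # step the index, as a loop instruction would
print(total)                      # sums memory[8..11] = 108+109+110+111 = 438
```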
Index registers have proved useful for doing vector/array operations and in commercial data processing for navigating from field to field within records. In both uses, index registers substantially reduced the amount of memory used and increased execution speed.
History
In early computers without any form of indirect addressing, array operations had to be performed by modifying the instruction address, which required several additional program steps and used up more computer memory, a scarce resource in computer installations of the early era (as well as in early microcomputers two decades later).
Index registers, commonly known as B-lines in early British computers, as B-registers on some machines and as X-registers on others, were first used in the British M |
https://en.wikipedia.org/wiki/NEC | is a Japanese multinational information technology and electronics corporation, headquartered at the NEC Supertower in Minato, Tokyo, Japan. It provides IT and network solutions, including cloud computing, artificial intelligence (AI), Internet of Things (IoT) platform, and telecommunications equipment and software to business enterprises, communications services providers and to government agencies, and has also been the biggest PC vendor in Japan since the 1980s when it launched the PC-8000 series.
NEC was the world's fourth-largest PC manufacturer by 1990. Its semiconductors business unit was the world's largest semiconductor company by annual revenue from 1985 to 1992, the second largest in 1995, one of the top three in 2000, and one of the top 10 in 2006. NEC spun off its semiconductor business to Renesas Electronics and Elpida Memory. Once Japan's major electronics company, NEC has largely withdrawn from manufacturing since the beginning of the 21st century.
NEC was #463 on the 2017 Fortune 500 list. NEC is a member of the Sumitomo Group.
History
NEC
Kunihiko Iwadare and Takeshiro Maeda established Nippon Electric Limited Partnership on August 31, 1898, by using facilities that they had bought from Miyoshi Electrical Manufacturing Company. Iwadare acted as the representative partner; Maeda handled company sales. Western Electric, which had an interest in the Japanese phone market, was represented by Walter Tenney Carleton. Carleton was also responsible for the renovation of the Miyoshi facilities. It was agreed that the partnership would be reorganized as a joint-stock company when the treaty would allow it. On July 17, 1899, the revised treaty between Japan and the United States went into effect. Nippon Electric Company, Limited was organized the same day as Western Electric Company to become the first Japanese joint-venture with foreign capital. Iwadare was named managing director. Ernest Clement and Carleton were named as directors. Maeda and Mototeru F |
https://en.wikipedia.org/wiki/D-subminiature | The D-subminiature or D-sub is a common type of electrical connector. They are named for their characteristic D-shaped metal shield. When they were introduced, D-subs were among the smallest connectors used on computer systems.
Description, nomenclature, and variants
A D-sub contains two or more parallel rows of pins or sockets usually surrounded by a D-shaped metal shield that provides mechanical support, ensures correct orientation, and may screen against electromagnetic interference. D-sub connectors have gender: parts with pin contacts are called male connectors or plugs, while those with socket contacts are called female connectors or sockets. The socket's shield fits tightly inside the plug's shield. Panel-mounted connectors usually have #4-40 UNC (as designated with the Unified Thread Standard) jackscrews that accept screws on the cable end connector cover that are used for locking the connectors together and offering mechanical strain relief, and can be tightened with a 3/16" (or 5mm) hex socket. Occasionally the nuts may be found on a cable end connector if it is expected to connect to another cable end (see the male DE-9 pictured). When screened cables are used, the shields are connected to the overall screens of the cables. This creates an electrically continuous screen covering the whole cable and connector system.
The D-sub series of connectors was introduced by Cannon in 1952. Cannon's part-numbering system uses D as the prefix for the whole series, followed by one of A, B, C, D, or E denoting the shell size, followed by the number of pins or sockets, followed by either P (plug or pins) or S (socket) denoting the gender of the part. Each shell size usually (see below for exceptions) corresponds to a certain number of pins or sockets: A with 15, B with 25, C with 37, D with 50, and E with 9. For example, DB-25 denotes a D-sub with a 25-position shell size and a 25-position contact configuration. The contacts in each row of these connectors are spa |
https://en.wikipedia.org/wiki/General%20position | In algebraic geometry and computational geometry, general position is a notion of genericity for a set of points, or other geometric objects. It means the general case situation, as opposed to some more special or coincidental cases that are possible, which is referred to as special position. Its precise meaning differs in different settings.
For example, generically, two lines in the plane intersect in a single point (they are not parallel or coincident). One also says "two generic lines intersect in a point", which is formalized by the notion of a generic point. Similarly, three generic points in the plane are not collinear; if three points are collinear (even stronger, if two coincide), this is a degenerate case.
This notion is important in mathematics and its applications, because degenerate cases may require an exceptional treatment; for example, when stating general theorems or giving precise statements thereof, and when writing computer programs (see generic complexity).
General linear position
A set of points in a d-dimensional affine space (d-dimensional Euclidean space is a common example) is in general linear position (or just general position) if no k + 2 of them lie in a k-dimensional flat for k = 0, 1, ..., d − 1. These conditions contain considerable redundancy since, if the condition holds for some value k0 then it also must hold for all k with 0 ≤ k ≤ k0. Thus, for a set containing at least d + 1 points in d-dimensional affine space to be in general position, it suffices that no hyperplane contains more than d points – i.e. the points do not satisfy any more linear relations than they must.
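For the plane (d = 2) the condition reduces to: no two points coincide and no three are collinear. A short exact check (a sketch assuming integer coordinates):

```python
from itertools import combinations

# General linear position in the plane: no repeated points, no three collinear.
def in_general_position(points):
    if len(set(points)) != len(points):
        return False                      # two coincident points (k = 0 case)
    for (ax, ay), (bx, by), (cx, cy) in combinations(points, 3):
        # Cross product of (b - a) and (c - a); zero means collinear.
        if (bx - ax) * (cy - ay) - (by - ay) * (cx - ax) == 0:
            return False
    return True

print(in_general_position([(0, 0), (1, 0), (0, 1), (1, 2)]))   # True
print(in_general_position([(0, 0), (1, 1), (2, 2)]))           # False
```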
A set of at most d + 1 points in general linear position is also said to be affinely independent (this is the affine analog of linear independence of vectors, or more precisely of maximal rank), and d + 1 points in general linear position in affine d-space are an affine basis. See affine transformation for more.
Similarly, n vectors in an n-dimensional vector space are linearly independent if and only if the point |
https://en.wikipedia.org/wiki/MOBIDIC | Sylvania's MOBIDIC, short for "MOBIle DIgital Computer", was a transistorized computer intended to store, sort and route information as one part of the United States Army's Fieldata concept. Fieldata aimed to automate the distribution of battlefield data in any form, ensuring the delivery of reports to the proper recipients regardless of the physical form in which they were sent or received. MOBIDIC was mounted in the trailer of a semi-trailer truck, while a second trailer supplied power, allowing it to be moved about the battlefield. The Army referred to the system as the AN/MYK-1, or AN/MYK-2 for the dual-CPU version; Sylvania later offered a commercial version as the S 9400.
History
In early 1956 the Army Signal Corps at Fort Monmouth released a contract tender for the development of a van-mounted mobile computer as part of their Fieldata efforts. Fieldata envisioned a system where any sort of reports would be converted into text format and then sent electronically around an extended battlefield. At the recipient's end, it would be converted into an appropriate output, often on a line printer or similar device. By automating the process of routing the messages in the middle of the information flow, the Signal Corps was hoping to guarantee delivery and improve responsiveness. Fieldata can be thought of as a general purpose version of the system the US Air Force was developing in their SAGE system, which did the same task but limited to the field of information about aircraft locations and status.
The heart of Fieldata would be computer systems that would receive, store, prioritize and send the messages. The machines would have to be built using transistors in order to meet the size and power requirements, so in effect, the Army was paying to develop transistorized computers. In spite of this, most established players ignored the Army's calls for the small machine. Sylvania's director of development speculated that the Army's terminology in the contract may have hidden the appare |
https://en.wikipedia.org/wiki/Parasitology | Parasitology is the study of parasites, their hosts, and the relationship between them. As a biological discipline, the scope of parasitology is not determined by the organism or environment in question but by their way of life. This means it forms a synthesis of other disciplines, and draws on techniques from fields such as cell biology, bioinformatics, biochemistry, molecular biology, immunology, genetics, evolution and ecology.
Fields
The study of these diverse organisms means that the subject is often broken up into simpler, more focused units, which use common techniques, even if they are not studying the same organisms or diseases. Much research in parasitology falls somewhere between two or more of these definitions. In general, the study of prokaryotes falls under the field of bacteriology rather than parasitology.
Medical
The parasitologist F. E. G. Cox noted that "Humans are hosts to nearly 300 species of parasitic worms and over 70 species of protozoa, some derived from our primate ancestors and some acquired from the animals we have domesticated or come in contact with during our relatively short history on Earth".
One of the largest fields in parasitology, medical parasitology is the subject that deals with the parasites that infect humans, the diseases caused by them, the clinical picture, and the response generated by humans against them. It is also concerned with the various methods of their diagnosis, treatment and, finally, their prevention and control.
A parasite is an organism that lives on or within another organism, called the host.
These include organisms such as:
Plasmodium spp., the protozoan parasite which causes malaria. The four species infective to humans are P. falciparum, P. malariae, P. vivax and P. ovale.
Leishmania, unicellular organisms which cause leishmaniasis
Entamoeba and Giardia, which cause intestinal infections (dysentery and diarrhoea)
Multicellular organisms and intestinal worms (helminths) such as Schistosoma spp., Wuchereri |
https://en.wikipedia.org/wiki/Fieldata | FIELDATA (also written as Fieldata) was a pioneering computer project run by the US Army Signal Corps in the late 1950s that intended to create a single standard (as defined in MIL-STD-188A/B/C) for collecting and distributing battlefield information. In this respect it could be thought of as a generalization of the US Air Force's SAGE system that was being created at about the same time.
Unlike SAGE, FIELDATA was intended to be much larger in scope, allowing information to be gathered from any number of sources and forms. Much of the FIELDATA system was the specifications for the format the data would take, leading to a character set that would be a huge influence on ASCII a few years later. FIELDATA also specified the message formats and even the electrical standards for connecting FIELDATA-standard machines together.
Another part of the FIELDATA project was the design and construction of computers at several different scales, from data-input terminals at one end, to theatre-wide data processing centers at the other. Several FIELDATA-standard computers were built during the lifetime of the project, including the transportable MOBIDIC from Sylvania, and the BASICPAC and LOGICPAC from Philco. Another system, ARTOC, was intended to provide graphical output (in the form of photographic slides), but was never completed.
Because FIELDATA did not specify codes for interconnection and data transmission control, different systems (like "STANDARD FORM", "COMLOGNET Common language code", "SACCOMNET (465L) Control Code") used different control functions. Intercommunication between them was difficult.
FIELDATA is the original character set used internally in UNIVAC computers of the 1100 series, with each six-bit character contained in six sequential bits of the 36-bit word of that computer. The direct successor to the UNIVAC 1100 is the Unisys 2200 series computers, which used FIELDATA (although ASCII is now also common, with each character encoded in 1/4 of a word, or 9 bits).
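The packing itself is straightforward to sketch (the character codes below are made up; this only illustrates fitting six 6-bit characters into a 36-bit word):

```python
# Pack six 6-bit character codes into one 36-bit word and unpack them again.
def pack36(chars):
    assert len(chars) == 6 and all(0 <= c < 64 for c in chars)
    word = 0
    for c in chars:
        word = (word << 6) | c      # first character lands in the high bits
    return word

def unpack36(word):
    return [(word >> shift) & 0x3F for shift in range(30, -1, -6)]

w = pack36([1, 2, 3, 4, 5, 6])      # arbitrary example codes
assert w < 2**36
assert unpack36(w) == [1, 2, 3, 4, 5, 6]
```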
https://en.wikipedia.org/wiki/Modified%20frequency%20modulation | Modified frequency modulation (MFM) is a run-length limited (RLL) line code used to encode data on most floppy disks and some hard disk drives. It was first introduced on hard disks in 1970 with the IBM 3330 and then in floppy disk drives beginning with the IBM 53FD in 1976.
MFM is a modification of the original frequency modulation (FM) encoding, designed specifically for use with magnetic storage. MFM allowed devices to double the speed at which data was written to the media, as the code guaranteed at most one polarity change per encoded data bit. For this reason, MFM disks are typically known as "double density" while the earlier FM became known as "single density".
MFM is used with a data rate of 250–500 kbit/s (500–1000 kbit/s encoded) on industry-standard -inch and -inch ordinary and high-density floppy diskettes. MFM was also used in early hard disk designs, before the advent of more efficient types of RLL codes. Outside of niche applications, MFM encoding is obsolete in magnetic recording.
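The encoding rule can be stated in a few lines (a sketch; the bit conventions are assumptions: the output is one clock/data bit pair per input bit, and a 1 in either position means a flux transition): write a transition for every data "1", and insert a clock transition only between two consecutive data "0"s.

```python
# MFM: clock bit is 1 only when both the previous and current data bits are 0.
def mfm_encode(bits):
    encoded = []
    prev = 0                 # assume a preceding 0 data bit
    for d in bits:
        clock = 1 if (prev == 0 and d == 0) else 0
        encoded += [clock, d]
        prev = d
    return encoded

print(mfm_encode([1, 0, 0, 0, 1, 1]))
# -> [0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1]: never two transitions per data bit
```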
Magnetic storage
Magnetic storage devices, like hard drives and magnetic tape, store data not as absolute values, but in the changes in polarity. This is because a changing magnetic field will induce an electrical current in a nearby wire, and vice versa. Sending a series of changing currents to the read/write head while the media moves past it produces a pattern of magnetic polarities on the media, with changes wherever the data was a "1". The exact nature of the media determines how many of these changes can occur within a given surface area, and when this is combined with the nominal speed of movement, it produces the maximum data rate for that system.
Disk drives are subject to a variety of mechanical and materials effects that cause the original pattern of data to "jitter" in time. If a long string of "0"s is written to disk, there is nothing to indicate which bit a following "1" might belong to; due to the effects of jitter it may become misplaced in time. Re-aligning the |
https://en.wikipedia.org/wiki/Intersection%20number | In mathematics, and especially in algebraic geometry, the intersection number generalizes the intuitive notion of counting the number of times two curves intersect to higher dimensions, multiple (more than 2) curves, and accounting properly for tangency. One needs a definition of intersection number in order to state results like Bézout's theorem.
The intersection number is obvious in certain cases, such as the intersection of the x- and y-axes in a plane, which should be one. The complexity enters when calculating intersections at points of tangency, and intersections which are not just points, but have higher dimension. For example, if a plane is tangent to a surface along a line, the intersection number along the line should be at least two. These questions are discussed systematically in intersection theory.
Definition for Riemann surfaces
Let X be a Riemann surface. Then the intersection number of two closed curves on X has a simple definition in terms of an integral. For every closed curve c on X (i.e., smooth function c : S¹ → X), we can associate a differential form ηc of compact support, the Poincaré dual of c, with the property that integrals along c can be calculated by integrals over X:
∫c α = ∬X α ∧ ⋆ηc, for every closed (1-)differential α on X,
where ∧ is the wedge product of differentials, and ⋆ is the Hodge star. Then the intersection number of two closed curves, a and b, on X is defined as
a · b := ∬X ηa ∧ ⋆ηb.
The ηc have an intuitive definition as follows. They are a sort of Dirac delta along the curve c, accomplished by taking the differential of a unit step function that drops from 1 to 0 across c. More formally, we begin by defining, for a simple closed curve c on X, a function fc by letting Ω be a small strip around c in the shape of an annulus. Name the left and right parts of Ω as Ω⁺ and Ω⁻. Then take a smaller sub-strip around c, Ω₀, with left and right parts Ω₀⁺ and Ω₀⁻. Then define fc by
fc(x) = 1 for x ∈ Ω₀⁻, fc(x) = 0 for x outside Ω⁻, with fc interpolating smoothly between 1 and 0 on Ω⁻ ∖ Ω₀⁻; the dual form is then ηc = dfc (defined away from c and extended by zero across c).
The definition is then expanded to arbitrary closed curves. Every closed curve c on X is homologous to ∑i ki ci for |
https://en.wikipedia.org/wiki/Lilith%20%28computer%29 | The DISER Lilith is a custom built workstation computer based on the Advanced Micro Devices (AMD) 2901 bit slicing processor, created by a group led by Niklaus Wirth at ETH Zürich. The project began in 1977, and by 1984 several hundred workstations were in use. It has a high resolution full page portrait oriented cathode ray tube display, a mouse, a laser printer interface, and a computer networking interface. Its software is written fully in Modula-2 and includes a relational database program named Lidas.
The Lilith processor architecture is a stack machine. Citing from Sven Erik Knudsen's contribution to "The Art of Simplicity": "Lilith's clock speed was around 7 MHz and enabled Lilith to execute between 1 and 2 million instructions (called M-code) per second. (...) Initially, the main memory was planned to have 65,536 16-bit words memory, but soon after its first version, it was enlarged to twice that capacity. For regular Modula-2 programs however, only the initial 65,536 words were usable for storage of variables."
History
The development of Lilith was influenced by the Xerox Alto from the Xerox PARC (1973) where Niklaus Wirth spent a sabbatical from 1976 to 1977. Unable to bring back one of the Alto systems to Europe, Wirth decided to build a new system from scratch between 1978 and 1980, selling it under the company name DISER (Data Image Sound Processor and Emitter Receiver System). In 1985, he had a second sabbatical leave to PARC, which led to the design of the Oberon System. Ceres, the follow-up to Lilith, was released in 1987.
Operating system
The Lilith operating system (OS), named Medos-2, was developed at ETH Zurich, by Svend Erik Knudsen with advice from Wirth. It is a single user, object-oriented operating system built from modules of Modula-2.
Its design influenced the design of the OS Excelsior, developed for the Soviet Kronos workstation (see below) by the Kronos Research Group (KRG).
Soviet variants
From 1986 into the early 1990s, Soviet Unio |
https://en.wikipedia.org/wiki/Ramification%20%28mathematics%29 | In geometry, ramification is 'branching out', in the way that the square root function, for complex numbers, can be seen to have two branches differing in sign. The term is also used from the opposite perspective (branches coming together) as when a covering map degenerates at a point of a space, with some collapsing of the fibers of the mapping.
In complex analysis
In complex analysis, the basic model can be taken as the z → zn mapping in the complex plane, near z = 0. This is the standard local picture in Riemann surface theory, of ramification of order n. It occurs for example in the Riemann–Hurwitz formula for the effect of mappings on the genus.
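For reference (a standard statement of the formula named above, not quoted from this article), for a degree-n holomorphic map f : X → Y of compact Riemann surfaces with ramification index e_P at the point P:

```latex
\chi(X) \;=\; n \cdot \chi(Y) \;-\; \sum_{P \in X} (e_P - 1)
```

In the local model z → zⁿ, a single point with e = n accounts for the n − 1 "lost" sheets described in the next section.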
In algebraic topology
In a covering map the Euler–Poincaré characteristic should multiply by the number of sheets; ramification can therefore be detected by some dropping from that. The z → zn mapping shows this as a local pattern: if we exclude 0, looking at 0 < |z| < 1 say, we have (from the homotopy point of view) the circle mapped to itself by the n-th power map (Euler–Poincaré characteristic 0), but with the whole disk the Euler–Poincaré characteristic is 1, n – 1 being the 'lost' points as the n sheets come together at z = 0.
In geometric terms, ramification is something that happens in codimension two (like knot theory, and monodromy); since real codimension two is complex codimension one, the local complex example sets the pattern for higher-dimensional complex manifolds. In complex analysis, sheets can't simply fold over along a line (one variable), or codimension one subspace in the general case. The ramification set (branch locus on the base, double point set above) will be two real dimensions lower than the ambient manifold, and so will not separate it into two 'sides', locally―there will be paths that trace round the branch locus, just as in the example. In algebraic geometry over any field, by analogy, it also happens in algebraic codimension one.
In algebraic number theory
In algebraic extensions |
https://en.wikipedia.org/wiki/Mihrab | Mihrab (, , pl. ) is a niche in the wall of a mosque that indicates the qibla, the direction of the Kaaba in Mecca towards which Muslims should face when praying. The wall in which a mihrab appears is thus the "qibla wall".
The minbar, which is the raised platform from which an imam (leader of prayer) addresses the congregation, is located to the right of the mihrab.
Etymology
The origin of the word miḥrāb is complicated and multiple explanations have been proposed by different sources and scholars. It may come from Old South Arabian (possibly Sabaic) mḥrb meaning a certain part of a palace, as well as "part of a temple where tḥrb (a certain type of visions) is obtained," from the root word ḥrb "to perform a certain religious ritual (which is compared to combat or fighting and described as an overnight retreat) in the mḥrb of the temple." It may also possibly be related to Ethiopic məkʷrab "temple, sanctuary," whose equivalent in Sabaic is mkrb of the same meaning, from the root word krb "to dedicate" (cognate with Akkadian karābu "to bless" and related to Hebrew kerūḇ "cherub (either of the heavenly creatures that bound the Ark in the inner sanctuary)").
Arab lexicographers traditionally derive the word from the Arabic root (Ḥ-R-B) relating to "war, fighting or anger" (which, though cognate with the South Arabian root, carries no relation to religious rituals), thus leading some to interpret it to mean a "fortress", or "place of battle (with Satan)", the latter due to mihrabs being private prayer chambers. The latter interpretation, though, bears similarity to the nature of the ḥrb ritual.
The word mihrab originally had a non-religious meaning and simply denoted a special room in a house; a throne room in a palace, for example. The Fath al-Bari (p. 458), on the authority of others, suggests the mihrab is "the most honorable location of kings" and "the master of locations, the front and the most honorable." The Mosques in Islam (p. 13), |
https://en.wikipedia.org/wiki/Substitution%E2%80%93permutation%20network | In cryptography, an SP-network, or substitution–permutation network (SPN), is a series of linked mathematical operations used in block cipher algorithms such as AES (Rijndael), 3-Way, Kalyna, Kuznyechik, PRESENT, SAFER, SHARK, and Square.
Such a network takes a block of the plaintext and the key as inputs, and applies several alternating rounds or layers of substitution boxes (S-boxes) and permutation boxes (P-boxes) to produce the ciphertext block. The S-boxes and P-boxes transform blocks of input bits into blocks of output bits. It is common for these transformations to be operations that are efficient to perform in hardware, such as exclusive or (XOR) and bitwise rotation. The key is introduced in each round, usually in the form of "round keys" derived from it. (In some designs, the S-boxes themselves depend on the key.)
Decryption is done by simply reversing the process (using the inverses of the S-boxes and P-boxes and applying the round keys in reversed order).
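To make the round structure concrete, here is a toy SPN in Python (the 16-bit block size, the S-box table, and the bit permutation are all invented for illustration and are not from any real cipher):

```python
# Toy 16-bit SPN: each round XORs a round key, applies four 4-bit S-boxes,
# then permutes the 16 bits; a final key is XORed for output whitening.

SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]
INV_SBOX = [SBOX.index(v) for v in range(16)]
PBOX = [(4 * i) % 15 if i < 15 else 15 for i in range(16)]  # bit i -> PBOX[i]
INV_PBOX = [PBOX.index(i) for i in range(16)]

def substitute(block, sbox):
    return sum(sbox[(block >> (4 * n)) & 0xF] << (4 * n) for n in range(4))

def permute(block, pbox):
    out = 0
    for i in range(16):
        out |= ((block >> i) & 1) << pbox[i]
    return out

def encrypt(block, round_keys):
    for k in round_keys[:-1]:
        block = permute(substitute(block ^ k, SBOX), PBOX)
    return block ^ round_keys[-1]

def decrypt(block, round_keys):
    block ^= round_keys[-1]
    for k in reversed(round_keys[:-1]):   # inverse layers, keys in reverse
        block = substitute(permute(block, INV_PBOX), INV_SBOX) ^ k
    return block

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
assert decrypt(encrypt(0xCAFE, keys), keys) == 0xCAFE
```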
Components
An S-box substitutes a small block of bits (the input of the S-box) by another block of bits (the output of the S-box). This substitution should be one-to-one, to ensure invertibility (hence decryption). In particular, the length of the output should be the same as the length of the input (a typical example is an S-box with 4 input and 4 output bits), which is different from S-boxes in general that could also change the length, as in the Data Encryption Standard (DES), for example. An S-box is usually not simply a permutation of the bits. Rather, a good S-box will have the property that changing one input bit will change about half of the output bits (an avalanche effect). It will also have the property that each output bit will depend on every input bit.
A P-box is a permutation of all the bits: it takes the outputs of all the S-boxes of one round, permutes the bits, and feeds them into the S-boxes of the next round. A good P-box has the property that the output bits of any S-box are distribute |
https://en.wikipedia.org/wiki/SableVM | SableVM was a clean room implementation of a Java bytecode interpreter implementing the Java virtual machine (VM) specification, second edition. SableVM was designed to be a robust, extremely portable, efficient, and fully specifications-compliant (JVM spec, Java Native Interface, Invocation interface, Debug interface, etc.) Java Virtual Machine that would be easy to maintain and to extend. It is no longer maintained.
The implementation was a part of the effort in the early 2000s to break the Java ecosystem free from Sun Microsystems's control.
Overview
The core engine was an interpreter that used ground-breaking techniques to deliver performance approaching that of a "naive" just-in-time (JIT) compiler, while retaining the software engineering advantages of interpreters: portability, maintainability and simplicity. This simplicity made SableVM's source code very accessible and easy to understand for new users and programmers.
SableVM was free software, licensed under the GNU Lesser General Public License (LGPL). It also made use of GNU Classpath (copyrighted by the FSF), which is licensed under the GNU General Public License with linking exception.
SableVM was the first open-source virtual machine for Java to include support for JVMDI (Java Virtual Machine Debugging Interface) and JDWP (Java Debug Wire Protocol). These standard Java debugging interfaces are used, for example, by Eclipse to provide a rich and user-friendly Java development environment.
Java Intermediate Language
Some versions of SableVM used Java Intermediate Language, an intermediate language (a subset of XML) representing the type structure of a Java program. The language was proposed by the SableVM team at McGill University in January 2002 to aid the analysis of Java programs, with the goals of scalability and good performance. The language has not been widely adopted.
Consider the following piece of Java code.
public MyClass implements MyInterface extends |
https://en.wikipedia.org/wiki/Pin%20compatibility | In electronics, pin-compatible devices are electronic components, generally integrated circuits or expansion cards, sharing a common footprint and with the same functions assigned or usable on the same pins. Pin compatibility is a property desired by systems integrators as it allows a product to be updated without redesigning printed circuit boards, which can reduce costs and decrease time to market.
Although devices which are pin-compatible share a common footprint, they are not necessarily electrically or thermally compatible. As a result, manufacturers often specify devices as being either pin-to-pin or drop-in compatible. Pin-compatible devices are generally produced to allow upgrading within a single product line, to allow end-of-life devices to be replaced with newer equivalents, or to compete with the equivalent products of other manufacturers.
Pin-to-pin compatibility
Pin-to-pin compatible devices share an assignment of functions to pins, but may have differing electrical characteristics (supply voltages, or oscillator frequencies) or thermal characteristics (TDPs, reflow curves, or temperature tolerances). As a result, their use in a system may require that portions of the system, such as its power delivery subsystem, be adapted to fit the new component.
A common example of pin-to-pin compatible devices which may not be electrically compatible are the 7400 series integrated circuits. The 7400 series devices have been produced on a number of different manufacturing processes, but have retained the same pinouts throughout. For example, all 7405 devices provide six NOT gates (or inverters) but may have incompatible supply voltage tolerances.
7405 – Standard TTL, 4.75–5.25 V.
74C05 – CMOS, 4–15 V.
74LV05 – Low-voltage CMOS, 2.0–5.5 V.
In other cases, particularly with computers, devices may be pin-to-pin compatible but made otherwise incompatible as a result of market segmentation. For example, Intel Skylake desktop-class Core and Xeon E3v5 processor |
https://en.wikipedia.org/wiki/Plasma%20diagnostics | Plasma diagnostics are a pool of methods, instruments, and experimental techniques used to measure properties of a plasma, such as plasma components' density, distribution function over energy (temperature), and their spatial profiles and dynamics, enabling plasma parameters to be derived.
Invasive probe methods
Ball-pen probe
A ball-pen probe is a novel technique used to measure the plasma potential directly in magnetized plasmas. The probe was invented by Jiří Adámek at the Institute of Plasma Physics AS CR in 2004. The ball-pen probe balances the electron saturation current to the same magnitude as the ion saturation current; in this case, its floating potential becomes identical to the plasma potential. This goal is attained by a ceramic shield, which screens off an adjustable part of the electron current from the probe collector due to the much smaller gyro-radius of the electrons. The electron temperature is proportional to the difference between the ball-pen probe potential (plasma potential) and the Langmuir probe potential (floating potential), so the electron temperature can be obtained directly with high temporal resolution and without an additional power supply.
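In standard probe-theory notation, the relation behind the last statement reads as follows (a sketch; the coefficient α depends on the plasma and on probe geometry):

```latex
% Floating potential of a conventional Langmuir probe:
V_{fl} = \Phi_{pl} - \alpha \, \frac{k_B T_e}{e},
\qquad \alpha = \ln \frac{I_{e,\mathrm{sat}}}{I_{i,\mathrm{sat}}}
% The ball-pen probe screens electrons until I_{e,sat} \approx I_{i,sat},
% so \alpha \to 0 and V_{fl} \to \Phi_{pl}; combining both probes then gives
% k_B T_e / e = (\Phi_{pl} - V_{fl}) / \alpha.
```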
Faraday cup
The conventional Faraday cup is applied for measurements of ion (or electron) flows from plasma boundaries and for mass spectrometry.
Langmuir probe
Measurements with electric probes, called Langmuir probes, are the oldest and most often used procedures for low-temperature plasmas. The method was developed by Irving Langmuir and his co-workers in the 1920s, and has since been further developed in order to extend its applicability to more general conditions than those presumed by Langmuir. Langmuir probe measurements are based on the estimation of current versus voltage characteristics of a circuit consisting of two metallic electrodes that are both immersed in the plasma under study. Two cases are of interest:
(a) The surface areas of the two electrodes differ by several orders of magnitude. This is kno |
https://en.wikipedia.org/wiki/Inverse%20scattering%20problem | In mathematics and physics, the inverse scattering problem is the problem of determining characteristics of an object, based on data of how it scatters incoming radiation or particles. It is the inverse problem to the direct scattering problem, which is to determine how radiation or particles are scattered based on the properties of the scatterer.
Soliton equations are a class of partial differential equations which can be studied and solved by a method called the inverse scattering transform, which reduces the nonlinear PDEs to a linear inverse scattering problem. The nonlinear Schrödinger equation, the Korteweg–de Vries equation and the KP equation are examples of soliton equations. In one space dimension the inverse scattering problem is equivalent to a Riemann–Hilbert problem. Since its early statement for radiolocation, many applications have been found for inverse scattering techniques, including echolocation, geophysical surveying, nondestructive testing, medical imaging, and quantum field theory. |
https://en.wikipedia.org/wiki/%C3%89mile%20Reynaud | Charles-Émile Reynaud (8 December 1844 – 9 January 1918) was a French inventor, responsible for the praxinoscope (an animation device patented in 1877 that improved on the zoetrope) and for the first projected animated films. His Pantomimes Lumineuses
premiered on 28 October 1892 in Paris. His Théâtre Optique film system, patented in 1888, is also notable as the first known instance of film perforations being used. The performances predated Auguste and Louis Lumière's first paid public screening of the cinematographe on 26 December 1895, often seen as the birth of cinema.
Early life
Charles-Émile Reynaud was born in Montreuil, Seine-Saint-Denis, on 8 December 1844, to Brutus Reynaud, an engineer who moved to Paris from Le Puy-en-Velay in 1842, and Marie-Caroline Bellanger, a former schoolteacher who educated Émile at home. Marie-Caroline was trained in watercolor painting by Pierre-Joseph Redouté
and taught her son drawing and painting techniques. By 1862 he started his own career as a photographer in Paris.
Reynaud constructed steam engines at age 13. He worked as an apprentice for Antoine Samuel Adam-Salomon. At age 19 he met François-Napoléon-Marie Moigno at one of Moigno's lectures and became his assistant. Brutus died in 1865, and the Reynaud family moved to Le Puy-en-Velay. Reynaud was taught Latin, Greek, physics, chemistry, mechanics, and natural sciences by his uncle, a doctor in the area. He was a nurse during the Franco-Prussian War.
Career
Reynaud started holding free magic lantern shows similar to Moigno's in December 1873. He created the praxinoscope out of a cookie box after reading a series of 1876 articles in La Nature about optical illusion devices. He patented it in 1877, and received an honourable mention at the 1878 Exposition Universelle. He started production on the device and, after it proved a financial success following its initial offering at Le Bon Marché stores, was able to quit his teaching job. Ernest Meissonier displayed Eadwe |
https://en.wikipedia.org/wiki/Identity-based%20encryption | ID-based encryption, or identity-based encryption (IBE), is an important primitive of ID-based cryptography. As such it is a type of public-key encryption in which the public key of a user is some unique information about the identity of the user (e.g. a user's email address). This means that a sender who has access to the public parameters of the system can encrypt a message using e.g. the text-value of the receiver's name or email address as a key. The receiver obtains its decryption key from a central authority, which needs to be trusted as it generates secret keys for every user.
ID-based encryption was proposed by Adi Shamir in 1984. However, he was only able to give an instantiation of identity-based signatures. Identity-based encryption remained an open problem for many years.
The pairing-based Boneh–Franklin scheme and Cocks's encryption scheme based on quadratic residues both solved the IBE problem in 2001.
Usage
Identity-based systems allow any party to generate a public key from a known identity value such as an ASCII string. A trusted third party, called the Private Key Generator (PKG), generates the corresponding private keys. To operate, the PKG first publishes a master public key, and retains the corresponding master private key (referred to as master key). Given the master public key, any party can compute a public key corresponding to the identity by combining the master public key with the identity value. To obtain a corresponding private key, the party authorized to use the identity ID contacts the PKG, which uses the master private key to generate the private key for identity ID.
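The division of labour can be summarized as an interface sketch (Python; the names are hypothetical and the bodies are stubs, since a real system would instantiate them with a pairing-based scheme such as Boneh–Franklin):

```python
# Who holds which key in an IBE system (structural sketch only, no crypto).
from dataclasses import dataclass

@dataclass
class MasterKeys:
    mpk: bytes  # master public key: published once by the PKG
    msk: bytes  # master private key: retained only by the PKG

def setup() -> MasterKeys:
    """Run once by the trusted Private Key Generator (PKG)."""
    raise NotImplementedError("instantiate with a concrete IBE scheme")

def extract(msk: bytes, identity: str) -> bytes:
    """PKG only: derive the private key for an authenticated identity."""
    raise NotImplementedError

def encrypt(mpk: bytes, identity: str, message: bytes) -> bytes:
    """Any sender: needs only the public parameters and an identity string
    such as an email address; no per-user key distribution."""
    raise NotImplementedError

def decrypt(private_key: bytes, ciphertext: bytes) -> bytes:
    """Receiver: uses the key obtained from extract() for its identity."""
    raise NotImplementedError
```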
As a result, parties may encrypt messages (or verify signatures) with no prior distribution of keys between individual participants. This is extremely useful in cases where pre-distribution of authenticated keys is inconvenient or infeasible due to technical restraints. However, to decrypt or sign messages, the authorized user must obtain the appropriate priva |
https://en.wikipedia.org/wiki/Irreducible%20representation | In mathematics, specifically in the representation theory of groups and algebras, an irreducible representation or irrep (ρ, V) of an algebraic structure A is a nonzero representation that has no proper nontrivial subrepresentation (ρ|_W, W), with W ⊂ V closed under the action of {ρ(a) : a ∈ A}.
Every finite-dimensional unitary representation on a Hilbert space is the direct sum of irreducible representations. Irreducible representations are always indecomposable (i.e. cannot be decomposed further into a direct sum of representations), but the converse may not hold, e.g. the two-dimensional representation of the real numbers acting by upper triangular unipotent matrices is indecomposable but reducible.
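In matrix form, that indecomposable-but-reducible example reads:

```latex
\rho : \mathbb{R} \to GL_2(\mathbb{R}), \qquad
\rho(t) = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}.
% The line spanned by (1, 0) is an invariant subspace, so \rho is reducible,
% but it admits no invariant complement, so \rho is indecomposable.
```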
History
Group representation theory was generalized by Richard Brauer from the 1940s to give modular representation theory, in which the matrix operators act on a vector space over a field of arbitrary characteristic, rather than a vector space over the field of real numbers or over the field of complex numbers. The structure analogous to an irreducible representation in the resulting theory is a simple module.
Overview
Let ρ be a representation, i.e. a homomorphism ρ : G → GL(V) of a group G, where V is a vector space over a field F. If we pick a basis B for V, ρ can be thought of as a function (a homomorphism) from a group into a set of invertible matrices, and in this context is called a matrix representation. However, it simplifies things greatly if we think of the space V without a basis.
A linear subspace W ⊂ V is called G-invariant if ρ(g)w ∈ W for all g ∈ G and all w ∈ W. The co-restriction of ρ to the general linear group GL(W) of a G-invariant subspace W is known as a subrepresentation. A representation ρ is said to be irreducible if it has only trivial subrepresentations (all representations can form a subrepresentation with the trivial G-invariant subspaces, e.g. the whole vector space V, and {0}). If there is a proper nontrivial invariant subspace, ρ is said to be reducible.
Notation and terminology of group representations
Group elements c |
https://en.wikipedia.org/wiki/Eisenstein%27s%20criterion | In mathematics, Eisenstein's criterion gives a sufficient condition for a polynomial with integer coefficients to be irreducible over the rational numbers – that is, for it to not be factorizable into the product of non-constant polynomials with rational coefficients.
This criterion is not applicable to all polynomials with integer coefficients that are irreducible over the rational numbers, but it does allow in certain important cases for irreducibility to be proved with very little effort. It may apply either directly or after transformation of the original polynomial.
This criterion is named after Gotthold Eisenstein. In the early 20th century, it was also known as the Schönemann–Eisenstein theorem because Theodor Schönemann was the first to publish it.
Criterion
Suppose we have the following polynomial with integer coefficients:
Q(x) = a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0.
If there exists a prime number p such that the following three conditions all apply:
p divides each a_i for 0 ≤ i < n,
p does not divide a_n, and
p^2 does not divide a_0,
then Q is irreducible over the rational numbers. It will also be irreducible over the integers, unless all its coefficients have a nontrivial factor in common (in which case Q as an integer polynomial will have some prime number, necessarily distinct from p, as an irreducible factor). The latter possibility can be avoided by first making Q primitive, by dividing it by the greatest common divisor of its coefficients (the content of Q). This division does not change whether Q is reducible or not over the rational numbers (see Primitive part–content factorization for details), and will not invalidate the hypotheses of the criterion for p (on the contrary it could make the criterion hold for some prime, even if it did not before the division).
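The three conditions are mechanical to check; a small Python sketch (the helper and the test polynomial x^2 + 2x + 2 with p = 2 are illustrative choices, not taken from the examples below):

```python
# Eisenstein's three conditions for coeffs = [a_0, a_1, ..., a_n] at prime p.
def eisenstein(coeffs, p):
    *low, a_n = coeffs
    return (all(a % p == 0 for a in low)   # p divides a_i for i < n
            and a_n % p != 0               # p does not divide a_n
            and low[0] % (p * p) != 0)     # p^2 does not divide a_0

# x^2 + 2x + 2 satisfies the criterion at p = 2, hence irreducible over Q:
print(eisenstein([2, 2, 1], 2))   # True
```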
Examples
Eisenstein's criterion may apply either directly (i.e., using the original polynomial) or after transformation of the original polynomial.
Direct (without transformation)
Consider the polynomial Q(x) = 3x^4 + 15x^2 + 10. In order for Eisenstein's criterion to apply for |
https://en.wikipedia.org/wiki/Homothety | In mathematics, a homothety (or homothecy, or homogeneous dilation) is a transformation of an affine space determined by a point S called its center and a nonzero number k called its ratio, which sends a point X to a point X′ by the rule
X′ = S + k(X − S)
for a fixed number k ≠ 0.
Using position vectors:
x′ = s + k(x − s).
In the case of S = O (the origin):
x′ = kx,
which is a uniform scaling and shows the meaning of special choices for k:
for k = 1 one gets the identity mapping,
for k = −1 one gets the reflection at the center,
for ratio 1/k one gets the inverse mapping of the homothety defined by k.
In Euclidean geometry homotheties are the similarities that fix a point and either preserve (if k > 0) or reverse (if k < 0) the direction of all vectors. Together with the translations, all homotheties of an affine (or Euclidean) space form a group, the group of dilations or homothety-translations. These are precisely the affine transformations with the property that the image of every line g is a line parallel to g.
In projective geometry, a homothetic transformation is a similarity transformation (i.e., fixes a given elliptic involution) that leaves the line at infinity pointwise invariant.
In Euclidean geometry, a homothety of ratio k multiplies distances between points by |k|, areas by k^2 and volumes by |k|^3. Here k is the ratio of magnification or dilation factor or scale factor or similitude ratio. Such a transformation can be called an enlargement if the scale factor exceeds 1. The above-mentioned fixed point S is called homothetic center or center of similarity or center of similitude.
The term, coined by French mathematician Michel Chasles, is derived from two Greek elements: the prefix homo- (ὁμός), meaning "similar", and thesis (θέσις), meaning "position". It describes the relationship between two figures of the same shape and orientation. For example, two Russian dolls looking in the same direction can be considered homothetic.
Homotheties are used to scale the contents of computer screens; for example, on smartphones, notebooks, and laptops.
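A minimal sketch of that use in code (Python; names are illustrative). The center is the fixed point, e.g. the point under the user's fingers in a pinch-zoom gesture:

```python
# Homothety with center s and ratio k, applied to a 2D point:
# x' = s + k * (x - s).
def homothety(center, k, point):
    sx, sy = center
    px, py = point
    return (sx + k * (px - sx), sy + k * (py - sy))

assert homothety((0, 0), 2, (1, 3)) == (2, 6)    # k = 2 doubles distances
assert homothety((1, 1), -1, (2, 2)) == (0, 0)   # k = -1 reflects through s
```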
Properties
The following properties h |
https://en.wikipedia.org/wiki/Spectral%20radius | In mathematics, the spectral radius of a square matrix is the maximum of the absolute values of its eigenvalues. More generally, the spectral radius of a bounded linear operator is the supremum of the absolute values of the elements of its spectrum. The spectral radius is often denoted by ρ(·).
Definition
Matrices
Let λ₁, ..., λₙ be the eigenvalues of a matrix A ∈ ℂⁿˣⁿ. The spectral radius of A is defined as
ρ(A) = max{|λ₁|, ..., |λₙ|}.
The spectral radius can be thought of as an infimum of all norms of a matrix. Indeed, on the one hand, ρ(A) ≤ ‖A‖ for every natural matrix norm ‖·‖; and on the other hand, Gelfand's formula states that ρ(A) = lim_{k→∞} ‖A^k‖^(1/k). Both of these results are shown below.
However, the spectral radius does not necessarily satisfy ‖Av‖ ≤ ρ(A)‖v‖ for arbitrary vectors v. To see why, let ε > 0 be arbitrary and consider the matrix
C_ε = [[0, ε⁻¹], [ε, 0]].
The characteristic polynomial of C_ε is λ² − 1, so its eigenvalues are ±1 and thus ρ(C_ε) = 1. However, ‖C_ε e₂‖ = ε⁻¹‖e₂‖. As a result, ‖C_ε‖ ≥ ε⁻¹, which exceeds ρ(C_ε) = 1 whenever ε < 1.
As an illustration of Gelfand's formula, note that ‖C_ε^k‖^(1/k) → 1 as k → ∞, since C_ε^k = I if k is even and C_ε^k = C_ε if k is odd.
A special case in which ‖Av‖ ≤ ρ(A)‖v‖ for all v is when A is a Hermitian matrix and ‖·‖ is the Euclidean norm. This is because any Hermitian matrix is diagonalizable by a unitary matrix, and unitary matrices preserve vector length. As a result, ‖Av‖ = ‖UDU*v‖ = ‖DU*v‖ ≤ ρ(A)‖U*v‖ = ρ(A)‖v‖.
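A quick numerical illustration with NumPy, using the matrix C_ε from above with ε = 0.1 (printed values are approximate):

```python
import numpy as np

eps = 0.1
C = np.array([[0.0, 1.0 / eps], [eps, 0.0]])

print(max(abs(np.linalg.eigvals(C))))   # spectral radius: 1.0
print(np.linalg.norm(C, 2))             # operator 2-norm: 10.0, far above rho

# Gelfand's formula: ||C^k||^(1/k) -> rho(C) = 1 as k grows
for k in (1, 5, 25, 125):
    print(k, np.linalg.norm(np.linalg.matrix_power(C, k), 2) ** (1.0 / k))
```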
Bounded linear operators
In the context of a bounded linear operator A on a Banach space, the eigenvalues need to be replaced with the elements of the spectrum of the operator, i.e. the values λ for which A − λI is not bijective. We denote the spectrum by
σ(A) = {λ ∈ ℂ : A − λI is not bijective}.
The spectral radius is then defined as the supremum of the magnitudes of the elements of the spectrum:
ρ(A) = sup{|λ| : λ ∈ σ(A)}.
Gelfand's formula, also known as the spectral radius formula, also holds for bounded linear operators: letting ‖·‖ denote the operator norm, we have
ρ(A) = lim_{k→∞} ‖A^k‖^(1/k).
A bounded operator (on a complex Hilbert space) is called a spectraloid operator if its spectral radius coincides with its numerical radius. An example of such an operator is a normal operator.
Graphs
The spectral radius of a finite graph is defined to be the spectral radius of its adjacency matrix.
This definit |
https://en.wikipedia.org/wiki/Lines%20of%20Torres%20Vedras | The Lines of Torres Vedras were lines of forts and other military defences built in secrecy to defend Lisbon during the Peninsular War. Named after the nearby town of Torres Vedras, they were ordered by Arthur Wellesley, Viscount Wellington, constructed by Colonel Richard Fletcher and his Portuguese workers between November 1809 and September 1810, and used to stop Marshal Masséna's 1810 offensive. The Lines were declared a National Heritage by the Portuguese Government in March 2019.
Development
At the beginning of the Peninsular War (1807–14), France and Spain signed the Treaty of Fontainebleau in October 1807. This provided for the invasion and subsequent division of Portuguese territory into three kingdoms. Subsequently, French troops under the command of General Junot entered Portugal, which requested support from the British. In July 1808 troops commanded by Sir Arthur Wellesley, the later Duke of Wellington, landed in Portugal and defeated French troops at the Battles of Roliça and Vimeiro. This forced Junot to negotiate the Convention of Cintra, which led to the evacuation of the French army from Portugal. In March 1809, Marshal Soult led a new French expedition that advanced south to the city of Porto before being repulsed by Portuguese-British troops and forced to withdraw. After this retreat, Wellesley's forces advanced into Spain to join 33,000 Spanish troops under General Cuesta. At Talavera, southwest of Madrid, they encountered and defeated 46,000 French soldiers under Marshal Claude Victor. After the Battle of Talavera, Wellington realised that he was seriously outnumbered by the French army, giving rise to the possibility that he could be forced to retreat to Portugal and possibly evacuate. He decided to strengthen the proposed evacuation area around the Fort of São Julião da Barra on the estuary of the River Tagus, near Lisbon.
Planning
In October 1809, Wellington, drawing on topographical maps prepared by José Maria das Neves Costa, and ma |
https://en.wikipedia.org/wiki/Discrete%20geometry | Discrete geometry and combinatorial geometry are branches of geometry that study combinatorial properties and constructive methods of discrete geometric objects. Most questions in discrete geometry involve finite or discrete sets of basic geometric objects, such as points, lines, planes, circles, spheres, polygons, and so forth. The subject focuses on the combinatorial properties of these objects, such as how they intersect one another, or how they may be arranged to cover a larger object.
Discrete geometry has a large overlap with convex geometry and computational geometry, and is closely related to subjects such as finite geometry, combinatorial optimization, digital geometry, discrete differential geometry, geometric graph theory, toric geometry, and combinatorial topology.
History
Although polyhedra and tessellations had been studied for many years by people such as Kepler and Cauchy, modern discrete geometry has its origins in the late 19th century. Early topics studied were: the density of circle packings by Thue, projective configurations by Reye and Steinitz, the geometry of numbers by Minkowski, and map colourings by Tait, Heawood, and Hadwiger.
László Fejes Tóth, H.S.M. Coxeter, and Paul Erdős laid the foundations of discrete geometry.
Topics
Polyhedra and polytopes
A polytope is a geometric object with flat sides, which exists in any general number of dimensions. A polygon is a polytope in two dimensions, a polyhedron in three dimensions, and so on in higher dimensions (such as a 4-polytope in four dimensions). Some theories further generalize the idea to include such objects as unbounded polytopes (apeirotopes and tessellations), and abstract polytopes.
The following are some of the aspects of polytopes studied in discrete geometry:
Polyhedral combinatorics
Lattice polytopes
Ehrhart polynomials
Pick's theorem
Hirsch conjecture
Opaque set
Packings, coverings and tilings
Packings, coverings, and tilings are all ways of arranging uniform objects |
https://en.wikipedia.org/wiki/Internet%20Security%20Association%20and%20Key%20Management%20Protocol | Internet Security Association and Key Management Protocol (ISAKMP) is a protocol defined by RFC 2408 for establishing security association (SA) and cryptographic keys in an Internet environment. ISAKMP only provides a framework for authentication and key exchange and is designed to be key exchange independent; protocols such as Internet Key Exchange (IKE) and Kerberized Internet Negotiation of Keys (KINK) provide authenticated keying material for use with ISAKMP. For example: IKE describes a protocol using part of Oakley and part of SKEME in conjunction with ISAKMP to obtain authenticated keying material for use with ISAKMP, and for other security associations such as AH and ESP for the IETF IPsec DOI.
Overview
ISAKMP defines the procedures for authenticating a communicating peer, creation and management of Security Associations, key generation techniques and threat mitigation (e.g. denial of service and replay attacks). As a framework, ISAKMP typically utilizes IKE for key exchange, although other methods have been implemented, such as Kerberized Internet Negotiation of Keys. A preliminary SA is formed using this protocol; later, a fresh rekeying is performed.
ISAKMP defines procedures and packet formats to establish, negotiate, modify and delete Security Associations. SAs contain all the information required for execution of various network security services, such as the IP layer services (such as header authentication and payload encapsulation), transport or application layer services or self-protection of negotiation traffic. ISAKMP defines payloads for exchanging key generation and authentication data. These formats provide a consistent framework for transferring key and authentication data which is independent of the key generation technique, encryption algorithm and authentication mechanism.
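For the packet-format side, a minimal Python sketch of the fixed 28-byte ISAKMP header defined in RFC 2408 (header only; payload encoding and the negotiation logic are omitted):

```python
import struct

def pack_isakmp_header(i_cookie, r_cookie, next_payload, exch_type,
                       flags=0, msg_id=0, length=28):
    # initiator cookie (8), responder cookie (8), next payload (1),
    # major/minor version (1), exchange type (1), flags (1),
    # message ID (4), total length (4) -- all in network byte order.
    version = 0x10  # major 1, minor 0
    return struct.pack("!8s8sBBBBII", i_cookie, r_cookie,
                       next_payload, version, exch_type,
                       flags, msg_id, length)

hdr = pack_isakmp_header(b"\x01" * 8, b"\x00" * 8,
                         next_payload=1,   # 1 = Security Association payload
                         exch_type=2)      # 2 = Identity Protection exchange
assert len(hdr) == 28
```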
ISAKMP is distinct from key exchange protocols in order to cleanly separate the details of security association management (and key management) from the details of key ex |
https://en.wikipedia.org/wiki/Incidence%20%28geometry%29 | In geometry, an incidence relation is a heterogeneous relation that captures the idea being expressed when phrases such as "a point lies on a line" or "a line is contained in a plane" are used. The most basic incidence relation is that between a point, P, and a line, l, sometimes denoted P I l. If P I l, the pair (P, l) is called a flag. There are many expressions used in common language to describe incidence (for example, a line passes through a point, a point lies in a plane, etc.) but the term "incidence" is preferred because it does not have the additional connotations that these other terms have, and it can be used in a symmetric manner. Statements such as "line l₁ intersects line l₂" are also statements about incidence relations, but in this case, it is because this is a shorthand way of saying that "there exists a point that is incident with both line l₁ and line l₂". When one type of object can be thought of as a set of the other type of object (viz., a plane is a set of points) then an incidence relation may be viewed as containment.
Statements such as "any two lines in a plane meet" are called incidence propositions. This particular statement is true in a projective plane, though not true in the Euclidean plane where lines may be parallel. Historically, projective geometry was developed in order to make the propositions of incidence true without exceptions, such as those caused by the existence of parallels. From the point of view of synthetic geometry, projective geometry should be developed using such propositions as axioms. This is most significant for projective planes due to the universal validity of Desargues' theorem in higher dimensions.
In contrast, the analytic approach is to define projective space based on linear algebra and utilizing homogeneous co-ordinates. The propositions of incidence are derived from the following basic result on vector spaces: given subspaces U and W of a (finite-dimensional) vector space V, the dimension of their intersection is dim(U ∩ W) = dim U + dim W − dim(U + W). Bearing in mi |
https://en.wikipedia.org/wiki/Anycast | Anycast is a network addressing and routing methodology in which a single IP address is shared by devices (generally servers) in multiple locations. Routers direct packets addressed to this destination to the location nearest the sender, using their normal decision-making algorithms, typically the lowest number of BGP network hops. Anycast routing is widely used by content delivery networks such as web and name servers, to bring their content closer to end users.
Addressing methods
There are four principal addressing methods in the Internet Protocol: unicast, broadcast, multicast, and anycast.
History
The first documented use of anycast routing for topological load-balancing of Internet-connected services was in 1989; the technique was first formally documented in the IETF four years later. It was first applied to critical infrastructure in 2001 with the anycasting of the I-root nameserver.
Early objections
Early objections to the deployment of anycast routing centered on the perceived conflict between long-lived TCP connections and the volatility of the Internet's routed topology. In concept, a long-lived connection, such as an FTP file transfer (which can take hours to complete for large files) might be re-routed to a different anycast instance in mid-connection due to changes in network topology or routing, with the result that the server changes mid-connection, and the new server is not aware of the connection and does not possess the TCP connection state of the previous anycast instance.
In practice, such problems were not observed, and these objections dissipated by the early 2000s. Many initial anycast deployments consisted of DNS servers, using principally UDP transport. Measurements of long-term anycast flows revealed very few failures due to mid-connection instance switches, far fewer (less than 0.017% or "less than one flow per ten thousand per hour of duration" according to various sources) than were attributed to other causes of failure. Numerous mechanisms were developed to efficiently shar |
https://en.wikipedia.org/wiki/Sierpi%C5%84ski%20space | In mathematics, the Sierpiński space (or the connected two-point set) is a finite topological space with two points, only one of which is closed.
It is the smallest example of a topological space which is neither trivial nor discrete. It is named after Wacław Sierpiński.
The Sierpiński space has important relations to the theory of computation and semantics, because it is the classifying space for open sets in the Scott topology.
Definition and fundamental properties
Explicitly, the Sierpiński space is a topological space S whose underlying point set is {0, 1} and whose open sets are
∅, {1}, and {0, 1}.
The closed sets are
∅, {0}, and {0, 1}.
So the singleton set {0} is closed and the set {1} is open (∅ is the empty set).
The closure operator on S is determined by
cl(∅) = ∅, cl({0}) = {0}, cl({1}) = cl({0, 1}) = {0, 1}.
A finite topological space is also uniquely determined by its specialization preorder. For the Sierpiński space this preorder is actually a partial order and given by 0 < 1.
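These definitions are small enough to check mechanically; a tiny Python illustration (variable names are ad hoc):

```python
# The Sierpinski topology on {0, 1} and its closure operator.
points = {0, 1}
opens = [set(), {1}, {0, 1}]
closeds = [points - u for u in opens]          # complements of open sets

def closure(a):
    # smallest closed set containing a
    return set.intersection(*[c for c in closeds if a <= c])

print(closure({0}))         # {0}      -> {0} is closed
print(closure({1}))         # {0, 1}   -> the point 1 is dense
print(0 in closure({1}))    # True     -> 0 < 1 in the specialization order
```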
Topological properties
The Sierpiński space S is a special case of both the finite particular point topology (with particular point 1) and the finite excluded point topology (with excluded point 0). Therefore, S has many properties in common with one or both of these families.
Separation
The points 0 and 1 are topologically distinguishable in S since {1} is an open set which contains only one of these points. Therefore, S is a Kolmogorov (T0) space.
However, S is not T1 since the point 1 is not closed. It follows that S is not Hausdorff, or Tn for any n ≥ 1.
S is not regular (or completely regular) since the point 1 and the disjoint closed set {0} cannot be separated by neighborhoods. (Also regularity in the presence of T0 would imply Hausdorff.)
S is vacuously normal and completely normal since there are no nonempty separated sets.
S is not perfectly normal since the disjoint closed sets ∅ and {0} cannot be precisely separated by a function. Indeed, {0} cannot be the zero set of any continuous function S → ℝ since every such function is constant.
Connectedness
The Sierpiński space S is both hyperconnecte |
https://en.wikipedia.org/wiki/Spermicide | Spermicide is a contraceptive substance that destroys sperm, inserted vaginally prior to intercourse to prevent pregnancy. As a contraceptive, spermicide may be used alone. However, the pregnancy rate experienced by couples using only spermicide is higher than that of couples using other methods. Usually, spermicides are combined with contraceptive barrier methods such as diaphragms, condoms, cervical caps, and sponges. Combined methods are believed to result in lower pregnancy rates than either method alone.
Spermicides are typically unscented, clear, unflavored, non-staining, and lubricative.
Types and effectiveness
The most common active ingredient of spermicides is nonoxynol-9. Spermicides containing nonoxynol-9 are available in many forms, such as jelly (gel), films, and foams. Used alone, spermicides have a failure rate of 6% per year with perfect use (correct and consistent use), and a 16% failure rate per year with typical use.
Spermicide brands
This list of examples was provided by the Mayo Clinic:
VCF Vaginal Contraceptive Film
VCF Vaginal Contraceptive Gel
VCF Contraceptive Foam
Conceptrol
Crinone
Encare
Endometrin
First-Progesterone VGS
Gynol II
Prochieve
Today Sponge
Vagi-Gard Douche Non-Staining
Nonoxynol-9 is the primary chemical used in spermicides to inhibit sperm motility. Active secondary spermicidal ingredients can include octoxynol-9, benzalkonium chloride and menfegol. These secondary ingredients are not mainstream in the United States, where nonoxynol-9 alone is typical. Preventing sperm motility inhibits the sperm from travelling towards the egg, which moves down the fallopian tubes to the uterus. Deep, proper insertion of spermicide should effectively block the cervix so that sperm cannot make it past the cervix to the uterus or the Fallopian tubes. A study observing the distribution of spermicide containing nonoxynol-9 in the vaginal tract showed “After 10 min the gel spread within the vaginal canal providing a contiguous covering of |
https://en.wikipedia.org/wiki/Outline%20of%20probability | Probability is a measure of the likeliness that an event will occur. Probability is used to quantify an attitude of mind towards some proposition whose truth is not certain. The proposition of interest is usually of the form "A specific event will occur." The attitude of mind is of the form "How certain is it that the event will occur?" The certainty that is adopted can be described in terms of a numerical measure, and this number, between 0 and 1 (where 0 indicates impossibility and 1 indicates certainty) is called the probability. Probability theory is used extensively in statistics, mathematics, science and philosophy to draw conclusions about the likelihood of potential events and the underlying mechanics of complex systems.
Introduction
Probability and randomness.
Basic probability
(Related topics: set theory, simple theorems in the algebra of sets)
Events
Events in probability theory
Elementary events, sample spaces, Venn diagrams
Mutual exclusivity
Elementary probability
The axioms of probability
Boole's inequality
Meaning of probability
Probability interpretations
Bayesian probability
Frequency probability
Calculating with probabilities
Conditional probability
The law of total probability
Bayes' theorem
Independence
Independence (probability theory)
Probability theory
(Related topics: measure theory)
Measure-theoretic probability
Sample spaces, σ-algebras and probability measures
Probability space
Sample space
Standard probability space
Random element
Random compact set
Dynkin system
Probability axioms
Event (probability theory)
Complementary event
Elementary event
"Almost surely"
Independence
Independence (probability theory)
The Borel–Cantelli lemmas and Kolmogorov's zero–one law
Conditional probability
Conditional probability
Conditioning (probability)
Conditional expectation
Conditional probability distribution
Regular conditional probability
Disintegration theorem
Bayes' theorem
Rule of succession
Condition |
https://en.wikipedia.org/wiki/French%20curve | A French curve is a template usually made from metal, wood or plastic, composed of many different segments of the Euler spiral (also known as the clothoid curve). It is used in manual drafting and in fashion design to draw smooth curves of varying radii. The template is placed on the drawing material, and a pencil, knife or other implement is traced around its curves to produce the desired result. French curves were invented by the German mathematician Ludwig Burmester and are also known as the Burmester (curve) set.
Clothing design
French curves are used in fashion design and sewing alongside hip curves, straight edges and right-angle rulers. Commercial clothing patterns can be personalized for fit by using French curves to draw neckline, sleeve, bust and waist variations.
External links
Weisstein, Eric W. French Curve from MathWorld.
Use of the French Curve from Integrated Publishing. |
https://en.wikipedia.org/wiki/Prime95 | Prime95, also distributed as the command-line utility mprime for FreeBSD and Linux, is a freeware application written by George Woltman. It is the official client of the Great Internet Mersenne Prime Search (GIMPS), a volunteer computing project dedicated to searching for Mersenne primes. It is also used in overclocking to test for system stability.
Although most of its source code is available, Prime95 is not free and open-source software because its end-user license agreement states that if the software is used to find a prime qualifying for a bounty offered by the Electronic Frontier Foundation, then that bounty will be claimed and distributed by GIMPS.
Finding Mersenne primes by volunteer computing
Prime95 tests numbers for primality using the Fermat primality test (referred to internally as PRP, or "probable prime"). For much of its history, it used the Lucas–Lehmer primality test, but the availability of Lucas–Lehmer assignments was deprecated in April 2021 to increase search throughput. Specifically, to guard against faulty results, every Lucas–Lehmer test had to be performed twice in its entirety, while Fermat tests can be verified in a small fraction of their original run time using a proof generated during the test by Prime95. Current versions of Prime95 remain capable of Lucas–Lehmer testing for the purpose of double-checking existing Lucas–Lehmer results, and for fully verifying "probably prime" Fermat test results (which, unlike "prime" Lucas–Lehmer results, are not conclusive).
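The Lucas–Lehmer recurrence itself is short. A reference sketch in Python (the mathematical test only, for odd prime exponents p; Prime95's actual implementation relies on FFT-based multiplication of multi-million-digit numbers):

```python
def lucas_lehmer(p: int) -> bool:
    # 2^p - 1 is prime iff s_(p-2) == 0, where s_0 = 4, s_k = s_(k-1)^2 - 2.
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Among these exponents, 2^11 - 1 = 2047 = 23 * 89 is the composite one:
print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])   # [3, 5, 7, 13]
```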
To reduce the number of full-length primality tests needed, Prime95 also implements other, computationally simpler tests designed to filter out unviable candidates; as of 2021, this mainly comprises Pollard's p – 1 algorithm. The elliptic-curve factorization method and Williams's p + 1 algorithm are implemented, but are considered not useful at modern GIMPS testing levels, and mostly used in attempts to factor much smaller Mersenne numbers that have already undergone primalit |
https://en.wikipedia.org/wiki/Frogger | Frogger is a 1981 arcade action game developed by Konami and manufactured by Sega. In North America, it was released by Sega/Gremlin. The object of the game is to direct a series of frogs to their homes by crossing a busy road and a hazardous river.
Frogger was positively received and came to be regarded as one of the greatest video games ever made; it was followed by several clones and sequels. By 2005, 20 million copies of its various home video game incarnations had been sold worldwide. It entered popular culture, including television and music.
Gameplay
The objective of the game is to guide a frog to each of the empty homes at the top of the screen. The game starts with three, five, or seven frogs, depending on the machine's settings. Losing all frogs is game over. The player uses the 4-direction joystick to hop the frog one step at a time. Frogger is either single-player or two players alternating turns.
The frog starts at the bottom of the screen, which contains a horizontal road occupied by speeding cars, trucks, and bulldozers. The player must guide the frog between opposing lanes of traffic to avoid becoming roadkill and losing a life. After the road, a median strip separates the two major parts of the screen. The upper part consists of a river with logs, alligators, and turtles, all moving horizontally across the screen. By jumping on swiftly moving logs and the backs of turtles and alligators, the player can guide the frog to safety. The player must avoid snakes, otters, and the open mouths of alligators. A brightly colored female frog is sometimes on a log and may be carried for bonus points. The top of the screen contains five "frog homes". These sometimes contain bonus insects or deadly alligators.
The opening tune is the first verse of a Japanese children's song called "Inu No Omawarisan" ("The Dog Policeman"). Other Japanese tunes include the themes to the anime series Hana no Ko Lunlun and Rascal the Raccoon. The American release has the same opening song plus "Yankee Doodle".
In 1982, Softli |
https://en.wikipedia.org/wiki/Graphics%20processing%20unit | A graphics processing unit (GPU) is a specialized electronic circuit initially designed to accelerate computer graphics and image processing (either on a video card or embedded on motherboards, mobile phones, personal computers, workstations, and game consoles). After their initial design, GPUs were found to be useful for non-graphic calculations involving embarrassingly parallel problems due to their parallel structure. Other non-graphical uses include the training of neural networks and cryptocurrency mining.
History
1970s
Arcade system boards have used specialized graphics circuits since the 1970s. In early video game hardware, RAM for frame buffers was expensive, so video chips composited data together as the display was being scanned out on the monitor.
A specialized barrel shifter circuit helped the CPU animate the framebuffer graphics for various 1970s arcade video games from Midway and Taito, such as Gun Fight (1975), Sea Wolf (1976), and Space Invaders (1978). The Namco Galaxian arcade system in 1979 used specialized graphics hardware that supported RGB color, multi-colored sprites, and tilemap backgrounds. The Galaxian hardware was widely used during the golden age of arcade video games, by game companies such as Namco, Centuri, Gremlin, Irem, Konami, Midway, Nichibutsu, Sega, and Taito.
The Atari 2600 in 1977 used a video shifter called the Television Interface Adaptor. Atari 8-bit computers (1979) had ANTIC, a video processor which interpreted instructions describing a "display list"—the way the scan lines map to specific bitmapped or character modes and where the memory is stored (so there did not need to be a contiguous frame buffer). 6502 machine code subroutines could be triggered on scan lines by setting a bit on a display list instruction. ANTIC also supported smooth vertical and horizontal scrolling independent of the CPU.
1980s
The NEC µPD7220 was the first implementation of a personal computer graphics display processor as a single large |
https://en.wikipedia.org/wiki/BLISS | BLISS is a system programming language developed at Carnegie Mellon University (CMU) by W. A. Wulf, D. B. Russell, and A. N. Habermann around 1970. It was perhaps the best known system language until C debuted a few years later. Since then, C became popular and common, and BLISS faded into obscurity. When C was in its infancy, a few projects within Bell Labs debated the merits of BLISS vs. C.
BLISS is a typeless block-structured programming language based on expressions rather than statements, and includes constructs for exception handling, coroutines, and macros. It does not include a goto statement.
The name is variously said to be short for Basic Language for Implementation of System Software or System Software Implementation Language, Backwards. However, in his 2015 oral history for the Babbage Institute's Computer Security History Project, Wulf claimed that the acronym was originally based on the name "Bill's Language for Implementing System Software."
The original Carnegie Mellon compiler was notable for its extensive use of optimizations, and formed the basis of the classic book The Design of an Optimizing Compiler.
Digital Equipment Corporation (DEC) developed and maintained BLISS compilers for the PDP-10, PDP-11, VAX, DEC PRISM, MIPS, DEC Alpha, and Intel IA-32. The language did not become popular among customers and few had the compiler, but DEC used it heavily in-house into the 1980s; most of the utility programs for the OpenVMS operating system were written in BLISS-32. The DEC BLISS compiler has been ported to the IA-64 and x86-64 architectures as part of the ports of OpenVMS to these platforms. The x86-64 BLISS compiler uses LLVM as its backend code generator, replacing the proprietary GEM backend used for Alpha and IA-64.
Language description
The BLISS language has the following characteristics:
All constants are full word for the machine being used, e.g. on a 16-bit machine such as the PDP-11, a constant is 16 bits; on a VAX computer, consta |
https://en.wikipedia.org/wiki/Legendre%20transformation | In mathematics, the Legendre transformation (or Legendre transform), first introduced by Adrien-Marie Legendre in 1787 when studying the minimal surface problem, is an involutive transformation on real-valued functions that are convex on a real variable. Specifically, if a real-valued multivariable function is convex on one of its independent real variables, then the Legendre transform with respect to this variable is applicable to the function. In physical problems, it is used to convert functions of one quantity (such as position, pressure, or temperature) into functions of the conjugate quantity (momentum, volume, and entropy, respectively). In this way, it is commonly used in classical mechanics to derive the Hamiltonian formalism out of the Lagrangian formalism (or vice versa) and in thermodynamics to derive the thermodynamic potentials, as well as in the solution of differential equations of several variables.
For sufficiently smooth functions on the real line, the Legendre transform f* of a function f can be specified, up to an additive constant, by the condition that the functions' first derivatives are inverse functions of each other. This can be expressed in Euler's derivative notation as
Df = (Df*)⁻¹,
where D is an operator of differentiation, x represents an argument or input to the associated function, and (Df*)⁻¹ is the inverse function such that (Df*)⁻¹(Df(x)) = x,
or equivalently, as f′(f*′(x*)) = x* and f*′(f′(x)) = x in Lagrange's notation.
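A worked instance of this condition (and of the definition below), for the convex function f(x) = x²:

```latex
f^*(x^*) = \sup_{x \in \mathbb{R}} \bigl( x^* x - x^2 \bigr) = \frac{(x^*)^2}{4},
% attained where x^* - 2x = 0, i.e. x = x^*/2. Consistently with the
% derivative condition above, f'(x) = 2x and (f^*)'(x^*) = x^*/2 are
% inverse functions of each other.
```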
The generalization of the Legendre transformation to affine spaces and non-convex functions is known as the convex conjugate (also called the Legendre–Fenchel transformation), which can be used to construct a function's convex hull.
Definition
Let I ⊂ ℝ be an interval, and f : I → ℝ a convex function; then the Legendre transform of f is the function f* defined by
f*(x*) = sup_{x ∈ I} (x*x − f(x)),
where the supremum is taken over x in I, e.g., x is chosen such that x*x − f(x) is maximized at each x*, or is such that x*x − f(x) remains bounded throughout I (e.g., when f is a linear function).
The transform is always well-defined when f is convex. Th |
https://en.wikipedia.org/wiki/Tandy%202000 | The Tandy 2000 is a personal computer introduced by Radio Shack in September 1983 based on the 8 MHz Intel 80186 microprocessor running MS-DOS. By comparison, the IBM PC XT (introduced in March 1983) used the older 4.77 MHz Intel 8088 processor, and the IBM PC/AT (introduced in 1984) would later use the newer 6 MHz Intel 80286. Due to the 16-bit data bus and more efficient instruction decoding of the 80186, the Tandy 2000 ran significantly faster than other PC compatibles, and slightly faster than the PC AT. (Later IBM upgraded the 80286 in new PC AT models to 8 MHz, though with wait states.) The Tandy 2000 was the company's first computer built around an Intel x86 series microprocessor; previous models used the Zilog Z80 and Motorola 6809 CPUs.
While touted as being compatible with the IBM XT, the Tandy 2000 was different enough that most existing PC software that was not purely text-oriented failed to work properly.
The Tandy 2000 and its special version of MS-DOS supported up to 768 KB of RAM, significantly more than the 640 KB limit imposed by the IBM architecture. It used 80-track double-sided quad-density floppy drives of 720 KB capacity; the IBM standard at the time of the introduction of the Tandy 2000 was only 360 KB.
The Tandy 2000 had both "Tandy" and "TRS-80" logos on its case, marking the start of the phaseout of the "TRS-80" brand.
History
The introduction of IBM's Model 5150 Personal Computer in August 1981 created an entirely new market for microcomputers. Many hardware and software companies were founded specifically to exploit IBM's and Microsoft's new presence as a standard-setter for small computers, and most other established manufacturers shifted focus to it as well.
By this date Tandy/Radio Shack had been in the small-computer market for four years, since its August 1977 introduction of the TRS-80 Model I. The new computer division followed in October 1979 with the TRS-80 Model II—a high-end business-oriented system. In 1983 the TRS-80 M |
https://en.wikipedia.org/wiki/Fano%20plane | In finite geometry, the Fano plane (after Gino Fano) is a finite projective plane with the smallest possible number of points and lines: 7 points and 7 lines, with 3 points on every line and 3 lines through every point. These points and lines cannot exist with this pattern of incidences in Euclidean geometry, but they can be given coordinates using the finite field with two elements. The standard notation for this plane, as a member of a family of projective spaces, is PG(2, 2). Here PG stands for "projective geometry", the first parameter is the geometric dimension (it is a plane, of dimension 2) and the second parameter is the order (the number of points per line, minus one).
The Fano plane is an example of a finite incidence structure, so many of its properties can be established using combinatorial techniques and other tools used in the study of incidence geometries. Since it is a projective space, algebraic techniques can also be effective tools in its study.
Homogeneous coordinates
The Fano plane can be constructed via linear algebra as the projective plane over the finite field with two elements. One can similarly construct projective planes over any other finite field, with the Fano plane being the smallest.
Using the standard construction of projective spaces via homogeneous coordinates, the seven points of the Fano plane may be labeled with the seven non-zero ordered triples of binary digits 001, 010, 011, 100, 101, 110, and 111. This can be done in such a way that for every two points p and q, the third point on line pq has the label formed by adding the labels of p and q modulo 2 digit by digit (e.g., 010 and 111 resulting in 101). In other words, the points of the Fano plane correspond to the non-zero points of the finite vector space of dimension 3 over the finite field of order 2.
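That XOR rule makes the plane easy to generate; a short Python illustration (points are labelled 1 through 7, standing for the binary triples above):

```python
from itertools import combinations

# {p, q, p ^ q}: XOR adds the binary labels digit by digit, modulo 2.
lines = {frozenset((p, q, p ^ q)) for p, q in combinations(range(1, 8), 2)}

print(len(lines))   # 7 lines
print(sorted(sorted(line) for line in lines))
# 3 points on every line, 3 lines through every point:
assert all(len(line) == 3 for line in lines)
assert all(sum(p in line for line in lines) == 3 for p in range(1, 8))
```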
Due to this construction, the Fano plane is considered to be a Desarguesian plane, even though the plane is too small to contain a non-degenerate Desargues configuration (which re |
https://en.wikipedia.org/wiki/Ad%20hoc%20polymorphism | In programming languages, ad hoc polymorphism is a kind of polymorphism in which polymorphic functions can be applied to arguments of different types, because a polymorphic function can denote a number of distinct and potentially heterogeneous implementations depending on the type of argument(s) to which it is applied. When applied to object-oriented or procedural concepts, it is also known as function overloading or operator overloading. The term ad hoc in this context is not intended to be pejorative; it refers simply to the fact that this type of polymorphism is not a fundamental feature of the type system. This is in contrast to parametric polymorphism, in which polymorphic functions are written without mention of any specific type, and can thus apply a single abstract implementation to any number of types in a transparent way. This classification was introduced by Christopher Strachey in 1967.
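As a concrete illustration, a Python sketch using single dispatch, one ad hoc polymorphism mechanism among many (statically typed languages such as C++ or Java instead resolve overloads at compile time):

```python
from functools import singledispatch

@singledispatch
def describe(x):
    return f"something: {x!r}"      # fallback implementation

@describe.register
def _(x: int):
    return f"integer: {x}"

@describe.register
def _(x: str):
    return f"string: {x!r}"

print(describe(3))      # integer: 3
print(describe("hi"))   # string: 'hi'
print(describe(2.5))    # something: 2.5  (falls back to the generic version)
```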
Early binding
Ad hoc polymorphism is a dispatch mechanism: control moving through one named function is dispatched to various other functions without having to specify the exact function being called. Overloading allows multiple functions taking different types to be defined with the same name; the compiler or interpreter automatically ensures that the right function is called. This way, functions appending lists of integers, lists of strings, lists of real numbers, and so on could be written, and all be called append—and the right append function would be called based on the type of lists being appended. This differs from parametric polymorphism, in which the function would need to be written generically, to work with any kind of list. Using overloading, it is possible to have a function perform two completely different things based on the type of input passed to it; this is not possible with parametric polymorphism. Another way to look at overloading is that a routine is uniquely identified not by its name, but by the combination of its name and the number, order and t |
https://en.wikipedia.org/wiki/Re-order%20buffer | A re-order buffer (ROB) is a hardware unit used in an extension to the Tomasulo algorithm to support out-of-order and speculative instruction execution. The extension forces instructions to be committed in-order.
The buffer is a circular buffer (to provide a FIFO instruction ordering queue) implemented as an array/vector (which allows recording of results against instructions as they complete out of order).
There are three stages to the Tomasulo algorithm: "Issue", "Execute", "Write Result". In an extension to the algorithm, there is an additional "Commit" stage. During the Commit stage, instruction results are stored in a register or memory. The "Write Result" stage is modified to place results in the re-order buffer. Each instruction is tagged in the reservation station with its index in the ROB for this purpose.
The contents of the buffer are used for data dependencies of other instructions scheduled in the buffer. The head of the buffer will be committed once its result is valid. Its dependencies will have already been calculated and committed since they must be ahead of the instruction in the buffer though not necessarily adjacent to it. Data dependencies between instructions would normally stall the pipeline while an instruction waits for its dependent values. The ROB allows the pipeline to continue to process other instructions while ensuring results are committed in order, preventing data hazards such as read after write (RAW), write after read (WAR) and write after write (WAW).
There are additional fields in every entry of the buffer to support the extended algorithm (a minimal sketch follows the list):
Instruction type (jump, store to memory, store to register)
Destination (either memory address or register number)
Result (value that goes to destination or indication of a (un)successful jump)
Validity (does the result already exist?)
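A minimal sketch (illustrative Python; the class and field names are invented, not from any reference implementation) of an entry carrying these fields, plus the circular head/tail discipline described above:

    from dataclasses import dataclass

    @dataclass
    class ROBEntry:
        instr_type: str            # "jump", "store to memory", or "store to register"
        destination: int           # memory address or register number
        result: int | None = None  # value for the destination (or jump outcome)
        ready: bool = False        # validity: does the result exist yet?

    class ReorderBuffer:
        def __init__(self, size: int):
            self.entries = [None] * size   # circular array (capacity checks omitted)
            self.head = 0                  # oldest instruction, next to commit
            self.tail = 0                  # next free slot

        def issue(self, entry: ROBEntry) -> int:
            tag = self.tail                # this index tags the reservation station
            self.entries[tag] = entry
            self.tail = (self.tail + 1) % len(self.entries)
            return tag

        def commit(self) -> ROBEntry | None:
            entry = self.entries[self.head]
            if entry is None or not entry.ready:
                return None                # head not ready: commit stalls, in order
            self.entries[self.head] = None
            self.head = (self.head + 1) % len(self.entries)
            return entry                   # caller writes result to its destination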
The consequences of the re-order buffer include precise exceptions and easy rollback control of target address mis-predictions (branch or jum |
https://en.wikipedia.org/wiki/K%C3%A4hler%20manifold | In mathematics and especially differential geometry, a Kähler manifold is a manifold with three mutually compatible structures: a complex structure, a Riemannian structure, and a symplectic structure. The concept was first studied by Jan Arnoldus Schouten and David van Dantzig in 1930, and then introduced by Erich Kähler in 1933. The terminology was fixed by André Weil. Kähler geometry refers to the study of Kähler manifolds, their geometry and topology, as well as the study of structures and constructions that can be performed on Kähler manifolds, such as the existence of special connections like Hermitian Yang–Mills connections, or special metrics such as Kähler–Einstein metrics.
Every smooth complex projective variety is a Kähler manifold. Hodge theory, a central part of algebraic geometry, is proved using Kähler metrics.
Definitions
Since Kähler manifolds are equipped with several compatible structures, they can be described from different points of view:
Symplectic viewpoint
A Kähler manifold is a symplectic manifold (X, ω) equipped with an integrable almost-complex structure J which is compatible with the symplectic form ω, meaning that the bilinear form g(u, v) = ω(u, Jv) on the tangent space of X at each point is symmetric and positive definite (and hence a Riemannian metric on X).
Complex viewpoint
A Kähler manifold is a complex manifold X with a Hermitian metric h whose associated 2-form ω is closed. In more detail, h gives a positive definite Hermitian form on the tangent space TX at each point of X, and the 2-form ω is defined by ω(u, v) = Re h(iu, v) for tangent vectors u and v (where i is the complex number √−1). For a Kähler manifold X, the Kähler form ω is a real closed (1,1)-form. A Kähler manifold can also be viewed as a Riemannian manifold, with the Riemannian metric g defined by g(u, v) = Re h(u, v).
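As a hedged summary of how the viewpoints fit together (sign conventions vary by author; some write h = g − iω), the three structures determine one another:

    g(u, v) = ω(u, Jv),   ω(u, v) = g(Ju, v),   h = g + iω.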
Equivalently, a Kähler manifold X is a Hermitian manifold of complex dimension n such that for every point p of X, there is a holomorphic coordinate chart around p in which the metric agrees with the standard |
https://en.wikipedia.org/wiki/Tomasulo%27s%20algorithm | Tomasulo's algorithm is a computer architecture hardware algorithm for dynamic scheduling of instructions that allows out-of-order execution and enables more efficient use of multiple execution units. It was developed by Robert Tomasulo at IBM in 1967 and was first implemented in the IBM System/360 Model 91’s floating point unit.
The major innovations of Tomasulo's algorithm include register renaming in hardware, reservation stations for all execution units, and a common data bus (CDB) on which computed values are broadcast to all reservation stations that may need them. These developments allow for improved parallel execution of instructions that would otherwise stall under the use of scoreboarding or other earlier algorithms.
Robert Tomasulo received the Eckert–Mauchly Award in 1997 for his work on the algorithm.
Implementation concepts
The following are the concepts necessary to the implementation of Tomasulo's algorithm:
Common data bus
The Common Data Bus (CDB) connects reservation stations directly to functional units. According to Tomasulo it "preserves precedence while encouraging concurrency". This has two important effects:
Functional units can access the result of any operation without involving a floating-point-register, allowing multiple units waiting on a result to proceed without waiting to resolve contention for access to register file read ports.
Hazard detection and execution control are distributed. The reservation stations control when an instruction can execute, rather than a single dedicated hazard unit.
Instruction order
Instructions are issued sequentially so that the effects of a sequence of instructions, such as exceptions raised by these instructions, occur in the same order as they would on an in-order processor, regardless of the fact that they are being executed out-of-order (i.e. non-sequentially).
Register renaming
Tomasulo's algorithm uses register renaming to correctly perform out-of-order execution. All general-purpose and rese |
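A minimal sketch of register renaming with reservation-station tags (illustrative Python; the register alias table and names below are assumptions, not the Model 91's actual structures):

    rat = {}                                  # architectural register -> tag of producing RS
    regfile = {f"R{i}": 0 for i in range(8)}  # committed register values

    def rename_source(reg: str):
        """A source operand is either a ready value or a tag to wait on."""
        if reg in rat:
            return ("tag", rat[reg])      # value will arrive on the CDB
        return ("value", regfile[reg])    # value already committed

    def rename_dest(reg: str, rs_tag: str):
        """The destination is renamed to the issuing reservation station."""
        rat[reg] = rs_tag

    # Issue "R2 <- R5 + R3" to reservation station "RS1":
    src1, src2 = rename_source("R5"), rename_source("R3")
    rename_dest("R2", "RS1")
    # A later "R4 <- R2 + R3" now waits on tag "RS1" instead of reading a stale R2.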
https://en.wikipedia.org/wiki/Hazard%20%28computer%20architecture%29 | In the domain of central processing unit (CPU) design, hazards are problems with the instruction pipeline in CPU microarchitectures when the next instruction cannot execute in the following clock cycle, and can potentially lead to incorrect computation results. Three common types of hazards are data hazards, structural hazards, and control hazards (branching hazards).
There are several methods used to deal with hazards, including pipeline stalls/pipeline bubbling, operand forwarding, and in the case of out-of-order execution, the scoreboarding method and the Tomasulo algorithm.
Background
Instructions in a pipelined processor are performed in several stages, so that at any given time several instructions are being processed in the various stages of the pipeline, such as fetch and execute. There are many different instruction pipeline microarchitectures, and instructions may be executed out-of-order. A hazard occurs when two or more of these simultaneous (possibly out of order) instructions conflict.
Types
Data hazards
Data hazards occur when instructions that exhibit data dependence modify data in different stages of a pipeline. Ignoring potential data hazards can result in race conditions (also termed race hazards). There are three situations in which a data hazard can occur:
read after write (RAW), a true dependency
write after read (WAR), an anti-dependency
write after write (WAW), an output dependency
Read after read (RAR) is not a hazard case.
Consider two instructions i1 and i2, with i1 occurring before i2 in program order.
Read after write (RAW)
(i2 tries to read a source before i1 writes to it)
A read after write (RAW) data hazard refers to a situation where an instruction refers to a result that has not yet been calculated or retrieved. This can occur because even though an instruction is executed after a prior instruction, the prior instruction has been processed only partly through the pipeline.
Example
For example:
i1. R2 <- R5 + R3
i2. R4 <- R2 + R3 |
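A small checker for the three data-hazard cases between two instructions, given each instruction's destination and source registers (an illustrative Python sketch; real pipelines detect these conditions in hardware):

    def hazards(i1_dest, i1_srcs, i2_dest, i2_srcs):
        found = []
        if i1_dest in i2_srcs:
            found.append("RAW")   # i2 reads what i1 writes (true dependency)
        if i2_dest in i1_srcs:
            found.append("WAR")   # i2 writes what i1 reads (anti-dependency)
        if i1_dest == i2_dest:
            found.append("WAW")   # both write the same register (output dependency)
        return found

    # i1. R2 <- R5 + R3 ; i2. R4 <- R2 + R3  (the example above)
    print(hazards("R2", {"R5", "R3"}, "R4", {"R2", "R3"}))   # ['RAW']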
https://en.wikipedia.org/wiki/NOP%20%28code%29 | In computer science, a NOP, no-op, or NOOP (pronounced "no op"; short for no operation) is a machine language instruction and its assembly language mnemonic, programming language statement, or computer protocol command that does nothing.
Machine language instructions
Some computer instruction sets include an instruction whose explicit purpose is to not change the state of any of the programmer-accessible registers, status flags, or memory. It often takes a well-defined number of clock cycles to execute. In other instruction sets, there is no explicit NOP instruction, but the assembly language mnemonic NOP represents an instruction which acts as a NOP; e.g., on the SPARC, sethi 0, %g0.
A NOP must not access memory, as that could cause a memory fault or page fault.
A NOP is most commonly used for timing purposes, to force memory alignment, to prevent hazards, to occupy a branch delay slot, to render void an existing instruction such as a jump, as a target of an execute instruction, or as a place-holder to be replaced by active instructions later on in program development (or to replace removed instructions when reorganizing would be problematic or time-consuming). In some cases, a NOP can have minor side effects; for example, on the Motorola 68000 series of processors, the NOP opcode causes a synchronization of the pipeline.
Listed below are the NOP instructions for some CPU architectures:
From a hardware design point of view, unmapped areas of a bus are often designed to return zeroes; since the NOP slide behavior is often desirable, it gives a bias to coding it with the all-zeroes opcode.
Code
A function or a sequence of programming language statements is a NOP or null statement if it has no effect. Null statements may be required by the syntax of some languages in certain contexts.
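Python is one such language: its grammar forbids an empty block, so the pass statement exists purely as a syntactic null statement (a minimal illustration; the function name is invented):

    def handler_stub(event):
        # A function body cannot be empty in Python; 'pass' does nothing.
        pass

    for _ in range(3):
        pass   # an explicitly empty loop body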
Ada
In Ada, the null statement serves as a NOP. As the syntax forbids that control statements or functions be empty, the null statement must be used to specify that no action i |
https://en.wikipedia.org/wiki/Flag%20%28linear%20algebra%29 | In mathematics, particularly in linear algebra, a flag is an increasing sequence of subspaces of a finite-dimensional vector space V. Here "increasing" means each is a proper subspace of the next (see filtration): {0} = V0 ⊂ V1 ⊂ V2 ⊂ ⋯ ⊂ Vk = V.
The term flag is motivated by a particular example resembling a flag: the zero point, a line, and a plane correspond to a nail, a staff, and a sheet of fabric.
If we write dim Vi = di then we have 0 = d0 < d1 < ⋯ < dk = n,
where n is the dimension of V (assumed to be finite). Hence, we must have k ≤ n. A flag is called a complete flag if di = i for all i, otherwise it is called a partial flag.
A partial flag can be obtained from a complete flag by deleting some of the subspaces. Conversely, any partial flag can be completed (in many different ways) by inserting suitable subspaces.
The signature of the flag is the sequence (d1, ..., dk).
Bases
An ordered basis for V is said to be adapted to a flag V0 ⊂ V1 ⊂ ... ⊂ Vk if the first di basis vectors form a basis for Vi for each 0 ≤ i ≤ k. Standard arguments from linear algebra can show that any flag has an adapted basis.
Any ordered basis gives rise to a complete flag by letting the Vi be the span of the first i basis vectors. For example, the standard flag in Rn is induced from the standard basis (e1, ..., en) where ei denotes the vector with a 1 in the ith entry and 0's elsewhere. Concretely, the standard flag is the sequence of subspaces: {0} ⊂ ⟨e1⟩ ⊂ ⟨e1, e2⟩ ⊂ ⋯ ⊂ ⟨e1, e2, ..., en⟩ = Rn.
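A quick numerical sketch of this construction (illustrative Python with NumPy; subspaces are represented by matrices whose columns span them):

    import numpy as np

    basis = np.eye(4)                         # the standard basis e1..e4 of R^4
    flag = [basis[:, :i] for i in range(5)]   # V_0 = {0}, ..., V_4 = R^4

    dims = [np.linalg.matrix_rank(V) if V.size else 0 for V in flag]
    assert dims == [0, 1, 2, 3, 4]            # a complete flag: d_i = i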
An adapted basis is almost never unique (the counterexamples are trivial); see below.
A complete flag on an inner product space has an essentially unique orthonormal basis: it is unique up to multiplying each vector by a unit (scalar of unit length, e.g. 1, −1, i). Such a basis can be constructed using the Gram–Schmidt process. The uniqueness up to units follows inductively, by noting that vi lies in the one-dimensional space Vi−1⊥ ∩ Vi.
More abstractly, it is unique up to an action of the maximal torus: the flag corresponds to the Borel group, and the inner product corresponds to the maximal compact subgroup. |
https://en.wikipedia.org/wiki/Klein%20quartic | In hyperbolic geometry, the Klein quartic, named after Felix Klein, is a compact Riemann surface of genus 3 with the highest possible order automorphism group for this genus, namely 168 orientation-preserving automorphisms, and 336 automorphisms if orientation may be reversed. As such, the Klein quartic is the Hurwitz surface of lowest possible genus; see Hurwitz's automorphisms theorem. Its (orientation-preserving) automorphism group is isomorphic to PSL(2, 7), the second-smallest non-abelian simple group after the alternating group A5. The quartic was first described in (Klein 1878).
Klein's quartic occurs in many branches of mathematics, in contexts including representation theory, homology theory, octonion multiplication, Fermat's Last Theorem, and the Stark–Heegner theorem on imaginary quadratic number fields of class number one; see for a survey of properties.
Originally, the "Klein quartic" referred specifically to the subset of the complex projective plane defined by an algebraic equation. This has a specific Riemannian metric (that makes it a minimal surface in ), under which its Gaussian curvature is not constant. But more commonly (as in this article) it is now thought of as any Riemann surface that is conformally equivalent to this algebraic curve, and especially the one that is a quotient of the hyperbolic plane by a certain cocompact group that acts freely on by isometries. This gives the Klein quartic a Riemannian metric of constant curvature that it inherits from . This set of conformally equivalent Riemannian surfaces is precisely the same as all compact Riemannian surfaces of genus 3 whose conformal automorphism group is isomorphic to the unique simple group of order 168. This group is also known as , and also as the isomorphic group . By covering space theory, the group mentioned above is isomorphic to the fundamental group of the compact surface of genus .
Closed and open forms
It is important to distinguish two different forms of the quartic. The closed |
https://en.wikipedia.org/wiki/Basophil | Basophils are a type of white blood cell. Basophils are the least common type of granulocyte, representing about 0.5% to 1% of circulating white blood cells. However, they are the largest type of granulocyte and how they work is not fully understood. They are responsible for inflammatory reactions during immune response, as well as in the formation of acute and chronic allergic diseases, including anaphylaxis, asthma, atopic dermatitis and hay fever. They also produce compounds that coordinate immune responses, including histamine and serotonin, which induce inflammation, and heparin, which prevents blood clotting, although in smaller quantities than are found in mast cell granules. Mast cells were once thought to be basophils that migrated from the blood into their resident tissues (connective tissue), but they are now known to be different types of cells.
Basophils were discovered in 1879 by German physician Paul Ehrlich, who one year earlier had found a cell type present in tissues that he termed mastzellen (now mast cells). Ehrlich received the 1908 Nobel Prize in Physiology or Medicine for his discoveries.
The name comes from the fact that these leukocytes are basophilic, i.e., they are susceptible to staining by basic dyes, as shown in the picture.
Structure
Basophils contain large cytoplasmic granules which obscure the cell nucleus under the microscope when stained. However, when unstained, the nucleus is visible and it usually has two lobes. The mast cell, another granulocyte, is similar in appearance and function. Both cell types store histamine, a chemical that is secreted by the cells when stimulated. However, they arise from different branches of hematopoiesis, and mast cells usually do not circulate in the blood stream, but instead are located in connective tissue. Like all circulating granulocytes, basophils can be recruited out of the blood into a tissue when needed.
Function
Basophils appear in many specific kinds of inflammatory reactions, particularl |
https://en.wikipedia.org/wiki/Key%20authentication | Key authentication is used to solve the problem of authenticating the keys of one person (say "person B") to some other person ("person A") who is talking to or trying to talk to B. In other words, it is the process of assuring that the key of "person A" held by "person B" does in fact belong to "person A" and vice versa.
This is usually done after the keys have been shared among the two sides over some secure channel. However, some algorithms share the keys at the time of authentication.
The simplest solution for this kind of problem is for the two concerned users to communicate and exchange keys. However, for systems in which there are a large number of users or in which the users do not personally know each other (e.g., Internet shopping), this is not practical. There are various algorithms for both symmetric keys and asymmetric public key cryptography to solve this problem.
Authentication using Shared Keys
For key authentication using traditional symmetric key cryptography, this is the problem of assuring that there is no man-in-the-middle attacker who is trying to read or spoof the communication. There are various algorithms used nowadays to prevent such attacks. The most common among them are Diffie–Hellman key exchange, authentication using a key distribution center, Kerberos, and the Needham–Schroeder protocol. Other methods that can be used include password-authenticated key agreement protocols.
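A minimal sketch of Diffie–Hellman key agreement (illustrative Python only: the parameters below are demo-sized, and unauthenticated DH remains open to a man-in-the-middle, which is why it is combined with authentication in practice):

    import secrets

    p = 2**127 - 1                     # a Mersenne prime; not real-world parameters
    g = 3                              # illustrative generator choice

    a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent
    A = pow(g, a, p)                   # Alice -> Bob
    B = pow(g, b, p)                   # Bob -> Alice

    assert pow(B, a, p) == pow(A, b, p)   # both arrive at the same shared key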
Authentication using Public Key Cryptography
Crypto systems using asymmetric key algorithms do not evade the problem either. That a public key can be known by all without compromising the security of an encryption algorithm (for some such algorithms, though not for all) is certainly useful, but does not prevent some kinds of attacks. For example, a spoofing attack in which public key A is claimed publicly to be that of user Alice, but is in fact a public key belonging to man-in-the-middle attacker Mallet, is easily possible. No public key is inher |
https://en.wikipedia.org/wiki/Secure%20key%20issuing%20cryptography | Secure key issuing is a variant of ID-based cryptography that reduces the level of trust that needs to be placed in a trusted third party by spreading the trust across multiple third parties. In addition to the normally transmitted information, the user supplies what is known as "blinding" information, which can be used to blind (hide) data so that only the user can later retrieve it. The third party provides a "blinded" partial private key, which is then passed on to several other third parties in turn, each adding another part of the key before blinding it and passing it on. Once the user gets the key they (and only they) can unblind it and retrieve their full private key, after which point the system becomes the same as identity-based cryptography.
If all of the third parties cooperate they can recover the private key, so key escrow problems arise only if all of the third parties are untrustworthy. In other areas of information security this is known as a cascade; if every member of the cascade is independent and the cascade is large, then the system may be considered trustworthy in actual practice.
The paper below states that "Compared with certificate-based cryptography, ID-based cryptography is advantageous in key management, since key distribution and key revocation are not required." However this poses a problem in long-lived environments where an identity (such as an email address) may shift in ownership over time and old keys need to be revoked and new keys associated with that identity provided to a new party.
References
External links
'Secure Key Issuing in ID-based Cryptography'
Key management |
https://en.wikipedia.org/wiki/Certificate-based%20encryption | Certificate-based encryption is a system in which a certificate authority uses ID-based cryptography to produce a certificate. This system gives the users both implicit and explicit certification, the certificate can be used as a conventional certificate (for signatures, etc.), but also implicitly for the purpose of encryption.
Example
A user, Alice, can doubly encrypt a message using another user's (Bob's) public key and his (Bob's) identity.
This means that the user (Bob) cannot decrypt it without a currently valid certificate and also that the certificate authority cannot decrypt the message, as they don't have the user's private key (i.e., there is no implicit escrow as with ID-based cryptography, since the double encryption means they cannot decrypt it solely with the information they have).
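A toy sketch of the double-encryption idea (illustrative Python using two symmetric Fernet keys as stand-ins; real certificate-based encryption layers a public-key scheme under an identity/certificate-based scheme, and only the layering is shown here):

    from cryptography.fernet import Fernet

    bob_key = Fernet.generate_key()    # stands in for Bob's personal key
    cert_key = Fernet.generate_key()   # stands in for the current certificate

    inner = Fernet(bob_key).encrypt(b"meet at noon")
    outer = Fernet(cert_key).encrypt(inner)

    # Decrypting requires BOTH keys, applied in reverse order:
    plain = Fernet(bob_key).decrypt(Fernet(cert_key).decrypt(outer))
    assert plain == b"meet at noon"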
Key revocation
Key revocation can be added to the system by requiring a new certificate to be issued as frequently as the level of security requires. Because the certificate is "public information", it does not need to be transmitted over a secret channel. The downside of this is the requirement for regular communication between users and the certificate authority, which means the certificate authority is more vulnerable to electronic attacks (such as denial-of-service attacks) and also that such attacks could effectively stop the system from working. This risk can be partially but not completely reduced by having a hierarchy of multiple certificate authorities.
Practical applications
The best example of practical use of certificate-based encryption is Content Scrambling System (CSS), which is used to encode DVD movies in such a way as to make them playable only in a part of the world where they are sold. However, the fact that the region decryption key is stored on the hardware level in the DVD players substantially weakens this form of protection.
See also
X.509
Certificate server
References
Craig Gentry, Certificate-Based |
https://en.wikipedia.org/wiki/Certificateless%20cryptography | Certificateless cryptography is a variant of ID-based cryptography intended to prevent the key escrow problem. Ordinarily, keys are generated by a certificate authority or a key generation center (KGC) who is given complete power and is implicitly trusted. To prevent a complete breakdown of the system in the case of a compromised KGC, the key generation process is split between the KGC and the user. The KGC first generates a key pair, where the private key is now the partial private key of the system. The remainder of the key is a random value generated by the user, and is never revealed to anyone, not even the KGC. All cryptographic operations by the user are performed by using a complete private key which involves both the KGC's partial key, and the user's random secret value.
One disadvantage of this is that the identity information no longer forms the entire public key: the user's public key is not discoverable from only the user's identity string and the KGC's public key, so it must be published or otherwise obtained by other users. One advantage of the system is that it is possible to verify that any such obtained public key belongs to the stated identity string. (In other words, the method of distributing the user's public key does not have to be secure.) The identity string and the KGC's public key can be used to verify that the obtained public key was generated from the identity string, the KGC's private key and some unknown value. Note that multiple public/private key pairs can be generated for any identity string, but attackers would not have access to the KGC's private key in the creation process.
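A minimal sketch of the split key-generation step in a toy additive setting (illustrative Python; the names and the group are assumptions, not a real certificateless scheme): the full private key combines the KGC's partial key with a secret the user never reveals, so neither party alone holds it.

    import secrets

    q = 2**127 - 1                       # demo-sized modulus standing in for a group order
    kgc_partial = secrets.randbelow(q)   # issued by the KGC for this identity
    user_secret = secrets.randbelow(q)   # generated by the user, never revealed

    full_private_key = (kgc_partial + user_secret) % q
    # The KGC cannot reconstruct full_private_key: it never sees user_secret.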
To encrypt a message to another user, three pieces of information are needed: 1) the recipient's public key, 2) the recipient's identity string, and 3) the KGC's public key. The identity string and the KGC's public key are used to v |
https://en.wikipedia.org/wiki/Comparametric%20equation | A comparametric equation is an equation that describes a parametric relationship between a function and a dilated version of the same function, where the equation does not involve the parameter. For example, ƒ(2t) = 4ƒ(t) is a comparametric equation: if we define g(t) = ƒ(2t), then the relation g = 4ƒ no longer contains the parameter t. The comparametric equation g = 4ƒ has a family of solutions, one of which is ƒ(t) = t².
To see that ƒ(t) = t² is a solution, we merely substitute back in: g(t) = ƒ(2t) = (2t)² = 4t² = 4ƒ(t), so that g = 4ƒ.
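A quick numerical check of this substitution (illustrative Python):

    # Check that f(t) = t**2 solves the comparametric equation g = 4*f,
    # where g(t) = f(2*t).
    def f(t):
        return t ** 2

    for t in (0.0, 0.5, 1.0, 3.0, -2.0):
        assert f(2 * t) == 4 * f(t)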
Comparametric equations arise naturally in signal processing when we have multiple measurements of the same phenomenon, in which each of the measurements was acquired using a different sensitivity. For example, two or more differently exposed pictures of the same subject matter give rise to a comparametric relationship, the solution of which is the response function of the camera, image sensor, or imaging system. In this sense, comparametric equations are the fundamental mathematical basis for HDR (high dynamic range) imaging, as well as HDR audio.
Comparametric equations have been used in many areas of research, and have many practical applications to the real world. They are used in radar, microphone arrays, and have been used in processing crime scene video in homicide trials in which the only evidence against the accused was video recordings of the murder.
Solution
One existing solution is the comparametric camera response function (CCRF), which supports real-time comparametric analysis. It has applications in the analysis of multiple images.
References
Related concepts
Parametric equation
Functional equation
Contraction mapping
Multivariable calculus
Equations |
https://en.wikipedia.org/wiki/CICS | IBM CICS (Customer Information Control System) is a family of mixed-language application servers that provide online transaction management and connectivity for applications on IBM mainframe systems under z/OS and z/VSE.
CICS family products are designed as middleware and support rapid, high-volume online transaction processing. A CICS transaction is a unit of processing initiated by a single request that may affect one or more objects. This processing is usually interactive (screen-oriented), but background transactions are possible.
CICS Transaction Server (CICS TS) sits at the head of the CICS family and provides services that extend or replace the functions of the operating system. These services can be more efficient than the generalized operating system services and also simpler for programmers to use, particularly with respect to communication with diverse terminal devices.
Applications developed for CICS may be written in a variety of programming languages and use CICS-supplied language extensions to interact with resources such as files, database connections, terminals, or to invoke functions such as web services. CICS manages the entire transaction such that if, for any reason, a part of the transaction fails, all recoverable changes can be backed out.
While CICS TS has its highest profile among large financial institutions, such as banks and insurance companies, many Fortune 500 companies and government entities are reported to run CICS. Other, smaller enterprises can also run CICS TS and other CICS family products. CICS can regularly be found behind the scenes in, for example, bank-teller applications, ATM systems, industrial production control systems, insurance applications, and many other types of interactive applications.
Recent CICS TS enhancements include new capabilities to improve the developer experience, including the choice of APIs, frameworks, editors, and build tools, while at the same time providing updates in the key areas of security |