| source | text |
|---|---|
https://en.wikipedia.org/wiki/Head-related%20transfer%20function | A head-related transfer function (HRTF), also known as anatomical transfer function (ATF), or a head shadow, is a response that characterizes how an ear receives a sound from a point in space. As sound strikes the listener, the size and shape of the head, ears, and ear canal, the density of the head, and the size and shape of the nasal and oral cavities all transform the sound and affect how it is perceived, boosting some frequencies and attenuating others. Generally speaking, the HRTF boosts frequencies from 2–5 kHz with a primary resonance of +17 dB at 2,700 Hz. But the response curve is more complex than a single bump, affects a broad frequency spectrum, and varies significantly from person to person.
A pair of HRTFs for two ears can be used to synthesize a binaural sound that seems to come from a particular point in space. It is a transfer function, describing how a sound from a specific point will arrive at the ear (generally at the outer end of the auditory canal). Some consumer home entertainment products designed to reproduce surround sound from stereo (two-speaker) headphones use HRTFs. Some forms of HRTF processing have also been included in computer software to simulate surround sound playback from loudspeakers.
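As a rough sketch of how a pair of HRTFs is applied in practice (not from the article; it assumes measured head-related impulse responses hrir_left and hrir_right for one source direction are available as NumPy arrays), binaural synthesis amounts to convolving the mono source with each ear's impulse response:

import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    # Convolve the mono source with each ear's head-related impulse
    # response (the time-domain form of the HRTF); the resulting stereo
    # pair is perceived as coming from the direction the HRIRs encode.
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    out = np.zeros((max(len(left), len(right)), 2))
    out[:len(left), 0] = left
    out[:len(right), 1] = right
    return out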
Sound localization
Humans have just two ears, but can locate sounds in three dimensions – in range (distance), in direction above and below (elevation), in front and to the rear, as well as to either side (azimuth). This is possible because the brain, inner ear, and the external ears (pinna) work together to make inferences about location. This ability to localize sound sources may have developed in humans and ancestors as an evolutionary necessity since the eyes can only see a fraction of the world around a viewer, and vision is hampered in darkness, while the ability to localize a sound source works in all directions, to varying accuracy,
regardless of the surrounding light.
Humans estimate the location of a source by taking cues derived from one ear (monaura |
https://en.wikipedia.org/wiki/Unary%20operation | In mathematics, a unary operation is an operation with only one operand, i.e. a single input. This is in contrast to binary operations, which use two operands. An example is any function f : A → A, where A is a set. The function f is a unary operation on A.
Common notations are prefix notation (e.g. ¬x, −x), postfix notation (e.g. the factorial n!), functional notation (e.g. f(x) or f x), and superscripts (e.g. the transpose x^T). Other notations exist as well, for example, in the case of the square root, a horizontal bar extending the square root sign over the argument can indicate the extent of the argument.
Examples
Absolute value
Obtaining the absolute value of a number is a unary operation. This function is defined as |x| = x if x ≥ 0 and |x| = −x if x < 0, where |x| is the absolute value of x.
Negation
This is used to find the negative value of a single number. This is technically not a unary operation, as −n is just a short form of 0 − n. For example, −3 is a short form of 0 − 3.
Unary negative and positive
As unary operations have only one operand, they are evaluated before other operations containing them. Here is an example using negation: 3 − −2.
Here, the first '−' represents the binary subtraction operation, while the second '−' represents the unary negation of the 2 (or '−2' could be taken to mean the integer −2). Therefore, the expression is equal to: 3 − −2 = 3 + 2 = 5.
Technically, there is also a unary + operation, but it is not needed since we assume an unsigned value to be positive: (+2) = 2.
The unary + operation does not change the sign of a negative operand: (+(−2)) = (−2).
In this case, a unary negation is needed to change the sign: (−(−2)) = (+2).
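A short Python sketch (not part of the article) of the same evaluations:

# Unary negation binds to its single operand before the surrounding
# binary subtraction is applied, so 3 - -2 is 3 - (-2) = 5.
print(3 - -2)      # 5
print(+2)          # 2, unary plus leaves the value unchanged
print(+(-2))       # -2, unary plus does not change the sign
print(-(-2))       # 2, unary negation flips the sign
print(abs(-3))     # 3, absolute value as a unary operation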
Trigonometry
In trigonometry, the trigonometric functions, such as sine, cosine, and tangent, can be seen as unary operations. This is because it is possible to provide only one term as input for these functions and retrieve a result. By contrast, binary operations, such as addition, require two different terms to compute a result.
Examples from programming languages
JavaScript
In JavaScript, these operators are unary:
Increment: ++x, x++
Decrement: --x, x--
Positive: +x |
https://en.wikipedia.org/wiki/Webcomic | Webcomics (also known as online comics or Internet comics) are comics published on the internet, such as on a website or a mobile app. While many webcomics are published exclusively online, others are also published in magazines, newspapers, or comic books.
Webcomics can be compared to self-published print comics in that anyone with an Internet connection can publish their own webcomic. Readership levels vary widely; many are read only by the creator's immediate friends and family, while some of the most widely read have audiences of well over one million readers. Webcomics range from traditional comic strips and graphic novels to avant garde comics, and cover many genres, styles, and subjects. They sometimes take on the role of a comic blog. The term web cartoonist is sometimes used to refer to someone who creates webcomics.
Medium
There are several differences between webcomics and print comics. With webcomics the restrictions of traditional books, newspapers or magazines can be lifted, allowing artists and writers to take advantage of the web's unique capabilities.
Styles
The creative freedom webcomics provide allows artists to work in nontraditional styles. Clip art or photo comics (also known as fumetti) are two types of webcomics that do not use traditional artwork. A Softer World, for example, is made by overlaying photographs with strips of typewriter-style text. As in the constrained comics tradition, a few webcomics, such as Dinosaur Comics by Ryan North, are created with most strips having art copied exactly from one (or a handful of) template comics and only the text changing. Pixel art, such as that created by Richard Stevens of Diesel Sweeties, is similar to that of sprite comics but instead uses low-resolution images created by the artist themself. However, it is also common for some artists to use traditional styles, similar to those typically published in newspapers or comic books.
Content
Webcomics that are independently published are not subj |
https://en.wikipedia.org/wiki/Liturgical%20colours | Liturgical colours are specific colours used for vestments and hangings within the context of Christian liturgy. The symbolism of violet, blue, white, green, red, gold, black, rose and other colours may serve to underline moods appropriate to a season of the liturgical year or may highlight a special occasion.
There is a distinction between the colour of the vestments worn by the clergy and their choir dress, which with a few exceptions does not change with the seasons of the liturgical year.
Roman Catholic Church
Current rubrics
In the Roman Rite, as reformed by Pope Paul VI, the following colours are used, in accordance with the rubrics of the General Instruction of the Roman Missal, Section 346.
On more solemn days, festive (that is, more precious) sacred vestments may be used, even if not of the colour of the day. Such vestments may, for instance, be made from cloth of gold or cloth of silver. Moreover, the Conference of Bishops may determine and propose to the Apostolic See adaptations suited to the needs and culture of peoples.
Ritual Masses are celebrated in their proper colour or in white or in a festive colour. Masses for Various Needs, on the other hand, are celebrated in the colour proper to the day or the season or in violet if they bear a penitential character. Votive Masses are celebrated in the colour suited to the Mass itself or even in the colour proper to the day or the season.
Regional and situational exceptions
Some particular variations:
Blue, a colour associated with the Virgin Mary. While blue vestments are common in some Eastern churches, in the Latin rite, blue may only be used pursuant to a special privilege granted. The permission, sometimes called "cerulean privilege", is of two kinds: one pertains to particular Marian shrines and specifies when blue vestments may be worn. The other type of permission is that accorded to various countries. An apostolic indult was granted for the feast of the Immaculate Conception and its octave as wel |
https://en.wikipedia.org/wiki/Altitude%20%28triangle%29 | In geometry, an altitude of a triangle is a line segment through a vertex and perpendicular to a line containing the side opposite the vertex. This line containing the opposite side is called the extended base of the altitude. The intersection of the extended base and the altitude is called the foot of the altitude. The length of the altitude, often simply called "the altitude", is the distance between the extended base and the vertex. The process of drawing the altitude from the vertex to the foot is known as dropping the altitude at that vertex. It is a special case of orthogonal projection.
Altitudes can be used in the computation of the area of a triangle: one-half of the product of an altitude's length and its base's length equals the triangle's area. Thus, the longest altitude is perpendicular to the shortest side of the triangle. The altitudes are also related to the sides of the triangle through the trigonometric functions.
In an isosceles triangle (a triangle with two congruent sides), the altitude having the incongruent side as its base will have the midpoint of that side as its foot. Also the altitude having the incongruent side as its base will be the angle bisector of the vertex angle.
It is common to mark the altitude with the letter h (as in height), often subscripted with the name of the side the altitude is drawn to.
In a right triangle, the altitude drawn to the hypotenuse c divides the hypotenuse into two segments of lengths p and q. If we denote the length of the altitude by h_c, we then have the relation
h_c = √(p · q)
(geometric mean theorem)
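As a small worked instance of this relation (illustrative values only): with p = 4 and q = 9, the altitude is h_c = √(4 · 9) = √36 = 6.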
For acute triangles, the feet of the altitudes all fall on the triangle's sides (not extended). In an obtuse triangle (one with an obtuse angle), the foot of the altitude to the obtuse-angled vertex falls in the interior of the opposite side, but the feet of the altitudes to the acute-angled vertices fall on the opposite extended side, exterior to the triangle. This is illustrated in the adjacent diagram: in this obtuse |
https://en.wikipedia.org/wiki/Colony%20%28biology%29 | In biology, a colony is composed of two or more conspecific individuals living in close association with, or connected to, one another. This association is usually for mutual benefit such as stronger defense or the ability to attack bigger prey.
Colonies can form in various shapes and ways depending on the organism involved. For instance, the bacterial colony is a cluster of identical cells (clones). These colonies often form and grow on the surface of (or within) a solid medium, usually derived from a single parent cell.
Colonies, in the context of development, may be composed of two or more unitary (or solitary) organisms or be modular organisms. Unitary organisms have determinate development (set life stages) from zygote to adult form and individuals or groups of individuals (colonies) are visually distinct. Modular organisms have indeterminate growth forms (life stages not set) through repeated iteration of genetically identical modules (or individuals), and it can be difficult to distinguish between the colony as a whole and the modules within. In the latter case, modules may have specific functions within the colony.
In contrast, solitary organisms do not associate with colonies; they are ones in which all individuals live independently and have all of the functions needed to survive and reproduce.
Some organisms are primarily independent and form facultative colonies in reply to environmental conditions while others must live in a colony to survive (obligate). For example, some carpenter bees will form colonies when a dominant hierarchy is formed between two or more nest foundresses (facultative colony), while corals are animals that are physically connected by living tissue (the coenosarc) that contains a shared gastrovascular cavity.
Colony types
Social colonies
Unicellular and multicellular unitary organisms may aggregate to form colonies. For example,
Protists such as slime molds are many unicellular organisms that aggregate to form colonies whe |
https://en.wikipedia.org/wiki/List%20of%20geometers | A geometer is a mathematician whose area of study is geometry.
Some notable geometers and their main fields of work, chronologically listed, are:
1000 BCE to 1 BCE
Baudhayana (fl. c. 800 BC) – Euclidean geometry
Manava (c. 750 BC–690 BC) – Euclidean geometry
Thales of Miletus (c. 624 BC – c. 546 BC) – Euclidean geometry
Pythagoras (c. 570 BC – c. 495 BC) – Euclidean geometry, Pythagorean theorem
Zeno of Elea (c. 490 BC – c. 430 BC) – Euclidean geometry
Hippocrates of Chios (born c. 470 – 410 BC) – first systematically organized Stoicheia – Elements (geometry textbook)
Mozi (c. 468 BC – c. 391 BC)
Plato (427–347 BC)
Theaetetus (c. 417 BC – 369 BC)
Autolycus of Pitane (360–c. 290 BC) – astronomy, spherical geometry
Euclid (fl. 300 BC) – Elements, Euclidean geometry (sometimes called the "father of geometry")
Apollonius of Perga (c. 262 BC – c. 190 BC) – Euclidean geometry, conic sections
Archimedes (c. 287 BC – c. 212 BC) – Euclidean geometry
Eratosthenes (c. 276 BC – c. 195/194 BC) – Euclidean geometry
Katyayana (c. 3rd century BC) – Euclidean geometry
1–1300 AD
Hero of Alexandria (c. AD 10–70) – Euclidean geometry
Pappus of Alexandria (c. AD 290–c. 350) – Euclidean geometry, projective geometry
Hypatia of Alexandria (c. AD 370–c. 415) – Euclidean geometry
Brahmagupta (597–668) – Euclidean geometry, cyclic quadrilaterals
Vergilius of Salzburg (c.700–784) – Irish bishop of Aghaboe, Ossory and later Salzburg, Austria; antipodes, and astronomy
Al-Abbās ibn Said al-Jawharī (c. 800–c. 860)
Thabit ibn Qurra (826–901) – analytic geometry, non-Euclidean geometry, conic sections
Abu'l-Wáfa (940–998) – spherical geometry, spherical triangles
Alhazen (965–c. 1040)
Omar Khayyam (1048–1131) – algebraic geometry, conic sections
Ibn Maḍāʾ (1116–1196)
1301–1800 AD
Piero della Francesca (1415–1492)
Leonardo da Vinci (1452–1519) – Euclidean geometry
Jyesthadeva (c. 1500 – c. 1610) – Euclidean geometry, cyclic quadrilaterals
Marin Getaldić (1568 |
https://en.wikipedia.org/wiki/Indirect%20self-reference | Indirect self-reference describes an object referring to itself indirectly.
For example, define the function f such that f(x) = x(x). Any function passed as an argument to f is invoked with itself as an argument, and thus any use of that argument inside f indirectly refers to the function itself.
This example is similar to the Scheme expression "((lambda(x)(x x)) (lambda(x)(x x)))", which is expanded to itself by beta reduction, and so its evaluation loops indefinitely despite the lack of explicit looping constructs. An equivalent example can be formulated in lambda calculus.
Indirect self-reference is special in that its self-referential quality is not explicit, as it is in the sentence "this sentence is false." The phrase "this sentence" refers directly to the sentence as a whole. An indirectly self-referential sentence would replace the phrase "this sentence" with an expression that effectively still referred to the sentence, but did not use the pronoun "this."
An example will help to explain this. Suppose we define the quine of a phrase to be the quotation of the phrase followed by the phrase itself. So, the quine of:
is a sentence fragment
would be:
"is a sentence fragment" is a sentence fragment
which, incidentally, is a true statement.
Now consider the sentence:
"when quined, makes quite a statement" when quined, makes quite a statement
The quotation here, plus the phrase "when quined," indirectly refers to the entire sentence. The importance of this fact is that the remainder of the sentence, the phrase "makes quite a statement," can now make a statement about the sentence as a whole. If we had used a pronoun for this, we could have written something like "this sentence makes quite a statement."
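As an illustrative sketch (not part of the article), the quine-of-a-phrase construction can be written as a small Python function:

def quine_of(phrase):
    # The quine of a phrase is the quotation of the phrase followed by
    # the phrase itself.
    return '"' + phrase + '" ' + phrase

print(quine_of("is a sentence fragment"))
# "is a sentence fragment" is a sentence fragment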
It seems silly to go through this trouble when pronouns will suffice (and when they make more sense to the casual reader), but in systems of mathematical logic, there is generally no analog of the pronoun. It is somewhat surprising, in fact, that self-reference can be a |
https://en.wikipedia.org/wiki/Olry%20Terquem | Olry Terquem (16 June 1782 – 6 May 1862) was a French mathematician. He is known for his works in geometry and for founding two scientific journals, one of which was the first journal about the history of mathematics. He was also the pseudonymous author (as Tsarphati) of a sequence of letters advocating radical reform in Judaism. He was French Jewish.
Education and career
Terquem grew up speaking Yiddish, and studying only the Hebrew language and the Talmud. However, after the French revolution his family came into contact with a wider society, and his studies broadened. Despite his poor French he was admitted to study mathematics at the École Polytechnique in Paris, beginning in 1801, as only the second Jew to study there. He became an assistant there in 1803, and earned his doctorate in 1804.
After finishing his studies he moved to Mainz (at that time known as Mayence and part of imperial France), where he taught at the Imperial Lycée. In 1811 he moved to the artillery school in the same city, in 1814 he moved again to the artillery school in Grenoble, and in 1815 he became the librarian of the Dépôt Central de l'Artillerie in Paris, where he remained for the rest of his life. He became an officer of the Legion of Honor in 1852. After he died, his funeral was officiated by Lazare Isidor, the Chief Rabbi of Paris and later of France, and attended by over 12 generals headed by Edmond Le Bœuf.
Mathematics
Terquem translated works concerning artillery, was the author of several textbooks, and became an expert on the history of mathematics. Terquem and Camille-Christophe Gerono were the founding editors of the Nouvelles Annales de Mathématiques in 1842. Terquem also founded another journal in 1855, the Bulletin de Bibliographie, d'Histoire et de Biographie de Mathématiques, which was published as a supplement to the Nouvelles Annales, and he continued editing it until 1861. This was the first journal dedicated to the history of mathematics.
In geometry, Terquem is |
https://en.wikipedia.org/wiki/PlanetMath | PlanetMath is a free, collaborative online mathematics encyclopedia. The emphasis is on rigour, openness, pedagogy, real-time content, interlinked content, and a community of about 24,000 people with various maths interests. Intended to be comprehensive, the project is currently hosted by the University of Waterloo. The site is owned by a US-based nonprofit corporation, "PlanetMath.org, Ltd".
PlanetMath was started when the popular free online mathematics encyclopedia MathWorld was temporarily taken offline for 12 months by a court injunction as a result of the CRC Press lawsuit against the Wolfram Research company and its employee (and MathWorld's author) Eric Weisstein.
Materials
The main PlanetMath focus is on encyclopedic entries. It formerly operated a self-hosted forum, but now encourages discussion via Gitter.
The encyclopedia hosted about 9,289 entries and over 16,258 concepts (a concept may be for example a specific notion defined within a more general entry). An overview of the current PlanetMath contents is also available. About 300 Wikipedia entries incorporate text from PlanetMath articles; they are listed in the category "Wikipedia articles incorporating text from PlanetMath".
An all-inclusive PlanetMath Free Encyclopedia book of 2,300 pages, covering the encyclopedia contents up to 2006, is available as a free PDF download.
Content development model
PlanetMath implements a specific content creation system called the authority model.
An author who starts a new article becomes its owner, that is the only person authorized to edit that article. Other users may add corrections and discuss improvements but the resulting modifications of the article, if any, are always made by the owner. However, if there are long lasting unresolved corrections, the ownership can be removed. More precisely, after 2 weeks the system starts to remind the owner by mail; at 6 weeks any user can "adopt" the article; at 8 weeks the ownership of the entry is completely remov |
https://en.wikipedia.org/wiki/Moria%20%281983%20video%20game%29 | The Dungeons of Moria, usually referred to as simply Moria, is a computer game inspired by J. R. R. Tolkien's novel The Lord of the Rings. The objective of the game is to dive deep into the Mines of Moria and kill the Balrog. Moria, along with Hack (1984) and Larn (1986), is considered to be among the first roguelike games, and Moria was the first to include a town level.
Moria was the basis of the better known Angband roguelike game, and influenced the preliminary design of Blizzard Entertainment's Diablo.
Gameplay
The player's goal is to descend to the depths of Moria to defeat the Balrog, akin to a boss battle. As with Rogue, levels are not persistent: when the player leaves the level and then tries to return, a new level is procedurally generated. Among other improvements to Rogue, there is a persistent town at the highest level where players can buy and sell equipment.
Moria begins with creation of a character. The player first chooses a "race" from the following: Human, Half-Elf, Elf, Halfling, Gnome, Dwarf, Half-Orc, or Half-Troll. Racial selection determines base statistics and class availability. One then selects the character's "class" from the following: Warrior, Mage, Priest, Rogue, Ranger, or Paladin. Class further determines statistics, as well as the abilities acquired during gameplay. Mages, Rangers, and Rogues can learn magic; Priests and Paladins can learn prayers. Warriors possess no additional abilities.
The player begins the game with a limited number of items on a town level consisting of six shops: (1) a General Store, (2) an Armory, (3) a Weaponsmith, (4) a Temple, (5) an Alchemy shop, and (6) a Magic-Users store. A staircase on this level descends into a series of randomly generated underground mazes. Deeper levels contain more powerful monsters and better treasures. Each time the player ascends or descends a staircase, a new level is created and the old one discarded; only the town persists throughout the game.
As in most roguelikes, it is impossible |
https://en.wikipedia.org/wiki/Truth%20value | In logic and mathematics, a truth value, sometimes called a logical value, is a value indicating the relation of a proposition to truth, which in classical logic has only two possible values (true or false).
Computing
In some programming languages, any expression can be evaluated in a context that expects a Boolean data type. Typically (though this varies by programming language) expressions like the number zero, the empty string, empty lists, and null evaluate to false, and strings with content (like "abc"), other numbers, and objects evaluate to true.
Sometimes these classes of expressions are called "truthy" and "falsy".
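As a concrete illustration (Python, one language that follows the convention described above; not part of the article), the built-in bool() conversion makes the classification explicit:

for value in [0, 1, "", "abc", [], [0], None, object()]:
    print(repr(value), "->", bool(value))
# 0, "", [], and None are falsy; nonzero numbers, non-empty strings and
# lists, and ordinary objects are truthy.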
Classical logic
In classical logic, with its intended semantics, the truth values are true (denoted by 1 or the verum ⊤), and untrue or false (denoted by 0 or the falsum ⊥); that is, classical logic is a two-valued logic. This set of two values is also called the Boolean domain. Corresponding semantics of logical connectives are truth functions, whose values are expressed in the form of truth tables. Logical biconditional becomes the equality binary relation, and negation becomes a bijection which permutes true and false. Conjunction and disjunction are dual with respect to negation, which is expressed by De Morgan's laws:
¬(p ∧ q) = ¬p ∨ ¬q
¬(p ∨ q) = ¬p ∧ ¬q
Propositional variables become variables in the Boolean domain. Assigning values for propositional variables is referred to as valuation.
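A brute-force check of the two De Morgan laws over every valuation of two propositional variables (a small Python sketch, not from the article):

from itertools import product

for p, q in product([False, True], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))   # ¬(p ∧ q) = ¬p ∨ ¬q
    assert (not (p or q)) == ((not p) and (not q))   # ¬(p ∨ q) = ¬p ∧ ¬q
print("De Morgan's laws hold for every valuation")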
Intuitionistic and constructive logic
In intuitionistic logic, and more generally, constructive mathematics, statements are assigned a truth value only if they can be given a constructive proof. It starts with a set of axioms, and a statement is true if one can build a proof of the statement from those axioms. A statement is false if one can deduce a contradiction from it. This leaves open the possibility of statements that have not yet been assigned a truth value.
Unproven statements in intuitionistic logic are not given an intermediate truth value (as is sometime |
https://en.wikipedia.org/wiki/The%20Doctrine%20of%20Chances | The Doctrine of Chances was the first textbook on probability theory, written by 18th-century French mathematician Abraham de Moivre and first published in 1718. De Moivre wrote in English because he resided in England at the time, having fled France to escape the persecution of Huguenots. The book's title came to be synonymous with probability theory, and accordingly the phrase was used in Thomas Bayes' famous posthumous paper An Essay towards solving a Problem in the Doctrine of Chances, wherein a version of Bayes' theorem was first introduced.
Editions
The full title of the first edition was The doctrine of chances: or, a method for calculating the probabilities of events in play; it was published in 1718, by W. Pearson, and ran for 175 pages.
Published in 1738 by Woodfall and running for 258 pages, the second edition of de Moivre's book introduced the concept of normal distributions as approximations to binomial distributions. In effect de Moivre proved a special case of the central limit theorem. Sometimes his result is called the theorem of de Moivre–Laplace.
A third edition was published posthumously in 1756 by A. Millar, and ran for 348 pages; additional material in this edition included an application of probability theory to actuarial science in the calculation of annuities.
External links
The third edition of The Doctrine of Chances.
Full text of "The Doctrine of Chances", 1st edition; from books.google.com
The Doctrine of Chance at MathPages |
https://en.wikipedia.org/wiki/Poppy | A poppy is a flowering plant in the subfamily Papaveroideae of the family Papaveraceae. Poppies are herbaceous plants, often grown for their colourful flowers. One species of poppy, Papaver somniferum, is the source of the narcotic drug mixture opium which contains powerful medicinal alkaloids such as morphine and has been used since ancient times as an analgesic and narcotic medicinal and recreational drug. It also produces edible seeds. Following the trench warfare in the poppy fields of Flanders, Belgium during World War I, poppies have become a symbol of remembrance of soldiers who have died during wartime, especially in the UK, Canada, Australia, New Zealand and other Commonwealth realms.
Description
Poppies are herbaceous annual, biennial or short-lived perennial plants. Some species are monocarpic, dying after flowering. Poppies can be over a metre tall with flowers up to 15 centimetres across. Flowers of species (not cultivars) have 4 or 6 petals, many stamens forming a conspicuous whorl in the center of the flower and an ovary of from 2 to many fused carpels. The petals are showy, may be of almost any colour and some have markings. The petals are crumpled in the bud and as blooming finishes, the petals often lie flat before falling away. In the temperate zones, poppies bloom from spring into early summer. Most species secrete latex when injured. Bees use poppies as a pollen source. The pollen of the oriental poppy, Papaver orientale, is dark blue, that of the field or corn poppy (Papaver rhoeas) is grey to dark green. The opium poppy, Papaver somniferum, grows wild in eastern and southern Asia, and South Eastern Europe. It is believed that it originated in the Mediterranean region.
Poppies belong to the subfamily Papaveroideae of the family Papaveraceae, which includes the following genera:
Papaver – Papaver rhoeas, Papaver somniferum, Papaver orientale, Papaver nudicaule, Papaver cambricum
Eschscholzia – Eschscholzia californica
Meconopsis – Meconops |
https://en.wikipedia.org/wiki/Dana%20Scott | Dana Stewart Scott (born October 11, 1932) is an American logician who is the emeritus Hillman University Professor of Computer Science, Philosophy, and Mathematical Logic at Carnegie Mellon University; he is now retired and lives in Berkeley, California. His work on automata theory earned him the Turing Award in 1976, while his collaborative work with Christopher Strachey in the 1970s laid the foundations of modern approaches to the semantics of programming languages. He has worked also on modal logic, topology, and category theory.
Early career
He received his B.A. in Mathematics from the University of California, Berkeley, in 1954. He wrote his Ph.D. thesis on Convergent Sequences of Complete Theories under the supervision of Alonzo Church while at Princeton, and defended his thesis in 1958. Solomon Feferman (2005) writes of this period:
After completing his Ph.D. studies, he moved to the University of Chicago, working as an instructor there until 1960. In 1959, he published a joint paper with Michael O. Rabin, a colleague from Princeton, titled Finite Automata and Their Decision Problem (Scott and Rabin 1959) which introduced the idea of nondeterministic machines to automata theory. This work led to the joint bestowal of the Turing Award on the two, for the introduction of this fundamental concept of computational complexity theory.
University of California, Berkeley, 1960–1963
Scott took up a post as Assistant Professor of Mathematics, back at the University of California, Berkeley, and involved himself with classical issues in mathematical logic, especially set theory and Tarskian model theory. He proved that the axiom of constructibility is incompatible with the existence of a measurable cardinal, a result considered seminal in the evolution of Set Theory.
During this period he started supervising Ph.D. students, such as James Halpern (Contributions to the Study of the Independence of the Axiom of Choice) and Edgar Lopez-Escobar (Infinitely Long Formulas |
https://en.wikipedia.org/wiki/Scalar%20field | In mathematics and physics, a scalar field is a function associating a single number to every point in a space – possibly physical space. The scalar may either be a pure mathematical number (dimensionless) or a scalar physical quantity (with units).
In a physical context, scalar fields are required to be independent of the choice of reference frame. That is, any two observers using the same units will agree on the value of the scalar field at the same absolute point in space (or spacetime) regardless of their respective points of origin. Examples used in physics include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields, such as the Higgs field. These fields are the subject of scalar field theory.
Definition
Mathematically, a scalar field on a region U is a real or complex-valued function or distribution on U. The region U may be a set in some Euclidean space, Minkowski space, or more generally a subset of a manifold, and it is typical in mathematics to impose further conditions on the field, such that it be continuous or often continuously differentiable to some order. A scalar field is a tensor field of order zero, and the term "scalar field" may be used to distinguish a function of this kind from a more general tensor field, density, or differential form.
Physically, a scalar field is additionally distinguished by having units of measurement associated with it. In this context, a scalar field should also be independent of the coordinate system used to describe the physical system—that is, any two observers using the same units must agree on the numerical value of a scalar field at any given point of physical space. Scalar fields are contrasted with other physical quantities such as vector fields, which associate a vector to every point of a region, as well as tensor fields and spinor fields. More subtly, scalar fields are often contrasted with pseudoscalar fields.
Uses in physics
In physics, sca |
https://en.wikipedia.org/wiki/Uniqueness%20type | In computing, a unique type guarantees that an object is used in a single-threaded way, with at most a single reference to it. If a value has a unique type, a function applied to it can be optimized to update the value in-place in the object code. Such in-place updates improve the efficiency of functional languages while maintaining referential transparency. Unique types can also be used to integrate functional and imperative programming.
Introduction
Uniqueness typing is best explained using an example. Consider a function readLine that reads the next line of text from a given file:
function readLine(File f) returns String
return line where
String line = doImperativeReadLineSystemCall(f)
end
end
Now doImperativeReadLineSystemCall reads the next line from the file using an OS-level system call which has the side effect of changing the current position in the file. But this violates referential transparency because calling it multiple times with the same argument will return different results each time as the current position in the file gets moved. This in turn makes readLine violate referential transparency because it calls doImperativeReadLineSystemCall.
However, using uniqueness typing, we can construct a new version of readLine that is referentially transparent even though it's built on top of a function that's not referentially transparent:
function readLine2(unique File f) returns (unique File, String)
return (differentF, line) where
String line = doImperativeReadLineSystemCall(f)
File differentF = newFileFromExistingFile(f)
end
end
The unique declaration specifies that the type of f is unique; that is to say that f may never be referred to again by the caller of readLine2 after readLine2 returns, and this restriction is enforced by the type system. And since readLine2 does not return f itself but rather a new, different file object differentF, this means that it's impossible for readLine2 to be called with f as an ar |
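Uniqueness is enforced statically by the type system, but the single-use discipline can be imitated at run time; the following Python sketch, with invented names, is only an illustration of that discipline, not of the typing mechanism itself:

class UniqueFile:
    # Wraps a file handle and invalidates itself after a single use,
    # mimicking the "at most one reference, consumed once" discipline.
    def __init__(self, handle):
        self._handle = handle
        self._consumed = False

    def consume(self):
        if self._consumed:
            raise RuntimeError("unique value used more than once")
        self._consumed = True
        return self._handle

def read_line2(f):
    # Returns a *new* unique handle plus the line, like readLine2 above,
    # so the caller never reuses the old handle f.
    handle = f.consume()
    line = handle.readline()
    return UniqueFile(handle), line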
https://en.wikipedia.org/wiki/Formal%20methods | In computer science, formal methods are mathematically rigorous techniques for the specification, development, analysis, and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design.
Formal methods employ a variety of theoretical computer science fundamentals, including logic calculi, formal languages, automata theory, control theory, program semantics, type systems, and type theory.
Background
Semi-formal methods are formalisms and languages that are not considered fully "formal". They defer the task of completing the semantics to a later stage, which is then done either by human interpretation or by interpretation through software such as code or test case generators.
Taxonomy
Formal methods can be used at a number of levels:
Level 0: Formal specification may be undertaken and then a program developed from this informally. This has been dubbed formal methods lite. This may be the most cost-effective option in many cases.
Level 1: Formal development and formal verification may be used to produce a program in a more formal manner. For example, proofs of properties or refinement from the specification to a program may be undertaken. This may be most appropriate in high-integrity systems involving safety or security.
Level 2: Theorem provers may be used to undertake fully formal machine-checked proofs. Despite improving tools and declining costs, this can be very expensive and is only practically worthwhile if the cost of mistakes is very high (e.g., in critical parts of operating system or microprocessor design).
Further information on this is expanded below.
As with programming language semantics, styles of formal methods may be roughly classified as follows:
Denotational semantics, in which the meaning of a system is expressed in t |
https://en.wikipedia.org/wiki/HOL%20%28proof%20assistant%29 | HOL (Higher Order Logic) denotes a family of interactive theorem proving systems using similar (higher-order) logics and implementation strategies. Systems in this family follow the LCF approach as they are implemented as a library which defines an abstract data type of proven theorems such that new objects of this type can only be created using the functions in the library which correspond to inference rules in higher-order logic. As long as these functions are correctly implemented, all theorems proven in the system must be valid. As such, a large system can be built on top of a small trusted kernel.
Systems in the HOL family use ML or its successors. ML was originally developed along with LCF as a meta-language for theorem proving systems; in fact, the name stands for "Meta-Language".
Underlying logic
HOL systems use variants of classical higher-order logic, which has simple axiomatic foundations with few axioms and well-understood semantics.
The logic used in HOL provers is closely related to Isabelle/HOL, the most widely used logic of Isabelle.
HOL implementations
A number of HOL systems (sharing essentially the same logic) remain active and in use:
HOL4 the only presently maintained and developed system stemming from the HOL88 system, which was the culmination of the original HOL implementation effort, led by Mike Gordon. HOL88 included its own ML implementation, which was in turn implemented on top of Common Lisp. The systems that followed HOL88 (HOL90, hol98 and HOL4) were all implemented in Standard ML; while hol98 is coupled to Moscow ML, HOL4 can be built with either Moscow ML or Poly/ML. All come with large libraries of theorem proving code which implement extra automation on top of the very simple core code. HOL4 is BSD licensed.
HOL Light an experimental "minimalist" version of HOL which has since grown into another mainstream HOL variant; its logical foundations remain unusually simple. HOL Light, originally implemented in Caml Light, now us |
https://en.wikipedia.org/wiki/Logic%20for%20Computable%20Functions | Logic for Computable Functions (LCF) is an interactive automated theorem prover developed at Stanford and Edinburgh by Robin Milner and collaborators in the early 1970s, based on the theoretical foundation of the logic of computable functions previously proposed by Dana Scott. Work on the LCF system introduced the general-purpose programming language ML to allow users to write theorem-proving tactics, supporting algebraic data types, parametric polymorphism, abstract data types, and exceptions.
Basic idea
Theorems in the system are terms of a special "theorem" abstract data type. The general mechanism of abstract data types of ML ensures that theorems are derived using only the inference rules given by the operations of the theorem abstract type. Users can write arbitrarily complex ML programs to compute theorems; the validity of theorems does not depend on the complexity of such programs, but follows from the soundness of the abstract data type implementation and the correctness of the ML compiler.
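A toy sketch of the idea (Python, with invented names; real LCF-style systems use ML and a richer logic): if the only way to construct a Theorem value is through functions that implement inference rules, then every Theorem in existence was derived by those rules.

class Theorem:
    # Only the inference-rule functions below hold the kernel token, so
    # any Theorem in existence was produced by applying those rules.
    def __init__(self, conclusion, _token=None):
        if _token is not _KERNEL_TOKEN:
            raise RuntimeError("theorems may only be built by inference rules")
        self.conclusion = conclusion

_KERNEL_TOKEN = object()

def axiom_truth():
    # Inference rule with no premises: |- T
    return Theorem("T", _token=_KERNEL_TOKEN)

def conjunction(a, b):
    # From |- A and |- B, derive |- (A and B).
    return Theorem(f"({a.conclusion} and {b.conclusion})", _token=_KERNEL_TOKEN)

thm = conjunction(axiom_truth(), axiom_truth())
print(thm.conclusion)   # (T and T)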
Advantages
The LCF approach provides similar trustworthiness to systems that generate explicit proof certificates but without the need to store proof objects in memory. The Theorem data type can be easily implemented to optionally store proof objects, depending on the system's run-time configuration, so it generalizes the basic proof-generation approach. The design decision to use a general-purpose programming language for developing theorems means that, depending on the complexity of programs written, it is possible to use the same language to write step-by-step proofs, decision procedures, or theorem provers.
Disadvantages
Trusted computing base
The implementation of the underlying ML compiler adds to the trusted computing base. Work on CakeML resulted in a formally verified ML compiler, alleviating some of these concerns.
Efficiency and complexity of proof procedures
Theorem proving often benefits from decision procedures and theorem proving algorithms, whose correc |
https://en.wikipedia.org/wiki/Program%20analysis | In computer science, program analysis is the process of automatically analyzing the behavior of computer programs regarding a property such as correctness, robustness, safety and liveness.
Program analysis focuses on two major areas: program optimization and program correctness. The former focuses on improving the program's performance while reducing resource usage, while the latter focuses on ensuring that the program does what it is supposed to do.
Program analysis can be performed without executing the program (static program analysis), during runtime (dynamic program analysis) or in a combination of both.
Static program analysis
In the context of program correctness, static analysis can discover vulnerabilities during the development phase of the program. These vulnerabilities are easier to correct than the ones found during the testing phase since static analysis leads to the root of the vulnerability.
Due to many forms of static analysis being computationally undecidable, the mechanisms for doing it will not always terminate with the right answer either because they sometimes return a false negative ("no problems found" when the code does in fact have problems) or a false positive, or because they never return the wrong answer but sometimes never terminate. Despite their limitations, the first type of mechanism might reduce the number of vulnerabilities, while the second can sometimes give strong assurance of the lack of a certain class of vulnerabilities.
Incorrect optimizations are highly undesirable. So, in the context of program optimization, there are two main strategies to handle computationally undecidable analysis:
An optimizer that is expected to complete in a relatively short amount of time, such as the optimizer in an optimizing compiler, may use a truncated version of an analysis that is guaranteed to complete in a finite amount of time, and guaranteed to only find correct optimizations.
A third-party optimization tool may be implemented |
https://en.wikipedia.org/wiki/Quorum%20sensing | In biology, quorum sensing or quorum signaling (QS) is the ability to detect and respond to cell population density by gene regulation. Quorum sensing is a type of cellular signaling, and more specifically can be considered a type of paracrine signaling. However, it also contains traits of autocrine signaling: a cell produces both the autoinducer molecule and the receptor for the autoinducer. As one example, QS enables bacteria to restrict the expression of specific genes to the high cell densities at which the resulting phenotypes will be most beneficial, especially for phenotypes that would be ineffective at low cell densities and therefore too energetically costly to express. Many species of bacteria use quorum sensing to coordinate gene expression according to the density of their local population. In a similar fashion, some social insects use quorum sensing to determine where to nest. Quorum sensing in pathogenic bacteria activates host immune signaling and prolongs host survival, by limiting the bacterial intake of nutrients, such as tryptophan, which is further converted to serotonin. As such, quorum sensing allows a commensal interaction between host and pathogenic bacteria. Quorum sensing may also be useful for cancer cell communications.
In addition to its function in biological systems, quorum sensing has several useful applications for computing and robotics. In general, quorum sensing can function as a decision-making process in any decentralized system in which the components have: (a) a means of assessing the number of other components they interact with and (b) a standard response once a threshold number of components is detected.
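A minimal sketch of that decentralized rule (Python, with invented names, purely illustrative): each component estimates how many signalling neighbours it detects and switches behaviour once a threshold is crossed.

def quorum_response(signal_counts, threshold):
    # Each agent expresses the group behaviour only when it senses at
    # least `threshold` signalling neighbours (its estimate of density).
    return ["express" if n >= threshold else "stay quiet" for n in signal_counts]

print(quorum_response([1, 3, 7, 12], threshold=5))
# ['stay quiet', 'stay quiet', 'express', 'express']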
Discovery
Quorum sensing was first reported in 1970, by Kenneth Nealson, Terry Platt, and J. Woodland Hastings, who observed what they described as a conditioning of the medium in which they had grown the bioluminescent marine bacterium Aliivibrio fischeri. These bacteria did not synthesize luciferase—and therefore d |
https://en.wikipedia.org/wiki/Derangement | In combinatorial mathematics, a derangement is a permutation of the elements of a set in which no element appears in its original position. In other words, a derangement is a permutation that has no fixed points.
The number of derangements of a set of size n is known as the subfactorial of n or the n-th derangement number or n-th de Montmort number (after Pierre Remond de Montmort). Notations for subfactorials in common use include !n, Dn, dn, or n¡.
For n > 0, the subfactorial !n equals the nearest integer to n!/e, where n! denotes the factorial of n and e is Euler's number.
The problem of counting derangements was first considered by Pierre Raymond de Montmort in his Essay d'analyse sur les jeux de hazard in 1708; he solved it in 1713, as did Nicholas Bernoulli at about the same time.
Example
Suppose that a professor gave a test to 4 students – A, B, C, and D – and wants to let them grade each other's tests. Of course, no student should grade their own test. How many ways could the professor hand the tests back to the students for grading, such that no student received their own test back? Out of 24 possible permutations (4!) for handing back the tests,
ABCD, ABDC, ACBD, ACDB, ADBC, ADCB,
BACD, BADC, BCAD, BCDA, BDAC, BDCA,
CABD, CADB, CBAD, CBDA, CDAB, CDBA,
DABC, DACB, DBAC, DBCA, DCAB, DCBA,
there are only 9 derangements: BADC, BCDA, BDAC, CADB, CDAB, CDBA, DABC, DCAB, and DCBA. In every other permutation of this 4-member set, at least one student gets their own test back.
Another version of the problem arises when we ask for the number of ways n letters, each addressed to a different person, can be placed in n pre-addressed envelopes so that no letter appears in the correctly addressed envelope.
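A short computational check of these counts (a Python sketch, not part of the article): brute-force enumeration for n = 4 yields the 9 derangements listed above, and the nearest-integer formula !n ≈ n!/e agrees.

from itertools import permutations
from math import e, factorial

def derangements(items):
    # Permutations in which no element stays in its original position.
    return [p for p in permutations(items)
            if all(x != y for x, y in zip(p, items))]

print(len(derangements("ABCD")))   # 9
print(round(factorial(4) / e))     # 9, the subfactorial !4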
Counting derangements
Counting derangements of a set amounts to the hat-check problem, in which one considers the number of ways in which n hats (call them h1 |
https://en.wikipedia.org/wiki/Invariant%20mass | The invariant mass, rest mass, intrinsic mass, proper mass, or in the case of bound systems simply mass, is the portion of the total mass of an object or system of objects that is independent of the overall motion of the system. More precisely, it is a characteristic of the system's total energy and momentum that is the same in all frames of reference related by Lorentz transformations. If a center-of-momentum frame exists for the system, then the invariant mass of a system is equal to its total mass in that "rest frame". In other reference frames, where the system's momentum is nonzero, the total mass (a.k.a. relativistic mass) of the system is greater than the invariant mass, but the invariant mass remains unchanged.
Because of mass–energy equivalence, the rest energy of the system is simply the invariant mass times the speed of light squared. Similarly, the total energy of the system is its total (relativistic) mass times the speed of light squared.
Systems whose four-momentum is a null vector (for example, a single photon or many photons moving in exactly the same direction) have zero invariant mass and are referred to as massless. A physical object or particle moving faster than the speed of light would have space-like four-momenta (such as the hypothesized tachyon), and these do not appear to exist. Any time-like four-momentum possesses a reference frame where the momentum (3-dimensional) is zero, which is a center of momentum frame. In this case, invariant mass is positive and is referred to as the rest mass.
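A small sketch of the relation between total energy, momentum, and invariant mass (Python; the helper name and the numerical example are illustrative, with quantities in SI units):

from math import sqrt

C = 299_792_458.0  # speed of light, m/s

def invariant_mass(total_energy, total_momentum):
    # m = sqrt(E^2 - |p|^2 c^2) / c^2; for a system at rest (p = 0)
    # this reduces to the rest-energy relation E = m c^2.
    return sqrt(total_energy**2 - (total_momentum * C)**2) / C**2

# A particle at rest with rest energy ~8.187e-14 J has invariant mass E / c^2:
print(invariant_mass(total_energy=8.187e-14, total_momentum=0.0))  # ~9.1e-31 kg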
If objects within a system are in relative motion, then the invariant mass of the whole system will differ from the sum of the objects' rest masses. This is also equal to the total energy of the system divided by c2. See mass–energy equivalence for a discussion of definitions of mass. Since the mass of systems must be measured with a weight or mass scale in a center of momentum frame in which the entire system has zero momentum, such a scale always me |
https://en.wikipedia.org/wiki/MacOS%20version%20history | The history of macOS, Apple's current Mac operating system formerly named Mac OS X until 2011 and then OS X until 2016, began with the company's project to replace its "classic" Mac OS. That system, up to and including its final release Mac OS 9, was a direct descendant of the operating system Apple had used in its Mac computers since their introduction in 1984. However, the current macOS is a UNIX operating system built on technology that had been developed at NeXT from the 1980s until Apple purchased the company in early 1997.
Although it was originally marketed as simply "version 10" of Mac OS (indicated by the Roman numeral "X"), it has a completely different codebase from Mac OS 9, as well as substantial changes to its user interface. The transition was a technologically and strategically significant one. To ease the transition for users and developers, versions through 10.4 were able to run Mac OS 9 and its applications in the Classic Environment, a compatibility layer.
macOS was first released in 1999 as Mac OS X Server 1.0. It was built using the technologies Apple acquired from NeXT, but did not include the signature Aqua user interface (UI). The desktop version aimed at regular users—Mac OS X 10.0—shipped in March 2001. Since then, several more distinct desktop and server editions of macOS have been released. Starting with Mac OS X 10.7 Lion, macOS Server is no longer offered as a standalone operating system; instead, server management tools are available for purchase as an add-on. The macOS Server app was discontinued on April 21, 2022 and will stop working on macOS 13 Ventura or later. Starting with the Intel build of Mac OS X 10.5 Leopard, most releases have been certified as Unix systems conforming to the Single UNIX Specification.
Lion was referred to by Apple as "Mac OS X Lion" and sometimes as "OS X Lion"; Mountain Lion was officially referred to as just "OS X Mountain Lion", with the "Mac" being completely dropped. The operating system was furth |
https://en.wikipedia.org/wiki/Collusion | Collusion is a deceitful agreement or secret cooperation between two or more parties to limit open competition by deceiving, misleading or defrauding others of their legal right. Collusion is not always considered illegal. It can be used to attain objectives forbidden by law; for example, by defrauding or gaining an unfair market advantage. It is an agreement among firms or individuals to divide a market, set prices, limit production or limit opportunities.
It can involve "unions, wage fixing, kickbacks, or misrepresenting the independence of the relationship between the colluding parties". In legal terms, all acts effected by collusion are considered void.
Definition
In the study of economics and market competition, collusion takes place within an industry when rival companies cooperate for their mutual benefit. Conspiracy usually involves an agreement between two or more sellers to take action to suppress competition between sellers in the market. Because competition among sellers can provide consumers with low prices, conspiracy agreements increase the price consumers pay for the goods. Because of this harm to consumers, it is against antitrust laws to fix prices by agreement between producers, so participants must keep it a secret. Collusion often takes place within an oligopoly market structure, where there are few firms and agreements that have significant impacts on the entire market or industry. To differentiate from a cartel, collusive agreements between parties may not be explicit; however, the implications of cartels and collusion are the same.
Under competition law, there is an important distinction between direct and tacit (covert) collusion. Direct collusion generally refers to a group of companies communicating directly with each other to coordinate and monitor their actions, such as cooperating through pricing, market allocation, sales quotas, etc. On the other hand, tacit collusion is where companies coordinate and monitor their behavior without direct c |
https://en.wikipedia.org/wiki/Heterogamy | Heterogamy is a term applied to a variety of distinct phenomena in different scientific domains. Usually having to do with some kind of difference, "hetero", in reproduction, "gamy". See below for more specific senses.
Science
Reproductive biology
In reproductive biology, heterogamy is the alternation of differently organized generations, applied to the alternation between parthenogenetic and a sexual generation. This type of heterogamy occurs for example in some aphids.
Alternately, heterogamy or heterogamous is often used as a synonym of heterogametic, meaning the presence of two unlike chromosomes in a sex. For example, XY males and ZW females are called the heterogamous sex.
Cell biology
In cell biology, heterogamy is a synonym of anisogamy, the condition of having differently sized male and female gametes produced by different sexes or mating types in a species.
Botany
In botany, a plant is heterogamous when it carries at least two different types of flowers in regard to their reproductive structures, for example male and female flowers or bisexual and female flowers. Stamens and carpels are not regularly present in each flower or floret.
Social science
In sociology, heterogamy refers to a marriage between two individuals that differ in a certain criterion, and is contrasted with homogamy for a marriage or union between partners that match according to that criterion. For example, ethnic heterogamy refers to marriages involving individuals of different ethnic groups. Age heterogamy refers to marriages involving partners of significantly different ages. Heterogamy and homogamy are also used to describe marriage or union between people of unlike and like sex (or gender) respectively.
See also
Heterogametic
Homogametic |
https://en.wikipedia.org/wiki/Megafauna | In zoology, megafauna (from Greek μέγας megas "large" and Neo-Latin fauna "animal life") are large animals. The most common thresholds are a body mass comparable to or larger than a human's, or a mass over a tonne (i.e., comparable to or larger than an ox). The first of these thresholds includes many species not popularly thought of as overly large, some of which are among the only large animals left in a given range or area, such as white-tailed deer, Thomson's gazelle, and red kangaroo.
In practice, the most common usage encountered in academic and popular writing describes land mammals roughly larger than a human that are not (solely) domesticated. The term is especially associated with the Pleistocene megafauna – the land animals that are considered archetypical of the last ice age, such as mammoths, the majority of which in northern Eurasia, Australia-New Guinea and the Americas became extinct within the last forty thousand years.
Among living animals, the term megafauna is most commonly used for the largest extant terrestrial mammals, which includes (but is not limited to) elephants, giraffes, hippopotamuses, rhinoceroses, and large bovines. Of these five categories of large herbivores, only bovines are presently found outside of Africa and southern Asia, but all the others were formerly more wide-ranging, with their ranges and populations continually shrinking and decreasing over time. Wild equines are another example of megafauna, but their current ranges are largely restricted to the Old World, specifically Africa and Asia. Megafaunal species may be categorized according to their dietary type: megaherbivores (e.g., elephants), megacarnivores (e.g., lions), and, more rarely, megaomnivores (e.g., bears). The megafauna is also categorized by the class of animals that it belongs to, which are mammals, birds, reptiles, amphibians, fish, and invertebrates.
Other common uses are for giant aquatic species, especially whales, as |
https://en.wikipedia.org/wiki/Evolutionary%20economics | Evolutionary economics is a school of economic thought that is inspired by evolutionary biology. Although not defined by a strict set of principles and uniting various approaches, it treats economic development as a process rather than an equilibrium and emphasizes change (qualitative, organisational, and structural), innovation, complex interdependencies, self-evolving systems, and limited rationality as the drivers of economic evolution. The support for the evolutionary approach to economics in recent decades seems to have initially emerged as a criticism of the mainstream neoclassical economics, but by the beginning of the 21st century it had become part of the economic mainstream itself.
Evolutionary economics does not take the characteristics of either the objects of choice or of the decision-maker as fixed. Rather, it focuses on the non-equilibrium processes that transform the economy from within and their implications, considering interdependencies and feedback. The processes in turn emerge from the actions of diverse agents with bounded rationality who may learn from experience and interactions and whose differences contribute to the change.
Roots of evolutionary economics
Early ideas
The idea of human society and the world in general being subject to evolution has accompanied mankind throughout its existence. Hesiod, an ancient Greek poet thought to be the first written poet in the Western tradition to regard himself as an individual, described five Ages of Man – the Golden Age, the Silver Age, the Bronze Age, the Heroic Age, and the Iron Age – declining from divine existence to toil and misery. Modern scholars consider his works one of the sources for early economic thought. The concept is also present in the Metamorphoses by the ancient Roman poet Ovid. His Four Ages include technological progress: in the Golden Age, men did not know arts and crafts, whereas by the Iron Age people had learnt and discovered agriculture, architecture, mining, navigation, and nation
https://en.wikipedia.org/wiki/Individuation | The principle of individuation, or principium individuationis, describes the manner in which a thing is identified as distinct from other things.
The concept appears in numerous fields and is encountered in works of Leibniz, Carl Jung, Gunther Anders, Gilbert Simondon, Bernard Stiegler, Friedrich Nietzsche, Arthur Schopenhauer, David Bohm, Henri Bergson, Gilles Deleuze, and Manuel DeLanda.
Usage
The word individuation occurs with different meanings and connotations in different fields.
In philosophy
Philosophically, "individuation" expresses the general idea of how a thing is identified as an individual thing that "is not something else". This includes how an individual person is held to be different from other elements in the world and how a person is distinct from other persons. By the seventeenth century, philosophers began to associate the question of individuation or what brings about individuality at any one time with the question of identity or what constitutes sameness at different points in time.
In Jungian psychology
In analytical psychology, individuation is the process by which the individual self develops out of an undifferentiated unconscious – seen as a developmental psychic process during which innate elements of personality, the components of the immature psyche, and the experiences of the person's life become, if the process is more or less successful, integrated over time into a well-functioning whole. Other psychoanalytic theorists describe it as the stage where an individual transcends group attachment and narcissistic self-absorption.
In the news industry
The news industry has begun using the term individuation to denote new printing and on-line technologies that permit mass customization of the contents of a newspaper, a magazine, a broadcast program, or a website so that its contents match each user's unique interests. This differs from the traditional mass-media practice of producing the same contents for all readers, viewers, listeners, or on-line users.
Com |
https://en.wikipedia.org/wiki/Concentration%20ratio | In economics, concentration ratios are used to quantify market concentration and are based on companies' market shares in a given industry.
A concentration ratio (CR) is the sum of the percentage market shares of (a pre-specified number of) the largest firms in an industry. An n-firm concentration ratio is a common measure of market structure and shows the combined market share of the n largest firms in the market. For example, if n = 5, CR5 defines the combined market share of the five largest firms in an industry.
Calculation
The concentration ratio is calculated as follows:
CR_n = s_1 + s_2 + ... + s_n,
where s_i defines the market share of the i-th largest firm in an industry as a percentage of total industry market share, and n defines the number of firms included in the concentration ratio calculation.
The CR4 and CR8 concentration ratios are commonly used. Concentration ratios show the extent of the largest firms' market shares in a given industry. Specifically, a concentration ratio close to 0% denotes a low-concentration industry, and a concentration ratio near 100% shows that an industry has high concentration.
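As an illustration of this calculation, the following minimal sketch (with hypothetical market-share figures) computes an n-firm concentration ratio from a list of firm shares:

```python
def concentration_ratio(market_shares, n):
    """Return the n-firm concentration ratio (CR_n), in percent.

    market_shares: market shares of all firms in the industry, in percent.
    n: number of largest firms to include.
    """
    largest = sorted(market_shares, reverse=True)[:n]
    return sum(largest)

# Hypothetical industry: shares in percent (summing to at most 100).
shares = [30, 25, 15, 10, 10, 5, 5]
print(concentration_ratio(shares, 4))  # CR4 = 30 + 25 + 15 + 10 = 80
print(concentration_ratio(shares, 5))  # CR5 = 90
```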
Concentration levels
Concentration ratios range from 0%–100%. Concentration levels are explained as follows:
Benefits and shortfalls
Concentration ratios can readily be calculated from industry data, but they are a simplistic, single parameter statistic. They can be used to quantify market concentration in a given industry in a relevant and succinct manner, but do not capture all available information about the distribution of market shares. In particular, the definition of the concentration ratio does not use the market shares of all the firms in the industry and does not account for the distribution of firm size. Also, it does not provide much detail about competitiveness of an industry.
The following example exposes the aforementioned shortfalls of the concentration ratio.
Example
The table below shows the market shares of the largest firms in two different industries (Industry |
https://en.wikipedia.org/wiki/Color%20television | Color television (American English) or colour television (Commonwealth English) is a television transmission technology that includes color information for the picture, so the video image can be displayed in color on the television set. It improves on the monochrome or black-and-white television technology, which displays the image in shades of gray (grayscale). Television broadcasting stations and networks in most parts of the world upgraded from black-and-white to color transmission between the 1960s and the 1980s. The invention of color television standards was an important part of the history and technology of television.
Transmission of color images using mechanical scanners had been conceived as early as the 1880s. A demonstration of mechanically scanned color television was given by John Logie Baird in 1928, but its limitations were apparent even then. Development of electronic scanning and display made a practical system possible. Monochrome transmission standards were developed prior to World War II, but civilian electronics development was frozen during much of the war. In August 1944, Baird gave the world's first demonstration of a practical fully electronic color television display. In the United States, competing color standards were developed, finally resulting in the NTSC color standard that was compatible with the prior monochrome system. Although the NTSC color standard was proclaimed in 1953 and limited programming soon became available, it was not until the early 1970s that color television in North America outsold black-and-white/monochrome units. Color broadcasting in Europe did not standardize on the PAL or SECAM formats until the 1960s.
Broadcasters began to upgrade from analog color television technology to higher-resolution digital television; the exact year varies by country. While the changeover is complete in many countries, analog television remains in use in some others.
Development
The human eye's detection system in the retina |
https://en.wikipedia.org/wiki/Plesiochronous%20system | In telecommunications, a plesiochronous system is one where different parts of the system are almost, but not quite, perfectly synchronised. According to ITU-T standards, a pair of signals are plesiochronous if their significant instants occur at nominally the same rate, with any variation in rate being constrained within specified limits. A sender and receiver operate plesiosynchronously if they operate at the same nominal clock frequency but may have a slight clock frequency mismatch, which leads to a drifting phase. The mismatch between the two systems' clocks is known as the plesiochronous difference.
In general, plesiochronous systems behave similarly to synchronous systems, except they must employ some means in order to cope with "sync slips", which will happen at intervals due to the plesiochronous nature of the system. The most common example of a plesiochronous system design is the plesiochronous digital hierarchy networking standard.
The asynchronous serial communication protocol is asynchronous on the byte level, but plesiochronous on the bit level. The receiver detects the start of a byte by detecting a transition that may occur at a random time after the preceding byte. The indefinite wait and lack of external synchronization signals makes byte detection asynchronous. Then the receiver samples at predefined intervals to determine the values of the bits in the byte; this is plesiochronous since it depends on the transmitter to transmit at roughly the same rate the receiver expects, without coordination of the rate while the bits are being transmitted.
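To make the plesiochronous clock mismatch concrete, here is a minimal sketch (the bit rate and the parts-per-million offset are illustrative assumptions) that estimates how long two nominally identical clocks can run before their accumulated drift reaches one bit period, i.e., before a sync slip must be handled:

```python
def time_to_slip(nominal_rate_hz, offset_ppm, slip_threshold_periods=1.0):
    """Seconds until two plesiochronous clocks drift apart by the given
    number of nominal clock periods, for a relative offset in ppm."""
    period = 1.0 / nominal_rate_hz          # one nominal bit/clock period
    drift_per_second = offset_ppm * 1e-6    # seconds of drift per elapsed second
    return slip_threshold_periods * period / drift_per_second

# Illustrative numbers: 2.048 Mbit/s streams whose clocks differ by 50 ppm
# drift apart by one bit period roughly every 9.8 milliseconds.
print(time_to_slip(2_048_000, 50))
```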
The modern tendency in systems engineering is towards using systems that are either fundamentally asynchronous (such as Ethernet), or fundamentally synchronous (such as synchronous optical networking), and layering these where necessary, rather than using a mixture between the two in a single technology.
The term plesiochronous comes from the Greek πλησίος plesios ("near") and χρόνος chrónos ("time").
|
https://en.wikipedia.org/wiki/Genlock | Genlock (generator locking) is a common technique where the video output of one source (or a specific reference signal from a signal generator) is used to synchronize other picture sources together. The aim in video applications is to ensure the coincidence of signals in time at a combining or switching point. When video instruments are synchronized in this way, they are said to be generator-locked, or genlocked.
Possible problems
Video signals generated and output by generator-locked instruments are said to be syntonized. Syntonized video signals will be precisely frequency-locked, but because of delays caused by the unequal transmission path lengths, the synchronized signals will exhibit differing phases at various points in the television system. Modern video equipment such as production switchers that have multiple video inputs often include a variable delay on each input to compensate for the phase differences and time all the input signals to precise phase coincidence.
Where two or more video signals are combined or being switched between, the horizontal and vertical timing of the picture sources should be coincident with each other. If they are not, the picture will appear to jump when switching between the sources whilst the display device re-adjusts the horizontal and/or vertical scan to correctly reframe the image.
Where composite video is in use, the phase of the chrominance subcarrier of each source being combined or switched should also be coincident. This is to avoid changes in colour hue and/or saturation during a transition between sources.
Scope
Generator locking can be used to synchronize as few as two isolated sources (e.g., a television camera and a videotape machine feeding a vision mixer (production switcher)), or in a wider facility where all the video sources are locked to a single synchronizing pulse generator (e.g., a fast-paced sporting event featuring multiple cameras and recording devices). Generator locking can also be used to |
https://en.wikipedia.org/wiki/Bubble%20memory | Bubble memory is a type of non-volatile computer memory that uses a thin film of a magnetic material to hold small magnetized areas, known as bubbles or domains, each storing one bit of data. The material is arranged to form a series of parallel tracks that the bubbles can move along under the action of an external magnetic field. The bubbles are read by moving them to the edge of the material, where they can be read by a conventional magnetic pickup, and then rewritten on the far edge to keep the memory cycling through the material. In operation, bubble memories are similar to delay-line memory systems.
Bubble memory started out as a promising technology in the 1970s, offering memory density of an order similar to hard drives, but performance more comparable to core memory, while lacking any moving parts. This led many to consider it a contender for a "universal memory" that could be used for all storage needs. The introduction of dramatically faster semiconductor memory chips pushed bubble into the slow end of the scale, and equally dramatic improvements in hard-drive capacity made it uncompetitive in price terms. Bubble memory was used for some time in the 1970s and 1980s where its non-moving nature was desirable for maintenance or shock-proofing reasons. The introduction of flash storage and similar technologies rendered even this niche uncompetitive, and bubble disappeared entirely by the late 1980s.
History
Precursors
Bubble memory is largely the brainchild of a single person, Andrew Bobeck. Bobeck had worked on many kinds of magnetics-related projects through the 1960s, and two of his projects put him in a particularly good position for the development of bubble memory. The first was the development of the first magnetic-core memory system driven by a transistor-based controller, and the second was the development of twistor memory.
Twistor is essentially a version of core memory that replaces the "cores" with a piece of magnetic tape. The main advantag |
https://en.wikipedia.org/wiki/Interest%20rate | An interest rate is the amount of interest due per period, as a proportion of the amount lent, deposited, or borrowed (called the principal sum). The total interest on an amount lent or borrowed depends on the principal sum, the interest rate, the compounding frequency, and the length of time over which it is lent, deposited, or borrowed.
The annual interest rate is the rate over a period of one year. Other interest rates apply over different periods, such as a month or a day, but they are usually annualized.
The interest rate has been characterized as "an index of the preference . . . for a dollar of present [income] over a dollar of future income." The borrower wants, or needs, to have money sooner rather than later, and is willing to pay a fee—the interest rate—for that privilege.
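As a worked illustration of how the principal, the rate, the compounding frequency, and the length of time together determine total interest, here is a minimal sketch of the standard compound-interest formula (the figures are hypothetical):

```python
def compound_interest(principal, annual_rate, years, compounds_per_year=1):
    """Total interest on `principal` at `annual_rate` (e.g. 0.05 for 5%),
    compounded `compounds_per_year` times per year for `years` years."""
    amount = principal * (1 + annual_rate / compounds_per_year) ** (compounds_per_year * years)
    return amount - principal

# Hypothetical deposit: 1,000 at 5% per year for 10 years.
print(round(compound_interest(1000, 0.05, 10), 2))       # annual compounding  -> 628.89
print(round(compound_interest(1000, 0.05, 10, 12), 2))   # monthly compounding -> 647.01
```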
Influencing factors
Interest rates vary according to:
the government's directives to the central bank to accomplish the government's goals
the currency of the principal sum lent or borrowed
the term to maturity of the investment
the perceived default probability of the borrower
supply and demand in the market
the amount of collateral
special features like call provisions
reserve requirements
compensating balance
as well as other factors.
Example
A company borrows capital from a bank to buy assets for its business. In return, the bank charges the company interest. (The lender might also require rights over the new assets as collateral.)
A bank will use the capital deposited by individuals to make loans to their clients. In return, the bank should pay interest to individuals who have deposited their capital. The amount of interest payment depends on the interest rate and the amount of capital they deposited.
Related terms
Base rate usually refers to the annualized effective interest rate offered on overnight deposits by the central bank or other monetary authority.
The annual percentage rate (APR) may refer either to a nominal APR or an effective APR (EAPR). The differenc |
https://en.wikipedia.org/wiki/Information%20commissioner | The role of information commissioner differs from nation to nation. Most commonly it is a title given to a government regulator in the fields of freedom of information and the protection of personal data in the widest sense. The office often functions as a specialist ombudsman service.
Australia
The Office of the Australian Information Commissioner (OAIC) has functions relating to freedom of information and privacy, as well as information policy. The Office of the Privacy Commissioner, which was the national privacy regulator, was integrated into the OAIC on 1 November 2010. There are three independent commissioners in the OAIC: the Australian Information Commissioner, the Freedom of Information Commissioner, and the Privacy Commissioner.
Bangladesh
The Information Commission of Bangladesh promotes and protects access to information. It is formed under the Right to Information Act, 2009, whose stated object is to empower the citizens by promoting transparency and accountability in the working of the public and private organizations, with the ultimate aim of decreasing corruption and establishing good governance. The Act creates a regime through which the citizens of the country may have access to information under the control of public and other authorities.
Canada
The Information Commissioner of Canada is an independent ombudsman appointed by the Parliament of Canada who investigates complaints from people who believe they have been denied rights provided under Canada's Access to Information Act. Similar bodies at provincial level include the Information and Privacy Commissioner (Ontario).
Germany
The Federal Commissioner for Data Protection and Freedom of Information (BfDI) is the federal commissioner not only for data protection but also (since the commencement of the German Freedom of Information Act on January 1, 2006) for freedom of information.
Hong Kong
The Privacy Commissioner for Personal Data (PCPD) is charged with education and enforcement of the P |
https://en.wikipedia.org/wiki/Chicory | Common chicory (Cichorium intybus) is a somewhat woody, perennial herbaceous plant of the family Asteraceae, usually with bright blue flowers, rarely white or pink. Native to the Old World, it has been introduced to the Americas and Australia. Many varieties are cultivated for salad leaves, chicons (blanched buds), or roots (var. sativum), which are baked, ground, and used as a coffee substitute and food additive. In the 21st century, inulin, an extract from chicory root, has been used in food manufacturing as a sweetener and source of dietary fiber.
Chicory is grown as a forage crop for livestock. "Chicory" is also the common name in the United States for curly endive (Cichorium endivia); these two closely related species are often confused.
Description
When flowering, chicory has a tough, grooved, and more or less hairy stem. It can grow to tall. The leaves are stalked, lanceolate and unlobed; they range from in length (smallest near the top) and wide. The flower heads are wide, and usually light blue or lavender; it has also rarely been described as white or pink. Of the two rows of involucral bracts, the inner is longer and erect, the outer is shorter and spreading. It flowers from March until October. The seed has small scales at the tip.
Chemistry
Substances which contribute to the plant's bitterness are primarily the two sesquiterpene lactones, lactucin and lactucopicrin. Other components are aesculetin, aesculin, cichoriin, umbelliferone, scopoletin, 6,7-dihydrocoumarin, and further sesquiterpene lactones and their glycosides. Around 1970, it was discovered that the root contains up to 20% inulin, a polysaccharide similar to starch.
Names
Common chicory is also known as blue daisy, blue dandelion, blue sailors, blue weed, bunk, coffeeweed, cornflower, hendibeh, horseweed, ragged sailors, succory, wild bachelor's buttons, and wild endive. (Note: "cornflower" is commonly applied to Centaurea cyanus.) Common names for varieties of var. foliosum includ |
https://en.wikipedia.org/wiki/Geek%20Code | The Geek Code, developed in 1993, is a series of letters and symbols used by self-described "geeks" to inform fellow geeks about their personality, appearance, interests, skills, and opinions. The idea is that everything that makes a geek individual can be encoded in a compact format which only other geeks can read. This is deemed to be efficient in some sufficiently geeky manner.
It was once common practice to use a geek code as one's email or Usenet signature, but the last official version of the code was produced in 1996, and it has now largely fallen out of use.
History
The Geek Code was invented by Robert A. Hayden in 1993 and was defined at geekcode.com. It was inspired by a similar code for the bear subculture - which in turn was inspired by the Yerkes spectral classification system for describing stars.
After a number of updates, the last revision of the code was v3.12, in 1996.
Some alternative encodings have also been proposed. For example, the 1997 Acorn Code was a version specific to users of Acorn's RISC OS computers.
Format
Geek codes can be written in two formats; either as a simple string:
...or as a "Geek Code Block", a parody of the output produced by the encryption program PGP:
Note that this latter format has a line specifying the version of Geek Code being used.
(Both these examples use Hayden's own geek code.)
Encoding
Occupation
The code starts with the letter G (for Geek) followed by the geek's occupation(s): GMU for a geek of music, GCS for a geek of computer science etc. There are 28 occupations that can be represented, but GAT is for geeks that can do anything and everything - and "usually precludes the use of other vocational descriptors".
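As a small illustration of the occupation prefix, the following sketch decodes the first token of a code string; the mapping covers only the occupation codes named above, and the example string is hypothetical rather than anyone's real code:

```python
# Illustrative subset of occupation codes; the full specification defines 28.
OCCUPATIONS = {
    "MU": "Geek of Music",
    "CS": "Geek of Computer Science",
    "AT": "Geek of All Trades",
}

def occupation(code):
    """Return the description of the occupation at the start of a geek code.

    `code` is assumed to start with 'G' plus an occupation suffix,
    e.g. 'GCS d- s+:+' (a hypothetical fragment, not a real person's code).
    """
    prefix = code.split()[0]          # first token, e.g. 'GCS'
    return OCCUPATIONS.get(prefix[1:], "unknown occupation")

print(occupation("GCS d- s+:+"))      # Geek of Computer Science
```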
Categories
The Geek Code website contains the complete list of categories, along with all of the special syntax options.
Decoding
There have been several '"decoders" produced to transform a specific geek code into English, including:
Bradley M. Kuhn, in late 1998, made Williams' program ava |
https://en.wikipedia.org/wiki/Oil%20platform | An oil platform (also called an oil rig, offshore platform, oil production platform, etc.) is a large structure with facilities to extract and process petroleum and natural gas that lie in rock formations beneath the seabed. Many oil platforms will also have facilities to accommodate the workers, although it is also common to have a separate accommodation platform bridge linked to the production platform. Most commonly, oil platforms engage in activities on the continental shelf, though they can also be used in lakes, inshore waters, and inland seas. Depending on the circumstances, the platform may be fixed to the ocean floor, consist of an artificial island, or float. In some arrangements the main facility may have storage facilities for the processed oil. Remote subsea wells may also be connected to a platform by flow lines and by umbilical connections. These sub-sea facilities may include one or more subsea wells or manifold centres for multiple wells.
Offshore drilling presents environmental challenges, both from the produced hydrocarbons and the materials used during the drilling operation. Controversies include the ongoing US offshore drilling debate.
There are many different types of facilities from which offshore drilling operations take place. These include bottom-founded drilling rigs (jackup barges and swamp barges), combined drilling and production facilities, either bottom-founded or floating platforms, and deepwater mobile offshore drilling units (MODU), including semi-submersibles and drillships. These are capable of operating in water depths up to . In shallower waters, the mobile units are anchored to the seabed. However, in deeper water (more than ), the semisubmersibles or drillships are maintained at the required drilling location using dynamic positioning.
History
Around 1891, the first submerged oil wells were drilled from platforms built on piles in the fresh waters of the Grand Lake St. Marys (a.k.a. Mercer County Reservoir) in Ohio. The |
https://en.wikipedia.org/wiki/Iterative%20and%20incremental%20development | Iterative and incremental development is any combination of both iterative design or iterative method and incremental build model for development.
Usage of the term began in software development, with a long-standing combination of the two terms iterative and incremental having been widely suggested for large development efforts. For example, the 1985 DOD-STD-2167
mentions (in section 4.1.2): "During software development, more than one iteration of the software development cycle may be in progress at the same time." and "This process may be described as an 'evolutionary acquisition' or 'incremental build' approach." In software, the relationship between iterations and increments is determined by the overall software development process.
Overview
The basic idea behind this method is to develop a system through repeated cycles (iterative) and in smaller portions at a time (incremental), allowing software developers to take advantage of what was learned during development of earlier parts or versions of the system. Learning comes from both the development and use of the system, where possible key steps in the process start with a simple implementation of a subset of the software requirements and iteratively enhance the evolving versions until the full system is implemented. At each iteration, design modifications are made and new functional capabilities are added.
The procedure itself consists of the initialization step, the iteration step, and the Project Control List. The initialization step creates a base version of the system. The goal for this initial implementation is to create a product to which the user can react. It should offer a sampling of the key aspects of the problem and provide a solution that is simple enough to understand and implement easily. To guide the iteration process, a project control list is created that contains a record of all tasks that need to be performed. It includes items such as new features to be implemented and areas of redesi |
https://en.wikipedia.org/wiki/Formula | In science, a formula is a concise way of expressing information symbolically, as in a mathematical formula or a chemical formula. The informal use of the term formula in science refers to the general construct of a relationship between given quantities.
The plural of formula can be either formulas (from the most common English plural noun form) or, under the influence of scientific Latin, formulae (from the original Latin).
In mathematics
In mathematics, a formula generally refers to an equation relating one mathematical expression to another, with the most important ones being mathematical theorems. For example, determining the volume of a sphere requires a significant amount of integral calculus or its geometrical analogue, the method of exhaustion. However, having done this once in terms of some parameter (the radius for example), mathematicians have produced a formula to describe the volume of a sphere in terms of its radius: V = (4/3)πr³.
Having obtained this result, the volume of any sphere can be computed as long as its radius is known. Here, notice that the volume V and the radius r are expressed as single letters instead of words or phrases. This convention, while less important in a relatively simple formula, means that mathematicians can more quickly manipulate formulas which are larger and more complex. Mathematical formulas are often algebraic, analytical or in closed form.
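For instance, a one-line sketch evaluating this formula for a given radius (the radius values below are arbitrary):

```python
import math

def sphere_volume(radius):
    """Volume of a sphere of the given radius, V = (4/3) * pi * r**3."""
    return 4.0 / 3.0 * math.pi * radius ** 3

print(sphere_volume(1.0))   # ~4.18879 (unit sphere)
print(sphere_volume(2.0))   # ~33.5103
```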
In a general context, formulas are often a manifestation of a mathematical model of real-world phenomena, and as such can be used to provide solutions (or approximate solutions) to real-world problems, with some being more general than others. For example, the formula F = ma
is an expression of Newton's second law, and is applicable to a wide range of physical situations. Other formulas, such as the use of the equation of a sine curve to model the movement of the tides in a bay, may be created to solve a particular problem. In all cases, however, formulas form the basis for calculations.
Expr |
https://en.wikipedia.org/wiki/Logarithmic%20scale | A logarithmic scale (or log scale) is a way of displaying numerical data over a very wide range of values in a compact way. As opposed to a linear number line, in which every unit of distance corresponds to adding the same amount, on a logarithmic scale every unit of length corresponds to multiplying the previous value by the same amount. Hence, such a scale is nonlinear. On such a nonlinear scale, the numbers 1, 2, 3, 4, 5, and so on would not be equally spaced. Rather, the numbers 10, 100, 1000, 10000, and 100000 would be equally spaced. Likewise, the numbers 2, 4, 8, 16, 32, and so on would be equally spaced. Exponential growth curves are often displayed on a log scale, as they would otherwise increase too quickly to fit within a small graph.
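A short sketch of this property: taking base-10 (or base-2) logarithms turns the equal multiplicative steps into equal additive steps:

```python
import math

values = [10, 100, 1_000, 10_000, 100_000]
print([round(math.log10(v), 6) for v in values])   # [1.0, 2.0, 3.0, 4.0, 5.0]: equal spacing

doubling = [2, 4, 8, 16, 32]
print([round(math.log2(v), 6) for v in doubling])  # [1.0, 2.0, 3.0, 4.0, 5.0]: equal spacing again
```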
Common uses
The markings on slide rules are arranged in a log scale for multiplying or dividing numbers by adding or subtracting lengths on the scales.
The following are examples of commonly used logarithmic scales, where a larger quantity results in a higher value:
Richter magnitude scale and moment magnitude scale (MMS) for strength of earthquakes and movement in the Earth
Sound level, with units decibel
Neper for amplitude, field and power quantities
Frequency level, with units cent, minor second, major second, and octave for the relative pitch of notes in music
Logit for odds in statistics
Palermo Technical Impact Hazard Scale
Logarithmic timeline
Counting f-stops for ratios of photographic exposure
The rule of nines used for rating low probabilities
Entropy in thermodynamics
Information in information theory
Particle size distribution curves of soil
The following are examples of commonly used logarithmic scales, where a larger quantity results in a lower (or negative) value:
pH for acidity
Stellar magnitude scale for brightness of stars
Krumbein scale for particle size in geology
Absorbance of light by transparent samples
Some of our senses operate in a logarithmic fashion (Weber–Fechner law), which m |
https://en.wikipedia.org/wiki/Tobacco%20mosaic%20virus | Tobacco mosaic virus (TMV) is a positive-sense single-stranded RNA virus species in the genus Tobamovirus that infects a wide range of plants, especially tobacco and other members of the family Solanaceae. The infection causes characteristic patterns, such as "mosaic"-like mottling and discoloration on the leaves (hence the name). TMV was the first virus to be discovered. Although it was known from the late 19th century that a non-bacterial infectious disease was damaging tobacco crops, it was not until 1930 that the infectious agent was determined to be a virus. It is the first pathogen identified as a virus. The virus was crystallised by W.M. Stanley. It has a similar size to the largest synthetic molecule, known as PG5.
History
In 1886, Adolf Mayer first described the tobacco mosaic disease that could be transferred between plants, similar to bacterial infections. In 1892, Dmitri Ivanovsky gave the first concrete evidence for the existence of a non-bacterial infectious agent, showing that infected sap remained infectious even after filtering through the finest Chamberland filters. Later, in 1903, Ivanovsky published a paper describing abnormal crystal intracellular inclusions in the host cells of the affected tobacco plants and argued the connection between these inclusions and the infectious agent. However, Ivanovsky remained rather convinced, despite repeated failures to produce evidence, that the causal agent was an unculturable bacterium, too small to be retained on the employed Chamberland filters and to be detected in the light microscope. In 1898, Martinus Beijerinck independently replicated Ivanovsky's filtration experiments and then showed that the infectious agent was able to reproduce and multiply in the host cells of the tobacco plant. Beijerinck adopted the term of "virus" to indicate that the causal agent of tobacco mosaic disease was of non-bacterial nature. Tobacco mosaic virus was the first virus to be crystallized. It was achieved by Wendell |
https://en.wikipedia.org/wiki/Society%20of%20Motion%20Picture%20and%20Television%20Engineers | The Society of Motion Picture and Television Engineers (SMPTE), founded in 1916 as the Society of Motion Picture Engineers or SMPE, is a global professional association of engineers, technologists, and executives working in the media and entertainment industry. As an internationally recognized standards organization, SMPTE has published more than 800 technical standards and related documents for broadcast, filmmaking, digital cinema, audio recording, information technology (IT), and medical imaging.
SMPTE also publishes the SMPTE Motion Imaging Journal, provides networking opportunities for its members, produces academic conferences and exhibitions, and performs other industry-related functions. SMPTE membership is open to any individual or organization with an interest in the subject matter. In the US, SMPTE is a 501(c)3 non-profit charitable organization.
History
An informal organizational meeting was held in April 1916 at the Astor Hotel in New York City. Enthusiasm and interest increased, and meetings were held in New York and Chicago, culminating in the founding of the Society of Motion Picture Engineers in the Oak Room of the Raleigh Hotel, Washington DC, on 24 July. Ten industry stakeholders attended and signed the Articles of Incorporation. The papers of incorporation, executed on 24 July 1916, were filed on 10 August in Washington DC. With a second meeting scheduled, invitations were telegraphed to Jenkins's industry friends, i.e., key players and engineering executives in the motion picture industry.
Three months later, 26 attended the first “official” meeting of the Society, the SMPE, at the Hotel Astor in New York City, on 2 and 3 October 1916. Jenkins was formally elected president, a constitution ratified, an emblem for the Society approved, and six committees established.
At the July 1917 Society Convention in Chicago, a set of specifications including the dimensions of 35-mm film, 16 frames per second, etc. were adopted. SMP |
https://en.wikipedia.org/wiki/UNIVAC%201101 | The ERA 1101, later renamed UNIVAC 1101, was a computer system designed and built by Engineering Research Associates (ERA) in the early 1950s and continued to be sold by the Remington Rand corporation after that company later purchased ERA. Its (initial) military model, the ERA Atlas, was the first stored-program computer that was moved from its site of manufacture and successfully installed at a distant site. Remington Rand used the 1101's architecture as the basis for a series of machines into the 1960s.
History
Codebreaking
ERA was formed from a group of code-breakers working for the United States Navy during World War II. The team had built a number of code-breaking machines, similar to the more famous Colossus computer in England, but designed to attack Japanese codes. After the war the Navy was interested in keeping the team together even though they had to formally be turned out of Navy service. The result was ERA, which formed in St. Paul, Minnesota in the hangars of a former Chase Aircraft shadow factory.
After the war, the team continued to build codebreaking machines, targeted at specific codes. After one of these codes changed, making an expensive computer obsolete, the team convinced the Navy that the only way to make a system that would remain useful was to build a fully programmable computer. The Navy agreed, and in 1947 they funded development of a new system under "Task 13".
The resulting machines, known as "Atlas", used drum memory for main memory and featured a simple central processing unit built for integer math. The first Atlas machine was built, moved, and installed at the Army Security Agency by December 1950. A faster version using Williams tubes and drums was delivered to the NSA in 1953.
Commercialization
The company turned to the task of selling the systems commercially. Atlas was named after a character in the popular comic strip Barnaby, and they initially decided to name the commercial versions "Mabel". Jack Hill suggested "1101" |
https://en.wikipedia.org/wiki/SMPTE%20timecode | SMPTE timecode is a set of cooperating standards to label individual frames of video or film with a timecode. The system is defined by the Society of Motion Picture and Television Engineers in the SMPTE 12M specification. SMPTE revised the standard in 2008, turning it into a two-part document: SMPTE 12M-1 and SMPTE 12M-2, including new explanations and clarifications.
Timecodes are added to film, video or audio material, and have also been adapted to synchronize music and theatrical production. They provide a time reference for editing, synchronization and identification. Timecode is a form of media metadata. The invention of timecode made modern videotape editing possible and led eventually to the creation of non-linear editing systems.
Basic concepts
SMPTE timecode is presented in hour:minute:second:frame format and is typically represented in 32 bits using binary-coded decimal. There are also drop-frame and color framing flags and three extra binary group flag bits used for defining the use of the user bits. The formats of other varieties of SMPTE timecode are derived from that of the linear timecode. More complex timecodes such as vertical interval timecode can also include extra information in a variety of encodings.
Sub-second timecode time values are expressed in terms of frames. Common supported frame rates include:
24 frame/sec. (film, ATSC, 2K, 4K, 6K)
25 frame/sec. (PAL (Europe, Uruguay, Argentina, Australia), SECAM, DVB, ATSC)
29.97 (30 ÷ 1.001) frame/sec. (NTSC American System (U.S., Canada, Mexico, Colombia, et al.), ATSC, PAL-M (Brazil))
30 frame/sec. (ATSC)
In general, SMPTE timecode frame rate information is implicit, known from the rate of arrival of the timecode from the medium. It may also be specified in other metadata encoded in the medium. The interpretation of several bits, including the color framing and drop frame bits, depends on the underlying data rate. In particular, the drop frame bit is only valid for 29.97 and 30 fra |
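As a rough sketch of how an absolute frame count maps onto the hour:minute:second:frame format at an integer frame rate (drop-frame rates such as 29.97 need additional frame-dropping rules and are not handled here):

```python
def frames_to_timecode(frame_count, frame_rate):
    """Convert an absolute frame count to hh:mm:ss:ff (non-drop-frame)."""
    frames = frame_count % frame_rate
    total_seconds = frame_count // frame_rate
    seconds = total_seconds % 60
    minutes = (total_seconds // 60) % 60
    hours = total_seconds // 3600
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

print(frames_to_timecode(0, 25))          # 00:00:00:00
print(frames_to_timecode(90_000, 25))     # 01:00:00:00 (one hour of 25 fps video)
print(frames_to_timecode(1_234_567, 24))  # 14:17:20:07
```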
https://en.wikipedia.org/wiki/IBM%20Series/1 | The IBM Series/1 is a 16-bit minicomputer, introduced in 1976, that in many respects competed with other minicomputers of the time, such as the PDP-11 from Digital Equipment Corporation and similar offerings from Data General and HP. The Series/1 was typically used to control and operate external electro-mechanical components while also allowing for primitive data storage and handling.
Although the Series/1 uses EBCDIC character encoding internally and locally attached EBCDIC terminals, ASCII-based remote terminals and devices could be attached via an I/O card with a RS-232 interface to be more compatible with competing minicomputers. IBM's own 3101 and 3151 ASCII display terminals are examples of this. This was a departure from IBM mainframes that used 3270 terminals and coaxial attachment.
Series/1 computers were withdrawn from marketing in 1988 at or near the introduction of the IBM AS/400 line.
A US government asset report dated May 2016 revealed that an IBM Series/1 was still being used as part of the country's nuclear command and control systems.
Models
Initially, model 1 (4952, Model C), model 3 (IBM 4953) and model 5 (IBM 4955, Model F) processors were provided. Later processors were the model 4 (IBM 4954) and model 6 (IBM 4956). Don Estridge had been the lead manager on the IBM Series/1 minicomputer. He reportedly had fallen out of grace when that project was ill-received.
Software support
The Series/1 could be ordered with or without operating system. Available were either of two mutually exclusive operating systems: Event Driven Executive (EDX) or Realtime Programming System (RPS). Systems using EDX were primarily programmed using Event Driven Language (EDL), though high level languages such as FORTRAN IV, PL/I, Pascal and COBOL were also available. EDL delivered output in IBM machine code for System/3 or System/7 and for the Series/1 by an emulator. Although the Series/1 is underpowered by today's standards, a robust multi-user operating envir |
https://en.wikipedia.org/wiki/Nimber | In mathematics, the nimbers, also called Grundy numbers, are introduced in combinatorial game theory, where they are defined as the values of heaps in the game Nim. The nimbers are the ordinal numbers endowed with nimber addition and nimber multiplication, which are distinct from ordinal addition and ordinal multiplication.
Because of the Sprague–Grundy theorem which states that every impartial game is equivalent to a Nim heap of a certain size, nimbers arise in a much larger class of impartial games. They may also occur in partisan games like Domineering.
The nimber addition and multiplication operations are associative and commutative. Each nimber is its own additive inverse. In particular for some pairs of ordinals, their nimber sum is smaller than either addend. The minimum excludant operation is applied to sets of nimbers.
Uses
Nim
Nim is a game in which two players take turns removing objects from distinct heaps. As moves depend only on the position and not on which of the two players is currently moving, and where the payoffs are symmetric, Nim is an impartial game. On each turn, a player must remove at least one object, and may remove any number of objects provided they all come from the same heap. The goal of the game is to be the player who removes the last object. The nimber of a heap is simply the number of objects in that heap. Using nim addition, one can calculate the nimber of the game as a whole. The winning strategy is to force the nimber of the game to 0 for the opponent's turn.
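Nimber addition of finite heap sizes is the bitwise XOR of those sizes (the nim-sum); a minimal sketch of evaluating a Nim position and finding a winning move, if one exists:

```python
from functools import reduce

def nim_value(heaps):
    """Nimber of a Nim position: the XOR (nim-sum) of all heap sizes."""
    return reduce(lambda a, b: a ^ b, heaps, 0)

def winning_move(heaps):
    """Return (heap_index, new_size) leaving a nim-sum of 0, or None if losing."""
    total = nim_value(heaps)
    if total == 0:
        return None                  # every move leaves a non-zero nim-sum
    for i, h in enumerate(heaps):
        target = h ^ total           # size this heap must be reduced to
        if target < h:
            return i, target
    return None

print(nim_value([3, 4, 5]))          # 2 -> the player to move can win
print(winning_move([3, 4, 5]))       # (0, 1): reduce the 3-heap to 1
```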
Cram
Cram is a game often played on a rectangular board in which players take turns placing dominoes either horizontally or vertically until no more dominoes can be placed. The first player that cannot make a move loses. As the possible moves for both players are the same, it is an impartial game and can have a nimber value. For example, any board that is an even size by an even size will have a nimber of 0. Any board that is even by odd will have a non-zero ni |
https://en.wikipedia.org/wiki/Radio%20clock | A radio clock or radio-controlled clock (RCC), and often colloquially (and incorrectly) referred to as an "atomic clock", is a type of quartz clock or watch that is automatically synchronized to a time code transmitted by a radio transmitter connected to a time standard such as an atomic clock. Such a clock may be synchronized to the time sent by a single transmitter, such as many national or regional time transmitters, or may use the multiple transmitters used by satellite navigation systems such as Global Positioning System. Such systems may be used to automatically set clocks or for any purpose where accurate time is needed. Radio clocks may include any feature available for a clock, such as alarm function, display of ambient temperature and humidity, broadcast radio reception, etc.
One common style of radio-controlled clock uses time signals transmitted by dedicated terrestrial longwave radio transmitters, which emit a time code that can be demodulated and displayed by the radio controlled clock. The radio controlled clock will contain an accurate time base oscillator to maintain timekeeping if the radio signal is momentarily unavailable. Other radio controlled clocks use the time signals transmitted by dedicated transmitters in the shortwave bands. Systems using dedicated time signal stations can achieve accuracy of a few tens of milliseconds.
GPS satellite receivers also internally generate accurate time information from the satellite signals. Dedicated GPS timing receivers are accurate to better than 1 microsecond; however, general-purpose or consumer grade GPS may have an offset of up to one second between the internally calculated time, which is much more accurate than 1 second, and the time displayed on the screen.
Other broadcast services may include timekeeping information of varying accuracy within their signals.
Single transmitter
Radio clocks synchronized to a terrestrial time signal can usually achieve an accuracy within a hundredth of a second r |
https://en.wikipedia.org/wiki/Jam%20sync | Jam sync refers to the practice of applying a phase hit to a system to bring it in synchronization with another. The term originates from the use of this technique to replace defective time code on a video tape recording by replacing it with a new time code sequence, which may be an extension of a previous good time code sequence on an earlier part of the source material.
Synchronization |
https://en.wikipedia.org/wiki/Semaphore%20%28programming%29 | In computer science, a semaphore is a variable or abstract data type used to control access to a common resource by multiple threads and avoid critical section problems in a concurrent system such as a multitasking operating system. Semaphores are a type of synchronization primitive. A trivial semaphore is a plain variable that is changed (for example, incremented or decremented, or toggled) depending on programmer-defined conditions.
A useful way to think of a semaphore as used in a real-world system is as a record of how many units of a particular resource are available, coupled with operations to adjust that record safely (i.e., to avoid race conditions) as units are acquired or become free, and, if necessary, wait until a unit of the resource becomes available.
Semaphores are a useful tool in the prevention of race conditions; however, their use is not a guarantee that a program is free from these problems. Semaphores which allow an arbitrary resource count are called counting semaphores, while semaphores which are restricted to the values 0 and 1 (or locked/unlocked, unavailable/available) are called binary semaphores and are used to implement locks.
The semaphore concept was invented by Dutch computer scientist Edsger Dijkstra in 1962 or 1963, when Dijkstra and his team were developing an operating system for the Electrologica X8. That system eventually became known as THE multiprogramming system.
Library analogy
Suppose a physical library has 10 identical study rooms, to be used by one student at a time. Students must request a room from the front desk if they wish to use a study room. If no rooms are free, students wait at the desk until someone relinquishes a room. When a student has finished using a room, the student must return to the desk and indicate that one room has become free.
In the simplest implementation, the clerk at the front desk knows only the number of free rooms available, which they only know correctly if all of the students actuall |
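The analogy maps directly onto a counting semaphore. A minimal sketch using Python's standard threading module (the number of rooms and the study time are illustrative):

```python
import threading
import time

rooms = threading.Semaphore(10)           # 10 identical study rooms

def student(name):
    rooms.acquire()                        # wait at the desk until a room is free
    try:
        time.sleep(0.1)                    # study for a while
        print(f"{name} is done")
    finally:
        rooms.release()                    # tell the desk the room is free again

threads = [threading.Thread(target=student, args=(f"student-{i}",)) for i in range(25)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```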
https://en.wikipedia.org/wiki/Wavenumber | In the physical sciences, the wavenumber (or wave number), also known as repetency, is the spatial frequency of a wave, measured in cycles per unit distance (ordinary wavenumber) or radians per unit distance (angular wavenumber). It is analogous to temporal frequency, which is defined as the number of wave cycles per unit time (ordinary frequency) or radians per unit time (angular frequency).
In multidimensional systems, the wavenumber is the magnitude of the wave vector. The space of wave vectors is called reciprocal space. Wave numbers and wave vectors play an essential role in optics and the physics of wave scattering, such as X-ray diffraction, neutron diffraction, electron diffraction, and elementary particle physics. For quantum mechanical waves, the wavenumber multiplied by the reduced Planck's constant is the canonical momentum.
Wavenumber can be used to specify quantities other than spatial frequency. For example, in optical spectroscopy, it is often used as a unit of temporal frequency assuming a certain speed of light.
Definition
Wavenumber, as used in spectroscopy and most chemistry fields, is defined as the number of wavelengths per unit distance, typically centimeters (cm−1): ν̃ = 1/λ,
where λ is the wavelength. It is sometimes called the "spectroscopic wavenumber". It equals the spatial frequency.
For example, a wavenumber in inverse centimeters can be converted to a frequency in gigahertz by multiplying by 29.9792458 cm/ns (the speed of light, in centimeters per nanosecond); conversely, an electromagnetic wave at 29.9792458 GHz has a wavelength of 1 cm in free space.
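A small sketch of these conversions between spectroscopic wavenumber, frequency, and wavelength, using the speed of light expressed in centimetres per second:

```python
C_CM_PER_S = 2.99792458e10       # speed of light in centimetres per second

def wavenumber_to_frequency_ghz(wavenumber_inv_cm):
    """Frequency in GHz for a spectroscopic wavenumber in cm^-1."""
    return wavenumber_inv_cm * C_CM_PER_S / 1e9

def wavenumber_to_wavelength_cm(wavenumber_inv_cm):
    """Wavelength in cm for a spectroscopic wavenumber in cm^-1."""
    return 1.0 / wavenumber_inv_cm

print(wavenumber_to_frequency_ghz(1.0))    # 29.9792458 GHz
print(wavenumber_to_wavelength_cm(1.0))    # 1.0 cm
```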
In theoretical physics, a wave number, defined as the number of radians per unit distance, sometimes called "angular wavenumber", is more often used: k = 2π/λ.
When wavenumber is represented by the symbol ν̃, a frequency is still being represented, albeit indirectly. As described in the spectroscopy section, this is done through the relationship ν̃ = ν_s/c, where ν_s is a frequency in hertz and c is the speed of light in vacuum. This is done for con
https://en.wikipedia.org/wiki/Dissipation | In thermodynamics, dissipation is the result of an irreversible process that affects a thermodynamic system. In a dissipative process, energy (internal, bulk flow kinetic, or system potential) transforms from an initial form to a final form, where the capacity of the final form to do thermodynamic work is less than that of the initial form. For example, transfer of energy as heat is dissipative because it is a transfer of energy other than by thermodynamic work or by transfer of matter, and spreads previously concentrated energy. Following the second law of thermodynamics, in conduction and radiation from one body to another, the entropy varies with temperature (reduces the capacity of the combination of the two bodies to do work), but never decreases in an isolated system.
In mechanical engineering, dissipation is the irreversible conversion of mechanical energy into thermal energy with an associated increase in entropy.
Processes with defined local temperature produce entropy at a certain rate. The entropy production rate times local temperature gives the dissipated power. Important examples of irreversible processes are: heat flow through a thermal resistance, fluid flow through a flow resistance, diffusion (mixing), chemical reactions, and electric current flow through an electrical resistance (Joule heating).
Definition
Dissipative thermodynamic processes are essentially irreversible because they produce entropy. Planck regarded friction as the prime example of an irreversible thermodynamic process. In a process in which the temperature is locally continuously defined, the local density of rate of entropy production times local temperature gives the local density of dissipated power.
A particular occurrence of a dissipative process cannot be described by a single individual Hamiltonian formalism. A dissipative process requires a collection of admissible individual Hamiltonian descriptions, exactly which one describes the actual particular occurrence of the |
https://en.wikipedia.org/wiki/Radiative%20cooling | In the study of heat transfer, radiative cooling is the process by which a body loses heat by thermal radiation. As Planck's law describes, every physical body spontaneously and continuously emits electromagnetic radiation.
Radiative cooling has been applied in various contexts throughout human history, including ice making in India and Iran, heat shields for spacecraft, and in architecture. In 2014, a scientific breakthrough in the use of photonic metamaterials made daytime radiative cooling possible. It has since been proposed as a strategy to mitigate local and global warming caused by greenhouse gas emissions known as passive daytime radiative cooling.
Terrestrial radiative cooling
Mechanism
Infrared radiation can pass through dry, clear air in the wavelength range of 8–13 µm. Materials that can absorb energy and radiate it in those wavelengths exhibit a strong cooling effect. Materials that can also reflect 95% or more of sunlight in the 200 nanometres to 2.5 µm range can exhibit cooling even in direct sunlight.
Earth's energy budget
The Earth-atmosphere system is radiatively cooled, emitting long-wave (infrared) radiation which balances the absorption of short-wave (visible light) energy from the sun.
Convective transport of heat, and evaporative transport of latent heat are both important in removing heat from the surface and distributing it in the atmosphere. Pure radiative transport is more important higher up in the atmosphere. Diurnal and geographical variation further complicate the picture.
The large-scale circulation of the Earth's atmosphere is driven by the difference in absorbed solar radiation per square meter, as the sun heats the Earth more in the Tropics, mostly because of geometrical factors. The atmospheric and oceanic circulation redistributes some of this energy as sensible heat and latent heat partly via the mean flow and partly via eddies, known as cyclones in the atmosphere. Thus the tropics radiate less to space than they would |
https://en.wikipedia.org/wiki/General%20circulation%20model | A general circulation model (GCM) is a type of climate model. It employs a mathematical model of the general circulation of a planetary atmosphere or ocean. It uses the Navier–Stokes equations on a rotating sphere with thermodynamic terms for various energy sources (radiation, latent heat). These equations are the basis for computer programs used to simulate the Earth's atmosphere or oceans. Atmospheric and oceanic GCMs (AGCM and OGCM) are key components along with sea ice and land-surface components.
GCMs and global climate models are used for weather forecasting, understanding the climate, and forecasting climate change.
Versions designed for decade to century time scale climate applications were originally created by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton, New Jersey. These models are based on the integration of a variety of fluid dynamical, chemical and sometimes biological equations.
Terminology
The acronym GCM originally stood for General Circulation Model. Recently, a second meaning came into use, namely Global Climate Model. While these do not refer to the same thing, General Circulation Models are typically the tools used for modelling climate, and hence the two terms are sometimes used interchangeably. However, the term "global climate model" is ambiguous and may refer to an integrated framework that incorporates multiple components including a general circulation model, or may refer to the general class of climate models that use a variety of means to represent the climate mathematically.
History
In 1956, Norman Phillips developed a mathematical model that could realistically depict monthly and seasonal patterns in the troposphere. It became the first successful climate model. Following Phillips's work, several groups began working to create GCMs. The first to combine both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. By the |
https://en.wikipedia.org/wiki/Sensible%20heat | Sensible heat is heat exchanged by a body or thermodynamic system in which the exchange of heat changes the temperature of the body or system, and some macroscopic variables of the body or system, but leaves unchanged certain other macroscopic variables of the body or system, such as volume or pressure.
Usage
The term is used in contrast to a latent heat, which is the amount of heat exchanged that is hidden, meaning it occurs without change of temperature. For example, during a phase change such as the melting of ice, the temperature of the system containing the ice and the liquid is constant until all ice has melted. The terms latent and sensible are correlative.
The sensible heat of a thermodynamic process may be calculated as the product of the body's mass (m) with its specific heat capacity (c) and the change in temperature (ΔT): Q_sensible = m c ΔT.
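A worked sketch of this product, using the specific heat capacity of liquid water (about 4186 J/(kg·K)) as an illustrative value:

```python
def sensible_heat(mass_kg, specific_heat_j_per_kg_k, delta_t_k):
    """Sensible heat Q = m * c * dT, in joules."""
    return mass_kg * specific_heat_j_per_kg_k * delta_t_k

# Warming 2 kg of water by 10 K takes roughly 83.7 kJ.
print(sensible_heat(2.0, 4186.0, 10.0))   # 83720.0 J
```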
Sensible heat and latent heat are not special forms of energy. Rather, they describe exchanges of heat under conditions specified in terms of their effect on a material or a thermodynamic system.
In the writings of the early scientists who provided the foundations of thermodynamics, sensible heat had a clear meaning in calorimetry. James Prescott Joule characterized it in 1847 as an energy that was indicated by the thermometer.
Both sensible and latent heats are observed in many processes while transporting energy in nature. Latent heat is associated with changes of state, measured at constant temperature, especially the phase changes of atmospheric water vapor, mostly vaporization and condensation, whereas sensible heat directly affects the temperature of the atmosphere.
In meteorology, the term 'sensible heat flux' means the conductive heat flux from the Earth's surface to the atmosphere. It is an important component of Earth's surface energy budget. Sensible heat flux is commonly measured with the eddy covariance method.
See also
Eddy covariance flux (eddy correlation, eddy flux)
Enthalpy
Thermodynamic databases for pure sub |
https://en.wikipedia.org/wiki/Latent%20heat | Latent heat (also known as latent energy or heat of transformation) is energy released or absorbed, by a body or a thermodynamic system, during a constant-temperature process—usually a first-order phase transition.
Latent heat can be understood as hidden energy which is supplied or extracted to change the state of a substance (for example, to melt or vaporize it) without changing its temperature or pressure. This includes the latent heat of fusion (solid to liquid), the latent heat of vaporization (liquid to gas) and the latent heat of sublimation (solid to gas).
The term was introduced around 1762 by Scottish chemist Joseph Black. Black used the term in the context of calorimetry where a heat transfer caused a volume change in a body while its temperature was constant.
In contrast to latent heat, sensible heat is energy transferred as heat, with a resultant temperature change in a body.
Usage
The terms sensible heat and latent heat refer to energy transferred between a body and its surroundings, defined by the occurrence or non-occurrence of temperature change; they depend on the properties of the body. Sensible heat is sensed or felt in a process as a change in the body's temperature. Latent heat is energy transferred in a process without change of the body's temperature, for example, in a phase change (solid/liquid/gas).
Both sensible and latent heats are observed in many processes of transfer of energy in nature. Latent heat is associated with the change of phase of atmospheric or ocean water, vaporization, condensation, freezing or melting, whereas sensible heat is energy transferred that is evident in change of the temperature of the atmosphere or ocean, or ice, without those phase changes, though it is associated with changes of pressure and volume.
The original usage of the term, as introduced by Black, was applied to systems that were intentionally held at constant temperature. Such usage referred to latent heat of expansion and several other related |
https://en.wikipedia.org/wiki/Nyquist%20frequency | In signal processing, the Nyquist frequency (or folding frequency), named after Harry Nyquist, is a characteristic of a sampler, which converts a continuous function or signal into a discrete sequence. For a given sampling rate (samples per second), the Nyquist frequency (cycles per second) is the frequency whose cycle-length (or period) is twice the interval between samples, thus 0.5 cycle/sample. For example, audio CDs have a sampling rate of 44100 samples/second. At 0.5 cycle/sample, the corresponding Nyquist frequency is 22050 cycles/second (Hz). Conversely, the Nyquist rate for sampling a 22050 Hz signal is 44100 samples/second.
When the highest frequency (bandwidth) of a signal is less than the Nyquist frequency of the sampler, the resulting discrete-time sequence is said to be free of the distortion known as aliasing, and the corresponding sample rate is said to be above the Nyquist rate for that particular signal.
In a typical application of sampling, one first chooses the highest frequency to be preserved and recreated, based on the expected content (voice, music, etc.) and desired fidelity. Then one inserts an anti-aliasing filter ahead of the sampler. Its job is to attenuate the frequencies above that limit. Finally, based on the characteristics of the filter, one chooses a sample rate (and corresponding Nyquist frequency) that will provide an acceptably small amount of aliasing. In applications where the sample rate is pre-determined (such as the CD rate), the filter is chosen based on the Nyquist frequency, rather than vice versa.
Folding frequency
In this example, is the sampling rate, and is the corresponding Nyquist frequency. The black dot plotted at represents the amplitude and frequency of a sinusoidal function whose frequency is 60% of the sample rate. The other three dots indicate the frequencies and amplitudes of three other sinusoids that would produce the same set of samples as the actual sinusoid that was sampled. Undersampling |
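The folding behaviour described above can be checked numerically. The sketch below is only an illustrative aid; the class and method names (FoldingDemo, foldedFrequency) and the example frequencies are assumptions, not anything defined by the article.
// Hedged sketch: computes the baseband "folded" (aliased) frequency at which a
// sampled sinusoid of frequency f appears, for a sampler running at rate fs.
public class FoldingDemo {
    static double foldedFrequency(double f, double fs) {
        double r = f % fs;                 // reduce into the range [0, fs)
        return (r <= fs / 2) ? r : fs - r; // reflect about the Nyquist frequency fs/2
    }

    public static void main(String[] args) {
        double fs = 44100.0;                               // CD sampling rate (samples/s)
        System.out.println(foldedFrequency(26460.0, fs));  // 0.6*fs folds to 17640.0 Hz (0.4*fs)
        System.out.println(foldedFrequency(17640.0, fs));  // already below fs/2, unchanged
        System.out.println(foldedFrequency(61740.0, fs));  // 1.4*fs also folds to 17640.0 Hz
    }
}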
https://en.wikipedia.org/wiki/Builder%20pattern | The builder pattern is a creational design pattern that provides a flexible solution to various object creation problems in object-oriented programming. The intent of the builder design pattern is to separate the construction of a complex object from its representation. It is one of the Gang of Four design patterns.
Overview
The Builder design pattern is one of the GoF design patterns that describe how to solve recurring design problems in object-oriented software.
The Builder design pattern solves problems like:
How can a class (the same construction process) create different representations of a complex object?
How can a class that creates a complex object be simplified?
Creating and assembling the parts of a complex object directly within a class is inflexible. It commits the class to creating a particular representation of the complex object and makes it impossible to change the representation later independently from (without having to change) the class.
The Builder design pattern describes how to solve such problems:
Encapsulate creating and assembling the parts of a complex object in a separate Builder object.
A class delegates object creation to a Builder object instead of creating the objects directly.
A class (the same construction process) can delegate to different Builder objects to create different representations of a complex object.
Definition
The intent of the Builder design pattern is to separate the construction of a complex object from its representation. By doing so, the same construction process can create different representations.
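A minimal Java sketch of these roles, using an invented document-building example (the DocumentBuilder, HtmlBuilder, PlainTextBuilder and Director names are assumptions for this sketch, not taken from the article):
interface DocumentBuilder {
    void addTitle(String title);
    void addLine(String line);
    String getResult();
}

class HtmlBuilder implements DocumentBuilder {
    private final StringBuilder out = new StringBuilder();
    public void addTitle(String title) { out.append("<h1>").append(title).append("</h1>\n"); }
    public void addLine(String line)   { out.append("<p>").append(line).append("</p>\n"); }
    public String getResult()          { return out.toString(); }
}

class PlainTextBuilder implements DocumentBuilder {
    private final StringBuilder out = new StringBuilder();
    public void addTitle(String title) { out.append(title).append("\n====\n"); }
    public void addLine(String line)   { out.append(line).append("\n"); }
    public String getResult()          { return out.toString(); }
}

class Director {
    // The same construction process produces different representations,
    // depending on which concrete builder it is handed.
    String construct(DocumentBuilder builder) {
        builder.addTitle("Report");
        builder.addLine("Built by a Builder.");
        return builder.getResult();
    }
}

class BuilderDemo {
    public static void main(String[] args) {
        Director director = new Director();
        System.out.println(director.construct(new HtmlBuilder()));
        System.out.println(director.construct(new PlainTextBuilder()));
    }
}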
Advantages
Advantages of the Builder pattern include:
Allows you to vary a product's internal representation.
Encapsulates code for construction and representation.
Provides control over steps of construction process.
Disadvantages
Disadvantages of the Builder pattern include:
A distinct ConcreteBuilder must be created for each type of product.
Builder classes must be mutable.
May hamper/compli |
https://en.wikipedia.org/wiki/Factory%20method%20pattern | In class-based programming, the factory method pattern is a creational pattern that uses factory methods to deal with the problem of creating objects without having to specify the exact class of the object that will be created. This is done by creating objects by calling a factory method—either specified in an interface and implemented by child classes, or implemented in a base class and optionally overridden by derived classes—rather than by calling a constructor.
Overview
The Factory Method design pattern is one of the twenty-three well-known design patterns that describe how to solve recurring design problems to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse.
The Factory Method design pattern solves problems like:
How can an object be created so that subclasses can redefine which class to instantiate?
How can a class defer instantiation to subclasses?
The Factory Method design pattern describes how to solve such problems:
Define a separate operation (factory method) for creating an object.
Create an object by calling a factory method.
This enables writing of subclasses to change the way an object is created (to redefine which class to instantiate).
See also the UML class diagram below.
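A compact Java sketch of the idea, using an invented Dialog/Button example (all names here are assumptions made for illustration):
interface Button {
    String render();
}

class WindowsButton implements Button {
    public String render() { return "[Windows button]"; }
}

class HtmlButton implements Button {
    public String render() { return "<button>HTML button</button>"; }
}

abstract class Dialog {
    // Factory method: subclasses decide which concrete class to instantiate.
    protected abstract Button createButton();

    String renderWindow() {
        Button button = createButton(); // creation is deferred to the subclass
        return "Dialog with " + button.render();
    }
}

class WindowsDialog extends Dialog {
    protected Button createButton() { return new WindowsButton(); }
}

class WebDialog extends Dialog {
    protected Button createButton() { return new HtmlButton(); }
}

class FactoryMethodDemo {
    public static void main(String[] args) {
        System.out.println(new WindowsDialog().renderWindow());
        System.out.println(new WebDialog().renderWindow());
    }
}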
Definition
"Define an interface for creating an object, but let subclasses decide which class to instantiate. The Factory method lets a class defer instantiation it uses to subclasses." (Gang Of Four)
Creating an object often requires complex processes not appropriate to include within a composing object. The object's creation may lead to a significant duplication of code, may require information not accessible to the composing object, may not provide a sufficient level of abstraction, or may otherwise not be part of the composing object's concerns. The factory method design pattern handles these problems by defining a separate method for creating the objects, which subclasses can then overr |
https://en.wikipedia.org/wiki/Prototype%20pattern | The prototype pattern is a creational design pattern in software development. It is used when the type of objects to create is determined by a prototypical instance, which is cloned to produce new objects. This pattern is used to avoid subclasses of an object creator in the client application, like the factory method pattern does, and to avoid the inherent cost of creating a new object in the standard way (e.g., using the 'new' keyword) when it is prohibitively expensive for a given application.
To implement the pattern, the client declares an abstract base class that specifies a pure virtual clone() method. Any class that needs a "polymorphic constructor" capability derives itself from the abstract base class, and implements the clone() operation.
The client, instead of writing code that invokes the "new" operator on a hard-coded class name, calls the clone() method on the prototype, calls a factory method with a parameter designating the particular concrete derived class desired, or invokes the clone() method through some mechanism provided by another design pattern.
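A small Java sketch of this mechanism; the Shape/Circle names and the cloneShape() method are illustrative assumptions (a real implementation might instead override Object.clone() or implement Cloneable):
abstract class Shape {
    int x, y;
    Shape(int x, int y) { this.x = x; this.y = y; }
    // "Polymorphic constructor": each concrete class knows how to copy itself.
    abstract Shape cloneShape();
}

class Circle extends Shape {
    int radius;
    Circle(int x, int y, int radius) { super(x, y); this.radius = radius; }
    Shape cloneShape() { return new Circle(x, y, radius); }
}

class PrototypeDemo {
    public static void main(String[] args) {
        Shape prototype = new Circle(1, 2, 10);
        // The client copies the prototype instead of naming a concrete class with "new".
        Shape copy = prototype.cloneShape();
        System.out.println(copy instanceof Circle); // true
    }
}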
The mitotic division of a cell — resulting in two identical cells — is an example of a prototype that plays an active role in copying itself and thus, demonstrates the Prototype pattern. When a cell splits, two cells of identical genotype result. In other words, the cell clones itself.
Overview
The prototype design pattern is one of the 23 Gang of Four design patterns that describe how to solve recurring design problems to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse.
The prototype design pattern solves problems like:
How can objects be created so that which objects to create can be specified at run-time?
How can dynamically loaded classes be instantiated?
Creating objects directly within the class that requires (uses) the objects is inflexible because it commits the class to particular objects at compile-time and |
https://en.wikipedia.org/wiki/Composite%20pattern | In software engineering, the composite pattern is a partitioning design pattern. The composite pattern describes a group of objects that are treated the same way as a single instance of the same type of object. The intent of a composite is to "compose" objects into tree structures to represent part-whole hierarchies. Implementing the composite pattern lets clients treat individual objects and compositions uniformly.
Overview
The Composite design pattern is one of the twenty-three well-known GoF design patterns that describe how to solve recurring design problems to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse.
What problems can the Composite design pattern solve?
A part-whole hierarchy should be represented so that clients can treat part and whole objects uniformly.
A part-whole hierarchy should be represented as a tree structure.
When defining (1) Part objects and (2) Whole objects that act as containers for Part objects, clients must treat them separately, which complicates client code.
What solution does the Composite design pattern describe?
Define a unified Component interface for both part (Leaf) objects and whole (Composite) objects.
Individual Leaf objects implement the Component interface directly, and Composite objects forward requests to their child components.
This enables clients to work through the Component interface to treat Leaf and Composite objects uniformly: Leaf objects perform a request directly, and Composite objects forward the request to their child components recursively down the tree structure.
This makes client classes easier to implement, change, test, and reuse.
See also the UML class and object diagram below.
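A minimal Java sketch of the Component/Leaf/Composite roles; the Graphic, Dot and CompositeGraphic names are illustrative assumptions:
import java.util.ArrayList;
import java.util.List;

interface Graphic {                    // Component
    void draw(String indent);
}

class Dot implements Graphic {         // Leaf: handles the request directly
    public void draw(String indent) { System.out.println(indent + "Dot"); }
}

class CompositeGraphic implements Graphic {       // Composite: forwards to children
    private final List<Graphic> children = new ArrayList<>();
    void add(Graphic child) { children.add(child); }
    public void draw(String indent) {
        System.out.println(indent + "Composite");
        for (Graphic child : children) {
            child.draw(indent + "  ");   // recursive forwarding down the tree
        }
    }
}

class CompositeDemo {
    public static void main(String[] args) {
        CompositeGraphic root = new CompositeGraphic();
        root.add(new Dot());
        CompositeGraphic branch = new CompositeGraphic();
        branch.add(new Dot());
        root.add(branch);
        root.draw("");   // the client treats leaves and composites uniformly
    }
}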
Motivation
When dealing with Tree-structured data, programmers often have to discriminate between a leaf-node and a branch. This makes code more complex, and therefore, more error prone. The solution is an interface that allows t |
https://en.wikipedia.org/wiki/Decorator%20pattern | In object-oriented programming, the decorator pattern is a design pattern that allows behavior to be added to an individual object, dynamically, without affecting the behavior of other objects from the same class. The decorator pattern is often useful for adhering to the Single Responsibility Principle, as it allows functionality to be divided between classes with unique areas of concern as well as to the Open-Closed Principle, by allowing the functionality of a class to be extended without being modified. Decorator use can be more efficient than subclassing, because an object's behavior can be augmented without defining an entirely new object.
Overview
The decorator design pattern is one of the twenty-three well-known design patterns; these describe how to solve recurring design problems and design flexible and reusable object-oriented software—that is, objects which are easier to implement, change, test, and reuse.
What problems can it solve?
Responsibilities should be added to (and removed from) an object dynamically at run-time.
A flexible alternative to subclassing for extending functionality should be provided.
When using subclassing, different subclasses extend a class in different ways. But an extension is bound to the class at compile-time and can't be changed at run-time.
What solution does it describe?
Define Decorator objects that
implement the interface of the extended (decorated) object (Component) transparently by forwarding all requests to it
perform additional functionality before/after forwarding a request.
This allows working with different Decorator objects to extend the functionality of an object dynamically at run-time.
See also the UML class and sequence diagram below.
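A small Java sketch of run-time decoration; the Beverage/Coffee/Milk names are invented for illustration:
interface Beverage {                       // Component interface
    double cost();
    String description();
}

class Coffee implements Beverage {         // Concrete component
    public double cost() { return 2.0; }
    public String description() { return "coffee"; }
}

abstract class BeverageDecorator implements Beverage {
    protected final Beverage inner;        // decorators forward to the wrapped object
    BeverageDecorator(Beverage inner) { this.inner = inner; }
}

class Milk extends BeverageDecorator {
    Milk(Beverage inner) { super(inner); }
    public double cost() { return inner.cost() + 0.5; }            // extra behaviour added
    public String description() { return inner.description() + " + milk"; }
}

class DecoratorDemo {
    public static void main(String[] args) {
        Beverage order = new Milk(new Coffee());  // composed at run-time
        System.out.println(order.description() + " costs " + order.cost());
    }
}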
Intent
The decorator pattern can be used to extend (decorate) the functionality of a certain object statically, or in some cases at run-time, independently of other instances of the same class, provided some groundwork is done at design time. This is achieved by |
https://en.wikipedia.org/wiki/Proxy%20pattern | In computer programming, the proxy pattern is a software design pattern. A proxy, in its most general form, is a class functioning as an interface to something else. The proxy could interface to anything: a network connection, a large object in memory, a file, or some other resource that is expensive or impossible to duplicate. In short, a proxy is a wrapper or agent object that is being called by the client to access the real serving object behind the scenes. Use of the proxy can simply be forwarding to the real object, or can provide additional logic. In the proxy, extra functionality can be provided, for example caching when operations on the real object are resource intensive, or checking preconditions before operations on the real object are invoked. For the client, usage of a proxy object is similar to using the real object, because both implement the same interface.
Overview
The Proxy design pattern is one of the twenty-three well-known GoF design patterns that describe how to solve recurring design problems to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse.
What problems can the Proxy design pattern solve?
The access to an object should be controlled.
Additional functionality should be provided when accessing an object.
When accessing sensitive objects, for example, it should be possible to check that clients have the needed access rights.
What solution does the Proxy design pattern describe?
Define a separate Proxy object that
can be used as substitute for another object (Subject) and
implements additional functionality to control the access to this subject.
This makes it possible to work through a Proxy object to perform additional functionality when accessing a subject. For example, to check the access rights of clients accessing a sensitive object.
To act as substitute for a subject, a proxy must implement the Subject interface.
Clients can't tell whether th |
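A hedged Java sketch of a virtual proxy that defers creating an expensive subject until first use; the Image/RealImage/ImageProxy names are illustrative assumptions:
interface Image {                          // Subject interface
    void display();
}

class RealImage implements Image {         // Real subject (expensive to create)
    private final String fileName;
    RealImage(String fileName) {
        this.fileName = fileName;
        System.out.println("Loading " + fileName);   // costly work happens here
    }
    public void display() { System.out.println("Displaying " + fileName); }
}

class ImageProxy implements Image {        // Proxy implements the same interface
    private final String fileName;
    private RealImage real;                // created lazily, on first use
    ImageProxy(String fileName) { this.fileName = fileName; }
    public void display() {
        if (real == null) {
            real = new RealImage(fileName);
        }
        real.display();                    // forward the request to the real subject
    }
}

class ProxyDemo {
    public static void main(String[] args) {
        Image image = new ImageProxy("photo.png");
        image.display();   // loads on first access
        image.display();   // reuses the already-loaded subject
    }
}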
https://en.wikipedia.org/wiki/Command%20pattern | In object-oriented programming, the command pattern is a behavioral design pattern in which an object is used to encapsulate all information needed to perform an action or trigger an event at a later time. This information includes the method name, the object that owns the method and values for the method parameters.
Four terms always associated with the command pattern are command, receiver, invoker and client. A command object knows about the receiver and invokes a method of the receiver. Values for parameters of the receiver method are stored in the command. The receiver object to execute these methods is also stored in the command object by aggregation. The receiver then does the work when the execute() method in the command is called. An invoker object knows how to execute a command, and optionally does bookkeeping about the command execution. The invoker does not know anything about a concrete command; it knows only about the command interface. Invoker object(s), command objects and receiver objects are held by a client object; the client decides which receiver objects it assigns to the command objects and which commands it assigns to the invoker. The client decides which commands to execute at which points. To execute a command, it passes the command object to the invoker object.
Using command objects makes it easier to construct general components that need to delegate, sequence or execute method calls at a time of their choosing without the need to know the class of the method or the method parameters. Using an invoker object allows bookkeeping about command executions to be conveniently performed, as well as implementing different modes for commands, which are managed by the invoker object, without the need for the client to be aware of the existence of bookkeeping or modes.
The central ideas of this design pattern closely mirror the semantics of first-class functions and higher-order functions in functional programming languages. Specifically, the invoker o |
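A minimal Java sketch of the four roles; the Light (receiver), TurnOnCommand (command) and Switch (invoker) names are assumptions made for this illustration:
import java.util.ArrayList;
import java.util.List;

interface Command {
    void execute();
}

class Light {                               // Receiver: does the actual work
    void turnOn()  { System.out.println("Light on"); }
    void turnOff() { System.out.println("Light off"); }
}

class TurnOnCommand implements Command {    // Command stores its receiver and parameters
    private final Light light;
    TurnOnCommand(Light light) { this.light = light; }
    public void execute() { light.turnOn(); }
}

class Switch {                              // Invoker: knows only the Command interface
    private final List<Command> history = new ArrayList<>();
    void submit(Command command) {
        history.add(command);               // optional bookkeeping about executions
        command.execute();
    }
}

class CommandDemo {                         // Client wires receiver, command and invoker together
    public static void main(String[] args) {
        Light light = new Light();
        Switch invoker = new Switch();
        invoker.submit(new TurnOnCommand(light));
    }
}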
https://en.wikipedia.org/wiki/Iterator%20pattern | In object-oriented programming, the iterator pattern is a design pattern in which an iterator is used to traverse a container and access the container's elements. The iterator pattern decouples algorithms from containers; in some cases, algorithms are necessarily container-specific and thus cannot be decoupled.
For example, the hypothetical algorithm SearchForElement can be implemented generally using a specified type of iterator rather than implementing it as a container-specific algorithm. This allows SearchForElement to be used on any container that supports the required type of iterator.
Overview
The Iterator design pattern is one of the twenty-three well-known GoF design patterns that describe how to solve recurring design problems to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse.
What problems can the Iterator design pattern solve?
The elements of an aggregate object should be accessed and traversed without exposing its representation (data structures).
New traversal operations should be defined for an aggregate object without changing its interface.
Defining access and traversal operations in the aggregate interface is inflexible because it commits the aggregate to particular access and traversal operations and makes it impossible to add new operations later without having to change the aggregate interface.
What solution does the Iterator design pattern describe?
Define a separate (iterator) object that encapsulates accessing and traversing an aggregate object.
Clients use an iterator to access and traverse an aggregate without knowing its representation (data structures).
Different iterators can be used to access and traverse an aggregate in different ways.
New access and traversal operations can be defined independently by defining new iterators.
See also the UML class and sequence diagram below.
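A short Java sketch; the WordBag aggregate is an invented example, and it reuses the java.util.Iterator and Iterable interfaces that Java already provides for this pattern:
import java.util.Iterator;

class WordBag implements Iterable<String> {     // Aggregate
    private final String[] words;
    WordBag(String... words) { this.words = words; }

    // The aggregate hands out an iterator; its internal representation
    // (here an array) stays hidden from clients.
    public Iterator<String> iterator() {
        return new Iterator<String>() {
            private int index = 0;
            public boolean hasNext() { return index < words.length; }
            public String next()     { return words[index++]; }
        };
    }
}

class IteratorDemo {
    public static void main(String[] args) {
        for (String word : new WordBag("one", "two", "three")) {
            System.out.println(word);   // traversal without knowing the data structure
        }
    }
}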
Definition
The essence of the Iterator Pattern is to "Provide a w |
https://en.wikipedia.org/wiki/Interpreter%20pattern | In computer programming, the interpreter pattern is a design pattern that specifies how to evaluate sentences in a language.
The basic idea is to have a class for each symbol (terminal or nonterminal) in a specialized computer language. The syntax tree of a sentence in the language is an instance of the composite pattern and is used to evaluate (interpret) the sentence for a client. See also Composite pattern.
Overview
The Interpreter design pattern is one of the twenty-three well-known GoF design patterns that describe how to solve recurring design problems to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse.
What problems can the Interpreter design pattern solve?
A grammar for a simple language should be defined so that sentences in the language can be interpreted.
When a problem occurs very often, it could be considered to represent it as a sentence in a simple language (Domain Specific Languages) so that an interpreter can solve the problem by interpreting the sentence.
For example, when many different or complex search expressions must be specified.
Implementing (hard-wiring) them directly into a class is inflexible because it commits the class to particular expressions and makes it impossible to specify new expressions or change existing ones independently from (without having to change) the class.
What solution does the Interpreter design pattern describe?
Define a grammar for a simple language by defining an Expression class hierarchy and implementing an interpret() operation.
Represent a sentence in the language by an abstract syntax tree (AST) made up of Expression instances.
Interpret a sentence by calling interpret() on the AST.
The expression objects are composed recursively into a composite/tree structure that is called abstract syntax tree (see Composite pattern).
The Interpreter pattern doesn't describe how to build an abstract syntax tree. This can be done eit
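A hedged Java sketch for a toy grammar of integer addition; the Expression/NumberExpr/PlusExpr names are assumptions, and the syntax tree is simply built by hand here since the pattern itself does not prescribe how the tree is constructed:
interface Expression {
    int interpret();
}

class NumberExpr implements Expression {        // Terminal expression
    private final int value;
    NumberExpr(int value) { this.value = value; }
    public int interpret() { return value; }
}

class PlusExpr implements Expression {          // Nonterminal expression
    private final Expression left, right;
    PlusExpr(Expression left, Expression right) { this.left = left; this.right = right; }
    public int interpret() { return left.interpret() + right.interpret(); }
}

class InterpreterDemo {
    public static void main(String[] args) {
        // Abstract syntax tree for the sentence "1 + (2 + 3)", built by hand.
        Expression ast = new PlusExpr(new NumberExpr(1),
                new PlusExpr(new NumberExpr(2), new NumberExpr(3)));
        System.out.println(ast.interpret());    // prints 6
    }
}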
https://en.wikipedia.org/wiki/Mediator%20pattern | In software engineering, the mediator pattern defines an object that encapsulates how a set of objects interact. This pattern is considered to be a behavioral pattern due to the way it can alter the program's running behavior.
In object-oriented programming, programs often consist of many classes. Business logic and computation are distributed among these classes. However, as more classes are added to a program, especially during maintenance and/or refactoring, the problem of communication between these classes may become more complex. This makes the program harder to read and maintain. Furthermore, it can become difficult to change the program, since any change may affect code in several other classes.
With the mediator pattern, communication between objects is encapsulated within a mediator object. Objects no longer communicate directly with each other, but instead communicate through the mediator. This reduces the dependencies between communicating objects, thereby reducing coupling.
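A small Java sketch of colleagues communicating only through a mediator; the ChatRoom/Participant names are invented for illustration:
import java.util.ArrayList;
import java.util.List;

class ChatRoom {                                   // Mediator
    private final List<Participant> participants = new ArrayList<>();
    void register(Participant p) { participants.add(p); }
    void broadcast(Participant sender, String message) {
        for (Participant p : participants) {
            if (p != sender) {
                p.receive(sender.name + ": " + message);
            }
        }
    }
}

class Participant {                                // Colleague
    final String name;
    private final ChatRoom room;
    Participant(String name, ChatRoom room) {
        this.name = name;
        this.room = room;
        room.register(this);
    }
    // Colleagues talk only to the mediator, never to each other directly.
    void send(String message) { room.broadcast(this, message); }
    void receive(String message) { System.out.println(name + " got: " + message); }
}

class MediatorDemo {
    public static void main(String[] args) {
        ChatRoom room = new ChatRoom();
        Participant alice = new Participant("Alice", room);
        Participant bob = new Participant("Bob", room);
        alice.send("Hello");      // delivered to Bob via the mediator
        bob.send("Hi back");      // delivered to Alice via the mediator
    }
}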
Overview
The mediator design pattern is one of the twenty-three well-known design patterns that describe how to solve recurring design problems to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse.
Problems that the mediator design pattern can solve
Tight coupling between a set of interacting objects should be avoided.
It should be possible to change the interaction between a set of objects independently.
Defining a set of interacting objects by accessing and updating each other directly is inflexible because it tightly couples the objects to each other and makes it impossible to change the interaction independently from (without having to change) the objects.
It also stops the objects from being reusable and makes them hard to test.
Tightly coupled objects are hard to implement, change, test, and reuse because they refer to and know about many different objects.
Solutions described by the mediator de |
https://en.wikipedia.org/wiki/Observer%20pattern | In software design and engineering, the observer pattern is a software design pattern in which an object, named the subject, maintains a list of its dependents, called observers, and notifies them automatically of any state changes, usually by calling one of their methods.
It is often used for implementing distributed event-handling systems in event-driven software. In such systems, the subject is usually named a "stream of events" or "stream source of events" while the observers are called "sinks of events." The stream nomenclature alludes to a physical setup in which the observers are physically separated and have no control over the emitted events from the subject/stream source. This pattern thus suits any process by which data arrives from some input that is not available to the CPU at startup, but instead arrives seemingly at random (HTTP requests, GPIO data, user input from peripherals, distributed databases and blockchains, etc.).
Overview
The observer design pattern is a behavioural pattern listed among the 23 well-known "Gang of Four" design patterns that address recurring design challenges in order to design flexible and reusable object-oriented software, yielding objects that are easier to implement, change, test and reuse.
Which problems can the observer design pattern solve?
The observer pattern addresses the following problems:
A one-to-many dependency between objects should be defined without making the objects tightly coupled.
When one object changes state, an open-ended number of dependent objects should be updated automatically.
An object can notify multiple other objects.
Defining a one-to-many dependency between objects by defining one object (subject) that updates the state of dependent objects directly is inflexible because it couples the subject to particular dependent objects. However, it might be applicable from a performance point of view or if the object implementation is tightly coupled (such as low-level kernel structures that ex |
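A minimal Java sketch of a subject notifying its observers; the NewsFeed/Subscriber names are illustrative assumptions:
import java.util.ArrayList;
import java.util.List;

interface Observer {
    void update(String event);
}

class NewsFeed {                                    // Subject
    private final List<Observer> observers = new ArrayList<>();
    void attach(Observer o) { observers.add(o); }
    void publish(String event) {
        for (Observer o : observers) {
            o.update(event);                        // automatic notification of dependents
        }
    }
}

class Subscriber implements Observer {
    private final String name;
    Subscriber(String name) { this.name = name; }
    public void update(String event) { System.out.println(name + " received: " + event); }
}

class ObserverDemo {
    public static void main(String[] args) {
        NewsFeed feed = new NewsFeed();
        feed.attach(new Subscriber("sink-1"));
        feed.attach(new Subscriber("sink-2"));
        feed.publish("state changed");              // one-to-many update
    }
}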
https://en.wikipedia.org/wiki/State%20pattern | The state pattern is a behavioral software design pattern that allows an object to alter its behavior when its internal state changes. This pattern is close to the concept of finite-state machines. The state pattern can be interpreted as a strategy pattern, which is able to switch a strategy through invocations of methods defined in the pattern's interface.
The state pattern is used in computer programming to encapsulate varying behavior for the same object, based on its internal state. This can be a cleaner way for an object to change its behavior at runtime without resorting to conditional statements and thus improve maintainability.
Overview
The state design pattern is one of twenty-three design patterns documented by the Gang of Four that describe how to solve recurring design problems. Such problems cover the design of flexible and reusable object-oriented software, such as objects that are easy to implement, change, test, and reuse.
The state pattern is set to solve two main problems:
An object should change its behavior when its internal state changes.
State-specific behavior should be defined independently. That is, adding new states should not affect the behavior of existing states.
Implementing state-specific behavior directly within a class is inflexible because it commits the class to a particular behavior and makes it impossible to add a new state or change the behavior of an existing state later, independently from the class, without changing the class. In this, the pattern describes two solutions:
Define separate (state) objects that encapsulate state-specific behavior for each state. That is, define an interface (state) for performing state-specific behavior, and define classes that implement the interface for each state.
A class delegates state-specific behavior to its current state object instead of implementing state-specific behavior directly.
This makes a class independent of how state-specific behavior is implemented. New states can b |
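A compact Java sketch of a context delegating to interchangeable state objects; the traffic-light example and its names are assumptions made for illustration:
interface LightState {
    LightState next();          // state-specific behaviour lives in the state objects
    String colour();
}

class Red implements LightState {
    public LightState next() { return new Green(); }
    public String colour()   { return "red"; }
}

class Green implements LightState {
    public LightState next() { return new Red(); }
    public String colour()   { return "green"; }
}

class TrafficLight {
    // The context delegates to its current state object instead of
    // branching on an internal flag with conditional statements.
    private LightState state = new Red();
    void advance() { state = state.next(); }
    String colour() { return state.colour(); }
}

class StateDemo {
    public static void main(String[] args) {
        TrafficLight light = new TrafficLight();
        System.out.println(light.colour());  // red
        light.advance();
        System.out.println(light.colour());  // green
    }
}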
https://en.wikipedia.org/wiki/Strategy%20pattern | In computer programming, the strategy pattern (also known as the policy pattern) is a behavioral software design pattern that enables selecting an algorithm at runtime. Instead of implementing a single algorithm directly, code receives run-time instructions as to which in a family of algorithms to use.
Strategy lets the algorithm vary independently from clients that use it. Strategy is one of the patterns included in the influential book Design Patterns by Gamma et al. that popularized the concept of using design patterns to describe how to design flexible and reusable object-oriented software. Deferring the decision about which algorithm to use until runtime allows the calling code to be more flexible and reusable.
For instance, a class that performs validation on incoming data may use the strategy pattern to select a validation algorithm depending on the type of data, the source of the data, user choice, or other discriminating factors. These factors are not known until run-time and may require radically different validation to be performed. The validation algorithms (strategies), encapsulated separately from the validating object, may be used by other validating objects in different areas of the system (or even different systems) without code duplication.
Typically, the strategy pattern stores a reference to some code in a data structure and retrieves it. This can be achieved by mechanisms such as the native function pointer, the first-class function, classes or class instances in object-oriented programming languages, or accessing the language implementation's internal storage of code via reflection.
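A short Java sketch along the lines of the validation example above; the ValidationStrategy/Validator names are illustrative assumptions:
interface ValidationStrategy {
    boolean isValid(String input);
}

class NumericValidation implements ValidationStrategy {
    public boolean isValid(String input) { return input.matches("\\d+"); }
}

class NonEmptyValidation implements ValidationStrategy {
    public boolean isValid(String input) { return !input.isEmpty(); }
}

class Validator {                       // Context: the algorithm is chosen at run-time
    private final ValidationStrategy strategy;
    Validator(ValidationStrategy strategy) { this.strategy = strategy; }
    boolean validate(String input) { return strategy.isValid(input); }
}

class StrategyDemo {
    public static void main(String[] args) {
        Validator numeric = new Validator(new NumericValidation());
        Validator nonEmpty = new Validator(new NonEmptyValidation());
        System.out.println(numeric.validate("123"));   // true
        System.out.println(nonEmpty.validate(""));     // false
    }
}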
Structure
UML class and sequence diagram
In the above UML class diagram, the Context class doesn't implement an algorithm directly.
Instead, Context refers to the Strategy interface for performing an algorithm (strategy.algorithm()), which makes Context independent of how an algorithm is implemented.
The Strategy1 and Strategy2 classes implement the Strategy |
https://en.wikipedia.org/wiki/Template%20method%20pattern | In object-oriented programming, the template method is one of the behavioral design patterns identified by Gamma et al. in the book Design Patterns. The template method is a method in a superclass, usually an abstract superclass, and defines the skeleton of an operation in terms of a number of high-level steps. These steps are themselves implemented by additional helper methods in the same class as the template method.
The helper methods may be either abstract methods, in which case subclasses are required to provide concrete implementations, or hook methods, which have empty bodies in the superclass. Subclasses can (but are not required to) customize the operation by overriding the hook methods. The intent of the template method is to define the overall structure of the operation, while allowing subclasses to refine, or redefine, certain steps.
Overview
This pattern has two main parts:
The "template method" is implemented as a method in a base class (usually an abstract class). This method contains code for the parts of the overall algorithm that are invariant. The template ensures that the overarching algorithm is always followed. In the template method, portions of the algorithm that may vary are implemented by sending self messages that request the execution of additional helper methods. In the base class, these helper methods are given a default implementation, or none at all (that is, they may be abstract methods).
Subclasses of the base class "fill in" the empty or "variant" parts of the "template" with specific algorithms that vary from one subclass to another. It is important that subclasses do not override the template method itself.
At run-time, the algorithm represented by the template method is executed by sending the template message to an instance of one of the concrete subclasses. Through inheritance, the template method in the base class starts to execute. When the template method sends a message to self requesting one of the helper |
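A minimal Java sketch; the report-generation example and its names are assumptions made for illustration:
abstract class ReportGenerator {
    // The template method fixes the invariant skeleton of the algorithm;
    // subclasses must not override it.
    public final String generate() {
        return header() + body() + footer();
    }

    // Hook methods with default implementations.
    protected String header() { return "== report ==\n"; }
    protected String footer() { return "== end ==\n"; }

    // Abstract helper step: subclasses fill in the variant part.
    protected abstract String body();
}

class SalesReport extends ReportGenerator {
    protected String body() { return "sales figures...\n"; }
}

class TemplateMethodDemo {
    public static void main(String[] args) {
        System.out.println(new SalesReport().generate());
    }
}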
https://en.wikipedia.org/wiki/Balking%20pattern | The balking pattern is a software design pattern that only executes an action on an object when the object is in a particular state. For example, if an object reads ZIP files and a calling method invokes a get method on the object when the ZIP file is not open, the object would "balk" at the request. In the Java programming language, for example, an IllegalStateException might be thrown under these circumstances.
Some specialists in this field consider balking more of an anti-pattern than a design pattern. If an object cannot support its API, it should either limit the API so that the offending call is not available, or ensure that the call can be made without restriction. It should:
Be created in a "sane state";
not make itself available until it is in a sane state;
become a facade and answer back an object that is in a sane state.
Usage
Objects that use this pattern are generally only in a state that is prone to balking temporarily but for an unknown amount of time. If objects are to remain in a state which is prone to balking for a known, finite period of time, then the guarded suspension pattern may be preferred.
Implementation
Below is a general, simple example of an implementation of the balking pattern. Note how the synchronized block is used: if there are multiple calls to the job method, only one will proceed, while the other calls return without doing anything. Another thing to note is the jobCompleted() method. It is synchronized because the only way to guarantee that another thread will see a change to a field is to synchronize all access to it. Since jobInProgress is a boolean variable, it could instead be left not explicitly synchronized and only declared volatile, to guarantee that another thread will not read an obsolete cached value.
public class Example {
    private boolean jobInProgress = false;
    public void job() {
        synchronized (this) {
            if (jobInProgress) { return; }  // balk: a job is already running
            jobInProgress = true;
        }
        // ... do the actual work, then mark completion:
        jobCompleted();
    }
    private synchronized void jobCompleted() { jobInProgress = false; }
}
https://en.wikipedia.org/wiki/Guarded%20suspension | In concurrent programming, guarded suspension is a software design pattern for managing operations that require both a lock to be acquired and a precondition to be satisfied before the operation can be executed. The guarded suspension pattern is typically applied to method calls in object-oriented programs, and involves suspending the method call, and the calling thread, until the precondition (acting as a guard) is satisfied.
Usage
Because it is blocking, the guarded suspension pattern is generally only used when the developer knows that a method call will be suspended for a finite and reasonable period of time. If a method call is suspended for too long, then the overall program will slow down or stop, waiting for the precondition to be satisfied. If the developer knows that the method call suspension will be indefinite or for an unacceptably long period, then the balking pattern may be preferred.
Implementation
In Java, the Object class provides the wait() and notify() methods to assist with guarded suspension. In the implementation below, if the precondition for a successful method call is not yet satisfied, the method waits until the object finally enters a valid state.
public class Example {
    // Placeholder guard state and precondition, added so the example compiles;
    // a real class would expose whatever condition the method must wait for.
    private boolean conditionSatisfied = false;

    private boolean preCondition() {
        return conditionSatisfied;
    }

    synchronized void guardedMethod() {
        while (!preCondition()) {
            try {
                // Continue to wait
                wait();
                // …
            } catch (InterruptedException e) {
                // …
            }
        }
        // Actual task implementation
    }

    synchronized void alterObjectStateMethod() {
        // Change the object state
        conditionSatisfied = true;
        // …
        // Inform waiting threads
        notify();
    }
}
An example of an actual implementation would be a queue object with a get method that has a guard to detect when there are no items in the queue. Once the put method notifies the other methods (for example, a get method), then the get method can exit its guarded state and proceed wit |
https://en.wikipedia.org/wiki/Double-checked%20locking | In software engineering, double-checked locking (also known as "double-checked locking optimization") is a software design pattern used to reduce the overhead of acquiring a lock by testing the locking criterion (the "lock hint") before acquiring the lock. Locking occurs only if the locking criterion check indicates that locking is required.
The original form of the pattern, appearing in Pattern Languages of Program Design 3, has data races, depending on the memory model in use, and it is hard to get right. Some consider it to be an anti-pattern. There are valid forms of the pattern, including the use of the volatile keyword in Java and explicit memory barriers in C++.
The pattern is typically used to reduce locking overhead when implementing "lazy initialization" in a multi-threaded environment, especially as part of the Singleton pattern. Lazy initialization avoids initializing a value until the first time it is accessed.
Motivation and original pattern
Consider, for example, this code segment in the Java programming language:
// Single-threaded version
class Foo {
    private static Helper helper;
    public Helper getHelper() {
        if (helper == null) {
            helper = new Helper();
        }
        return helper;
    }
    // other functions and members...
}
The problem is that this does not work when using multiple threads. A lock must be obtained in case two threads call getHelper() simultaneously. Otherwise, either they may both try to create the object at the same time, or one may wind up getting a reference to an incompletely initialized object.
Synchronizing with a lock can fix this, as is shown in the following example:
// Correct but possibly expensive multithreaded version
class Foo {
    private Helper helper;
    public synchronized Helper getHelper() {
        if (helper == null) {
            helper = new Helper();
        }
        return helper;
    }
    // other functions and members...
}
This is correct and will most likely have |
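For reference, a hedged sketch of one valid Java form mentioned above, which marks the field volatile (Java 5 memory model or later) so the unsynchronized first check cannot observe a partially constructed object; the local-variable detail is a common performance refinement and not necessarily the exact listing the article goes on to give:
// Double-checked locking with a volatile field (valid under the Java 5+ memory model)
class Foo {
    private volatile Helper helper;

    public Helper getHelper() {
        Helper localRef = helper;            // first (unsynchronized) check
        if (localRef == null) {
            synchronized (this) {
                localRef = helper;           // second check, under the lock
                if (localRef == null) {
                    helper = localRef = new Helper();
                }
            }
        }
        return localRef;
    }
}

class Helper { /* placeholder for the expensive-to-build object */ }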
https://en.wikipedia.org/wiki/Three-domain%20system | The three-domain system is a biological classification introduced by Carl Woese, Otto Kandler, and Mark Wheelis in 1990 that divides cellular life forms into three domains, namely Archaea, Bacteria, and Eukarya. The key difference from earlier classifications such as the two-empire system and the five-kingdom classification is the splitting of Archaea from Bacteria as completely different organisms. It has been challenged by the two-domain system that divides organisms into Bacteria and Archaea only, as Eukaryotes are considered as one group of Archaea.
Background
Woese argued, on the basis of differences in 16S rRNA genes, that bacteria, archaea, and eukaryotes each arose separately from an ancestor with poorly developed genetic machinery, often called a progenote. To reflect these primary lines of descent, he treated each as a domain, divided into several different kingdoms. Originally his split of the prokaryotes was into Eubacteria (now Bacteria) and Archaebacteria (now Archaea). Woese initially used the term "kingdom" to refer to the three primary phylogenic groupings, and this nomenclature was widely used until the term "domain" was adopted in 1990.
Acceptance of the validity of Woese's phylogenetically valid classification was a slow process. Prominent biologists including Salvador Luria and Ernst Mayr objected to his division of the prokaryotes. Not all criticism of him was restricted to the scientific level. A decade of labor-intensive oligonucleotide cataloging left him with a reputation as "a crank", and Woese would go on to be dubbed "Microbiology's Scarred Revolutionary" by a news article printed in the journal Science in 1997. The growing amount of supporting data led the scientific community to accept the Archaea by the mid-1980s. Today, very few scientists still accept the concept of a unified Prokarya.
Classification
The three-domain system adds a level of classification (the domains) "above" the kingdoms present in the previously used five- or |
https://en.wikipedia.org/wiki/Post-translational%20modification | Post-translational modification (PTM) is the covalent process of changing proteins following protein biosynthesis. PTMs may involve enzymes or occur spontaneously. Proteins are created by ribosomes translating mRNA into polypeptide chains, which may then change to form the mature protein product. PTMs are important components in cell signalling, as for example when prohormones are converted to hormones.
Post-translational modifications can occur on the amino acid side chains or at the protein's C- or N- termini. They can expand the chemical set of the 22 amino acids by changing an existing functional group or adding a new one such as phosphate. Phosphorylation is highly effective for controlling the enzyme activity and is the most common change after translation. Many eukaryotic and prokaryotic proteins also have carbohydrate molecules attached to them in a process called glycosylation, which can promote protein folding and improve stability as well as serving regulatory functions. Attachment of lipid molecules, known as lipidation, often targets a protein or part of a protein attached to the cell membrane.
Other forms of post-translational modification consist of cleaving peptide bonds, as in processing a propeptide to a mature form or removing the initiator methionine residue. The formation of disulfide bonds from cysteine residues may also be referred to as a post-translational modification. For instance, the peptide hormone insulin is cut twice after disulfide bonds are formed, and a propeptide is removed from the middle of the chain; the resulting protein consists of two polypeptide chains connected by disulfide bonds.
Some types of post-translational modification are consequences of oxidative stress. Carbonylation is one example that targets the modified protein for degradation and can result in the formation of protein aggregates. Specific amino acid modifications can be used as biomarkers indicating oxidative damage.
Sites that often undergo post-transl |
https://en.wikipedia.org/wiki/Software%20design%20pattern | In software engineering, a software design pattern is a general, reusable solution to a commonly occurring problem within a given context in software design. It is not a finished design that can be transformed directly into source or machine code. Rather, it is a description or template for how to solve a problem that can be used in many different situations. Design patterns are formalized best practices that the programmer can use to solve common problems when designing an application or system.
Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved. Patterns that imply mutable state may be unsuited for functional programming languages. Some patterns can be rendered unnecessary in languages that have built-in support for solving the problem they are trying to solve, and object-oriented patterns are not necessarily suitable for non-object-oriented languages.
Design patterns may be viewed as a structured approach to computer programming intermediate between the levels of a programming paradigm and a concrete algorithm.
History
Patterns originated as an architectural concept by Christopher Alexander as early as 1977 (c.f. "The Pattern of Streets," JOURNAL OF THE AIP, September, 1966, Vol. 32, No. 5, pp. 273–278). In 1987, Kent Beck and Ward Cunningham began experimenting with the idea of applying patterns to programming – specifically pattern languages – and presented their results at the OOPSLA conference that year. In the following years, Beck, Cunningham and others followed up on this work.
Design patterns gained popularity in computer science after the book Design Patterns: Elements of Reusable Object-Oriented Software was published in 1994 by the so-called "Gang of Four" (Gamma et al.), which is frequently abbreviated as "GoF". That same year, the first Pattern Languages of Programming Conference was held, and the following year th |
https://en.wikipedia.org/wiki/Purity%20test | A purity test is a self-graded survey that assesses the participants' supposed degree of innocence in worldly matters (sex, drugs, deceit, and other activities assumed to be vices), generally on a percentage scale with 100% being the most and 0% being the least pure. Online purity tests were among the earliest of Internet memes, popular on Usenet beginning in the early 1980s. However, similar types of tests circulated under various names long before the existence of the Internet.
Historical examples
The Columbia University humor magazine, The Jester, reported in its October 1935 issue on a campus wide "purity test" conducted at Barnard College in 1935. The issue of The Jester was briefly censored, with distribution curtailed until the director of activities at the university could review the article. According to the editor-in-chief of The Jester, "We printed the survey to clear up some of the misconceptions that Columbia and the outside world have about Barnard girls," he said. "The results seem to establish that Barnard girls are quite regular. I fail to see anything off-color in the story. It's a sociological study."
In 1936, The Indian Express reported that students at Toronto University were "under-going a 'purity test', which took "the form of twenty very personal questions, designed to determine the state of their morals and their 'purity ratio'. For example, so many marks are lost for smoking, drinking, and every time the sinner kisses a girl or boy. Then, after truthfully answering all the questions, the total number of bad marks are added up and subtracted from a hundred. What is left, if any, is the 'purity ratio'. The test is unofficial and just what it will prove when completed nobody knows."
Alan Dundes, a professor of anthropology and folklore at the University of California, Berkeley, and Carl R. Pagter included examples of purity tests in their 1975 book Work Hard and You Shall Be Rewarded: Urban Folklore from the Paperwork Empire. They noted, " |
https://en.wikipedia.org/wiki/Food%20coloring | Food coloring, or color additive, is any dye, pigment, or substance that imparts color when it is added to food or drink. They can be supplied as liquids, powders, gels, or pastes. Food coloring is used in both commercial food production and domestic cooking. Food colorants are also used in a variety of non-food applications, including cosmetics, pharmaceuticals, home craft projects, and medical devices. Colorings may be natural (e.g. anthocyanins, cochineal) or artificial/synthetic (e.g. tartrazine yellow).
Purpose of food coloring
People associate certain colors with certain flavors, and the color of food can influence the perceived flavor in anything from candy to wine. Sometimes, the aim is to simulate a color that is perceived by the consumer as natural, such as adding red coloring to glacé cherries (which would otherwise be beige), but sometimes it is for effect, like the green ketchup that Heinz launched in 2000. Color additives are used in foods for many reasons including:
To make food more attractive, appealing, appetizing, and informative
Offsetting color loss over time due to exposure to light, air, temperature extremes, moisture and storage conditions
Correcting natural variations in color
Enhancing colors that occur naturally
Providing color to colorless and "fun" foods
Allowing products to be identified on sight, like candy flavors or medicine dosages
Natural food dyes
History
The addition of colorants to foods is thought to have occurred in Egyptian cities as early as 1500 BC, when candy makers added natural extracts and wine to improve the products' appearance. During the Middle Ages, the economy in the European countries was based on agriculture, and the peasants were accustomed to producing their own food locally or trading within the village communities. Under feudalism, aesthetic aspects were not considered, at least not by the vast majority of the generally very poor population. This situation changed with urbanization at the beginning |
https://en.wikipedia.org/wiki/Ampere-turn | The ampere-turn (symbol A⋅t) is the MKS (metre–kilogram–second) unit of magnetomotive force (MMF), represented by a direct current of one ampere flowing in a single-turn loop in a vacuum. "Turns" refers to the winding number of an electrical conductor composing an inductor.
For example, a current of 2 A flowing through a coil of 10 turns produces an MMF of 20 A⋅t.
The corresponding physical quantity is N⋅I, the product of the number of turns, N, and the current, I; it has been used in industry, specifically, US-based coil-making industries.
By maintaining the same current and increasing the number of loops or turns of the coil, the strength of the magnetic field increases because each loop or turn of the coil sets up its own magnetic field. The magnetic field unites with the fields of the other loops to produce the field around the entire coil, making the total magnetic field stronger.
The strength of the magnetic field is not linearly related to the ampere-turns when a magnetic material is used as a part of the system. Also, the material within the magnet carrying the magnetic flux "saturates" at some point, after which adding more ampere-turns has little effect.
The ampere-turn corresponds to about 1.257 gilberts (4π/10 Gb), the gilbert being the corresponding CGS unit.
In Thomas Edison's laboratory Francis Upton was the lead mathematician. Trained with Helmholtz in Germany, he used weber as the name of the unit of current, modified to ampere later:
When conducting his investigations, Upton always noted the Weber turns and with his other data had all that was necessary to put the results of his work in proper form.
He discovered that a Weber turn (that is, an ampere turn) was a constant factor, a given number of which always produced the same effect magnetically.
See also
Inductance
Solenoid
References
Units of measurement
Magnetism |
https://en.wikipedia.org/wiki/Watt%20steam%20engine | The Watt steam engine design became synonymous with steam engines, and it was many years before significantly new designs began to replace the basic Watt design.
The first steam engines, introduced by Thomas Newcomen in 1712, were of the "atmospheric" design. At the end of the power stroke, the weight of the object being moved by the engine pulled the piston to the top of the cylinder as steam was introduced. Then the cylinder was cooled by a spray of water, which caused the steam to condense, forming a partial vacuum in the cylinder. Atmospheric pressure on the top of the piston pushed it down, lifting the work object. James Watt noticed that it required significant amounts of heat to warm the cylinder back up to the point where steam could enter the cylinder without immediately condensing. When the cylinder was warm enough that it became filled with steam the next power stroke could commence.
Watt realised that the heat needed to warm the cylinder could be saved by adding a separate condensing cylinder. After the power cylinder was filled with steam, a valve was opened to the secondary cylinder, allowing the steam to flow into it and be condensed, which drew the steam from the main cylinder causing the power stroke. The condensing cylinder was water cooled to keep the steam condensing. At the end of the power stroke, the valve was closed so the power cylinder could be filled with steam as the piston moved to the top. The result was the same cycle as Newcomen's design, but without any cooling of the power cylinder which was immediately ready for another stroke.
Watt worked on the design over a period of several years, introducing the condenser, and introducing improvements to practically every part of the design. Notably, Watt performed a lengthy series of trials on ways to seal the piston in the cylinder, which considerably reduced leakage during the power stroke, preventing power loss. All of these changes produced a more reliable design which used half as muc |
https://en.wikipedia.org/wiki/Runway | According to the International Civil Aviation Organization (ICAO), a runway is a "defined rectangular area on a land aerodrome prepared for the landing and takeoff of aircraft". Runways may be a human-made surface (often asphalt, concrete, or a mixture of both) or a natural surface (grass, dirt, gravel, ice, sand or salt). Runways, taxiways and ramps, are sometimes referred to as "tarmac", though very few runways are built using tarmac. Takeoff and landing areas defined on the surface of water for seaplanes are generally referred to as waterways. Runway lengths are now commonly given in meters worldwide, except in North America where feet are commonly used.
History
In 1916, in a World War I war effort context, the first concrete-paved runway was built in Clermont-Ferrand in France, allowing local company Michelin to manufacture Bréguet Aviation military aircraft.
In January 1919, aviation pioneer Orville Wright underlined the need for "distinctly marked and carefully prepared landing places, [but] the preparing of the surface of reasonably flat ground [is] an expensive undertaking [and] there would also be a continuous expense for the upkeep."
Headings
For fixed-wing aircraft, it is advantageous to perform takeoffs and landings into the wind to reduce takeoff or landing roll and reduce the ground speed needed to attain flying speed. Larger airports usually have several runways in different directions, so that one can be selected that is most nearly aligned with the wind. Airports with one runway are often constructed to be aligned with the prevailing wind. Compiling a wind rose is in fact one of the preliminary steps taken in constructing airport runways. Wind direction is given as the direction the wind is coming from: a plane taking off from runway 09 faces east, into an "east wind" blowing from 090°.
Originally in the 1920s and 1930s, airports and air bases (particularly in the United Kingdom) were built in a triangle-like pattern of three runways at 60° angl |
https://en.wikipedia.org/wiki/Inductance | Inductance is the tendency of an electrical conductor to oppose a change in the electric current flowing through it. The electric current produces a magnetic field around the conductor. The magnetic field strength depends on the magnitude of the electric current, and follows any changes in the magnitude of the current. From Faraday's law of induction, any change in magnetic field through a circuit induces an electromotive force (EMF) (voltage) in the conductors, a process known as electromagnetic induction. This induced voltage created by the changing current has the effect of opposing the change in current. This is stated by Lenz's law, and the voltage is called back EMF.
Inductance is defined as the ratio of the induced voltage to the rate of change of current causing it. It is a proportionality constant that depends on the geometry of circuit conductors (e.g., cross-section area and length) and the magnetic permeability of the conductor and nearby materials. An electronic component designed to add inductance to a circuit is called an inductor. It typically consists of a coil or helix of wire.
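Stated compactly (standard notation, not a formula quoted from this article): v(t) = L · di/dt, so L = v(t) / (di/dt). With v in volts and di/dt in amperes per second, L comes out in henries; for example, a coil across which 1 V appears while its current changes at 2 A/s has an inductance of 0.5 H.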
The term inductance was coined by Oliver Heaviside in May 1884, as a convenient way to refer to "coefficient of self-induction". It is customary to use the symbol L for inductance, in honour of the physicist Heinrich Lenz. In the SI system, the unit of inductance is the henry (H), which is the amount of inductance that causes a voltage of one volt, when the current is changing at a rate of one ampere per second. The unit is named for Joseph Henry, who discovered inductance independently of Faraday.
History
The history of electromagnetic induction, a facet of electromagnetism, began with observations of the ancients: electric charge or static electricity (rubbing silk on amber), electric current (lightning), and magnetic attraction (lodestone). Understanding the unity of these forces of nature, and the scientific theory of electromagnetism was initiated and achieved dur |
https://en.wikipedia.org/wiki/Application%20server | An application server is a server that hosts applications or software that delivers a business application through a communication protocol.
An application server framework is a service layer model. It includes software components available to a software developer through an application programming interface. An application server may have features such as clustering, fail-over, and load-balancing. The goal is for developers to focus on the business logic.
Java application servers
Jakarta EE (formerly Java EE or J2EE) defines the core set of API and features of Java application servers.
The Jakarta EE infrastructure is partitioned into logical containers.
EJB container: Enterprise Beans are used to manage transactions. According to the Java BluePrints, the business logic of an application resides in Enterprise Beans—a modular server component providing many features, including declarative transaction management, and improving application scalability.
Web container: the web modules include Jakarta Servlets and Jakarta Server Pages (JSP); a minimal servlet sketch follows this list.
JCA container (Jakarta Connectors)
JMS provider (Jakarta Messaging)
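To illustrate what the web container hosts, here is a minimal servlet sketch; the jakarta.servlet.* package names apply to Jakarta EE 9 and later, while the class name and the /hello URL pattern are invented for this example:
import java.io.IOException;
import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

// Deployed inside the web container; the container manages its lifecycle and threading.
@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        response.setContentType("text/plain");
        response.getWriter().println("Hello from the web container");
    }
}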
Commercial Java application servers have been dominated by WebLogic Application Server by Oracle, WebSphere Application Server from IBM and the open source JBoss Enterprise Application Platform (JBoss EAP) by Red Hat. Another example of a web server that can be used as an application server for the Java EE ecosystem is Apache Tomcat.
Microsoft
Microsoft positions the Windows Server operating system and the .NET Framework technologies as its middle-tier application and services infrastructure, in the role of an application server. The Windows Application Server role includes Internet Information Services (IIS) to provide web server support, the .NET Framework to provide application support, ASP.NET to provide server-side scripting, COM+ for application component communication, Message Queuing for multithreaded processing, and the Windows Communication |
https://en.wikipedia.org/wiki/Software%20configuration%20management | In software engineering, software configuration management (SCM or S/W CM) is the task of tracking and controlling changes in the software, part of the larger cross-disciplinary field of configuration management. SCM practices include revision control and the establishment of baselines. If something goes wrong, SCM can determine the "what, when, why and who" of the change. If a configuration is working well, SCM can determine how to replicate it across many hosts.
The acronym "SCM" is also expanded as source configuration management process and software change and configuration management. However, "configuration" is generally understood to cover changes typically made by a system administrator.
Purposes
The goals of SCM are generally:
Configuration identification - Identifying configurations, configuration items and baselines.
Configuration control - Implementing a controlled change process. This is usually achieved by setting up a change control board whose primary function is to approve or reject all change requests that are sent against any baseline.
Configuration status accounting - Recording and reporting all the necessary information on the status of the development process.
Configuration auditing - Ensuring that configurations contain all their intended parts and are sound with respect to their specifying documents, including requirements, architectural specifications and user manuals.
Build management - Managing the process and tools used for builds.
Process management - Ensuring adherence to the organization's development process.
Environment management - Managing the software and hardware that host the system.
Teamwork - Facilitating team interactions related to the process.
Defect tracking - Making sure every defect has traceability back to the source.
With the introduction of cloud computing and DevOps the purposes of SCM tools have become merged in some cases. The SCM tools themselves have become virtual appliances that can be instantiated as v |
https://en.wikipedia.org/wiki/BitKeeper | BitKeeper is a software tool for distributed revision control of computer source code. Originally developed as proprietary software by BitMover Inc., a privately held company based in Los Gatos, California, it was released as open-source software under the Apache-2.0 license on 9 May 2016. BitKeeper is no longer being developed.
History
BitKeeper was originally developed by BitMover Inc., a privately held company from Los Gatos, California owned by Larry McVoy, who had previously designed TeamWare.
BitKeeper and the Linux Kernel
BitKeeper was first mentioned as a solution to some of the growing pains that Linux was having in September 1998. Early access betas were available in May 1999 and on May 4, 2000, the first public release of BitKeeper was made available.
BitMover used to provide access to the system for certain open-source or free-software projects, one of which was the source code of the Linux kernel. The license for the "community" version of BitKeeper had allowed for developers to use the tool at no cost for open source or free software projects, provided those developers did not participate in the development of a competing tool (such as Concurrent Versions System, GNU arch, Subversion or ClearCase) for the duration of their usage of BitKeeper plus one year. This restriction applied regardless of whether the competing tool was free or proprietary. This version of BitKeeper also required that certain meta-information about changes be stored on computer servers operated by BitMover, an addition that made it impossible for community version users to run projects of which BitMover was unaware.
The decision made in 2002 to use BitKeeper for Linux kernel development was a controversial one. Some, including GNU Project founder Richard Stallman, expressed concern about proprietary tools being used on a flagship free project. While project leader Linus Torvalds and other core developers adopted BitKeeper, several key developers (including Linux veteran Al |
https://en.wikipedia.org/wiki/Pentium%20II | The Pentium II brand refers to Intel's sixth-generation microarchitecture ("P6") and x86-compatible microprocessors introduced on May 7, 1997. Containing 7.5 million transistors (27.4 million in the case of the mobile Dixon with 256 KB L2 cache), the Pentium II featured an improved version of the first P6-generation core of the Pentium Pro, which contained 5.5 million transistors. However, its L2 cache subsystem was a downgrade when compared to the Pentium Pro's. It is a single-core microprocessor.
In 1998, Intel stratified the Pentium II family by releasing the Pentium II-based Celeron line of processors for low-end computers and the Pentium II Xeon line for servers and workstations. The Celeron was characterized by a reduced or omitted (in some cases present but disabled) on-die full-speed L2 cache and a 66 MT/s FSB. The Xeon was characterized by a range of full-speed L2 cache (from 512 KB to 2048 KB), a 100 MT/s FSB, a different physical interface (Slot 2), and support for symmetric multiprocessing.
In February 1999, the Pentium II was replaced by the nearly identical Pentium III, which only added the then-new SSE instruction set. However, the older family would continue to be produced until June 2001 for desktop units, September 2001 for mobile units, and the end of 2003 for embedded devices.
Overview
The Pentium II microprocessor was largely based upon the microarchitecture of its predecessor, the Pentium Pro, but with some significant improvements.
Unlike previous Pentium and Pentium Pro processors, the Pentium II CPU was packaged in a slot-based module rather than a CPU socket. The processor and associated components were carried on a daughterboard similar to a typical expansion board within a plastic cartridge. A fixed or removable heatsink was carried on one side, sometimes using its own fan.
This larger package was a compromise allowing Intel to separate the secondary cache from the processor while still keeping it on a closely coupled back-side bus. T |
https://en.wikipedia.org/wiki/Pentium%20III | The Pentium III (marketed as Intel Pentium III Processor, informally PIII or P3) brand refers to Intel's 32-bit x86 desktop and mobile CPUs based on the sixth-generation P6 microarchitecture introduced on February 28, 1999. The brand's initial processors were very similar to the earlier Pentium II-branded processors. The most notable differences were the addition of the Streaming SIMD Extensions (SSE) instruction set (to accelerate floating point and parallel calculations), and the introduction of a controversial serial number embedded in the chip during manufacturing. The Pentium III is also a single-core processor.
Even after the release of the Pentium 4 in late 2000, the Pentium III continued to be produced with new models introduced up until early 2003. They were then discontinued in April 2004 for desktop units and May 2007 for mobile units.
Processor cores
Similarly to the Pentium II it superseded, the Pentium III was also accompanied by the Celeron brand for lower-end versions, and the Xeon for high-end (server and workstation) derivatives. The Pentium III was eventually superseded by the Pentium 4, but its Tualatin core also served as the basis for the Pentium M CPUs, which used many ideas from the P6 microarchitecture. Subsequently, it was the Pentium M microarchitecture of Pentium M branded CPUs, and not the NetBurst found in Pentium 4 processors, that formed the basis for Intel's energy-efficient Core microarchitecture of CPUs branded Core 2, Pentium Dual-Core, Celeron (Core), and Xeon.
Katmai
The first Pentium III variant was the Katmai (Intel product code 80525). It was a further development of the Deschutes Pentium II. The Pentium III saw an increase of 2 million transistors over the Pentium II. The differences were the addition of execution units and SSE instruction support, and an improved L1 cache controller (the L2 cache controller was left unchanged, as it would be fully redesigned for Coppermine anyway), which were responsible for the minor |
https://en.wikipedia.org/wiki/Pentium%20Pro | The Pentium Pro is a sixth-generation x86 microprocessor developed and manufactured by Intel and introduced on November 1, 1995. It introduced the P6 microarchitecture (sometimes termed i686) and was originally intended to replace the original Pentium in a full range of applications. While the Pentium and Pentium MMX had 3.1 and 4.5 million transistors, respectively, the Pentium Pro contained 5.5 million transistors. Later, it was reduced to a more narrow role as a server and high-end desktop processor and was used in supercomputers like ASCI Red, the first computer to reach the trillion floating point operations per second (teraFLOPS) performance mark in 1996. The Pentium Pro was capable of both dual- and quad-processor configurations. It only came in one form factor, the relatively large rectangular Socket 8. The Pentium Pro was succeeded by the Pentium II Xeon in 1998.
Microarchitecture
The lead architect of the Pentium Pro was Fred Pollack, who specialized in superscalarity and had also worked as the lead engineer of the Intel iAPX 432.
Summary
The Pentium Pro incorporated a new microarchitecture, different from the Pentium's P5 microarchitecture. It has a decoupled, 14-stage superpipelined architecture which uses an instruction pool.
The Pentium Pro (P6) implemented many radical architectural differences mirroring other contemporary x86 designs such as the NexGen Nx586 and Cyrix 6x86. The Pentium Pro pipeline had extra decode stages to dynamically translate IA-32 instructions into buffered micro-operation sequences which could then be analysed, reordered, and renamed in order to detect parallelizable operations that may be issued to more than one execution unit at once. The Pentium Pro thus featured out of order execution, including speculative execution via register renaming. It also had a wider 36-bit address bus, usable by Physical Address Extension (PAE), allowing it to access up to 64 GB of memory.
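The 64 GB figure is simply the size of a 36-bit physical address space (a back-of-the-envelope check, not taken from the article):
2^{36}\ \text{bytes} = 64 \times 2^{30}\ \text{bytes} = 64\ \text{GB}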
The Pentium Pro has an 8 KB instruction cache, from whi |
https://en.wikipedia.org/wiki/Time%20value%20of%20money | The time value of money is the widely accepted conjecture that there is greater benefit to receiving a sum of money now rather than an identical sum later. It may be seen as an implication of the later-developed concept of time preference.
The time value of money is among the factors considered when weighing the opportunity costs of spending rather than saving or investing money. As such, it is among the reasons why interest is paid or earned: interest, whether it is on a bank deposit or debt, compensates the depositor or lender for the loss of the use of their money. Investors are willing to forgo spending their money now only if they expect a favorable net return on their investment in the future, such that the increased value to be available later is sufficiently high to offset both the preference for spending money now and inflation (if present); see required rate of return.
History
The Talmud (~500 CE) recognizes the time value of money. In Tractate Makkos page 3a the Talmud discusses a case where witnesses falsely claimed that the term of a loan was 30 days when it was actually 10 years. The false witnesses must pay the difference of the value of the loan "in a situation where he would be required to give the money back (within) thirty days..., and that same sum in a situation where he would be required to give the money back (within) 10 years...The difference is the sum that the testimony of the (false) witnesses sought to have the borrower lose; therefore, it is the sum that they must pay."
The notion was later described by Martín de Azpilcueta (1491–1586) of the School of Salamanca.
Calculations
Time value of money problems involve the net value of cash flows at different points in time.
In a typical case, the variables might be: a balance (the real or nominal value of a debt or a financial asset in terms of monetary units), a periodic rate of interest, the number of periods, and a series of cash flows. (In the case of a debt, cash flows are payments |
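For instance, the present value of a series of future cash flows is found by discounting each flow back by the number of periods until it occurs. A minimal sketch follows; the 5% rate and the cash-flow amounts are invented for the example:
public class PresentValue {
    // Discounts each cash flow at a constant periodic rate i:
    // PV = sum over t of CF_t / (1 + i)^t, with t = 1..n counted in periods from today.
    static double presentValue(double[] cashFlows, double ratePerPeriod) {
        double pv = 0.0;
        for (int t = 1; t <= cashFlows.length; t++) {
            pv += cashFlows[t - 1] / Math.pow(1.0 + ratePerPeriod, t);
        }
        return pv;
    }

    public static void main(String[] args) {
        // Three yearly payments of 100 discounted at 5% per year.
        double[] payments = {100.0, 100.0, 100.0};
        System.out.printf("PV = %.2f%n", presentValue(payments, 0.05));  // about 272.32
    }
}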
https://en.wikipedia.org/wiki/Weighted%20average%20cost%20of%20capital | The weighted average cost of capital (WACC) is the rate that a company is expected to pay on average to all its security holders to finance its assets. The WACC is commonly referred to as the firm's cost of capital. Importantly, it is dictated by the external market and not by management. The WACC represents the minimum return that a company must earn on an existing asset base to satisfy its creditors, owners, and other providers of capital, or they will invest elsewhere.
Companies raise money from a number of sources: common stock, preferred stock and related rights, straight debt, convertible debt, exchangeable debt, employee stock options, pension liabilities, executive stock options, governmental subsidies, and so on. Different securities, which represent different sources of finance, are expected to generate different returns. The WACC is calculated taking into account the relative weights of each component of the capital structure. The more complex the company's capital structure, the more laborious it is to calculate the WACC.
Companies can use WACC to see if the investment projects available to them are worthwhile to undertake.
Calculation
In general, the WACC can be calculated with the following formula:
\text{WACC} = \frac{\sum_{i=1}^{N} r_i \, MV_i}{\sum_{i=1}^{N} MV_i}
where N is the number of sources of capital (securities, types of liabilities); r_i is the required rate of return for security i; and MV_i is the market value of all outstanding securities i.
In the case where the company is financed with only equity and debt, the average cost of capital is computed as follows:
\text{WACC} = \frac{D}{D+E}\,K_d + \frac{E}{D+E}\,K_e
where D is the total debt, E is the total shareholder's equity, K_d is the cost of debt, and K_e is the cost of equity. The market values of debt and equity should be used when computing the weights in the WACC formula.
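As a minimal sketch of this two-source case (the debt, equity, and cost-of-capital figures below are invented for illustration):
public class Wacc {
    // Weighted average cost of capital for a firm financed only by debt and equity,
    // weighting each cost by that source's share of total market value.
    static double wacc(double debt, double equity, double costOfDebt, double costOfEquity) {
        double total = debt + equity;
        return (debt / total) * costOfDebt + (equity / total) * costOfEquity;
    }

    public static void main(String[] args) {
        // 40 of debt at 6% and 60 of equity at 9% (market values) -> 0.4*0.06 + 0.6*0.09 = 7.8%
        System.out.printf("WACC = %.1f%%%n", 100 * wacc(40, 60, 0.06, 0.09));
    }
}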
Tax effects
Tax effects can be incorporated into this formula. For example, the WACC for a company financed by one type of shares with the total market value of E and cost of equity K_e and one type of bonds with the total market value of D and cost |
https://en.wikipedia.org/wiki/PARAM | PARAM is a series of Indian supercomputers designed and assembled by the Centre for Development of Advanced Computing (C-DAC) in Pune. PARAM means "supreme" in the Sanskrit language, whilst also creating an acronym for "PARAllel Machine". As of November 2022, the fastest machine in the series is the PARAM Siddhi AI, which ranks 120th in the world, with an Rpeak of 5.267 petaflops.
History
C-DAC was created in November 1987, originally as the Centre for Development of Advanced Computing Technology (C-DACT). This was in response to difficulties in purchasing supercomputers from foreign sources; the Indian Government decided to try to develop indigenous computing technology.
PARAM 8000
The PARAM 8000 was the first machine in the series and was built from scratch. A prototype was benchmarked at the "1990 Zurich Super-computing Show": of the machines that ran at the show it came second only to one from the United States.
A 64-node machine was delivered in August 1991. Each node used Inmos T800/T805 transputers. A 256-node machine had a theoretical performance of 1 GFLOPS; in practice it had a sustained performance of 100–200 MFLOPS. PARAM 8000 was a distributed memory MIMD architecture with a reconfigurable interconnection network.
The PARAM 8000 was noted to be 28 times more powerful than the Cray X-MP that the government originally requested, for the same $10 million cost quoted for it.
Exports
The computer was a success and was exported to Germany, United Kingdom and Russia. Apart from taking over the home market, PARAM attracted 14 other buyers with its relatively low price tag of $350,000.
The computer was also exported to the ICAD Moscow in 1991 under Russian collaboration.
PARAM 8600
PARAM 8600 was an improvement over PARAM 8000. In 1992 C-DAC realised its machines were underpowered and wished to integrate the newly released Intel i860 processor. Each node was created with one i860 and four Inmos T800 transputers. The same PARAS programming environment was used for b |
https://en.wikipedia.org/wiki/Kinescope | Kinescope, shortened to kine, also known as telerecording in Britain, is a recording of a television programme on motion picture film, directly through a lens focused on the screen of a video monitor. The process was pioneered during the 1940s for the preservation, re-broadcasting and sale of television programmes before the introduction of quadruplex videotape, which from 1956 eventually superseded the use of kinescopes for all of these purposes. Kinescopes were the only practical way to preserve live television broadcasts prior to videotape.
Typically, the term Kinescope can refer to the process itself, the equipment used for the procedure (a movie camera mounted in front of a video monitor, and synchronised to the monitor's scanning rate), or a film made using the process. The term originally referred to the cathode ray tube used in television receivers, as named by inventor Vladimir K. Zworykin in 1929. Hence, the recordings were known in full as kinescope films or kinescope recordings. RCA was granted a trademark for the term (for its cathode ray tube) in 1932; it voluntarily released the term to the public domain in 1950.
Film recorders are similar, but record source material from a computer system. Whereas a kinescope records television to film, a telecine is used to play film back on television.
History
The General Electric laboratories in Schenectady, New York experimented with making still and motion picture records of television images in 1931.
There is anecdotal evidence that the BBC experimented with filming the output of the television monitor before its television service was suspended in 1939 due to the outbreak of World War II. A BBC executive, Cecil Madden, recalled filming a production of The Scarlet Pimpernel in this way, only for film director Alexander Korda to order the burning of the negative as he owned the film rights to the book, which he felt had been infringed. While there is no written record of any BBC Television production of T |
https://en.wikipedia.org/wiki/Curie%20temperature | In physics and materials science, the Curie temperature (TC), or Curie point, is the temperature above which certain materials lose their permanent magnetic properties, which can (in most cases) be replaced by induced magnetism. The Curie temperature is named after Pierre Curie, who showed that magnetism was lost at a critical temperature.
The force of magnetism is determined by the magnetic moment, a dipole moment within an atom which originates from the angular momentum and spin of electrons. Materials have different structures of intrinsic magnetic moments that depend on temperature; the Curie temperature is the critical point at which a material's intrinsic magnetic moments change direction.
Permanent magnetism is caused by the alignment of magnetic moments, and induced magnetism is created when disordered magnetic moments are forced to align in an applied magnetic field. For example, the ordered magnetic moments (ferromagnetic, Figure 1) change and become disordered (paramagnetic, Figure 2) at the Curie temperature. Higher temperatures make magnets weaker, as spontaneous magnetism only occurs below the Curie temperature. Magnetic susceptibility above the Curie temperature can be calculated from the Curie–Weiss law, which is derived from Curie's law.
In analogy to ferromagnetic and paramagnetic materials, the Curie temperature can also be used to describe the phase transition between ferroelectricity and paraelectricity. In this context, the order parameter is the electric polarization that goes from a finite value to zero when the temperature is increased above the Curie temperature.
Magnetic moments
Magnetic moments are permanent dipole moments within an atom that comprise electron angular momentum and spin by the relation μ_l = e·l/(2m_e), where m_e is the mass of an electron, μ_l is the magnetic moment, and l is the angular momentum; this ratio, e/(2m_e), is called the gyromagnetic ratio.
The electrons in an atom contribute magnetic moments from their own angular momen |
https://en.wikipedia.org/wiki/Phytophthora%20infestans | Phytophthora infestans is an oomycete or water mold, a fungus-like microorganism that causes the serious potato and tomato disease known as late blight or potato blight. Early blight, caused by Alternaria solani, is also often called "potato blight". Late blight was a major culprit in the 1840s European, the 1845–1852 Irish, and the 1846 Highland potato famines. The organism can also infect some other members of the Solanaceae. The pathogen is favored by moist, cool environments: sporulation is optimal at in water-saturated or nearly saturated environments, and zoospore production is favored at temperatures below . Lesion growth rates are typically optimal at a slightly warmer temperature range of .
Etymology
The genus name Phytophthora comes from the Greek φυτόν (phytón), meaning "plant", plus the Greek φθορά (phthorá), meaning "decay, ruin, perish". The species name infestans is the present participle of the Latin verb infestare, meaning "attacking, destroying", from which the word "to infest" is derived. The name Phytophthora infestans was coined in 1876 by the German mycologist Heinrich Anton de Bary (1831–1888).
Life cycle, signs and symptoms
The asexual life cycle of Phytophthora infestans is characterized by alternating phases of hyphal growth, sporulation, sporangia germination (either through zoospore release or direct germination, i.e. germ tube emergence from the sporangium), and the re-establishment of hyphal growth. There is also a sexual cycle, which occurs when isolates of opposite mating type (A1 and A2, see below) meet. Hormonal communication triggers the formation of the sexual spores, called oospores. The different types of spores play major roles in the dissemination and survival of P. infestans. Sporangia are spread by wind or water and enable the movement of P. infestans between different host plants. The zoospores released from sporangia are biflagellated and chemotactic, allowing further movement of P. infestans on water films found on leaves or soils. Both sporang |
https://en.wikipedia.org/wiki/Open%20Sound%20System | The Open Sound System (OSS) is an interface for making and capturing sound in Unix and Unix-like operating systems. It is based on standard Unix devices system calls (i.e. POSIX read, write, ioctl, etc.). The term also sometimes refers to the software in a Unix kernel that provides the OSS interface; it can be thought of as a device driver (or a collection of device drivers) for sound controller hardware. The goal of OSS is to allow the writing of sound-based applications that are agnostic of the underlying sound hardware.
OSS was created by Hannu Savolainen and is distributed under four license options, three of which are free software licences, thus making OSS free software.
API
The API is designed to use the traditional Unix framework of open(), read(), write(), and ioctl(), via device files. For instance, the default device for sound input and output is /dev/dsp. Examples using the shell:
cat /dev/random > /dev/dsp # plays white noise through the speaker
cat /dev/dsp > a.a # reads data from the microphone and copies it to file a.a
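Application code uses the same device file. As a minimal sketch (not from the article), a Java program can generate one second of a 440 Hz tone and write it to /dev/dsp, assuming the device is at the traditional OSS defaults of 8-bit unsigned mono at 8 kHz; a real program would instead set the sample format explicitly with ioctl(), which plain Java cannot issue:
import java.io.FileOutputStream;
import java.io.IOException;

public class OssTone {
    public static void main(String[] args) throws IOException {
        int sampleRate = 8000;                   // assumed OSS default: 8 kHz, 8-bit unsigned, mono
        byte[] samples = new byte[sampleRate];   // one second of audio
        for (int n = 0; n < samples.length; n++) {
            double s = Math.sin(2 * Math.PI * 440 * n / sampleRate);   // 440 Hz sine wave
            samples[n] = (byte) (128 + 127 * s);                       // centre on 128 for unsigned 8-bit
        }
        try (FileOutputStream dsp = new FileOutputStream("/dev/dsp")) {
            dsp.write(samples);                  // the kernel driver plays the buffer through the sound card
        }
    }
}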
OSS implements the /dev/audio interface. Detailed access to individual sound devices is provided via the directory. OSS also has MIDI support in , (both legacy) and .
On Linux, OSS4 is also able to emulate ALSA, its open-source replacement.
History
OSS was originally "VoxWare", a Linux kernel sound driver by Hannu Savolainen. Savolainen made the code available under free software licenses, GPL for Linux and BSD for BSD distributions. Between 1993 and 1997, OSS was the sole choice of sound system in FreeBSD and Linux. This was changed when Luigi Rizzo wrote a new "pcm" driver for FreeBSD in 1997, and when Jaroslav Kysela started Advanced Linux Sound Architecture in 1998.
In 2002, Savolainen was contracted by the company 4Front Technologies and made the upcoming OSS 4, which includes support for newer sound devices and improvements, proprietary. In response, the Linux community abandoned the OSS/free implementation include |
https://en.wikipedia.org/wiki/Pip%20%28counting%29 | Pips are small but easily countable items, such as the dots on dominoes and dice, or the symbols on a playing card that denote its suit and value.
Playing cards
In playing cards, pips are small symbols on the front side of the cards that determine the suit of the card and its rank. For example, a standard 52-card deck consists of four suits of thirteen cards each: spades, hearts, clubs, and diamonds. Each suit contains three face cards – the jack, queen, and king. The remaining ten cards are called pip cards and are numbered from one to ten. (The "one" is almost always changed to "ace" and often is the highest card in many games, followed by the face cards.) Each pip card consists of an encoding in the top left-hand corner (and, because the card is also inverted upon itself, the lower right-hand corner) which tells the card-holder the value of the card. In Europe, it is more common to have corner indices on all four corners which lets left-handed players fan their cards more comfortably. The center of the card contains pips representing the suit. The number of pips corresponds with the number of the card, and the arrangement of the pips is generally the same from deck to deck.
Pip cards are also known as numerals or numeral cards.
In point-trick games where cards often score their value in pips (or equivalent if they are court cards e.g. a King may be worth 13), card points are sometimes referred to as pips.
Many French-suited packs contain a variation on the pip style for the Ace of Spades, often consisting of an especially large pip or even a representative image, along with information about the deck's manufacturer.
Historically, German pips have generally differed from the pips used in France and England; the latter date from at least the fourteenth century CE.
Dice
On dice, pips are small dots on each face of a common six-sided die. These pips are typically arranged in patterns denoting the numbers one through six. The sum of opposing faces traditio |
https://en.wikipedia.org/wiki/Drupal | Drupal is a free and open-source web content management system (CMS) written in PHP and distributed under the GNU General Public License. Drupal provides an open-source back-end framework for at least 14% of the top 10,000 websites worldwide and 1.2% of the top 10 million websites—ranging from personal blogs to corporate, political, and government sites. Systems also use Drupal for knowledge management and for business collaboration.
, the Drupal community had more than 1.39 million members, including 124,000 users actively contributing, resulting in more than 50,000 free modules that extend and customize Drupal functionality, over 3,000 free themes that change the look and feel of Drupal, and at least 1,400 free distributions that allow users to quickly and easily set up a complex, use-specific Drupal in fewer steps.
The standard release of Drupal, known as Drupal core, contains basic features common to content-management systems. These include user account registration and maintenance, menu management, RSS feeds, taxonomy, page layout customization, and system administration. The Drupal core installation can serve as a simple website, a single- or multi-user blog, an Internet forum, or a community website providing for user-generated content.
Drupal also describes itself as a Web application framework. When compared with notable frameworks, Drupal meets most of the generally accepted feature requirements for such web frameworks.
Although Drupal offers a sophisticated API for developers, basic Web-site installation and administration of the framework require no programming skills.
Drupal runs on any computing platform that supports both a web server capable of running PHP and a database to store content and configuration.
History
Drupal was originally written by Dries Buytaert as a message board for his friends to communicate in their dorms while working on his Master's degree at the University of Antwerp. After graduation, Buytaert moved the site to the |