| source | text |
|---|---|
https://en.wikipedia.org/wiki/Potassium%20propanoate | Potassium propanoate or potassium propionate has the formula K(C2H5COO). Its melting point is 410 °C. It is the potassium salt of propanoic acid.
Use
It is used as a food preservative and is represented by the food labeling E number E283 in Europe and by the INS number 283 in Australia and New Zealand. |
https://en.wikipedia.org/wiki/Corn%20oil | Corn oil (North American) or maize oil (British) is oil extracted from the germ of corn (maize). Its main use is in cooking, where its high smoke point makes refined corn oil a valuable frying oil. It is also a key ingredient in some margarines. Corn oil is generally less expensive than most other types of vegetable oils.
Corn oil is also a feedstock used for biodiesel. Other industrial uses for corn oil include soap, salve, paint, erasers, rustproofing for metal surfaces, inks, textiles, nitroglycerin, and insecticides. It is sometimes used as a carrier for drug molecules in pharmaceutical preparations.
Production
Almost all corn oil is expeller-pressed, then solvent-extracted using hexane or 2-methylpentane (isohexane). The solvent is evaporated from the corn oil, recovered, and re-used. After extraction, the corn oil is refined by degumming and/or alkali treatment, both of which remove phosphatides. Alkali treatment also neutralizes free fatty acids and removes color (bleaching). Final steps in refining include winterization (the removal of waxes) and deodorization by steam distillation of the oil under a high vacuum.
Some specialty oil producers manufacture unrefined, 100%-expeller-pressed corn oil. This is a more expensive product since it has a much lower yield than the combination expeller and solvent process, as well as a smaller market share.
Constituents and comparison
Of the saturated fatty acids, 80% are palmitic acid (lipid number of C16:0), 14% stearic acid (C18:0), and 3% arachidic acid (C20:0).
Over 99% of the monounsaturated fatty acids are oleic acid (C18:1 cis-9)
98% of the polyunsaturated fatty acids are the omega-6 linoleic acid (C18:2 n-6).
See also
Corn wet-milling |
https://en.wikipedia.org/wiki/Representation%20theory%20of%20the%20Lorentz%20group | The Lorentz group is a Lie group of symmetries of the spacetime of special relativity. This group can be realized as a collection of matrices, linear transformations, or unitary operators on some Hilbert space; it has a variety of representations. This group is significant because special relativity together with quantum mechanics are the two physical theories that are most thoroughly established, and the conjunction of these two theories is the study of the infinite-dimensional unitary representations of the Lorentz group. These have both historical importance in mainstream physics, as well as connections to more speculative present-day theories.
Development
The full theory of the finite-dimensional representations of the Lie algebra of the Lorentz group is deduced using the general framework of the representation theory of semisimple Lie algebras. The finite-dimensional representations of the connected component of the full Lorentz group are obtained by employing the Lie correspondence and the matrix exponential. The full finite-dimensional representation theory of the universal covering group SL(2, C) (and also of the spin group, a double cover) is obtained, and explicitly given in terms of action on a function space in representations of SL(2, C) and its Lie algebra sl(2, C). The representatives of time reversal and space inversion are given in space inversion and time reversal, completing the finite-dimensional theory for the full Lorentz group. The general properties of the (m, n) representations are outlined. Action on function spaces is considered, with the action on spherical harmonics and the Riemann P-functions appearing as examples. The infinite-dimensional case of irreducible unitary representations is realized for the principal series and the complementary series. Finally, the Plancherel formula for SL(2, C) is given, and representations of SL(2, C) are classified and realized for Lie algebras.
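A standard summary of the (m, n) labeling mentioned above, sketched here for orientation (the symbols are the conventional ones, not taken verbatim from this excerpt):

```latex
% Irreducible finite-dimensional representations of the Lorentz Lie algebra
% are labeled by pairs of half-integers (m, n):
m, n \in \{0, \tfrac{1}{2}, 1, \tfrac{3}{2}, \dots\}, \qquad
\dim(m, n) = (2m + 1)(2n + 1).
% Examples: (\tfrac{1}{2}, 0) and (0, \tfrac{1}{2}) are the left- and
% right-handed Weyl spinor representations; (\tfrac{1}{2}, \tfrac{1}{2})
% is the four-vector representation.
```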
The development of the representation theory has historically followed the development of the more gene |
https://en.wikipedia.org/wiki/Mock%20object | In object-oriented programming, mock objects are simulated objects that mimic the behaviour of real objects in controlled ways, most often as part of a software testing initiative. A programmer typically creates a mock object to test the behaviour of some other object, in much the same way that a car designer uses a crash test dummy to simulate the dynamic behaviour of a human in vehicle impacts. The technique is also applicable in generic programming.
Motivation
In a unit test, mock objects can simulate the behavior of complex, real objects and are therefore useful when a real object is impractical or impossible to incorporate into a unit test. If an object has any of the following characteristics, it may be useful to use a mock object in its place:
the object supplies non-deterministic results (e.g. the current time or the current temperature);
it has states that are difficult to create or reproduce (e.g. a network error);
it is slow (e.g. a complete database, which would have to be prepared before the test);
it does not yet exist or may change behavior;
it would have to include information and methods exclusively for testing purposes (and not for its actual task).
For example, an alarm clock program which causes a bell to ring at a certain time might get the current time from a time service. To test this, the test must wait until the alarm time to know whether it has rung the bell correctly. If a mock time service is used in place of the real time service, it can be programmed to provide the bell-ringing time (or any other time) regardless of the real time, so that the alarm clock program can be tested in isolation.
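A minimal sketch of the alarm-clock scenario using Python's `unittest.mock`; the `AlarmClock` class and its method names are illustrative, not from any specific library:

```python
from unittest.mock import Mock

class AlarmClock:
    """Hypothetical alarm clock that asks a time service for the current time."""
    def __init__(self, time_service, bell):
        self.time_service = time_service
        self.bell = bell
        self.alarm_time = None

    def set_alarm(self, when):
        self.alarm_time = when

    def check(self):
        # Ring only when the time service reports the alarm time has arrived.
        if self.alarm_time is not None and self.time_service.now() >= self.alarm_time:
            self.bell.ring()

# The mock time service is programmed to report the bell-ringing time,
# so the clock can be tested without waiting for real time to pass.
time_service = Mock()
time_service.now.return_value = 420   # pretend it is 07:00 (minutes past midnight)
bell = Mock()

clock = AlarmClock(time_service, bell)
clock.set_alarm(420)
clock.check()
bell.ring.assert_called_once()        # the bell rang exactly once
```

The mock bell doubles as a recorder: the test asserts on what was invoked rather than on any real side effect.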
Technical details
Mock objects have the same interface as the real objects they mimic, allowing a client object to remain unaware of whether it is using a real object or a mock object. Many available mock object frameworks allow the programmer to specify which, and in what order, methods will be invoked on a mock object and what paramete |
https://en.wikipedia.org/wiki/Blade%20PC | A blade PC is a form of client or personal computer (PC). In conjunction with a client access device (usually a thin client) on a user's desk, the supporting blade PC is typically housed in a rack enclosure, usually in a datacenter or specialised environment. Together, they accomplish many of the same functions of a traditional PC, but they also take advantage of many of the architectural achievements pioneered by blade servers.
Description
Like a traditional PC, a blade PC has a CPU, RAM and a hard drive. It may or may not have an integrated graphics sub-system. Some can support multiple hard drives. It is in a “blade” form that plugs into an enclosure. Enclosures offered by current blade PC vendors are similar but not identical. Most have moved the power supplies, cooling fans and some management capabilities from the blade PC to the enclosure. Up to 14 enclosures can be placed in one industry standard 42U rack.
Blade PCs support one or more common operating systems (for instance Microsoft has created a “blade PC” version of their XP and Vista Business operating systems and many Linux distributions are installable). Importantly, these solutions are intended to support one user per discrete device. This is a major difference from server-based computing, which supports multiple users simultaneously using an application hosted on one discrete server (be it a discrete piece of hardware or a discrete virtual machine on a server).
Access to the device is usually achieved via a remote desktop protocol such as Virtual Network Computing (VNC), which allows users to log on to the blade PC via a client device (usually a thin client). Once logged on, the end-user experience is largely the same as if they were logged on to a local PC. It is less effective at delivering multimedia, in part because the audio and video are not synchronized; as latency increases, there is a proportional decrease in the quality of the end-user experience. All protocols are negatively impa
https://en.wikipedia.org/wiki/Yie%20Ar%20Kung-Fu | Yie Ar Kung-Fu is an arcade fighting game developed and published by Konami. It first had a limited Japanese release in October 1984, before a nationwide release in January 1985 and an international release that March. Along with Karate Champ (1984), which influenced Yie Ar Kung-Fu, it is one of the games that established the basis for modern fighting games.
The game was inspired by Bruce Lee's Hong Kong martial arts films, with the main player character Oolong modelled after Lee (like Bruceploitation films). In contrast to the grounded realism of Karate Champ, Yie Ar Kung-Fu moved the genre towards more fantastical, fast-paced action, with various different characters having a variety of special moves and high jumps, establishing the template for subsequent fighting games. It also introduced the health meter system to the genre, in contrast to the point-scoring system of Karate Champ.
The game was a commercial success in arcades, becoming the highest-grossing arcade conversion kit of 1985 in the United States while also being successful in Japan and Europe. It was ported to various home systems, including home computer conversions which were critically and commercially successful, becoming the best-selling home video game of 1986 in the United Kingdom.
Gameplay
Oolong (or Lee in the MSX and Famicom versions) must fight all the martial arts masters given by the game (eleven in the arcade version; five to thirteen in the home ports).
The player faces a variety of opponents, each with a unique appearance and fighting style. The player can perform up to 16 different moves, using a combination of buttons and joystick movements while standing, crouching or jumping. Moves are thrown at high, middle, and low levels. Regardless of the move that defeated them, male characters (save Feedle) always fall unconscious lying on their backs with their legs apart (Oolong flails his legs), and female characters always fall lying on their sides. Feedle disappears. When a player gains a |
https://en.wikipedia.org/wiki/L%28R%29 | In set theory, L(R) (pronounced L of R) is the smallest transitive inner model of ZF containing all the ordinals and all the reals.
Construction
It can be constructed in a manner analogous to the construction of L (that is, Gödel's constructible universe), by adding in all the reals at the start, and then iterating the definable powerset operation through all the ordinals.
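The iteration just described can be written out explicitly; the following is one common presentation (the choice of $V_{\omega+1}$ as the starting level is a standard convention, not taken from this excerpt):

```latex
L_0(\mathbb{R}) = V_{\omega+1}, \qquad
L_{\alpha+1}(\mathbb{R}) = \operatorname{Def}\!\bigl(L_\alpha(\mathbb{R})\bigr), \qquad
L_\lambda(\mathbb{R}) = \bigcup_{\alpha < \lambda} L_\alpha(\mathbb{R})
\quad (\lambda \text{ a limit ordinal}),
```

with $L(\mathbb{R}) = \bigcup_{\alpha \in \mathrm{Ord}} L_\alpha(\mathbb{R})$, where $\operatorname{Def}(X)$ denotes the collection of subsets of $X$ definable over $(X, \in)$ with parameters from $X$.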
Assumptions
In general, the study of L(R) assumes a wide array of large cardinal axioms, since without these axioms one cannot show even that L(R) is distinct from L. But given that sufficient large cardinals exist, L(R) does not satisfy the axiom of choice, but rather the axiom of determinacy. However, L(R) will still satisfy the axiom of dependent choice, given only that the von Neumann universe, V, also satisfies that axiom.
Results
Given the assumptions above, some additional results of the theory are:
Every projective set of reals – and therefore every analytic set and every Borel set of reals – is an element of L(R).
Every set of reals in L(R) is Lebesgue measurable (in fact, universally measurable) and has the property of Baire and the perfect set property.
L(R) does not satisfy the axiom of uniformization or the axiom of real determinacy.
R#, the sharp of the set of all reals, has the smallest Wadge degree of any set of reals not contained in L(R).
While not every relation on the reals in L(R) has a uniformization in L(R), every such relation does have a uniformization in L(R#).
Given any (set-size) generic extension V[G] of V, L(R) is an elementary submodel of L(R) as calculated in V[G]. Thus the theory of L(R) cannot be changed by forcing.
L(R) satisfies AD+. |
https://en.wikipedia.org/wiki/Law%20of%20comparative%20judgment | The law of comparative judgment was conceived by L. L. Thurstone. In modern-day terminology, it is more aptly described as a model that is used to obtain measurements from any process of pairwise comparison. Examples of such processes are the comparisons of perceived intensity of physical stimuli, such as the weights of objects, and comparisons of the extremity of an attitude expressed within statements, such as statements about capital punishment. The measurements represent how we perceive entities, rather than measurements of actual physical properties. This kind of measurement is the focus of psychometrics and psychophysics.
In somewhat more technical terms, the law of comparative judgment is a mathematical representation of a discriminal process, which is any process in which a comparison is made between pairs of a collection of entities with respect to magnitudes of an attribute, trait, attitude, and so on. The theoretical basis for the model is closely related to item response theory and the theory underlying the Rasch model, which are used in psychology and education to analyse data from questionnaires and tests.
Background
Thurstone published a paper on the law of comparative judgment in 1927. In this paper he introduced the underlying concept of a psychological continuum for a particular 'project in measurement' involving the comparison between a series of stimuli, such as weights and handwriting specimens, in pairs. He soon extended the domain of application of the law of comparative judgment to things that have no obvious physical counterpart, such as attitudes and values (Thurstone, 1929). For example, in one experiment, people compared statements about capital punishment to judge which of each pair expressed a stronger positive (or negative) attitude.
The essential idea behind Thurstone's process and model is that it can be used to scale a collection of stimuli based on simple comparisons between stimuli two at a time: that is, based on a series of |
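The scaling idea can be sketched concretely. Below is a minimal version of Thurstone's Case V solution (simplifying assumptions: equal discriminal dispersions and uncorrelated discriminal processes; all names and the sample data are illustrative):

```python
from statistics import NormalDist

def case_v_scale(p):
    """Thurstone Case V scale values from a matrix of pairwise proportions.

    p[i][j] is the observed proportion of judges who rated stimulus i
    above stimulus j; diagonal entries are ignored.
    """
    n = len(p)
    z = NormalDist()  # standard normal, for the inverse CDF (z-transform)
    scores = []
    for i in range(n):
        # Average z-transformed win proportion against every other stimulus;
        # the resulting scale values are determined only up to a common origin.
        zs = [z.inv_cdf(p[i][j]) for j in range(n) if j != i]
        scores.append(sum(zs) / len(zs))
    return scores

# Three stimuli: 0 is preferred over 1 and 2 most of the time.
p = [
    [None, 0.80, 0.90],
    [0.20, None, 0.70],
    [0.10, 0.30, None],
]
scale = case_v_scale(p)
print(scale)  # stimulus 0 receives the highest scale value
```

The key step is the inverse-normal transform of each comparison proportion, which converts observed choice frequencies into distances on the psychological continuum.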
https://en.wikipedia.org/wiki/Alginic%20acid | Alginic acid, also called algin, is a naturally occurring, edible polysaccharide found in brown algae. It is hydrophilic and forms a viscous gum when hydrated. With metals such as sodium and calcium, its salts are known as alginates. Its colour ranges from white to yellowish-brown. It is sold in filamentous, granular, or powdered forms.
It is a significant component of the biofilms produced by the bacterium Pseudomonas aeruginosa, a major pathogen found in the lungs of some people who have cystic fibrosis. The biofilm and P. aeruginosa have a high resistance to antibiotics, but are susceptible to inhibition by macrophages.
Structure
Alginic acid is a linear copolymer of (1→4)-linked β-D-mannuronate (M) and α-L-guluronate (G) residues, covalently linked together in different sequences or blocks. The monomers may appear in homopolymeric blocks of consecutive G-residues (G-blocks), consecutive M-residues (M-blocks), or alternating M- and G-residues (MG-blocks). α-L-guluronate is the C-5 epimer of β-D-mannuronate.
Forms
Alginates are refined from brown seaweeds. Throughout the world, many of the Phaeophyceae class brown seaweeds are harvested to be processed and converted into sodium alginate. Sodium alginate is used in many industries including food, animal food, fertilisers, textile printing, and pharmaceuticals. Dental impression material uses alginate as its means of gelling. Food grade alginate is an approved ingredient in processed and manufactured foods.
Brown seaweeds range in size from the giant kelp Macrocystis pyrifera which can be 20–40 meters long, to thick, leather-like seaweeds from 2–4 m long, to smaller species 30–60 cm long. Most brown seaweed used for alginates are gathered from the wild, with the exception of Laminaria japonica, which is cultivated in China for food and its surplus material is diverted to the alginate industry in China.
Alginates from different species of brown seaweed vary in their chemical |
https://en.wikipedia.org/wiki/Jet%20Set%20Willy%20II | Jet Set Willy II: The Final Frontier is a platform game released in 1985 by Software Projects as the Amstrad CPC port of Jet Set Willy. It was then rebranded as a sequel and ported to other home computers. Jet Set Willy II was developed by Derrick P. Rowson and Steve Wetherill, rather than by Jet Set Willy programmer Matthew Smith, and is an expansion of the original game rather than an entirely new one.
Gameplay
The map is primarily an expanded version of the original mansion from Jet Set Willy, with only a few new elements over its predecessor, several of which are based on rumoured events in Jet Set Willy that were in fact never programmed (such as being able to launch the titular ship in the screen called "The Yacht" and explore an island). In the ZX Spectrum, Amstrad CPC and MSX versions, Willy is blasted from the Rocket Room into space, and for these 33 rooms he dons a spacesuit.
Due to the proliferation of hacking and cheating in the original game, Jet Set Willy II pays homage to this and includes a screen called Cheat that can only be accessed by cheating.
Control of Willy also differs from the original:
The player can jump in the opposite direction immediately upon landing, without releasing the jump button.
Willy now takes a step forward before jumping from a standstill.
Some previous "safe spots" in Jet Set Willy are now hazardous to the player in Jet Set Willy II - the tall candle in "The Chapel" for example.
The ending of the game is also different.
Development
Jet Set Willy II was originally created as the Amstrad conversion of Jet Set Willy by Derrick P. Rowson and Steve Wetherill, but Rowson's use of an algorithm to compress much of the screen data meant there was enough memory available to create new rooms.
It came with a form of enhanced copy protection called Padlock II. To discourage felt tip copying, it had seven pages, rather than the single page used in Jet Set Willy.
Software Projects later had Rowson remove all of the enhancements from the Amstrad ve |
https://en.wikipedia.org/wiki/WCCT-TV | WCCT-TV (channel 20), branded on-air as CW 20, is a television station licensed to Waterbury, Connecticut, United States, serving the Hartford–New Haven market as an affiliate of The CW. It is owned by Tegna Inc. alongside Hartford-licensed Fox affiliate WTIC-TV (channel 61). Both stations share studios on Broad Street in downtown Hartford, while WCCT-TV's transmitter is located on Rattlesnake Mountain in Farmington, Connecticut.
Established in 1953 as WATR-TV, a television station for the Waterbury area, the station became a regional independent in 1982, Connecticut's UPN affiliate in 1995, and a WB affiliate in 2001. It has been managed by WTIC-TV since 1998.
History
WATR (1953–1966)
The station commenced operations on September 10, 1953, as WATR-TV on channel 53, the second UHF station in Connecticut. It was owned by the Thomas and Gilmore families, along with WATR radio (1320 AM). The station's studios and transmitter were located on West Peak in Meriden. At the time, the station's signal only covered Waterbury, New Haven and the southern portion of the state.
WATR-TV was originally a dual secondary affiliate of both DuMont and ABC, sharing them with New Haven-based WNHC-TV (channel 8, now WTNH). DuMont ceased operations in 1956, and shortly afterward, WNHC-TV became an exclusive ABC affiliate, as did WATR-TV. Both stations carried ABC programming through Connecticut.
In 1962, the station relocated to UHF channel 20 and moved to a new studio and transmitter site in Prospect, south of Waterbury. Channel 53 was later occupied by WEDN, Connecticut Public Television's outlet in Norwich.
NBC affiliate (1966–1982)
In August 1966, WATR-TV joined NBC. At the time, the network's primary affiliate in Connecticut, WHNB-TV (channel 30) in New Britain, was hampered by a weak signal in New Haven and the southwestern portions of the state. In the 1970s, the station offered limited local news and instead aired older syndicated programs and religious shows |
https://en.wikipedia.org/wiki/Nonvenereal%20endemic%20syphilis | Bejel, or endemic syphilis, is a chronic skin and tissue disease caused by infection by the endemicum subspecies of the spirochete Treponema pallidum. Bejel is one of the "endemic treponematoses" (endemic infections caused by spiral-shaped bacteria called treponemes), a group that also includes yaws and pinta. Typically, endemic treponematoses begin with localized lesions on the skin or mucous membranes. Pinta is limited to affecting the skin, whereas bejel and yaws are considered to be invasive because they can also cause disease in bone and other internal tissues.
Signs and symptoms
Bejel usually begins in childhood as a small patch on the mucosa, often on the interior of the mouth, followed by the appearance of raised, eroding lesions on the limbs and trunk. Periostitis (inflammation) of the leg bones is commonly seen, and gummas of the nose and soft palate develop in later stages.
Causes
Although the organism that causes bejel, Treponema pallidum endemicum, is morphologically and serologically indistinguishable from Treponema pallidum pallidum, which causes venereal syphilis, transmission of bejel is not venereal in nature.
Diagnosis
The diagnosis of bejel is based on the geographic history of the patient as well as laboratory testing of material from the lesions (dark-field microscopy). The responsible spirochaete is readily identifiable on sight in a microscope as a treponema.
Epidemiology
Bejel is mainly found in arid countries of the eastern Mediterranean region and in West Africa, where it is known as sahel.
See also
Pinta (disease)
Syphilis
Yaws |
https://en.wikipedia.org/wiki/Ventricular%20remodeling | In cardiology, ventricular remodeling (or cardiac remodeling) refers to changes in the size, shape, structure, and function of the heart. This can happen as a result of exercise (physiological remodeling) or after injury to the heart muscle (pathological remodeling). The injury is typically due to acute myocardial infarction (usually transmural or ST segment elevation infarction), but may be from a number of causes that result in increased pressure or volume, causing pressure overload or volume overload (forms of strain) on the heart. Chronic hypertension, congenital heart disease with intracardiac shunting, and valvular heart disease may also lead to remodeling. After the insult occurs, a series of histopathological and structural changes occur in the left ventricular myocardium that lead to progressive decline in left ventricular performance. Ultimately, ventricular remodeling may result in diminished contractile (systolic) function and reduced stroke volume.
Physiological remodeling is reversible, while pathological remodeling is mostly irreversible. Remodeling of the ventricles under differing left/right pressure demands makes mismatches inevitable. Pathologic pressure mismatches between the pulmonary and systemic circulation guide compensatory remodeling of the left and right ventricles. The term "reverse remodeling" in cardiology implies an improvement in ventricular mechanics and function following a remote injury or pathological process.
Ventricular remodeling may include ventricular hypertrophy, ventricular dilation, cardiomegaly, and other changes. It is an aspect of cardiomyopathy, of which there are many types. Concentric hypertrophy is due to pressure overload, while eccentric hypertrophy is due to volume overload.
Pathophysiology
The cardiac myocyte is the major cell involved in remodeling. Fibroblasts, collagen, the interstitium, and the coronary vessels to a lesser extent, also play a role. A common scenario for remodeling is after myocardial infarction. Ther |
https://en.wikipedia.org/wiki/No-three-in-line%20problem | The no-three-in-line problem in discrete geometry asks how many points can be placed in the n × n grid so that no three points lie on the same line. The problem concerns lines of all slopes, not only those aligned with the grid. It was introduced by Henry Dudeney in 1900. Brass, Moser, and Pach call it "one of the oldest and most extensively studied geometric questions concerning lattice points".
At most 2n points can be placed, because any 2n + 1 points in an n × n grid would include three or more points in some row, by the pigeonhole principle. Although the problem can be solved with 2n points for every n up to 46, it is conjectured that fewer than 2n points can be placed in grids of large size. Known methods can place linearly many points in grids of arbitrary size, but the best of these methods place slightly fewer than 3n/2 points.
Several related problems of finding points with no three in line, among other sets of points than grids, have also been studied. Although originating in recreational mathematics, the no-three-in-line problem has applications in graph drawing and to the Heilbronn triangle problem.
Small instances
The problem was first posed by Henry Dudeney in 1900, as a puzzle in recreational mathematics, phrased in terms of placing the 16 pawns of a chessboard onto the board so that no three are in a line. This is exactly the no-three-in-line problem for the case n = 8. In a later version of the puzzle, Dudeney modified the problem, making its solution unique, by asking for a solution in which two of the pawns are on squares d4 and e5, attacking each other in the center of the board.
Many authors have published solutions to this problem for small values of n, and by 1998 it was known that 2n points could be placed on an n × n grid with no three in a line for all n up to 46 and some larger values. The numbers of solutions with 2n points, and the numbers of equivalence classes of such solutions under reflections and rotations, have been tabulated for small values of n.
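The collinearity condition underlying the problem is easy to check directly. The following sketch verifies an 8-point configuration on the 4 × 4 grid (the configuration below is an illustration, not taken from the article):

```python
from itertools import combinations

def no_three_collinear(points):
    """Check that no three of the given grid points lie on one line (any slope)."""
    for (x1, y1), (x2, y2), (x3, y3) in combinations(points, 3):
        # Three points are collinear iff the cross product of the two
        # difference vectors (p2 - p1) and (p3 - p1) is zero.
        if (x2 - x1) * (y3 - y1) == (x3 - x1) * (y2 - y1):
            return False
    return True

# Eight points (2n for n = 4) on the 4x4 grid, two per row and two per column.
solution_4x4 = [(0, 1), (0, 2), (1, 0), (1, 3), (2, 0), (2, 3), (3, 1), (3, 2)]
print(no_three_collinear(solution_4x4))  # True
```

Using an exact integer cross product avoids the floating-point pitfalls of comparing slopes, which matters because the problem counts lines of all slopes, not just grid-aligned ones.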
Upper and lower bounds
The exact number of poi |
https://en.wikipedia.org/wiki/Jacques%20Philippe%20Marie%20Binet | Jacques Philippe Marie Binet (; 2 February 1786 – 12 May 1856) was a French mathematician, physicist and astronomer born in Rennes; he died in Paris, France, in 1856. He made significant contributions to number theory, and the mathematical foundations of matrix algebra which would later lead to important contributions by Cayley and others. In his memoir on the theory of the conjugate axis and of the moment of inertia of bodies he enumerated the principle now known as Binet's theorem. He is also recognized as the first to describe the rule for multiplying matrices in 1812, and Binet's formula expressing Fibonacci numbers in closed form is named in his honour, although the same result was known to Abraham de Moivre a century earlier.
Career
Binet graduated from l'École Polytechnique in 1806, and returned as a teacher in 1807. He advanced in position until 1816 when he became an inspector of studies at l'École. He held this post until 13 November 1830, when he was dismissed by the recently sworn in King Louis-Philippe of France, probably because of Binet's strong support of the previous King, Charles X. In 1823 Binet succeeded Delambre in the chair of astronomy at the Collège de France. He was made a Chevalier in the Légion d'Honneur in 1821, and was elected to the Académie des Sciences in 1843.
Binet's Fibonacci number formula
The Fibonacci sequence is defined by the recurrence $F_0 = 0$, $F_1 = 1$, and $F_n = F_{n-1} + F_{n-2}$ for $n \ge 2$.
Binet's formula provides a closed-form expression for the $n$th term in this sequence:
$$F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}}, \qquad \varphi = \frac{1 + \sqrt{5}}{2}, \quad \psi = \frac{1 - \sqrt{5}}{2}.$$
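A short sketch comparing the closed form against the recurrence; rounding the floating-point value recovers the exact integer for moderate n:

```python
from math import sqrt

def fib_binet(n):
    """Fibonacci number via Binet's closed form (floating point; round to recover the integer)."""
    phi = (1 + sqrt(5)) / 2   # golden ratio
    psi = (1 - sqrt(5)) / 2   # its conjugate root
    return round((phi**n - psi**n) / sqrt(5))

def fib_iter(n):
    """Reference computation from the recurrence F_n = F_{n-1} + F_{n-2}."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib_binet(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Since |psi| < 1, the psi**n term shrinks rapidly, which is why rounding phi**n / sqrt(5) alone also works for n ≥ 1; for large n, double-precision rounding error eventually breaks the exact agreement.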
See also
Binet–Cauchy identity
Binet equation
Cauchy–Binet formula
Matrix multiplication
Notes |
https://en.wikipedia.org/wiki/Charles%20Dupin | Baron Pierre Charles François Dupin (6 October 1784, Varzy, Nièvre – 18 January 1873, Paris, France) was a French Catholic mathematician, engineer, economist and politician, particularly known for work in the field of mathematics, where the Dupin cyclide and Dupin indicatrix are named after him; and for his work in the field of statistical and thematic mapping. In 1826 he created the earliest known choropleth map.
Life and work
He was born in Varzy in France, the son of Charles Andre Dupin, a lawyer, and Catherine Agnes Dupin. His elder brother was André Marie Jean Jacques Dupin.
Dupin studied geometry with Monge at the École Polytechnique and then became a naval engineer (ENSTA). His mathematical work was in descriptive and differential geometry. He was the discoverer of conjugate tangents to a point on a surface and of the Dupin indicatrix.
Dupin participated in the Greek science revival by teaching mathematics and mechanics lessons in Corfu in 1808. One of his students was Giovanni Carandino, who would go on to be the founder of the Greek Mathematics School in the 1820s.
From 1807 Dupin was responsible for the restoration of the damaged port and arsenal at Corfu. He founded the Toulon Maritime Museum in 1813.
In 1818, Dupin was elected to the body of the French Academy of Sciences, one of the Institut de France's five Academies.
He was appointed professor at the Conservatoire des Arts et Métiers in 1819; he kept this post until 1854. In 1822, Dupin was elected a foreign member of the Royal Swedish Academy of Sciences. He was made a baron in 1824.
In 1826 he published a thematic map showing the distribution of illiteracy in France, using shadings (from black to white), the first known instance of what is called a choropleth map today. Dupin had been inspired by the work of the German statisticians Georg Hassel and August Friedrich Wilhelm Crome.
Dupin was named rapporteur for the central jury of the Exposition des produits de l'industrie française en 1834.
Fo |
https://en.wikipedia.org/wiki/Openwall%20Project | The Openwall Project is a source for various software, including Openwall GNU/*/Linux (Owl), a security-enhanced Linux distribution designed for servers. Openwall patches and security extensions have been included into many major Linux distributions.
As the name implies, Openwall GNU/*/Linux draws source code and design concepts from numerous sources. Most important to the project is its use of the Linux kernel and parts of the GNU userland; others include the BSDs, such as OpenBSD, the source of its OpenSSH suite and the inspiration behind its own Blowfish-based crypt for password hashing, which is compatible with the OpenBSD implementation.
Public domain software
The Openwall project also maintains a list of algorithms and source code that are in the public domain.
Openwall GNU/*/Linux releases
LWN.net reviewed Openwall Linux 3.0. They wrote:
PoC||GTFO
Issues of the International Journal of Proof-of-Concept or Get The Fuck Out (PoC||GTFO) are mirrored by the Openwall Project under a samizdat licence. The first issue, #00, was published in 2013; issue #02 featured the Chaos Computer Club. Issue #07 in 2015 was an homage to Dr. Dobb's Journal and could be rendered as .pdf, .zip, .bpg, or .html.
See also
Executable space protection
Comparison of Linux distributions
Security-focused operating system
John the Ripper |
https://en.wikipedia.org/wiki/Destructor%20%28computer%20programming%29 | In object-oriented programming, a destructor (sometimes abbreviated dtor) is a method which is invoked automatically just before the memory of the object is released. This can happen when the object's lifetime is bound to scope and the execution leaves the scope, when it is embedded in another object whose lifetime ends, or when it was allocated dynamically and is released explicitly. Its main purpose is to free the resources (memory allocations, open files or sockets, database connections, resource locks, etc.) which were acquired by the object during its life and/or to deregister from other entities which may keep references to it. Use of destructors is needed for the Resource Acquisition Is Initialization (RAII) idiom.
With most kinds of automatic garbage collection algorithms, the releasing of memory may happen a long time after the object becomes unreachable, making destructors (called finalizers in this case) unsuitable for most purposes. In such languages, the freeing of resources is done either through a lexical construct (such as try..finally, Python's "with" or Java's "try-with-resources"), which is the equivalent to RAII, or explicitly by calling a function (equivalent to explicit deletion); in particular, many object-oriented languages use the Dispose pattern.
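A minimal Python sketch of the two approaches above: a deterministic lexical construct (the context-manager protocol behind `with`, the equivalent of try..finally) alongside a `__del__` finalizer as a non-deterministic fallback. The class and its names are illustrative:

```python
class TempResource:
    """Illustrative resource with deterministic and non-deterministic release."""
    def __init__(self, name):
        self.name = name
        self.open = True

    def close(self):
        # Idempotent release, safe to call from both paths below.
        if self.open:
            self.open = False
            print(f"released {self.name}")

    # Context-manager protocol: deterministic release when the with-block exits.
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()

    # Finalizer: may run long after the object becomes unreachable,
    # so it serves only as a last resort.
    def __del__(self):
        self.close()

with TempResource("db-connection") as r:
    pass  # use the resource
# prints "released db-connection" as soon as the with-block exits
```

Making `close` idempotent is the key design choice: the finalizer can then run after an explicit or lexical release without double-freeing.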
Destructor syntax
C++: destructors have the same name as the class with which they are associated, but with a tilde (~) prefix.
D: destructors are declared with name ~this() (whereas constructors are declared with this()).
Object Pascal: destructors have the keyword destructor and can have user-defined names, but are mostly named Destroy.
Objective-C: the destructor method has the name dealloc.
Perl: the destructor method has the name DESTROY; in the Moose object system extension, it is named DEMOLISH.
PHP: In PHP 5+, the destructor method has the name __destruct. There were no destructors in prior versions of PHP.
Python: the destructor method has the name __del__ (referred to as a destructor by the Python 2 language reference) |
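The contrast between a finalizer and a deterministic lexical construct can be sketched in Python; the class and file names below are illustrative, not from any particular library:

```python
import os
import tempfile

class Holder:
    """Illustrative sketch: __del__ acts as a finalizer; exactly when it
    runs is decided by the garbage collector, so it is unsuitable for
    deterministic resource release."""
    def __init__(self, path):
        self.f = open(path, "w")

    def __del__(self):
        self.f.close()  # release the resource when the object is collected

tmp = tempfile.gettempdir()
h = Holder(os.path.join(tmp, "finalized.txt"))
del h  # CPython's refcounting runs __del__ here; other GCs may defer it

# Deterministic release through a lexical construct (the RAII equivalent):
path = os.path.join(tmp, "deterministic.txt")
with open(path, "w") as f:
    f.write("closed at the end of the with-block")
# the file is guaranteed closed here, regardless of garbage collection
```

The with-block corresponds to the try..finally / try-with-resources constructs mentioned above: cleanup is tied to lexical scope, not to when memory happens to be reclaimed.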
https://en.wikipedia.org/wiki/Gateaux%20derivative | In mathematics, the Gateaux differential or Gateaux derivative is a generalization of the concept of directional derivative in differential calculus. Named after René Gateaux, a French mathematician who died at age 25 in World War I, it is defined for functions between locally convex topological vector spaces such as Banach spaces. Like the Fréchet derivative on a Banach space, the Gateaux differential is often used to formalize the functional derivative commonly used in the calculus of variations and physics.
Unlike other forms of derivatives, the Gateaux differential of a function may be nonlinear. However, often the definition of the Gateaux differential also requires that it be a continuous linear transformation. Some authors draw a further distinction between the Gateaux differential (which may be nonlinear) and the Gateaux derivative (which they take to be linear). In most applications, continuous linearity follows from some more primitive condition which is natural to the particular setting, such as imposing complex differentiability in the context of infinite dimensional holomorphy or continuous differentiability in nonlinear analysis.
Definition
Suppose X and Y are locally convex topological vector spaces (for example, Banach spaces), U ⊆ X is open, and F : U → Y. The Gateaux differential dF(u; ψ) of F at u ∈ U in the direction ψ ∈ X is defined as

dF(u; ψ) = lim_{τ→0} (F(u + τψ) − F(u)) / τ

If the limit exists for all ψ ∈ X, then one says that F is Gateaux differentiable at u.

The limit appearing in the definition is taken relative to the topology of Y. If X and Y are real topological vector spaces, then the limit is taken for real τ. On the other hand, if X and Y are complex topological vector spaces, then the limit above is usually taken as τ → 0 in the complex plane, as in the definition of complex differentiability. In some cases, a weak limit is taken instead of a strong limit, which leads to the notion of a weak Gateaux derivative.
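As a concrete finite-dimensional illustration (a sketch, not part of the general theory): for F(u) = ⟨u, u⟩ on R^n, the Gateaux differential in direction ψ is 2⟨u, ψ⟩, and a difference quotient approximates the defining limit:

```python
import numpy as np

# F(u) = <u, u> on R^n; its Gateaux differential at u in direction psi
# is dF(u; psi) = 2 <u, psi>.
def F(u):
    return float(np.dot(u, u))

def gateaux(F, u, psi, t=1e-6):
    # symmetric difference quotient approximating the limit as t -> 0
    return (F(u + t * psi) - F(u - t * psi)) / (2 * t)

u = np.array([1.0, 2.0])
psi = np.array([3.0, -1.0])
approx = gateaux(F, u, psi)   # ~ 2 * (1*3 + 2*(-1)) = 2.0
```

Note that dF(u; ·) here happens to be linear in ψ; the general definition does not guarantee this.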
Linearity and continuity
At each point u ∈ U, the Gateaux differential defines a function

dF(u; ·) : X → Y

This function is homogeneous, in the sense that dF(u; αψ) = α dF(u; ψ) for all scalars α |
https://en.wikipedia.org/wiki/Renard%20series | Renard series are a system of preferred numbers dividing an interval from 1 to 10 into 5, 10, 20, or 40 steps. This set of preferred numbers was proposed in 1877 by French army engineer Colonel Charles Renard. His system was adopted by the ISO in 1949 to form the ISO Recommendation R3, first published in 1953 or 1954, which evolved into the international standard ISO 3.
The factor between two consecutive numbers in a Renard series is approximately constant (before rounding), namely the 5th, 10th, 20th, or 40th root of 10 (approximately 1.58, 1.26, 1.12, and 1.06, respectively), which leads to a geometric sequence. This way, the maximum relative error is minimized if an arbitrary number is replaced by the nearest Renard number multiplied by the appropriate power of 10. One application of the Renard series of numbers is to current rating of electric fuses. Another common use is the voltage rating of capacitors (e.g. 100 V, 160 V, 250 V, 400 V, 630 V).
Base series
The most basic series, R5, consists of five rounded numbers, which are powers of the fifth root of 10, rounded to two significant figures. The Renard numbers are not always the closest possible rounding of the theoretical geometric sequence:
R5: 1.00 1.60 2.50 4.00 6.30
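The construction can be checked in a few lines of Python; the theoretical values are computed, while the preferred values are simply the standardized ones quoted above (the standardized roundings come from ISO 3, not from a formula):

```python
# Theoretical R5 values (powers of the fifth root of 10) next to the
# standardized preferred numbers; the rounding is not always to the
# nearest value (3.98 -> 4.00 is, but 6.31 is standardized as 6.30).
theoretical = [round(10 ** (k / 5), 2) for k in range(5)]
# theoretical == [1.0, 1.58, 2.51, 3.98, 6.31]
preferred = [1.00, 1.60, 2.50, 4.00, 6.30]

step = 10 ** (1 / 5)   # constant factor between consecutive R5 steps, ~1.58
```

The same pattern with exponent k/10, k/20 or k/40 generates the theoretical R10, R20 and R40 sequences.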
Examples
If design constraints require two screws in a gadget to be placed between 32 mm and 55 mm apart, the chosen spacing would be 40 mm, because 4 is in the R5 series of preferred numbers.
If a set of nails with lengths between roughly 15 and 300 mm should be produced, then the application of the R5 series would lead to a product repertoire of 16 mm, 25 mm, 40 mm, 63 mm, 100 mm, 160 mm, and 250 mm long nails.
If traditional English wine cask sizes had been metricated, the rundlet (18 gallons, ca 68 liters), barrel (31.5 gal., ca 119 liters), tierce (42 gal., ca 159 liters), hogshead (63 gal., ca 239 liters), puncheon (84 gal., ca 318 liters), butt (126 gal., ca 477 liters) and tun (252 gal., ca 954 lite |
https://en.wikipedia.org/wiki/Astrophysical%20plasma | Astrophysical plasma is plasma outside of the Solar System. It is studied as part of astrophysics and is commonly observed in space. The accepted view of scientists is that much of the baryonic matter in the universe exists in this state.
When matter becomes sufficiently hot and energetic, it becomes ionized and forms a plasma. This process breaks matter into its constituent particles, which include negatively charged electrons and positively charged ions. These electrically charged particles are susceptible to influence by local electromagnetic fields, including strong fields generated by stars and weak fields present in star-forming regions, in interstellar space, and in intergalactic space. Similarly, electric fields are observed in some stellar astrophysical phenomena, but they are inconsequential in very low-density gaseous media.
Astrophysical plasma is often differentiated from space plasma, which typically refers to the plasma of the Sun, the solar wind, and the ionospheres and magnetospheres of the Earth and other planets.
Observing and studying astrophysical plasma
Plasmas in stars can both generate and interact with magnetic fields, resulting in a variety of dynamic astrophysical phenomena. These phenomena are sometimes observed in spectra due to the Zeeman effect. Other forms of astrophysical plasmas can be influenced by preexisting weak magnetic fields, whose interactions may only be determined directly by polarimetry or other indirect methods. In particular, the intergalactic medium, the interstellar medium, the interplanetary medium and solar winds consist of diffuse plasmas.
Possible related phenomena
Scientists are interested in active galactic nuclei because such astrophysical plasmas could be directly related to the plasmas studied in laboratories. Many of these phenomena seemingly exhibit an array of complex magnetohydrodynamic behaviors, such as turbulence and instabilities.
In Big Bang cosmology, the entire universe was in a pl |
https://en.wikipedia.org/wiki/C-element | In digital computing, the Muller C-element (C-gate, hysteresis flip-flop, coincident flip-flop, or two-hand safety circuit) is a small binary logic circuit widely used in design of asynchronous circuits and systems. It outputs 0 when all inputs are 0, it outputs 1 when all inputs are 1, and it retains its output state otherwise. It was specified formally in 1955 by David E. Muller and first used in ILLIAC II computer. In terms of the theory of lattices, the C-element is a semimodular distributive circuit, whose operation in time is described by a Hasse diagram. The C-element is closely related to the rendezvous and join elements, where an input is not allowed to change twice in succession. In some cases, when relations between delays are known, the C-element can be realized as a sum-of-product (SOP) circuit. Earlier techniques for implementing the C-element include Schmitt trigger, Eccles-Jordan flip-flop and last moving point flip-flop.
Truth table and delay assumptions
For two input signals, x1 and x2, and output y, the C-element is defined by the equation y_next = x1·x2 + y·(x1 + x2), which corresponds to the following truth table:
This table can be turned into a circuit using the Karnaugh map. However, the obtained implementation is naive, since nothing is said about delay assumptions. To understand under what conditions the obtained circuit is workable, it is necessary to do additional analysis, which reveals that
delay1 is a propagation delay from node 1 via environment to node 3,
delay2 is a propagation delay from node 1 via internal feedback to node 3,
delay1 must be greater than delay2.
Thus, the naive implementation is correct only for a slow environment.
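Ignoring delay assumptions, the input-output behaviour described above can be simulated directly; this is a behavioural sketch, not a gate-level model:

```python
def c_element(a, b, y_prev):
    # next output: 1 when both inputs are 1, 0 when both are 0,
    # otherwise hold the previous output (y = a*b + y_prev*(a + b))
    return (a & b) | (y_prev & (a | b))

# trace the output across a sequence of input pairs
y, trace = 0, []
for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
    y = c_element(a, b, y)
    trace.append(y)
# trace == [0, 0, 1, 1, 0]: the output holds at 1 for the disagreeing
# input (0, 1), and only falls once both inputs are 0
```

The hold behaviour in the trace is exactly the hysteresis that distinguishes the C-element from an AND or OR gate.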
The definition of the C-element can be generalized to multiple-valued logic, or even to continuous signals:
For example, the truth table for a balanced ternary C-element with two inputs is
Implementations of the C-element
Depending on the requirements to the switching speed and power consumption, the C-element can be realized as a coarse- or fine-grain |
https://en.wikipedia.org/wiki/Asymmetric%20C-element | Asymmetric C-elements are extended C-elements with inputs that affect the operation of the element only when it transitions in one of the two directions. Asymmetric inputs are attached to either the minus (-) or plus (+) strips of the symbol. The common inputs, which affect both transitions, are connected to the centre of the symbol. When transitioning from zero to one, the C-element takes into account the common and the asymmetric plus inputs; all of these inputs must be high for the up transition to take place. Similarly, when transitioning from one to zero, the C-element takes into account the common and the asymmetric minus inputs; all of these inputs must be low for the down transition to happen.
The figure shows the gate-level and transistor-level implementations and symbol of the asymmetric C-element. In the figure the plus inputs are marked with a 'P', the minus inputs are marked with an 'm' and the common inputs are marked with a 'C'.
In addition, it is possible to extend the asymmetric input convention to inverted C-elements, where a plus (minus) on an input port means that an input is required for the inverted output to fall (rise). |
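The two transition rules can be captured in a behavioural sketch (not a transistor-level model; the grouping of inputs into lists is an illustrative encoding):

```python
def asym_c(common, plus, minus, y_prev):
    """Asymmetric C-element, behavioural sketch:
    rise requires every common AND every plus input to be high;
    fall requires every common AND every minus input to be low;
    otherwise the output holds its previous value."""
    if all(common) and all(plus):
        return 1
    if not any(common) and not any(minus):
        return 0
    return y_prev

y = asym_c(common=[1], plus=[1], minus=[0], y_prev=0)  # rises to 1
y = asym_c(common=[0], plus=[0], minus=[1], y_prev=y)  # minus still high: holds 1
y = asym_c(common=[0], plus=[0], minus=[0], y_prev=y)  # falls to 0
```

Note that the plus inputs are ignored entirely during the down transition, and the minus inputs during the up transition, which is the defining asymmetry.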
https://en.wikipedia.org/wiki/Struve%20Geodetic%20Arc | The Struve Geodetic Arc is a chain of survey triangulations stretching from Hammerfest in Norway to the Black Sea, through ten countries and over , which yielded the first accurate measurement of a meridian arc.
The chain was established and used by the German-born Russian scientist Friedrich Georg Wilhelm von Struve in the years 1816 to 1855 to establish the exact size and shape of the Earth. At that time, the chain passed through only three countries: Norway, Sweden and the Russian Empire. The Arc's first point is located at Tartu Observatory in Estonia, where Struve conducted much of his research. The triangulation chain comprises 258 main triangles and 265 geodetic vertices. The northernmost point is located near Hammerfest in Norway and the southernmost point near the Black Sea in Ukraine.
In 2005, the chain was inscribed on the World Heritage List because of its importance in geodesy and its testimony to international scientific cooperation. The World Heritage site comprises 34 of the original 265 main station points, marked in various ways, such as drilled holes in rock, iron crosses, cairns, or built obelisks with commemorative plaques. The inscription spans ten countries, the second most of any UNESCO World Heritage site after the Ancient and Primeval Beech Forests of the Carpathians and Other Regions of Europe.
The measurements of the 30° meridian arc in 1816–1852, as well as the geodesic, topographic, and map-making work carried out in the Balkans by the Russian Czarist Army from the nineteenth century until the beginning of the twentieth, are described in Astronomy, Geodesy and Map-drawing in Moldova from the Middle Ages till World War I.
Chain
Norway
Fuglenes in Hammerfest ()
Raipas in Alta ()
Luvdiidcohkka in Kautokeino ()
Baelljasvarri in Kautokeino ()
Sweden
"Pajtas-vaara" (Tynnyrilaki) in Kiruna ()
"Kerrojupukka" (Jupukka) in Pajala ()
Pullinki in Övertorneå ()
"Perra-vaara" (Perävaara) in Haparanda ()
Finland
Stu |
https://en.wikipedia.org/wiki/Dogbane | Dogbane, dog-bane, dog's bane, and other variations, some of them regional and some transient, are names for certain plants that are reputed to kill or repel dogs; "bane" originally meant "slayer", and was later applied to plants to indicate that they were poisonous to particular creatures.
History of the term
The earliest reference to such names in common English usage was in the 16th century, when they were applied to various plants in the Apocynaceae, in particular Apocynum. Some plants in the Asclepiadoideae, now a subfamily of the Apocynaceae but until recently regarded as the separate family Asclepiadaceae, were also called dogbane even before the two families were united. It is not clear how much earlier the name had been in use in the English language, which had originated about 1,000 years earlier, in mediaeval times. However, centuries before the appearance of the English language, Pedanius Dioscorides, in his De Materia Medica, had already described members of the Apocynaceae, such as Apocynum and Cynanchum, by names equivalent to "dogbane"; Apocynum literally means "dog killer" or "dog remover", and "Cynanchum" means "dog strangler". In modern times some species of Nerium, Periploca and Trachelospermum, also in the Apocynaceae, are called dogbane or variants such as "climbing dogbane".
Modern significance of the term "dogbane family"
Some modern sources restrict "dogbane" in its strict sense to the genus Apocynum, but it is doubtful that any such narrow definition could be justified even if it were enforceable. More widely, when authors refer to the "dogbane family" without qualification, they almost always mean Apocynaceae.
"Dogbane" as a term outside the family Apocynaceae
Common names, either informal or vernacular, are seldom definitive, let alone stable. Some poisonous or offensive plants in practically unrelated families had similar common names in the vernacular and writings of various times; for example, an edition of De Materia Medica, apparently o |
https://en.wikipedia.org/wiki/Edible%20plant%20stem | Edible plant stems are one part of plants that are eaten by humans. Most plants are made up of stems, roots, leaves, flowers, and produce fruits containing seeds. Humans most commonly eat the seeds (e.g. maize, wheat), fruit (e.g. tomato, avocado, banana), flowers (e.g. broccoli), leaves (e.g. lettuce, spinach, and cabbage), roots (e.g. carrots, beets), and stems (e.g. asparagus) of many plants. There are also a few edible petioles (also known as leaf stems), such as celery or rhubarb.
Plant stems have a variety of functions. Stems support the entire plant and have buds, leaves, flowers, and fruits. Stems are also a vital connection between leaves and roots. They conduct water and mineral nutrients through xylem tissue from roots upward, and organic compounds and some mineral nutrients through phloem tissue in any direction within the plant. Apical meristems, located at the shoot tip and axillary buds on the stem, allow plants to increase in length, surface, and mass. In some plants, such as cactus, stems are specialized for photosynthesis and water storage.
Modified stems
Typical stems are located above ground, but there are modified stems that can be found either above or below ground. Modified stems located above ground are phylloids, stolons, runners, or spurs. Modified stems located below ground are corms, rhizomes, and tubers.
Detailed description of edible plant stems
Asparagus The edible portion is the rapidly emerging stems that arise from the crowns in the
Bamboo The edible portion is the young shoot (culm).
Birch Trunk sap is drunk as a tonic or rendered into birch syrup, vinegar, beer, soft drinks, and other foods.
Broccoli The edible portion is the peduncle stem tissue, flower buds, and some small leaves.
Cauliflower The edible portion is proliferated peduncle and flower tissue.
Cinnamon Many favor the unique sweet flavor of the inner bark of cinnamon, and it is commonly used as a spice.
Fig The edible portion is stem tissue. The |
https://en.wikipedia.org/wiki/Business%20record | A business record is a document (hard copy or digital) that records an "act, condition, or event" related to business. Business records include meeting minutes, memoranda, employment contracts, and accounting source documents.
It must be retrievable at a later date so that the business dealings can be accurately reviewed as required. Since business is dependent upon confidence and trust, not only must the record be accurate and easily retrieved, but the processes surrounding its creation and retrieval must be perceived by customers and the business community to consistently deliver a full and accurate record with no gaps or additions.
Most business records have specified retention periods based on legal requirements and/or internal company policies. This is important because in many countries (including the United States), many documents may be required by law to be disclosed to government regulatory agencies or to the general public. Likewise, they may be discoverable if the business is sued. Under the business records exception in the Federal Rules of Evidence, certain types of business records, particularly those made and kept with regularity, may be considered admissible in court despite containing hearsay.
See also
Records management
Information governance
Regulation Fair Disclosure
Sarbanes-Oxley Act |
https://en.wikipedia.org/wiki/Peak%20programme%20meter | A peak programme meter (PPM) is an instrument used in professional audio that indicates the level of an audio signal.
Different kinds of PPM fall into broad categories:
True peak programme meter. This shows the peak level of the waveform no matter how brief its duration.
Quasi peak programme meter (QPPM). This only shows the true level of the peak if it exceeds a certain duration, typically a few milliseconds. On peaks of shorter duration, it indicates less than the true peak level. The extent of the shortfall is determined by the 'integration time'.
Sample peak programme meter (SPPM). This is a PPM for digital audio. It shows only peak sample values, not true waveform peaks (which may fall between samples and may be higher in amplitude). It may have either a 'true' or a 'quasi' integration characteristic.
Over-sampling peak programme meter. This is a sample PPM that first oversamples the signal, typically by a factor of four, to alleviate the problems of a basic sample PPM.
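The difference between a sample PPM and an over-sampling PPM can be illustrated with a signal whose inter-sample peaks exceed its sample peaks. The classic case is a sine at exactly fs/4 with a 45° phase offset, where every sample lands at about 0.707 of full scale; the FFT-based 4x oversampling below is a common approximation of true-peak measurement, not a method mandated by any particular standard:

```python
import numpy as np

# Sine at exactly fs/4 with a 45-degree phase offset: every sample lands
# at +-0.707 of full scale, so a sample peak meter under-reads by ~3 dB.
n = np.arange(64)
x = np.sin(np.pi * n / 2 + np.pi / 4)
sample_peak = np.max(np.abs(x))            # ~0.707

# 4x oversampling by zero-padding the spectrum (FFT interpolation)
X = np.fft.rfft(x)
padded = np.zeros(4 * len(x) // 2 + 1, dtype=complex)
padded[: len(X)] = X
y = np.fft.irfft(padded, n=4 * len(x)) * 4  # rescale for the longer IFFT
true_peak = np.max(np.abs(y))               # ~1.0: the inter-sample peak
```

Here the sample PPM would indicate roughly 3 dB below the waveform's true peak, which is exactly the shortfall an over-sampling PPM is designed to avoid.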
In professional use, which requires consistent level measurements across an industry, audio level meters often comply with a formal standard. This ensures that all compliant meters indicate the same level for a given audio signal. The principal standard for PPMs is IEC 60268-10. It describes two different quasi-PPM designs that have roots in meters originally developed in the 1930s for the AM radio broadcasting networks of Germany (Type I) and the United Kingdom (Type II). The term Peak Programme Meter usually refers to these IEC-specified types and similar designs. Though originally designed for monitoring analogue audio signals, these PPMs are now also used with digital audio.
PPMs do not provide effective loudness monitoring. Newer types of meter do, and there is now a push within the broadcasting industry to move away from the traditional level meters in this article to two new types: loudness meters based on EBU Tech. 3341 and oversampling true PPMs. The former would be used to standardi |
https://en.wikipedia.org/wiki/Phycobilisome | Phycobilisomes are the light-harvesting antennae of photosystem II in cyanobacteria, red algae and glaucophytes. They were lost in the plastids of green algae and plants (chloroplasts).
General structure
Phycobilisomes are protein complexes (up to 600 polypeptides) anchored to thylakoid membranes. They are made of stacks of chromophorylated proteins, the phycobiliproteins, and their associated linker polypeptides. Each phycobilisome consists of a core made of allophycocyanin, from which several outwardly oriented rods, made of stacked disks of phycocyanin and (if present) phycoerythrin(s) or phycoerythrocyanin, radiate. The spectral properties of phycobiliproteins are mainly dictated by their prosthetic groups, which are linear tetrapyrroles known as phycobilins, including phycocyanobilin, phycoerythrobilin, phycourobilin and phycobiliviolin. The spectral properties of a given phycobilin are influenced by its protein environment.
Function
Each phycobiliprotein has a specific absorption and fluorescence emission maximum in the visible range of light. Therefore, their presence and the particular arrangement within the phycobilisomes allow absorption and unidirectional transfer of light energy to chlorophyll a of the photosystem II. In this way, the cells take advantage of the available wavelengths of light (in the 500–650 nm range), which are inaccessible to chlorophyll, and utilize their energy for photosynthesis. This is particularly advantageous deeper in the water column, where light with longer wavelengths is less transmitted and therefore less available directly to chlorophyll.
The geometrical arrangement of a phycobilisome is an elegant, antenna-like assembly that achieves 95% efficiency of energy transfer.
Evolution and diversity
There are many variations to the general phycobilisomes structure. Their shape can be hemidiscoidal (in cyanobacteria) or hemiellipsoidal (in red algae). Species lacking phycoerythrin have at least two disks of phycocyanin per rod, which is |
https://en.wikipedia.org/wiki/Hypervariable | Hypervariable may refer to:
Hypervariable sequence, a segment of a chromosome characterised by considerable variation in the number of tandem repeats at one or more loci
Hypervariable locus, a locus with many alleles; especially those whose variation is due to variable numbers of tandem repeats
Hypervariable region (HVR), a chromosomal segment characterized by multiple alleles within a population for a single genetic locus
Genetics |
https://en.wikipedia.org/wiki/Fax%20server | A fax server is a system installed in a local area network (LAN) server that allows computer users whose computers are attached to the LAN to send and receive fax messages.
Alternatively, the term fax server is sometimes used to describe a program that enables a computer to send and receive fax messages, or a set of software running on a server computer that is equipped with one or more fax-capable modems (or dedicated fax boards) attached to telephone lines or, more recently, software modem emulators which use T.38 ("Fax over IP") technology to transmit the signal over an IP network. Its function is to accept documents from users, convert them into faxes, and transmit them, as well as to receive fax calls and either store the incoming documents or pass them on to users. Users may communicate with the server in several ways, through either a local network or the Internet. In a large organization with heavy fax traffic, the computer hosting the fax server may be dedicated to that function, in which case the computer itself may also be known as a fax server.
User interfaces
For outgoing faxes, several methods are available to the user:
An e-mail message (with optional attachments) can be sent to a special e-mail address; the fax server monitoring that address converts all such messages into fax format and transmits them.
The user can tell their computer to "print" a document using a "virtual printer" which, instead of producing a paper printout, sends the document to the fax server, which then transmits it.
A web interface can be used, allowing files to be uploaded, and transmitted to the fax server for faxing.
Special client software may be used.
For incoming faxes, several user interfaces may be available:
The user may be sent an e-mail message for each fax received, with the pages included as attachments, typically in either TIFF or PDF format.
Incoming faxes may be stored in a dedicated file directory, which the user can monitor.
A website may allow users to lo |
https://en.wikipedia.org/wiki/Motor%20unit%20recruitment | Motor unit recruitment is the activation of additional motor units to accomplish an increase in contractile strength in a muscle.
A motor unit consists of one motor neuron and all of the muscle fibers it stimulates. All muscles consist of a number of motor units and the fibers belonging to a motor unit are dispersed and intermingle amongst fibers of other units. The muscle fibers belonging to one motor unit can be spread throughout part, or most of the entire muscle, depending on the number of fibers and size of the muscle. When a motor neuron is activated, all of the muscle fibers innervated by the motor neuron are stimulated and contract.
The activation of one motor neuron will result in a weak but distributed muscle contraction. The activation of more motor neurons will result in more muscle fibers being activated, and therefore a stronger muscle contraction. Motor unit recruitment is a measure of how many motor neurons are activated in a particular muscle, and therefore is a measure of how many muscle fibers of that muscle are activated. The higher the recruitment the stronger the muscle contraction will be. Motor units are generally recruited in order of smallest to largest (smallest motor neurons to largest motor neurons, and thus slow to fast twitch) as contraction increases. This is known as Henneman's size principle.
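The ordering described by the size principle can be sketched as a simple threshold model; the threshold values below are invented for illustration, and only the ordering (small units recruited first) reflects Henneman's observation:

```python
# Size-principle sketch: smaller motor units have lower recruitment
# thresholds, so they activate first as synaptic drive increases.
units = {"small": 0.2, "medium": 0.5, "large": 0.8}   # illustrative thresholds

def recruited(drive):
    """Motor units active at a given normalized synaptic drive (0..1)."""
    return [name for name, threshold in units.items() if drive >= threshold]

recruited(0.3)   # only the small unit
recruited(0.9)   # all three units, recruited in size order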
Neuronal mechanism of recruitment
Henneman proposed that the mechanism underlying the size principle was that the smaller motor neurons had a smaller surface area and therefore a higher membrane resistance. He predicted that the current generated by an excitatory postsynaptic potential (EPSP) would produce a larger voltage change (depolarization) across the neuronal membrane of the smaller motor neurons, and therefore larger EPSPs in smaller motoneurons. Burke later demonstrated that there was a graded decrease of both EPSP and inhibitory postsynaptic potential (IPSP) amplitudes from small to large motoneurons. This seemed to confirm Hen |
https://en.wikipedia.org/wiki/Somatic%20marker%20hypothesis | The somatic marker hypothesis, formulated by Antonio Damasio and associated researchers, proposes that emotional processes guide (or bias) behavior, particularly decision-making.
"Somatic markers" are feelings in the body that are associated with emotions, such as the association of rapid heartbeat with anxiety or of nausea with disgust. According to the hypothesis, somatic markers strongly influence subsequent decision-making. Within the brain, somatic markers are thought to be processed in the ventromedial prefrontal cortex (vmPFC) and the amygdala. The hypothesis has been tested in experiments using the Iowa gambling task.
Background
In economic theory, human decision-making is often modeled as being devoid of emotions, involving only logical reasoning based on cost-benefit calculations. In contrast, the somatic marker hypothesis proposes that emotions play a critical role in the ability to make fast, rational decisions in complex and uncertain situations.
Patients with frontal lobe damage, such as Phineas Gage, provided the first evidence that the frontal lobes were associated with decision-making. Frontal lobe damage, particularly to the vmPFC, results in impaired abilities to organize and plan behavior and learn from previous mistakes, without affecting intellect in terms of working memory, attention, and language comprehension and expression.
vmPFC patients also have difficulty expressing and experiencing appropriate emotions. This led Antonio Damasio to hypothesize that decision-making deficits following vmPFC damage result from the inability to use emotions to help guide future behavior based on past experiences. Consequently, vmPFC damage forces those affected to rely on slow and laborious cost-benefit analyses for every given choice situation.
Antonio Damasio
Antonio Damasio () is a Portuguese-American neuroscientist. He is currently the David Dornsife Professor of Neuroscience, Psychology and Philosophy at the University of Southern California and |
https://en.wikipedia.org/wiki/Situation%20calculus | The situation calculus is a logic formalism designed for representing and reasoning about dynamical domains. It was first introduced by John McCarthy in 1963. The main version of the situation calculus presented in this article is based on the one introduced by Ray Reiter in 1991. It is followed by sections about McCarthy's 1986 version and a logic programming formulation.
Overview
The situation calculus represents changing scenarios as a set of first-order logic formulae. The basic elements of the calculus are:
The actions that can be performed in the world
The fluents that describe the state of the world
The situations
A domain is formalized by a number of formulae, namely:
Action precondition axioms, one for each action
Successor state axioms, one for each fluent
Axioms describing the world in various situations
The foundational axioms of the situation calculus
A simple robot world will be modeled as a running example. In this world there is a single robot and several inanimate objects. The world is laid out according to a grid so that locations can be specified in terms of coordinate points. It is possible for the robot to move around the world, and to pick up and drop items. Some items may be too heavy for the robot to pick up, or fragile so that they break when they are dropped. The robot also has the ability to repair any broken items that it is holding.
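The robot world above can be given a toy executable encoding. Representing situations as histories of actions and fluents as functions of the situation follows Reiter's presentation, but the names (S0, do, holding) and the code itself are only an illustration:

```python
# Situations are histories of actions applied to the initial situation S0;
# fluents are evaluated on a situation.
S0 = ()

def do(action, s):
    """do(a, s): the situation reached by performing action a in s."""
    return s + (action,)

def holding(obj, s):
    """Successor-state-style fluent: the robot holds obj iff obj was
    picked up and has not been dropped since."""
    held = False
    for action in s:
        if action == ("pickup", obj):
            held = True
        elif action == ("drop", obj):
            held = False
    return held

s1 = do(("pickup", "box"), S0)
s2 = do(("drop", "box"), s1)
# holding("box", s1) is True; holding("box", s2) is False
```

The loop in holding plays the role of a successor state axiom: the fluent's value in do(a, s) is determined by the action a and its value in s.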
Elements
The main elements of the situation calculus are the actions, fluents and the situations. A number of objects are also typically involved in the description of the world. The situation calculus is based on a sorted domain with three sorts: actions, situations, and objects, where the objects include everything that is not an action or a situation. Variables of each sort can be used. While actions, situations, and objects are elements of the domain, the fluents are modeled as either predicates or functions.
Actions
The actions form a sort of the domain. Variables of sort action can b |
https://en.wikipedia.org/wiki/Bomberman%20%281983%20video%20game%29 | Bomberman is a maze video game developed and published by Hudson Soft. The original home computer game was released in July 1983 for the NEC PC-8801, NEC PC-6001 mkII, Fujitsu FM-7, Sharp MZ-700, Sharp MZ-2000, Sharp X1 and MSX in Japan, and a graphically modified version was released for the MSX and ZX Spectrum in Europe as Eric and the Floaters. A sequel, 3-D Bomberman, was produced. In 1985, Bomberman was released for the Nintendo Entertainment System. It spawned the Bomberman series, with many installments building on its basic gameplay.
Gameplay
In the NES/Famicom release, the eponymous character, Bomberman, is a robot that must find his way through a maze while avoiding enemies. Doors leading to further maze rooms are found under rocks, which Bomberman must destroy with bombs. There are items that can help improve Bomberman's bombs, such as the Fire ability, which improves the blast range of his bombs. Bomberman will turn human when he escapes and reaches the surface. Each game has 50 levels in total. The original home computer games are more basic and have some different rules.
Notably, completing the NES and Famicom version reveals that the game is a prequel to Hudson Soft's NES port of Broderbund Software's 1983 game Lode Runner. Upon clearing the final screen, Bomberman is shown turning into Lode Runner's unnamed protagonist. In the Japanese version of the game, the player is explicitly told that Bomberman will 'See [them] in Lode Runner', while in the international version, they are instead asked if they can recognise the protagonist from another Hudson game.
Development
Bomberman was written in 1980 to serve as a tech demo for Hudson Soft's BASIC compiler. This very basic version of the game was given a small-scale release for Japanese PCs in 1983 and for European PCs the following year.
The Famicom version was developed (ported) by Shinichi Nakamoto, who reputedly completed the task alone over a 72-hour period.
According to Zero magazine, Bomberman adopted gameplay elem |
https://en.wikipedia.org/wiki/The%20Code%20Room | The Code Room is a half-hour-long reality game show produced by Microsoft. The show was conceptualized and executive produced by Paul Murphy and hosted by Jessi Knapp, accompanied by a varying project expert. Each episode consists of a number of MSDN Developer Event attendees who team up to complete a project, with given specifications, in a limited amount of time. It should not be confused with The Code Room from DMD and the group that introduced the world to the "Hybrid Hostel".
The Code Room was filmed at an MSDN Developer Event and shown on several cable television stations, as well as other streaming television stations like MSDN TV. The show could also be watched for free through the Channel 9 community, and was additionally included in several MSDN CDs and DVDs.
The "Code Room" was typically an enclosed room with one or more desks, whiteboards and computers. All required programming tools were installed, along with other stationery. However, no internet connection was available and contestants could not bring their own notes. Contestants had to use preselected development environments which were typically new to them, having only been given a quick crash course about the environment in a presentation before the contest began.
Teams that were able to complete the project within the required time were eligible for prizes and to be part of "Team Code FF". At the end of the series, the two top "Team Code FF" teams (determined by their performance, as well as community ranking) competed in the Final Code Room challenge for the ultimate "Code Room Champion" title.
The program won a Telly Award in 2004.
Episodes
The Code Room: Episode 1, published December 9, 2004
The Code Room: Episode 2, Building Mobile Apps and Bluetooth Enabled Kiosks, published May 19, 2005
The Code Room: Episode 3, Breaking Into Vegas, published February 23, 2006
See also
Microsoft Developer Network
The .NET Show
External links
MSDN Events
MSDN TV
MSDN
T |
https://en.wikipedia.org/wiki/Radio%20spectrum%20pollution | Radio spectrum pollution is the straying of waves in the radio and electromagnetic spectrums outside their allocations that cause problems for some activities. It is of particular concern to radio astronomers.
Radio spectrum pollution is mitigated by effective spectrum management. Within the United States, the Communications Act of 1934 grants authority for spectrum management to the President for all federal use (47 U.S.C. 305). The National Telecommunications and Information Administration (NTIA) manages the spectrum for the Federal Government. Its rules are found in the "NTIA Manual of Regulations and Procedures for Federal Radio Frequency Management". The Federal Communications Commission (FCC) manages and regulates all domestic non-federal spectrum use (47 U.S.C. 301). Each country typically has its own spectrum regulatory organization. Internationally, the International Telecommunication Union (ITU) coordinates spectrum policy.
See also
Spectrum management
Electromagnetic radiation and health
Frequency allocation
Radio quiet zone |
https://en.wikipedia.org/wiki/Iterative%20deepening%20A%2A | Iterative deepening A* (IDA*) is a graph traversal and path search algorithm that can find the shortest path between a designated start node and any member of a set of goal nodes in a weighted graph. It is a variant of iterative deepening depth-first search that borrows the idea to use a heuristic function to conservatively estimate the remaining cost to get to the goal from the A* search algorithm. Since it is a depth-first search algorithm, its memory usage is lower than in A*, but unlike ordinary iterative deepening search, it concentrates on exploring the most promising nodes and thus does not go to the same depth everywhere in the search tree. Unlike A*, IDA* does not utilize dynamic programming and therefore often ends up exploring the same nodes many times.
While the standard iterative deepening depth-first search uses search depth as the cutoff for each iteration, IDA* uses the more informative f(n) = g(n) + h(n), where g(n) is the cost to travel from the root to node n and h(n) is a problem-specific heuristic estimate of the cost to travel from n to the goal.
The algorithm was first described by Richard Korf in 1985.
Description
Iterative-deepening-A* works as follows: at each iteration, perform a depth-first search, cutting off a branch when its total cost exceeds a given threshold. This threshold starts at the estimate of the cost at the initial state, and increases for each iteration of the algorithm. At each iteration, the threshold used for the next iteration is the minimum cost of all values that exceeded the current threshold.
As in A*, the heuristic has to have particular properties to guarantee optimality (shortest paths). See Properties below.
Pseudocode
path: current search path (acts like a stack)
node: current node (last node in current path)
g: the cost to reach the current node
f: estimated cost of the cheapest path (root..node..goal)
h(node): estimated cost of the cheapest path (node..goal)
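The description above can be condensed into a minimal Python sketch. The example graph, the zero heuristic, and all function names here are illustrative assumptions, not part of the original pseudocode.

```python
import math

def ida_star(root, h, is_goal, successors):
    """Return (path, cost) to the cheapest goal, or (None, inf).
    Illustrative sketch: h is an admissible heuristic, successors(n)
    yields (child, step_cost) pairs."""
    bound = h(root)                 # initial threshold: f(root) = 0 + h(root)
    path = [root]                   # current search path (acts like a stack)

    def search(g, bound):
        node = path[-1]
        f = g + h(node)             # f = g + h, the cutoff criterion
        if f > bound:
            return f                # prune; report the f that exceeded bound
        if is_goal(node):
            return ("found", g)
        minimum = math.inf          # min f-value that exceeded the threshold
        for child, step_cost in successors(node):
            if child not in path:   # avoid cycles along the current path
                path.append(child)
                t = search(g + step_cost, bound)
                if isinstance(t, tuple):
                    return t        # propagate the found goal upward
                minimum = min(minimum, t)
                path.pop()
        return minimum

    while True:                     # one depth-first pass per threshold
        t = search(0, bound)
        if isinstance(t, tuple):
            return list(path), t[1]
        if t == math.inf:
            return None, math.inf   # no goal is reachable
        bound = t                   # next threshold: min f that exceeded

# Tiny weighted graph; with h = 0 the search degenerates to iterative
# lengthening, which still finds the cheapest path.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
path, cost = ida_star("A", h=lambda n: 0, is_goal=lambda n: n == "D",
                      successors=lambda n: graph[n])
# path == ["A", "B", "C", "D"], cost == 3
```

Note how the threshold update matches the prose above: each pass returns the minimum f-value that exceeded the current bound, which becomes the next bound.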
|
https://en.wikipedia.org/wiki/Multipath%20I/O | In computer storage, multipath I/O is a fault-tolerance and performance-enhancement technique that defines more than one physical path between the CPU in a computer system and its mass-storage devices through the buses, controllers, switches, and bridge devices connecting them.
As an example, a SCSI hard disk drive may connect to two SCSI controllers on the same computer, or a disk may connect to two Fibre Channel ports. Should one controller, port or switch fail, the operating system can route the I/O through the remaining controller, port or switch transparently and with no changes visible to the applications, other than perhaps resulting in increased latency.
Multipath software layers can leverage the redundant paths to provide performance-enhancing features, including dynamic load balancing, traffic shaping, automatic path management, and dynamic reconfiguration.
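As a conceptual illustration of the transparent failover described above (not a real storage stack; the `Path` class and all names are invented for the sketch):

```python
# Conceptual sketch, not a real storage stack: the Path class and all
# names are invented to illustrate transparent path failover.

class Path:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def send(self, request):
        if not self.healthy:
            raise IOError(f"path {self.name} failed")
        return f"{request} via {self.name}"

def multipath_io(paths, request):
    """Route a request over the first healthy path; the caller sees a
    single-path failure only as added latency, never as an error."""
    for p in paths:
        try:
            return p.send(request)
        except IOError:
            continue                # fail over to the next redundant path
    raise IOError("all paths failed")

result = multipath_io([Path("ctrl0", healthy=False), Path("ctrl1")],
                      "read block 7")
# result == "read block 7 via ctrl1"
```

A real multipath layer would additionally balance load across healthy paths rather than always preferring the first one.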
See also
Device mapper
Linux DM Multipath
External links
Linux Multipathing, Linux Symposium 2005 p. 147
VxDMP white paper, Veritas Dynamic Multi pathing
Linux Multipath Usage guide
Computer data storage
Computer storage technologies
Fault-tolerant computer systems |
https://en.wikipedia.org/wiki/Carath%C3%A9odory%27s%20theorem%20%28conformal%20mapping%29 | In mathematics, Carathéodory's theorem is a theorem in complex analysis, named after Constantin Carathéodory, which extends the Riemann mapping theorem. The theorem, first proved in 1913, states that any conformal mapping sending the unit disk to some region in the complex plane bounded by a Jordan curve extends continuously to a homeomorphism from the unit circle onto the Jordan curve. The result is one of Carathéodory's results on prime ends and the boundary behaviour of univalent holomorphic functions.
Proofs of Carathéodory's theorem
The first proof of Carathéodory's theorem presented here is a summary of the short self-contained account in ; there are related proofs in and .
Clearly if f admits an extension to a homeomorphism, then ∂U must be a Jordan curve.
Conversely if ∂U is a Jordan curve, the first step is to prove f extends continuously to the closure of D. In fact this will hold if and only if f is uniformly continuous on D: for this is true if it has a continuous extension to the closure of D; and, if f is uniformly continuous, it is easy to check f has limits on the unit circle and the same inequalities for uniform continuity hold on the closure of D.
Suppose that f is not uniformly continuous. In this case there must be an ε > 0 and a point ζ on the unit circle and sequences zn, wn tending to ζ with |f(zn) − f(wn)| ≥ 2ε. This is shown below to lead to a contradiction, so that f must be uniformly continuous and hence has a continuous extension to the closure of D.
For 0 < r < 1, let γr be the curve given by the arc of the circle | z − ζ | = r lying within D. Then f ∘ γr is a Jordan curve. Its length can be estimated using the Cauchy–Schwarz inequality:
Hence there is a "length-area estimate":
The finiteness of the integral on the left hand side implies that there is a sequence rn decreasing to 0 with tending to 0. But the length of a curve g(t) for t in (a, b) is given by
The finiteness of therefore implies that the curve has limiting poin |
https://en.wikipedia.org/wiki/Capacitor-input%20filter | A capacitor-input filter is a filter circuit in which the first element is a capacitor connected in parallel with the output of the rectifier in a linear power supply. The capacitor increases the DC voltage and decreases the ripple voltage components of the output. The capacitor is often referred to as a smoothing capacitor or reservoir capacitor. The capacitor is often followed by other alternating series and parallel filter elements to further reduce ripple voltage, or adjust DC output voltage. It may also be followed by a voltage regulator which virtually eliminates any remaining ripple voltage, and adjusts the DC voltage output very precisely to match the DC voltage required by the circuit.
Operation
While the rectifier is conducting and its potential is higher than the charge across the capacitor, the capacitor stores energy from the transformer; when the output of the rectifier falls below the charge on the capacitor, the capacitor discharges energy into the circuit. Since the rectifier conducts current only in the forward direction, any energy discharged by the capacitor flows into the load. This results in a DC output voltage upon which is superimposed a waveform referred to as a sawtooth wave. The sawtooth wave is a convenient linear approximation to the actual waveform, which is exponential for both charge and discharge. The crests of the sawtooth waves are more rounded when the DC resistance of the transformer secondary is higher.
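The linear sawtooth approximation yields the standard rule of thumb V_ripple ≈ I / (f C) for the peak-to-peak ripple, where f is the ripple frequency (twice the mains frequency for a full-wave rectifier). The component values below are illustrative, not from this article.

```python
# Back-of-the-envelope sketch (standard approximation, not from the
# article): peak-to-peak ripple of a capacitor-input filter, assuming
# the sawtooth discharge is roughly linear.
def ripple_voltage(i_load, capacitance, ripple_freq):
    """V_ripple ~= I / (f * C): the capacitor loses charge Q = I / f
    between charging peaks, and dV = dQ / C."""
    return i_load / (ripple_freq * capacitance)

# Illustrative values: 1 A load, 4700 uF reservoir, full-wave rectified
# 50 Hz mains (so the ripple frequency is 100 Hz).
v_pp = ripple_voltage(1.0, 4700e-6, 100.0)   # about 2.1 V peak-to-peak
```

The formula makes the design trade-off explicit: halving the ripple requires doubling the capacitance (or the ripple frequency) at a given load current.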
Ripple current
A ripple current which is 90 degrees out of phase with the ripple voltage also passes through the capacitor.
See also
Rectifier#Capacitor input filter
Choke-input filter |
https://en.wikipedia.org/wiki/International%20Academy%20of%20Quantum%20Molecular%20Science | The International Academy of Quantum Molecular Science (IAQMS) is an international scientific learned society covering all applications of quantum theory to chemistry and chemical physics. It was created in Menton in 1967. The founding members were Raymond Daudel, Per-Olov Löwdin, Robert G. Parr, John Pople and Bernard Pullman. Its foundation was supported by Louis de Broglie.
Originally, the academy had 25 regular members under 65 years of age. This was later raised to 30, and then to 35. There is no limit on the number of members over 65 years of age. The members are "chosen among the scientists of all countries who have distinguished themselves by the value of their scientific work, their role of pioneer or leader of a school in the broad field of quantum chemistry, i.e. the application of quantum mechanics to the study of molecules and macromolecules". As of 2006, the academy consisted of 90 members. The academy organizes the International Congress of Quantum Chemistry every three years.
The academy awards a medal to a young member of the scientific community who has distinguished themselves by a pioneering and important contribution. The award has been made every year since 1967.
Presidents
Presidents and vice-presidents of the academy since its inception:
Members
Ali Alavi
Millard H. Alexander
Jean-Marie André
Evert-Jan Baerends
Vincenzo Barone
Rodney J. Bartlett
Mikhail V. Basilevsky
Axel D. Becke
Joel M. Bowman
Jean-Luc Brédas
Ria Broer-Braam
A. David Buckingham
Kieron Burke
Petr Cársky
Emily A. Carter
Lorenz S. Cederbaum
David M. Ceperley
Garnet Kin-Lic Chan
Jiří Čížek
David Clary
Enrico Clementi
Ernest R. Davidson
Wolfgang Domcke
Thom Dunning
Michel Dupuis
Odile Eisenstein
Jiali Gao
Jürgen Gauß
Peter Gill
William A. Goddard, III
Leticia González
Mark S. Gordon
Stefan Grimme
George G. Hall
Sharon Hammes-Schiffer
Martin Head-Gordon
Trygve Helgaker
Eric J. Heller
Kimihiko Hirao
So Hirata
Roald Hoffmann
Kendall N. Houk
Bogumil Jeziorski
Poul Jørgense |
https://en.wikipedia.org/wiki/Multireference%20configuration%20interaction | In quantum chemistry, the multireference configuration interaction (MRCI) method consists of a configuration interaction expansion of the eigenstates of the electronic molecular Hamiltonian in a set of Slater determinants which correspond to excitations not only of the ground-state electronic configuration but also of some excited states. The Slater determinants from which the excitations are performed are called reference determinants. The higher excited determinants (also called configuration state functions (CSFs) or simply configurations) are then chosen either by the program, according to some perturbation-theoretical ansatz with a threshold provided by the user, or by truncating excitations from these references to singly, doubly, ... excited determinants, resulting in MRCIS, MRCISD, etc.
For the ground state, using more than one reference configuration means better correlation and thus a lower energy. The size inconsistency of truncated CI methods is not remedied by taking more references.
As a result of a MRCI calculation one gets a more balanced correlation of the ground and excited states. For quantitative good energy differences (excitation energies) one has to be careful in selecting the references. Taking only the dominant configuration of an excited state into the reference space leads to a correlated (lower) energy of the excited state. The generally too-high excitation energies of CIS or CISD are lowered. But usually excited states have more than one dominant configuration and so the ground state is more correlated due to: a) now including some configurations with higher excitations (triply and quadruply in MRCISD); b) the neglect of other dominant configurations of the excited states which are still uncorrelated.
Selecting the references can be done manually, automatically (all possible configurations within an active space of some orbitals) or semiautomatically (taking all configurations as references that have been shown to be important
https://en.wikipedia.org/wiki/Lusin%27s%20theorem | In the mathematical field of real analysis, Lusin's theorem (or Luzin's theorem, named for Nikolai Luzin) or Lusin's criterion states that an almost-everywhere finite function is measurable if and only if it is a continuous function on nearly all its domain. In the informal formulation of J. E. Littlewood, "every measurable function is nearly continuous".
Classical statement
For an interval [a, b], let
f : [a, b] → ℂ
be a measurable function. Then, for every ε > 0, there exists a compact E ⊆ [a, b] such that f restricted to E is continuous and
m(E) > b − a − ε,
where m denotes the Lebesgue measure.
Note that E inherits the subspace topology from [a, b]; continuity of f restricted to E is defined using this topology.
Also, for any function f defined on the interval [a, b] and almost-everywhere finite, if for any ε > 0 there is a function ϕ, continuous on [a, b], such that the measure of the set
{x ∈ [a, b] : f(x) ≠ ϕ(x)}
is less than ε, then f is measurable.
General form
Let (X, Σ, μ) be a Radon measure space and Y be a second-countable topological space equipped with a Borel algebra, and let f : X → Y be a measurable function. Given ε > 0, for every A ∈ Σ of finite measure there is a closed set E with μ(A \ E) < ε such that f restricted to E is continuous.
On the proof
The proof of Lusin's theorem can be found in many classical books. Intuitively, one expects it as a consequence of Egorov's theorem and density of smooth functions. Egorov's theorem states that pointwise convergence is nearly uniform, and uniform convergence preserves continuity. |
https://en.wikipedia.org/wiki/Dirac%20fermion | In physics, a Dirac fermion is a spin-½ particle (a fermion) which is different from its antiparticle. The vast majority of fermions fall under this category.
Description
In particle physics, all fermions in the standard model have distinct antiparticles (perhaps excepting neutrinos) and hence are Dirac fermions. They are named after Paul Dirac, and can be modeled with the Dirac equation.
A Dirac fermion is equivalent to two Weyl fermions. The counterpart to a Dirac fermion is a Majorana fermion, a particle that must be its own antiparticle.
Dirac quasi-particles
In condensed matter physics, low-energy excitations in graphene and topological insulators, among others, are fermionic quasiparticles described by a pseudo-relativistic Dirac equation.
See also
Dirac spinor, a wavefunction-like description of a Dirac fermion
Dirac–Kähler fermion, a geometric formulation of Dirac fermions
Majorana fermion, an alternate category of fermion, possibly describing neutrinos
Spinor, mathematical details |
https://en.wikipedia.org/wiki/Francis%20Polkinghorne%20Pascoe | Francis Polkinghorne Pascoe (1 September 1813 – 20 June 1893) was an English entomologist mainly interested in beetles.
Biography
He was born in Penzance, Cornwall, and trained at St. Bartholomew's Hospital, London. Appointed surgeon in the Navy, he served on Australian, West Indian and Mediterranean stations. He married Mary Glasson of Cornwall and settled at Trewhiddle near St Austell, where his wife's property produced china clay. Widowed in 1851, he settled in London, devoting himself to natural history and entomology in particular. The results of collecting trips to Europe, North Africa and the Lower Amazons were poor, and Pascoe worked mainly on insects collected by others. His entomological papers listed and described species collected by Alfred Russel Wallace (in Longicornia Malayana), Robert Templeton and other assiduous collectors who were not prolific writers on systematic entomology. He became a Fellow of the Entomological Society in 1854, was president from 1864 to 1865, was a member of the Société Entomologique de France, and belonged to the Belgian and Stettin societies. He was also a Fellow of the Linnean Society (elected 1852) and served on the Council of the Ray Society. His 2,500 types are in the Natural History Museum, London.
Evolution
Pascoe accepted the fact of evolution but was an opponent of natural selection. His 1890 book The Darwinian Theory of the Origin of Species was an attack on natural selection. It received a lengthy review in Nature by Raphael Meldola, who disagreed with Pascoe's criticisms but noted that the work should be taken seriously, as Pascoe was a respected systematic entomologist.
Works
1858 On new genera and species of longicorn Coleoptera. Part III Trans. Entomol. Soc. London, (2)4:236–266.
1859 On some new genera and species of longicorn Coleoptera. Part IV.Trans.Entomol. Soc. London, (2)5:12–61.
1860 Notices of new or little-known genera and species of Coleoptera. J.Entomol., 1(1):36–64.
1860 Notices of new or little- |
https://en.wikipedia.org/wiki/Stationary%20wavelet%20transform | The stationary wavelet transform (SWT) is a wavelet transform algorithm designed to overcome the lack of translation-invariance of the discrete wavelet transform (DWT). Translation-invariance is achieved by removing the downsamplers and upsamplers in the DWT and upsampling the filter coefficients by a factor of 2^(j−1) in the jth level of the algorithm. The SWT is an inherently redundant scheme, as the output of each level of SWT contains the same number of samples as the input – so for a decomposition of N levels there is a redundancy of N in the wavelet coefficients. This algorithm is more famously known as the "algorithme à trous" in French (the word trous means holes in English), which refers to inserting zeros in the filters. It was introduced by Holschneider et al.
Implementation
The following block diagram depicts the digital implementation of SWT.
In the above diagram, filters in each level are up-sampled versions of the previous (see figure below).
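A minimal numpy sketch makes the "à trous" filter dilation concrete: at level j, 2^(j−1) − 1 zeros (the "holes") are inserted between the taps of the level-1 filters, and the signal itself is never downsampled. The Haar-like filter taps below are illustrative, not from the article.

```python
import numpy as np

# Sketch of the "à trous" filter dilation (filter taps are illustrative,
# here an unnormalized Haar-like lowpass pair): at level j the level-1
# filters are upsampled by 2**(j-1), i.e. 2**(j-1) - 1 zeros ("holes")
# are inserted between taps, and no downsampling is applied.
def upsample_filter(h, level):
    factor = 2 ** (level - 1)
    out = np.zeros((len(h) - 1) * factor + 1)
    out[::factor] = h
    return out

h1 = np.array([0.5, 0.5])          # level-1 lowpass
h2 = upsample_filter(h1, 2)        # [0.5, 0.0, 0.5]
h3 = upsample_filter(h1, 3)        # [0.5, 0.0, 0.0, 0.0, 0.5]

# Each SWT level keeps the full sample count (hence the redundancy):
x = np.arange(8, dtype=float)
a1 = np.convolve(x, h1, mode="same")
assert a1.shape == x.shape          # N samples out per level, N in
```

Because no decimation occurs, an N-level decomposition of an N-sample signal produces N samples per subband, matching the redundancy of N noted above.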
Applications
A few applications of SWT are specified below.
Signal denoising
Pattern recognition
Brain image classification
Pathological brain detection
Synonyms
Redundant wavelet transform
Algorithme à trous
Quasi-continuous wavelet transform
Translation invariant wavelet transform
Shift invariant wavelet transform
Cycle spinning
Maximal overlap wavelet transform (MODWT)
Undecimated wavelet transform (UWT)
See also
wavelet transform
wavelet entropy
wavelet packet decomposition |
https://en.wikipedia.org/wiki/Nambu%E2%80%93Jona-Lasinio%20model | In quantum field theory, the Nambu–Jona-Lasinio model (or more precisely: the Nambu and Jona-Lasinio model) is a complicated effective theory of nucleons and mesons constructed from interacting Dirac fermions with chiral symmetry, paralleling the construction of Cooper pairs from electrons in the BCS theory of superconductivity. The "complicatedness" of the theory has become more natural as it is now seen as a low-energy approximation of the still more basic theory of quantum chromodynamics, which does not work perturbatively at low energies.
Overview
The model is much inspired by a different field, that of solid-state theory, particularly by the BCS breakthrough of 1957. The first inventor of the Nambu–Jona-Lasinio model, Yoichiro Nambu, also contributed essentially to the theory of superconductivity, e.g. through the "Nambu formalism". The second inventor was Giovanni Jona-Lasinio. The joint paper of the authors that introduced the model appeared in 1961. A subsequent paper included chiral symmetry breaking, isospin and strangeness.
At the same time, the same model was independently considered by Soviet physicists Valentin Vaks and Anatoly Larkin.
The model is quite technical, although based essentially on symmetry principles. It is an example of the importance of four-fermion interactions and is defined in a spacetime with an even number of dimensions. It is still important and is used primarily as an effective although not rigorous low energy substitute for quantum chromodynamics.
The dynamical creation of a condensate from fermion interactions inspired many theories of the breaking of electroweak symmetry, such as technicolor and the top-quark condensate.
Starting with the one-flavor case first, the Lagrangian density is
ℒ = ψ̄ iγ^μ ∂_μ ψ + G[(ψ̄ψ)² + (ψ̄ iγ₅ ψ)²]
or, equivalently,
ℒ = ψ̄ iγ^μ ∂_μ ψ + G[(ψ̄ψ)² − (ψ̄ γ₅ ψ)²]
The terms proportional to G are an attractive four-fermion interaction, which parallels the BCS theory phonon exchange interaction.
The global symmetry of the model is U(1)Q×U(1)χ where Q is the ordinary charge of the Dirac f |
https://en.wikipedia.org/wiki/Stardock%20Central | Stardock Central was a software content delivery and digital rights management system used by Stardock customers to access components of the Object Desktop, TotalGaming.net and ThinkDesk product lines, as well as products under the WinCustomize brand.
Introduced in 2001 to access games on TotalGaming.net (then known as the Drengin Network), Stardock Central was later expanded to cover all Stardock products, replacing Component Manager (1999).
As of 2010, Stardock Central had been phased out in favour of its successor, Impulse. However, in March 2011 Impulse was sold to GameStop and Stardock soon reopened their own online store. As of April 2012, the Stardock Central software has been revived and released as a Beta to once again provide a proprietary platform for Stardock's digital product downloads.
Features
Software on Stardock Central was divided into components, and further divided into packages. When users purchased a product or a subscription, they gained access to it via Stardock Central. The program had the ability to break products into components so that users on slower connections could start using the main portion of the software as soon as possible, and download extras — such as in-game movies or music — at a later date.
To cater for the frequent updates provided for many products, once a package had been downloaded and installed, Stardock Central downloaded only the updated files for new versions. A product archiving and restore function was available to back up components and to allow their transfer to other computers. Users could also use the program to interact on Stardock's discussion boards or access the Stardock IRC server via a built-in IRC client. WinCustomize subscribers could use the Skins and Themes section to browse and download the WinCustomize library.
Stardock Central was similar in concept to the later-developed Steam content delivery system; unlike Steam, it did not require a permanent connection to the Internet, only being re |
https://en.wikipedia.org/wiki/WUPL | WUPL (channel 54) is a television station licensed to Slidell, Louisiana, United States, serving the New Orleans area as an affiliate of MyNetworkTV. It is owned by Tegna Inc. alongside CBS affiliate WWL-TV (channel 4). Both stations share studios on Rampart Street in the historic French Quarter district, while WUPL's transmitter is located on Cooper Road in Terrytown, Louisiana.
History
As a UPN affiliate
The station first signed on the air on June 1, 1995, as an affiliate of the United Paramount Network (UPN). It was owned by Texas broadcaster Larry Safir via his company, Middle America Communications. Safir also owned Univision affiliate KNVO in the Rio Grande Valley. Prior to the station's sign-on, WHNO (channel 20) was approached by UPN for an affiliation, though WHNO's owner LeSEA Broadcasting declined all netlet offers on their stations throughout the country, as the programming planned for both UPN and competitor The WB conflicted with the company's core programming values; as a result, programming from UPN, which launched on January 16, 1995, was only available on New Orleans-area cable and satellite providers through New York City-based national superstation WWOR for the 5½ months prior to WUPL's debut. Along with programming from UPN, the station ran a general entertainment format, offering vintage off-network sitcoms, talk shows, court shows and other syndicated programs. In 1996, Safir entered a deal with Cox Enterprises to take over operations of the station, and in 1997, he sold the station to the Paramount Stations Group subsidiary of Viacom; as a result, WUPL became a UPN owned-and-operated station (Viacom launched UPN in a programming partnership with Chris-Craft Industries/United Television, and acquired a 50% interest in the network from Chris-Craft/United in 1996).
Viacom merged with CBS in 2000. Despite Viacom's ownership of WUPL, the market's CBS affiliation remained on WWL-TV (channel 4), the highest-rated television station in New Orleans an |
https://en.wikipedia.org/wiki/FMRFamide | FMRFamide (H-Phe-Met-Arg-Phe-NH2) is a neuropeptide from a broad family of FMRFamide-related peptides (FaRPs) all sharing an -RFamide sequence at their C-terminus. First identified in Hard clam (Mercenaria mercenaria), it is thought to play an important role in cardiac activity regulation. Several FMRFamide related peptides are known, regulating various cellular functions and possessing pharmacological actions, such as anti-opiate effects. In Mercenaria mercenaria, FMRFamide has been isolated and demonstrated to increase both the force and frequency of the heartbeat through a biochemical pathway that is thought to involve the increase of cytoplasmic cAMP in the ventricular region.
FMRFamide is an important neuropeptide in several phyla such as Insecta, Nematoda, Mollusca, and Annelida.
It is the most abundant neuropeptide in endocrine cells of insect alimentary tracts along with allatostatin and tachykinin families, however the neuropeptide’s function is not known. Generally, the neuropeptide is encoded by several genes such as flp-1 through flp-22 in C. elegans. The common precursor of the FaRPs is modified to yield many different neuropeptides all having the same FMRFamide sequence. Moreover, these peptides are not functionally redundant.
In invertebrates, the FMRFamide-related peptides are known to affect heart rate, blood pressure, gut motility, feeding behaviour and reproduction. In vertebrates such as mice, they are known to affect opioid receptors resulting in elicitation of naloxone-sensitive antinociception and reduction of morphine-induced antinociception.
Detection of this neuropeptide is important because its expression lays down the foundation of the CNS in the early stages of development in invertebrates. In recent years, neuromodulatory actions of FMRFamide in invertebrates have become more apparent. This is, in part, due to the extensive studies done on the Planorbid and Lymnaeid families of pond snails.
See also
Neuropeptide VF precursor |
https://en.wikipedia.org/wiki/Isotopologue | In chemistry, isotopologues are molecules that differ only in their isotopic composition. They have the same chemical formula and bonding arrangement of atoms, but at least one atom has a different number of neutrons than the parent.
An example is water, whose hydrogen-related isotopologues are: "light water" (HOH or H2O), "semi-heavy water" with the deuterium isotope in equal proportion to protium (HDO), "heavy water" with two deuterium isotopes of hydrogen per molecule (D2O), and "super-heavy water" or tritiated water (T2O, as well as HTO and DTO, where some or all of the hydrogen atoms are replaced with the radioactive tritium isotope). Oxygen-related isotopologues of water include the commonly available form of heavy-oxygen water (H2¹⁸O) and the more difficult to separate version with the ¹⁷O isotope. Both elements may be replaced by isotopes, for example in the doubly labeled water isotopologue D2¹⁸O. All taken together, there are 9 different stable water isotopologues, and 9 radioactive isotopologues involving tritium, for a total of 18. However only certain ratios are possible in mixture, due to prevalent hydrogen swapping.
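The counts of 9 stable, 9 tritiated, and 18 total isotopologues can be reproduced by enumeration; the isotope labels below are standard symbols, while the function name is invented for this example.

```python
from itertools import combinations_with_replacement

# Sketch: count water isotopologues by pairing each unordered pair of
# hydrogen isotopes with each stable oxygen isotope.
H_STABLE = ["1H", "2H"]               # protium, deuterium
H_ALL = H_STABLE + ["3H"]             # plus radioactive tritium
O_STABLE = ["16O", "17O", "18O"]

def water_isotopologues(h_isotopes, o_isotopes):
    pairs = list(combinations_with_replacement(h_isotopes, 2))
    return [(h1, h2, o) for (h1, h2) in pairs for o in o_isotopes]

stable = water_isotopologues(H_STABLE, O_STABLE)   # 3 H-pairs x 3 O = 9
everything = water_isotopologues(H_ALL, O_STABLE)  # 6 H-pairs x 3 O = 18
tritiated = len(everything) - len(stable)          # 9 involve tritium
```

The hydrogen pairs are unordered (HDO and DHO are the same molecule), which is why `combinations_with_replacement` rather than a plain product is the right enumeration.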
The atom(s) of the different isotope may be anywhere in a molecule, so the difference is in the net chemical formula. If a compound has several atoms of the same element, any one of them could be the altered one, and it would still be the same isotopologue. When considering the different locations of the same isotopically modified element, the term isotopomer, first proposed by Seeman and Paine in 1992, is used.
Isotopomerism is analogous to constitutional isomerism of different elements in a structure. Depending on the formula and the symmetry of the structure, there might be several isotopomers of one isotopologue. For example, ethanol has the molecular formula C2H6O. Mono-deuterated ethanol, C2H5DO, is an isotopologue of it. The structural formulas CH3–CH2–OD and CH3–CHD–OH are two isotopomers of that isotopologue.
Singly substituted isotopologues
Analytical chemistry applicatio |
https://en.wikipedia.org/wiki/Chronometry | Chronometry (from Greek χρόνος chronos, "time" and μέτρον metron, "measure") is the science of the measurement of time, or timekeeping. Chronometry provides a standard of measurement for time, and therefore serves as a significant reference for many and various fields of science.
Accurate and reliable time measurement provides a standardized unit for chronometric experiments in the modern world, and more specifically for scientific research. Units of time are identical worldwide, and time serves as a measure of change and as a variable in many experiments. Time is therefore an essential part of many areas of science.
It should not be confused with chronology, the science of locating events in time, which often relies upon it. Horology, the study of time, is also similar; however, that term is commonly used specifically with reference to the mechanical instruments created to keep time, such as stopwatches, clocks, and hourglasses. Chronometry is utilised in many areas, and its fields are often derived from aspects of other sciences; geochronometry, for example, combines geology and chronometry.
Early records of timekeeping are thought to have originated in the Paleolithic era, with etchings marking the passing of moons in order to measure the year. Timekeeping then progressed to written calendars, before mechanisms and devices made to track time were invented. Today, the highest level of precision in timekeeping comes from atomic clocks, which are used for the international standard of the second.
Etymology
Chronometry is derived from two root words, chronos and metron (χρόνος and μέτρον in Ancient Greek respectively), with rough meanings of "time" and "measure". The combination of the two is taken to mean time measuring.
In the Ancient Greek lexicon, meanings and translations differ depending on the source. Chronos, used in relation to time when in definite periods, and |
https://en.wikipedia.org/wiki/Juggling%20notation | Juggling notation is the written depiction of concepts and practices in juggling. Toss juggling patterns have a reputation for being "easier done than said" – while it might be easy to learn a given maneuver and demonstrate it for others, it is often much harder to communicate the idea accurately using speech or plain text. To circumvent this problem, various numeric or diagram-based notation systems have been developed to facilitate communication of patterns or tricks between jugglers, as well the investigation and discovery of new patterns.
The first juggling notation system, based on music notation, was proposed by Dave Storer in 1978. The first juggling diagram (a ladder diagram), drawn by Claude Shannon around 1981, was not printed until 2010; the first printed diagram, and the second-oldest notation system, were proposed by Jeff Walker in 1982.
Diagram-based
While diagrams are the most visual and reader-friendly way to notate many juggling patterns, they rely on images, so are complicated to produce and unwieldy to share via text or speech.
Ladder diagrams - Each rung on the "ladder" represents a point in time (or "beat"). The juggled objects are represented as lines tracing their paths through time and between a pair of hands.
Causal diagrams - Similar to the ladder diagram but doesn't show the props held in a juggler's hands. Instead it only shows each "problem" — an incoming prop — and what the juggler should do to make space in his or her hands to catch that incoming prop. It is usually used for club passing and can be displayed or edited in some juggling software.
Mills Mess State Transition Diagrams - Mills Mess is a popular pattern in which the arms cross and uncross. Mills Mess State Transition Diagrams can be used to track these basic arm movements.
Numeric
The following notation systems use only numbers and common characters. The patterns can easily be communicated by text. Most numeric systems are designed to be processed by software juggling simulators |
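As an illustration of how numeric notations lend themselves to software processing, here is a minimal sketch of the standard validity test for siteswap, the most widespread numeric juggling notation (the function names are this sketch's own): a throw of height t made on beat i lands on beat i + t, and since a pattern of length n repeats, the landing beats taken mod n must all be distinct; the average throw height then equals the number of props.

```python
def is_valid_siteswap(throws):
    """Permutation test: a throw of height t on beat i lands on beat i + t;
    in a repeating pattern of length n, landing beats mod n must be distinct."""
    n = len(throws)
    landings = [(i + t) % n for i, t in enumerate(throws)]
    return len(set(landings)) == n

def ball_count(throws):
    """Average theorem: the number of props equals the mean throw height."""
    return sum(throws) / len(throws)

print(is_valid_siteswap([5, 3, 1]), ball_count([5, 3, 1]))  # a 3-ball pattern
print(is_valid_siteswap([4, 3, 2]))                         # two throws collide
```

The second pattern fails because the throws on beats 0 and 1 would land on the same beat, so one hand would have to catch two props at once.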
https://en.wikipedia.org/wiki/Pickling | Pickling is the process of preserving or extending the shelf life of food by either anaerobic fermentation in brine or immersion in vinegar. The pickling procedure typically affects the food's texture and flavor. The resulting food is called a pickle, or, to prevent ambiguity, prefaced with pickled. Foods that are pickled include vegetables, fruits, mushrooms, meats, fish, dairy and eggs.
Pickling solutions are typically highly acidic, with a pH of 4.6 or lower, and high in salt, preventing enzymes from working and micro-organisms from multiplying. Pickling can preserve perishable foods for months. Antimicrobial herbs and spices, such as mustard seed, garlic, cinnamon or cloves, are often added. If the food contains sufficient moisture, a pickling brine may be produced simply by adding dry salt. For example, sauerkraut and Korean kimchi are produced by salting the vegetables to draw out excess water. Natural fermentation at room temperature, by lactic acid bacteria, produces the required acidity. Other pickles are made by placing vegetables in vinegar. Like the canning process, pickling (which includes fermentation) does not require that the food be completely sterile before it is sealed. The acidity or salinity of the solution, the temperature of fermentation, and the exclusion of oxygen determine which microorganisms dominate, and determine the flavor of the end product.
When both salt concentration and temperature are low, Leuconostoc mesenteroides dominates, producing a mix of acids, alcohol, and aroma compounds. At higher temperatures Lactobacillus plantarum dominates, which produces primarily lactic acid. Many pickles start with Leuconostoc, and change to Lactobacillus with higher acidity.
History
Pickling with vinegar likely originated in ancient Mesopotamia around 2400 BCE. There is archaeological evidence of cucumbers being pickled in the Tigris Valley in 2030 BCE. Pickling vegetables in vinegar continued to develop in the Middle East region before spr |
https://en.wikipedia.org/wiki/GForge | GForge is a commercial service originally based on the Alexandria software behind SourceForge, a web-based project management and collaboration system which was licensed under the GPL. Open source versions of the GForge code were released from 2002 to 2009, at which point the company behind GForge focused on their proprietary service offering which provides project hosting, version control (CVS, Subversion, Git), code reviews, ticketing (issues, support), release management, continuous integration and messaging. The FusionForge project emerged in 2009 to pull together open-source development efforts from the variety of software forks which had sprung up.
History
In 1999, VA Linux hired four developers, including Tim Perdue (1974-2011), to develop the SourceForge.net service to encourage open-source development and support the Open Source developer community. SourceForge.net services were offered free of charge to any Open Source project team. Following the SourceForge launch on November 17, 1999, the free software community rapidly took advantage of SourceForge.net, and traffic and users grew very quickly.
As another competitive web service, "Server 51", was being readied for launch, VA Linux released the source code for the sourceforge.net web site on January 14, 2000, as a marketing ploy to show that SourceForge was 'more open source'. Many companies began installing and using it themselves and contacting VA Linux for professional services to set up and use the software. However, their pricing was so unrealistic that they had few customers. By 2001, the company's Linux hardware business had collapsed in the dotcom bust. The company was renamed to VA Software and called the closed codebase SourceForge Enterprise Edition to try to force some of the large companies to purchase licenses. This prompted objections from open source community members. VA Software continued to say that a new source code release would be made at some point, but it never was.
Some time later |
https://en.wikipedia.org/wiki/Catenin | Catenins are a family of proteins found in complexes with cadherin cell adhesion molecules of animal cells. The first two catenins that were identified became known as α-catenin and β-catenin. α-Catenin can bind to β-catenin and can also bind filamentous actin (F-actin). β-Catenin binds directly to the cytoplasmic tail of classical cadherins. Additional catenins such as γ-catenin and δ-catenin have been identified. The name "catenin" was originally selected ('catena' means 'chain' in Latin) because it was suspected that catenins might link cadherins to the cytoskeleton.
Types
α-catenin
β-catenin
γ-catenin
δ-catenin
All but α-catenin contain armadillo repeats. They exhibit a high degree of protein dynamics, alone or in complex.
Function
Several types of catenins work with N-cadherins to play an important role in learning and memory.
Cell-cell adhesion complexes are required for simple epithelia in higher organisms to maintain structure, function and polarity. These complexes, which help regulate cell growth in addition to creating and maintaining epithelial layers, are known as adherens junctions, and they typically include at least cadherin, β-catenin, and α-catenin. Catenins played roles in cellular organization and polarity long before the evolution of Wnt signaling pathways and cadherins.
The primary mechanical role of catenins is to connect cadherins to actin filaments, such as the adhesion junctions of epithelial cells. Most studies investigating catenin actions have focused on α-catenin and β-catenin. β-catenin is particularly interesting as it plays a dual role in the cell. First of all, by binding to cadherin receptor intracellular cytoplasmic tail domains, it can act as an integral component of a protein complex in adherens junctions that helps cells maintain epithelial layers. β-catenin acts by anchoring the actin cytoskeleton to the junctions, and may possibly aid in contact inhibition signaling within the cell. For instanc |
https://en.wikipedia.org/wiki/Van%20der%20Woude%20syndrome | Van der Woude syndrome (VDWS) is a genetic disorder characterized by the combination of lower lip pits, cleft lip with or without cleft palate (CL/P), and cleft palate only (CPO). The frequency of orofacial clefts ranges from 1:1000 to 1:500 births worldwide, and there are more than 400 syndromes that involve CL/P. VWS is distinct from other clefting syndromes due to the combination of cleft lip and palate (CLP) and CPO within the same family. Other features frequently associated with VWS include hypodontia in 10-81% of cases, narrow arched palate, congenital heart disease, heart murmur and cerebral abnormalities, syndactyly of the hands, polythelia, ankyloglossia, and adhesions between the upper and lower gum pads.
The association between lower lip pits and cleft lip and/or palate was first described by Anne Van der Woude in 1954. The worldwide disease incidence ranges from 1:100,000 to 1:40,000.
Genetics
Van der Woude syndrome is inherited as an autosomal dominant disease caused by a mutation in a single gene with equal distribution between the sexes. The disease has high penetrance at about 96% but the phenotypic expression varies from lower lip pits with cleft lip and cleft palate to no visible abnormalities. Approximately 88% of VWS patients display lower lip pits, and in about 64% of cases lip pits are the only visible defect. Reported clefting covers a wide range including submucous cleft palate, incomplete unilateral CL, bifid uvula, and complete bilateral CLP. VWS is the most common orofacial clefting syndrome, accounting for 2% of CLP cases.
The majority of VWS cases are caused by haploinsufficiency due to mutations in the interferon regulatory factor 6 gene (IRF6) on chromosome 1 in the 1q32-q41 region known as VWS locus 1. A second, less common, causative locus is found at 1p34, known as VWS locus 2 (VWS2). More recent work has shown that GRHL3 is the VWS2 gene. Grhl3 is downstream of Irf6 in oral epithelium, suggesting a common molecular pathw |
https://en.wikipedia.org/wiki/Surface%20roughness | Surface roughness can be regarded as the quality of a surface of not being smooth, and it is hence linked to human (haptic) perception of the surface texture. From a mathematical perspective it is related to the spatial variability structure of surfaces, and inherently it is a multiscale property. It has different interpretations and definitions depending on the discipline considered.
In surface metrology
Surface roughness, often shortened to roughness, is a component of surface finish (surface texture). It is quantified by the deviations in the direction of the normal vector of a real surface from its ideal form. If these deviations are large, the surface is rough; if they are small, the surface is smooth. In surface metrology, roughness is typically considered to be the high-frequency, short-wavelength component of a measured surface. However, in practice it is often necessary to know both the amplitude and frequency to ensure that a surface is fit for a purpose.
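A minimal sketch of this quantification, assuming a sampled one-dimensional height profile: deviations are measured from the mean line, giving the standard arithmetic-mean (Ra) and root-mean-square (Rq) amplitude roughness parameters.

```python
from math import sqrt

def roughness(profile):
    """Amplitude roughness of a 1-D height profile (list of heights).

    Deviations are taken from the mean line, as in surface metrology:
    large deviations mean a rough surface, small ones a smooth surface.
    """
    mean = sum(profile) / len(profile)
    dev = [z - mean for z in profile]
    ra = sum(abs(d) for d in dev) / len(dev)       # arithmetic mean roughness
    rq = sqrt(sum(d * d for d in dev) / len(dev))  # root-mean-square roughness
    return ra, rq

# A jagged profile scores much higher than a nearly flat one.
print(roughness([0.0, 2.0, -2.0, 2.0, -2.0]))
print(roughness([0.0, 0.1, -0.1, 0.1, -0.1]))
```

Rq weights large deviations more heavily than Ra, which is why the two parameters together say more about a surface than either alone.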
Roughness plays an important role in determining how a real object will interact with its environment. In tribology, rough surfaces usually wear more quickly and have higher friction coefficients than smooth surfaces. Roughness is often a good predictor of the performance of a mechanical component, since irregularities on the surface may form nucleation sites for cracks or corrosion. On the other hand, roughness may promote adhesion. Generally speaking, rather than scale-specific descriptors, cross-scale descriptors such as surface fractality provide more meaningful predictions of mechanical interactions at surfaces, including contact stiffness and static friction.
Although a high roughness value is often undesirable, it can be difficult and expensive to control in manufacturing. For example, it is difficult and expensive to control surface roughness of fused deposition modelling (FDM) manufactured parts. Decreasing the roughness of a surface usually increases its manufacturing cost. This often resul |
https://en.wikipedia.org/wiki/Encyclopedia%20Mythica | Encyclopedia Mythica is an online encyclopedia that seeks to cover folklore, mythology, and religion. This encyclopedia was founded in June 1995 as a small site with about 300 entries, and established with its own domain name in March 1996. As of May 2021, it features more than 11000 articles. |
https://en.wikipedia.org/wiki/Tail | The tail is the section at the rear end of certain kinds of animals' bodies; in general, the term refers to a distinct, flexible appendage to the torso. It is the part of the body that corresponds roughly to the sacrum and coccyx in mammals, reptiles, and birds. While tails are primarily a feature of vertebrates, some invertebrates including scorpions and springtails, as well as snails and slugs, have tail-like appendages that are sometimes referred to as tails. Tailed objects are sometimes referred to as "caudate" and the part of the body associated with or proximal to the tail are given the adjective "caudal".
Function
Animal tails are used in a variety of ways. They provide a source of locomotion for fish and some other forms of marine life. Many land animals use their tails to brush away flies and other biting insects. Most canines use their tails to communicate mood and intention. Some species, including cats and kangaroos, use their tails for balance; and some, such as monkeys and opossums, have what are known as prehensile tails, which are adapted to allow them to grasp tree branches.
Tails are also used for social signaling. Some deer species flash the white underside of their tails to warn other nearby deer of possible danger, beavers slap the water with their tails to indicate danger, and canids (including domestic dogs) indicate emotions through the positioning and movement of their tails. Some species' tails are armored, and some, such as those of scorpions, contain venom.
Some species of lizard can detach ("cast") their tails from their bodies. This can help them to escape predators, which are either distracted by the wriggling, detached tail or left with only the tail while the lizard flees. Tails cast in this manner generally grow back over time, though the replacement is typically darker in colour than the original and contains only cartilage, not bone. Various species of rat demonstrate a similar function with their tails, known as degloving, i |
https://en.wikipedia.org/wiki/Oxymonad | The Oxymonads (or Oxymonadida) are a group of flagellated protozoa found exclusively in the intestines of termites and other wood-eating insects. Along with the similar parabasalid flagellates, they harbor the symbiotic bacteria that are responsible for breaking down cellulose.
It includes Dinenympha, Pyrsonympha, and Oxymonas.
Characteristics
Most oxymonads are around 50 μm in size and have a single nucleus, associated with four flagella. Their basal bodies give rise to several long sheets of microtubules, which form an organelle called an axostyle, though it differs in structure from the axostyles of parabasalids. The cell may use the axostyle to swim, as the sheets slide past one another and cause it to undulate. An associated fiber called the preaxostyle separates the flagella into two pairs. A few oxymonads have multiple nuclei, flagella, and axostyles.
Relationship to Trimastix
The free-living flagellate Trimastix is closely related to the oxymonads. It lacks mitochondria and has four flagella separated by a preaxostyle, but unlike the oxymonads has a feeding groove. This character places the Oxymonads and Trimastix among the Excavata, and in particular they may belong to the metamonads.
Taxonomy
Order Oxymonadida Grassé 1952 emend. Cavalier-Smith 2003
Family Oxymonadidae Kirby 1928 [Oxymonadaceae]
Genus ?Barroella Zeliff 1944 [Kirbyella Zeliff 1930 non Kirkaldy 1906 non Bolivar 1909]
Genus ?Metasaccinobaculus Freitas 1945
Genus ?Tubulimonoides Krishnamurthy & Sultana 1976
Genus Microrhopalodina Grassé & Foa 1911 [Proboscidiella Kofoid & Swezy 1926; Opisthomitus Grassé 1952 non Duboscq & Grassé 1934]
Genus Oxymonas Janicki 1915
Genus Sauromonas Grassé & Hollande 1952
Family Polymastigidae Bütschli 1884 [Polymastigaceae]
Genus ?Brachymonas Grassé 1952 non Hiraishi et al. 1995
Genus ?Paranotila Cleveland 1966
Genus Monocercomonoides Travis 1932
Genus Polymastix Bütschli 1884 non Gruber 1884
Family Pyrsonymphidae Grassé 1892 [Pyrsonymphaceae; Di |
https://en.wikipedia.org/wiki/Divide%20and%20choose | Divide and choose (also Cut and choose or I cut, you choose) is a procedure for fair division of a continuous resource, such as a cake, between two parties. It involves a heterogeneous good or resource ("the cake") and two partners who have different preferences over parts of the cake. The protocol proceeds as follows: one person ("the cutter") cuts the cake into two pieces; the other person ("the chooser") selects one of the pieces; the cutter receives the remaining piece.
The procedure has been used since ancient times to divide land, cake and other resources between two parties. Currently, there is an entire field of research, called fair cake-cutting, devoted to various extensions and generalizations of cut-and-choose.
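The guarantee behind the protocol — a risk-averse cutter equalizes the two pieces by her own measure, after which each party values its piece at no less than half the cake by its own valuation — can be sketched for a cake modeled as discrete slices with hypothetical per-slice values (exact equality can fail for indivisible slices; the example below admits an exact equal cut):

```python
def divide_and_choose(cutter_vals, chooser_vals):
    """Cut-and-choose over a cake of discrete slices.

    The cutter cuts at the index that best equalizes the two pieces under
    her own valuation; the chooser takes the piece he values more; the
    cutter keeps the remainder.  Returns (cutter_piece, chooser_piece)
    as slice objects over the list of cake slices.
    """
    n = len(cutter_vals)
    total = sum(cutter_vals)
    # Cutter: pick the cut point minimizing her perceived imbalance.
    cut = min(range(n + 1),
              key=lambda i: abs(2 * sum(cutter_vals[:i]) - total))
    left, right = slice(0, cut), slice(cut, n)
    # Chooser: take whichever piece is worth more to him.
    if sum(chooser_vals[left]) >= sum(chooser_vals[right]):
        return right, left
    return left, right

cutter, chooser = [1, 3, 2, 2], [4, 1, 1, 2]   # hypothetical valuations
c_piece, ch_piece = divide_and_choose(cutter, chooser)
```

With cutter valuations [1, 3, 2, 2] the cut falls after the second slice (4 vs. 4 by her measure), so both parties end up with at least half the cake by their own valuations.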
History
Divide and choose is mentioned in the Bible, in the Book of Genesis (chapter 13). When Abraham and Lot come to the land of Canaan, Abraham suggests that they divide it among them. Then Abraham, coming from the south, divides the land to a "left" (northern) part and a "right" (southern) part, and lets Lot choose. Lot chooses the eastern part which contains Sodom and Gomorrah, and Abraham is left with the western part which contains Beer Sheva, Hebron, Bethel, and Shechem.
The United Nations Convention on the Law of the Sea applies a procedure similar to divide-and-choose for allocating areas in the ocean among countries. A developed state applying for a permit to mine minerals from the ocean must prepare two areas of approximately similar value, let the UN authority choose one of them for reservation to developing states, and get the other area for mining:"Each application... shall cover a total area... sufficiently large and of sufficient estimated commercial value to allow two mining operations... of equal estimated commercial value... Within 45 days of receiving such data, the Authority shall designate which part is to be reserved solely for the conduct of activities by the Authority through the Enterprise or in association with devel |
https://en.wikipedia.org/wiki/Proportional%20division | A proportional division is a kind of fair division in which a resource is divided among n partners with subjective valuations, giving each partner at least 1/n of the resource by his/her own subjective valuation.
Proportionality was the first fairness criterion studied in the literature; hence it is sometimes called "simple fair division". It was first conceived by Steinhaus.
Example
Consider a land asset that has to be divided among 3 heirs: Alice and Bob who think that it's worth 3 million dollars, and George who thinks that it's worth $4.5M. In a proportional division, Alice receives a land-plot that she believes to be worth at least $1M, Bob receives a land-plot that he believes to be worth at least $1M (even though Alice may think it is worth less), and George receives a land-plot that he believes to be worth at least $1.5M.
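The definition can be checked mechanically: an allocation is proportional when n times each agent's value for their own share is at least their value for the whole resource. A sketch with hypothetical per-plot values consistent with the example above (Alice and Bob value the estate at $3M, George at $4.5M):

```python
def is_proportional(valuations, allocation):
    """valuations[i][j] = agent i's value (in $M) for plot j;
    allocation[i] = indices of the plots given to agent i."""
    n = len(valuations)
    for agent, plots in enumerate(allocation):
        whole = sum(valuations[agent])                    # agent's value of the estate
        share = sum(valuations[agent][j] for j in plots)  # agent's value of own share
        if n * share < whole:                             # requires share >= whole / n
            return False
    return True

vals = [[1.0, 1.0, 1.0],   # Alice:  estate worth $3M in total
        [1.2, 0.8, 1.0],   # Bob:    estate worth $3M in total
        [1.5, 1.5, 1.5]]   # George: estate worth $4.5M in total

print(is_proportional(vals, [[0], [2], [1]]))  # each gets >= 1/3 by own measure
print(is_proportional(vals, [[0], [1], [2]]))  # Bob values plot 1 at only $0.8M
```

Note that proportionality is judged entirely by each agent's own valuation, so an allocation can be proportional even if another agent thinks someone's share is worth less than a third.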
Existence
A proportional division does not always exist. For example, if the resource contains several indivisible items and the number of people is larger than the number of items, then some people will get no item at all and their value will be zero. Nevertheless, such a division exists with high probability for indivisible items under certain assumptions on the valuations of the agents.
Moreover, a proportional division is guaranteed to exist if the following conditions hold:
The valuations of the players are non-atomic, i.e., there are no indivisible elements with positive value.
The valuations of the players are additive, i.e., when a piece is divided, the value of the piece is equal to the sum of its parts.
Hence, proportional division is usually studied in the context of fair cake-cutting. See proportional cake-cutting for detailed information about procedures for achieving a proportional division in the context of cake-cutting.
A more lenient fairness criterion is partial proportionality, in which each partner receives a certain fraction f(n) of the total value, where f(n) ≤ 1/n. Partially proportional divisions exist (un |
https://en.wikipedia.org/wiki/Bijective%20numeration | Bijective numeration is any numeral system in which every non-negative integer can be represented in exactly one way using a finite string of digits. The name refers to the bijection (i.e. one-to-one correspondence) that exists in this case between the set of non-negative integers and the set of finite strings using a finite set of symbols (the "digits").
Most ordinary numeral systems, such as the common decimal system, are not bijective because more than one string of digits can represent the same positive integer. In particular, adding leading zeroes does not change the value represented, so "1", "01" and "001" all represent the number one. Even though only the first is usual, the fact that the others are possible means that the decimal system is not bijective. However, the unary numeral system, with only one digit, is bijective.
A bijective base-k numeration is a bijective positional notation. It uses a string of digits from the set {1, 2, ..., k} (where k ≥ 1) to encode each positive integer; a digit's position in the string defines its value as a multiple of a power of k. Some authors call this notation k-adic, but it should not be confused with the p-adic numbers: bijective numerals are a system for representing ordinary integers by finite strings of nonzero digits, whereas the p-adic numbers are a system of mathematical values that contain the integers as a subset and may need infinite sequences of digits in any numerical representation.
Definition
The base-k bijective numeration system uses the digit-set {1, 2, ..., k} (k ≥ 1) to uniquely represent every non-negative integer, as follows:
The integer zero is represented by the empty string.
The integer represented by the nonempty digit-string
a_n a_{n−1} ... a_1 a_0
is
a_n k^n + a_{n−1} k^{n−1} + ... + a_1 k^1 + a_0 k^0.
The digit-string representing the integer m > 0 is the digit-string representing q, followed by the digit a_0, where
a_0 = m − kq and q = ⌈m/k⌉ − 1,
⌈x⌉ being the least integer not less than x (the ceiling function).
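A minimal sketch of this recursion (digit lists stand in for digit-strings, with the empty list representing zero; the function names are this sketch's own):

```python
def to_bijective(m, k):
    """Digits of m in bijective base k, most significant first; each digit is in 1..k."""
    digits = []
    while m > 0:
        q = -(-m // k) - 1        # ceil(m/k) - 1, using floor division
        digits.append(m - k * q)  # digit in {1, ..., k}; never zero
        m = q
    return digits[::-1]           # [] represents zero

def from_bijective(digits, k):
    """Value of a bijective base-k digit string."""
    m = 0
    for d in digits:
        m = m * k + d
    return m

print(to_bijective(10, 10))  # a single digit with value ten
```

In bijective base 10 the number ten is a single digit (often written "A") rather than the two-digit string "10", which is exactly what makes each representation unique.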
In contrast, standard positional notation can be defined with a similar recursive algorithm in which the quotient is q = ⌊m/k⌋, so that the digit m − kq ranges over {0, ..., k − 1} and may be zero.
Extension to integers
For base , the bije |
https://en.wikipedia.org/wiki/Sign%20of%20the%20horns | The sign of the horns is a hand gesture with a variety of meanings and uses in various cultures. It is formed by extending the index and little fingers while holding the middle and ring fingers down with the thumb.
Religious and superstitious meaning
In Hatha Yoga, a similar hand gesture – with the tips of the middle and ring fingers touching the thumb – is known as the Apāna Mudrā, a gesture believed to rejuvenate the body. In Indian classical dance forms, it symbolizes the lion. In Buddhism, the Karana Mudrā is seen as an apotropaic gesture to expel demons, remove negative energy, and ward off evil. It is commonly found on depictions of Gautama Buddha. It is also found on the Song dynasty statue of Laozi, the founder of Taoism, on Mount Qingyuan, China.
An apotropaic usage of the sign can be seen in Italy and in other Mediterranean cultures where, when confronted with unfortunate events, or simply when these events are mentioned, the sign of the horns may be given to ward off further bad luck. It is also used traditionally to counter or ward off the "evil eye" (malocchio). In Italy specifically, the gesture is known as the corna ('horns'). With fingers pointing down, it is a common Mediterranean apotropaic gesture, by which people seek protection in unlucky situations (a Mediterranean equivalent of knocking on wood). The President of the Italian Republic, Giovanni Leone, startled the media when, while in Naples during an outbreak of cholera, he shook the hands of patients with one hand while with the other behind his back he superstitiously made the corna, presumably to ward off the disease or in reaction to being confronted by such misfortune.
In Italy and other parts of the Mediterranean region, the gesture must usually be performed with the fingers tilting downward or in a leveled position not pointed at someone and without movement to signify the warding off of bad luck; in the same region and elsewhere, the gesture may take a different, offensive, and insulting meaning if it is performed with fingers upw |
https://en.wikipedia.org/wiki/Legacy%20port | In computing, a legacy port is a computer port or connector that is considered by some to be fully or partially superseded. The replacement ports usually provide most of the functionality of the legacy ports with higher speeds, more compact design, or plug and play and hot swap capabilities for greater ease of use. Modern PC motherboards use separate Super I/O controllers to provide legacy ports, since current chipsets do not offer direct support for them. A category of computers called legacy-free PCs omits these ports, typically retaining only USB for external expansion.
USB adapters are often used to provide legacy ports if they are required on systems not equipped with them.
Common legacy ports
See also
Legacy encoding
Legacy system |
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Physics | The Max Planck Institute for Physics (MPP) is a physics institute in Munich, Germany that specializes in high energy physics and astroparticle physics. It is part of the Max-Planck-Gesellschaft and is also known as the Werner Heisenberg Institute, after its first director in its current location.
The founding of the institute traces back to 1914, as an idea from Fritz Haber, Walther Nernst, Max Planck, Emil Warburg, and Heinrich Rubens. On October 1, 1917, the institute was officially founded in Berlin as the Kaiser-Wilhelm-Institut für Physik (KWIP, Kaiser Wilhelm Institute for Physics) with Albert Einstein as the first head director. In October 1922, Max von Laue succeeded Einstein as managing director. Einstein gave up his position as a director of the institute in April 1933. The institute took part in the German nuclear weapon project from 1939 to 1942.
In June 1942, Werner Heisenberg took over as managing director. A year after the end of fighting in Europe in World War II, the institute was moved to Göttingen and renamed the Max Planck Institute for Physics, with Heisenberg continuing as managing director. In 1946, Carl Friedrich von Weizsäcker and Karl Wirtz joined the faculty as the directors for theoretical and experimental physics, respectively.
In 1955 the institute made the decision to move to Munich, and soon after began construction of its current building, designed by Sep Ruf. The institute moved into its current location on September 1, 1958, and took on the new name the Max Planck Institute for Physics and Astrophysics, still with Heisenberg as the managing director. In 1991, the institute was split into the Max Planck Institute for Physics, the Max Planck Institute for Astrophysics and the Max Planck Institute for Extraterrestrial Physics.
Structure
There are three departments with multiple research groups:
Structure of matter
Innovative calculation methods in particle physics (Giulia Zanderighi)
Quantum field theory and scattering amplitudes (Joha |
https://en.wikipedia.org/wiki/WCWF | WCWF (channel 14) is a television station licensed to Suring, Wisconsin, United States, serving the Green Bay area as an affiliate of The CW. It is owned by Sinclair Broadcast Group alongside Fox affiliate WLUK-TV (channel 11). Both stations share studios on Lombardi Avenue (US 41) on the line between Green Bay and Ashwaubenon, while WCWF's transmitter is located on Scray Hill in Ledgeview.
History
The station launched on February 22, 1984, as religious independent station WSCO-TV, under the ownership of Northeastern Wisconsin Christian Television Incorporated. The station's former analog transmitter was located outside of the unincorporated Oconto County community of Krakow, north of Pulaski on WIS 32. Financial problems would force the station off the air by 1987; VCY America would purchase the station's license that year and return it to the air by 1993 as a sister station to Milwaukee's WVCY-TV with religious and home shopping programming. On April 30, 1997, Paxson Communications (now Ion Media Networks) purchased the station and converted it to a paid programming format under Paxson's inTV service. On August 31, 1998, WSCO became a charter owned-and-operated station of Pax TV (later i: Independent Television, now Ion Television) under the new call sign WPXG (for "Pax Green Bay").
On June 2, 1999, Paxson sold WPXG to ACME Communications; the station immediately became a primary WB affiliate and changed its call sign to WIWB, originally branded as "WB 14" and later "Wisconsin's WB" (The WPXG-TV callsign has been moved to a TV station in Manchester, New Hampshire). Before it joined the network, WB programming in Northeastern Wisconsin was previously seen either through cable providers that carried Chicago-based superstation WGN and/or Milwaukee's WVTV or during off hours on UPN affiliate WACY-TV (channel 32; Kids' WB programming aired as part of WACY's children's lineup). WIWB also continued to air Pax programming in the mornings, overnights and weekends for a |
https://en.wikipedia.org/wiki/Zinc%20acetate | Zinc acetate is a salt with the formula Zn(CH3CO2)2, which commonly occurs as the dihydrate Zn(CH3CO2)2·2H2O. Both the hydrate and the anhydrous forms are colorless solids that are used as dietary supplements. When used as a food additive, it has the E number E650.
Uses
Zinc acetate is a component of some medicines, e.g., lozenges for treating the common cold. Zinc acetate can also be used as a dietary supplement. As an oral daily supplement it is used to inhibit the body's absorption of copper as part of the treatment for Wilson's disease. Zinc acetate is also sold as an astringent in the form of an ointment, a topical lotion, or combined with an antibiotic such as erythromycin for the topical treatment of acne. It is commonly sold as a topical anti-itch ointment.
Zinc acetate is used as the catalyst for the industrial production of vinyl acetate from acetylene:
CH3CO2H + HC≡CH → CH3CO2CH=CH2
Approximately one third of the world's production uses this route which, because it is environmentally messy, is mainly practiced in countries with relaxed environmental regulations, such as China.
Preparation
Zinc acetates are prepared by the action of acetic acid on zinc carbonate or zinc metal. Treatment of zinc nitrate with acetic anhydride is an alternative route.
Structures
In anhydrous zinc acetate the zinc is coordinated to four oxygen atoms to give a tetrahedral environment, these tetrahedral polyhedra are then interconnected by acetate ligands to give a range of polymeric structures.
In the dihydrate, zinc is octahedral, wherein both acetate groups are bidentate.
Reactions
Heating Zn(CH3CO2)2 in a vacuum results in a loss of acetic anhydride, leaving a residue of "basic zinc acetate," with the formula Zn4O(CH3CO2)6. It can also be prepared by a reaction of glacial acetic acid with zinc oxide. The cluster compound has a tetrahedral structure with an oxide ligand at its center. Basic zinc acetate is a common precursor to metal-organic frameworks (MOFs).
See also
Basic beryllium acetate - isostru |
https://en.wikipedia.org/wiki/Doppler%20broadening | In atomic physics, Doppler broadening is broadening of spectral lines due to the Doppler effect caused by a distribution of velocities of atoms or molecules. Different velocities of the emitting (or absorbing) particles result in different Doppler shifts, the cumulative effect of which is the emission (absorption) line broadening.
This resulting line profile is known as a Doppler profile. A particular case is the thermal Doppler broadening due to the thermal motion of the particles. Then, the broadening depends only on the frequency of the spectral line, the mass of the emitting particles, and their temperature, and therefore can be used for inferring the temperature of an emitting (or absorbing) body being spectroscopically investigated.
Derivation
When a particle moves (e.g., due to the thermal motion) towards the observer, the emitted radiation is shifted to a higher frequency. Likewise, when the emitter moves away, the frequency is lowered. For non-relativistic thermal velocities, the Doppler shift in frequency is
$f = f_0\left(1 + \frac{v}{c}\right)$,
where $f$ is the observed frequency, $f_0$ is the rest frequency, $v$ is the velocity of the emitter towards the observer, and $c$ is the speed of light.
Since there is a distribution of speeds both toward and away from the observer in any volume element of the radiating body, the net effect will be to broaden the observed line. If $P_v(v)\,dv$ is the fraction of particles with velocity component $v$ to $v + dv$ along a line of sight, then the corresponding distribution of the frequencies is
$P_f(f)\,df = P_v(v_f)\,\frac{dv}{df}\,df$,
where $v_f = c\left(\frac{f}{f_0} - 1\right)$ is the velocity towards the observer corresponding to the shift of the rest frequency $f_0$ to $f$. Therefore,
$P_f(f)\,df = \frac{c}{f_0}\,P_v\!\left(c\left(\frac{f}{f_0} - 1\right)\right)df$.
We can also express the broadening in terms of the wavelength $\lambda$. Recalling that in the non-relativistic limit $\frac{\lambda - \lambda_0}{\lambda_0} \approx -\frac{v}{c}$, we obtain
$P_\lambda(\lambda)\,d\lambda = \frac{c}{\lambda_0}\,P_v\!\left(c\left(1 - \frac{\lambda}{\lambda_0}\right)\right)d\lambda$.
In the case of the thermal Doppler broadening, the velocity distribution is given by the Maxwell distribution
$P_v(v)\,dv = \sqrt{\frac{m}{2\pi kT}}\,\exp\!\left(-\frac{mv^2}{2kT}\right)dv$,
where $m$ is the mass of the emitting particle, $T$ is the temperature, and $k$ is the Boltzmann constant.
Then
$P_f(f)\,df = \frac{c}{f_0}\sqrt{\frac{m}{2\pi kT}}\,\exp\!\left(-\frac{mc^2(f - f_0)^2}{2kT f_0^2}\right)df$.
We can simplify this expression as the Gaussian profile
$P_f(f)\,df = \frac{1}{\sigma_f\sqrt{2\pi}}\,\exp\!\left(-\frac{(f - f_0)^2}{2\sigma_f^2}\right)df$, with standard deviation $\sigma_f = f_0\sqrt{\frac{kT}{mc^2}}$.
w |
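As a numerical sketch of this result, the width of the thermal Doppler profile can be evaluated for a concrete line. This assumes the standard deviation sigma_f = f0 * sqrt(kT / (m c^2)), so FWHM = 2 * sqrt(2 ln 2) * sigma_f; the choice of the hydrogen H-alpha line and a temperature of 6000 K is purely illustrative.

```python
import math

def doppler_sigma(f0_hz, temperature_k, mass_kg):
    """Thermal Doppler width (standard deviation) of a spectral line:
    sigma_f = f0 * sqrt(kT / (m c^2))."""
    k_B = 1.380649e-23   # Boltzmann constant, J/K
    c = 2.99792458e8     # speed of light, m/s
    return f0_hz * math.sqrt(k_B * temperature_k / (mass_kg * c ** 2))

c = 2.99792458e8
f0 = c / 656.28e-9        # H-alpha rest frequency, Hz (656.28 nm line)
m_H = 1.6735575e-27       # mass of a hydrogen atom, kg
sigma = doppler_sigma(f0, 6000.0, m_H)
fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma
print(f"sigma = {sigma:.3e} Hz, FWHM = {fwhm:.3e} Hz")
```

For hydrogen at 6000 K this gives a width on the order of 10 GHz, tiny compared with the optical frequency itself (about 4.6e14 Hz), which is why the non-relativistic approximation is adequate.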
https://en.wikipedia.org/wiki/Nanaerobe | Nanaerobes are organisms that cannot grow in the presence of micromolar concentrations of oxygen, but can grow with and benefit from the presence of nanomolar concentrations of oxygen (e.g. Bacteroides fragilis). Like other anaerobes, these organisms do not require oxygen for growth. This growth benefit requires the expression of an oxygen respiratory chain that is typically associated with microaerophilic respiration. Recent studies suggest that respiration in low concentrations of oxygen is an ancient process which predates the emergence of oxygenic photosynthesis. |
https://en.wikipedia.org/wiki/Software%20as%20a%20service | Software as a service (SaaS ) is a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted. SaaS is also known as on-demand software, web-based software, or web-hosted software.
SaaS is considered to be part of cloud computing, along with several other as a service business models.
SaaS apps are typically accessed by users of a web browser (a thin client). SaaS became a common delivery model for many business applications, including office software, messaging software, payroll processing software, DBMS software, management software, CAD software, development software, gamification, virtualization, accounting, collaboration, customer relationship management (CRM), management information systems (MIS), enterprise resource planning (ERP), invoicing, field service management, human resource management (HRM), talent acquisition, learning management systems, content management (CM), geographic information systems (GIS), and service desk management.
SaaS has been incorporated into the strategies of nearly all enterprise software companies.
History
Centralized hosting of business applications dates back to the 1960s. Starting in that decade, IBM and other mainframe computer providers conducted a service bureau business, often referred to as time-sharing or utility computing. Such services included offering computing power and database storage to banks and other large organizations from their worldwide data centers.
The expansion of the Internet during the 1990s brought about a new class of centralized computing, called application service providers (ASP). ASPs provided businesses with the service of hosting and managing specialized business applications to reduce costs through central administration and the provider's specialization in a particular business application. Two of the largest ASPs were USI, which was headquartered in the Washington, D.C., area, and Futurelink Corporation, headquartered in Irvine |
https://en.wikipedia.org/wiki/Bacillus%20safensis | Bacillus safensis is a Gram-positive, spore-forming, rod-shaped bacterium, originally isolated from spacecraft and assembly-facility surfaces in Florida and California. B. safensis could possibly have been transported to the planet Mars on the spacecraft Opportunity and Spirit in 2004. There are several known strains of this bacterium, all of which belong to the Bacillota phylum of Bacteria. This bacterium also belongs to the large, pervasive genus Bacillus. B. safensis is an aerobic chemoheterotroph and is highly resistant to salt and UV radiation. B. safensis affects plant growth, since it is a powerful plant hormone producer, and it also acts as a plant growth-promoting rhizobacterium, enhancing plant growth after root colonization. Strain B. safensis JPL-MERTA-8-2 is (so far) the only bacterial strain shown to grow noticeably faster in micro-gravity environments than on Earth's surface.
Discovery and importance
Thirteen strains of the novel bacterium Bacillus safensis were first isolated from spacecraft surfaces and assembly-facility surfaces at the Kennedy Space Center in Florida as well as the Jet Propulsion Laboratory in California. The bacterium gets its name from the JPL Spacecraft Assembly Facility (SAF). Researchers used customary swabbing techniques to detect and collect the bacteria from the cleanrooms where the spacecraft were assembled at the Jet Propulsion Laboratory. The bacterium may have been accidentally carried to Mars during space missions as a result of cleanroom contamination. Such contamination is an area of concern for planetary protection, as it can compromise microbial experimentation and produce false positives for microbial life on other planets.
V.V. Kothari and his colleagues from Saurashtra University in Gujarat, India, first isolated another strain, B. safensis VK. Strain VK was collected from Cuminum cyminum, a cumin plant in the desert area of Gujarat, India. Specifically, the bacteria were collected from the rhizosphere of the cumin plant.
|
https://en.wikipedia.org/wiki/Thermodynamic%20limit | In statistical mechanics, the thermodynamic limit or macroscopic limit, of a system is the limit for a large number of particles (e.g., atoms or molecules) where the volume is taken to grow in proportion with the number of particles.
The thermodynamic limit is defined as the limit of a system with a large volume, with the particle density held fixed.
In this limit, macroscopic thermodynamics is valid. There, thermal fluctuations in global quantities are negligible, and all thermodynamic quantities, such as pressure and energy, are simply functions of the thermodynamic variables, such as temperature and density. For example, for a large volume of gas, the fluctuations of the total internal energy are negligible and can be ignored, and the average internal energy can be predicted from knowledge of the pressure and temperature of the gas.
Note that not all types of thermal fluctuations disappear in the thermodynamic limit—only the fluctuations in system variables cease to be important.
There will still be detectable fluctuations (typically at microscopic scales) in some physically observable quantities, such as
microscopic spatial density fluctuations in a gas scatter light (Rayleigh scattering)
motion of visible particles (Brownian motion)
electromagnetic field fluctuations (blackbody radiation in free space, Johnson–Nyquist noise in wires)
Mathematically an asymptotic analysis is performed when considering the thermodynamic limit.
Origin
The thermodynamic limit is essentially a consequence of the central limit theorem of probability theory. The internal energy of a gas of N molecules is the sum of order N contributions, each of which is approximately independent, and so the central limit theorem predicts that the ratio of the size of the fluctuations to the mean is of order 1/√N. Thus for a macroscopic volume with perhaps the Avogadro number of molecules, fluctuations are negligible, and so thermodynamics works. In general, almost all macroscopic volu |
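The 1/√N scaling of relative fluctuations can be checked with a short simulation. This is an illustrative sketch, not from the article: the exponential distribution of per-particle "energies" is an arbitrary choice, and any distribution with finite variance would show the same scaling.

```python
import random
import statistics

def relative_fluctuation(n_particles, n_samples=2000, seed=0):
    """Relative fluctuation (std/mean) of a total 'energy' that is the sum
    of n_particles independent exponential contributions (mean 1).
    The central limit theorem predicts this ratio scales as 1/sqrt(N)."""
    rng = random.Random(seed)
    totals = [sum(rng.expovariate(1.0) for _ in range(n_particles))
              for _ in range(n_samples)]
    return statistics.stdev(totals) / statistics.fmean(totals)

for n in (10, 100, 1000):
    print(n, round(relative_fluctuation(n), 4))  # roughly 0.32, 0.10, 0.03
```

Each tenfold increase in particle number shrinks the relative fluctuation by about √10, which is why global quantities become sharp in the macroscopic limit.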
https://en.wikipedia.org/wiki/Software%20package%20metrics | Various software package metrics are used in modular programming. They have been mentioned by Robert Cecil Martin in his 2002 book Agile software development: principles, patterns, and practices.
The term software package here refers to a group of related classes in object-oriented programming.
Number of classes and interfaces: The number of concrete and abstract classes (and interfaces) in the package is an indicator of the extensibility of the package.
Afferent couplings (Ca): The number of classes in other packages that depend upon classes within the package is an indicator of the package's responsibility. Afferent couplings signal inward.
Efferent couplings (Ce): The number of classes in other packages that the classes in a package depend upon is an indicator of the package's dependence on externalities. Efferent couplings signal outward.
Abstractness (A): The ratio of the number of abstract classes (and interfaces) in the analyzed package to the total number of classes in the analyzed package. The range for this metric is 0 to 1, with A=0 indicating a completely concrete package and A=1 indicating a completely abstract package.
Instability (I): The ratio of efferent coupling (Ce) to total coupling (Ce + Ca) such that I = Ce / (Ce + Ca). This metric is an indicator of the package's resilience to change. The range for this metric is 0 to 1, with I=0 indicating a completely stable package and I=1 indicating a completely unstable package.
Distance from the main sequence (D): The perpendicular distance of a package from the idealized line A + I = 1. D is calculated as D = | A + I - 1 |. This metric is an indicator of the package's balance between abstractness and stability. A package squarely on the main sequence is optimally balanced with respect to its abstractness and stability. Ideal packages are either completely abstract and stable (I=0, A=1) or completely concrete and unstable (I=1, A=0). The range for this metric is 0 to 1, with D=0 indicating a |
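The metric definitions above can be collected into a small helper; this is a sketch, and the function and variable names are ours rather than from any particular analysis tool.

```python
def package_metrics(abstract_classes, concrete_classes, ca, ce):
    """Compute Martin's package metrics.

    ca: afferent couplings (classes outside that depend on this package)
    ce: efferent couplings (outside classes this package depends on)
    """
    total = abstract_classes + concrete_classes
    abstractness = abstract_classes / total if total else 0.0
    instability = ce / (ce + ca) if (ce + ca) else 0.0
    # Distance from the main sequence A + I = 1
    distance = abs(abstractness + instability - 1.0)
    return abstractness, instability, distance

# A mostly concrete package that many other packages depend on:
A, I, D = package_metrics(abstract_classes=1, concrete_classes=9, ca=20, ce=5)
print(f"A={A:.2f} I={I:.2f} D={D:.2f}")  # A=0.10 I=0.20 D=0.70
```

The example package is stable (low I) but concrete (low A), placing it far from the main sequence: by these metrics it should either become more abstract or shed incoming dependencies.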
https://en.wikipedia.org/wiki/Distinctive%20unit%20insignia | A distinctive unit insignia (DUI) is a metallic heraldic badge or device worn by soldiers in the United States Army. The DUI design is derived from the coat of arms authorized for a unit. DUIs may also be called "distinctive insignia" (DI) or, imprecisely, a "crest" or a "unit crest" by soldiers or collectors. The U.S. Army Institute of Heraldry is responsible for the design, development and authorization of all DUIs.
History
Pre-World War I Insignia
Distinctive ornamentation of a design desired by the organization was authorized for wear on the Mess Jacket uniform by designated organizations (staff corps, departments, corps of artillery, and infantry and cavalry regiments) per War Department General Order 132 dated December 31, 1902. The distinctive ornamentation was described later as coats of arms, pins and devices. The authority continued until omitted in the Army uniform regulation dated December 26, 1911.
Distinctive unit insignia
War Department Circular 161 dated 29 April 1920 authorized the use of a regimental coat of arms or badge as approved by the War Department for wear on the collar of the white uniform and the lapels of the mess jacket. War Department Circular 244, 1921 states: "It has been approved, in principle, that regiments of the Regular Army and National Guard may wear distinctive badges or trimmings on their uniforms as a means of promoting esprit de corps and keeping alive historical traditions. Various organizations which carry colors or standards have generally submitted coats of arms having certain historical significance. As fast as they are approved, these coats of arms will form the basis for regimental colors or standards which will eventually replace the present regimental colors or standards when these wear out. The use of these coats of arms as collar ornaments in lieu of the insignia of corps, departments, or arms of service would be an example of distinctive badge to be worn by the regiment." The first unit to wear this insig |
https://en.wikipedia.org/wiki/Electron%20configurations%20of%20the%20elements%20%28data%20page%29 | This page shows the electron configurations of the neutral gaseous atoms in their ground states. For each atom the subshells are given first in concise form, then with all subshells written out, followed by the number of electrons per shell. Electron configurations of elements beyond hassium (element 108) have never been measured; predictions are used below.
As an approximate rule, electron configurations are given by the Aufbau principle and the Madelung rule. However, there are numerous exceptions; for example, the lightest exception is chromium, which would be predicted to have the configuration [Ar] 3d4 4s2, written as 1s2 2s2 2p6 3s2 3p6 3d4 4s2, but whose actual configuration given in the table below is [Ar] 3d5 4s1.
Note that these electron configurations are given for neutral atoms in the gas phase, which are not the same as the electron configurations for the same atoms in chemical environments. In many cases, multiple configurations are within a small range of energies and the irregularities shown below do not necessarily have a clear relation to chemical behaviour. For the undiscovered eighth-row elements, mixing of configurations is expected to be very important, and sometimes the result can no longer be well-described by a single configuration.
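The Madelung-rule prediction can be sketched in a few lines: order subshells by n + l, break ties by n, and fill greedily. This is a minimal illustration (function names are ours), and, as noted above, real atoms such as chromium deviate from its output.

```python
def madelung_order(max_n=7):
    """Subshells sorted by the Madelung rule: increasing n + l, then n."""
    labels = "spdfg"
    shells = [(n, l) for n in range(1, max_n + 1)
              for l in range(min(n, len(labels)))]
    shells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{labels[l]}" for n, l in shells]

def aufbau_configuration(electrons):
    """Fill subshells in Madelung order; a subshell with quantum number l
    holds 2(2l + 1) electrons. Predicts e.g. ...4s2 3d4 for chromium,
    whereas the measured configuration is ...4s1 3d5."""
    labels = "spdfg"
    config = []
    for sub in madelung_order():
        cap = 2 * (2 * labels.index(sub[-1]) + 1)
        take = min(cap, electrons)
        if take == 0:
            break
        config.append(f"{sub}{take}")
        electrons -= take
    return " ".join(config)

print(aufbau_configuration(24))  # prints "1s2 2s2 2p6 3s2 3p6 4s2 3d4"
```

Comparing this prediction against a table of measured configurations makes the exceptional elements easy to spot.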
See also
Extended periodic table#Electron configurations – Predictions for undiscovered elements 119–173 and 184 |
https://en.wikipedia.org/wiki/Hook%20effect | The hook effect refers to the prozone phenomenon (also known as antibody excess) or the postzone phenomenon (also known as antigen excess). It is an immunologic phenomenon whereby the effectiveness of antibodies to form immune complexes can be impaired when concentrations of an antibody or an antigen are very high. The formation of immune complexes stops increasing with greater concentrations and then decreases at extremely high concentrations, producing a hook shape on a graph of measurements. An important practical relevance of the phenomenon is as a type of interference that plagues certain immunoassays and nephelometric assays, resulting in false negatives or inaccurately low results. Other common forms of interference include antibody interference, cross-reactivity and signal interference. The phenomenon is caused by very high concentrations of a particular analyte or antibody and is most prevalent in one-step (sandwich) immunoassays.
Mechanism and in vitro importance
Prozone - excess antibodies
In an agglutination test, a person's serum (which contains antibodies) is added to a test tube, which contains a particular antigen. If the antibodies interact with the antigen to form immune complexes, called agglutination, then the test is interpreted as positive. However, if too many antibodies are present that can bind to the antigen, then the antigenic sites are coated by antibodies, and few or no antibodies directed toward the pathogen are able to bind more than one antigenic particle. Since the antibodies do not bridge between antigens, no agglutination occurs. Because no agglutination occurs, the test is interpreted as negative. In this case, the result is a false negative. The range of relatively high antibody concentrations within which no reaction occurs is called the prozone.
Postzone - excess antigens
The effect can also occur because of antigen excess, when both the capture and detection antibodies become saturated by the high analyte concentration. In |
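The rise-then-fall "hook" shape can be illustrated with a deliberately simplified model. Everything here is an illustrative assumption, not a real assay model: captured antigen saturates the capture antibody (first factor), while excess free antigen competes for the detection antibody (second factor).

```python
def sandwich_signal(antigen, capture=1.0, detect=1.0, k=0.1):
    """Toy one-step sandwich-assay signal exhibiting the hook effect.

    All parameters (capture capacity, detection antibody amount, and the
    half-saturation constant k) are arbitrary illustrative values.
    """
    captured = capture * antigen / (k + antigen)      # saturating capture
    detected_fraction = detect / (detect + antigen)   # antigen competition
    return captured * detected_fraction

low, mid, high = (sandwich_signal(a) for a in (0.01, 1.0, 100.0))
print(low, mid, high)  # rises, peaks, then falls at antigen excess
```

Plotting this function against antigen concentration produces the characteristic hook: the measured signal at very high analyte concentration can match the signal at a much lower concentration, which is the source of falsely low results.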
https://en.wikipedia.org/wiki/Hippology | Hippology (from Greek: ἵππος, hippos, "horse"; and λόγος, logos, "study") is the study of the horse - a domesticated, one-toed, hoofed mammal belonging to the taxonomic family Equidae.
Today, hippology is the title of an equine veterinary and management knowledge contest that is used in 4-H, Future Farmers of America (FFA), and many horse breed contests. Hippology consists of four phases: horse judging, written examination and slide identification, ID stations, and team problem-solving. Many youths across the United States and in other countries compete in hippology annually, showing their knowledge of all things "horse".
Items covered in the contest may cover any equine subject, including reproduction, training, parasites, dressage, history and origins, anatomy and physiology, driving and harnessing, horse industry, horse management, breeds, genetics, western games, colors, famous horses in history, parts of the saddle, types of bits, gaits, competitions, poisonous plants, and nutrition.
Judging
The judging phase generally includes judging both a halter class and an "under saddle" class (such as western pleasure, hunter under saddle, etc.). The classes involve four horses and contestants are given a judging card to place the horses. Unlike the horse judging competitions, hippology competitors are not expected to give reasons, but only place the classes.
Written examination and slide identification
The written examination is a multiple-choice, 50-question test. The written examination can cover any of the topics and any of the information from the designated sources. The slide identification is composed of 25 slides.
ID stations
The ID station phase includes 10 stations, each with 10 pictures or objects to be identified along with a list of multiple-choice answers. Each station has a theme (anatomy, poisonous plants, tack, etc.). A time limit exists allotting only 2 minutes per station.
Team problem solving
The team problem solving phase requires a team, wi |
https://en.wikipedia.org/wiki/Ion%20wind | Ion wind, ionic wind, corona wind or electric wind is the airflow induced by electrostatic forces linked to corona discharge arising at the tips of some sharp conductors (such as points or blades) subjected to high voltage relative to ground. Ion wind is an electrohydrodynamic phenomenon. Ion wind generators can also be considered electrohydrodynamic thrusters.
The term "ionic wind" is considered a misnomer, arising from the misconception that only positive and negative ions are involved in the phenomenon. A 2018 study found that electrons play a larger role than the negative ions during the negative voltage period. As a result, the term "electric wind" has been suggested as more accurate terminology.
This phenomenon is now used in an MIT ionic wind plane, the first solid state plane, developed in 2018.
History
In 1750, B. Wilson demonstrated the recoil force associated with corona discharge; a precursor to the ion thruster was the corona-discharge pinwheel. The corona discharge from the freely rotating pinwheel arm, with its ends bent to sharp points, gives the air a space charge which repels the points, because the polarity is the same for the points and the air.
Francis Hauksbee, curator of instruments for the Royal Society of London, made the earliest report of electric wind in 1709. Myron Robinson completed an extensive bibliography and literature review during the 1950s resurgence of interest in the phenomena.
In 2018, researchers from South Korea and Slovenia used Schlieren photography to experimentally determine that electrons, in addition to ions, play an important role in generating ionic wind. The study was the first to provide direct evidence that the electrohydrodynamic force responsible for the ionic wind is caused by a charged-particle drag that occurs as the electrons and ions push the neutral particles away.
In 2018, a team of MIT researchers built and successfully flew the first-ever prototype plane propelled by ionic wind, MIT EAD Airframe |
https://en.wikipedia.org/wiki/Caret%20notation | Caret notation is a notation for control characters in ASCII. The notation assigns ^A to control-code 1 (0x01), sequentially through the alphabet to ^Z, assigned to control-code 26 (0x1A). For the control-codes outside of the range 1–26, the notation extends to the adjacent, non-alphabetic ASCII characters.
Often a control character can be typed on a keyboard by holding down the Ctrl key and typing the character shown after the caret. The notation is often used to describe keyboard shortcuts even though the control character is not actually used (as in "type ^X to cut the text").
The meaning or interpretation of, or response to the individual control-codes is not prescribed by the caret notation.
Description
The notation consists of a caret (^) followed by a single character (usually a capital letter). The character has the ASCII code equal to the control code with the bit representing 0x40 reversed. As a useful mnemonic, this has the effect of rendering the control codes 1 through 26 as ^A through ^Z. Seven ASCII control characters map outside the upper-case alphabet: 0 (NUL) is ^@, 27 (ESC) is ^[, 28 is ^\, 29 is ^], 30 is ^^, 31 is ^_, and 127 (DEL) is ^?.
Examples are "^M^J" for the Windows CR, LF newline pair, and describing the ANSI escape sequence to clear the screen as "^[[2J".
Only the use of characters in the range of 63–95 ("?" and "@" through "_") is specifically allowed in the notation, but use of lower-case alphabetic characters entered at the keyboard is nearly always allowed; they are treated as equivalent to upper-case letters. When converting to a control character, except for "?", masking with 0x1F will produce the same result and also turn lower-case into the same control character as upper-case.
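The bit-flip rule can be sketched in code; this is a minimal illustration, and `caret_to_control` is our own name for it.

```python
def caret_to_control(ch):
    """Convert the character following '^' in caret notation to its
    control code by flipping the 0x40 bit.

    '@'..'_' (64-95) map onto control codes 0-31, and '?' (63) maps
    onto DEL (127). Lower-case letters are first folded to upper case,
    which for letters is equivalent to masking with 0x1F.
    """
    code = ord(ch)
    if not 63 <= code <= 95:    # outside '?' and '@'..'_'
        code = ord(ch.upper())  # accept 'a'..'z' as 'A'..'Z'
    return code ^ 0x40

print(caret_to_control("A"))  # 1   (^A, SOH)
print(caret_to_control("["))  # 27  (^[, ESC)
print(caret_to_control("?"))  # 127 (^?, DEL)
```

The inverse direction is the same XOR, which is why the caret convention is so cheap to implement in terminal software.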
There is no corresponding version of the caret notation for control-codes with more than 7 bits such as the C1 control characters from 128–159 (0x80–0x9F). Some programs that produce caret notation show these as backslash and octal ("\200" through "\237"). Also see the bar notation used by Acorn Computers, below.
History
Th |
https://en.wikipedia.org/wiki/Comparison%20of%20project%20management%20software | The following is a comparison of project management software.
General information
Features
Monetary features
See also
Kanban (development)
Project management software
Project planning
Comparison of scrum software
Comparison of development estimation software
Comparison of source-code-hosting facilities
Comparison of CRM systems
Notes |
https://en.wikipedia.org/wiki/Syneresis%20%28chemistry%29 | Syneresis (also spelled 'synæresis' or 'synaeresis'), in chemistry, is the extraction or expulsion of a liquid from a gel, such as when serum drains from a contracting clot of blood. Another example of syneresis is the collection of whey on the surface of yogurt. Syneresis can also be observed when the amount of diluent in a swollen polymer exceeds the solubility limit as the temperature changes. A household example of this is the counterintuitive expulsion of water from dry gelatin when the temperature increases. Syneresis has also been proposed as the mechanism of formation for the amorphous silica composing the frustule of diatoms.
Examples
In the processing of dairy milk, for example during cheese making, syneresis is the formation of the curd due to the sudden removal of the hydrophilic macropeptides, which causes an imbalance in intermolecular forces. Bonds between hydrophobic sites start to develop and are enforced by calcium bonds, which form as the water molecules in the micelles start to leave the structure. This process is usually referred to as the phase of coagulation and syneresis. The splitting of the bond between residues 105 and 106 in the κ-casein molecule is often called the primary phase of the rennet action, while the phase of coagulation and syneresis is referred to as the secondary phase.
In cooking, syneresis is the sudden release of moisture contained within protein molecules, usually caused by excessive heat, which over-hardens the protective shell. Moisture inside expands upon heating. The hard protein shell pops, expelling the moisture.
This process is responsible for transforming juicy rare steak into dry steak when cooked thoroughly. It creates weeping in scrambled eggs, with dry protein curd swimming in the released moisture. It also causes emulsified sauces, such as hollandaise, to "break" ("split"). Additionally, it creates unsightly moisture pockets within baked custard dishes, such as flan or crème brûlée.
In dentistry, syneres |
https://en.wikipedia.org/wiki/List%20of%20tautonyms | The following is a list of tautonyms: zoological names of species consisting of two identical words (the generic name and the specific name have the same spelling). Such names are allowed in zoology, but not in botany, where the two parts of the name of a species must differ (though differences as small as one letter are permitted, as in cumin, Cuminum cyminum).
Mammals
Alces alces (Linnaeus, 1758) — Eurasian elk, moose
Axis axis (Erxleben, 1777) — chital, axis deer
Bison bison (Linnaeus, 1758) — American bison, buffalo
Capreolus capreolus (Linnaeus, 1758) — European roe deer, roe deer
Caracal caracal (Schreber, 1776) — caracal
Chinchilla chinchilla (Lichtenstein, 1829) — short-tailed chinchilla
Chiropotes chiropotes (Humboldt, 1811) — red-backed bearded saki
Cricetus cricetus (Linnaeus, 1758) — common hamster, European hamster
Crocuta crocuta (Erxleben, 1777) — spotted hyena
Dama dama (Linnaeus, 1758) — European fallow deer
Feroculus feroculus (Kelaart, 1850) — Kelaart's long-clawed shrew
Gazella gazella (Pallas, 1766) — mountain gazelle
Genetta genetta (Linnaeus, 1758) — common genet
Gerbillus gerbillus (Olivier, 1801) — lesser Egyptian gerbil
Giraffa giraffa (von Schreber, 1784) — southern giraffe
Glis glis (Linnaeus, 1766) — European edible dormouse, European fat dormouse
Gorilla gorilla (Savage, 1847) — western gorilla
Gulo gulo (Linnaeus, 1758) — wolverine
Hoolock hoolock (Harlan, 1834) — western hoolock gibbon
Hyaena hyaena (Linnaeus, 1758) — striped hyena
Indri indri (Gmelin, 1788) — indri
Jaculus jaculus (Linnaeus, 1758) — lesser Egyptian jerboa
Lagurus lagurus (Pallas, 1773) — steppe vole, steppe lemming
Lemmus lemmus (Linnaeus, 1758) — Norway lemming
Lutra lutra (Linnaeus, 1758) — European otter
Lynx lynx (Linnaeus, 1758) — Eurasian lynx
Macrophyllum macrophyllum (Schinz, 1821) — long-legged bat
Marmota marmota (Linnaeus, 1758) — Alpine marmot
Martes martes (Linnaeus, 1758) — European pine marten, pine marten
Meles meles (Linnaeus, 1758) — European badg |
https://en.wikipedia.org/wiki/Pratique | Pratique is the license given to a ship to enter a port, that indicates to local authorities (on assurance from the captain) that it is free from contagious disease. The clearance granted is commonly referred to as free pratique. A ship can signal a request for pratique by flying a solid yellow square-shaped flag. This yellow flag is the Q flag in the set of international maritime signal flags.
In the event that free pratique is not granted, a vessel will be held in quarantine, according to the customs and health regulations prevailing at the port of entry, typically until a customs or biosecurity officer makes a satisfactory inspection.
Since flying the Q flag involves a request for boarding by Port State Control, it has also become an invitation to Customs to inspect a vessel for dutiable goods or contraband, as in the Rich Harvest case, where a yacht carrying a large quantity of alcohol flew the Q flag in order to seek exemption from having to pay duty during a temporary visit to port. The same vessel was also flying the Q flag when she was boarded in Cape Verde and found to be carrying more than one ton of cocaine. However, although the captain had thereby invited the authorities to make an inspection (being, according to his claim, ignorant of the fact that the boat was carrying contraband), he and the crew were nevertheless arrested for trafficking.
A question over who granted pratique arose with the Ruby Princess COVID-19 incident.
See also
Quarantine |
https://en.wikipedia.org/wiki/Spite%20%28game%20theory%29 | In fair division problems, spite is a phenomenon that occurs when a player's value of an allocation decreases when one or more other players' valuation increases. Thus, other things being equal, a player exhibiting spite will prefer an allocation in which other players receive less rather than more (if more of the good is desirable).
In this language, spite is difficult to analyze because one has to assess two sets of preferences. For example, in the divide and choose method, a spiteful player would have to make a trade-off between depriving his opponent of cake, and getting more himself.
Within the field of sociobiology, spite is used to describe those social behaviors that have a negative impact on both the actor and recipient(s). Spite can be favored by kin selection when: (a) it leads to an indirect benefit to some third party that is sufficiently related to the actor (Wilsonian spite); or (b) when it is directed primarily at negatively related individuals (Hamiltonian spite). Negative relatedness occurs when two individuals are less related than average.
In game theory
The iterated prisoner's dilemma provides an example where players may "punish" each other for failing to cooperate in previous rounds, even if doing so would cause negative consequences for both players. For example, the simple "tit for tat" strategy has been shown to be effective in round-robin tournaments of iterated prisoner's dilemma.
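This dynamic can be simulated in a few lines using the standard prisoner's dilemma payoffs T=5, R=3, P=1, S=0; the function names are ours, and the round count is arbitrary.

```python
def play(strategy_a, strategy_b, rounds=20):
    """Iterated prisoner's dilemma with payoffs T=5, R=3, P=1, S=0."""
    payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each sees the opponent's history
        move_b = strategy_b(history_a)
        pa, pb = payoff[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

print(play(tit_for_tat, tit_for_tat))    # (60, 60): mutual cooperation
print(play(tit_for_tat, always_defect))  # (19, 24): defection is punished
```

Tit for tat "spites" a defector by accepting the mutual-defection payoff every subsequent round, which caps how much an exploiter can gain against it.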
In industrial relations
There is always difficulty in fairly dividing the proceeds of a business between the business owners and the employees.
When a trade union decides to call a strike, both employer and the union members lose money (and may damage the national economy). The unionists hope that the employer will give in to their demands before such losses have destroyed the business.
In the reverse direction, an employer may terminate the employment of certain productive workers who are agitating for higher wages or organising a trade union. Losing productiv |
https://en.wikipedia.org/wiki/De%20Arte%20Combinatoria | The Dissertatio de arte combinatoria ("Dissertation on the Art of Combinations" or "On the Combinatorial Art") is an early work by Gottfried Leibniz published in 1666 in Leipzig. It is an extended version of his first doctoral dissertation, written before the author had seriously undertaken the study of mathematics. The booklet was reissued without Leibniz' consent in 1690, which prompted him to publish a brief explanatory notice in the Acta Eruditorum. During the following years he repeatedly expressed regrets about its being circulated as he considered it immature. Nevertheless it was a very original work and it provided the author the first glimpse of fame among the scholars of his time.
Summary
The main idea behind the text is that of an alphabet of human thought, which is attributed to Descartes. All concepts are nothing but combinations of a relatively small number of simple concepts, just as words are combinations of letters. All truths may be expressed as appropriate combinations of concepts, which can in turn be decomposed into simple ideas, rendering the analysis much easier. Therefore, this alphabet would provide a logic of invention, opposed to that of demonstration which was known so far. Since all sentences are composed of a subject and a predicate, one might
Find all the predicates which are appropriate to a given subject, or
Find all the subjects which are convenient to a given predicate.
For this, Leibniz drew inspiration from the Ars Magna of Ramon Llull, although he criticized Llull for the arbitrariness of his indexing of categories.
Leibniz discusses in this work some combinatorial concepts. He had read Clavius' comments to Sacrobosco's De sphaera mundi, and some other contemporary works. He introduced the term variationes ordinis for the permutations, combinationes for the combinations of two elements, con3nationes (shorthand for conternationes) for those of three elements, etc. His general term for combinations was complexions. He fo |
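Leibniz's variationes ordinis, combinationes, and con3nationes correspond to modern permutations and binomial coefficients. A quick illustrative check, using an arbitrary four-element set:

```python
from itertools import combinations, permutations
from math import comb

elements = ["a", "b", "c", "d"]

# variationes ordinis: orderings (permutations) of all the elements
assert len(list(permutations(elements))) == 24  # 4! = 24

# combinationes: unordered selections of two elements
assert len(list(combinations(elements, 2))) == comb(4, 2)  # C(4, 2) = 6

# con3nationes (conternationes): unordered selections of three elements
assert len(list(combinations(elements, 3))) == comb(4, 3)  # C(4, 3) = 4
```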
https://en.wikipedia.org/wiki/Arabic%20chat%20alphabet | The Arabic chat alphabet, Arabizi, or Arabeezi refers to the romanized alphabets for informal Arabic dialects in which Arabic script is transcribed or encoded into a combination of Latin script and Arabic numerals. These informal chat alphabets were originally used primarily by youth in the Arab world in very informal settings—especially for communicating over the Internet or for sending messages via cellular phones—though use is not necessarily restricted by age anymore and these chat alphabets have been used in other media such as advertising.
These chat alphabets differ from more formal and academic Arabic transliteration systems, in that they use numerals and multigraphs instead of diacritics for letters such as qāf () or ḍād () that do not exist in the basic Latin script (ASCII), and in that what is being transcribed is an informal dialect and not Standard Arabic. These Arabic chat alphabets also differ from each other, as each is influenced by the particular phonology of the Arabic dialect being transcribed and the orthography of the dominant European language in the area—typically the language of the former colonists, and typically either French or English.
Because of their widespread use, including in public advertisements by large multinational companies, large players in the online industry like Google and Microsoft have introduced tools that convert text written in Arabish to Arabic (Google Translate and Microsoft Translator). Add-ons for Mozilla Firefox and Chrome also exist (Panlatin and ARABEASY Keyboard). The Arabic chat alphabet is never used in formal settings and is rarely, if ever, used for long communications.
History
During the last decades of the 20th century, Western text-based communication technologies, such as mobile phone text messaging, the World Wide Web, email, bulletin board systems, IRC, and instant messaging became increasingly prevalent in the Arab world. Most of these technologies originally permitted the use of the Latin scri |
https://en.wikipedia.org/wiki/Virtual%20globe | A virtual globe is a three-dimensional (3D) software model or representation of Earth or another world. A virtual globe provides the user with the ability to freely move around in the virtual environment by changing the viewing angle and position. Compared to a conventional globe, virtual globes have the additional capability of representing many different views of the surface of Earth. These views may be of geographical features, man-made features such as roads and buildings, or abstract representations of demographic quantities such as population.
On November 20, 1997, Microsoft released an offline virtual globe in the form of Encarta Virtual Globe 98, followed by Cosmi's 3D World Atlas in 1999. The first widely publicized online virtual globes were NASA WorldWind (released in mid-2004) and Google Earth (mid-2005).
Types
Virtual globes may be used for study or navigation (by connecting to a GPS device) and their design varies considerably according to their purpose. Those wishing to portray a visually accurate representation of the Earth often use satellite image servers and are capable not only of rotation but also of zooming and sometimes horizon tilting. Very often such virtual globes aim to provide as true a representation of the world as is possible, with worldwide coverage up to a very detailed level. When this is the case, the interface often has the option of providing simplified graphical overlays to highlight man-made features, since these are not necessarily obvious from a photographic aerial view. Another issue raised by the availability of such detail is security, with some governments having raised concerns about the ease of access to detailed views of sensitive locations such as airports and military bases.
Another type of virtual globe exists whose aim is not the accurate representation of the planet, but instead a simplified graphical depiction. Most early computerized atlases were of this type and, while displaying less detail, these simplified |
https://en.wikipedia.org/wiki/Best%20of%20all%20possible%20worlds | The phrase "the best of all possible worlds" was coined by the German polymath and Enlightenment philosopher Gottfried Leibniz in his 1710 work Essais de Théodicée sur la bonté de Dieu, la liberté de l'homme et l'origine du mal (Essays of Theodicy on the Goodness of God, the Freedom of Man and the Origin of Evil), more commonly known simply as the Theodicy. The claim that the actual world is the best of all possible worlds is the central argument in Leibniz's theodicy, or his attempt to solve the problem of evil.
Leibniz
In Leibniz's works, the argument about the best of all possible worlds appears in the context of his theodicy, a word that he coined by combining the Greek words Theos, 'God', and dikē, 'justice'. Its object was to solve the problem of evil, that is, to reconcile the existence of evil and suffering in the world with the existence of a perfectly good, all-powerful and all-knowing God, who would seem required to prevent it; as such, the name comes from Leibniz's conceiving of the project as the vindication of God's justice, namely against the charges of injustice brought against him by such evils. Proving that this is the best of all possible worlds would dispel such charges by showing that, no matter how it may intuitively appear to us from our limited point of view, any other world – such as, namely, one without the evils which trouble our lives – would, in fact, have been worse than the current one, all things considered.
Leibniz's argument for this conclusion may be gathered from the paragraphs 53–55 of his Monadology, which run as follows:Since this is a very compact exposition, the remainder of this section will explain the argument in more words. While the text refers to "possible universes", this article will often adopt the more common usage "possible worlds", which refers to the same thing, which is explained next. As Leibniz said in the Theodicy, this term should not be misunderstood as referring only to a single planet or reality, |
https://en.wikipedia.org/wiki/Problem%20solving%20environment | A problem solving environment (PSE) is a complete, integrated and specialised piece of computer software for solving one class of problems, combining automated problem-solving methods with human-oriented tools for guiding problem resolution. A PSE may also assist users in formulating problems, selecting algorithms, simulating numerical values, and viewing and analysing results.
Purpose of PSE
Many PSEs were introduced in the 1990s. They use the language of the respective field and often employ modern graphical user interfaces. The goal is to make the software easy to use for specialists in fields other than computer science. PSEs are available for generic problems like data visualization or large systems of equations and for narrow fields of science or engineering like gas turbine design.
History
The first problem solving environments appeared a few years after the release of Fortran and Algol 60. At the time, it was thought that systems built on such high-level languages would eliminate the need for professional programmers. Surprisingly, however, PSEs were accepted, and scientists used them to write programs.
The first problem solving environments for parallel scientific computation were introduced in the 1960s as organised collections of software with minor standardisation. In the 1970s, PSE research initially focused on providing a higher-level programming language than Fortran, and plotting-package libraries appeared. Library development continued, accompanied by the emergence of computational packages and graphical systems for data visualisation. By the 1990s, hypertext and point-and-click interfaces had moved the field towards inter-operability, and a "software parts" industry finally emerged.
Over the past few decades, many PSEs have been developed to solve problems and to support users from different categories, including education, general programming, CSE software learning, job executing |
https://en.wikipedia.org/wiki/Finger%20binary | Finger binary is a system for counting and displaying binary numbers on the fingers of either or both hands. Each finger represents one binary digit or bit. This allows counting from zero to 31 using the fingers of one hand, or 1023 using both: that is, up to 2⁵ − 1 or 2¹⁰ − 1 respectively.
Modern computers typically store values as some whole number of 8-bit bytes, making the fingers of both hands together equivalent to 1¼ bytes of storage—in contrast to less than half a byte when using ten fingers to count up to 10.
Mechanics
In the binary number system, each numerical digit has two possible states (0 or 1) and each successive digit represents an increasing power of two.
Note: What follows is only one of several possible schemes for assigning the values 1, 2, 4, 8, 16, etc. to the fingers, and not necessarily the best (see the illustrations below). The rightmost digit represents two to the zeroth power (i.e., it is the "ones digit"); the digit to its left represents two to the first power (the "twos digit"); the next digit to the left represents two to the second power (the "fours digit"); and so on. (The decimal number system is essentially the same, except that powers of ten are used: "ones digit", "tens digit", "hundreds digit", etc.)
It is possible to use anatomical digits to represent numerical digits by using a raised finger to represent a binary digit in the "1" state and a lowered finger to represent it in the "0" state. Each successive finger represents a higher power of two.
With palms oriented toward the counter's face, the values for when only the right hand is used are:
When only the left hand is used:
When both hands are used:
And, alternately, with the palms oriented away from the counter:
The values of each raised finger are added together to arrive at a total number. In the one-handed version, all fingers raised is thus 31 (16 + 8 + 4 + 2 + 1), and all fingers lowered (a fist) is 0. In the two-handed system, all fingers raised is 1,023 (512 + 256 + 1 |
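The scheme described above amounts to reading the raised fingers as the bits of a binary number. A minimal sketch, assuming the least-significant finger is listed first (one of the several possible finger-to-bit assignments the article mentions):

```python
def fingers_to_number(raised):
    """Convert a finger pattern to its value.

    raised: list of 0/1 flags, one per finger,
    least significant finger first.
    """
    return sum(bit << i for i, bit in enumerate(raised))

# All five fingers of one hand raised: 16 + 8 + 4 + 2 + 1
assert fingers_to_number([1, 1, 1, 1, 1]) == 31

# All ten fingers of both hands raised:
assert fingers_to_number([1] * 10) == 1023

# A fist (all fingers lowered) is zero:
assert fingers_to_number([0, 0, 0, 0, 0]) == 0
```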
https://en.wikipedia.org/wiki/Complementary%20code%20keying | Complementary code keying (CCK) is a modulation scheme used with wireless networks (WLANs) that employ the IEEE 802.11b specification. In 1999, CCK was adopted to supplement the Barker code in wireless digital networks to achieve data rates higher than 2 Mbit/s at the expense of shorter range. The shorter chipping sequence in CCK (8 bits versus 11 bits in the Barker code) means less spreading, which yields a higher data rate but makes the signal more susceptible to narrowband interference, resulting in a shorter radio transmission range. Besides its shorter chipping sequence, CCK also has more chipping sequences available to encode more bits (4 chipping sequences at 5.5 Mbit/s and 8 chipping sequences at 11 Mbit/s), increasing the data rate even further. The Barker code, by contrast, has only a single chipping sequence.
The complementary codes first discussed by Golay were pairs of binary complementary codes, and he noted that when the elements of a code of length N were either −1 or 1, it followed immediately from their definition that the sum of their respective autocorrelation sequences was zero at all points except for the zero shift, where it is equal to K×N (K being the number of code words in the set).
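Golay's property can be checked numerically on a small pair. The length-2 pair A = [1, 1], B = [1, −1] used below is a standard small example chosen for illustration; the sum of the two autocorrelations should be K×N at shift 0 and zero at every other shift.

```python
def autocorr(code, shift):
    """Aperiodic autocorrelation of a code at a given nonnegative shift."""
    return sum(code[i] * code[i + shift] for i in range(len(code) - shift))

A, B = [1, 1], [1, -1]  # an illustrative Golay complementary pair
K, N = 2, 2             # two code words, each of length 2

for shift in range(N):
    total = autocorr(A, shift) + autocorr(B, shift)
    expected = K * N if shift == 0 else 0  # 4 at shift 0, else 0
    assert total == expected
```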
CCK is a variation and improvement on M-ary Orthogonal Keying and uses 'polyphase complementary codes'. They were developed by Lucent Technologies and Harris Semiconductor and were adopted by the 802.11 working group in 1998. CCK is the form of modulation used when 802.11b operates at either 5.5 or 11 Mbit/s. CCK was selected over competing modulation techniques as it used approximately the same bandwidth and could use the same preamble and header as pre-existing 1 and 2 Mbit/s wireless networks and thus facilitated interoperability.
Polyphase complementary codes, first proposed by Sivaswamy, 1978, are codes where each element is a complex number of unit magnitude and arbitrary phase, or more specifically for 802.11b is one of [1, −1, j, −j].
Networks using the 802.11g specification e |
https://en.wikipedia.org/wiki/Perfect%20mixing | Perfect mixing is a term heavily used in relation to the definition of models that predict the behavior of chemical reactors. Perfect mixing assumes that there are no spatial gradients in a given physical envelope, such as:
concentration (with respect to any chemical species)
temperature
chemical potential
catalytic activity
Physical chemistry |
https://en.wikipedia.org/wiki/Sucrose%20gap | The sucrose gap technique is used to create a conduction block in nerve or muscle fibers. A high concentration of sucrose is applied to the extracellular space, which prevents the correct opening and closing of sodium and potassium channels, increasing resistance between two groups of cells. It was originally developed by Robert Stämpfli for recording action potentials in nerve fibers, and is particularly useful for measuring irreversible or highly variable pharmacological modifications of channel properties since untreated regions of membrane can be pulled into the node between the sucrose regions.
History
The sucrose gap technique was first introduced by Robert Stämpfli in 1954, who had worked with Alan Hodgkin and Andrew Huxley between 1947 and 1949. From his research, Stämpfli determined that currents moving along nerve fibers can be measured more easily when there is a gap of high resistance that reduces the amount of conducting medium outside of the cell. Stämpfli observed many problems with the methods being used to measure membrane potential at the time. He experimented with a new method that he called the sucrose gap. The method was used to study action potentials in nerve fibers.
Huxley observed Stämpfli's method and agreed that it was useful and produced very few errors. The sucrose gap technique also contributed to Stämpfli's and Huxley's discovery of inhibitory junction potentials. Since its introduction, many improvements and alterations have been made to the technique. One modification of the single sucrose gap method was introduced by C.H.V. Hoyle in 1987.
The double sucrose gap technique, which was first used by Rougier, Vassort, and Stämpfli to study cardiac cells in 1968, was improved by C. Leoty and J. Alix who introduced an improved chamber for the double sucrose gap with voltage clamp technique which eliminated external resistance from the node.
Method
A classic sucrose gap technique is typically set up with three chambers that each contain a segment of |
https://en.wikipedia.org/wiki/ETL%20SEMKO | ETL SEMKO (formerly Electrical Testing Laboratory) is a division of Intertek Group plc (LSE: ITRK) which is based in London. It specializes in electrical product safety testing, EMC testing, and benchmark performance testing. ETL SEMKO operates more than 30 offices and laboratories on six continents. SEMKO (Svenska Elektriska Materielkontrollanstalten "The Swedish Electric Equipment Control Office") was, until 1990, the body responsible for testing and certifying electric appliances in Sweden. The "S" mark was mandatory for products sold in Sweden until the common European CE mark was adopted prior to Sweden's accession to the European Union.
See also
Product certification
Canadian Standards Association
CE mark
Certification mark
Underwriters Laboratories |
https://en.wikipedia.org/wiki/Phenocopy | A phenocopy is a variation in phenotype (generally referring to a single trait) which is caused by environmental conditions (often, but not necessarily, during the organism's development), such that the organism's phenotype matches a phenotype which is determined by genetic factors. It is not a type of mutation, as it is non-hereditary.
The term was coined by Richard Goldschmidt in 1935. He used it to refer to forms, produced by some experimental procedure, whose appearance duplicates or copies the phenotype of some mutant or combination of mutants.
Examples
The butterfly genus Vanessa can change phenotype based on the local temperature. If introduced to Lapland they mimic butterflies localised to this area; and if localised to Syria they mimic butterflies of this area.
The larvae of Drosophila melanogaster have been found to be particularly vulnerable to environmental factors which produce phenocopies of known mutations; these factors include temperature, shock, radiation, and various chemical compounds. In the fruit fly, Drosophila melanogaster, the normal body colour is brownish gray with black margins. A hereditary mutant, in which the body colour is yellow, was discovered by T.H. Morgan in 1910. This was a genotypic character, constant in both kinds of flies across all environments. However, in 1939, Rapoport discovered that if larvae of normal flies were fed silver salts, they develop into yellow-bodied flies irrespective of their genotype. These yellow-bodied flies, which are genetically brown, are phenocopies of the original yellow-bodied mutant.
Phenocopy can also be observed in Himalayan rabbits. When raised in moderate temperatures, Himalayan rabbits are white in colour with black tail, nose, and ears, making them phenotypically distinguishable from genetically black rabbits. However, when raised in cold temperatures, Himalayan rabbits show black colouration of their coats, resembling the genetically black rabbits. Hence this Himalayan rabbit is a phenocop |
https://en.wikipedia.org/wiki/Autonegotiation | Autonegotiation is a signaling mechanism and procedure used by Ethernet over twisted pair by which two connected devices choose common transmission parameters, such as speed, duplex mode, and flow control. In this process, the connected devices first share their capabilities regarding these parameters and then choose the highest performance transmission mode they both support.
Autonegotiation is defined in clause 28 of IEEE 802.3 and was originally an optional component in the Fast Ethernet standard. It is backwards compatible with the normal link pulses (NLP) used by 10BASE-T. The protocol was significantly extended in the Gigabit Ethernet standard, and is mandatory for 1000BASE-T gigabit Ethernet over twisted pair.
In the OSI model, autonegotiation resides in the physical layer.
Standardization and interoperability
In 1995, the Fast Ethernet standard was released. Because this introduced a new speed option for the same wires, it included a means for connected network adapters to negotiate the best possible shared mode of operation. The autonegotiation protocol included in IEEE 802.3 clause 28 was developed from a patented technology by National Semiconductor known as NWay. The company gave a letter of assurance for anyone to use their system for a one time license fee. Another company has since bought the rights to that patent.
The first version of the autonegotiation specification, in the 1995 IEEE 802.3u Fast Ethernet standard, was implemented differently by different manufacturers, leading to interoperability issues. These problems led many network administrators to manually set the speed and duplex mode of each network interface. However, the use of manually set configuration may also lead to duplex mismatches. Duplex mismatch is difficult to diagnose because the network is nominally working; simple programs used for network tests such as ping report a valid connection. However, network performance is significantly impacted.
The autonegotiation specific |
https://en.wikipedia.org/wiki/Repaint | A repaint is a toy, typically a figure or doll, that was created entirely from a mold that was previously available; however, the colors of the plastic and/or the paint operations have been changed. Repaints differ from redecos in that repaints do not alter the actual placement of paint applications while redecos do.
Since molds can be expensive to create, this is often seen as a comparatively inexpensive way for a toy company to make many different toys available in a cost-effective manner. It is also an effective way for toy manufacturers to produce exclusive figures, chase figures or other variants.
One of the many franchises that repaint their figures is Transformers. Bumblebee toys are sometimes repainted the color red to resemble another Transformers character: Cliffjumper.
In the collecting of 1:6 scale action figures, repainting has several methods, which can generally be narrowed down to three categories: paint, pastel and wash.
The term repaint also refers to fashion dolls whose original manufacturer face paint is removed and then repainted by an artist. Repaint styles include highly realistic treatments, fantasy makeovers, and celebrity likenesses. These dolls are often OOAK (one of a kind), although some artists create repaints in small limited editions.
See also
Palette swap, a comparable concept for video game characters
Ball-jointed doll, a type of doll that is often customized and repainted
Reborn dolls, baby dolls customized and repainted for realism |