| source | text |
|---|---|
https://en.wikipedia.org/wiki/Sight%20%28device%29 | A sight or sighting device is any device used to assist in precise visual alignment (i.e. aiming) of ranged weapons, surveying instruments, aircraft equipment or optical illumination equipment with the intended target. Sights can be a simple set or system of physical markers that serve as visual references for directly aligning the user's line of sight with the target (such as iron sights on firearms), or optical instruments that provide an optically enhanced — often magnified — target image aligned in the same focus with an aiming point (e.g. telescopic, reflector and holographic sights). There are also sights that actively project an illuminated point of aim (a.k.a. "hot spot") onto the target itself so it can be observed by anyone with a direct view, such as laser sights and infrared illuminators on some night vision devices, as well as augmented or even virtual reality-enabled digital cameras ("smart scopes") with software algorithms that produce digitally enhanced target images.
Simple sights
At its simplest, a sight typically has two components, front and rear aiming pieces that have to be lined up. Sights such as this can be found on many types of devices including weapons, surveying and measuring instruments, and navigational tools.
On weapons, these sights are usually formed by rugged metal parts, giving them the name "iron sights", as distinct from optical or computing sights. On many types of weapons they are built-in and may be fixed, adjustable, or marked for elevation, windage, target speed, etc. They are also classified by form as either notch (open sight) or aperture (closed sight) designs. These types of sights can require considerable experience and skill, as the user has to hold proper eye position and simultaneously focus on the rear sight, the front sight, and a target, all at different distances, and align all three planes of focus.
Optical sights
Optical sights use optics that give the user an enhanced image with an aligned aiming point or pattern (al |
https://en.wikipedia.org/wiki/VDE%20e.V. | The VDE e. V. is a German technical-scientific association. VDE is best known for creating and maintaining standards in the field of electrical safety and has a strong influence on the DIN (German Institute for Standardization).
Organization
VDE has about 36,000 members (including 1,300 companies), according to its own statements, and is one of the largest technical and scientific associations in Europe. The VDE is headquartered in Frankfurt am Main and has main branches in Brussels and Berlin. The president of VDE is the former Alcatel and Alstom Power manager Alf Henryk Wulf.
VDE carries out standardization work, product testing and certification. VDE is involved in technical knowledge transfers, research and promoting young talents in the technologies of electrical engineering, electronics and information technology and their applications. Other VDE activities include ensuring safety in electrical engineering, developing recognized technical regulations as national and international standards as well as testing and certifying electrical and electronic devices and systems. VDE works in the fields of information technology, energy, medical engineering, microelectronics, micro and nanotechnology and automation.
History
The first electrotechnical association in the territory of the then German Empire had existed since 1879. On January 21 and 22, 1893, 37 delegates from various German-speaking electrical engineering associations founded the VDE in Berlin. The first technical committee of the VDE was formed at the first annual meeting. Its task was to draw up regulations for electrical systems. As a result, the first "VDE regulation", "VDE 0100", was passed in 1895 for the safe creation of electrotechnical systems.
In 1937 the VDE was incorporated into the NS-Bund Deutscher Technik (NSBDT). NSDAP Reichsleiter Robert Ley, who later committed suicide while awaiting trial at Nuremberg for crimes against humanity and war crimes, was leader of the NSBDT, including the VDE. |
https://en.wikipedia.org/wiki/Teichm%C3%BCller%20space | In mathematics, the Teichmüller space of a (real) topological (or differential) surface S is a space that parametrizes complex structures on S up to the action of homeomorphisms that are isotopic to the identity homeomorphism. Teichmüller spaces are named after Oswald Teichmüller.
Each point in a Teichmüller space may be regarded as an isomorphism class of "marked" Riemann surfaces, where a "marking" is an isotopy class of homeomorphisms from S to itself. It can be viewed as a moduli space for marked hyperbolic structures on the surface, and this endows it with a natural topology for which it is homeomorphic to a ball of dimension 6g − 6 for a surface of genus g ≥ 2. In this way Teichmüller space can be viewed as the universal covering orbifold of the Riemann moduli space.
The Teichmüller space has a canonical complex manifold structure and a wealth of natural metrics. The study of geometric features of these various structures is an active body of research.
The sub-field of mathematics that studies the Teichmüller space is called Teichmüller theory.
History
Moduli spaces for Riemann surfaces and related Fuchsian groups have been studied since the work of Bernhard Riemann (1826–1866), who knew that 3g − 3 complex parameters were needed to describe the variations of complex structures on a surface of genus g ≥ 2. The early study of Teichmüller space, in the late nineteenth–early twentieth century, was geometric and founded on the interpretation of Riemann surfaces as hyperbolic surfaces. Among the main contributors were Felix Klein, Henri Poincaré, Paul Koebe, Jakob Nielsen, Robert Fricke and Werner Fenchel.
The main contribution of Teichmüller to the study of moduli was the introduction of quasiconformal mappings to the subject. They allow us to give much more depth to the study of moduli spaces by endowing them with additional features that were not present in the previous, more elementary works. After World War II the subject was developed further in this analytic vein, in particular by L |
https://en.wikipedia.org/wiki/Remote%20digital%20terminal | In telecommunications, a remote digital terminal (RDT) typically accepts E1, T1 or OC-3 digital lines to communicate with a telephone Access network (AN) or telephone exchange (Local Digital Switch, LDS) on one side, and forms a local exchange (LE) on the other, which is connected to "plain old telephone service" (POTS) lines.
See also
Distributed switching
Remote concentrator
Digital access carrier system |
https://en.wikipedia.org/wiki/The%20Unadulterated%20Cat | The Unadulterated Cat by Terry Pratchett, illustrated by Gray Jolliffe, is a book written to promote what Pratchett terms the 'Real Cat', a cat who urinates in the flowerbeds, rips up the furniture, and eats frogs, mice and sundry other small animals. The opposite of the Real Cat is the 'Fizzy Keg Cat', a well-behaved and bland kind, as seen on cat food advertisements.
It was first published in 1989 by Gollancz.
Translations
Автентичната котка (Bulgarian)
Nefalšovaná kočka (Czech)
De echte kat (Dutch)
Tosikissa ei kirppuja kiroile (Finnish)
Sacrés chats ! (French)
Die gemeine Hauskatze (German)
Az igazi macska: Kampány az igazi macskáért (Hungarian)
Il Gatto D.O.C. (Italian)
Kot w stanie czystym (Polish)
Кот без дураков (Russian)
Riktiga Katter bär inte Rosett (Swedish)
External links
L-Space |
https://en.wikipedia.org/wiki/Riesz%E2%80%93Fischer%20theorem | In mathematics, the Riesz–Fischer theorem in real analysis is any of a number of closely related results concerning the properties of the space L2 of square integrable functions. The theorem was proven independently in 1907 by Frigyes Riesz and Ernst Sigismund Fischer.
For many authors, the Riesz–Fischer theorem refers to the fact that the Lp spaces from Lebesgue integration theory are complete.
Modern forms of the theorem
The most common form of the theorem states that a measurable function on [−π, π] is square integrable if and only if the corresponding Fourier series converges in the space L². This means that if the Nth partial sum of the Fourier series corresponding to a square-integrable function f is given by
$$S_N f(x) = \sum_{n=-N}^{N} F_n\, e^{inx},$$
where the nth Fourier coefficient, $F_n$, is given by
$$F_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x)\, e^{-inx}\, dx,$$
then
$$\lim_{N \to \infty} \left\| S_N f - f \right\|_{L^2} = 0,$$
where $\|\cdot\|_{L^2}$ is the L²-norm.
Conversely, if $(a_n)$ is a two-sided sequence of complex numbers (that is, its indices range from negative infinity to positive infinity) such that
$$\sum_{n=-\infty}^{\infty} |a_n|^2 < \infty,$$
then there exists a function f such that f is square-integrable and the values $a_n$ are the Fourier coefficients of f.
This form of the Riesz–Fischer theorem is a stronger form of Bessel's inequality, and can be used to prove Parseval's identity for Fourier series.
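To see the first statement in action numerically, here is a short sketch; the test function f(x) = x on [−π, π] and the trapezoidal quadrature are illustrative choices, not from the article:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 4001)
f = x                                           # a square-integrable test function

def partial_sum(N):
    """N-th partial sum S_N f of the Fourier series of f."""
    S = np.zeros_like(x)
    for n in range(-N, N + 1):
        Fn = np.trapz(f * np.exp(-1j * n * x), x) / (2 * np.pi)  # n-th coefficient
        S = S + (Fn * np.exp(1j * n * x)).real
    return S

for N in (1, 4, 16, 64):
    err = np.sqrt(np.trapz((partial_sum(N) - f) ** 2, x))        # L2 norm of S_N f - f
    print(N, round(err, 4))                     # the error shrinks as N grows
```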
Other results are often called the Riesz–Fischer theorem. Among them is the theorem that, if A is an orthonormal set in a Hilbert space H and x ∈ H, then
$$\langle x, y \rangle = 0$$
for all but countably many y ∈ A, and
$$\sum_{y \in A} |\langle x, y \rangle|^2 \le \|x\|^2.$$
Furthermore, if A is an orthonormal basis for H and x an arbitrary vector, the series
$$\sum_{y \in A} \langle x, y \rangle\, y$$
converges commutatively (or unconditionally) to x. This is equivalent to saying that for every ε > 0 there exists a finite set B₀ in A such that
$$\left\| x - \sum_{y \in B} \langle x, y \rangle\, y \right\| < \varepsilon$$
for every finite set B containing B₀. Moreover, the following conditions on the set A are equivalent:
the set A is an orthonormal basis of H
for every vector x ∈ H, $\|x\|^2 = \sum_{y \in A} |\langle x, y \rangle|^2$.
Another result, which also sometimes bears the name of Riesz and Fischer, is the theorem that L² (or more generally Lp, 0 < p ≤ ∞) is complete.
Example
The Riesz–Fischer theorem also applies in a more general setting. Let R be an inner product space consis |
https://en.wikipedia.org/wiki/Autodynamics | Autodynamics was a physics theory proposed by Ricardo Carezani (1921–2016) in the early 1940s as a replacement for Einstein's theories of special relativity and general relativity. Autodynamics never gained status as a viable alternative model within the physics community, and today is wholly rejected by mainstream science.
Main tenets of autodynamics
The primary claim of autodynamics is that the equations of the Lorentz transformation are incorrectly formulated to describe relativistic effects, which would invalidate special relativity, general relativity, and Maxwell's equations. The effect of the revised equations proposed in autodynamics is to cause particle mass to decrease with particle velocity, being exchanged with kinetic energy (with mass being zero and kinetic energy being equal to the rest mass at c). This exchange between mass and energy is the proposed mechanism underlying most of the derived conclusions of autodynamics.
Ancillary predictions of autodynamics include:
the nonexistence of the neutrino,
the existence of additional particles that have not been observed by mainstream physicists (including the "picograviton" and the "electromuon"),
the existence of additional decay modes for muons and interaction modes for energetic atomic nuclei.
Status of autodynamics
Autodynamics is wholly rejected by the mainstream scientific community. Since Carezani's original publication, no papers on autodynamics have appeared in the scientific literature, except for additional papers by Carezani published in alternative journals such as Physics Essays. A 1999 article in the magazine Wired quotes H. Pierre Noyes, a professor at the Stanford Linear Accelerator Center, as stating, "autodynamics was disproved. Special relativity is correct". Noyes was a researcher in an experiment attempting to compare the predictions of SR and AD, and concluded that the values calculated by SR were significantly closer to what was observed. Carezani later argued that the ex |
https://en.wikipedia.org/wiki/Dobi%C5%84ski%27s%20formula | In combinatorial mathematics, Dobiński's formula states that the n-th Bell number Bn (i.e., the number of partitions of a set of size n) equals
$$B_n = \frac{1}{e} \sum_{k=0}^{\infty} \frac{k^n}{k!},$$
where e denotes Euler's number.
The formula is named after G. Dobiński, who published it in 1877.
Probabilistic content
In the setting of probability theory, Dobiński's formula represents the nth moment of the Poisson distribution with mean 1. Sometimes Dobiński's formula is stated as saying that the number of partitions of a set of size n equals the nth moment of that distribution.
Reduced formula
The computation of the sum of Dobiński's series can be reduced to a finite sum of n + o(n) terms, taking into account the information that B_n is an integer. Precisely one has, for any integer K > 1,
$$B_n = \left\lceil \frac{1}{e} \sum_{k=0}^{K-1} \frac{k^n}{k!} \right\rceil$$
provided $\frac{K^n}{K!} \le 1$ (a condition that of course implies $K > n$, but that is satisfied by some $K$ of size $n + o(n)$). Indeed, since $K > n$, for every $k \ge K$ the ratio of consecutive terms satisfies
$$\frac{(k+1)^n/(k+1)!}{k^n/k!} = \frac{1}{k+1}\left(1 + \frac{1}{k}\right)^n \le \frac{e}{K+1},$$
so that the tail $\sum_{k \ge K} \frac{k^n}{k!}$ is dominated by a geometric series with first term $\frac{K^n}{K!} \le 1$ and ratio $\frac{e}{K+1} < 1$; one checks that its contribution to $\frac{1}{e}\sum_{k \ge 0} \frac{k^n}{k!}$ is less than 1, whence the reduced formula.
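A minimal Python sketch of this reduction: truncate Dobiński's series and round, since B_n is known to be an integer. The cutoff K = 2n + 10 is an ad-hoc choice that is comfortably large for small n, not a value from the article:

```python
import math

def bell(n, K=None):
    """Bell number B_n via a truncated Dobinski sum."""
    if K is None:
        K = 2 * n + 10                          # crude cutoff, ample for small n
    s = sum(k ** n / math.factorial(k) for k in range(K))
    return round(s / math.e)                    # B_n is an integer, so rounding recovers it

print([bell(n) for n in range(8)])              # [1, 1, 2, 5, 15, 52, 203, 877]
```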
Generalization
Dobiński's formula can be seen as a particular case of a more general relation; in particular, it is the value at x = 1 of the formula for the Touchard polynomials,
$$T_n(x) = e^{-x} \sum_{k=0}^{\infty} \frac{k^n x^k}{k!}.$$
Proof
One proof relies on a formula for the generating function for Bell numbers,
$$e^{e^x - 1} = \sum_{n=0}^{\infty} \frac{B_n}{n!} x^n.$$
The power series for the exponential gives
$$e^{e^x} = \sum_{k=0}^{\infty} \frac{e^{kx}}{k!} = \sum_{k=0}^{\infty} \frac{1}{k!} \sum_{n=0}^{\infty} \frac{(kx)^n}{n!},$$
so
$$e^{e^x - 1} = \frac{1}{e} \sum_{k=0}^{\infty} \frac{1}{k!} \sum_{n=0}^{\infty} \frac{(kx)^n}{n!}.$$
The coefficient of $x^n$ in this power series must be $B_n/n!$, so
$$B_n = \frac{1}{e} \sum_{k=0}^{\infty} \frac{k^n}{k!}.$$
Another style of proof was given by Rota. Recall that if x and n are nonnegative integers then the number of one-to-one functions that map a size-n set into a size-x set is the falling factorial
$$(x)_n = x(x-1)(x-2)\cdots(x-n+1).$$
Let ƒ be any function from a size-n set A into a size-x set B. For any b ∈ B, let ƒ −1(b) = {a ∈ A : ƒ(a) = b}. Then {ƒ −1(b) : b ∈ B, ƒ −1(b) ≠ ∅} is a partition of A. Rota calls this partition the "kernel" of the function ƒ. Any function from A into B factors into
one function that maps a member of A to the element of the kernel to which it belongs, and
another function, which is necessarily one-to-one, that maps the kernel into B.
The first of these two factors is completely determined by the partition t |
https://en.wikipedia.org/wiki/G.%20Harry%20Stine | George Harry Stine (March 26, 1928 – November 2, 1997) was one of the founding figures of model rocketry, a science and technology writer, and (under the name Lee Correy) a science fiction author.
Education and early career
Stine grew up in Colorado Springs and attended New Mexico Military Institute and Colorado College in Colorado Springs, majoring in physics. Upon his graduation he went to work at White Sands Proving Grounds, first as a civilian scientist and then, from 1955 to 1957, at the U.S. Naval Ordnance Missile Test Facility as head of the Range Operations Division.
Stine and his wife Barbara were friends of author Robert A. Heinlein, who sponsored their wedding, as Harry's parents were dead and Barbara's mother too ill to travel. Several of Heinlein's books are dedicated to one or both of them, most particularly Have Space Suit - Will Travel. Stine wrote science fiction under the pen name "Lee Correy" in the mid-1950s and under his own name in the 1980s and 1990s, as well as writing science articles for Popular Mechanics.
Model rocketry
After White Sands, Stine was employed at several other aerospace companies, finally ending up at Martin working on the Titan project. This job was short-lived: he was abruptly fired in 1957 when United Press called him for a reaction to the launch of Sputnik 1, and he repeated to them a passage from his just-published book Earth Satellites and the Race for Space Superiority, in which he wrote, "For the first time since the dawn of history, the Earth is going to have more than one moon. This is due to happen within the next few months—or it may have already happened even at the time you are reading this." The next day he was told to clear out his desk. To be more precise, in his "The Formative Years of Model Rocketry, 1957–1962; A Personal Memoir" (International Astronautical Federation, IAF XXVIIth Congress, Anaheim, CA, October 10–16, 1976 (76-241)), he wrote "I was fired by the Martin Company on October 5, 1957, for tellin |
https://en.wikipedia.org/wiki/Knudsen%20gas | A Knudsen gas is a gas in a state of such low density that the average distance travelled by the gas molecules between collisions (mean free path) is greater than the diameter of the receptacle that contains it. If the mean free path is much greater than the diameter, the flow regime is dominated by collisions between the gas molecules and the walls of the receptacle, rather than intermolecular collisions with each other. It is named after Martin Knudsen.
Knudsen number
For a Knudsen gas, the Knudsen number must be greater than 1. The Knudsen number can be defined as:
$$\mathrm{Kn} = \frac{\lambda}{d}$$
where
λ is the mean free path [m]
d is the diameter of the receptacle [m].
When 0.1 < Kn < 10, the flow regime of the gas is transitional flow. In this regime the intermolecular collisions between gas particles are not yet negligible compared to collisions with the wall. However, when Kn > 10, the flow regime is free molecular flow, so the intermolecular collisions between the particles are negligible compared to the collisions with the wall.
Example
For example, consider a receptacle of air at room temperature and pressure with a mean free path of 68 nm. If the diameter of the receptacle is less than 68 nm, the Knudsen number would be greater than 1, and this sample of air would be considered a Knudsen gas. It would not be a Knudsen gas if the diameter of the receptacle is greater than 68 nm. |
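A minimal sketch of this arithmetic; the helper name and the extra diameters are illustrative assumptions:

```python
def knudsen(mean_free_path_m, diameter_m):
    return mean_free_path_m / diameter_m        # Kn = lambda / d

mfp_air = 68e-9                                 # mean free path from the example above
for d in (50e-9, 68e-9, 1e-6):                  # receptacle diameters in metres
    Kn = knudsen(mfp_air, d)
    print(f"d = {d:.0e} m: Kn = {Kn:.3f} ->", "Knudsen gas" if Kn > 1 else "not a Knudsen gas")
```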
https://en.wikipedia.org/wiki/Bird%20flight | Bird flight is the primary mode of locomotion used by most bird species in which birds take off and fly. Flight assists birds with feeding, breeding, avoiding predators, and migrating.
Bird flight is one of the most complex forms of locomotion in the animal kingdom. Each facet of this type of motion, including hovering, taking off, and landing, involves many complex movements. As different bird species adapted over millions of years through evolution for specific environments, prey, predators, and other needs, they developed specializations in their wings, and acquired different forms of flight.
Various theories exist about how bird flight evolved, including flight from falling or gliding (the trees down hypothesis), from running or leaping (the ground up hypothesis), from wing-assisted incline running or from proavis (pouncing) behavior.
Basic mechanics of bird flight
Lift, drag and thrust
The fundamentals of bird flight are similar to those of aircraft, in which the aerodynamic forces sustaining flight are lift, drag, and thrust. Lift force is produced by the action of air flow on the wing, which is an airfoil. The airfoil is shaped such that the air provides a net upward force on the wing, while the movement of air is directed downward. Additional net lift may come from airflow around the bird's body in some species, especially during intermittent flight while the wings are folded or semi-folded (cf. lifting body).
Aerodynamic drag is the force opposite to the direction of motion, and hence the source of energy loss in flight. The drag force can be separated into two portions, lift-induced drag, which is the inherent cost of the wing producing lift (this energy ends up primarily in the wingtip vortices), and parasitic drag, including skin friction drag from the friction of air and body surfaces and form drag from the bird's frontal area. The streamlining of the bird's body and wings reduces these forces. Unlike aircraft, which have engines to produce thrust, bi |
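As a toy illustration of this decomposition (the excerpt cuts off above), the sketch below sums an induced term falling as 1/v² and a parasitic term growing as v²; the coefficients are invented, not measured bird data:

```python
def total_drag(v, k_induced=2000.0, k_parasitic=0.02):
    """Induced drag falls as 1/v^2 while parasitic drag grows as v^2."""
    return k_induced / v ** 2 + k_parasitic * v ** 2

for v in (5, 10, 15, 20, 30):                   # airspeed in m/s
    print(v, round(total_drag(v), 1))           # total drag is lowest at an intermediate speed
```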
https://en.wikipedia.org/wiki/Reaching%20definition | In compiler theory, a reaching definition for a given instruction is an earlier instruction whose target variable can reach (be assigned to) the given one without an intervening assignment. For example, in the following code:
d1 : y := 3
d2 : x := y
d1 is a reaching definition for d2. In the following example, however:
d1 : y := 3
d2 : y := 4
d3 : x := y
d1 is no longer a reaching definition for d3, because d2 kills its reach: the value defined in d1 is no longer available and cannot reach d3.
As analysis
The similarly named reaching definitions is a data-flow analysis which statically determines which definitions may reach a given point in the code. Because of its simplicity, it is often used as the canonical example of a data-flow analysis in textbooks. The data-flow confluence operator used is set union, and the analysis is forward flow. Reaching definitions are used to compute use-def chains.
The data-flow equations used for a given basic block S in reaching definitions are:
$$\mathrm{REACH}_{\rm in}[S] = \bigcup_{p \in \mathrm{pred}[S]} \mathrm{REACH}_{\rm out}[p]$$
$$\mathrm{REACH}_{\rm out}[S] = \mathrm{GEN}[S] \cup (\mathrm{REACH}_{\rm in}[S] - \mathrm{KILL}[S])$$
In other words, the set of reaching definitions going into S are all of the reaching definitions from S's predecessors, pred[S]. pred[S] consists of all of the basic blocks that come before S in the control-flow graph. The reaching definitions coming out of S are all reaching definitions of its predecessors minus those reaching definitions whose variable is killed by S plus any new definitions generated within S.
For a generic instruction $d : y \leftarrow f(x_1, \ldots, x_n)$, we define the GEN and KILL sets as follows:
$\mathrm{GEN}[d] = \{d\}$, a set of locally available definitions in a basic block
$\mathrm{KILL}[d] = \mathrm{DEFS}[y] - \{d\}$, a set of definitions (not locally available, but in the rest of the program) killed by definitions in the basic block,
where DEFS[y] is the set of all definitions that assign to the variable y. Here d is a unique label attached to the assigning instruction; thus, the domain of values in reaching definitions are these instruction labels.
Worklist algorithm
Reaching definition is usually calculated using an iterative worklist algorithm.
Input: control-flow gra |
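The pseudocode is cut off in this excerpt; as a stand-in, here is a minimal Python sketch of the worklist iteration over the REACH/GEN/KILL equations above. The two-block CFG mirrors the d1/d2/d3 example from the introduction, and all names are illustrative:

```python
def reaching_definitions(blocks, preds, gen, kill):
    """Iteratively solve REACH_in/REACH_out until a fixed point is reached."""
    reach_in = {b: set() for b in blocks}
    reach_out = {b: set(gen[b]) for b in blocks}
    worklist = set(blocks)
    while worklist:
        b = worklist.pop()
        reach_in[b] = set().union(*(reach_out[p] for p in preds[b])) if preds[b] else set()
        new_out = gen[b] | (reach_in[b] - kill[b])
        if new_out != reach_out[b]:
            reach_out[b] = new_out
            worklist.update(s for s in blocks if b in preds[s])  # re-queue successors
    return reach_in, reach_out

blocks = ["B1", "B2"]                            # B1 = {d1: y:=3}, B2 = {d2: y:=4, d3: x:=y}
preds = {"B1": [], "B2": ["B1"]}
gen = {"B1": {"d1"}, "B2": {"d2", "d3"}}
kill = {"B1": {"d2"}, "B2": {"d1"}}
print(reaching_definitions(blocks, preds, gen, kill)[0]["B2"])   # {'d1'} reaches B2's entry
```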
https://en.wikipedia.org/wiki/Apocope | In phonology, apocope is the loss (elision) of a word-final vowel. In a broader sense, it can refer to the loss of any final sound (including consonants) from a word.
Etymology
Apocope comes from the Greek apokopḗ (ἀποκοπή) from apokóptein (ἀποκόπτειν) "cutting off", from apó (ἀπό) "away from" and kóptein (κόπτειν) "to cut".
Historical sound change
In historical linguistics, apocope is often the loss of an unstressed vowel.
Loss of an unstressed vowel or vowel and nasal
Latin mare → Portuguese mar (sea)
Vulgar Latin panem → Spanish pan (bread)
Vulgar Latin lupum → French loup (wolf)
Proto-Germanic *landą → Old, Middle, and Modern English land
Old English lufu → Modern English love (noun)
Old English lufian → Modern English love (verb)
The loss of a final unstressed vowel is a feature of southern dialects of Māori in comparison to standard Māori, for example the term kainga (village) is rendered in southern Māori as kaik. A similar feature is seen in the dialects of Northern Italy.
Loss of other sounds
Non-rhotic English accents, including British Received Pronunciation, suppress the final r in each syllable (except when it is followed by a vowel). (In most accents, the suppressed r lengthens or modifies the preceding vowel.)
French pronunciation suppresses the final consonant of most words (but it is normally pronounced as a liaison at the beginning of the following word in the sentence if the latter word begins with a vowel or with an unaspirated 'h').
Latin → Spanish
Case marker
In Estonian and the Sami languages, apocopes explain the forms of grammatical cases. For example, a nominative is described as having apocope of the final vowel, but the genitive does not have it. Throughout its history, however, the genitive case marker has also undergone apocope: Estonian linn ("a city") and linna ("of a city") are derived from linna and linnan respectively, as can still be seen in the corresponding Finnish word.
In the genitive form, the final n, while it was being deleted, blocked the loss of a. In Colloquial Finnish, the final vowel is sometimes omitted from cas |
https://en.wikipedia.org/wiki/Radio%20jamming | Radio jamming is the deliberate blocking of or interference with wireless communications. In some cases, jammers work by the transmission of radio signals that disrupt telecommunications by decreasing the signal-to-noise ratio.
The concept can be used in wireless data networks to disrupt information flow. It is a common form of censorship in totalitarian countries, in order to prevent foreign radio stations in border areas from reaching the country.
Jamming is usually distinguished from interference that can occur due to device malfunctions or other accidental circumstances. Devices that simply cause interference are regulated differently. Unintentional "jamming" occurs when an operator transmits on a busy frequency without first checking whether it is in use, or without being able to hear stations using the frequency. Another form of unintentional jamming occurs when equipment accidentally radiates a signal, such as a cable television plant that accidentally emits on an aircraft emergency frequency.
Distinction between "jamming" and "interference"
Originally the terms were used interchangeably but nowadays most radio users use the term "jamming" to describe the deliberate use of radio noise or signals in an attempt to disrupt communications (or prevent listening to broadcasts) whereas the term "interference" is used to describe unintentional forms of disruption (which are far more common). However, the distinction is still not universally applied. For inadvertent disruptions, see electromagnetic compatibility.
Method
Intentional communications jamming is usually aimed at radio signals to disrupt control of a battle. A transmitter, tuned to the same frequency as the opponents' receiving equipment and with the same type of modulation, can, with enough power, override any signal at the receiver. Digital wireless jamming for signals such as Bluetooth and WiFi is possible with very low power.
The most common types of this form of signal jamming are random noise, ra |
https://en.wikipedia.org/wiki/Slutsky%20equation | The Slutsky equation (or Slutsky identity) in economics, named after Eugen Slutsky, relates changes in Marshallian (uncompensated) demand to changes in Hicksian (compensated) demand, so named because it compensates the consumer to maintain a fixed level of utility.
The Slutsky equation has two parts: the substitution effect and the income effect.
In general, the substitution effect can be negative for consumers as it can limit choices. Slutsky designed this formula to explore a consumer's response as the price changes. When the price increases, the budget set moves inward, which also causes the quantity demanded to decrease. In contrast, when the price decreases, the budget set moves outward, which leads to an increase in the quantity demanded. The substitution effect is due to the effect of the relative price change while the income effect is due to the effect of income being freed up. The equation demonstrates that the change in the demand for a good, caused by a price change, is the result of two effects:
a substitution effect: when the price of a good changes, as it becomes relatively cheaper, if, hypothetically, the consumer's consumption were to remain the same, income would be freed up, which could be spent on a combination of one or more of the goods.
an income effect: the purchasing power of a consumer increases as a result of a price decrease, so the consumer can now afford better products or more of the same products, depending on whether the product itself is a normal good or an inferior good.
The Slutsky equation decomposes the change in demand for good i in response to a change in the price of good j:
$$\frac{\partial x_i(p, w)}{\partial p_j} = \frac{\partial h_i(p, u)}{\partial p_j} - \frac{\partial x_i(p, w)}{\partial w}\, x_j(p, w),$$
where h(p, u) is the Hicksian demand and x(p, w) is the Marshallian demand, at the vector of price levels p, wealth level (or, alternatively, income level) w, and fixed utility level u given by maximizing utility at the original price and income, formally given by the indirect utility function v(p, w). The right-hand side of the equation is equal to the change in demand for goo |
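A numerical sanity check of the identity for the own-price case i = j = 1, under an assumed Cobb–Douglas utility u(x₁, x₂) = x₁^a · x₂^(1−a); the demand and expenditure functions below follow from that assumption, and finite differences stand in for the partial derivatives:

```python
a = 0.3                                            # Cobb-Douglas share parameter (assumed)

def marshallian_x1(p1, p2, w):
    return a * w / p1                              # uncompensated demand for good 1

def expenditure(p1, p2, u):
    return u * (p1 / a) ** a * (p2 / (1 - a)) ** (1 - a)

def hicksian_x1(p1, p2, u):
    return a * expenditure(p1, p2, u) / p1         # compensated demand (Shephard's lemma)

p1, p2, w = 2.0, 1.0, 100.0
x1 = marshallian_x1(p1, p2, w)
x2 = (1 - a) * w / p2
u = x1 ** a * x2 ** (1 - a)                        # utility at the original optimum

h = 1e-5                                           # finite-difference step
dx_dp1 = (marshallian_x1(p1 + h, p2, w) - marshallian_x1(p1 - h, p2, w)) / (2 * h)
dh_dp1 = (hicksian_x1(p1 + h, p2, u) - hicksian_x1(p1 - h, p2, u)) / (2 * h)
dx_dw = (marshallian_x1(p1, p2, w + h) - marshallian_x1(p1, p2, w - h)) / (2 * h)

print(round(dx_dp1, 4), round(dh_dp1 - dx_dw * x1, 4))   # both sides come out to -7.5
```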
https://en.wikipedia.org/wiki/Active%20Body%20Control | Active Body Control, or ABC, is the Mercedes-Benz brand name used to describe electronically controlled hydropneumatic suspension.
This suspension combines a high level of ride quality with control of the vehicle body motions, and therefore virtually eliminates body roll in many driving situations including cornering, accelerating, and braking.
Mercedes-Benz has been experimenting with these capabilities for automobile suspension since the air suspension of the 1963 600 and the hydropneumatic (fluid and air) suspension of the 1974 6.9.
For years ABC was offered only on rear-wheel-drive models, as all-wheel-drive 4MATIC models were available only with Airmatic semi-active air suspension; the 2019 Mercedes-Benz GLE 450 4MATIC was the first AWD model to offer ABC.
The production version was introduced at the 1999 Geneva Motor Show on the new Mercedes-Benz CL-Class C215.
Description
In the ABC system, a computer detects body movement from sensors located throughout the vehicle, and controls the action of the active suspension with the use of hydraulic servomechanisms. The hydraulic pressure to the servos is supplied by a high-pressure radial-piston hydraulic pump, operating at 3,000 psi. Accumulators regulate the hydraulic pressure, by means of an enclosed nitrogen bubble separated from the hydraulic fluid by a membrane.
A total of 13 sensors continually monitor body movement and vehicle level and supply the ABC controller with new data every ten milliseconds. Four level sensors, one at each wheel, measure the ride level of the vehicle; three accelerometers measure the vertical body acceleration; one acceleration sensor measures the longitudinal and one sensor the transverse body acceleration. As the ABC controller receives and processes data, it operates four hydraulic servos, each mounted on an air and pressurized hydraulic fluid strut, beside each wheel.
Almost instantaneously, the servo regulated suspension generates counter forces to body lean, dive and squa |
https://en.wikipedia.org/wiki/Massimo%20Pigliucci | Massimo Pigliucci (; born January 16, 1964) is an Italian-American philosopher and biologist who is professor of philosophy at the City College of New York, former co-host of the Rationally Speaking Podcast, and former editor in chief for the online magazine Scientia Salon. He is a critic of pseudoscience and creationism, and an advocate for secularism and science education.
Biography
Pigliucci was born in Monrovia, Liberia, and raised in Rome, Italy. He has a doctorate in genetics from the University of Ferrara, Italy, a PhD in biology from the University of Connecticut, and a PhD in philosophy of science from the University of Tennessee. He is a fellow of the American Association for the Advancement of Science and of the Committee for Skeptical Inquiry.
Pigliucci was formerly a professor of ecology and evolution at Stony Brook University. He explored phenotypic plasticity, genotype–environment interactions, natural selection, and the constraints imposed on natural selection by the genetic and developmental makeup of organisms. In 1997, while working at the University of Tennessee, Pigliucci received the Theodosius Dobzhansky Prize, awarded annually by the Society for the Study of Evolution to recognize the accomplishments and future promise of an outstanding young evolutionary biologist. As a philosopher, Pigliucci is interested in the structure and foundations of evolutionary theory, the relationship between science and philosophy, and the relationship between science and religion. He is a proponent of an extended evolutionary synthesis to unify parts of biology not covered by the "modern synthesis" of the 20th century.
Pigliucci has written regularly for Skeptical Inquirer on topics such as climate change denial, intelligent design, pseudoscience, and philosophy. He has also written for Philosophy Now and maintains a blog called "Rationally Speaking". He has debated "deniers of evolution" (young-earth creationists and intelligent design proponents), including |
https://en.wikipedia.org/wiki/KMYS | KMYS (channel 35) is a television station licensed to Kerrville, Texas, United States, serving the San Antonio area as an affiliate of the digital multicast network Dabl. It is owned by Deerfield Media, which maintains joint sales and shared services agreements (JSA/SSA) with Sinclair Broadcast Group, owner of dual NBC/CW affiliate WOAI-TV (channel 4) and Fox affiliate KABB (channel 29), for the provision of certain services. The stations share studios between Babcock Road and Sovereign Drive (off Loop 410) in northwest San Antonio, while KMYS's transmitter is located in rural southeastern Bandera County (near Lakehills).
Channel 35 began broadcasting in November 1985 as KRRT, the first independent station serving San Antonio and the first new commercial TV station in the San Antonio market in 28 years. It was owned in part, and eventually entirely, by TVX Broadcast Group, a Virginia-based group of independent stations. KRRT served as San Antonio's first affiliate of Fox when the network launched in 1986. TVX was acquired by Paramount Pictures in two stages between 1989 and 1991.
The Paramount Stations Group sold KRRT in 1994 to Jet Broadcasting of Erie, Pennsylvania. Jet then contracted with River City Broadcasting, owner of KABB, to run the station. The Fox affiliation moved to KABB, which was starting a news department; KRRT then became a UPN affiliate, and it also inherited San Antonio Spurs telecasts from KABB. After River City merged into Sinclair in 1996, KABB and other Sinclair-owned UPN stations switched to The WB in a major group deal that took effect in January 1998. KRRT became KMYS, an affiliate of MyNetworkTV, in 2006; it then became the CW affiliate in 2010, replacing KCWX. In September 2021, the programming that had been airing on KMYS became the "CW 35" subchannel of WOAI-TV; KMYS itself began exclusively airing diginets ahead of conversion to ATSC 3.0.
History
Early years
In December 1980, Hubbard Broadcasting petitioned the Federal Communicati |
https://en.wikipedia.org/wiki/Radeon%20R200%20series | The R200 is the second generation of GPUs used in Radeon graphics cards and developed by ATI Technologies. This GPU features 3D acceleration based upon Microsoft Direct3D 8.1 and OpenGL 1.3, a major improvement in features and performance compared to the preceding Radeon R100 design. The GPU also includes 2D GUI acceleration, video acceleration, and multiple display outputs. "R200" refers to the development codename of the initially released GPU of the generation. It is the basis for a variety of other succeeding products.
Architecture
R200's 3D hardware consists of 4 pixel pipelines, each with 2 texture sampling units. It has 2 vertex shader units and a legacy Direct3D 7 TCL unit, marketed as Charisma Engine II. It is ATI's first GPU with programmable pixel and vertex processors, called Pixel Tapestry II and compliant with Direct3D 8.1. R200 has advanced memory bandwidth saving and overdraw reduction hardware called HyperZ II that consists of occlusion culling (hierarchical Z), fast z-buffer clear, and z-buffer compression. The GPU is capable of dual display output (HydraVision) and is equipped with a video decoding engine (Video Immersion II) with adaptive hardware deinterlacing, temporal filtering, motion compensation, and iDCT.
R200 introduced pixel shader version 1.4 (PS1.4), a significant enhancement to prior PS1.x specifications. Notable instructions include "phase", "texcrd", and "texld". The phase instruction allows a shader program to operate on two separate "phases" (2 passes through the hardware), effectively doubling the maximum number of texture addressing and arithmetic instructions, and potentially allowing the number of passes required for an effect to be reduced. This allows not only more complicated effects, but can also provide a speed boost by utilizing the hardware more efficiently. The "texcrd" instruction moves the texture coordinate values of a texture into the destination register, while the "texld" instruction will load the texture at th |
https://en.wikipedia.org/wiki/List%20of%20plasma%20physicists | This is a list of physicists who have worked in or made notable contributions to the field of plasma physics.
See also
Whistler (radio) waves
Langmuir waves |
https://en.wikipedia.org/wiki/Disposal%20of%20human%20corpses | Disposal of human corpses, also called final disposition, is the practice and process of dealing with the remains of a deceased human being. Disposal methods may need to account for the fact that soft tissue will decompose relatively rapidly, while the skeleton will remain intact for thousands of years under certain conditions.
Several methods for disposal are practiced. A funeral is a ceremony that may accompany the final disposition. Regardless, the manner of disposal is often dominated by spirituality with a desire to hold vigil for the dead and may be highly ritualized. In cases of mass death, such as war and natural disaster, or in which the means of disposal are limited, practical concerns may be of greater priority.
Ancient methods of disposing of dead bodies include cremation practiced by the Romans, Greeks, Hindus, and some Mayans; burial practiced by the Chinese, Japanese, Balinese, Jews, Christians, and Muslims, as well as some Mayans; mummification, a type of embalming, practiced by the Ancient Egyptians; and the sky burial and a similar method of disposal called Tower of Silence practiced by Tibetan Buddhists, some Mongolians, and Zoroastrians.
A modern method of quasi-final disposition, though still rare, is cryonics; it is putatively near-final, though its premise is nowhere close to demonstrated.
Commonly practiced legal methods
Some cultures place the dead in tombs of various sorts, either individually, or in specially designated tracts of land that house tombs. Burial in a graveyard is one common form of tomb. In some places, burials are impractical because the groundwater is too high; therefore tombs are placed above ground, as is the case in New Orleans, Louisiana, US. Elsewhere, a separate building for a tomb is usually reserved for the socially prominent and wealthy; grand, above-ground tombs are called mausoleums. The socially prominent sometimes had the privilege of having their corpses stored in church crypts. In more recent times, however, this has |
https://en.wikipedia.org/wiki/Glabrousness | Glabrousness (from the Latin glaber meaning "bald", "hairless", "shaved", "smooth") is the technical term for a lack of hair, down, setae, trichomes or other such covering. A glabrous surface may be a natural characteristic of all or part of a plant or animal, or be due to loss because of a physical condition, such as alopecia universalis in humans, which causes hair to fall out or not regrow.
In botany
The glabrousness or otherwise of leaves, stems, and fruit is a feature commonly mentioned in plant keys; in botany and mycology, a glabrous morphological feature is one that is smooth and may be glossy. It has no bristles or hair-like structures such as trichomes. In anything like the zoological sense, no plants or fungi have hair or wool, although some structures may resemble such materials.
The term "glabrous" strictly applies only to features that lack trichomes at all times. When an organ bears trichomes at first, but loses them with age, the term used is glabrescent.
In the model plant Arabidopsis thaliana, trichome formation is initiated by the GLABROUS1 protein. Knockouts of the corresponding gene lead to glabrous plants. This phenotype has already been used in gene editing experiments and might be of interest as a visual marker for plant research to improve gene editing methods such as CRISPR/Cas9.
In zoology
In varying degrees most mammals have some skin areas without natural hair. On the human body, glabrous skin is found on the ventral portion of the fingers, palms, soles of feet and lips, which are all parts of the body most closely associated with interacting with the world around us, as are the labia minora and glans penis. There are four main types of mechanoreceptors in the glabrous skin of humans: Pacinian corpuscles, Meissner's corpuscles, Merkel's discs, and Ruffini corpuscles.
The naked mole-rat (Heterocephalus glaber) has evolved skin lacking a general pelage (hair covering), yet has retained long, very sparsely scattered tactile hairs over i |
https://en.wikipedia.org/wiki/Video%20game%20bot | In video games, a bot is a type of artificial intelligence (AI)–based expert system software that plays a video game in the place of a human. Bots are used in a variety of video game genres for a variety of tasks: a bot written for a first-person shooter (FPS) works very differently from one written for a massively multiplayer online role-playing game (MMORPG). The former may include analysis of the map and even basic strategy; the latter may be used to automate a repetitive and tedious task like farming.
Bots written for first-person shooters usually try to mimic how a human would play a game. Computer-controlled bots may play against other bots and/or human players in unison, either over the Internet, on a LAN or in a local session. Features and intelligence of bots may vary greatly, especially with community created content. Advanced bots feature machine learning for dynamic learning of patterns of the opponent as well as dynamic learning of previously unknown maps – whereas more trivial bots may rely completely on lists of waypoints created for each map by the developer, limiting the bot to play only maps with said waypoints.
Using bots is generally against the rules of current massively multiplayer online role-playing games (MMORPGs), but a significant number of players still use MMORPG bots for games like RuneScape.
MUD players may run bots to automate laborious tasks, which can sometimes make up the bulk of the gameplay. While a prohibited practice in most MUDs, there is an incentive for the player to save time while the bot accumulates resources, such as experience, for the player character bot.
Types
Bots may be static, dynamic, or both. Static bots are designed to follow pre-made waypoints for each level or map. These bots need a unique waypoint file for each map. For example, Quake III Arena bots use an area awareness system file to move around the map, while Counter-Strike bots use a waypoint file. Dynamic bots learn the levels and maps as they play |
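A toy sketch of the "static" waypoint-following behaviour described above; the waypoint list and movement logic are invented for illustration:

```python
import math

waypoints = [(0, 0), (5, 0), (5, 5)]            # a per-map waypoint file, in order

def step_toward(pos, target, speed=1.0):
    """Move `pos` one step of length `speed` toward `target`, snapping on arrival."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:
        return target
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)

pos, i = (0.0, 0.0), 0
while i < len(waypoints):
    pos = step_toward(pos, waypoints[i])
    if pos == waypoints[i]:
        i += 1                                  # arrived; advance to the next waypoint
print(pos)                                      # the bot ends at the final waypoint (5, 5)
```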
https://en.wikipedia.org/wiki/Irresistible%20force%20paradox | The irresistible force paradox (also unstoppable force paradox or shield and spear paradox) is a classic paradox formulated as "What happens when an unstoppable force meets an immovable object?" The immovable object and the unstoppable force are both implicitly assumed to be indestructible, or else the question would have a trivial resolution. Furthermore, it is assumed that they are two separate entities.
The paradox arises because it rests on two incompatible premises—that there can exist simultaneously such things as unstoppable forces and immovable objects.
Origins
An example of this paradox in eastern thought can be found in the origin of the Chinese word for contradiction (矛盾, máodùn). This term originates from a story in the 3rd century BC philosophical book Han Feizi. In the story, a man trying to sell a spear and a shield claimed that his spear could pierce any shield, and then claimed that his shield was unpierceable. Then, asked about what would happen if he were to take his spear to strike his shield, the seller could not answer. This led to the idiom of "zìxiāng máodùn" (自相矛盾, "from each-other spear shield"), or "self-contradictory".
Another ancient and mythological example illustrating this theme can be found in the story of the Teumessian fox, which can never be caught, and the hound Laelaps, which never misses what it hunts. Realizing the paradox, Zeus, Lord of the Sky, turns both creatures into static constellations.
Applications
The problems associated with this paradox can be applied to any other conflict between two abstractly defined extremes that are opposite.
One of the answers generated by seeming paradoxes like these is that there is no contradiction – that there is not a false dilemma. Christopher Kaczor suggested that the need to change indicates a lack of power rather than the possession thereof, and as such a person who was omniscient would never need to change their mind – not changing the future would be consistent with omniscience rather |
https://en.wikipedia.org/wiki/Complete%20Boolean%20algebra | In mathematics, a complete Boolean algebra is a Boolean algebra in which every subset has a supremum (least upper bound). Complete Boolean algebras are used to construct Boolean-valued models of set theory in the theory of forcing. Every Boolean algebra A has an essentially unique completion, which is a complete Boolean algebra containing A such that every element is the supremum of some subset of A. As a partially ordered set, this completion of A is the Dedekind–MacNeille completion.
More generally, if κ is a cardinal then a Boolean algebra is called κ-complete if every subset of cardinality less than κ has a supremum.
Examples
Complete Boolean algebras
Every finite Boolean algebra is complete.
The algebra of subsets of a given set is a complete Boolean algebra.
The regular open sets of any topological space form a complete Boolean algebra. This example is of particular importance because every forcing poset can be considered as a topological space (a base for the topology consisting of sets that are the set of all elements less than or equal to a given element). The corresponding regular open algebra can be used to form Boolean-valued models which are then equivalent to generic extensions by the given forcing poset.
The algebra of all measurable subsets of a σ-finite measure space, modulo null sets, is a complete Boolean algebra. When the measure space is the unit interval with the σ-algebra of Lebesgue measurable sets, the Boolean algebra is called the random algebra.
The Boolean algebra of all Baire sets modulo meager sets in a topological space with a countable base is complete; when the topological space is the real numbers the algebra is sometimes called the Cantor algebra.
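A small concrete illustration in Python: in the powerset algebra of a set, every family of elements has a supremum (the union) and an infimum (the intersection). The particular set and family are arbitrary examples:

```python
X = {0, 1, 2}
family = [frozenset({0}), frozenset({0, 2}), frozenset({2})]
supremum = frozenset().union(*family)           # least upper bound = union
infimum = frozenset(X).intersection(*family)    # greatest lower bound = intersection
print(sorted(supremum), sorted(infimum))        # [0, 2] []
```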
Non-complete Boolean algebras
The algebra of all subsets of an infinite set that are finite or have finite complement is a Boolean algebra but is not complete.
The algebra of all measurable subsets of a measure space is an ℵ1-complete Boolean algebra, but is not usually complet |
https://en.wikipedia.org/wiki/Crypton%20%28particle%29 | In particle physics, the crypton is a hypothetical superheavy particle, thought to exist in a hidden sector of string theory. It has been proposed as a candidate particle to explain the dark matter content of the universe. Cryptons arising in the hidden sector of a superstring-derived flipped SU(5) GUT model have been shown to be metastable with a lifetime exceeding the age of the universe. Their slow decays may provide a source for the ultra-high-energy cosmic rays (UHECR). |
https://en.wikipedia.org/wiki/Solomon%20Marcus | Solomon Marcus (; 1 March 1925 – 17 March 2016) was a Romanian mathematician, member of the Mathematical Section of the Romanian Academy (full member from 2001) and emeritus professor of the University of Bucharest's Faculty of Mathematics.
His main research was in the fields of mathematical analysis, mathematical and computational linguistics and computer science. He also published numerous papers on various cultural topics: poetics, linguistics, semiotics, philosophy, and history of science and education.
Early life and education
He was born in Bacău, Romania, to Sima and Alter Marcus, a Jewish family of tailors. From an early age he had to live through dictatorships, war, infringements on free speech and free thinking as well as anti-Semitism. At the age of 16 or 17 he started tutoring younger pupils in order to help his family financially.
He graduated from Ferdinand I High School in 1944, and completed his studies at the University of Bucharest's Faculty of Science, Department of Mathematics, in 1949. He continued tutoring throughout college and later recounted in an interview that he had to endure hunger during those years and that till the age of 20 he only wore hand-me-downs from his older brothers.
Academic career
Marcus obtained his PhD in Mathematics in 1956, with a thesis on the Monotonic functions of two variables, written under the direction of Miron Nicolescu. He was appointed Lecturer in 1955, Associate Professor in 1964, and became a Professor in 1966 (Emeritus in 1991).
Marcus has contributed to the following areas:
Mathematical Analysis, Set Theory, Measure and Integration Theory, and Topology
Theoretical Computer Science
Linguistics
Poetics and Theory of Literature
Semiotics
Cultural Anthropology
History and Philosophy of Science
Education.
Publications by and on Marcus
Marcus published about 50 books, which have been translated into English, French, German, Italian, Spanish, Russian, Greek, Hungarian, Czech, Serbo-Croatian, and about 400 r |
https://en.wikipedia.org/wiki/Biodistribution | Biodistribution is a method of tracking where compounds of interest travel in an experimental animal or human subject. For example, in the development of new compounds for PET (positron emission tomography) scanning, a radioactive isotope is chemically joined with a peptide (subunit of a protein). This particular class of isotopes emits positrons (which are antimatter particles, equal in mass to the electron, but with a positive charge). When ejected from the nucleus, a positron encounters an electron, and the pair undergoes annihilation, which produces two gamma rays travelling in opposite directions. These gamma rays can be measured, and when compared to a standard, quantified.
Biodistribution analysis
Purpose and results
A useful novel radiolabelled compound is one that is suitable either for medical imaging of certain body parts such as brain or tumors (injecting low doses of radioactivity) or for treating tumors (requiring injection of high doses of radioactivity). In both cases, the compound needs to accumulate in the target organ and any surplus compound present needs to clear the body rapidly. In medical diagnostic imaging, this then produces a clear diagnostic image (high image contrast), and in radiotherapy leads to an attack of the target (e.g. tumor) while minimizing side effects to non-target organs. Additional factors need to be evaluated in the development of a new diagnostic or therapeutic compound, including safety for humans. From an efficacy point of view, the biodistribution is an important aspect which can be measured by dissection or by imaging.
By dissection
For example, a new radiolabelled compound is injected intravenously into a group of 16-20 rodents (typically mice or rats). At intervals of 1, 2, 4, and 24 hours, smaller groups (4-5) of the animals are euthanized, then dissected. The organs of interest (usually: blood, liver, spleen, kidney, muscle, fat, adrenals, pancreas, brain, bone, stomach, small intestine, and upper and lower large intes |
https://en.wikipedia.org/wiki/Conservative%20extension | In mathematical logic, a conservative extension is a supertheory of a theory which is often convenient for proving theorems, but proves no new theorems about the language of the original theory. Similarly, a non-conservative extension is a supertheory which is not conservative, and can prove more theorems than the original.
More formally stated, a theory T₂ is a (proof theoretic) conservative extension of a theory T₁ if every theorem of T₁ is a theorem of T₂, and any theorem of T₂ in the language of T₁ is already a theorem of T₁.
More generally, if Γ is a set of formulas in the common language of T₁ and T₂, then T₂ is Γ-conservative over T₁ if every formula from Γ provable in T₂ is also provable in T₁.
Note that a conservative extension of a consistent theory is consistent. If it were not, then by the principle of explosion, every formula in the language of T₂ would be a theorem of T₂, so every formula in the language of T₁ would be a theorem of T₁, so T₁ would not be consistent. Hence, conservative extensions do not bear the risk of introducing new inconsistencies. This can also be seen as a methodology for writing and structuring large theories: start with a theory, T₀, that is known (or assumed) to be consistent, and successively build conservative extensions T₁, T₂, ... of it.
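As a toy illustration, the sketch below brute-forces truth tables to check the semantic analogue of this definition in propositional logic (the article's notion is proof-theoretic; the theories T1 and T2 here are invented):

```python
from itertools import product

def models(axioms, atoms):
    """All truth assignments over `atoms` satisfying every axiom (a Python predicate)."""
    out = []
    for vals in product([False, True], repeat=len(atoms)):
        m = dict(zip(atoms, vals))
        if all(ax(m) for ax in axioms):
            out.append(m)
    return out

def entails(axioms, atoms, formula):
    return all(formula(m) for m in models(axioms, atoms))

# T1 over {p} with axiom p; T2 over {p, q} adds the axiom q -> p.
T1, atoms1 = [lambda m: m["p"]], ["p"]
T2, atoms2 = [lambda m: m["p"], lambda m: (not m["q"]) or m["p"]], ["p", "q"]

# Up to equivalence, a formula of the smaller language is one of 4 truth tables.
for table in [(False, False), (False, True), (True, False), (True, True)]:
    phi = lambda m, t=table: t[m["p"]]
    print(entails(T2, atoms2, phi) == entails(T1, atoms1, phi))  # True each time
```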
Recently, conservative extensions have been used for defining a notion of module for ontologies: if an ontology is formalized as a logical theory, a subtheory is a module if the whole ontology is a conservative extension of the subtheory.
An extension which is not conservative may be called a proper extension.
Examples
ACA₀, a subsystem of second-order arithmetic studied in reverse mathematics, is a conservative extension of first-order Peano arithmetic.
The subsystems of second-order arithmetic RCA₀* and WKL₀* are Π⁰₂-conservative over EFA (elementary function arithmetic).
The subsystem WKL₀ is a Π¹₁-conservative extension of RCA₀, and Π⁰₂-conservative over PRA (primitive recursive arithmetic).
Von Neumann–Bernays–Gödel set theory (NBG) is a conservative extension of Zermelo–Fraenkel set theo |
https://en.wikipedia.org/wiki/Nude%20mouse | A nude mouse is a laboratory mouse from a strain with a genetic mutation that causes a deteriorated or absent thymus, resulting in an inhibited immune system due to a greatly reduced number of T cells. The phenotype (main outward appearance) of the mouse is a lack of body hair, which gives it the "nude" nickname. The nude mouse is valuable to research because it can receive many different types of tissue and tumor grafts, as it mounts no rejection response. These xenografts are commonly used in research to test new methods of imaging and treating tumors. The genetic basis of the nude mouse mutation is a disruption of the FOXN1 gene.
Nomenclature
The nomenclature for the nude mouse has changed several times since their discovery. Originally they were described as nu and this was updated to Hfh11nu when the mutated gene was identified as a mutation in the HNF-3/forkhead homolog 11 gene. Then in 2000, the gene responsible for the mutation was identified as a member of the Fox gene family and the nomenclature was updated to Foxn1nu.
History and significance
Nude mice were first discovered in 1962 by Dr. Norman R. Grist at Ruchill Hospital's Brownlee virology laboratory in Glasgow. Because they lack a thymus, nude mice cannot generate mature T lymphocytes. Therefore they are unable to mount many types of adaptive immune responses, including:
antibody formation that requires CD4+ helper T cells
cell-mediated immune responses, which require CD4+ and/or CD8+ T cells
delayed-type hypersensitivity responses (require CD4+ T cells)
killing of virus-infected or malignant cells (requires CD8+ cytotoxic T cells)
graft rejection (requires both CD4+ and CD8+ T cells)
Because of the above features, nude mice have served in the laboratory to gain insights into the immune system, leukemia, solid tumors, AIDS and other forms of immune deficiency as well as leprosy.
Moreover, the absence of functioning T cells prevents nude mice from rejecting not only allografts (grafts of tis |
https://en.wikipedia.org/wiki/United%20Devices | United Devices, Inc. was a privately held, commercial volunteer computing company that focused on the use of grid computing to manage high-performance computing systems and enterprise cluster management. Its products and services allowed users to "allocate workloads to computers and devices throughout enterprises, aggregating computing power that would normally go unused." It operated under the name Univa UD for a time, after merging with Univa on September 17, 2007.
History
Founded in 1999 in Austin, Texas, United Devices began with volunteer computing expertise from distributed.net and SETI@home, although only a few of the original technical staff from those organizations remained through the years.
In April 2001, grid.org was formally announced as a philanthropic non-profit website to demonstrate the benefits of Internet-based large scale grid computing.
Later in 2002 with help from UD, NTT Data launched a similar Internet-based Cell Computing project targeting Japanese users. In 2004, IBM and United Devices worked together to start the World Community Grid project as another demonstration of Internet-based grid computing.
In August 2005, United Devices acquired the Paris-based GridXpert company and added Synergy to its product lineup.
In 2006, the company acknowledged seeing an industry shift from only using grid computing for compute-intensive applications towards data center automation and business application optimization.
Partly in response to the market shifts and reorganization, grid.org was shut down on April 27, 2007, after completing its mission to "demonstrate the viability and benefits of large-scale Internet-based grid computing".
On September 17, 2007, the company announced that it would merge with the Lisle, Illinois-based Univa and operate under the new name Univa UD. The combined company would offer open source solutions based around Globus Toolkit, while continuing to sell its existing grid products and support its existing customers. |
https://en.wikipedia.org/wiki/Bully%20algorithm | In distributed computing, the bully algorithm is a method for dynamically electing a coordinator or leader from a group of distributed computer processes. The process with the highest process ID number from amongst the non-failed processes is selected as the coordinator.
Assumptions
The algorithm assumes that:
the system is synchronous.
processes may fail at any time, including during execution of the algorithm.
a process fails by stopping and returns from failure by restarting.
there is a failure detector which detects failed processes.
message delivery between processes is reliable.
each process knows its own process id and address, and that of every other process.
Algorithm
The algorithm uses the following message types:
Election Message: Sent to announce election.
Answer (Alive) Message: Responds to the Election message.
Coordinator (Victory) Message: Sent by winner of the election to announce victory.
When a process P recovers from failure, or the failure detector indicates that the current coordinator has failed, P performs the following actions:
If P has the highest process ID, it sends a Victory message to all other processes and becomes the new Coordinator. Otherwise, P broadcasts an Election message to all other processes with higher process IDs than itself.
If P receives no Answer after sending an Election message, then it broadcasts a Victory message to all other processes and becomes the Coordinator.
If P receives an Answer from a process with a higher ID, it sends no further messages for this election and waits for a Victory message. (If there is no Victory message after a period of time, it restarts the process at the beginning.)
If P receives an Election message from another process with a lower ID, it sends an Answer message back and, if it has not already started an election, it starts the election process at the beginning by sending an Election message to higher-numbered processes.
If P receives a Coordinator message, it treats the sen |
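To make the message flow concrete, here is a minimal sketch in Python. It simulates the election in a single process space, with the `alive` liveness map standing in for the failure detector and a recursive call standing in for the higher processes running their own elections; the IDs and names are illustrative assumptions, not part of the algorithm's specification.

```python
# A minimal, single-threaded sketch of a bully election, assuming a
# synchronous system whose failure detector is modelled by the `alive` map.

def bully_election(process_ids, alive, initiator):
    """Return the ID of the elected coordinator, as seen by `initiator`."""
    higher = [p for p in process_ids if p > initiator and alive[p]]
    if not higher:
        # No live process outranks the initiator: it announces Victory.
        return initiator
    # Each live higher-ID process Answers and starts its own election;
    # the net effect is that the highest live ID eventually wins.
    return bully_election(process_ids, alive, max(higher))

ids = [1, 2, 3, 4, 5]
alive = {1: True, 2: True, 3: True, 4: True, 5: False}  # 5 has failed
print(bully_election(ids, alive, initiator=2))  # -> 4
```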
https://en.wikipedia.org/wiki/Social%20grooming | Social grooming is a behavior in which social animals, including humans, clean or maintain one another's body or appearance. A related term, allogrooming, indicates social grooming between members of the same species. Grooming is a major social activity, and a means by which animals who live in close proximity may bond and reinforce social structures, family links, and build companionships. Social grooming is also used as a means of conflict resolution, maternal behavior and reconciliation in some species. Mutual grooming typically describes the act of grooming between two individuals, often as a part of social grooming, pair bonding, or a precoital activity.
Evolutionary advantages
There are a variety of proposed mechanisms by which social grooming behavior has been hypothesized to increase fitness. These evolutionary advantages may come in the form of health benefits including reduced disease transmission and reduced stress levels, maintaining social structure, and direct improvement of fitness as a measure of survival.
Health benefits
It is often debated whether the overarching importance of social grooming is to boost an organism's health and hygiene, or whether the social side of social grooming plays an equally or more important role. Traditionally, it is thought that the primary function of social grooming is the upkeep of an animal's hygiene. Evidence supporting this view includes the fact that grooming concentrates on body parts that are inaccessible to autogrooming, and that the amount of time spent allogrooming a region does not vary significantly even when that body part has a more important social or communicatory function.
Social grooming behaviour has been shown to elicit an array of health benefits in a variety of species. For example, group member connection has the potential to mitigate the potentially harmful effects of stressors. In macaques, social grooming has been proven to reduce heart rate. Social affiliation during a mild stre |
https://en.wikipedia.org/wiki/Otto%20engine | The Otto engine was a large stationary single-cylinder internal combustion four-stroke engine designed by the German Nicolaus Otto. It was a low-RPM machine, and only fired every other stroke due to the Otto cycle, also designed by Otto.
Types
Three types of internal combustion engines were designed by German inventors Nicolaus Otto and his partner Eugen Langen. The models were a failed 1862 compression engine, an 1864 atmospheric engine, and the 1876 Otto cycle engine known today as the petrol engine. The engines were initially used for stationary installations, as Otto had no interest in transportation. Other makers such as Daimler perfected the Otto engine for transportation use.
Timeline
Nicolaus August Otto as a young man was a traveling salesman for a grocery concern. In his travels he encountered the internal combustion engine built in Paris by Belgian expatriate Jean Joseph Etienne Lenoir. In 1860 Lenoir succeeded in creating a double-acting engine which ran on illuminating gas at 4% efficiency. The 18 liter Lenoir engine was able to produce only 2 horsepower.
In testing a replica of the Lenoir engine in 1861 Otto became aware of the effects of compression on the fuel charge. In 1862 Otto attempted to produce an engine to improve on the poor efficiency and reliability of the Lenoir engine. He tried to create an engine which would compress the fuel mixture prior to ignition, but failed, as that engine would run no more than a few minutes prior to its destruction. Many engineers were also trying to solve the problem with no success.
In 1864 Otto and Eugen Langen founded the first internal combustion engine production company NA Otto and Cie (NA Otto and Company). Otto and Cie succeeded in creating a successful atmospheric engine that same year.
The factory ran out of space and was moved to the town of Deutz, Germany in 1869 where the company was renamed to Gasmotoren-Fabrik Deutz (The Gas Engine Manufacturing Company Deutz).
Gottlieb Daimler was techni |
https://en.wikipedia.org/wiki/Decay%20correction | Decay correction is a method of estimating what the radioactivity of a sample was at some set time before it was actually measured.
Example of use
Researchers often want to measure, say, a medical compound in the bodies of animals. Because the compound is hard to measure directly, it can be chemically joined to a radionuclide; by measuring the radioactivity, researchers can get a good idea of how the original compound is being processed.
Samples may be collected and counted at short time intervals (e.g., 1 and 4 hours), but then all tested for radioactivity at once. Decay correction is one way of working out what the radioactivity of each sample would have been at the time it was taken, rather than at the time it was tested.
For example, the isotope copper-64, commonly used in medical research, has a half-life of 12.7 hours. If you inject a large group of animals at "time zero", but measure the radioactivity in their organs at two later times, the later groups must be "decay corrected" to adjust for the decay that has occurred between the two time points.
Mathematics
The formula for decay correcting is:
\(A_0 = A_t e^{\lambda t}\)
where \(A_0\) is the original activity count at time zero, \(A_t\) is the activity at time \(t\), \(\lambda\) is the decay constant, and \(t\) is the elapsed time.
The decay constant is \(\lambda = \ln(2)/t_{1/2}\), where \(t_{1/2}\) is the half-life of the radioactive material of interest.
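As a quick sketch in code, the correction is a one-line application of the formula above; the copper-64 half-life comes from the example below, while the sample numbers and variable names are illustrative.

```python
import math

# Decay correction: back-calculate activity to collection time, assuming
# A0 = At * exp(lambda * t) with lambda = ln(2) / half_life, as above.

def decay_correct(measured_activity, elapsed_hours, half_life_hours):
    """Return the activity the sample had `elapsed_hours` before counting."""
    decay_constant = math.log(2) / half_life_hours
    return measured_activity * math.exp(decay_constant * elapsed_hours)

# A copper-64 sample counted 6 hours after it was collected:
print(decay_correct(measured_activity=1000.0, elapsed_hours=6.0,
                    half_life_hours=12.7))  # ~1387 counts at collection time
```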
Example
Decay correction might be used this way: a group of 20 animals is injected with a compound of interest on a Monday at 10:00 a.m. The compound is chemically joined to the isotope copper-64, which has a known half-life of 12.7 hours, or 762 minutes. After one hour, the 5 animals in the "one hour" group are killed, dissected, and organs of interest are placed in sealed containers to await measurement. This is repeated for another 5 animals, at 2 hours, and again at 4 hours. At this point (say, 4:00 p.m., Monday), all the organs collected so far are measured for radioactivity (a proxy of the distribution of the compound of interest). The next day |
https://en.wikipedia.org/wiki/Right%20circular%20cylinder | A right circular cylinder is a cylinder whose generatrices are perpendicular to the bases. Thus, in a right circular cylinder, the generatrix and the height have the same measurements. It is also, less often, called a cylinder of revolution, because it can be obtained by rotating a rectangle of sides \(h\) and \(r\) around one of its sides. Fixing \(h\) as the side on which the revolution takes place, we obtain that the side \(r\), perpendicular to \(h\), will be the measure of the radius of the cylinder.
In addition to the right circular cylinder, within the study of spatial geometry there is also the oblique circular cylinder, characterized by not having the generatrices perpendicular to the bases.
Examples of objects that are shaped like a right circular cylinder are: some cans and candles.
Elements of the right circular cylinder
Bases: the two parallel and congruent circles of the bases;
Axis: the line determined by the two points of the centers of the cylinder's bases;
Height: the distance between the two planes of the cylinder's bases;
Generatrices: the line segments parallel to the axis that have their ends at the points of the bases' circles.
Lateral and total areas
The lateral surface of a right cylinder is the union of the generatrices. Its area can be obtained as the product of the length of the circumference of the base and the height of the cylinder. Therefore, the lateral surface area is given by:
\(A_l = 2 \pi r h\).
Where:
\(A_l\) represents the lateral surface area of the cylinder;
\(\pi\) is approximately 3.14;
\(r\) is the distance between the lateral surface of the cylinder and the axis, i.e. it is the value of the radius of the base;
\(h\) is the height of the cylinder;
\(2 \pi r\) is the length of the circumference of the base, since \(C = 2 \pi r\), that is, \(A_l = C h\).
Note that in the case of the right circular cylinder, the height and the generatrix have the same measure, so the lateral area can also be given by:
\(A_l = 2 \pi r g\).
The area of the base of a cylinder is the area of a circle (in this case we define that the circle has a radius with mea |
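A minimal numeric sketch of these formulas follows; the total-area formula \(A_T = A_l + 2\pi r^2\) assumes the standard base area \(\pi r^2\) that the truncated passage above begins to state, and the dimensions are illustrative.

```python
import math

def lateral_area(r, h):
    return 2 * math.pi * r * h           # A_l = 2*pi*r*h, as above

def total_area(r, h):
    base = math.pi * r ** 2              # area of each congruent base
    return lateral_area(r, h) + 2 * base

print(round(lateral_area(r=2.0, h=5.0), 2))  # 62.83
print(round(total_area(r=2.0, h=5.0), 2))    # 87.96
```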
https://en.wikipedia.org/wiki/Gamma%20counter | A gamma counter is an instrument to measure gamma radiation emitted by a radionuclide. Unlike survey meters, gamma counters are designed to measure small samples of radioactive material, typically with automated measurement and movement of multiple samples.
Operation
Gamma counters are usually scintillation counters. In a typical system, a number of samples are placed in sealed vials or test tubes, and moved along a track. One at a time, they move down inside a shielded detector, set to measure specific energy windows characteristic of the particular isotope. Within this shielded detector there is a scintillation crystal that surrounds the radioactive sample. Gamma rays emitted from the radioactive sample interact with the crystal, are absorbed, and light is emitted. A detector, such as a photomultiplier tube, converts the visible light to an electrical signal. Depending on the half-life and concentration of the sample, measurement times may vary from 0.02 minutes to several hours.
If the photon's energy is too low, it will be absorbed within the scintillation crystal and never be detected. If the photon's energy is too high, it may pass right through the crystal without any interaction. Thus the thickness of the crystal is very important when sampling radioactive materials using a gamma counter.
Applications
Gamma counters are standard tools used in the research and development of new radioactive compounds used for diagnosing (and treating) disease, as in PET scanning. Gamma counters are used in radiobinding assays, radioimmunoassays (RIA) and nuclear medicine measurements such as GFR and hematocrit.
Some gamma counters can be used for gamma spectroscopy to identify radioactive materials based on their output energy spectrum, e.g. as a wipe test counter. |
https://en.wikipedia.org/wiki/Counting%20efficiency | In the measurement of ionising radiation the counting efficiency is the ratio between the number of particles or photons counted with a radiation counter and the number of particles or photons of the same type and energy emitted by the radiation source.
Factors
Several factors affect the counting efficiency:
The distance from the source of radiation
The absorption or scattering of particles by the medium (such as air) between the source and the surface of the detector
The detector efficiency in counting all radiation photons and particles that reach the surface of the detector
The accompanying diagram shows this graphically.
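As a rough numeric sketch, the factors above can be treated as independent multiplicative terms; that independence, the on-axis point-source geometry, and all numbers below are illustrative assumptions, not statements from the source.

```python
import math

def geometric_fraction(d, a):
    """Fraction of emitted particles that head toward a circular detector
    face of radius `a`, seen from an on-axis point source at distance `d`
    (solid angle of the disc divided by the full 4*pi steradians)."""
    return 0.5 * (1.0 - d / math.sqrt(d * d + a * a))

def counting_efficiency(d, a, transmission, intrinsic):
    """`transmission`: fraction surviving the intervening medium;
    `intrinsic`: fraction of arriving particles the detector counts."""
    return geometric_fraction(d, a) * transmission * intrinsic

# Source 10 cm from a 5 cm diameter detector, in air, 60% intrinsic efficiency:
print(counting_efficiency(d=0.10, a=0.025, transmission=0.95, intrinsic=0.6))
```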
Scintillation counters
Radiation protection instruments
Large area scintillation counters used for surface radioactive contamination measurements use plate or planar radioactive sources as calibration standards. The Surface Emission Rate (SER), not the source activity, is used as a measure of the rate of particles emitted from the source of radiation. The SER is the true emission rate from the surface, which is usually different from the activity. This difference is due to self-shielding within the active layer of the source, which will reduce the SER, or backscatter, which will reflect particles off the backing plate of the active layer and will increase the SER. Beta particle plate sources usually have significant backscatter, whereas alpha plate sources usually have no backscatter, but are easily self-attenuated if the active layer is made too thick.
Liquid scintillation counters
Counting efficiency varies for different isotopes, sample compositions and scintillation counters. Poor counting efficiency can be caused by an extremely low energy-to-light conversion rate (the scintillation efficiency), which, even optimally, will be a small value. It has been calculated that only some 4% of the energy from a β emission event is converted to light by even the most efficient scintillation cocktails.
Gaseous counters
Proportional counters and |
https://en.wikipedia.org/wiki/ZipZaps | ZipZaps are miniature radio-controlled cars that were sold by RadioShack, later marketed under the brand name XMODS Micro RC. They were commonly compared to Tomy's Bit Char-G (sold in the U.S. as MicroSizers) and Takara's Digi-Q micro R/C lines.
Overview
First introduced in September 2002, ZipZaps were an instant success and one of the most popular gifts of the holiday season in North America, with RadioShack stores often selling out of the starter kits. They were referred to as "micro R/C" due to their diminutive size. At around 1:64 scale, ZipZaps were only slightly larger than popular die-cast toy cars such as Hot Wheels and Matchbox.
ZipZaps were unique among toy-style micro R/C vehicles in that they could be customized much like large R/C models. Each partially assembled ZipZaps kit included a pre-assembled chassis with built-in nickel metal hydride rechargeable batteries on which the gears, tires and hubcaps could be changed for better appearance and performance. Six motors and three gear ratios were offered, ranging from the 10,000 rpm "precision control motor" to the high-speed 34,000 rpm "NX" (named after a popular brand of nitrous oxide). Gear ratios were 12:1 "stock", 9.86:1 "performance" and 8.25:1 "turbo". Tracking was adjusted via a wheel alignment lever underneath the vehicles themselves. The vehicle's on-board battery was charged by snapping the entire car onto the top of the transmitter. Full charge was accomplished in about 40 to 60 seconds with run times averaging five minutes. Neither steering nor throttle was proportional, with both functions either full on or full off. The SE series added semi-proportional, two-speed throttle and fully proportional rack-and-pinion steering with steering trim adjustable at the transmitter, as with a full-sized R/C.
Further customizing options came from the variety of body shells which snapped on over the top of the chassis, allowing the car to take on the appearance of any number of popular cars including the |
https://en.wikipedia.org/wiki/Cartan%27s%20equivalence%20method | In mathematics, Cartan's equivalence method is a technique in differential geometry for determining whether two geometrical structures are the same up to a diffeomorphism. For example, if M and N are two Riemannian manifolds with metrics g and h, respectively,
when is there a diffeomorphism \(\phi : M \to N\) such that \(\phi^{*} h = g\)?
Although the answer to this particular question was known in dimension 2 to Gauss and in higher dimensions to Christoffel and perhaps Riemann as well, Élie Cartan and his intellectual heirs developed a technique for answering similar questions for radically different geometric structures. (For example see the Cartan–Karlhede algorithm.)
Cartan successfully applied his equivalence method to many such structures, including projective structures, CR structures, and complex structures, as well as ostensibly non-geometrical structures such as the equivalence of Lagrangians and ordinary differential equations. (His techniques were later developed more fully by many others, such as D. C. Spencer and Shiing-Shen Chern.)
The equivalence method is an essentially algorithmic procedure for determining when two geometric structures are identical. For Cartan, the primary geometrical information was expressed in a coframe or collection of coframes on a differentiable manifold. See method of moving frames.
Overview
Specifically, suppose that M and N are a pair of manifolds each carrying a G-structure for a structure group G. This amounts to giving a special class of coframes on M and N. Cartan's method addresses the question of whether there exists a local diffeomorphism φ:M→N under which the G-structure on N pulls back to the given G-structure on M. An equivalence problem has been "solved" if one can give a complete set of structural invariants for the G-structure: meaning that such a diffeomorphism exists if and only if all of the structural invariants agree in a suitably defined sense.
Explicitly, local systems of one-forms θi and γi are given on M and N, respective |
https://en.wikipedia.org/wiki/Curare | Curare ( or ; or ) is a common name for various alkaloid arrow poisons originating from plant extracts. Used as a paralyzing agent by indigenous peoples in Central and South America for hunting and for therapeutic purposes, curare only becomes active when it contaminates a wound. These poisons cause weakness of the skeletal muscles and, when administered in a sufficient dose, eventual death by asphyxiation due to paralysis of the diaphragm. Curare is prepared by boiling the bark of one of the dozens of plant sources, leaving a dark, heavy paste that can be applied to arrow or dart heads. In medicine, curare has been used as a treatment for tetanus and strychnine poisoning and as a paralyzing agent for surgical procedures.
History
The word 'curare' is derived from wurari, from the Carib language of the Macusi of Guyana. It has its origins in the Carib phrase "mawa cure" meaning of the Mawa vine, scientifically known as Strychnos toxifera. Curare is also known among indigenous peoples as Ampi, Woorari, Woorara, Woorali, Wourali, Wouralia, Ourare, Ourari, Urare, Urari, and Uirary. The noun 'curare' is not to be confused with the Latin verb 'curare' ('to heal, cure, take care of').
Classification
In 1895 pharmacologist Rudolf Boehm sought to classify the various alkaloid poisons based on the containers used for their preparation. He believed curare could be categorized into three main types, as seen below. However useful it appeared, the scheme rapidly became outmoded: Richard Gill, a plant collector, found that the indigenous peoples had begun to use a variety of containers for their curare preparations, thereby invalidating Boehm's basis of classification.
Tube or bamboo curare: Mainly composed of the toxin D-tubocurarine, this poison is found packed into hollow bamboo tubes derived from Chondrodendron and other genera in the Menispermaceae. According to their LD50 values, tube curare is thought to be the most toxic.
Pot curare: Mainly composed of alkaloid components p |
https://en.wikipedia.org/wiki/Hans%20Kramers | Hendrik Anthony "Hans" Kramers (17 December 1894 – 24 April 1952) was a Dutch physicist who worked with Niels Bohr to understand how electromagnetic waves interact with matter and made important contributions to quantum mechanics and statistical physics.
Background and education
Hans Kramers was born on 17 December 1894 in Rotterdam, the son of Hendrik Kramers, a physician, and Jeanne Susanne Breukelman.
In 1912 Hans finished secondary education (HBS) in Rotterdam, and studied mathematics and physics at the University of Leiden, where he obtained a master's degree in 1916. Kramers wanted to obtain foreign experience during his doctoral research, but his first choice of supervisor, Max Born in Göttingen, could not be reached because of the First World War. Because Denmark was neutral in this war, as was the Netherlands, he travelled (by ship, as overland travel was impossible) to Copenhagen, where he visited unannounced the then still relatively unknown Niels Bohr. Bohr took him on as a Ph.D. candidate and Kramers prepared his dissertation under Bohr's direction. Although Kramers did most of his doctoral research (on intensities of atomic transitions) in Copenhagen, he obtained his formal Ph.D. under Ehrenfest in Leiden, on 8 May 1919.
Kramers enjoyed music, and played cello and piano.
Academic career
He worked for almost ten years in Bohr's group, becoming an associate professor at the University of Copenhagen. He played a role in the ill-fated BKS theory of 1924-5. Kramers left Denmark in 1926 and returned to the Netherlands. He became a full professor in theoretical physics at Utrecht University, where he supervised Tjalling Koopmans.
In 1925, with Werner Heisenberg he developed the Kramers–Heisenberg dispersion formula. He is also credited with introducing in 1948 the concept of renormalization into quantum field theory, although his approach was nonrelativistic. He is also credited for the Kramers–Kronig relations with Ralph Kronig which are mathematical equations rela |
https://en.wikipedia.org/wiki/Prosthaphaeresis | Prosthaphaeresis (from the Greek προσθαφαίρεσις) was an algorithm used in the late 16th century and early 17th century for approximate multiplication and division using formulas from trigonometry. For the 25 years preceding the invention of the logarithm in 1614, it was the only known generally applicable way of approximating products quickly. Its name comes from the Greek prosthesis (πρόσθεσις) and aphaeresis (ἀφαίρεσις), meaning addition and subtraction, two steps in the process.
History and motivation
In 16th-century Europe, celestial navigation of ships on long voyages relied heavily on ephemerides to determine their position and course. These voluminous charts prepared by astronomers detailed the position of stars and planets at various points in time. The models used to compute these were based on spherical trigonometry, which relates the angles and arc lengths of spherical triangles (see diagram, right) using formulas such as the spherical law of cosines,
\(\cos a = \cos b \cos c + \sin b \sin c \cos A,\)
where a, b and c are the angles subtended at the centre of the sphere by the corresponding arcs, and A is the angle at the vertex opposite the side a.
When one quantity in such a formula is unknown but the others are known, the unknown quantity can be computed using a series of multiplications, divisions, and trigonometric table lookups. Astronomers had to make thousands of such calculations, and because the best method of multiplication available was long multiplication, most of this time was spent taxingly multiplying out products.
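A minimal sketch of the trick in Python: the product-to-sum identity \(\cos x \cos y = \tfrac{1}{2}[\cos(x - y) + \cos(x + y)]\) replaces one multiplication with table lookups, an addition, a subtraction, and a halving. Here `math.acos` and `math.cos` stand in for the printed trigonometric tables a 16th-century computer would have used.

```python
import math

def prosthaphaeresis_product(u, v):
    """Approximate u * v for u, v in [0, 1] without multiplying."""
    x = math.acos(u)  # table lookup: the angle whose cosine is u
    y = math.acos(v)  # table lookup: the angle whose cosine is v
    # Two more lookups, one addition, one subtraction, one halving:
    return 0.5 * (math.cos(x - y) + math.cos(x + y))

print(prosthaphaeresis_product(0.1234, 0.5678))  # 0.07006652..., the product
```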
Mathematicians, particularly those who were also astronomers, were looking for an easier way, and trigonometry was one of the most advanced and familiar fields to these people. Prosthaphaeresis appeared in the 1580s, but its originator is not known for certain; its contributors included the mathematicians Ibn Yunis, Johannes Werner, Paul Wittich, Joost Bürgi, Christopher Clavius, and François Viète. Wittich, Yunis, and Clavius were all astronomers and have all been credited by various sources with discovering the method. Its most well-known propo |
https://en.wikipedia.org/wiki/Gaussian%20blur | In image processing, a Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function (named after mathematician and scientist Carl Friedrich Gauss).
It is a widely used effect in graphics software, typically to reduce image noise and reduce detail. The visual effect of this blurring technique is a smooth blur resembling that of viewing the image through a translucent screen, distinctly different from the bokeh effect produced by an out-of-focus lens or the shadow of an object under usual illumination.
Gaussian smoothing is also used as a pre-processing stage in computer vision algorithms in order to enhance image structures at different scales—see scale space representation and scale space implementation.
Mathematics
Mathematically, applying a Gaussian blur to an image is the same as convolving the image with a Gaussian function. This is also known as a two-dimensional Weierstrass transform. By contrast, convolving by a circle (i.e., a circular box blur) would more accurately reproduce the bokeh effect.
Since the Fourier transform of a Gaussian is another Gaussian, applying a Gaussian blur has the effect of reducing the image's high-frequency components; a Gaussian blur is thus a low-pass filter.
The Gaussian blur is a type of image-blurring filter that uses a Gaussian function (which also expresses the normal distribution in statistics) for calculating the transformation to apply to each pixel in the image. The formula of a Gaussian function in one dimension is
\(G(x) = \frac{1}{\sqrt{2\pi\sigma^{2}}} e^{-\frac{x^{2}}{2\sigma^{2}}}.\)
In two dimensions, it is the product of two such Gaussian functions, one in each dimension:
\(G(x, y) = \frac{1}{2\pi\sigma^{2}} e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}},\)
where x is the distance from the origin in the horizontal axis, y is the distance from the origin in the vertical axis, and σ is the standard deviation of the Gaussian distribution. It is important to note that the origin of these axes is at the center (0, 0). When applied in two dimensions, this formula produces a surface whose contours are concentric circles w |
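Because the 2D kernel is a product of two 1D Gaussians, the blur is separable: one pass of the 1D kernel along each axis gives the same result as a full 2D convolution. A minimal sketch, assuming a common truncation of the kernel at about three standard deviations:

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    radius = int(3 * sigma)                   # truncate at ~3 sigma
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    return kernel / kernel.sum()              # normalize: brightness preserved

def gaussian_blur(image, sigma):
    kernel = gaussian_kernel_1d(sigma)
    rows = np.apply_along_axis(np.convolve, 1, image, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode="same")

image = np.random.rand(64, 64)
print(gaussian_blur(image, sigma=2.0).shape)   # (64, 64)
```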
https://en.wikipedia.org/wiki/Nutritional%20genomics | Nutritional genomics, also known as nutrigenomics, is a science studying the relationship between the human genome, human nutrition and health. People in the field work toward developing an understanding of how the whole body responds to a food via systems biology, as well as single gene/single food compound relationships. Nutritional genomics is the relation between food and inherited genes; the term was first used in 2001.
Introduction
The term "nutritional genomics" is an umbrella term including several subcategories, such as nutrigenetics, nutrigenomics, and nutritional epigenetics. Each of these subcategories explain some aspect of how genes react to nutrients and express specific phenotypes, like disease risk. There are several applications for nutritional genomics, for example how much nutritional intervention and therapy can successfully be used for disease prevention and treatment.
Background and preventive health
Nutritional science originally emerged as a field that studied individuals lacking certain nutrients and the subsequent effects, such as the disease scurvy, which results from a lack of vitamin C. As other diseases closely related to diet (but not deficiency), such as obesity, became more prevalent, nutritional science expanded to cover these topics as well. Nutritional research typically focuses on preventive measures, trying to identify what nutrients or foods will raise or lower risks of diseases and damage to the human body.
For example, Prader–Willi syndrome, a disease whose most distinguishing factor is insatiable appetite, has been specifically linked to an epigenetic pattern in which the paternal copy of the chromosomal region is erroneously deleted and the maternal locus is inactivated by overmethylation. Yet, although certain disorders may be linked to certain single-nucleotide polymorphisms (SNPs) or other localized patterns, variation within a population may yield many more polymorphisms.
Mediterranean Diet
The Medi |
https://en.wikipedia.org/wiki/Software%20quality%20assurance | Software quality assurance (SQA) is a means and practice of monitoring all software engineering processes, methods, and work products to ensure compliance against defined standards. It may include ensuring conformance to standards or models, such as ISO/IEC 9126 (now superseded by ISO 25010), SPICE or CMMI.
It includes standards and procedures that managers, administrators or developers may use to review and audit software products and activities to verify that the software meets quality criteria which link to standards.
SQA encompasses the entire software development process, including requirements engineering, software design, coding, code reviews, source code control, software configuration management, testing, release management and software integration. It is organized into goals, commitments, abilities, activities, measurements, verification and validation.
Purpose
SQA involves a three-pronged approach:
Organization-wide policies, procedures and standards
Project-specific policies, procedures and standards
Compliance to appropriate procedures
Guidelines for the application of ISO 9001:2015 to computer software are described in ISO/IEC/IEEE 90003:2018. External entities can be contracted as part of process assessments to verify that projects are standard-compliant. More specifically in case of software, ISO/IEC 9126 (now superseded by ISO 25010) should be considered and applied for software quality.
Activities
Quality assurance activities take place at each phase of development. Analysts use application technology and techniques to achieve high-quality specifications and designs, such as model-driven design. Engineers and technicians find bugs and problems with related software quality through testing activities. Standards and process deviations are identified and addressed throughout development by project managers or quality managers, who also ensure that changes to functionality, performance, features, architecture and component (in general: cha |
https://en.wikipedia.org/wiki/Crank%E2%80%93Nicolson%20method | In numerical analysis, the Crank–Nicolson method is a finite difference method used for numerically solving the heat equation and similar partial differential equations. It is a second-order method in time. It is implicit in time, can be written as an implicit Runge–Kutta method, and it is numerically stable. The method was developed by John Crank and Phyllis Nicolson in the mid 20th century.
For diffusion equations (and many other equations), it can be shown that the Crank–Nicolson method is unconditionally stable. However, the approximate solutions can still contain (decaying) spurious oscillations if the ratio of the time step \(\Delta t\) times the thermal diffusivity to the square of the space step, \(\Delta x^{2}\), is large (typically, larger than 1/2 per Von Neumann stability analysis). For this reason, whenever large time steps or high spatial resolution is necessary, the less accurate backward Euler method is often used, which is both stable and immune to oscillations.
Principle
The Crank–Nicolson method is based on the trapezoidal rule, giving second-order convergence in time. For linear equations, the trapezoidal rule is equivalent to the implicit midpoint method, the simplest example of a Gauss–Legendre implicit Runge–Kutta method, which also has the property of being a geometric integrator. For example, in one dimension, suppose the partial differential equation is
\(\frac{\partial u}{\partial t} = F\!\left(u, x, t, \frac{\partial u}{\partial x}, \frac{\partial^{2} u}{\partial x^{2}}\right).\)
Letting \(u_{i}^{n} = u(i\,\Delta x, n\,\Delta t)\) and \(F_{i}^{n}\) denote \(F\) evaluated for \(i\), \(n\) and \(u_{i}^{n}\), the equation for the Crank–Nicolson method is a combination of the forward Euler method at \(n\) and the backward Euler method at \(n + 1\) (note, however, that the method itself is not simply the average of those two methods, as the backward Euler equation has an implicit dependence on the solution):
\(\frac{u_{i}^{n+1} - u_{i}^{n}}{\Delta t} = \frac{1}{2}\left[F_{i}^{n+1} + F_{i}^{n}\right].\)
Note that this is an implicit method: to get the "next" value of u in time, a system of algebraic equations must be solved. If the partial differential equation is nonlinear, the discretization will also be nonlinear, so that advancing in time will involve the solution of a system of nonlinear algebraic equati |
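A minimal sketch for the 1D heat equation \(u_t = \alpha u_{xx}\) with zero Dirichlet boundaries, where the implicit side leads to the tridiagonal linear system mentioned above; the grid size, \(\alpha\), and step sizes are illustrative.

```python
import numpy as np

def crank_nicolson_step(u, alpha, dt, dx):
    """Advance u one time step; u[0] and u[-1] are held at zero."""
    r = alpha * dt / dx**2
    m = len(u) - 2                        # number of interior points
    # Implicit side (I - r/2 * D2), a tridiagonal matrix:
    A = (np.diag(np.full(m, 1 + r))
         + np.diag(np.full(m - 1, -r / 2), 1)
         + np.diag(np.full(m - 1, -r / 2), -1))
    # Explicit side (I + r/2 * D2) applied to the current values:
    rhs = (1 - r) * u[1:-1] + (r / 2) * (u[:-2] + u[2:])
    u_next = np.zeros_like(u)
    u_next[1:-1] = np.linalg.solve(A, rhs)
    return u_next

x = np.linspace(0.0, 1.0, 51)
u = np.sin(np.pi * x)                     # eigenmode: decays as exp(-pi^2 t)
for _ in range(100):                      # advance to t = 0.1
    u = crank_nicolson_step(u, alpha=1.0, dt=1e-3, dx=x[1] - x[0])
print(round(u.max(), 3))                  # ~0.373, close to exp(-pi^2 / 10)
```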
https://en.wikipedia.org/wiki/Photuris%20%28protocol%29 | In computer networking, Photuris is a session key management protocol defined in RFC 2522.
Photuris is the Latin name of a genus of fireflies native to North America that mimic the signals of other firefly species. The name was chosen as a reference to the (classified) FIREFLY key exchange protocol developed by the National Security Agency and used in the STU-III secure telephone, which is believed to operate by similar principles.
See also
FIREFLY
External links
RFC 2522
Test implementation of Photuris |
https://en.wikipedia.org/wiki/Desert%20ecology | Desert ecology is the study of interactions between both biotic and abiotic components of desert environments. A desert ecosystem is defined by interactions between organisms, the climate in which they live, and any other non-living influences on the habitat. Deserts are arid regions that are generally associated with warm temperatures; however, cold deserts also exist. Deserts can be found in every continent, with the largest deserts located in Antarctica, the Arctic, Northern Africa, and the Middle East.
Climate
Deserts experience a wide range of temperatures and weather conditions, and can be classified into four types: hot, semiarid, coastal, and cold. Hot deserts experience warm temperatures year round, and low annual precipitation. Low levels of humidity in hot deserts contribute to high daytime temperatures, and extensive night-time heat loss. The average annual temperature in hot deserts is approximately 20 to 25 °C; however, extreme weather conditions can lead to temperatures ranging from −18 to 49 °C.
Rainfall generally occurs in short bursts, followed by long periods of dryness. Semiarid deserts experience similar conditions to hot deserts; however, the maximum and minimum temperatures tend to be less extreme, and generally range from 10 to 38 °C. Coastal deserts are cooler than hot and semiarid deserts, with average summer temperatures ranging between 13 and 24 °C. They also feature higher total rainfall values. Cold deserts are similar in temperature to coastal deserts; however, they receive more annual precipitation, in the form of snowfall. Deserts are most notable for their dry climates, usually a result of their surrounding geography. For example, rain-blocking mountain ranges and distance from oceans are two geographic features that contribute to desert aridity. Rain-blocking mountain ranges create rain shadows. As air rises and cools, its relative humidity increases and some or most moisture rains out, leaving little to no water vapor to form precipitation o |
https://en.wikipedia.org/wiki/Luminous%20efficacy | Luminous efficacy is a measure of how well a light source produces visible light. It is the ratio of luminous flux to power, measured in lumens per watt in the International System of Units (SI). Depending on context, the power can be either the radiant flux of the source's output, or it can be the total power (electric power, chemical energy, or others) consumed by the source.
Which sense of the term is intended must usually be inferred from the context, and is sometimes unclear. The former sense is sometimes called luminous efficacy of radiation, and the latter luminous efficacy of a light source or overall luminous efficacy.
Not all wavelengths of light are equally visible, or equally effective at stimulating human vision, due to the spectral sensitivity of the human eye; radiation in the infrared and ultraviolet parts of the spectrum is useless for illumination. The luminous efficacy of a source is the product of how well it converts energy to electromagnetic radiation, and how well the emitted radiation is detected by the human eye.
Efficacy and efficiency
Luminous efficacy can be normalized by the maximum possible luminous efficacy to a dimensionless quantity called luminous efficiency. The distinction between efficacy and efficiency is not always carefully maintained in published sources, so it is not uncommon to see "efficiencies" expressed in lumens per watt, or "efficacies" expressed as a percentage.
Luminous efficacy of radiation
Explanation
Wavelengths of light outside of the visible spectrum are not useful for illumination because they cannot be seen by the human eye. Furthermore, the eye responds more to some wavelengths of light than others, even within the visible spectrum. This response of the eye is represented by the luminosity function. This is a standardized function which represents the response of a "typical" eye under bright conditions (photopic vision). One can also define a similar curve for dim conditions (scotopic vision). When neith |
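As a numeric sketch of the distinction, luminous efficacy of radiation weights the spectrum by the luminosity function and the maximum efficacy of 683 lm/W; the Gaussian stand-in for the photopic curve below is a rough illustrative approximation, not the tabulated CIE function.

```python
import numpy as np

def luminosity_V(wavelength_nm):
    # Rough Gaussian stand-in for the photopic luminosity function,
    # peaking at 555 nm -- an assumption, not the CIE table.
    return np.exp(-((wavelength_nm - 555.0) ** 2) / (2 * 42.0 ** 2))

def efficacy_of_radiation(wavelengths_nm, spectral_power):
    """Lumens per watt of *radiated* power for the given spectrum."""
    lumens = 683.0 * np.trapz(luminosity_V(wavelengths_nm) * spectral_power,
                              wavelengths_nm)
    watts = np.trapz(spectral_power, wavelengths_nm)
    return lumens / watts

wl = np.linspace(380.0, 780.0, 401)
flat_spectrum = np.ones_like(wl)     # equal power at every visible wavelength
print(efficacy_of_radiation(wl, flat_spectrum))  # roughly 180 lm/W here
```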
https://en.wikipedia.org/wiki/Linnean%20Medal | The Linnean Medal of the Linnean Society of London was established in 1888, and is awarded annually, alternately to a botanist or a zoologist or (as has been common since 1958) to one of each in the same year. The medal was of gold until 1976, and for the preceding years is often referred to as "the Gold Medal of the Linnean Society", not to be confused with the official Linnean Gold Medal, which is seldom awarded.
The engraver of the medal was Charles Anderson Ferrier of Dundee, a Fellow of the Linnean Society from 1882. On the obverse of the medal is the head of Linnaeus in profile and the words "Carolus Linnaeus", on the reverse are the arms of the society and the legend "Societas Linnaeana optime merenti"; an oval space is reserved for the recipient's name.
Linnean medallists
19th century
1888: Sir Joseph D. Hooker and Sir Richard Owen
1889: Alphonse Louis Pierre Pyrame de Candolle
1890: Thomas Henry Huxley
1891: Jean-Baptiste Édouard Bornet
1892: Alfred Russel Wallace
1893: Daniel Oliver
1894: Ernst Haeckel
1895: Ferdinand Julius Cohn
1896: George James Allman
1897: Jacob Georg Agardh
1898: George Charles Wallich
1899: John Gilbert Baker
1900: Alfred Newton
20th century
1901: Sir George King
1902: Albert von Kölliker
1903: Mordecai Cubitt Cooke
1904: Albert C. L. G. Günther
1905: Eduard Strasburger
1906: Alfred Merle Norman
1907: Melchior Treub
1908: Thomas Roscoe Rede Stebbing
1909: Frederick Orpen Bower
1910: Georg Ossian Sars
1911: Hermann Graf zu Solms-Laubach
1912: Robert Cyril Layton Perkins
1913: Heinrich Gustav Adolf Engler
1914: Otto Bütschli
1915: Joseph Henry Maiden
1916: Frank Evers Beddard
1917: Henry Brougham Guppy
1918: Frederick DuCane Godman
1919: Sir Isaac Bayley Balfour
1920: Sir Edwin Ray Lankester
1921: Dukinfield Henry Scott
1922: Sir Edward Bagnall Poulton
1923: Thomas Frederic Cheeseman
1924: William Carmichael McIntosh
1925: Francis Wall Oliver
1926: Edgar Johnson Allen
1927: Otto Stapf
1928: Edmund Beecher Wilson
1929: Hugo de Vries
|
https://en.wikipedia.org/wiki/Banner%20of%20arms | A banner of arms is a type of heraldic flag, characterised by sharing its imagery with that of the coat of arms (i.e. the shield of a full heraldic achievement, rendered in a square or rectangular shape of the flag).
The term is derived from the terminology of heraldry but mostly used in vexillology. Examples of modern national flags which are banners of arms are the flags of Austria, Iraq, and Switzerland.
The banner of arms is sometimes simply called a banner, though in a stricter sense a banner is a unique personal flag of a nobleman, carried in battle.
Examples
National flags
Subdivision flags
County flags
City flags
Organization flags
Notes |
https://en.wikipedia.org/wiki/Video%20astronomy | Video astronomy (also known as camera-assisted astronomy or electronically-assisted astronomy, "EAA") is a branch of astronomy for near real-time observing of relatively faint astronomical objects using very sensitive CCD or CMOS cameras. Unlike lucky imaging, video astronomy does not discard unwanted frames, and image corrections such as dark subtraction are often not applied; however, the gathered data may be retained and processed in more traditional ways. Although the field has a long history reaching back to 1928 with the inception of live television broadcasting of the planet Mars, it has largely been developed more recently by amateur enthusiasts and is characterized by the use of relatively inexpensive equipment, such as easily available sensitive security cameras, in contrast to the equipment used for advanced astrophotography.
By using either rapid internal stacking of images or very short exposure times, and using a TV monitor (for analog cameras) or a computer with readily available software (for USB cameras), video astronomy allows observers to see colour and detail that would not register to the eye. Because the image can be displayed on a monitor or television screen it allows multiple people to share 'live' images; using the internet it is possible for a worldwide audience to share such images. Live broadcasting websites exist for sharing live video astronomy feeds.
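The stacking itself can be as simple as a running mean of the incoming frames, so that faint detail accumulates on screen while noise averages away. A minimal sketch, with random frames standing in for a real camera stream and with alignment and dark subtraction omitted:

```python
import numpy as np

def live_stack(frames):
    """Yield the running mean after each frame, for live display."""
    stack = None
    for n, frame in enumerate(frames, start=1):
        frame = frame.astype(np.float64)
        stack = frame if stack is None else stack + (frame - stack) / n
        yield stack

noisy_frames = (np.random.poisson(5, (48, 64)) for _ in range(100))
for display in live_stack(noisy_frames):
    pass                       # a real program would draw `display` here
print(display.std())           # noise shrinks as frames accumulate
```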
Video astronomy, combined with remote control of a telescope, allows anyone including disabled people to operate a telescope remotely, or observers in a light-polluted area to operate a telescope in another area, even another country.
Other benefits of the highly sensitive cameras used in video astronomy are the ability to see through thin cloud, and the ability to see many faint objects in areas suffering from light pollution.
The equipment used varies from webcams and basic security cameras to specialized video astronomy cameras. Recent growing interest in the video 'near-live' |
https://en.wikipedia.org/wiki/Freeze%20%28software%20engineering%29 | In software engineering, a freeze is a point in time in the development process after which the rules for making changes to the source code or related resources become more strict, or the period during which those rules are applied. A freeze helps move the project forward towards a release or the end of an iteration by reducing the scale or frequency of changes, and may be used to help meet a roadmap.
The exact rules depend on the type of freeze and the particular development process in use; for example, they may include only allowing changes which fix bugs, or allowing changes only after thorough review by other members of the development team. They may also specify what happens if a change contrary to the rules is required, such as restarting the freeze period.
Common types of freezes are:
A (complete) specification freeze, in which the parties involved decide not to add any new requirement, specification, or feature to the feature list of a software project, so as to begin coding work.
A (complete) feature freeze, in which all work on adding new features is suspended, shifting the effort towards fixing bugs and improving the user experience. The addition of new features may have a disruptive effect on other parts of the program, due both to the introduction of new, untested source code or resources and to interactions with other features; thus, a feature freeze helps improve the program's stability. For example: "user interface feature freeze" means no more features will be permitted to the user interface portion of the code; bugs can still be fixed.
A (complete) code freeze, in which no changes whatsoever are permitted to a portion or the entirety of the program's source code. Particularly in large software systems, any change to the source code may have unintended consequences, potentially introducing new bugs; thus, a code freeze helps ensure that a portion of the program that is known to work correctly will continue to do so. Code freezes are often emplo |
https://en.wikipedia.org/wiki/Exponential%20dichotomy | In the mathematical theory of dynamical systems, an exponential dichotomy is a property of an equilibrium point that extends the idea of hyperbolicity to non-autonomous systems.
Definition
If
\(\dot{x} = A(t)\,x\)
is a linear non-autonomous dynamical system in \(\mathbb{R}^{n}\) with fundamental solution matrix Φ(t), Φ(0) = I, then the equilibrium point 0 is said to have an exponential dichotomy if there exists a (constant) matrix P such that P² = P and positive constants K, L, α, and β such that
\(\|\Phi(t) P \Phi^{-1}(s)\| \le K e^{-\alpha (t - s)} \quad \text{for } t \ge s,\)
and
\(\|\Phi(t) (I - P) \Phi^{-1}(s)\| \le L e^{-\beta (s - t)} \quad \text{for } s \ge t.\)
If furthermore, L = 1/K and β = α, then 0 is said to have a uniform exponential dichotomy.
The constants α and β allow us to define the spectral window of the equilibrium point, (−α, β).
Explanation
The matrix P is a projection onto the stable subspace and I − P is a projection onto the unstable subspace. What the exponential dichotomy says is that the norm of the projection onto the stable subspace of any orbit in the system decays exponentially as t → ∞ and the norm of the projection onto the unstable subspace of any orbit decays exponentially as t → −∞, and furthermore that the stable and unstable subspaces are conjugate.
An equilibrium point with an exponential dichotomy has many of the properties of a hyperbolic equilibrium point in autonomous systems. In fact, it can be shown that a hyperbolic point has an exponential dichotomy. |
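A numeric illustration for the autonomous special case, where the bounds can be checked by hand; the matrix, projection, and constants below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

A = np.diag([-1.0, 2.0])            # stable and unstable directions
P = np.diag([1.0, 0.0])             # projection: P @ P == P

def Phi(t):
    return expm(A * t)              # fundamental matrix, Phi(0) = I

# The dichotomy holds with K = L = 1, alpha = 1, beta = 2:
for s, t in [(1.0, 3.0), (0.5, 5.0)]:          # pairs with t >= s
    stable = np.linalg.norm(Phi(t) @ P @ np.linalg.inv(Phi(s)), 2)
    unstable = np.linalg.norm(Phi(s) @ (np.eye(2) - P) @ np.linalg.inv(Phi(t)), 2)
    print(stable <= np.exp(-(t - s)) + 1e-9,
          unstable <= np.exp(-2.0 * (t - s)) + 1e-9)   # True True
```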
https://en.wikipedia.org/wiki/Locally%20compact%20quantum%20group | In mathematics and theoretical physics, a locally compact quantum group is a relatively new C*-algebraic approach toward quantum groups that generalizes the Kac algebra, compact-quantum-group and Hopf-algebra approaches. Earlier attempts at a unifying definition of quantum groups using, for example, multiplicative unitaries have enjoyed some success but have also encountered several technical problems.
One of the main features distinguishing this new approach from its predecessors is the axiomatic existence of left and right invariant weights. This gives a noncommutative analogue of left and right Haar measures on a locally compact Hausdorff group.
Definitions
Before we can even begin to properly define a locally compact quantum group, we first need to define a number of preliminary concepts and also state a few theorems.
Definition (weight). Let \(A\) be a C*-algebra, and let \(A^{+}\) denote the set of positive elements of \(A\). A weight on \(A\) is a function \(\phi : A^{+} \to [0, \infty]\) such that
\(\phi(a + b) = \phi(a) + \phi(b)\) for all \(a, b \in A^{+}\), and
\(\phi(r a) = r \phi(a)\) for all \(r \in [0, \infty)\) and \(a \in A^{+}\).
Some notation for weights. Let \(\phi\) be a weight on a C*-algebra \(A\). We use the following notation:
\(\mathcal{M}_{\phi}^{+} = \{ a \in A^{+} \mid \phi(a) < \infty \}\), which is called the set of all positive \(\phi\)-integrable elements of \(A\).
\(\mathcal{N}_{\phi} = \{ a \in A \mid \phi(a^{*} a) < \infty \}\), which is called the set of all \(\phi\)-square-integrable elements of \(A\).
\(\mathcal{M}_{\phi} = \operatorname{span} \mathcal{N}_{\phi}^{*} \mathcal{N}_{\phi}\), which is called the set of all \(\phi\)-integrable elements of \(A\).
Types of weights. Let \(\phi\) be a weight on a C*-algebra \(A\).
We say that \(\phi\) is faithful if and only if \(\phi(a) \neq 0\) for each non-zero \(a \in A^{+}\).
We say that \(\phi\) is lower semi-continuous if and only if the set \(\{ a \in A^{+} \mid \phi(a) \le \lambda \}\) is a closed subset of \(A^{+}\) for every \(\lambda \in [0, \infty)\).
We say that \(\phi\) is densely defined if and only if \(\mathcal{M}_{\phi}^{+}\) is a dense subset of \(A^{+}\), or equivalently, if and only if either \(\mathcal{N}_{\phi}\) or \(\mathcal{M}_{\phi}\) is a dense subset of \(A\).
We say that \(\phi\) is proper if and only if it is non-zero, lower semi-continuous and densely defined.
Definition (one-parameter group). Let \(A\) be a C*-algebra. A one-parameter group on \(A\) is a family \(\alpha = (\alpha_{t})_{t \in \mathbb{R}}\) of *-automorphisms of \(A\) that satisfies \(\alpha_{s} \circ \alpha_{t} = \alpha_{s + t}\) for all \(s, t \in \mathbb{R}\). We say that \(\alpha\) is norm-continuous if and only if for every \(a \in A\), the mapping defined by \(t \mapsto \alpha_{t}(a)\) is continuous (surely this s |
https://en.wikipedia.org/wiki/Chain%20drive | Chain drive is a way of transmitting mechanical power from one place to another. It is often used to convey power to the wheels of a vehicle, particularly bicycles and motorcycles. It is also used in a wide variety of machines besides vehicles.
Most often, the power is conveyed by a roller chain, known as the drive chain or transmission chain, passing over a sprocket gear, with the teeth of the gear meshing with the holes in the links of the chain. The gear is turned, and this pulls the chain putting mechanical force into the system. Another type of drive chain is the Morse chain, invented by the Morse Chain Company of Ithaca, New York, United States. This has inverted teeth.
Sometimes the power is output by simply rotating the chain, which can be used to lift or drag objects. In other situations, a second gear is placed and the power is recovered by attaching shafts or hubs to this gear. Though drive chains are often simple oval loops, they can also go around corners by placing more than two gears along the chain; gears that do not put power into the system or transmit it out are generally known as idler-wheels. By varying the diameter of the input and output gears with respect to each other, the gear ratio can be altered. For example, when the bicycle pedals' gear rotates once, it causes the gear that drives the wheels to rotate more than one revolution. Duplex chains are another type of chain; they are essentially two chains joined side by side, allowing more power and torque to be transmitted.
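The ratio arithmetic is simple to make concrete in code; the tooth counts and speeds below are illustrative.

```python
def output_rpm(input_rpm, input_teeth, output_teeth):
    """The chain carries the same number of links per minute past both
    sprockets, so rotational speeds scale inversely with tooth counts."""
    return input_rpm * input_teeth / output_teeth

# A bicycle: 48-tooth chainring at 60 rpm driving a 16-tooth rear sprocket.
print(output_rpm(60, 48, 16))  # 180.0 rpm -- three wheel turns per pedal turn
```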
History
The oldest known application of a chain drive appears in the Polybolos, described by the Greek engineer Philon of Byzantium (3rd century BC). Two flat-linked chains were connected to a windlass, which by winding back and forth would automatically fire the machine's arrows until its magazine was empty. Although the device did not transmit power continuously since the chains "did not transmit power from shaft to shaft, and hence they were not in the direct |
https://en.wikipedia.org/wiki/Disk%20encryption%20software | Disk encryption software is computer security software that protects the confidentiality of data stored on computer media (e.g., a hard disk, floppy disk, or USB device) by using disk encryption.
Compared to access controls commonly enforced by an operating system (OS), encryption passively protects data confidentiality even when the OS is not active, for example, if data is read directly from the hardware or by a different OS. In addition, crypto-shredding removes the need to erase the data at the end of the disk's lifecycle.
Disk encryption generally refers to wholesale encryption that operates on an entire volume mostly transparently to the user, the system, and applications. This is generally distinguished from file-level encryption that operates by user invocation on a single file or group of files, and which requires the user to decide which specific files should be encrypted. Disk encryption usually includes all aspects of the disk, including directories, so that an adversary cannot determine content, name or size of any file. It is well suited to portable devices such as laptop computers and thumb drives which are particularly susceptible to being lost or stolen. If used properly, someone finding a lost device cannot penetrate actual data, or even know what files might be present.
Methods
The disk's data is protected using symmetric cryptography with the key randomly generated when a disk's encryption is first established. This key is itself encrypted in some way using a password or pass-phrase known (ideally) only to the user. Thereafter, in order to access the disk's data, the user must supply the password to make the key available to the software. This must be done sometime after each operating system start-up before the encrypted data can be used.
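A minimal sketch of that key hierarchy in Python, using PBKDF2 and Fernet from the `cryptography` package as stand-ins for whatever KDF and cipher a real disk-encryption product uses; the passphrase and iteration count are illustrative.

```python
import os
import base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

disk_key = Fernet.generate_key()       # random key, generated once per disk
salt = os.urandom(16)

def password_key(password: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(password))

# The password never encrypts data directly; it only wraps the disk key:
wrapped = Fernet(password_key(b"correct horse battery staple")).encrypt(disk_key)

# At mount time, re-derive the password key and unwrap the disk key:
recovered = Fernet(password_key(b"correct horse battery staple")).decrypt(wrapped)
assert recovered == disk_key
```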
Done in software, encryption typically operates at a level between all applications and most system programs and the low-level device drivers by "transparently" (from a user's point of view) encrypting da |
https://en.wikipedia.org/wiki/616%20%28number%29 | 616 (six hundred [and] sixteen) is the natural number following 615 and preceding 617.
While 666 is called the "number of the beast" in most manuscripts of Revelation, a fragment of papyrus 115, the earliest witness to that passage, gives the number as 616.
In mathematics
616 is a member of the Padovan sequence, coming after 265, 351, 465 (it is the sum of the first two of these). 616 is a polygonal number in four different ways: it is a heptagonal number, as well as 13-, 31- and 104-gonal.
It is also the sum of the squares of the factorials of 2, 3 and 4: (2!)² + (3!)² + (4!)² = 4 + 36 + 576 = 616.
The 616th harmonic number is the first to exceed seven.
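These arithmetic claims are easy to verify; a quick sketch:

```python
from math import factorial

# Padovan: P(n) = P(n-2) + P(n-3), seeds 1, 1, 1; 265 + 351 = 616.
a, b, c = 1, 1, 1
while c < 616:
    a, b, c = b, c, a + b
print(c == 616)                              # True

# Heptagonal numbers have the form n(5n - 3)/2; n = 16 gives 616.
print(16 * (5 * 16 - 3) // 2)                # 616

# Sum of squared factorials of 2, 3, 4:
print(sum(factorial(k) ** 2 for k in (2, 3, 4)))   # 616

# H_616 is the first harmonic number above seven:
H = sum(1 / k for k in range(1, 617))
print(H > 7 and H - 1 / 616 <= 7)            # True
```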
Number of the beast
666 is generally believed to have been the original Number of the Beast in the Book of Revelation in the Christian Bible. In 2005, however, a fragment of papyrus 115 was revealed, containing the earliest known version of that part of the Book of Revelation discussing the Number of the Beast. It gave the number as 616, suggesting that this may have been the original. One possible explanation for the two different numbers is that they reflect two different spellings of Emperor Nero/Neron's name, for which (according to this theory) this number is believed to be a code.
In other fields
Earth-616 is the name used to identify the primary continuity in which most Marvel Comics' titles take place.
616 film, a medium film format.
Area code 616, an area code in Michigan. |
https://en.wikipedia.org/wiki/Variable-frequency%20drive | A variable-frequency drive (VFD), also known as an adjustable-frequency drive, adjustable-speed drive, variable-speed drive, AC drive, micro drive, inverter drive, or simply drive, is a type of AC motor drive (a system incorporating a motor) that controls speed and torque by varying the frequency of the input electricity. Depending on its topology, it controls the associated voltage or current variation.
VFDs are used in applications ranging from small appliances to large compressors. Systems using VFDs can be more efficient than hydraulic systems, such as in systems with pumps and damper control for fans.
Since the 1980s, power electronics technology has reduced VFD cost and size and has improved performance through advances in semiconductor switching devices, drive topologies, simulation and control techniques, and control hardware and software.
VFDs include low- and medium-voltage AC-AC and DC-AC topologies.
History
The pulse-width modulation (PWM) variable-frequency drive project started in the 1960s at Strömberg in Finland. Martti Harmoinen is regarded as the inventor of this technology. Strömberg managed to sell the idea of the PWM drive to the Helsinki metro in 1973, and in 1982 the first PWM drives, SAMI10, were operational.
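A minimal sketch of the sine-triangle PWM idea: compare a sinusoidal reference at the desired output frequency with a fast triangular carrier, and use the comparison as the switching signal for one inverter leg. All frequencies, amplitudes, and the sample rate below are illustrative assumptions.

```python
import numpy as np

fs = 100_000                                    # samples per second
t = np.arange(0, 0.04, 1 / fs)                  # two cycles at 50 Hz
reference = 0.8 * np.sin(2 * np.pi * 50 * t)    # desired output voltage shape
carrier = (2 / np.pi) * np.arcsin(np.sin(2 * np.pi * 2000 * t))  # 2 kHz triangle
pwm = np.where(reference > carrier, 1.0, -1.0)  # switched inverter-leg output

# Averaging over one carrier period recovers the reference shape:
window = np.ones(50) / 50                       # 50 samples = one carrier period
recovered = np.convolve(pwm, window, mode="same")
print(round(reference[500], 2), round(recovered[500], 2))  # both near 0.8
```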
System description and operation
A variable-frequency drive is a device used in a drive system consisting of the following three main sub-systems: AC motor, main drive controller assembly, and drive/operator interface.
AC motor
The AC electric motor used in a VFD system is usually a three-phase induction motor. Some types of single-phase motors or synchronous motors can be advantageous in some situations, but generally three-phase induction motors are preferred as the most economical. Motors that are designed for fixed-speed operation are often used. Elevated-voltage stresses imposed on induction motors that are supplied by VFDs require that such motors be designed for definite-purpose inverter-fed duty in accordance with such requirements as Part 31 of NEMA Standard |
https://en.wikipedia.org/wiki/Personal%20data | Personal data, also known as personal information or personally identifiable information (PII), is any information related to an identifiable person.
The abbreviation PII is widely accepted in the United States, but the phrase it abbreviates has four common variants based on personal or personally, and identifiable or identifying. Not all are equivalent, and for legal purposes the effective definitions vary depending on the jurisdiction and the purposes for which the term is being used. Under European Union and United Kingdom data protection regimes, which centre primarily on the General Data Protection Regulation (GDPR), the term "personal data" is significantly broader, and determines the scope of the regulatory regime.
National Institute of Standards and Technology Special Publication 800-122 defines personally identifiable information as "any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual's identity, such as name, social security number, date and place of birth, mother's maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information." For instance, a user's IP address is not classed as PII on its own, but is classified as a linked PII.
Personal data is defined under the GDPR as "any information which [is] related to an identified or identifiable natural person". The IP address of an Internet subscriber may be classed as personal data.
The concept of PII has become prevalent as information technology and the Internet have made it easier to collect PII leading to a profitable market in collecting and reselling PII. PII can also be exploited by criminals to stalk or steal the identity of a person, or to aid in the planning of criminal acts. As a response to these threats, many website privacy policies specifically address the gathering of PII, and lawmake |
https://en.wikipedia.org/wiki/Budlong%20Pickle%20Company | The Budlong Pickle Company was an American company based in Chicago that made and marketed pickles from its own cucumbers. Founded in the late 1850s, it was sold in 1958 to a company which was later acquired by Dean Foods. The Budlong pickle legacy has recently been revived as the namesake of a restaurant chain in Chicago called “The Budlong Hot Chicken”.
Background
In the 19th century, Chicago was a powerhouse of American agriculture. The Union Stock Yards was the center of American meatpacking, and the Chicago Board of Trade provided financial support for investment in agricultural commodities.
In addition to its dominance in meatpacking and the grain trade, Chicago was a center of the American pickle industry in the late 19th century. Among the reasons for Chicago's pickle prominence were ample supplies of salt and a robust rail infrastructure.
The Budlong family, an old Rhode Island family and the namesake of Budlong Farm, were established farmers and picklers on the East Coast, with a large operation in Cranston, Rhode Island.
Early history
In the late 1850s, Lyman A. Budlong (1829–1909) started a large farm in Chicago—in an area now named Budlong Woods—called the Budlong Nursery. Dates of the Nursery's establishment vary, but it must have been in 1857, the year that Lyman first came to Chicago, or later.
The Nursery, nicknamed the "village of glass" after its many greenhouses, produced large quantities of cucumbers, onions, and other vegetables. It also grew flowers. Approximately in size, the center of the farm was at what is now the intersection of California and Foster Avenues in the Budlong Woods section of Chicago's Lincoln Square community area.
The Budlong Pickle Company, founded in 1857 or 1859, was initially part of the same enterprise as the Nursery. In 1903, the Chicago Sunday Tribune called Budlong's the largest pickle farm in the world; as of 1928, the Tribune called it "one of the largest pickle factories in the world". It sold approximately |
https://en.wikipedia.org/wiki/Luigi%20Cremona | Antonio Luigi Gaudenzio Giuseppe Cremona (7 December 1830 – 10 June 1903) was an Italian mathematician. His life was devoted to the study of geometry and reforming advanced mathematical teaching in Italy. He worked on algebraic curves and algebraic surfaces, particularly through his paper Introduzione ad una teoria geometrica delle curve piane ("Introduction to a geometrical theory of the plane curves"), and was a founder of the Italian school of algebraic geometry.
Biography
Luigi Cremona was born in Pavia (Lombardy), then part of the Austrian-controlled Kingdom of Lombardy–Venetia. His youngest brother was the painter Tranquillo Cremona.
In 1848, when Milan and Venice rose against Austria, Cremona, then only seventeen, joined the ranks of the Italian volunteers. He remained with them, fighting on behalf of his country's freedom, until, in 1849, the capitulation of Venice put an end to the campaign.
He then returned to Pavia, where he pursued his studies at the university under Francesco Brioschi, and determined to seek a career as teacher of mathematics. He graduated in 1853 as dottore negli studi di ingegnere civile e architetto.
Cremona is noted for the important role he played in bringing about the great geometrical advances in Italy. While, at the beginning of the nineteenth century, Italy had very little mathematical standing, the end of the century found Italy in the lead along geometric lines, largely as a result of the work of Cremona. He was very influential in bringing about reforms in the secondary schools of Italy and became a leader in questions of mathematical pedagogy as well as in those relating to the advancement of knowledge. The mathematical advances which Italy made since the middle of the nineteenth century were largely guided by Cremona, Brioschi, and Beltrami.
His first appointment was as elementary mathematical master at the gymnasium and lyceum of Cremona, and he afterwards obtained a similar post at Milan. In 1860 he was appointed to |
https://en.wikipedia.org/wiki/Metabolic%20equivalent%20of%20task | The metabolic equivalent of task (MET) is the objective measure of the ratio of the rate at which a person expends energy, relative to the mass of that person, while performing some specific physical activity, compared to a reference. The reference is currently set by convention at an absolute 3.5 mL of oxygen per kg per minute, which is the energy expended when sitting quietly by a reference individual, chosen to be roughly representative of the general population and thereby suited to epidemiological surveys. A Compendium of Physical Activities is available online, which provides MET values for hundreds of activities.
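Under this convention, converting a measured oxygen uptake into METs is a single ratio; a minimal sketch (the sample value is an illustrative assumption):

    REFERENCE_VO2 = 3.5  # mL of oxygen per kg of body mass per minute, by convention

    def mets(vo2_ml_per_kg_min):
        """Express a measured oxygen uptake as a multiple of the seated-rest reference."""
        return vo2_ml_per_kg_min / REFERENCE_VO2

    # Example: an activity consuming 35 mL O2/kg/min corresponds to 10 METs.
    print(mets(35.0))  # 10.0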
A primary use of METs is to grade activity levels for common household activities (such as cleaning) and common exercise modalities (such as running). Vigorous household chores can add up to as much energy expenditure as dedicated exercise, so it is necessary to include both, suitably pro rata, in an assessment of general fitness.
An earlier convention defined the MET as a multiple of the resting metabolic rate (RMR) for the individual concerned. An individual's resting metabolic rate can be measured by absolute gas exchange, absolute thermal output, or steady-state diet in a sedentary condition (with no reference to body mass); or it can be estimated from age, sex, height, body mass, and estimated fitness level (which in part functions as a proxy for lean body mass). As a relative measure, it might correlate better with rating of perceived exertion. This definition is more common in colloquial use on the Internet concerning personal fitness, and less common in the recent academic literature. As a relative measure suited to judge exertion level for the individual athlete, many coaches now prefer a measure indexed to maximum heart rate, which is easy to monitor continuously with modern consumer electronics. Exercise equipment with an accurate delivered-wattage indicator permits the use of relative METs for the same purpose, assuming a known ratio of biological effi |
https://en.wikipedia.org/wiki/Palmaria%20palmata | Palmaria palmata, also called dulse, dillisk or dilsk (from Irish/Scottish Gaelic /), red dulse, sea lettuce flakes, or creathnach, is a red alga (Rhodophyta) previously referred to as Rhodymenia palmata. It grows on the northern coasts of the Atlantic and Pacific Oceans. It is a well-known snack food. In Iceland, where it is known as , it has been an important source of dietary fiber throughout the centuries.
History
The earliest record of this species is on the island of Iona, Scotland where Christian monks harvested it over 1,400 years ago.
Description
The erect frond of dulse grows attached by its discoid holdfast and a short inconspicuous stipe epiphytically on to the stipe of Laminaria or to rocks. The fronds are variable in shape and colour from deep rose to reddish purple and are rather leathery in texture. The flat foliose blade gradually expands and divides into broad segments ranging in size to long and in width which can bear flat, wedge-shaped proliferations from the edge. The blade consists of an outer cortex of small cells enclosing a medulla of larger cells up to 0.35 thick.
The reference to Rhodymenia palmata var. mollis in Abbott and Hollenberg (1976) is now considered to refer to a different species: Palmaria mollis (Setchel et Gardner) van der Meer et Bird.
Dulse is similar to another seaweed, Dilsea carnosa, but Dilsea is more leathery with blades up to long and wide. Unlike P. palmata, it is not branched and does not have proliferations or branches from the edge of the frond, although the older blades may split.
Life history
The full haplodiploid life history was not fully explained until 1980. There are two phases in the life-history, with a haploid phase that is dioecious, with separate male and female plants. The large haploid plants are male, having sporangia. Spermatial sori occur scattered over most of the frond of the haploid male plant. The male plants are blade-like and produce spermatia which fertilize the carpogonia of th |
https://en.wikipedia.org/wiki/Supramolecular%20assembly | In chemistry, a supramolecular assembly is a complex of molecules held together by noncovalent bonds. While a supramolecular assembly can be simply composed of two molecules (e.g., a DNA double helix or an inclusion compound), or a defined number of stoichiometrically interacting molecules within a quaternary complex, it is more often used to denote larger complexes composed of indefinite numbers of molecules that form sphere-, rod-, or sheet-like species. Colloids, liquid crystals, biomolecular condensates, micelles, liposomes and biological membranes are examples of supramolecular assemblies, and their realm of study is known as supramolecular chemistry. The dimensions of supramolecular assemblies can range from nanometers to micrometers. Thus they allow access to nanoscale objects using a bottom-up approach in far fewer steps than a single molecule of similar dimensions.
The process by which a supramolecular assembly forms is called molecular self-assembly. Some try to distinguish self-assembly as the process by which individual molecules form the defined aggregate. Self-organization, then, is the process by which those aggregates create higher-order structures. This can become useful when talking about liquid crystals and block copolymers.
Templating reactions
As studied in coordination chemistry, metal ions (usually transition metal ions) exist in solution bound to ligands. In many cases, the coordination sphere defines geometries conducive to reactions either between ligands or involving ligands and other external reagents.
A well-known example of metal-ion templating was described by Charles Pedersen in his synthesis of various crown ethers using metal cations as templates. For example, 18-crown-6 strongly coordinates the potassium ion and thus can be prepared through the Williamson ether synthesis using potassium as the template metal.
Metal ions are frequently used for assembly of large supramolecular structures. Metal organic frameworks (MOFs) are one example. MOFs |
https://en.wikipedia.org/wiki/Air%E2%80%93fuel%20ratio | Air–fuel ratio (AFR) is the mass ratio of air to a solid, liquid, or gaseous fuel present in a combustion process. The combustion may take place in a controlled manner such as in an internal combustion engine or industrial furnace, or may result in an explosion (e.g., a dust explosion, gas or vapor explosion or in a thermobaric weapon).
The air–fuel ratio determines whether a mixture is combustible at all, how much energy is being released, and how much unwanted pollutants are produced in the reaction. Typically a range of fuel to air ratios exists, outside of which ignition will not occur. These are known as the lower and upper explosive limits.
In an internal combustion engine or industrial furnace, the air–fuel ratio is an important measure for anti-pollution and performance-tuning reasons. If exactly enough air is provided to completely burn all of the fuel, the ratio is known as the stoichiometric mixture, often abbreviated to stoich. Ratios lower than stoichiometric (where the fuel is in excess) are considered "rich". Rich mixtures are less efficient, but may produce more power and burn cooler. Ratios higher than stoichiometric (where the air is in excess) are considered "lean". Lean mixtures are more efficient but may cause higher temperatures, which can lead to the formation of nitrogen oxides. Some engines are designed with features to allow lean-burn. For precise air–fuel ratio calculations, the oxygen content of combustion air should be specified because of different air density due to different altitude or intake air temperature, possible dilution by ambient water vapor, or enrichment by oxygen additions.
Internal combustion engines
In theory, a stoichiometric mixture has just enough air to completely burn the available fuel. In practice, this is never quite achieved, due primarily to the very short time available in an internal combustion engine for each combustion cycle.
Most of the combustion process is completed in approximately 2 millisecon |
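One common way to express how rich or lean a mixture is relative to stoichiometric is the lambda ratio; a minimal sketch, assuming the commonly quoted 14.7:1 stoichiometric AFR for gasoline:

    def lambda_ratio(air_mass, fuel_mass, afr_stoich=14.7):
        """Lambda = actual AFR / stoichiometric AFR.

        Lambda < 1 indicates a rich mixture (excess fuel); lambda > 1 indicates
        a lean mixture (excess air). The 14.7:1 default is the commonly quoted
        approximation for gasoline, not a value taken from the text.
        """
        return (air_mass / fuel_mass) / afr_stoich

    # Example: 13 kg of air per 1 kg of fuel gives lambda ~ 0.88, a rich mixture.
    print(round(lambda_ratio(13.0, 1.0), 2))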
https://en.wikipedia.org/wiki/Password%20Safe | Password Safe is a free and open-source password manager program originally written for Microsoft Windows but supporting a wide range of operating systems, with compatible clients available for Linux, FreeBSD, Android, iOS, BlackBerry and other operating systems as well.
The Linux version is available for Ubuntu (including the Kubuntu and Xubuntu derivatives) and Debian. A Java-based version is also available on SourceForge. On its page, users can find links to unofficial releases running under Android, BlackBerry, and other mobile operating systems.
History
The program was initiated by Bruce Schneier at Counterpane Systems, and is now hosted on SourceForge (Windows) and GitHub (Linux) and developed by a group of volunteers.
Design
After entering the master password, the user has access to all account data entered and saved previously. The data can be organized by categories, searched, and sorted based on references which are easy for the user to remember.
There are various key combinations and mouse clicks to copy parts of the stored data (password, email, username etc.), or use the autofill feature (for filling forms).
The program can be set to minimize automatically after a period of idle time and to clear the clipboard.
It is possible to compare and synchronize (merge) two different password databases. The program can be set up to generate automatic backups.
Password Safe does not support database sharing, but the single-file database can be shared by any external sharing method (for example, Syncthing or Dropbox). The database is not stored online.
Features
Note: All uncited information in this section is sourced from the official Help file included with the application
Password management
Stored passwords can be sectioned into groups and subgroups in a tree structure.
Changes to entries can be tracked, including a history of previous passwords, the creation time, modification time, last access time, and expiration time of each password stored. Text notes |
https://en.wikipedia.org/wiki/Dopamine%20transporter | The dopamine transporter (DAT), also known as the sodium-dependent dopamine transporter, is a membrane-spanning protein coded for in humans by the SLC6A3 gene (also known as DAT1) that pumps the neurotransmitter dopamine out of the synaptic cleft back into the cytosol. In the cytosol, other transporters sequester the dopamine into vesicles for storage and later release. Dopamine reuptake via DAT provides the primary mechanism through which dopamine is cleared from synapses, although there may be an exception in the prefrontal cortex, where evidence points to a possibly larger role of the norepinephrine transporter.
DAT is implicated in a number of dopamine-related disorders, including attention deficit hyperactivity disorder, bipolar disorder, clinical depression, eating disorders, and substance use disorders. The gene that encodes the DAT protein is located on chromosome 5, consists of 15 coding exons, and is roughly 64 kbp long. Evidence for the associations between DAT and dopamine related disorders has come from a type of genetic polymorphism, known as a variable number tandem repeat, in the SLC6A3 gene, which influences the amount of protein expressed.
Function
DAT is an integral membrane protein that removes dopamine from the synaptic cleft and deposits it into surrounding cells, thus terminating the signal of the neurotransmitter. Dopamine underlies several aspects of cognition, including reward, and DAT facilitates regulation of that signal.
Mechanism
DAT is a symporter that moves dopamine across the cell membrane by coupling its transport to the energetically favorable movement of sodium ions from high to low concentration into the cell. DAT function requires the sequential binding and co-transport of two Na+ ions and one Cl− ion with the dopamine substrate. The driving force for DAT-mediated dopamine reuptake is the ion concentration gradient generated by the plasma membrane Na+/K+ ATPase.
In the most widely accepted model for monoamine transporter functio |
https://en.wikipedia.org/wiki/International%20Celestial%20Reference%20System%20and%20its%20realizations | The International Celestial Reference System (ICRS) is the current standard celestial reference system adopted by the International Astronomical Union (IAU). Its origin is at the barycenter of the Solar System, with axes that are intended to "show no global rotation with respect to a set of distant extragalactic objects". This fixed reference system differs from previous reference systems, which had been based on Catalogues of Fundamental Stars that had published the positions of stars based on direct "observations of [their] equatorial coordinates, right ascension and declination" and had adopted as "privileged axes ... the mean equator and the dynamical equinox" at a particular date and time.
The International Celestial Reference Frame (ICRF) is a realization of the International Celestial Reference System using reference celestial sources observed at radio wavelengths. In the context of the ICRS, a reference frame (RF) is the physical realization of a reference system, i.e., the reference frame is the set of numerical coordinates of the reference sources, derived using the procedures spelled out by the ICRS.
More specifically, the ICRF is an inertial barycentric reference frame whose axes are defined by the measured positions of extragalactic sources (mainly quasars) observed using very long baseline interferometry while the Gaia-CRF is an inertial barycentric reference frame defined by optically measured positions of extragalactic sources by the Gaia satellite and whose axes are rotated to conform to the ICRF. Although general relativity implies that there are no true inertial frames around gravitating bodies, these reference frames are important because they do not exhibit any measurable angular rotation since the extragalactic sources used to define the ICRF and the Gaia-CRF are so far away. The ICRF and the Gaia-CRF are now the standard reference frames used to define the positions of astronomical objects.
Reference systems and frames
It is useful to di |
https://en.wikipedia.org/wiki/Cats%20in%20ancient%20Egypt | Cats were represented in social and religious practices of ancient Egypt for more than 3,000 years. Several ancient Egyptian deities were depicted and sculptured with cat-like heads such as Mafdet, Bastet and Sekhmet, representing justice, fertility, and power, respectively.
The deity Mut was also depicted as a cat and in the company of a cat.
Cats were praised for killing venomous snakes and protecting the Pharaoh since at least the First Dynasty of Egypt. Skeletal remains of cats were found among funerary goods dating to the 12th Dynasty. The protective function of cats is indicated in the Book of the Dead, where a cat represents Ra and the benefits of the sun for life on Earth. Cat-shaped decorations used during the New Kingdom of Egypt indicate that the cat cult became more popular in daily life. Cats were depicted in association with the name of Bastet.
Cat cemeteries at the archaeological sites Speos Artemidos, Bubastis and Saqqara were used for several centuries. They contained vast numbers of cat mummies and cat statues that are exhibited in museum collections worldwide.
Among the mummified animals excavated in Gizeh, the African wildcat (Felis lybica) is the most common cat, followed by the jungle cat (Felis chaus).
In view of the huge number of cat mummies found in Egypt, the cat cult was certainly important for the country's economy, as it required breeding of cats and a trading network for the supply of food, oils and resins for embalming them.
History
Mafdet was the first known cat-headed deity in ancient Egypt. During the First Dynasty, she was regarded as protector of the pharaoh's chambers against snakes, scorpions and evil. She was often also depicted with a head of a leopard (Panthera pardus) or a cheetah (Acinonyx jubatus). She was particularly prominent during the reign of Den.
The deity Bastet is known from at least the Second Dynasty onwards. At the time, she was depicted with a lion (Panthera leo) head. Seals and stone vessels with her nam |
https://en.wikipedia.org/wiki/Catastrophe%20modeling | Catastrophe modeling (also known as cat modeling) is the process of using computer-assisted calculations to estimate the losses that could be sustained due to a catastrophic event such as a hurricane or earthquake. Cat modeling is especially applicable to analyzing risks in the insurance industry and is at the confluence of actuarial science, engineering, meteorology, and seismology.
Catastrophes / Perils
Natural catastrophes (sometimes referred to as "nat cat") that are modeled include:
Hurricane (main peril is wind damage; some models can also include storm surge and rainfall)
Earthquake (main peril is ground shaking; some models can also include tsunami, fire following earthquakes, liquefaction, landslide, and sprinkler leakage damage)
Severe thunderstorm or severe convective storm (main sub-perils are tornado, straight-line winds and hail)
Flood
Extratropical cyclone (commonly referred to as European windstorm)
Wildfire
Winter storm
Human catastrophes include:
Terrorism events
Warfare
Casualty/liability events
Forced displacement crises
Cyber data breaches
Lines of business modeled
Cat modeling involves many lines of business, including:
Personal property
Commercial property
Workers' compensation
Automobile physical damage
Limited liabilities
Product liability
Business interruption
Inputs, Outputs, and Use Cases
The input into a typical cat modeling software package is information on the exposures being analyzed that are vulnerable to catastrophe risk. The exposure data can be categorized into three basic groups:
Information on the site locations, referred to as geocoding data (street address, postal code, county/CRESTA zone, etc.)
Information on the physical characteristics of the exposures (construction, occupation/occupancy, year built, number of stories, number of employees, etc.)
Information on the financial terms of the insurance coverage (coverage value, limit, deductible, etc.)
The output of a cat model is an estimate of |
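As a concrete illustration of the third input group, here is a minimal sketch of applying simple per-risk financial terms to a modeled ground-up loss (real cat models support far more elaborate policy structures; the figures are illustrative):

    def insured_loss(ground_up_loss, deductible, limit):
        """Apply simple per-risk financial terms: the insurer pays the part
        of the loss above the deductible, capped at the policy limit."""
        return min(max(ground_up_loss - deductible, 0), limit)

    # Example: a 500,000 modeled loss with a 50,000 deductible and a
    # 300,000 limit produces a 300,000 insured loss.
    print(insured_loss(500_000, 50_000, 300_000))  # 300000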
https://en.wikipedia.org/wiki/Hoarding%20%28animal%20behavior%29 | Hoarding or caching in animal behavior is the storage of food in locations hidden from the sight of both conspecifics (animals of the same or closely related species) and members of other species. Most commonly, the function of hoarding or caching is to store food in times of surplus for times when food is less plentiful. However, there is evidence that some amount of caching or hoarding is done in order to ripen the food, called ripening caching. The term hoarding is most typically used for rodents, whereas caching is more commonly used in reference to birds, but the behaviors in both animal groups are quite similar.
Hoarding is done either on a long-term basis (cached on a seasonal cycle, with food to be consumed months down the line) or on a short-term basis, in which case the food will be consumed over a period of one or several days.
Some common animals that cache their food are rodents such as hamsters and squirrels, and many different bird species, such as rooks and woodpeckers. The western scrub jay is noted for its particular skill at caching. There are two types of caching behavior: larder hoarding, where a species creates a few large caches which it often defends, and scatter hoarding, where a species will create multiple caches, often with each individual food item stored in a unique place. Both types of caching have their advantages.
Function
Caching behavior is typically a way to save excess edible food for later consumption—either soon to be eaten food, such as when a jaguar hangs partially eaten prey from a tree to be eaten within a few days, or long term, where the food is hidden and retrieved many months later. Caching is a common adaptation to seasonal changes in food availability. In regions where winters are harsh, food availability typically becomes low, and caching food during the times of high food availability in the warmer months provides a significant survival advantage. For species that hoard perishable food, weather can significantly aff |
https://en.wikipedia.org/wiki/Language%20module | The language module or language faculty is a hypothetical structure in the human brain which is thought to contain innate capacities for language, originally posited by Noam Chomsky. There is ongoing research into brain modularity in the fields of cognitive science and neuroscience, although the current idea is much weaker than what was proposed by Chomsky and Jerry Fodor in the 1980s. In today's terminology, 'modularity' refers to specialisation: language processing is specialised in the brain to the extent that it occurs partially in different areas than other types of information processing such as visual input. The current view is, then, that language is neither compartmentalised nor based on general principles of processing (as proposed by George Lakoff). It is modular to the extent that it constitutes a specific cognitive skill or area in cognition.
Meaning of a module
The notion of a dedicated language module in the human brain originated with Noam Chomsky's theory of Universal Grammar (UG). The debate on the issue of modularity in language is underpinned, in part, by different understandings of this concept. There is, however, some consensus in the literature that a module is considered committed to processing specialized representations (domain-specificity) in an informationally encapsulated way. A distinction should be drawn between anatomical modularity, which proposes there is one 'area' in the brain that deals with this processing, and functional modularity that obviates anatomical modularity whilst maintaining information encapsulation in distributed parts of the brain.
No singular anatomical module
The available evidence points toward the conclusion that no single area of the brain is solely devoted to processing language. The Wada test, where sodium amobarbital is used to anaesthetise one hemisphere, shows that the left-hemisphere appears to be crucial in language processing. Yet, neuroimaging does not implicate any single area but rather ident |
https://en.wikipedia.org/wiki/Liquid%20Scintillator%20Neutrino%20Detector | The Liquid Scintillator Neutrino Detector (LSND) was a scintillation counter at Los Alamos National Laboratory that measured the number of neutrinos being produced by an accelerator neutrino source. The LSND project was created to look for evidence of neutrino oscillation, and its results conflict with the Standard Model expectation of only three neutrino flavors, when considered in the context of other solar and atmospheric neutrino oscillation experiments. Cosmological data bound the mass of the sterile neutrino to m_s < 0.26 eV (0.44 eV) at the 95% (99.9%) confidence limit, excluding at high significance the sterile neutrino hypothesis as an explanation of the LSND anomaly. The controversial LSND result was tested by the MiniBooNE experiment at Fermilab, which has found similar evidence for oscillations.
The hint is currently undergoing further tests at MicroBooNE at Fermilab.
The detector consisted of a tank filled with 167 tons (50,000 gallons) of mineral oil and of b-PDB (2-(4-tert-butylphenyl)-5-(4-biphenyl)-1,3,4-oxadiazole) organic scintillator material. Cherenkov light emitted by particle interactions was detected by an array of 1220 photomultiplier tubes. The experiment collected data from 1993 to 1998. |
https://en.wikipedia.org/wiki/Timestream | The timestream or time stream is a metaphorical conception of time as a stream, a flowing body of water. In Brave New Words: The Oxford Dictionary of Science Fiction, the term is more narrowly defined as: "the series of all events from past to future, especially when conceived of as one of many such series". Timestream is the normal passage or flow of time and its historical developments, within a given dimension of reality. The concept of the time stream, and the ability to travel within and around it, are the fundamentals of a genre of science fiction.
This conception has been widely used in mythology and in fiction.
This analogy is useful in several ways:
Streams flow only one way. Time moves only forward.
Streams flow constantly. Time never stops.
People can stand in a stream, but will be pulled along by it. People exist within time, but move with it.
Some physicists and science fiction writers have speculated that time is branching—it branches into alternate universes (see many-worlds interpretation). Streams can converge and also diverge.
Science fiction scholar Andrew Sawyer writes, "The paradoxes of time—do we move in time, or does it move by us? Does it exist or is it merely an illusion of our limited perception?—are puzzles that exercise both physicists and philosophers..."
History
Brian Stableford writes of the historical and philosophical concepts of time (and using the terminology of "flow"):
The ancient Greek philosopher Heraclitus was famous for a statement that has been translated in many ways, most commonly as "No man ever steps in the same river twice," which is often called his "flux [flow] doctrine." An essayist for the Stanford Encyclopedia of Philosophy explained it in this manner: "Everything is in flux (in the sense that 'everything is always flowing in some respects'...) ..."
Fiction
In fiction, an alternate continuity is sometimes called an alternate timestream.
Science fiction
The Time Stream, a 1946 science fiction nove |
https://en.wikipedia.org/wiki/Esky | Esky is a brand of portable coolers, originally Australian, derived from the word "Eskimo". The term "esky" is also commonly used in Australia to generically refer to portable coolers or ice boxes and is part of the Australian vernacular, in place of words like "cooler" or "cooler box" and the New Zealand "chilly bin".
The brand name was purchased by the United States firm Coleman Company, (a subsidiary of Newell Brands) in 2009.
History
Some historians have credited Malley's with the invention of the portable ice cooler. According to the company, the Esky was "recognised as the first official portable cooler in the world." The company's own figures claim that, by 1960, 500,000 Australian households owned one (in a country of approximately 3 million households at the time).
The brand "Esky" was used from around 1945, for an Australian-made ice chest, a free-standing insulated cabinet with two compartments: the upper to carry a standard () block of ice, and the lower for food and drinks. It was made in Sydney by Malleys but did not carry their name until around 1949.
The first (metal-cased) portable Esky appeared in 1952, sized to accommodate six bottles of beer or soft drink, as advertised nationally.
By 1965 "esky" (no capital E) was being used in Australian literature for such coolers, and in 1973 Malleys, owners of the tradename, acknowledged that the term had entered the vernacular and was being used for lightweight plastic imitations.
One such brand was Willow, an Australian manufacturer, previously known for domestic "tinware" — buckets, bins, cake tins and oven trays.
Nylex started making the plastic-cased Esky in 1984.
In 1993 Nylex Corporation was still defending their ownership of the "Esky" trademark, but by 2002 they had allowed it to lapse.
Outdoor recreation company Coleman Australia bought the Esky brands from Nylex Ltd after the company went into administration in February 2009, and later that year Coleman was producing most of the Esky lin |
https://en.wikipedia.org/wiki/Medical%20oddity | A medical oddity is an unusual predicament or event which takes place in a medical context. Some examples of medical oddities might include: "lost and found" surgical instruments (in the body), grotesquely oversized tumors, (human) male pregnancy, rare or "orphan" illnesses, rare allergies (such as to water), strange births (extra or missing organs), and bizarre syndromes (such as Capgras delusion).
Medical oddities can also include unusual discoveries in purchased food, such as finding a severed finger or thumb in a hamburger.
Medical oddities are also known as medical curiosities. While not strictly paranormal, they are classically Fortean.
See also
Cabinet of curiosities
Further reading
Books
Gould, George Milbry, Anomalies and Curiosities of Medicine, W. B. Saunders, ©1896, Philadelphia, LC Control Number: 07028696
Jones, Kenneth Lyons, Smith's Recognizable Patterns of Human Malformation, Saunders, ©1997, Philadelphia, LC Control Number: 96016722
Periodicals
Fortean Times
Forteana |
https://en.wikipedia.org/wiki/Glucuronide | A glucuronide, also known as glucuronoside, is any substance produced by linking glucuronic acid to another substance via a glycosidic bond. The glucuronides belong to the glycosides.
Glucuronidation, the conversion of chemical compounds to glucuronides, is a method that animals use to assist in the excretion of toxic substances, drugs or other substances that cannot be used as an energy source. Glucuronic acid is attached via a glycosidic bond to the substance, and the resulting glucuronide, which has a much higher water solubility than the original substance, is eventually excreted by the kidneys.
Enzymes that cleave the glycosidic bond of a glucuronide are called glucuronidases.
Examples
Miquelianin (Quercetin 3-O-glucuronide)
Morphine-6-glucuronide
Scutellarein-7-glucuronide |
https://en.wikipedia.org/wiki/Airline%20teletype%20system | The airline teletype system uses teleprinters, which are electro-mechanical typewriters that can communicate typed messages from point to point through simple electric communications channels, often just pairs of wires. The most modern forms of these devices are fully electronic and use a screen instead of a printer.
Historical development
The airline industry began using teletypewriter technology in the early 1920s utilizing radio stations located at 10 airfields in the United States. The US Post Office and other US government agencies used these radio stations for transmitting telegraph messages. It was during this period that the first federal teletypewriter system was introduced in the United States to allow weather and flight information to be exchanged between air traffic facilities. While the use of physical teletypes is almost extinct, the message formats and switching concepts remain similar. In 1929, Aeronautical Radio Incorporated (ARINC) was formed to manage radio frequencies and license allocation in the United States, as well as to support the radio stations that were used by the emerging airlines, a role ARINC still fulfils today. ARINC is a private company originally owned by many of the world's airlines including American Airlines, Continental Airlines, British Airways, Air France, and SAS; it was acquired by Collins Aerospace in December 2013.
In 1949, the Société Internationale de Télécommunication Aeronautique (SITA) was formed as a cooperative by 11 airlines: Air France, KLM, Sabena, Swissair, TWA, British European Airways, British Overseas Airways Corporation, British South American Airways, Swedish AB Aerotransport, Danish Det Danske Luftfartselskab A/S, and Norwegian Det Norske Luftfartselskap. Their aim was to enable airlines to use the existing communications facilities in the most efficient manner.
Morse code was the general means of relaying information between air communications stations prior to World War II. Generally, |
https://en.wikipedia.org/wiki/Cryptographically%20Generated%20Address | A Cryptographically Generated Address (CGA) is an Internet Protocol Version 6 (IPv6) address that has a host identifier computed from a cryptographic hash function. This procedure is a method for binding a public signature key to an IPv6 address in the Secure Neighbor Discovery Protocol (SEND).
Characteristics
A Cryptographically Generated Address is an IPv6 address whose interface identifier has been generated according to the CGA generation method. The interface identifier is formed by the least-significant 64 bits of an IPv6 address and is used to identify the host's network interface on its subnet. The subnet is determined by the most-significant 64 bits, the subnet prefix.
Apart from the public key that is to be bound to the CGA, the CGA generation method takes several other input parameters including the predefined subnet prefix. These parameters, along with other parameters that are generated during the execution of the CGA generation method, form a set of parameters called the CGA Parameters data structure. The complete set of CGA Parameters has to be known in order to be able to verify the corresponding CGA.
The CGA Parameters data structure consists of:
modifier: a random 128-bit unsigned integer;
subnetPrefix: the 64-bit prefix that defines to which subnet the CGA belongs;
collCount: an 8-bit unsigned integer that must be 0, 1, or 2;
publicKey: the public key as a DER-encoded ASN.1 structure of the type SubjectPublicKeyInfo;
extFields: an optional variable-length field (default length 0).
Additionally, a security parameter Sec determines the CGA's strength against brute-force attacks. This is a 3-bit unsigned integer that can have any value from 0 up to (and including) 7 and is encoded in the three leftmost bits of the CGA's interface identifier. The higher the value of Sec, the higher the level of security, but also the longer it generally takes to generate a CGA. For convenience, the intermediate Sec values in the pseudocode below are assumed t |
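The generation method can be sketched in a few lines. The following is a simplified illustration of the RFC 3972 procedure under stated assumptions (it omits the extension fields, duplicate-address detection, and the collision-count retry loop, and the function name is mine; Sec should be kept small for the Hash2 brute force to terminate quickly):

    import hashlib
    import os

    def cga_interface_id(sec, subnet_prefix, public_key_der, coll_count=0):
        """Simplified CGA interface-identifier generation (RFC 3972 sketch).

        sec: 0..7; subnet_prefix: 8 bytes; public_key_der: DER-encoded
        SubjectPublicKeyInfo; coll_count: 0, 1 or 2.
        """
        assert 0 <= sec <= 7 and len(subnet_prefix) == 8 and coll_count in (0, 1, 2)
        modifier = int.from_bytes(os.urandom(16), "big")
        # Hash2 loop: brute-force the modifier until the 16*sec leftmost bits
        # of SHA-1(modifier | 9 zero bytes | public key) are all zero.
        while sec > 0:
            h2 = hashlib.sha1(modifier.to_bytes(16, "big") + b"\x00" * 9
                              + public_key_der).digest()
            if int.from_bytes(h2[:14], "big") >> (112 - 16 * sec) == 0:
                break
            modifier = (modifier + 1) % (1 << 128)
        # Hash1 over the CGA Parameters data structure; take the leftmost 64 bits.
        params = (modifier.to_bytes(16, "big") + subnet_prefix
                  + bytes([coll_count]) + public_key_der)
        iid = bytearray(hashlib.sha1(params).digest()[:8])
        iid[0] = (sec << 5) | (iid[0] & 0x1C)  # encode Sec, zero the u/g bits
        return bytes(iid)

Each unit increase in Sec multiplies the expected Hash2 work by a factor of 2^16, which is why higher Sec values generally take much longer to generate, as noted above.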
https://en.wikipedia.org/wiki/Secure%20Neighbor%20Discovery | The Secure Neighbor Discovery (SEND) protocol is a security extension of the Neighbor Discovery Protocol (NDP) in IPv6 defined in RFC 3971 and updated by RFC 6494.
The Neighbor Discovery Protocol (NDP) is responsible in IPv6 for discovering other network nodes on the local link, determining the link-layer addresses of other nodes, finding available routers, and maintaining reachability information about the paths to other active neighbor nodes (RFC 4861). NDP is insecure and susceptible to malicious interference. It is the intent of SEND to provide an alternate mechanism for securing NDP with a cryptographic method that is independent of IPsec, the original and inherent method of securing IPv6 communications.
SEND uses Cryptographically Generated Addresses (CGA) and other new NDP options for the ICMPv6 packet types used in NDP.
SEND was updated to use the Resource Public Key Infrastructure (RPKI) by RFC 6494 and RFC 6495 which define use of a SEND Certificate Profile utilizing a modified RFC 6487 RPKI Certificate Profile which must include a single RFC 3779 IP Address Delegation extension.
There have been concerns with algorithm agility vis-à-vis attacks on hash functions used by SEND expressed in RFC 6273, as CGA currently uses the SHA-1 hash algorithm and PKIX certificates and does not provide support for alternative hash algorithms.
Implementations
Cisco IOS 12.4(24)T and newer
Docomo USL SEND fork
Easy-SEND
ipv6-send-cga, Huawei and Beijing University of Posts and Telecommunications
NDprotector, Telecom SudParis
Native SeND kernel API
TrustRouter
USL SEND (discontinued), NTT DoCoMo
WinSEND
See also
Neighbor Discovery Protocol |
https://en.wikipedia.org/wiki/Paper%20craft | Paper craft is a collection of crafts using paper or card as the primary artistic medium for the creation of two or three-dimensional objects. Paper and card stock lend themselves to a wide range of techniques and can be folded, curved, bent, cut, glued, molded, stitched, or layered. Papermaking by hand is also a paper craft.
Paper crafts are known in most societies that use paper, with certain kinds of crafts being particularly associated with specific countries or cultures. In Caribbean countries, paper craft takes forms unique to Caribbean culture, which reflect the importance of native animals in the life of the people.
In addition to the aesthetic value of paper crafts, various forms of paper crafts are used in the education of children. Paper is a relatively inexpensive medium, readily available, and easier to work with than the more complicated media typically used in the creation of three-dimensional artwork, such as ceramics, wood, and metals. It is also neater to work with than paints, dyes, and other coloring materials. Paper crafts may also be used in therapeutic settings, providing children with a safe and uncomplicated creative outlet to express feelings.
Folded paper
The word "paper" derives from papyrus, the name of the ancient material manufactured from beaten reeds in Egypt as far back as the third millennium B.C. Indeed, the earliest known example of "paper folding" is an ancient Egyptian map, drawn on papyrus and folded into rectangular forms like a modern road map. However, it does not appear that intricate paper folding as an art form became possible until the introduction of wood-pulp based papers.
The first Japanese origami is dated from the 6th century A.D. In much of the West, the term origami is used synonymously with paper folding, though the term properly only refers to the art of paper folding in Japan. Other forms of paper folding include Chinese zhezhi (摺紙), Korean jong'i jeopgi (종이접기), and Western paper folding, such as the traditional paper |
https://en.wikipedia.org/wiki/BrickOS | brickOS is an open-source operating system created by Markus Noga as firmware to operate as an alternative software environment for the Lego Mindstorms Robotic Invention System [14]. brickOS is the first open-source software made for Lego Mindstorms robots. It provides C/C++- and Java-based environments for programming the RCX, utilizing the g++ and jack toolchains. It uses a Hitachi H8 cross-compiler and assembler as its primary toolchain.
The operating system comprises demonstration programs implemented in C and C++ and an alternative operating system for the Lego Mindstorms. It provides users with utilities that allow them to download compiled programs and install the operating system on the RCX.
Features
BrickOS was designed and developed using Linux as a replacement for the previous operating system for Lego Mindstorms, also known as LegOS. It can run on Windows and most Unices [6]. It allows for a more flexible and higher-performance system that is much superior to LegOS [3]. The current version's main features include [3]:
BrickOS programs are executed natively instead of as interpreted bytecode [12], as with the standard firmware, which in turn makes programs faster. BrickOS is flexible in controlling outputs [2]; for instance, it can set 255 values of motor speed. Another feature is that brickOS contains the LegOS Network Protocol (LNP), which allows more than one device to communicate. This protocol broadcasts messages to any RCX within the receiving area. By adding layers to the command, the message can be filtered so that it arrives only at the addressed RCX.
BrickOS provides a development environment that allows users to freely implement the provided RCX drivers, such as sensors and motors, using the C or C++ programming languages. It can be used alongside the libre simulators LegoSim and Emmulegos, which provide graphical interfaces to create a virtual machine that eases t |
https://en.wikipedia.org/wiki/Prototype%20Verification%20System | The Prototype Verification System (PVS) is a specification language integrated with support tools and an automated theorem prover, developed at the Computer Science Laboratory of SRI International in Menlo Park, California.
PVS is based on a kernel consisting of an extension of Church's theory of types with dependent types, and is fundamentally a classical typed higher-order logic. The base types include uninterpreted types that may be introduced by the user, and built-in types such as the booleans, integers, reals, and the ordinals. Type-constructors include functions, sets, tuples, records, enumerations, and abstract data types. Predicate subtypes and dependent types can be used to introduce constraints; these constrained types may incur proof obligations (called type-correctness conditions or TCCs) during typechecking. PVS specifications are organized into parameterized theories.
The system is implemented in Common Lisp, and is released under the GNU General Public License (GPL).
See also
Formal methods
List of proof assistants |
https://en.wikipedia.org/wiki/Electrical%20resistivities%20of%20the%20elements%20%28data%20page%29 |
Electrical resistivity |
https://en.wikipedia.org/wiki/Archaic%20smile | The archaic smile was used by sculptors in Archaic Greece, especially in the second quarter of the 6th century BCE, possibly to suggest that their subject was alive and infused with a sense of well-being. One of the most famous examples of the archaic smile is the Kroisos Kouros, and the Peplos Kore is another.
By the middle of the Archaic Period of ancient Greece (roughly 800 BCE to 480 BCE), the art that proliferated contained images of people who had the archaic smile, as evidenced by statues found in excavations all across the Greek mainland, Asia Minor, and on islands in the Aegean Sea. The significance of the convention is not known, although it is often assumed that for the Greeks that kind of smile reflected a state of ideal health and well-being. It has also been suggested that it is simply the result of a technical difficulty in fitting the curved shape of the mouth to the somewhat-blocklike head typical of Archaic sculpture. Richard Neer theorizes that the archaic smile may actually be a marker of status, since aristocrats of multiple cities throughout Greece were referred to as the Geleontes or "smiling ones". There are alternative views to the archaic smile being "flat and quite unnatural looking". John Fowles describes the archaic smile in his novel The Magus as "full of the purest metaphysical good humour [...] timelessly intelligent and timelessly amused. [...] Because a star explodes and a thousand worlds like ours die, we know this world is. That is the smile: that what might not be, is [...] When I die, I shall have this by my bedside. It is the last human face I want to see."
The Greek archaic smile is also found on Etruscan artworks during the same time period nearby on the west side of the Italian peninsula, as consequence of the influence of Greek art on Etruscan art. An example of this commonly featured in art history texts is the Sarcophagus of the Spouses, a terracotta work found in the necropolis of Cerveteri. It features a smiling coup |
https://en.wikipedia.org/wiki/Billiken | The Billiken is a charm doll created by an American art teacher and illustrator, Florence Pretz of Kansas City, Missouri, who is said to have seen the mysterious figure in a dream. It is believed that Pretz found the name Billiken in Bliss Carman's 1896 poem "Mr. Moon: A Song Of The Little People". In 1908, she obtained a design patent on the ornamental design of the Billiken, which she sold to the Billiken Company of Chicago. The Billiken was monkey-like with pointed ears, a mischievous smile and a tuft of hair on his pointed head. His arms were short and he was generally sitting with his legs stretched out in front of him. Billiken is known as "The God of Things as They Ought to Be".
To buy a Billiken was said to give the purchaser luck, but to receive one as a gift would be better luck. The image was copyrighted and a trademark was put on the name. After a few years of popularity, the Billiken faded into obscurity. Although they are similar, the Billiken and the baby-like kewpie figures that debuted in the December 1909 Ladies' Home Journal are not the same.
Today, the Billiken is the official mascot of Saint Louis University and St. Louis University High School, both Jesuit institutions located in St. Louis. The Billiken is also the official mascot of the Royal Order of Jesters, an invitation-only Shriner group affiliated with Freemasonry. The Billiken also became the namesake of Billiken Shokai, the Japanese toy and model manufacturing company (established 1976).
History
The Billiken sprang from the height of the "Mind-Cure" craze in the United States at the start of the twentieth century. It represented the "no worry" ideal, and was a huge hit. Variations appeared, such as the "Teddy-Billiken Doll" and the Billycan/Billycant pair (to drive petty problems away). The Billiken helped touch off the doll craze of the era.
At least two Billiken-themed songs were recorded, including "Billiken Rag" and the "Billiken Man Song." The latter was recorded by Blan |
https://en.wikipedia.org/wiki/0.999... | In mathematics, 0.999... (also written as 0.9 with an overline, 0.9 with a dot over the 9, or 0.(9)) is a notation for the repeating decimal consisting of an unending sequence of 9s after the decimal point. This repeating decimal is a numeral that represents the smallest number no less than every number in the sequence (0.9, 0.99, 0.999, ...); that is, the supremum of this sequence. This number is equal to 1. In other words, "0.999..." is not "almost exactly" or "very, very nearly but not quite" 1; rather, "0.999..." and "1" represent the same number.
There are many ways of showing this equality, from intuitive arguments to mathematically rigorous proofs. The technique used depends on the target audience, background assumptions, historical context, and preferred development of the real numbers, the system within which 0.999... is commonly defined. In other systems, 0.999... can have the same meaning, a different definition, or be undefined.
More generally, every nonzero terminating decimal has two equal representations (for example, 8.32 and 8.31999...), which is a property of all positional numeral system representations regardless of base. The utilitarian preference for the terminating decimal representation contributes to the misconception that it is the only representation. For this and other reasons—such as rigorous proofs relying on non-elementary techniques, properties, or disciplines—some people can find the equality sufficiently counterintuitive that they question or reject it. This has been the subject of several studies in mathematics education.
Elementary proof
There is an elementary proof of the equation , which uses just the mathematical tools of comparison and addition of (finite) decimal numbers, without any reference to more advanced topics such as series, limits, formal construction of real numbers, etc. The proof, given below, is a direct formalization of the intuitive fact that, if one draws 0.9, 0.99, 0.999, etc. on the number line there is no room left for placing a number between |
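For contrast, the standard non-elementary route evaluates the repeating decimal as a geometric series (a sketch of the usual argument, using exactly the tools the elementary proof avoids):

    $$0.999\ldots \,=\, \sum_{k=1}^{\infty} \frac{9}{10^{k}} \,=\, \frac{9/10}{1 - 1/10} \,=\, 1.$$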
https://en.wikipedia.org/wiki/Zitterbewegung | In physics, the zitterbewegung is the theoretical prediction of a rapid oscillatory motion of elementary particles that obey relativistic wave equations. This prediction was first discussed by Gregory Breit in 1928 and later by Erwin Schrödinger in 1930 as a result of analysis of the wave packet solutions of the Dirac equation for relativistic electrons in free space, in which an interference between positive and negative energy states produces an apparent fluctuation (up to the speed of light) of the position of an electron around the median, with an angular frequency of $2 m c^2 / \hbar$, or approximately $1.6 \times 10^{21}$ radians per second.
This apparent oscillatory motion is often interpreted as an artifact of using the Dirac equation in a single particle description and disappears when using quantum field theory. For the hydrogen atom, the zitterbewegung is related to the Darwin term, a small correction of the energy level of the s-orbitals.
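As a quick numerical check of the frequency quoted in the lead, one can recompute it from rounded physical constants (a back-of-the-envelope sketch):

    # Zitterbewegung angular frequency of a free electron: omega = 2*m*c**2 / hbar.
    M_E = 9.109e-31    # electron mass, kg
    C = 2.998e8        # speed of light, m/s
    HBAR = 1.055e-34   # reduced Planck constant, J*s

    omega = 2 * M_E * C ** 2 / HBAR
    print(f"{omega:.2e} rad/s")  # roughly 1.6e21 rad/s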
Theory
Free spin-1/2 fermion
The time-dependent Dirac equation is written as
$H \psi(\mathbf{x}, t) = i \hbar \frac{\partial \psi}{\partial t}(\mathbf{x}, t)$,
where $\hbar$ is the reduced Planck constant, $\psi$ is the wave function (bispinor) of a spin-½ fermionic particle, and $H$ is the Dirac Hamiltonian of a free particle:
$H = \beta m c^2 + c \, \boldsymbol{\alpha} \cdot \mathbf{p}$,
where $m$ is the mass of the particle, $c$ is the speed of light, $\mathbf{p}$ is the momentum operator, and $\beta$ and $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \alpha_3)$ are matrices related to the gamma matrices $\gamma^\mu$, as $\beta = \gamma^0$ and $\alpha_k = \gamma^0 \gamma^k$.
In the Heisenberg picture, the time dependence of an arbitrary observable $Q$ obeys the equation
$\frac{dQ(t)}{dt} = \frac{i}{\hbar} \left[ H, Q(t) \right].$
In particular, the time-dependence of the position operator is given by
$\frac{d x_k(t)}{dt} = \frac{i}{\hbar} \left[ H, x_k(t) \right] = c \alpha_k(t)$,
where $x_k(t)$ is the position operator at time $t$.
The above equation shows that the operator $c \alpha_k$ can be interpreted as the $k$-th component of a "velocity operator".
Note that this implies that
$\left( c \alpha_k \right)^2 = c^2$,
as if the "root mean square speed" in every direction of space is the speed of light.
To add time-dependence to $\alpha_k$, one implements the Heisenberg picture, which says
$\alpha_k(t) = e^{i H t / \hbar} \, \alpha_k \, e^{-i H t / \hbar}$.
The time-dependence of the velocity operator is given by
$\frac{d \alpha_k(t)}{dt} = \frac{i}{\hbar} \left[ H, \alpha_k(t) \right] = \frac{2i}{\hbar} \left( c p_k - \alpha_k(t) H \right)$,
where $p_k$ is the $k$-th component of the momentum operator; the second equality uses the anticommutation relation $\alpha_k H + H \alpha_k = 2 c p_k$.
Now, because both $p_k$ and $H$ are time-independent, the above equation can easily |
https://en.wikipedia.org/wiki/Snake-in-the-box | The snake-in-the-box problem in graph theory and computer science deals with finding a certain kind of path along the edges of a hypercube. This path starts at one corner and travels along the edges to as many corners as it can reach. After it gets to a new corner, the previous corner and all of its neighbors must be marked as unusable. The path should never travel to a corner which has been marked unusable.
In other words, a snake is a connected open path in the hypercube where each node in the path, with the exception of the head (start) and the tail (finish), has exactly two neighbors that are also in the snake. The head and the tail each have only one neighbor in the snake. The rule for generating a snake is that a node in the hypercube may be visited if it is connected to the current node and it is not a neighbor of any previously visited node in the snake, other than the current node.
In graph theory terminology, this is called finding the longest possible induced path in a hypercube; it can be viewed as a special case of the induced subgraph isomorphism problem. There is a similar problem of finding long induced cycles in hypercubes, called the coil-in-the-box problem.
The snake-in-the-box problem was first described by , motivated by the theory of error-correcting codes. The vertices of a solution to the snake or coil in the box problems can be used as a Gray code that can detect single-bit errors. Such codes have applications in electrical engineering, coding theory, and computer network topologies. In these applications, it is important to devise as long a code as is possible for a given dimension of hypercube. The longer the code, the more effective are its capabilities.
Finding the longest snake or coil becomes notoriously difficult as the dimension number increases and the search space suffers a serious combinatorial explosion. Some techniques for determining the upper and lower bounds for the snake-in-the-box problem include proofs usin |
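The definition above translates directly into a brute-force depth-first search. The following sketch (names and structure are my own) enforces the visiting rule by blocking the old head's remaining neighbors at every step; it is practical only for small dimensions, given the combinatorial explosion just described:

    def longest_snake(d):
        # Vertices of the d-cube are bitmasks 0 .. 2**d - 1; two vertices are
        # adjacent exactly when they differ in one bit.
        neighbors = [[v ^ (1 << i) for i in range(d)] for v in range(1 << d)]
        best = []

        def dfs(path, blocked):
            nonlocal best
            if len(path) > len(best):
                best = path[:]
            head = path[-1]
            for nxt in neighbors[head]:
                if blocked[nxt]:
                    continue
                # Moving head -> nxt: the old head's other free neighbors become
                # unusable, since they would touch a non-head vertex of the snake.
                newly = [u for u in neighbors[head] if not blocked[u] and u != nxt]
                blocked[nxt] = True
                for u in newly:
                    blocked[u] = True
                path.append(nxt)
                dfs(path, blocked)
                path.pop()
                for u in newly:
                    blocked[u] = False
                blocked[nxt] = False

        blocked = [False] * (1 << d)
        blocked[0] = True  # start at vertex 0; valid by the hypercube's symmetry
        dfs([0], blocked)
        return best  # len(best) - 1 is the snake length counted in edges

For instance, longest_snake(3) returns a 5-vertex path, i.e. the known maximum snake of length 4 edges for the 3-cube.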
https://en.wikipedia.org/wiki/Achondrogenesis | Achondrogenesis is a number of disorders that are the most severe form of congenital chondrodysplasia (malformation of bones and cartilage). These conditions are characterized by a small body, short limbs, and other skeletal abnormalities. As a result of their serious health problems, infants with achondrogenesis are usually born prematurely, are stillborn, or die shortly after birth from respiratory failure. Some infants, however, have lived for a while with intensive medical support.
Researchers have described at least three forms of achondrogenesis, designated as Achondrogenesis type 1A, achondrogenesis type 1B and achondrogenesis type 2. These types are distinguished by their signs and symptoms, inheritance pattern, and genetic cause. Other types of achondrogenesis may exist, but they have not been characterized or their cause is unknown.
Achondrogenesis type 1A is caused by a defect in the microtubules of the Golgi apparatus. In mice, a nonsense mutation in the thyroid hormone receptor interactor 11 gene (Trip11), which encodes the Golgi microtubule-associated protein 210 (GMAP-210), resulted in defects similar to the human disease. When their DNA was sequenced, human patients with achondrogenesis type 1A also had loss-of-function mutations in GMAP-210. GMAP-210 moves proteins from the endoplasmic reticulum to the Golgi apparatus. Because of the defect, GMAP-210 is not able to move the proteins, and they remain in the endoplasmic reticulum, which swells up. The loss of Golgi apparatus function affects some cells, such as those responsible for forming bone and cartilage, more than others.
Achondrogenesis type 1B is caused by a similar mutation in SLC26A2, which encodes a sulfate transporter.
Achondrogenesis, type 2 is one of several skeletal disorders caused by mutations in the COL2A1 gene. Achondrogenesis, type 2 and hypochondrogenesis (a similar skeletal disorder) together affect 1 in 40,000 to 60,000 births. |
https://en.wikipedia.org/wiki/Epson%20QX-10 | The Epson QX-10 is a microcomputer running CP/M or TPM-III (CP/M-80 compatible) which was introduced in 1983. It was based on a Zilog Z80 microprocessor, running at 4 MHz, provided up to 256 KB of RAM organized in four switchable banks, and included a separate graphics processor chip (µPD7220) manufactured by NEC to provide advanced graphics capabilities. In the USA and Canada, two versions were launched: a basic CP/M configuration with 64 KB RAM and the HASCI configuration with 256 KB RAM and the special HASCI keyboard to be used with the bundled application suite, called Valdocs. TPM-III was used for Valdocs and some copy-protected programs like Logo Professor. The European and Japanese versions were CP/M configurations with 256 KB RAM and a graphical Basic interpreter.
The machine had internal extension slots, which could be used for extra serial ports, network cards or third party extensions like an Intel 8088 processor, adding MS-DOS compatibility.
Rising Star Industries was the primary American software vendor for the HASCI QX series. Its product line included the TPM-II and TPM-III operating systems; Valdocs; a robust BASIC language implementation; a graphics API library used by a variety of products, which initially supported line drawing and fill functions and was later extended to support the QX-16 color boards; a Z80 assembler; and a low-level Zapple machine code monitor that could be invoked from a DIP switch setting on the rear of the machine.
QX-11
The "Abacus" is a IBM PC compatible machine released in 1985 booting MS-DOS 2.11 from 64 KB ROM. It has a Intel 8086-2 CPU at 8 MHz, 128 to 512 KB of RAM and two 3½" floppy drives (360 KB format). The sound chip has 3 sound tones plus one noise channel with 16 independent volume levels, graphics are 640x400 and the joystick ports are Atari-2600 compatible. There was also support for custom ROM cartridges.
QX-16
Its successor, the dual-processor QX-16, added a 16-bit Intel processor with Color Graphics Adapter |
https://en.wikipedia.org/wiki/Ecological%20stability | In ecology, an ecosystem is said to possess ecological stability (or equilibrium) if it is capable of returning to its equilibrium state after a perturbation (a capacity known as resilience) or does not experience unexpected large changes in its characteristics across time. Although the terms community stability and ecological stability are sometimes used interchangeably, community stability refers only to the characteristics of communities. It is possible for an ecosystem or a community to be stable in some of their properties and unstable in others. For example, a vegetation community in response to a drought might conserve biomass but lose biodiversity.
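The resilience component of this definition is easy to demonstrate numerically. The following minimal sketch is purely illustrative: the single-species logistic model and all parameter values are assumptions of this example, not taken from the article.

```python
# Resilience illustrated with a logistic model: a population perturbed away
# from its equilibrium (the carrying capacity K) relaxes back toward it.
r, K = 0.5, 100.0   # assumed intrinsic growth rate and carrying capacity
n = 60.0            # population just after a perturbation (equilibrium is K)
dt = 0.1            # Euler time step for dn/dt = r * n * (1 - n / K)
for _ in range(200):
    n += r * n * (1.0 - n / K) * dt
print(round(n, 2))  # ~100.0: the system has returned to its equilibrium state
```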
Stable ecological systems abound in nature, and the scientific literature has documented them to a great extent. Scientific studies mainly describe grassland plant communities and microbial communities. Nevertheless, not every community or ecosystem in nature is stable (for example, wolves and moose on Isle Royale). Noise also plays an important role in biological systems and, in some scenarios, can fully determine their temporal dynamics.
The concept of ecological stability emerged in the first half of the 20th century. With the advancement of theoretical ecology in the 1970s, the usage of the term has expanded to a wide variety of scenarios. This overuse of the term has led to controversy over its definition and implementation.
In 1997, Grimm and Wissel made an inventory of 167 definitions used in the literature and found 70 different stability concepts. One strategy these two authors proposed to clarify the subject is to replace "ecological stability" with more specific terms, such as constancy, resilience and persistence. To fully describe and give meaning to a specific kind of stability, it must be examined more carefully; otherwise statements made about stability will have little to no reliability, because they would not have information to back
https://en.wikipedia.org/wiki/Fermat%27s%20theorem%20on%20sums%20of%20two%20squares | In additive number theory, Fermat's theorem on sums of two squares states that an odd prime p can be expressed as p = x² + y², with x and y integers, if and only if p ≡ 1 (mod 4).
The prime numbers for which this is true are called Pythagorean primes.
For example, the primes 5, 13, 17, 29, 37 and 41 are all congruent to 1 modulo 4, and they can be expressed as sums of two squares in the following ways: 5 = 1² + 2², 13 = 2² + 3², 17 = 1² + 4², 29 = 2² + 5², 37 = 1² + 6², 41 = 4² + 5².
On the other hand, the primes 3, 7, 11, 19, 23 and 31 are all congruent to 3 modulo 4, and none of them can be expressed as the sum of two squares. This is the easier part of the theorem, and follows immediately from the observation that all squares are congruent to 0 or 1 modulo 4.
Since the Diophantus identity implies that the product of two integers each of which can be written as the sum of two squares is itself expressible as the sum of two squares, by applying Fermat's theorem to the prime factorization of any positive integer n, we see that if all the prime factors of n congruent to 3 modulo 4 occur to an even exponent, then n is expressible as a sum of two squares. The converse also holds. This generalization of Fermat's theorem is known as the sum of two squares theorem.
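A short brute-force check makes both the decomposition and the congruence condition concrete. This is a naive illustrative sketch suitable only for small inputs; the function name and search strategy are choices of this example, not a method from the article.

```python
from math import isqrt

def two_squares(p):
    """Return (x, y) with x*x + y*y == p, or None if no such pair exists."""
    for x in range(isqrt(p) + 1):
        y_squared = p - x * x
        y = isqrt(y_squared)
        if y * y == y_squared:
            return (x, y)
    return None

# Odd primes congruent to 1 mod 4 decompose; those congruent to 3 mod 4 never do.
for p in [5, 13, 17, 29, 37, 41]:
    print(p, p % 4, two_squares(p))   # e.g. 5 -> (1, 2), 13 -> (2, 3)
for p in [3, 7, 11, 19, 23, 31]:
    print(p, p % 4, two_squares(p))   # all None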
History
Albert Girard was the first to make the observation, characterizing the positive integers (not necessarily primes) that are expressible as the sum of two squares of positive integers; this was published in 1625. The statement that every prime p of the form 4n+1 is the sum of two squares is sometimes called Girard's theorem. For his part, Fermat wrote an elaborate version of the statement (in which he also gave the number of possible expressions of the powers of p as a sum of two squares) in a letter to Marin Mersenne dated December 25, 1640: for this reason this version of the theorem is sometimes called Fermat's Christmas theorem.
Gaussian primes
Fermat's theorem on sums of two squares is strongly related with the theory of Gaussian primes.
A Gaussian integer is a complex number a + bi such that a and b are
https://en.wikipedia.org/wiki/Modified%20atmosphere | Modified atmosphere packaging (MAP) is the practice of modifying the composition of the internal atmosphere of a package (commonly food packages, drugs, etc.) in order to improve the shelf life. The need for this technology for food arises from the short shelf life of food products such as meat, fish, poultry, and dairy in the presence of oxygen. In food, oxygen is readily available for lipid oxidation reactions. Oxygen also helps maintain high respiration rates of fresh produce, which contribute to shortened shelf life. From a microbiological aspect, oxygen encourages the growth of aerobic spoilage microorganisms. Therefore, the reduction of oxygen and its replacement with other gases can reduce or delay oxidation reactions and microbiological spoilage. Oxygen scavengers may also be used to reduce browning due to lipid oxidation by halting the auto-oxidative chemical process. In addition, MAP changes the gaseous atmosphere by incorporating different compositions of gases.
The modification process generally lowers the amount of oxygen (O2) in the headspace of the package. Oxygen can be replaced with nitrogen (N2), a comparatively inert gas, or carbon dioxide (CO2).
A stable atmosphere of gases inside the packaging can be achieved using active techniques, such as gas flushing and compensated vacuum, or passively by designing “breathable” films.
History
The first recorded beneficial effects of using modified atmosphere date back to 1821. Jacques Étienne Bérard, a professor at the School of Pharmacy in Montpellier, France, reported delayed ripening of fruit and increased shelf life in low-oxygen storage conditions. Controlled atmosphere storage (CAS) was used from the 1930s when ships transporting fresh apples and pears had high levels of CO2 in their holding rooms in order to increase the shelf life of the product. In the 1970s MA packages reached the stores when bacon and fish were sold in retail packs in Mexico. Since then development has been continuous and interes |
https://en.wikipedia.org/wiki/Neurosarcoidosis | Neurosarcoidosis (sometimes shortened to neurosarcoid) refers to a type of sarcoidosis, a condition of unknown cause featuring granulomas in various tissues, in this type involving the central nervous system (brain and spinal cord). Neurosarcoidosis can have many manifestations, but abnormalities of the cranial nerves (a group of twelve nerves supplying the head and neck area) are the most common. It may develop acutely, subacutely, or chronically. Approximately 5–10 percent of people with sarcoidosis of other organs (e.g. lung) develop central nervous system involvement. Only 1 percent of people with sarcoidosis will have neurosarcoidosis alone without involvement of any other organs. Diagnosis can be difficult, with no test apart from biopsy achieving a high accuracy rate. Treatment is with immunosuppression. The first case of sarcoidosis involving the nervous system was reported in 1905.
Signs and symptoms
Neurological
Abnormalities of the cranial nerves are present in 50–70 percent of cases. The most common abnormality is involvement of the facial nerve, which may lead to reduced power on one or both sides of the face (65 and 35 percent, respectively, of all cranial nerve cases), followed by reduction in visual perception due to optic nerve involvement. Rarer symptoms are double vision (oculomotor nerve, trochlear nerve or abducens nerve), decreased sensation of the face (trigeminal nerve), hearing loss or vertigo (vestibulocochlear nerve), swallowing problems (glossopharyngeal nerve) and weakness of the shoulder muscles (accessory nerve) or the tongue (hypoglossal nerve). Visual problems may also be the result of papilledema (swelling of the optic disc) due to obstruction by granulomas of the normal cerebrospinal fluid (CSF) circulation.
Seizures (mostly of the tonic-clonic/"grand mal" type) are present in about 15 percent and may be the presenting phenomenon in 10 percent.
Meningitis (inflammation of the lining of the brain) occurs in 3–26 percent of cases. Sym |
https://en.wikipedia.org/wiki/October%20Sky%20%28book%29 | October Sky is the first memoir in a series of four by American engineer Homer Hickam Jr., originally published in 1998 as Rocket Boys. Later editions were published under the title October Sky as a tie-in to the 1999 film adaptation.
It is a story of growing up in a West Virginia coal mining town and of a boy's pursuit of amateur rocketry there. The book won the W.D. Weatherford Award in 1998, the year of its release. Today, it is one of the most frequently chosen community/library reads in the United States. It is also studied in many school systems around the world. October Sky was followed by The Coalwood Way (2000), Sky of Stone (2002), and Carrying Albert Home (2015).
Rocket Boys was made into a film in 1999, titled October Sky (an anagram of "Rocket Boys"). The book was then re-published as October Sky shortly afterwards.
Plot summary
Homer "Sonny" Hickam Jr. lives in a small coal mining town in West Virginia named Coalwood. Sonny, after seeing the Russian satellite Sputnik, decides to join the American team of rocket engineers called the Missile Agency when he graduates from school. (Note: In the book Rocket Boys, the main character is always called Sonny. In the movie October Sky, he is called Homer.) Sonny's older brother, Jim Hickam, excels at football and expects to get a college football scholarship. Sonny, however, is terrible at sports and has no special skill that would get him "out of Coalwood". Sonny's mother is afraid that he will have to work in the mines after high school.
Sonny's first attempt at rocketry (which occurred when he was 14) consists of a flashlight tube and model airplane body as a casing. It is fueled by flash powder from old cherry bombs. It explodes violently, destroying his mother's fence. After that, Sonny enlists the help of Quentin Wilson, Roy Lee Cooke, Sherman Siers, Jimmy "O'Dell" Carroll, and Billy Rose to help build rockets while forming the BCMA (Big Creek Missile Agency). Their first real rocket, powered by black powder |
https://en.wikipedia.org/wiki/Virtual%20Private%20LAN%20Service | Virtual Private LAN Service (VPLS) is a way to provide Ethernet-based multipoint-to-multipoint communication over IP or MPLS networks. It allows geographically dispersed sites to share an Ethernet broadcast domain by connecting sites through pseudowires; a site may contain multiple servers and clients. The technologies that can be used as the pseudowire include Ethernet over MPLS, L2TPv3, or even GRE. There are two IETF standards-track RFCs (RFC 4761 and RFC 4762) describing VPLS establishment.
VPLS is a virtual private network (VPN) technology. In contrast to L2TPv3, which allows only point-to-point layer 2 tunnels, VPLS allows any-to-any (multipoint) connectivity.
In a VPLS, the local area network (LAN) at each site is extended to the edge of the provider network. The provider network then emulates a switch or bridge to connect all of the customer LANs to create a single bridged LAN.
VPLS is designed for applications that require multipoint or broadcast access.
Mesh establishment
Since VPLS emulates a LAN, full mesh connectivity is required. There are two methods for full mesh establishment for VPLS: using Border Gateway Protocol (BGP) and using Label Distribution Protocol (LDP). The "control plane" is the means by which provider edge (PE) routers communicate for auto-discovery and signalling. Auto-discovery refers to the process of finding other PE routers participating in the same VPN or VPLS. Signalling is the process of establishing pseudowires (PW). The PWs constitute the "data plane", whereby PEs send customer VPN/VPLS traffic to other PEs.
BGP provides both auto-discovery and signalling. The mechanisms used are very similar to those used in establishing Layer-3 MPLS VPNs. Each PE is configured to participate in a given VPLS. The PE, through the use of BGP, simultaneously discovers all other PEs in the same VPLS, and establishes a full mesh of pseudowires to those PEs.
With LDP, each PE router must be configured to participate in |
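The scaling difference between the two approaches is easy to quantify. The sketch below only illustrates the counting argument; the function name and the PE counts are illustrative assumptions, not part of either protocol.

```python
def full_mesh_pseudowires(n_pe):
    """A full mesh of N PE routers needs N * (N - 1) / 2 bidirectional pseudowires."""
    return n_pe * (n_pe - 1) // 2

# With LDP every existing PE must be reconfigured when a new PE joins, whereas
# BGP-based auto-discovery requires configuring only the new PE itself.
for n in (4, 10, 50):
    print(f"{n} PEs: {full_mesh_pseudowires(n)} pseudowires in the full mesh, "
          f"{n - 1} of them terminating on the newest PE")
```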
https://en.wikipedia.org/wiki/Weasel%20program | The weasel program or Dawkins' weasel is a thought experiment and a variety of computer simulations illustrating it. Their aim is to demonstrate that the process that drives evolutionary systems—random variation combined with non-random cumulative selection—is different from pure chance.
The thought experiment was formulated by Richard Dawkins, and the first simulation written by him; various other implementations of the program have been written by others.
Overview
In chapter 3 of his book The Blind Watchmaker, Dawkins gave the following introduction to the program, referencing the well-known infinite monkey theorem:
The scenario is staged to produce a string of gibberish letters, assuming that the selection of each letter in a sequence of 28 characters will be random. The number of possible combinations in this random sequence is 27²⁸, or about 10⁴⁰, so the probability that the monkey will produce a given sequence is extremely low. Any particular sequence of 28 characters could be selected as a "target" phrase, all equally as improbable as Dawkins's chosen target, "METHINKS IT IS LIKE A WEASEL".
A computer program could be written to carry out the actions of Dawkins's hypothetical monkey, continuously generating combinations of 26 letters and spaces at high speed. Even at the rate of millions of combinations per second, it is unlikely, even given the entire lifetime of the universe to run, that the program would ever produce the phrase "METHINKS IT IS LIKE A WEASEL".
Dawkins intends this example to illustrate a common misunderstanding of evolutionary change, i.e. that DNA sequences or organic compounds such as proteins are the result of atoms randomly combining to form more complex structures. In these types of computations, any sequence of amino acids in a protein will be extraordinarily improbable (this is known as Hoyle's fallacy). Rather, evolution proceeds by hill climbing, as in adaptive landscapes.
Dawkins then goes on to show that a process of cumu |
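Dawkins did not publish his original source code, so the following is a common reconstruction of the weasel algorithm rather than his exact program; the population size of 100 and the 5 percent per-character mutation rate are assumptions of this sketch, and keeping the parent among the candidates is an implementation choice that guarantees the best score never decreases.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "   # 27 characters: 26 letters plus space

def score(candidate):
    """Count positions at which the candidate matches the target phrase."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    """Copy the parent, replacing each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generations = 0
while best != TARGET:
    generations += 1
    # Cumulative selection: breed 100 mutated copies and keep the fittest.
    best = max([mutate(best) for _ in range(100)] + [best], key=score)
    print(f"{generations:4d}  {best}")
```

Typical runs reach the target in well under a hundred generations, in stark contrast to the roughly 10⁴⁰ attempts that single-step random typing would require.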