id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
3,138,849 | https://en.wikipedia.org/wiki/Cleveland%3A%20Now%21 | Cleveland: Now! was a public and private funding program for the rehabilitation of neighborhoods in Cleveland, Ohio initiated by Mayor Carl B. Stokes on May 1, 1968. Local businesses agreed to cooperate with the Stokes administration on the program "to combat the ills of Cleveland's inner city in order to preserve racial peace." The aim of Cleveland: Now! was to "raise $1.5 billion over 10 years with $177 million projected during the first 2 years to fund youth activities and employment, community centers, health-clinic facilities, housing units, and economic renewal projects."
The program's funding aims were quickly met within the first few months of its initiation. However, on July 23, 1968, the Glenville Shootout occurred. Subsequent revelations found that Fred "Ahmed" Evans, one of the major instigators in the incident, had indirectly received some $6,000 in funds from the program. Donations declined. Stokes nevertheless won reelection to a second term in 1969. The Now! program continued to operate actively until 1970. Stokes announced that its "last major commitment would be the funding of 4 new community centers." The organization was not formally dissolved until George V. Voinovich assumed office in 1980. The city donated the remaining $220,000 to the Cleveland Foundation to "use for youth employment and low-income housing."
References
Further reading
External links
Carl & Louis Stokes Making History, Western Reserve Historical Society
History of Cleveland
Urban planning | Cleveland: Now! | [
"Engineering"
] | 299 | [
"Urban planning",
"Architecture"
] |
3,139,577 | https://en.wikipedia.org/wiki/Jet%20group | In mathematics, a jet group is a generalization of the general linear group which applies to Taylor polynomials instead of vectors at a point. A jet group is a group of jets that describes how a Taylor polynomial transforms under changes of coordinate systems (or, equivalently, diffeomorphisms).
Overview
The k-th order jet group Gnk consists of jets of smooth diffeomorphisms φ: Rn → Rn such that φ(0)=0.
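As a concrete illustration (not taken from the article; it uses the standard identification of a jet with a truncated Taylor polynomial), the second-order jet group in one variable can be written out explicitly. An element is the 2-jet at 0 of a diffeomorphism φ(x) = a1x + a2x² + O(x³) with a1 ≠ 0, and composing two such jets means substituting one truncated polynomial into the other and discarding terms above order two:

(a_1, a_2)\cdot(b_1, b_2) \;=\; (a_1 b_1,\; a_1 b_2 + a_2 b_1^2), \qquad a_1, b_1 \neq 0.

The first component recovers the general linear group GL(1, R), while the higher-order coefficients form the nilpotent part, in line with the semidirect product description given further below.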
The following is a more precise definition of the jet group.
Let k ≥ 2. The differential of a function f: Rk → R can be interpreted as a section of the cotangent bundle of Rk given by df: Rk → T*Rk. Similarly, derivatives of order up to m are sections of the jet bundle Jm(Rk) = Rk × W, where
Here R* is the dual vector space to R, and Si denotes the i-th symmetric power. A smooth function f: Rk → R has a prolongation jmf: Rk → Jm(Rk) defined at each point p ∈ Rk by placing the i-th partials of f at p in the Si((R*)k) component of W.
Consider a point p = (x, x′) ∈ Jm(Rk). There is a unique polynomial fp in k variables and of order m such that p is in the image of jmfp. That is, jmfp(x) = p. The differential data x′ may be transferred to lie over another point y ∈ Rn as jmfp(y), the partials of fp over y.
Provide Jm(Rn) with a group structure by taking
With this group structure, Jm(Rn) is a Carnot group of class m + 1.
Because of the properties of jets under function composition, Gnk is a Lie group. The jet group is a semidirect product of the general linear group and a connected, simply connected nilpotent Lie group. It is also in fact an algebraic group, since the composition involves only polynomial operations.
Notes
References
Lie groups | Jet group | [
"Mathematics"
] | 429 | [
"Algebra stubs",
"Mathematical structures",
"Lie groups",
"Algebraic structures",
"Algebra"
] |
3,139,692 | https://en.wikipedia.org/wiki/Lamb%E2%80%93Oseen%20vortex | In fluid dynamics, the Lamb–Oseen vortex models a line vortex that decays due to viscosity. This vortex is named after Horace Lamb and Carl Wilhelm Oseen.
Mathematical description
Oseen looked for a solution of the Navier–Stokes equations in cylindrical coordinates (r, θ, z) with velocity components of the form

v_r = 0, \qquad v_\theta = \frac{\Gamma}{2\pi r}\, g(r, t), \qquad v_z = 0,

where Γ is the circulation of the vortex core. The Navier–Stokes equations then reduce to

\frac{\partial g}{\partial t} = \nu \left( \frac{\partial^2 g}{\partial r^2} - \frac{1}{r}\frac{\partial g}{\partial r} \right),

which, subject to the conditions that the solution is regular at r = 0 and that g becomes unity as r → ∞, leads to

g(r, t) = 1 - \exp\!\left( -\frac{r^2}{4\nu t} \right), \qquad v_\theta(r, t) = \frac{\Gamma}{2\pi r}\left[ 1 - \exp\!\left( -\frac{r^2}{4\nu t} \right) \right],

where ν is the kinematic viscosity of the fluid. At t = 0, we have a potential vortex with concentrated vorticity at the axis; this vorticity diffuses away as time passes.
The only non-zero vorticity component is in the z direction, given by

\omega_z(r, t) = \frac{\Gamma}{4\pi\nu t} \exp\!\left( -\frac{r^2}{4\nu t} \right).

The pressure field simply ensures that the vortex rotates in the circumferential direction, providing the centripetal force:

\frac{\partial p}{\partial r} = \frac{\rho\, v_\theta^2}{r},

where ρ is the constant density.
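The following is a minimal numerical sketch of the profiles above (not part of the article; the function names and parameter values are illustrative assumptions):

import numpy as np

def lamb_oseen_velocity(r, t, gamma=1.0, nu=1.0e-3):
    """Azimuthal velocity v_theta(r, t) of a Lamb-Oseen vortex."""
    r = np.asarray(r, dtype=float)
    return gamma / (2.0 * np.pi * r) * (1.0 - np.exp(-r**2 / (4.0 * nu * t)))

def lamb_oseen_vorticity(r, t, gamma=1.0, nu=1.0e-3):
    """Axial vorticity omega_z(r, t): a spreading Gaussian carrying total circulation gamma."""
    r = np.asarray(r, dtype=float)
    return gamma / (4.0 * np.pi * nu * t) * np.exp(-r**2 / (4.0 * nu * t))

# Example: the core radius (radius of maximum swirl) grows roughly like sqrt(nu * t).
r = np.linspace(0.01, 1.0, 500)
for t in (1.0, 10.0, 100.0):
    print(t, r[np.argmax(lamb_oseen_velocity(r, t))])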
Generalized Oseen vortex
The generalized Oseen vortex may be obtained by looking for solutions of the form
that leads to the equation
Self-similar solution exists for the coordinate , provided , where is a constant, in which case . The solution for may be written according to Rott (1958) as
where is an arbitrary constant. For , the classical Lamb–Oseen vortex is recovered. The case corresponds to the axisymmetric stagnation point flow, where is a constant. When , , a Burgers vortex is obtained. For arbitrary , the solution becomes , where is an arbitrary constant. As , the Burgers vortex is recovered.
See also
The Rankine vortex and Kaufmann (Scully) vortex are common simplified approximations for a viscous vortex.
References
Vortices
Equations of fluid dynamics | Lamb–Oseen vortex | [
"Physics",
"Chemistry",
"Mathematics"
] | 343 | [
"Equations of fluid dynamics",
"Equations of physics",
"Vortices",
"Fluid dynamics",
"Dynamical systems"
] |
3,139,737 | https://en.wikipedia.org/wiki/Batchelor%20vortex | In fluid dynamics, Batchelor vortices, first described by George Batchelor in a 1964 article, have been found useful in analyses of airplane vortex wake hazard problems.
The model
The Batchelor vortex is an approximate solution to the Navier–Stokes equations obtained using a boundary layer approximation. The physical reasoning behind this approximation is the assumption that the axial gradient of the flow field of interest is of much smaller magnitude than the radial gradient.
The axial, radial and azimuthal velocity components of the vortex are denoted , and respectively and can be represented in cylindrical coordinates as follows:
The parameters in the above equations are
, the free-stream axial velocity,
, the velocity scale (used for nondimensionalization),
, the length scale (used for nondimensionalization),
, a measure of the core size, with initial core size and representing viscosity,
, the swirl strength, given as a ratio between the maximum tangential velocity and the core velocity.
Note that the radial component of the velocity is zero and that the axial and azimuthal components depend only on .
We now write the system above in dimensionless form by scaling time by a factor . Using the same symbols for the dimensionless variables, the Batchelor vortex can be expressed in terms of the dimensionless variables as
where denotes the free stream axial velocity and is the Reynolds number.
If one lets and considers an infinitely large swirl number then the Batchelor vortex simplifies to the Lamb–Oseen vortex for the azimuthal velocity:
where is the circulation.
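The dimensionless profiles are often quoted in the form of the so-called q-vortex. The sketch below assumes that standard form (axial velocity a + exp(−r²), swirl q(1 − exp(−r²))/r); this is an assumption made for illustration and may differ in notation from the expressions omitted above.

import numpy as np

def batchelor_q_vortex(r, a=0.0, q=1.0):
    """Dimensionless axial and azimuthal velocities of a q-vortex (Batchelor vortex model).

    a plays the role of the free-stream axial velocity parameter, q the swirl strength.
    """
    r = np.asarray(r, dtype=float)
    u_axial = a + np.exp(-r**2)
    u_swirl = q * (1.0 - np.exp(-r**2)) / r
    return u_axial, u_swirl

# Example: the swirl velocity peaks near r ~ 1.12, the dimensionless core radius.
r = np.linspace(0.05, 4.0, 400)
u_axial, u_swirl = batchelor_q_vortex(r, a=0.2, q=0.8)
print(r[np.argmax(u_swirl)])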
References
External links
Continuous spectra of the Batchelor vortex (Authored by Xueri Mao and Spencer Sherwin and published by Imperial College London)
Equations of fluid dynamics
Vortices
Fluid dynamics | Batchelor vortex | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 358 | [
"Equations of fluid dynamics",
"Equations of physics",
"Vortices",
"Dynamical systems",
"Chemical engineering",
"Piping",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
3,139,931 | https://en.wikipedia.org/wiki/Silver%28II%29%20fluoride | Silver(II) fluoride is a chemical compound with the formula AgF2. It is a rare example of a silver(II) compound - silver usually exists in its +1 oxidation state. It is used as a fluorinating agent.
Preparation
AgF2 can be synthesized by fluorinating Ag2O with elemental fluorine. Also, at 200 °C (473 K) elemental fluorine will react with AgF or AgCl to produce AgF2.
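The fluorinations of AgF and AgCl just described correspond to the following idealized equations (balanced here for illustration; the article itself does not give them):

2 AgF + F2 → 2 AgF2
2 AgCl + 2 F2 → 2 AgF2 + Cl2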
As a strong fluorinating agent, AgF2 should be stored in Teflon or a passivated metal container. It is light sensitive.
AgF2 can be purchased from various suppliers, the demand being less than 100 kg/year. While laboratory experiments find use for AgF2, it is too expensive for large-scale industrial use. In 1993, AgF2 cost between 1,000 and 1,400 US dollars per kilogram.
Composition and structure
AgF2 is a white crystalline powder, but it is usually black/brown due to impurities. The F/Ag ratio for most samples is < 2, typically approaching 1.75, owing to contamination with silver metal, oxides, and carbon.
For some time, it was doubted that silver was actually in the +2 oxidation state, rather than some combination of states such as AgI[AgIIIF4], which would be similar to silver(I,III) oxide. Neutron diffraction studies, however, confirmed its description as silver(II). The AgI[AgIIIF4] was found to be present at high temperatures, but it was unstable with respect to AgF2.
In the gas phase, AgF2 is believed to have D∞h symmetry. The crystal structure of AgF2 was determined by single-crystal X-ray diffraction.
Approximately 14 kcal/mol (59 kJ/mol) separate the ground and first excited states. The compound is paramagnetic, but it becomes ferromagnetic at temperatures below −110 °C (163 K).
Uses
AgF2 is a strong fluorinating and oxidising agent. It is formed as an intermediate in the catalysis of gaseous reactions with fluorine by silver. With fluoride ions, it forms complex ions such as AgF3−, the blue-violet AgF42−, and AgF64−.
It is used in the fluorination and preparation of organic perfluorocompounds. This type of reaction can occur in three different ways (here Z refers to any element or group attached to carbon, X is a halogen):
CZ3H + 2 AgF2 → CZ3F + HF + 2 AgF
CZ3X + 2AgF2 → CZ3F + X2 + 2 AgF
Z2C=CZ2 + 2 AgF2 → Z2CFCFZ2 + 2 AgF
Similar transformations can also be effected using other high valence metallic fluorides such as CoF3, MnF3, CeF4, and PbF4.
AgF2 is also used in the fluorination of aromatic compounds, although selective monofluorinations are more difficult:
C6H6 + 2 AgF2 → C6H5F + 2 AgF + HF
AgF2 oxidises xenon to xenon difluoride in anhydrous HF solutions.
2 AgF2 + Xe → 2 AgF + XeF2
It also oxidises carbon monoxide to carbonyl fluoride.
2 AgF2 + CO → 2 AgF + COF2
It reacts with water to form oxygen gas:
4 AgF2 + 4 H2O → 2 Ag2O + 8 HF + O2
AgF2 can be used to selectively fluorinate pyridine at the ortho position under mild conditions.
Safety
AgF2 is a very strong oxidizer that reacts violently with water, reacts with dilute acids to produce ozone, oxidizes iodide to iodine, and upon contact with acetylene forms the contact explosive silver acetylide. It is light-sensitive, very hygroscopic and corrosive. It decomposes violently on contact with hydrogen peroxide, releasing oxygen gas. It also liberates HF, , and elemental silver.
References
External links
National Pollutant Inventory Fluoride and compounds fact sheet
WebElements Silver(II) Fluoride
Structure graphic
Fluorides
Silver compounds
Metal halides
Fluorinating agents | Silver(II) fluoride | [
"Chemistry"
] | 927 | [
"Inorganic compounds",
"Salts",
"Fluorinating agents",
"Metal halides",
"Reagents for organic chemistry",
"Fluorides"
] |
3,139,950 | https://en.wikipedia.org/wiki/Iron%28III%29%20fluoride | Iron(III) fluoride, also known as ferric fluoride, refers to inorganic compounds with the formula FeF3(H2O)x where x = 0 or 3. These compounds are mainly of interest to researchers, unlike the related iron(III) chloride. Anhydrous iron(III) fluoride is white, whereas the hydrated forms are light pink.
Chemical and physical properties
Iron(III) fluoride is a thermally robust, antiferromagnetic solid consisting of high spin Fe(III) centers, which is consistent with the pale colors of all forms of this material. Both anhydrous iron(III) fluoride as well as its hydrates are hygroscopic.
Structure
The anhydrous form adopts a simple structure with octahedral Fe(III)F6 centres interconnected by linear Fe-F-Fe linkages. In the language of crystallography, the crystals are classified as rhombohedral with an R-3c space group. The structural motif is similar to that seen in ReO3. Although the solid is nonvolatile, it evaporates at high temperatures; the gas at 987 °C consists of FeF3, a planar molecule of D3h symmetry with three equal Fe-F bonds, each of length 176.3 pm. At very high temperatures, it decomposes to give FeF2 and F2.
Two crystalline forms—or more technically, polymorphs—of FeF3·3H2O are known, the α and β forms. These are prepared by evaporation of an HF solution containing Fe3+ at room temperature (α form) and above 50 °C (β form). The space group of the β form is P4/m, and the α form maintains a P4/m space group with a J6 substructure. The solid α form is unstable and converts to the β form within days. The two forms are distinguished by their difference in quadrupole splitting from their Mössbauer spectra.
Preparation, occurrence, reactions
Anhydrous iron(III) fluoride is prepared by treating virtually any anhydrous iron compound with fluorine. More practically and like most metal fluorides, it is prepared by treating the corresponding chloride with hydrogen fluoride:
FeCl3 + 3 HF → FeF3 + 3 HCl
It also forms as a passivating film upon contact between iron (and steel) and hydrogen fluoride. The hydrates crystallize from aqueous hydrofluoric acid.
The material is a fluoride acceptor. With xenon hexafluoride it forms [FeF4][XeF5].
Pure FeF3 is not yet known among minerals. However, a hydrated form is known as the very rare fumarolic mineral topsøeite. Generally a trihydrate, its chemistry is slightly more complex: FeF[F0.5(H2O)0.5]4·H2O.
Applications
The primary commercial use of iron(III) fluoride is in the production of ceramics.
Some cross-coupling reactions are catalyzed by ferric fluoride-based compounds. Specifically, the coupling of biaryl compounds is catalyzed by hydrated iron(II) fluoride complexes of N-heterocyclic carbene ligands. Other metal fluorides also catalyse similar reactions. Iron(III) fluoride has also been shown to catalyze the chemoselective addition of cyanide to aldehydes to give cyanohydrins.
Safety
The anhydrous material is a powerful dehydrating agent. The formation of ferric fluoride may have been responsible for the explosion of a cylinder of hydrogen fluoride gas.
References
External links
National Pollutant Inventory—Fluoride and compounds fact sheet
CAMEO Chemicals: Database of Hazardous Materials
Fluorides
Metal halides
Iron(III) compounds | Iron(III) fluoride | [
"Chemistry"
] | 843 | [
"Inorganic compounds",
"Fluorides",
"Metal halides",
"Salts"
] |
3,139,954 | https://en.wikipedia.org/wiki/Kaufmann%20vortex | The Kaufmann vortex, also known as the Scully model, is a mathematical model for a vortex taking account of viscosity. It uses an algebraic velocity profile. This vortex is not a solution of the Navier–Stokes equations.
Kaufmann and Scully's model for the velocity in the θ direction is

V_\theta(r) = \frac{\Gamma}{2\pi}\,\frac{r}{r^2 + r_c^2},

where Γ is the circulation of the vortex and rc is the radius of the vortex core.
The model was suggested by W. Kaufmann in 1962, and later by Scully and Sullivan in 1972 at the Massachusetts Institute of Technology.
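A short numerical sketch of the algebraic profile above (illustrative only; the function name and parameter values are assumptions):

import numpy as np

def scully_velocity(r, gamma=1.0, r_core=0.1):
    """Azimuthal velocity of the Kaufmann (Scully) vortex, an algebraic profile."""
    r = np.asarray(r, dtype=float)
    return gamma / (2.0 * np.pi) * r / (r**2 + r_core**2)

# The swirl peaks at r = r_core with value gamma / (4 * pi * r_core).
r = np.linspace(0.0, 1.0, 500)
print(r[np.argmax(scully_velocity(r))])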
See also
Rankine vortex – a simpler, but more crude, approximation for a vortex.
Lamb–Oseen vortex – the exact solution for a free vortex decaying due to viscosity.
References
Equations of fluid dynamics
Vortices | Kaufmann vortex | [
"Physics",
"Chemistry",
"Mathematics"
] | 139 | [
"Equations of fluid dynamics",
"Equations of physics",
"Vortices",
"Dynamical systems",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
3,140,268 | https://en.wikipedia.org/wiki/Bluing%20%28fabric%29 | Bluing, laundry blue, dolly blue or washing blue is a household product used to improve the appearance of textiles, especially white fabrics. Used during laundering, it adds a trace of blue dye (often synthetic ultramarine, sometimes Prussian blue) to the fabric.
Uses
White fabrics acquire a slight color cast after use (usually grey or yellow). Since blue and yellow are complementary colors in the subtractive color model of color perception, adding a trace of blue color to the slightly off-white color of these fabrics makes them appear whiter. Laundry detergents may also use fluorescing agents to similar effect. Many white fabrics are blued during manufacturing. Bluing is not permanent and rinses out over time leaving dingy or yellowed whites. A commercial bluing product allows the consumer to add the bluing back into the fabric to restore whiteness.
On the same principle, bluing is sometimes used by white-haired people in a blue rinse.
Bluing has other miscellaneous household uses, including as an ingredient in rock crystal "gardens" (whereby a porous item is placed in a salt solution, the solution then precipitating out as crystals), and to improve the appearance of swimming-pool water. In Australia it was used as a folk remedy to relieve the itching of mosquito and sand fly bites.
Laundry bluing is made of a colloid of ferric ferrocyanide (blue iron salt, also referred to as "Prussian blue") in water.
Blue colorings have been added to rinse water for centuries, first in the form of powder blue or smalt, or using small lumps of indigo and starch, called stone blue. After the invention of synthetic ultramarine and Prussian blue it was manufactured by many companies, including Mrs. Stewart's Bluing in the United States, and by Reckitt's Crown Blue in Hull and the Lancashire Ultramarine Company's Dolly Blue at Backbarrow (later purchased by Reckitt & Sons) in the United Kingdom. It was popular until the mid-twentieth century in the United Kingdom and the United States, and is still widely used in India and Pakistan. In many places, it has been replaced by bleach for its primary purpose.
Bluing is usually sold in liquid form, but it may also be a solid. Solid bluing is sometimes used by hoodoo doctors to provide the blue color needed for "mojo hands" without having to use the toxic compound copper(II) sulfate. Bluing was also used by some Native American tribes to mark their arrows showing tribe ownership.
See also
Mrs. Stewart's Bluing
Blue rinse
Optical brightener
References
External links
Laundry Blue: Bluing, Reckitt's blue bags, Dolly Blue
Azurage du linge en France
Videos
Cleaning products
Dyes | Bluing (fabric) | [
"Physics",
"Chemistry"
] | 579 | [
"Products of chemical industry",
"Materials stubs",
"Cleaning products",
"Materials",
"Matter"
] |
3,140,419 | https://en.wikipedia.org/wiki/Bluing%20%28steel%29 | Bluing, sometimes spelled as blueing, is a passivation process in which steel is partially protected against rust using a black oxide coating. It is named after the blue-black appearance of the resulting protective finish. Bluing involves an electrochemical conversion coating resulting from an oxidizing chemical reaction with iron on the surface selectively forming magnetite (Fe3O4), the black oxide of iron. In comparison, rust, the red oxide of iron (Fe2O3), undergoes an extremely large volume change upon hydration; as a result, the oxide easily flakes off, causing the typical reddish rusting away of iron. Black oxide provides minimal protection against corrosion, unless also treated with a water-displacing oil to reduce wetting and galvanic action. In colloquial use, thin coatings of black oxide are often termed "gun bluing", while heavier coatings are termed "black oxide". Both refer to the same chemical process for providing true gun bluing.
Overview
Various processes are used to produce the oxide coating.
"Cold" bluing is generally a selenium dioxide-based compound that colours steel black, or more often a very dark grey. It is a difficult product to apply evenly, offers minimal protection and is generally best used for small fast repair jobs and touch-ups.
The "hot" process is an alkali salt solution using potassium nitrite or sodium nitrate and sodium hydroxide, referred to as "traditional caustic black", that is typically done at an elevated temperature, . This method was adopted by larger firearm companies for large scale, more economical bluing. It does provide good rust resistance, which is improved with oil.
"Rust bluing" and "fume bluing" provide the best rust and corrosion resistance as the process continually converts any metal that is capable of rusting into magnetite (). Treating with an oiled coating enhances the protection offered by the bluing. This process is also the only process safely used to re-blue vintage shotguns. Many double-barreled shotguns are soft soldered (lead) or silver brazed together and many of the parts are attached by that method also. The higher temperatures of the other processes as well as their caustic nature could weaken the soldered joints and make the gun hazardous to use.
Bluing can also be done in a furnace, for example for a sword or other item traditionally made by a blacksmith or specialist such as a weapon-smith. Blacksmith products to this day may occasionally be found made from blued steel by traditional craftsmen in cultures and segments of society who use that technology either by necessity or choice.
Processes
Hot caustic bluing
Bluing may be applied by immersing steel parts in a solution of potassium nitrate, sodium hydroxide, and water heated to the boiling point, depending on the recipe. Similarly, stainless steel parts may be immersed in a mixture of nitrates and chromates, similarly heated. Either of these two methods is called 'hot bluing'. Hot bluing is the current standard in gun bluing, as both it and rust bluing provide the most permanent degree of rust-resistance and cosmetic protection of exposed gun metal, and hot bluing takes less time than rust bluing.
Rust bluing
Rust bluing was developed between hot and cold bluing processes, and was originally used by gunsmiths in the 19th century to blue firearms prior to the development of hot bluing processes. The process was to coat the gun parts in an acid solution, let the parts rust uniformly, then immerse the parts in boiling water to convert the red oxide Fe2O3 to the black oxide Fe3O4, which forms a more protective, stable coating than the red oxide; the boiling water also removes any remaining residue from the applied acid solution (often nitric acid and hydrochloric acid diluted in water). The loose oxide was then carded (scrubbed) off, using a carding brush – a wire brush with soft, thin (usually about thick) wires – or wheel.
This process was repeated until the desired depth of color was achieved or the metal simply did not color further. This is one of the reasons rust and fume bluing are generally more rust-resistant than other methods. The parts are then oiled and allowed to stand overnight. This process leaves a deep blue-black finish.
Modern home hobbyist versions of this process typically use a hydrogen peroxide and salt solution, sometimes preceded with a vinegar soak, for the rusting step to avoid the need for more dangerous acids.
Fume bluing
Fume bluing is another process similar to rust bluing. Instead of applying the acid solution directly to the metal parts, the parts are placed in a sealed cabinet with a moisture source, a container of nitric acid and a container of hydrochloric acid. The mixed fumes of the acids produce a uniform rust on the surface of the parts (inside and out) in about 12 hours. The parts are then boiled in distilled water, blown dry, then carded, as with rust bluing.
These processes were later abandoned by major firearm manufacturers as it often took parts days to finish completely, and was very labor-intensive. They are still sometimes used by gunsmiths to obtain an authentic finish for a period gun of the time that rust bluing was in vogue, analogous to the use of browning on earlier representative firearm replicas. Rust bluing is also used on shotgun barrels that are soldered to the rib between the barrels, as hot bluing solutions melt the solder during the bluing process.
Large scale industrial hot bluing is often performed using a bluing furnace. This is an alternative method for creating the black oxide coating. In place of using a hot bath (although at a lower temperature) chemically induced method, it is possible through controlling the temperature to heat steel precisely such as to cause the formation of black oxide selectively over the red oxide. It, too, must be oiled to provide any significant rust resistance.
Cold bluing
There are also methods of cold bluing, which do not require heat. Commercial products are widely sold in small bottles for cold bluing firearms, and these products are primarily used by individual gun owners for implementing small touch-ups to a gun's finish, to prevent a small scratch from becoming a major source of rust on a gun over time. Cold bluing is not particularly resistant to holster wear, nor does it provide a large degree of rust resistance. Often it does provide an adequate cosmetic touch-up of a gun's finish when applied and additionally oiled on a regular basis. However, rust bluing small areas often match, blend, and wear better than any cold bluing process.
At least one of the cold bluing solutions contains selenium dioxide. These work by depositing a coating of copper selenide on the surface.
Nitre bluing
In the nitre bluing process, polished and cleaned steel parts are immersed in a bath of molten salts—typically potassium nitrate and sodium nitrate (sometimes with of manganese dioxide per pound of total nitrate). The mixture is heated to and the parts are suspended in this solution with wire. The parts must be observed constantly for colour change. The cross section and size of parts affect the outcome of the finish and time it takes to achieve. This method must not be used on critically heat-treated parts such as receivers, slides or springs. It is generally employed on smaller parts such as pins, screws, sights, etc. The colours range through straw, gold, brown, purple, blue, teal, then black. Examples of this finish are common on older pocket watches whose hands exhibit what is called 'peacock blue', a rich iridescent blue.
Color case hardening
Color case hardening is the predecessor of all metal coloring typically employed in the firearms industry. Contemporary heat-treatable steels did not exist or were in their infancy. Soft, low-carbon steel was used, but strong materials were needed for the receivers of firearms. Initially, case hardening was used but did not offer any aesthetics. Color case hardening is achieved when soft steels are packed in a reasonably airtight crucible in a mixture of charred leather, bone charcoal and wood charcoal. The crucible is heated for up to 6 hours (the longer the heat is applied, the thicker the case hardening). At the end of this heating process the crucible is removed from the oven and positioned over a bath of water with air forced through a perforated coil in the bottom of the bath. The bottom of the crucible is opened, allowing the contents to drop into the rapidly bubbling water. The differential cooling causes patterns of colors to appear as well as hardening the part.
Different colors can be achieved through variations of this method including quenching in oil instead of water.
Browning
"Browning" is controlled red rust and is also known as "pluming" or "plum brown". One can generally use the same solution to brown as to blue. The difference is immersion in boiling water for bluing. The rust then turns to black-blue . Many older browning and bluing formulas are based on corrosive solutions (necessary to cause metal to rust) and often contain cyanide or mercury salts solutions that are especially toxic to humans.
Applications
Bluing is most commonly used by gun manufacturers, gunsmiths, and gun owners to improve the cosmetic appearance of and provide a measure of corrosion resistance to their firearms. It is also used by machinists, to protect and beautify tools made for their own use. Bluing also helps to maintain the metal finish by resisting superficial scratching, and also helps to reduce glare to the eyes of the shooter when looking down the barrel of the gun. All blued parts still require oiling to prevent rust. Bluing, being a chemical conversion coating, is not as robust against wear and corrosion resistance as plated coatings, and is typically no thicker than 2.5 micrometres (0.0001 inches). For this reason, it is considered not to add any appreciable thickness to precisely-machined parts. Friction, as from holster wear, quickly removes cold bluing, and also removes hot bluing, rust, or fume bluing over long periods of use. It is usually inadvisable to use cold bluing as a touch-up where friction is present. If cold bluing is the only practical option, the area should be kept oiled to extend the life of the coating as much as possible.
New guns are typically available in blued finish options offered as the least-expensive finish, and this finish is also the least effective at providing rust resistance, relative to other finishes such as Parkerizing or hard chrome plating or nitriding processes like Tenifer.
Bluing is also used for providing coloring for steel parts of fine clocks and other fine metalwork. This is often achieved without chemicals by simply heating the steel until a blue oxide film appears. The blue appearance of the oxide film is also used as an indication of temperature when tempering carbon steel after hardening, indicating a state of temper suitable for springs.
Bluing is also used in seasoning carbon steel cookware, to render it relatively rust-proof and non-stick. In this case cooking oil, rather than gun oil, acts to displace water and prevent rust.
Premium fencing blades are often offered with a blued finish. This finish allows them to be stored in high-moisture conditions, like sports bags, without rusting.
Bluing is often a hobbyist endeavor, and there are many methods of bluing, and continuing debates about the relative efficacy of each method.
Historically, razor blades were often blued steel. A non-linear resistance property of the blued steel of razor blades, foreshadowing the same property later discovered in semiconductor diode junctions, along with the ready availability of blued steel razor blades, led to the use of razor blades as a detector in crystal set AM radios that were built by servicemen (as foxhole radios) or by prisoners of war during World War II.
Non-ferrous materials
Bluing only works on ferrous materials, such as steel or cast iron, for protecting against corrosion because it changes iron into Fe3O4. As aluminium and polymers do not rust, they cannot be blued, and no corrosion protection is provided. However, the chemicals from the bluing process can cause uneven staining on aluminium and polymer parts. Hot bluing should never be attempted on aluminium, because the metal reacts with and usually dissolves in the caustic salt bath.
See also
Electrochemical coloring of metals
Patina
Phosphate conversion coating
Tempering (metallurgy)
Notes
References
Further reading
Traister, J. E. 1982. Professional Care & Finishing of gun metal. Blue Ridge Summit.
Coatings
Corrosion prevention | Bluing (steel) | [
"Chemistry"
] | 2,621 | [
"Corrosion prevention",
"Coatings",
"Corrosion"
] |
3,140,543 | https://en.wikipedia.org/wiki/Mesoxalic%20acid | Mesoxalic acid, also called oxomalonic acid or ketomalonic acid, is an organic compound with formula C3H2O5 or HO−(C=O)3−OH.
Mesoxalic acid is both a dicarboxylic acid and a ketonic acid. It readily loses two protons to yield the divalent anion C3O52−, called mesoxalate, oxomalonate, or ketomalonate. These terms are also used for salts containing this anion, such as sodium mesoxalate, Na2C3O5; and for esters containing the −C3O5− or −O−(C=O)3−O− moiety, such as diethyl mesoxalate, (C2H5)2C3O5. Mesoxalate is one of the oxocarbon anions, which (like carbonate CO32− and oxalate C2O42−) consist solely of carbon and oxygen.
Mesoxalic acid readily absorbs and reacts with water to form a product commonly called "mesoxalic acid monohydrate", more properly dihydroxymalonic acid, HO−(C=O)−C(OH)2−(C=O)−OH. In product catalogs and other contexts, the terms "mesoxalic acid", "oxomalonic acid" and so on often refer to this "hydrated" compound. In particular, the product traded as "sodium mesoxalate monohydrate" is almost always sodium dihydroxymalonate.
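The hydration described above amounts to the addition of water across the central keto group, which can be written schematically as the following equation (an illustration; the article itself gives no equation):

HO2C−CO−CO2H + H2O → HO2C−C(OH)2−CO2H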
Synthesis
Mesoxalic acid can be obtained synthetically by hydrolysis of alloxan with baryta water, by warming caffuric acid with lead acetate solution, or from glyceryl diacetate and concentrated nitric acid in ice-cold water. The product can be obtained also by oxidation of tartronic acid or glycerol. Since they are carried out in water, these procedures generally give the dihydroxy derivative.
It is also prepared by the oxidation of glycerol with the help of bismuth(III) nitrate.
See also
Malonic acid
References
Dicarboxylic acids
Deliquescent materials
Alpha-keto acids | Mesoxalic acid | [
"Chemistry"
] | 471 | [
"Deliquescent materials"
] |
3,140,544 | https://en.wikipedia.org/wiki/Alumel | Alumel is an alloy consisting of approximately 95% nickel, 2% aluminium, 2% manganese, and 1% silicon. This magnetic alloy is used to make the negative conductors of ANSI Type K (chromel-alumel) thermocouples and thermocouple extension wire. Alumel is a registered trademark of Concept Alloys, Inc.
References
External links
Application guide and material properties of thermocouple wires (via Omega Engineering, Inc).
Nickel alloys
Aluminium alloys | Alumel | [
"Chemistry"
] | 106 | [
"Nickel alloys",
"Alloys",
"Alloy stubs",
"Aluminium alloys"
] |
3,140,632 | https://en.wikipedia.org/wiki/Murexide | Murexide (NH4C8H4N5O6, or C8H5N5O6·NH3), also called ammonium purpurate or MX, is the ammonium salt of purpuric acid. It is a purple solid that is soluble in water. The compound was once used as an indicator reagent. Aqueous solutions are yellow at low pH, reddish-purple in weakly acidic solutions, and blue-purple in alkaline solutions.
Preparation
Murexide is prepared by treating alloxantin with ammonia at 100 °C, or by treating uramil (5-aminobarbituric acid) with mercury oxide. It may also be prepared by digesting alloxan with alcoholic ammonia.
History
Justus von Liebig and Friedrich Wöhler in Giessen, Germany, had investigated the purple product, murexide, obtained from snake excrement in the 1830s, but this was not an abundant raw material, and a method of using it as a dyestuff was not established at that time. In the 1850s, French colourists and dye-producers, such as Depoully in Paris, succeeded in making murexide from abundant South American guano and in applying it to natural fibres. It was then widely adopted in Britain, France and Germany.
Use
Murexide is used in analytical chemistry as a complexometric indicator for complexometric titrations, most often of calcium ions, but also for copper, nickel, cobalt, thorium and rare-earth metals. It functions as a tridentate ligand.
Its use has been eclipsed by calcium-ion selective electrodes.
References
Ammonium compounds
Complexometric indicators
Dyes
Pyrimidines
Lactams
Enones | Murexide | [
"Chemistry",
"Materials_science"
] | 363 | [
"Complexometric indicators",
"Ammonium compounds",
"Chromism",
"Salts"
] |
3,140,673 | https://en.wikipedia.org/wiki/Dielectric%20breakdown%20model | Dielectric breakdown model (DBM) is a macroscopic mathematical model combining the diffusion-limited aggregation model with the electric field. It was developed by Niemeyer, Pietronero, and Weismann in 1984. It describes the patterns of dielectric breakdown of solids, liquids, and even gases, explaining the formation of the branching, self-similar Lichtenberg figures.
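The growth rule can be sketched on a lattice as follows: the discharge pattern is held at one potential, the far boundary at another, Laplace's equation is relaxed between them, and a perimeter site is added with probability proportional to the local field raised to a power η; with η = 1 the statistics coincide with diffusion-limited aggregation, while larger η gives sparser patterns. Everything in the sketch below (grid size, boundary handling, function names) is an illustrative assumption, not the original authors' code.

import random
import numpy as np

def grow_dbm(size=64, steps=200, eta=1.0, n_jacobi=200, seed=0):
    """Toy dielectric breakdown model on a square lattice.

    The aggregate (breakdown pattern) is held at potential 0, the outer
    frame at potential 1.  Laplace's equation is relaxed by Jacobi
    iteration, then a perimeter site is occupied with probability
    proportional to (local potential)**eta.
    """
    rng = random.Random(seed)
    phi = np.ones((size, size))                 # electric potential
    occupied = np.zeros((size, size), dtype=bool)
    occupied[size // 2, size // 2] = True       # seed of the discharge

    for _ in range(steps):
        for _ in range(n_jacobi):               # relax the Laplace equation
            phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                                      + phi[1:-1, :-2] + phi[1:-1, 2:])
            phi[occupied] = 0.0                 # pattern is an equipotential
            phi[0, :] = phi[-1, :] = phi[:, 0] = phi[:, -1] = 1.0

        perimeter = set()                       # empty neighbours of the pattern
        for i, j in zip(*np.nonzero(occupied)):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 < ni < size - 1 and 0 < nj < size - 1 and not occupied[ni, nj]:
                    perimeter.add((ni, nj))

        sites = sorted(perimeter)
        weights = [phi[s] ** eta for s in sites]
        if sum(weights) == 0.0:
            break
        chosen = rng.choices(sites, weights=weights, k=1)[0]
        occupied[chosen] = True                 # grow the branching pattern
    return occupied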
See also
Eden growth model
Lichtenberg figure
Diffusion-limited aggregation
References
External links
Dielectric Breakdown Model
Electricity
Mathematical modeling
Electrical breakdown | Dielectric breakdown model | [
"Physics",
"Mathematics"
] | 106 | [
"Physical phenomena",
"Mathematical modeling",
"Applied mathematics",
"Electrical phenomena",
"Electrical breakdown"
] |
3,140,754 | https://en.wikipedia.org/wiki/Intermedia%20%28hypertext%29 | Intermedia was the third notable hypertext project to emerge from Brown University, after HES (1967) and FRESS (1969). Intermedia was started in 1985 by Norman Meyrowitz, who had been associated with earlier hypertext research at Brown. The Intermedia project coincided with the establishment of the Institute for Research in Information and Scholarship (IRIS). Some of the materials that came from Intermedia, authored by Meyrowitz, Nancy Garrett, and Karen Catlin, were used in the development of HTML.
Intermedia ran on A/UX version 1.1. Intermedia was programmed using an object-oriented toolkit and standard DBMS functions. Intermedia supported bi-directional, dual-anchor links for both text and graphics. Small icons were used as anchor markers. Intermedia properties included author, creation date, title, and keywords. Link information was stored by the system apart from the source text. More than one such set of data could be kept, which allowed each user to have their own "web" of information. Intermedia had complete multi-user support, with three levels of access rights: read, write, and annotate, similar to Unix permissions.
As promising as Intermedia was, it used a lot of resources for its time (it required 4 MB of RAM and 80 MB of hard drive space in 1989). It was also highly tied to A/UX, a less popular Unix-like operating system that ran on Apple Macintosh computers; thus, it wasn't very portable. In 1991, changes in A/UX and lack of funding ended the Intermedia project.
See also
Xanadu
Microcosm
Hyper-G (or HyperWave)
References
L. Nancy Garrett and Karen E. Smith. "Building a Timeline Editor from Prefab Parts: The Architecture of an Object-Oriented Application". ACM Proceedings of OOPSLA ’86 (September 1986).
L. Nancy Garrett, Norman Meyrowitz, and Karen E. Smith. "Intermedia: Issues, Strategies, and Tactics in the Design of a Hypermedia System". ACM Proceedings of the Conference on Computer-Supported Cooperative Work (December 1986).
Nicole Yankelovich, Karen E. Smith, L. Nancy Garrett and Norman Meyrowitz. "Issues in Designing a Hypermedia Document System: The Intermedia Case Study" in Learning Tomorrow: Journal of the Apple Education Advisory Council, n3 p35-87 Spring 1987.
Karen E. Smith and Stanley B. Zdonik. "Intermedia: A case study of the differences between relational and object-oriented database systems". ACM SIGPLAN Notices, Volume 22, Issue 12 (December 1987) Pages: 452 - 465.
L. Nancy Garrett, Julie Launhardt, and Karen Smith Catlin. "Hypermedia Templates: An Author’s Tool". ACM Proceedings of Hypertext ‘91 (December 1991).
Paul Kahn. "Linking Together Books: Experiments in Adapting Published Materials into Intermedia Documents " in Hypermedia and Literary Studies, Paul Delany and George P. Landow (editors). The MIT Press (March 19, 1994)
The History of Hypertext by Jacob Nielsen (February 1, 1995), https://www.nngroup.com/articles/hypertext-history/ (accessed 1/31/2017)
Brown University Department of Computer Science. (May 23, 2019). A Half-Century of Hypertext at Brown: A Symposium.
Hypertext
Hypermedia
Computer-related introductions in 1985
External links
Video of Norman Meyrowitz demonstrating Intermedia at ACM HUMAN’20 conference, Dec 2020 | Intermedia (hypertext) | [
"Technology"
] | 758 | [
"Multimedia",
"Hypermedia"
] |
3,140,884 | https://en.wikipedia.org/wiki/SIPP%20memory | A SIPP (single in-line pin package) or SIP (single in-line package) was a short-lived variant of the 30-pin SIMM random-access memory.
It consisted of a small printed circuit board upon which were mounted a number of memory chips. It had 30 pins along one edge which mated with matching holes in the motherboard of the computer.
This type of memory was used in some 80286 and 80386 (80386SX) systems. It was soon replaced by SIMMs using edge connectors, which proved to be more economical and durable.
30-pin SIPP modules were pin compatible with 30-pin SIMM modules, which explains why some SIPP modules were in fact SIMM modules with pins soldered onto the connectors.
See also
Zig-zag in-line package
References
Computer memory form factor | SIPP memory | [
"Technology"
] | 175 | [
"Computing stubs",
"Computer hardware stubs"
] |
3,140,914 | https://en.wikipedia.org/wiki/Symmetric%20product%20of%20an%20algebraic%20curve | In mathematics, the n-fold symmetric product of an algebraic curve C is the quotient space of the n-fold cartesian product
C × C × ... × C
or Cn by the group action of the symmetric group Sn on n letters permuting the factors. It exists as a smooth algebraic variety denoted by ΣnC. If C is a compact Riemann surface, ΣnC is therefore a complex manifold. Its interest in relation to the classical geometry of curves is that its points correspond to effective divisors on C of degree n, that is, formal sums of points with non-negative integer coefficients.
For C the projective line (say the Riemann sphere ∪ {∞} ≈ S2), its nth symmetric product ΣnC can be identified with complex projective space of dimension n.
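One standard way to see this identification (an argument supplied here for illustration, not spelled out in the article): an effective divisor of degree n on the projective line is the zero locus of a nonzero binary form of degree n, unique up to scale,

a_0 x^n + a_1 x^{n-1} y + \cdots + a_n y^n \;\longleftrightarrow\; (a_0 : a_1 : \cdots : a_n) \in \mathbf{P}^n,

so unordered n-tuples of points of the projective line correspond exactly to points of complex projective n-space.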
If C has genus g ≥ 1 then the ΣnC are closely related to the Jacobian variety J of C. More precisely, for n taking values up to g they form a sequence of approximations to J from below: their images in J under addition on J (see theta-divisor) have dimension n and fill up J, with some identifications caused by special divisors.
For n = g, ΣgC is actually birationally equivalent to J; the Jacobian is a blowing down of the symmetric product. That means that at the level of function fields it is possible to construct J by taking linearly disjoint copies of the function field of C, and within their compositum taking the fixed subfield of the symmetric group. This is the source of André Weil's technique of constructing J as an abstract variety from 'birational data'. Other ways of constructing J, for example as a Picard variety, are preferred now, but this does mean that for any rational function F on C
F(x1) + ... + F(xg)
makes sense as a rational function on J, for the xi staying away from the poles of F.
For n > g the mapping from ΣnC to J by addition fibers it over J; when n is large enough (around twice g) this becomes a projective space bundle (the Picard bundle). It has been studied in detail, for example by Kempf and Mukai.
Betti numbers and the Euler characteristic of the symmetric product
Let C be a smooth projective curve of genus g over the complex numbers C. The Betti numbers bi(ΣnC) of the symmetric products ΣnC for all n = 0, 1, 2, ... are given by Macdonald's generating function

\sum_{n=0}^{\infty} \sum_{i=0}^{2n} b_i(\Sigma^n C)\, u^i y^n \;=\; \frac{(1+uy)^{2g}}{(1-y)(1-u^2 y)},

and their Euler characteristics e(ΣnC) are given by the generating function

\sum_{n=0}^{\infty} e(\Sigma^n C)\, p^n \;=\; (1-p)^{2g-2}.

Here we have set u = −1 and y = p in the previous formula.
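For instance, expanding the Euler-characteristic generating function with the binomial theorem gives the closed form (a direct consequence of the formula above, added here for illustration)

e(\Sigma^n C) \;=\; (-1)^n \binom{2g-2}{n},

which for n = 1 recovers the familiar e(C) = 2 − 2g.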
Notes
References
Algebraic curves
Symmetric functions | Symmetric product of an algebraic curve | [
"Physics",
"Mathematics"
] | 571 | [
"Algebra",
"Symmetric functions",
"Symmetry"
] |
3,140,923 | https://en.wikipedia.org/wiki/Linearly%20disjoint | In mathematics, algebras A, B over a field k inside some field extension of k are said to be linearly disjoint over k if the following equivalent conditions are met:
(i) The map induced by is injective.
(ii) Any k-basis of A remains linearly independent over B.
(iii) If are k-bases for A, B, then the products are linearly independent over k.
Note that, since every subalgebra of is a domain, (i) implies is a domain (in particular reduced). Conversely if A and B are fields and either A or B is an algebraic extension of k and is a domain then it is a field and A and B are linearly disjoint. However, there are examples where is a domain but A and B are not linearly disjoint: for example, A = B = k(t), the field of rational functions over k.
One also has: A, B are linearly disjoint over k if and only if the subfields of generated by , resp. are linearly disjoint over k. (cf. Tensor product of fields)
Suppose A, B are linearly disjoint over k. If , are subalgebras, then and are linearly disjoint over k. Conversely, if any finitely generated subalgebras of algebras A, B are linearly disjoint, then A, B are linearly disjoint (since the condition involves only finite sets of elements.)
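A standard pair of examples illustrating the definition (added here for illustration, not taken from the article): over k = Q, the fields Q(√2) and Q(√3) are linearly disjoint, since the Q-basis 1, √2 of Q(√2) stays linearly independent over Q(√3); on the other hand, Q(√2) is not linearly disjoint from itself, because

\mathbf{Q}(\sqrt{2}) \otimes_{\mathbf{Q}} \mathbf{Q}(\sqrt{2}) \;\cong\; \mathbf{Q}(\sqrt{2}) \times \mathbf{Q}(\sqrt{2})

is not a domain, so condition (i) fails.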
See also
Tensor product of fields
References
P.M. Cohn (2003). Basic algebra
Algebra | Linearly disjoint | [
"Mathematics"
] | 351 | [
"Algebra stubs",
"Algebra"
] |
3,141,171 | https://en.wikipedia.org/wiki/CoSy | CoSy, short for Conferencing System, was an early computer conferencing system developed at the University of Guelph.
The CoSy software grew out of an interest in group computer mediated communication systems in 1981 by Dick Mason and John Black. A project was initiated in the Institute of Computer Science to investigate and possibly acquire an asynchronous computer conferencing system, and a small team of Bob McQueen, Alastair Mayer and Peter Jaspers-Fayer undertook the investigation of two existing systems, EIES from New Jersey, and COM from Sweden. It was then decided that developing a new system in-house would be the best path to take.
A new system started to take shape, written in C and operating under a UNIX operating system on a Digital Equipment Corporation PDP-11 with dial-up telephone ports. Much thought was given to the user interface and group interaction processes, especially as most of the user dial-up connections were originally at a very slow 300 bits per second (30 characters per second) through acoustic modems. The system was gradually introduced to and tested by a small group of users, and eventually made available to other external organizations beginning in 1983. Licenses to use the Unix version of the software were granted to other sites, mainly universities, and a VMS version was also developed and made available for license. CoSy was selected by Byte to launch their BIX system in 1985.
In addition to BIX, it was used to implement a similar British system named CIX, as well as numerous other installations such as CompuLink Network. CoSy was also chosen for the Open University's "electronic campus".
The software was used at the University of Victoria in the Creative Writing department in the 1980s. This exposure to Dave Godfrey led to some rights to the software being later acquired by the British Columbia company SoftWords, who developed it into CoSy400 and added a simple web interface, before losing interest.
When the BIX system closed down, several former "bixen" approached University of Guelph and SoftWords and obtained the right to release the original version of CoSy under the GPL. It is now developed as an open source project, and was the basis of the BIX-like NLZero (Noise Level Zero) conferencing service.
References
External links
WebCoSy
CoSy at SourceForge
NLZ website
Free groupware | CoSy | [
"Technology"
] | 489 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
3,141,360 | https://en.wikipedia.org/wiki/Prime%20decomposition%20of%203-manifolds | In mathematics, the prime decomposition theorem for 3-manifolds states that every compact, orientable 3-manifold is the connected sum of a unique (up to homeomorphism) finite collection of prime 3-manifolds.
A manifold is prime if it is not homeomorphic to any connected sum of manifolds, except for the trivial connected sum of the manifold with a sphere of the same dimension. If a 3-manifold is prime, then either it is S2 × S1, or it is the non-orientable S2 bundle over S1,
or it is irreducible, which means that any embedded 2-sphere bounds a ball. So the theorem can be restated to say that there is a unique connected sum decomposition into irreducible 3-manifolds and fiber bundles of S2 over S1.
The prime decomposition holds also for non-orientable 3-manifolds, but the uniqueness statement must be modified slightly. Every compact, non-orientable 3-manifold is a connected sum of irreducible 3-manifolds and non-orientable S2 bundles over S1. This sum is unique as long as we specify that each summand is either irreducible or a non-orientable S2 bundle over S1.
The proof is based on normal surface techniques originated by Hellmuth Kneser. Existence was proven by Kneser, but the exact formulation and proof of the uniqueness was done more than 30 years later by John Milnor.
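A standard example separating the two notions in the statement (an illustration, not from the article): S2 × S1 is prime but not irreducible, since an S2 × {point} does not bound a ball, whereas the 3-torus and hyperbolic 3-manifolds are irreducible and hence prime.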
References
3-manifolds
Manifolds
Theorems in differential geometry | Prime decomposition of 3-manifolds | [
"Mathematics"
] | 292 | [
"Theorems in differential geometry",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Manifolds",
"Theorems in geometry"
] |
3,141,410 | https://en.wikipedia.org/wiki/Child%20mortality | Child mortality is the death of children under the age of five. The child mortality rate (also under-five mortality rate) refers to the probability of dying between birth and exactly five years of age expressed per 1,000 live births.
It encompasses neonatal mortality and infant mortality (the probability of death in the first year of life).
Reduction of child mortality is reflected in several of the United Nations' Sustainable Development Goals. Target 3.2 states that "by 2030, the goal is to end preventable deaths of newborns and children under 5 years of age with all countries aiming to reduce under‑5 mortality to as low as 25 per 1,000 live births."
Child mortality rates have decreased in the last 40 years. Rapid progress has resulted in a significant decline in preventable child deaths since 1990, with the global under-5 mortality rate declining by over half between 1990 and 2016. In 1990, 12.6 million children under age five died; by 2016 that number had fallen to 5.6 million, and by 2020 the global number fell again to 5 million. However, despite these advances, there are still 15,000 under-five deaths per day from largely preventable causes. About 80 percent of these occur in sub-Saharan Africa and South Asia, and just 6 countries account for half of all under-five deaths: China, India, Pakistan, Nigeria, Ethiopia and the Democratic Republic of the Congo. 45% of these children died during the first 28 days of life. Death rates were highest among children under age 1, followed by children ages 15 to 19, 1 to 4 and 5 to 14.
Types of child mortality
Child mortality refers to the number of child deaths under the age of 5 per 1,000 live births. More specific terms include:
Perinatal mortality rate: Number of child deaths within the first week of birth divided by total number of births.
Neonatal mortality rate: Number of child deaths within the first 28 days of life divided by total number of births.
Infant mortality rate: Number of child deaths within the first 12 months of life divided by total number of births.
Under-5 mortality rate: Number of child deaths before the fifth birthday divided by total number of births.
Child mortality refers to the premature death of any child under the age of 5 years. Within those 5 years, four narrower groups are distinguished. Perinatal refers to the period around birth, including the fetus, a living organism not yet born; perinatal deaths are typically due to premature birth or birth defects. Neonatal refers to child death within one month, or 28 days, of birth; neonatal deaths reflect the type of care the hospital is providing as well as birth defects and complications. Infant death refers to the death of a child before their first birthday, or within 12 months of life; some of the main causes include premature birth, SIDS, low birth weight, malnutrition and infectious diseases. Lastly, the under-5 mortality rate refers to children who die before the age of 5, within the first 5 years of life.
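As a concrete illustration of how these rates are computed (a sketch; the function and the sample numbers are illustrative, not data from the article):

def mortality_rate(deaths, live_births, per=1000):
    """Deaths per `per` live births, the convention used for under-5 and infant rates."""
    return deaths / live_births * per

# Example with made-up numbers: 5,000 under-5 deaths among 125,000 live births.
print(mortality_rate(5_000, 125_000))   # 40.0 deaths per 1,000 live births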
Causes
The leading causes of death of children under five include:
Preterm birth complications (18%)
Pneumonia (16%)
Intrapartum-related events (12%)
Neonatal sepsis (7%)
Diarrhea (8%)
Malaria (5%)
Malnutrition (34%)
There is variation in child mortality around the world. Countries in the second or third stage of the Demographic Transition Model (DTM) have higher rates of child mortality than countries in the fourth or fifth stage. Chad's infant mortality rate is about 96 per 1,000 live births, compared to only 2.2 per 1,000 live births in Japan. In 2010, there was a global estimate of 7.6 million child deaths, especially in less developed countries, and among those, 4.7 million died from infection and disorder. Child mortality is caused not only by infection and disorder but also by premature birth, birth defects, newborn infection, birth complications and diseases like malaria, sepsis, and diarrhea. In less developed countries, malnutrition is the main cause of child mortality. Pneumonia, diarrhea and malaria together are the cause of one out of every three deaths before the age of 5, while nearly half of under-five deaths globally are attributable to under-nutrition.
Prevention
Child survival is a field of public health concerned with reducing child mortality. Child survival interventions are designed to address the most common causes of child deaths, which include diarrhea, pneumonia, malaria, and neonatal conditions. Among children under the age of 5 alone, an estimated 5.6 million die each year, mostly from such preventable causes.
The child survival strategies and interventions are in line with the fourth Millennium Development Goal (MDG), which focused on reducing mortality among children under five by two-thirds before the year 2015. In 2015, the MDGs were replaced with the Sustainable Development Goals (SDGs), which aim to end these deaths by 2030. In order to achieve the SDG targets, progress must be accelerated in more than 1/4 of all countries (most of which are in sub-Saharan Africa) to achieve the targets for under-5 mortality, and in 60 countries (many in sub-Saharan Africa and South Asia) to achieve the targets for neonatal mortality. Without accelerated progress, 60 million children under age five will die between 2017 and 2030, about half of them newborns. China achieved its target of reduction in under-5 mortality rates well ahead of schedule.
Low-cost interventions
Two-thirds of child deaths are preventable. Most of the children who die each year could be saved by low-tech, evidence-based, cost-effective measures such as vaccines, antibiotics, micronutrient supplementation, insecticide-treated bed nets, improved family care and breastfeeding practices, and oral rehydration therapy. Empowering women, removing financial and social barriers to accessing basic services, developing innovations that make the supply of critical services more available to the poor and increasing local accountability of health systems are policy interventions that have allowed health systems to improve equity and reduce mortality.
In developing countries, child mortality rates related to respiratory and diarrheal diseases can be reduced by introducing simple behavioral changes such as handwashing with soap. This simple action can reduce the rate of mortality from these diseases by almost 50 per cent.
Proven cost-effective interventions can save the lives of millions of children per year. The UN Vaccine division as of 2014 supported 36% of the world's children in order to best improve their survival chances, yet low-cost immunization interventions still do not reach 30 million children, despite success in reducing polio, tetanus, and measles. Measles and tetanus still kill more than 1 million children under 5 each year. Vitamin A supplementation costs only $0.02 per capsule and, given 2–3 times a year, will prevent blindness and death. Vitamin A supplementation has been shown to reduce all-cause mortality by 12 to 24 per cent, yet only 70 per cent of targeted children were reached in 2015. Between 250,000 and 500,000 children become blind every year, with 70 percent of them dying within 12 months. Oral rehydration therapy (ORT) is an effective treatment for fluids lost through diarrhea; yet only 4 in 10 (44 per cent) of children diagnosed with diarrhea are treated with ORT.
Essential newborn care - including immunizing mothers against tetanus, ensuring clean delivery practices in a hygienic birthing environment, drying and wrapping the baby immediately after birth, providing necessary warmth and promoting immediate and continued breastfeeding, immunization, and treatment of infections with antibiotics - could save the lives of 3 million newborns annually. Improved sanitation and access to clean drinking water can reduce childhood infections and diarrhea. Approximately 26% of the world's population does not have access to basic sanitation and 785 million people use unsafe sources of drinking water.
Efforts
Agencies promoting and implementing child survival activities worldwide include UNICEF and non-governmental organizations; major child survival donors worldwide include the World Bank, the British Government's Department for International Development, the Canadian International Development Agency and the United States Agency for International Development. In the United States, most non-governmental child survival agencies belong to the CORE Group, a coalition working through collaborative action to save the lives of young children in the world's poorest countries.
Substantial global progress has been made in reducing child deaths since 1990. The total number of under-5 deaths worldwide has declined from 12.6 million in 1990 to approximately 5.5 million in 2020. Since 1990, the global under-5 mortality rate has dropped by 59%, from 93 deaths per 1,000 live births in 1990 to 36 in 2020. This is equivalent to 1 in 11 children dying before reaching age 5 in 1990, compared to 1 in 27 in 2019. The Sustainable Development Goals set two new targets for reducing under-5 and newborn mortality: newborn mortality of no more than 12 per 1,000 live births and under-5 mortality of no more than 25 per 1,000 live births in every country. In 2019, 122 countries had already met the under-5 target, and about 20 more are expected to do so every 10 years. The World Health Organization (WHO) states that it supports health equity and universal health coverage so that all countries can provide adequate health care without causing financial hardship.
Epidemiology
Child mortality has been dropping as countries reach the later stages of the demographic transition model (DTM). From 2000 to 2010, annual child deaths dropped from 9.6 million to 7.6 million. Reducing child mortality rates requires better education, higher standards of healthcare and more caution in childbearing. Child mortality can be reduced by the attendance of professionals at birth, by breastfeeding, and through access to clean water, sanitation, and immunization. In 2016, the world average under-5 mortality rate was 41 per 1,000 live births (4.1%), down from 93 (9.3%) in 1990. This is equivalent to 5.6 million children under five years old dying in 2016.
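As an illustration of the arithmetic behind these figures (a minimal sketch, assuming Python purely as notation), an under-5 mortality rate quoted per 1,000 live births converts directly to a percentage and to an approximate "1 in N" risk:

```python
def describe_u5_rate(deaths_per_1000):
    """Express an under-5 mortality rate as a percentage and a '1 in N' risk."""
    percent = deaths_per_1000 / 10        # per 1,000 live births -> per 100
    one_in_n = 1000 / deaths_per_1000     # roughly "1 child in N dies before age 5"
    return percent, one_in_n

for year, rate in [(1990, 93), (2016, 41)]:
    pct, n = describe_u5_rate(rate)
    print(f"{year}: {rate} per 1,000 live births = {pct:.1f}% = about 1 in {round(n)}")
```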
Variation
Huge disparities in under-5 mortality rates exist. Globally, the risk of a child dying in the country with the highest under-5 mortality rate is about 60 times higher than in the country with the lowest under-5 mortality rate. Sub-Saharan Africa remains the region with the highest under-5 mortality rates in the world: All six countries with rates above 100 deaths per 1,000 live births are in sub-Saharan Africa, with Somalia having the highest under-5 mortality rates.
Furthermore, approximately 80% of under-5 deaths occur in only two regions: sub-Saharan Africa and South Asia. 6 countries account for half of the global under-5 deaths, namely, India, Nigeria, Pakistan, the Democratic Republic of the Congo, Ethiopia and China. India and Nigeria alone account for almost a third (32 per cent) of the global under-five deaths. Within low- and middle-income countries, there is also substantial variation in child mortality rates across administrative divisions.
Likewise, there are disparities between wealthy and poor households in developing countries. According to a Save the Children paper, children from the poorest households in India are three times more likely to die before their fifth birthday than those from the richest households. A systematic study covering low- and middle-income countries (not including China) reports that children from the poorest households are twice as likely to die before the age of 5 as those from the richest households.
A large team of researchers published a major study on the global distribution of child mortality in Nature in October 2019. It was the first global study that mapped child death on the level of subnational district (17,554 units). The study was described as an important step to make action possible that further reduces child mortality.
A nation's child survival rate varies with factors such as its fertility rate and income distribution; child survival is strongly correlated with both, and increasing child survival is associated with rising average income and falling average fertility rates.
COVID-19 and child mortality
Unlike mortality at other ages, child mortality actually dropped in 2020, when the COVID-19 pandemic hit. Children accounted for one of the smallest shares of COVID-19 deaths worldwide: of about 3.7 million deaths, only 0.4% (roughly 13,400) occurred in children and adolescents under 20 years of age. Of that small proportion, 42% occurred in children under the age of 9.
See also
List of countries by infant mortality rate
Infant mortality
Perinatal mortality
References
External links
WHO fact sheet on child mortality
Global health
Hygiene
Demography
Mortality
Sanitation
Public health | Child mortality | [
"Environmental_science"
] | 2,634 | [
"Demography",
"Environmental social science"
] |
3,141,476 | https://en.wikipedia.org/wiki/E%20Ink | E Ink (electronic ink) is a brand of electronic paper (e-paper) display technology commercialized by the E Ink Corporation, which was co-founded in 1997 by MIT undergraduates JD Albert and Barrett Comiskey, MIT Media Lab professor Joseph Jacobson, Jerome Rubin and Russ Wilcox.
It is available in grayscale and color and is used in mobile devices such as e-readers, digital signage, smartwatches, mobile phones, electronic shelf labels and architecture panels.
History
Background
The notion of a low-power paper-like display had existed since the 1970s, originally conceived by researchers at Xerox PARC, but had never been realized. While a post-doctoral student at Stanford University, physicist Joseph Jacobson envisioned a multi-page book with content that could be changed at the push of a button and required little power to use.
Neil Gershenfeld recruited Jacobson for the MIT Media Lab in 1995, after hearing Jacobson's ideas for an electronic book. Jacobson, in turn, recruited MIT undergrads Barrett Comiskey, a math major, and J.D. Albert, a mechanical engineering major, to create the display technology required to realize his vision.
Product development
The initial approach was to create tiny spheres which were half white and half black, and which, depending on the electric charge, would rotate such that the white side or the black side would be visible on the display. Albert and Comiskey were told this approach was impossible by most experienced chemists and materials scientists and had trouble creating these perfectly half-white, half-black spheres; during his experiments, Albert accidentally created some all-white spheres.
Comiskey experimented with charging and encapsulating those all-white particles in microcapsules mixed in with a dark dye. The result was a system of microcapsules that could be applied to a surface and could then be charged independently to create black and white images. A first patent was filed by MIT for the microencapsulated electrophoretic display in October 1996.
The scientific paper was featured on the cover of Nature, something extremely unusual for work done by undergraduates. The advantage of the microencapsulated electrophoretic display and its potential for satisfying the practical requirements of electronic paper were summarized in the abstract of the Nature paper:
It has for many years been an ambition of researchers in display media to create a flexible low-cost system that is the electronic analogue of paper ... viewing characteristic[s] result in an "ink on paper" look. But such displays have to date suffered from short lifetimes and difficulty in manufacture. Here we report the synthesis of an electrophoretic ink based on the microencapsulation of an electrophoretic dispersion. The use of a microencapsulated electrophoretic medium solves the lifetime issues and permits the fabrication of a bistable electronic display solely by means of printing. This system may satisfy the practical requirements of electronic paper.
A second patent was filed by MIT for the microencapsulated electrophoretic display in March 1997.
Subsequently, Albert, Comiskey and Jacobson along with Russ Wilcox and Jerome Rubin founded the E Ink Corporation in 1997, two months prior to Albert and Comiskey's graduation from MIT.
Company history
E Ink Corporation (or simply "E Ink") is a subsidiary of E Ink Holdings (EIH), a Taiwanese holding company and manufacturer (ticker 8069.TWO). It is the manufacturer and distributor of electrophoretic displays, a kind of electronic paper, which it markets under the name E Ink. E Ink Corporation is headquartered in Billerica, Massachusetts. The company was co-founded in 1997 by two undergraduates, J.D. Albert and Barrett Comiskey, along with Joseph Jacobson (professor in the MIT Media Lab), Jerome Rubin (LexisNexis co-founder) and Russ Wilcox. Two years later, E Ink partnered with Philips to develop and market the technology. Jacobson and Comiskey are listed as inventors on the original patent filed in 1996. Albert, Comiskey, and Jacobson were inducted into the National Inventors Hall of Fame in May 2016. In 2005, Philips sold the electronic paper business as well as its related patents to one of its primary business partners, Prime View International (PVI), a Hsinchu, Taiwan-based manufacturer.
At the E Ink Corporation, Comiskey led the development effort for E Ink's first generation of electronic ink, while Albert developed the manufacturing methods used to make electronic ink displays in high volumes. Wilcox played a variety of business roles and served as CEO from 2004 to 2009.
Acquisition
On June 1, 2008, E Ink Corp. announced an initial agreement to be purchased by PVI for $215 million, an amount that eventually reached US$450 million following negotiations. E Ink was officially acquired on December 24, 2009. The purchase by PVI magnified the scale of production for the E Ink e-paper display, since Prime View also owned BOE Hydis Technology Co., Ltd and maintained a strategic partner relationship with Chi Mei Optoelectronics Corp. (now Chimei InnoLux Corporation, part of the Hon Hai-Foxconn Group). Foxconn is the sole ODM partner for Prime View's Netronix Inc., the supplier of E Ink panel e-readers, but the end-use products appear in various guises, e.g., as Bookeen, COOL-ER, PocketBook, etc.
PVI renamed itself E Ink Holdings Inc. after the purchase. In December 2012, E Ink acquired SiPix, a rival electrophoretic display company.
Applications
E Ink is made into a film and then integrated into electronic displays, enabling novel applications in phones, watches, magazines, wearables and e-readers, etc.
The Motorola F3 was the first mobile phone to employ E Ink technology in its display to take advantage of the material's ultra-low power consumption. In addition, the Samsung Alias 2 uses this technology in its keypad in order to allow varying reader orientations.
The October 2008 limited edition North American issue of Esquire was the first magazine cover to integrate E Ink. This cover featured flashing text. It was manufactured in Shanghai and was shipped refrigerated to the United States for binding. The E Ink was powered by a 90-day integrated battery supply.
In July 2015, New South Wales Road and Maritime Services installed road traffic signs using E Ink in Sydney, Australia. The installed e-paper traffic signs represent the first use of E Ink in traffic signage. Transport for London made trials of E Ink displays at bus stops to offer timetables, route maps and real-time travel information. A Whole Foods store opened in 2016 with E Ink shelf labels that can update product info remotely. E Ink Prism was announced in January 2015 at International CES and is the internal name for E Ink's bistable ink technology in a film that can dynamically change colors, patterns and designs with architectural products. E Ink displays can also be made flexible.
Commercial display products
E Ink has since partnered with various companies, including Sony, Motorola and Amazon. E Ink's "Vizplex" technology is used by Sony Reader, MOTOFONE F3, Barnes & Noble Nook, Kindle, txtr Beagle, and Kobo eReader. E Ink's "Pearl" technology is claimed to have a 50% better contrast ratio. It is used by 2011-2012 Kindle models, Barnes & Noble Nook Simple Touch, Kobo Touch, and Sony PRS-T1. E Ink's "Carta" technology is used by reMarkable, Kindle Paperwhite (2nd and 3rd generation), Kindle Voyage, Kobo Glo HD, Kobo Aura H2O, and Kindle Oasis.
Versions or models of E Ink
E Ink Vizplex is the first generation of the E Ink displays. Vizplex was announced in May 2007.
E Ink Pearl, announced in July 2010, is the second generation of E Ink displays. The updated Amazon Kindle DX was the first device announced to use the screen. Amazon used this display technology in new Kindle models until the Paperwhite 2 refresh in 2013. The basic Kindle with touch continued to use Pearl until 2022 when the Kindle 11 was upgraded past 167 dpi. Sony also included this technology into its 2010 models of the Sony Reader PRS series. This display is also used in the Nook Simple Touch, Kobo eReader Touch, Kobo Glo, Onyx Boox M90, X61S and Pocketbook Touch.
E Ink Mobius is an E Ink display using a flexible plastic backplane, so it can resist small impacts and some flexing. Products using this include Sony Digital Paper DPT-S1, Pocketbook CAD Reader Flex, Dasung Paperlike HD and Onyx Boox MAX 3.
E Ink Triton, announced in November 2010, is a color display that is easy to read in high light. The Triton is able to display 16 shades of gray, and 4,096 colors. E Ink Triton is used in commercially available products such as the Hanvon color e-reader, JetBook Color made by ectaco and PocketBook Color Lux made by PocketBook.
E Ink Triton 2 is the last generation of E Ink Triton color displays. The e-readers featuring it appeared in 2013. They include Ectaco Jetbook Color 2 and Pocketbook Color Lux.
E Ink Carta, announced in January 2013 at International CES, features 768 by 1024 resolution on 6-inch displays, with 212 ppi pixel density. Named Carta, it is used in the Kindle Paperwhite 2 (2013), the Pocketbook Touch Lux 3 (2015), and the Kobo Nia (2020).
E Ink Carta HD features a 1080 by 1440 resolution on a 6" screen with 300 ppi. It is used in many eReaders including all new Kindle model lines since 2014 (Voyage, Oasis, Scribe) as well as the Paperwhite 3 (2015) and newer, Tolino Vision 2 (2014), Kobo Glo HD (2015), Nook Glowlight Plus (2015), Cybook Muse Frontlight, PocketBook Touch HD (2016), PocketBook Touch HD 2 (2017), and the Kobo Clara HD (2018).
The original E Ink Carta display was renamed to Carta 1000, and refinements in Carta 1100 and Carta 1200 improved response times and display contrast. A later refinement in Carta 1250 improved response times and contrast again.
E Ink Carta and Carta HD displays support Regal waveform technology, which reduces the need for page refreshes.
The overall contrast in a product depends on the entire panel stack, including touch sensor and front light (when provided).
E Ink Spectra is a three pigment display. The display uses microcups, each of which contains three pigments. It is available for retail and electronic shelf tag labels. It is currently produced with black, white and red or black, white and yellow pigments.
Advanced Color ePaper (ACeP) was announced at SID Display Week in May 2016. The display contains four pigments in each microcapsule or microcup thereby eliminating the need for a color filter overlay. The pigments used are cyan, magenta, yellow and white, enabling display of a full color gamut and up to 32,000 colors. Initially targeted at the in-store signage market, with 20-inch displays with a resolution of 1600 by 2500 pixels at 150 ppi with a two-second refresh rate, it began shipping for signage purposes in late 2018. It is also being commercially manufactured for e-readers under the name E Ink Gallery 3. The first readers started shipping in 2023, however some planned e-readers were later postponed due to supply issues.
E Ink Kaleido, originally announced in December 2019 as "Print Color", is the first of a new generation of color displays based on one of E Ink's greyscale displays with a color filter layer. E Ink Kaleido uses a plastic color filter layer, unlike the glass filter layer used in the E Ink Triton family of displays. Kaleido Plus and Kaleido 3 were released in 2021 and 2023 respectively, further improving performance and pixel density.
Comparison of E Ink displays
A comparison of a selection of E Ink displays as of June 2017
See also
Comparison of e-readers
Plastic Logic
References
External links
Official Site of E Ink Corporation
Howstuffworks review on Electronic Ink
Interview with Russ Wilcox, E Ink co-founder, vice-president and (from 2003 to 2010) CEO. 89 minutes.
Display technology
Electronic paper technology
Display technology companies
Electronics companies of the United States
Companies based in Billerica, Massachusetts | E Ink | [
"Engineering"
] | 2,674 | [
"Electronic engineering",
"Display technology"
] |
3,141,503 | https://en.wikipedia.org/wiki/Chemical%20nomenclature | Chemical nomenclature is a set of rules to generate systematic names for chemical compounds. The nomenclature used most frequently worldwide is the one created and developed by the International Union of Pure and Applied Chemistry (IUPAC).
IUPAC nomenclature ensures that each compound (and its various isomers) has only one formally accepted name, known as the systematic IUPAC name. However, some compounds may have alternative names that are also accepted; among these, a single preferred IUPAC name is designated, which is sometimes taken from the common name of that compound. Preferably, the name should also represent the structure or chemistry of a compound.
For example, the main constituent of white vinegar is CH3COOH, which is commonly called acetic acid and is also its recommended IUPAC name, but its formal, systematic IUPAC name is ethanoic acid.
The IUPAC's rules for naming organic and inorganic compounds are contained in two publications, known as the Blue Book and the Red Book, respectively. A third publication, known as the Green Book, recommends the use of symbols for physical quantities (in association with the IUPAP), while a fourth, the Gold Book, defines many technical terms used in chemistry. Similar compendia exist for biochemistry (the White Book, in association with the IUBMB), analytical chemistry (the Orange Book), macromolecular chemistry (the Purple Book), and clinical chemistry (the Silver Book). These "color books" are supplemented by specific recommendations published periodically in the journal Pure and Applied Chemistry.
Purpose of chemical nomenclature
The main purpose of chemical nomenclature is to disambiguate the spoken or written names of chemical compounds: each name should refer to one compound. Secondarily, each compound should have only one name, although in some cases some alternative names are accepted.
Preferably, the name should also represent the structure or chemistry of a compound. This is achieved by the International Chemical Identifier (InChI) nomenclature. However, the American Chemical Society's CAS numbers nomenclature does not represent a compound's structure.
The nomenclature used depends on the needs of the user, so no single correct nomenclature exists. Rather, different nomenclatures are appropriate for different circumstances.
A common name will successfully identify a chemical compound, given context. Without context, the name should indicate at least the chemical composition. To be more specific, the name may need to represent the three-dimensional arrangement of the atoms. This requires adding more rules to the standard IUPAC system (the Chemical Abstracts Service system (CAS system) is the one used most commonly in this context), at the expense of having names which are longer and less familiar.
The IUPAC system is often criticized for failing to distinguish relevant compounds (for example, for differing reactivity of sulfur allotropes, which IUPAC does not distinguish). While IUPAC has a human-readable advantage over CAS numbering, IUPAC names for some larger, relevant molecules (such as rapamycin) are barely human-readable, so common names are used instead.
Differing needs of chemical nomenclature and lexicography
It is generally understood that the purposes of lexicography versus chemical nomenclature vary and are to an extent at odds. Dictionaries of words, whether in traditional print or on the internet, collect and report the meanings of words as their uses appear and change over time. For internet dictionaries with limited or no formal editorial process, definitions —in this case, definitions of chemical names and terms— can change rapidly without concern for the formal or historical meanings. Chemical nomenclature however (with IUPAC nomenclature as the best example) is necessarily more restrictive: Its purpose is to standardize communication and practice so that, when a chemical term is used it has a fixed meaning relating to chemical structure, thereby giving insights into chemical properties and derived molecular functions. These differing purposes can affect understanding, especially with regard to chemical classes that have achieved popular attention. Examples of the effect of these are as follows:
resveratrol, a single compound defined clearly by this common name, but that can be confused, popularly, with its cis-isomer,
omega-3 fatty acids, a reasonably well-defined class of chemical structures that is nevertheless broad as a result of its formal definition, and
polyphenols, a fairly broad structural class with a formal definition, but where mistranslations and general misuse of the term relative to the formal definition has resulted in serious errors of usage, and so ambiguity in the relationship between structure and activity (SAR).
The rapid pace at which meanings can change on the internet, in particular for chemical compounds with perceived health benefits, ascribed rightly or wrongly, complicate the monosemy of nomenclature (and so access to SAR understanding). Specific examples appear in the Polyphenol article, where varying internet and common-use definitions conflict with any accepted chemical nomenclature connecting polyphenol structure and bioactivity.
History
Alchemical names
The nomenclature of alchemy is descriptive, but does not effectively represent the functions mentioned above. Opinions differ about whether this was deliberate on the part of the early practitioners of alchemy or whether it was a consequence of the particular (and often esoteric) theories according to which they worked. While both explanations are probably valid to some extent, it is remarkable that the first "modern" system of chemical nomenclature appeared at the same time as the distinction (by French chemist Antoine Lavoisier) between elements and compounds, during the late eighteenth century.
Méthode de nomenclature chimique
The French chemist Louis-Bernard Guyton de Morveau published his recommendations in 1782, hoping that his "constant method of denomination" would "help the intelligence and relieve the memory". The system was refined in Méthode de nomenclature chimique, published in 1787 in collaboration with Lavoisier, Claude Louis Berthollet, and Antoine-François de Fourcroy, and translated into English as Method of Chymical Nomenclature by James St. John in 1788. Méthode de nomenclature chimique contained handy dictionaries in which older chemical names were listed with their new counterparts and vice versa. New names were provided in both French and Latin for the benefit of an international readership. For a modern reader these dictionaries are still useful, but now to discover and understand older names, rather than the new. In the English version, the new names had been adapted to English, though they did not always align with current conventions. St. John used "acetat" instead of "acetate" for example. For gases, the word "gas" ("gaz") was being popularized by its consistent use in the new names, whereas the old names used the affix "air".
Traité élémentaire de chimie
The new system was presented to a wider audience in Lavoisier's 1789 textbook Traité élémentaire de chimie, translated into English as Elements of Chemistry by Robert Kerr in 1790, and it would be of great influence long after his death at the guillotine in 1794. The project was also endorsed by Swedish chemist Jöns Jakob Berzelius, who adapted the ideas for the German-speaking world.
Traité élémentaire de chimie included the first modern list of elements ("simple substances"). Also here were older names provided to explain their new counterparts. Some element names were new and received English versions similar to the French names. For the new "element" caloric, both the new and some of the "old" names (igneous fluid and matter of fire and of heat) were coined by Lavoisier, their discoverer. Most element names, however, were not new, so they retained their existing English versions. But their status as elements was new—a product of the chemical revolution.
Geneva Rules
The recommendations of Guyton were only for what would later be known as inorganic compounds. With the massive expansion of organic chemistry during the mid-nineteenth century and the greater understanding of the structure of organic compounds, the need for a less ad hoc system of nomenclature was felt just as the theoretical basis became available to make this possible. An international conference was convened in Geneva in 1892 by the national chemical societies, from which the first widely accepted proposals for standardization developed.
IUPAC
A commission was established in 1913 by the Council of the International Association of Chemical Societies, but its work was interrupted by World War I. After the war, the task passed to the newly formed International Union of Pure and Applied Chemistry, which first appointed commissions for organic, inorganic, and biochemical nomenclature in 1921 and continues to do so to this day.
Types of nomenclature
Nomenclature has been developed for both organic and inorganic chemistry. There are also designations having to do with structure; see Descriptor (chemistry).
Organic chemistry
Additive name
Conjunctive name
Functional class name, also known as a radicofunctional name
Fusion name
Hantzsch–Widman nomenclature
Multiplicative name
Replacement name
Substitutive name
Subtractive name
Inorganic chemistry
Compositional nomenclature
Type-I ionic binary compounds
For type-I ionic binary compounds, the cation (a metal in most cases) is named first, and the anion (usually a nonmetal) is named second. The cation retains its elemental name (e.g., iron or zinc), but the suffix of the nonmetal changes to -ide. For example, the compound LiBr is made of Li^+ cations and Br^− anions; thus, it is called lithium bromide. The compound BaO, which is composed of Ba^2+ cations and O^2− anions, is referred to as barium oxide.
The oxidation state of each element is unambiguous. When these ions combine into a type-I binary compound, the total positive and negative charges cancel, so the compound's net charge is zero.
Type-II ionic binary compounds
Type-II ionic binary compounds are those in which the cation does not have just one oxidation state. This is common among transition metals. To name these compounds, one must determine the charge of the cation and then render the name as would be done with Type-I ionic compounds, except that a Roman numeral (indicating the charge of the cation) is written in parentheses next to the cation name (this is sometimes referred to as Stock nomenclature). For example, for the compound FeCl3, the cation, iron, can occur as Fe^2+ and Fe^3+. In order for the compound to have a net charge of zero, the cation must be Fe^3+ so that the three Cl^− anions can be balanced (3+ and 3− balance to 0). Thus, this compound is termed iron(III) chloride. Another example could be the compound PbS2. Because the S^2− anion has a subscript of 2 in the formula (giving a 4− charge), the compound must be balanced with a 4+ charge on the cation (lead can form cations with a 4+ or a 2+ charge). Thus, the compound is made of one Pb^4+ cation to every two S^2− anions, the compound is balanced, and its name is written as lead(IV) sulfide.
An older system – relying on Latin names for the elements – is also sometimes used to name Type-II ionic binary compounds. In this system, the metal (instead of a Roman numeral next to it) has a suffix "-ic" or "-ous" added to it to indicate its oxidation state ("-ous" for lower, "-ic" for higher). For example, the compound FeO contains the Fe^2+ cation (which balances out with the O^2− anion). Since this oxidation state is lower than the other possibility (Fe^3+), this compound is sometimes called ferrous oxide. For the compound SnO2, the tin ion is Sn^4+ (balancing out the 4− charge on the two O^2− anions), and because this is a higher oxidation state than the alternative (Sn^2+), this compound is termed stannic oxide.
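The charge-balancing step described above is mechanical enough to sketch in a few lines of code. The following is a minimal illustration, assuming Python; the atom counts and anion charges are supplied by hand from the examples in this section rather than looked up from any general table:

```python
ROMAN = {1: "I", 2: "II", 3: "III", 4: "IV", 5: "V", 6: "VI", 7: "VII"}

def stock_numeral(n_metal_atoms, n_anions, anion_charge):
    """Oxidation state (as a Roman numeral) the metal needs to balance the anions."""
    total_negative = n_anions * anion_charge
    metal_charge, remainder = divmod(total_negative, n_metal_atoms)
    if remainder:
        raise ValueError("charges cannot be balanced with an integer oxidation state")
    return ROMAN[metal_charge]

# FeCl3: one Fe and three Cl^- ions -> iron(III) chloride
print(stock_numeral(1, 3, 1))   # III
# PbS2: one Pb and two S^2- ions   -> lead(IV) sulfide
print(stock_numeral(1, 2, 2))   # IV
```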
Some ionic compounds contain polyatomic ions, which are charged entities containing two or more covalently bonded types of atoms. It is important to know the names of common polyatomic ions; these include:
ammonium (NH4^+)
nitrite (NO2^−)
nitrate (NO3^−)
sulfite (SO3^2−)
sulfate (SO4^2−)
hydrogen sulfate (bisulfate) (HSO4^−)
hydroxide (OH^−)
cyanide (CN^−)
phosphate (PO4^3−)
hydrogen phosphate (HPO4^2−)
dihydrogen phosphate (H2PO4^−)
carbonate (CO3^2−)
hydrogen carbonate (bicarbonate) (HCO3^−)
hypochlorite (ClO^−)
chlorite (ClO2^−)
chlorate (ClO3^−)
perchlorate (ClO4^−)
acetate (CH3COO^−)
permanganate (MnO4^−)
dichromate (Cr2O7^2−)
chromate (CrO4^2−)
peroxide (O2^2−)
superoxide (O2^−)
oxalate (C2O4^2−)
hydrogen oxalate (HC2O4^−)
The formula Na2SO3 denotes that the cation is sodium, or Na^+, and that the anion is the sulfite ion (SO3^2−). Therefore, this compound is named sodium sulfite. If the given formula is Ca(OH)2, it can be seen that OH^− is the hydroxide ion. Since the charge on the calcium ion is 2+, it makes sense there must be two OH^− ions to balance the charge. Therefore, the name of the compound is calcium hydroxide. If one is asked to write the formula for copper(I) chromate, the Roman numeral indicates that the copper ion is Cu^+ and one can identify that the compound contains the chromate ion (CrO4^2−). Two of the 1+ copper ions are needed to balance the charge of one 2− chromate ion, so the formula is Cu2CrO4.
Type-III binary compounds
Type-III binary compounds are bonded covalently. Covalent bonding occurs between nonmetal elements. Compounds bonded covalently are also known as molecules. For the compound, the first element is named first and with its full elemental name. The second element is named as if it were an anion (base name of the element + -ide suffix). Then, prefixes are used to indicate the numbers of each atom present: these prefixes are mono- (one), di- (two), tri- (three), tetra- (four), penta- (five), hexa- (six), hepta- (seven), octa- (eight), nona- (nine), and deca- (ten). The prefix mono- is never used with the first element. Thus, NCl3 is termed nitrogen trichloride, BF3 is termed boron trifluoride, and P2O5 is termed diphosphorus pentoxide (although the a of the prefix penta- should actually not be omitted before a vowel: the IUPAC Red Book 2005 page 69 states, "The final vowels of multiplicative prefixes should not be elided (although "monoxide", rather than "monooxide", is an allowed exception because of general usage).").
Carbon dioxide is written CO2; sulfur tetrafluoride is written SF4. A few compounds, however, have common names that prevail. H2O, for example, is usually termed water rather than dihydrogen monoxide, and NH3 is preferentially termed ammonia rather than nitrogen trihydride.
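The prefix rules for Type-III compounds are similarly mechanical. The snippet below is a simplified sketch, assuming Python; the element stems (e.g. "chlor", "ox") are supplied by hand, and only the "monoxide" elision allowed by IUPAC is applied, so it is an illustration rather than a full name generator:

```python
PREFIXES = {1: "mono", 2: "di", 3: "tri", 4: "tetra", 5: "penta",
            6: "hexa", 7: "hepta", 8: "octa", 9: "nona", 10: "deca"}

def covalent_binary_name(element1, count1, element2_stem, count2):
    """Name a Type-III binary compound, e.g. ('nitrogen', 1, 'chlor', 3)."""
    # The prefix mono- is never used with the first element.
    first = element1 if count1 == 1 else PREFIXES[count1] + element1
    second = PREFIXES[count2] + element2_stem + "ide"
    # IUPAC keeps final vowels ("pentaoxide"), but "monoxide" is an allowed exception.
    second = second.replace("monoox", "monox")
    return f"{first} {second}"

print(covalent_binary_name("nitrogen", 1, "chlor", 3))   # nitrogen trichloride
print(covalent_binary_name("carbon", 1, "ox", 2))        # carbon dioxide
print(covalent_binary_name("carbon", 1, "ox", 1))        # carbon monoxide
print(covalent_binary_name("sulfur", 1, "fluor", 4))     # sulfur tetrafluoride
```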
Substitutive nomenclature
This naming method generally follows established IUPAC organic nomenclature. Hydrides of the main group elements (groups 13–17) are given the base name ending with -ane, e.g. borane (BH3), oxidane (H2O), phosphane (PH3) (although the name phosphine is also in common use, it is not recommended by IUPAC). The compound PCl3 would thus be named substitutively as trichlorophosphane (with chlorine "substituting"). However, not all such names (or stems) are derived from the element name. For example, NH3 is termed "azane".
Additive nomenclature
This method of naming has been developed principally for coordination compounds although it can be applied more widely. An example of its application is [Co(NH3)5Cl]Cl2, pentaamminechloridocobalt(III) chloride.
Ligands, too, have a special naming convention. Whereas chloride becomes the prefix chloro- in substitutive naming, for a ligand it becomes chlorido-.
See also
Descriptor (chemistry)
International Chemical Identifier
IUPAC nomenclature for organic chemical transformations
IUPAC nomenclature of inorganic chemistry 2005
IUPAC nomenclature of organic chemistry
IUPAC numerical multiplier
List of chemical compounds with unusual names
Preferred IUPAC name
References
External links
Interactive IUPAC Compendium of Chemical Terminology (interactive "Gold Book")
IUPAC Nomenclature Books Series (list of all IUPAC nomenclature books, and means of accessing them)
IUPAC Compendium of Chemical Terminology ("Gold Book") (archived 2005)
Quantities, Units and Symbols in Physical Chemistry ("Green Book")
IUPAC Nomenclature of Organic Chemistry ("Blue Book")
Nomenclature of Inorganic Chemistry IUPAC Recommendations 2005 ("Red Book")
IUPAC Recommendations on Organic & Biochemical Nomenclature, Symbols, Terminology, etc. (includes IUBMB Recommendations for biochemistry)
chemicalize.org A free web site/service that extracts IUPAC names from web pages and annotates a "chemicalized" version with structure images. Structures from annotated pages can also be searched.
ChemAxon Name <> Structure – IUPAC (& traditional) name to structure and structure to IUPAC name software. As used at chemicalize.org
ACD/Name – Generates IUPAC, INDEX (CAS), InChi, Smiles, etc. for drawn structures in 10 languages and translates names to structures. Also available as batch tool and for Pipeline Pilot. Part of I-Lab 2.0 | Chemical nomenclature | [
"Chemistry"
] | 3,630 | [
"nan"
] |
3,141,677 | https://en.wikipedia.org/wiki/Dirichlet%27s%20test | In mathematics, Dirichlet's test is a method of testing for the convergence of a series that is especially useful for proving conditional convergence. It is named after its author Peter Gustav Lejeune Dirichlet, and was published posthumously in the Journal de Mathématiques Pures et Appliquées in 1862.
Statement
The test states that if (a_n) is a monotonic sequence of real numbers with a_n → 0 and (b_n) is a sequence of real numbers or complex numbers whose partial sums B_n = b_1 + b_2 + ⋯ + b_n are bounded, then the series ∑_{n=1}^{∞} a_n b_n converges.
Proof
Let S_n = a_1b_1 + a_2b_2 + ⋯ + a_nb_n and B_n = b_1 + b_2 + ⋯ + b_n (with B_0 = 0).
From summation by parts, we have that S_n = a_nB_n + ∑_{k=1}^{n−1} B_k(a_k − a_{k+1}). Since the magnitudes of the partial sums B_n are bounded by some M and a_n → 0 as n → ∞, the first of these terms approaches zero: |a_nB_n| ≤ M|a_n| → 0 as n → ∞.
Furthermore, for each k, |B_k(a_k − a_{k+1})| ≤ M|a_k − a_{k+1}|.
Since (a_n) is monotone, it is either decreasing or increasing:
If (a_n) is decreasing, ∑_{k=1}^{n−1} M|a_k − a_{k+1}| = M ∑_{k=1}^{n−1} (a_k − a_{k+1}),
which is a telescoping sum that equals M(a_1 − a_n) and therefore approaches Ma_1 as n → ∞ (because a_n → 0). Thus, ∑_{k=1}^{∞} M|a_k − a_{k+1}| converges.
If (a_n) is increasing, ∑_{k=1}^{n−1} M|a_k − a_{k+1}| = −M ∑_{k=1}^{n−1} (a_k − a_{k+1}),
which is again a telescoping sum that equals −M(a_1 − a_n) and therefore approaches −Ma_1 as n → ∞. Thus, again, ∑_{k=1}^{∞} M|a_k − a_{k+1}| converges.
So, the series ∑_{k=1}^{∞} B_k(a_k − a_{k+1}) converges absolutely, by the direct comparison test to ∑_{k=1}^{∞} M|a_k − a_{k+1}|. Hence S_n = a_nB_n + ∑_{k=1}^{n−1} B_k(a_k − a_{k+1}) converges.
Applications
A particular case of Dirichlet's test is the more commonly used alternating series test for the case b_n = (−1)^n, i.e. the series ∑ (−1)^n a_n.
Another corollary is that ∑_{n=1}^{∞} a_n sin n converges whenever (a_n) is a decreasing sequence that tends to zero. To see that the partial sums ∑_{n=1}^{N} sin n
are bounded, we can use the summation formula ∑_{n=1}^{N} sin n = sin(N/2) · sin((N + 1)/2) / sin(1/2), whose absolute value is at most 1/sin(1/2).
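A short numerical experiment (a sketch assuming Python with NumPy, offered as an illustration rather than a proof) checks both hypotheses of the test for a_n = 1/n and b_n = sin n: the partial sums of sin n stay within the bound 1/sin(1/2), and the partial sums of ∑ sin(n)/n approach the known value (π − 1)/2.

```python
import numpy as np

# Dirichlet's test illustration: a_n = 1/n is monotone and tends to zero,
# and b_n = sin(n) has bounded partial sums.
N = 200_000
n = np.arange(1, N + 1)

B = np.cumsum(np.sin(n))        # partial sums of b_n: should stay bounded
S = np.cumsum(np.sin(n) / n)    # partial sums of a_n * b_n: should converge

print("max |B_N|        :", np.abs(B).max())
print("bound 1/sin(1/2) :", 1 / np.sin(0.5))
print("S_N at N = 200000:", S[-1])
print("(pi - 1)/2       :", (np.pi - 1) / 2)
```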
Improper integrals
An analogous statement for convergence of improper integrals is proven using integration by parts. If the integral of a function f is uniformly bounded over all intervals, and g is a non-negative monotonically decreasing function, then the integral of fg is a convergent improper integral.
Notes
References
Hardy, G. H., A Course of Pure Mathematics, Ninth edition, Cambridge University Press, 1946. (pp. 379–380).
Voxman, William L., Advanced Calculus: An Introduction to Modern Analysis, Marcel Dekker, Inc., New York, 1981. (§8.B.13–15) .
External links
PlanetMath.org
Convergence tests | Dirichlet's test | [
"Mathematics"
] | 430 | [
"Theorems in mathematical analysis",
"Convergence tests"
] |
3,141,761 | https://en.wikipedia.org/wiki/Glossary%20of%20arithmetic%20and%20diophantine%20geometry | This is a glossary of arithmetic and diophantine geometry in mathematics, areas growing out of the traditional study of Diophantine equations to encompass large parts of number theory and algebraic geometry. Much of the theory is in the form of proposed conjectures, which can be related at various levels of generality.
Diophantine geometry in general is the study of algebraic varieties V over fields K that are finitely generated over their prime fields—including as of special interest number fields and finite fields—and over local fields. Of those, only the complex numbers are algebraically closed; over any other K the existence of points of V with coordinates in K is something to be proved and studied as an extra topic, even knowing the geometry of V.
Arithmetic geometry can be more generally defined as the study of schemes of finite type over the spectrum of the ring of integers. Arithmetic geometry has also been defined as the application of the techniques of algebraic geometry to problems in number theory.
See also the glossary of number theory terms at Glossary of number theory.
A
B
C
D
E
F
G
H
I
K
L
M
N
O
Q
R
S
T
U
V
W
See also
Glossary of number theory
Arithmetic topology
Arithmetic dynamics
References
Further reading
Dino Lorenzini (1996), An invitation to arithmetic geometry, AMS Bookstore,
Diophantine geometry
Algebraic geometry
Geometry
Wikipedia glossaries using description lists | Glossary of arithmetic and diophantine geometry | [
"Mathematics"
] | 282 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
3,141,855 | https://en.wikipedia.org/wiki/Height%20function | A height function is a function that quantifies the complexity of mathematical objects. In Diophantine geometry, height functions quantify the size of solutions to Diophantine equations and are typically functions from a set of points on algebraic varieties (or a set of algebraic varieties) to the real numbers.
For instance, the classical or naive height over the rational numbers is typically defined to be the maximum of the numerators and denominators of the coordinates (e.g. 7 for the coordinates (3/7, 1/2)), but in a logarithmic scale.
Significance
Height functions allow mathematicians to count objects, such as rational points, that are otherwise infinite in quantity. For instance, the set of rational numbers of naive height (the maximum of the numerator and denominator when expressed in lowest terms) below any given constant is finite despite the set of rational numbers being infinite. In this sense, height functions can be used to prove asymptotic results such as Baker's theorem in transcendental number theory which was proved by .
In other cases, height functions can distinguish some objects based on their complexity. For instance, the subspace theorem proved by demonstrates that points of small height (i.e. small complexity) in projective space lie in a finite number of hyperplanes and generalizes Siegel's theorem on integral points and solution of the S-unit equation.
Height functions were crucial to the proofs of the Mordell–Weil theorem and Faltings's theorem by and respectively. Several outstanding unsolved problems about the heights of rational points on algebraic varieties, such as the Manin conjecture and Vojta's conjecture, have far-reaching implications for problems in Diophantine approximation, Diophantine equations, arithmetic geometry, and mathematical logic.
History
An early form of height function was proposed by Giambattista Benedetti (c. 1563), who argued that the consonance of a musical interval could be measured by the product of its numerator and denominator (in reduced form).
Heights in Diophantine geometry were initially developed by André Weil and Douglas Northcott beginning in the 1920s. Innovations in 1960s were the Néron–Tate height and the realization that heights were linked to projective representations in much the same way that ample line bundles are in other parts of algebraic geometry. In the 1970s, Suren Arakelov developed Arakelov heights in Arakelov theory. In 1983, Faltings developed his theory of Faltings heights in his proof of Faltings's theorem.
Height functions in Diophantine geometry
Naive height
Classical or naive height is defined in terms of ordinary absolute value on homogeneous coordinates. It is typically a logarithmic scale and therefore can be viewed as being proportional to the "algebraic complexity" or number of bits needed to store a point. It is typically defined to be the logarithm of the maximum absolute value of the vector of coprime integers obtained by multiplying through by a lowest common denominator. This may be used to define height on a point in projective space over Q, or of a polynomial, regarded as a vector of coefficients, or of an algebraic number, from the height of its minimal polynomial.
The naive height of a rational number x = p/q (in lowest terms) is
multiplicative height: H(p/q) = max(|p|, |q|)
logarithmic height: h(p/q) = log H(p/q)
Therefore, the naive multiplicative and logarithmic heights of 4/10 are 5 and log 5, for example, since 4/10 = 2/5 in lowest terms.
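These definitions translate directly into code. A minimal sketch, assuming Python (the standard fractions module is used only to reduce p/q to lowest terms):

```python
from fractions import Fraction
from math import log

def multiplicative_height(x):
    """H(p/q) = max(|p|, |q|), with p/q in lowest terms."""
    x = Fraction(x)                  # Fraction reduces to lowest terms automatically
    return max(abs(x.numerator), x.denominator)

def logarithmic_height(x):
    """h(p/q) = log H(p/q)."""
    return log(multiplicative_height(x))

print(multiplicative_height(Fraction(4, 10)))   # 5, since 4/10 = 2/5 in lowest terms
print(logarithmic_height(Fraction(4, 10)))      # log 5 ≈ 1.609
```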
The naive height H of an elliptic curve E given by y^2 = x^3 + Ax + B is defined to be H(E) = max(4|A|^3, 27B^2).
Néron–Tate height
The Néron–Tate height, or canonical height, is a quadratic form on the Mordell–Weil group of rational points of an abelian variety defined over a global field. It is named after André Néron, who first defined it as a sum of local heights, and John Tate, who defined it globally in an unpublished work.
Weil height
Let X be a projective variety over a number field K. Let L be a line bundle on X.
One defines the Weil height on X with respect to L as follows.
First, suppose that L is very ample. A choice of basis of the space of global sections defines a morphism ϕ from X to projective space, and for all points p on X, one defines h_L(p) := h(ϕ(p)), where h is the naive height on projective space. For fixed X and L, choosing a different basis of global sections changes h_L, but only by a bounded function of p. Thus h_L is well-defined up to addition of a function that is O(1).
In general, one can write L as the difference of two very ample line bundles L1 and L2 on X and define
h_L := h_L1 − h_L2,
which again is well-defined up to O(1).
Arakelov height
The Arakelov height on a projective space over the field of algebraic numbers is a global height function with local contributions coming from Fubini–Study metrics on the Archimedean fields and the usual metric on the non-Archimedean fields. It is the usual Weil height equipped with a different metric.
Faltings height
The Faltings height of an abelian variety defined over a number field is a measure of its arithmetic complexity. It is defined in terms of the height of a metrized line bundle. It was introduced by in his proof of the Mordell conjecture.
Height functions in algebra
Height of a polynomial
For a polynomial P of degree n given by
P = a_0 + a_1x + a_2x^2 + ⋯ + a_nx^n,
the height H(P) is defined to be the maximum of the magnitudes of its coefficients:
H(P) = max(|a_0|, |a_1|, …, |a_n|).
One could similarly define the length L(P) as the sum of the magnitudes of the coefficients:
L(P) = |a_0| + |a_1| + ⋯ + |a_n|.
Relation to Mahler measure
The Mahler measure M(P) of P is also a measure of the complexity of P. The three functions H(P), L(P) and M(P) are related by the inequalities
M(P) ≤ L(P) ≤ 2^n M(P) and H(P) ≤ C(n, ⌊n/2⌋) M(P),
where C(n, ⌊n/2⌋) is the binomial coefficient.
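All three quantities are straightforward to compute numerically. The sketch below assumes Python with NumPy; the Mahler measure is evaluated from numerically computed roots, so the result is approximate, and the final line checks the first chain of inequalities for a sample polynomial.

```python
import numpy as np

def poly_height(coeffs):
    """H(P): the maximum magnitude of the coefficients."""
    return max(abs(c) for c in coeffs)

def poly_length(coeffs):
    """L(P): the sum of the magnitudes of the coefficients."""
    return sum(abs(c) for c in coeffs)

def mahler_measure(coeffs):
    """M(P) = |leading coefficient| * product of max(1, |root|), roots found numerically."""
    roots = np.roots(coeffs)              # coefficients ordered from highest degree down
    return abs(coeffs[0]) * float(np.prod(np.maximum(1.0, np.abs(roots))))

coeffs = [2, -3, 0, 5]                    # P(x) = 2x^3 - 3x^2 + 5
n = len(coeffs) - 1
H, L, M = poly_height(coeffs), poly_length(coeffs), mahler_measure(coeffs)
print(H, L, M)                            # 5 10 5.0 (approximately)
print(M <= L <= 2**n * M)                 # True: M(P) <= L(P) <= 2^n M(P)
```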
Height functions in automorphic forms
One of the conditions in the definition of an automorphic form on the general linear group of an adelic algebraic group is moderate growth, which is an asymptotic condition on the growth of a height function on the general linear group viewed as an affine variety.
Other height functions
The height of an irreducible rational number x = p/q, q > 0 is |p| + q (this function is used for constructing a bijection between the natural numbers and the rational numbers).
See also
abc conjecture
Birch and Swinnerton-Dyer conjecture
Elliptic Lehmer conjecture
Heath-Brown–Moroz constant
Height of a formal group law
Height zeta function
Raynaud's isogeny theorem
References
Sources
External links
Polynomial height at Mathworld
Polynomials
Abelian varieties
Elliptic curves
Diophantine geometry
Algebraic number theory
Abstract algebra | Height function | [
"Mathematics"
] | 1,387 | [
"Algebra",
"Polynomials",
"Number theory",
"Algebraic number theory",
"Abstract algebra"
] |
3,142,105 | https://en.wikipedia.org/wiki/Bombieri%E2%80%93Lang%20conjecture | In arithmetic geometry, the Bombieri–Lang conjecture is an unsolved problem conjectured by Enrico Bombieri and Serge Lang about the Zariski density of the set of rational points of an algebraic variety of general type.
Statement
The weak Bombieri–Lang conjecture for surfaces states that if X is a smooth surface of general type defined over a number field k, then the k-rational points of X do not form a dense set in the Zariski topology on X.
The general form of the Bombieri–Lang conjecture states that if X is a positive-dimensional algebraic variety of general type defined over a number field k, then the k-rational points of X do not form a dense set in the Zariski topology.
The refined form of the Bombieri–Lang conjecture states that if X is an algebraic variety of general type defined over a number field k, then there is a dense open subset U of X such that for all number field extensions k′ of k, the set of k′-rational points of U is finite.
History
The Bombieri–Lang conjecture was independently posed by Enrico Bombieri and Serge Lang. In a 1980 lecture at the University of Chicago, Enrico Bombieri posed a problem about the degeneracy of rational points for surfaces of general type. Independently in a series of papers starting in 1971, Serge Lang conjectured a more general relation between the distribution of rational points and algebraic hyperbolicity, formulated in the "refined form" of the Bombieri–Lang conjecture.
Generalizations and implications
The Bombieri–Lang conjecture is an analogue for surfaces of Faltings's theorem, which states that algebraic curves of genus greater than one only have finitely many rational points.
If true, the Bombieri–Lang conjecture would resolve the Erdős–Ulam problem, as it would imply that there do not exist dense subsets of the Euclidean plane all of whose pairwise distances are rational.
In 1997, Lucia Caporaso, Barry Mazur, Joe Harris, and Patricia Pacelli showed that the Bombieri–Lang conjecture implies a uniform boundedness conjecture for rational points: there is a constant N(g, d) depending only on g and d such that the number of rational points of any genus g curve over any degree d number field is at most N(g, d).
References
Diophantine geometry
Unsolved problems in geometry
Conjectures | Bombieri–Lang conjecture | [
"Mathematics"
] | 447 | [
"Geometry problems",
"Unsolved problems in mathematics",
"Unsolved problems in geometry",
"Conjectures",
"Mathematical problems"
] |
3,142,115 | https://en.wikipedia.org/wiki/Positive%20definiteness | In mathematics, positive definiteness is a property of any object to which a bilinear form or a sesquilinear form may be naturally associated, which is positive-definite. See, in particular:
Positive-definite bilinear form
Positive-definite function
Positive-definite function on a group
Positive-definite functional
Positive-definite kernel
Positive-definite matrix
Positive-definite quadratic form
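For the matrix case, a real symmetric matrix A is positive-definite when x^T A x > 0 for every nonzero vector x, equivalently when all its eigenvalues are positive or when a Cholesky factorization exists. A minimal numerical check, assuming Python with NumPy and offered only as an illustration of the matrix entry above:

```python
import numpy as np

def is_positive_definite(A, tol=1e-12):
    """Return True if the real symmetric matrix A is positive-definite."""
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T, atol=tol):
        return False                     # this check is restricted to symmetric matrices
    try:
        np.linalg.cholesky(A)            # succeeds exactly when A is positive-definite
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite([[2, -1], [-1, 2]]))   # True  (eigenvalues 1 and 3)
print(is_positive_definite([[1, 2], [2, 1]]))     # False (eigenvalues 3 and -1)
```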
References
Quadratic forms | Positive definiteness | [
"Mathematics"
] | 87 | [
"Quadratic forms",
"Number theory"
] |
3,142,188 | https://en.wikipedia.org/wiki/I%20band%20%28NATO%29 | The NATO I band is the obsolete designation given to the radio frequencies from 8,000 to 10,000 MHz (equivalent to wavelengths between 3.75 and 3 cm) during the Cold War period. Since 1992, frequency allocations, allotment and assignments are in line with the NATO Joint Civil/Military Frequency Agreement (NJFA).
However, in order to identify military radio spectrum requirements, e.g. for crisis management planning, training, electronic warfare activities, or in military operations, this system is still in use.
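The quoted wavelength limits follow directly from λ = c / f. A two-line check, assuming Python purely as notation:

```python
c = 299_792_458                      # speed of light in m/s

for f_hz in (8e9, 10e9):             # NATO I band edges: 8,000 and 10,000 MHz
    print(f"{f_hz / 1e9:.0f} GHz -> {100 * c / f_hz:.2f} cm")
# prints 3.75 cm and 3.00 cm, matching the wavelengths quoted above
```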
References
Radio spectrum
Microwave bands | I band (NATO) | [
"Physics"
] | 114 | [
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
3,142,548 | https://en.wikipedia.org/wiki/HED%20meteorite | HED meteorites are a clan (subgroup) of achondrite meteorites. HED stands for "howardite–eucrite–diogenite".
These achondrites came from a differentiated parent body and experienced extensive igneous processing not much different from the magmatic rocks found on Earth and for this reason they closely resemble terrestrial igneous rocks.
Classification
HED meteorites are broadly divided into:
Howardites
Eucrites
Diogenites
Several subgroups of both eucrites and diogenites have been found.
The HED meteorites account for about 5% of all falls, which is about 60% of all achondrites.
Origin
No matter their composition, all these types of meteorite are thought to have originated in the crust of the asteroid Vesta. According to this theory, the differences of composition are due to their ejection at different moments in the geologic history of Vesta. Their crystallization ages have been determined to be between 4.43 and 4.55 billion years from radioisotope ratios. HED meteorites are differentiated meteorites, which were created by igneous processes in the crust of their parent asteroid.
It is thought that the method of transport from Vesta to Earth is as follows:
An impact on Vesta ejected debris, creating small ( diameter or less) V-type asteroids. Either the asteroidal chunks were ejected as such, or were formed from smaller debris. Some of these small asteroids formed the Vesta family, while others were scattered somewhat further. This event is thought to have happened less than 1 billion years ago. There is an enormous impact crater on Vesta covering much of the southern hemisphere which is the best candidate for the site of this impact. The amount of rock that was excavated there is many times more than enough to account for all known V-type asteroids.
Some of the more far-flung asteroid debris ended up in the 3:1 Kirkwood gap. This is an unstable region due to strong perturbations by Jupiter, and asteroids that end up here get ejected into very different orbits on a timescale of about 100 million years. Some of these bodies are perturbed into near-Earth orbits forming the small V-type near-Earth asteroids such as e.g. 3551 Verenia, 3908 Nyx, or 4055 Magellan.
Later, smaller impacts on these near-Earth objects dislodged rock-sized meteorites, some of which later struck Earth. On the basis of cosmic ray exposure measurements, it is thought that most HED meteorites arose from several distinct impact events of this kind, and spent from about 6 million to 73 million years in space before striking the Earth.
See also
Glossary of meteoritics
References
External links
Meteorite articles, including discussions of HEDs, in Planetary Science Research Discoveries
Planetary science
Asteroidal achondrites
4 Vesta | HED meteorite | [
"Astronomy"
] | 590 | [
"Planetary science",
"Astronomical sub-disciplines"
] |
3,142,559 | https://en.wikipedia.org/wiki/F%20band%20%28NATO%29 | The NATO F band is the obsolete designation given to the radio frequencies from 3,000 to 4,000 MHz (equivalent to wavelengths between 10 and 7.5 cm) during the cold war period. Since 1992, frequency allocations, allotment and assignments are in line with the NATO Joint Civil/Military Frequency Agreement (NJFA).
However, in order to identify military radio spectrum requirements, e.g. for crisis management planning, training, electronic warfare activities, or in military operations, this system is still in use.
References
Radio spectrum | F band (NATO) | [
"Physics"
] | 112 | [
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
3,142,605 | https://en.wikipedia.org/wiki/M%20band%20%28NATO%29 | The NATO M band is the obsolete designation given to the radio frequencies from 60 to 100 GHz (equivalent to wavelengths between 5 and 3 mm) during the cold war period. Since 1992 frequency allocations, allotment and assignments are in line to NATO Joint Civil/Military Frequency Agreement (NJFA).
However, in order to identify military radio spectrum requirements, e.g. for crises management planning, training, Electronic warfare activities, or in military operations, this system is still in use.
The NATO M band is also a subset of the EHF band as defined by the ITU. It intersects with the V (50–75 GHz) and W band (75–110 GHz) of the older IEEE classification system.
References
Radio spectrum | M band (NATO) | [
"Physics"
] | 151 | [
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
3,142,646 | https://en.wikipedia.org/wiki/Howardite | Howardites are achondritic stony meteorites that originate from the surface of the asteroid 4 Vesta, and as such are part of the HED meteorite clan. There are about 200 distinct members known.
Characteristics
They are a regolith breccia consisting mostly of eucrite and diogenite fragments, although carbonaceous chondrules and impact melt can also occur. The rock formed from impact ejecta which was later buried by newer impacts and lithified due to the pressure from overlying layers. Regolith breccias are not found on Earth due to a lack of regolith on bodies which have an atmosphere.
Name
Howardites are named for Edward Howard, a pioneer of meteoritics.
An arbitrary divide between howardites and the polymict eucrites is a 9:1 ratio of eucrite to diogenite fragments.
See also
Glossary of meteoritics
References
External links
Howardite images - Meteorites Australia
Planetary science
Asteroidal achondrites
4 Vesta | Howardite | [
"Astronomy"
] | 211 | [
"Planetary science",
"Astronomical sub-disciplines"
] |
3,142,697 | https://en.wikipedia.org/wiki/Vintage%20clothing | Vintage clothing is a generic term for garments originating from a previous era, as recent as the 1990s. The term can also be applied in reference to second-hand retail outlets, e.g. in vintage clothing store. While the concept originated during World War I as a response to textile shortages, vintage dressing encompasses choosing accessories, mixing vintage garments with new, as well as creating an ensemble of various styles and periods. Vintage clothes typically sell at low prices for high-end name brands.
Vintage clothing can be found in cities at local boutiques or local charities, or on the internet through digital second-hand shopping websites. Vintage fashion has seen a reemergence in popularity within the 21st century due to increased prevalence of vintage pieces in the media and among celebrities, as well as consumer interests in sustainability and slow fashion.
Definitions
"Vintage" is a colloquialism commonly used to refer to all old styles of clothing. A generally accepted industry standard is that items made between 30 and 100 years ago are considered "vintage" if they clearly reflect the styles and trends of the era they represent. These clothing items come with a sense of history attached to them, which is one of the reasons they are valued by vintage enthusiasts. This sense of history allows consumers to express sentimental nostalgia for fashions of past eras and for aspects not common with modern items like craftsmanship.
Vintage items are considered different than antique, which is used to refer to items 100 years old or more. Retro, short for retrospective, or "vintage style," usually refers to clothing that imitates the style of a previous era. Reproduction, or repro, clothing is a newly made copy of an older garment.
Clothing produced more recently is usually called modern or contemporary fashion.
Deadstock
Deadstock refers to merchandise that was withdrawn from sale and warehoused without having been sold to a customer. This is due to the item no longer being in fashion or otherwise outdated or superseded. Such merchandise might once again be in demand and at such point can be returned to sale. Return to sale of fashion merchandise would make it vintage clothing. However, repurposing of deadstock in new products is one way to improve sustainability in the fashion industry.
Sizing
In the United States, due to changes in clothing sizes, vintage sizes are often smaller than the corresponding contemporary size. For example, a garment from the 1970s labeled as Medium (M) might be similar in size to a 2010s Extra Small (XS). Vintage sewing patterns offer an option for those who want a historically accurate garment but cannot find one in their size.
Retail market
Popular places to buy vintage clothing include charity-run second-hand clothing shops, thrift stores, consignment shops, garage sales, car boot sales, flea markets, antique markets, estate sales, auctions, vintage clothing shops and vintage fashion, textile or collectables fairs. Specialist vintage clothing shops, such as Virginia by Virginia Bates in London, often attracted high-end customers.
With the rise of the digital world and social media, the consumption of vintage clothing has rapidly expanded, with e-commerce websites making vintage pieces more accessible to consumers. The internet has drastically increased the availability of specific and hard-to-get items and opened up prospective markets for sellers around the world. In the last 20 years, social media in particular has become the most popular medium for consumers to obtain information about, and interact with, vintage fashion.
Popular places to acquire garments include online auctions (e.g. eBay), multi-vendor sites (e.g. Etsy), online vintage clothing shops (e.g. TheRealReal, ThredUp), specialist forums, and social media sites (e.g. Facebook Marketplace, Depop), where consumers can like, share, and purchase vintage goods from their smartphones. Many vintage clothing shops with physical locations also sell their goods online. In a world filled with fast fashion, where "new" is the most popular choice, vintage style has found a way to stay popular. This owes much to celebrities and influencers following the trend, making it a desirable choice for the general public as well. Famous brands, such as Gucci, have made choices like cutting down the number of yearly fashion shows in order to move the fashion industry toward greater sustainability. The seasonal fashion cycle that the industry has followed for years is being broken down in favor of a more environmentally conscious approach to fashion.
Typically in the United States, vintage clothing shops can be found clustered in college towns and artsy neighborhoods of cities. In contrast to thrift stores that sell both vintage and contemporary used clothing, vintage clothing shops are usually for-profit enterprises, with the market mixed between small chains and independent stores. These stores typically range from 200 to 5,000 square feet in size, and will usually have a fitting room. Vintage clothing stores may obtain clothing from individuals in exchange for cash or store credit.
History
Before the rise of industrial manufacture, construction of most articles of clothing required extensive hand labor. Clothing worn by farmers and laborers was more a matter of practicality than fashion. In order to maximize value, clothing was repaired when worn or damaged, sometimes with layers of patching. Used clothing, in reasonable condition, could be tailored for a new owner. When too tattered to repair, an article might have been taken down to scraps for use in a quilt or braided rag rug, or used as rags for cleaning or dusting.
The term "vintage" in relation to "vintage fashion" and "vintage clothing" was first used in 1997 by Matthew Adams who founded Frock Me!, the first vintage fashion fair in the UK.
During World War I, the United States launched a conservation campaign, with slogans such as "Make economy fashionable lest it become obligatory". One result was an approximate 10% reduction in wartime trash production.
Into the 20th and 21st centuries, vintage clothing has seen increased popularity throughout media and pop culture. The tides of popular fashion create demand for ongoing replacement of products with something that is new and fresh. What was once known as secondhand clothing is now seen as vintage clothing. This is due in part to increased visibility through media, film and television, and celebrity influence. In the past 20 years, vintage fashion has been featured in leading fashion and lifestyle magazines, including a 2011 issue of Marie Claire. The popularity of period pieces within film and television has also contributed to trends of vintage fashion. The authentic portrayal of 1960s fashions in the 2007 award-winning series Mad Men sparked a resurgence of consumer interest in the glamour of the era. This was reflected in the prevalence of 1950s and 60s fashions on 2010 runways, and in increased sales at vintage shops. In the early 2000s, celebrities like Reese Witherspoon and Renee Zellweger brought vintage clothing into the media by wearing vintage pieces to red carpets.
In the past decade, vintage clothing has become part of the movement towards environmental sustainability and sustainable fashion, and is an aspect of slow fashion, a concept coined in 2007 by Kate Fletcher. Vintage fashion appeals to consumer interests of ethical clothing as it falls under categories of reusing, recycling and repairing items rather than throwing them away.
Vintage shopping trends have also seen a transition to E-commerce, with the emergence of sites such as Depop, founded in 2011, ThredUp, founded in 2009, and TheRealReal, founded in 2011. When new retailers try to enter the market for vintage clothing, they face certain barriers unique to this segment of the fashion industry. For example, authenticity and exclusivity are two very important factors that vintage clothing consumers look for, so guaranteeing these qualities is of greatest importance for the retailers. Knowing and disclosing the origin of the clothing is a crucial component of succeeding in the vintage clothing retail industry.
Those who purchase vintage clothes may wear them frequently or treat them as prized showpieces within their wardrobe. Such showpieces may never be worn at all, instead being appreciated from their place in the owner's closet. While some people may keep these clothes in their possession for a long time, others may look to repurpose, mend, or pass these items on to new owners.
Historically based sub-cultural groups like rockabilly and swing dancing played a part in the increased interest in vintage clothes. In Finland, the vintage scene gave rise to a registered non-profit organization called Fintage, born of a common interest in the preservation of material culture and the environment.
"Vintage inspired" and "vintage style"
Throughout history, fashion design has turned to previous eras for inspiration. Vintage clothing retains and increases in value because it is genuinely from a past era. Vintage clothing allows buyers to be their own designers, because they can choose different styles from second-hand clothing. In addition, authentic garments were made one at a time, with enough attention to detail to create an item of long-lasting value. Garments closely resembling original vintage (retro or antique) clothing are mass-produced, for the most part, in China. An example of this is the simple slip dress that emerged in the early 1990s, a style that resembles a 1930s design but, upon examination, only superficially resembles the real thing. These styles are generally referred to as "vintage style", "vintage inspired" or "vintage reproductions". They serve as a convenient alternative for those who admire an old style but prefer a modern interpretation. People who wear vintage clothing look for designer brands and limited edition products to fit the “vintage” category. Sellers claim a consumer advantage in that, unlike the original garments, reproductions are usually available in a range of sizes and perhaps colours and/or fabrics, and can be sold much more cheaply.
Vintage fashion can be understood as a response to fast fashion, in which garments are mass-produced. Vintage shopping allows consumers to find unique pieces and create a sense of individuality. Vintage clothing is also meant to evoke an emotional connection, linking pieces with feelings such as nostalgia and memory. The individuality and sense of style that a person tries to convey by building a wardrobe around "vintage style" is something that drives the trend forward.
Even luxury clothing consumers have made a shift toward a sustainable approach to luxury clothing, and vintage style has contributed greatly to this. Influencers and celebrities gravitating toward branded items that are second-hand or vintage have pushed consumers to own unique pieces that are more environmentally friendly, rather than shopping for cheaper fast fashion. Giving vintage clothes a strong value in society and fashion has been crucial to making them a desirable choice for the greater public. This has helped create brand desirability in a market which may not have had this component earlier. Especially for the general public, who have tighter budgets than celebrities, second-hand luxury items seem to be an appealing path into the world of luxury brands.
Environmental sustainability
Vintage fashion is part of a larger movement of sustainable fashion, and falls under the category of slow fashion, which is a direct response to increasing awareness of the environmental impacts of the fast fashion industry. Within the past 10 years, increased media coverage of environmental issues has led to growing consumer interest in ethical clothing consumption, and in vintage fashion specifically.
The fashion industry ranks as the second most polluting industry in the world after the oil industry. Consequently, a trend toward more conscious and sustainable shopping has emerged over the years, and interest in and demand for vintage shopping have grown significantly. In 2020, the term “vintage fashion” was searched for 35,000 times on Lyst. One way of reducing waste and limiting the negative impact of fashion on the environment is the reuse and recycling of clothes, and vintage stores make fashion more sustainable. Purchasing one used item instead of a new one reduces CO2 emissions by 25% on average per use.
Sometimes vintage items are upcycled by changing the hemline or other features for a more contemporary look. Vintage items in poor condition are also salvaged for reuse as components in new garments. Throughout the world, used apparel is reclaimed and put to new uses. The textile recycling industry is able to process over ninety percent of this waste without producing any new hazardous waste or harmful by-products.
See also
Vintage (design)
Thrift store chic
Indie subculture
Counterculture
2010s fashion
Sustainable fashion
Bibliography
Bamford, Trudie (2003). Viva Vintage: Find it, Wear it, Love it. Carroll & Brown.
Tolkien, Tracy (2000). Vintage: the Art of Dressing up. Pavilion.
References
History of clothing
Fashion design
Reuse
Nostalgia | Vintage clothing | [
"Engineering"
] | 2,534 | [
"Design",
"Fashion design"
] |
3,142,841 | https://en.wikipedia.org/wiki/Stroke%20%28engine%29 | In the context of an internal combustion engine, the term stroke has the following related meanings:
A phase of the engine's cycle (e.g. compression stroke, exhaust stroke), during which the piston travels from top to bottom or vice versa.
The type of power cycle used by a piston engine (e.g. two-stroke engine, four-stroke engine).
"Stroke length", the distance travelled by the piston during each cycle. The stroke length, along with bore diameter, determines the engine's displacement.
Phases in the power cycle
Commonly used engine phases or strokes (i.e. those used in a four-stroke engine) are described below. Other types of engines can have very different phases.
Induction-intake stroke
The induction stroke is the first phase in a four-stroke (e.g. Otto cycle or Diesel cycle) engine. It involves the downward movement of the piston, creating a partial vacuum that draws an air-fuel mixture (or air alone, in the case of a direct injection engine) into the combustion chamber. The mixture enters the cylinder through an intake valve at the top of the cylinder.
Compression stroke
The compression stroke is the second of the four stages in a four-stroke engine.
In this stage, the air-fuel mixture (or air alone, in the case of a direct injection engine) is compressed to the top of the cylinder by the piston. This is the result of the piston moving upwards, reducing the volume of the chamber. Towards the end of this phase, the mixture is ignited, by a spark plug for petrol engines or by self-ignition for diesel engines.
Combustion-power-expansion stroke
The combustion stroke is the third phase, where the ignited air-fuel mixture expands and pushes the piston downwards. The force created by this expansion is what creates an engine's power.
Exhaust stroke
The exhaust stroke is the final phase in a four-stroke engine. In this phase, the piston moves upwards, pushing out the gases that were created during the combustion stroke. The gases exit the cylinder through an exhaust valve at the top of the cylinder. At the end of this phase, the exhaust valve closes and the intake valve opens, allowing a fresh air-fuel mixture into the cylinder so that the cycle can repeat.
Types of power cycles
The thermodynamic cycle used by a piston engine is often described by the number of strokes to complete a cycle. The most common designs for engines are two-stroke and four-stroke. Less common designs include one-stroke engines, five-stroke engines, six-stroke engines and two-and-four stroke engines.
One-stroke engine
The Granada, Spain-based company INNengine has developed an opposed-piston engine with four pistons on either side, making a total of eight. Fixed rods hold all the pistons together, and the pistons share one combustion chamber. These rods press against plates with an oscillating, wave-like profile, allowing the rods to press and release the pistons in a synchronized, smooth process. The engine, known as the e-REX, creates four times more power events per revolution than a conventional four-stroke and twice as many as a two-stroke. Although the e-REX is called a one-stroke engine, there is debate over whether it is actually a two-stroke design: each piston executes two strokes (i.e., compression/combustion and exhaust/intake) in half an engine revolution, and by INNengine's reasoning two strokes multiplied by half a revolution is what gave the patented design its "one-stroke" name.
Two-stroke engine
Two-stroke engines complete a power cycle every two strokes, which means a power cycle is completed with every crankshaft revolution. Two-stroke engines are commonly used in (typically large) marine engines, outdoor power tools (e.g. lawnmowers and chainsaws) and motorcycles.
Four-stroke engine
Four-stroke engines complete a power cycle every four strokes, which means a power cycle is completed every two crankshaft revolutions. Most automotive engines are of a four-stroke design.
Five-stroke engine
Five-stroke engines complete a power cycle every five strokes. The engine only exists as a prototype.
Six-stroke engine
Six-stroke engines complete a power cycle every six strokes, which means a power cycle is completed every three crankshaft revolutions.
Stroke length
The stroke length is how far the piston travels in the cylinder, which is determined by the cranks on the crankshaft.
Engine displacement is calculated by multiplying the cross-section area of the cylinder (determined by the bore) by the stroke length. This number is multiplied by the number of cylinders in the engine, to determine the total displacement.
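As a worked illustration of this calculation, the following short Python sketch computes a total displacement from bore, stroke and cylinder count; the 86 mm "square" four-cylinder engine in the example is a hypothetical illustration, not a figure taken from this article.

import math

def engine_displacement_cc(bore_mm, stroke_mm, cylinders):
    # Cross-section area of one cylinder, determined by the bore.
    area_mm2 = math.pi * (bore_mm / 2.0) ** 2
    # Swept volume per cylinder is the area times the stroke length.
    per_cylinder_mm3 = area_mm2 * stroke_mm
    # Total displacement, converted from cubic millimetres to cubic centimetres.
    return per_cylinder_mm3 * cylinders / 1000.0

# Example: a four-cylinder engine with an 86 mm bore and 86 mm stroke
# displaces about 1998 cc, i.e. a typical 2.0-litre engine.
print(round(engine_displacement_cc(86, 86, 4)))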
Steam engine
The term stroke can also apply to movement of the piston in a locomotive cylinder.
References
Engine technology | Stroke (engine) | [
"Technology"
] | 986 | [
"Engine technology",
"Engines"
] |
3,142,847 | https://en.wikipedia.org/wiki/Generalized%20suffix%20tree | In computer science, a generalized suffix tree is a suffix tree for a set of strings. Given a set of strings of total length n, it is a Patricia tree containing all suffixes of the strings. It is mostly used in bioinformatics.
Functionality
It can be built in Θ(n) time and space, and can be used to find all z occurrences of a string of length m in O(m + z) time, which is asymptotically optimal (assuming the size of the alphabet is constant).
When constructing such a tree, each string should be padded with a unique out-of-alphabet marker symbol (or string) to ensure that no suffix is a proper prefix of another, guaranteeing that each suffix is represented by a unique leaf node.
Algorithms for constructing a GST include Ukkonen's algorithm (1995) and McCreight's algorithm (1976).
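To illustrate the padding and leaf-labelling ideas described above, here is a minimal, naive Python sketch. It builds an uncompressed suffix trie rather than the compressed Patricia-tree form and runs in quadratic time, so it is only a didactic stand-in for Ukkonen's or McCreight's linear-time constructions; the terminator scheme and function names are illustrative assumptions, not part of the article.

def build_generalized_suffix_trie(strings):
    # Pad each string with a unique out-of-alphabet terminator so that no
    # suffix is a proper prefix of another and every suffix ends at its own node.
    root = {}
    for string_id, s in enumerate(strings):
        padded = s + chr(1 + string_id)
        for start in range(len(padded)):
            node = root
            for ch in padded[start:]:
                node = node.setdefault(ch, {})
            node["leaf"] = (string_id, start)  # string number and starting position
    return root

def occurrences(trie, pattern):
    # Walk the pattern from the root, then collect every leaf label below it.
    node = trie
    for ch in pattern:
        if ch not in node:
            return []
        node = node[ch]
    found, stack = [], [node]
    while stack:
        n = stack.pop()
        if "leaf" in n:
            found.append(n["leaf"])
        stack.extend(child for key, child in n.items() if key != "leaf")
    return found

gst = build_generalized_suffix_trie(["ABAB", "BABA"])
print(sorted(occurrences(gst, "AB")))   # [(0, 0), (0, 2), (1, 1)]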
Example
A suffix tree for the strings ABAB and BABA is shown in a figure above. They are padded with the unique terminator strings $0 and $1. The numbers in the leaf nodes are string number and starting position. Notice how a left to right traversal of the leaf nodes corresponds to the sorted order of the suffixes. The terminators might be strings or unique single symbols. Edges on $ from the root are left out in this example.
Alternatives
An alternative to building a generalized suffix tree is to concatenate the strings, and build a regular suffix tree or suffix array for the resulting string. When hits are evaluated after a search, global positions are mapped into documents and local positions with some algorithm and/or data structure, such as a binary search in the starting/ending positions of the documents.
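A minimal Python sketch of this alternative (the separator, the naive suffix-array builder and the function names are illustrative assumptions, not part of the article) concatenates the strings, builds a plain suffix array, and maps a global hit back to a (document, local position) pair with a binary search over the documents' starting offsets.

import bisect

def build_concatenated_index(docs, sep="\x00"):
    # Concatenate the documents with a separator and record each
    # document's starting offset in the combined string.
    text, starts, pos = "", [], 0
    for d in docs:
        starts.append(pos)
        text += d + sep
        pos += len(d) + 1
    # Naive suffix-array construction; a real index would use a
    # linear-time builder over `text` instead.
    suffix_array = sorted(range(len(text)), key=lambda i: text[i:])
    return text, starts, suffix_array

def global_to_local(global_pos, starts):
    # A binary search in the starting positions of the documents maps a
    # global hit back to (document number, position within that document).
    doc = bisect.bisect_right(starts, global_pos) - 1
    return doc, global_pos - starts[doc]

text, starts, sa = build_concatenated_index(["ABAB", "BABA"])
print(global_to_local(6, starts))   # -> (1, 1): position 6 lies in "BABA"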
References
External links
A C implementation of Generalized Suffix Tree for two strings
Trees (data structures)
Substring indices
String data structures
Computer science suffixes | Generalized suffix tree | [
"Technology"
] | 367 | [
"Computer science",
"Computer science suffixes"
] |
3,142,937 | https://en.wikipedia.org/wiki/List%20of%20works%20on%20intelligent%20design | This is a list of works addressing the subject or the themes of intelligent design.
Non-fiction
Supportive non-fiction
Supportive non-fiction books
Michael J. Behe. Darwin's Black Box: The Biochemical Challenge to Evolution, New York: Free Press, 1996.
Michael J. Behe, William A. Dembski, Stephen C. Meyer. Science and Evidence for Design in the Universe (Proceedings of the Wethersfield Institute), Ignatius Press 2000,
Michael J. Behe, The Edge of Evolution, Free Press, June 5, 2007,
David Berlinski. The Devil's Delusion: Atheism and its Scientific Pretensions, Basic Books; Reprint edition, 2009,
(Attacks evolution. Not a support for Creationism.)
Percival Davis and Dean H. Kenyon Of Pandas and People: The Central Question of Biological Origins 1989 (2nd edition 1993)
William A. Dembski. Intelligent Design: The Bridge Between Science & Theology, InterVarsity Press 1999.
William A. Dembski, James M. Kushiner. Signs of Intelligence: Understanding Intelligent Design, Brazos Press, 2001,
William A. Dembski, John Wilson. Uncommon Dissent: Intellectuals Who Find Darwinism Unconvincing, ISI Press, 2004.
William A. Dembski and Jonathan Wells, The Design of Life, Foundation for Thought and Ethics, November 19, 2007.
William A. Dembski, The Design Revolution: Answering the Toughest Questions About Intelligent Design (Foreword by Charles W. Colson). Inter Varsity Press. 2004,
William A. Dembski, The Design Inference: Eliminating Chance through Small Probabilities (Cambridge Studies in Probability, Induction and Decision Theory), Cambridge University Press, 2006.
William A. Dembski, No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2007),
William A. Dembski, The Design of Life: Discovering Signs of Intelligence in Biological Systems, ISI Distributed Titles; 1st edition (September 5, 2008)
William A. Dembski and Sean McDowell, Understanding Intelligent Design: Everything You Need to Know in Plain Language
William A. Dembski, Intelligent Design Uncensored: An Easy-to-Understand Guide to the Controversy, IVP Books, 2010
Michael Denton. Evolution: A Theory In Crisis, Adler & Adler; 3rd edition, 1986,
Michael Denton. Nature's Destiny: How the Laws of Biology Reveal Purpose in the Universe, 2002
Michael Pitman. Adam and Evolution, Rider & Co; First Edition, 1984,
James H. Feldstein, Intelligent Design?
Antony Flew. There Is a God: How the World's Most Notorious Atheist Changed His Mind, HarperOne, 2008,
Steve Fuller. Science vs Religion? Intelligent Design and the Problem of Evolution. Polity Books. 2007, .
Steve Fuller. Dissent Over Descent: Evolution's 500-year War on Intelligent Design. Icon Books Ltd. 2008,
James Gills. Darwinism Under The Microscope: How recent scientific evidence points to divine design, Charisma House, 2002,
Werner Gitt. In the Beginning Was Information: A Scientist Explains the Incredible Design in Nature, Master Books, 2006,
Guillermo Gonzalez and Jay Richards, The Privileged Planet: How Our Place in the Cosmos is Designed for Discovery Regnery Publishing 2006
Cornelius G. Hunter, (2002). Darwin's God: Evolution and the Problem of Evil, Brazos Press.
Phillip E. Johnson. Darwin on Trial, Washington, D.C.: Regnery Gateway, 1991.
Phillip E. Johnson. Defeating Darwinism by opening minds, Downers Grove, Ill.: InterVarsity Press, 1997.
Phillip E. Johnson. Evolution as dogma: the establishment of naturalism, Dallas, Tex.: Haughton Pub. Co., 1990
Stephen C. Meyer, Scott Minnich, Jonathan Moneymaker, Paul A. Nelson, and Ralph Seelke, Explore Evolution: The Arguments for and Against Neo-Darwinism, Hill House Publishers Pty. Ltd., Melbourne and London, 2007, .
Stephen C. Meyer. Signature in the Cell: DNA and the Evidence for Intelligent Design. New York: HarperOne (June 23, 2009)
Stephen C. Meyer. Darwin's Doubt: The Explosive Origin of Animal Life and the Case for Intelligent Design. New York: HarperOne (June 10, 2013)
Bradley Monton. Seeking God in Science: An Atheist Defends Intelligent Design, Broadview Press; 1 edition, 2009,
J. P. Moreland. The Creation Hypothesis: Scientific Evidence for an Intelligent Designer, IVP Books, 1994,
Robert G. Neuhauser. The Cosmic Deity: Where Scientists and Theologians Fear to Tread, Mill Creek Publishers, 2004,
Denyse O'Leary, By Design or By Chance? The Growing Controversy on the Origins of Life in the Universe, Augsburg Books, June 2004,
Mark Ludwig. Computer Viruses, Artificial Life and Evolution: The Little Black Book of Computer Viruses, Amer Eagle Pubns Inc, 1993,
Dean L. Overman, A Case Against Accident and Self-Organization, Rowman & Littlefield Publishers, 1997,
Online in full. (pdf of first section)
A.C. Bhaktivedanta Swami Prabhupada. Life Comes From Life, Bhaktivedanta Book Trust (ID from the Vedic Perspective)
Rael. Intelligent Design: Message from the Designers, Nova Distribution, 2006,
Fazale Rana. The Cell's Design: How Chemistry Reveals the Creator's Artistry, Baker Books, 2008,
Hugh Ross. Beyond the Cosmos, Signalman Publishing, 2010,
Hugh Ross. Why the Universe is the Way it Is, Grand Rapids, MI: Baker Books, 2008,
John C. Sanford. Genetic Entropy and the Mystery of the Genome, Feed My Sheep Foundation, Inc, 2008,
Geoffrey Simmons, William Dembski. What Darwin Didn't Know, Harvest House Publishers, 2004,
Philip Snow. Design and Origin of Birds, Day One Publications, 2006,
Lee Strobel. The Case for a Creator, Zondervan, 2004,
David Swift. Evolution Under the Microscope, Leighton Academic Press, 2002,
Charles Thaxton and Walter Bradley. The Mystery of Life's Origin: Reassessing Current Theories, Philosophical Library, January 19, 1984,
Thomas E. Woodward. Doubts About Darwin: A History of Intelligent Design, Baker Books, 1993,
Thomas E. Woodward. Darwin Strikes Back (2006),
Supportive non-fiction anthologies
John Angus Campbell, Stephen C. Meyer ed. Darwinism, Design and Public Education, Michigan State University Press, December 2003,
Why Are We Still Debating Darwinism? Why Not Teach the Controversy? John Angus Campbell
PART I—Should Darwinism Be Presented Critically and Comparatively in the Public Schools? Philosophical, Educational, and Legal Issues
Intelligent Design, Darwinism, and the Philosophy of Public Education, John Angus Campbell
Intelligent Design Theory, Religion, and the Science Curriculum, Warren A. Nord
Teaching the Controversy: Is It Science, Religion, or Speech? David DeWolf, Stephen C. Meyer, and Mark E. DeForrest
PART II—Scientific Critique of Biology Textbooks and Contemporary Evolutionary Theory
The Meanings of Evolution, Stephen C. Meyer and Michael Newton Keas
The Deniable Darwin, David Berlinski
Haeckel's Embryos and Evolution: Setting the Record Straight, Jonathan Wells
Second Thoughts about Peppered Moths, Jonathan Wells
Where Do We Come From? A Humbling Look at the Biology of Life's Origin, Massimo Pigliucci
Origin of Life and Evolution in Biology Textbooks: A Critique, Gordon C. Mills, Malcolm Lancaster, and Walter L. Bradley
PART III—The Theory of Intelligent Design: A Scientific Alternative to Neo-Darwinian and/or Chemical Evolutionary Theories
DNA and the Origin of Life: Information, Specification, and Explanation, Stephen C. Meyer
Design in the Details: The Origin of Biomolecular Machines, Michael J. Behe
Homology in Biology: Problem for Naturalistic Science and Prospect for Intelligent Design, Paul Nelson and Jonathan Wells
The Cambrian Explosion: Biology's Big Bang, Stephen C. Meyer, Marcus Ross, Paul Nelson, and Paul Chien
Reinstating Design within Science, William A. Dembski
PART IV—Critical Responses
The Rhetoric of Intelligent Design: Alternatives for Science and Religion, Celeste Michelle Condit
Intelligent Design and Irreducible Complexity: A Rejoinder, David Depew
Biochemical Complexity: Emergence or Design? Bruce H. Weber
Design Yes, Intelligent No: A Critique of Intelligent Design Theory and Neo-Creationism, Massimo Pigliucci
On Behalf of the Fool, Michael Ruse
Rhetorical Arguments and Scientific Arguments: Do My Children Have to Listen to More Arguments against Evolution? Eugene Garver
Design? Yes! But Is It Intelligent? William Provine
Creation and Evolution: A Modest Proposal, Alvin Plantinga
Thinking Pedagogically about Design, John Lyne
An Intelligent Person's Guide to Intelligent Design Theory Steve Fuller
The Rhetorical Problem of Intelligent Design, Phillip E. Johnson
Appendixes
A. U.S. Commission on Civil Rights Hearing: On Curriculum Controversies in Biology, 21 August 1998
B. Helping Schools to Teach Evolution, Donald Kennedy
C. Stratigraphic First Appearance of Phyla-Body Plans
D. Stratigraphic First Appearance of Phyla-Subphyla Body Plans
E. Probability of Other Body Plans Originating in the Cambrian Explosion
An anthology of papers from the November 1996 conference of the same name, sponsored by Christian Leadership Ministries.
Introduction, William Dembski
Part 1,"Unseating Naturalism," Walter Bradley, Jonathan Wells
Part 2, "Design Theory," Nancy Pearcey, William Dembski, Steve Meyer, Paul Nelson
Part 3, "Biological Design," Michael Behe, Siegfried Scherer, Sigrid Hartwig-Scherer, Jeff Schloss
Part 4, "Philosophy and Design," J.P. Moreland, Del Ratzsch, John Mark Reynolds, Bill Craig
Part 5, "Design in the Universe," Hugh Ross, Robert Kaita, David Berlinski, Robert Newman
Concluding essays, Phillip E. Johnson, Bruce Chapman
Supportive non-fiction papers and articles
Michael Behe. A Response to Critics of Darwin's Black Box
William A. Dembski. Becoming a Disciplined Science: Prospects, Pitfalls, and Reality Check for ID
William A. Dembski. Searching Large Spaces—Displacement and the No Free Lunch Regress
Supportive non-fiction films
The Privileged Planet
Unlocking the Mystery of Life
Expelled: No Intelligence Allowed
Darwin's Dilemma
The Information Enigma
Metamorphosis: The Beauty and Design of Butterflies
Flight: The Genius of Birds
Living Waters: Intelligent Design in the Oceans of the Earth
Neutral non-fiction
Neutral non-fiction books
David L. Bender (1988). Science and Religion; Opposing Viewpoints. St. Paul, Minnesota: Greenhaven Press.
Carl Johan Calleman (2009). The Purposeful Universe: How Quantum Theory and Mayan Cosmology Explain the Origin and Evolution of Life. Bear & Company.
Michael Corey (2007). The God Hypothesis: Discovering Design in Our Just Right Goldilocks Universe. Rowman & Littlefield Publishers.
Paul Davies (2007). Cosmic Jackpot The Goldilocks Enigma: Why is the Universe Just Right for Life?.
James Le Fanu (2009). Why Us?: How Science Rediscovered the Mystery of Ourselves. Pantheon.
Jerry Fodor and Massimo Piattelli-Palmarini. (2011) What Darwin Got Wrong. Picador; Reprint edition
James N. Gardner (2003). Biocosm: The New Scientific Theory of Evolution: Intelligent Life Is the Architect of the Universe. Inner Ocean Publishing.
Brian Goodwin (2001). How the Leopard Changed its Spots: The Evolution of Complexity. Princeton University Press.
Amit Goswami (2008). Creative Evolution: A Physicist's Resolution Between Darwinism and Intelligent Design. Quest Books; 1st Quest Ed edition.
George Greenstein (1988). The Symbiotic Universe: Life and mind in the Cosmos. Morrow.
Bernard Haisch (2010). The Purpose-Guided Universe: Believing In Einstein, Darwin, and God. New Page Books; 1 edition.
Francis Hitching (1983). The Neck of the Giraffe or Where Darwin Went Wrong. Signet.
Mae-Wan Ho (1984). Beyond Neo-Darwinism: An Introduction to the New Evolutionary Paradigm. Academic Pr.
Ervin Laszlo (1987). Evolution: The Grand Synthesis. Shambhala.
Albert Low (2008). The Origin of Human Nature: A Zen Buddhist Looks at Evolution. Sussex Academic Pr.
Richard Milton (2000). Shattering the Myths of Darwinism. Park Street Press.
Norman Macbeth (1971). Darwin Retried. Boston: Harvard Common Press
Johnjoe McFadden (2002). Quantum Evolution: How Physics' Weirdest Theory Explains Life's Biggest Mystery. W. W. Norton and Company.
Robert G. B. Reid (1985). Evolutionary Theory: The Unfinished Synthesis. Cornell Univ Pr.
Stanley Salthe (2003). Development and Evolution: Complexity and Change in Biology. The MIT Press.
James A. Shapiro (2011). Evolution: A View from the 21st Century.
Robert Shapiro (1986). Origins; A Skeptic's Guide to the Creation of Life on Earth. New York, N.Y.: Summit
Lee Spetner (1998). Not by Chance!: Shattering the Modern Theory of Evolution. Judaica Press.
David Stove (2006). Darwinian Fairytales: Selfish Genes, Errors of Heredity and Other Fables of Evolution.
Gordon Rattray Taylor (1984). The Great Evolution Mystery. Publisher Abacus.
Duane Thurman (1978). How To Think About Evolution. Downers Grove, Illinois: The InterVarsity Press.
Hubert Yockey (2011). Information Theory, Evolution, and The Origin of Life. Cambridge University Press; Reissue edition.
Neutral non-fiction anthologies
Robert Pennock ed. Intelligent Design Creationism and its Critics: Philosophical, Theological, and Scientific Perspectives, MIT Press (2002).
Intelligent Design Creationism's "Wedge Strategy"
The Wedge at Work: How Intelligent Design Creationism is Wedging Its Way into the Cultural and Academic Mainstream, by Barbara Forrest
Johnson's Critique of Evolutionary Naturalism
Evolution as Dogma: The Establishment of Naturalism, by Phillip E. Johnson
Naturalism, Evidence and Creationism: The Case of Phillip Johnson, by Robert T. Pennock
Response to Pennock by Phillip E. Johnson
Reply: Johnson's Reason in the Balance, by Robert T. Pennock
A Theological Conflict?: Evolution vs. the Bible
When Faith and Reason Clash: Evolution and the Bible, by Alvin Plantinga
When Faith and Reason Cooperate, by Howard J. Van Till
Plantinga's Defense of Special Creation, by Ernan McMullin
Evolution, Neutrality, and Antecedent Probability: A Reply to McMullin and Van Till, by Alvin Plantinga
Intelligent Design's Scientific Claims
Molecular Machines: Experimental Support for the Design Inference, by Michael J. Behe
Born Again Creationism, by Philip Kitcher
Biology Remystified: The Scientific Claims of the New Creationists, by Matthew J. Brauer & Daniel R. Brumbaugh
Plantinga's Critique of Naturalism & Evolution
Methodological Naturalism?, by Alvin Plantinga
Methodological Naturalism Under Attack, by Michael Ruse
Plantinga's Case Against Naturalistic Epistemology, by Evan Fales
Plantinga's Probability Arguments Against Evolutionary Naturalism, by Branden Fitelson & Elliott Sober
Intelligent Design Creationism vs. Theistic Evolutionism
Creator or "Blind Watchmaker?", by Phillip E. Johnson
Phillip Johnson on Trial: A Critique of His Critique of Darwin, by Nancey Murphy
Welcoming the 'Disguised Friend' – Darwinism and Divinity, by Arthur Peacocke
The Creation: Intelligently Designed or Optimally Equipped?, by Howard J. Van Till
Is Theism Compatible with Evolution?, by Roy Clouser
Intelligent Design and Information
Is Genetic Information Irreducible?, by Phillip E. Johnson
Reply to Phillip Johnson, by Richard Dawkins
Reply to Johnson, by George C. Williams
Intelligent Design as a Theory of Information, by William A. Dembski
Information and the Argument from Design, by Peter Godfrey-Smith
How Not to Detect Design, by Branden Fitelson, Christopher Stephens & Elliott Sober
The 'Information Challenge', by Richard Dawkins
Intelligent Design Theorists Turn the Tables
Who's Got the Magic?, by William A. Dembski
The Wizards of ID: Reply to Dembski, by Robert T. Pennock
The Panda's Peculiar Thumb, by Stephen Jay Gould
The Role of Theology in Current Evolutionary Reasoning, by Paul A. Nelson
Appealing to Ignorance Behind the Cloak of Ambiguity, by Kelly C. Smith
Nonoverlapping Magisteria, by Stephen Jay Gould
Creationism and Education
Why Creationism Should Not Be Taught in the Public Schools, by Robert T. Pennock
Creation and Evolution: A Modest Proposal, by Alvin Plantinga
Reply to Plantinga's 'Modest Proposal', by Robert T. Pennock
Michael Ruse and William Dembski (eds) Debating Design. New York: Cambridge University Press, (pp. 130 – 148, 2004)
Introduction: general introduction, by William Dembski and Michael Ruse
The argument from design: a brief history Michael Ruse
Who's afraid of ID?: a survey of the intelligent design movement Angus Menuge
Part I. Darwinism:
1 Design without a designer: Darwin's greatest discovery Francisco J. Ayala
2 The flagellum unspun: the collapse of 'irreducible complexity' Kenneth Miller
3 The design argument Elliott Sober
4 DNA by design? Stephen Meyer and the return of the god hypothesis Robert T. Pennock
Part II. Complex Self-Organization:
5. Prolegomenon to a general biology Stuart Kauffman
6. Darwinism, design and complex systems dynamics David Depew and Bruce Weber
7. Emergent complexity, teleology, and the arrow of time Paul Davies
8. The emergence of biological value James Barham
Part III. Theistic Evolution:
9. Darwin, design and divine providence John Haught
10. The inbuilt potentiality of creation John Polkinghorne
11. Theistic evolution Keith Ward
12. Intelligent design: some geological, historical and theological questions Michael Roberts
13. The argument from laws of nature reassessed Richard Swinburne
Part IV. Intelligent Design:
14. The logical underpinnings of intelligent design William Dembski
15. Information, entropy and the origin of life Walter Bradley
16. Irreducible complexity: obstacle to Darwinian evolution Michael Behe
17. The Cambrian information explosion: evidence for intelligent design, Stephen Meyer.
Neutral non-fiction papers and articles
Ankerberg, John. Increasing doubts about evolution (lists scientists who are not ID advocates who oppose Darwinism).
Bird, Wendell R. The Yale Law Journal, Vol. 87, No. 3, Jan, 1978
Bird, Wendell R. "Freedom From Establishment and Unneutrality in Public School Instruction and Religious School Regulation." Harvard Journal of Law and Public Policy, Vol. 2, June 1979, pp. 125–205
Burian, Richard. Challenges to the Evolutionary Synthesis Virginia Polytechnic Institute and State University
Edward Goldsmith. Evolution, neo-Darwinism and the paradigm of science The Ecologist Vol. 20 No. 2, March–April 1990
Richard Milton. Neo-Darwinism: time to reconsider Times Higher Education Supplement, 1995
Staune, Jean. Darwinism Design and Purpose: A European Perspective Institutional Affiliation: General Secretary, Université Interdiciplinare de Paris
Critical non-fiction
Critical non-fiction books
Richard Dawkins. The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe Without Design, W. W. Norton & Company (1996).
Richard Dawkins. The God Delusion, Houghton Mifflin (October 18, 2006).
Richard Dawkins. The Greatest Show on Earth: The Evidence for Evolution (2009)
Barbara Forrest and Paul R. Gross. Creationism's Trojan Horse: The Wedge of Intelligent Design, Oxford University Press (2004).
Ernst Mayr. One Long Argument: Charles Darwin and the Genesis of Modern Evolutionary Thought, Harvard University Press (1993).
Kenneth R. Miller. Finding Darwin's God, HarperCollins (1999).
National Academy of Sciences. Science and Creationism, National Academies Press (1999).
Chris Mooney. The Republican War on Science, Basic Books (2005).
Robert Pennock. Tower of Babel: The Evidence against the New Creationism, MIT Press (1999).
Mark Perakh. Unintelligent Design, Prometheus (Dec 2003).
Andrew J. Petto (Editor), Laurie R. Godfrey (Editor). Scientists Confront Intelligent Design and Creationism, W. W. Norton (2007).
Massimo Pigliucci. Denying Evolution: Creationism, Scientism, and the Nature of Science, Sinauer Associates, Incorporated (2002).
Michael Shermer, Why Darwin Matters: The Case Against Intelligent Design (2007).
Niall Shanks. God, the Devil, and Darwin: A Critique of Intelligent Design Theory, Oxford University Press (2004).
Robyn Williams. Unintelligent Design, Why God isn't as smart as she thinks she is, Allen & Unwin (2006).
Matt Young, Taner Edis eds. Why Intelligent Design Fails: A Scientific Critique of the New Creationism, Rutgers University Press (2004).
Joan Roughgarden Evolution and Christian Faith: Reflections of an Evolutionary Biologist Island Press (August 1, 2006)
Francis Collins The Language of God Free Press (July 17, 2007)
Critical non-fiction anthologies
Foreword by Bill Nye the Science Guy
1. The Once and Future Intelligent Design, by Eugenie C. Scott
2. Analyzing "Critical Analysis", by Nicholas J. Matzke and Paul R. Gross
3. Theology, Religion, and Intelligent Design, by Martinez Hewlett and Ted Peters
4. From the Classroom to the Courtroom: Intelligent Design and the Constitution, by Jay D. Wexler
5 When the Classroom Door Closes, Who Teaches Evolution?, by Brian Alters
6 Defending the Teaching of Evolution, by Glenn Branch and the staff of the National Center for Science Education
Afterword by Rev. Barry W. Lynn
Critical non-fiction papers and articles
Frederick C. Crews. Saving Us from Darwin, The New York Review of Books, Vol 48, No 15 (4 October 2001).
Frederick C. Crews. Saving Us from Darwin, Part II, The New York Review of Books, Vol 48, No 16 (18 October 2001).
Robert Pennock. DNA by Design?: Stephen Meyer and the Return of the God Hypothesis. In Ruse, Michael and William Dembski (eds) Debating Design. New York: Cambridge University Press, (pp. 130 – 148, 2004)
Robert Pennock. Critique of Philip Johnson. In Parsons, Keith (ed.) The Science Wars: Debating Scientific Knowledge and Technology. Prometheus Press. (pp. 277–306, 2003)
Robert Pennock. Creationism and Intelligent Design. Annual Review of Genomics and Human Genetics. (Vol. 4: 143-163, Sept. 2003)
Robert Pennock. Should Creationism be Taught in the Public Schools? Science & Education (Vol.11 no.2, March 2002, pp. 111–133)
Robert Pennock. Whose God? What Science? Reply to Michael Behe. In Reports of the National Center for Science Education. (Vol. 21 No. 3-4 pp. 16–19, May-Aug. 2001)
Robert Pennock. Lions and Tigers and APES, Oh My!: Creationism vs. Evolution in Kansas. Science Teaching & The Search for Origin: Kansas Teach-In. AAAS Dialogue on Science and Religion. (2000)
Robert Pennock. The Wizards of ID: Reply to Dembski. Metanexus (No. 089, Oct. 11, 2000)
Robert Pennock. Of Design and Deception: Kansas, Conflict & Creationism. Science & Spirit (Nov./Dec. 1999)
Robert Pennock. Untitled—Reply to Phillip Johnson re: Tower of Babel. Books and Culture (Sept./Oct. 1999)
Robert Pennock. The Prospects for a Theistic Science. Perspectives on Science and Christian Faith (Vol. 50, No. 3, pp. 205–209, Sept. 1998)
Robert Pennock. Creationism's War on Science. Environmental Review (Vol. 5, No. 2, pp. 7 – 16, February 1998)
Robert Pennock. Naturalism, Creationism and the Meaning of Life: The Case of Phillip Johnson Revisited. Creation/Evolution (Vol. 16, No. 2, pp. 10–30, Winter 1996)
Robert Pennock. Reply to Johnson - Johnson's Reason in the Balance. Biology & Philosophy (Vol. 11, No. 4, pp. 565–568, 1996)
Robert Pennock. Naturalism, Evidence and Creationism: The Case of Phillip Johnson. Biology and Philosophy (Vol. 11, No. 4, pp. 543–559, 1996)
Critical non-fiction films
Flock of Dodos A biting, tongue-in-cheek documentary that pans both sides of the debate.
Judgement Day: Intelligent Design on Trial, a Public Broadcasting Service NOVA television documentary about the Kitzmiller v. Dover federal trial
A War on Science is a 49-minute BBC Horizon television documentary about intelligent design, including the 2005 Kitzmiller v. Dover court battle. It prominently features Oxford University professor, biologist Richard Dawkins. It was first broadcast on 26 January 2006. Intelligent design supporters and promoters Phillip Johnson, Michael Behe, Stephen C. Meyer and William A. Dembski also appear in the documentary.
Fiction
The concept of life having been designed or manipulated is a staple of science fiction. Aspects of Intelligent Design are explored in:
Calculating God by Robert J. Sawyer. 2000. A science fiction novel in which an intelligent designer is manipulating reality solely for the benefit of human-kind and three other sentient species residing in our galaxy.
2001: A Space Odyssey; in the movie, human evolution is accelerated and guided by an unspecified force, assumed by many to be aliens. In the novel based on the film, human evolution is accelerated and guided by aliens.
In the Doctor Who episode Image of the Fendahl, evolution on Earth was guided by an alien, to allow it to feed on humans.
The novel Frankenstein, or the Modern Prometheus prominently features an intelligently (but imperfectly) designed creature, whose faults stem from the inherent flaws of its creator, Victor Frankenstein.
The Hitchhiker's Guide to the Galaxy reveals that the Earth was built by the Magratheans who were commissioned by mice and designed by the computer Deep Thought to find the ultimate question of life, the universe, and everything.
In the movie Mission to Mars, highly evolved aliens accelerated and guided human evolution.
Rama Revealed by Arthur C. Clarke and Gentry Lee; in this final novel of a series, it is revealed that the (mostly offstage) Ramans create universes and test their inhabitants in an attempt to maximise the quantity of consciousness within them.
According to the Star Trek: The Next Generation episode "The Chase", Star Trek aliens all look similar because life was seeded on different planets by highly evolved aliens.
In the Well World series, by Jack L. Chalker, aliens known as Markovians evolved and grew to the point where their computers, by means of a universal mathematics, were able to create/produce/do anything they wanted. Bored with being virtual gods, they decided their race had been flawed in some manner. So they designed a new universe and Markovian volunteers chose to become all of the new races therein, including humans, to see if perhaps another race could attain the perfection they believed existed but which they themselves failed to achieve.
References the mathematical calculation of the improbability of life.
"Surface Tension" is a 1952 science fiction short story by James Blish. A human colonization ship crash-lands and they genetically engineer their descendants into something that can survive. They create a race of microscopic aquatic humanoids and metal plates of knowledge for them. Blish coined the term pantropy to refer to this concept, as opposed to terraforming.
"Microcosmic God" is a 1941 science fiction novelette by Theodore Sturgeon. A scientist develops a synthetic life form, which he calls "Neoterics", that live at a greatly accelerated rate and produce many generations over a short time so he can use their inventions. The scientist asserts his authority by killing half the population whenever they disobey his "divine" orders.
Prometheus is a 2012 science fiction film that follows the journey of the Earth spaceship Prometheus as it follows an ancient star map which takes them to humanity's creators or "Engineers".
See also
List of creation myths
List of god video games
Artificial life: Notable simulators
Life simulation game
Shaggy God story
References
Intelligent design
Lists of controversial books
Religious bibliographies
Pseudoscience-related lists | List of works on intelligent design | [
"Engineering"
] | 6,020 | [
"Intelligent design",
"Design"
] |
3,143,054 | https://en.wikipedia.org/wiki/Poroelasticity | Poroelasticity is a field in materials science and mechanics that studies the interaction between fluid flow, pressure and bulk solid deformation within a linear porous medium; it is an extension of elasticity and of porous medium flow (the diffusion equation). The deformation of the medium influences the flow of the fluid and vice versa. The theory was proposed by Maurice Anthony Biot (1935, 1941) as a theoretical extension of soil consolidation models developed to calculate the settlement of structures placed on fluid-saturated porous soils.
The theory of poroelasticity has been widely applied in geomechanics, hydrology, biomechanics, tissue mechanics, cell mechanics, and micromechanics.
An intuitive sense of the response of a saturated elastic porous medium to mechanical loading can be developed by thinking about, or experimenting with, a fluid-saturated sponge. If a fluid-saturated sponge is compressed, fluid will flow from the sponge. If the sponge is in a fluid reservoir and compressive pressure is subsequently removed, the sponge will reimbibe the fluid and expand. The volume of the sponge will also increase if its exterior openings are sealed and the pore fluid pressure is increased. The basic ideas underlying the theory of poroelastic materials are that the pore fluid pressure contributes to the total stress in the porous matrix medium and that the pore fluid pressure alone can strain the porous matrix medium. There is fluid movement in a porous medium due to differences in pore fluid pressure created by different pore volume strains associated with mechanical loading of the porous medium. In unconventional reservoir and source rocks for natural gas like coal and shales, there can be strain due to sorption of gases like methane and carbon dioxide on the porous rock surfaces. Depending on the gas pressure the induced sorption-based strain can be poroelastic or poroinelastic in nature.
Types of Poroelasticity
The theories of poroelasticity can be divided into two categories, static (or quasi-static) and dynamic, just as mechanics can be divided into statics and dynamics. Static poroelasticity considers processes in which the fluid movement and the deformation of the solid skeleton occur simultaneously and affect each other. Static poroelasticity is predominant in the literature; as a result, the term is often used interchangeably with poroelasticity itself. The static theory is a generalization of the one-dimensional consolidation theory in soil mechanics and was developed from Biot's work of 1941. Dynamic poroelasticity was proposed for understanding wave propagation in both the liquid and solid phases of saturated porous materials. The inertial effects and associated kinetic energy, which are not considered in static poroelasticity, are included. This is especially necessary when the speed of movement of the phases in the porous material is considerable, e.g., when vibration or stress waves are present. Dynamic poroelasticity developed from Biot's work on the propagation of elastic waves in fluid-saturated media.
Literature
References for the theory of poroelasticity:
See also
Advanced Simulation Library
References
Elasticity (physics)
Porous media | Poroelasticity | [
"Physics",
"Materials_science",
"Engineering"
] | 656 | [
"Physical phenomena",
"Elasticity (physics)",
"Deformation (mechanics)",
"Porous media",
"Materials science",
"Physical properties"
] |
3,143,089 | https://en.wikipedia.org/wiki/Repeated%20game | In game theory, a repeated game (or iterated game) is an extensive form game that consists of a number of repetitions of some base game (called a stage game). The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of their current action on the future actions of other players; this impact is sometimes called their reputation. Single stage game or single shot game are names for non-repeated games.
For an example of a repeated game, consider two gas stations that are adjacent to one another. They compete by publicly posting prices, and have the same constant marginal cost c (the wholesale price of gasoline). Assume that when they both charge p = 10, their joint profit is maximized, resulting in a high profit for each. Despite the fact that this is the best outcome for them, they are motivated to deviate: by modestly lowering the price, either can steal all of its competitor's customers, nearly doubling its revenue. P = c, where their profit is zero, is the only price at which no such profitable deviation exists. In other words, in the pricing competition game, the only Nash equilibrium is the inefficient one (for the gas stations) in which both charge p = c. This is more the rule than the exception: in a stage game, the Nash equilibrium is the only result that an agent can consistently obtain in an interaction, and it is usually inefficient, because the agents are concerned only with their own interests and do not care about the benefits or costs that their actions bring to competitors. On the other hand, gas stations make a profit even when there is another gas station adjacent. One of the most crucial reasons is that their interaction is not one-off. This situation is captured by repeated games, in which the two gas stations compete on prices (the stage game) over an indefinite time horizon t = 0, 1, 2, ....
Finitely vs infinitely repeated games
Repeated games may be broadly divided into two classes, finite and infinite, depending on how long the game is being played for.
Finite games are those in which both players know that the game is being played a specific (and finite) number of rounds, and that the game ends for certain after that many rounds have been played. In general, finite games can be solved by backwards induction.
Infinite games are those in which the game is being played an infinite number of times. A game with an infinite number of rounds is also equivalent (in terms of strategies to play) to a game in which the players in the game do not know for how many rounds the game is being played. Infinite games (or games that are being repeated an unknown number of times) cannot be solved by backwards induction as there is no "last round" to start the backwards induction from.
Even if the game being played in each round is identical, repeating that game a finite or an infinite number of times can, in general, lead to very different outcomes (equilibria), as well as very different optimal strategies.
Infinitely repeated games
The most widely studied repeated games are games that are repeated an infinite number of times. In iterated prisoner's dilemma games, it is found that the preferred strategy is not to play a Nash strategy of the stage game, but to cooperate and play a socially optimum strategy. An essential part of strategies in infinitely repeated game is punishing players who deviate from this cooperative strategy. The punishment may be playing a strategy which leads to reduced payoff to both players for the rest of the game (called a trigger strategy). A player may normally choose to act selfishly to increase their own reward rather than play the socially optimum strategy. However, if it is known that the other player is following a trigger strategy, then the player expects to receive reduced payoffs in the future if they deviate at this stage. An effective trigger strategy ensures that cooperating has more utility to the player than acting selfishly now and facing the other player's punishment in the future.
There are many results in theorems which deal with how to achieve and maintain a socially optimal equilibrium in repeated games. These results are collectively called "Folk Theorems". An important feature of a repeated game is the way in which a player's preferences may be modelled.
There are many different ways in which a preference relation may be modelled in an infinitely repeated game, but two key ones are:
Limit of means - If the game results in a path of outcomes x_t and player i has the basic-game utility function u_i, player i's utility is:
U_i = liminf_{T→∞} (1/T) Σ_{t=1}^{T} u_i(x_t)
Discounting - If player i's valuation of the game diminishes with time depending on a discount factor δ (with 0 < δ < 1), then player i's utility is:
U_i = Σ_{t=1}^{∞} δ^(t-1) u_i(x_t)
This discounted sum is sometimes normalized by the factor (1 - δ) so that repeated-game utilities are measured in the same units as stage-game payoffs.
For sufficiently patient players (e.g. those with high enough values of δ), it can be proved that every strategy profile giving each player a payoff greater than their minmax payoff can be supported as a Nash equilibrium - a very large set of strategies.
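As a small worked illustration of these ideas, the following Python sketch (the payoff values and function name are hypothetical, not taken from the article; here T denotes the one-shot temptation payoff, not the number of periods) checks whether a grim-trigger strategy can sustain cooperation in an infinitely repeated prisoner's dilemma with discounting, by comparing cooperating forever against deviating once and then receiving the mutual-defection payoff in every later round.

def cooperation_is_sustainable(T, R, P, delta):
    # Value of cooperating forever: the reward payoff R in every round, discounted.
    cooperate_value = R / (1 - delta)
    # Value of deviating once (temptation payoff T), then being punished
    # with the mutual-defection payoff P in every subsequent round.
    deviate_value = T + delta * P / (1 - delta)
    return cooperate_value >= deviate_value

# With the standard payoffs T=5, R=3, P=1 the threshold is
# delta >= (T - R) / (T - P) = 0.5, so sufficiently patient players cooperate.
print(cooperation_is_sustainable(T=5, R=3, P=1, delta=0.6))   # True
print(cooperation_is_sustainable(T=5, R=3, P=1, delta=0.4))   # False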
Finitely repeated games
Repeated games allow for the study of the interaction between immediate gains and long-term incentives. A finitely repeated game is a game in which the same one-shot stage game is played repeatedly over a number of discrete time periods, or rounds. Each time period is indexed by 0 < t ≤ T where T is the total number of periods. A player's final payoff is the sum of their payoffs from each round.
For those repeated games with a fixed and known number of time periods, if the stage game has a unique Nash equilibrium, then the repeated game has a unique subgame perfect Nash equilibrium strategy profile of playing the stage game equilibrium in each round. This can be deduced through backward induction. The unique stage game Nash equilibrium must be played in the last round regardless of what happened in earlier rounds. Knowing this, players have no incentive to deviate from the unique stage game Nash equilibrium in the second-to-last round, and so on; this logic is applied back to the first round of the game. This ‘unravelling’ of a game from its endpoint can be observed in the Chainstore paradox.
If the stage game has more than one Nash equilibrium, the repeated game may have multiple subgame perfect Nash equilibria. While a Nash equilibrium must be played in the last round, the presence of multiple equilibria introduces the possibility of reward and punishment strategies that can be used to support deviation from stage game Nash equilibria in earlier rounds.
Finitely repeated games with an unknown or indeterminate number of time periods, on the other hand, are regarded as if they were an infinitely repeated game. It is not possible to apply backward induction to these games.
Examples of cooperation in finitely repeated games
Example 1: Two-Stage Repeated Game with Multiple Nash Equilibria
Example 1 shows a two-stage repeated game with multiple pure strategy Nash equilibria. Because these equilibria differ markedly in terms of payoffs for Player 2, Player 1 can propose a strategy over multiple stages of the game that incorporates the possibility for punishment or reward for Player 2. For example, Player 1 might propose that they play (A, X) in the first round. If Player 2 complies in round one, Player 1 will reward them by playing the equilibrium (A, Z) in round two, yielding a total payoff over two rounds of (7, 9).
If Player 2 deviates to (A, Z) in round one instead of playing the agreed-upon (A, X), Player 1 can threaten to punish them by playing the (B, Y) equilibrium in round two. This latter situation yields payoff (5, 7), leaving both players worse off.
In this way, the threat of punishment in a future round incentivizes a collaborative, non-equilibrium strategy in the first round. Because the final round of any finitely repeated game, by its very nature, removes the threat of future punishment, the optimal strategy in the last round will always be one of the game's equilibria. It is the payoff differential between equilibria in the game represented in Example 1 that makes a punishment/reward strategy viable (for more on the influence of punishment and reward on game strategy, see 'Public Goods Game with Punishment and Reward').
Example 2: Two-Stage Repeated Game with Unique Nash Equilibrium
Example 2 shows a two-stage repeated game with a unique Nash equilibrium. Because there is only one equilibrium here, there is no mechanism for either player to threaten punishment or promise reward in the game's second round. As such, the only strategy that can be supported as a subgame perfect Nash equilibrium is that of playing the game's unique Nash equilibrium strategy (D, N) every round. In this case, that means playing (D, N) each stage for two stages (n = 2), but it would be true for any finite number of stages n. To interpret: this result means that the very presence of a known, finite time horizon sabotages cooperation in every single round of the game. Cooperation in iterated games is only possible when the number of rounds is infinite or unknown.
Solving repeated games
In general, repeated games are easily solved using strategies provided by folk theorems. Complex repeated games can be solved using various techniques, most of which rely heavily on linear algebra and on the concepts expressed in fictitious play.
It can be deduced that the set of equilibrium payoffs in infinitely repeated games can be characterized. By alternating between two payoff profiles, say a and f, the average payoff profile may be a weighted average between a and f.
Incomplete information
Repeated games can include some incomplete information. Repeated games with incomplete information were pioneered by Aumann and Maschler. While it is easier to treat a situation where one player is informed and the other not, and when information received by each player is independent, it is possible to deal with zero-sum games with incomplete information on both sides and signals that are not independent.
References
External links
Game-Theoretic Solution to Poker Using Fictitious Play
Game Theory notes on Repeated games
on Repeated Games and the Chainstore Paradox
Game theory game classes | Repeated game | [
"Mathematics"
] | 2,097 | [
"Game theory game classes",
"Game theory"
] |
3,143,150 | https://en.wikipedia.org/wiki/History%20of%20scientific%20method | The history of scientific method considers changes in the methodology of scientific inquiry, as distinct from the history of science itself. The development of rules for scientific reasoning has not been straightforward; scientific method has been the subject of intense and recurring debate throughout the history of science, and eminent natural philosophers and scientists have argued for the primacy of one or another approach to establishing scientific knowledge.
Rationalist explanations of nature, including atomism, appeared both in ancient Greece in the thought of Leucippus and Democritus, and in ancient India, in the Nyaya, Vaisheshika and Buddhist schools, while Charvaka materialism rejected inference as a source of knowledge in favour of an empiricism that was always subject to doubt. Aristotle pioneered scientific method in ancient Greece alongside his empirical biology and his work on logic, rejecting a purely deductive framework in favour of generalisations made from observations of nature.
Some of the most important debates in the history of scientific method center on: rationalism, especially as advocated by René Descartes; inductivism, which rose to particular prominence with Isaac Newton and his followers; and hypothetico-deductivism, which came to the fore in the early 19th century. In the late 19th and early 20th centuries, a debate over realism vs. antirealism was central to discussions of scientific method as powerful scientific theories extended beyond the realm of the observable, while in the mid-20th century some prominent philosophers argued against any universal rules of science at all.
Early methodology
Ancient Egypt and Babylonia
There are few explicit discussions of scientific methodologies in surviving records from early cultures. The most that can be inferred about approaches to undertaking science in this period stems from the surviving descriptions of early investigations into nature. An Egyptian medical textbook, the Edwin Smith papyrus (c. 1600 BCE), applies the components of examination, diagnosis, treatment, and prognosis to the treatment of disease, displaying strong parallels to the basic empirical method of science; according to G. E. R. Lloyd, it played a significant role in the development of this methodology. The Ebers papyrus (c. 1550 BCE) also contains evidence of traditional empiricism.
By the middle of the 1st millennium BCE in Mesopotamia, Babylonian astronomy had evolved into the earliest example of a scientific astronomy, as it was "the first and highly successful attempt at giving a refined mathematical description of astronomical phenomena." According to the historian Asger Aaboe, "all subsequent varieties of scientific astronomy, in the Hellenistic world, in India, in the Islamic world, and in the West – if not indeed all subsequent endeavour in the exact sciences – depend upon Babylonian astronomy in decisive and fundamental ways."
The early Babylonians and Egyptians developed much technical knowledge, crafts, and mathematics used in practical tasks of divination, as well as a knowledge of medicine, and made lists of various kinds. While the Babylonians in particular had engaged in the earliest forms of an empirical mathematical science, with their early attempts at mathematically describing natural phenomena, they generally lacked underlying rational theories of nature.
Classical antiquity
Greek-speaking ancient philosophers engaged in the earliest known forms of what is today recognized as a rational theoretical science, with the move towards a more rational understanding of nature which began at least since the Archaic Period (650 – 480 BCE) with the Presocratic school. Thales was the first known philosopher to use natural explanations, proclaiming that every event had a natural cause, even though he is known for saying "all things are full of gods" and for sacrificing an ox when he discovered his theorem. Leucippus went on to develop the theory of atomism – the idea that everything is composed entirely of various imperishable, indivisible elements called atoms. This was elaborated in great detail by Democritus.
Similar atomist ideas emerged independently among ancient Indian philosophers of the Nyaya, Vaisesika and Buddhist schools. In particular, like the Nyaya, Vaisesika, and Buddhist schools, the Cārvāka epistemology was materialist, and skeptical enough to admit perception as the basis for unconditionally true knowledge, while cautioning that if one could only infer a truth, then one must also harbor a doubt about that truth; an inferred truth could not be unconditional.
Towards the middle of the 5th century BCE, some of the components of a scientific tradition were already heavily established, even before Plato, who was an important contributor to this emerging tradition, thanks to the development of deductive reasoning, as propounded by his student, Aristotle. In Protagoras (318d-f), Plato mentioned the teaching of arithmetic, astronomy and geometry in schools. The philosophical ideas of this time were mostly freed from the constraints of everyday phenomena and common sense. This denial of reality as we experience it reached an extreme in Parmenides who argued that the world is one and that change and subdivision do not exist.
As early as the 4th century BCE, armillary spheres had been invented in China, and in the 3rd century BCE in Greece for use in astronomy; their use was promulgated thereafter, for example by Ibn al-Haytham and by Tycho Brahe.
In the 3rd and 4th centuries BCE, the Greek physicians Herophilos (335–280 BCE) and Erasistratus of Chios employed experiments to further their medical research; Erasistratus at one time repeatedly weighed a caged bird, and noted its weight loss between feeding times.
Aristotle
Aristotle's inductive-deductive method used inductions from observations to infer general principles, deductions from those principles to check against further observations, and more cycles of induction and deduction to continue the advance of knowledge.
The Organon (Greek for "instrument, tool, organ") is the standard collection of Aristotle's six works on logic. The name Organon was given by Aristotle's followers, the Peripatetics.
The order of the works is not chronological (the chronology is now difficult to determine) but was deliberately chosen by Theophrastus to constitute a well-structured system. Indeed, parts of them seem to be a scheme of a lecture on logic. The arrangement of the works was made by Andronicus of Rhodes around 40 BCE.
The Organon comprises the following six works:
The Categories introduces Aristotle's 10-fold classification of that which exists: substance, quantity, quality, relation, place, time, situation, condition, action, and passion.
On Interpretation introduces Aristotle's conception of proposition and judgment, and the various relations between affirmative, negative, universal, and particular propositions. Aristotle discusses the square of opposition or square of Apuleius in Chapter 7 and its appendix Chapter 8. Chapter 9 deals with the problem of future contingents.
The Prior Analytics introduces Aristotle's syllogistic method (see term logic), argues for its correctness, and discusses inductive inference.
The Posterior Analytics deals with demonstration, definition, and scientific knowledge.
The Topics treats of issues in constructing valid arguments, and of inference that is probable, rather than certain. It is in this treatise that Aristotle mentions the predicables, later discussed by Porphyry and by the scholastic logicians.
The Sophistical Refutations gives a treatment of logical fallacies, and provides a key link to Aristotle's work on rhetoric.
Aristotle's Metaphysics has some points of overlap with the works making up the Organon but is not traditionally considered part of it; additionally there are works on logic attributed, with varying degrees of plausibility, to Aristotle that were not known to the Peripatetics.
Aristotle has been called the founder of modern science by De Lacy O'Leary. His demonstration method is found in Posterior Analytics. He provided another of the ingredients of scientific tradition: empiricism. For Aristotle, universal truths can be known from particular things via induction. To some extent then, Aristotle reconciles abstract thought with observation, although it would be a mistake to imply that Aristotelian science is empirical in form. Indeed, Aristotle did not accept that knowledge acquired by induction could rightly be counted as scientific knowledge. Nevertheless, induction was for him a necessary preliminary to the main business of scientific enquiry, providing the primary premises required for scientific demonstrations.
Induction, however, plays only a preparatory role in Aristotle's treatment of scientific enquiry. To make it clear why this is so, consider this statement in the Posterior Analytics:
We suppose ourselves to possess unqualified scientific knowledge of a thing, as opposed to knowing it in the accidental way in which the sophist knows, when we think that we know the cause on which the fact depends, as the cause of that fact and of no other, and, further, that the fact could not be other than it is.
It was therefore the work of the philosopher to demonstrate universal truths and to discover their causes. While induction was sufficient for discovering universals by generalization, it did not succeed in identifying causes. For this task Aristotle used the tool of deductive reasoning in the form of syllogisms. Using the syllogism, scientists could infer new universal truths from those already established.
Aristotle developed a complete normative approach to scientific inquiry involving the syllogism, which he discusses at length in his Posterior Analytics. A difficulty with this scheme lay in showing that derived truths have solid primary premises. Aristotle would not allow that demonstrations could be circular (supporting the conclusion by the premises, and the premises by the conclusion). Nor would he allow an infinite number of middle terms between the primary premises and the conclusion. This leads to the question of how the primary premises are found or developed, and as mentioned above, Aristotle allowed that induction would be required for this task.
Towards the end of the Posterior Analytics, Aristotle discusses knowledge imparted by induction.
Thus it is clear that we must get to know the primary premises by induction; for the method by which even sense-perception implants the universal is inductive. [...] it follows that there will be no scientific knowledge of the primary premises, and since except intuition nothing can be truer than scientific knowledge, it will be intuition that apprehends the primary premises. [...] If, therefore, it is the only other kind of true thinking except scientific knowing, intuition will be the originative source of scientific knowledge.
The account leaves room for doubt regarding the nature and extent of Aristotle's empiricism. In particular, it seems that Aristotle considers sense-perception only as a vehicle for knowledge through intuition. He restricted his investigations in natural history to their natural settings, such as at the Pyrrha lagoon, now called Kalloni, at Lesbos. Aristotle and Theophrastus together formulated the new science of biology, inductively, case by case, for two years before Aristotle was called to tutor Alexander. Aristotle performed no modern-style experiments in the form in which they appear in today's physics and chemistry laboratories.
Induction is not afforded the status of scientific reasoning, and so it is left to intuition to provide a solid foundation for Aristotle's science. With that said, Aristotle brought us somewhat closer to an empirical science than his predecessors did.
Epicurus
In his work Κανών ('canon', a straight edge or ruler, thus any type of measure or standard, referred to as 'canonic'), Epicurus laid out his first rule for inquiry in physics: 'that the first concepts be seen, and that they not require demonstration'.
His second rule for inquiry was that prior to an investigation, we are to have self-evident concepts, so that we might infer [ἔχωμεν οἷς σημειωσόμεθα] both what is expected [τò προσμένον], and also what is non-apparent [τò ἄδηλον].
Epicurus applies his method of inference (the use of observations as signs, Asmis' summary, p. 333: the method of using the phenomena as signs (σημεῖα) of what is unobserved) immediately to the atomic theory of Democritus. In Aristotle's Prior Analytics, Aristotle himself employs the use of signs. But Epicurus presented his 'canonic' as rival to Aristotle's logic. See: Lucretius (c. 99 BCE – c. 55 BCE) De rerum natura (On the nature of things) a didactic poem explaining Epicurus' philosophy and physics.
Emergence of inductive experimental method
During the Middle Ages issues of what is now termed science began to be addressed. There was greater emphasis on combining theory with practice in the Islamic world than there had been in Classical times, and it was common for those studying the sciences to be artisans as well, something that had been "considered an aberration in the ancient world." Islamic experts in the sciences were often expert instrument makers who enhanced their powers of observation and calculation with them. Starting in the early ninth century, early Muslim scientists such as al-Kindi (801–873) and the authors writing under the name of Jābir ibn Hayyān (writings dated to c. 850–950) began to put a greater emphasis on the use of experiment as a source of knowledge. Several scientific methods thus emerged from the medieval Muslim world by the early 11th century, all of which emphasized experimentation as well as quantification to varying degrees.
Ibn al-Haytham
The Arab physicist Ibn al-Haytham (Alhazen) used experimentation to obtain the results in his Book of Optics (1021). He combined observations, experiments and rational arguments to support his intromission theory of vision, in which rays of light are emitted from objects rather than from the eyes. He used similar arguments to show that the ancient emission theory of vision supported by Ptolemy and Euclid (in which the eyes emit the rays of light used for seeing), and the ancient intromission theory supported by Aristotle (where objects emit physical particles to the eyes), were both wrong.
Experimental evidence supported most of the propositions in his Book of Optics and grounded his theories of vision, light and colour, as well as his research in catoptrics and dioptrics. His legacy was elaborated through the 'reforming' of his Optics by Kamal al-Din al-Farisi (d. c. 1320) in the latter's Kitab Tanqih al-Manazir (The Revision of [Ibn al-Haytham's] Optics).
Alhazen viewed his scientific studies as a search for truth: "Truth is sought for its own sake. And those who are engaged upon the quest for anything for its own sake are not interested in other things. Finding the truth is difficult, and the road to it is rough. ..."
Alhazen's work included the conjecture that "Light travels through transparent bodies in straight lines only", which he was able to corroborate only after years of effort. He stated, "[This] is clearly observed in the lights which enter into dark rooms through holes. ... the entering light will be clearly observable in the dust which fills the air." He also demonstrated the conjecture by placing a straight stick or a taut thread next to the light beam.
Ibn al-Haytham also employed scientific skepticism and emphasized the role of empiricism. He also explained the role of induction in syllogism, and criticized Aristotle for his lack of contribution to the method of induction, which Ibn al-Haytham regarded as superior to syllogism, and he considered induction to be the basic requirement for true scientific research.
Something like Occam's razor is also present in the Book of Optics. For example, after demonstrating that light is generated by luminous objects and emitted or reflected into the eyes, he states that therefore "the extramission of [visual] rays is superfluous and useless." He may also have been the first scientist to adopt a form of positivism in his approach. He wrote that "we do not go beyond experience, and we cannot be content to use pure concepts in investigating natural phenomena", and that the understanding of these cannot be acquired without mathematics. After assuming that light is a material substance, he does not further discuss its nature but confines his investigations to the diffusion and propagation of light. The only properties of light he takes into account are those treatable by geometry and verifiable by experiment.
Al-Biruni
The Persian scientist Abū Rayhān al-Bīrūnī introduced early scientific methods for several different fields of inquiry during the 1020s and 1030s. For example, in his treatise on mineralogy, Kitab al-Jawahir (Book of Precious Stones), al-Biruni is "the most exact of experimental scientists", while in the introduction to his study of India, he declares that "to execute our project, it has not been possible to follow the geometric method" and thus became one of the pioneers of comparative sociology in insisting on field experience and information. He also developed an early experimental method for mechanics.
Al-Biruni's methods resembled the modern scientific method, particularly in his emphasis on repeated experimentation. Biruni was concerned with how to conceptualize and prevent both systematic errors and observational biases, such as "errors caused by the use of small instruments and errors made by human observers." He argued that if instruments produce errors because of their imperfections or idiosyncratic qualities, then multiple observations must be taken, analyzed qualitatively, and on this basis, arrive at a "common-sense single value for the constant sought", whether an arithmetic mean or a "reliable estimate." In his scientific method, "universals came out of practical, experimental work" and "theories are formulated after discoveries", as with inductivism.
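The core idea of combining repeated, error-prone readings into a single "common-sense" value can be sketched in a few lines. The readings below are invented, and the use of the mean and the median is only an illustration of the principle being described, not a reconstruction of al-Biruni's actual procedure.

```python
# Hedged sketch: invented repeated readings of one quantity from an imperfect instrument.
readings = [23.1, 23.4, 22.9, 23.2, 26.0, 23.0, 23.3]  # 26.0 is an outlying observation

mean = sum(readings) / len(readings)

ordered = sorted(readings)
mid = len(ordered) // 2
median = ordered[mid] if len(ordered) % 2 else (ordered[mid - 1] + ordered[mid]) / 2

print(f"arithmetic mean: {mean:.2f}")   # pulled upward by the outlier
print(f"median:          {median:.2f}") # a more robust single value for the constant sought
```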
Ibn Sina (Avicenna)
In the On Demonstration section of The Book of Healing (1027), the Persian philosopher and scientist Avicenna (Ibn Sina) discussed philosophy of science and described an early scientific method of inquiry. He discussed Aristotle's Posterior Analytics and significantly diverged from it on several points. Avicenna discussed the issue of a proper procedure for scientific inquiry and the question of "How does one acquire the first principles of a science?" He asked how a scientist might find "the initial axioms or hypotheses of a deductive science without inferring them from some more basic premises?" He explained that the ideal situation is when one grasps that a "relation holds between the terms, which would allow for absolute, universal certainty." Avicenna added two further methods for finding a first principle: the ancient Aristotelian method of induction (istiqra), and the more recent method of examination and experimentation (tajriba). Avicenna criticized Aristotelian induction, arguing that "it does not lead to the absolute, universal, and certain premises that it purports to provide." In its place, he advocated "a method of experimentation as a means for scientific inquiry."
Earlier, in The Canon of Medicine (1025), Avicenna was also the first to describe what is essentially methods of agreement, difference and concomitant variation which are critical to inductive logic and the scientific method. However, unlike his contemporary al-Biruni's scientific method, in which "universals came out of practical, experimental work" and "theories are formulated after discoveries", Avicenna developed a scientific procedure in which "general and universal questions came first and led to experimental work." Due to the differences between their methods, al-Biruni referred to himself as a mathematical scientist and to Avicenna as a philosopher, during a debate between the two scholars.
Robert Grosseteste
During the European Renaissance of the 12th century, ideas on scientific methodology, including Aristotle's empiricism and the experimental approaches of Alhazen and Avicenna, were introduced to medieval Europe via Latin translations of Arabic and Greek texts and commentaries. Robert Grosseteste's commentary on the Posterior Analytics places Grosseteste among the first scholastic thinkers in Europe to understand Aristotle's vision of the dual nature of scientific reasoning: concluding from particular observations to a universal law, and then back again, from universal laws to predictions of particulars. Grosseteste called this "resolution and composition". Further, Grosseteste said that both paths should be tested through experimentation in order to verify the principles.
Roger Bacon
Roger Bacon was inspired by the writings of Grosseteste. In his account of a method, Bacon described a repeating cycle of observation, hypothesis, experimentation, and the need for independent verification. He recorded the way he had conducted his experiments in precise detail, perhaps with the idea that others could reproduce and independently test his results.
About 1256 he joined the Franciscan Order and became subject to the Franciscan statute forbidding Friars from publishing books or pamphlets without specific approval. After the accession of Pope Clement IV in 1265, the Pope granted Bacon a special commission to write to him on scientific matters. In eighteen months he completed three large treatises, the Opus Majus, Opus Minus, and Opus Tertium which he sent to the Pope. William Whewell has called Opus Majus at once the Encyclopaedia and Organon of the 13th century.
Part I (pp. 1–22) treats of the four causes of error: authority, custom, the opinion of the unskilled many, and the concealment of real ignorance by a pretense of knowledge.
Part VI (pp. 445–477) treats of experimental science, domina omnium scientiarum. There are two methods of knowledge: the one by argument, the other by experience. Mere argument is never sufficient; it may decide a question, but gives no satisfaction or certainty to the mind, which can only be convinced by immediate inspection or intuition, which is what experience gives.
Experimental science, which in the Opus Tertium (p. 46) is distinguished from the speculative sciences and the operative arts, is said to have three great prerogatives over all sciences:
It verifies their conclusions by direct experiment;
It discovers truths which they could never reach;
It investigates the secrets of nature, and opens to us a knowledge of past and future.
Roger Bacon illustrated his method by an investigation into the nature and cause of the rainbow, as a specimen of inductive research.
Renaissance humanism and medicine
Aristotle's ideas became a framework for critical debate beginning with absorption of the Aristotelian texts into the university curriculum in the first half of the 13th century. Contributing to this was the success of medieval theologians in reconciling Aristotelian philosophy with Christian theology. Within the sciences, medieval philosophers were not afraid of disagreeing with Aristotle on many specific issues, although their disagreements were stated within the language of Aristotelian philosophy. All medieval natural philosophers were Aristotelians, but "Aristotelianism" had become a somewhat broad and flexible concept. With the end of Middle Ages, the Renaissance rejection of medieval traditions coupled with an extreme reverence for classical sources led to a recovery of other ancient philosophical traditions, especially the teachings of Plato. By the 17th century, those who clung dogmatically to Aristotle's teachings were faced with several competing approaches to nature.
The discovery of the Americas at the close of the 15th century showed the scholars of Europe that new discoveries could be found outside of the authoritative works of Aristotle, Pliny, Galen, and other ancient writers.
Galen of Pergamon (129 – c. 200 AD) had studied with four schools in antiquity — Platonists, Aristotelians, Stoics, and Epicureans, and at Alexandria, the center of medicine at the time. In his Methodus Medendi, Galen had synthesized the empirical and dogmatic schools of medicine into his own method, which was preserved by Arab scholars. After the translations from Arabic were critically scrutinized, a backlash occurred and demand arose in Europe for translations of Galen's medical text from the original Greek. Galen's method became very popular in Europe. Thomas Linacre, the teacher of Erasmus, thereupon translated Methodus Medendi from Greek into Latin for a larger audience in 1519. Limbrick 1988 notes that 630 editions, translations, and commentaries on Galen were produced in Europe in the 16th century, eventually eclipsing Arabic medicine there, and peaking in 1560, at the time of the scientific revolution.
By the late 15th century, the physician-scholar Niccolò Leoniceno was finding errors in Pliny's Natural History. As a physician, Leoniceno was concerned about these botanical errors propagating to the materia medica on which medicines were based. To counter this, a botanical garden was established at Orto botanico di Padova, University of Padua (in use for teaching by 1546), in order that medical students might have empirical access to the plants of a pharmacopia. Other Renaissance teaching gardens were established, notably by the physician Leonhart Fuchs, one of the founders of botany.
The first printed work devoted to the concept of method is Jodocus Willichius' De methodo omnium artium et disciplinarum informanda opusculum (1550), "An Informative Essay on the Method of All Arts and Disciplines".
Skepticism as a basis for understanding
In 1562 Outlines of Pyrrhonism by the ancient Pyrrhonist philosopher Sextus Empiricus (c. 160-210 AD) was published in a Latin translation (from Greek), quickly placing the arguments of classical skepticism in the European mainstream. These arguments establish seemingly insurmountable challenges for the possibility of certain knowledge.
The skeptic philosopher and physician Francisco Sanches, was led by his medical training at Rome, 1571–73, to search for a true method of knowing (modus sciendi), as nothing clear can be known by the methods of Aristotle and his followers — for example, 1) syllogism fails upon circular reasoning; 2) Aristotle's modal logic was not stated clearly enough for use in medieval times, and remains a research problem to this day. Following the physician Galen's method of medicine, Sanches lists the methods of judgement and experience, which are faulty in the wrong hands, and we are left with the bleak statement That Nothing is Known (1581, in Latin Quod Nihil Scitur). This challenge was taken up by René Descartes in the next generation (1637), but at the least, Sanches warns us that we ought to refrain from the methods, summaries, and commentaries on Aristotle, if we seek scientific knowledge. In this, he is echoed by Francis Bacon who was influenced by another prominent exponent of skepticism, Montaigne; Sanches cites the humanist Juan Luis Vives who sought a better educational system, as well as a statement of human rights as a pathway for improvement of the lot of the poor.
"Sanches develops his scepticism by means of an intellectual critique of Aristotelianism, rather than by an appeal to the history of human stupidity and the variety and contrariety of previous theories." —, as cited by
Descartes' famous "Cogito" argument is an attempt to overcome skepticism and reestablish a foundation for certainty but other thinkers responded by revising what the search for knowledge, particularly physical knowledge, might be.
Tycho Brahe
See History of astronomy § Renaissance and Early Modern Europe, Kepler's laws of planetary motion, and History of optics § Renaissance and Early Modern
The first modern science, in which practitioners were prepared to revise or reject long-held beliefs in the light of new evidence, was astronomy, and Tycho Brahe was the first modern astronomer. His instruments, such as his great sextant, made an explicit reduction of geometrical diagrams to practice: real objects with actual lengths and angles.
In 1572, Tycho noticed a completely new star that was brighter than any star or planet. Astonished by the existence of a star that ought not to have been there and gaining the patronage of King Frederick II of Denmark, Tycho built the Uraniborg observatory at enormous cost. Over a period of fifteen years (1576–91), Tycho and upwards of thirty assistants charted the positions of stars, planets, and other celestial bodies at Uraniborg with unprecedented accuracy. In 1600, Tycho hired Johannes Kepler to assist him in analyzing and publishing his observations. Kepler later used Tycho's observations of the motion of Mars to deduce the laws of planetary motion, which were later explained in terms of Newton's law of universal gravitation.
Besides Tycho's specific role in advancing astronomical knowledge, his single-minded pursuit of ever-more-accurate measurement was enormously influential in creating a modern scientific culture in which theory and evidence were understood to be inseparably linked.
By 1723, standard units of measure had spread to terrestrial measurements of mass and length.
Francis Bacon's eliminative induction
Francis Bacon (1561–1626) entered Trinity College, Cambridge in April 1573, where he applied himself diligently to the several sciences as then taught, and came to the conclusion that the methods employed and the results attained were alike erroneous; he learned to despise the current Aristotelian philosophy. He believed philosophy must be taught its true purpose, and for this purpose a new method must be devised. With this conception in his mind, Bacon left the university.
Bacon attempted to describe a rational procedure for establishing causation between phenomena based on induction. Bacon's induction was, however, radically different than that employed by the Aristotelians. As Bacon put it,
[A]nother form of induction must be devised than has hitherto been employed, and it must be used for proving and discovering not first principles (as they are called) only, but also the lesser axioms, and the middle, and indeed all. For the induction which proceeds by simple enumeration is childish. —Novum Organum section CV
Bacon's method relied on experimental histories to eliminate alternative theories. Bacon explains how his method is applied in his Novum Organum (published 1620). In an example he gives on the examination of the nature of heat, Bacon creates two tables, the first of which he names "Table of Essence and Presence", enumerating the many various circumstances under which we find heat. In the other table, labelled "Table of Deviation, or of Absence in Proximity", he lists circumstances which bear resemblance to those of the first table except for the absence of heat. From an analysis of what he calls the natures (light emitting, heavy, colored, etc.) of the items in these lists we are brought to conclusions about the form nature, or cause, of heat. Those natures which are always present in the first table, but never in the second are deemed to be the cause of heat.
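The logic of the two tables can be read as a simple elimination procedure: a candidate cause ("form") must be present in every instance of the Table of Essence and Presence and absent from every instance of the Table of Deviation. The toy instances and "natures" below are invented for illustration, not Bacon's own entries.

```python
# Hedged sketch of eliminative induction over toy "tables" of presence and absence.
# Each instance is described by the set of natures observed in it.

presence_table = [              # instances in which heat is present
    {"light-emitting", "in-motion", "dense"},
    {"in-motion", "dense", "colored"},
    {"light-emitting", "in-motion", "rare"},
]
absence_table = [               # similar instances in which heat is absent
    {"light-emitting", "at-rest", "dense"},
    {"colored", "at-rest", "rare"},
]

# Keep natures found in every positive instance...
candidates = set.intersection(*presence_table)
# ...and eliminate any nature that also appears in some negative instance.
for instance in absence_table:
    candidates -= instance

print(candidates)   # -> {'in-motion'}: the surviving candidate "form" of heat
```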
The role experimentation played in this process was twofold. The most laborious job of the scientist would be to gather the facts, or 'histories', required to create the tables of presence and absence. Such histories would document a mixture of common knowledge and experimental results. Secondly, experiments of light, or, as we might say, crucial experiments would be needed to resolve any remaining ambiguities over causes.
Bacon showed an uncompromising commitment to experimentation. Despite this, he did not make any great scientific discoveries during his lifetime. This may be because he was not the most able experimenter. It may also be because hypothesising plays only a small role in Bacon's method compared to modern science. Hypotheses, in Bacon's method, are supposed to emerge during the process of investigation, with the help of mathematics and logic. Bacon gave a substantial but secondary role to mathematics "which ought only to give definiteness to natural philosophy, not to generate or give it birth" (Novum Organum XCVI). An over-emphasis on axiomatic reasoning had rendered previous non-empirical philosophy impotent, in Bacon's view, which was expressed in his Novum Organum:
XIX. There are and can be only two ways of searching into and discovering truth. The one flies from the senses and particulars to the most general axioms, and from these principles, the truth of which it takes for settled and immoveable, proceeds to judgment and to the discovery of middle axioms. And this way is now in fashion. The other derives axioms from the senses and particulars, rising by a gradual and unbroken ascent, so that it arrives at the most general axioms last of all. This is the true way, but as yet untried.
In Bacon's utopian novel, The New Atlantis, the ultimate role is given for inductive reasoning:
Lastly, we have three that raise the former discoveries by experiments into greater observations, axioms, and aphorisms. These we call interpreters of nature.
Descartes
In 1619, René Descartes began writing his first major treatise on proper scientific and philosophical thinking, the unfinished Rules for the Direction of the Mind. His aim was to create a complete science that he hoped would overthrow the Aristotelian system and establish himself as the sole architect of a new system of guiding principles for scientific research.
This work was continued and clarified in his 1637 treatise, Discourse on Method, and in his 1641 Meditations. Descartes describes the intriguing and disciplined thought experiments he used to arrive at the idea we instantly associate with him: I think therefore I am.
From this foundational thought, Descartes finds proof of the existence of a God who, possessing all possible perfections, will not deceive him provided he resolves "[...] never to accept anything for true which I did not clearly know to be such; that is to say, carefully to avoid precipitancy and prejudice, and to comprise nothing more in my judgment than what was presented to my mind so clearly and distinctly as to exclude all ground of methodic doubt."
This rule allowed Descartes to progress beyond his own thoughts and judge that there exist extended bodies outside of his own thoughts. Descartes published seven sets of objections to the Meditations from various sources along with his replies to them. Despite his apparent departure from the Aristotelian system, a number of his critics felt that Descartes had done little more than replace the primary premises of Aristotle with those of his own. Descartes says as much himself in a letter written in 1647 to the translator of Principles of Philosophy,
a perfect knowledge [...] must necessarily be deduced from first causes [...] we must try to deduce from these principles knowledge of the things which depend on them, that there be nothing in the whole chain of deductions deriving from them that is not perfectly manifest.
And again, some years earlier, speaking of Galileo's physics in a letter to his friend and critic Mersenne from 1638,
without having considered the first causes of nature, [Galileo] has merely looked for the explanations of a few particular effects, and he has thereby built without foundations.
Whereas Aristotle purported to arrive at his first principles by induction, Descartes believed he could obtain them using reason alone. In this sense he was a Platonist, as he believed in innate ideas, as opposed to Aristotle's blank slate (tabula rasa), and stated that the seeds of science are inside us.
Unlike Bacon, Descartes successfully applied his own ideas in practice. He made significant contributions to science, in particular in aberration-corrected optics. His work in analytic geometry was a necessary precedent to differential calculus and instrumental in bringing mathematical analysis to bear on scientific matters.
Galileo Galilei
During the period of religious conservatism brought about by the Reformation and Counter-Reformation, Galileo Galilei unveiled his new science of motion. Neither the contents of Galileo's science, nor the methods of study he selected were in keeping with Aristotelian teachings. Whereas Aristotle thought that a science should be demonstrated from first principles, Galileo had used experiments as a research tool. Galileo nevertheless presented his treatise in the form of mathematical demonstrations without reference to experimental results. It is important to understand that this in itself was a bold and innovative step in terms of scientific method. The usefulness of mathematics in obtaining scientific results was far from obvious. This is because mathematics did not lend itself to the primary pursuit of Aristotelian science: the discovery of causes.
Whether it is because Galileo was realistic about the acceptability of presenting experimental results as evidence or because he himself had doubts about the epistemological status of experimental findings is not known. Nevertheless, it is not in his Latin treatise on motion that we find reference to experiments, but in his supplementary dialogues written in the Italian vernacular. In these dialogues experimental results are given, although Galileo may have found them inadequate for persuading his audience. Thought experiments showing logical contradictions in Aristotelian thinking, presented in the skilled rhetoric of Galileo's dialogue were further enticements for the reader.
As an example, in the dramatic dialogue titled Third Day from his Two New Sciences, Galileo has the characters of the dialogue discuss an experiment involving two free-falling objects of differing weight. An outline of the Aristotelian view is offered by the character Simplicio. For this experiment he expects that "a body which is ten times as heavy as another will move ten times as rapidly as the other". The character Salviati, representing Galileo's persona in the dialogue, replies by voicing his doubt that Aristotle ever attempted the experiment. Salviati then asks the two other characters of the dialogue to consider a thought experiment whereby two stones of differing weights are tied together before being released. Following Aristotle, Salviati reasons that "the more rapid one will be partly retarded by the slower, and the slower will be somewhat hastened by the swifter". But this leads to a contradiction: since the two stones together make a heavier object than either stone apart, the combined object should in fact fall with a speed greater than that of either stone alone. From this contradiction, Salviati concludes that Aristotle must, in fact, be wrong and the objects will fall at the same speed regardless of their weight, a conclusion that is borne out by experiment.
In his 1991 survey of developments in the modern accumulation of knowledge such as this, Charles Van Doren considers that the Copernican Revolution really is the Galilean Cartesian (René Descartes) or simply the Galilean revolution on account of the courage and depth of change brought about by the work of Galileo.
Isaac Newton
Both Bacon and Descartes wanted to provide a firm foundation for scientific thought that avoided the deceptions of the mind and senses. Bacon envisaged that foundation as essentially empirical, whereas Descartes provides a metaphysical foundation for knowledge. If there were any doubts about the direction in which scientific method would develop, they were set to rest by the success of Isaac Newton. Implicitly rejecting Descartes' emphasis on rationalism in favor of Bacon's empirical approach, he outlines his four "rules of reasoning" in the Principia,
We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.
Therefore to the same natural effects we must, as far as possible, assign the same causes.
The qualities of bodies, which admit neither intension nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.
In experimental philosophy we are to look upon propositions collected by general induction from phænomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, until such time as other phænomena occur, by which they may either be made more accurate, or liable to exceptions.
But Newton also left an admonition about a theory of everything:
To explain all nature is too difficult a task for any one man or even for any one age. 'Tis much better to do a little with certainty, and leave the rest for others that come after you, than to explain all things.
Newton's work became a model that other sciences sought to emulate, and his inductive approach formed the basis for much of natural philosophy through the 18th and early 19th centuries. Some methods of reasoning were later systematized by Mill's Methods (or Mill's canon), which are five explicit statements of what can be discarded and what can be kept while building a hypothesis. George Boole and William Stanley Jevons also wrote on the principles of reasoning.
Integrating deductive and inductive method
Attempts to systematize a scientific method were confronted in the mid-18th century by the problem of induction, a positivist logic formulation which, in short, asserts that nothing can be known with certainty except what is actually observed. David Hume took empiricism to the skeptical extreme; among his positions was that there is no logical necessity that the future should resemble the past, so we are unable to justify inductive reasoning itself by appealing to its past success. Hume's arguments came after many centuries of speculation piled upon speculation without grounding in empirical observation and testing. Many of Hume's radically skeptical arguments were argued against, but not resolutely refuted, by Immanuel Kant's Critique of Pure Reason in the late 18th century. Hume's arguments continued to exert a strong influence on the educated classes for the better part of the 19th century, when debate came to focus on whether or not the inductive method was valid.
Hans Christian Ørsted, (Ørsted is the Danish spelling; Oersted in other languages) (1777–1851) was heavily influenced by Kant, in particular, Kant's Metaphysische Anfangsgründe der Naturwissenschaft (Metaphysical Foundations of Natural Science). The following sections on Ørsted encapsulate our current, common view of scientific method. His work appeared in Danish, most accessibly in public lectures, which he translated into German, French, English, and occasionally Latin. But some of his views go beyond Kant:
"In order to achieve completeness in our knowledge of nature, we must start from two extremes, from experience and from the intellect itself. ... The former method must conclude with natural laws, which it has abstracted from experience, while the latter must begin with principles, and gradually, as it develops more and more, it becomes ever more detailed. Of course, I speak here about the method as manifested in the process of the human intellect itself, not as found in textbooks, where the laws of nature which have been abstracted from the consequent experiences are placed first because they are required to explain the experiences. When the empiricist in his regression towards general laws of nature meets the metaphysician in his progression, science will reach its perfection."
Ørsted's "First Introduction to General Physics" (1811) exemplified the steps of observation, hypothesis, deduction and experiment. In 1805, based on his researches on electromagnetism Ørsted came to believe that electricity is propagated by undulatory action (i.e., fluctuation). By 1820, he felt confident enough in his beliefs that he resolved to demonstrate them in a public lecture, and in fact observed a small magnetic effect from a galvanic circuit (i.e., voltaic circuit), without rehearsal;
In 1831 John Herschel (1792–1871) published A Preliminary Discourse on the study of Natural Philosophy, setting out the principles of science. Measuring and comparing observations was to be used to find generalisations in "empirical laws", which described regularities in phenomena, then natural philosophers were to work towards the higher aim of finding a universal "law of nature" which explained the causes and effects producing such regularities. An explanatory hypothesis was to be found by evaluating true causes (Newton's "vera causae") derived from experience, for example evidence of past climate change could be due to changes in the shape of continents, or to changes in Earth's orbit. Possible causes could be inferred by analogy to known causes of similar phenomena. It was essential to evaluate the importance of a hypothesis; "our next step in the verification of an induction must, therefore, consist in extending its application to cases not originally contemplated; in studiously varying the circumstances under which our causes act, with a view to ascertain whether their effect is general; and in pushing the application of our laws to extreme cases."
William Whewell (1794–1866) regarded his History of the Inductive Sciences, from the Earliest to the Present Time (1837) to be an introduction to the Philosophy of the Inductive Sciences (1840) which analyzes the method exemplified in the formation of ideas. Whewell attempts to follow Bacon's plan for discovery of an effectual art of discovery. He named the hypothetico-deductive method (which Encyclopædia Britannica credits to Newton); Whewell also coined the term scientist. Whewell examines ideas and attempts to construct science by uniting ideas to facts. He analyses induction into three steps:
the selection of the fundamental idea, such as space, number, cause, or likeness
a more special modification of those ideas, such as a circle, a uniform force, etc.
the determination of magnitudes
Upon these follow special techniques applicable to quantity, such as the method of least squares, curves, and means, and special methods depending on resemblance, such as pattern matching, the method of gradation, and the method of natural classification (for example, cladistics).
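Of the quantitative techniques mentioned here, the method of least squares admits a compact statement: choose the parameters that minimize the sum of squared deviations between observations and the fitted model. The sketch below fits a straight line to invented data using the closed-form solution of the normal equations; it illustrates the method, not Whewell's own calculations.

```python
# Hedged sketch: ordinary least-squares fit of y ≈ m*x + b to invented observations.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form solution of the normal equations for a straight line.
m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - m * mean_x

print(f"slope = {m:.3f}, intercept = {b:.3f}")  # roughly y = 2x + 1 for these points
```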
But no art of discovery, such as Bacon anticipated, follows, for "invention, sagacity, genius" are needed at every step. Whewell's sophisticated concept of science had similarities to that shown by Herschel, and he considered that a good hypothesis should connect fields that had previously been thought unrelated, a process he called consilience. However, where Herschel held that the origin of new biological species would be found in a natural rather than a miraculous process, Whewell opposed this and considered that no natural cause had been shown for adaptation so an unknown divine cause was appropriate.
John Stuart Mill (1806–1873) was stimulated to publish A System of Logic (1843) upon reading Whewell's History of the Inductive Sciences. Mill may be regarded as the final exponent of the empirical school of philosophy begun by John Locke, whose fundamental characteristic is the duty incumbent upon all thinkers to investigate for themselves rather than to accept the authority of others. Knowledge must be based on experience.
In the mid-19th century Claude Bernard was also influential, especially in bringing the scientific method to medicine. In his discourse on scientific method, An Introduction to the Study of Experimental Medicine (1865), he described what makes a scientific theory good and what makes a scientist a true discoverer. Unlike many scientific writers of his time, Bernard wrote about his own experiments and thoughts, and used the first person.
William Stanley Jevons' The Principles of Science: a treatise on logic and scientific method (1873, 1877), Chapter XII "The Inductive or Inverse Method", in its Summary of the Theory of Inductive Inference, states: "Thus there are but three steps in the process of induction:
Framing some hypothesis as to the character of the general law.
Deducing some consequences of that law.
Observing whether the consequences agree with the particular facts under consideration."
Jevons then frames those steps in terms of probability, which he then applied to economic laws. Ernest Nagel notes that Jevons and Whewell were not the first writers to argue for the centrality of the hypothetico-deductive method in the logic of science.
Charles Sanders Peirce
In the late 19th century, Charles Sanders Peirce proposed a schema that would turn out to have considerable influence in the further development of scientific method generally. Peirce's work quickly accelerated the progress on several fronts. Firstly, speaking in broader context in "How to Make Our Ideas Clear" (1878), Peirce outlined an objectively verifiable method to test the truth of putative knowledge in a way that goes beyond mere foundational alternatives, focusing upon both Deduction and Induction. He thus placed induction and deduction in a complementary rather than competitive context (the latter of which had been the primary trend at least since David Hume a century before). Secondly, and of more direct importance to scientific method, Peirce put forth the basic schema for hypothesis-testing that continues to prevail today. Extracting the theory of inquiry from its raw materials in classical logic, he refined it in parallel with the early development of symbolic logic to address the then-current problems in scientific reasoning. Peirce examined and articulated the three fundamental modes of reasoning that play a role in scientific inquiry today, the processes that are currently known as abductive, deductive, and inductive inference. Thirdly, he played a major role in the progress of symbolic logic itself – indeed this was his primary specialty.
Charles S. Peirce was also a pioneer in statistics. Peirce held that science achieves statistical probabilities, not certainties, and that chance, a veering from law, is very real. He assigned probability to an argument's conclusion rather than to a proposition, event, etc., as such. Most of his statistical writings promote the frequency interpretation of probability (objective ratios of cases), and many of his writings express skepticism about (and criticize the use of) probability when such models are not based on objective randomization. Though Peirce was largely a frequentist, his possible world semantics introduced the "propensity" theory of probability. Peirce (sometimes with Jastrow) investigated the probability judgments of experimental subjects, pioneering decision analysis.
Peirce was one of the founders of statistics. He formulated modern statistics in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883). With a repeated measures design, he introduced blinded, controlled randomized experiments (before Fisher). He invented an optimal design for experiments on gravity, in which he "corrected the means". He used logistic regression, correlation, and smoothing, and improved the treatment of outliers. He introduced terms "confidence" and "likelihood" (before Neyman and Fisher). (See the historical books of Stephen Stigler.) Many of Peirce's ideas were later popularized and developed by Ronald A. Fisher, Jerzy Neyman, Frank P. Ramsey, Bruno de Finetti, and Karl Popper.
Modern perspectives
Karl Popper (1902–1994) is generally credited with providing major improvements in the understanding of the scientific method in the mid-to-late 20th century. In 1934 Popper published The Logic of Scientific Discovery, which repudiated the by then traditional observationalist-inductivist account of the scientific method. He advocated empirical falsifiability as the criterion for distinguishing scientific work from non-science. According to Popper, scientific theory should make predictions (preferably predictions not made by a competing theory) which can be tested and the theory rejected if these predictions are shown not to be correct. Following Peirce and others, he argued that science would best progress using deductive reasoning as its primary emphasis, known as critical rationalism. His astute formulations of logical procedure helped to rein in the excessive use of inductive speculation upon inductive speculation, and also helped to strengthen the conceptual foundations for today's peer review procedures.
Ludwik Fleck, a Polish epidemiologist contemporary with Karl Popper, influenced Kuhn and others with his Genesis and Development of a Scientific Fact (in German 1935, English 1979). Before Fleck, a scientific fact was thought to spring fully formed (in the view of Max Jammer, for example), whereas a gestation period is now recognized to be essential before acceptance of a phenomenon as fact.
Critics of Popper, chiefly Thomas Kuhn, Paul Feyerabend and Imre Lakatos, rejected the idea that there exists a single method that applies to all science and could account for its progress. In 1962 Kuhn published the influential book The Structure of Scientific Revolutions which suggested that scientists worked within a series of paradigms, and argued there was little evidence of scientists actually following a falsificationist methodology. Kuhn quoted Max Planck who had said in his autobiography, "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."
A well quoted source on the subject of the scientific method and statistical models, George E. P. Box (1919-2013) wrote "Since all models are wrong the scientist cannot obtain a correct one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist, so over-elaboration and over-parameterization is often the mark of mediocrity" and "Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad."
These debates clearly show that there is no universal agreement as to what constitutes the "scientific method". There remain, nonetheless, certain core principles that are the foundation of scientific inquiry today.
Mention of the topic
In Quod Nihil Scitur (1581), Francisco Sanches refers to another book title, De modo sciendi (on the method of knowing). This work appeared in Spanish as Método universal de las ciencias.
In 1833 Robert and William Chambers published their 'Chambers's information for the people'. Under the rubric 'Logic' we find a description of investigation that is familiar as scientific method,
Investigation, or the art of inquiring into the nature of causes and their operation, is a leading characteristic of reason [...] Investigation implies three things – Observation, Hypothesis, and Experiment [...] The first step in the process, it will be perceived, is to observe...
In 1885, the words "Scientific method" appear together with a description of the method in Francis Ellingwood Abbot's 'Scientific Theism',
Now all the established truths which are formulated in the multifarious propositions of science have been won by the use of Scientific Method. This method consists in essentially three distinct steps (1) observation and experiment, (2) hypothesis, (3) verification by fresh observation and experiment.
The Eleventh Edition of Encyclopædia Britannica did not include an article on scientific method; the Thirteenth Edition listed scientific management, but not method. By the Fifteenth Edition, a 1-inch article in the Micropædia of Britannica was part of the 1975 printing, while a fuller treatment (extending across multiple articles, and accessible mostly via the index volumes of Britannica) was available in later printings.
Current issues
In the past few centuries, some statistical methods have been developed, for reasoning in the face of uncertainty, as an outgrowth of methods for eliminating error. This was an echo of the program of Francis Bacon's Novum Organum of 1620. Bayesian inference acknowledges one's ability to alter one's beliefs in the face of evidence. This has been called belief revision, or defeasible reasoning: the models in play during the phases of scientific method can be reviewed, revisited and revised, in the light of further evidence. This arose from the work of Frank P. Ramsey (1903–1930), of John Maynard Keynes (1883–1946), and earlier, of William Stanley Jevons (1835–1882) in economics.
Science and pseudoscience
The question of how science operates and therefore how to distinguish genuine science from pseudoscience has importance well beyond scientific circles or the academic community. In the judicial system and in public policy controversies, for example, a study's deviation from accepted scientific practice is grounds for rejecting it as junk science or pseudoscience. However, the high public perception of science means that pseudoscience is widespread. An advertisement in which an actor wears a white coat and product ingredients are given Greek or Latin sounding names is intended to give the impression of scientific endorsement. Richard Feynman has likened pseudoscience to cargo cults in which many of the external forms are followed, but the underlying basis is missing: that is, fringe or alternative theories often present themselves with a pseudoscientific appearance to gain acceptance.
See also
Timeline of the history of the scientific method
Notes and references
Sources
. Third enlarged edition.
as cited by
as cited by
Critical edition of Sanches' Quod Nihil Scitur Latin:(1581, 1618, 1649, 1665), Portuguese:( 1948, 1955, 1957), Spanish:(1944, 1972), French:(1976, 1984), German:(2007)
English translation: On Discipline.
Part 1: De causis corruptarum artium,
Part 2: De tradendis disciplinis
Part 3: De artibus
Scientific Method | History of scientific method | [
"Technology"
] | 11,969 | [
"History of science",
"History of science and technology"
] |
3,143,195 | https://en.wikipedia.org/wiki/GIOVE | GIOVE, or Galileo In-Orbit Validation Element, is the name for two satellites built for the European Space Agency (ESA) to test technology in orbit for the Galileo positioning system.
The name was chosen as a tribute to Galileo Galilei, who discovered the first four natural satellites of Jupiter, and later discovered that they could be used as a universal clock to obtain the longitude of a point on the Earth's surface.
The GIOVE satellites are operated by the GIOVE Mission (GIOVE-M) segment in the frame of the risk mitigation for the In Orbit Validation (IOV) of the Galileo positioning system.
Purpose
These validation satellites were previously known as the Galileo System Testbed (GSTB) version 2 (GSTB-V2). In 2004 the Galileo System Test Bed Version 1 (GSTB-V1) project validated the on-ground algorithms for Orbit Determination and Time Synchronization (OD&TS). This project, led by ESA and European Satellite Navigation Industries, has provided industry with fundamental knowledge to develop the mission segment of the Galileo positioning system.
GIOVE satellites transmitted multifrequency ranging signals equivalent to the signals of future Galileo: L1BC, L1A, E6BC, E6A, E5a, E5b. The main purpose of the GIOVE mission was to test and validate the reception and performance of novel code modulations designed for Galileo including new signals based on the use of the BOC (Binary Offset Carrier) technique, in particular the high-performance E5AltBOC signal.
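As a rough illustration of the BOC idea only (normalized rates and a random stand-in code are assumed here, not the actual Galileo signal definition), a binary-offset-carrier baseband waveform can be sketched in Python as the product of the spreading code with a square-wave subcarrier:

import numpy as np

# Illustrative BOC(1,1)-style baseband waveform: PRN code chips multiplied
# by a sine-phased square-wave subcarrier. Rates are normalized, not the
# real Galileo chip and subcarrier rates.
def boc_waveform(chips, samples_per_chip=20, subcarrier_cycles_per_chip=1):
    t = np.arange(len(chips) * samples_per_chip) / samples_per_chip  # time, in chips
    code = np.repeat(chips, samples_per_chip)                        # NRZ code (+1/-1)
    subcarrier = np.sign(np.sin(2 * np.pi * subcarrier_cycles_per_chip * t))
    return code * subcarrier

rng = np.random.default_rng(0)
prn = rng.choice([-1, 1], size=64)      # stand-in for a real PRN sequence
signal = boc_waveform(prn)
spectrum = np.abs(np.fft.rfft(signal))  # shows the characteristic split main lobes

The split spectrum is the point of the technique: unlike a plain BPSK spreading code, the subcarrier moves the signal energy away from the band centre, which is what allows Galileo signals to share bands with GPS.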
Satellites
GIOVE-A
Previously known as GSTB-V2/A, this satellite was constructed by Surrey Satellite Technology Ltd (SSTL).
Its mission has the main goal of claiming the frequencies allocated to Galileo by the ITU. It has two independently developed Galileo signal generation chains and also tests the design of two on-board rubidium atomic clocks and the orbital characteristics of the intermediate circular orbit for future satellites.
GIOVE-A is the first spacecraft whose design is based upon SSTL's new Geostationary Minisatellite Platform (GMP) satellite bus, intended for geostationary orbit. GIOVE-A is also SSTL's first satellite outside low Earth orbit, operating in medium Earth orbit, and is SSTL's first satellite to use deployable Sun-tracking solar arrays. Previous SSTL satellites use body-mounted solar arrays, which generate less power per unit area as they do not face the Sun directly.
Launched on 28 December 2005
It was launched at 05:19 UTC on December 28, 2005, on a Soyuz-FG/Fregat from the Baikonur Cosmodrome in Kazakhstan.
First Galileo transmissions
It began communicating as planned at 09:01 UTC while circling the Earth at a height of 23,222 km. The satellite successfully transmitted its first navigation signals at 17:25 GMT on 12 January 2006. These signals were received at Chilbolton Observatory in Hampshire, UK and the ESA Station at Redu in Belgium. Teams from SSTL and ESA have measured the signal generated by GIOVE-A to ensure it meets the frequency-filing allocation and reservation requirements for the International Telecommunication Union (ITU), a process that was required to be complete by June 2006.
Technical details
The GIOVE-A signal in space is fully representative of the Galileo signal from the point of view of frequencies and modulations, chip rates, and data rates. However, GIOVE-A can only transmit at two frequency bands at a time (i.e., L1+E5 or L1+E6).
GIOVE-A codes are different from Galileo codes. The GIOVE-A navigation message is not representative from the structure and contents viewpoint (demonstration only purpose). The generation of pseudorange measurements and detailed analysis of the tracking noise and multipath performance of GIOVE-A ranging signals have been performed with the use of the GETR (Galileo Experimental Test Receiver) designed by Septentrio.
There has been some public controversy about the open source nature of some of the Pseudo-Random Noise (PRN) codes. In the early part of 2006, researchers at Cornell monitored the GIOVE-A signal and extracted the PRN codes. The methods used and the codes which were found were published in the June 2006 issue of GPS World. ESA has now made the codes public.
Retirement
GIOVE-A was retired (but not decommissioned) on 30 June 2012, after being raised in altitude to make way for an operational satellite. It remained under command by SSTL until 24 November 2021, when it was officially decommissioned.
GIOVE-B
GIOVE-B (previously called GSTB-V2/B), has a similar mission, but has greatly improved signal generation hardware.
It was originally built by satellite consortium European Satellite Navigation Industries, but following re-organization of the project in 2007, the satellite prime contractor responsibility was passed to Astrium.
GIOVE-B also has MEO environment characterization objectives, as well as signal-in-space and receiver experimentation objectives. GIOVE-B carries three atomic clocks: two rubidium standards and the first space-qualified passive hydrogen maser.
Launched on 27 April 2008
The launch was delayed due to various technical problems, and took place on 27 April 2008 at 04:16 Baikonur time (22:16 UTC Saturday) aboard a Soyuz-FG/Fregat rocket provided by Starsem. The Fregat stage was ignited three times to place the satellite into orbit. Giove-B reached its projected orbit after 02:00 UTC and successfully deployed its solar panels.
First Galileo navigation transmissions
GIOVE-B started transmitting navigation signals on May 7, 2008. The reception of the signals by GETR receivers and other means has been confirmed at a few ESA facilities.
Technical details
According to ESA, this is "a truly historic step for satellite navigation since GIOVE-B is now, for the first time, transmitting the GPS-Galileo common signal using a specific optimised waveform, MBOC (multiplexed binary offset carrier), in accordance with the agreement drawn up in July 2007 by the EU and the US for their respective systems, Galileo and the future GPS III".
"Now with GIOVE-B broadcasting its highly accurate signal in space we have a true representation of what Galileo will offer to provide the most advanced satellite positioning services, while ensuring compatibility and interoperability with GPS", said ESA Galileo Project Manager, Javier Benedicto.
After launch, early orbit operations and platform commissioning, GIOVE-B's navigation payload was switched on and signal transmission commenced on May 7 and the quality of these signals is now being checked. Several facilities are involved in this process, including the GIOVE-B Control Centre at Telespazio's facilities in Fucino, Italy, the Galileo Processing Centre at ESA's European Space Research and Technology Centre (ESTEC), in the Netherlands, the ESA ground station at Redu, Belgium, and the Rutherford Appleton Laboratory (RAL) Chilbolton Observatory in the United Kingdom.
Chilbolton's 25-metre antenna makes it possible to analyse the characteristics of GIOVE-B signals with great accuracy and verify that they conform to the Galileo system's design specification. Each time the satellite is visible from Redu and Chilbolton, the large antennas are activated and track the satellite. GIOVE-B is orbiting at an altitude of 23 173 kilometres, making a complete journey around the Earth in 14 hours and 3 minutes.
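The quoted altitude and period are consistent with Kepler's third law; a quick back-of-the-envelope check in Python (rounded constants and a circular orbit are assumed):

import math

mu = 398600.4418        # Earth's gravitational parameter, km^3/s^2
earth_radius = 6371.0   # mean Earth radius, km
altitude = 23173.0      # GIOVE-B altitude, km

a = earth_radius + altitude                  # semi-major axis of a circular orbit, km
period = 2 * math.pi * math.sqrt(a**3 / mu)  # orbital period, s
print(period / 3600)                         # about 14.0 hours, matching the stated 14 h 03 min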
The quality of the signals transmitted by GIOVE-B will have an important influence on the accuracy of the positioning information that will be provided by the user receivers on the ground. On board, GIOVE-B carries a passive hydrogen maser atomic clock, which is expected to deliver unprecedented stability performance.
The signal quality can be affected by the environment of the satellite in its orbit and by the propagation path of the signals travelling from space to ground. Additionally, the satellite signals must not create interference with services operating in adjacent frequency bands, and this is also being checked.
Galileo teams within ESA and industry have the means to observe and record the spectrum of the signals transmitted by GIOVE-B in real time. Several measurements are performed relating to transmitted signal power, centre frequency and bandwidth, as well as the format of the navigation signals generated on board. This allows the analysis of the satellite transmissions in the three frequency bands reserved for it.
The GIOVE-B mission also represents an opportunity for validating in-orbit critical satellite technologies, characterising the Medium Earth Orbit (MEO) radiation environment, and to test a key element of the future Galileo system - the user receivers.
Retirement
GIOVE B was retired (but not decommissioned) on 23 July 2012.
GIOVE-A2
With the delays of GIOVE-B, the European Space Agency again contracted with SSTL for a second satellite, to ensure that the Galileo programme continues without any interruptions that could lead to loss of frequency allocations. Construction of GIOVE-A2 was terminated due to the successful launch and in-orbit operation of GIOVE-B.
Mission segment
The GIOVE Mission segment, or GIOVE-M, is the name of a project dedicated to the exploitation and experimentation of the GIOVE satellites. The GIOVE Mission was intended to ensure risk mitigation of the In Orbit Validation (IOV) phase of the Galileo positioning system.
GIOVE Mission history
The GIOVE Mission Segment began in October 2005 with the purpose of providing experimental results based on real data to be used for risk mitigation throughout the overall Galileo In Orbit Validation (IOV) phase of the Galileo positioning system.
The GIOVE Mission segment infrastructure was based on evolution of the Galileo System Test Bed Version 1 (GSTB-V1) infrastructure conceived to process data from the GIOVE-A and GIOVE-B satellites. The GIOVE Mission segment was composed of a central processing facility called the Giove Processing Center (GPC) and a network of thirteen experimental Giove Sensor Stations (GESS).
The main objectives of the GIOVE Mission Segment experimentation were in the areas of:
On-board clock characterisation
Navigation message generation
Orbit modelling
References
External links
ESA GIOVE-B launch pages
GIOVE Mission Processing Centre website
eoPortal description of GIOVE
blog of GIOVE-A launch and press releases from Ballard Communications Management, used by SSTL.
Technical papers on GIOVE-A and B missions
GIOVE Mission Processing Centre - Website
eoPortal description of GIOVE
European Space Agency satellites
Galileo satellites
Satellites orbiting Earth
Aerospace engineering
Twin satellites | GIOVE | [
"Engineering"
] | 2,180 | [
"Aerospace engineering"
] |
3,143,285 | https://en.wikipedia.org/wiki/Chemosterilant | A chemosterilant is a chemical compound that causes reproductive sterility in an organism. Chemosterilants are particularly useful in controlling the population of species that are known to cause disease, such as insects, or species that are, in general, economically damaging. The sterility induced by chemosterilants can have temporary or permanent effects. Chemosterilants can be used to target one or both sexes, and it prevents the organism from advancing to be sexually functional. They may be used to control pest populations by sterilizing males. The need for chemosterilants is a direct consequence of the limitations of insecticides. Insecticides are most effective in regions in which there is high vector density in conjunction with endemic transmission, and this may not always be the case. Additionally, the insects themselves will develop a resistance to the insecticide either on the target protein level or through avoidance of the insecticide in what is called a behavioral resistance. If an insect that has been treated with a chemosterilant mates with a fertile insect, no offspring will be produced. The intention is to keep the percent of sterile insects within a population constant, such that with each generation, there will be fewer offspring.
Early research and concerns
Research on chemosterilants began in the 1960s–1970s, but the effort was abandoned due to concerns regarding toxicity. However, with great advancements made in genetics and analysis of vectors, the search for safer chemosterilants has resumed in the 21st century. Initially, there were many concerns with using chemosterilants on an operational scale due to difficulties in finding the ideal small molecule. The molecule used as a chemosterilant must satisfy a certain criteria. Firstly, the molecule must be available at a low cost. The molecule must result in permanent sterility upon exposure through topical application or immersion of larvae into water. Additionally, the survivability of the sterile males must not be affected, and the chemosterilant should not be toxic to humans or the environment. The two promising agents in the beginning were aziridines thiotepa and bisazir, but they were unable to satisfy the criteria of minimal toxicity to humans as well as the vector's predators. Pyriproxyfen was another compound of interest since it is not toxic to humans, but it would not be possible to induce sterility in larvae due to the fact that it exists as a larvicide. Exposure of larvae to pyriproxyfen will essentially kill the larvae.
Examples of chemosterilants
Use of chemosterilants for non-surgical castration (dogs and cats)
There are many regions in which there is a population of cats and dogs that freely roam on the streets. The most conventional approach to controlling reproductive rates in companion animals is through surgical means. However, surgical intervention poses ethical concerns. Through the formulation of a non-surgical castration technique, animals would not have to undergo anesthesia, and would not have to experience post-surgical bleeding or infection of the area that has been operated on. Some examples of chemosterilants include CaCl2 and zinc gluconate. These are specifically known as necrosis-inducing agents, which result in the degeneration of cells in the testes, resulting in infertility. These kinds of chemicals are generally injected into male reproductive organs, such as the testes, vas deferens, or epididymis. When injected, they induce azoospermia, which is the degeneration of the sperm cells normally found in the semen. If no sperm cells are present, reproduction can no longer occur. There is, however, one complication that results from the use of necrosis-inducing agents. Many animals generally exhibit an inflammatory response directly after the injection. To avoid the pain and discomfort associated with necrosis-inducing agents, another form of sterilization, known as apoptosis-inducing agents, has been studied. If cells are signaled to perform apoptosis rather than being eliminated by a foreign substance, this will result in no inflammation in the area. Experiments were tested using mice in vitro and ex vivo that have proved this. Using an apoptosis-inducing agent known as doxorubicin encapsulated in a nanoemulsion, and injecting it into mice, testicular cell death was observed. Inflammation was not observed in this case; however, more research still needs to be conducted with these materials, as the long-term impacts are unknown.
Effect of chemosterilants on the behavior of wandering male dogs in Puerto Natales, Chile
Chemosterilants can be useful to developing countries due to the fact that they have less resources and funds that can be allocated towards castration of their free-roaming animals. Additionally, the culture opposes the removal of testes. This study, performed in 2015, was unable to conclude the effects of chemical sterilization on dog aggression, as not enough is known about the aggression displayed by free-roaming dogs, and thus, researchers were unable to objectively make a decision on this front. Using GPS technology to track the movement of the free-roaming male dogs, it was found that chemical sterilization in comparison to surgical sterilization did not have a significant impact on the range of their roaming around the city. Much more detailed studies need to be performed in this area, since this study was the first of its kind and had relatively short sample sizes along with the examination of behavior not spanning a long enough time period.
Use of CaCl2 and zinc gluconate in cattle
The method of administration of CaCl2 and zinc gluconate is through a transvaginal injection of the chemical into the ovaries, and visualization is achieved through the use of an ultrasound. One group of cattle was only treated with CaCl2, one group was only treated with zinc gluconate, and one group was treated with both CaCl2 and zinc gluconate. Treatment with CaCl2 seems to be most promising, as the ovarian mass of the female cattle upon slaughter was less than cattle treated with zinc gluconate or the combination. The goal of treatment with CaCl2 is to cause ovarian atrophy with a minimal amount of pain.
Ornitrol in controlling the sparrow population
Another chemosterilant found to be effective is known as ornitrol. This chemosterilant was provided to sparrows by impregnating canary seeds, and this was used as a food source for a group of sparrows. There was a control group that was fed canary seeds without the ornitrol, and these birds laid almost twice as many eggs as the group that was given ornitrol. It was deemed an effective chemosterilant in the study; however, after the removal of the chemosterilant from the diet, the birds were able to lay viable eggs as soon as 1–2 weeks later.
Commonly used chemosterilants
Two types of chemosterilants are commonly used:
Antimetabolites resemble a substance that the cell or tissue needs that the organism's body mistakes for a true metabolite and tries to incorporate them in its normal building processes. The fit of the chemical is not exactly right and the metabolic process comes to a halt.
Alkylating agents are a group of chemicals that act on chromosomes. These chemicals are extremely reactive, capable of intense cell destruction, damage to chromosomes and production of mutations.
See also
Sterile insect technique
References
Pest control techniques
Chemical compounds | Chemosterilant | [
"Physics",
"Chemistry"
] | 1,566 | [
"Chemical compounds",
"Molecules",
"Matter"
] |
3,143,467 | https://en.wikipedia.org/wiki/Cricket%20%28roofing%29 | A cricket or saddle is a ridge structure designed to divert water on a roof around the high side of a large penetration, typically a skylight, equipment curb, or chimney. In some cases, a cricket can be used to transition from one roof area to another. On low-slope and flat roofs with parapet walls, crickets are commonly used to divert water to the drainage, against or perpendicular to the main roof slope.
The pitch of a cricket is sometimes the same as that of the rest of the roof, but not always. For steep-slope roofs, it is most common for the cricket pitch to be equal to or less than that of the main roof; for low-slope or flat roofs, however, it is more common for the cricket to have at least 50% greater slope than the roof, to minimize ponding. Smaller crickets (on steep-slope roofs only) are covered with metal flashing, and larger ones can be covered with the same material as the rest of the roof.
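As a simple worked example (numbers chosen for illustration, not a code requirement): on a flat roof sloped at 1/4 inch per foot toward its drains, a cricket built with 50% greater slope would be pitched at 1/4 × 1.5 = 3/8 inch per foot.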
References
Roofs | Cricket (roofing) | [
"Technology",
"Engineering"
] | 202 | [
"Structural system",
"Structural engineering",
"Roofs"
] |
3,143,591 | https://en.wikipedia.org/wiki/Euclid%27s%20theorem | Euclid's theorem is a fundamental statement in number theory that asserts that there are infinitely many prime numbers. It was first proven by Euclid in his work Elements. There are several proofs of the theorem.
Euclid's proof
Euclid offered a proof published in his work Elements (Book IX, Proposition 20), which is paraphrased here.
Consider any finite list of prime numbers p1, p2, ..., pn. It will be shown that there exists at least one additional prime number not included in this list. Let P be the product of all the prime numbers in the list: P = p1p2...pn. Let q = P + 1. Then q is either prime or not:
If q is prime, then there is at least one more prime that is not in the list, namely, q itself.
If q is not prime, then some prime factor p divides q. If this factor p were in our list, then it would also divide P (since P is the product of every number in the list). If p divides P and q, then p must also divide the difference of the two numbers, which is (P + 1) − P or just 1. Since no prime number divides 1, p cannot be in the list. This means that at least one more prime number exists that is not in the list.
This proves that for every finite list of prime numbers there is a prime number not in the list. In the original work, Euclid denoted the arbitrary finite set of prime numbers as A, B, Γ.
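For illustration (the numbers here are an example, not part of Euclid's text): starting from the list 2, 3, 5, 7 one forms P = 2 × 3 × 5 × 7 = 210 and q = 211, which is itself prime; starting from 2, 3, 5, 7, 11, 13 one forms P = 30030 and q = 30031 = 59 × 509, which is not prime but whose prime factors 59 and 509 lie outside the original list. Both cases produce a prime not in the list.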
Euclid is often erroneously reported to have proved this result by contradiction beginning with the assumption that the finite set initially considered contains all prime numbers, though it is actually a proof by cases, a direct proof method. The philosopher Torkel Franzén, in a book on logic, states, "Euclid's proof that there are infinitely many primes is not an indirect proof [...] The argument is sometimes formulated as an indirect proof by replacing it with the assumption 'Suppose q1, ..., qn are all the primes'. However, since this assumption isn't even used in the proof, the reformulation is pointless."
Variations
Several variations on Euclid's proof exist, including the following:
The factorial n! of a positive integer n is divisible by every integer from 2 to n, as it is the product of all of them. Hence, n! + 1 is not divisible by any of the integers from 2 to n, inclusive (it gives a remainder of 1 when divided by each). Hence n! + 1 is either prime or divisible by a prime larger than n. In either case, for every positive integer n, there is at least one prime bigger than n. The conclusion is that the number of primes is infinite.
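For instance, 3! + 1 = 7 is itself prime, while 5! + 1 = 121 = 11 × 11 is composite but has the prime factor 11 > 5; in both cases a prime larger than n is obtained.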
Euler's proof
Another proof, by the Swiss mathematician Leonhard Euler, relies on the fundamental theorem of arithmetic: that every integer has a unique prime factorization. What Euler wrote (not with this modern notation and, unlike modern standards, not restricting the arguments in sums and products to any finite sets of integers) is equivalent to the statement that we have
∏_{p ∈ P} 1/(1 − 1/p) = ∑_{n ∈ N} 1/n,
where P denotes the set of the first k prime numbers, and N is the set of the positive integers whose prime factors are all in P.
To show this, one expands each factor in the product as a geometric series, and distributes the product over the sum (this is a special case of the Euler product formula for the Riemann zeta function).
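Concretely, with the notation P = {p1, ..., pk} and N as above, the chain of equalities is
\[
\prod_{p \in P} \frac{1}{1 - 1/p}
\;=\; \prod_{p \in P} \sum_{i \ge 0} \frac{1}{p^{i}}
\;=\; \sum_{e_1,\dots,e_k \ge 0} \frac{1}{p_1^{e_1} \cdots p_k^{e_k}}
\;=\; \sum_{n \in N} \frac{1}{n},
\]
where the middle (penultimate) sum runs over all products of powers of the primes in P.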
In the penultimate sum, every product of primes appears exactly once, so the last equality is true by the fundamental theorem of arithmetic. In his first corollary to this result Euler denotes by a symbol similar to the "absolute infinity" and writes that the infinite sum in the statement equals the "value" log ∞, to which the infinite product is thus also equal (in modern terminology this is equivalent to saying that the partial sum up to x of the harmonic series diverges asymptotically like log x). Then in his second corollary, Euler notes that the product
converges to the finite value 2, and there are consequently more primes than squares. This proves Euclid's Theorem.
In the same paper (Theorem 19) Euler in fact used the above equality to prove a much stronger theorem that was unknown before him, namely that the series
∑_{p ∈ P} 1/p = 1/2 + 1/3 + 1/5 + 1/7 + 1/11 + ⋯
is divergent, where P denotes the set of all prime numbers (Euler writes that the infinite sum equals log log ∞, which in modern terminology is equivalent to saying that the partial sum up to x of this series behaves asymptotically like log log x).
Erdős's proof
Paul Erdős gave a proof that also relies on the fundamental theorem of arithmetic. Every positive integer n has a unique factorization into a square-free number r and a square number s^2. For example, 12 = 3 · 2^2.
Let N be a positive integer, and let k be the number of primes less than or equal to N. Call those primes p1, ..., pk. Any positive integer n which is less than or equal to N can then be written in the form
n = (p1^e1 p2^e2 ⋯ pk^ek) s^2,
where each ei is either 0 or 1. There are 2^k ways of forming the square-free part of n. And s^2 can be at most N, so s ≤ √N. Thus, at most 2^k √N numbers can be written in this form. In other words,
N ≤ 2^k √N.
Or, rearranging, k, the number of primes less than or equal to N, is greater than or equal to (1/2) log2 N. Since N was arbitrary, k can be as large as desired by choosing N appropriately.
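In symbols, the counting argument amounts to
\[
N \le 2^{k}\sqrt{N}
\;\Longrightarrow\;
2^{k} \ge \sqrt{N}
\;\Longrightarrow\;
k \ge \tfrac{1}{2}\log_2 N .
\]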
Furstenberg's proof
In the 1950s, Hillel Furstenberg introduced a proof by contradiction using point-set topology.
Define a topology on the integers Z, called the evenly spaced integer topology, by declaring a subset U ⊆ Z to be an open set if and only if it is either the empty set, ∅, or it is a union of arithmetic sequences S(a, b) (for a ≠ 0), where
S(a, b) = { a n + b : n ∈ Z } = a Z + b.
Then a contradiction follows from the property that a finite set of integers cannot be open and the property that the basis sets S(a, b) are both open and closed, since
Z \ { −1, +1 } = ⋃_p S(p, 0)
cannot be closed because its complement { −1, +1 } is finite, but is closed since it is a finite union of closed sets S(p, 0).
Recent proofs
Proof using the inclusion-exclusion principle
Juan Pablo Pinasco has written the following proof.
Let p1, ..., pN be the smallest N primes. Then by the inclusion–exclusion principle, the number of positive integers less than or equal to x that are divisible by one of those primes is
∑_i ⌊x/pi⌋ − ∑_{i<j} ⌊x/(pi pj)⌋ + ∑_{i<j<k} ⌊x/(pi pj pk)⌋ − ⋯ ± ⌊x/(p1 ⋯ pN)⌋.   (1)
Dividing by x and letting x → ∞ gives
∑_i 1/pi − ∑_{i<j} 1/(pi pj) + ∑_{i<j<k} 1/(pi pj pk) − ⋯ ± 1/(p1 ⋯ pN).   (2)
This can be written as
1 − ∏_{i=1}^{N} (1 − 1/pi).   (3)
If no other primes than p1, ..., pN exist, then the expression in (1) is equal to ⌊x⌋ − 1 and the expression in (2) is equal to 1, but clearly the expression in (3) is not equal to 1. Therefore, there must be more primes than p1, ..., pN.
Proof using Legendre's formula
In 2010, Junho Peter Whang published the following proof by contradiction. Let k be any positive integer. Then according to Legendre's formula (sometimes attributed to de Polignac)
k! = ∏_{p prime} p^{f(p,k)},
where
f(p,k) = ⌊k/p⌋ + ⌊k/p^2⌋ + ⌊k/p^3⌋ + ⋯ .
Since f(p,k) ≤ k/(p − 1) ≤ k for every prime p, each factor satisfies p^{f(p,k)} ≤ p^k. But if only finitely many primes exist, then
(∏_p p)^k / k! → 0 as k → ∞
(the numerator of the fraction would grow singly exponentially while by Stirling's approximation the denominator grows more quickly than singly exponentially),
contradicting the fact that for each k the numerator is greater than or equal to the denominator.
Proof by construction
Filip Saidak gave the following proof by construction, which does not use reductio ad absurdum or Euclid's lemma (that if a prime p divides ab then it must divide a or b).
Since each natural number greater than 1 has at least one prime factor, and two successive numbers n and (n + 1) have no factor in common, the product n(n + 1) has more different prime factors than the number n itself. So the chain of pronic numbers:1×2 = 2 {2}, 2×3 = 6 {2, 3}, 6×7 = 42 {2, 3, 7}, 42×43 = 1806 {2, 3, 7, 43}, 1806×1807 = 3263442 {2, 3, 7, 43, 13, 139}, · · ·provides a sequence of unlimited growing sets of primes.
Proof using the incompressibility method
Suppose there were only k primes (p1, ..., pk). By the fundamental theorem of arithmetic, any positive integer n could then be represented as
n = p1^e1 p2^e2 ⋯ pk^ek,
where the non-negative integer exponents ei together with the finite-sized list of primes are enough to reconstruct the number. Since pi ≥ 2 for all i, it follows that ei ≤ log2 n for all i (where log2 denotes the base-2 logarithm). This yields an encoding for n of the following size (using big O notation):
O(k log2 log2 n) = O(log log n) bits.
This is a much more efficient encoding than representing n directly in binary, which takes N = O(log n) bits. An established result in lossless data compression states that one cannot generally compress N bits of information into fewer than N bits. The representation above violates this by far when n is large enough since log log n grows much more slowly than log n. Therefore, the number of primes must not be finite.
Proof using an even-odd argument
Romeo Meštrović used an even-odd argument to show that if the number of primes is not infinite then 3 is the largest prime, a contradiction.
Suppose that are all the prime numbers. Consider and note that by assumption all positive integers relatively prime to it are in the set . In particular, is relatively prime to and so is . However, this means that is an odd number in the set , so , or . This means that must be the largest prime number which is a contradiction.
The above proof continues to work if is replaced by any prime with , the product becomes and even vs. odd argument is replaced with a divisible vs. not divisible by argument. The resulting contradiction is that must, simultaneously, equal and be greater than , which is impossible.
Stronger results
The theorems in this section simultaneously imply Euclid's theorem and other results.
Dirichlet's theorem on arithmetic progressions
Dirichlet's theorem states that for any two positive coprime integers a and d, there are infinitely many primes of the form a + nd, where n is also a positive integer. In other words, there are infinitely many primes that are congruent to a modulo d.
Prime number theorem
Let π(x) be the prime-counting function that gives the number of primes less than or equal to x, for any real number x. The prime number theorem then states that x / log x is a good approximation to π(x), in the sense that the limit of the quotient of the two functions π(x) and x / log x as x increases without bound is 1:
lim_{x → ∞} π(x) / (x / log x) = 1.
Using asymptotic notation this result can be restated as
π(x) ~ x / log x.
This yields Euclid's theorem, since
lim_{x → ∞} x / log x = ∞.
Bertrand–Chebyshev theorem
In number theory, Bertrand's postulate is a theorem stating that for any integer n > 1, there always exists at least one prime number p such that
n < p < 2n.
Equivalently, writing π(x) for the prime-counting function (the number of primes less than or equal to x), the theorem asserts that π(x) − π(x/2) ≥ 1 for all x ≥ 2.
This statement was first conjectured in 1845 by Joseph Bertrand (1822–1900). Bertrand himself verified his statement for all numbers in the interval [2, 3 × 10^6].
His conjecture was completely proved by Chebyshev (1821–1894) in 1852 and so the postulate is also called the Bertrand–Chebyshev theorem or Chebyshev's theorem.
Notes
References
External links
Euclid's Elements, Book IX, Prop. 20 (Euclid's proof, on David Joyce's website at Clark University)
Articles containing proofs
Theorems about prime numbers | Euclid's theorem | [
"Mathematics"
] | 2,420 | [
"Mathematical objects",
"Infinity",
"Theorems about prime numbers",
"Theorems in number theory",
"Articles containing proofs"
] |
3,143,719 | https://en.wikipedia.org/wiki/Eucrite | Eucrites are achondritic stony meteorites, many of which originate from the surface of the asteroid 4 Vesta and are part of the HED meteorite clan. They are the most common achondrite group with over 100 meteorites found.
Eucrites consist of basaltic rock from the crust of 4 Vesta or a similar parent body. They are mostly composed of calcium-poor pyroxene, pigeonite, and calcium-rich plagioclase (anorthite).
Based on differences of chemical composition and features of the component crystals, they are subdivided into several groups:
Non-cumulate eucrites are the most common variety and can be subdivided further:
Main series eucrites formed near the surface and are mostly regolith breccias lithified under the pressure of overlying newer deposits.
Stannern trend eucrites are a rare variety.
Nuevo Laredo trend eucrites are thought to come from deeper layers of 4 Vesta's crust, and are a transition group towards the cumulate eucrites.
Cumulate eucrites are rare types with oriented crystals, thought to have solidified in magma chambers deep within 4 Vesta's crust.
Polymict eucrites are regolith breccias consisting of mostly eucrite fragments and less than one part in ten of diogenite. They are less common.
Etymology
Eucrites get their name from the Greek word meaning "easily distinguished". This refers to the silicate minerals in them, which can be easily distinguished because of their relatively large grain size.
Eucrite is also a now obsolete term for bytownite-gabbro, an igneous rock formed in the Earth's crust. The term was used as a rock type name for some of the Paleogene igneous rocks of Scotland.
See also
Glossary of meteoritics
References
External links
Eucrite images - Meteorites Australia
Planetary science
Asteroidal achondrites
4 Vesta
it:Meteorite HED#Eucriti | Eucrite | [
"Astronomy"
] | 422 | [
"Planetary science",
"Astronomical sub-disciplines"
] |
3,143,942 | https://en.wikipedia.org/wiki/Hydrazinium%20nitroformate | Hydrazinium nitroformate (HNF) is a salt of hydrazine and nitroform (trinitromethane). It has the molecular formula [N2H5][C(NO2)3] (i.e., CH5N5O6) and is soluble in most solvents.
Hydrazinium nitroformate is an energetic oxidizer. Research is being conducted at the European Space Agency to investigate its use in solid rocket propellants. It tends to produce propellants which burn very rapidly and with very high combustion efficiency. Its high energy leads to high specific impulse propellants. It is currently an expensive research chemical available only in limited quantities. A disadvantage of HNF is its limited thermal stability.
References
Further reading
Rocket oxidizers
Hydrazinium compounds
Nitro compounds | Hydrazinium nitroformate | [
"Chemistry"
] | 148 | [
"Rocket oxidizers",
"Hydrazinium compounds",
"Oxidizing agents",
"Salts"
] |
3,144,048 | https://en.wikipedia.org/wiki/Diogenite | Diogenites are a group of the HED meteorite clan, a type of achondritic stony meteorites.
Origin and composition
Diogenites are currently believed to originate from deep within the crust of the asteroid 4 Vesta, and as such are part of the HED meteorite clan. There are about 40 distinct members known.
Diogenites are composed of igneous rocks of plutonic origin, having solidified slowly enough deep within Vesta's crust to form crystals which are larger than in the eucrites. These crystals are primarily magnesium-rich orthopyroxene, with small amounts of plagioclase and olivine.
Name
Diogenites are named for Diogenes of Apollonia, an ancient Greek philosopher who was the first to suggest an outer space origin for meteorites.
See also
Glossary of meteoritics
Vesta family
References
External links
Diogenite images - Meteorites Australia
Planetary science
Asteroidal achondrites
4 Vesta | Diogenite | [
"Astronomy"
] | 202 | [
"Planetary science",
"Astronomical sub-disciplines"
] |
3,144,059 | https://en.wikipedia.org/wiki/Carrion%20flower | Carrion flowers, also known as corpse flowers or stinking flowers, are mimetic flowers that emit an odor that smells like rotting flesh. Apart from the scent, carrion flowers often display additional characteristics that contribute to the mimesis of a decaying corpse. These include their specific coloration (red, purple, brown), the presence of setae, and orifice-like flower architecture. Carrion flowers attract mostly scavenging flies and beetles as pollinators. Some species may trap the insects temporarily to ensure the gathering and transfer of pollen.
Plants known as "carrion flower"
Amorphophallus
Many plants in the genus Amorphophallus (family Araceae) are known as carrion flowers. One such plant is the Titan arum (Amorphophallus titanum), which has the world's largest unbranched inflorescence. Rather than a single flower, the titan arum presents an inflorescence or compound flower composed of a spadix or stalk of small and anatomically reduced male and female flowers, surrounded by a spathe that resembles a single giant petal. This plant has a mechanism to heat up the spadix enhancing the emission of the strong odor of decaying meat to attract its pollinators, carrion-eating beetles and "flesh flies" (family Sarcophagidae). It was first described scientifically in 1878 in Sumatra.
Rafflesia
Flowers of plants in the genus Rafflesia (family Rafflesiaceae) emit an odor similar to that of decaying meat. This odor attracts the flies that pollinate the plant. The world's largest single bloom is that of R. arnoldii. This rare flower is found in the rainforests of Borneo and Sumatra. It can grow to be across and can weigh up to . R. arnoldii is a parasitic plant on Tetrastigma vine, which grows only in primary rainforests. It has no visible leaves, roots, or stem. It does not photosynthesize, but rather uses the host plant to obtain water and nutrients.
Stapelia
Plants in the genus Stapelia are also called "carrion flowers". They are small, spineless, cactus-like succulent plants. Most species are native to South Africa, and are grown as potted plants elsewhere. The flowers of all species are hairy to varying degrees and generate the odor of rotten flesh. The color of the flowers also mimics rotting meat. This attracts scavenging flies, for pollination. The flowers in some species can be very large, notably Stapelia gigantea can reach in diameter.
Smilax or Nemexia
In North America, the herbaceous vines of the genus Smilax are known as carrion flowers. These plants have a cluster of small greenish flowers. The most familiar member of this groups is Smilax herbacea. These plants are sometimes placed in the genus Nemexia.
Bulbophyllum (Orchid)
Orchids of the genus Bulbophyllum produce strongly scented flowers. The flowers produce various odors resembling sap, urine, blood, dung, carrion, and, in some species, fragrant fruity aromas. Most are fly-pollinated, and attract hordes of flies. Bulbophyllum beccarii, Bulbophyllum fletcherianum and Bulbophyllum phalaenopsis in bloom have been likened to smelling like a herd of dead elephants. Their overpowering floral odors are sometimes described as making it difficult to walk into a greenhouse in which they in bloom.
Scent
The sources of the flowers' unique scent are not fully identified, partly due to the extremely low concentration of the compounds (5 to 10 parts per billion). Biochemical tests on Amorphophallus species revealed foul-smelling dimethyl sulfides such as dimethyl disulfide and dimethyl trisulfide, and in other species, trace amounts of amines such as putrescine and cadaverine have been found. Methyl thioacetate (which has a cheesy, garlic-like odor) and isovaleric acid (smells of sweat) also contribute to the smell of the flower. Trimethylamine is the cause of the "rotten fish smell" towards the end of the flower's life.
Pollination
Both visual interactions and odor are important attractants for pollinators. In order for pollination to occur, a relationship of attraction and reward must be present between the flower and the pollinator. The pollinator's body mechanically promotes pollen adherence, which is necessary for effective pollen dispersal. The recognizable scent of the carrion flowers is produced in the petals of both male and female flowers and the pollen reward attracts beetles and flies. Popular pollinators of carrion flowers are blowflies (Calliphoridae), house flies (Muscidae), flesh flies (Sarcophagidae) and varying types of beetles, due to the scents produced by the plant. Fly pollinators are typically attracted to pale, dull plants or those with translucent patches. Additionally, these plants produce pollen, do not have present nectar guides and flowers resemble a funnel or complex trap. The host plant can sometimes trap the pollinator during the pollination/feeding process.
Other plants with carrion-scented flowers
Annonaceae
Asimina, commonly referred to as "pawpaw"
Sapranthus palanga
Apocynaceae
subtribe Stapeliinae: Boucerosia frerei, Caralluma, Duvalia, Echidnopsis, Edithcolea grandis, Hoodia, Huernia, Orbea, Piaranthus, Pseudolithos
Araceae
Arum dioscoridis, A. maculatum
Dracunculus vulgaris
Helicodiceros muscivorus
Lysichiton americanum
Symplocarpus foetidus
Aristolochiaceae
Aristolochia californica, A. grandiflora, A. microstoma, A. salvadorensis, A. littoralis
Hydnora
Asparagaceae
Eucomis bicolor
Balanophoraceae
Sarcophyte sanguinea subsp. sanguinea
Bignoniaceae
Crescentia alata
Burmanniaceae
Tiputinia foetida
Cytinaceae
Bdallophytum
Iridaceae
Moraea lurida
Ferraria crispa
Malvaceae
Sterculia foetida
Melanthiaceae
Trillium erectum, T. foetidissimum, T. sessile, T. stamineum
Orchidaceae
Satyrium pumilum
Masdevallia elephanticeps, M. angulata, M. colossus, M. picea
See also
Stinkhorn — fungi that use the same basic principle for spore dispersal
Aseroe rubra — fungi that use the same basic principle for spore dispersal
References
External links
All about stinking flowers
Carrion and Dung Mimicry in Plants
Plant common names
Pollination | Carrion flower | [
"Biology"
] | 1,434 | [
"Plant common names",
"Common names of organisms",
"Plants"
] |
3,144,140 | https://en.wikipedia.org/wiki/Fujimoto%E2%80%93Belleau%20reaction | The Fujimoto–Belleau reaction is a chemical reaction that forms cyclic α-substituted α,β-unsaturated ketones from enol lactones. The reaction was discovered in 1951 by George I. Fujimoto and Bernard Belleau. Belleau used this reaction to synthesize 1-methyl-3-keto-1,2,3,9,10,10a-hexahydrophenanthrene from a ketoacid starting material and Fujimoto demonstrated that steroids could be synthesized from naturally occurring lactone species using this method as well.
The reaction starts with opening the ring by the attack of the Grignard reagent, forming an enolate. A proton transfer occurs, forming another enolate via deprotonation of the carbon atom attached to the R group. Using the enolate, an aldol condensation then occurs with aqueous or acidic workup—i.e., aldol addition followed by an E1cB elimination of hydroxide occurs.
Applications of the Fujimoto-Belleau reaction
Steroid synthesis
The Fujimoto–Belleau reaction is commonly used in steroid synthesis. The reaction can be employed in the syntheses of steroids such as cholestenone, testosterone, and cortisone. Below is a scheme for the Fujimoto–Belleau reaction invoked in steroid synthesis. Note that this pathway is not the true total synthesis of these steroids.
References
Weill-Raynal, J. Synthesis 1969, 49.
Addition reactions
Carbon-carbon bond forming reactions
Condensation reactions
Name reactions | Fujimoto–Belleau reaction | [
"Chemistry"
] | 329 | [
"Name reactions",
"Condensation reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
3,144,280 | https://en.wikipedia.org/wiki/Belyi%27s%20theorem | In mathematics, Belyi's theorem on algebraic curves states that any non-singular algebraic curve C, defined by algebraic number coefficients, represents a compact Riemann surface which is a ramified covering of the Riemann sphere, ramified at three points only.
This is a result of G. V. Belyi from 1979. At the time it was considered surprising, and it spurred Grothendieck to develop his theory of dessins d'enfant, which describes non-singular algebraic curves over the algebraic numbers using combinatorial data.
Quotients of the upper half-plane
It follows that the Riemann surface in question can be taken to be the quotient
H/Γ
(where H is the upper half-plane and Γ is a subgroup of finite index in the modular group) compactified by cusps. Since the modular group has non-congruence subgroups, it is not the conclusion that any such curve is a modular curve.
Belyi functions
A Belyi function is a holomorphic map from a compact Riemann surface S to the complex projective line P1(C) ramified only over three points, which after a Möbius transformation may be taken to be 0, 1, and ∞. Belyi functions may be described combinatorially by dessins d'enfants.
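A simple example (a standard one, stated here for concreteness) is the degree-two map on the projective line itself,
\[
f\colon \mathbf{P}^1(\mathbf{C}) \to \mathbf{P}^1(\mathbf{C}), \qquad f(t) = 4t(1-t),
\]
whose only critical points are t = 1/2, with f(1/2) = 1, and t = ∞, with f(∞) = ∞; thus f is ramified only over {1, ∞} ⊆ {0, 1, ∞} and is a Belyi function. Its dessin d'enfant is the preimage f⁻¹([0, 1]), the real segment from 0 to 1 passing through the single preimage of 1 at t = 1/2.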
Belyi functions and dessins d'enfants – but not Belyi's theorem – date at least to the work of Felix Klein; he used them in his article to study an 11-fold cover of the complex projective line with monodromy group PSL(2,11).
Applications
Belyi's theorem is an existence theorem for Belyi functions, and has subsequently been much used in the inverse Galois problem.
References
Further reading
Algebraic curves
Theorems in algebraic geometry | Belyi's theorem | [
"Mathematics"
] | 371 | [
"Theorems in algebraic geometry",
"Theorems in geometry"
] |
3,144,661 | https://en.wikipedia.org/wiki/Yakov%20Frenkel |
Yakov Il'ich Frenkel (10 February 1894 – 23 January 1952) was a Soviet physicist renowned for his works in the field of condensed-matter physics. He is also known as Jacov Frenkel, frequently using the name J. Frenkel in publications in English.
Early years
He was born to a Jewish family in Rostov on Don, in the Don Host Oblast of the Russian Empire on 10 February 1894. His father was involved in revolutionary activities and spent some time in internal exile to Siberia; after the danger of pogroms started looming in 1905, the family spent some time in Switzerland, where Yakov Frenkel began his education. In 1912, while studying in the Karl May Gymnasium in St. Petersburg, he completed his first physics work on the Earth's magnetic field and atmospheric electricity. This work attracted Abram Ioffe's attention and later led to collaboration with him. He considered moving to the USA (which he visited in the summer of 1913, supported by money hard-earned by tutoring) but was nevertheless admitted to St. Petersburg University in the winter semester of 1913, at which point any emigration plans ended. Frenkel graduated from the university in three years and remained there to prepare for a professorship (his oral exam for the master's degree was delayed due to the events of the October revolution). His first scientific paper came to light in 1917.
Early scientific career
In the last years of the Great War and until 1921 Frenkel was involved (along with Igor Tamm) in the foundation of the University in Crimea (his family moved to Crimea due to the deteriorating health of his mother). From 1921 till the end of his life, Frenkel worked at the Physico-Technical Institute. Beginning in 1922, Frenkel published a book virtually every year. In 1924, he published 16 papers (of which 5 were basically German translations of his other publications in Russian), three books, and edited multiple translations. He was the author of the first theoretical course in the Soviet Union. For his distinguished scientific service, he was elected a corresponding member of the USSR Academy of Sciences in 1929.
He married Sara Isakovna Gordin in 1920. They had two sons, Sergei and Viktor (Victor). He served as a visiting professor at the University of Minnesota in the United States for a short period of time around 1930.
Early works of Yakov Frenkel focused on electrodynamics, statistical mechanics and relativity, though he soon switched to the quantum theory. Paul Ehrenfest, whom he met at a conference in Leningrad, encouraged him to go abroad for collaborations which he did in 1925–1926, mainly in Hamburg and Göttingen, and met with Albert Einstein in Berlin. It was during this period when Schrödinger published his groundbreaking papers on wave mechanics; Heisenberg's had appeared shortly before. Frenkel enthusiastically entered the field through discussions (he reportedly discovered what is now called the Klein–Gordon equation simultaneously with Oskar Klein) but his first scientific paper on the matter (considering electrodynamics in metals) was published in 1927.
In 1927–1930, he discovered the reason for the existence of domains in ferromagnetics; worked on the theory of resonance broadening and collision broadening of the spectral lines; developed a theory of electric resistance on the boundary of two metals and of a metal and a semiconductor.
Celebrated discoveries
In conducting research on the molecular theory of the condensed state (1926), he introduced the notion of the hole in a crystal, three years before Paul Dirac introduced his eponymous sea. The Frenkel defect became firmly established in the physics of solids and liquids. In the 1930s, his research was supplemented with works on the theory of plastic deformation. His theory, now known as the Frenkel–Kontorova model, is important in the study of dislocations. Tatyana Kontorova was then a PhD candidate working with Frenkel.
In 1930 to 1931, Frenkel showed that neutral excitation of a crystal by light is possible, with an electron remaining bound to a hole created at a lattice site identified as a quasiparticle, the exciton. Mention should be made of Frenkel's works on the theory of metals, nuclear physics (the liquid drop model of the nucleus, in 1936), and semiconductors.
In 1930, his son Viktor Frenkel was born. Viktor became a prominent historian of science, writing a number of biographies of prominent physicists including an enlarged version of Yakov Ilich Frenkel, published in 1996.
In 1934, Frenkel outlined the formalism for the multi-configuration self-consistent field method, later rediscovered and developed by Douglas Hartree.
He contributed to semiconductor and insulator physics by proposing a theory, which is now commonly known as the Poole–Frenkel effect, in 1938. "Poole" refers to H. H. Poole (Horace Hewitt Poole, 1886–1962), Ireland. Poole reported experimental results on the conduction in insulators and found an empirical relationship between conductivity and electrical field. Frenkel later developed a microscopic model, similar to the Schottky effect, to explain Poole's results more accurately. In this paper published in USA, Frenkel only very briefly mentioned an empirical relationship as Poole's law. Frenkel cited Poole's paper when he wrote a longer article in a Soviet journal.
During the 1930s, Frenkel and Ioffe opposed dangerous tendencies in Soviet physics, tying science to the materialist ideology, with remarkable courage. Soviet physics, as a result of these actions, never descended to the depths biology did. Still, he subsequently had to forgo publishing several papers, fearing that might have unfortunate consequences.
Yakov Frenkel was involved in the studies of the liquid phase, too, since the mid-1930s (he undertook some research in colloids) and during the World War II, when the institute was evacuated to Kazan. The results of his more than twenty years of study of the theory of liquid state were generalized in the classic monograph "Kinetic theory of liquids".
Later years
During the wartime, he worked on contemporary practical problems to help his country in sustaining the harsh fight. After the war, Frenkel focussed on seismoelectrics, also proposing that sound waves in metals might affect electric phenomena. He subsequently worked mainly in the field of atmospheric effects, but did not abandon his other interests, publishing several papers in nuclear physics.
Frenkel died in Leningrad in 1952. His son, Victor Frenkel, wrote a biography of his father, Yakov Ilich Frenkel: His work, life and letters. This book, originally written in Russian, has also been translated and published in English.
See also
Chandrasekhar limit
Poromechanics
Solid state ionics
References
English translations of books by Frenkel
, 2nd edition ( Dover Publications, 1950),
Literature
Victor Yakovlevich Frenkel: Yakov Ilich Frenkel. His work, life and letters (original Russian: Яков Ильич Френкель; translated by Alexander S. Silbergleit), Birkhäuser, Basel / Boston / Berlin 2001, (English).
Online
External links
Biography of Jacov Il'ich Frenkel
1894 births
1952 deaths
Scientists from Rostov-on-Don
People from Don Host Oblast
Russian materials scientists
Jewish Russian physicists
Soviet physicists
Soviet nuclear physicists
Corresponding Members of the USSR Academy of Sciences
Condensed matter physicists
Russian scientists | Yakov Frenkel | [
"Physics",
"Materials_science"
] | 1,568 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
3,144,703 | https://en.wikipedia.org/wiki/Riemann%E2%80%93Roch%20theorem%20for%20surfaces | In mathematics, the Riemann–Roch theorem for surfaces describes the dimension of linear systems on an algebraic surface. The classical form of it was first given by , after preliminary versions of it were found by and . The sheaf-theoretic version is due to Hirzebruch.
Statement
One form of the Riemann–Roch theorem states that if D is a divisor on a non-singular projective surface then
χ(D) = χ(0) + (1/2) D.(D − K),
where χ is the holomorphic Euler characteristic, the dot . is the intersection number, and K is the canonical divisor. The constant χ(0) is the holomorphic Euler characteristic of the trivial bundle, and is equal to 1 + pa, where pa is the arithmetic genus of the surface. For comparison, the Riemann–Roch theorem for a curve states that χ(D) = χ(0) + deg(D).
Noether's formula
Noether's formula states that
χ = (c1^2 + c2)/12 = ((K.K) + e)/12,
where χ = χ(0) is the holomorphic Euler characteristic, c1^2 = (K.K) is a Chern number and the self-intersection number of the canonical class K, and e = c2 is the topological Euler characteristic. It can be used to replace the
term χ(0) in the Riemann–Roch theorem with topological terms; this gives the Hirzebruch–Riemann–Roch theorem for surfaces.
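As a quick check of the formula in a standard example: for the projective plane the canonical class is K = −3H (H a line), so
\[
c_1^2 = K.K = 9, \qquad e = c_2 = 3, \qquad \chi = \frac{9 + 3}{12} = 1,
\]
consistent with the arithmetic genus pa = χ − 1 = 0 of the plane.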
Relation to the Hirzebruch–Riemann–Roch theorem
For surfaces, the Hirzebruch–Riemann–Roch theorem is essentially the Riemann–Roch theorem for surfaces combined with the Noether formula. To see this, recall that for each divisor D on a surface there is an invertible sheaf L = O(D) such that the linear system of D is more or less the space of sections of L.
For surfaces the Todd class is 1 + c1/2 + (c1^2 + c2)/12, and the Chern character of the sheaf L is just 1 + c1(L) + c1(L)^2/2, so the Hirzebruch–Riemann–Roch theorem states that
χ(L) = c1(L)^2/2 + c1(L).c1/2 + (c1^2 + c2)/12.
Fortunately this can be written in a clearer form as follows. First putting D = 0 shows that
χ(O) = (c1^2 + c2)/12
(Noether's formula).
For invertible sheaves (line bundles) the second Chern class vanishes. The products of second cohomology classes can be identified with intersection numbers in the Picard group, and we get a more classical version of Riemann–Roch for surfaces:
χ(D) = χ(0) + (1/2) (D.D − D.K) = χ(0) + (1/2) D.(D − K).
If we want, we can use Serre duality to express h2(O(D)) as h0(O(K − D)), but unlike the case of curves there is in general no easy way to write the h1(O(D)) term in a form not involving sheaf cohomology (although in practice it often vanishes).
Early versions
The earliest forms of the Riemann–Roch theorem for surfaces were often stated as an inequality rather than an equality, because there was no direct geometric description of first cohomology groups.
A typical example is given by , which states that
r ≥ n − π + pa + 1 − i,
where
r is the dimension of the complete linear system |D| of a divisor D (so r = h0(O(D)) −1)
n is the virtual degree of D, given by the self-intersection number (D.D)
π is the virtual genus of D, equal to 1 + (D.D + K.D)/2
pa is the arithmetic genus χ(OF) − 1 of the surface
i is the index of speciality of D, equal to dim H0(O(K − D)) (which by Serre duality is the same as dim H2(O(D))).
The difference between the two sides of this inequality was called the superabundance s of the divisor D.
Comparing this inequality with the sheaf-theoretic version of the Riemann–Roch theorem shows that the superabundance of D is given by s = dim H1(O(D)). The divisor D was called regular if i = s = 0 (or in other words if all higher cohomology groups of O(D) vanish) and superabundant if s > 0.
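The bookkeeping behind this comparison can be spelled out; the sketch below only rearranges the sheaf-theoretic formula and the definitions given above (r = h0 − 1, i = h2, s = h1, n = D.D, π = 1 + (D.D + K.D)/2), so it is a consistency check rather than material from the original article.

```latex
% From chi(O(D)) = chi(O) + D.(D-K)/2 to the classical inequality
\[
h^{0} - h^{1} + h^{2} = (1 + p_{a}) + \tfrac{1}{2}\,D.(D-K),
\qquad
\tfrac{1}{2}\,D.(D-K) = D.D - \pi + 1 = n - \pi + 1,
\]
\[
r = h^{0} - 1 = n - \pi + p_{a} + 1 - i + s,
\qquad
s = \dim H^{1}(\mathcal{O}(D)) \ge 0,
\]
% so r >= n - pi + p_a + 1 - i, with equality exactly when D is not superabundant.
```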
References
Topological Methods in Algebraic Geometry by Friedrich Hirzebruch
Theorems in algebraic geometry
Algebraic surfaces
Topological methods of algebraic geometry | Riemann–Roch theorem for surfaces | [
"Mathematics"
] | 910 | [
"Theorems in algebraic geometry",
"Theorems in geometry"
] |
3,144,730 | https://en.wikipedia.org/wiki/Motorola%20Mobility | Motorola Mobility LLC, marketing as Motorola, is an American consumer electronics manufacturer primarily producing smartphones and other mobile devices running Android. Headquartered at Merchandise Mart in Chicago, Illinois, it is a wholly owned subsidiary of the Chinese technology company Lenovo.
Motorola Mobility was formed on January 4, 2011, after a split of the original Motorola into two separate companies, with Motorola Mobility assuming the company's consumer-oriented product lines, including its mobile phone business, as well as its cable modems and pay television set-top boxes. In May 2012, Google acquired Motorola Mobility for US$12.5 billion; the main intent of the purchase was to gain Motorola Mobility's patent portfolio, in order to protect other Android vendors from litigation. Shortly after the purchase, Google sold Motorola Mobility's cable modem and set-top box business to Arris Group, and products increasingly focused on entry-level smartphones. Under the ATAP division, Google also began development on Project Ara. In October 2014, Google sold Motorola Mobility for $2.91 billion to Lenovo, which excluded ATAP and most of the patents. Lenovo's existing smartphone division was subsumed by Motorola Mobility.
The company currently sells a range of smartphones, mainly consisting of the high-end Edge series, the Razr series of foldables, the Moto G series, and the budget Moto E series, as well as a number of other series and products depending on region. As of 2024, its current flagship device is the Motorola Edge 50.
History
On January 4, 2011, Motorola, Inc. was split into two publicly traded companies; Motorola Solutions took on the company's enterprise-oriented business units, while the remaining consumer division was taken on by Motorola Mobility. Motorola Mobility originally consisted of the mobile devices business, which produced smartphones, mobile accessories including Bluetooth headphones, and the home business, which produced set-top boxes, end-to-end video solutions, cordless phones, and cable modems. Legally, the split was structured so that Motorola Inc. changed its name to Motorola Solutions and spun off Motorola Mobility as a new publicly traded company.
2012–2014: Google ownership
On August 15, 2011, American technology company Google announced that it would acquire Motorola Mobility for $12.5 billion, pending regulatory approval. Critics viewed Google as being a white knight, since Motorola Mobility had recently had a fifth straight quarter of losses. Google planned to operate Motorola Mobility as an independent company. In a post on the company's blog, Google CEO and co-founder Larry Page revealed that Google's acquisition of Motorola Mobility was a strategic move to strengthen Google's patent portfolio. At the time, the company had 17,000 patents, with 7,500 patents pending. The expanded portfolio was to defend the viability of its Android operating system, which had been the subject of numerous patent infringement lawsuits between device vendors and other companies such as Apple, Microsoft and Oracle.
On November 17, 2011, Motorola Mobility announced that its shareholders voted in favor of the company's acquisition by Google for $12.5 billion. The deal received regulatory approval from the United States Department of Justice and the European Union on February 13, 2012. The deal received subsequent approval from Chinese authorities and was completed on May 22, 2012. Alongside the completion of the acquisition, Motorola Mobility's CEO Sanjay Jha was replaced by Dennis Woodside, a former senior vice president at Google.
On August 13, 2012, Google announced that it would cut 4,000 employees and close one third of the company's locations, mostly outside the US.
On December 19, 2012, it was announced that Arris Group would purchase Motorola Mobility's cable modem and set-top box business for $2.35 billion in a cash-and-stock transaction.
In May 2013, Motorola Mobility opened a factory in Fort Worth, Texas, with the intent to assemble customized smartphones in the US. At its peak, the factory employed 3,800 workers. On April 9, 2014, following the departure of Woodside, lead product developer Rick Osterloh was named the new president of Motorola Mobility.
Under Google ownership, Motorola Mobility's market share was boosted by a focus on high-quality entry-level smartphones, aimed primarily at emerging markets; in the first quarter of 2014, Motorola Mobility sold 6.5 million phones—led by strong sales of its low-end Moto G, especially in markets such as India, and in the United Kingdom—where the company accounted for 6% of smartphones sold in the quarter, up from nearly zero. These efforts were furthered by the May 2014 introduction of the Moto E—a low-end device aimed at first-time smartphone owners in emerging markets. In May 2014, Motorola Mobility announced that it would close its Fort Worth factory by the end of the year, citing the high costs of domestic manufacturing in combination with the weak sales of the Moto X (which was customized and assembled at the plant) and the company's increased emphasis on low-end devices and emerging markets.
2014–present: Lenovo ownership
On January 29, 2014, Google announced it would, pending regulatory approval, sell Motorola Mobility to the Chinese technology company Lenovo for US$2.91 billion in a cash-and-stock deal that included the transfer of $750 million in Lenovo shares to Google. Google retained the Advanced Technologies & Projects unit (which was integrated into the main Android team), and all but 2000 of the company's patents. Lenovo had prominently disclosed its intent to enter the U.S. smartphone market, and had previously expressed interest in acquiring BlackBerry, but was reportedly blocked by the Canadian government due to national security concerns. The acquisition was completed on October 30, 2014. The company remained headquartered in Chicago, and continued to use the Motorola brand, while Liu Jun, president of Lenovo's mobile device business, became the company's chairman.
On January 26, 2015, owing to its new ownership, Motorola Mobility re-launched its product line in China with the local release of the second-generation Moto X, and an upcoming release of the Moto G LTE and Moto X Pro (a re-branded Nexus 6) in time for the Chinese New Year.
Lenovo maintained a "hands-off" approach in regards to Motorola Mobility's product development. Head designer Jim Wicks explained that "Google had very little influence and Lenovo has been the same." The company continued to engage in practices it adopted under Google, such as the use of nearly "stock" Android, undercutting competitor's pricing while offering superior hardware (as further encouraged by Lenovo), and placing a larger focus on direct-to-consumer selling of unlocked phones in the US market (as opposed to carrier subsidized versions). On July 28, 2015, Motorola Mobility unveiled three new devices, and its first under Lenovo ownership—the third-generation Moto G, Moto X Play, and Moto X Style—in three separate events.
Integration with Lenovo
In August 2015, Lenovo announced that it would merge its existing smartphone division, including design, development, and manufacturing, into the Motorola Mobility unit. The announcement came in addition to a cut of 3,200 jobs across the entire company. As a result of the change, Motorola Mobility would be responsible for the development and production of its own "Moto" product line, as well as Lenovo's own "Vibe" range.
In January 2016, Lenovo announced that the "Motorola" name would be further downplayed in public usage in favor of the "Moto" brand. Motorola Mobility later clarified that the "Motorola" brand will continue to be used in product packaging and through its brand licensees. The company said that "the Motorola legacy is near and dear to us as product designers, engineers and Motorola [Mobility] employees, and clearly it's important to many of you who have had long relationships with us. We plan to continue it under our parent company, Lenovo."
In response to claims by a Lenovo executive that only high-end devices would be produced under the "Moto" name, with low-end devices being amalgamated into Lenovo's existing "Vibe" brand, Motorola Mobility clarified its plans and explained that it would continue to release low-end products under the Moto brand, including the popular Moto G and Moto E lines. Motorola Mobility stated that there would be overlap between the Vibe and Moto lines in some price points and territories, but that both brands would have different "identities" and experiences. Moto devices would be positioned as "innovative" and "trendsetting" products, and Vibe would be a "mass-market challenger brand".
In November 2016, it was reported that Lenovo would be branding all its future smartphones under the brand "Moto by Lenovo". In March 2017, it was reported that Lenovo would continue to use the "Motorola" brand and logo, citing its recognition as a heritage cellphone brand. Furthermore, Aymar de Lencquesaing, Motorola Mobility's president at the time, stated that Lenovo planned to phase out its self-branded smartphones in favor of the Motorola brand.
Under Lenovo, Motorola Mobility has faced criticism for having an increasingly poor commitment to maintaining Android software updates for its devices, exemplified by negative responses to a 2019 announcement that Android 9.0 "Pie" updates to the Moto Z2 Force in the United States would only be available to the Verizon Wireless model.
Products
Razr
Motorola Mobility's predecessor Motorola, Inc. released the Razr V3 in the third quarter of 2004. Because of its striking appearance and thin profile, it was initially marketed as an exclusive fashion phone, but within a year, its price was lowered and it was wildly successful, selling over 50 million units by July 2006. Over the Razr's four-year run, Motorola sold more than 130 million units, becoming the bestselling clamshell phone in the world.
Motorola released other phones based on the Razr design as part of the 4LTR line. These include the Pebl U6, Slvr L6, Slvr L7 (more expensive variant of Slvr L6), Razr V3c (CDMA), Razr V3i (with upgraded camera and appearance and iTunes syncing for 100 songs), V3x (supports 3G technology and has a 2-megapixel camera), Razr V3xx (supports 3.5G technology) and Razr maxx V6 (supports 3.5G technology and has a 2-megapixel camera) announced in July 2006.
The Razr series was marketed until July 2007, when the succeeding Motorola Razr2 series was released. Marketed as a more sleek and more stable design of the Razr, the Razr2 included more features, improved telephone audio quality, and a touch-sensitive external screen. The new models were the V8, the V9, and the V9m. However, Razr2 sales were only half of the original in the same period.
Because Motorola relied so long upon the Razr and its derivatives and was slow to develop new products in the growing market for feature-rich touchscreen and 3G phones, the Razr's appeal declined while rival offerings like the LG Chocolate, BlackBerry, and iPhone captured market share, eventually leading Motorola to drop behind Samsung and LG in mobile phone market share. Motorola's strategy of grabbing market share by selling tens of millions of low-cost Razrs cut into margins and resulted in heavy losses in the cellular division.
Motorola relied on the Razr for too long and was also slow to adopt 3G. While Nokia managed to retain its lead of the worldwide cellular market, Motorola was surpassed first by Samsung and then by LG Electronics. By 2007, without new cellphones that carriers wanted to offer, Motorola sold tens of millions of Razrs and their offshoots by slashing prices, causing margins to collapse in the process. Motorola CEO Ed Zander departed for Dell, while his successor failed to turn around the struggling mobile handset division.
Motorola continued to experience severe problems with its cellphone/handset division in the latter 2000s, recording a record $1.2 billion loss in Q4 2007. Its global competitiveness continued to decline: from 18.4% market share in 2007 to 9.7% by 2008. By 2010, Motorola's global market share had dropped to seventh place, leading to speculation that the company might go bankrupt. While Motorola's other businesses were thriving, the poor results from the Mobile Devices unit, as well as the 2008 financial crisis, delayed the company's plans to spin off the mobile division.
Early Android smartphones
Droid
In 2008, Sanjay Jha took over as co-chief executive officer of Motorola's mobile device division; under Jha's control, significant changes were made to Motorola's mobile phone business, including most prominently, a shift to the recently introduced Android operating system as its sole smartphone platform, replacing both Symbian and Windows Mobile. In August 2009, Motorola introduced the Cliq, its first Android device, for T-Mobile USA. The device also featured a user interface known as Motoblur, which aimed to aggregate information from various sources, such as e-mail and social networking services, into a consistent interface.
A month later, Motorola unveiled the Droid, Verizon Wireless's first Android phone, which was released on November 8, 2009. Backed with a marketing campaign by Verizon, which promoted the device as a direct competitor to the iPhone with the slogan "iDon't. Droid Does.", the Droid was a significant success for Motorola and Verizon. Flurry estimated that at least 250,000 Droid smartphones had been sold in its first week of availability. PC World considered the sales figures to be an indicator of mainstream growth for the Android platform as a whole. The Droid was also named "Gadget of the Year" for 2009 by Time. Other Droid-branded devices would be released by Verizon, although not all of them were manufactured by Motorola.
In 2010, Motorola released the Droid X along with other devices such as the Charm, Flipout, and i1. In July 2010, Motorola reported that it had sold 2.7 million smartphones during the second quarter of 2010, an increase of 400,000 units over the first quarter. Jha stated that the company was in "a strong position to continue improving our share in the rapidly growing smartphone market and improve our operating performance." In its third quarter earnings report, Jha reaffirmed that the Droid X was selling "extremely well."
Atrix 4G, Droid Bionic, XOOM, and Droid RAZR
On January 5, 2011, Motorola Mobility announced that the Atrix 4G and the Droid Bionic were headed to AT&T and Verizon, respectively, with expected release dates in Q1 of 2011. The Atrix was released on February 22 as the world's first phone with both a dual-core processor and 1 GB of RAM. The phone also had optional peripherals such as a Multimedia Dock and a Laptop Dock which launched a Webtop UI. On February 24, two days after the release of the Atrix, the company released the Motorola Xoom, the world's first Android 3.0 tablet, and followed it up shortly afterwards with an update to make it the world's first Android 3.1 tablet.
In the fourth quarter of 2011, Motorola Mobility unveiled the Droid RAZR, the world's thinnest 4G LTE smartphone at that time at just 7.1 mm. The Droid Razr featured Kevlar backing, the same used in bulletproof vests, and a Gorilla Glass faceplate. The phone was very successful through Verizon Wireless, and many color variants were released. In addition, a Maxx version of the Droid RAZR with an extended battery was released at CES 2012. The Droid RAZR MAXX won CTIA's "Best Smartphone" award. The company also announced new products by late 2011 and early 2012 such as the Xoom 2 tablets, the motoACTV fitness watch with Android, and the Droid 4 with 4G LTE for Verizon Wireless.
Though Jha managed to restore some of the lost luster to Motorola Mobility, it struggled against Samsung and Apple. Even among Android manufacturers, Motorola Mobility had dropped behind Samsung, HTC, and LG in market share by the second quarter of 2011. This may have been due to the delay in releasing 4G LTE-capable devices, as well as to pricing its new products too high. Jha was replaced by Dennis Woodside as CEO by May 2012, when the Google acquisition was complete.
Motorola Mobility released the Droid RAZR HD (and Droid RAZR MAXX HD) as its 2012 flagship devices, featuring improvements over 2011's RAZR. A lower end RAZR M was released, along with an Intel powered RAZR i. Through late 2012 until 2013's third quarter, no further devices were released, except for the lower end RAZR D1 and D3 devices for Latin America.
Google era
Moto X (2013-2015)
In an August 2013 interview, Motorola Mobility's vice president of product management Lior Ron explained that the company will focus on the production of fewer products to focus on quality rather than quantity. Ron stated, "Our mandate from Google, from Larry, is really to innovate and take long-term bets. When you have that sort of mentality, it's about quality and not quantity".
Speaking at the D11 conference in Palos Verdes, California, in May 2013, Motorola Mobility CEO Dennis Woodside announced that a new mobile device would be built by his company at a 500,000-square-foot facility near Fort Worth, Texas, formerly used by Nokia. The facility would employ 2,000 people by August 2013, and the new phone, to be named "Moto X", would be available to the public in October 2013. The Moto X featured Google Now software, and an array of sensors and two microprocessors that would let users “interact with [the phone] in very different ways than you can with other devices”. Media reports suggested that the phone would be able to activate functions preemptively based on an "awareness" of what the user is doing at any given moment.
On July 3, 2013, Motorola Mobility released a full-page color advertisement in many prominent newspapers across the US. It claimed that Motorola Mobility's next flagship phone would be "the first smartphone designed, engineered, and assembled in the United States". On the same day that the advertisement was published, ABC News reported that customers would be able to choose the color of the phone, as well as add custom engravings and wallpaper at the time of purchase.
In early July 2013, the Wall Street Journal reported that Motorola Mobility would spend nearly US$500 million on global advertising and marketing for the device. The amount is equivalent to half of Apple's total advertising budget for 2012.
On August 1, 2013, Motorola Mobility unveiled the first-generation Moto X smartphone. It was released on August 23, 2013, in the US and Canada.
On September 5, 2014, Motorola Mobility released the second-generation Moto X. This continued the trend of the company letting consumers customize their devices through their Moto Maker website, and added new customization options like additional real wood choices and new leather options. The device also received significant upgrades in specifications: a new 5.2-inch (13 cm) 1080p super AMOLED pentile display, a faster 2.5 GHz Qualcomm Snapdragon 801 processor, and an improved 13-megapixel rear camera capable of recording 4K video, with a dual LED flash. The device also came with new software features along with new infrared proximity sensors.
The Moto X Play and Moto X Style smartphones were announced in July 2015 and released in September 2015. Many customers who ordered customized Moto X Pure Editions via Motorola Mobility's website experienced delays receiving their devices. These delays were attributed to several issues: manufacturing problems, a shortage of parts needed to complete assembly of custom phones (black fronts, Verizon SIM cards, and 64 GB versions), a possible redesign after initial phones exhibited a defect that caused one of the front-facing speakers to rattle at high volume, and multi-day delays clearing US customs at FedEx's Memphis, Tennessee hub due to import-paperwork issues.
The Moto X Force was launched on October 27, 2015. In the US, it was branded as the Droid Turbo 2, and was Motorola Mobility's flagship device of the year, offering a Snapdragon 810 processor and 3 GB of memory. Like other Moto X devices, it was customizable through Moto Maker. This was Motorola Mobility's first smartphone to feature the company's "ShatterShield" technology, which consists of two touchscreen elements, reinforced by an internal aluminum frame to make it resistant to bending and cracking, although this does not protect against scratches or other superficial screen damage. The top layer of the display is designed to be user-replaceable. The screen and case also have a water-repellent nanocoating to protect the device from liquids that could damage internal components.
The Moto X4 was introduced in August 2017. Like the 2014 Moto X, it featured a 5.2-inch display, here a 1080p IPS panel, and came with 3, 4, or 6 GB of RAM depending on the version. There were three iterations: a retail one from Motorola Mobility, an Android One version from Google, and an Amazon Prime edition. Unlike its predecessors, the Moto X4 did not offer some older Moto X exclusive features such as the Moto Maker. The phone came with Android Nougat 7.1.1 and Moto features, with support for A/B partitions and seamless updates. It was updated to Android Oreo in late 2017 and Android Pie in 2018. It was succeeded by the Motorola One lineup from 2018.
Droid Mini, Ultra and Maxx
Droid Mini, Droid Ultra and Droid Maxx were announced in a Verizon press conference on July 23, 2013. These phones share a similar design with the preceding Droid Razr HD line in different screen and battery sizes, while all featuring the same Motorola X8 Mobile Computing System as the first-generation Moto X, with exclusive features like Motorola Active Notifications and an 'OK Google' on-device neural-based voice recognition system.
In September 2015, the Droid Maxx 2 was launched as a Verizon exclusive in the US market, sharing the same overall design as the Moto X Style, with Verizon software on board. Unlike its Moto X counterpart, the Droid Maxx 2 did not support Moto Maker customization.
Moto G
On November 13, 2013, Motorola Mobility unveiled the first generation Moto G, a relatively low-cost smartphone. The Moto G had been launched in several markets, including the UK, United States, France, Germany, India and parts of Latin America and Asia. The Moto G was available in the United States, unlocked, for a starting price of US$179. The device is geared toward global markets and some US models support 4G LTE. Unlike the Moto X, the Moto G was not manufactured in the United States.
On September 5, 2014, Motorola Mobility released its successor to the 2013 version of the Moto G, called the Moto G (2nd generation). It came with a larger screen, higher resolution camera, along with dual front-facing stereo speakers.
On July 28, 2015, Motorola Mobility released the third generation of the Moto G series, called the Moto G (3rd generation), in a worldwide press conference in New Delhi, India. It retained the same screen as before but upgraded the processor and RAM. Furthermore, it has an IPX7 water-resistance certification and comes in two variants – one with 1 GB of memory and 8 GB of storage, and the other with 2 GB of memory and 16 GB of storage. The device launched with Android version 5.1.1.
In May 2016, Motorola Mobility released three fourth generation Moto G smartphones: Moto G⁴, Moto G⁴ Plus, and Moto G⁴ Play.
On February 26, 2017, Motorola Mobility released two fifth generation Moto G smartphones during Mobile World Congress: Moto G5 and Moto G5 Plus. The company added two 'special edition' models to the Moto G lineup, the Moto G5S and Moto G5S Plus, on August 1.
In May 2018, Motorola Mobility released the sixth generation of the Moto G lineup in three variants: the G6, G6 Plus and the G6 Play. The G6 and G6 Plus have two rear cameras and a larger screen with an aspect ratio of 18:9. The G6 Plus is capable of taking 4K video. Depending on the variant, specifications range from 3 GB of RAM with 32 GB of storage to 6 GB of RAM with 128 GB of storage.
In February 2019, Motorola Mobility launched the Moto G7 alongside three additional variants: the Moto G7 Play, Moto G7 Power and the Moto G7 Plus.
Moto E
The Moto E (1st generation) was announced and launched on May 13, 2014. It was an entry-level device intended to compete against feature phones by providing a durable, low-cost device for first-time smartphone owners or budget-minded consumers, with a particular emphasis on emerging markets. The Moto E shipped with a stock version of Android 4.4 "KitKat".
The Moto E (2nd generation) was announced and launched on March 10, 2015, in India. Released in the wake of its successful first generation, the second generation of the Moto E series still aims to provide a smooth experience to budget-oriented consumers. It increased the screen size to 4.5 inches but kept the resolution at 540 × 960px. It came in two versions, a 3G-only one powered by a Snapdragon 200 chipset and a 4G LTE version powered by a Snapdragon 410 chipset. As before, it shipped with a stock version of Android 5.0 "Lollipop".
In 2015 Motorola Mobility marketed the 2nd generation Moto E with the promise of continual updates and support, "And while other smartphones in this category don't always support upgrades, we won't forget about you, and we'll make sure your Moto E stays up to date after you buy it." However, 219 days after launch Motorola announced that the device would not receive an upgrade from Lollipop to 6.0 "Marshmallow". It was later announced that the LTE variant of the device would receive an upgrade to Marshmallow in Canada, Europe, Latin America, and Asia (excluding China). China and the US carrier-branded versions of the device remained on Lollipop, with a minor upgrade to version 5.1. However, the 2nd generation Moto E in the USA did continue to receive support via Android Security Patch updates until at least the October 1, 2016 patch for the LTE variant and the November 1, 2016 patch for the non-LTE variant.
Google Nexus 6 / Moto X Pro
The Nexus 6 was announced October 15, 2014 by Motorola Mobility in partnership with Google. It was the first 6-inch smartphone in the mainstream market, and came with many high-end specs. It was the successor to the Nexus 5, Google's previous flagship phone from their Google Nexus series of devices. Its design was similar to the Moto X (2nd generation) but with a larger display and dual, front-facing speakers rather than the single front-facing speaker on the Moto X. It was the first phone running vanilla Android Lollipop, receiving software updates directly from Google. It was later updated to Android Marshmallow in 2015 and Android Nougat in 2016, though later versions of Android 7.x took some time to arrive; it never received the Android 7.1.2 update, and its support ended with Android 7.1.1 at the end of 2017.
On January 26, 2015, Motorola Mobility announced that they would sell the Moto X Pro in China. The Moto X Pro was similar to the Nexus 6 in terms of hardware, but excluded all of Google's services and applications. The phone was released in April 2015 with Android 5.0.2 'Lollipop'. However, it never received any Android version update throughout its lifetime, despite having the same hardware as the Nexus 6.
Droid Turbo / Moto Maxx / Moto Turbo
The Droid Turbo (Moto Maxx in South America and Mexico, Moto Turbo in India) features a 3900 mAh battery lasting up to two days. Motorola claims an additional eight hours of use after only fifteen minutes of charging with the included Turbo Charger. The device is finished in ballistic nylon over a Kevlar fiber layer and is protected by a water repellent nano-coating. Droid Turbo uses a quad-core Snapdragon 805 processor clocked at 2.7 GHz, 3 GB RAM, a 21-megapixel camera with 4K video, 5.2-inch screen with resolution of 2560 × 1440 pixels. The Droid Turbo includes 32 or 64 GB of internal storage, while the Moto Maxx is only available in 64 GB.
In late 2015, Droid Turbo 2 was introduced as a Verizon exclusive, which was a rebrand of Moto X Force. It was the first Droid device to offer Moto Mods for extensive customization, and ShatterShield display technologies.
Lenovo era
Moto Z and Moto Mods
The Moto Z lineup was introduced in June 2016. The smartphone features Motorola Mobility's Moto Mods platform, in which the user can magnetically attach accessories or "Mods" to the back of the phone, including a projector, style shells, a Hasselblad-branded camera lens, and a JBL speaker. There were three versions of the original Moto Z. The global flagship model Moto Z, and the Moto Z Droid as a Verizon exclusive, were introduced as the thinnest premium smartphone in the world, according to Motorola, and featured a 13-megapixel camera with 4K video, a 5.5-inch screen, 4 GB of RAM, and a Snapdragon 820 chipset underclocked to 1.8 GHz, though the full clock speed could be unlocked with root permission. The Moto Z Play, on the other hand, featured a less powerful processor and a bigger battery. The Moto Z Force Droid, only introduced as a Verizon exclusive, featured the Snapdragon 820 chipset at its standard frequency, a display with Motorola ShatterShield technology and a 21-megapixel camera. These phones shipped with near-stock Android 6.0 'Marshmallow' with the usual Moto experiences. They were later updated to Android Nougat and Android Oreo in early 2017 and 2018, a substantial delay compared to older Moto X models.
The Moto Z2 Play was launched in June 2017 with an updated Moto experience, a slightly faster processor than the Moto Z Play, and Android Nougat. The Moto Z2 Force was launched in July 2017, and was the last Motorola flagship phone to date with a current-generation flagship processor. It was the first non-Google phone to feature the 'A/B partition' and 'seamless update' features of Android Nougat, allowing users to install updates in the background and finish with just a restart. It was updated to Android Oreo at the end of 2017, which was fairly speedy compared to other Android OEMs. In China, the phones run ZUI from ZUK, a subsidiary of Lenovo, instead of near-stock Android, and the Moto Z2 Force was rebranded as the 'Moto Z 2018'. Several Moto Mods were also released in 2017, including a Motorola TurboPower Moto Mod with fast-charging capabilities.
The Moto Z3 lineup was released in August 2018, consisting of the Moto Z3 Play with a Snapdragon 6-series SoC and the Moto Z3 with the previous generation's 8-series SoC, both supporting all Moto Mods introduced. The 5G Moto Mod was introduced alongside the Moto Z3 lineup, and was a Verizon exclusive at launch in early 2019. It enables 5G connectivity for the Moto Z2 Force and later Moto Z devices when the mod is attached to the phone. The phones launched with Android Oreo and were later updated to Android Pie, with a similar software experience as earlier Moto Z devices.
The Moto Z4 was launched in May 2019 with a 48-megapixel camera sensor and enhanced night sight features, as well as an integrated fingerprint sensor. It featured a Snapdragon 675 SoC and 4 GB of memory, and launched with Android Pie. Unlike older Moto Z models, this phone was focused on the upper mid-range market, and came with a Moto 360 Camera Mod in the box.
Moto M
The Moto M was introduced in late 2016. It was a mid-range device launched in markets such as mainland China and Hong Kong, Southeast Asia, and South Asia. It featured an octa-core MediaTek processor and 3 or 4 GB of memory depending on storage, and ran near-stock Android. Despite the Moto branding, the bootloader and software update software came from Lenovo directly instead of Motorola Mobility, similar to the Moto E3 Power.
Moto C
The Moto C was announced in 2017 as a low-end device, slotting in below the Moto E as the cheapest in the Moto range. The base model includes a 5-inch display, MediaTek quad-core processor, a 5-megapixel rear-facing camera, and an LED flash for the front-facing camera. It is accompanied by a higher-end model, the Moto C Plus, which features a more powerful processor, larger battery, 8-megapixel camera, and a 720p display.
Motorola One lineup
In 2018, Motorola Mobility launched the Motorola One lineup as upper mid-range replacements for the Moto X4. In August 2018, the first phones in the lineup, the Motorola One and Motorola One Power, were launched. Both phones featured dual camera setups and displays with 'notches' for the sensors and front-facing camera, and both were part of Google's Android One program, with three years of guaranteed security updates. The phones launched with Android Oreo 8.1, and were later updated to Android Pie and Android 10 at the end of 2018 and 2019, respectively.
The Motorola One Vision and Motorola One Action were introduced in May and August 2019. Both devices feature Samsung Exynos processors. The Motorola One Vision featured a dual camera setup with a 48-megapixel main sensor, a 'hole punch' display cutout for the front camera, and 27 W TurboPower fast charging. The Motorola One Action featured a triple camera setup with an ultra-wide lens. Both phones were part of Google's Android One program. They launched with Android Pie and were updated to Android 10 in early January 2020.
Motorola Razr (2020)
The Razr is the first foldable smartphone from Motorola Mobility, utilizing a similar design as the original Razr. It was introduced in November 2019 and was expected to launch in 2020. It featured a mid-range Snapdragon 7-series SoC and Android Pie, with a promised update to Android 10 in 2020.
Motorola Edge & Edge+
The Motorola Edge and Motorola Edge+ were introduced in April 2020. The Edge uses the Snapdragon 765G, while the Edge+ uses the Snapdragon 865; both feature Android 10.0, standard 5G and a curved 90 Hz OLED display. The Edge+ was the first Motorola phone to use a 108-megapixel sensor for the main camera with 6K video recording, and marked a return to flagship devices for Motorola Mobility.
ThinkPhone
The ThinkPhone by Motorola is a business-oriented smartphone that combines the security of ThinkShield with the mobility of Android. It is built to withstand the demands of everyday use and comes with a variety of proprietary features that make it ideal for current Lenovo ThinkPad, ThinkBook, and ThinkCentre users.
Smartwatches
Motoactv
Motoactv is a square smartwatch running Android 2.3 released by Motorola Mobility in 2011. It contained a number of hardware features and software applications tailored to fitness training.
Moto 360
Moto 360 is a round smartwatch, powered by Google's Android Wear OS, a version of Google's popular Android mobile platform specifically designed for the wearable market. It integrates Google Now and pairs to an Android 4.3 or above smartphone for notifications and control over various features. The second version of this smartwatch was released in 2015.
MINNIDIP x RAZR CH(AIR)
In August 2020, the MINNIDIP x RAZR CH(AIR) was announced by Motorola Mobility.
Brand licensing
The company has licensed its brand through the years to several companies and a variety of home products and mobile phone accessories have been released. Motorola Mobility created a dedicated "Motorola Home" website for these products, which sells corded and cordless phones, cable modems and routers, baby monitors, home monitoring systems and pet safety systems. In 2015, Motorola Mobility sold its brand rights for accessories to Binatone, which already was the official licensee for certain home products. This deal includes brand rights for all mobile and car accessories under the Motorola brand.
In 2016, Zoom Telephonics was granted the worldwide brand rights for home networking products, including cable modems, routers, Wi-Fi range extenders and related networking products.
Sponsorship
From 2022, Motorola was the main kit sponsor of Italian football club AC Monza.
In September 2024, Motorola became the global smartphone partner of Formula 1 from 2025 onwards.
See also
iDEN
WiDEN
List of electronics brands
List of Motorola products
List of Illinois companies
Motorola Moto
Motorola Solutions
References
External links
2011 establishments in Illinois
American companies established in 2011
Corporate spin-offs
Computer hardware companies
Computer systems companies
Manufacturing companies based in Chicago
Electronics companies established in 2011
Electronics companies of the United States
American brands
American subsidiaries of foreign companies
Google acquisitions
Mobile phone manufacturers
Mobile phone companies of the United States
2012 mergers and acquisitions
2014 mergers and acquisitions | Motorola Mobility | [
"Technology"
] | 8,041 | [
"Computer hardware companies",
"Computer systems companies",
"Computers",
"Computer systems"
] |
17,622,533 | https://en.wikipedia.org/wiki/Sexual%20dimorphism%20in%20non-human%20primates | Sexual dimorphism describes the morphological, physiological, and behavioral differences between males and females of the same species. Most primates are sexually dimorphic for different biological characteristics, such as body size, canine tooth size, craniofacial structure, skeletal dimensions, pelage color and markings, and vocalization. However, such sex differences are primarily limited to the anthropoid primates; most of the strepsirrhine primates (lemurs and lorises) and tarsiers are monomorphic.
Sexual dimorphism can manifest itself in many different forms. In male and female primates there are obvious physical differences, such as body size or canine size. Dimorphism can also be seen in skeletal features such as the shape of the pelvis or the robustness of the skeleton. A species' mating system plays an important role in the sexual selection that drives these differences.
Types
Body size
Extant primates exhibit a broad range of variation in sexual size dimorphism (SSD), or sexual divergence in body size. It ranges from species such as gibbons and strepsirrhines (including Madagascar's lemurs) in which males and females have almost the same body sizes to species such as chimpanzees and bonobos in which males' body sizes are larger than females' body sizes. In extreme cases, males have body sizes that are almost twice as large as those of females, as in some species including gorillas, orangutans, mandrills, hamadryas baboons, and proboscis monkeys.
Patterns of size dimorphism exhibited in primates may correspond to the intensity of competition between members of the same sex for access to mates–intrasexual competition, counteracted by fecundity selection on the other sex. Some callitrichine and strepsirrhine primates are, however, characterized by the reverse dimorphism, a phenomenon in which females are larger than males.
Tooth size
Canine sexual dimorphism is one particular type of sexual dimorphism, in which males of a species have larger canines than females. Within primates, the male and female canine tooth size varies among different taxonomic subgroups, yet canine dimorphism is most extensively found in catarrhines among haplorhine primates. For example, in many baboons and macaques, the size of male canines is more than twice as large as that of female canines. It is rare, yet females in some species are known to have larger canines than males, such as the eastern brown mouse lemur (Microcebus rufus). Sexual dimorphism in canine tooth size is relatively weak or absent in extant strepsirrhine primates. The South American titi monkeys (Callicebus moloch), for instance, do not exhibit any differences in the size of canine teeth between the sexes.
Among different types of teeth constituting the dentition of primates, canines exhibit the greatest degree of variation in tooth size, whereas incisors have less variation and cheek teeth have the least. A canine dimorphism is also more widely seen in maxillary canines than in mandibular canines.
Craniofacial structure
Craniofacial sex differentiation among anthropoid primates varies in a wide range and is known to arise primarily through ontogenetic processes. Studies on hominids have shown that, in general, males tend to have a greater increase of facial volume than of neurocranial volume, a more obliquely oriented foramen magnum, and a more pronounced rearrangement of the nuchal region. The breadth, length and height of the neurocranium in adult male macaques, guenons, orangutans and gorillas are about nine percent larger than the neurocranial dimensions in adult females, whereas in spider monkeys and gibbons the sex difference is on average about 4 to 5 percent. In orangutans, males and females share similarities in facial dimensions and growth in terms of orbits, nasal width, and facial width. They tend to have some significant differences, however, in various facial heights (e.g., height of the anterior face, premaxilla, and nose).
Skeletal structure
Primates also exhibit sexual dimorphism in skeletal structures. In general, skeletal dimorphism in primates is primarily known as a product of body mass dimorphism. Hence, males have proportionally larger skeletons compared to females due to their larger body masses. Larger and more robust skeletal structures in males are also attributable to better-developed muscle scarring and more intense cresting of bones compared to those of females. Male gorillas, for example, possess large sagittal and nuchal crests, which correspond to their large temporalis muscles and nuchal musculature. Also, an unusual skeletal dimorphism includes enlarged, hollow hyoid bones found in males of gibbons and howler monkeys, which contribute to the resonation of their voices.
Pelage color and markings
Sex differences in pelage, such as capes of hair, beards, or crests, and skin can be found in several species among adult primates. Several species (e.g., Lemur macaco, Pithecia pithecia, Alouatta caraya) show an extensive dimorphism in pelage colors or patterning. For example, in mandrills (Mandrillus sphinx), males display extensive red and blue coloration on their face, rump and genitalia as compared to females. Male mandrills also possess a yellow beard, nuchal crest of hair, and pronounced boney paranasal ridges, all of which are absent or vestigial in females. Studies have shown that male color in mandrills serves as a badge of social status in the species.
Temporary sexual dimorphism
Some sexual dimorphic traits in primates are known to appear on a temporary basis. In squirrel monkeys (Saimiri sciureus), males can gain fat as much as 25 percent of the body mass only during the breeding season, specifically in their upper torso, arms, and shoulders. This seasonal phenomenon, known as “male fattening,” is associated with both male–male competition and female choice for larger males.
Orangutan males tend to gain weight and develop large cheek flanges, when they achieve dominance over other group members.
Vocalization
In many adult primates, dimorphism in the vocal repertoire can appear in both call production (e.g., calls with a particular set of acoustic traits) and usage (e.g., call frequency and context-specificity) between the sexes. Sex-specific calls are commonly found in Old World monkeys, in which males produce loud calls for intergroup spacing and females produce copulation calls for sexual activity. Forest guenons also tend to display strong vocal divergences between sexes, with mostly sex-specific call types. Studies on De Brazza's monkeys (Cercopithecus neglectus), one of the African guenon species, have shown that call rates in adult females (24 calls per hour) are more than seven times higher than in adult males (2.5 calls per hour). The usage of different call types also differs between sexes, in that females mostly utter contact(-food) calls, whereas males produce a great number of threat calls. Such difference in vocal usage is associated with social roles, with females being involved in more social tasks within the group and males being responsible for territory defense.
Ultimate mechanisms
Ultimate mechanisms explain the evolutionary history and functional significance of the sexual dimorphism expressed among primates.
Intrasexual selection
Intrasexual selection is one of two components that make up sexual selection as defined by Darwin and refers to competition within a sex for access to mates. For species where such competition determines their reproductive success, selection pressures for increased strength/size and weaponry/canines are heightened, resulting in the evolution of sexual dimorphism. The most common illustration of intrasexual selection is male–male competition, in which males of a species fight or threaten each other for preferential access to females.
A prime example of intrasexual selection can be found in baboons. Male baboons are known to violently fight and threaten each other over females and show high levels of sexual dimorphism in body and canine size, both of which are assumed to aid in combat. The “winners” of such interactions mate with the desired female and produce offspring, passing their traits to the next generation, while unsuccessful males are excluded from mating. As a result, traits beneficial to fighting are selected for in the population over time.
Intrasexual selection also operates through female–female competition. Female howler monkeys, for example, experience frequent agonistic encounters both within and between coalitions. One possible evolutionary explanation for female–female competition in red howler monkeys is its role as a counter-strategy to infanticide through group size regulation (by evicting other females). Instances of female–female competition such as this could potentially select for greater body and/or canine size in females, as well as reduce the pressure for those same traits in males by limiting the occurrence of male–male competition (as group size regulation reduces the likelihood of threats/takeovers by immigrant males), overall reducing dimorphism.
Intersexual selection
Intersexual selection is often represented by female choice, but more generally refers to differential preferences one sex has for individuals of the opposite sex, including sexual coercion of females by males. Sexual dimorphism arises via intersexual selection most often through female preference for certain male secondary sexual characteristics, but can also arise as a result of males' selective pressure to physically overpower females he wishes to mate with. Gamete production, gestation, lactation, and infant care are all highly energetically costly processes for females, so these energy and time constraints would lead them to choose—when possible—mates with higher quality genes leading to higher quality offspring with a better chance of survival and reproductive success. Importantly, what is deemed “high quality” by the female in this instance need not confer a survival advantage to the male, but must be perceived by females as a sign of attractiveness if not health. A common example of this is sexually dimorphic coloration.
In rhesus macaques, red facial coloration is attractive to females to the point of influencing the reproductive success of high-ranking males. To be deemed a sexually selected trait said trait must be heritable and confer a reproductive advantage. In this example, facial redness is heritable, but only increases a male's reproductive success if he is also high-ranking, and rank is not determined by facial redness (dominance in rhesus macaques is not competition-based but rather queue-based). While this trait is believed to be the result of intersexual selection, such examples demonstrate the complex nature of determining evolutionary explanations for sexually dimorphic characteristics.
Paternity confusion is another component of female choice. By actively seeking out matings with newly immigrated males, females produce offspring whose fathers are unknown. This is beneficial to females because it allows them to sire offspring without the risk of infanticide. These “sneaky matings” mean that even if a male “wins” the opportunity to mate with a female, the father of her infant is not necessarily determined by the outcome of male–male competition, thus limiting the reproductive benefits associated with such competition and dampening the pressure for sexually selected dimorphic traits.
Mating systems
In haplorhines, the degree to which intrasexual and intersexual selection drive sexual dimorphism is dependent on the social organization and mating system of a particular species. Phylogenetic studies reveal polygynous systems among haplorhines show elevated levels of dimorphism. This is expected because polygynous groups, i.e. single-male multi-female, imply males can monopolize females, suggesting male–male competition plays an important role in ensuring any opportunity to reproduce. Without somewhat guaranteed access to females—as is the case in monogamous primates—a male's lifetime reproductive output is dependent on his ability to outcompete other males and lead a group of females. As an exception, among polygynous primates, colobines as a group consistently exhibit a low level of sexual size dimorphism for unclear reasons. Gibbons, on the other hand, are an example of monogamous primates that can be described as “monomorphic,” meaning males and females appear the same with little to no sexual dimorphism. The correlation between mating system and dimorphism in haplorhines likely indicates sexual selection is the driving force behind dimorphism in species of this suborder. Another more general trend observed in haplorhines is a correlation between body mass dimorphism with overall body size.
The lack of a clear relationship between mating system and intensity of sexual dimorphism in strepsirrhines remains a mystery, with some explanations ranging from ecological constraints to selection for speed and agility to unique instances of female social dominance (such as in lemurs) reducing dimorphism. One study offers a challenge to the argument that environmental constraints are the main factor driving monomorphism on Madagascar but fails to isolate specific factors to substitute this theory; simply put, there is no consensus on why strepsirrhines do not follow similar patterns to haplorhines.
Phylogeny
Similar magnitudes of body weight dimorphism have been observed in all species within several taxonomic groups such as callitrichids, hylobatids, Cercopithecus, and Macaca. Such correlation between phylogenetic relatedness and sexual dimorphism across different groups reflects similarities in their behaviors and ecological conditions, but not in independent adaptations. This idea is referred to as “phylogenetic niche conservatism."
Terrestriality
Terrestrial primates tend to show a greater degree of dimorphism than arboreal primates. It has been hypothesized that larger sizes of body mass and canine tooth are favored among males of terrestrial primates due to the likelihood of higher vulnerability to predators. Another hypothesis suggests that arboreal primates have limitations on their upper body size, given that larger body size could disrupt their usage of terminal branches for locomotion. However, among some species of guenons (Cercopithecus), arboreal blue monkeys (C. mitis) appear to be more sexually dimorphic than terrestrial vervet monkeys (C. aethiops).
Niche divergence
It has been hypothesized that niche divergence between the sexes contributes to the evolution of size dimorphism in primates. Males and females are known to have different preferences for ecological habitat due to different reproductive activities, which could possibly lead to dietary differences, followed by dimorphic morphological traits. This niche divergence hypothesis, however, has never been strongly supported due to the lack of compelling data.
See also
Sex differences in humans
References
Academic resources
Short, R. V.; Balaban, E. (eds.). The Differences Between the Sexes (11th International Conference on Comparative Physiology, Crans, Switzerland, 1992). Cambridge: Cambridge University Press, 1994. OCLC 28708379.
Plavcan, J. Michael (2001). "Sexual dimorphism in primate evolution". American Journal of Physical Anthropology. 116 (S33): 25–53. ISSN 0002-9483.
Larsen, C. S. "Equality for the Sexes in Human Evolution? Early Hominid Sexual Dimorphism and Implications for Mating Systems and Social Behavior." Proceedings of the National Academy of Sciences, vol. 100, no. 16, 2003, pp. 9103–9104.
Leigh, Steven R. "Socioecology and the Ontogeny of Sexual Size Dimorphism in Anthropoid Primates." American Journal of Physical Anthropology, vol. 97, no. 4, 1995, pp. 339–356.
Scaglion, Richard. "On Australopithecine Sexual Dimorphism". Current Anthropology, vol. 19, no. 1, 1978, pp. 153–154.
"Sexual Dimorphism". The American Naturalist, vol. 37, no. 437, 1903, p. 349.
Primate anatomy
Sexual dimorphism | Sexual dimorphism in non-human primates | [
"Physics",
"Biology"
] | 3,473 | [
"Sex",
"Sexual dimorphism",
"Symmetry",
"Asymmetry"
] |
17,623,209 | https://en.wikipedia.org/wiki/Intensifier | In linguistics, an intensifier (abbreviated ) is a lexical category (but not a traditional part of speech) for a modifier that makes no contribution to the propositional meaning of a clause but serves to enhance and give additional emotional context to the lexical item it modifies. Intensifiers are grammatical expletives, specifically expletive attributives (or, equivalently, attributive expletives or attributive-only expletives; they also qualify as expressive attributives), because they function as semantically vacuous filler. Characteristically, English draws intensifiers from a class of words called degree modifiers, words that quantify the idea they modify. More specifically, they derive from a group of words called adverbs of degree, also known as degree adverbs. When used grammatically as intensifiers, these words cease to be degree adverbs, because they no longer quantify the idea they modify; instead, they emphasize it emotionally. By contrast, the words moderately, slightly, and barely are degree adverbs, but not intensifiers. The other hallmark of prototypical intensifiers is that they are adverbs which lack the primary characteristic of adverbs: the ability to modify verbs. Intensifiers modify exclusively adjectives and adverbs, but this rule is insufficient to classify intensifiers, since there exist other words commonly classified as adverbs that never modify verbs but are not intensifiers, e.g. questionably.
For these reasons, Huddleston argues that intensifier not be recognized as a primary grammatical or lexical category. Intensifier is a category with grammatical properties, but insufficiently defined unless its functional significance is also described (what Huddleston calls a notional definition).
Technically, intensifiers roughly qualify a point on the affective semantic property, which is gradable. Syntactically, intensifiers pre-modify either adjectives or adverbs. Semantically, they increase the emotional content of an expression. The basic intensifier is very. A versatile word, English permits very to modify adjectives and adverbs, but not verbs. Other intensifiers often express the same intention as very.
Examples of English intensifiers
Syntax
Not all intensifiers are the same syntactically since they vary on whether they can be used attributively or predicatively. For example, really and super can be used in both ways:
a. The car is really expensive. - Predicative intensifier
b. the really expensive car - Attributive intensifier
a. Today was super cold. - Predicative intensifier
b. a super cold day - Attributive intensifier
Words such as so can occur only as predicative intensifiers, and others, such as -ass, typically are used only as attributive intensifiers:
a. The car is so expensive. - Predicative intensifier
b. *the so expensive car - Attributive intensifier (not grammatically correct, not used)
a. *Today was cold-ass. - Predicative intensifier (not grammatically correct, not used)
b. a cold-ass day - Attributive intensifier
There is dialectal variation in the "correctness" of certain forms.
Illocutionary force
An intensifier expressly provides an emotional characterization of a lexical item for the benefit of a reader or listener. A speaker or writer's use of the characterization encourages a reader or listener to consider and begin to feel the underlying emotion.
Persuasiveness and credibility
Legal
In general, overuse of intensifiers negatively affects the persuasiveness or credibility of a legal argument. However, if a judge's authoritative written opinion uses a high rate of intensifiers, a lawyer's written appeal of that opinion that also uses a high rate of intensifiers is associated with an increase in favorable outcomes for such appeals. Also, when judges disagree with each other in writing, they tend to use more intensifiers.
Business
A 2010 Stanford Graduate School of Business study found that, in quarterly earnings conference calls, deceptive CEOs use a greater proportion of "extreme positive emotions words" than do CEOs telling the truth. That finding agrees with the presumption that CEOs attempting to hide poor performance exert themselves more forcefully to persuade their listeners. David F. Larcker and Zakolyukina give a list of 115 extreme positive emotions words, including intensifiers: awful, deucedly, emphatically, excellently, fabulously, fantastically, genuinely, gloriously, immensely, incredibly, insanely, keenly, madly, magnificently, marvelously, splendidly, supremely, terrifically, truly, unquestionably, wonderfully, very [good].
A 2013 Forbes Magazine article about counterproductive modes of expression in English specifically discouraged use of really and observed that it provokes doubt and degrades the speaker's credibility: "'Really' – Finder calls this a 'poor attempt to instill candor and truthfulness' that makes clients and coworkers question whether you're really telling the truth."
Quotes
Philosopher Friedrich Nietzsche, in Human, All Too Human (1878), wrote:
The narrator. It is easy to tell whether a narrator is narrating because the subject matter interests him or because he wants to evoke interest through his narrative. If the latter is the case, he will exaggerate, use superlatives, etc. Then he usually narrates the worse, because he is not thinking so much about the story as about himself.
A quote often attributed to Mark Twain but probably by newspaper editor William Allen White is "Substitute 'damn' every time you're inclined to write 'very'; your editor will delete it and the writing will be just as it should be."
See also
Comparison (grammar)
Do-support
Intensive pronoun
Intensive word form
So (sentence closer)
Notes
References
External links
Modifying Meaning: Intensifiers
Grammar | Intensifier | [
"Technology"
] | 1,283 | [
"Parts of speech",
"Components"
] |
17,623,931 | https://en.wikipedia.org/wiki/Kinesin%208 | The Kinesin 8 family is a subfamily of the molecular motor proteins known as kinesins. Most kinesins transport materials or cargo around the cell while traversing along microtubule polymer tracks with the help of energy from ATP hydrolysis. The Kinesin 8 family has been shown to play an important role in chromosome alignment during mitosis. Kinesin 8 family members KIF18A in humans and Kip3 in yeast have been shown to be in vivo plus-end directed microtubule depolymerizers. During prometaphase of mitosis, the microtubules attach to the kinetochores of sister chromatids. Kinesin 8 is thought to play some role in this process, as knockdown of this protein via siRNA produces a phenotype of sister chromatids that are unable to align properly.
References
External links
Video Illustrations of Kinesin 8 depletion
Motor proteins | Kinesin 8 | [
"Chemistry"
] | 196 | [
"Molecular machines",
"Motor proteins"
] |
17,625,193 | https://en.wikipedia.org/wiki/European%20Sleep%20Apnea%20Database | The European Sleep Apnea Database (ESADA) (also referred to with spelling European Sleep Apnoea Database and European Sleep Apnoea Cohort) is a collaboration between European sleep centres as part of the European Cooperation in Science and Technology (COST) Action B 26. The main contractor of the project is the Sahlgrenska Academy at Gothenburg University, Institute of Medicine, Department of Internal Medicine, and the co-ordinator is Jan Hedner, MD, PhD, Professor of Sleep Medicine.
The book Clinical Genomics: Practical Applications for Adult Patient Care said ESADA was an example of initiatives which afford an "excellent opportunity" for future collaborative research into genetic aspects of obstructive sleep apnea syndrome (OSAS). Both the European Respiratory Society and the European Sleep Research Society have noted the database's impact on cooperative research efforts.
History
2006 – 2010
In 2006 the European Sleep Apnea Database (ESADA) began as an initiative between 27 European sleep study facilities to combine information and compile it into one shared resource. It was formed as part of the European Cooperation in Science and Technology (COST) Action B 26. In addition to financial help from COST, the initiative received assistance from companies Philips Respironics and ResMed. The database storing the association's resource information is located in Gothenburg, Sweden. The group's goal was twofold: to serve as a reference guide to those researching sleep disorders, and to compile information about how different caregivers treat patients with sleep apnea.
A total of 5,103 patients were tracked from March 2007 to August 2009. Data collected on these patients included symptoms experienced, medication, medical history, and sleep data, all entered into an online format for further analysis. In 2010, database researchers reported their methodology and findings to the American Thoracic Society, describing the prevalence of metabolic and cardiovascular changes among patients with obstructive sleep apnea. The 2010 research resulted from a collaboration between 22 study centres across 16 countries in Europe involving 27 researchers. The primary participants who presented to the American Thoracic Society included researchers from: Sahlgrenska University Hospital, Gothenburg, Sweden; Technion – Israel Institute of Technology, Haifa, Israel; National TB & Lung Diseases Research Institute, Warsaw, Poland; CNR Institute of Biomedicine and Molecular, Palermo, Italy; Instituto Auxologico Italiano, Ospedale San Luca, Milan, Italy; and St. Vincent University Hospital, Dublin, Ireland. Their analysis was published in 2010 in the American Journal of Respiratory and Critical Care Medicine.
2011 – present
In 2011 there were 22 sleep disorder centres in Europe involved in the collaboration. The group published research in 2011 analyzing the percentage of patients with sleep apnea that have obesity. By 2012 the database maintained information on over 12,500 patients in Europe; it also contained DNA samples from 2,600 individuals. ESADA was represented in 2012 at the 21st annual meeting of the European Sleep Research Society in Paris, France, and was one of four European Sleep Research Networks that held a session at the event. Pierre Escourrou and Fadia Jilwan wrote a 2012 article for the European Respiratory Journal after studying data from ESADA involving 8,228 total patients from 23 different facilities. They analyzed whether polysomnography was a good measure for hypopnea and sleep apnea. Researchers from the department of pulmonary diseases at Turku University Hospital in Turku, Finland, compared variations between sleep centres in the ESADA database and published their findings in the European Respiratory Journal. They looked at the traits of 5,103 patients from 22 centres. They reported on the average age of patients in the database, and the prevalence by region of performing sleep studies with cardiorespiratory polygraphy.
The database added a centre in Hamburg, Germany in 2013, managed by physician Holger Hein. The group's annual meeting in 2013 was held in Edinburgh, United Kingdom and was run by Renata Riha. By March 2013, there were approximately 13,000 total patients being studied in the program, with about 200 additional patients being added to the database each month. An analysis published by researchers from Italy and Sweden in September 2013 in the European Respiratory Journal examined whether there was a correlation between renal function problems and obstructive sleep apnea. They analyzed data from 17 countries in Europe representing 24 sleep centres and 8,112 total patients. They tested whether patients of different demographics and with other existing health problems had an altered probability of kidney function problems when they also had obstructive sleep apnea.
In 2014, researchers released a study of 5,294 patients from the database comparing the prevalence of sleep apnea with increased blood sugar. Their results were published in the European Respiratory Journal. They studied glycated hemoglobin levels in the patients and compared them with the measured severity of sleep apnea, contrasting levels in individuals with less severe sleep apnea against those with more severe disease. As of 20 March 2014 the database included information on a total of 15,956 patients. A 2014 article in the European Respiratory Journal drawing from the ESADA analyzed whether lack of adequate oxygen during a night's sleep was an indicator for high blood pressure.
Reception
In the 2013 book Clinical Genomics: Practical Applications for Adult Patient Care, ESADA is said to be an example of the kind of initiative which affords an "excellent opportunity" for future collaborative research into genetic aspects of obstructive sleep apnea syndrome (OSAS). Both the European Respiratory Society and the European Sleep Research Society have noted the database's impact on cooperative research efforts.
See also
Catathrenia
Deviated septum
Narcolepsy
Obesity hypoventilation syndrome
Congenital central hypoventilation syndrome
Sleep medicine
Sleep sex
Snoring
Notes
References
Further reading
External links
Sleep disorders
University of Gothenburg
Databases in Sweden
Health informatics
Science and technology in Europe
Organizations established in 2006
Pulmonology and respiratory therapy organizations
International medical associations of Europe | European Sleep Apnea Database | [
"Biology"
] | 1,237 | [
"Behavior",
"Health informatics",
"Sleep disorders",
"Sleep",
"Medical technology"
] |
17,625,503 | https://en.wikipedia.org/wiki/HD%2074180 | HD 74180 is a single star in the constellation Vela. It is a yellow-white F-type supergiant with a mean apparent magnitude of +3.81 and a spectral classification F8Ib. Estimates of its distance to Earth vary between 3,200 and 8,300 light-years.
b Velorum has been classified as a suspected α Cygni variable star which varies by only 0.06 magnitude. There are possible periods near 53, 80, and 160 days, but the variation is largely irregular. It lies less than a degree from the small open cluster NGC 2645, but is not a member.
Several studies have considered b Velorum to be a highly luminous supergiant or hypergiant with an early F spectral type, for example F2 Ia+, F0 Ia, and F4 I. There were corresponding luminosity estimates of . A 2015 study used the Barbier-Chalonge-Divan (BCD) system to derive a luminosity of and a cooler less luminous F8 Ib spectral type.
Distance and size
Multiple papers give different distances for b Velorum. Bailer-Jones et al. (2021) give a distance of about . The Hipparcos spacecraft gives a parallax of , translating into a distance of . Aidelman et al. (2015) give a distance of . At the Hipparcos distance, b Velorum has its apparent brightness diminished by 1.11 magnitudes due to interstellar extinction.
b Velorum has an angular diameter estimated at . The physical size depends on the star's distance, and could be assuming the distance of Aidelman et al., assuming the Hipparcos distance, or even assuming the Bailer-Jones et al. distance.
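The relations used above, parallax to distance and angular diameter to physical radius, can be made concrete with a short calculation. The following Python sketch uses hypothetical placeholder values for the parallax and angular diameter (the measured figures are those quoted in the sources above) and applies the standard small-angle conversions; it illustrates the arithmetic and does not reproduce any published estimate.

```python
import math

# Hypothetical placeholder inputs; substitute the measured values from the cited papers.
parallax_mas = 2.0          # annual parallax in milliarcseconds
angular_diameter_mas = 1.0  # angular diameter in milliarcseconds

# d [pc] = 1 / parallax [arcsec]
distance_pc = 1000.0 / parallax_mas
distance_ly = distance_pc * 3.2616          # approximate light-years per parsec

# Physical radius from the small-angle relation: diameter = theta * distance.
theta_rad = angular_diameter_mas * math.pi / (180.0 * 3600.0 * 1000.0)
radius_au = 0.5 * theta_rad * distance_pc * 206265.0   # 1 pc = 206265 au
radius_solar = radius_au * 215.0                        # roughly 215 solar radii per au

print(f"distance ~{distance_pc:.0f} pc (~{distance_ly:.0f} ly), radius ~{radius_solar:.0f} solar radii")
```

Because the derived radius scales linearly with the adopted distance, the spread of published distances translates directly into the spread of size estimates mentioned above.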
In Chinese astronomy
In Chinese, (), meaning Celestial Earth God's Temple, refers to an asterism consisting of Kappa Velorum, Gamma2 Velorum, b Velorum and Delta Velorum. Consequently, Kappa Velorum itself is known as (), "the Fifth Star of Celestial Earth God's Temple".
Notes
References
Velorum, b
074180
Vela (constellation)
Binary stars
Alpha Cygni variables
F-type supergiants
Durchmusterung objects
042570
3445 | HD 74180 | [
"Astronomy"
] | 474 | [
"Vela (constellation)",
"Constellations"
] |
17,625,568 | https://en.wikipedia.org/wiki/HD%2088955 | HD 88955 is a single, white-hued star in the southern constellation of Vela. It can be viewed with the naked eye, having an apparent visual magnitude of 3.85. The distance to HD 88955 can be determined from its annual parallax shift of , which yields a separation of 100 light years from the Sun. It is moving further from the Earth with a heliocentric radial velocity of +7 km/s. Bayesian analysis suggests HD 88955 is a member of the Argus Association, a group of co-moving stars usually associated with the IC 2391 open cluster.
This is an A-type main-sequence star with a stellar classification of A2 V. It is about 410 million years old with a projected rotational velocity of 100 km/s. The star has 2.17 times the mass of the Sun and 2.11 times the Sun's radius. It is radiating 23 times the Sun's luminosity from its photosphere at an effective temperature of 9,451 K. An infrared excess has been detected from HD 88955, which analysis suggests is a debris disc with a mean temperature of orbiting the host star at an average distance of .
References
A-type main-sequence stars
Circumstellar disks
Vela (constellation)
Velorum, q
Durchmusterung objects
088955
050191
4023 | HD 88955 | [
"Astronomy"
] | 286 | [
"Vela (constellation)",
"Constellations"
] |
17,625,628 | https://en.wikipedia.org/wiki/Magic%20number%20%28chemistry%29 | The concept of magic numbers in the field of chemistry refers to a specific property (such as stability) for only certain representatives among a distribution of structures. It was first recognized by inspecting the intensity of mass-spectrometric signals of rare gas cluster ions. Then, the same effect was observed with sodium clusters.
When a gas condenses into clusters of atoms, the clusters that are most likely to form contain anywhere from a few to hundreds of atoms. However, there are peaks at specific cluster sizes, deviating from a pure statistical distribution. Therefore, it was concluded that clusters of these specific numbers of atoms dominate due to their exceptional stability. The concept was also successfully applied to explain the mono-dispersed occurrence of thiolate-protected gold clusters; here the outstanding stability of specific cluster sizes is connected with their respective electronic configuration.
The term magic numbers is also used in the field of nuclear physics. In this context, magic numbers refer to a specific number of protons or neutrons that forms complete nucleon shells.
See also
Magic number (physics)
References
Gas laws | Magic number (chemistry) | [
"Chemistry"
] | 222 | [
"Gas laws"
] |
17,626,156 | https://en.wikipedia.org/wiki/LANA | The latency-associated nuclear antigen (LANA-1) or latent nuclear antigen (LNA, LNA-1) is a Kaposi's sarcoma-associated herpesvirus (KSHV) latent protein initially found by Moore and colleagues as a speckled nuclear antigen present in primary effusion lymphoma cells that reacts with antibodies from patients with KS. It is the most immunodominant KSHV protein; on Western blots it appears as 222–234 kDa double bands that migrate more slowly than predicted from its molecular weight. LANA has been suspected of playing a crucial role in modulating viral and cellular gene expression. It is commonly used as an antigen in blood tests to detect antibodies in persons who have been exposed to KSHV.
KSHV or Human herpesvirus 8 (HHV-8) has been identified as the etiological agent of Kaposi’s sarcoma (KS) and certain AIDS-associated lymphomas.
As KSHV establishes latent infection in tumorous foci, it invariably expresses high levels of the viral LANA protein, which is necessary and sufficient to maintain the KSHV episome.
Encoded by ORF73, LANA-1 is one of few HHV-8 encoded proteins that is highly expressed in all latently infected tumour cells; specifically, it is a phosphoprotein with an acidic internal repeat domain flanked by a carboxy-terminal domain and an amino-terminal domain. LANA-1 acts as a transcriptional regulator, and it has been implicated directly in oncogenesis because of its ability to bind to the tumour-suppressing protein p53 and to the retinoblastoma protein pRb. This leads to the inactivation of p53-dependent promoters and induction of E2F-dependent genes.
Studies have also shown that LANA-1 can transactivate the promoter of the reverse transcriptase subunit of the human telomerase holoenzyme, thus overextending a critical step in cellular transformation.
Paradoxically, LANA-1 has been shown to be involved in transcriptional repression and can, moreover, interact with the mSin3/HDAC1 co-repressor complex.
It has been also shown to interact with and inhibit the ATF4/CREB2 transcription factor that interacts with the basic transcription machinery and to bind with two human chromosome-associated cellular proteins, MeCP2 and DEK.
LANA-1 is associated with cellular chromatin and stays on the chromosomes during cell division. It maintains the viral genomes during cell division by tethering the viral episomes to the chromosomes. It binds directly to replication origin recognition complexes (ORCs) that are primarily associated with the terminal repeat (TR) region of the HHV-8 genome.
Notes and references
Immune system | LANA | [
"Biology"
] | 595 | [
"Immune system",
"Organ systems"
] |
17,626,196 | https://en.wikipedia.org/wiki/IPTC%207901 | IPTC 7901 is a news service text markup specification published by the International Press Telecommunications Council that was designed to standardize the content and structure of text news articles. It was formally approved in 1979, and is still the world's most common way of transmitting news articles to newspapers, web sites and broadcasters from news services.
Using fixed metadata fields and a series of control and other special characters, IPTC 7901 was designed to feed text stories to both teleprinters and computer-based news editing systems. Stories can be assigned to broad categories (such as sports or culture) and be given a higher or lower priority based upon importance.
Although superseded in the early 1990s by IPTC Information Interchange Model and later by the XML-based News Industry Text Format, 7901's huge existing user base has persisted.
IPTC 7901 is closely related to ANPA-1312 (also known as ANPA 84-2 and later 89-3) of the Newspaper Association of America.
C0 control codes
The standard replaces several of the ASCII control codes:
External links
IPTC Website
specification on iptc.org
References
Metadata | IPTC 7901 | [
"Technology"
] | 231 | [
"Computing stubs",
"Metadata",
"Data"
] |
17,626,256 | https://en.wikipedia.org/wiki/ANPA-1312 | ANPA-1312 is a 7-bit text markup specification for news agency use. It standardizes the content and structure of text news articles. It was created by (and named after) the former American Newspaper Publishers Association (ANPA) (1887–1992), one of the predecessors of the News Media Alliance, a trade association of American newspapers.
The specification was last modified in 1989 and as of the 2010s was still a common method of transmitting news to newspapers, websites and broadcasters from news agencies in North and South America. Although the specification provides for 1200 bit-per-second transmission speeds, modern transmission technology removes any speed limitations.
Using fixed metadata fields and a series of control and other special characters, ANPA-1312 was designed to feed text stories to both teleprinters and computer-based news editing systems.
Although the specification was based upon the 7-bit ASCII character set, some characters were declared to be replaced by traditional newspaper characters, e.g. small fractions and typesetting code. As such, it was a bridge between older typesetting methods, newspaper traditions and newer technology.
Perhaps the best known part of ANPA-1312 was the category code system, which allowed articles to be categorized by a single letter. For example, sports articles were assigned category S, and articles about politics were assigned P. Many newspapers found the system convenient and sorted both incoming news agency and staff articles by ANPA-1312 categories.
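The following Python sketch illustrates the kind of category-based sorting described above. It is illustrative only: the two category letters (S for sports, P for politics) are those mentioned in the text, while the article records themselves are hypothetical and do not reproduce the actual ANPA-1312 header layout.

```python
# Group incoming articles by their single-letter ANPA-1312 category code.
# The article records are hypothetical; only the S/P category letters come from the text.
from collections import defaultdict

def sort_by_category(articles):
    """articles: iterable of (category_letter, slug) pairs."""
    desks = defaultdict(list)
    for category, slug in articles:
        desks[category].append(slug)
    return dict(desks)

wire_items = [("S", "baseball-roundup"), ("P", "election-preview"), ("S", "football-injury")]
print(sort_by_category(wire_items))
# {'S': ['baseball-roundup', 'football-injury'], 'P': ['election-preview']}
```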
Although ANPA-1312 was superseded in the early 1990s by IPTC Information Interchange Model and later by the XML-based News Industry Text Format, its popularity in North America remained strong due, in part, to its widespread support by The Associated Press and the reluctance of newspapers to invest in new computers or software modifications. The Associated Press retired ANPA as a delivery option in 2023.
A modified version — but with the same name — was implemented by several news agencies after the vendor of some early computer systems modified the specification for its own purposes.
An international standard, IPTC 7901, is widely used in Europe and is closely related to ANPA-1312.
C0 control codes
The ASCII control characters were modified/replaced in this format.
External links
Wire Service Transmission Guidelines (ANPA-1312 standard description)
Newspaper Association of America
International Press Telecommunications Council
References
Markup languages
Metadata | ANPA-1312 | [
"Technology"
] | 482 | [
"Metadata",
"Data"
] |
17,627,014 | https://en.wikipedia.org/wiki/Joback%20method | The Joback method, often named the Joback–Reid method, predicts eleven important and commonly used pure component thermodynamic properties from molecular structure only. It is named after Kevin G. Joback, who introduced it in 1984 and developed it further with Robert C. Reid. The Joback method is an extension of the Lydersen method and uses very similar groups, formulas, and parameters for the three properties the Lydersen method already supported (critical temperature, critical pressure, critical volume).
Joback and Reid extended the range of supported properties, created new parameters and modified slightly the formulas of the old Lydersen method.
Basic principles
Group-contribution method
The Joback method is a group-contribution method. These kinds of methods use basic structural information of a chemical molecule, like a list of simple functional groups, add parameters to these functional groups, and calculate thermophysical and transport properties as a function of the sum of group parameters.
Joback assumes that there are no interactions between the groups, and therefore only uses additive contributions and no contributions for interactions between groups. Other group-contribution methods, especially methods like UNIFAC, which estimate mixture properties like activity coefficients, use both simple additive group parameters and group-interaction parameters. The big advantage of using only simple group parameters is the small number of needed parameters. The number of needed group-interaction parameters gets very high for an increasing number of groups (1 for two groups, 3 for three groups, 6 for four groups, 45 for ten groups and twice as much if the interactions are not symmetric).
Nine of the properties are single temperature-independent values, mostly estimated by a simple sum of group contribution plus an addend.
Two of the estimated properties are temperature-dependent: the ideal-gas heat capacity and the dynamic viscosity of liquids. The heat-capacity polynomial uses 4 parameters, and the viscosity equation only 2. In both cases the equation parameters are calculated by group contributions.
Model strengths and weaknesses
Strengths
The popularity and success of the Joback method mainly originates from the single group list for all properties. This allows one to get all eleven supported properties from a single analysis of the molecular structure.
The Joback method additionally uses a very simple and easy to assign group scheme, which makes the method usable for people with only basic chemical knowledge.
Weaknesses
Newer developments of estimation methods have shown that the quality of the Joback method is limited. The original authors already stated themselves in the original article abstract: "High accuracy is not claimed, but the proposed methods are often as or more accurate than techniques in common use today."
The list of groups does not cover many common molecules sufficiently. Especially aromatic compounds are not differentiated from normal ring-containing components. This is a severe problem because aromatic and aliphatic components differ strongly.
The data base Joback and Reid used for obtaining the group parameters was rather small and covered only a limited number of different molecules. The best coverage has been achieved for normal boiling points (438 components), and the worst for heats of fusion (155 components). Current developments that can use data banks, like the Dortmund Data Bank or the DIPPR data base, have a much broader coverage.
The formula used for the prediction of the normal boiling point shows another problem. Joback assumed a constant contribution of added groups in homologous series like the alkanes. This doesn't describe the real behavior of the normal boiling points correctly. Instead of the constant contribution, a decrease of the contribution with increasing number of groups must be applied. The chosen formula of the Joback method leads to high deviations for large and small molecules and an acceptable good estimation only for mid-sized components.
Formulas
In the following formulas Gi denotes a group contribution. Gi are counted for every single available group. If a group is present multiple times, each occurrence is counted separately.
Normal boiling point
Melting point
Critical temperature
This critical-temperature equation needs a normal boiling point Tb. If an experimental value is available, it is recommended to use this boiling point. It is, on the other hand, also possible to input the normal boiling point estimated by the Joback method. This will lead to a higher error.
Critical pressure
where Na is the number of atoms in the molecular structure (including hydrogens).
Critical volume
Heat of formation (ideal gas, 298 K)
Gibbs energy of formation (ideal gas, 298 K)
Heat capacity (ideal gas)
The Joback method uses a four-parameter polynomial to describe the temperature dependency of the ideal-gas heat capacity. These parameters are valid from 273 K to about 1000 K; the polynomial can be extrapolated toward 1500 K, but only at the cost of increased uncertainty.
Heat of vaporization at normal boiling point
Heat of fusion
Liquid dynamic viscosity
where Mw is the molecular weight.
The method uses a two-parameter equation to describe the temperature dependency of the dynamic viscosity. The authors state that the parameters are valid from the melting temperature up to 0.7 of the critical temperature (Tr < 0.7).
Group contributions
Example calculation
Acetone (propanone) is the simplest ketone and is separated into three groups in the Joback method: two methyl groups (−CH3) and one ketone group (C=O). Since the methyl group is present twice, its contributions have to be added twice.
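As an illustration of how the group counts above enter the property formulas, the following Python sketch estimates the normal boiling point of acetone from the relation Tb = 198.2 K + ΣGi. The two group-contribution values are quoted from commonly reproduced Joback tables rather than from this article, so they should be verified against the original publication before use.

```python
# A minimal sketch of the Joback normal-boiling-point estimate for acetone.
# Group contributions (in kelvin) are quoted from commonly reproduced Joback
# tables and should be verified against the original publication.
JOBACK_TB_CONTRIB = {
    "-CH3 (non-ring)": 23.58,
    ">C=O (non-ring)": 76.75,
}

def joback_normal_boiling_point(groups):
    """Tb = 198.2 K + sum of group contributions, counted once per occurrence."""
    return 198.2 + sum(JOBACK_TB_CONTRIB[g] * n for g, n in groups.items())

acetone = {"-CH3 (non-ring)": 2, ">C=O (non-ring)": 1}
print(f"Estimated Tb = {joback_normal_boiling_point(acetone):.1f} K")
# -> roughly 322 K, against a measured normal boiling point of about 329 K
```

The gap of a few kelvin between the estimate and the measured value reflects the moderate accuracy discussed in the strengths and weaknesses section above.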
References
External links
Online molecular drawing and property estimation tool with the Joback method
Online property estimation with the Joback method
Physical chemistry
Thermodynamic models | Joback method | [
"Physics",
"Chemistry"
] | 1,122 | [
"Applied and interdisciplinary physics",
"Thermodynamic models",
"Thermodynamics",
"nan",
"Physical chemistry"
] |
17,627,671 | https://en.wikipedia.org/wiki/Global%20Alliance%20for%20Information%20and%20Communication%20Technologies%20and%20Development | The Global Alliance for Information and Communication Technologies and Development (also known as Global Alliance for ICT and Development or GAID) is a subgroup or continuation of the United Nations Information and Communication Technologies Task Force. GAID was launched by the United Nations Secretary General Kofi Annan in 2006, at the end of his tenure.
Mission
According to the United Nations press release, the organization's mission is to facilitate and promote integration by providing a platform for an open, inclusive, multi-stakeholder cross-sectoral policy dialogue on the role of information and communication technology in development. The Alliance organizes events which address core issues related to the role of information and communication technology in economic development, especially of impoverished or disadvantaged segments of society.
Structure
The Alliance makes extensive use of web-based collaborative technologies, thus minimizing the need for physical meetings. Members include both governments and members of the private and commercial sectors. Its inaugural meeting was held on June 19, 2006 in Kuala Lumpur, Malaysia.
It is led by an 11-person steering committee, with Intel Corporation's Craig Barrett as its initial chairperson, followed by Talal Abu-Ghazaleh, a prominent businessman in the Arab world. A 60-person Strategy Council is composed of 30 governments, plus 30 representatives from the private sector, civil society and international organizations.
Communities of Expertise
Communities of Expertise (CoEs) are networks convened by GAID to bring together motivated and capable actors to address specific, well-defined ICTD problems in a results-oriented manner and to identify and disseminate good practices. These CoEs include:
Education, Entrepreneurship, Governance, Health
Cross cutting themes include gender, rural development and connectivity.
In October 2006, under the CoE of Governance, the ICT4Peace Foundation was invited to a partnership with the UN Department of Economic and Social Affairs (DESA) and GAID as the focal point for overseeing and promoting the spirit of Paragraph 36 of the WSIS Tunis Declaration. Paragraph 36 flags the potential for the use of ICTs to promote peace and to prevent conflict which, inter alia, negatively affects achieving development goals. On 13 January 2010 the ICT4Peace Foundation created the ICT4Peace Foundation wiki on Haiti Earthquake.
Notes
Organizations established in 2006
Internet governance organizations
International Telecommunication Union
Information and communication technologies for development
Organizations established by the United Nations | Global Alliance for Information and Communication Technologies and Development | [
"Technology"
] | 472 | [
"Information and communications technology",
"Information and communication technologies for development"
] |
17,630,223 | https://en.wikipedia.org/wiki/Helicos%20Biosciences | Helicos BioSciences Corporation was a publicly traded life science company headquartered in Cambridge, Massachusetts focused on genetic analysis technologies for the research, drug discovery and diagnostic markets. The firm's Helicos Genetic Analysis Platform was the first DNA-sequencing instrument to operate by imaging individual DNA molecules. In May 2010, the company announced a 50% layoff and a re-focusing on molecular diagnostics. After long financial troubles, in November 2010, Helicos was delisted from NASDAQ.
Helicos was co-founded in 2003 by life science entrepreneur Stanley Lapidus, Stephen Quake, and Noubar Afeyan with investments from Atlas Venture, Flagship Ventures, Highland Capital Partners, MPM Capital, and Versant Ventures.
Helicos's technology images the extension of individual DNA molecules using a defined primer and individual fluorescently labeled nucleotides, which contain a "Virtual Terminator" preventing incorporation of multiple nucleotides per cycle. The "Virtual Terminator" technology was developed by Dr. Suhaib Siddiqi, while at Helicos Biosciences.
In the August 2009 issue of Nature Biotechnology, Dr. Stephen Quake, a professor of bioengineering at Stanford University and a co-founder of Helicos BioSciences, sequenced his own genome, using Single Molecule Sequencing for under $50,000 in reagents.
On November 15, 2012, Helicos BioSciences filed for Chapter 11 bankruptcy.
The patents that Helicos had licensed from Cal Tech (where Quake was when he made the underlying inventions) were subsequently licensed to Direct Genomics, founded by Jiankui He, a former post-doc in Quake's lab who gained notoriety in November 2018 when he created the first germline genome-edited babies.
See also
Helicos single molecule fluorescent sequencing
Pharmacogenomics
Genetic counseling
Genomics
References
External links
Helicos BioSciences Corp. firm website (Now defunct after Bankruptcy)
Biotechnology companies of the United States
Genomics companies
2003 establishments in Massachusetts
Biotechnology companies established in 2003
Life science companies based in Massachusetts
Companies based in Cambridge, Massachusetts
Companies that filed for Chapter 11 bankruptcy in 2012 | Helicos Biosciences | [
"Biology"
] | 452 | [
"Life science companies based in Massachusetts",
"Life sciences industry"
] |
17,630,523 | https://en.wikipedia.org/wiki/BanxQuote | BanxQuote was a provider and licensor of indexes and analytics, which were used as a barometer of the U.S. banking and mortgage markets. Its bank rate website and consumer banking marketplace featured daily updated market rates on banking, mortgage and loan products in the United States, until its close in 2010.
History and activities
BanxQuote was established by its parent BanxCorp in 1984, and its Internet operations were launched at a BanxQuote National Banking Conference held at Salomon Brothers in New York, on April 7, 1995.
Clients of the firm have included hundreds of financial institutions nationwide and its indexes were frequently used as a trusted source and performance benchmark by public policymakers, government agencies, major banks and corporations.
BanxQuote operated an online national banking marketplace for 15 years, until its exit in 2010.
It featured rates on money market accounts, savings and jumbo certificate of deposit (CDs), mortgage loans, home equity and auto loans for various terms and amounts.
BanxQuote also provided proprietary state-by-state, regional, and national composite benchmarks for its various banking and lending products. Clients of the firm have included hundreds of banks and financial institutions nationwide. Beginning in 1985, The Wall Street Journal featured BanxQuote for 17 consecutive years.
BanxQuote on Bloomberg Terminal
BanxQuote current and historical proprietary data, indices, charts and analytical tools were available on Bloomberg Terminals from 1995 until its exit from the market in 2015, reaching over 250,000 financial market professionals worldwide.
The BanxQuote Index, Trademark and Performance benchmarks
The Dow Jones Barron's Dictionary of Banking Terms defines the BanxQuote Money Market Index(tm) as an "Index of rates paid by investors on negotiable certificates of deposit and high yield savings accounts, compiled weekly by BanxCorp. The index offers a side-by-side comparison of rates paid by selected banks and savings institutions on small-denomination (under $10,000) savings accounts."
The BanxQuote Conforming-Jumbo Mortgage Index(tm) is typically used to analyze the historical spread between national average conforming and jumbo mortgage rates.
BanxQuote licenses its registered trademark, proprietary indices, data, analytical tools, and financial applications to third parties.
Case studies
AAA (American Automobile Association) Money Markets & CDs
General Electric Capital Corp. — GE Interest Plus
Ford Interest Advantage Notes issued by Ford Motor Credit Company
Bloomberg Professional terminals worldwide
UBS, one of the world's leading financial institutions
Discover Bank, part of Discover Financial Services
Countrywide Bank
MetLife Bank, a subsidiary of MetLife, Inc.
Capital One Direct Banking
Charles Schwab & Co.
Zions Bank
Usage
BanxQuote data are cited by various government agencies, policy makers, government-sponsored enterprises (GSEs), non-profit and religious organizations, and economists, as outlined below.
U.S. government agencies
The White House Council of Economic Advisers
U.S. Senate Committee on Banking, Housing, and Urban Affairs
U.S. Department of the Treasury
Federal Deposit Insurance Corporation (FDIC)
William Poole (Federal Reserve Bank president), Federal Reserve Bank of St. Louis
Government Finance Officers Association
Office of Federal Housing Enterprise Oversight (OFHEO)
Office of Thrift Supervision (OTS) - Selected Asset and Liability Price Tables; As of June 30, 2007
Government Sponsored Enterprises (GSEs)
The Role of Freddie Mac, the Federal Home Loan Mortgage Corporation
Freddie Mac Provides Stability to the Mortgage Market.
Wharton Financial Institutions Center, Working Paper: Measuring the Benefits of Fannie Mae and Freddie Mac to Consumers
Foundations, non-profit, judiciary, and religious organizations
F.B. Heron Foundation - has established performance benchmarks for each asset class in its mission-related portfolio. The benchmark for deposits is the national average for two-year jumbo deposits as reported by BanxQuote.
Michigan Court, Michigan Judicial Institute - CitiStreet Investing Webcast
Diocese of Monterey, California In 2007, the bishop of Monterey established a policy that all funds of the Monterey diocese deposited in its Cash Management and Deposit and Loan programs would earn a rate tied to the BanxQuote Money Market Rate.
References
External links
BanxQuote.com website
BanxCorp corporate website
Financial services companies based in New York City
Retail financial services
American companies established in 1984
Financial services companies established in 1984
Retail companies established in 1984
American companies disestablished in 2010
Financial services companies disestablished in 2010
Retail companies disestablished in 2010
Companies based in New York (state)
Data collection
News aggregators | BanxQuote | [
"Technology"
] | 937 | [
"Data collection",
"Data"
] |
17,630,584 | https://en.wikipedia.org/wiki/Analytical%20profile%20index | The analytical profile index, or API, is a classification system for bacteria based on biochemical tests. The system was developed to accelerate the speed of identifying clinically relevant bacteria. It can only be used to identify known species from an index. The data obtained are phenotypic traits. DNA sequence-based methods, including multi-locus sequence typing and even whole-genome sequencing, are increasingly used in the identification of bacterial species and strains. These newer methods can be used to complement or even replace the use of API testing in clinical settings.
History
The analytical profile index (API) was invented in the 1970s in the United States by Pierre Janin of Analytab Products Inc. The API test system is currently manufactured by bioMérieux.
The API range introduced a standardized and miniaturized version of existing techniques, which were considered complicated to perform and difficult to read.
Description
Identification is only possible with a microbiological culture. API test strips consist of wells containing dehydrated substrates such as the redox substrates, electrogenic substrates and luminogenic substrates to detect enzymatic activity, usually related to the fermentation of carbohydrate or catabolism of proteins or amino acids by the inoculated organisms. A bacterial suspension is used to rehydrate each of the wells and the strips are incubated. During incubation, metabolism produces color changes that are either spontaneous or revealed by the addition of reagents. For example, when carbohydrates are fermented, the pH within the well decreases and that is indicated by a change in the color of the pH indicator. All test results are compiled to obtain a profile number, which is then compared with profile numbers in a commercial codebook (or online) to determine the identification of the bacterial species.
API-20E
Before starting a test, one must confirm the cultured bacteria are Enterobacteriaceae; this is done by a quick oxidase test for cytochrome c oxidase. Enterobacteriaceae are typically oxidase negative, meaning they either do not use oxygen as an electron acceptor in the electron transport chain, or they use a different cytochrome enzyme for transferring electrons to oxygen. If the culture is determined to be oxidase-positive, alternative tests must be carried out to correctly identify the bacterial species. API-20E is specific for differentiating between members of the Gram-negative bacteria family Enterobacteriaceae. Another API system, API-Staph, is specific for Gram-positive bacteria, including Staphylococcus species, Micrococcus species, and related organisms.
API 20E/NE
The API 20E/NE system is a widely used biochemical test system designed for the rapid identification of Gram-negative bacteria. It specifically targets two groups: Enterobacteriaceae (API 20E) and non-Enterobacteriaceae (API 20NE). This system is particularly useful in clinical microbiology and environmental studies for identifying bacteria based on their biochemical activities.
How It Works
Test Strips: The system consists of a plastic strip containing 20 small reaction tubes (microtubes), each containing different dehydrated substrates.
Rehydration and Inoculation: To perform the test, the microtubes are rehydrated with a bacterial suspension. Each microtube is designed to detect specific enzymatic activities or carbohydrate fermentations characteristic of the bacteria being tested.
Incubation: After inoculation, the strip is incubated for a period, usually 18-24 hours, at a specified temperature (often 35-37°C). During incubation, the bacteria interact with the substrates, leading to visible color changes if the reactions occur.
Reading Results: After incubation, the results are interpreted by observing the color changes in each tube, which are indicative of positive or negative reactions. The pattern of reactions is compared to a reference database to identify the organism.
Identification: Each reaction is scored numerically, and the cumulative score forms a profile number, which is then used to identify the organism through a database or identification software.
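As a rough illustration of the numerical scoring just described, the following Python sketch turns a strip reading into a profile number. It assumes the commonly used convention in which successive groups of three tests are weighted 1, 2 and 4, each group contributing one digit; the example reading is hypothetical, and real identification still requires the manufacturer's codebook or software.

```python
# Minimal sketch of deriving an API-style profile number from reaction results.
# Assumes the common 1/2/4 weighting within successive groups of three tests.

def api_profile(results):
    """results: list of booleans (True = positive reaction), in strip order, in groups of three."""
    if len(results) % 3:
        raise ValueError("results must come in groups of three")
    digits = []
    for i in range(0, len(results), 3):
        triplet = results[i:i + 3]
        digits.append(sum(w for w, positive in zip((1, 2, 4), triplet) if positive))
    return "".join(str(d) for d in digits)

# Hypothetical reading of 21 reactions (not a real organism's profile):
example = [True, False, True] * 7
print(api_profile(example))  # -> "5555555"
```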
References
Further reading
United States Patent Office, Patent Number 3936356
API System: a Multitube Micromethod for Identification of Enterobacteriaceae, P. B. Smith, K. M. Tomfohrde, D. L. Rhoden, and A. BalowsCenter for Disease Control, Atlanta, Georgia 30333
Bacteria
Biochemistry detection reactions
Microbiology techniques
Taxonomy (biology) | Analytical profile index | [
"Chemistry",
"Biology"
] | 934 | [
"Biochemistry detection reactions",
"Prokaryotes",
"Taxonomy (biology)",
"Biochemical reactions",
"Microbiology techniques",
"Bacteria",
"Microorganisms"
] |
17,630,629 | https://en.wikipedia.org/wiki/Dinitrophenyl | Dinitrophenyl is any chemical compound containing two nitro functional groups attached to a phenyl ring. It is a hapten used in vaccine preparation. Dinitrophenyl does not elicit any immune response on its own and it does not bind to any antigen.
References
Nitroarenes | Dinitrophenyl | [
"Chemistry"
] | 65 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
17,630,972 | https://en.wikipedia.org/wiki/Lasthenia%20chrysantha | Lasthenia chrysantha is a species of flowering plant in the family Asteraceae known by the common name alkalisink goldfields. It is endemic to the California Central Valley, where it grows in vernal pools and alkali flats.
Description
Lasthenia chrysantha is an annual herb approaching a maximum height near 28 centimeters. The stem may be branched or not and it bears mostly hairless, linear leaves up to 7 or 8 centimeters long.
Atop the hairy to hairless stems are inflorescences of flower heads with hairless phyllaries. The head contains many yellow disc florets with a fringe of small yellow ray florets. The fruit is a black oval-shaped achene a few millimeters long with a fringe of tiny dull hairs around the edge. Like other goldfields, populations of this species bloom in the spring to produce a carpet of yellow in its habitat.
External links
Jepson Manual Treatment: Lasthenia chrysantha
USDA Plants Profile
Lasthenia chrysantha U.C. Photo gallery
chrysantha
Endemic flora of California
Halophytes
Natural history of the Central Valley (California)
Flora without expected TNC conservation status | Lasthenia chrysantha | [
"Chemistry"
] | 250 | [
"Halophytes",
"Salts"
] |
17,631,262 | https://en.wikipedia.org/wiki/Lasthenia%20ferrisiae | Lasthenia ferrisiae is a species of flowering plant in the family Asteraceae known by the common name Ferris' goldfields. It is endemic to the California Central Valley, where it grows in vernal pools and alkali flats.
Description
Lasthenia ferrisiae is an annual herb approaching a maximum height of 40 centimeters. It is variable in appearance and similar to other species of goldfields (Lasthenia); it is probably the result of a cross between Lasthenia chrysantha and Lasthenia glabrata, which grow throughout its range. The stem may be branched or not and it bears hairless, linear leaves that are up to 8 centimeters long.
Atop the hairless or sparsely hairy stems are inflorescences of flower heads with fused, hairless phyllaries. The head contains many yellow disc florets with a fringe of small yellow ray florets.
The fruit is a flat, oval-shaped achene up to about 2 millimeters long.
External links
Jepson Manual Treatment: Lasthenia ferrisiae
USDA Plants Profile
Lasthenia ferrisiae (Ferris' goldfield) — U.C. Photo gallery
ferrisiae
Endemic flora of California
Halophytes
Natural history of the Central Valley (California)
Flora without expected TNC conservation status | Lasthenia ferrisiae | [
"Chemistry"
] | 266 | [
"Halophytes",
"Salts"
] |
17,632,037 | https://en.wikipedia.org/wiki/Pneumonia%20front | A pneumonia front, also known as a lake-modified synoptic scale cold front, is a rare meteorological phenomenon observed in coastal areas of Lake Michigan, in the United States, most commonly between April and July. The phenomenon, according to the National Weather Service, consists of a cold front that accelerates southward down Lake Michigan, rapidly dropping temperatures in coastal areas of the lake by or greater. These fronts are often followed by fog clouds and, less commonly, rain.
Pneumonia fronts are most often observed when there is a large temperature difference between the cold lake waters and the warmer air over land, sometimes as much as . These conditions are present in spring and early summer. Under weak prevailing winds, a density current can often develop in the form of a lake breeze that moves from that water to the adjacent shoreline and several miles inland.
Pneumonia fronts occur most frequently on Lake Michigan's southwestern shore, in cities such as Chicago, Milwaukee, and Kenosha. However, they are also commonly observed elsewhere on the lakeshore, including cities such as Michigan City, Benton Harbor, Green Bay, and Traverse City.
History
The first documented pneumonia front was on June 13, 1909, in Michigan City, Indiana. The term 'pneumonia front' was coined by the National Weather Service in Milwaukee in the 1960s.
Causes
Pneumonia fronts occur when a cold front (generally of synoptic scale), typically approaching from the north or northeast, encounters a mass of cold, dense air that has persisted over Lake Michigan, typically a remnant of winter conditions. The air mass fuels the cold front, allowing it to grow in density and momentum as it travels south along the lake. This movement displaces the warmer, less dense air over land, leading to an abrupt and significant temperature drop. Lake Michigan's elongated north-south shape and two long north-south bays (Green Bay and Grand Traverse Bay) allow for pneumonia fronts to pick up great speed and change air temperatures relatively far inland.
Documented occurrences
The following are documented occurrences of a lake-modified synoptic scale cold front or a "pneumonia front":
See also
Bomb cyclone
Cold front
Hurricane Huron
Lake-effect snow
November gale
Pneumonia
References
Further reading
"Synoptic and Local Controls of the Lake Michigan Pneumonia Front", Cory Behnke, M.S. Thesis, University of Wisconsin - Milwaukee (2005)
Anomalous weather
Weather fronts
Great Lakes
Lake Michigan
Weather and health
Regional climate effects
Weather events in the United States | Pneumonia front | [
"Physics"
] | 502 | [
"Weather",
"Physical phenomena",
"Anomalous weather"
] |
17,633,222 | https://en.wikipedia.org/wiki/Chemically%20inert | In chemistry, the term chemically inert is used to describe a substance that is not chemically reactive. From a thermodynamic perspective, a substance is inert, or nonlabile, if it is thermodynamically unstable (positive standard Gibbs free energy of formation) yet decomposes at a slow, or negligible rate.
Most of the noble gases, which appear in the last column of the periodic table, are classified as inert (or unreactive). These elements are stable in their naturally occurring form (gaseous form) and they are called inert gases.
Noble gas
The noble gases (helium, neon, argon, krypton, xenon and radon) were previously known as 'inert gases' because of their perceived lack of participation in any chemical reactions. The reason for this is that their outermost electron shells (valence shells) are completely filled, so that they have little tendency to gain or lose electrons. They are said to acquire a noble gas configuration, or a full electron configuration.
It is now known that most of these gases in fact do react to form chemical compounds, such as xenon tetrafluoride. Hence, they have been renamed to 'noble gases', as the only two of these we know truly to be inert are helium and neon. However, a large amount of energy is required to drive such reactions, usually in the form of heat, pressure, or radiation, often assisted by catalysts. The resulting compounds often need to be kept in moisture-free conditions at low temperatures to prevent rapid decomposition back into their elements.
Inert gas
The term inert may also be applied in a relative sense. For example, molecular nitrogen is an inert gas under ordinary conditions, existing as diatomic molecules, . The presence of a strong triple covalent bond in the molecule renders it unreactive under normal circumstances. Nevertheless, nitrogen gas does react with the alkali metal lithium to form the compound lithium nitride (Li3N), even under ordinary conditions. Under high pressures and temperatures and with the right catalysts, nitrogen becomes more reactive; the Haber process uses such conditions to produce ammonia from atmospheric nitrogen.
Main uses
Inert atmospheres consisting of gases such as argon, nitrogen, or helium are commonly used in chemical reaction chambers and in storage containers for oxygen- or water-sensitive substances, to prevent unwanted reactions of these substances with oxygen or water.
Argon is widely used in fluorescent tubes and low-energy light bulbs. Argon gas helps to protect the metal filament inside the bulb from reacting with oxygen and corroding at high temperature.
Neon is used in making advertising signs. Neon gas in a vacuum tube glows bright red in colour when electricity is passed through. Different coloured neon lights can also be made by using other gases.
Helium gas is commonly used to fill party balloons and airships. Balloons filled with helium float upwards because helium is less dense than air.
See also
Noble metal
References
Chemical nomenclature
Chemical properties
Gases
Industrial gases
Noble gases | Chemically inert | [
"Physics",
"Chemistry",
"Materials_science"
] | 639 | [
"Matter",
"Noble gases",
"Phases of matter",
"Nonmetals",
"Industrial gases",
"nan",
"Chemical process engineering",
"Statistical mechanics",
"Gases"
] |
17,633,498 | https://en.wikipedia.org/wiki/Software%20quality%20assurance%20analyst | A software quality assurance (QA) analyst, also referred to as a software quality analyst or simply a quality assurance (QA) analyst, is an individual who is responsible for applying the principles and practices of software quality assurance throughout the software development life cycle.
Software testing is one of many parts of the larger process of QA. Testing is used to detect errors in a product, while QA also fixes the processes that resulted in those errors.
Software QA analysts may have professional certification from a software testing certification board, like the International Software Testing Qualifications Board (ISTQB).
References
Software quality
Computer occupations
Systems analysis | Software quality assurance analyst | [
"Technology",
"Engineering"
] | 128 | [
"Software engineering",
"Computer occupations",
"Software engineering stubs"
] |
17,633,579 | https://en.wikipedia.org/wiki/Topological%20combinatorics | The mathematical discipline of topological combinatorics is the application of topological and algebro-topological methods to solving problems in combinatorics.
History
The discipline of combinatorial topology used combinatorial concepts in topology and in the early 20th century this turned into the field of algebraic topology.
In 1978 the situation was reversed—methods from algebraic topology were used to solve a problem in combinatorics—when László Lovász proved the Kneser conjecture, thus beginning the new field of topological combinatorics. Lovász's proof used the Borsuk–Ulam theorem and this theorem retains a prominent role in this new field. This theorem has many equivalent versions and analogs and has been used in the study of fair division problems.
In another application of homological methods to graph theory, Lovász proved both the undirected and directed versions of a conjecture of András Frank: Given a k-connected graph G, k points , and k positive integers that sum up to , there exists a partition of such that , , and spans a connected subgraph.
In 1987 the necklace splitting problem was solved by Noga Alon using the Borsuk–Ulam theorem. It has also been used to study complexity problems in linear decision tree algorithms and the Aanderaa–Karp–Rosenberg conjecture. Other areas include topology of partially ordered sets and Bruhat orders.
Additionally, methods from differential topology now have a combinatorial analog in discrete Morse theory.
See also
Sperner's lemma
Discrete exterior calculus
Topological graph theory
Combinatorial topology
Finite topological space
References
Further reading
Combinatorics
Topology
Algebraic topology | Topological combinatorics | [
"Physics",
"Mathematics"
] | 340 | [
"Discrete mathematics",
"Algebraic topology",
"Combinatorics",
"Fields of abstract algebra",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
17,633,708 | https://en.wikipedia.org/wiki/Tucker%27s%20lemma | In mathematics, Tucker's lemma is a combinatorial analog of the Borsuk–Ulam theorem, named after Albert W. Tucker.
Let T be a triangulation of the closed n-dimensional ball . Assume T is antipodally symmetric on the boundary sphere . That means that the subset of simplices of T which are in provides a triangulation of where if σ is a simplex then so is −σ.
Let be a labeling of the vertices of T which is an odd function on , i.e., for every vertex .
Then Tucker's lemma states that T contains a complementary edge - an edge (a 1-simplex) whose vertices are labelled by the same number but with opposite signs.
Proofs
The first proofs were non-constructive, by way of contradiction.
Later, constructive proofs were found, which also supplied algorithms for finding the complementary edge. Basically, the algorithms are path-based: they start at a certain point or edge of the triangulation, then go from simplex to simplex according to prescribed rules, until it is not possible to proceed any more. It can be proved that the path must end in a simplex which contains a complementary edge.
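For small instances, a complementary edge can also be found by direct inspection rather than by following a path. The Python sketch below, using a hypothetical toy labelling, simply scans every edge for endpoints labelled with the same number and opposite signs; it illustrates what the constructive proofs locate, not the path-based algorithms themselves.

```python
# Brute-force sketch: given the edges of a triangulation and an odd labelling of
# its vertices, list every complementary edge (endpoints labelled +k and -k).
# The vertex names and labels below are a hypothetical toy example.

def complementary_edges(edges, label):
    """edges: iterable of (u, v) vertex pairs; label: dict mapping vertex -> nonzero int."""
    return [(u, v) for u, v in edges if label[u] == -label[v]]

edges = [("a", "b"), ("b", "c"), ("a", "c")]   # a single triangle
label = {"a": 1, "b": -1, "c": 2}
print(complementary_edges(edges, label))       # -> [('a', 'b')]
```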
An easier proof of Tucker's lemma uses the more general Ky Fan lemma, which has a simple algorithmic proof.
The following description illustrates the algorithm for . Note that in this case is a disc and there are 4 possible labels: , like the figure at the top-right.
Start outside the ball and consider the labels of the boundary vertices. Because the labeling is an odd function on the boundary, the boundary must have both positive and negative labels:
If the boundary contains only the labels ±1 or only the labels ±2, there must be a complementary edge on the boundary. Done.
Otherwise, the boundary must contain (+1, −2) edges. Moreover, the number of (+1, −2) edges on the boundary must be odd.
Select a (+1, −2) edge and go through it. There are three cases:
You are now in a simplex whose third vertex is labelled −1, so it contains a complementary (+1, −1) edge. Done.
You are now in a simplex whose third vertex is labelled +2, so it contains a complementary (+2, −2) edge. Done.
You are in a simplex with another (+1, −2) edge. Go through it and continue.
The last case can take you outside the ball. However, since the number of (+1, −2) edges on the boundary must be odd, there must be a new, unvisited (+1, −2) edge on the boundary. Go through it and continue.
This walk must end inside the ball, either in a simplex containing a (+1, −1) edge or in one containing a (+2, −2) edge. Done.
Run-time
The run-time of the algorithm described above is polynomial in the triangulation size. This is considered slow, since the triangulations might be very large. It would be desirable to find an algorithm which is logarithmic in the triangulation size. However, the problem of finding a complementary edge is PPA-complete even for 2 dimensions. This implies that a substantially faster algorithm is unlikely to exist.
Equivalent results
See also
Topological combinatorics
References
Combinatorics
Topology
Lemmas | Tucker's lemma | [
"Physics",
"Mathematics"
] | 606 | [
"Theorems in mathematical analysis",
"Mathematical theorems",
"Discrete mathematics",
"Fixed-point theorems",
"Theorems in topology",
"Combinatorics",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Mathematical problems",
"Lemmas"
] |
17,633,886 | https://en.wikipedia.org/wiki/Frankenia%20pauciflora | Frankenia pauciflora, the common sea-heath or southern sea-heath, is an evergreen shrub native to southern Australia. It is part of the Frankenia genus of the Frankeniaceae family.
It can be prostrate or may grow up to 0.5 m in height. Pink or white flowers are produced between June and February in its native range. It occurs in saline flats, salt marshes, or coastal limestone areas.
Taxonomy
The specific epithet pauciflora derives from the Latin words paucus, meaning few, and florus, meaning flower, referring to the fact that the species produces few flowers.
Varieties
The currently recognised varieties are:
F. p. var. fruticulosa (DC.) Summerh.
F. p. var. gunnii Summerh.
F. p. var. longifolia Summerh.
F. p. var. pauciflora DC.
Habitat
Frankenia pauciflora is characterized as a halophyte and as such is found in sandy soils, salt flats, salt marshes, and coastal limestones. The plant subsists in environments with soil classes S2 and S3, which are described as moderately to highly saline. The species is a xerophyte, a drought-tolerant plant, and survives in environments with sustained, predictable dry periods followed by periods of moist soil. Frankenia pauciflora can subsist in soil pHs ranging from acidic to alkaline. In addition, the plant tolerates hot overhead sun to warm low sun and is characterized as shade tolerant.
Distribution
The species occurs in Western Australia, South Australia, Victoria, and Tasmania, where it is represented by the variety F. p. var. gunnii, which grows only on Flinders, Short, and Harcus Islands. The species is generally considered not threatened, but F. p. var. gunnii is considered rare, as it has only a small Tasmanian population that may be at greater risk. Var. fruticulosa is found primarily in Southern Australia; var. longifolia is found in Western and Southern Australia, and var. pauciflora is found only in Western Australia.
Description
Individuals of this species are prostrate perennial shrubs up to 0.5 m in height. It forms many branches that create a thick mat-like structure. It produces fleshy, linear grey-green leaves reaching up to 2 cm, somewhat resembling thyme. The leaves can range from hairless to densely haired.
Flowers
Its flowers are 2 cm across, stalkless, and are generally pink, but sometimes white. The flowers have four to six petals that usually have irregular edges. The flowers bloom either solitary at the base of stalks or in bunches of 2–25 that can be found either at the base or end of stems. Each flower is supported by 4 bracts. The circular pollen has a tricolpate morphology with a reticulated surface pattern. Frankenia pauciflora is distinguished from other members of its family by the structure of its ovaries. The female flower has a 3-branched style, while the male flower most commonly has six stamens. The ovaries usually have 3 placentae in a basal or parietal configuration. Each placenta is known to contain 2–6 ovules. The fruit is a small brown cylindrical capsule.
Leaves
Due to its halophytic properties, Frankenia pauciflora's leaves are covered in minuscule salt crystals. These crystals cover the smooth upper surface of the leaves, which range from 2 to 13 mm long and 0.5 to 2.2 mm wide. Small hairs can be seen on most leaves, mainly on the midrib of the lower surface, along with folded-over edges. Its leaves generally wilt and turn brown during drought periods. Var. gunnii is distinguished by longer, narrower leaves and an inconspicuous midvein.
Seeds
Each fruit is a cylindrical capsule with 5 or 6 ribbed parts measuring 3–7 mm long and containing one brown seed. The seeds come attached to a pappus-like structure and separate easily from the fruits. Seeds sprout from late January to mid-March.
Bark
The bark of Frankenia pauciflora differs between the trunk and the younger branches. New branches have a smoother, rusty brown appearance, while the trunk has rough, flaky grey to brown bark.
Reproduction
Frankenia pauciflora does not have a set flowering time, flowering throughout the year but particularly between the months of June and February, and can produce seeds at any time during the year. The flowers of Frankenia pauciflora are insect-pollinated to produce dicotyledon seeds. In particular, the flower of F. p. var gunnii are pollinated by insects in the order Diptera, Hymenoptera and Lepidoptera. It has been found that xenogamy in this species leads to more fruits per flower and more seeds in each fruit compared to autogamy; this was reported to be true in both observational studies and controlled experiments.
Uses
The relative ease of cultivation and the plant's ability to adapt to a wide range of soils make Frankenia pauciflora an attractive choice for home gardening. Its flame-retardant properties also reduce the chance of bush-fire spread when it is planted around homes in fire-prone regions such as Australia.
Frankenia pauciflora provides shelter for many fauna as well as being a food source for a number of insects. Its thick network of fine roots is also useful for providing stability in sediments and floodplains.
References
pauciflora
Halophytes
Caryophyllales of Australia
Flora of South Australia
Flora of Tasmania
Flora of Victoria (state)
Eudicots of Western Australia | Frankenia pauciflora | [
"Chemistry"
] | 1,194 | [
"Halophytes",
"Salts"
] |
1,624,952 | https://en.wikipedia.org/wiki/Telephone%20number%20pooling | Telephone number pooling, thousands-block number pooling, or just number pooling, is a method of allocating telephony numbering space of the North American Numbering Plan in the United States. The method allocates telephone numbers in blocks of 1,000 consecutive numbers of a given central office code to telephony service providers. In the United States it replaced the practice of allocating all 10,000 numbers of a central office prefix at a time. Under number pooling, the entire prefix is assigned to a rate center, to be shared among all providers delivering services in that rate center. Number pooling reduced the quantity of unused telephone numbers in markets which have been fragmented between multiple service providers, avoided central office prefix exhaustion in high growth areas, and extended the lifetime of the North American telephone numbering plan without structure changes of telephone numbers. Telephone number pooling was first tested for area code 847 in Illinois in June 1998, and became national policy in a series of Federal Communications Commission (FCC) orders from 2000 to 2003.
History
The North American Numbering Plan is a closed numbering plan, meaning that it assigns telephone numbers to individual endpoints based on a fixed-length telephone number. The national telephone number consists of a three-digit area code, a three-digit central office code, and a four-digit line number. Thus, each central office provides a resource of 10,000 telephone lines with a unique number each. While this is often enough for small communities, most cities require multiple central offices to service the community.
In the North American Numbering Plan, mobile telephones do not use distinct area codes from wireline services, but many central offices provide only wireless services, or just wireline services.
After the breakup of the Bell System on January 1, 1984, most telephone service areas in the United States were dominated by one carrier which held a monopoly on local service. The landline telephone systems evolved over a period of over one hundred years before diversification, so that it was technically difficult to share infrastructure between multiple providers. Central offices were established based on local demand and convention, and dialing prefixes were assigned to single providers who managed the plant in each location. This allocated all ten thousand line numbers of the central office code to one provider, even when demand could not exhaust the numbering pool. A new, competitive provider for the same area would be assigned the entire number space of a new central office prefix, reducing the overall efficiency of number utilization in the area.
Widespread introduction of the Advanced Mobile Phone System (AMPS) mobile phone service in 1983 created two competing carriers in each service area. Additional mobile carriers entered the market to provide digital service (such as GSM, introduced in 1991). In 1985, competitive access providers (CAPs) began to offer private line and special access services; originally based on private branch exchange (PBX) standards such as direct inward dial, these evolved into competitive local exchange carriers (CLEC). The Telecommunications Act of 1996 required incumbent telephone companies (ILECs) to interconnect with the new entrants.
Deployment of cable modems and voice over IP in the 1990s further blurred the boundaries between telcos and cable television providers. Every broadband Internet provider could become a telephone company, with telephony merely being one more application running over the packet-switched network. There was no requirement that the Internet to telephony gateway be operated by a facilities-based telco or cable company; anyone could buy a large block of telephone numbers from a CLEC, deploy a server to feed the calls to broadband Internet and offer telephone service.
With the advent of competition, each individual carrier required its own prefixes in each rate center, depleting available prefixes within high-growth and high-competition areas. Many of these new prefixes were sparsely used. This led to a rapid increase in the introduction of new area codes.
Testing and deployment
Public resistance to the introduction of new area codes, whether as overlay complexes (which allowed customers to keep their existing numbers, but broke seven-digit local calling) or by area code splits (where the area code of existing numbers was changed), prompted the FCC and state commissions to introduce thousands-block number pooling, i.e. the allocation of number space in blocks of only 1,000 numbers (area code and one digit of line number), rather than of an entire central office prefix with 10,000 lines.
These developments largely coincided with the deployment of local number portability (LNP), a technology that allowed subscribers to keep their existing telephone numbers when switching to a different provider in the same community. The original LNP database contract was granted in 1996.
Telephone number pooling relies on LNP, because it requires carriers to return blocks of mostly unused numbers. The thousand-number blocks being returned to the pool may be "contaminated" with up to a hundred working numbers, which must be ported to a block which the carrier intends to keep.
In 1998, the North American Numbering Plan Administration estimated that the NANP would have run out of area codes for 10-digit telephone numbers by 2025 at then-current rates of depletion.
An initial number pooling trial was conducted in area code 847 in Illinois in April 1998. Nortel had implemented support for thousands-level routing of calls in its equipment by 1999. Pooling trials were conducted in 34 area codes across a dozen US states between 1997 and 2000.
Area code 847, located northwest of Chicago, had already been subjected to multiple area code splits and a proposed overlay area code 224 with 10-digit local calling was drawing public backlash; inefficient allocation meant that some providers had been holding 10,000 number blocks in rate centers where they had few or no clients. A carrier with a few thousand clients scattered across multiple rate centers often had 50,000 allocated numbers. The state's telecommunications regulator, the Illinois Commerce Commission, at the urging of consumer advocates, pushed back against industry and FCC demands for a distributed overlay for 847 from 1999 to 2001 as half of the existing 847-numbers were not in use. The Citizens Utility Board, a Chicago-based consumer group, attempted to litigate against an FCC requirement that calls within the same area code be dialed with the area code when 224 was introduced in 2002. A similar fight by New York state's Public Service Commission to maintain seven-digit dialing within the same area code (including calls within 212) was also unsuccessful.
An attempt by the United States Telecom Association, a trade group of local telephone companies, to propose mandatory ten-digit dialing nationwide was rejected by the FCC in 2000. The FCC instead adopted many aspects of the Illinois Commerce Commission number pooling trial, including a requirement that telcos actually use 60 percent of their allocations (increased to 75% after three years) before requesting more phone numbers, allocation of new numbers to phone companies in blocks of 1,000, and a requirement that phone companies return unused numbers to a pool.
Number pooling was implemented in various areas (including Spokane in January 2002) with national rollout in the 100 largest metropolitan statistical areas (MSAs) on March 15, 2002.
While mandatory number pooling requirements originally existed only in the top 100 MSAs, the National Association of Regulatory Utility Commissioners (NARUC) petitioned the FCC in 2006 to extend them to rural states to cope with demand for numbers for VoIP. All US states have implemented their own regulations requiring that carriers implement number pooling. By 2013, even sparsely populated Montana was using number pooling in order to extend the useful life of area code 406, the one area code in the state.
In some areas, decreased code demand and conservation efforts have allowed the introduction of proposed new area codes to be delayed. Area code 564, proposed to overlay the portion of western Washington State currently in area codes 206, 253, 360, and 425 was delayed in 2001 as codes were reclaimed and numbers pooled; it was later reinstated, initially affecting area code 360 and expanding to the other mentioned area codes as needed, with 10-digit dialing mandatory as of September 30, 2017. A proposed area code 445 overlaying Philadelphia was scrapped in 2003; it was later reinstated, and went into effect on March 3, 2018. An area code 582 intended to split Pennsylvania's existing area code 814 was abandoned in 2012. A plan which would have implemented a concentrated overlay in 2002 in the greater Hampton Roads area, Virginia area code 757, was scrapped after number pooling was implemented, and it was not until 2022 that area code 948 was introduced as an all-services overlay area code.
Number pooling remains available to carriers on an optional basis in many US markets in which it is not yet mandatory.
Thousands-block number pooling is just one of several approaches for conserving numbering resources. Other options include consolidating multiple rate centers into one (as much of the problem is caused by carriers needlessly requesting a prefix in each rate center), allowing carriers to use a single prefix in each LATA or local interconnect region to port existing numbers from all rate centers in that area, or even placing all the unused numbers in one rate center into a single available pool from which carriers port only what they need–as already exists for toll-free telephone numbers with the SMS/800 database and RespOrg structure. A block of 1,000 numbers per carrier, like the earlier allocation of 10,000 numbers per carrier in each rate center, is arbitrary. Local number portability permits telephone numbers to be assigned to carriers one at a time.
Implementation
In areas which were running short of numbers, blocks of 10,000 numbers would be assigned to an individual rate center; from there, it would be split into smaller blocks of 1,000 numbers each, for assignment to individual providers by a number pooling administrator.
According to 47 CFR 52.20, a US federal regulation administered by the Federal Communications Commission:
Thousands-block number pooling is a process by which the 10,000 numbers in a central office code (NXX) are separated into ten sequential blocks of 1,000 numbers each (thousands-blocks), and allocated separately within a rate center.
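A minimal sketch of that definition in code (illustrative only; the prefix digits and carrier names below are made-up examples, not real assignments):

```python
# Split one central office code (NPA-NXX) into ten thousands-blocks and hand
# one block to each requesting carrier.  All identifiers here are hypothetical.

def thousands_blocks(npa: str, nxx: str) -> list[str]:
    """The 10,000 numbers of an NPA-NXX as ten sequential blocks of 1,000."""
    return [f"{npa}-{nxx}-{block}XXX" for block in range(10)]

def assign_blocks(blocks: list[str], carriers: list[str]) -> dict[str, str]:
    """Allocate one 1,000-number block per carrier from the shared pool."""
    pool = list(blocks)
    return {carrier: pool.pop(0) for carrier in carriers if pool}

blocks = thousands_blocks("555", "123")          # hypothetical prefix
print(assign_blocks(blocks, ["Carrier A", "Carrier B", "Carrier C"]))
# Three carriers tie up 3,000 numbers of a single prefix instead of three
# entire 10,000-number prefixes under the older full-NXX allocation.
```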
In area codes where service providers are required to participate in thousands-block number pooling, the carrier is to return any blocks of 1,000 numbers which are more than 90% empty; an exemption applies for one block per rate center which the carrier must keep as an initial block or footprint block.
The Pooling Administrator, a neutral third party, maintains no more than a six-month inventory of telephone numbers in each thousands-block number pool.
The default National Number Pool Administration in the United States is Somos, the North American Numbering Plan Administrator. Canada currently has no number pooling, but has been ordered by the Canadian Radio-television and Telecommunications Commission to implement it by 2025.
Local exchange routing databases now include a "block ID" to indicate the ownership of the specific sub-blocks within a prefix.
An example of a small hamlet with number pooling is La Fargeville, New York (population 600), in the 315/680 area codes. Once a small incorporated village built around a saw mill, its town hall closed in 1922. The La Fargeville rate center's local calling area is the same as neighboring Clayton, New York, yet there is a separate Verizon landline exchange for each village — likely as a historical artifact of an earlier era when telcos built many small, local stations. While both villages are served by separate, unattended remote switching centers controlled from Watertown, Verizon nominally has a half-dozen competitors offering local numbers in tiny La Fargeville.
Ten thousand telephone numbers for a hamlet of 600 people is inefficient, but the actual result, were number pooling not available, would be seven times worse; each telephone company would be assigned an entire 10,000-number telephone exchange code for a total of 70,000 numbers. Repeating this method in the entire numbering plan area for each municipality would quickly exhaust the number of central office codes of the NPA.
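The arithmetic behind that comparison, as a toy calculation (seven carriers: the incumbent plus the half-dozen competitors mentioned above):

```python
carriers = 7
without_pooling = carriers * 10_000   # one full central office code per carrier
with_pooling = carriers * 1_000       # one thousands-block per carrier, at minimum
print(without_pooling, with_pooling)  # 70000 vs 7000 numbers tied up
```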
See also
Numbering plan area
References
Telephone numbers
North American Numbering Plan | Telephone number pooling | [
"Mathematics"
] | 2,483 | [
"Mathematical objects",
"Numbers",
"Telephone numbers"
] |
1,625,074 | https://en.wikipedia.org/wiki/Caterpillar%20D6 | The Caterpillar D6 track-type tractor is a medium bulldozer manufactured by Caterpillar Inc. with a nominal operating weight of . The military versions were classified as the SNL G152 medium tractor, under the G-numbers classification system used for army tractors.
D6 Version history
The D6 started out in 1935 as the RD6, fitted with a 3-cylinder D6600 engine. The numbering was changed to the D6 in 1937.
Caterpillar first introduced the entirely new D6 in 1941 with the 4R & 5R series. This was powered by the D4600 engine of (drawbar).
The D6 4R/5R was replaced by the D6 8U & 9U series, fitted with the 6-cylinder D318 engine of in 1947.
In 1959 the D6 was replaced with the D6B after 60,000 D6 4R/5R and 8U/9U tractors had been built.
In 1963 the D6C was introduced.
In 1977 the D6D with a engine was introduced, as well as the D6D SA version with (drawbar) for agricultural use.
The D6H was introduced in 1986 along with 3 other H-Series track-type tractors. This was the first D6 with Caterpillar's elevated drive sprocket undercarriage.
The D6R replaced the D6H in 1996, and came in five models ranging from the D6R to the D6R XL and XR, and the D6R XL (IG) and D6R LGP. All used the Cat 3306T 6-cylinder engine.
The D6M XL and D6M LGP versions with weighed . Respectively, closer to the older D6C and D6D. Both used the Cat 3116T engine.
Since 1996 the D6K at and , the D6N at and and the D6T at at all have been introduced.
Dozer blades
The dozer blade on the front of the tractor can come in different varieties:
A straight blade ("S-Blade") which is short and has no lateral curve, no side wings, and can be used for fine grading.
A universal blade ("U-Blade") which is tall and very curved, and has large side wings to carry more material.
An "S-U" combination blade which is shorter, has less curvature, and smaller side wings.
A Power Angle Tilt ("PAT") blade is a straight blade that can be hydraulically raised and tilted, and angled from side to side. This configuration is typically used in fine grading applications.
Rear equipment
The rear of the machine can be outfitted with various work tools to complement the front blade:
Large slabs of steel can be fitted to add weight to the rear of the machine, aiding in heavy dozing applications.
A drawbar and CCU (Cable control unit) for pulling towed scrapers and trailed implements; later models now use hydraulics in place of the CCU to operate scrapers.
A hydraulic winch can be fitted to the rear of the machine for towing or pulling; for forestry work it is usually combined with a lighter front blade for pushing brush and trees.
A ripper with a single or multiple shanks can be added to the rear of the machine to break up hard soil or rock.
Fitted with a linkage (three-point hitch) for use in agriculture with mounted implements like plows.
The crawler tractor is powerful, yet small and light, weighing 16 to 20 tons (18 to 23 short tons) depending on configuration. This makes it ideal for working on very steep slopes, in forests, and for backfilling pipelines safely without risking damage to the pipe.
A low ground pressure version is also available with extra wide tracks to provide low ground pressure which allows working in muddy areas without sinking.
See also
Civil engineering
Heavy equipment
Caterpillar D7
Caterpillar D5
References
Caterpillar Pocket Guide (Track type tractors 1925-57), Iconographix,
External links
Caterpillar D-Series Track-Type Tractors - Official Caterpillar website
Configure your own Caterpillar Track-Type Tractor - from the official website
Military D6 at U.S. Veterans Memorial Museum
Caterpillar Inc. vehicles
Tractors
Tracked vehicles
Bulldozers | Caterpillar D6 | [
"Engineering"
] | 914 | [
"Engineering vehicles",
"Tractors",
"Bulldozers",
"Caterpillar Inc. vehicles"
] |
1,625,082 | https://en.wikipedia.org/wiki/Weak%20isospin | In particle physics, weak isospin is a quantum number relating to the electrically charged part of the weak interaction: Particles with half-integer weak isospin can interact with the bosons; particles with zero weak isospin do not.
Weak isospin is a construct parallel to the idea of isospin under the strong interaction. Weak isospin is usually given the symbol or , with the third component written as or is more important than ; typically "weak isospin" is used as short form of the proper term "3rd component of weak isospin". It can be understood as the eigenvalue of a charge operator.
Notation
This article uses T and T3 for weak isospin and its projection.
Regarding ambiguous notation, T is also used to represent the 'normal' (strong force) isospin, and the same applies to its third component, a.k.a. T3 or Tz. Aggravating the confusion, T is also used as the symbol for the topness quantum number.
Conservation law
The weak isospin conservation law relates to the conservation of T3: weak interactions conserve T3. It is also conserved by the electromagnetic and strong interactions. However, interaction with the Higgs field does not conserve T3, as directly seen in propagating fermions, which mix their chiralities by the mass terms that result from their Higgs couplings. Since the Higgs field vacuum expectation value is nonzero, particles interact with this field all the time, even in vacuum. Interaction with the Higgs field changes particles' weak isospin (and weak hypercharge). Only a specific combination of them, the electric charge, is conserved.
The electric charge, Q, is related to the weak isospin projection, T3, and the weak hypercharge, YW, by Q = T3 + YW/2.
In 1961 Sheldon Glashow proposed this relation by analogy to the Gell-Mann–Nishijima formula for charge to isospin.
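As a quick check of the relation, in the convention Q = T3 + YW/2 and using the standard hypercharge assignments YW = +1/3 for left-handed quarks and YW = −1 for left-handed leptons:

```latex
% Worked examples of  Q = T_3 + \tfrac{1}{2} Y_W
Q_{u_L} = +\tfrac{1}{2} + \tfrac{1}{2}\!\left(+\tfrac{1}{3}\right) = +\tfrac{2}{3}, \qquad
Q_{d_L} = -\tfrac{1}{2} + \tfrac{1}{2}\!\left(+\tfrac{1}{3}\right) = -\tfrac{1}{3}, \qquad
Q_{e_L} = -\tfrac{1}{2} + \tfrac{1}{2}\,(-1) = -1 .
```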
Relation with chirality
Fermions with negative chirality (also called "left-handed" fermions) have weak isospin T = 1/2 and can be grouped into doublets with T3 = ±1/2 that behave the same way under the weak interaction. By convention, electrically charged fermions are assigned T3 with the same sign as their electric charge.
For example, up-type quarks (u, c, t) have T3 = +1/2 and always transform into down-type quarks (d, s, b), which have T3 = −1/2, and vice versa. On the other hand, a quark never decays weakly into a quark of the same T3. Something similar happens with left-handed leptons, which exist as doublets containing a charged lepton (e−, μ−, τ−) with T3 = −1/2 and a neutrino (νe, νμ, ντ) with T3 = +1/2. In all cases, the corresponding anti-fermion has reversed chirality ("right-handed" antifermion) and reversed sign of T3.
Fermions with positive chirality ("right-handed" fermions) and anti-fermions with negative chirality ("left-handed" anti-fermions) have T = T3 = 0 and form singlets that do not undergo charged weak interactions.
Particles with T3 = 0 do not interact with the W± bosons; however, they do all interact with the Z0 boson.
Neutrinos
Lacking any distinguishing electric charge, neutrinos and antineutrinos are assigned T3 opposite to that of their corresponding charged lepton; hence, all left-handed neutrinos are paired with negatively charged left-handed leptons with T3 = −1/2, so those neutrinos have T3 = +1/2. Since right-handed antineutrinos are paired with positively charged right-handed anti-leptons with T3 = +1/2, those antineutrinos are assigned T3 = −1/2. The same result follows from particle-antiparticle charge & parity reversal, between left-handed neutrinos (T3 = +1/2) and right-handed antineutrinos (T3 = −1/2).
Weak isospin and the W bosons
The symmetry associated with weak isospin is SU(2) and requires gauge bosons with T = 1 (W+, W−, and W0) to mediate transformations between fermions with half-integer weak isospin charges. T = 1 implies that W bosons have three different values of T3:
The W+ boson (T3 = +1) is emitted in transitions T3 = +1/2 → T3 = −1/2.
The W0 boson (T3 = 0) would be emitted in weak interactions where T3 does not change, such as neutrino scattering.
The W− boson (T3 = −1) is emitted in transitions T3 = −1/2 → T3 = +1/2.
Under electroweak unification, the W0 boson mixes with the weak hypercharge gauge boson B0; both have T3 = 0. This results in the observed Z0 boson and the photon of quantum electrodynamics; the resulting Z0 and photon likewise have zero weak isospin.
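As an illustration of the T3 bookkeeping in the transitions listed above, consider the quark-level emission of a W+:

```latex
% T_3 is conserved at the vertex  u -> d + W^+
u\,\bigl(T_3 = +\tfrac{1}{2}\bigr) \;\longrightarrow\; d\,\bigl(T_3 = -\tfrac{1}{2}\bigr) + W^{+}\,\bigl(T_3 = +1\bigr),
\qquad +\tfrac{1}{2} = -\tfrac{1}{2} + 1 .
```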
See also
Weak hypercharge
Weak charge
Mathematical formulation of the Standard Model
Footnotes
References
Standard Model
Flavour (particle physics)
Electroweak theory
"Physics"
] | 952 | [
"Standard Model",
"Physical phenomena",
"Electroweak theory",
"Fundamental interactions",
"Particle physics"
] |
1,625,087 | https://en.wikipedia.org/wiki/Area%20code%20split | In telecommunications, an area code split is the practice of introducing a new telephone area code by geographically dividing an existing numbering plan area (NPA), and assigning area codes to the resulting divisions, but retaining the existing area code only for one of the divisions. The purpose of this practice is to provide more central office prefixes, and therefore more telephone numbers, in an area with high demand for telecommunication services, and prevent a shortage of telephone numbers.
An increasing demand for telephone numbers has existed since the development of automatic telephony in the early 20th century, but was spurred especially since the 1990s, with the proliferation of fax machines, pager systems, mobile telephones, computer modems and, eventually, smart phones.
The implementation of an area code split typically involves the establishment of a Class-4 toll switching center for each division of the existing numbering plan area that receive a new area code. The local seven-digit telephone numbers in any of the areas are typically not changed. The existing central office prefixes are maintained and only the central offices of the new divisions are reassigned to a new area code. The impact of a split on the general public involves the printing of new stationery, advertisements, and signage for many customers, and the dissemination of the new area code to family, friends, and customers. Computer systems and telephone equipment may require updates in address books, speed dialing directories, and other automated equipment.
Since area code splits have substantial impact in the involved communities, and involve substantial cost in telephone plant and exchange equipment, they are planned carefully well ahead of implementation with the intent that an area is not again affected by a subsequent realignment for at least a decade.
The new boundaries of the numbering plan areas are drawn in a manner that minimizes splitting communities and should coincide with political subdivisions where practical. Other geographic features, such as rivers and bodies of water, mountain ranges, or highways, may serve as guides for boundary placement. Tributary toll telephone routes should not be unduly cut, to prevent the need for rerouting to new toll center switching systems.
The area that retains the existing area code is typically the largest, or historically more established or developed place, but the locations of large government bodies or other criteria may affect this decision.
Area code overlays
Notwithstanding the desire for long-term stability of the local numbering plan and customer understanding, rapid growth in some areas has resulted in many splits within just a few years.
As a result, in the early 1990s, the North American Numbering Plan Administrator (NANPA) introduced another method for exhaustion relief: the area code overlay. This method assigns multiple area codes to the same numbering plan area, so that existing subscribers can keep established telephone numbers. Only new accounts and extra lines receive telephone numbers with the new area code. This method requires ten-digit dialing of local calls for customers of both area codes. Since 2007, most territories use overlays for mitigating numbering shortages. Most area code relief plans today do not even consider splits as relief options.
See also
Flash-cut
Number pooling
Permissive dialing
Seven-digit dialing
Telephone exchange
References
Telephone numbers | Area code split | [
"Mathematics"
] | 636 | [
"Mathematical objects",
"Numbers",
"Telephone numbers"
] |
1,625,167 | https://en.wikipedia.org/wiki/Pluribus | The Pluribus multiprocessor was an early multi-processor computer designed by BBN for use as a packet switch in the ARPANET. Its design later influenced the BBN Butterfly computer.
The Pluribus had its beginnings in 1972 when the need for a second-generation interface message processor (IMP) became apparent. At that time, BBN had already installed IMPs at more than thirty-five ARPANET sites. These IMPs were Honeywell 316 and 516 minicomputers. The network was growing rapidly in several dimensions: number of nodes, hosts, and terminals; volume of traffic; and geographic coverage (including plans, now realized, for satellite extensions to Europe and Hawaii).
A goal was established to design a modular machine which, at its lower end, would be smaller and less expensive than the 316's and 516's while being expandable in capacity to provide ten times the bandwidth of, and capable of servicing five times as many input-output (I/O) devices as, the 516. Related goals included greater memory addressing capability and increased reliability.
The designers decided on a multiprocessor approach because of its promising potential for modularity, for cost per performance advantages, for reliability, and because the IMP packet switch algorithms were clearly suitable for parallel processing by independent processors.
Hardware
A Pluribus consisted of two or more standard 19" electronic equipment racks, each divided into four bays. Each bay contained a backplane bus and an independent power supply. A bay might contain a processor bus, a shared memory bus, or an I/O bus. Custom-built bus couplers connected the bays to one another so that the processors could reach the shared memory and the I/O devices.
A 6-processor Pluribus was used as a network switch to interconnect BBN's five Tenex/"Twenex" timesharing systems along with 378 terminals on direct serial and dial-in modem lines. The Pluribus used the Lockheed SUE as its processor. The SUE was similar to DEC's PDP-11.
Software
The Pluribus software implemented MIMD symmetric multiprocessing. Software processes were implemented using non-preemptive multiprogramming. Process scheduling used a hardware device, called the pseudo-interrupt device or PID, that was accessible to both programs and to I/O devices. Each processor ran its own copy of the process scheduler, which would read an integer value from the PID. The value was used to select the process to run. If a program or device needed to signal another process to run, it would write that process' number into the PID. The PID would emit the highest-priority process that anyone had requested, serving it out to all processors.
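A highly simplified sketch of that scheduling idea (this is a toy model, not the actual Pluribus code; process numbers, priorities and names are illustrative):

```python
import heapq

class PseudoInterruptDevice:
    """Toy model of the Pluribus PID: programs and I/O devices post process
    numbers, and each processor's scheduler pops the highest-priority one.
    Here a lower number means higher priority."""

    def __init__(self):
        self._pending = []

    def post(self, process_number: int) -> None:
        # A program or I/O device requests that this process be run.
        heapq.heappush(self._pending, process_number)

    def next_process(self):
        # A processor's scheduler reads the highest-priority requested process.
        return heapq.heappop(self._pending) if self._pending else None

pid = PseudoInterruptDevice()
pid.post(7)                 # an I/O device signals process 7
pid.post(2)                 # a program signals process 2 (higher priority)
print(pid.next_process())   # -> 2, run by whichever processor reads it first
```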
An important aspect of the Pluribus software was the "STAGE" system, which detected system errors and took steps to recover from them. The processor clocks had interrupt handlers which implemented watchdog timers on all processors. If a processor stopped running, another processor would detect it and initiate a recovery. The recovery process would unlock any locks placed on shared resources, release allocated storage, and restart all processing on all processors. This was acceptable on an ARPANET routing node, since any lost packets would eventually be retransmitted.
References
Further reading
Networking hardware
ARPANET | Pluribus | [
"Engineering"
] | 692 | [
"Computer networks engineering",
"Networking hardware"
] |
1,625,292 | https://en.wikipedia.org/wiki/GD2 | GD2 is a disialoganglioside expressed on tumors of neuroectodermal origin, including human neuroblastoma and melanoma, with highly restricted expression on normal tissues, principally to the cerebellum and peripheral nerves in humans.
The relatively tumor-specific expression of GD2 makes it a suitable target for immunotherapy with monoclonal antibodies or with artificial T cell receptors. An example of such an antibody is hu14.18K322A. This anti-GD2 monoclonal antibody is currently undergoing a phase II clinical trial in the treatment of previously untreated high-risk neuroblastoma, given alongside combination chemotherapy prior to stem cell transplant and radiation therapy. A prior phase I clinical trial for patients with refractory or recurrent neuroblastoma designed to decrease toxicity found safe dosage amounts and determined that common toxicities, particularly pain, could be well managed. The chimeric (murine-human) anti-GD2 monoclonal antibody ch14.18 is FDA-approved for the treatment of pediatric patients with high-risk neuroblastoma and has been studied in patients with other GD2-expressing tumors.
See also
3F8
References
Further reading
Glycolipids | GD2 | [
"Chemistry"
] | 267 | [
"Glycobiology",
"Carbohydrates",
"Glycolipids"
] |
1,625,328 | https://en.wikipedia.org/wiki/Omron | , styled as OMRON, is a Japanese electronics company based in Kyoto, Japan. Omron was established by in 1933 (as the Tateisi Electric Manufacturing Company) and incorporated in 1948.
The company originated in an area of Kyoto called Omuro, from which the name "Omron" was derived. Prior to 1990, the corporation was known as Omron Tateisi Electronics. During the 1980s and early 1990s, the company motto was: "To the machine the work of machines, to man the thrill of further creation".
Omron's primary business is the manufacture and sale of automation components, equipment and systems. In the consumer and medical markets, it is known for medical equipment such as digital thermometers, blood pressure monitors and nebulizers. Omron developed the world's first electronic ticket gate, which was named an IEEE Milestone in 2007, and was one of the first manufacturers of automated teller machines (ATM) with magnetic stripe card readers.
Omron Oilfield & Marine is a provider of AC and DC drive systems and custom control systems for oil and gas and related industries.
Omron was named one of Thomson Reuters Top 100 Global Innovators in 2013.
Sales for 2023 were 876,082 million yen (up 14.8% from 2022). Net income was 73,861 million yen (up 20.3% from 2022). Basic earnings per share increased 21.8%.
Omron received a platinum (in the top 1%) EcoVadis rating for outstanding sustainability performance. The rating is based on the company's achievements in four areas: Environment, Labour & Human Rights, Sustainable Procurement and Ethics.
Business divisions and products
Operating principle: Omron describes its reason for being as creating social value through business and continuing to contribute to the development of society, which it regards as the practical application of its corporate philosophy.
Industrial automation: industrial robots, sensors, switches, industrial cameras, safety components, relays, control components, electric power monitoring equipment, power supplies and PLCs
Electronic components: relays, switches, connectors, micro sensing devices, MEMS sensors, image sensing technologies,
Social systems: access control systems (building entry systems), road management systems, traffic signal controllers, security/surveillance cameras, automated ticket gates, ticket vending machines, fare adjustment machines
Healthcare:
Develops and produces medical devices for homes and medical facilities, health management software and health improvement services.
Personal use: OMRON's wide range of home healthcare products includes blood pressure monitors, digital thermometers, body composition monitors, pedometers, and nebulizers. Omron has sold more than 200 million blood pressure monitors worldwide, making it the world leader in electronic blood pressure monitors.
Professional use: blood pressure monitors, non-invasive vascular monitors, portable ECGs, patient monitors
Other businesses
Power distribution and controls for drilling rigs
Environmental sensors
Electronic controls and automation for detention center systems
Shareholders
As of September 30, 2015:
State Street Bank and Trust Company, 505223
Japan Trustee Services Bank, Ltd.(trust account)
The Bank of Tokyo-Mitsubishi UFJ, Ltd. (trust account)
State Street Bank and Trust Company, 505001
The Bank of Kyoto, Ltd.
The Master Trust Bank of Japan, Ltd. (trust account)
Nippon Life Insurance Company
Japan Trustee Services Bank, Ltd. (trust account 9)
The Bank of New York, Non-Treaty Jasdec Account
Community activities
Omron has carried out many activities and programs for the community and the environment, such as:
Participate in collaborative projects with non-governmental organizations, such as the United Nations Children's Fund, to support developing countries in improving children's health and education.
Implement energy saving measures, reduce greenhouse gas emissions, recycle and reuse materials and products, to contribute to environmental protection and response to climate change.
Organize volunteer activities, such as donating blood, cleaning beaches, planting trees, to show care and responsibility to the community and nature.
Omron is a long-established company that aims to provide customers and society with high-quality, safe and environmentally friendly products and services, and positions itself as a trusted, long-term partner of customers and society, contributing to sustainable development.
Gallery
See also
Motorola 88000 used by the Omron luna88k 4-processor computer
Arena, a browser which was extended by OMRON
References
External links
Medical technology companies of Japan
Electronics companies of Japan
Access control
Robotics companies of Japan
Industrial automation
Defense companies of Japan
Manufacturing companies based in Kyoto
Companies listed on the Tokyo Stock Exchange
Companies listed on the Frankfurt Stock Exchange
Electronics companies established in 1933
Japanese companies established in 1933
Japanese brands
Medical device manufacturers
Power supply manufacturers | Omron | [
"Engineering"
] | 980 | [
"Industrial automation",
"Automation",
"Industrial engineering"
] |
1,625,348 | https://en.wikipedia.org/wiki/IC%20power-supply%20pin | IC power-supply pins are voltage and current supply terminals found on integrated circuits (ICs) in electrical engineering, electronic engineering, and integrated circuit design. ICs have at least two pins that connect to the power rails of the circuit in which they are installed. These are known as the power-supply pins. However, the labeling of the pins varies by IC family and manufacturer. The double subscript notation usually corresponds to a first letter in a given IC family (transistors) notation of the terminals (e.g. VDD supply for a drain terminal in FETs etc.).
The simplest labels are V+ and V−, but internal design and historical traditions have led to a variety of other labels being used. V+ and V− may also refer to the non-inverting (+) and inverting (−) voltage inputs of ICs like op amps.
For power supplies, sometimes one of the supply rails is referred to as ground (abbreviated "GND"); positive and negative voltages are then expressed relative to it. In digital electronics, negative voltages are seldom present, and the ground nearly always is the lowest voltage level. In analog electronics (e.g. an audio power amplifier) the ground can be a voltage level between the most positive and most negative voltage level.
While double subscript notation, where subscripted letters denote the difference between two points, uses similar-looking placeholders with subscripts, the double-letter supply voltage subscript notation is not directly linked (though it may have been an influencing factor).
BJTs
ICs using bipolar junction transistors have VCC (+, positive) and VEE (-, negative) power-supply pins, though VCC is often used for CMOS devices as well.
In circuit diagrams and circuit analysis, there are long-standing conventions regarding the naming of voltages, currents, and some components. In the analysis of a bipolar junction transistor, for example, in a common-emitter configuration, the DC voltage at the collector, emitter, and base (with respect to ground) may be written as VC, VE, and VB respectively.
Resistors associated with these transistor terminals may be designated RC, RE, and RB. In order to create the DC voltages, the furthest voltage, beyond these resistors or other components if present, was often referred to as VCC, VEE, and VBB. In practice VCC and VEE then refer to the positive and negative supply lines respectively in common NPN circuits. Note that VCC would be negative, and VEE would be positive in equivalent PNP circuits.
VBB specifies the reference bias supply voltage in ECL logic.
FETs
Exactly analogous conventions were applied to field-effect transistors with their drain, source and gate terminals. This led to VD and VS being created by supply voltages designated VDD and VSS in the more common circuit configurations. In equivalence to the difference between NPN and PNP bipolars, VDD is positive with regard to VSS in the case of n-channel FETs and MOSFETs and negative for circuits based on p-channel FETs and MOSFETs.
CMOS
CMOS ICs have generally borrowed the NMOS convention of VDD for positive and VSS for negative, even though both positive and negative supply rails connect to source terminals (the positive supply goes to PMOS sources, the negative supply to NMOS sources).
In many single-supply digital and analog circuits the negative power supply is also called "GND". In "split-rail" supply systems there are multiple supply voltages. Examples of such systems include modern cell phones, with GND and voltages such as 1.2 V, 1.8 V, 2.4 V, 3.3 V, and PCs, with GND and voltages such as −5 V, 3.3 V, 5 V, 12 V. Power-sensitive designs often have multiple power rails at a given voltage, using them to conserve energy by switching off supplies to components that are not in active use.
More advanced circuits often have pins carrying voltage levels for more specialized functions, and these are generally labeled with some abbreviation of their purpose. For example, VUSB for the supply delivered to a USB device (nominally 5 V), VBAT for a battery, or Vref for the reference voltage for an analog-to-digital converter. Systems combining both digital and analog circuits often distinguish digital and analog grounds (GND and AGND), helping isolate digital noise from sensitive analog circuits. High-security cryptographic devices and other secure systems sometimes require separate power supplies for their unencrypted and encrypted (red/black) subsystems to prevent leakage of sensitive plaintext.
BJTs and FETs mixed
Although still in relatively common use, these device-specific power-supply designations have limited relevance in circuits that use a mixture of bipolar and FET elements, or in those that employ either both NPN and PNP transistors or both n- and p-channel FETs. This latter case is very common in modern chips, which are often based on CMOS technology, where the C stands for complementary, meaning that complementary pairs of n- and p-channel devices are common throughout.
These naming conventions were part of a bigger picture, where, to continue with bipolar-transistor examples, although the FET remains entirely analogous, DC or bias currents into or out of each terminal may be written IC, IE, and IB. Apart from DC or bias conditions, many transistor circuits also process a smaller audio-, video-, or radio-frequency signal that is superimposed on the bias at the terminals. Lower-case letters and subscripts are used to refer to these signal levels at the terminals, either peak-to-peak or RMS as required. So we see vc, ve, and vb, as well as ic, ie, and ib. Using these conventions, in a common-emitter amplifier, the ratio vc/vb represents the small-signal voltage gain at the transistor, and vc/ib the small-signal trans-resistance, from which the name transistor is derived by contraction. In this convention, vi and vo usually refer to the external input and output voltages of the circuit or stage.
Similar conventions were applied to circuits involving vacuum tubes, or thermionic valves, as they were known outside of the U.S. Therefore, we see VP, VK, and VG referring to plate (or anode outside of the U.S.), cathode (note K, not C) and grid voltages in analyses of vacuum triode, tetrode, and pentode circuits.
See also
4000 series
7400 series
Bob Widlar
Common collector
Differential amplifier
List of 4000 series integrated circuits
List of 7400 series integrated circuits
Logic family
Logic gate
Open collector
Operational amplifier applications
Pin-compatibility
Reference designator
Notes
References
Integrated circuits
"Technology",
"Engineering"
] | 1,474 | [
"Computer engineering",
"Integrated circuits"
] |
1,625,481 | https://en.wikipedia.org/wiki/Van%20Arkel%E2%80%93de%20Boer%20process | The van Arkel–de Boer process, also known as the iodide process or crystal-bar process, was the first industrial process for the commercial production of pure ductile titanium, zirconium and some other metals. It was developed by Anton Eduard van Arkel and Jan Hendrik de Boer in 1925 for Philips Nv. Now it is used in the production of small quantities of ultrapure titanium and zirconium. It primarily involves the formation of the metal iodides and their subsequent decomposition to yield pure metal, for example at one of the Allegheny Technologies' Albany plants.
This process was superseded commercially by the Kroll process.
Process
In this process, impure titanium, zirconium, hafnium, vanadium, thorium or protactinium is heated in an evacuated vessel with a halogen at 50–250 °C. The patent specifically involved the intermediacy of TiI4 and ZrI4, which were volatilized (leaving impurities as solid).
At atmospheric pressure TiI4 melts at 150 °C and boils at 377 °C, while ZrI4 melts at 499 °C and boils at 600 °C. The boiling points are lower at reduced pressure. The gaseous metal tetraiodide is decomposed on a white hot tungsten filament (1400 °C). As more metal is deposited the filament conducts better and thus a greater electric current is required to maintain the temperature of the filament. The process can be performed in the span of several hours or several weeks, depending on the particular setup.
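For the titanium case, the transport cycle described above can be summarized schematically (temperatures as given in this section):

```latex
% Iodide transport cycle for titanium (schematic)
\mathrm{Ti_{(impure,\,s)}} + 2\,\mathrm{I_{2(g)}}
  \;\xrightarrow{\;50\text{--}250\ ^\circ\mathrm{C}\;}\; \mathrm{TiI_{4(g)}}
\qquad\qquad
\mathrm{TiI_{4(g)}}
  \;\xrightarrow{\;\sim 1400\ ^\circ\mathrm{C}\;}\; \mathrm{Ti_{(pure,\,s)}} + 2\,\mathrm{I_{2(g)}}
```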
Generally, the crystal bar process can be performed with any number of metals, using whichever halogen or combination of halogens is most appropriate for that sort of transport mechanism, based on the reactivities involved. The only metals it has been used to purify on an industrial scale are titanium, zirconium and hafnium, and in fact it is still in use today on a much smaller scale for special purity needs.
References
Industrial processes
Zirconium
Dutch inventions
Methods of crystal growth
Titanium processes
Metallurgical processes
Materials science
1925 introductions
20th-century inventions | Van Arkel–de Boer process | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 450 | [
"Applied and interdisciplinary physics",
"Metallurgical processes",
"Methods of crystal growth",
"Metallurgy",
"Materials science",
"Titanium processes",
"Crystallography",
"nan"
] |
1,625,876 | https://en.wikipedia.org/wiki/Parkes%20process | The Parkes process is a pyrometallurgical industrial process for removing silver from lead during the production of bullion. It is an example of liquid–liquid extraction.
The process takes advantage of two liquid-state properties of zinc. The first is that zinc is immiscible with lead, and the other is that silver is 3000 times more soluble in zinc than it is in lead. When zinc is added to liquid lead that contains silver as a contaminant, the silver preferentially migrates into the zinc. Because the zinc is immiscible in the lead it remains in a separate layer and is easily removed. The zinc-silver solution is then heated until the zinc vaporizes, leaving nearly pure silver. If gold is present in the liquid lead, it can also be removed and isolated by the same process.
The process was patented by Alexander Parkes in 1850. Parkes received two additional patents in 1852.
The Parkes process was not at first adopted in the United States, due to the low native production of lead. The problems were overcome during the 1880s, and by 1923 the process had come into general use.
See also
Lead smelter
Pattison's Process
Patio process
References
Lead
Silver
Metallurgical processes | Parkes process | [
"Chemistry",
"Materials_science"
] | 255 | [
"Metallurgical processes",
"Metallurgy"
] |
1,626,186 | https://en.wikipedia.org/wiki/Herbert%20Callen | Herbert Bernard Callen (July 1, 1919 – May 22, 1993) was an American physicist specializing in thermodynamics and statistical mechanics. He is considered one of the founders of the modern theory of irreversible thermodynamics, and is the author of the classic textbook Thermodynamics and an Introduction to Thermostatistics, published in two editions. During World War II, his services were invoked in the theoretical division of the Manhattan Project.
Life and work
A native of Philadelphia, Herbert Callen received his Bachelor of Science degree from Temple University. His graduate studies were interrupted by the Manhattan Project. He also worked on a U.S. Navy project concerning guided missiles (Project Bumblebee) at Princeton University in 1945. Callen subsequently completed his PhD in physics at the Massachusetts Institute of Technology (MIT) in 1947. He was supervised by the physicist László Tisza. His doctoral dissertation concerns the Kelvin thermoelectric and thermomagnetic relations, and Onsager's reciprocal relations; it was titled On the Theory of Irreversible Processes. Upon receiving his degree, Callen spent a year at the MIT Laboratory for Insulation Research and developed his theory of electrical breakdown for insulators.
In 1948, Callen joined the faculty of the department of physics at the University of Pennsylvania and became a professor in 1956. Specialists consider his most lasting contribution to physics to be the paper co-written with Theodore A. Welton presenting a proof of the fluctuation-dissipation theorem, an extremely general result describing how a system's response to perturbations relates to its behavior at equilibrium. This crucial result became the basis for the statistical theory of irreversible processes and explains how fluctuations dissipate energy into heat in general and the phenomenon of Nyquist noise in particular. Callen then pioneered the thermodynamic Green's functions for magnetism. With his students, he studied many-body problems involving spin operators. This led to the discovery of some useful methods of approximations.
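For example, the Johnson–Nyquist noise result, which the fluctuation-dissipation theorem generalizes, relates the mean-square thermal voltage fluctuation across a resistor to its resistance, its temperature and the measurement bandwidth:

```latex
% Johnson–Nyquist (thermal) noise across a resistor R at temperature T, bandwidth \Delta f
\langle V^2 \rangle = 4\, k_B\, T\, R\, \Delta f
```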
The first edition of his classic text Thermodynamics and an Introduction to Thermostatistics was published in 1960. In it, he presents a rigorous axiomatic treatment of thermodynamics in which the state functions are the fundamental entities and the processes are their differentials. The postulates concern the existence of thermal equilibrium, and the properties of entropy. From them, he derives the fundamentals of thermodynamics, found in the first eight chapters. The much revised second edition, published in 1985, became a highly cited reference in the literature and an enduring textbook.
He was a successful teacher, noted for his ability to explain complicated phenomena in simple terms. He played a key role in the recruitment of promising solid-state physicists to the University of Pennsylvania in the late 1950s and continued to be active in the university's academic affairs till his retirement in 1985.
He was the recipient of a Guggenheim Fellowship for the academic year 1972–1973. In 1984, Callen received the Elliott Cresson Medal from the Franklin Institute. He retired in 1985. He was made a member of the National Academy of Sciences in 1990.
After battling Alzheimer's disease for eleven years, Callen died in the Philadelphia suburb of Merion in 1993. He was 73 years old. He was survived by his wife, Sara Smith, and their two children, Jed and Jill.
Commenting on his own approach to science, Callen noted the importance of "inspired insight guided by faith in the simplicity of nature."
See also
List of textbooks in thermodynamics and statistical mechanics
Richard Chase Tolman
Constantin Carathéodory, who also sought an axiomatic formulation of thermodynamics
References
External links
Callen, Herbert B., and Theodore A. Welton. "Irreversibility and Generalized Noise." Physical Review, vol. 83, no. 1, 1951, pp. 34–40.
1919 births
1993 deaths
20th-century American physicists
20th-century American writers
Massachusetts Institute of Technology School of Science alumni
Scientists from Philadelphia
Temple University alumni
Thermodynamicists
University of Pennsylvania faculty
Manhattan Project people
Members of the United States National Academy of Sciences
Deaths from dementia in Pennsylvania
Deaths from Alzheimer's disease in the United States | Herbert Callen | [
"Physics",
"Chemistry"
] | 880 | [
"Thermodynamics",
"Thermodynamicists"
] |
1,626,330 | https://en.wikipedia.org/wiki/WHDLoad | WHDLoad is a software package for the Amiga platform to make installation of software to a hard disk easier, for such things as demos or games. Allowing for better compatibility for Amiga software, which can sometimes have hardware incompatibilities making them hard to use in emulated environments due to the widely varying hardware specifications of the Amiga product line across its history. WHDLoad basically circumvents the operating system in the Amiga for greater compatibility and preserves the original program environment.
WHDLoad makes it possible to autostart an installed floppy disk image by clicking an icon.
Two special parts are required, each one specially written for the client program: to install the media, it must be read from the original disk and written to an image file on the hard drive by the "Imager"; the installed program can then be run from a virtual disk drive through the "Slave" interface.
Slave interface
The "Slave" interface allows interaction between the program and WHDLoad, and co-ordinates the reading and writing of files. This makes it possible to run or emulate programs that are traditionally incompatible with common emulators such as WinFellow, or WinUAE. WHDLoad can be easier to use than trying to figure out the exact configuration for the aforementioned emulators as well.
History
The primary reason for this loader is that a large number of computer games for the Amiga don't properly interact with the AmigaOS operating system, but instead run directly on the Amiga hardware, making assumptions about specific control registers, memory locations, etc. The hardware of newer Amiga models had been greatly revised, causing these assumptions to break when trying to run the same games on newer hardware, and vice versa with newer games on older hardware. WHDLoad provides a way to install such games on an AmigaOS-compatible hard drive and run on newer hardware. An added benefit is the avoidance of loading times and disk swaps, because everything the game needs is stored on the hard drive.
The first public release of WHDLoad was on September 5, 1996, and the latest available version is 18.8 released in May 2022.
Features
WHDLoad takes over the entire operating system which may cause problems with some software (e.g. TCP/IP stack), but quitting the game or demo restores the system back into its normal working state.
WHDLoad games are stored on the AmigaOS file system as disk images, relying on driver files known as "WHDLoad slaves" to work. These slave files are freely available from the Internet (as Freeware), but the games themselves have to be acquired separately, to prevent software piracy. Additionally, many fans have made their own freeware games, which are also freely, and legally, available.
How WHDLoad works
The WHDLoad "Slave" interface is integrated into the OS in such way that one can double-click a program icon to run the program at any time. When the user executes the program, by clicking a stored image icon, the AmigaOS operating system loads the WHDLoad executable and starts it. Then the loader checks the software and hardware environment, loads and checks the Slave interface required for that chosen demo or game and allocates required memory for the installed program. If the Preload feature is enabled into the requester page of WHDLoad, then the program attempts to load disk images and files into RAM (insofar as free memory is available).
At this point WHDLoad performs its main task: it switches off the AmigaOS operating system, disables multitasking and interrupts, and copies memory regions which are used by AmigaOS and required by the installed program to an unused area until AmigaOS is needed again.
WHDLoad also degrades the graphics hardware to OCS on original Amiga machines (this also works on emulated Amigas, but only on newer versions of WinUAE, which recognize WHDLoad and preserve its interrupts). WHDLoad then initializes all hardware with defined values and jumps into the Slave interface required for the program in question.
The Slave interface loads the main executable of the installed program by calling a WHDLoad function (resload_DiskLoad or resload_LoadFile), then patches it (so that the loaded program can load its data stored on the hard disk via the Slave, to fix compatibility problems, and to enable an exit from the program) and calls the main executable.
At this point, the program that has been installed can perform the task for which it has been written, loading its data as it would from a real floppy disk.
Users can break the execution of the loaded program by way of a "Quit" key (usually F10). When this action is performed, then the Slave interface returns to WHDLoad by calling a resload_Abort internal function.
The OS is then restored, along with all hardware registers and the original display. The memory and all allocated resources are freed for further use.
Requirements
A standard Amiga 1200 or Amiga 600 without any extra memory will only work with a limited number of games, usually those using OCS/ECS and one floppy disk. It is recommended to install a RAM board in the trapdoor slot to ensure compatibility with 99% of the games.
A hard disk is required; the number of games that can be installed depends on its size.
References
External links
whdload.de: WHDLoad home page
jimneray.com: X-bEnCh - A GUI to launch WHDLoad installed and other games/demos
See also
Amiga emulation software
AmigaOS | WHDLoad | [
"Technology"
] | 1,173 | [
"AmigaOS",
"Computing platforms"
] |
1,626,448 | https://en.wikipedia.org/wiki/CAP%20computer | The Cambridge CAP computer was the first successful experimental computer that demonstrated the use of security capabilities, both in hardware and software. It was developed at the University of Cambridge Computer Laboratory in the 1970s. Unlike most research machines of the time, it was also a useful service machine.
The sign currently on the front of the machine reads:
The CAP project on memory protection ran from 1970 to 1977. It was based on capabilities implemented in hardware, under M. Wilkes and R. Needham with D. Wheeler responsible for the implementation. R. Needham was awarded a BCS Technical Award in 1978 for the CAP (Capability Protection) Project.
Design
The CAP was designed such that any access to a memory segment or hardware required that the current process held the necessary capabilities.
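The sketch below is a purely conceptual illustration of capability-based access checking of the kind just described; it is an assumption-laden sketch in Python, not a model of the CAP hardware, its capability unit, or its instruction set.

```python
# Conceptual sketch of capability-based access checking (illustrative only).
class Capability:
    def __init__(self, segment, rights):
        self.segment = segment        # which memory segment this capability grants access to
        self.rights = set(rights)     # e.g. {"read"} or {"read", "write"}

def access(process_caps, segment, right):
    """Allow the operation only if the process holds a matching capability."""
    if any(c.segment == segment and right in c.rights for c in process_caps):
        return f"{right} on segment {segment}: permitted"
    return f"{right} on segment {segment}: fault (no capability)"

caps = [Capability("data0", {"read", "write"}), Capability("code0", {"read"})]
print(access(caps, "data0", "write"))
print(access(caps, "code0", "write"))
```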
The 32-bit processor featured microprogramming control, two 256-entry caches, a 32-entry write buffer and the capability unit itself, which had 64 registers for holding evaluated capabilities. Floating point operations were available using a single 72-bit accumulator. The instruction set featured over 200 instructions, ranging from basic ALU and memory operations to capability- and process-control instructions.
Instead of the programmer-visible registers used in Chicago and Plessey System 250 designs, the CAP would load internal registers silently when a program defined a capability. The memory was divided into segments of up to 64K 32-bit words. Each segment could contain data or capabilities, but not both. Hardware was accessed via an associated minicomputer.
All procedures constituting the operating system were written in ALGOL 68C, although a number of other closely associated protected procedures, such as the paginator, were written in BCPL.
Operation
The CAP first became operational in 1976. A fully functional computer, it featured a complete operating system, file system, compilers, and so on. The OS used a process tree structure, with an initial process called the "Master coordinator". This removed the need for separate modes of operation, as each process could directly access the resources of its children. In practice, only two levels were ever used during the CAP's operation.
In 1981 the MACRO SPITBOL version of the SNOBOL4 programming language was implemented on the CAP by Nicholas J. L. Brown.
See also
Plessey System 250
IBM System/38
C.mmp
RSRE Flex
Notes
References
Computers designed in the United Kingdom
Capability systems
One-of-a-kind computers
University of Cambridge Computer Laboratory | CAP computer | [
"Technology"
] | 499 | [
"Capability systems",
"Computer systems"
] |
1,626,485 | https://en.wikipedia.org/wiki/Gromatici | Gromatici (from Latin groma or gruma, a surveyor's pole) or agrimensores was the name for land surveyors amongst the ancient Romans. The "gromatic writers" were technical writers who codified their techniques of surveying, most of whose preserved writings are found in the Corpus Agrimensorum Romanorum.
History
Roman Republic
At the foundation of a colony and the assignation of lands the auspices were taken, for which purpose the presence of the augur was necessary. But the business of the augur did not extend beyond the religious part of the ceremony: the division and measurement of the land were made by professional measurers. These were the finitores mentioned by the early writers, who in the later periods were called mensores and agrimensores. The business of a finitor could only be done by a free man, and the honourable nature of his office is indicated by the rule that there was no bargain for his services, but he received his pay in the form of a gift. These finitores appear also to have acted as judices, under the name of arbitri (single arbiter), in those disputes about boundaries which were purely of a technical, not a legal, character. The first professional surveyor mentioned is Lucius Decidius Saxa, who was employed by Mark Antony in the measurement of camps.
Roman Empire
Under the empire the observance of the auspices in the fixing of camps and the establishment of military colonies was less regarded, and the practice of the agrimensores was greatly increased. The distribution of land amongst the veterans, the increase in the number of military colonies, the settlement of Italian peasants in the provinces, the general survey of the empire under Augustus, the separation of private and state domains, led to the establishment of a recognized professional corporation of surveyors. The practice was also codified as a system by technical writers such as Julius Frontinus, Hyginus, Siculus Flaccus, and other Gromatic writers, as they are sometimes termed. The teachers of geometry in the large cities of the empire used to give practical instruction on the system of gromatics. This practical geometry was one of the liberalia studia; but the professors of geometry and the teachers of law were not exempted from the obligation of being tutores, and from other such burdens (Frag. Vat. § 150), a fact which shows the subordinate rank which the teachers of elementary science then held.
The agrimensor could mark out the limits of the centuriae, and restore the boundaries where they were confused, but he could not assign without a commission from the emperor. Military persons of various classes are also sometimes mentioned as practising surveying, and settling disputes about boundaries. The lower rank of the professional agrimensor, as contrasted with the finitor of earlier periods, is shown by the fact that in the imperial period there might be a contract with an agrimensor for paying him for his services.
Late empire
The agrimensor of the later period was merely employed in disputes as to the boundaries of properties. The foundation of colonies and the assignation of lands were now less common, though we read of colonies being established to a late period of the empire, and the boundaries of the lands must have been set out in due form. Those who marked out the ground in camps for the soldiers' tents are also called mensores, but they were military men. The functions of the agrimensor are shown by a passage of Hyginus, in all questions as to determining boundaries by means of the marks (signa), the area of surfaces, and explaining maps and plans, the services of the agrimensor were required: in all questions that concerned property, right of road, enjoyment of water, and other easements (servitutes) they were not required, for these were purely legal questions. Generally, therefore, they were either employed by the parties themselves to settle boundaries, or they received their instructions for that purpose from a judex. In this capacity they were advocati. But they also acted as judices, and could give a final decision in that class of smaller questions which concerned the quinque pedes of the Lex Mamilia (the law setting which boundary spaces were not subject to usucapio), as appears from Frontinus.
Under the Christian emperors the name mensores was changed into agrimensores to distinguish them from another class of mensores, who are mentioned in the codes of Theodosius I and Justinian I. By a rescript of Constantine I and Constans (344 AD) the teachers and learners of geometry received immunity from civil burdens. According to a constitution of Theodosius II and Valentinian III (440 AD), they received jurisdiction in questions of alluvio; but some writers doubt that this crucial passage is genuine. According to another constitution of the same emperors, the agrimensor was to receive an aureus from each of any three bordering proprietors whose boundaries he settled, and if he set a limes right between proprietors, he received an aureus for each twelfth part of the property through which he restored the limes. Further, by another constitution of the same emperors, the young agrimensores were to be called "clarissimi" while they were students, and when they began to practise their profession, "spectabiles" (Jean-Baptiste Dureau de la Malle, Economie Politique des Romains, vol. i, p. 170).
Writers and works
The earliest of the gromatic writers was Frontinus, whose De agrorum qualitate, dealing with the legal aspect of the art, was the subject of a commentary by Aggenus Urbicus, a Christian schoolmaster. Under Trajan a certain Balbus, who had accompanied the emperor on his Dacian campaign, wrote a still extant manual of geometry for land surveyors (Expositio et ratio omnium formarum or mensurarum, probably after a Greek original by Hero), dedicated to a certain Celsus who had invented an improvement in a gromatic instrument (perhaps the dioptra, resembling the modern theodolite); for the treatises of Hyginus see that name.
Somewhat later than Trajan was Siculus Flaccus (De condicionibus agrorum, extant), while the most curious treatise on the subject, written in barbarous Latin and entitled Casae litterarum (long a school textbook) is the work of a certain Innocentius (4th-5th century). It is doubtful whether Boetius is the author of the treatises attributed to him. The Gromatici veteres also contains extracts from official registers (probably belonging to the 5th century) of colonial and other land surveys, lists and descriptions of boundary stones, and extracts from the Theodosian Codex.
According to Mommsen, the collection had its origin during the 5th century in the office of a vicarius (diocesan governor) of Rome, who had a number of surveyors under him. The surveyors were known by various names: decempedator (with reference to the instrument used); finitor, metator or mensor castrorum in republican times; togati Augustorum as imperial civil officials; professor, auctor as professional instructors.
The best edition of the Gromatici is by Karl Lachmann and others (1848) with supplementary volume, Die Schriften der römischen Feldmesser (1852). The 1913 edition of Carl Olof Thulin contains only a few works. The 2000 edition of Brian Campbell is much broader and also contains an English translation.
See also
Bematist
Triangulation (surveying)#History
References
Further reading
Campbell, Brian. 1996. "Shaping the Rural Environment: Surveyors in Ancient Rome." Journal of Roman Studies 86:74–99.
Campbell, J. B. 2000. The Writings of the Roman Land Surveyors: Introduction, Text, Translation and Commentary. London: Society for the Promotion of Roman Studies.
Classen, C. Joachim. 1994. "On the Training of the Agrimensores in Republican Rome and Related Problems: Some Preliminary Observations." Illinois Classical Studies 19:161-170.
Cuomo, Serafina. 2000. "Divide and Rule: Frontinus and Roman Land-Surveying." Studies in the History and Philosophy of Science 31A:189–202.
Dilke, Oswald Ashton Wentworth. 1967. "Illustrations from Roman Surveyors’ Manuals." Imago Mundi 21:9–29.
Dilke, Oswald Ashton Wentworth. 1971. The Roman Land Surveyors: An Introduction to the Agrimensores. Newton Abbot, UK: David and Charles.
Duncan-Jones, R. P. 1976. "Some Configurations of Landholding in the Roman Empire." In Studies in Roman Property. Edited by M. I. Finley, 7–24. Cambridge, UK, and New York: Cambridge Univ. Press.
Gargola, Daniel J. 1995. Lands, Laws and Gods: Magistrates and Ceremony in the Regulation of Public Lands in Republican Rome. Chapel Hill: Univ. of North Carolina Press.
Lewis, Michael Jonathan Taunton. 2001. Surveying Instruments of Greece and Rome. Cambridge, UK, and New York: Cambridge Univ. Press.
Nicolet, Claude. 1991. "Control of the Fiscal Sphere: The Cadastres." In Space, Geography, and Politics in the Early Roman Empire.'' By Claude Nicolet, 149–169. Ann Arbor: Univ. of Michigan Press.
Surveying
Ancient Roman technology
History of measurement | Gromatici | [
"Engineering"
] | 2,014 | [
"Surveying",
"Civil engineering"
] |
1,626,747 | https://en.wikipedia.org/wiki/Financial%20econometrics | Financial econometrics is the application of statistical methods to financial market data. Financial econometrics is a branch of financial economics, in the field of economics. Areas of study include capital markets, financial institutions, corporate finance and corporate governance. Topics often revolve around asset valuation of individual stocks, bonds, derivatives, currencies and other financial instruments.
It differs from other forms of econometrics because the emphasis is usually on analyzing the prices of financial assets traded at competitive, liquid markets.
People working in the finance industry or researching the finance sector often use econometric techniques in a range of activities – for example, in support of portfolio management and in the valuation of securities. Financial econometrics is essential for risk management when it is important to know how often 'bad' investment outcomes are expected to occur over future days, weeks, months and years.
Topics
The sort of topics that financial econometricians are typically familiar with include:
analysis of high-frequency price observations
arbitrage pricing theory
asset price dynamics
optimal asset allocation
cointegration
event study
nonlinear financial models such as autoregressive conditional heteroskedasticity
realized variance
fund performance analysis such as returns-based style analysis
tests of the random walk hypothesis
the capital asset pricing model
the term structure of interest rates (the yield curve)
value at risk
volatility estimation techniques such as exponential smoothing models and RiskMetrics (see the sketch after this list)
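A minimal sketch of the RiskMetrics-style exponentially weighted moving average (EWMA) volatility estimate referred to in the last item is given below; the decay factor λ = 0.94 is the value conventionally quoted for daily data, and the return series is purely hypothetical.

```python
# Minimal EWMA (RiskMetrics-style) volatility sketch:
# sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2
def ewma_volatility(returns, lam=0.94):
    """Return the series of EWMA variance estimates for a list of returns."""
    sigma2 = returns[0] ** 2              # seed the recursion with the first squared return
    estimates = [sigma2]
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1 - lam) * r ** 2
        estimates.append(sigma2)
    return estimates

# Hypothetical daily returns (illustrative only).
daily_returns = [0.001, -0.004, 0.002, 0.010, -0.007]
variances = ewma_volatility(daily_returns)
print([round(v ** 0.5, 5) for v in variances])   # daily volatility estimates
```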
Research community
The Society for Financial Econometrics (SoFiE) is a global network of academics and practitioners dedicated to sharing research and ideas in the fast-growing field of financial econometrics. It is an independent non-profit membership organization, committed to promoting and expanding research and education by organizing and sponsoring conferences, programs and activities at the intersection of finance and econometrics, including links to macroeconomic fundamentals. SoFiE was co-founded by Robert F. Engle and Eric Ghysels.
Leading journals that publish financial econometrics research include Econometrica, Journal of Econometrics and Journal of Business & Economic Statistics. The Journal of Financial Econometrics has an exclusive focus on financial econometrics. It is edited by Federico Bandi and Andrew Patton, and it has a close relationship with SoFiE.
The Nobel Memorial Prize in Economic Sciences has been awarded for significant contribution to financial econometrics; in 2003 to Robert F. Engle "for methods of analyzing economic time series with time-varying volatility" and Clive Granger "for methods of analyzing economic time series with common trends" and in 2013 to Eugene Fama, Lars Peter Hansen and Robert J. Shiller "for their empirical analysis of asset prices". Other highly influential researchers include Torben G. Andersen, Tim Bollerslev and Neil Shephard.
References
Econometrics
Mathematical finance
Financial economics
Financial data analysis | Financial econometrics | [
"Mathematics"
] | 583 | [
"Applied mathematics",
"Mathematical finance"
] |
1,626,816 | https://en.wikipedia.org/wiki/Size%20change%20in%20fiction | Resizing (including miniaturization, growth, shrinking, and enlargement) is a recurring theme in speculative fiction, in particular in fairy tales, fantasy, and science fiction. Resizing is often achieved through the consumption of mushrooms or toadstools, which might have been established due to their psychedelic properties, through magic, by inherent yet-latent abilities, or by size-changing rays of ambiguous properties.
Mythological precursors
Chinese mythology
In the Liezi, the giants of the Longbo Kingdom were shrunk over time as punishment by the heavenly emperor after their burning of the bones of the ao caused the Daiyu and Yuanjiao islands to sink, forcing billions of xian to evacuate their homes.
Hindu mythology
In the Ramayana, the deity Hanuman has the ability to alter his size, which he can use to enlarge himself to the size of a mountain or shrink himself down to the size of an insect.
The Bhagavata Purana mentions the story of King Kakudmi and his daughter Revati, who go to Satyaloka to ask Brahma for help deciding who Revati should marry. After waiting for a musical performance to finish, they are told by Brahma that many successions of ages have passed on Earth, so all of Kakudmi's candidates for husbands are long gone. When he and Revati return to Earth, they find that the new race of people dwelling upon it are "dwindled in stature, reduced in vigour, and enfeebled in intellect". They find Balarama, who marries Revati and shrinks her down to his size.
Along with many other texts, the Bhagavata Purana also mentions some avatars of Vishnu growing to large sizes. The legend of Matsya starting out as a tiny fish and gradually growing bigger whilst under the care of Manu is first told in the Shatapatha Brahmana. Varaha, a boar, starts out as small as a thumb and grows big enough to carry the earth on his tusks. The dwarf Vamana grows to astronomical proportions and takes three steps, liberating the three worlds from the rule of the asuras and sending King Bali to Patala after taking his third and final step. When Krishna and his friends were swallowed by Aghasura, one of the demons sent by Kamsa to kill Krishna, Krishna grew larger and larger inside of him until he burst out through the top of his head.
The tenth book and thirteenth chapter of the Devi Bhagavata Purana mentions a battle between the devas and the daitya Arunasura, during which the goddess Bhramari grew to a massive size and began to summon bees and various other insects from her hands.
In the Srimad Bhagavatam, Chitralekha shrinks Aniruddha down to the size of a doll and brings him to Usha's palace.
According to different sources, two of the eight classical siddhis are aṇimā and mahimā—the ability to shrink to the size of an atom and to expand to an infinitely large size respectively.
Greco-Roman literature
In some tellings of the myth of Tithonus, who is granted immortality but not eternal youth, his continued aging causes him to eventually become a cicada. A similar story is told about the Cumaean Sibyl in Ovid's Metamorphoses, in which her wish for longevity results in her aging body gradually shrinking, causing her to become small enough to be kept in a jar. The Metamorphoses also includes a story in which Athena transforms Arachne into a spider such that "her whole body became tiny."
According to Porphyry, the love god Eros grows when he is near his brother Anteros, but shrinks back down to his previous small form when they are apart.
Irish mythology
According to one variant of a story pertaining to the goddess Áine, her son Gerald FitzGerald has the ability to change his size, which he does when he shrinks himself down in order to jump into a bottle.
Modern depictions
In Journey to the West, Sun Wukong wields a staff called the Ruyi Jingu Bang which he can command to shrink down to the size of a needle or expand to gigantic proportions.
In one story narrated in the Norske Folkeeventyr, a tiny character called Doll i' the Grass accidentally falls into a body of water and ends up normal-sized when she is brought out by a merman.
In Lewis Carroll's Alice's Adventures in Wonderland (1865), the protagonist Alice grows or shrinks as she eats foodstuffs or drinks potions.
The first motion picture to depict a character changing size is Georges Méliès' 1901 trick film The Dwarf and the Giant, in which Méliès portrays a man who splits into two differently-sized counterparts.
In science fiction
The novel The Food of the Gods and How It Came to Earth by H. G. Wells describes a kind of food that can accelerate and extend the growth process, which causes great upheaval when it is introduced to the world. Though one of Wells' lesser-known works, many of the features of the novel have been incorporated into other works, including a film adaptation.
One of the earliest lengthy depictions of size change in popular printed fiction was the 1890 adventure/science-fiction novel by Polish scientific researcher and author Erazm Majewski, Doktor Muchołapski. Fantastyczne przygody w świecie owadów (Doctor Flycatcher. The Fantastic Adventures in the World of Insects); it was translated into several languages, including Czech and Russian, and was later referenced in another adventure/science fiction novel about size change, В Стране Дремучих Трав (In the Land of Dense Grasses), written in 1948 by Russian author Vladimir Bragin.
Other early works in the science fiction genre to feature characters changing size include the 1936 novella He Who Shrank by Henry Hasse, as well as the 1936 Metro-Goldwyn-Meyer film The Devil-Doll and the 1940 Paramount Pictures film Dr. Cyclops.
A year after its publication in 1956, the novel The Shrinking Man by Richard Matheson was adapted into the Universal Pictures film The Incredible Shrinking Man, which was followed by The Incredible Shrinking Woman in 1981.
Size alteration was also a common motif of many films directed by Bert I. Gordon, including Beginning of the End, The Amazing Colossal Man, Attack of the Puppet People, Village of the Giants, and an adaptation of H. G. Wells' The Food of the Gods. Other science fiction and horror films released in the late 1950s and 1960s with enlargement or shrinking as a major plot element include Tarantula, The Phantom Planet, Fantastic Voyage (which was adapted into an animated television series of the same name), and Attack of the 50 Foot Woman—which got a remake in 1993 starring Daryl Hannah and served as inspiration for similar plot elements in films like The 30 Foot Bride of Candy Rock, Attack of the 60 Foot Centerfold, Monsters vs. Aliens and Attack of the 50 Foot Cheerleader.
The year 1989 saw the release of Disney's Honey, I Shrunk the Kids, which grossed $222 million (equivalent to $545.67 million in 2023) at the box office worldwide and spawned a media franchise consisting of two sequels, Honey, I Blew Up the Kid and Honey, We Shrunk Ourselves, as well as a television series and a few theme park attractions, including Honey, I Shrunk the Audience.
See also
Shapeshifting
Shrink ray
Square–cube law – a mathematical principle that explains why such resizing is not possible in real life
List of works of fiction about size change
References
Further reading
Glassy, Mark C. The Biology of Science Fiction Cinema. Jefferson, N.C.: McFarland. 2001.
External links
The Biology of B-Movie Monsters by Michael C. LaBarbera.
Fictional technology
Science fiction themes | Size change in fiction | [
"Physics",
"Mathematics"
] | 1,655 | [
"Fiction about size change",
"Quantity",
"Physical quantities",
"Size"
] |
1,626,958 | https://en.wikipedia.org/wiki/Commit%20%28data%20management%29 | In computer science and data management, a commit is the making of a set of tentative changes permanent, marking the end of a transaction and providing Durability to ACID transactions. A commit is an act of committing. The record of commits is called the commit log.
In terms of transactions, the opposite of commit is to discard the tentative changes of a transaction, a rollback.
The transaction, commit and rollback concepts are key to the ACID property of databases.
A COMMIT statement in SQL ends a transaction within a relational database management system (RDBMS) and makes all changes visible to other users. The general format is to issue a BEGIN WORK (or BEGIN TRANSACTION, depending on the database vendor) statement, one or more SQL statements, and then the COMMIT statement. Alternatively, a ROLLBACK statement can be issued, which undoes all the work performed since BEGIN WORK was issued. A COMMIT statement will also release any existing savepoints that may be in use.
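A minimal sketch of this pattern, using Python's built-in sqlite3 module, is shown below; the table and data are hypothetical, and the sqlite3 module opens the transaction implicitly before the first UPDATE rather than via an explicit BEGIN WORK statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")            # throwaway in-memory database
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 100)")
conn.commit()

try:
    # The module opens a transaction implicitly before the first UPDATE;
    # COMMIT then makes both changes visible together, or ROLLBACK discards both.
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
    conn.commit()                             # end the transaction; changes become permanent
except sqlite3.Error:
    conn.rollback()                           # undo all work done since the transaction began

print(conn.execute("SELECT name, balance FROM accounts").fetchall())
```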
See also
Atomic commit
Two-phase commit protocol
Three-phase commit protocol
References
Data management
SQL
Transaction processing
Database management systems | Commit (data management) | [
"Technology"
] | 220 | [
"Data management",
"Data"
] |
1,627,004 | https://en.wikipedia.org/wiki/Javelin%20argument | The javelin argument, credited to Lucretius, is an ancient logical argument that the universe, or cosmological space, must be infinite. The javelin argument was used to support the Epicurean thesis about the universe. It was also constructed to counter the Aristotelian view that the universe is finite.
Overview
Lucretius introduced the concept of the javelin argument in his discourse of space and how it can be bound. He explained:
For whatever bounds it, that thing must itself be bounded likewise; and to this bounding thing there must be a bound again, and so on for ever and ever throughout all immensity. Suppose, however, for a moment, all existing space to be bounded, and that a man runs forward to the uttermost borders, and stands upon the last verge of things, and then hurls forward a winged javelin,— suppose you that the dart, when hurled by the vivid force, shall take its way to the point the darter aimed at, or that something will take its stand in the path of its flight, and arrest it? For one or other of these things must happen. There is a dilemma here that you never can escape from.
The javelin argument has two implications. If the hurled javelin flew onwards unhindered, it meant that the man was not at the edge of the universe, because there is something beyond the edge into which the weapon flew. On the other hand, if it did not, the man was still not at the edge, because there must be an obstruction beyond him that stopped the javelin. However, the argument assumes incorrectly that a finite universe must necessarily have a "limit" or edge. The argument fails in the case that the universe might be shaped like the surface of a hypersphere or torus. (Consider a similar fallacious argument that the Earth's surface must be infinite in area because otherwise one could go to the Earth's edge and throw a javelin, proving that the Earth's surface continued wherever the javelin hit the ground.)
References
Arguments
Epicureanism
Atomism
Ancient Greek physics
Physical cosmology | Javelin argument | [
"Physics",
"Astronomy"
] | 426 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Physical cosmology",
"Astrophysics"
] |
1,627,114 | https://en.wikipedia.org/wiki/Baker%27s%20map | In dynamical systems theory, the baker's map is a chaotic map from the unit square into itself. It is named after a kneading operation that bakers apply to dough: the dough is cut in half, and the two halves are stacked on one another, and compressed.
The baker's map can be understood as the bilateral shift operator of a bi-infinite two-state lattice model. The baker's map is topologically conjugate to the horseshoe map. In physics, a chain of coupled baker's maps can be used to model deterministic diffusion.
As with many deterministic dynamical systems, the baker's map is studied by its action on the space of functions defined on the unit square. The baker's map defines an operator on the space of functions, known as the transfer operator of the map. The baker's map is an exactly solvable model of deterministic chaos, in that the eigenfunctions and eigenvalues of the transfer operator can be explicitly determined.
Formal definition
There are two alternative definitions of the baker's map which are in common use. One definition folds over or rotates one of the sliced halves before joining it (similar to the horseshoe map) and the other does not.
The folded baker's map acts on the unit square as S(x, y) = (2x, y/2) for 0 ≤ x < 1/2, and S(x, y) = (2 − 2x, 1 − y/2) for 1/2 ≤ x ≤ 1.
When the upper section is not folded over, the map may be written as S(x, y) = (2x, y/2) for 0 ≤ x < 1/2, and S(x, y) = (2x − 1, (y + 1)/2) for 1/2 ≤ x ≤ 1; equivalently, S(x, y) = (2x mod 1, (y + ⌊2x⌋)/2).
The folded baker's map is a two-dimensional analog of the tent map f(x) = 2x for 0 ≤ x < 1/2 and f(x) = 2(1 − x) for 1/2 ≤ x ≤ 1,
while the unfolded map is analogous to the Bernoulli map. Both maps are topologically conjugate. The Bernoulli map can be understood as the map that progressively lops digits off the dyadic expansion of x. Unlike the tent map, the baker's map is invertible.
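The following minimal sketch, written to match the formulas above (and not drawn from any particular reference implementation), applies the two maps numerically:

```python
# Minimal sketch of the folded and unfolded baker's maps on the unit square.
def folded_baker(x, y):
    """One application of the folded baker's map."""
    if x < 0.5:
        return 2 * x, y / 2
    return 2 - 2 * x, 1 - y / 2          # right half is flipped before stacking

def unfolded_baker(x, y):
    """One application of the unfolded baker's map."""
    if x < 0.5:
        return 2 * x, y / 2
    return 2 * x - 1, (y + 1) / 2        # right half is stacked without flipping

# Iterate a sample point a few times to watch the stretching and folding.
pt = (0.3, 0.7)
for _ in range(5):
    pt = folded_baker(*pt)
    print(tuple(round(c, 6) for c in pt))
```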
Properties
The baker's map preserves the two-dimensional Lebesgue measure.
The map is strong mixing and it is topologically mixing.
The transfer operator maps functions on the unit square to other functions on the unit square; since the baker's map is invertible and preserves Lebesgue measure, it is given by composition with the inverse map, (Lf)(x, y) = f(S⁻¹(x, y)).
The transfer operator is unitary on the Hilbert space of square-integrable functions on the unit square. The spectrum is continuous, and because the operator is unitary the eigenvalues lie on the unit circle. The transfer operator is not unitary on the space of functions polynomial in the first coordinate and square-integrable in the second. On this space, it has a discrete, non-unitary, decaying spectrum.
As a shift operator
The baker's map can be understood as the two-sided shift operator on the symbolic dynamics of a one-dimensional lattice. Consider, for example, the bi-infinite string
σ = (…, σ_{-2}, σ_{-1}, σ_0, σ_1, σ_2, …),
where each position in the string may take one of the two binary values 0 or 1. The action of the shift operator on this string is
(Tσ)_k = σ_{k+1},
that is, each lattice position is shifted over by one to the left. The bi-infinite string may be represented by two real numbers as
x = Σ_{k≥0} σ_k 2^{-(k+1)}
and
y = Σ_{k≥1} σ_{-k} 2^{-k}.
In this representation, the shift operator has the form
(x, y) → (2x mod 1, (y + ⌊2x⌋)/2),
which is seen to be the unfolded baker's map given above.
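Under the digit convention used above, a short numerical check that one application of the unfolded map shifts the symbol string by one position might look like this; the bit strings are hypothetical and truncated, so the correspondence is checked only to the precision of the truncation.

```python
# Numerical check: the unfolded baker's map acts as a left shift on binary digits.
bits_future = [1, 0, 1, 1, 0, 0, 1, 0]   # sigma_0, sigma_1, ... (hypothetical)
bits_past   = [0, 1, 1, 0, 1, 0, 0, 1]   # sigma_-1, sigma_-2, ... (hypothetical)

x = sum(b / 2 ** (k + 1) for k, b in enumerate(bits_future))
y = sum(b / 2 ** (k + 1) for k, b in enumerate(bits_past))

# One application of the unfolded baker's map.
x_new, y_new = (2 * x, y / 2) if x < 0.5 else (2 * x - 1, (y + 1) / 2)

# Shift the symbol string one place to the left: sigma_0 moves into the "past".
shifted_future = bits_future[1:]
shifted_past = [bits_future[0]] + bits_past

print(abs(x_new - sum(b / 2 ** (k + 1) for k, b in enumerate(shifted_future))) < 1e-9)
print(abs(y_new - sum(b / 2 ** (k + 1) for k, b in enumerate(shifted_past))) < 1e-9)
```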
See also
Bernoulli process
References
Ronald J. Fox, "Construction of the Jordan basis for the Baker map", Chaos, 7 p 254 (1997)
Dean J. Driebe, Fully Chaotic Maps and Broken Time Symmetry, (1999) Kluwer Academic Publishers, Dordrecht, Netherlands (Exposition of the eigenfunctions of the Baker's map).
Chaotic maps
Exactly solvable models
Articles containing video clips | Baker's map | [
"Mathematics"
] | 705 | [
"Functions and mappings",
"Mathematical objects",
"Mathematical relations",
"Chaotic maps",
"Dynamical systems"
] |
1,627,125 | https://en.wikipedia.org/wiki/Glucose%20meter | A glucose meter, also referred to as a "glucometer", is a medical device for determining the approximate concentration of glucose in the blood. It can also be a strip of glucose paper dipped into a substance and measured to the glucose chart. It is a key element of glucose testing, including home blood glucose monitoring (HBGM) performed by people with diabetes mellitus or hypoglycemia. A small drop of blood, obtained from slightly piercing a fingertip with a lancet, is placed on a disposable test strip that the meter reads and uses to calculate the blood glucose level. The meter then displays the level in units of mg/dL or mmol/L.
Since approximately 1980, a primary goal of the management of type 1 diabetes and type 2 diabetes mellitus has been achieving closer-to-normal levels of glucose in the blood for as much of the time as possible, guided by HBGM several times a day. The benefits include a reduction in the occurrence rate and severity of long-term complications from hyperglycemia as well as a reduction in the short-term, potentially life-threatening complications of hypoglycemia.
History
Leland Clark presented his first paper about the oxygen electrode, later named the Clark electrode, on 15 April 1956, at a meeting of the American Society for Artificial Organs during the annual meetings of the Federated Societies for Experimental Biology.
In 1962, Clark and Ann Lyons from the Cincinnati Children's Hospital developed the first glucose enzyme electrode. This biosensor was based on a thin layer of glucose oxidase (GOx) on an oxygen electrode. Thus, the readout was the amount of oxygen consumed by GOx during the enzymatic reaction with the substrate glucose. This publication became one of the most often cited papers in life sciences. Due to this work he is considered the “father of biosensors,” especially with respect to the glucose sensing for diabetes patients.
Another early glucose meter was the Ames Reflectance Meter by Anton H. Clemens. It was used in American hospitals in the 1970s. A moving needle indicated the blood glucose after about a minute.
Home glucose monitoring was demonstrated to improve glycemic control of type 1 diabetes in the late 1970s, and the first meters were marketed for home use around 1981. The two models initially dominant in North America in the 1980s were the Glucometer, introduced in November 1981, whose trademark is owned by Bayer, and the Accu-Chek meter (by Roche). Consequently, these brand names have become synonymous with the generic product to many health care professionals. In Britain, a health care professional or a patient may refer to "taking a BM": "Mrs X's BM is 5", etc. BM stands for Boehringer Mannheim, now part of Roche, who produce test strips called 'BM-test' for use in a meter.
In North America, hospitals resisted adoption of meter glucose measurements for inpatient diabetes care for over a decade. Managers of laboratories argued that the superior accuracy of a laboratory glucose measurement outweighed the advantage of immediate availability and made meter glucose measurements unacceptable for inpatient diabetes management. Patients with diabetes and their endocrinologists eventually persuaded acceptance. Prior to its discontinuation in July 2021, the YSI 2300 STAT PLUS Glucose and Lactate Analyzer was widely accepted as the de facto standard for reference measurements and system calibration by most manufacturers of glucometers for the past 30 years, despite there being no such regulatory requirement.
Home glucose testing was adopted for type 2 diabetes more slowly than for type 1, and a large proportion of people with type 2 diabetes have never been instructed in home glucose testing. This has mainly come about because health authorities are reluctant to bear the cost of the test strips and lancets.
Non-meter test strips
Test strips that changed color and could be read visually, without a meter, have been widely used since the 1980s. They had the added advantage that they could be cut longitudinally to save money. Critics argued that test strips read by eye are not as accurate or convenient as meter testing. The manufacturer cited studies that show the product is just as effective despite not giving an answer to one decimal place, something they argue is unnecessary for control of blood sugar. This debate also happened in Germany where "Glucoflex-R" was an established strip for type 2 diabetes. As meter accuracy and insurance coverage improved, they lost popularity.
"Glucoflex-R" is Australia manufacturer National Diagnostic Products alternative to the BM test strip. It has versions that can be used either in a meter or read visually. It is also marketed under the brand name Betachek. On May 1, 2009, the UK distributor Ambe Medical Group reduced the price of their "Glucoflex-R" test strip to the NHS, by approximately 50%.
Types of meters
Hospital glucose meters
Special glucose meters for multi-patient hospital use are now used. These provide more elaborate quality control records. Their data handling capabilities are designed to transfer glucose results into electronic medical records and the laboratory computer systems for billing purposes.
Test strip meters
There are several key characteristics of glucose meters which may differ from model to model:
Size: The typical size is smaller than the palm of the hand. They are battery-powered.
Test strips: A consumable element, different for each meter, containing spots impregnated with glucose oxidase, which reacts with glucose, and other components. A drop of blood is absorbed by a spot for each measurement. Some models use single-use plastic test strips with a spot; other models use discs, drums, or cartridges with multiple spots to make several readings.
Coding: Since test strips may vary from batch to batch, some models require a code to be provided, either by the user or on a plug-in chip supplied with each batch of test strips, to calibrate the meter to the strips of the batch. An incorrect code can cause errors of up to 4 mmol/L (72 mg/dL), with possibly serious consequences, including risk of hypoglycemia. Some test media contain the code information in the strip.
Volume of blood sample: The size of the drop of blood needed by different models varies from 0.3 to 1 μl. Older models required larger blood samples, usually defined as a "hanging drop" from the fingertip. Smaller volume requirements reduce the frequency of pricks that do not produce enough blood.
Alternate site testing: Smaller drop volumes have enabled "alternate site testing" – pricking the forearms or other less sensitive areas instead of the fingertips. Manufacturers recommend that this type of testing should only be used when blood glucose levels are stable, such as before meals, when fasting, or just before going to sleep.
Duration of test: The time it takes for a reading to be displayed may range from 3 to 60 seconds from application of blood for different models.
Display: The glucose value in mg/dL or mmol/L (1 mmol/L = 18.0 mg/dL) is displayed on a digital display. Different countries use different measurement units: for example mg/dL are used in the US, France, Japan, Iran, Israel, and India; mmol/L are used in Australia, Canada, China, and the UK. In Germany both units are used. Many meters can display either unit of measure. Instances have been published in which a patient has interpreted a reading in mmol/L as a very low reading in mg/dL or vice versa. Usually mmol/L readings have a decimal point and mg/dL readings do not.
Countries that use mmol/L include Australia, Canada, China, Croatia, Czech Republic, Denmark, Finland, Hong Kong, Hungary, Iceland, Ireland, Jamaica, Kazakhstan, Latvia, Lithuania, Malaysia, Malta, Netherlands, New Zealand, Norway, Russia, Slovakia, Slovenia, South Africa, Sweden, Switzerland, and United Kingdom.
Countries that use mg/dL include Algeria, Argentina, Austria, Bangladesh, Belgium, Brazil, Chile, Columbia, Cyprus, Ecuador, Egypt, France, Georgia, Germany, Greece, India, Indonesia, Iran, Israel, Italy, Japan, Jordan, Korea, Lebanon, Mexico, Peru, Poland, Portugal, South Korea, Spain, Syria, Taiwan, Thailand, Tunisia, Turkey, United Arab Emirates, United States, Uruguay, Venezuela, and Yemen.
Glucose vs. plasma glucose: Glucose levels in plasma (one of the components of blood) are higher than glucose measurements in whole blood; the difference is about 11% when the hematocrit is normal. This is important because home blood glucose meters measure the glucose in whole blood while most lab tests measure the glucose in plasma. Currently, there are many meters on the market that give results as "plasma equivalent," even though they are measuring whole blood glucose. The plasma equivalent is calculated from the whole blood glucose reading using an equation built into the glucose meter. This allows patients to easily compare their glucose measurements in a lab test and at home. It is important for patients and their health care providers to know whether the meter gives its results as "whole blood equivalent" or "plasma equivalent." One model measures beta-hydroxybutyrate in the blood to detect ketosis for measuring both unhealthy ketoacidosis and healthy nutritional ketosis.
Memory and timestamping: Most meters include a memory to store test results, timestamped by a clock set by the user, and many can display an average of recent readings. Stored data will only reflect trends accurately if the clock is set to approximately the right time.
Data transfer: Some meters can transfer stored data, typically to a computer or mobile phone running diabetes management software. Meters have been combined with devices such as insulin injection devices, PDAs, and cellular transmitters.
Cost
The cost of home blood glucose monitoring can be substantial due to the cost of the test strips. In 2006, the US cost to consumers of each glucose strip ranged from about US$0.35 to $1.00. Manufacturers often provide meters at no cost to encourage use of the profitable test strips. Type 1 diabetics may test as often as 4 to 10 times a day due to the dynamics of insulin adjustment, whereas type 2 typically test less frequently, especially when insulin is not part of treatment. In the UK, where the National Health Service (NHS) rather than patients pay for medications including test strips, a 2015 study on the comparative cost-effectiveness of all options for the self-monitoring of blood glucose funded by the NHS uncovered considerable variation in the price charged, which could not be explained by the availability of advanced meter features. It estimated that a total of £12m was invested in providing 42 million self-monitoring blood glucose tests with systems that failed to meet acceptable accuracy standards, and efficiency savings of £23.2m per annum were achievable if the NHS were to disinvest from technologies providing less functionality than available alternatives, but at a much higher price.
Batches of counterfeit test strips for some meters were found in the United States, producing erratic test results that do not meet the legitimate manufacturer's performance specifications.
Noninvasive meters
The search for a successful technique began about 1975 and has continued to the present without a clinically or commercially viable product. To date, only one such product has ever been approved for sale by the FDA, based on a technique for electrically pulling glucose through intact skin, and it was withdrawn after a short time owing to poor performance and occasional damage to the skin of users.
Continuous glucose monitors
Continuous glucose monitor systems can consist of a disposable sensor placed under the skin, a transmitter connected to the sensor and a reader that receives and displays the measurements. The sensor can be used for several days before it needs to be replaced. The devices provide real-time measurements, and reduce the need for fingerprick testing of glucose levels. A drawback is that the meters are not as accurate because they read the glucose levels in the interstitial fluid which lags behind the levels in the blood. Continuous blood glucose monitoring systems are also relatively expensive.
Accuracy
Accuracy of glucose meters is a common topic of clinical concern. Blood glucose meters must meet accuracy standards set by the International Organization for Standardization (ISO). According to ISO 15197, blood glucose meters must provide results that are within ±15% of a laboratory standard for concentrations at or above 100 mg/dL, or within ±15 mg/dL for concentrations below 100 mg/dL, at least 95% of the time. However, a variety of factors can affect the accuracy of a test. Factors affecting accuracy of various meters include calibration of the meter, ambient temperature, pressure used to wipe off the strip (if applicable), size and quality of blood sample, high levels of certain substances (such as ascorbic acid) in blood, hematocrit, dirt on the meter, humidity, and aging of test strips. Models vary in their susceptibility to these factors and in their ability to prevent or warn of inaccurate results with error messages. The Clarke Error Grid has been a common way of analyzing and displaying accuracy of readings related to management consequences. More recently an improved version of the Clarke Error Grid has come into use, known as the Consensus Error Grid. Older blood glucose meters often need to be "coded" with the lot of test strips used; otherwise, the accuracy of the blood glucose meter may be compromised due to lack of calibration.
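As a rough illustration of the per-reading ISO 15197 criterion just described, a sketch of the check (using the conversion of 18.0 mg/dL per mmol/L noted earlier) might look like the following; the meter/reference pairs are hypothetical, and the full standard additionally requires that 95% of readings pass.

```python
MG_PER_MMOL = 18.0   # 1 mmol/L of glucose is approximately 18.0 mg/dL

def within_iso_15197(meter_mg_dl, reference_mg_dl):
    """True if a single reading meets the ISO 15197 per-reading limits:
    within +/-15 mg/dL below 100 mg/dL, within +/-15% at or above 100 mg/dL."""
    if reference_mg_dl < 100:
        return abs(meter_mg_dl - reference_mg_dl) <= 15
    return abs(meter_mg_dl - reference_mg_dl) <= 0.15 * reference_mg_dl

# Hypothetical (meter, laboratory reference) pairs in mg/dL.
pairs = [(92, 85), (110, 100), (260, 230), (55, 72)]
for meter, ref in pairs:
    ok = within_iso_15197(meter, ref)
    print(f"{meter} vs {ref} mg/dL ({ref / MG_PER_MMOL:.1f} mmol/L reference): {ok}")
```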
Future
One noninvasive glucose meter has been approved by the U.S. FDA: The GlucoWatch G2 Biographer made by Cygnus Inc. The device was designed to be worn on the wrist and used electric fields to draw out body fluid for testing. The device did not replace conventional blood glucose monitoring. One limitation was that the GlucoWatch was not able to cope with perspiration at the measurement site. Sweat must be allowed to dry before measurement can resume. Due to this limitation and others, the product is no longer on the market.
Noninvasive blood glucose measurement by spectroscopic methods in the near-infrared (NIR) range, using extracorporeal measuring devices, has not reached the market successfully, because such devices measure sugar in body tissue rather than in the blood itself. To determine blood glucose, the measuring beam of infrared light has to penetrate the tissue far enough to reach the blood.
There are currently three CGMS (continuous glucose monitoring systems) available. The first is Medtronic's Minimed Paradigm RTS with a sub-cutaneous probe attached to a small transmitter (roughly the size of a quarter) that sends interstitial glucose levels to a small pager-sized receiver every five minutes. The Dexcom System is another, available in the US in two different generations as of the first quarter of 2016: the G4 and the G5. It is a hypodermic probe with a small transmitter. The receiver is about the size of a cell phone and can operate up to twenty feet from the transmitter. The Dexcom G4 transmits via radio frequency and requires a dedicated receiver. The G5 version utilizes Bluetooth low energy for data transmission, and can transmit data directly to a compatible cellular telephone. Currently, Apple's iPhone and Android devices can be used as a receiver. Aside from a two-hour calibration period, monitoring is logged at five-minute intervals for up to 1 week. The user can set the high and low glucose alarms. The third CGMS available is the FreeStyle Navigator from Abbott Laboratories.
There is currently an effort to develop an integrated treatment system with a glucose meter, insulin pump, and wristop controller, as well as an effort to integrate the glucose meter and a cell phone. Testing strips are proprietary and available only through the manufacturer (no insurance availability). These "Glugophones" are currently offered in three forms: as a dongle for the iPhone, an add-on pack for LG model UX5000, VX5200, and LX350 cell phones, and an add-on pack for the Motorola Razr cell phone. In the US, this limits providers to AT&T and Verizon. Similar systems have been tested for a longer time in Finland.
Recent advances in cellular data communications technology have enabled the development of glucose meters that directly integrate cellular data transmission capability, enabling the user to both transmit glucose data to the medical caregiver and receive direct guidance from the caregiver on the screen of the glucose meter. The first such device, from Telcare, Inc., was exhibited at the 2010 CTIA International Wireless Expo, where it won an E-Tech award. This device then underwent clinical testing in the US and internationally.
In early 2014 Google reported testing prototypes of contact lenses that monitor glucose levels and alert users when glucose levels cross certain thresholds. Apple has patented methods for determining blood sugar levels by absorption spectroscopy, as well as by analyzing exhaled air in its electronic devices.
Technology
Many glucose meters employ the oxidation of glucose to gluconolactone catalyzed by glucose oxidase (sometimes known as GOx). Others use a similar reaction catalysed instead by another enzyme, glucose dehydrogenase (GDH). This has the advantage of sensitivity over glucose oxidase but is more susceptible to interfering reactions with other substances.
The first-generation devices relied on the same colorimetric reaction that is still used nowadays in glucose test strips for urine. Besides glucose oxidase, the test kit contains a benzidine derivative, which is oxidized to a blue polymer by the hydrogen peroxide formed in the oxidation reaction. The disadvantage of this method was that the test strip had to be developed after a precise interval (the blood had to be washed away), and the meter needed to be calibrated frequently.
Most glucometers today use an electrochemical method. Test strips contain a capillary that sucks up a reproducible amount of blood. The glucose in the blood reacts with an enzyme electrode containing glucose oxidase (or dehydrogenase). The enzyme is reoxidized with an excess of a mediator reagent, such as a ferricyanide ion, a ferrocene derivative or osmium bipyridyl complex. The mediator in turn is reoxidized by reaction at the electrode, which generates an electric current. In order for the mediator to operate over long timeframes, it needs to be stable in both oxidised and reduced states. This is to allow for continuous regeneration of the oxidised form of the mediator for shuttling of electrons from enzyme to active site. Osmium-based polypyridyl redox complexes and polymers are attractive candidates as mediators due to their stability in oxidised and reduced forms, tunable redox potential, ease of co-immobilisation and ability to operate at low potentials.
The total charge passing through the electrode is proportional to the amount of glucose in the blood that has reacted with the enzyme. The coulometric method is a technique where the total amount of charge generated by the glucose oxidation reaction is measured over a period of time. The amperometric method is used by some meters and measures the electric current generated at a specific point in time by the glucose reaction. This is analogous to throwing a ball and using the speed at which it is travelling at a point in time to estimate how hard it was thrown. The coulometric method can allow for variable test times, whereas the test time on a meter using the amperometric method is always fixed. Both methods give an estimation of the concentration of glucose in the initial blood sample.
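To make the coulometric idea concrete, the sketch below integrates a hypothetical current trace to obtain the total charge and converts it to a glucose concentration via Faraday's law; the assumption of two electrons transferred per glucose molecule and all numerical values are illustrative, not the specification of any actual meter.

```python
FARADAY = 96485.0          # coulombs per mole of electrons
ELECTRONS_PER_GLUCOSE = 2  # assumed electron transfer per glucose molecule (illustrative)

# Hypothetical current samples (amperes) taken every 0.5 s during the test.
currents = [95e-6, 80e-6, 68e-6, 58e-6, 50e-6, 43e-6, 37e-6, 32e-6, 28e-6, 24e-6]
dt = 0.5

charge = sum(i * dt for i in currents)                  # coulometric: total charge (C)
moles_glucose = charge / (ELECTRONS_PER_GLUCOSE * FARADAY)

sample_volume_l = 0.3e-6                                # 0.3 microlitre blood sample
concentration_mmol_l = moles_glucose / sample_volume_l * 1000
print(f"{charge:.2e} C -> {concentration_mmol_l:.2f} mmol/L (illustrative only)")
```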
The same principle is used in test strips that have been commercialized for the detection of diabetic ketoacidosis (DKA). These test strips use a beta-hydroxybutyrate-dehydrogenase enzyme instead of a glucose oxidizing enzyme and have been used to detect and help treat some of the complications that can result from prolonged hyperglycemia.
Blood alcohol sensors using the same approach, but with alcohol dehydrogenase enzymes, have been tried and patented but have not yet been successfully commercially developed.
Meter use for hypoglycemia
Although the apparent value of immediate measurement of blood glucose might seem to be higher for hypoglycemia (low sugar) than hyperglycemia (high sugar), meters have been less useful. The primary problems are precision and ratio of false positive and negative results. An imprecision of ±15% is less of a problem for high glucose levels than low. There is little difference in the management of a glucose of 200 mg/dL compared with 260 (i.e., a "true" glucose of 230±15%), but a ±15% error margin at a low glucose concentration brings greater ambiguity with regards to glucose management.
The imprecision is compounded by the relative likelihoods of false positives and negatives in populations with diabetes and those without. People with type 1 diabetes usually have a wider range of glucose levels, and glucose peaks above normal, often ranging from 40 to 500 mg/dL (2.2 to 28 mmol/L), and when a meter reading of 50 or 70 (2.8 or 3.9 mmol/L) is accompanied by their usual hypoglycemic symptoms, there is little uncertainty about the reading representing a "true positive" and little harm done if it is a "false positive." However, the incidence of hypoglycemia unawareness, hypoglycemia-associated autonomic failure (HAAF) and faulty counterregulatory response to hypoglycemia make the need for greater reliability at low levels particularly urgent in patients with type 1 diabetes mellitus, while this is seldom an issue in the more common form of the disease, type 2 diabetes mellitus.
In contrast, people who do not have diabetes may periodically have hypoglycemic symptoms but may also have a much higher rate of false positives to true, and a meter is not accurate enough to base a diagnosis of hypoglycemia upon. A meter can occasionally be useful in the monitoring of severe types of hypoglycemia (e.g., congenital hyperinsulinism) to ensure that the average glucose when fasting remains above 70 mg/dL (3.9 mmol/L).
See also
ISO/IEEE 11073
References
Diabetes-related supplies and medical equipment
Physiological instruments
Medical testing equipment
Drugs developed by Hoffmann-La Roche
American inventions | Glucose meter | [
"Technology",
"Engineering"
] | 4,667 | [
"Physiological instruments",
"Measuring instruments"
] |
1,627,142 | https://en.wikipedia.org/wiki/Flight%20spare | A flight spare is a copy of a spacecraft or spacecraft part which is held in reserve in case it is needed for the mission. Flight spares are built to the same specifications as the original equipment (the "flight model"), and can be substituted in the case of damage or other problems with the flight model, reducing launch delays. The extra cost of building a flight spare can be justified by the enormous cost of delaying a launch by even a short amount of time.
Primary function
Flight spares are constructed as contingencies. As such, spare parts may be swapped onto a craft before launch, or completed spare spacecraft may be launched if the flight model is lost.
NASA has two basic types of spares, development spares and operational spares. NASA makes a determination about which parts need spares based on whether parts are custom built, and the lead-time for procurement. It also makes determinations about the quantities of spares, based on whether the part is critical to system operation, failure rate, and the expected life of the part.
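A toy heuristic along these lines might weigh the same factors; the sketch below is purely illustrative, and the formula and numbers are invented for the example rather than taken from NASA's actual sparing model.

```python
import math

# Purely illustrative sparing heuristic echoing the factors listed above:
# criticality, failure rate, mission duration, and procurement lead time.
# Not NASA's actual model; weights and formula are invented.

def suggested_spares(critical: bool, failures_per_year: float,
                     mission_years: float, lead_time_years: float) -> int:
    expected_failures = failures_per_year * mission_years
    # also cover failures that could occur while a replacement is being procured
    procurement_buffer = failures_per_year * lead_time_years
    count = math.ceil(expected_failures + procurement_buffer)
    return max(count, 1) if critical else count

# A critical part failing ~0.2 times/year on a 5-year mission with a 2-year lead time
print(suggested_spares(critical=True, failures_per_year=0.2,
                       mission_years=5, lead_time_years=2))  # 2
```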
The flight spare can also be useful during a space mission if a change to the original plan is required, since the effect of changes can be safely tested on the ground.
Reusage
Flight spares that go unused in their initial missions are still considered valuable. A 2017 NASA report on flight spare inventory control mentions hundreds of millions of dollars' worth of inventory, not all of it catalogued properly.
New missions for old hardware
Individual spare components manufactured for one mission may eventually fly on another. As a cost-saving measure, the Magellan spacecraft was made largely out of such parts.
Flight spares on display
Since few space probes return to Earth intact, flight spares are useful for posterity, and may go to museums. The Mariner 10 flight spare is such an example.
References
See also
Orbit Replaceable Units
Spare part
Spacecraft components | Flight spare | [
"Astronomy"
] | 387 | [
"Astronomy stubs",
"Spacecraft stubs"
] |
1,627,370 | https://en.wikipedia.org/wiki/Wood%20shaper | A wood shaper (usually just shaper in North America or spindle moulder in the UK and Europe), is a stationary woodworking machine in which a vertically oriented spindle drives one or more stacked cutter heads to mill profiles on wood stock.
Description
Wood shaper cutter heads typically have three blades and turn at one-half to one-eighth the speed of the smaller, much less expensive two-bladed bits used in a hand-held wood router. Adapters that allow a shaper to drive router bits are sold, but they are a compromise on several levels, as are router tables, which are cost-saving adaptations of hand-held routers mounted to comparatively light-duty dedicated work tables.
The wood being fed into a moulder is commonly referred to as either stock or blanks. The spindle may be raised and lowered relative to the shaper's table, and rotates between 3,000 and 10,000 rpm, with stock running along a vertical fence.
Being both larger and much more powerful than routers, shapers can cut much larger profiles, such as those for crown moulding and raised-panel doors, and can readily drive custom-made bits fabricated with unique profiles. Shapers feature belt-driven motors, which run much more quietly and smoothly than the direct-drive motors of routers, which typically spin at 10,000 to 25,000 rpm. Speed adjustments are typically made by relocating the belts on a stepped pulley system, much like that on a drill press. Unlike routers, shapers can also run in reverse, which is necessary for some cuts.
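As a sketch of how a stepped-pulley drive sets spindle speed, the snippet below scales the motor speed by the ratio of pulley diameters; the motor speed and pulley sizes are invented for the example and do not describe any particular machine.

```python
# Rough sketch of a stepped-pulley speed change: the spindle turns at the motor
# speed multiplied by the ratio of motor-pulley diameter to spindle-pulley diameter.
# All numbers below are assumptions for illustration.

MOTOR_RPM = 3450  # a common induction-motor speed (assumed)

# (motor pulley diameter mm, spindle pulley diameter mm) for each belt position
PULLEY_STEPS = [(60, 60), (90, 45), (120, 45)]

for motor_d, spindle_d in PULLEY_STEPS:
    spindle_rpm = MOTOR_RPM * motor_d / spindle_d
    print(f"{motor_d}/{spindle_d} mm step -> {spindle_rpm:.0f} rpm")
# -> roughly 3450, 6900 and 9200 rpm in this made-up setup
```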
The most common form of wood shaper has a vertical spindle, some with tilting spindles or tables.
Shapers can be adapted to perform specialized cuts employing accessories such as sliding tables, tenon tables, tilting arbor, tenoning hoods, and interchangeable spindles. The standard US spindle shaft is , with on small shapers and 30 mm on European models. Most spindles are tall enough to accommodate more than one cutter head, allowing rapid tooling changes by raising or lowering desired heads into position. Additional spindles can be fitted with pre-spaced cutter heads when more are needed for a job than fit on one.
A wood moulder (American English) is similar to a shaper, but is a more powerful and complex machine with multiple cutting heads at both 90-degrees and parallel to its table. A wood shaper has only a single cutting head, mounted on a perpendicular axis to its table.
Safety
The primary safety feature on a wood shaper is a guard mounted above the cutter that protects hands and garments from being drawn into its blades. Jigs, fixtures such as hold-downs, and accessories such as featherboards also help prevent injury and generally result in better cuts. The starter, or fulcrum, pin is a metal rod that threads into the table a few inches away from the cutter, allowing stock to be fed into it in a freehand cut.
In addition to aiding productivity and setting a consistent rate of milling, a power feeder keeps appendages and garments out of harm's way. It may be multi-speed, and employ rubber wheels to feed stock past the cutter head.
Types
Single head moulder (a "shaper" in the US):
Have a top (horizontal) head only.
Cost less to buy, and are less complex and easier to both set up and run.
Multi Head Moulder (a "moulder" in the US):
Has multiple cutting heads.
Can process more work on complex jobs than a single head tool, in a single pass.
May have (in its standard form) up to four cutting heads, two parallel to the table and two perpendicular to it.
An alternative configuration has two bottom and two top heads, in order to size the lumber with the first top and the first bottom head, and then shape the lumber with the remaining top and bottom heads. Machines with two or more right heads are more common in the furniture industry to give the ability to run shorter stock and deeper more detailed cuts on the edge of the stock.
Tooling
Tooling refers to cutters, knives, and blades, as well as planer blades. Most blades are made from either a tool steel alloy known as high-speed steel (HSS) or from carbide. Cutter heads are normally made from either steel or aluminium. High-speed steel, carbide, aluminium, and the steel used for cutter heads all come in a wide variety of grades.
References
Bibliography
Welcome to the Architectural Woodwork Institute
External links
Shaper | Wood shaper | [
"Physics",
"Technology"
] | 933 | [
"Physical systems",
"Machines",
"Woodworking machines"
] |
1,627,473 | https://en.wikipedia.org/wiki/Anthophyllite | Anthophyllite is an orthorhombic amphibole mineral: ☐Mg2Mg5Si8O22(OH)2 (☐ is for a vacancy, a point defect in the crystal structure), magnesium iron inosilicate hydroxide. Anthophyllite is polymorphic with cummingtonite. Some forms of anthophyllite are lamellar or fibrous and are classed as asbestos. The name is derived from the Latin word anthophyllum, meaning clove, an allusion to the most common color of the mineral. The Anthophyllite crystal is characterized by its perfect cleavage along directions 126 degrees and 54 degrees.
Occurrence
Anthophyllite is the product of metamorphism of magnesium-rich rocks, especially ultrabasic igneous rocks and impure dolomitic shales. It also forms as a retrograde product rimming relict orthopyroxenes and olivine, and as an accessory mineral in cordierite-bearing gneisses and schists. Anthophyllite also occurs as a retrograde metamorphic mineral derived from ultramafic rocks along with serpentinite.
Occurrence in ultramafic rocks
Anthophyllite is formed by the breakdown of talc in ultramafic rocks in the presence of water and carbon dioxide, as a prograde metamorphic reaction. The proportion of carbon dioxide in the metamorphic fluid (XCO2) controls whether anthophyllite is produced: a higher XCO2 favors anthophyllite and lowers the temperature of the anthophyllite-in isograd.
Ultramafic rocks in purely hydrous, CO2-free environments tend instead to form serpentinite-antigorite-brucite-tremolite assemblages (depending on MgO content) or, at amphibolite to granulite metamorphic grade, metamorphic pyroxene or olivine. Metamorphic assemblages of ultramafic rocks containing anthophyllite are therefore indicative of at least greenschist facies metamorphism in the presence of carbon dioxide-bearing metamorphic fluids.
The typical metamorphic assemblage reactions for low-magnesian (<25% MgO) and high-magnesian (>25% MgO) ultramafic rocks are;
Olivine + Tremolite + Talc → Olivine + Tremolite + Anthophyllite (low MgO, >550 °C, XCO2 <0.6)
Talc + Tremolite + Magnesite → Tremolite + Anthophyllite + Magnesite (High MgO, >500 °C, XCO2 >0.6)
Talc + Magnesite + Tremolite → Anthophyllite + Tremolite + Magnesite (Low MgO, >500 °C, XCO2 >0.6)
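The thresholds in these reactions can be restated as a toy decision rule; the sketch below only encodes the MgO, temperature, and XCO2 limits quoted above and is not a petrogenetic model.

```python
# Toy classifier restating the thresholds given in the reactions above
# (MgO content, temperature, fluid XCO2) to indicate when anthophyllite
# is expected in the assemblage. A simplification for illustration only.

def anthophyllite_expected(mgo_wt_percent: float, temp_c: float, x_co2: float) -> bool:
    if mgo_wt_percent < 25:
        # low-MgO rocks: anthophyllite above ~550 C at low XCO2,
        # or above ~500 C when the fluid is CO2-rich
        if x_co2 < 0.6:
            return temp_c > 550
        return temp_c > 500
    # high-MgO rocks: anthophyllite only with CO2-rich fluid above ~500 C
    return x_co2 > 0.6 and temp_c > 500

print(anthophyllite_expected(20, 580, 0.3))  # True  (low MgO, hot, water-rich fluid)
print(anthophyllite_expected(30, 520, 0.8))  # True  (high MgO, CO2-rich fluid)
print(anthophyllite_expected(30, 520, 0.3))  # False (high MgO but CO2-poor fluid)
```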
Retrogressive anthophyllite is relatively rare in ultramafic rocks and is usually poorly developed, both because less energy is available to drive retrograde metamorphic reactions and because rock masses are generally dehydrated during metamorphism. Similarly, the need for a substantial carbon dioxide component in the metamorphic fluid restricts the appearance of anthophyllite as a retrograde mineral. The usual assemblage of retrograde-altered ultramafic rocks is therefore a serpentinite or talc-magnesite assemblage.
Retrograde anthophyllite is present most usually in shear zones where fracturing and shearing of the rocks provides a conduit for carbonated fluids during retrogression.
Fibrous anthophyllite
Fibrous anthophyllite is one of the six recognised types of asbestos. It was mined in Finland and also in Matsubase, Japan, where a large-scale open-cast asbestos mine and mill operated between 1883 and 1970.
In Finland anthophyllite asbestos was mined in two mines, the larger one Paakkila in the Tuusniemi commune started in 1918 and closed in 1975 due to the dust problems. The smaller mine, Maljasalmi in the commune of Outokumpu, was mined from 1944 to 1952. The anthophyllite was used in asbestos cement and for insulation, roofing material etc.
Anthophyllite is also known as azbolen asbestos.
References
Amphibole group
Magnesium minerals
Iron minerals
Asbestos
Orthorhombic minerals
Minerals in space group 62
Luminescent minerals | Anthophyllite | [
"Chemistry",
"Environmental_science"
] | 918 | [
"Luminescence",
"Toxicology",
"Asbestos",
"Luminescent minerals"
] |
1,627,853 | https://en.wikipedia.org/wiki/Newman%20projection | A Newman projection is a drawing that helps visualize the 3-dimensional structure of a molecule. This projection most commonly sights down a carbon-carbon bond, making it a very useful way to visualize the stereochemistry of alkanes. A Newman projection visualizes the conformation of a chemical bond from front to back, with the front atom represented by the intersection of three lines (a dot) and the back atom as a circle. The front atom is called proximal, while the back atom is called distal. This type of representation clearly illustrates the specific dihedral angle between the proximal and distal atoms.
This projection is named after American chemist Melvin Spencer Newman, who introduced it in 1952 as a partial replacement for Fischer projections, which are unable to represent conformations and thus conformers properly. This diagram style is an alternative to a sawhorse projection, which views a carbon–carbon bond from an oblique angle, or a wedge-and-dash style, such as a Natta projection. These other styles can indicate the bonding and stereochemistry, but not as much conformational detail.
A Newman projection can also be used to study cyclic molecules, such as the chair conformation of cyclohexane:
Because of the free rotation around single bonds, there are various conformations for a single molecule. Up to six unique conformations may be drawn for any given chemical bond. Each conformation is drawn by rotation of either the proximal or distal atom 60 degrees. Of these six conformations, three will be in a staggered conformation, while the other three will be in an eclipsed conformation. These six conformations can be represented in a relative energy diagram.
A staggered projection appears to have the surrounding substituents equidistant from one another. This kind of conformation tends to experience both anti and gauche interactions. Anti interactions occur between substituents (usually of the same type) sitting exactly opposite each other, at 180° on the Newman projection. Gauche interactions occur between substituents that are 60° apart on the projection. Anti interactions produce less steric strain than gauche interactions, but both produce less strain than an eclipsed conformation.
An eclipsed projection appears to have the surrounding species almost on top of each other. In reality, these species are in line with each other, but are drawn slightly staggered to help format the projection onto paper. These types of conformations are generally higher in energy due to increased bond strain. However, this strain can be somewhat lower if a hydrogen is eclipsed over a larger species, as opposed to two large species eclipsed over each other.
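The alternation between staggered minima and eclipsed maxima can be illustrated with a simple three-fold torsional potential; the barrier height used below is a rough, ethane-like value chosen only for the example.

```python
import math

# Sketch of a three-fold torsional potential, V(phi) = (V0/2)(1 + cos 3*phi),
# showing why the six conformations drawn every 60 degrees alternate between
# eclipsed maxima (0, 120, 240 deg) and staggered minima (60, 180, 300 deg).
# V0 ~ 12 kJ/mol is an assumed, roughly ethane-like barrier.

V0 = 12.0  # kJ/mol, assumed barrier height

def torsional_energy(dihedral_deg: float) -> float:
    return 0.5 * V0 * (1 + math.cos(math.radians(3 * dihedral_deg)))

for angle in range(0, 360, 60):
    kind = "eclipsed" if angle % 120 == 0 else "staggered"
    print(f"{angle:3d} deg  {kind:9s}  {torsional_energy(angle):4.1f} kJ/mol")
# eclipsed angles sit at the barrier top (~12 kJ/mol); staggered angles at ~0
```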
See also
Haworth projection
Fischer projection
Natta projection
Stereochemistry
References | Newman projection | [
"Physics",
"Chemistry"
] | 559 | [
"Spacetime",
"Stereochemistry",
"Space",
"nan"
] |