id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list)
|---|---|---|---|---|---|---|
15,956,651 | https://en.wikipedia.org/wiki/SN%202004gt | SN 2004gt was a type Ic supernova that happened in the interacting galaxy NGC 4038 on December 12, 2004. The event occurred in a region of condensed matter in the western spiral arm. The progenitor was not identified from older images of the galaxy, and is either a type WC Wolf-Rayet star with a mass over 40 times that of the Sun, or a star 20 to 40 times as massive as the Sun in a binary star system.
References
External links
Light curves and spectra on the Open Supernova Catalog
Simbad
Supernova remnants
Supernovae
Corvus (constellation)
20041212 | SN 2004gt | [
"Chemistry",
"Astronomy"
] | 126 | [
"Supernovae",
"Astronomical events",
"Constellations",
"Corvus (constellation)",
"Explosions"
] |
15,957,069 | https://en.wikipedia.org/wiki/SN%202007sr | SN 2007sr was a Type Ia supernova event that happened in the galaxy NGC 4038. It was announced on December 18, 2007, but was visible on images beginning December 7. It peaked at magnitude 12.7 on December 14.
References
External links
Light curves and spectra on the Open Supernova Catalog
Simbad
SN 2007sr
Supernova remnants
Supernovae
Corvus (constellation) | SN 2007sr | [
"Chemistry",
"Astronomy"
] | 82 | [
"Supernovae",
"Astronomical events",
"Constellations",
"Corvus (constellation)",
"Explosions"
] |
15,957,752 | https://en.wikipedia.org/wiki/WIDL%20%28Internet%20Standard%29 | WIDL (Web Interface Definition Language) is a 1997 proposal for describing interactions with web-based APIs. This interface description language is based on XML.
External links
Web Interface Definition Language (WIDL) at the W3C website
Web standards
XML markup languages | WIDL (Internet Standard) | [
"Technology"
] | 52 | [
"Computing stubs",
"World Wide Web stubs"
] |
15,958,803 | https://en.wikipedia.org/wiki/Splitting%20principle | In mathematics, the splitting principle is a technique used to reduce questions about vector bundles to the case of line bundles.
In the theory of vector bundles, one often wishes to simplify computations, say of Chern classes. Often computations are well understood for line bundles and for direct sums of line bundles. In this case the splitting principle can be quite useful.
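The splitting principle itself is not reproduced in this extract; the standard textbook formulation, with the notation (p, Y, L_i) chosen here rather than taken from the text, reads:

```latex
\textbf{Theorem (splitting principle).}\ Let $\xi \colon E \to X$ be a vector bundle of rank $n$
over a paracompact space $X$. There exists a space $Y = \mathrm{Fl}(E)$, called the flag bundle
associated to $E$, and a map $p \colon Y \to X$ such that:
\begin{itemize}
  \item the induced map on cohomology $p^* \colon H^*(X) \to H^*(Y)$ is injective, and
  \item the pullback bundle splits as a direct sum of line bundles:
        $p^*(E) \cong L_1 \oplus L_2 \oplus \cdots \oplus L_n.$
\end{itemize}
```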
The theorem above holds for complex vector bundles with integer coefficients or for real vector bundles with Z/2Z coefficients. In the complex case, the line bundles L_i or their first characteristic classes c_1(L_i) are called Chern roots.
The fact that p* : H*(X) → H*(Y) is injective means that any equation which holds in H*(Y) (say between various Chern classes) also holds in H*(X).
The point is that these equations are easier to understand for direct sums of line bundles than for arbitrary vector bundles, so equations should be understood in H*(Y) and then pushed down to H*(X).
Since vector bundles on X are used to define the K-theory group K(X), it is important to note that the pullback p* : K(X) → K(Y) is also injective for the map p in the above theorem.
The splitting principle admits many variations. The following, in particular, concerns real vector bundles and their complexifications:
Symmetric polynomial
Under the splitting principle, characteristic classes for complex vector bundles correspond to symmetric polynomials in the first Chern classes of complex line bundles; these are the Chern classes.
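As a sketch of this correspondence (standard formulas, with the Chern roots written x_i = c_1(L_i); the symbols are chosen here, not taken from the text): the total Chern class of the pulled-back bundle factors, and the k-th Chern class corresponds to the k-th elementary symmetric polynomial in the roots:

```latex
c(p^*E) = \prod_{i=1}^{n}(1 + x_i), \qquad
c_k(E) \;\longmapsto\; e_k(x_1,\dots,x_n) = \sum_{i_1 < \cdots < i_k} x_{i_1}\cdots x_{i_k}.
```

For example, c_1(E) corresponds to x_1 + ... + x_n and c_2(E) to the sum of the products x_i x_j over i < j.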
See also
K-theory
Grothendieck splitting principle for holomorphic vector bundles on the complex projective line
References
section 3.1
Raoul Bott and Loring Tu. Differential Forms in Algebraic Topology, section 21.
Characteristic classes
Vector bundles
Mathematical principles | Splitting principle | [
"Mathematics"
] | 316 | [
"Mathematical principles"
] |
15,959,032 | https://en.wikipedia.org/wiki/Infinitary%20combinatorics | In mathematics, infinitary combinatorics, or combinatorial set theory, is an extension of ideas in combinatorics to infinite sets.
Some of the things studied include continuous graphs and trees, extensions of Ramsey's theorem, and Martin's axiom.
Recent developments concern combinatorics of the continuum and combinatorics on successors of singular cardinals.
Ramsey theory for infinite sets
Write κ, λ for ordinals, m for a cardinal number (finite or infinite) and n for a natural number. Erdős and Rado introduced the notation

κ → (λ)^n_m

as a shorthand way of saying that every partition of the set [κ]^n of n-element subsets of κ into m pieces has a homogeneous set of order type λ. A homogeneous set is in this case a subset of κ such that every n-element subset is in the same element of the partition. When m is 2 it is often omitted. Such statements are known as partition relations.
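The definition has a familiar finite analogue: the finite relation 6 → (3)^2_2 is exactly the statement R(3,3) ≤ 6 of finite Ramsey theory, and for small parameters it can be verified by brute force. A minimal sketch (the helper name `arrows` is chosen here for illustration):

```python
from itertools import combinations, product

def arrows(N, order_type, n=2, m=2):
    """Check the finite partition relation N -> (order_type)^n_m by brute force:
    every m-coloring of the n-element subsets of {0..N-1} must admit a
    homogeneous set of size order_type (all its n-subsets in one color class)."""
    tuples = list(combinations(range(N), n))
    for coloring in product(range(m), repeat=len(tuples)):
        color = dict(zip(tuples, coloring))
        if not any(
            len({color[t] for t in combinations(H, n)}) == 1
            for H in combinations(range(N), order_type)
        ):
            return False  # found a coloring with no homogeneous set
    return True

# R(3,3) = 6: the relation 6 -> (3)^2_2 holds, but 5 -> (3)^2_2 fails
# (a pentagon-style coloring of pairs from a 5-element set has no
# monochromatic triangle).
print(arrows(6, 3))  # True
print(arrows(5, 3))  # False
```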
Assuming the axiom of choice, there are no ordinals κ with κ → (ω)^ω, so n is usually taken to be finite. An extension where n is almost allowed to be infinite is the notation

κ → (λ)^{<ω}_m

which is a shorthand way of saying that every partition of the set [κ]^{<ω} of finite subsets of κ into m pieces has a subset of order type λ such that for any finite n, all subsets of size n are in the same element of the partition. When m is 2 it is often omitted.
Another variation is the notation

κ → (λ, μ)^n

which is a shorthand way of saying that every coloring of the set [κ]^n of n-element subsets of κ with 2 colors has a subset of order type λ such that all of its n-element subsets have the first color, or a subset of order type μ such that all of its n-element subsets have the second color.
Some properties of this include: (in what follows κ is a cardinal)
In choiceless universes, partition properties with infinite exponents may hold, and some of them are obtained as consequences of the axiom of determinacy (AD). For example, Donald A. Martin proved that AD implies

ω1 → (ω1)^{ω1}_2
Strong colorings
Wacław Sierpiński showed that the Ramsey theorem does not extend to sets of size 2^ℵ0 by showing that 2^ℵ0 ↛ (ℵ1)^2_2. That is, Sierpiński constructed a coloring c of pairs of real numbers into two colors such that for every uncountable subset of real numbers X, c takes both colors on pairs from X. Taking any set of real numbers of size ℵ1 and applying the coloring of Sierpiński to it, we get that ℵ1 ↛ (ℵ1)^2_2. Colorings such as this are known as strong colorings and studied in set theory. Erdős, Hajnal and Rado introduced a similar notation as above for this.
Write κ, λ for ordinals, m for a cardinal number (finite or infinite) and n for a natural number. Then

κ ↛ [λ]^n_m

is a shorthand way of saying that there exists a coloring c of the set [κ]^n of n-element subsets of κ into m pieces such that every set of order type λ is a rainbow set. A rainbow set is in this case a subset A of κ such that [A]^n takes all m colors under c. When m is 2 it is often omitted. Such statements are known as negative square bracket partition relations.
Another variation is the notation

κ ↛ [λ; μ]^2_m

which is a shorthand way of saying that there exists a coloring c of the set [κ]^2 of 2-element subsets of κ with m colors such that for every subset A of order type λ and every subset B of order type μ, the set A × B takes all m colors under c.
Some properties of this include: (in what follows κ is a cardinal)
Large cardinals
Several large cardinal properties can be defined using this notation. In particular:
Weakly compact cardinals are those κ that satisfy κ → (κ)^2_2
α-Erdős cardinals κ(α) are the smallest that satisfy κ → (α)^{<ω}_2
Ramsey cardinals are those κ that satisfy κ → (κ)^{<ω}_2
Notes
References
Set theory
Combinatorics | Infinitary combinatorics | [
"Mathematics"
] | 701 | [
"Discrete mathematics",
"Mathematical logic",
"Set theory",
"Combinatorics"
] |
13,267,882 | https://en.wikipedia.org/wiki/Ate%20complex | In chemistry, an ate complex is a salt formed by the reaction of a Lewis acid with a Lewis base whereby the central atom (from the Lewis acid) increases its valence and gains a negative formal charge. (In this definition, the meaning of valence is equivalent to coordination number).
Often in chemical nomenclature the term ate is suffixed to the element in question. For example, the ate complex of a boron compound is called a borate. Thus trimethylborane and methyllithium react to form the ate compound Li[B(CH3)4], lithium tetramethylborate(1−). This concept was introduced by Georg Wittig in 1958. Ate complexes are common for metals, including the transition metals (groups 3–11), as well as the metallic or semi-metallic elements of groups 2, 12, and 13. They are also well-established for third-period or heavier elements of groups 14–18 in their higher oxidation states.
Ate complexes are a counterpart to onium ions.
Lewis acids form ate ions when the central atom reacts with a donor (an X-type ligand), gaining one more bond and becoming a negatively charged anion.
Lewis bases form onium ions when the central atom reacts with an acceptor (a Z-type ligand), gaining one more bond and becoming a positively charged cation.
-ate suffix
The phrase -ate ion or ate ion can refer generically to many negatively charged anions. -ate compound or ate compound can refer to salts of the anions or esters of the functional groups.
Chemical terms ending in -ate (and -ite) generally refer to the negatively charged anions, neutral radicals, and covalently bonded functional groups that share the same chemical formulas (with different charges). For example: the nitrate anion, NO3−; the nitrate functional group that forms nitrate esters, RONO2; and the nitrate radical or nitrogen trioxide, NO3.
Most numerous are oxyanions (oxyacids that have lost one or more protons to deprotonation) and the radicals and functional groups that share their names.
Oxyanions derived from inorganic acids include:
Fully deprotonated oxyanions, such as borate, carbonate, nitrate, cyanate, isocyanate, thiocyanate, fulminate, aluminate, zincate, silicate, phosphate, sulfate and other sulfur oxoanions, chlorate, titanate, vanadate, chromate, manganate, ferrate, percobaltate, nickelate, germanate, arsenate, selenate, bromate, molybdate, pertechnetate, perruthenate, stannate, antimonate, tellurate, iodate, perxenate, tungstate, plumbate, and bismuthate.
Partially deprotonated oxyanions, such as hydrogensulfate, hydrogenphosphate, and dihydrogenphosphate.
Oxyanions derived from organic acids include:
Carboxylate ions such as formate, acetate, propionate, butyrate, isobutyrate, and oxalate, along with their sulfur analogs, the thiocarboxylate ions, such as thioacetate.
Phosphonate and sulfonate ions.
Deprotonated alcohols such as methanolate (methoxide) and ethanolate (ethoxide), along with their sulfur analogs, the thiolates.
A lyate ion is a generic solvent molecule that has become a negative ion by loss of one or more protons.
The -ate suffix also applies to negative fluoroanions, fluorides which have gained one or more fluoride ions. Tetrafluoroborate, BF4−, is boron trifluoride, BF3, which has gained one fluoride ion.
References
Coordination chemistry | Ate complex | [
"Chemistry"
] | 813 | [
"Coordination chemistry"
] |
13,268,473 | https://en.wikipedia.org/wiki/HAZMAT%20Class%205%20Oxidizing%20agents%20and%20organic%20peroxides | An oxidizer is a chemical that readily yields oxygen in reactions, thereby causing or enhancing combustion.
Divisions
Division 5.1: Oxidizers
An oxidizer is a material that may, generally by yielding oxygen, cause or enhance the combustion of other materials.
A solid material is classed as a Division 5.1 material if, when tested in accordance with the UN Manual of Tests and Criteria, its mean burning time is less than or equal to the burning time of a 3:7 potassium bromate/cellulose mixture.
A liquid material is classed as a Division 5.1 material if, when tested in accordance with the UN Manual of Tests and Criteria, it spontaneously ignites or its mean time for a pressure rise from 690 kPa to 2070 kPa gauge is less than the time of a 1:1 nitric acid (65 percent)/cellulose mixture.
Division 5.2: Organic Peroxides
An organic peroxide is any organic compound containing oxygen (O) in the bivalent -O-O- structure and which may be considered a derivative of hydrogen peroxide, where one or more of the hydrogen atoms have been replaced by organic radicals, unless any of the following paragraphs applies:
The material meets the definition of an explosive as prescribed in subpart C of this part, in which case it must be classed as an explosive (applies to acetone peroxide, for example)
The material is forbidden from being offered for transportation according to 49CFR 172.101 of this subchapter or 49CFR 173.21;
The Associate Administrator for Hazardous Materials Safety has determined that the material does not present a hazard which is associated with a Division 5.2 material; or
The material meets one of the following conditions:
For materials containing no more than 1.0 percent hydrogen peroxide, the available oxygen, as calculated using the equation in paragraph (a)(4)(ii) of this section, is not more than 1.0 percent, or
For materials containing more than 1.0 percent but not more than 7.0 percent hydrogen peroxide, the available oxygen content (Oa) is not more than 0.5 percent, when determined using the equation:
Oa = 16 Σ (n_i × c_i / m_i), summed over i = 1, ..., k

where, for a material containing k species of organic peroxides:
n_i = number of -O-O- groups per molecule of the i-th species
c_i = concentration (mass percent) of the i-th species
m_i = molecular mass of the i-th species
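The formula can be evaluated directly. A minimal sketch (the function name and example values are chosen here for illustration; 242.2 g/mol is roughly the molar mass of dibenzoyl peroxide, used only as a plausible number):

```python
def available_oxygen(species):
    """Available oxygen content Oa (mass percent) per the 49 CFR formula
    Oa = 16 * sum(n_i * c_i / m_i) over the k organic-peroxide species, where
    n_i = -O-O- groups per molecule, c_i = concentration (mass percent),
    m_i = molecular mass of species i."""
    return 16.0 * sum(n * c / m for (n, c, m) in species)

# Hypothetical single-species formulation: 5.0 mass percent of a peroxide of
# molecular mass 242.2 g/mol carrying one -O-O- group per molecule.
oa = available_oxygen([(1, 5.0, 242.2)])
print(round(oa, 3))  # 0.33 -- under the 0.5 percent threshold
```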
Placards
Prior to 2007, the placard for 'Organic Peroxide' (5.2) was entirely yellow, like placard 5.1.
Compatibility Table
Packing Groups
References
49 CFR 173.127(a)
49 CFR 173.128(a)
Hazardous materials | HAZMAT Class 5 Oxidizing agents and organic peroxides | [
"Physics",
"Chemistry",
"Technology"
] | 556 | [
"Materials",
"Hazardous materials",
"Matter"
] |
13,268,479 | https://en.wikipedia.org/wiki/HAZMAT%20Class%204%20Flammable%20solids | Flammable solids are any materials in the solid phase of matter that can readily undergo combustion in the presence of a source of ignition under standard circumstances, i.e. without:
Artificially changing variables such as pressure or density; or
Adding accelerants.
Divisions
Division 4.1: Flammable Solid
Flammable solids are any of the following four types of materials:
Desensitized Explosives: explosives that, when dry, are Explosives of Class 1 other than those of compatibility group A, which are wetted with sufficient water, alcohol, or plasticizer to suppress explosive properties; and are specifically authorized by name either in the 49CFR 172.101 Table or have been assigned a shipping name and hazard class by the Associate Administrator for Hazardous Materials Safety.
Self-Reactive Materials: materials that are thermally unstable and that can undergo a strongly exothermic decomposition even without participation of oxygen (air). Certain exclusions to this group do apply under 49 CFR.
Generic Types: Division 4.1 self-reactive materials are assigned to a generic system consisting of seven types. A self-reactive substance identified by technical name in the Self-Reactive Materials Table in 49CFR 173.224 is assigned to a generic type in accordance with that Table. Self-reactive materials not identified in the Self-Reactive Materials Table in 49CFR 173.224 are assigned to generic types under the procedures of paragraph (a)(2)(iii) of this section.
Readily Combustible Solids: materials that are solids which may cause a fire through friction, such as matches; show a burning rate faster than 2.2 mm (0.087 inches) per second when tested in accordance with UN Manual of Tests and Criteria; or are any metal powders that can be ignited and react over the whole length of a sample in 10 minutes or less, when tested in accordance with UN Manual of Tests and Criteria.
Division 4.2: Spontaneously Combustible
Spontaneously combustible material is:
Pyrophoric Material: A pyrophoric material is a liquid or solid that, even in small quantities and without an external ignition source, can ignite within five (5) minutes after coming in contact with air when tested according to the UN Manual of Tests and Criteria.
Self-Heating Material: A self-heating material is a material that, when in contact with air and without an energy supply, is liable to self-heat.
Division 4.3: Dangerous When Wet
Dangerous when wet material is material that, by contact with water, is liable to become spontaneously flammable or to give off flammable or toxic gas at a rate greater than 1 liter per kilogram of the material, per hour, when tested in accordance with the UN Manual of Tests and Criteria. Pure alkali metals are known examples of this.
Placards
Compatibility Table
References
49 CFR 173.124(a)
49 CFR 173.124(b)
49 CFR 173.124(c)
Hazardous materials | HAZMAT Class 4 Flammable solids | [
"Physics",
"Chemistry",
"Technology"
] | 613 | [
"Materials",
"Hazardous materials",
"Matter"
] |
13,268,481 | https://en.wikipedia.org/wiki/HAZMAT%20Class%203%20Flammable%20liquids | A flammable liquid is a liquid with flash point of not more than 60.5 °C (141 °F), or any material in a liquid phase with a flash point at or above 37.8 °C (100 °F) that is intentionally heated and offered for transportation or transported at or above its flash point in a bulk packaging.
Divisions
Class 3: Flammable Liquids
A flammable liquid is a liquid having a flash point of not more than 60 °C (140 °F), or any material in a liquid phase with a flash point at or above 37.8 °C (100 °F) that is intentionally heated and offered for transportation or transported at or above its flash point in a bulk packaging. The following exceptions apply:
Any liquid meeting one of the definitions specified in 49CFR 173.115.
Any mixture having one or more components with a flash point of 60.5 °C (141 °F) or higher, that make up at least 99 percent of the total volume of the mixture, if the mixture is not offered for transportation or transported at or above its flash point.
Any liquid with a flash point greater than 35 °C (95 °F) which does not sustain combustion according to ASTM 4206 or the procedure in Appendix H of this part.
Any liquid with a flash point greater than 35 °C (95 °F) and with a fire point greater than 100 °C (212 °F) according to ISO 2592.
Any liquid with a flash point greater than 35 °C (95 °F) which is in a water-miscible solution with a water content of more than 90 percent by mass.
Flash Point: The flash point is the minimum temperature at which a liquid gives off vapor within a test vessel in sufficient concentration to form an ignitable mixture with air near the surface of the liquid.
Placards
Alternate Placards and Labeling
Combustible Liquids:
A combustible liquid means any liquid that does not meet the definition of any other hazard class specified in this subchapter and has a flash point above 60.5 °C (141 °F) and below 93 °C (200 °F).
A flammable liquid with a flash point at or above 38 °C (100 °F) that does not meet the definition of any other hazard class may be reclassed as a combustible liquid. This provision does not apply to transportation by vessel or aircraft, except where other means of transportation is impracticable. An elevated temperature material that meets the definition of a Class 3 material because it is intentionally heated and offered for transportation or transported at or above its flash point may not be reclassed as a combustible liquid.
A combustible liquid which does not sustain combustion is not subject to the requirements of this subchapter as a combustible liquid. Either the test method specified in ASTM 4206 or the procedure in Appendix H of this part may be used to determine if a material sustains combustion when heated under test conditions and exposed to an external source of flame.
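The flash-point boundaries described above can be summarized as a small sketch (a deliberate simplification: the 49 CFR 173.120 exceptions are ignored and the elevated-temperature rule is reduced to a flag; the function name is chosen here):

```python
def classify_liquid(flash_point_c, heated_above_fp=False):
    """Rough sketch of the Class 3 / combustible-liquid boundaries:
    flash point <= 60.5 deg C (or intentionally heated to/above its flash
    point) -> Class 3; above 60.5 deg C but below 93 deg C -> combustible."""
    if flash_point_c <= 60.5 or heated_above_fp:
        return "Class 3 flammable liquid"
    if flash_point_c < 93.0:
        return "combustible liquid"
    return "not regulated by flash point"

print(classify_liquid(-43))              # gasoline-like flash point: Class 3
print(classify_liquid(65))               # combustible liquid
print(classify_liquid(70, heated_above_fp=True))  # elevated temperature: Class 3
```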
Gasoline: This placard is an alternative placard, which may be used for gasoline in non-bulk quantities.
Fuel Oil: This placard is an alternative placard, which may be used for fuel oil in non-bulk quantities.
Compatibility Table
Packing Groups
References
49 CFR 173.120 (U.S. Code)
49 CFR 173.120(a) (U.S. Code)
49 CFR 173.120(b)(1) (U.S. Code)
Hazardous materials | HAZMAT Class 3 Flammable liquids | [
"Physics",
"Chemistry",
"Technology"
] | 714 | [
"Materials",
"Hazardous materials",
"Matter"
] |
13,268,484 | https://en.wikipedia.org/wiki/HAZMAT%20Class%202%20Gases | The HAZMAT Class 2 in United States law includes all gases which are compressed and stored for transportation. Class 2 has three divisions: Flammable (also called combustible), Non-Flammable/Non-Poisonous, and Poisonous. This classification is based on the United Nations' Recommendations on the Transport of Dangerous Goods - Model Regulations. In Canada, the Transportation of Dangerous Goods Regulations, or TDGR, are also based on the UN Model Regulations and contain the same three divisions.
Divisions
A gas is a substance which
(a) at 50 °C (122 °F) has a vapor pressure greater than 300 kPa (43.51 PSI) or
(b) is completely gaseous at 20 °C (68 °F) at a standard pressure of 101.3 kPa (14.69 PSI).
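The two-pronged definition above can be sketched as a predicate (function and parameter names are chosen here for illustration):

```python
def is_gas(vapor_pressure_kpa_at_50c=None, gaseous_at_20c_101kpa=False):
    """Sketch of the Class 2 definition: a substance is a gas if
    (a) its vapor pressure at 50 deg C exceeds 300 kPa, or
    (b) it is completely gaseous at 20 deg C and 101.3 kPa."""
    if gaseous_at_20c_101kpa:
        return True  # criterion (b)
    return (vapor_pressure_kpa_at_50c is not None
            and vapor_pressure_kpa_at_50c > 300.0)  # criterion (a)

print(is_gas(vapor_pressure_kpa_at_50c=1500.0))  # well above 300 kPa: True
print(is_gas(vapor_pressure_kpa_at_50c=12.0))    # ordinary liquid: False
print(is_gas(gaseous_at_20c_101kpa=True))        # True
```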
Gases are assigned to one of three divisions
division 2.1 Flammable gas
division 2.2 Non flammable, Non-toxic gas
division 2.3 Toxic gas
Aerosols also fall into Class 2 divisions where an aerosol is defined as an article consisting of any non-refillable receptacle containing a gas compressed, liquefied or dissolved under pressure, the sole purpose of which is to expel a nonpoisonous (other than a Division 6.1 Packing Group III material) liquid, paste, or powder and fitted with a self-closing release device allowing the contents to be ejected by the gas.
Division 2.1: Flammable, Non-Toxic Gas
Flammable gas means any material that:
Is ignitable at 101.3 kPa (14.7 psia) when in a mixture of 13 percent or less by volume with air;
Has a flammable range at 101.3 kPa with air of at least 12 percent regardless of the lower limit; or
Is determined to be flammable in accordance with ASTM E681-85, Standard Test Method for Concentration Limits of Flammability of Chemicals.
The following applies to aerosols:
An aerosol must be assigned to Division 2.1 if the contents include 85% by mass or more flammable components and the chemical heat of combustion is 30 kJ/g or more;
An aerosol must be assigned to Division 2.1 if it is deemed flammable in accordance with the appropriate tests of the UN Manual of Tests and Criteria for flammability.
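The mass-fraction/heat-of-combustion criterion for aerosols can be sketched as follows (function name chosen here; the UN Manual flammability tests that cover the remaining cases are out of scope for this sketch):

```python
def aerosol_division_2_1(flammable_mass_pct, heat_of_combustion_kj_per_g):
    """Division 2.1 if the contents are >= 85 percent flammable components by
    mass AND the chemical heat of combustion is >= 30 kJ/g."""
    return flammable_mass_pct >= 85.0 and heat_of_combustion_kj_per_g >= 30.0

print(aerosol_division_2_1(90.0, 45.0))  # True
print(aerosol_division_2_1(50.0, 45.0))  # False: under the 85 percent cutoff
```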
Division 2.2: Non-Flammable, Non-Toxic Gas
This division includes compressed gas, liquefied gas, pressurized cryogenic gas, compressed gas in solution, asphyxiant gas and oxidizing gas. A non-flammable, non-poisonous compressed gas (Division 2.2) means any material (or mixture) which:
Exerts in the packaging an absolute pressure of 280 kPa (40.6 psia) or greater at 20 °C (68 °F), and
Does not meet the definition of Division 2.1 or 2.3.
The following applies to aerosols:
An aerosol must be assigned to Division 2.2 if the contents contain 1% by mass or less flammable components and the heat of combustion is less than 20 kJ/g.
Division 2.3: Toxic Gas
Gas poisonous by inhalation means a material which is a gas at 20 °C or less and a pressure of 101.3 kPa (a material which has a boiling point of 20 °C or less at 101.3 kPa (14.7 psi)) and which:
Is known to be so toxic to humans as to pose a hazard to health during transportation, or
In the absence of adequate data on human toxicity, is presumed to be toxic to humans because when tested on laboratory animals it has an LC50 value of not more than 5000 ml/m3. See 49CFR 173.116(a) for assignment of Hazard Zones A, B, C or D. LC50 values for mixtures may be determined using the formula in 49 CFR 173.133(b)(1)(i)
Placards
Compatibility table
References
United Nations, Recommendations on the Transport of Dangerous Goods - Model Regulations
49 CFR 173.115 (a) (U.S. Code)
49 CFR 173.115 (b) (U.S. Code)
49 CFR 177.848 (U.S. Code)
Hazardous materials | HAZMAT Class 2 Gases | [
"Physics",
"Chemistry",
"Technology"
] | 911 | [
"Materials",
"Hazardous materials",
"Matter"
] |
13,268,555 | https://en.wikipedia.org/wiki/HAZMAT%20Class%209%20Miscellaneous | The miscellaneous hazardous materials category encompasses all hazardous materials that do not fit one of the definitions listed in Class 1 through Class 8.
Divisions
The miscellaneous hazardous material is a material that presents a hazard during transportation but which does not meet the definition of any other hazard class. This class includes:
Any material which has an anesthetic, noxious or other similar property which could cause extreme annoyance or discomfort to a flight crew member so as to prevent the correct performance of assigned duties; or
Any material that meets the definition in 49 CFR 171.8 for an elevated temperature material, a hazardous substance, a hazardous waste, or a marine pollutant.
A new sub-class, class 9A, has been in effect since January 1, 2017. This is limited to the labeling of the transport of lithium batteries.
Placards
Compatibility Table
Packing Groups
The packing group of a Class 9 material is as indicated in Column 5 of the 49CFR 172.101 Table.
References
49 CFR 173.140
49 CFR 173.141
Hazardous materials | HAZMAT Class 9 Miscellaneous | [
"Physics",
"Chemistry",
"Technology"
] | 210 | [
"Materials",
"Hazardous materials",
"Matter"
] |
13,268,557 | https://en.wikipedia.org/wiki/HAZMAT%20Class%208%20Corrosive%20substances | A corrosive material is a liquid or solid that causes full thickness destruction of human skin at the site of contact within a specified period of time. A liquid that has a severe corrosion rate on steel or aluminum based on the criteria in 49CFR 173.137(c)(2) is also a corrosive material.
Divisions
A CORROSIVE placard is required for 454 kg (1001 lbs) or more gross weight of a corrosive material. Although the corrosive class includes both acids and bases, the hazardous materials load and segregation chart does not make any reference to the separation of various incompatible corrosive materials from each other. In spite of this, when shipping corrosives, care should be taken to ensure that incompatible corrosive materials cannot become mixed, as many corrosives react very violently if mixed. If responding to a transportation incident involving corrosive materials (especially a mixture of corrosives), caution should be exercised.
Placards
Compatibility Table
Packing Groups
References
49 CFR 173.136
Hazardous materials | HAZMAT Class 8 Corrosive substances | [
"Physics",
"Chemistry",
"Technology"
] | 213 | [
"Materials",
"Hazardous materials",
"Matter"
] |
13,268,560 | https://en.wikipedia.org/wiki/HAZMAT%20Class%207%20Radioactive%20substances | Radioactive substances are materials that emit radiation.
Divisions
A RADIOACTIVE placard is required for any quantity of packages bearing the RADIOACTIVE YELLOW III label (LSA-III).
Some radioactive materials in "exclusive use" with low specific activity will not bear the label; however, the RADIOACTIVE placard is still required.
Placards
Compatibility Table
References
49CFR 173 Subpart I
Hazardous materials | HAZMAT Class 7 Radioactive substances | [
"Physics",
"Chemistry",
"Technology"
] | 75 | [
"Materials",
"Hazardous materials",
"Matter"
] |
13,268,564 | https://en.wikipedia.org/wiki/HAZMAT%20Class%206%20Toxic%20and%20infectious%20substances | Poisonous material is a material, other than a gas, known to be so toxic to humans that it presents a health hazard during transportation.
Divisions
Division 6.1: Poisonous material is a material, other than a gas, which is known to be so toxic to humans as to afford a hazard to health during transportation, or which, in the absence of adequate data on human toxicity:
Is presumed to be toxic to humans because it falls within any one of the following categories when tested on laboratory animals (whenever possible, animal test data that has been reported in the chemical literature should be used):
Oral Toxicity: A liquid or solid with an LD50 for acute oral toxicity of not more than 300 mg/kg.
Dermal Toxicity. A material with an LD50 for acute dermal toxicity of not more than 1000 mg/kg.
Inhalation Toxicity: A dust or mist with an LC50 for acute toxicity on inhalation of not more than 4 mg/L; or a material with a saturated vapor concentration in air at 20 °C (68 °F) of more than one-fifth of the LC50 for acute toxicity on inhalation of vapors and with an LC50 for acute toxicity on inhalation of vapors of not more than 5000 mL/m3; or
Is an irritating material, with properties similar to tear gas, which causes extreme irritation, especially in confined spaces.
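The numeric presumption-of-toxicity thresholds listed above can be sketched as a predicate (function and parameter names are chosen here; the saturated-vapor criterion and the irritating-material clause are omitted for brevity):

```python
def division_6_1_candidate(oral_ld50=None, dermal_ld50=None,
                           inhalation_lc50_dust=None):
    """Sketch of the Division 6.1 thresholds: oral LD50 <= 300 mg/kg,
    dermal LD50 <= 1000 mg/kg, or dust/mist inhalation LC50 <= 4 mg/L."""
    if oral_ld50 is not None and oral_ld50 <= 300:
        return True   # oral toxicity criterion
    if dermal_ld50 is not None and dermal_ld50 <= 1000:
        return True   # dermal toxicity criterion
    if inhalation_lc50_dust is not None and inhalation_lc50_dust <= 4:
        return True   # inhalation (dust or mist) criterion
    return False

print(division_6_1_candidate(oral_ld50=150))   # True: under 300 mg/kg
print(division_6_1_candidate(oral_ld50=2000))  # False: no criterion met
```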
Division 6.2: Biohazards.
Placards
Poison: 454 kg (1001 lb) or more gross weight of poisonous materials that are not in Hazard Zone A or B (see Assignment of packing groups and hazard zones below). For U.S. Domestic Use only.
Inhalation Hazard: Any quantity of a material that is in Hazard Zone A or B (see Assignment of packing groups and hazard zones below).
Toxic: May be used instead of POISON placard on 454 kg (1001 lb) or more gross weight of poisonous materials that are not in Hazard Zone A or B (see Assignment of packing groups and hazard zones below). For international shipments the label must say Toxic.
PG III (Packing Group III): May be used instead of POISON placard on 454 kg (1001 lb) or more gross weight of Poison PG III materials (see Assignment of packing groups and hazard zones below).
Lethality
Lethal Dose 50
Oral Toxicity: LD50 for acute oral toxicity means that dose of the material administered to both male and female young adult albino rats which causes death within 14 days in half the animals tested. The number of animals tested must be sufficient to give statistically valid results and be in conformity with good pharmacological practices. The result is expressed in mg/kg body mass.
Dermal Toxicity: LD50 for acute dermal toxicity means that dose of the material which, administered by continuous contact for 24 hours with the shaved intact skin (avoiding abrading) of an albino rabbit, causes death within 14 days in half of the animals tested. The number of animals tested must be sufficient to give statistically valid results and be in conformity with good pharmacological practices. The result is expressed in mg/kg body mass.
Lethal Concentration 50
LC50 for acute toxicity on inhalation means that concentration of vapor, mist, or dust which, administered by continuous inhalation for one hour to both male and female young adult albino rats, causes death within 14 days in half of the animals tested. If the material is administered to the animals as a dust or mist, more than 90 percent of the particles available for inhalation in the test must have a diameter of 10 micrometres or less if it is reasonably foreseeable that such concentrations could be encountered by a human during transport. The result is expressed in mg/L of air for dusts and mists or in mL/m3 of air (parts per million) for vapors. See 49CFR 173.133(b) for LC50 determination for mixtures and for limit tests.
Compatibility table
Packing groups
References
49 CFR 173.132
Hazardous materials | HAZMAT Class 6 Toxic and infectious substances | [
"Physics",
"Chemistry",
"Technology"
] | 837 | [
"Materials",
"Hazardous materials",
"Matter"
] |
13,269,420 | https://en.wikipedia.org/wiki/Unistochastic%20matrix | In mathematics, a unistochastic matrix (also called unitary-stochastic) is a doubly stochastic matrix whose entries are the squares of the absolute values of the entries of some unitary matrix.
A square matrix B of size n is doubly stochastic (or bistochastic) if all its entries are non-negative real numbers and each of its rows and columns sums to 1. It is unistochastic if there exists a unitary matrix U such that

B_ij = |U_ij|^2 for i, j = 1, ..., n.
This definition is analogous to that for an orthostochastic matrix, which is a doubly stochastic matrix whose entries are the squares of the entries in some orthogonal matrix. Since all orthogonal matrices are necessarily unitary matrices, all orthostochastic matrices are also unistochastic. The converse, however, is not true. First, all 2-by-2 doubly stochastic matrices are both unistochastic and orthostochastic, but for larger n this is not the case. For example, take n = 3 and consider the following doubly stochastic matrix:

B = (1/2) × [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
This matrix is not unistochastic, since any two vectors with moduli equal to the square roots of the entries of two columns (or rows) of B cannot be made orthogonal by a suitable choice of phases. For n ≥ 3, the set of orthostochastic matrices is a proper subset of the set of unistochastic matrices.
the set of unistochastic matrices contains all permutation matrices and its convex hull is the Birkhoff polytope of all doubly stochastic matrices
for n ≥ 3 this set is not convex
for n = 3 the triangle inequality on the moduli of the rows is a necessary and sufficient condition for unistochasticity
for n = 3 the set of unistochastic matrices is star-shaped, and unistochasticity of any bistochastic matrix B is implied by a non-negative value of its Jarlskog invariant
for n = 3 the relative volume of the set of unistochastic matrices with respect to the Birkhoff polytope of doubly stochastic matrices is 8π²/105 ≈ 75.2%
for n = 4 explicit conditions for unistochasticity are not known yet, but there exists a numerical method to verify unistochasticity based on the algorithm by Haagerup
The Schur–Horn theorem is equivalent to the following "weak convexity" property of the set of unistochastic matrices: for any vector v the set {Bv : B unistochastic} is the convex hull of the set of vectors obtained by all permutations of the entries of the vector v (the permutation polytope generated by v).
The set of unistochastic matrices has a nonempty interior. The unistochastic matrix corresponding to the unitary matrix with the entries U_jk = n^(−1/2) exp(2πi jk/n), where 1 ≤ j, k ≤ n, is an interior point of the set of unistochastic matrices.
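A quick numerical illustration of the definition (a sketch, assuming NumPy is available; not part of the article): squaring the moduli of a unitary matrix's entries always yields a doubly stochastic matrix. The 3×3 discrete Fourier matrix is used here as the unitary; it produces the flat matrix with every entry 1/3.

```python
# Sketch: build B_ij = |U_ij|^2 from a unitary U and verify double
# stochasticity. The 3x3 DFT matrix is the chosen (illustrative) unitary.
import numpy as np

def unistochastic_from_unitary(u):
    """Entrywise squared moduli; doubly stochastic whenever u is unitary."""
    return np.abs(u) ** 2

n = 3
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
fourier = np.exp(2j * np.pi * j * k / n) / np.sqrt(n)
assert np.allclose(fourier @ fourier.conj().T, np.eye(n))  # unitarity

b = unistochastic_from_unitary(fourier)
assert np.allclose(b.sum(axis=0), 1.0) and np.allclose(b.sum(axis=1), 1.0)
assert np.allclose(b, 1.0 / n)  # here B is the flat matrix, all entries 1/n
```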
References
Matrices | Unistochastic matrix | [
"Mathematics"
] | 582 | [
"Matrices (mathematics)",
"Mathematical objects"
] |
13,269,474 | https://en.wikipedia.org/wiki/Gliese%20176 | Gliese 176 is a small star with an orbiting exoplanet in the constellation of Taurus. With an apparent visual magnitude of 9.95, it is too faint to be visible to the naked eye. It is located at a distance of 30.9 light years based on parallax measurements, and is drifting further away with a heliocentric radial velocity of 26.4 km/s.
This is an M-type main-sequence star, sometimes called a red dwarf, with a stellar classification of M2V. It has 49% of the Sun's mass and 47% of the radius of the Sun. The star is radiating just 3.5% of the luminosity of the Sun from its photosphere at an effective temperature of 3,632 K. It is estimated to be around nine billion years old, and is spinning slowly with a rotation period of 40 days. The star is orbited by a Super-Earth.
Planetary system
A planetary companion to Gliese 176 was announced in 2008. Radial velocity observations with the Hobby-Eberly Telescope (HET) showed a 10.24-day periodicity, which was interpreted as being caused by a planet. With a semi-amplitude of 11.6 m/s, its minimum mass equated to 24.5 Earth masses, or approximately 1.4 Neptune masses.
Observations with the HARPS spectrograph could not confirm the 10.24-day variation. Instead, two other periodicities were detected at 8.78 and 40.0 days, with amplitudes below the HET observational errors. The 40-day variation coincides with the rotational period of the star and is therefore caused by activity, but the shorter-period variation is not explained by activity and is therefore caused by a planet. Its semi-amplitude of 4.1 m/s corresponds to a minimum mass of 8.4 Earth masses, making the planet a Super-Earth.
In an independent study, observations with Keck-HIRES also failed to confirm the 10.24-day signal. An 8.77-day periodicity - corresponding to the planet announced by the HARPS team - was detected to intermediate significance, though it was not deemed significant enough to claim a planetary cause with their data alone.
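As a rough cross-check of the quoted minimum mass, the standard radial-velocity relation K = (2πG/P)^(1/3) · m sin i / M*^(2/3) (valid for a circular orbit with m ≪ M*) can be inverted. This is an illustrative sketch, not the published analysis; the zero-eccentricity assumption and rounded constants mean it should only land near the quoted ~8.4 Earth masses.

```python
# Invert the RV semi-amplitude relation for m*sin(i); illustrative only.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg

def min_mass_earths(k_ms, period_days, mstar_suns):
    """m*sin(i) in Earth masses, assuming e = 0 and m << M*."""
    p = period_days * 86400.0
    mstar = mstar_suns * M_SUN
    msini = k_ms * mstar ** (2.0 / 3.0) / (2.0 * math.pi * G / p) ** (1.0 / 3.0)
    return msini / M_EARTH

# HARPS values from the article: K = 4.1 m/s, P = 8.78 d, M* = 0.49 M_sun.
print(round(min_mass_earths(4.1, 8.78, 0.49), 1))  # roughly 8 Earth masses
```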
See also
List of extrasolar planets
Gliese 674
References
BD+18 683
285968
021932
0176
M-type main-sequence stars
Planetary systems with one confirmed planet
Taurus (constellation)
J04425581+1857285
TIC objects | Gliese 176 | [
"Astronomy"
] | 524 | [
"Taurus (constellation)",
"Constellations"
] |
13,271,090 | https://en.wikipedia.org/wiki/Ireland%E2%80%93Claisen%20rearrangement | The Ireland–Claisen rearrangement is a chemical reaction of an allylic ester with strong base to give an γ,δ-unsaturated carboxylic acid.
Several reviews have been published.
Mechanism
The Ireland–Claisen rearrangement is a type of Claisen rearrangement. The mechanism is therefore a concerted [3,3]-sigmatropic rearrangement which, according to the Woodward–Hoffmann rules, proceeds by a suprafacial, pericyclic reaction pathway.
See also
Cope rearrangement
Overman rearrangement
References
Rearrangement reactions
Name reactions | Ireland–Claisen rearrangement | [
"Chemistry"
] | 132 | [
"Name reactions",
"Rearrangement reactions",
"Organic reactions"
] |
13,271,310 | https://en.wikipedia.org/wiki/NAD%2B%20kinase | {{DISPLAYTITLE:NAD+ kinase}}
NAD+ kinase (EC 2.7.1.23, NADK) is an enzyme that converts nicotinamide adenine dinucleotide (NAD+) into NADP+ through phosphorylating the NAD+ coenzyme. NADP+ is an essential coenzyme that is reduced to NADPH primarily by the pentose phosphate pathway to provide reducing power in biosynthetic processes such as fatty acid biosynthesis and nucleotide synthesis. The structure of the NADK from the archaean Archaeoglobus fulgidus has been determined.
In humans, the genes NADK and MNADK encode NAD+ kinases localized in cytosol and mitochondria, respectively. Similarly, yeast have both cytosolic and mitochondrial isoforms, and the yeast mitochondrial isoform accepts both NAD+ and NADH as substrates for phosphorylation.
Reaction
The reaction catalyzed by NADK is
ATP + NAD+ ⇌ ADP + NADP+
Mechanism
NADK phosphorylates NAD+ at the 2’ position of the ribose ring that carries the adenine moiety. It is highly selective for its substrates, NAD and ATP, and does not tolerate modifications either to the pyridine moiety of the phosphoryl acceptor, NAD, or to the phosphoryl donor, ATP. NADK also uses metal ions to coordinate the ATP in the active site. In vitro studies with various divalent metal ions have shown that zinc and manganese are preferred over magnesium, while copper and nickel are not accepted by the enzyme at all. A proposed mechanism involves the 2' alcohol oxygen acting as a nucleophile to attack the gamma-phosphoryl of ATP, releasing ADP.
Regulation
NADK is highly regulated by the redox state of the cell. Whereas NAD is predominantly found in its oxidized state NAD+, the phosphorylated NADP is largely present in its reduced form, as NADPH. Thus, NADK can modulate responses to oxidative stress by controlling NADP synthesis. Bacterial NADK is shown to be inhibited allosterically by both NADPH and NADH. NADK is also reportedly stimulated by calcium/calmodulin binding in certain cell types, such as neutrophils. NAD kinases in plants and sea urchin eggs have also been found to bind calmodulin.
Clinical significance
Due to the essential role of NADPH in lipid and DNA biosynthesis and the hyperproliferative nature of most cancers, NADK is an attractive target for cancer therapy. Furthermore, NADPH is required for the antioxidant activities of thioredoxin reductase and glutaredoxin. Thionicotinamide and other nicotinamide analogs are potential inhibitors of NADK, and studies show that treatment of colon cancer cells with thionicotinamide suppresses the cytosolic NADPH pool to increase oxidative stress and synergizes with chemotherapy.
While the role of NADK in increasing the NADPH pool appears to offer protection against apoptosis, there are also cases where NADK activity appears to potentiate cell death. Genetic studies done in human haploid cell lines indicate that knocking out NADK may protect from certain non-apoptotic stimuli.
See also
Oxidative phosphorylation
Electron transport chain
Metabolism
References
Further reading
External links
ENZYME entry on EC 2.7.1.23
BRENDA entry on EC 2.7.1.23
PDBe-KB provides an overview of all the structure information available in the PDB for Human NAD kinase
EC 2.7.1
Cellular respiration
Metabolism | NAD+ kinase | [
"Chemistry",
"Biology"
] | 775 | [
"Cellular processes",
"Cellular respiration",
"Biochemistry",
"Metabolism"
] |
13,271,682 | https://en.wikipedia.org/wiki/GPS/INS | GPS/INS is the use of GPS satellite signals to correct or calibrate a solution from an inertial navigation system (INS). The method is applicable for any GNSS/INS system.
Overview
GPS/INS method
The GPS gives an absolute drift-free position value that can be used to reset the INS solution or can be blended with it by use of a mathematical algorithm, such as a Kalman filter. The angular orientation of the unit can be inferred from the series of position updates from the GPS. The change in the error in position relative to the GPS can be used to estimate the unknown angle error.
The benefits of using GPS with an INS are that the INS may be calibrated by the GPS signals and that the INS can provide position and angle updates at a quicker rate than GPS. For high dynamic vehicles, such as missiles and aircraft, INS fills in the gaps between GPS positions. Additionally, GPS may lose its signal and the INS can continue to compute the position and angle during the period of lost GPS signal. The two systems are complementary and are often employed together.
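The blending described above can be sketched with a scalar Kalman filter in one dimension. This is a toy illustration, not any production navigation filter: the INS velocity carries a constant bias the filter does not know about, so pure dead reckoning drifts without bound while the GPS-corrected estimate stays bounded. All numeric values (bias, variances, rates) are made-up.

```python
# 1-D GPS/INS blending sketch: dead-reckon with biased INS velocity,
# then correct with absolute GPS fixes via a scalar Kalman update.
def simulate(steps=100, dt=0.1, true_vel=1.0, ins_bias=0.1):
    gps_var = 4.0    # assumed GPS position noise variance (m^2)
    ins_var = 0.01   # assumed process noise per prediction step (m^2)
    truth = 0.0
    ins_pos = 0.0    # pure INS dead reckoning (drifts without bound)
    x, p = 0.0, 1.0  # fused position estimate and its error variance
    for _ in range(steps):
        truth += true_vel * dt
        measured_vel = true_vel + ins_bias   # biased INS velocity
        ins_pos += measured_vel * dt
        x += measured_vel * dt               # predict with the INS data
        p += ins_var
        k = p / (p + gps_var)                # Kalman gain
        x += k * (truth - x)                 # update with a GPS fix
        p *= 1.0 - k                         # (noise-free fix, for determinism)
    return truth, ins_pos, x

truth, ins_pos, fused = simulate()
print(abs(ins_pos - truth), abs(fused - truth))  # INS drift vs bounded fused error
```

The fused error settles near a steady state set by the ratio of process to measurement noise, while the uncorrected INS error grows linearly with time, which is the complementary behavior the text describes.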
Applications
GPS/INS is commonly used on aircraft for navigation purposes. Using GPS/INS allows for smoother position and velocity estimates that can be provided at a sampling rate faster than the GPS receiver. This also allows for accurate estimation of the aircraft attitude (roll, pitch, and yaw) angles. In general, GPS/INS sensor fusion is a nonlinear filtering problem, which is commonly approached using the extended Kalman filter (EKF) or the unscented Kalman filter (UKF). The use of these two filters for GPS/INS has been compared in various sources, including a detailed sensitivity analysis. The EKF uses an analytical linearization approach using Jacobian matrices to linearize the system, while the UKF uses a statistical linearization approach called the unscented transform which uses a set of deterministically selected points to handle the nonlinearity. The UKF requires the calculation of a matrix square root of the state error covariance matrix, which is used to determine the spread of the sigma points for the unscented transform. There are various ways to calculate the matrix square root, which have been presented and compared within GPS/INS application. From this work it is recommended to use the Cholesky decomposition method.
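The sigma-point construction and the Cholesky-based matrix square root mentioned above can be sketched as follows (a minimal UKF-style illustration using the simple λ-scaling convention; the state, covariance, and λ values are made-up, and real GPS/INS filters add measurement/process models on top of this):

```python
# Generate unscented-transform sigma points with a Cholesky square root and
# check that their weighted mean and covariance reproduce the inputs exactly.
import numpy as np

def sigma_points(x, p, lam=1.0):
    n = len(x)
    s = np.linalg.cholesky((n + lam) * p)   # lower-triangular matrix square root
    pts = [x] + [x + s[:, i] for i in range(n)] + [x - s[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    w[0] = lam / (n + lam)
    return np.array(pts), w

x = np.array([1.0, -2.0])
p = np.array([[2.0, 0.3],
              [0.3, 1.0]])
pts, w = sigma_points(x, p)
mean = w @ pts
cov = sum(wi * np.outer(pt - mean, pt - mean) for wi, pt in zip(w, pts))
assert np.allclose(mean, x) and np.allclose(cov, p)  # exact for this transform
```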
In addition to aircraft applications, GPS/INS has also been studied for automobile applications such as autonomous navigation, vehicle dynamics control, or sideslip, roll, and tire cornering stiffness estimation.
See also
GNSS Augmentation
References
US Patent No. 6900760
Navigation
Aerospace engineering
Inertial navigation | GPS/INS | [
"Engineering"
] | 540 | [
"Aerospace engineering"
] |
13,273,033 | https://en.wikipedia.org/wiki/Bleaching%20of%20wood%20pulp | Bleaching of wood pulp is the chemical processing of wood pulp to lighten its color and whiten the pulp. The primary product of wood pulp is paper, for which whiteness (similar to, but distinct from brightness) is an important characteristic. These processes and chemistry are also applicable to the bleaching of non-wood pulps, such as those made from bamboo or kenaf.
Paper brightness
Brightness is the amount of incident light reflected from paper under specified conditions, usually reported as the percentage of light reflected, so a higher number means a brighter or whiter paper. In the US, the TAPPI T 452 or T 525 standards are used. The international community uses ISO standards.
The table shows how the two systems rate high-brightness papers, but there is no simple way to convert between the two systems because the test methods are so different. The ISO rating is higher and can be over 100. This is because contemporary white paper incorporates fluorescent whitening agents (FWA). Because the ISO standard only measures a narrow range of blue light, it is not directly comparable to human vision of whiteness or brightness.
Newsprint ranges from 55 to 75 ISO brightness. Writing and printer paper would typically be as bright as 104 ISO.
While the results are the same, the processes and fundamental chemistry involved in bleaching chemical pulps (like kraft or sulfite) are very different from those involved in bleaching mechanical pulps (like stoneground, thermomechanical or chemo-thermomechanical). Chemical pulps contain very little lignin, while mechanical pulps contain most of the lignin that was present in the wood used to make the pulp. Lignin is the main source of color in pulp due to the presence of a variety of chromophores naturally present in the wood or created in the pulp mill.
Bleaching mechanical pulps
Mechanical pulp retains most of the lignin present in the wood used to make the pulp and thus contain almost as much lignin as they do cellulose and hemicellulose. It would be impractical to remove this much lignin by bleaching, and undesirable since one of the big advantages of mechanical pulp is the high yield of pulp based on wood used. Therefore, the objective of bleaching mechanical pulp (also referred to as brightening) is to remove only the chromophores (color-causing groups). This is possible because the structures responsible for color are also more susceptible to oxidation or reduction.
Alkaline hydrogen peroxide is the most commonly used bleaching agent for mechanical pulp. The amount of base such as sodium hydroxide is less than that used in bleaching chemical pulps and the temperatures are lower. These conditions allow alkaline peroxide to selectively oxidize non-aromatic conjugated groups responsible for absorbing visible light. The decomposition of hydrogen peroxide is catalyzed by transition metals, and iron, manganese and copper are of particular importance in pulp bleaching. The use of chelating agents like EDTA to remove some of these metal ions from the pulp prior to adding peroxide allows the peroxide to be used more efficiently. Magnesium salts and sodium silicate are also added to improve bleaching with alkaline peroxide.
Sodium dithionite (Na2S2O4), also known as sodium hydrosulfite, is the other main reagent used to brighten mechanical pulps. In contrast to hydrogen peroxide, which oxidizes the chromophores, dithionite reduces these color-causing groups. Dithionite reacts with oxygen, so efficient use of dithionite requires that oxygen exposure be minimized during its use.
Chelating agents can contribute to brightness gain by sequestering iron ions, for example, as EDTA complexes, which are less colored than the complexes formed between iron and lignin.
The brightness gains achieved in bleaching mechanical pulps are temporary, since almost all of the lignin present in the wood is still present in the pulp. Exposure to air and light can produce new chromophores from this residual lignin. This is why newspaper yellows as it ages. Yellowing also occurs due to the acidic sizing.
Bleaching of recycled pulp
Hydrogen peroxide and sodium dithionite are used to increase the brightness of deinked pulp. The bleaching methods are similar to those used for mechanical pulp, in which the goal is to make the fibers brighter.
Bleaching chemical pulps
Chemical pulps, such as those from the kraft process or sulfite pulping, contain much less lignin than mechanical pulps, (<5% compared to approximately 40%). The goal in bleaching chemical pulps is to remove essentially all of the residual lignin, hence the process is often referred to as delignification. Sodium hypochlorite (household bleach) was initially used to bleach chemical pulps, but was largely replaced in the 1930s by chlorine. Concerns about the release of organochlorine compounds into the environment prompted the development of elemental chlorine free (ECF) and totally chlorine free (TCF) bleaching processes.
Delignification of chemical pulps is frequently composed of four or more discrete stages, with each stage designated by a letter: C (chlorination), D (chlorine dioxide), E (alkaline extraction), H (hypochlorite), O (oxygen), Z (ozone), P (alkaline hydrogen peroxide) and Y (sodium dithionite).
A bleaching sequence from the 1950s could look like CEHEH: the pulp would have been exposed to chlorine, extracted (washed) with a sodium hydroxide solution to remove lignin fragmented by the chlorination, treated with sodium hypochlorite, washed with sodium hydroxide again, and given a final treatment with hypochlorite. An example of a modern totally chlorine-free (TCF) sequence is OZEPY: the pulp would be treated with oxygen, then ozone, washed with sodium hydroxide, then treated in sequence with alkaline peroxide and sodium dithionite.
Chlorine and hypochlorite
Chlorine replaces hydrogen on the aromatic rings of lignin via aromatic substitution, oxidizes pendant groups to carboxylic acids and adds across carbon carbon double bonds in the lignin sidechains. Chlorine also attacks cellulose, but this reaction occurs predominantly at pH = 7, where un-ionized hypochlorous acid, HClO, is the main chlorine species in solution. To avoid excessive cellulose degradation, chlorination is carried out at pH < 1.5.
Cl2 + H2O ⇌ H+ + Cl− + HClO
At pH > 8 the dominant species is hypochlorite, ClO−, which is also useful for lignin removal. Sodium hypochlorite can be purchased or generated in situ by reacting chlorine with sodium hydroxide:
2 NaOH + Cl2 ⇌ NaOCl + NaCl + H2O
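The pH dependence described above can be illustrated with the Henderson–Hasselbalch relation. The sketch below assumes the textbook value pKa ≈ 7.5 for hypochlorous acid and ignores the Cl2/HOCl hydrolysis equilibrium that dominates at very low pH, so it only covers the HOCl/ClO− split:

```python
# Fraction of un-ionized HOCl vs pH (Henderson-Hasselbalch, pKa assumed 7.5).
def hocl_fraction(ph, pka=7.5):
    """Fraction of (HOCl + OCl-) present as un-ionized HOCl."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

print(round(hocl_fraction(5.0), 3))  # acid side: almost entirely HOCl
print(round(hocl_fraction(9.0), 3))  # pH > 8: hypochlorite dominates
```

This is why chlorination is run strongly acidic (where molecular Cl2/HOCl chemistry dominates) while hypochlorite stages are run above pH 8.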
The main objection to the use of chlorine for bleaching pulp is the large amounts of soluble organochlorine compounds produced and released into the environment.
Chlorine dioxide
Chlorine dioxide, ClO2 is an unstable gas with moderate solubility in water. It is usually generated in an aqueous solution and used immediately because it decomposes and is explosive in higher concentrations. It is produced by reacting sodium chlorate with a reducing agent like sulfur dioxide:
2 NaClO3 + H2SO4 + SO2 → 2 ClO2 + 2 NaHSO4
Chlorine dioxide is sometimes used in combination with chlorine, but it is used alone in ECF (elemental-chlorine-free) bleaching sequences. It is used at moderately acidic pH (3.5 to 6). The use of chlorine dioxide minimizes the amount of organochlorine compounds produced. Chlorine dioxide (ECF technology) currently is the most important bleaching method worldwide. About 95% of all bleached kraft pulp is made using chlorine dioxide in ECF bleaching sequences.
Extraction or washing
All bleaching agents used to delignify chemical pulp, with the exception of sodium dithionite, break lignin down into smaller, oxygen-containing molecules. These breakdown products are generally soluble in water, especially if the pH is greater than 7 (many of the products are carboxylic acids). These materials must be removed between bleaching stages to avoid excessive use of bleaching chemicals, since many of these smaller molecules are still susceptible to oxidation. The need to minimize water use in modern pulp mills has driven the development of equipment and techniques for the efficient use of available water.
Oxygen
Oxygen exists as a ground-state triplet, which is relatively unreactive; it needs free radicals or very electron-rich substrates such as deprotonated lignin phenolic groups. The production of these phenoxide groups requires that delignification with oxygen be carried out under very basic conditions (pH > 12). The reactions involved are primarily single-electron (radical) reactions. Oxygen opens rings and cleaves sidechains, giving a complex mixture of small oxygenated molecules. Transition-metal compounds, particularly those of iron, manganese and copper, which have multiple oxidation states, facilitate many radical reactions and impact oxygen delignification. While the radical reactions are largely responsible for delignification, they are also detrimental to cellulose.
Oxygen-based radicals, especially hydroxyl radicals, HO•, can oxidize hydroxyl groups in the cellulose chains to ketones, and under the strongly basic conditions used in oxygen delignification, these compounds undergo reverse aldol reactions, leading to cleavage of cellulose chains. Magnesium salts are added during oxygen delignification to help preserve the cellulose chains, but the mechanism of this protection has not been confirmed.
Hydrogen peroxide
Using hydrogen peroxide to delignify chemical pulp requires more vigorous conditions than for brightening mechanical pulp: both the pH and the temperature are higher when treating chemical pulp. The chemistry is very similar to that involved in oxygen delignification, in terms of the radical species involved and the products produced. Hydrogen peroxide is sometimes used with oxygen in the same bleaching stage, which gives the letter designation Op in bleaching sequences. Redox-active metal ions, particularly manganese, Mn(II/IV), catalyze the decomposition of hydrogen peroxide, so some improvement in the efficiency of peroxide bleaching can be achieved if the metal levels are controlled.
Ozone
Ozone is a very powerful oxidizing agent, and the biggest challenge in using it to bleach wood pulp is to get sufficient selectivity so that the desirable cellulose is not degraded. Ozone reacts with the carbon–carbon double bonds in lignin, including those within aromatic rings. In the 1990s ozone was touted as good reagent to allow pulp to be bleached without any chlorine-containing chemicals (totally chlorine-free, TCF). The emphasis has changed, and ozone is seen as an adjunct to chlorine dioxide in bleaching sequences not using any elemental chlorine (elemental-chlorine-free, ECF). Over 25 pulp mills worldwide have installed equipment to generate and use ozone.
Chelant wash
The effect of transition metals such as Mn on some of the bleaching stages has already been mentioned. Sometimes it is beneficial to remove some of these redox-active metal ions from the pulp by washing the pulp with a chelating agent such as EDTA or DTPA. This is more common in TCF bleaching sequences for two reasons: the acidic chlorine or chlorine dioxide stages tend to remove metal ions (metal ions usually being more soluble at lower pH), and TCF stages rely more heavily on oxygen-based bleaching agents, which are more susceptible to the detrimental effects of these metal ions. Chelant washes are usually carried out at or near pH = 7. Lower-pH solutions are more effective at removing redox-active transition metals (Mn, Fe, Cu), but also remove most of the beneficial metal ions, especially magnesium. A negative impact of chelating agents such as DTPA is their toxicity to the activated sludge used in treating kraft pulping effluent.
Other bleaching agents
A variety of less common bleaching agents have been used on chemical pulps. They include peroxyacetic acid, peroxyformic acid, potassium peroxymonosulfate (oxone), dimethyldioxirane, which is generated in situ from acetone and potassium peroxymonosulfate, and peroxymonophosphoric acid.
Enzymes like xylanase have been used in pulp bleaching to increase the efficiency of other bleaching chemicals. It is believed that xylanase does this by cleaving lignin–xylan bonds to make lignin more accessible to other reagents. It is possible that other enzymes such as those used by fungi to degrade lignin may be useful in pulp bleaching.
Environmental considerations
The bleaching of chemical pulps has the potential to cause significant environmental damage, primarily through the release of organic materials into waterways. Pulp mills are almost always located near large bodies of water because they require substantial quantities of water for their processes. An increased public awareness of environmental issues from the 1970s and 1980s, as evidenced by the formation of organizations like Greenpeace, influenced the pulping industry and governments to address the release of these materials into the environment.
Conventional bleaching using elemental chlorine produces and releases into the environment large amounts of chlorinated organic compounds, including chlorinated dioxins. Dioxins are recognized as a persistent environmental pollutant, regulated internationally by the Stockholm Convention on Persistent Organic Pollutants.
Dioxins are highly toxic, and health effects on humans include reproductive, developmental, immune and hormonal problems. They are known to be carcinogenic. Over 90% of human exposure is through food, primarily meat, dairy, fish and shellfish, as dioxins accumulate in the food chain in the fatty tissue of animals.
As a result, from the 1990s onwards, the use of elemental chlorine in the delignification process was substantially reduced and replaced with ECF (elemental chlorine free) and TCF (totally chlorine free) bleaching processes. In 2005, elemental chlorine was used in 19–20% of kraft pulp production globally, down from over 90% in 1990. 75% of kraft pulp used ECF, with the remaining 5–6% using TCF. Most TCF pulp is produced in Sweden and Finland for sale in Germany, all markets with a high level of environmental awareness. In 1999, TCF pulp represented 25% of the European market.
TCF bleaching, by removing chlorine from the process, reduces chlorinated organic compounds to background levels in pulp-mill effluent. ECF bleaching can substantially reduce but not fully eliminate chlorinated organic compounds, including dioxins, from effluent. While modern ECF plants can achieve chlorinated organic compounds (AOX) emissions of less than 0.05 kg per tonne of pulp produced, most do not achieve this level of emissions. Within the EU, the average chlorinated organic compound emissions for ECF plants is 0.15 kg per tonne.
However, there has been disagreement about the comparative environmental effects of ECF and TCF bleaching. Some researchers found that there is no environmental difference between ECF and TCF, while others concluded that among ECF and TCF effluents before and after secondary treatment, TCF effluents are the least toxic.
See also
Johan Richter – inventor of the continuous process for bleaching wood pulp
Paper chemicals
References
Papermaking
Pulp and paper industry
Chemical processes
Environmental impact of paper | Bleaching of wood pulp | [
"Chemistry"
] | 3,335 | [
"Chemical process engineering",
"Chemical processes",
"nan"
] |
13,273,052 | https://en.wikipedia.org/wiki/Dimethylzinc | Dimethylzinc, also known as zinc methyl, DMZ, or DMZn, is a toxic organozinc compound with the chemical formula . It belongs to the large series of similar compounds such as diethylzinc.
Preparation
It is formed by the action of methyl iodide on zinc or zinc-sodium alloy at elevated temperatures.
Sodium assists the reaction of the zinc with the methyl iodide. Zinc iodide is formed as a byproduct.
Properties
Dimethylzinc is a colorless, mobile, volatile liquid with a characteristic disagreeable garlic-like odor. It is very reactive and a strong reducing agent. It is soluble in alkanes and often sold as a solution in hexanes. The triple point of dimethylzinc is ± 0.02 K. The monomeric molecule of dimethylzinc is linear at the Zn center and tetrahedral at the C centers.
Toxicity and hazards
Inhalation of dimethylzinc mist or vapor causes immediate irritation of the upper respiratory tract, and may cause pneumonia and death. Eyes are immediately and severely irritated and burned by liquid, vapor, or dilute solutions. If not removed by thorough flushing with water, this chemical may permanently damage the cornea, eventually causing blindness. If dimethylzinc contacts the skin, it causes thermal and acid burns by reacting with moisture on skin. Unless washed quickly, skin may be scarred. Ingestion, while unlikely, also causes immediate burns. Nausea, vomiting, cramps, and diarrhea may follow, and tissues may ulcerate if not promptly treated. Upon heating, dimethylzinc vapor decomposes to irritating and toxic products.
Contact of dimethylzinc with oxidants may form explosive peroxides. Dimethylzinc oxidises in air very slowly, producing methylzinc methoxide.
Dimethylzinc is very pyrophoric and can spontaneously ignite in air. It burns in air with a blue flame, giving off a garlic-like odor. The products of decomposition (fire smoke) include zinc oxide, which itself is not toxic, but its fumes can irritate lungs and cause metal fume fever, severe injury, or death.
Dimethylzinc fire must be extinguished with dry sand. The fire reacts violently or explosively with water, generating very flammable methane gas which can explode in air upon catching fire, and lung-irritating smoke of zinc oxide. Dimethylzinc fire reacts violently or explosively with methanol, ethanol and 2,2-dichloropropane. It explodes in oxygen and ozone. Improperly handled containers of dimethylzinc can explode, causing serious injuries or death.
Structure
In the solid state the compound exists in two modifications. The tetragonal high-temperature phase shows a two-dimensional disorder, while the low-temperature phase, which is monoclinic, is ordered. The molecules are linear, with Zn–C bond lengths measuring 192.7(6) pm. The gas-phase structure shows a very similar Zn–C distance of 193.0(2) pm.
History
Dimethylzinc was first prepared by Edward Frankland during his work with Robert Bunsen in 1849 at the University of Marburg. After heating a mixture of zinc and methyl iodide in an airtight vessel, a flame burst out when the seal was broken. In the laboratory, this synthesis method remains unchanged today, except that copper or copper compounds are used to activate the zinc.
Uses
Dimethylzinc has been of great importance in the synthesis of organic compounds. It was used for a long time to introduce methyl groups into organic molecules or to synthesize organometallic compounds containing methyl groups. Grignard reagents (organomagnesium compounds), which are easier to handle and less flammable, replaced organozinc compounds in most laboratory syntheses. Due to differences in reactivity (as well as in reaction byproducts) between organozinc compounds and Grignard reagents, organozinc compounds may be preferred in some syntheses.
Its high vapor pressure has led to extensive uses in the production of semiconductors, e.g. metalorganic chemical vapor deposition (MOCVD) for the preparation of wide band gap II–VI semiconducting films (e.g. ZnO, ZnS, ZnSe, ZnTe) and as p-dopant precursors for III–V semiconductors (e.g. AlN, AlP, GaAs, InP), which have many electronic and photonic applications.
It is used as an accelerator in rubber vulcanization, as a fungicide, and as a methylating agent, for example in the preparation of methyltitanium trichloride.
References
Methylating agents
Organozinc compounds
Foul-smelling chemicals
Methyl complexes
Pyrophoric materials | Dimethylzinc | [
"Chemistry",
"Technology"
] | 1,029 | [
"Methylation",
"Methylating agents"
] |
13,274,389 | https://en.wikipedia.org/wiki/Articulated%20body%20pose%20estimation | Articulated body pose estimation in computer vision is the study of algorithms and systems that recover the pose of an articulated body, which consists of joints and rigid parts using image-based observations. It is one of the longest-lasting problems in computer vision because of the complexity of the models that relate observation with pose, and because of the variety of situations in which it would be useful.
Description
Perception of human beings in their neighboring environment is an important capability that robots must possess. If a person uses gestures to point to a particular object, then the interacting machine should be able to understand the situation in real world context. Thus pose estimation is an important and challenging problem in computer vision, and many algorithms have been deployed in solving this problem over the last two decades. Many solutions involve training complex models with large data sets.
Pose estimation is a difficult problem and an active subject of research because the human body has 244 degrees of freedom with 230 joints. Although not all movements between joints are evident, the human body is composed of 10 large parts with 20 degrees of freedom. Algorithms must account for large variability introduced by differences in appearance due to clothing, body shape, size, and hairstyles. Additionally, the results may be ambiguous due to partial occlusions from self-articulation, such as a person's hand covering their face, or occlusions from external objects. Finally, most algorithms estimate pose from monocular (two-dimensional) images taken from a normal camera; these images lack the three-dimensional information of an actual body pose, leading to further ambiguities. Other issues include varying lighting and camera configurations, and the difficulties are compounded if there are additional performance requirements. There is recent work in this area wherein images from RGBD cameras provide both color and depth information.
Sensors
The typical articulated body pose estimation system involves a model-based approach, in which the pose estimation is achieved by maximizing/minimizing a similarity/dissimilarity between an observation (input) and a template model. Different kinds of sensors have been explored for use in making the observation, including the following:
Visible wavelength imagery,
Long-wave thermal infrared imagery,
Time-of-flight imagery, and
Laser range scanner imagery.
These sensors produce intermediate representations that are directly used by the model. The representations include the following:
Image appearance,
Voxel (volume element) reconstruction,
3D point clouds, and sum of Gaussian kernels
3D surface meshes.
Classical models
Part models
The basic idea of the part-based model can be attributed to the human skeleton. Any object having the property of articulation can be broken down into smaller parts wherein each part can take different orientations, resulting in different articulations of the same object. Different scales and orientations of the main object correspond to scales and orientations of its parts. To formulate the model so that it can be represented in mathematical terms, the parts are connected to each other using springs; as such, the model is also known as a spring model. The degree of closeness between parts is accounted for by the compression and expansion of the springs. There are geometric constraints on the orientation of the springs: for example, the limbs of the legs cannot rotate 360 degrees, so parts cannot take such extreme orientations. This reduces the number of possible configurations.
The spring model forms a graph G(V, E) where V (nodes) corresponds to the parts and E (edges) represents the springs connecting two neighboring parts. Each location in the image is identified by the x and y coordinates of the pixel. Let l_i = (x_i, y_i) be the location of part v_i. The cost associated with the spring joining parts v_i and v_j placed at locations l_i and l_j can then be written as d_{ij}(l_i, l_j). Hence the total cost associated with placing the n components at locations L = (l_1, \ldots, l_n) is given by

E(L) = \sum_{i=1}^{n} m_i(l_i) + \sum_{(v_i, v_j) \in E} d_{ij}(l_i, l_j),

where m_i(l_i) measures how well part v_i matches the image data at location l_i. The above equation simply represents the spring model used to describe body pose. To estimate pose from images, this cost or energy function must be minimized. The function consists of two terms: the first is related to how well each component matches the image data, and the second to how well the oriented (deformed) parts match one another, thus accounting for articulation along with object detection.
The part models, also known as pictorial structures, are one of the basic models on which other efficient models are built by slight modification. One such example is the flexible mixture model, which reduces the database of hundreds or thousands of deformed parts by exploiting the notion of local rigidity.
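The spring-model energy can be sketched numerically. The following toy example is illustrative only (the part "ideal" locations, rest length, and cost functions are invented, and the 1-D brute-force search stands in for the dynamic programming used by real pictorial-structures implementations):

```python
import itertools

# Toy pictorial-structures (spring model) cost: per-part match costs plus
# pairwise spring (deformation) costs over the graph edges.

def total_cost(locations, match_cost, edges, spring_cost):
    """Sum of per-part match costs and pairwise spring costs."""
    cost = sum(match_cost(i, locations[i]) for i in range(len(locations)))
    cost += sum(spring_cost(i, j, locations[i], locations[j]) for i, j in edges)
    return cost

# 3 parts on a 1-D "image"; a spring connects each consecutive pair.
def match_cost(i, x):            # how well part i matches the image at x
    ideal = [2, 5, 8][i]         # invented ideal locations
    return abs(x - ideal)

def spring_cost(i, j, xi, xj):   # penalize stretching beyond rest length 3
    return (abs(xj - xi) - 3) ** 2

edges = [(0, 1), (1, 2)]

# Brute-force minimization over a tiny search space.
best = min(itertools.product(range(10), repeat=3),
           key=lambda L: total_cost(L, match_cost, edges, spring_cost))
print(best)  # (2, 5, 8): zero match cost and springs at rest length
```

Because the graph is a tree, real implementations replace the brute-force search with dynamic programming, making the minimization linear in the number of parts times the number of candidate locations.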
Articulated model with quaternion
The kinematic skeleton is constructed by a tree-structured chain. Each rigid body segment has its local coordinate system that can be transformed to the world coordinate system via a 4×4 transformation matrix G_l, computed recursively as

G_l = G_{par(l)} T_l,

where T_l denotes the local transformation from body segment l to its parent par(l). Each joint in the body has 3 degrees of freedom (DoF) of rotation. Given the transformation matrix G_l, the joint position at the T-pose can be transferred to its corresponding position in the world coordinates. In many works, the 3D joint rotation is expressed as a normalized quaternion due to its continuity, which can facilitate gradient-based optimization in the parameter estimation.
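The recursive composition of local transforms along the kinematic tree can be sketched as follows. The parent indices, joint offsets, and rotation angles below are invented for illustration (a 3-segment chain with one 90° bend), not taken from any particular skeleton model:

```python
import numpy as np

# Each segment's world transform is its parent's world transform composed
# with the segment's local 4x4 homogeneous transform.

def local_T(rotation_z, offset):
    """4x4 transform: rotate about the z-axis, then translate by offset."""
    c, s = np.cos(rotation_z), np.sin(rotation_z)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = offset
    return T

parents = [-1, 0, 1]                       # segment 0 is the root
locals_ = [local_T(0.0, [0, 0, 0]),        # root at the origin
           local_T(np.pi / 2, [1, 0, 0]),  # child 1: offset 1 unit, 90° bend
           local_T(0.0, [1, 0, 0])]        # child 2: one unit further

world = [None] * len(parents)
for i, p in enumerate(parents):            # parents precede children in the list
    world[i] = locals_[i] if p < 0 else world[p] @ locals_[i]

joint_positions = [T[:3, 3] for T in world]
print(np.round(joint_positions[2], 3))     # end joint: [1. 1. 0.]
```

The 90° bend at segment 1 redirects segment 2's offset along the y-axis, which is why the end joint lands at (1, 1, 0) rather than (2, 0, 0).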
Deep learning based models
Since about 2016, deep learning has emerged as the dominant method for performing accurate articulated body pose estimation. Rather than building an explicit model for the parts as above, the appearances of the joints and relationships between the joints of the body are learned from large training sets. Models generally focus on extracting the 2D positions of joints (keypoints), the 3D positions of joints, or the 3D shape of the body from either a single or multiple images.
Supervised
2D joint positions
The first deep learning models that emerged focused on extracting the 2D positions of human joints in an image. Such models take in an image and pass it through a convolutional neural network to obtain a series of heatmaps (one for each joint) which take on high values where joints are detected.
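The final readout step of such a model can be sketched directly: each joint's 2-D position is taken as the argmax of its heatmap. The heatmaps below are synthetic Gaussian peaks standing in for network output, and real systems often add sub-pixel refinement:

```python
import numpy as np

def joints_from_heatmaps(heatmaps, threshold=0.1):
    """heatmaps: (num_joints, H, W) array -> list of (row, col) or None."""
    joints = []
    for hm in heatmaps:
        # Peak location of this joint's heatmap.
        idx = tuple(int(v) for v in np.unravel_index(np.argmax(hm), hm.shape))
        # Below threshold, treat the joint as not detected.
        joints.append(idx if hm[idx] > threshold else None)
    return joints

# Two synthetic 8x8 heatmaps with Gaussian peaks at (2, 3) and (6, 5).
H = W = 8
ys, xs = np.mgrid[0:H, 0:W]
peak = lambda r, c: np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / 2.0)
heatmaps = np.stack([peak(2, 3), peak(6, 5)])

print(joints_from_heatmaps(heatmaps))  # [(2, 3), (6, 5)]
```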
When there are multiple people per image, two main techniques have emerged for grouping joints within each person. In the first, "bottom-up" approach, the neural network is trained to also generate "part affinity fields" which indicate the location of limbs. Using these fields, joints can be grouped limb by limb by solving a series of assignment problems. In the second, "top-down" approach, an additional network is used to first detect people in the image, and the pose estimation network is then applied to each detected person.
3D joint positions
With the advent of multiple datasets with human pose annotated in multiple views, models which detect 3D joint positions became more popular. These again fell into two categories. In the first, a neural network is used to detect 2D joint positions from each view and these detections are then triangulated to obtain 3D joint positions. The 2D network may be refined to produce better detections based on the 3D data. Furthermore, such approaches often have filters in both 2D and 3D to refine the detected points. In the second, a neural network is trained end-to-end to predict 3D joint positions directly from a set of images, without intermediate 2D joint position detections. Such approaches often project image features into a cube and then use a 3D convolutional neural network to predict a 3D heatmap for each joint.
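The triangulation step of the first category can be sketched with the direct linear transform (DLT). The two camera matrices and the test point below are invented toy values; real systems use calibrated cameras and typically add robust filtering:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """DLT: solve A x = 0 for the homogeneous 3-D point seen at uv1 and uv2."""
    A = np.array([uv1[0] * P1[2] - P1[0],
                  uv1[1] * P1[2] - P1[1],
                  uv2[0] * P2[2] - P2[0],
                  uv2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A (smallest singular value)
    return X[:3] / X[3]        # dehomogenize

# Two toy 3x4 cameras: an identity view and one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3-D joint into both views, then recover it.
X_true = np.array([0.5, 0.2, 4.0])
project = lambda P, X: (P @ np.append(X, 1))[:2] / (P @ np.append(X, 1))[2]
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_hat, 6))  # recovers ~[0.5, 0.2, 4.0]
```

Each image observation contributes two linear constraints on the homogeneous point, so two views already overdetermine it; with more views, additional rows are appended to A and the same SVD gives the least-squares solution.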
3D shape
Concurrently with the work above, scientists have been working on estimating the full 3D shape of a human or animal from a set of images. Most of the work is based on estimating the appropriate pose of the skinned multi-person linear (SMPL) model within an image. Variants of the SMPL model for other animals have also been developed. Generally, some keypoints and a silhouette are detected for each animal within the image, and the parameters of the 3D shape model are then fit to match the position of the keypoints and silhouette.
Unsupervised
The above algorithms all rely on annotated images, which can be time-consuming to produce. To address this issue, computer vision researchers have developed new algorithms which can learn 3D keypoints given only annotated 2D images from a single view or identify keypoints given videos without any annotations.
Applications
Assisted living
Personal care robots may be deployed in future assisted living homes. For these robots, high-accuracy human detection and pose estimation is necessary to perform a variety of tasks, such as fall detection. Additionally, this application has a number of performance constraints.
Character animation
Traditionally, character animation has been a manual process. However, poses can be synced directly to a real-life actor through specialized pose estimation systems. Older systems relied on markers or specialized suits. Recent advances in pose estimation and motion capture have enabled markerless applications, sometimes in real time.
Intelligent driver assisting system
Car accidents account for about two percent of deaths globally each year. As such, an intelligent system tracking driver pose may be useful for emergency alerts. Along the same lines, pedestrian detection algorithms have been used successfully in autonomous cars, enabling the car to make smarter decisions.
Video games
Commercially, pose estimation has been used in the context of video games, popularized with the Microsoft Kinect sensor (a depth camera). These systems track the user to render their avatar in-game, in addition to performing tasks like gesture recognition to enable the user to interact with the game. As such, this application has a strict real-time requirement.
Medical applications
Pose estimation has been used to detect postural issues such as scoliosis by analyzing abnormalities in a patient's posture, in physical therapy, and in the study of the cognitive brain development of young children by monitoring motor functionality.
Other applications
Other applications include video surveillance, animal tracking and behavior understanding, sign language detection, advanced human–computer interaction, and markerless motion capturing.
Related technology
A commercially successful but specialized computer vision-based articulated body pose estimation technique is optical motion capture. This approach involves placing markers on the individual at strategic locations to capture the 6 degrees-of-freedom of each body part.
Research groups
A number of groups and companies are researching pose estimation, including groups at Brown University, Carnegie Mellon University, MPI Saarbruecken, Stanford University, the University of California, San Diego, the University of Toronto, the École Centrale Paris, ETH Zurich, National University of Sciences and Technology (NUST), the University of California, Irvine and Polytechnic University of Catalonia.
Companies
At present, several companies are working on articulated body pose estimation.
Bodylabs: Bodylabs is a Manhattan-based software provider of human-aware artificial intelligence.
References
External links
Michael J. Black, Professor at Brown University
Research Project Page of German Cheung at Carnegie Mellon University
Homepage of Dr.-Ing at MPI Saarbruecken
Markerless Motion Capture Project at Stanford
Computer Vision and Robotics Research Laboratory at the University of California, San Diego
Research Projects of David J. Fleet at the University of Toronto
Ronald Poppe at the University of Twente.
Professor Nikos Paragios at the Ecole Centrale de Paris
Articulated Pose Estimation with Flexible Mixtures of Parts Project at UC Irvine
http://screenrant.com/crazy3dtechnologyjamescameronavatarkofi3367/
2D articulated human pose estimation software
Articulated Pose Estimation with Flexible Mixtures of Parts
Computer vision | Articulated body pose estimation | [
"Engineering"
] | 2,268 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
13,274,813 | https://en.wikipedia.org/wiki/Edmonson%20v.%20Leesville%20Concrete%20Co. | Edmonson v. Leesville Concrete Company, 500 U.S. 614 (1991), was a United States Supreme Court case which held that peremptory challenges may not be used to exclude jurors on the basis of race in civil trials. Edmonson extended the court's similar decision in Batson v. Kentucky (1986), a criminal case. The Court applied the equal protection component of the Due Process Clause of the Fifth Amendment, as determined in Bolling v. Sharpe (1954), in finding that such race-based challenges violated the Constitution.
Background
A construction worker, Thaddeus Donald Edmonson, was injured during work on federal property. He sued Leesville Concrete Company for negligence leading to his injuries. During jury selection, Leesville used two of their three peremptory challenges on black jurors, leaving a panel of twelve with one African-American. Edmonson, citing Batson, requested that the trial court require Leesville give a race-neutral reason for the peremptory challenges to black jurors, but the court refused. The jury found that Leesville was responsible for 20% of Edmonson's injury and awarded him $18,000. The United States Court of Appeals for the Fifth Circuit reversed the decision, holding that parties become state actors during jury selection, and so Batson requires race-neutral selection in civil cases. When the Fifth Circuit reheard the case en banc, they affirmed the original District Court decision. Recognizing a circuit split, the Supreme Court granted certiorari.
Opinion of the Court
Justice Anthony Kennedy wrote the opinion for the majority. Justice Kennedy began with a long line of cases where the court held that racial discrimination was impermissible in jury selection before a criminal trial. He then pointed out that although the Court had never indicated such discrimination was permitted in a civil trial, either, it also holds that federal law restrains the actions of government, not private actors. To decide whether to apply federal law, Justice Kennedy applied a two-part test from Lugar v. Edmondson Oil Co. The first part of the test is whether the constitutional deprivation, in this case the right to a fair and impartial jury, resulted from a right rooted in state authority. Kennedy found, almost summarily, that peremptory challenges' intimate role in shaping a jury meant the case met the first part of the test. The second part of the test is whether the private party, Leesville and its counsel, was acting as a "state actor".
In determining whether Leesville was acting as a state actor, Justice Kennedy considered three issues and relevant precedent. The first issue was whether the actor relies on governmental assistance, and Justice Kennedy found that the system of jury selection clearly existed within the sphere of judicial proceedings and would not be possible without the assistance of the judge and all other constituent elements of the institution. The second consideration was whether the actor is performing a traditional function of government. Justice Kennedy first found that the jury was clearly performing a traditional function of government by serving as the finder-of-fact in a civil trial. Second, he drew a parallel between jury selection and elections, indicating that constitutional constraints apply to all the machinery involved in choosing representatives and juries (such as when parties control primary elections). This is unlike any other aspect of civil litigation, none of which involve a government function like jury selection. The third consideration was whether the injury caused was aggravated in a unique way by the incidents of governmental authority. Justice Kennedy said racial discrimination inside the courtroom diminishes the integrity of the courts and "compounds the racial insult" of discrimination.
Justice Kennedy then dealt with the question of whether litigants could raise violations of jurors' rights on their behalf. The relevant precedent in that consideration was Powers v. Ohio, a similar case that dealt with race-based exclusion of jurors during jury selection in a criminal trial. In Powers, the Court held that litigants generally cannot make a claim due to violations of others' rights, except where the litigant has suffered an injury the courts can resolve, has a close relation with the third party, and the third party is hindered in protecting his or her own interests. Justice Kennedy held that all three conditions were met in Edmonson's case, including the resolvable injury. The concrete resolvable injury arose, in Justice Kennedy's view, whenever racial discrimination took place within criminal or civil trials.
The Court did not make a holding regarding whether prima facie evidence of racial discrimination in Edmonson's case actually existed, and remanded the case to the trial court to determine that issue.
Dissent
Three justices dissented, arguing that there was no state action (which is required for any Fifth or Fourteenth Amendment violation) because the litigants are private parties. Justice Sandra Day O'Connor wrote the dissent, joined by Chief Justice William Rehnquist and Justice Antonin Scalia. Justice O'Connor wrote that "the Court's final argument is that the exercise of a peremptory challenge by a private litigant is state action because it takes place in a courtroom. [But] the actions of a lawyer in a courtroom do not become those of the government by virtue of their location. This is true even if those actions are based on race." "Constitutional 'liability attaches only to those wrongdoers who carry a badge of authority of [the government] and represent it in some capacity.' Tarkanian, 488 U.S., at 191 [double-internal quotation marks omitted]." Therefore, although "[r]acism is a terrible thing ... [t]he Government is not responsible for a peremptory challenge by a private litigant."
References
External links
A documentary on Edmonson v. Leesville Concrete Company
Batson challenge case law
Construction law
United States Fifth Amendment case law
United States Supreme Court cases
United States Supreme Court cases of the Rehnquist Court
1991 in United States case law
United States racial discrimination case law | Edmonson v. Leesville Concrete Co. | [
"Engineering"
] | 1,243 | [
"Construction",
"Construction law"
] |
13,275,145 | https://en.wikipedia.org/wiki/International%20Association%20of%20People-Environment%20Studies | The International Association of People-Environment Studies (IAPS), has been promoting the interdisciplinary exchange of ideas between planning and social scientists for 35 years – above all between spatial planning, architecture, psychology, and sociology. IAPS was officially founded in 1981, although its origins can be traced back to a series of successful conferences in several European countries from 1969 to 1979.
The objectives of IAPS are:
To facilitate communication among those concerned with the relationships between people and their physical environment
To stimulate research and innovation for improving human well-being and the physical environment
To promote the integration of research, education, policy and practice
People-environment studies, originating from environmental psychology (Lewin, Barker, Brunswik), have always tried to close the "mind gaps" between natural sciences, engineering, arts, and social sciences by an epistemological approach that encompasses denotations (objects and techniques) as well as connotations (subjective social, cultural meanings).
Membership
Benefits of membership include:
The right to vote and stand for membership of the Board
Reduced fees for attending conferences and seminars
Free copies of the IAPS newsletter. This contains research summaries, articles, reviews, letters, lists of references, and general news of the research field
The right to be listed in and receive a copy of the Directory of IAPS members
Reduced subscription rates for specified journals
List of conferences
The biannual conference is the main event organised under auspices of the association. In the past years, the following conferences have been organised:
Post conference publications
People, Places, and Sustainability (2002) Editors: Moser, G. / Pol, E. / Bernard, Y. / Bonnes, M. / Corraliza, J.A. / Giuliani, V.
Culture, Quality of Life and Globalization – Problems and Challenges for the New Millennium (2003) Editors: García Mira, R. / Sabucedo Cameselle, J.M. / Martínez, J.R.
Designing Social Innovation - Planning, Building, Evaluating (2005) Editors: Martens, B. / Keul, A.G.
Environment, Health, and Sustainable Development (2010) Editors: Abdel-Hadi, A. / Tolba, M.K. / Soliman, S.
IAPS Digital Library
A database of all 4,400 abstracts from conferences since 1969, which permits a full-text search through the history of environmental psychology.
Notable people
Perla Serfaty, inducted into the IAPS Hall of Fame in 2018
References
External links
Homepage of IAPS
Homepage of the IAPS Digital Library
Environmental psychology
Environmental social science
International environmental organizations
"Environmental_science"
] | 554 | [
"Environmental social science",
"Environmental psychology"
] |
13,276,879 | https://en.wikipedia.org/wiki/Racetrack%20memory | Racetrack memory or domain-wall memory (DWM) is an experimental non-volatile memory device under development at IBM's Almaden Research Center by a team led by physicist Stuart Parkin. It is a current topic of active research at the Max Planck Institute of Microstructure Physics in Dr. Parkin's group. In early 2008, a 3-bit version was successfully demonstrated. If it were to be developed successfully, racetrack memory would offer storage density higher than comparable solid-state memory devices like flash memory.
Description
Racetrack memory uses a spin-coherent electric current to move magnetic domains along a nanoscopic permalloy wire about 200 nm across and 100 nm thick. As current is passed through the wire, the domains pass by magnetic read/write heads positioned near the wire, which alter the domains to record patterns of bits. A racetrack memory device is made up of many such wires and read/write elements. In general operational concept, racetrack memory is similar to the earlier bubble memory of the 1960s and 1970s. Delay-line memory, such as mercury delay lines of the 1940s and 1950s, are a still-earlier form of similar technology, as used in the UNIVAC and EDSAC computers. Like bubble memory, racetrack memory uses electrical currents to "push" a sequence of magnetic domains through a substrate and past read/write elements. Improvements in magnetic detection capabilities, based on the development of spintronic magnetoresistive sensors, allow the use of much smaller magnetic domains to provide far higher bit densities.
In production, it was expected that the wires could be scaled down to around 50 nm. There were two arrangements considered for racetrack memory. The simplest was a series of flat wires arranged in a grid with read and write heads arranged nearby. A more widely studied arrangement used U-shaped wires arranged vertically over a grid of read/write heads on an underlying substrate. This would allow the wires to be much longer without increasing its 2D area, although the need to move individual domains further along the wires before they reach the read/write heads results in slower random access times. Both arrangements offered about the same throughput performance. The primary concern in terms of construction was practical; whether or not the three dimensional vertical arrangement would be feasible to mass-produce.
Comparison to other memory devices
Projections in 2008 suggested that racetrack memory would offer performance on the order of 20-32 ns to read or write a random bit. This compared to about 10,000,000 ns for a hard drive, or 20-30 ns for conventional DRAM. The primary authors discussed ways to improve the access times with the use of a "reservoir" to about 9.5 ns. Aggregate throughput, with or without the reservoir, would be on the order of 250-670 Mbit/s for racetrack memory, compared to 12800 Mbit/s for a single DDR3 DRAM, 1000 Mbit/s for high-performance hard drives, and 1000 to 4000 Mbit/s for flash memory devices. The only current technology that offered a clear latency benefit over racetrack memory was SRAM, on the order of 0.2 ns, but at a higher cost and with a larger feature size "F" of about 45 nm (as of 2011) and a cell area of about 140 F².
Racetrack memory is one among several emerging technologies that aim to replace conventional memories such as DRAM and Flash, and potentially offer a universal memory device applicable to a wide variety of roles. Other contenders included magnetoresistive random-access memory (MRAM), phase-change memory (PCRAM) and ferroelectric RAM (FeRAM). Most of these technologies offer densities similar to flash memory, in most cases worse, and their primary advantage is the lack of write-endurance limits like those in flash memory. Field-MRAM offers excellent performance as high as 3 ns access time, but requires a large 25-40 F² cell size. It might see use as an SRAM replacement, but not as a mass storage device. The highest densities from any of these devices is offered by PCRAM, with a cell size of about 5.8 F², similar to flash memory, as well as fairly good performance around 50 ns. Nevertheless, none of these can come close to competing with racetrack memory in overall terms, especially density. For example, 50 ns allows about five bits to be operated in a racetrack memory device, resulting in an effective cell size of 20/5=4 F², easily exceeding the performance-density product of PCM. On the other hand, without sacrificing bit density, the same 20 F² area could fit 2.5 2-bit 8 F² alternative memory cells (such as resistive RAM (RRAM) or spin-torque transfer MRAM), each of which individually operating much faster (~10 ns).
In most cases, memory devices store one bit in any given location, so they are typically compared in terms of "cell size", a cell storing one bit. Cell size itself is given in units of F², where "F" is the feature size design rule, representing usually the metal line width. Flash and racetrack both store multiple bits per cell, but the comparison can still be made. For instance, hard drives appeared to be reaching theoretical limits around 650 nm²/bit, defined primarily by the capability to read and write to specific areas of the magnetic surface. DRAM has a cell size of about 6 F², SRAM is much less dense at 120 F². NAND flash memory is currently the densest form of non-volatile memory in widespread use, with a cell size of about 4.5 F², but storing three bits per cell for an effective size of 1.5 F². NOR flash memory is slightly less dense, at an effective 4.75 F², accounting for 2-bit operation on a 9.5 F² cell size. In the vertical orientation (U-shaped) racetrack, nearly 10-20 bits are stored per cell, which itself would have a physical size of at least about 20 F². In addition, bits at different positions on the "track" would take different times (from ~10 to ~1000 ns, or 10 ns/bit) to be accessed by the read/write sensor, because the "track" would move the domains at a fixed rate of ~100 m/s past the read/write sensor.
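The effective-cell-size arithmetic in the comparison above can be made explicit. All figures below are the article's approximate numbers, not measurements:

```python
# Worked version of the density arithmetic: a vertical racetrack cell
# occupies ~20 F^2 but stores multiple bits, so the effective area per
# bit is the physical footprint divided by the bits accessed.

cell_area_F2 = 20       # physical footprint of one vertical racetrack, in F^2
time_per_bit_ns = 10    # ~10 ns to shift one bit past the read/write sensor

# In 50 ns of operation, 50/10 = 5 bits pass the sensor...
bits_accessed = 50 // time_per_bit_ns

# ...so the effective area per stored bit is 20/5 = 4 F^2, better than
# the ~5.8 F^2 quoted for PCRAM at comparable access time.
effective_cell_F2 = cell_area_F2 / bits_accessed
print(effective_cell_F2)  # 4.0
```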
Development challenges
One limitation of the early experimental devices was that the magnetic domains could be pushed only slowly through the wires, requiring current pulses on the order of microseconds to move them successfully. This was unexpected, and led to performance roughly equal to that of hard drives, as much as 1000 times slower than predicted. Recent research has traced this problem to microscopic imperfections in the crystal structure of the wires which led to the domains becoming "stuck" at these imperfections. Using an X-ray microscope to directly image the boundaries between the domains, their research found that domain walls would be moved by pulses as short as a few nanoseconds when these imperfections were absent. This corresponds to a macroscopic performance of about 110 m/s.
The voltage required to drive the domains along the racetrack would be proportional to the length of the wire. The current density must be sufficiently high to push the domain walls (as in electromigration). A difficulty for racetrack technology arises from the need for high current density (>10⁸ A/cm²); a 30 nm × 100 nm cross-section would require >3 mA. The resulting power draw becomes higher than that required for other memories, e.g., spin-transfer torque memory (STT-RAM) or flash memory.
Another challenge associated with racetrack memory is the stochastic nature of domain-wall motion: the walls can move and stop at random positions. There have been attempts to overcome this challenge by producing notches at the edges of the nanowire. Researchers have also proposed staggered nanowires to pin the domain walls precisely. Experimental investigations have shown the effectiveness of staggered domain wall memory. Recently, researchers have proposed non-geometrical approaches such as local modulation of magnetic properties through composition modification, using techniques such as annealing-induced diffusion and ion implantation.
See also
Giant magnetoresistance (GMR) effect
Magnetoresistive random-access memory (MRAM)
Spintronics
Spin transistor
References
External links
Redefining the Architecture of Memory
IBM Moves Closer to New Class of Memory (YouTube video)
IBM Racetrack Memory Project
Computer memory
Non-volatile memory
IBM storage devices
Spintronics | Racetrack memory | [
"Physics",
"Materials_science"
] | 1,757 | [
"Spintronics",
"Condensed matter physics"
] |
13,276,958 | https://en.wikipedia.org/wiki/Initial%20value%20formulation%20%28general%20relativity%29 | The initial value formulation of general relativity is a reformulation of Albert Einstein's theory of general relativity that describes a universe evolving over time.
Each solution of the Einstein field equations encompasses the whole history of a universe – it is not just some snapshot of how things are, but a whole spacetime: a statement encompassing the state of matter and geometry everywhere and at every moment in that particular universe. By this token, Einstein's theory appears to be different from most other physical theories, which specify evolution equations for physical systems; if the system is in a given state at some given moment, the laws of physics allow you to extrapolate its past or future. For Einstein's equations, there appear to be subtle differences compared with other fields: they are self-interacting (that is, non-linear even in the absence of other fields); they are diffeomorphism invariant, so to obtain a unique solution, a fixed background metric and gauge conditions need to be introduced; finally, the metric determines the spacetime structure, and thus the domain of dependence for any set of initial data, so the region on which a specific solution will be defined is not, a priori, defined.
There is, however, a way to re-formulate Einstein's equations that overcomes these problems. First of all, there are ways of rewriting spacetime as the evolution of "space" in time; an earlier version of this is due to Paul Dirac, while a simpler way is known after its inventors Richard Arnowitt, Stanley Deser and Charles Misner as ADM formalism. In these formulations, also known as "3+1" approaches, spacetime is split into a three-dimensional hypersurface with interior metric and an embedding into spacetime with exterior curvature; these two quantities are the dynamical variables in a Hamiltonian formulation tracing the hypersurface's evolution over time. With such a split, it is possible to state the initial value formulation of general relativity. It involves initial data which cannot be specified arbitrarily but needs to satisfy specific constraint equations, and which is defined on some suitably smooth three-manifold ; just as for other differential equations, it is then possible to prove existence and uniqueness theorems, namely that there exists a unique spacetime which is a solution of Einstein equations, which is globally hyperbolic, for which is a Cauchy surface (i.e. all past events influence what happens on , and all future events are influenced by what happens on it), and has the specified internal metric and extrinsic curvature; all spacetimes that satisfy these conditions are related by isometries.
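The "3+1" split described here is commonly summarized by the ADM form of the line element. The following is the standard textbook expression, included for reference, with lapse N, shift vector β^i, and interior (spatial) metric γ_ij:

```latex
% ADM (3+1) decomposition: N is the lapse, \beta^i the shift vector, and
% \gamma_{ij} the interior metric on each constant-t hypersurface.
ds^2 = -N^2\,dt^2 + \gamma_{ij}\,\bigl(dx^i + \beta^i\,dt\bigr)\bigl(dx^j + \beta^j\,dt\bigr)
```

The lapse and shift encode the freedom in how the hypersurfaces are stacked and coordinatized, while the spatial metric γ_ij and the extrinsic curvature K_ij constitute the dynamical initial data subject to the Hamiltonian and momentum constraint equations.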
The initial value formulation with its 3+1 split is the basis of numerical relativity; attempts to simulate the evolution of relativistic spacetimes (notably merging black holes or gravitational collapse) using computers. However, there are significant differences to the simulation of other physical evolution equations which make numerical relativity especially challenging, notably the fact that the dynamical objects that are evolving include space and time itself (so there is no fixed background against which to evaluate, for instance, perturbations representing gravitational waves) and the occurrence of singularities (which, when they are allowed to occur within the simulated portion of spacetime, lead to arbitrarily large numbers that would have to be represented in the computer model).
See also
ADM formalism
Notes
References
Kalvakota, Vaibhav R. (July 1, 2021). "A brief account of the Cauchy problem in General Relativity".
General relativity | Initial value formulation (general relativity) | [
"Physics"
] | 737 | [
"General relativity",
"Theory of relativity"
] |
13,277,180 | https://en.wikipedia.org/wiki/Priming%20%28steam%20locomotive%29 | Priming is a condition in the boiler of a steam locomotive in which water is carried over into the steam delivery. It may be caused by impurities in the water, which foams up as it boils, or simply too high a water level. It is harmful to the valves and pistons, as lubrication is washed away, and can be dangerous as any water collecting in the cylinders is not compressible and if trapped may fracture the cylinder head or piston.
Causes
The most frequent cause is running the locomotive with too high a water level in the boiler; priming is most apparent when the regulator is opened sharply or when steam demand is high. Thus, sensible locomotive management by the operators will help to prevent the occurrence. The phenomenon is particularly evident in areas of impure water, where the boiling water creates a foam, or a mist of droplets, filling the space that collects steam at the top of the boiler, to be drawn down the steam collector pipe in the form of slugs of water. If boiler water is condensed and re-used, any oil or grease must be extracted, as this form of contamination is particularly likely to give trouble.
Remedy
Early designers fitted curved sheets below the steam collector pipe, but these were not successful, as the whole of the steam space could contain foam. In districts where the feed water is unsuitable, blowdown valves ("scum valves"), either working continuously while the regulator is open or operated in conjunction with the boiler feed, are fitted. Valves at water level reduce surface scum; those towards the bottom of the boiler help remove precipitated solids. Other forms of prevention include the chemical treatment of water before it enters the boiler. In the event of priming (and also when steam is admitted through cold piping or into a cold cylinder) the operators need to open the cylinder cocks, which are designed to release trapped water. Once it occurs, priming can affect the level indicated in the boiler's gauge glass, and for this reason is difficult to put right without reducing the water level to the extent that the firebox crown becomes dangerously exposed.
See also
Carryover with steam
References
Steam boilers
Steam power
Steam engines
Steam locomotive technologies | Priming (steam locomotive) | [
"Physics"
] | 439 | [
"Power (physics)",
"Steam power",
"Physical quantities"
] |
13,277,538 | https://en.wikipedia.org/wiki/Thermal%20conductivity%20detector | The thermal conductivity detector (TCD), also known as a katharometer, is a bulk property detector and a chemical specific detector commonly used in gas chromatography. This detector senses changes in the thermal conductivity of the column eluent and compares it to a reference flow of carrier gas. Since most compounds have a thermal conductivity much less than that of the common carrier gases of helium or hydrogen, when an analyte elutes from the column the effluent thermal conductivity is reduced, and a detectable signal is produced.
Operation
The TCD consists of an electrically heated filament in a temperature-controlled cell. Under normal conditions there is a stable heat flow from the filament to the detector body. When an analyte elutes and the thermal conductivity of the column effluent is reduced, the filament heats up and changes resistance. This resistance change is often sensed by a Wheatstone bridge circuit which produces a measurable voltage change. The column effluent flows over one of the resistors while the reference flow is over a second resistor in the four-resistor circuit.
A schematic of a classic thermal conductivity detector design utilizing a Wheatstone bridge circuit is shown. The reference flow across resistor 4 of the circuit compensates for drift due to flow or temperature fluctuations. Changes in the thermal conductivity of the column effluent flow across resistor 3 will result in a temperature change of the resistor and therefore a resistance change which can be measured as a signal.
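The bridge behaviour described above can be sketched numerically. This is a generic textbook Wheatstone-bridge model, not the circuit of any particular instrument; the argument names and divider topology are assumptions of the example, and do not correspond to the numbered resistors in the schematic description.

```python
def bridge_output(v_supply, r1, r2, r_ref, r_sense):
    """Differential output of a Wheatstone bridge built from two voltage dividers.

    Divider A: fixed resistor r1 in series with the sensing filament (r_sense,
    exposed to the column effluent).  Divider B: fixed resistor r2 in series
    with the reference filament (r_ref, in the reference carrier flow).
    """
    v_a = v_supply * r_sense / (r1 + r_sense)  # midpoint of the sensing divider
    v_b = v_supply * r_ref / (r2 + r_ref)      # midpoint of the reference divider
    return v_a - v_b
```

With all four resistances equal the bridge is balanced and the output is 0 V; when an eluting analyte lowers the effluent's thermal conductivity, the sensing filament heats up, its resistance rises, and a small imbalance voltage appears.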
Since all compounds, organic and inorganic, have a thermal conductivity different from that of helium or hydrogen, virtually all compounds can be detected. For this reason, the TCD is often called a universal detector.
Used after a separation column (in a chromatograph), a TCD measures the concentration of each compound contained in the sample. The TCD signal changes as a compound passes through it, tracing a peak on a baseline. The peak's position on the baseline indicates the compound's identity, while the peak area (computed by integrating the TCD signal over time) is representative of the compound's concentration. A sample whose compound concentrations are known is used to calibrate the TCD: concentrations are related to peak areas through a calibration curve.
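As a minimal sketch of that workflow (the function names and the through-origin linear calibration model are assumptions of this example, not any instrument vendor's API):

```python
def peak_area(times, signal, baseline=0.0):
    """Trapezoidal integration of a baseline-corrected detector peak."""
    area = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        # average the two baseline-corrected samples over each interval
        area += 0.5 * ((signal[i] - baseline) + (signal[i - 1] - baseline)) * dt
    return area

def fit_response_factor(standard_areas, standard_concs):
    """Least-squares slope of a calibration line forced through the origin."""
    num = sum(a * c for a, c in zip(standard_areas, standard_concs))
    den = sum(a * a for a in standard_areas)
    return num / den

def concentration(area, response_factor):
    """Apply the calibration to an unknown peak's area."""
    return response_factor * area
```

For instance, a triangular peak of height 2 spanning 2 time units integrates to an area of 2.0; fitting standards whose areas are proportional to known concentrations yields the response factor applied to unknowns.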
The TCD is a good general-purpose detector for initial investigations of an unknown sample, compared to the FID, which responds only to combustible compounds (e.g., hydrocarbons). Moreover, the TCD is a non-specific and non-destructive technique. The TCD is also used in the analysis of permanent gases (argon, oxygen, nitrogen, carbon dioxide) because it responds to all of these substances, unlike the FID, which cannot detect compounds that do not contain carbon-hydrogen bonds.
Considering detection limits, both the TCD and the FID reach low concentration levels (below the ppm, or even ppb, range).
Both require a pressurized carrier gas (typically H2 for the FID and He for the TCD), but due to the risk associated with storing H2 (high flammability; see Hydrogen safety), a TCD with He should be considered in locations where safety is crucial.
Considerations
One thing to be aware of when operating a TCD is that gas flow must never be interrupted when the filament is hot, as doing so may cause the filament to burn out. While the filament of a TCD is generally chemically passivated to prevent it from reacting with oxygen, the passivation layer can be attacked by halogenated compounds, so these should be avoided wherever possible.
If analyzing for hydrogen, the peak will appear as negative when helium is used as the reference gas. This problem can be avoided if another reference gas is used, for example argon or nitrogen, although this will significantly reduce the detector's sensitivity towards any compounds other than hydrogen.
Process description
It functions by having two parallel tubes both containing gas and heating coils. The gases are examined by comparing the rate of loss of heat from the heating coils into the gas. The coils are arranged in a bridge circuit so that resistance changes due to unequal cooling can be measured. One channel normally holds a reference gas and the mixture to be tested is passed through the other channel.
Applications
Katharometers are used medically in lung function testing equipment and in gas chromatography. The results are slower to obtain than with a mass spectrometer, but the device is inexpensive and has good accuracy when the gases in question are known and only their proportions must be determined.
Monitoring of hydrogen purity in hydrogen-cooled turbogenerators.
Detection of helium loss from the helium vessel of an MRI superconducting magnet.
Also used within the brewing industry to quantify the amount of carbon dioxide within beer samples.
Used within the energy industry to quantify the amount (calorific value) of methane within biogas samples.
Used within the food and drink industry to quantify and/or validate food packaging gases.
Used within the oil and gas industry to quantify the percentage of hydrocarbons when drilling into a formation.
References
Gas chromatography
Measuring instruments | Thermal conductivity detector | [
"Chemistry",
"Technology",
"Engineering"
] | 1,060 | [
"Chromatography",
"Gas chromatography",
"Measuring instruments"
] |
13,277,557 | https://en.wikipedia.org/wiki/Hummelstown%20brownstone | Hummelstown brownstone is a medium-grain, dense sandstone quarried near Hummelstown in Dauphin County, Pennsylvania, USA. It is a dark brownstone with reddish to purplish hues, and was once widely used as a building stone in the United States.
History
The Hummelstown Brownstone Company quarried high quality brownstone near Hummelstown from 1863 to 1929 and sold it across the U.S. as a preferred masonry material of builders. Because of its durability, it was used for a wide range of building projects, especially as trim and ornamentation on large buildings, but also as bridge piers and in the foundations and walls of buildings or the sculptures that decorated them. Frequently, entire buildings were dressed in Hummelstown brownstone. An example of this is the Barbour County Courthouse (1903–05) in Philippi, West Virginia.
Hummelstown brownstone and similar sandstones were known as “freestone” because of properties allowing them to be worked freely in every direction, rather than in one direction along a “grain”. This characteristic made them very popular with stone cutters and masons.
References
Masonry
Stone (material)
Geologic formations of Pennsylvania
Sandstone in the United States
Building materials | Hummelstown brownstone | [
"Physics",
"Engineering"
] | 254 | [
"Masonry",
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
9,276,269 | https://en.wikipedia.org/wiki/Aircraft%20on%20ground | Aircraft On Ground or AOG is a term in aviation maintenance indicating that a problem is serious enough to prevent an aircraft from flying. This can involve problems as simple as a light bulb being out, or as complex as a damaged engine. Boeing estimates that a 1-2 hour AOG situation will cost an airline from $10,000 to $20,000 and possibly even as high as $150,000 per hour depending on the type of aircraft and route flown.
Causes of AOG incidents
AOG incidents can arise from various factors, including:
Mechanical failures: Unexpected mechanical issues or failures during flight or pre-flight checks can ground an aircraft.
Maintenance delays: Scheduled maintenance that extends beyond the expected timeframe can result in AOG status.
Supply chain issues: Delays in obtaining necessary parts or components can hinder timely repairs, leading to extended grounding periods.
Regulatory compliance: Aircraft may be grounded due to non-compliance with safety regulations or certification requirements.
AOG can also refer to an aircraft waiting for a flight crew at an airport where no flight crew is available. Crew scheduling ("crew sked") can designate an inbound deadheading crew, assigned to operate this flight, as "AOG", which makes that crew the highest priority for seats on the flight. Passengers, even in first class, can be "bumped" to accommodate the AOG crew.
References
Air freight
Aircraft maintenance | Aircraft on ground | [
"Engineering"
] | 278 | [
"Aircraft maintenance",
"Aerospace engineering"
] |
9,276,391 | https://en.wikipedia.org/wiki/Navajo%20Dam | Navajo Dam is a dam on the San Juan River, a tributary of the Colorado River, in northwestern New Mexico in the United States. The high earthen dam is situated in the foothills of the San Juan Mountains about upstream and east of Farmington, New Mexico. It was built by the U.S. Bureau of Reclamation (Reclamation) in the 1960s to provide flood control, irrigation, domestic and industrial water supply, and storage for droughts. A small hydroelectric power plant was added in the 1980s.
The dam is a major feature of the Colorado River Storage Project, which is designed to regulate water resources across the entire Upper Colorado River Basin. The reservoir, Navajo Lake, is a popular recreation area and one of the largest bodies of water in New Mexico, with its upper portion extending into Colorado.
Specifications
Navajo is a rolled earthfill embankment dam, composed of three "zones" of alternating cobbles, gravel, sand and clay. The dam is high and long, with a width of at the base and at the crest. The dam contains of material. The crest of the dam is above sea level. The dam forms Navajo Lake, one of the largest lakes in both New Mexico and Colorado with up to of surface water. Navajo Lake extends for up the San Juan River, up the Los Pinos River (Pine River) and up the Piedra River. The capacity of Navajo Lake is , of which or 60.6 percent is considered "active" or usable storage.
Prior to the dam's construction, the San Juan River flow was high during spring snowmelt and summer monsoon, and a relative trickle at other times of the year. The dam has enabled a constant water flow throughout the year which benefits water supply, recreation, and flood control. Up to of water is released into the San Juan River via a 32 megawatt hydroelectric plant owned by the city of Farmington. Hydropower generated at Navajo Dam serves about 37,000 customers in northwest New Mexico and averaged a production of 135,226,000 kilowatt hours for the period 1989–1999.
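A back-of-the-envelope check of the figures above (treating the quoted 135,226,000 kilowatt hours as an annual average, which is an interpretation rather than something stated explicitly, and using the 32 megawatt plant rating):

```python
def capacity_factor(annual_kwh, nameplate_kw, hours_per_year=8760):
    """Average output as a fraction of nameplate (rated) capacity."""
    average_kw = annual_kwh / hours_per_year
    return average_kw / nameplate_kw

# 135,226,000 kWh per year from a 32 MW (32,000 kW) plant
cf = capacity_factor(135_226_000, 32_000)  # roughly 0.48
```

A capacity factor near one-half would be unremarkable for a hydro plant whose releases are scheduled around water supply and flood control rather than power production.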
Floodwaters can be released through a tunnel outlet with a capacity of per second, and an ungated concrete spillway with a capacity of . The spillway is wide at the crest and falls through a chute to a stilling basin. Flood releases are not to exceed the safe San Juan channel capacity of below the dam. When the dam was first constructed, water releases were prioritized to meet irrigation demands and provide flood control; however, since the late 1990s the operating criteria have been changed in order to meet environmental restoration goals in the San Juan River (see below).
Navajo Dam is part of the federal Colorado River Storage Project (CRSP), a massive system of dams, reservoirs and irrigation projects across the Upper Colorado River Basin. Operations at Navajo Dam are crucial to ensuring enough water is available for both local San Juan basin users and contract holders of the San Juan–Chama Project, which diverts water from the San Juan River to the Rio Grande valley serving Albuquerque, New Mexico. The San Juan-Chama project uses up to of water that would otherwise have flowed into Navajo Lake. At Navajo Dam, a large fraction of San Juan River water is diverted into the Navajo Indian Irrigation Project (NIIP), which irrigates of farmland to the south of the river. About , or 30 percent of the lake's capacity, are allocated to the NIIP. Current water use by NIIP is about per year, and is expected to increase as more of the NIIP lands are brought into agricultural production. Both the San Juan-Chama and Navajo Irrigation Projects are participating units of the CRSP.
History
The first studies for a dam on the San Juan River were made in 1904, but there was little need for such a project at the time, due to the remoteness of the area. The growth of Farmington and surrounding towns in the 1920s due to agriculture and a petroleum boom created the need for additional water supply as well as flood control, both equal in importance due to the seasonal nature of the San Juan River. The Navajo Nation's growing population was suffering from food and employment shortages, which the Bureau of Reclamation envisioned could be solved by a large irrigation project. The 1908 Supreme Court decision Winters v. United States ruled that a federally established Indian reservation was "entitled to the water needed to create a permanent homeland". In other words, the reservation carried implicit water rights dating back to its creation in 1868; in this case, the Navajo peoples' rights to San Juan River water. This ruling eventually led to development of the Navajo Dam project, which was first outlined in a 1955 Bureau of Indian Affairs report that proposed a "distribution system for irrigation of of new land within and adjacent to the Navajo Indian Reservation, all in New Mexico."
The high dam proposed for the San Juan River ultimately became part of the Bureau of Reclamation's Colorado River Storage Project. Navajo Dam was authorized by Congress in the Colorado River Storage Project Act of April 11, 1956, which also authorized Glen Canyon Dam and numerous irrigation and power projects along the Green River, Gunnison River and other tributaries. The initial authorization for Navajo Dam did not include the Navajo irrigation project, which was authorized much later, in June 1962, along with the San Juan-Chama diversion project.
Preparatory work at the Navajo Dam site started on October 8, 1956 with archaeological excavations of Native American sites along the canyons and bottom lands of the San Juan River; however, these investigations were limited in scope and only preserved a handful of artifacts from the area. A number of sites sacred to the Navajo people would later be flooded with the filling of the lake. Other features to be demolished or relocated included the towns of Rosa and Los Arboles, several cemeteries along the Los Pinos River, and sections of the Denver and Rio Grande Western Railroad. The primary construction contract for the dam was awarded to Morrison–Knudsen, Henry J. Kaiser Company, and F&S Contracting Company on June 25, 1958 and work began a month later on July 30.
The first facilities to be constructed were the water works allowing for diversion of the San Juan River. The main diversion tunnel was completed on January 27, 1959 and the auxiliary tunnel on April 27, 1959. Concrete lining was complete in October and the river was diverted on January 4, 1960 allowing the start of construction on the embankment dam. More than of material had to be placed to form the main embankment with a record of placed during August 1960 alone. As the dam continued to rise, clearing operations in the reservoir basin began on June 30, 1961 and took about a year to complete. The concrete spillway, located to the west of the dam, was finished on September 15, 1961.
On June 27, 1962 the diversion tunnel was blocked, and water was stored in Navajo Lake for the first time. Navajo Lake was the first reservoir of the Colorado River Storage Project to begin filling; the next, Flaming Gorge, would not start filling until November. The main embankment was topped out shortly thereafter on August 22, 1962; however, work on other features of the project, including riprap placement and final concrete work, continued. The dam was dedicated on September 15, 1962 by Secretary of the Interior Stewart Udall, but was not formally completed until April 20, 1963. The total cost of the Navajo Dam project was about $35 million ($ in dollars).
A number of cracks and small slides developed in the dam after its construction in 1964, but were either repaired or settled by themselves without significant structural damage. The dam abutments and the spillway leaked considerably until the 1970s, when the Bureau of Reclamation finally took corrective action by installing drainage systems and placing stabilizing fill. The canal head works and diversion tunnel for the Navajo Indian Irrigation Project were started in 1966 and were completed on July 3, 1967; this allowed the commencement of irrigation water deliveries from Navajo Lake, although the main canal was not completed to its full length until 1977. Operations of the irrigation system were transferred to the Bureau of Indian Affairs shortly thereafter. The reservoir did not fill completely until spring 1973, when it flowed over the spillway for the first time.
Also in 1973, the Bureau of Reclamation began planning for a hydroelectric power plant at Navajo, which had not been included in the original design of the dam. Construction began in 1976, but was halted when the National Wildlife Federation sued, citing that fluctuating power releases into the San Juan River could cause environmental damage. As a result, Congress did not approve the project. However, in 1981 the city of Farmington applied to the Federal Energy Regulatory Commission for a permit to construct a private hydro power plant at the dam. The Navajo Nation protested, because if the power plant were to be constructed as a private project rather than a public Reclamation project, the Navajo would lose the potential benefits. Ultimately, the Navajo reached an agreement with the city and the power plant was approved for construction in 1986, with the first power generated in 1989.
Environmental impacts
Navajo Dam has greatly changed the ecology of part of the San Juan River, from a warm, muddy and highly seasonal river to a cold-water stream with relatively constant flow. The dam's impacts are most pronounced in the stretch above Farmington (where the San Juan is joined by the mostly undammed Animas River and regains some of its seasonal variations); further downstream, the dam "apparently has had no significant effect" on the river channel and sediment flow. Under natural conditions, the San Juan River supported native fishes including Colorado pikeminnow, razorback sucker and roundtail chub, but these have largely been eliminated in favor of rainbow trout, non-native (introduced) brown trout and other salmonids which thrive in the cold water released from the base of Navajo Dam. The San Juan is designated a Blue Ribbon fishery and is one of the most popular fly-fishing waters in the western United States.
After a federal biological assessment in 1999, the San Juan River Basin Recovery Implementation Program (SJRIP) was established in order to help recover native fish populations in the river. Under the program, spring peak releases of are made from Navajo Dam. The peak release can be up to 60 days during wet years, but may be suspended during dry years depending on available reservoir storage and predicted inflow. Meanwhile, the dry season base flow is reduced from to . In combination, these mimic historic high and low flow conditions in the San Juan River before the dam was built. The high flows have been observed to benefit trout, but the low flows have been estimated to result in a 34 percent reduction of trout habitat.
See also
List of dams in the Colorado River system
List of dams and reservoirs in the United States
References
Works cited
External links
Navajo Lake current levels
Interview with Leonard Trujillo, construction worker on Navajo Dam
Buildings and structures in Rio Arriba County, New Mexico
Buildings and structures in San Juan County, New Mexico
Dams in New Mexico
Colorado River Storage Project
United States Bureau of Reclamation dams
Dams completed in 1962
Earth-filled dams
Dams in the Colorado River basin
1962 establishments in New Mexico | Navajo Dam | [
"Engineering"
] | 2,254 | [
"Colorado River Storage Project"
] |
9,276,466 | https://en.wikipedia.org/wiki/Parthenogenesis | Parthenogenesis (; from the Greek + ) is a natural form of asexual reproduction in which the embryo develops directly from an egg without need for fertilization. In animals, parthenogenesis means development of an embryo from an unfertilized egg cell. In plants, parthenogenesis is a component process of apomixis. In algae, parthenogenesis can mean the development of an embryo from either an individual sperm or an individual egg.
Parthenogenesis occurs naturally in some plants, algae, invertebrate animal species (including nematodes, some tardigrades, water fleas, some scorpions, aphids, some mites, some bees, some Phasmatodea, and parasitic wasps), and a few vertebrates, such as some fish, amphibians, and reptiles. This type of reproduction has been induced artificially in animal species that naturally reproduce through sex, including fish, amphibians, and mice.
Normal egg cells form in the process of meiosis and are haploid, with half as many chromosomes as their mother's body cells. Haploid individuals, however, are usually non-viable, and parthenogenetic offspring usually have the diploid chromosome number. Depending on the mechanism involved in restoring the diploid number of chromosomes, parthenogenetic offspring may have anywhere between all and half of the mother's alleles. In some types of parthenogenesis the offspring having all of the mother's genetic material are called full clones and those having only half are called half clones. Full clones are usually formed without meiosis. If meiosis occurs, the offspring get only a fraction of the mother's alleles since crossing over of DNA takes place during meiosis, creating variation.
Parthenogenetic offspring in species that use either the XY or the X0 sex-determination system have two X chromosomes and are female. In species that use the ZW sex-determination system, they have either two Z chromosomes (male) or two W chromosomes (mostly non-viable but rarely a female), or they could have one Z and one W chromosome (female).
Life history types
Parthenogenesis occurs naturally across the groups noted above, including a few vertebrates such as some fish, amphibians, reptiles, and birds, and has been induced artificially in a number of animal species that naturally reproduce through sex.
Some species reproduce exclusively by parthenogenesis (such as the bdelloid rotifers), while others can switch between sexual reproduction and parthenogenesis. This is called facultative parthenogenesis (other terms are cyclical parthenogenesis, heterogamy or heterogony). The switch between sexuality and parthenogenesis in such species may be triggered by the season (aphid, some gall wasps), or by a lack of males or by conditions that favour rapid population growth (rotifers and cladocerans like Daphnia). In these species asexual reproduction occurs either in summer (aphids) or as long as conditions are favourable. This is because in asexual reproduction a successful genotype can spread quickly without being modified by sex or wasting resources on male offspring who will not give birth. Some species can produce both sexually and through parthenogenesis, and offspring in the same clutch of a species of tropical lizard can be a mix of sexually produced offspring and parthenogenically produced offspring. In California condors, facultative parthenogenesis can occur even when a male is present and available for a female to breed with. In times of stress, offspring produced by sexual reproduction may be fitter as they have new, possibly beneficial gene combinations. In addition, sexual reproduction provides the benefit of meiotic recombination between non-sister chromosomes, a process associated with repair of DNA double-strand breaks and other DNA damages that may be induced by stressful conditions.
Many taxa with heterogony have within them species that have lost the sexual phase and are now completely asexual. Many other cases of obligate parthenogenesis (or gynogenesis) are found among polyploids and hybrids where the chromosomes cannot pair for meiosis.
The production of female offspring by parthenogenesis is referred to as thelytoky (e.g., aphids) while the production of males by parthenogenesis is referred to as arrhenotoky (e.g., bees). When unfertilized eggs develop into both males and females, the phenomenon is called deuterotoky.
Types and mechanisms
Parthenogenesis can occur without meiosis through mitotic oogenesis. This is called apomictic parthenogenesis. Mature egg cells are produced by mitotic divisions, and these cells directly develop into embryos. In flowering plants, cells of the gametophyte can undergo this process. The offspring produced by apomictic parthenogenesis are full clones of their mother, as in aphids.
Parthenogenesis involving meiosis is more complicated. In some cases, the offspring are haploid (e.g., male ants). In other cases, collectively called automictic parthenogenesis, the ploidy is restored to diploidy by various means. This is because haploid individuals are not viable in most species. In automictic parthenogenesis, the offspring differ from one another and from their mother. They are called half clones of their mother.
Automixis
Automixis includes several reproductive mechanisms, some of which are parthenogenetic.
Diploidy can be restored by the doubling of the chromosomes without cell division before meiosis begins or after meiosis is completed. This is an endomitotic cycle. Diploidy can also be restored by fusion of the first two blastomeres, or by fusion of the meiotic products. The chromosomes may not separate at one of the two anaphases (restitutional meiosis); or the nuclei produced may fuse; or one of the polar bodies may fuse with the egg cell at some stage during its maturation.
Some authors consider all forms of automixis sexual as they involve recombination. Many others classify the endomitotic variants as asexual and consider the resulting embryos parthenogenetic. Among these authors, the threshold for classifying automixis as a sexual process depends on when the products of anaphase I or of anaphase II are joined. The criterion for sexuality varies from all cases of restitutional meiosis, to those where the nuclei fuse or to only those where gametes are mature at the time of fusion. Those cases of automixis that are classified as sexual reproduction are compared to self-fertilization in their mechanism and consequences.
The genetic composition of the offspring depends on what type of automixis takes place. When endomitosis occurs before meiosis or when central fusion occurs (restitutional meiosis of anaphase I or the fusion of its products), the offspring get all to more than half of the mother's genetic material and heterozygosity is mostly preserved (if the mother has two alleles for a locus, it is likely that the offspring will get both). This is because in anaphase I the homologous chromosomes are separated. Heterozygosity is not completely preserved when crossing over occurs in central fusion. In the case of pre-meiotic doubling, recombination, if it happens, occurs between identical sister chromatids.
If terminal fusion (restitutional meiosis of anaphase II or the fusion of its products) occurs, a little over half the mother's genetic material is present in the offspring and the offspring are mostly homozygous. This is because at anaphase II the sister chromatids are separated and whatever heterozygosity is present is due to crossing over. In the case of endomitosis after meiosis, the offspring is completely homozygous and has only half the mother's genetic material. This can result in parthenogenetic offspring that are genetically distinct from one another and from their mother.
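The contrast between central and terminal fusion can be illustrated with a one-locus toy model (a sketch that ignores crossing over; the function names and the single-locus simplification are assumptions of the example, not part of the biology described above):

```python
import random

def central_fusion(mother_genotype):
    """Fusion of anaphase-I products: the separated homologs are rejoined,
    so (absent crossing over) the offspring keeps both of the mother's alleles."""
    return tuple(mother_genotype)

def terminal_fusion(mother_genotype, rng=random):
    """Fusion of anaphase-II products: sister chromatids are rejoined,
    so (absent crossing over) the offspring carries two copies of one allele."""
    allele = rng.choice(mother_genotype)
    return (allele, allele)

def heterozygous(genotype):
    return genotype[0] != genotype[1]

mother = ("A", "a")  # mother heterozygous at a single locus
central_offspring = [central_fusion(mother) for _ in range(1000)]
terminal_offspring = [terminal_fusion(mother) for _ in range(1000)]
# Central fusion preserves heterozygosity; terminal fusion erases it.
```

Running both on a heterozygous mother shows every central-fusion offspring remaining heterozygous, while terminal fusion yields only homozygotes, consistent with the description above.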
Sex of the offspring
In apomictic parthenogenesis, the offspring are clones of the mother and hence (except for aphids) are usually female. In the case of aphids, parthenogenetically produced males and females are clones of their mother except that the males lack one of the X chromosomes (XO).
When meiosis is involved, the sex of the offspring depends on the type of sex determination system and the type of apomixis. In species that use the XY sex-determination system, parthenogenetic offspring have two X chromosomes and are female. In species that use the ZW sex-determination system the offspring genotype may be one of ZW (female), ZZ (male), or WW (non-viable in most species, but a fertile, viable female in a few, e.g., boas). ZW offspring are produced by endoreplication before meiosis or by central fusion. ZZ and WW offspring occur either by terminal fusion or by endomitosis in the egg cell.
In polyploid obligate parthenogens, like the whiptail lizard, all the offspring are female.
In many hymenopteran insects such as honeybees, female eggs are produced sexually, using sperm from a drone father, while the production of further drones (males) depends on the queen (and occasionally workers) producing unfertilized eggs. This means that females (workers and queens) are always diploid, while males (drones) are always haploid, and produced parthenogenetically.
Facultative
Facultative parthenogenesis occurs when a female can produce offspring either sexually or asexually. It is extremely rare in nature, with only a few animal taxa known to be capable of it. Among the best-known examples are mayflies; presumably, this is the default reproductive mode of all species in this insect order. Facultative parthenogenesis has generally been believed to be a response to a lack of a viable male. A female may undergo facultative parthenogenesis if a male is absent from the habitat or if it is unable to produce viable offspring. However, California condors and the tropical lizard Lepidophyma smithii both can produce parthenogenetic offspring in the presence of males, indicating that facultative parthenogenesis may be more common than previously thought and is not simply a response to a lack of males.
In aphids, a generation sexually conceived by a male and a female produces only females. The reason for this is the non-random segregation of the sex chromosomes 'X' and 'O' during spermatogenesis.
Facultative parthenogenesis is often used to describe cases of spontaneous parthenogenesis in normally sexual animals. For example, many cases of spontaneous parthenogenesis in sharks, some snakes, Komodo dragons, and a variety of domesticated birds have been widely attributed to facultative parthenogenesis, but they are better regarded as examples of spontaneous parthenogenesis. The occurrence of such asexually produced eggs in sexual animals can be explained by a meiotic error, leading to eggs produced via automixis.
Obligate
Obligate parthenogenesis is the process in which organisms exclusively reproduce through asexual means. Many species have transitioned to obligate parthenogenesis over evolutionary time. Well-documented transitions to obligate parthenogenesis have been found in numerous metazoan taxa, albeit through highly diverse mechanisms. These transitions often occur as a result of inbreeding or mutation within large populations. Some documented species, specifically salamanders and geckos, rely on obligate parthenogenesis as their major method of reproduction. There are over 80 species of unisex reptiles (mostly lizards but including a single snake species), amphibians and fishes in nature for which males are no longer a part of the reproductive process. A female produces an ovum with a full set of chromosomes (two sets of genes) provided solely by the mother; thus, a male is not needed to provide sperm to fertilize the egg. This form of asexual reproduction is thought in some cases to be a serious threat to biodiversity because of the resulting lack of genetic variation and potentially decreased fitness of the offspring.
Some invertebrate species that feature (partial) sexual reproduction in their native range are found to reproduce solely by parthenogenesis in areas to which they have been introduced. Relying solely on parthenogenetic reproduction has several advantages for an invasive species: it obviates the need for individuals in a very sparse initial population to search for mates; and an exclusively female sex distribution allows a population to multiply and invade more rapidly (potentially twice as fast). Examples include several aphid species and the willow sawfly, Nematus oligospilus, which is sexual in its native Holarctic habitat but parthenogenetic where it has been introduced into the Southern Hemisphere.
Natural occurrence
Parthenogenesis does not apply to isogamous species. Parthenogenesis occurs naturally in aphids, Daphnia, rotifers, nematodes, and some other invertebrates, as well as in many plants. Among vertebrates, strict parthenogenesis is only known to occur in lizards, snakes, birds, and sharks. Fish, amphibians, and reptiles make use of various forms of gynogenesis and hybridogenesis (an incomplete form of parthenogenesis). The first all-female (unisexual) reproduction in vertebrates was described in the fish Poecilia formosa in 1932. Since then at least 50 species of unisexual vertebrate have been described, including at least 20 fish, 25 lizards, a single snake species, frogs, and salamanders.
Artificial induction
Use of an electrical or chemical stimulus can produce the beginning of the process of parthenogenesis in the asexual development of viable offspring.
During oocyte development, high metaphase promoting factor (MPF) activity causes mammalian oocytes to arrest at the metaphase II stage until fertilization by a sperm. The fertilization event causes intracellular calcium oscillations, and targeted degradation of cyclin B, a regulatory subunit of MPF, thus permitting the MII-arrested oocyte to proceed through meiosis.
To initiate unfertilised development of swine oocytes, various methods exist to induce an artificial activation that mimics sperm entry, such as calcium ionophore treatment, microinjection of calcium ions, or electrical stimulation. Treatment with cycloheximide, a non-specific protein synthesis inhibitor, enhances the development of unfertilised eggs in swine, presumably by continual inhibition of MPF/cyclin B. As meiosis proceeds, extrusion of the second polar body is blocked by exposure to cytochalasin B. This treatment results in a diploid (two maternal genomes) parthenote. The resulting embryos can be surgically transferred to a recipient oviduct for further development, but will succumb to developmental failure after ≈30 days of gestation. The swine placenta in these cases often appears hypo-vascular (see Figure 1 in the linked reference).
Induced parthenogenesis of this type in mice and monkeys results in abnormal development. This is because mammals have imprinted genetic regions, where either the maternal or the paternal chromosome is inactivated in the offspring for development to proceed normally. A mammal developing from parthenogenesis would have double doses of maternally imprinted genes and lack paternally imprinted genes, leading to developmental abnormalities. It has been suggested that defects in placental folding or interdigitation are one cause of swine parthenote abortive development. As a consequence, research on the induced development of unfertilised eggs in humans is focused on the production of embryonic stem cells for use in medical treatment, not as a reproductive strategy.
In 2022, researchers reported that they have produced viable offspring born from unfertilized eggs in mice, addressing the problems of genomic imprinting by "targeted DNA methylation rewriting of seven imprinting control regions".
In humans
In 1955, Helen Spurway, a geneticist specializing in the reproductive biology of the guppy (Lebistes reticulatus), claimed that parthenogenesis may occur (though very rarely) in humans, leading to so-called "virgin births". This created some sensation among her colleagues and the lay public alike. Sometimes an embryo may begin to divide without fertilization, but it cannot fully develop on its own; so while it may create some skin and nerve cells, it cannot create others (such as skeletal muscle) and becomes a type of benign tumor called an ovarian teratoma. Spontaneous ovarian activation is not rare and has been known about since the 19th century. Some teratomas can even become primitive fetuses (fetiform teratoma) with imperfect heads, limbs and other structures, but are non-viable.
In 1995, there was a reported case of partial human parthenogenesis; a boy was found to have some of his cells (such as white blood cells) to be lacking in any genetic content from his father. Scientists believe that an unfertilized egg began to self-divide but then had some (but not all) of its cells fertilized by a sperm cell; this must have happened early in development, as self-activated eggs quickly lose their ability to be fertilized. The unfertilized cells eventually duplicated their DNA, boosting their chromosomes to 46. When the unfertilized cells hit a developmental block, the fertilized cells took over and developed that tissue. The boy had asymmetrical facial features and learning difficulties but was otherwise healthy. This would make him a parthenogenetic chimera (a child with two cell lineages in his body). While over a dozen similar cases have been reported since then (usually discovered after the patient demonstrated clinical abnormalities), there have been no scientifically confirmed reports of a non-chimeric, clinically healthy human parthenote (i.e. produced from a single, parthenogenetic-activated oocyte).
In 2007, the International Stem Cell Corporation of California announced that Elena Revazova had intentionally created human stem cells from unfertilized human eggs using parthenogenesis. The process may offer a way for creating stem cells genetically matched to a particular female to treat degenerative diseases. The same year, Revazova and ISCC published an article describing how to produce human stem cells that are homozygous in the HLA region of DNA. These stem cells are called HLA homozygous parthenogenetic human stem cells (hpSC-Hhom) and would allow derivatives of these cells to be implanted without immune rejection. With selection of oocyte donors according to HLA haplotype, it would be possible to generate a bank of cell lines whose tissue derivatives, collectively, could be MHC-matched with a significant number of individuals within the human population.
After an independent investigation, it was revealed that the discredited South Korean scientist Hwang Woo-Suk unknowingly produced the first human embryos resulting from parthenogenesis. Initially, Hwang claimed he and his team had extracted stem cells from cloned human embryos, a result later found to be fabricated. Further examination of the chromosomes of these cells show indicators of parthenogenesis in those extracted stem cells, similar to those found in the mice created by Tokyo scientists in 2004. Although Hwang deceived the world about being the first to create artificially cloned human embryos, he contributed a major breakthrough to stem cell research by creating human embryos using parthenogenesis.
Similar phenomena
Gynogenesis
A form of asexual reproduction related to parthenogenesis is gynogenesis. Here, offspring are produced by the same mechanism as in parthenogenesis, but with the requirement that the egg merely be stimulated by the presence of sperm in order to develop. However, the sperm cell does not contribute any genetic material to the offspring. Since gynogenetic species are all female, activation of their eggs requires mating with males of a closely related species for the needed stimulus. Some salamanders of the genus Ambystoma are gynogenetic and appear to have been so for over a million years. The success of those salamanders may be due to rare fertilization of eggs by males, introducing new material to the gene pool, which may result from perhaps only one mating out of a million. In addition, the Amazon molly is known to reproduce by gynogenesis.
Hybridogenesis
Hybridogenesis is a mode of reproduction of hybrids. Hybridogenetic hybrids (for example with an AB genome), usually females, exclude one parental genome (A) during gametogenesis and produce gametes carrying the unrecombined genome of the second parental species (B), instead of gametes containing mixed, recombined parental genomes. The first genome (A) is restored by fertilization of these gametes with gametes from the first species (AA, the sexual host, usually a male). Hybridogenesis is therefore not completely asexual but hemiclonal: half the genome (B) is passed to the next generation clonally, unrecombined and intact, while the other half (A) is passed sexually and recombined. This process continues, so that each generation is half (or hemi-) clonal on the mother's side and has half new genetic material from the father's side.
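The hemiclonal bookkeeping can be sketched as follows (genome labels such as "A0" and "B0" are made up for illustration):

```python
def next_generation(mother, gen):
    """Hemiclonal transmission sketch (hypothetical labels): the hybrid
    mother discards her A genome during gametogenesis and passes her
    B genome clonally; a male of the sexual-host species supplies a fresh,
    recombined A genome each generation."""
    _discarded_a, b_half = mother
    egg = b_half                 # clonal, unrecombined maternal half
    sperm = f"A{gen}"            # new paternal genome from an AA male
    return (sperm, egg)

mother = ("A0", "B0")
for gen in range(1, 4):
    mother = next_generation(mother, gen)
# After three generations the A half has been renewed three times,
# while B0 has been transmitted intact: mother == ("A3", "B0")
```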
This form of reproduction is seen in some live-bearing fish of the genus Poeciliopsis as well as in some of the Pelophylax spp. ("green frogs" or "waterfrogs"):
P. kl. esculentus (edible frog): P. lessonae × P. ridibundus,
P. kl. grafi (Graf's hybrid frog): P. perezi × P. ridibundus
P. kl. hispanicus (Italian edible frog) – unknown origin: P. bergeri × P. ridibundus or P. kl. esculentus
Other examples where hybridogenesis is at least one of the modes of reproduction include:
Iberian minnow Tropidophoxinellus alburnoides (Squalius pyrenaicus × hypothetical ancestor related with Anaecypris hispanica)
spined loaches Cobitis hankugensis × C. longicorpus
Bacillus stick insects B. rossius × Bacillus grandii benazzii
In human culture
Parthenogenesis, in the form of reproduction from a single individual (typically a god), is common in mythology, religion, and folklore around the world, including in ancient Greek myth; for example, Athena was born from the head of Zeus. Both Christianity and Islam hold that Jesus was born of a virgin, and stories of miraculous births appear in many other religions.
The theme is one of several aspects of reproductive biology explored in science fiction.
See also
Androgenesis - a form of quasi-sexual reproduction in which a male is the sole source of the nuclear genetic material in the embryo
Telescoping generations
– conducted experiments that established what is now termed parthenogenesis in aphids
– Polish apiarist and a pioneer of parthenogenesis among bees
– caused the eggs of sea urchins to begin embryonic development without sperm
– plants with seedless fruit
References
Further reading
Dawley, Robert M. & Bogart, James P. (1989). Evolution and Ecology of Unisexual Vertebrates. Albany: New York State Museum.
Futuyma, Douglas J. & Slatkin, Montgomery. (1983). Coevolution. Sunderland, Mass: Sinauer Associates.
Maynard Smith, John. (1978). The Evolution of Sex. Cambridge: Cambridge University Press.
Michod, Richard E. & Levin, Bruce R. (1988). The Evolution of Sex. Sunderland, Mass: Sinauer Associates.
Stearns, Stephan C. (1988). The Evolution of Sex and Its Consequences (Experientia Supplementum, Vol. 55). Boston: Birkhauser.
External links
Reproductive behavior in whiptails at Crews Laboratory
Types of asexual reproduction
Parthenogenesis in Incubated Turkey Eggs from Oregon State University
National Geographic News: Virgin Birth Expected at Christmas – By Komodo Dragon
"'Virgin births' for giant lizards (Komodo dragon)" BBC News
Reuther: Komodo dragon proud mum (and dad) of five
Female sharks capable of virgin birth
Scientists confirm shark's 'virgin birth' Article by Steve Szkotak AP updated 1:49 a.m. ET, Fri., 10 October 2008
Asexual reproduction in animals
Zoology | Parthenogenesis | [
"Biology"
] | 5,377 | [
"Zoology"
] |
9,276,503 | https://en.wikipedia.org/wiki/Dan%20Segal | Daniel Segal (born 1947) is a British mathematician and a Professor of Mathematics at the University of Oxford. He specialises in algebra and group theory.
He studied at Peterhouse, Cambridge, before taking a PhD at Queen Mary College, University of London, in 1972, supervised by Bertram Wehrfritz, with a dissertation on group theory entitled Groups of Automorphisms of Infinite Soluble Groups. He is an Emeritus Fellow of All Souls College at Oxford, where he was sub-warden from 2006 to 2008.
His postgraduate students have included Marcus du Sautoy and Geoff Smith. He is the son of psychoanalyst Hanna Segal and brother of philosopher Gabriel Segal as well as Michael Segal, a senior civil servant.
Publications
Articles
Books
Polycyclic Groups, Cambridge University Press 1983; 2005 pbk edition
with J. Dixon, M. Du Sautoy, A. Mann Analytic pro-p-groups, Cambridge University Press 1999, Paperback edn. 2003
ed. with M. Du Sautoy, A. Shalev New horizons in pro-p-groups, Birkhäuser 2000 Paperback edn. 2012
with Alexander Lubotzky Subgroup growth, Birkhäuser 2003 Paperback edn. 2012
Words: notes on verbal width in groups, London Mathematical Society Lecture Notes, vol. 361, Cambridge University Press 2009
References
Living people
20th-century British mathematicians
21st-century British mathematicians
Alumni of Peterhouse, Cambridge
Alumni of Queen Mary University of London
Fellows of All Souls College, Oxford
Group theorists
Algebraists
1947 births | Dan Segal | [
"Mathematics"
] | 315 | [
"Algebra",
"Algebraists"
] |
9,277,801 | https://en.wikipedia.org/wiki/Deuterated%20acetone | Deuterated acetone ((CD)CO), also known as acetone-d, is a form (isotopologue) of acetone (CH)CO in which the hydrogen atom (H) is replaced with deuterium (heavy hydrogen) isotope (H or D). Deuterated acetone is a common solvent used in NMR spectroscopy.
Properties
As with all deuterated compounds, the properties of deuterated acetone are virtually identical to those of regular acetone.
Manufacture
Deuterated acetone is prepared by the reaction of acetone with heavy water, 2H2O or D2O, in the presence of a base; in this case, the base used is deuterated lithium hydroxide (LiOD):

(CH3)2CO + 6 D2O ⇌ (CD3)2CO + 6 HDO
In order to fully deuterate the acetone, the process is repeated several times, distilling off the acetone from the heavy water, and re-running the reaction in a fresh batch of heavy water.
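The effect of repeating the exchange in fresh batches can be illustrated with a crude dilution model (the function name and the 100-fold deuterium excess are assumptions for illustration; real preparations must account for equilibrium constants and isotope effects):

```python
def residual_protium(batches, exchangeable_h=6.0, deuterium_excess=100.0):
    """Crude statistical dilution model (illustration only): in each batch
    the acetone's six exchangeable hydrogens scramble with a large molar
    excess of deuterium from fresh heavy water, so the remaining protium
    fraction is diluted by roughly H / (H + D) per batch."""
    fraction = 1.0
    for _ in range(batches):
        fraction *= exchangeable_h / (exchangeable_h + deuterium_excess)
    return fraction

# Each repetition with fresh heavy water multiplies the residual protium down:
one, three = residual_protium(1), residual_protium(3)
```

Under these assumed numbers, a single batch leaves a few percent protium, while three batches push it below 0.1%, which is why the distill-and-repeat cycle is used.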
References
Deuterated solvents | Deuterated acetone | [
"Chemistry"
] | 199 | [
"Deuterated solvents",
"Nuclear magnetic resonance",
"Organic compounds",
"Nuclear chemistry stubs",
"Nuclear magnetic resonance stubs",
"Organic compound stubs",
"Organic chemistry stubs"
] |
9,277,809 | https://en.wikipedia.org/wiki/Deuterated%20benzene | Deuterated benzene (C6D6) is an isotopologue of benzene (C6H6) in which the hydrogen atom ("H") is replaced with deuterium (heavy hydrogen) isotope ("D").
Properties
The properties of deuterated benzene are very similar to those of normal benzene, however, the increased atomic weight of deuterium relative to protium means that the melting point of C6D6 is about 1.3 °C higher than that of the nondeuterated analogue. The boiling points of both compounds, however, are the same: 80 °C.
Applications
Deuterated benzene is a common solvent used in NMR spectroscopy. It is widely used for taking spectra of organometallic compounds, which often react with the cheaper deuterated chloroform.
A slightly more exotic application of C6D6 is in the synthesis of molecules containing a deuterated phenyl group. Deuterated benzene will undergo all the same reactions its normal analogue will, just a little more slowly due to the kinetic isotope effect. For example, deuterated benzene could be used in the synthesis of deuterated benzoic acid, if desired.
Many simple monosubstituted aromatic compounds bearing the deuterated phenyl (C6D5) group may be purchased commercially, such as aniline, acetophenone, nitrobenzene, bromobenzene, and more.
References
Deuterated solvents
Benzene | Deuterated benzene | [
"Chemistry"
] | 317 | [
"Deuterated solvents",
"Nuclear magnetic resonance",
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
9,277,828 | https://en.wikipedia.org/wiki/Deuterated%20DMF | Deuterated dimethylformamide ((CD3)2NCOD), also known as deuterated DMF, is an isotopologue of DMF ((CH3)2NCOH) in which the hydrogen atom ("H") is replaced with a deuterium isotope ("D"). Deuterated DMF is a relatively uncommon solvent used in NMR spectroscopy.
References
Deuterated solvents | Deuterated DMF | [
"Chemistry"
] | 97 | [
"Deuterated solvents",
"Nuclear magnetic resonance",
"Organic compounds",
"Nuclear chemistry stubs",
"Nuclear magnetic resonance stubs",
"Organic compound stubs",
"Organic chemistry stubs"
] |
9,277,843 | https://en.wikipedia.org/wiki/Deuterated%20ethanol | Deuterated ethanol (C2D5OD) is a form (called an isotopologue) of ethanol (C2H5OH) in which the hydrogen atom ("H") is replaced with deuterium (heavy hydrogen) isotope ("D"). Deuterated ethanol is an uncommon solvent used in NMR spectroscopy.
References
Deuterated solvents
Ethanol | Deuterated ethanol | [
"Chemistry"
] | 79 | [
"Deuterated solvents",
"Nuclear magnetic resonance",
"Nuclear magnetic resonance stubs",
"Nuclear chemistry stubs"
] |
9,277,860 | https://en.wikipedia.org/wiki/Deuterated%20methanol | Deuterated methanol (CD3OD), is a form (called an isotopologue) of methanol (CH3OH) in which the hydrogen atoms ("H") are replaced with deuterium (heavy hydrogen) isotope ("D"). Deuterated methanol is a common solvent used in NMR spectroscopy.
Deuterated methanol was first detected in interstellar space in 1988, toward Orion-KL, by scientists at the Max Planck Institute for Radio Astronomy.
References
Deuterated solvents
Methanol | Deuterated methanol | [
"Chemistry"
] | 117 | [
"Deuterated solvents",
"Nuclear magnetic resonance",
"Organic compounds",
"Nuclear chemistry stubs",
"Nuclear magnetic resonance stubs",
"Organic compound stubs",
"Organic chemistry stubs"
] |
9,277,886 | https://en.wikipedia.org/wiki/Deuterated%20THF | Deuterated tetrahydrofuran (d8-THF) is a colourless, organic liquid at standard temperature and pressure. This heterocyclic compound has the chemical formula C4D8O, and is an isotopologue of tetrahydrofuran. Deuterated THF is used as a solvent in NMR spectroscopy, though its expense can often be prohibitive.
References
Deuterated solvents | Deuterated THF | [
"Chemistry"
] | 93 | [
"Deuterated solvents",
"Nuclear chemistry stubs",
"Nuclear magnetic resonance",
"Nuclear magnetic resonance stubs"
] |
9,278,016 | https://en.wikipedia.org/wiki/Dodecacalcium%20hepta-aluminate | Dodecacalcium hepta-aluminate (12CaO·7Al2O3, Ca12Al14O33 or ) is an inorganic solid that occurs rarely in nature as the mineral mayenite. It is an important phase in calcium aluminate cements and is an intermediate in the manufacture of Portland cement. Its composition and properties have been the subject of much debate, because of variations in composition that can arise during its high-temperature formation.
Synthesis
Polycrystalline C12A7 can be prepared via a conventional solid-state reaction, i.e., heating a mixture of calcium carbonate and aluminium oxide or aluminium hydroxide powders, in air. It is not formed in an oxygen- or moisture-free atmosphere. It can be regrown into single crystals using the Czochralski or zone melting techniques.
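As a quick illustration of the solid-state route, the raw-mix proportions for the carbonate route can be sketched as follows (rounded standard atomic weights; this ignores the hydroxide alternative and any moisture requirements):

```python
# Approximate molar masses in g/mol (standard atomic weights, rounded)
M = {"Ca": 40.08, "C": 12.01, "O": 16.00, "Al": 26.98}

M_CaCO3 = M["Ca"] + M["C"] + 3 * M["O"]    # calcium carbonate, ~100.09
M_Al2O3 = 2 * M["Al"] + 3 * M["O"]         # aluminium oxide, ~101.96

# Overall solid-state reaction: 12 CaCO3 + 7 Al2O3 -> Ca12Al14O33 + 12 CO2
mass_ratio = (12 * M_CaCO3) / (7 * M_Al2O3)   # ~1.68 g CaCO3 per g Al2O3
```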
In Portland cement kilns, C12A7 is an early reaction product of aluminium and calcium oxides in the temperature range 900–1200 °C. With the onset of melt-phases at higher temperatures, it reacts with further calcium oxide to form tricalcium aluminate. It thus can appear in under-burned kiln products. It also occurs in some natural cements.
Composition and structure
The mineral as normally encountered is a solid solution series with end-members Ca12Al14O33 and Ca6Al7O16(OH). The latter composition loses water only at high temperature, and has lost most of it by the melting point (around 1400 °C). If material heated to this temperature is rapidly cooled to room temperature, the anhydrous composition is obtained. The rate of re-absorption of water to form the hydrous composition is negligible below 930 °C.
C12A7 has a cubic crystal symmetry; Ca12Al14O33 has a lattice constant of 1.1989 nm and a density of 2.680 g·cm−3, while Ca6Al7O16(OH) has 1.1976 nm and 2.716 g·cm−3. The unit cell consists of 12 cages with an inner diameter of 0.44 nm and a formal charge of +1/3, two of which host free O2− ions (not shown in the infobox structure). These ions can easily move through the material and can be replaced by F−, Cl− (as in the mineral chlormayenite) or OH− ions.
The confusion regarding composition contributed to the mistaken assignment of the composition 5CaO·3Al2O3. Studies of the system have shown that the solid solution series extends also to the accommodation of other species in place of the hydroxyl group, including halides, sulfide and oxide ions.
Properties and applications
C12A7 is an important mineral phase in calcium aluminate cements and is an intermediate in the manufacture of Portland cement. It reacts rapidly with water, with considerable heat evolution, to form 3CaO·Al2O3·6H2O and Al(OH)3 gel. The formation of the hydrate from this mineral and from monocalcium aluminate represents the first stage of strength development in aluminous cements. Because of its higher reactivity, leading to excessively rapid hydration, aluminous cements contain relatively low amounts of dodecacalcium hepta-aluminate, or none at all.
C12A7 has potential applications in optical, bio and structural ceramics. Some amorphous calcium aluminates are photosensitive and hence are candidates for optical information storage devices. They also have desirable infrared transmission properties for optical fibers.
While undoped C12A7 is a wide-bandgap insulator, the electron-doped electride C12A7:e− is a metallic conductor with a conductivity reaching 1500 S/cm at room temperature; it may even exhibit superconductivity upon cooling to 0.2–0.4 K. C12A7:e− is also a catalyst that has potential applications in the ambient-pressure synthesis of ammonia. Electron doping is achieved by extracting O2− ions from the structure via chemical reduction. The injected electrons occupy a unique conduction band called 'the cage conduction band', and migrate through the C12A7:e− crystal by tunneling. They can be readily and reversibly replaced with hydride ions (H−) by heating C12A7:e− in a hydrogen atmosphere. Owing to this reversibility, C12A7:e− does not suffer from hydrogen poisoning – irreversible deterioration of properties upon exposure to hydrogen which is common to traditional catalysts used in the ammonia synthesis.
References
Cement
Calcium compounds
Aluminates
Electrides | Dodecacalcium hepta-aluminate | [
"Chemistry"
] | 932 | [
"Electron",
"Electrides",
"Salts"
] |
9,278,167 | https://en.wikipedia.org/wiki/Angular%20velocity%20tensor | The angular velocity tensor is a skew-symmetric matrix defined by:
The scalar elements above correspond to the angular velocity vector components .
This is an infinitesimal rotation matrix.
The linear mapping Ω acts as a cross product:

\Omega \mathbf{r} = \boldsymbol\omega \times \mathbf{r}

where \mathbf{r} is a position vector.
When multiplied by a time difference, it results in the angular displacement tensor.
Calculation of angular velocity tensor of a rotating frame
A vector \mathbf{r} undergoing uniform circular motion around a fixed axis satisfies:

\frac{d\mathbf{r}}{dt} = \boldsymbol\omega \times \mathbf{r} = \Omega \mathbf{r}
Let A(t) be the orientation matrix of a frame, whose columns \mathbf{e}_1(t), \mathbf{e}_2(t), and \mathbf{e}_3(t) are the moving orthonormal coordinate vectors of the frame. We can obtain the angular velocity tensor Ω(t) of A(t) as follows:

\frac{dA}{dt} = \begin{pmatrix} \frac{d\mathbf{e}_1}{dt} & \frac{d\mathbf{e}_2}{dt} & \frac{d\mathbf{e}_3}{dt} \end{pmatrix}

The angular velocity \boldsymbol\omega must be the same for each of the column vectors \mathbf{e}_i, so we have:

\frac{dA}{dt} = \begin{pmatrix} \boldsymbol\omega \times \mathbf{e}_1 & \boldsymbol\omega \times \mathbf{e}_2 & \boldsymbol\omega \times \mathbf{e}_3 \end{pmatrix} = \Omega\, A(t),

which holds even if A(t) does not rotate uniformly. Therefore, the angular velocity tensor is:

\Omega = \frac{dA}{dt} A^{-1} = \frac{dA}{dt} A^{\mathsf T},

since the inverse of an orthogonal matrix A is its transpose A^{\mathsf T}.
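A numerical sanity check of this construction, for an example frame rotating uniformly about the z-axis (illustrative values only):

```python
import numpy as np

w = 0.7  # example angular speed (rad/s) about the z-axis

def A(t):
    """Orientation matrix: columns are the frame's moving orthonormal vectors."""
    c, s = np.cos(w * t), np.sin(w * t)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

t, h = 0.3, 1e-6
dA = (A(t + h) - A(t - h)) / (2 * h)   # central-difference estimate of dA/dt
Omega = dA @ A(t).T                    # angular velocity tensor

skew_ok = np.allclose(Omega, -Omega.T, atol=1e-6)   # skew-symmetry
wz_recovered = Omega[1, 0]                          # equals w for z-rotation
```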
Properties
In general, the angular velocity in an n-dimensional space is the time derivative of the angular displacement tensor, which is a second rank skew-symmetric tensor.
This tensor Ω will have n(n − 1)/2 independent components, which is the dimension of the Lie algebra of the Lie group of rotations of an n-dimensional inner product space.
Duality with respect to the velocity vector
In three dimensions, angular velocity can be represented by a pseudovector because second rank tensors are dual to pseudovectors in three dimensions. Since the angular velocity tensor Ω = Ω(t) is a skew-symmetric matrix:

\Omega = \begin{pmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{pmatrix},

its Hodge dual is a vector, which is precisely the previous angular velocity vector \boldsymbol\omega = (\omega_x, \omega_y, \omega_z).
Exponential of Ω
If we know an initial frame A(0) and we are given a constant angular velocity tensor Ω, we can obtain A(t) for any given t. Recall the matrix differential equation:

\frac{dA}{dt} = \Omega \cdot A

This equation can be integrated to give:

A(t) = e^{\Omega t} A(0),

which shows a connection with the Lie group of rotations.
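A short numerical illustration (with an assumed constant ω about the z-axis, and a naive power-series matrix exponential rather than a library routine):

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via truncated power series (adequate for small M)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

wx, wy, wz = 0.0, 0.0, 0.9             # constant angular velocity (example)
Omega = np.array([[0.0, -wz,  wy],
                  [ wz, 0.0, -wx],
                  [-wy,  wx, 0.0]])

t = 0.5
A0 = np.eye(3)                          # initial frame
At = expm(Omega * t) @ A0               # A(t) = exp(Omega t) A(0)

# A(t) is a rotation matrix: orthogonal with determinant +1
is_rotation = (np.allclose(At @ At.T, np.eye(3), atol=1e-10)
               and abs(np.linalg.det(At) - 1.0) < 1e-10)
```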
Ω is skew-symmetric
We prove that the angular velocity tensor is skew symmetric, i.e. satisfies \Omega^{\mathsf T} = -\Omega.

A rotation matrix A is orthogonal, inverse to its transpose, so we have I = A A^{\mathsf T}. For a frame matrix, taking the time derivative of the equation gives:

0 = \frac{dA}{dt} A^{\mathsf T} + A \frac{dA^{\mathsf T}}{dt}

Applying the formula (AB)^{\mathsf T} = B^{\mathsf T} A^{\mathsf T},

0 = \frac{dA}{dt} A^{\mathsf T} + \left( \frac{dA}{dt} A^{\mathsf T} \right)^{\mathsf T} = \Omega + \Omega^{\mathsf T}

Thus, Ω is the negative of its transpose, which implies it is skew symmetric.
Coordinate-free description
At any instant t, the angular velocity tensor represents a linear map between the position vector \mathbf{r}(t) and the velocity vector \mathbf{v}(t) of a point on a rigid body rotating around the origin:

\mathbf{v} = \Omega \mathbf{r}

The relation between this linear map and the angular velocity pseudovector \boldsymbol\omega is the following.
Because Ω is the derivative of an orthogonal transformation, the bilinear form

B(\mathbf{r}, \mathbf{s}) = (\Omega \mathbf{r}) \cdot \mathbf{s}

is skew-symmetric. Thus we can apply the fact of exterior algebra that there is a unique linear form L on \Lambda^2 V that

L(\mathbf{r} \wedge \mathbf{s}) = B(\mathbf{r}, \mathbf{s}),

where \mathbf{r} \wedge \mathbf{s} is the exterior product of \mathbf{r} and \mathbf{s}.

Taking the sharp L^\sharp of L we get

(\Omega \mathbf{r}) \cdot \mathbf{s} = L^\sharp \cdot (\mathbf{r} \wedge \mathbf{s})

Introducing \boldsymbol\omega := {\star}(L^\sharp), as the Hodge dual of L^\sharp, and applying the definition of the Hodge dual twice supposing that the preferred unit 3-vector is \star 1,

(\Omega \mathbf{r}) \cdot \mathbf{s} = {\star}({\star}(L^\sharp) \wedge \mathbf{r} \wedge \mathbf{s}) = {\star}(\boldsymbol\omega \wedge \mathbf{r} \wedge \mathbf{s}) = {\star}(\boldsymbol\omega \wedge \mathbf{r}) \cdot \mathbf{s} = (\boldsymbol\omega \times \mathbf{r}) \cdot \mathbf{s},

where

\boldsymbol\omega \times \mathbf{r} := {\star}(\boldsymbol\omega \wedge \mathbf{r})

by definition.

Because \mathbf{s} is an arbitrary vector, from nondegeneracy of scalar product follows

\Omega \mathbf{r} = \boldsymbol\omega \times \mathbf{r}
Angular velocity as a vector field
Since the spin angular velocity tensor of a rigid body (in its rest frame) is a linear transformation that maps positions to velocities (within the rigid body), it can be regarded as a constant vector field. In particular, the spin angular velocity is a Killing vector field, an element of the Lie algebra so(3) of the 3-dimensional rotation group SO(3).
Also, it can be shown that the spin angular velocity vector field is exactly half of the curl of the linear velocity vector field v(r) of the rigid body. In symbols,

\boldsymbol\omega = \frac{1}{2} \nabla \times \mathbf{v}(\mathbf{r})
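This identity is easy to verify numerically with central differences (the example ω is chosen arbitrarily):

```python
import numpy as np

w = np.array([0.3, -0.2, 0.5])          # example spin angular velocity

def v(r):
    """Linear velocity field of a rigid body spinning about the origin."""
    return np.cross(w, r)

def curl(f, r, h=1e-5):
    """Central-difference curl of a 3D vector field f at point r."""
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (f(r + e) - f(r - e)) / (2 * h)
    return np.array([J[2, 1] - J[1, 2],
                     J[0, 2] - J[2, 0],
                     J[1, 0] - J[0, 1]])

r0 = np.array([1.0, 2.0, -0.5])
half_curl = 0.5 * curl(v, r0)           # recovers w at every point r0
```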
Rigid body considerations
The same equations for the angular speed can be obtained by reasoning over a rotating rigid body. Here it is not assumed that the rigid body rotates around the origin. Instead, it can be supposed to rotate around an arbitrary point that is moving with a linear velocity V(t) at each instant.
To obtain the equations, it is convenient to imagine a rigid body attached to the frames and consider a coordinate system that is fixed with respect to the rigid body. Then we will study the coordinate transformations between this coordinate and the fixed laboratory frame.
As shown in the figure on the right, the lab system's origin is at point O, the rigid body system origin is at O′ and the vector from O to O′ is R. A particle (i) in the rigid body is located at point P, and the vector position of this particle is Ri in the lab frame, and ri in the body frame. It is seen that the position of the particle can be written:

\mathbf{R}_i = \mathbf{R} + \mathbf{r}_i
The defining characteristic of a rigid body is that the distance between any two points in a rigid body is unchanging in time. This means that the length of the vector \mathbf{r}_i is unchanging. By Euler's rotation theorem, we may replace the vector \mathbf{r}_i with A(t)\mathbf{r}_{io}, where A(t) is a 3×3 rotation matrix and \mathbf{r}_{io} is the position of the particle at some fixed point in time, say t = 0. This replacement is useful, because now it is only the rotation matrix A(t) that is changing in time and not the reference vector \mathbf{r}_{io}, as the rigid body rotates about point O′. Also, since the three columns of the rotation matrix represent the three versors of a reference frame rotating together with the rigid body, any rotation about any axis becomes now visible, while the vector \mathbf{r}_{io} would not rotate if the rotation axis were parallel to it, and hence it would only describe a rotation about an axis perpendicular to it (i.e., it would not see the component of the angular velocity pseudovector parallel to it, and would only allow the computation of the component perpendicular to it). The position of the particle is now written as:

\mathbf{R}_i = \mathbf{R} + A(t)\,\mathbf{r}_{io}
Taking the time derivative yields the velocity of the particle:

\mathbf{V}_i = \mathbf{V} + \frac{dA}{dt}\,\mathbf{r}_{io}

where Vi is the velocity of the particle (in the lab frame) and V is the velocity of O′ (the origin of the rigid body frame). Since A is a rotation matrix its inverse is its transpose. So we substitute \mathbf{r}_{io} = A^{\mathsf T}(t)\,\mathbf{r}_i:

\mathbf{V}_i = \mathbf{V} + \frac{dA}{dt} A^{\mathsf T}(t)\,\mathbf{r}_i

or

\mathbf{V}_i = \mathbf{V} + \Omega\,\mathbf{r}_i

where \Omega = \frac{dA}{dt} A^{\mathsf T}(t) is the previous angular velocity tensor.
It can be proved that this is a skew-symmetric matrix, so we can take its dual to get a 3-dimensional pseudovector that is precisely the previous angular velocity vector \boldsymbol\omega:

\boldsymbol\omega = (\omega_x, \omega_y, \omega_z), \qquad \omega_x = \Omega_{zy}, \quad \omega_y = \Omega_{xz}, \quad \omega_z = \Omega_{yx}

Substituting ω for Ω into the above velocity expression, and replacing matrix multiplication by an equivalent cross product:

\mathbf{V}_i = \mathbf{V} + \boldsymbol\omega \times \mathbf{r}_i
It can be seen that the velocity of a point in a rigid body can be divided into two terms – the velocity of a reference point fixed in the rigid body plus the cross product term involving the orbital angular velocity of the particle with respect to the reference point. This angular velocity is what physicists call the "spin angular velocity" of the rigid body, as opposed to the orbital angular velocity of the reference point about the origin O.
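A small numerical illustration of this decomposition, using arbitrary example values, which also checks the rigidity constraint that distances between body points cannot change:

```python
import numpy as np

V = np.array([1.0, 0.0, 0.2])      # velocity of the body-frame origin (example)
w = np.array([0.0, 0.0, 2.0])      # spin angular velocity (example)
r1 = np.array([0.5, 0.0, 0.0])     # two points fixed in the rigid body
r2 = np.array([0.0, 0.3, 0.1])

V1 = V + np.cross(w, r1)           # V_i = V + w x r_i
V2 = V + np.cross(w, r2)

# Rigidity check: the relative velocity of two body points is perpendicular
# to their separation, so distances inside the body never change.
rigidity = float(np.dot(V1 - V2, r1 - r2))
```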
Consistency
We have supposed that the rigid body rotates around an arbitrary point. We should prove that the spin angular velocity previously defined is independent of the choice of origin, which means that the spin angular velocity is an intrinsic property of the spinning rigid body. (Note the marked contrast of this with the orbital angular velocity of a point particle, which certainly does depend on the choice of origin.)
See the graph to the right: The origin of lab frame is O, while O1 and O2 are two fixed points on the rigid body, whose velocity is and respectively. Suppose the angular velocity with respect to O1 and O2 is and respectively. Since point P and O2 have only one velocity,
The above two yields that
Since the point P (and thus r2) is arbitrary, it follows that

ω1 = ω2
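The origin-independence argued above can also be checked numerically. In this minimal sketch (all vectors are arbitrary sample values), the velocity field generated from one reference point O1 is reproduced, with the same angular velocity, relative to a second reference point O2:

```python
def cross(a, b):
    # cross product of two 3-vectors
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def add(a, b): return tuple(x + y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))

omega = (0.3, -1.0, 2.0)   # assumed spin angular velocity of the body
v_O1 = (1.0, 0.0, 0.5)     # assumed velocity of reference point O1
r_O1, r_O2, r_P = (0.0, 0.0, 0.0), (1.0, 2.0, 0.0), (-2.0, 1.0, 3.0)

# velocities of O2 and P induced by the rigid motion, computed relative to O1
v_O2 = add(v_O1, cross(omega, sub(r_O2, r_O1)))
v_P = add(v_O1, cross(omega, sub(r_P, r_O1)))

# the same omega must also describe the rotation about O2
residual = sub(sub(v_P, v_O2), cross(omega, sub(r_P, r_O2)))
```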
If the reference point is the instantaneous axis of rotation the expression of the velocity of a point in the rigid body will have just the angular velocity term. This is because the velocity of the instantaneous axis of rotation is zero. An example of the instantaneous axis of rotation is the hinge of a door. Another example is the point of contact of a purely rolling spherical (or, more generally, convex) rigid body.
References
Tensor physical quantities
Angle
Velocity | Angular velocity tensor | [
"Physics",
"Mathematics",
"Engineering"
] | 1,629 | [
"Geometric measurement",
"Scalar physical quantities",
"Physical phenomena",
"Tensors",
"Physical quantities",
"Quantity",
"Tensor physical quantities",
"Motion (physics)",
"Vector physical quantities",
"Velocity",
"Wikipedia categories named after physical quantities",
"Angle"
] |
9,278,355 | https://en.wikipedia.org/wiki/Chsh | chsh (an abbreviation of "change shell") is a command on Unix-like operating systems that is used to change a login shell. Users can either supply the pathname of the shell that they wish to change to on the command line, or supply no arguments, in which case allows the user to change the shell interactively.
Usage
chsh is a setuid program that modifies the /etc/passwd file, and only allows ordinary users to modify their own login shells. The superuser can modify the shells of other users, by supplying the name of the user whose shell is to be modified as a command-line argument. For security reasons, the shells that both ordinary users and the superuser can specify are limited by the contents of the /etc/shells file, with the pathname of the shell being required to be exactly as it appears in that file. (This security feature is alterable by re-compiling the source code for the command with a different configuration option, and thus is not necessarily enabled on all systems.) The superuser can, however, also modify the password file directly, setting any user's shell to any executable file on the system without reference to /etc/shells and without using chsh.
On most systems, when chsh is invoked without the -s command-line option (which specifies the name of the shell), it prompts the user to select one. On Mac OS X, if invoked without the option, chsh displays a text file in the default editor (initially set to vim) allowing the user to change all of the features of their user account that they are permitted to change, the pathname of the shell being the name next to "Shell:". When the user quits vim, the changes made there are transferred to the /etc/passwd file, which only root can change directly.
Using the -s option greatly simplifies the task of changing shells.
Depending on the system, chsh may or may not prompt the user for a password before changing the shell, or entering interactive mode. On some systems, use of chsh by non-root users is disabled entirely by the sysadmin.
On many Linux distributions, the chsh command is a PAM-aware application. As such, its behaviour can be tailored, using PAM configuration options, for individual users. For example, an account directive that specifies the pam_listfile.so module can be used to deny access to individual users, by specifying a file of the usernames to deny access to with the file= option to that module (along with the sense=deny option).
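The exact-match requirement against the shells file can be sketched in Python; the helper function and the sample /etc/shells-style contents below are illustrative assumptions, not part of chsh itself:

```python
def is_allowed_shell(shell_path, shells_file_text):
    """Mimic chsh's check: the pathname must appear exactly as a line
    in the shells file (blank lines and comments are ignored)."""
    for line in shells_file_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        if line == shell_path:
            return True
    return False

# sample /etc/shells-style content (assumed, for illustration only)
SHELLS = """\
# /etc/shells: valid login shells
/bin/sh
/bin/bash
/usr/bin/zsh
"""
```

A pathname that deviates in any way from a listed line — even a shell installed elsewhere on the system — is rejected, mirroring the "exactly as it appears in that file" requirement described above.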
Portability
POSIX does not describe utilities such as chsh, which are used for modifying the user's entry in /etc/passwd. Most Unix-like systems provide chsh. SVr4-based systems provided a similar capability with passwd. Two of the three remaining systems (IBM AIX and HP-UX) provide chsh in addition to passwd. The exception is Solaris, where non-administrators are unable to change their shell unless a network name server such as NIS or NIS+ is installed. The obsolete SGI SVr4 system IRIX64 also lacked chsh.
See also
Comparison of command shells
References
Further reading
— some examples of invoking chsh with its command-line options
External links
Unix user management and support-related utilities
Standard Unix programs | Chsh | [
"Technology"
] | 649 | [
"Computing commands",
"Standard Unix programs"
] |
9,278,859 | https://en.wikipedia.org/wiki/8-Chlorotheophylline | 8-Chlorotheophylline, also known as 1,3-dimethyl-8-chloroxanthine, is a stimulant drug of the xanthine chemical class, with physiological effects similar to caffeine. Its main use is in combination (salt) with diphenhydramine in the antiemetic dimenhydrinate (Dramamine). Diphenhydramine reduces nausea but causes drowsiness, and the stimulant properties of 8-Chlorotheophylline help reduce that side effect.
Despite being classified as a xanthine stimulant, 8-chlorotheophylline generally cannot produce locomotor activity above control levels in mice and does not appear to cross the blood–brain barrier well.
The 8-chloro modification was not selected for pharmacological properties; instead, it serves to raise the acidity of the xanthine amine group enough to form a co-salt with diphenhydramine.
The drug is also sold in combination with promethazine, again as a salt.
References
Xanthines
Chloroarenes
Adenosine receptor antagonists | 8-Chlorotheophylline | [
"Chemistry"
] | 251 | [
"Alkaloids by chemical classification",
"Xanthines"
] |
9,279,338 | https://en.wikipedia.org/wiki/Thin-filament%20pyrometry | Thin-filament pyrometry (TFP) is an optical method used to measure temperatures. It involves the placement of a thin filament in a hot gas stream. Radiative emissions from the filament can be correlated with filament temperature. Filaments are typically silicon carbide (SiC) fibers with a diameter of 15 micrometres. Temperatures of about 800–2500 K can be measured.
History
TFP in flames was first used by Vilimpoc et al. (1988). More recently, this was demonstrated by Pitts (1996), Blevins et al. (1999), and Maun et al. (2007).
Technique
The typical TFP apparatus consists of a flame or other hot gas stream, a filament, and a camera.
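The intensity-to-temperature correlation rests on the filament emitting approximately graybody radiation. The Python sketch below evaluates Planck's law at a near-infrared wavelength to show how steeply the detected radiance grows with filament temperature; the 900 nm wavelength and 0.9 emissivity are illustrative assumptions, and an actual system is calibrated empirically (typically against a thermocouple) rather than computed this way:

```python
import math

# physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def spectral_radiance(wavelength_m, temp_k, emissivity=0.9):
    """Planck's law scaled by an assumed wavelength-independent emissivity:
    an estimate of the radiance a camera sees from a hot graybody filament."""
    x = H * C / (wavelength_m * KB * temp_k)
    return emissivity * (2.0 * H * C**2 / wavelength_m**5) / math.expm1(x)

# radiance at 900 nm rises very steeply with filament temperature,
# which is what makes the intensity-to-temperature correlation usable
wl = 900e-9
r_1000 = spectral_radiance(wl, 1000.0)
r_2000 = spectral_radiance(wl, 2000.0)
```

Over the 1000–2000 K span, the detected radiance at this wavelength changes by roughly three orders of magnitude, so small temperature differences map to easily measured intensity differences.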
Advantages
TFP has several advantages, including the ability to simultaneously measure temperatures along a line and minimal intrusiveness. Most other forms of pyrometry are not capable of providing gas-phase temperatures.
Drawbacks
Calibration is required. Calibration typically is performed with a thermocouple. Both thermocouples and filaments require corrections in estimating gas temperatures from probe temperatures. Also, filaments are fragile and typically break after about an hour in a flame.
Applications
The primary application is to combustion and fire research.
See also
ASTM Subcommittee E20.02 on Radiation Thermometry
References
Combustion
Measurement
Radiometry | Thin-filament pyrometry | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 303 | [
"Telecommunications engineering",
"Physical quantities",
"Quantity",
"Measurement",
"Size",
"Combustion",
"Radiometry"
] |
9,279,449 | https://en.wikipedia.org/wiki/Pressurisation%20ductwork | Pressurisation ductwork is a passive fire protection system. It is used to supply fresh air to any area of refuge, designated emergency evacuation or egress route.
Purpose
The purpose of pressurisation ductwork is to maintain positive pressure in building spaces to prevent smoke from entering from other spaces in which a fire is occurring. It is typically used in exit stairways, corridors, and lobbies.
Requirements
Pressurisation ductwork is certified on the basis of fire testing such as ISO 6944.
Systems
There are two means of providing fire-resistance rated ductwork:
Inherently fire-resistant, or proprietary factory assembled ducts which are made of sheet metal shells filled with mixtures of rockwool, fiber and silicon dioxide
Sheet metal duct with exterior fireproofing materials such as blanket rockwool, ceramic fiber, or intumescent paint.
See also
Heat and smoke vent
Fire protection
Smoke exhaust ductwork
Emergency evacuation
External links
ISO 6944-1:2008 Fire containment -- Elements of building construction -- Part 1: Ventilation ducts
Active fire protection
Pressure
Heating, ventilation, and air conditioning | Pressurisation ductwork | [
"Physics"
] | 223 | [
"Scalar physical quantities",
"Mechanical quantities",
"Physical quantities",
"Pressure",
"Wikipedia categories named after physical quantities"
] |
9,279,531 | https://en.wikipedia.org/wiki/Lauryl%20methyl%20gluceth-10%20hydroxypropyl%20dimonium%20chloride | Lauryl methyl gluceth-10 hydroxypropyl dimonium chloride is an ingredient in some types of soaps and personal care products. It is used as a substantive conditioning humectant.
This chemical is a type of methyl glucoside derivative, which has been modified by ethoxylation and quaternization. A synthetic pathway for lauryl methyl gluceth-10 hydroxypropyldimonium chloride and other methyl glucoside humectants has been outlined in trade literature.
Lauryl methyl gluceth-10 hydroxypropyldimonium chloride is listed as a trade-named raw material, Glucquat 125, in cosmetic and toiletry products.
References
Cationic surfactants
Polyethers
Chlorides
Dodecyl compounds | Lauryl methyl gluceth-10 hydroxypropyl dimonium chloride | [
"Chemistry"
] | 168 | [
"Chlorides",
"Inorganic compounds",
"Salts"
] |
9,279,963 | https://en.wikipedia.org/wiki/Thom%E2%80%93Mather%20stratified%20space | In topology, a branch of mathematics, an abstract stratified space, or a Thom–Mather stratified space is a topological space X that has been decomposed into pieces called strata; these strata are manifolds and are required to fit together in a certain way. Thom–Mather stratified spaces provide a purely topological setting for the study of singularities analogous to the more differential-geometric theory of Whitney. They were introduced by René Thom, who showed that every Whitney stratified space was also a topologically stratified space, with the same strata. Another proof was given by John Mather in 1970, inspired by Thom's proof.
Basic examples of Thom–Mather stratified spaces include manifolds with boundary (top dimension and codimension 1 boundary) and manifolds with corners (top dimension, codimension 1 boundary, codimension 2 corners), real or complex analytic varieties, or orbit spaces of smooth transformation groups.
Definition
A Thom–Mather stratified space is a triple (V, S, J) where V is a topological space (often we require that it is locally compact, Hausdorff, and second countable), S is a decomposition of V into strata,

V = ⋃_{X ∈ S} X,

and J is the set of control data {(T_X, π_X, ρ_X) : X ∈ S}, where T_X is an open neighborhood of the stratum X (called the tubular neighborhood), π_X : T_X → X is a continuous retraction, and ρ_X : T_X → [0, ∞) is a continuous function. These data need to satisfy the following conditions.
Each stratum X is a locally closed subset and the decomposition S is locally finite.
The decomposition S satisfies the axiom of the frontier: if X, Y ∈ S and Y meets the closure of X, then Y is contained in the closure of X. This condition implies that there is a partial order among strata: Y < X if and only if Y is contained in the closure of X and Y ≠ X.
Each stratum X is a smooth manifold.
X = ρ_X^{−1}(0). So ρ_X can be viewed as the distance function from the stratum X.
For each pair of strata Y < X, the restriction (π_Y, ρ_Y) : T_Y ∩ X → Y × (0, ∞) is a submersion.
For each pair of strata Y < X, there holds π_Y ∘ π_X = π_Y and ρ_Y ∘ π_X = ρ_Y (both over the common domain of both sides of the equation).
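These axioms can be checked on the simplest example mentioned above, a smooth manifold M with boundary. The collar-neighborhood presentation below is an illustrative sketch (the notation is assumed, not from the source):

```latex
\begin{aligned}
&\mathcal{S} = \{\, \operatorname{int} M,\ \partial M \,\}, \qquad \partial M < \operatorname{int} M, \\
&T_{\partial M} \cong \partial M \times [0, \varepsilon) \quad \text{(a collar neighborhood)}, \\
&\pi_{\partial M}(p, t) = p, \qquad \rho_{\partial M}(p, t) = t .
\end{aligned}
```

Here the zero set of the control function is exactly the boundary stratum, and the pair (π, ρ) restricted to the interior part of the collar is a diffeomorphism onto ∂M × (0, ε), hence a submersion, so the conditions above are satisfied.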
Examples
One of the original motivations for stratified spaces was decomposing singular spaces into smooth chunks. For example, given a singular variety X, there is a naturally defined subvariety, Sing(X), which is the singular locus. This may not be a smooth variety, so taking the iterated singularity locus will eventually give a natural stratification. A simple algebro-geometric example is the singular hypersurface
where Spec denotes the prime spectrum.
See also
Singularity theory
Whitney conditions
Stratifold
Intersection homology
Thom's first isotopy lemma
stratified space
References
Goresky, Mark; MacPherson, Robert Stratified Morse theory, Springer-Verlag, Berlin, 1988.
Goresky, Mark; MacPherson, Robert Intersection homology II, Invent. Math. 72 (1983), no. 1, 77--129.
Mather, J. Notes on topological stability, Harvard University, 1970.
Thom, R. Ensembles et morphismes stratifiés, Bulletin of the American Mathematical Society 75 (1969), pp.240-284.
Generalized manifolds
Singularity theory
Stratifications | Thom–Mather stratified space | [
"Mathematics"
] | 641 | [
"Topology stubs",
"Topology",
"Stratifications"
] |
9,280,871 | https://en.wikipedia.org/wiki/Toshiba%20T1200 | The Toshiba T1200 is a discontinued laptop that was manufactured by the Toshiba Corporation, first made in 1987. It is an upgraded version of the Toshiba T1100 Plus.
It is equipped with an Intel 80C86 processor and 1 MB of RAM, of which 384 KB can be used for LIM EMS or as a RAMdisk, a CGA graphics card, one 720 KB 3.5" floppy drive and one 20 MB hard drive (some models had two floppy drives and no hard drive controller card). MS-DOS 3.30 is included with the laptop. It is the first laptop with a swappable battery pack. Its original price was US$6,499.
The T1200's hard drive has an unusual 26-pin interface made by JVC, incompatible with ST506/412 or ATA interfaces. Floppy drives are connected using similar 26-pin connectors.
The computer has many unique functions, such as Hard RAM: a small part of RAM is battery-backed and can be used as a non-volatile hard drive. Another function allows the user to suspend the system or to power down the hard drive (which is still dependent on the hard disk's on/off switch).
The Toshiba T1200xe is an upgraded model of this laptop, which contained a 12 MHz 80C286 processor and a 20 or 40 MB hard disk drive. It also has 1 MB of RAM expandable to 5 MB. The floppy drive was also upgraded from 720 KB to 1.44 MB.
See also
Toshiba T1100
Toshiba T1000
Toshiba T3100
Toshiba T1000LE
References
External links
Computer Museum article on the Toshiba T1200
Products introduced in 1987
Computer-related introductions in 1987
IBM PC compatibles
T1200
Early laptops | Toshiba T1200 | [
"Technology"
] | 373 | [
"Mobile computer stubs",
"Mobile technology stubs"
] |
9,281,549 | https://en.wikipedia.org/wiki/Eumycetozoa | Eumycetozoa (), or true slime molds, is a diverse group of protists that behave as slime molds and develop fruiting bodies, either as sorocarps or as sporocarps. It is a monophyletic group or clade within the phylum Amoebozoa that contains the myxogastrids, dictyostelids and protosporangiids.
Characteristics
Eumycetozoa is a clade that includes three groups of amoebozoan protists: Myxogastria, Dictyostelia and Protosporangiida—also known as Myxomycetes, Dictyosteliomycetes and Ceratiomyxomycetes, respectively. It is defined on a node-based approach as the least inclusive clade containing the species Dictyostelium discoideum (a dictyostelid), Physarum polycephalum (a myxogastrid) and Ceratiomyxa fruticulosa (a protosporangiid).
All known members of Eumycetozoa generate fruiting bodies, either as sorocarps (in dictyostelids) or as sporocarps (in myxogastrids and protosporangiids). Within their life cycle, they may appear as single haploid amoeboid cells (in dictyostelids), or as flagellated amoebae with two cilia that give rise to obligate amoebae with no cilia, from which the sporocarps develop (in myxogastrids and protosporangiids).
The flagellated amoebae of myxogastrids and protosporangiids and non-flagellated amoebae of dictyostelids have a flat cell shape. They form wide pseudopodia with acutely pointed subpseudopodia (i.e. smaller pseudopodia that grow beneath). Unlike other amoebae, the pseudopodia lack a prominent streaming of granular cytoplasm.
In eumycetozoans where sexual reproduction is well studied, the zygote cannibalizes on haploid amoebae.
Evolution
Eumycetozoa is a well supported clade within Amoebozoa. In independent phylogenetic analyses, it has been consistently recovered as the sister group to Archamoebae. The Eumycetozoa+Archamoebae clade is, in turn, the sister group to Variosea. Within Eumycetozoa, Dictyostelia has a basal position while Myxogastria and Protosporangiida form a clade. Together, these three groups are part of the larger clade Conosa. The following cladogram is based on a 2022 analysis:
Taxonomy
The name Eumycetozoa was first used by German mycologist Friedrich Wilhelm Zopf in 1884, although no formal taxonomic rank was given. In 1975, mycologist Lindsay Shepherd Olive reintroduced the name Eumycetozoa as a class containing the three groups of fruiting amoebae traditionally included in this taxon: Myxogastria, Dictyostelia and Protostelia. Olive hypothesized that all fruiting amoebae were grouped by this monophyletic taxon, and that the Myxogastria and Dictyostelia were also monophyletic taxa that evolved from a paraphyletic grade of Protostelia. This definition of Eumycetozoa, which included protostelids, was maintained in the 2005 cladistic classification of eukaryotes, where the name was synonymized with Mycetozoa.
Amoebozoa
Eumycetozoa [=Mycetozoa ]
Protostelia (P)
Myxogastria [=Myxomycetes ]
Dictyostelia
Incertae sedis Eumycetozoa: Copromyxa, Copromyxella, Fonticula
However, studies in the 2000s disproved this hypothesis. Both morphological and molecular studies showed that Eumycetozoa includes a number of non-fruiting amoeboid groups. More importantly, the Protostelia were discovered to be polyphyletic. The protosteloid type of fruiting body formation, initially considered the ancestral feature shared between all Eumycetozoa, has evolved independently at least in eight lineages within Amoebozoa (e.g. soliformoviids, cavosteliids, schizoplasmodiids, protosporangiids). This discovery led to the conclusion that the entirety of Amoebozoa became a synonym of Eumycetozoa, and was treated as such in the 2012 cladistic classification of eukaryotes. The term Amoebozoa was conserved as a familiar well-established name of popular usage, despite the term Eumycetozoa having priority as the older name.
To preserve this widely used name, biologist Seungho Kang and his coauthors redefined Eumycetozoa in 2017 to include only one group of protosteloid amoebae, the Protosporangiida (also known as Ceratiomyxomycetes), which are a monophyletic taxon. This usage corresponds to the 1975 hypothesis from Olive that postulates a clade of exclusively fruiting protists that includes myxogastrids, dictyostelids, and some protosteloid amoebae (in this case, the protosporangiids). As of 2019, this renewed definition is accepted by the scientific community and appears in the modern cladistic classification of eukaryotes, revised by the International Society of Protistologists. The name Macromycetozoa was suggested earlier, but Eumycetozoa was chosen for being the oldest term.
Amoebozoa
Evosea
Eumycetozoa [=Macromycetozoa ]
Dictyostelia
Myxogastria [=Myxomycetes ]
Protosporangiida
The name Mycetozoa was maintained in traditional classifications by some authors like Thomas Cavalier-Smith, who also used a renewed definition to include only protosporangiids. However this scheme did not acquire wide usage.
Notes
References
External links
Eumycetozoa at UniEuk Taxonomy App.
Protista
Taxa described in 1884 | Eumycetozoa | [
"Biology"
] | 1,397 | [
"Eukaryotes",
"Protists"
] |
9,282,128 | https://en.wikipedia.org/wiki/Hypoelliptic%20operator | In the theory of partial differential equations, a partial differential operator defined on an open subset
is called hypoelliptic if for every distribution defined on an open subset such that is (smooth), must also be .
If this assertion holds with C^∞ replaced by real-analytic, then P is said to be analytically hypoelliptic.
Every elliptic operator with C^∞ coefficients is hypoelliptic. In particular, the Laplacian is an example of a hypoelliptic operator (the Laplacian is also analytically hypoelliptic). In addition, the operator for the heat equation (P(u) = u_t − k·u_xx)
(where k > 0) is hypoelliptic but not elliptic. However, the operator for the wave equation (P(u) = u_tt − c²·u_xx)
(where c ≠ 0) is not hypoelliptic.
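A standard way to see that the wave operator fails to be hypoelliptic (a textbook observation; f below is an arbitrary traveling-wave profile):

```latex
u(x,t) = f(x - ct)
\quad\Longrightarrow\quad
P(u) = u_{tt} - c^2\, u_{xx} = c^2 f''(x - ct) - c^2 f''(x - ct) = 0 ,
```

so Pu = 0 is smooth for every distribution f, while u itself is only as regular as f; choosing f continuous but nowhere differentiable yields a distributional solution that is not C^∞.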
References
Partial differential equations
Differential operators | Hypoelliptic operator | [
"Mathematics"
] | 167 | [
"Mathematical analysis",
"Differential operators"
] |
9,282,456 | https://en.wikipedia.org/wiki/Francisco%20Jos%C3%A9%20de%20Caldas | Francisco José de Caldas (October 4, 1768 – October 28, 1816) was a Neogranadine lawyer, military engineer, self-taught naturalist, mathematician, geographer and inventor (he created the first hypsometer), who was executed by orders of General Pablo Morillo during the Spanish American Reconquista for being a forerunner of the fight for the independence of New Granada (modern day Colombia). Arguably the first Colombian scientist, he is often nicknamed "El Sabio" (Spanish for "The learned," "The sage" or "The wise").
Biography
Early life
Caldas was born in Popayán, in 1768. His parents were José de Caldas and Vicenta Tenorio, the aunt of fellow independence hero Camilo Torres Tenorio. Like his cousin, Caldas studied in the Seminary of Popayán, where he met others of the leaders of the Colombian independence movement like Francisco Antonio Zea. Also like his cousin, in 1788 and pressed by his father he moved to Santafé (modern day Bogotá) to study jurisprudence in the Colegio del Rosario, where he obtained a bachelor's degree in 1793. As a student, Caldas was always interested in the study of mathematics, astronomy and the natural sciences, and he only studied law as a result of his father's pressure. Following this, he relocated to Popayán, to administer the family businesses and as a trader, a craft in which he was very unsuccessful.
Scientific and academic career
During his many business trips to Santafé, Caldas was more concerned with scientific observation and devoted long hours to determining geographical coordinates and to making observations. He was particularly concerned with determining the geographical location and altitude of different places, so he was always using a barometer, a thermometer, and a compass. His interest in determining altitude and the fortuitous breaking of a thermometer led to his development of the hypsometer, an apparatus that determined altitude as a function of the boiling point of water. His studies and records of the time survive in both his letters and memoirs, including a map of the course of the Prado river in the department of Tolima, notes about medicinal trees, a description of the stone hieroglyphs in Aipe and of the statues at San Agustín, experiments to determine whether an insect was venomous, and many others. His descriptions about the leveling in the plants growing close to the equinoctial line were sent to José Celestino Mutis, and as a consequence Mutis appointed him to the Royal Botanical Expedition to New Granada.
Following a trip to Quito, he then traveled to Ibarra to meet Alexander von Humboldt and Aimé Bonpland, on December 31, 1801. Considering that he was situated in the relative backwater of Popayán, Humboldt was impressed with his scientific accomplishments. Caldas gave Humboldt and Bonpland data on the altitudes in the region and became their personal friend, and was mentored by them in the study of botany. Together they did some exploration in the surroundings of Quito. Unable to continue traveling with Humboldt, he devoted wholeheartedly to scientific enterprises, and to write his memoirs.
After traveling through Peru and Ecuador, and across the New Kingdom of Granada exploring the newfound land, studying flora, fauna, geography, meteorology and cartography, Caldas returned to Santafé in 1805, where he started working for the Botanical Expedition. Mutis charged him with directing the recently built Astronomic Observatory. During this time he also created a newspaper, "El Semanario," in 1808, where many of his academic writings were published. Caldas expected to be appointed director of the Botanical Expedition following Mutis' death in 1808, but Mutis had appointed his nephew Sinforoso Mutis, instead. Caldas was, nevertheless, confirmed as director of the Astronomical Observatory, and was in charge of studying the flora of Bogotá. He was appointed also as a lecturer of elementary mathematics at the Colegio del Rosario.
Political and military life
July 20, 1810, and the Colombian Declaration of Independence
In 1809, following the death of Mutis, future independence leaders like Caldas' cousin Camilo Torres, and Antonio Nariño, started meeting clandestinely in one of the halls of the Observatory. While Caldas certainly allowed the meetings, his involvement was minimal as he was more interested in his scientific enterprises. During this period he published his "Scientific Memoirs," a continuation of his Semanario.
Caldas was actively involved, nonetheless, in the events of July 20, 1810. Following the lead of cities like Cartagena de Indias, which had created their own juntas, a plot was developed to stimulate the formation of a junta in Santafé. The plot famously consisted of borrowing a flower vase or some other object from a Peninsular Spaniard, José González Llorente, to use it in a celebration for the arrival of commissioner of the Regency Antonio Villavicencio to the city, taking advantage of the fact that Villavicencio's arrival had brought hundreds of people to the city. The plot creators were hoping that Llorente would refuse, and would use the refusal to call for the formation of a Junta, and to do so, Caldas agreed to drop by at the time of the request so that he could be "reprimanded" for dealing with a Spaniard who was mistreating the creoles. As planned, the "offended" started shouting the offenses by Peninsular Spaniards, and calling for the installation of a Junta. This led, as planned, to a city revolt following which Viceroy Amar peacefully agreed to the formation of the Santafé junta. The date of the formation of this junta is considered the official Day of Independence of Colombia.
After the Declaration of Independence
Following the events of July 20, Caldas and Joaquín Camacho were asked to create the first newspaper of the newly founded Republic, called the "Diario Político de Santafé de Bogotá" (Political Journal of Santafé de Bogotá), first published on August 27, 1810. The Diario published a complete description of all the events surrounding the creation of the Junta, and published articles about political economics and the political decisions of the Junta. Caldas kept publishing his Scientific Memoirs during this period.
In September 1811, Antonio Nariño was appointed as President of the Free and Independent State of Cundinamarca. One of his first actions as a president was the formation of the Army Engineer Corps, and Caldas was then appointed to them as a Captain, and charged with making plots of roads and itineraries. Caldas was part of the troops sent by Nariño, under the command of General Antonio Baraya, to defeat the federalists that were assembled in the Congress of the United Provinces in Tunja. Baraya, however, decided to switch factions and support the federalist forces, and Caldas joined him, signing the act that declared Nariño a usurper and a tyrant, and supporting the Congress in May, 1812. Caldas was appointed as a member of the Military Commission of the Congress, was given the rank of Lieutenant Colonel, and was involved in the Battle of Ventaquemada on December 2, 1812, in which the federalist troops were victorious, and in the Battle of San Victorino (or Battle of Santafé de Bogotá, San Victorino y Las Cruces), on January 9, 1813, where the federalist troops were utterly defeated.
After being defeated in the rebellion, Caldas, fearing reprisals, escaped to Popayán, but finding that it had been overtaken by the royal troops commanded by future viceroy Juan Sámano, went then to the province of Antioquia. Antioquia had declared independence as the "State of Antioquia" or the "Free and Sovereign State of Antioquia." The Antioquia state had appointed Juan del Corral as a dictator, and del Corral welcomed Caldas and appointed him to create a Military School and as Director of Rifle Factories and General Engineer, as well as giving him the rank of Colonel. As an engineer, Caldas was in charge of erecting buildings, powder mills, and gun factories, as well as coin minting. He also taught in the Academy of Engineers in Medellín, in 1814. Between 1813 and 1814, he took charge of the fortifications along the Cauca River and the installation of a rifle and gunpowder factory. By the end of 1814, Nariño had been defeated and arrested by the Spanish crown, and Bolívar and his army had forced the submission of Cundinamarca to the United Provinces. The federalist General Government, which had been established in Santafé, with growing concerns about the possibility of a Spanish reconquest following the start of Morillo's campaign, then called Caldas to appoint him with the creation of a similar Military School, and to build bridges, trenches, and fortifications around the city. He was sent to the northern army and to fortify roads in Quindío.
Death
Pablo Morillo finally captured Santafé on May 6, 1816. Like the other leaders of the Independence movement, Caldas escaped the city, originally with the goal of getting to Buenaventura to escape abroad. On the way, however, future viceroy Sámano gained a victory over the Republican troops in the Battle of la Cuchilla del Tambo, reconquering Popayán, and Caldas was forced to hide in the Paisbamba Farm in Sotará, where he was soon arrested by the Spanish Royalists.
He was then sent back to Santafé, and executed by a firing squad on October 29, 1816, in the San Francisco Plaza by orders of Morillo, Count of Cartagena. When Caldas was about to be executed and the people present appealed for the life of the scientist, Morillo responded: "Spain does not need savants" (Spanish: "España no necesita sabios"). Before dying, Caldas wrote on the wall a large Greek letter θ, which has been interpreted as exclaiming "Oh long and dark departure!" (Spanish: ¡Oh larga y negra partida!). In classical Athens, theta was used as an abbreviation for the Greek θάνατος (thanatos, "death").
His body was buried in the Church of Veracruz, which was later turned into the Panteón Nacional (National Pantheon) but later moved to the Panteón de los Próceres in his hometown, Popayán.
Legacy
Caldas helped found the New Kingdom of Granada Seminary, intended to be a scientific institution, during the first decade of the 1800s. In 1810, he founded the Diario Político de Santa Fe (Political Diary of Santa Fe), which ultimately defended the independentist movement. During this time Caldas became a colonel of engineers, designing an artillery apparatus for the revolutionaries.
Due to his work in the Army Engineer Corps, he is considered by some authors the "father of Colombian engineering".
The Colombian department of Caldas is named for Francisco José de Caldas.
The Francisco José de Caldas District University, a large public university in Bogotá is named after him.
The “Francisco José de Caldas” Scholarship for Doctoral Programs is awarded by The Departamento Administrativo de Ciencia, Tecnología e Innovación (Colciencias) for Colombians to study toward a PhD.
Caldas' face appeared on the 20 Colombian peso banknote.
Books
"El estado de la geografía del virreinato con relación a la economía y al comercio" (1807)
"El influjo del clima sobre los seres organizados" (1808)
"La Memoria sobre la Nivelación de las Plantas del Ecuador, Historia de Nuestra Revolución, Educación de Menores, Importancia del Cultivo de la Cochinilla y Chinchografía y Geografía de los Arboles de Quina
References
Further reading
Appel, John Wilton. Francisco José de Caldas: A scientist at work in Nueva Granada. Philadelphia: American Philosophical Society, 1994.
Glick, Thomas F. "Science and Independence in Latin America (with Special Reference to New Granada". The Hispanic American Historical Review, Duke UP, Vol. 71 #2, 5/1991, 307-334.
External links
colombialink.com article on Francisco Jose de Caldas
Francisco José de Caldas. Polymath Virtual Library, Fundación Ignacio Larramendi
1768 births
1816 deaths
Del Rosario University alumni
People of the Colombian War of Independence
19th-century Colombian botanists
18th-century naturalists
19th-century naturalists
Executed Colombian people
People executed by New Spain
Deaths by firearm in Colombia
Military engineers
People executed by Spain by firearm
19th-century executions by Spain
People from Popayán
18th-century Colombian botanists
Viceroyalty of New Granada people | Francisco José de Caldas | [
"Engineering"
] | 2,651 | [
"Military engineers",
"Military engineering"
] |
9,285,398 | https://en.wikipedia.org/wiki/Peco%20%28unit%29 | The peco is a unit of measurement of the dielectric properties of concrete and other hydrating materials.
The dielectric constant of concrete is about 4.5, but changes with time, as concrete hydrates, and with changes in formulation and/or ingredients.
The unit of peco is derived from the two elements of the dielectric constant (which is a ratio): permittivity and conductivity. Peco was coined by Hydronix, Ltd. to represent a unit of output from their TitanCSM device, which measures the dielectric properties of concrete as it hardens.
Concrete | Peco (unit) | [
"Engineering"
] | 124 | [
"Structural engineering",
"Concrete"
] |
9,285,474 | https://en.wikipedia.org/wiki/Behavioral%20modeling%20in%20computer-aided%20design | In computer-aided design, behavioral modeling is a high-level circuit modeling technique in which the behavior of the logic is modeled.
The Verilog-AMS and VHDL-AMS languages are widely used to model logic behavior.
Other modeling approaches
Register transfer level modeling: logic is modeled at register level
Structural modeling: logic is modeled at both register level and gate level
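The distinction between the behavioral and structural levels above can be illustrated outside any particular HDL. The following is a minimal Python sketch (the function names are illustrative, not part of any HDL standard): the behavioral model states only what a full adder does, while the structural model wires up an explicit XOR/AND/OR netlist, and the two descriptions can be checked for equivalence over all inputs.

```python
from itertools import product

def full_adder_behavioral(a, b, cin):
    """Behavioral model: describe only the function, not the gates."""
    total = a + b + cin
    return total % 2, total // 2          # (sum, carry-out)

def full_adder_structural(a, b, cin):
    """Structural model: explicit gate-level netlist of the same adder."""
    xor1 = a ^ b
    s = xor1 ^ cin                        # sum = a XOR b XOR cin
    carry = (a & b) | (xor1 & cin)        # carry = ab + (a XOR b)cin
    return s, carry

# The two abstraction levels must agree on every input combination.
for a, b, cin in product((0, 1), repeat=3):
    assert full_adder_behavioral(a, b, cin) == full_adder_structural(a, b, cin)
```

In Verilog-AMS or VHDL-AMS the same distinction holds: a behavioral process describes the input–output relation directly, while a structural description instantiates gate or transistor primitives.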
References
Analog Behavioral Modeling with the Verilog-A Language by Dan FitzPatrick, Ira Miller.
Computer-aided design | Behavioral modeling in computer-aided design | [
"Technology",
"Engineering"
] | 98 | [
"Computer-aided design",
"Design engineering",
"Computer science stubs",
"Computer science",
"Computing stubs"
] |
9,285,550 | https://en.wikipedia.org/wiki/Jump%20seat | A jump seat (sometimes spelled jumpseat) is an auxiliary seat in an automobile, train or aircraft, typically folding or spring-loaded to collapse out of the way when not used. The term originated in the United States c. 1860 for a movable carriage seat.
History
Jump seats originated in horse-drawn carriages and were carried over to various forms of motorcar. A historic use still found today is in limousines, along with delivery vans (either as an auxiliary seat or an adaptation of the driver's seat to improve ease of entry and exit for their many deliveries) and various forms of extended cab pickup trucks (to permit a ready trade-off, and transition, between seating and storage space behind the front seat).
In aviation
Jump seats are found both in the utility areas of the passenger cabin for flight attendant use (required during takeoff and landing) and in the cockpit (officially termed auxiliary crew stations) for individuals not involved in operating the aircraft. Cockpit uses may include trainee pilots observing the flight crew, off-duty crew members deadheading to another airport, or official observers such as regulatory agency or airline inspectors. Airline personnel merely in transit may be assigned auxiliary jump seats in the cabin or designated empty row seating.
Cabin crew jump seats are normally located near emergency exits so that flight attendants can quickly open exit doors in an emergency and aid in evacuation.
Security requirements for both flight deck and cabin jump seat use have been tightened significantly since September 11, 2001.
See also
Fold down seating
Folding chair
Folding seat
List of seats
Rumble seat
References
External links
The Jump Seat
1860 introductions
Aircraft cabin components
Vehicle parts
Auto parts
Carriages
Seats | Jump seat | [
"Technology"
] | 329 | [
"Vehicle parts",
"Components"
] |
9,286,561 | https://en.wikipedia.org/wiki/Stotting | Stotting (also called pronking or pronging) is a behavior of quadrupeds, particularly gazelles, in which they spring into the air, lifting all four feet off the ground simultaneously. Usually, the legs are held in a relatively stiff position. Many explanations of stotting have been proposed, though for several of them there is little evidence either for or against.
The question of why prey animals stot has been investigated by evolutionary biologists including John Maynard Smith, C. D. Fitzgibbon, and Tim Caro; all of them conclude that the most likely explanation given the available evidence is that it is an honest signal to predators that the stotting animal would be difficult to catch. Such a signal is called "honest" as it is not deceptive in any way, and would benefit both predator and prey: the predator as it avoids a costly and unproductive chase, and the prey as it does not get chased.
Etymology
Stot is a common Scots and Northern English verb meaning "bounce" or "walk with a bounce". Uses in this sense include stotting a ball off a wall, and rain stotting off a pavement. Pronking comes from the Afrikaans verb pronk-, which means "show off" or "strut", and is a cognate of the English verb "prance".
Taxonomic distribution
Stotting occurs in several deer species of North America, including mule deer, pronghorn, and Columbian black-tailed deer, when a predator is particularly threatening, and in a variety of ungulate species from Africa, including Thomson's gazelle and springbok. It is also said to occur in the blackbuck, a species found in India.
Stotting occurs in domesticated livestock such as sheep and goats, typically only in young animals.
Possible explanations
Stotting makes a prey animal more visible, and uses up time and energy that could be spent on escaping from the predator. Since it is dangerous, the continued performance of stotting by prey animals must bring some benefit to the animal (or its family group) performing the behavior. Several possible explanations have been proposed, namely that stotting may be:
A good means of rapid escape or jumping over obstructions. However, this cannot be true in Thomson's gazelles because these prey animals do not stot when a predator is less than approximately 40 m away.
An anti-ambush behavior; animals living in tall grass may leap into the air to detect potential predators. There is some evidence for this.
An alarm signal to other members of the herd that a predator is hazardously close, thereby increasing the survival rate of the herd. This would be an instance of group selection, a theory heavily criticized by evolutionary biologists.
A socially cohesive behavior to escape predators by coordinated stotting, thereby making it more difficult for a predator to target any individual during an attack. This too would be group selection, subject to the same objections.
An honest signal of the animal's fitness. Stotting could be a way of deterring pursuit by warning a predator of the animal's unsuitability as prey: the prey benefits by not being chased (because it is in fact very fit); the predator benefits by not wasting time chasing an animal it is unlikely to catch. This signaling explanation avoids the group selection connotations of the "alarm signal" and "socially cohesive" escape hypotheses.
An instance of Amotz Zahavi's handicap principle, whereby stotting is signaling to predators that the animal is so fit it can escape even if it deliberately slows itself down with some apparently useless behavior (i.e. stotting).
A predator detection signal whereby the animal signals to the predator that it has been seen and therefore does not have the advantage of surprise. Many such signals exist in different groups of animals. Again, this would be an honest pursuit deterrence signal, benefiting the prey by not being chased (because it can be seen to be aware of the predator and ready to escape immediately) and benefitting the predator by not wasting time stalking prey when it has already been seen. Evidence for this hypothesis is that cheetahs abandon more hunts when their gazelle prey stots, and when they do give chase to a stotting gazelle, they are far less likely to make a kill. However, gazelles stot less often to cheetahs (which stalk and would therefore probably give up when detected) than to African wild dogs, which "course" (chase prey relentlessly, not relying on surprise).
A fitness display to potential mates in a sexual selection process rather than an antipredator adaptation.
Play, especially in young animals, which may help to prepare them for adult life. In favor of this hypothesis, stotting is sometimes observed in immature animals; against this is the fact that stotting is generally seen in adult prey responding to predators.
The English evolutionary biologist John Maynard Smith concludes that "the natural explanation is that stotting is an index of condition and of escape capability", used as a signal especially to coursing predators. He also observes that "it is hard to see how it could be a handicap", unless perhaps it is a signal to other gazelles of the same species. C. D. Fitzgibbon agrees that it is most likely an honest signal of condition. Tim Caro comments that there is insufficient evidence "to support or refute a startle effect of stotting, prey signalling its health, the Social Cohesion hypothesis or [an] alarm function of stotting"; in his view, stotting informs a predator that it has been detected.
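The pursuit-deterrence argument above can be made concrete with a deliberately simplified expected-payoff model. All numbers below are illustrative assumptions, not field data: if stotting honestly reveals that prey is fit, a predator that respects the signal skips low-probability chases and saves the chase cost.

```python
# Toy model of stotting as an honest pursuit-deterrence signal.
# All parameters are illustrative assumptions, not measured values.
P_CATCH_FIT   = 0.05   # chance of catching a fit (stotting) gazelle
P_CATCH_UNFIT = 0.50   # chance of catching an unfit (non-stotting) gazelle
FRAC_FIT      = 0.7    # fraction of gazelles fit enough to stot
PREY_VALUE    = 10.0   # payoff to the predator for a kill
CHASE_COST    = 1.0    # energy cost of any chase

def expected_payoff(p_catch):
    return p_catch * PREY_VALUE - CHASE_COST

# A predator that ignores the signal chases every gazelle it meets:
ignore = (FRAC_FIT * expected_payoff(P_CATCH_FIT)
          + (1 - FRAC_FIT) * expected_payoff(P_CATCH_UNFIT))

# A predator that respects the signal chases only non-stotting prey:
respect = (1 - FRAC_FIT) * expected_payoff(P_CATCH_UNFIT)

print(f"ignore signal:  {ignore:+.2f} per encounter")
print(f"respect signal: {respect:+.2f} per encounter")
```

Under these assumptions the predator does better by abandoning chases of stotting prey (+1.20 versus +0.85 per encounter), which is what makes the honest signal stable for both parties.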
See also
Aposematism
Deimatic behaviour
References
External links
Signalling theory
Ethology
Terrestrial locomotion | Stotting | [
"Biology"
] | 1,190 | [
"Behavioural sciences",
"Ethology",
"Behavior"
] |
9,286,935 | https://en.wikipedia.org/wiki/Head%20%28vessel%29 | A head is one of the end caps on a cylindrically shaped pressure vessel.
Principle
Vessel dished ends are mostly used in storage or pressure vessels in industry. These ends, which in upright vessels are the bottom and the top, use less space than a hemisphere (which is the ideal form for pressure containment) while requiring only a slightly thicker wall.
Manufacturing
The manufacturing of such an end is easier than that of a hemisphere. The starting material is first pressed to a radius r1 and then curled at the edge creating the second radius r2. Vessel dished ends can also be welded together from smaller pieces.
Shapes
The shape of the heads used can vary. The most common head shapes are:
Hemispherical head
A sphere is the ideal shape for a head, because the stresses are distributed evenly through the material of the head. The radius (r) of the head equals the radius of the cylindrical part of the vessel.
Ellipsoidal head
This is also called an elliptical head. The shape of this head is more economical, because the height of the head is just a fraction of the diameter. Its radius varies between the major and minor axes; usually the ratio is 2:1.
Semi–Ellipsoidal Dished Heads
2:1 Semi-Ellipsoidal dished heads are deeper and stronger than the more popular torispherical dished heads.
The greater depth results in the head being more difficult to form, and this makes them more expensive to manufacture. However, the cost is offset by a potential reduction in the specified thickness due to the dished head having greater overall strength and resistance to pressure.
Torispherical head (or flanged and dished head)
These heads have a dish with a fixed radius (r1), the size of which depends on the type of torispherical head. The transition between the cylinder and the dish is called the knuckle. The knuckle has a toroidal shape. The most common types of torispherical heads are:
ASME F&D head
Commonly used for ASME pressure vessels, these torispherical heads have a crown radius equal to the outside diameter of the head, and a knuckle radius equal to 6% of the outside diameter. The ASME design code does not allow the knuckle radius to be any less than 6% of the outside diameter.
Klöpper head
This is a torispherical head. The dish has a radius that equals the diameter of the cylinder it is attached to. The knuckle has a radius that equals a tenth of the diameter of the cylinder, hence its alternative designation "decimal head".
Korbbogen head
This is a torispherical head, also named semi-ellipsoidal head according to DIN 28013. The radius of the dish is 80% of the diameter of the cylinder; the radius of the knuckle is 15.4% of the diameter.
This shape finds its origin in architecture; see Korbbogen.
80-10 head
These heads have a crown radius of 80% of outside diameter, and a knuckle radius of 10% of outside diameter.
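For the torispherical families above, the inside depth of the dish (excluding any straight flange) follows from the tangency of the crown and knuckle radii: h = R − √((R − r)² − (D/2 − r)²). The sketch below applies this standard geometric relation to the head proportions given in this section; for the ASME F&D head it reproduces the familiar rule of thumb h ≈ 0.169 D.

```python
import math

def dish_depth(D, crown_r, knuckle_r):
    """Inside depth of a torispherical head (excluding straight flange).

    D         -- inside diameter of the cylindrical shell
    crown_r   -- crown (dish) radius R
    knuckle_r -- knuckle radius r
    """
    c = D / 2 - knuckle_r  # distance of the knuckle-circle centre from the axis
    return crown_r - math.sqrt((crown_r - knuckle_r) ** 2 - c ** 2)

D = 1.0  # unit diameter, so the depths below are fractions of D
print(f"ASME F&D (R=D,    r=0.06D): h = {dish_depth(D, 1.0 * D, 0.06 * D):.4f} D")
print(f"Kloepper (R=D,    r=0.10D): h = {dish_depth(D, 1.0 * D, 0.10 * D):.4f} D")
print(f"80-10    (R=0.8D, r=0.10D): h = {dish_depth(D, 0.8 * D, 0.10 * D):.4f} D")
```

Deeper heads (larger h for the same D) distribute stress more evenly, which is why the deeper shapes tolerate higher pressure for a given wall thickness.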
Flat head
This is a head consisting of a toroidal knuckle connecting to a flat plate. This type of head is typically used for the bottom of cookware.
Diffuser head
This type of head is often found on the bottom of aerosol spray cans. It is an inverted torispherical head.
Conical head
This is a cone-shaped head.
Heat treatment
Heat treatment may be required after cold forming, but not for heads formed by hot forming.
References
Pressure vessels | Head (vessel) | [
"Physics",
"Chemistry",
"Engineering"
] | 777 | [
"Structural engineering",
"Chemical equipment",
"Physical systems",
"Hydraulics",
"Pressure vessels"
] |
9,287,165 | https://en.wikipedia.org/wiki/Tris%28bipyridine%29ruthenium%28II%29%20chloride | Tris(bipyridine)ruthenium(II) chloride is the chloride salt coordination complex with the formula [Ru(bpy)3]Cl2. This polypyridine complex is a red crystalline salt obtained as the hexahydrate, although all of the properties of interest are in the cation [Ru(bpy)3]2+, which has received much attention because of its distinctive optical properties. The chlorides can be replaced with other anions, such as PF6−.
Synthesis and structure
This salt is prepared by treating an aqueous solution of ruthenium trichloride with 2,2'-bipyridine. In this conversion, Ru(III) is reduced to Ru(II), and hypophosphorous acid is typically added as a reducing agent. [Ru(bpy)3]2+ is octahedral, containing a central low-spin d6 Ru(II) ion and three bidentate bpy ligands. The Ru-N distances are 2.053(2) Å, shorter than the Ru-N distances in [Ru(bpy)3]3+. The complex is chiral, with D3 symmetry, and has been resolved into its enantiomers. In its lowest-lying triplet excited state the molecule is thought to attain lower C2 symmetry, as the excited electron is localized primarily on a single bipyridyl ligand.
Photochemistry of [Ru(bpy)3]2+
[Ru(bpy)3]2+ absorbs ultraviolet and visible light. Aqueous solutions of [Ru(bpy)3]Cl2 are orange due to a strong MLCT absorption at 452 ± 3 nm (extinction coefficient of 14,600 M−1cm−1). Further absorption bands are found at 285 nm, corresponding to ligand-centered π*← π transitions, and a weak transition around 350 nm (d-d transition). Light absorption results in the formation of an excited state with a relatively long lifetime: 890 ns in acetonitrile and 650 ns in water. The excited state relaxes to the ground state by emission of a photon or by non-radiative relaxation. The quantum yield is 2.8% in air-saturated water at 298 K and the emission maximum wavelength is 620 nm. The long lifetime of the excited state is attributed in part to the fact that it is a triplet state, whereas the ground state is a singlet state, and in part to the fact that the structure of the molecule allows for charge separation. Singlet-triplet transitions are forbidden and therefore often slow.
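The quoted quantum yield and lifetime determine the radiative and non-radiative decay rate constants through the standard photophysical relations kr = Φ/τ and knr = (1 − Φ)/τ. A quick sketch combining the values given above for water (strictly, Φ and τ should refer to identical conditions, so combining the quoted numbers is an approximation):

```python
phi = 0.028    # emission quantum yield, air-saturated water at 298 K
tau = 650e-9   # excited-state lifetime in water, seconds

k_r  = phi / tau         # radiative rate constant, s^-1
k_nr = (1 - phi) / tau   # sum of all non-radiative decay rates, s^-1

print(f"k_r  = {k_r:.2e} s^-1")   # roughly 4.3e4 s^-1
print(f"k_nr = {k_nr:.2e} s^-1")  # roughly 1.5e6 s^-1
```

The small radiative rate is consistent with the spin-forbidden character of the triplet-to-singlet emission discussed above.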
Like all molecular excited states, the triplet excited state of [Ru(bpy)3]2+ has both stronger oxidizing and reducing properties than its ground state. This situation arises because the excited state can be described as an Ru3+ complex containing a bpy•− radical anion as a ligand. Thus, the photochemical properties of [Ru(bpy)3]2+ are reminiscent of the photosynthetic assembly, which also involves separation of an electron and a hole.
[Ru(bpy)3]2+ has been examined as a photosensitizer for both the oxidation and reduction of water. Upon absorbing a photon, [Ru(bpy)3]2+ converts to the aforementioned triplet state, denoted [Ru(bpy)3]2+*. This species transfers an electron, located on one bpy ligand, to a sacrificial oxidant such as peroxodisulfate (S2O82−). The resulting [Ru(bpy)3]3+ is a powerful oxidant and oxidizes water into O2 and protons via a catalyst. Alternatively, the reducing power of [Ru(bpy)3]2+* can be harnessed to reduce methylviologen, a recyclable carrier of electrons, which in turn reduces protons at a platinum catalyst. For this process to be catalytic, a sacrificial reductant, such as EDTA4− or triethanolamine is provided to return the Ru(III) back to Ru(II).
Derivatives of [Ru(bpy)3]2+ are numerous. Such complexes are widely discussed for applications in biodiagnostics, photovoltaics, and organic light-emitting diodes, but no derivative has been commercialized. Application of [Ru(bpy)3]2+ and its derivatives to the fabrication of optical chemical sensors is arguably one of the most successful areas so far.
[Ru(bpy)3]2+ and photoredox catalysis
Photoredox catalysis exploits [Ru(bpy)3]2+ as a sensitizer as a strategy for organic synthesis. Many analogues of [Ru(bpy)3]2+ are employed as well. These transformations exploit the redox properties of [Ru(bpy)3]2+* and its reductively quenched derivative [Ru(bpy)3]+.
Safety
Metal bipyridine as well as related phenanthroline complexes are generally bioactive, as they can act as intercalating agents.
See also
Primogenic Effect
Tris(bipyridine)iron(II) chloride
References
Ruthenium complexes
Photochemistry
Bipyridine complexes
Ruthenium(II) compounds
Chlorides
Pyridine complexes | Tris(bipyridine)ruthenium(II) chloride | [
"Chemistry"
] | 1,136 | [
"Chlorides",
"Inorganic compounds",
"nan",
"Salts"
] |
9,288,194 | https://en.wikipedia.org/wiki/Atmospheric%20chemistry%20observational%20databases | Over the last two centuries many environmental chemical observations have been made from a variety of ground-based, airborne, and orbital platforms and deposited in databases. Many of these databases are publicly available. All of the instruments mentioned in this article give online public access to their data. These observations are critical in developing our understanding of the Earth's atmosphere and issues such as climate change, ozone depletion and air quality. Some of the external links provide repositories of many of these datasets in one place. For example, the Cambridge Atmospheric Chemical Database, is a large database in a uniform ASCII format. Each observation is augmented with the meteorological conditions such as the temperature, potential temperature, geopotential height, and equivalent PV latitude.
Ground-based and balloon observations
NDSC observations. The Network for the Detection of Stratospheric Change (NDSC) is a set of high-quality remote-sounding research stations for observing and understanding the physical and chemical state of the stratosphere. Ozone and key ozone-related chemical compounds and parameters are targeted for measurement. The NDSC is a major component of the international upper atmosphere research effort and has been endorsed by national and international scientific agencies, including the International Ozone Commission, the United Nations Environment Programme (UNEP), and the World Meteorological Organization (WMO). The primary instruments and measurements are:
Ozone lidar (vertical profiles of ozone from the tropopause to at least 40 km altitude; in some cases tropospheric ozone is also measured).
Temperature lidar (vertical profiles of temperature from about 30 to 80 km).
Aerosol lidar (vertical profiles of aerosol optical depth in the lower stratosphere).
Water vapor lidar (vertical profiles of water vapor in the lower stratosphere).
Ozone microwave (vertical profiles of stratospheric ozone from 20 to 70 km).
H2O microwave (vertical profiles of water vapor from about 20 to 80 km).
ClO microwave (vertical profiles of ClO from about 25 to 45 km, depending on latitude).
Ultraviolet/visible spectrograph (column abundances of ozone, NO2, and, at some latitudes, OClO and BrO).
Fourier transform infrared spectrometer (column abundances of a broad range of species including ozone, HCl, NO, NO2, ClONO2, and HNO3).
MkIV observations. The MkIV Interferometer is a Fourier Transform Infra-Red (FTIR) Spectrometer, designed and built at the Jet Propulsion Laboratory in 1984, to remotely sense the composition of the Earth's atmosphere by the technique of solar absorption spectrometry. This was born out of concern that man-made pollutants (e.g. chlorofluorocarbons, aircraft exhaust) might perturb the ozone layer. Since 1984, the MkIV Interferometer has participated in 3 NASA DC-8 polar aircraft campaigns, and has successfully completed 15 balloon flights. In addition, the MkIV Interferometer made over 900 days of ground-based observations from many different locations, including McMurdo, Antarctica in 1986.
Sonde observations. The World Ozone and Ultraviolet Radiation Data Centre (WOUDC) is one of five World Data Centres which are part of the Global Atmosphere Watch (GAW) programme of the World Meteorological Organization (WMO). The WOUDC is operated by the Experimental Studies Division of the Meteorological Service of Canada (MSC) — formerly Atmospheric Environment Service (AES), Environment Canada and is located in Toronto. The WOUDC began as the World Ozone Data Centre (WODC) in 1960 and produced its first data publication of Ozone Data for the World in 1964. In June 1992, the AES agreed to a request from the WMO to add ultraviolet radiation data to the WODC. The Data Centre has since been renamed to the World Ozone and Ultraviolet Radiation Data Centre (WOUDC) with the two component parts: the WODC and the World Ultraviolet Radiation Data Centre (WUDC).
Airborne observations
Aircraft observations. Many aircraft campaigns have been conducted as part of the Suborbital Science Program and by the Earth Science Project Office; an overview of these campaigns is available. The data can be accessed from the Earth Science Project Office archives.
MOZAIC observations. The MOZAIC program (Measurement of OZone and water vapour by AIrbus in-service airCraft) was initiated in 1993 by European scientists, aircraft manufacturers and airlines to collect experimental data. Its goal is to help understand the atmosphere and how it is changing under the influence of human activity, with particular interest in the effects of aircraft. MOZAIC consists of automatic and regular measurements of ozone and water vapour by five long-range passenger airliners flying all over the world. The aim is to build a large database of measurements to allow studies of chemical and physical processes in the atmosphere, and hence to validate global chemistry transport models. MOZAIC data provide, in particular, detailed ozone and water vapour climatologies at 9–12 km, where subsonic aircraft emit most of their exhaust and which is a very critical domain (radiatively, and for stratosphere–troposphere exchange) still imperfectly described in existing models. This is valuable for improving knowledge of the processes occurring in the upper troposphere/lower stratosphere (UT/LS) and the model treatment of near-tropopause chemistry and transport. Access to the MOZAIC data is restricted; request forms must be completed to obtain access.
CARIBIC observations. The CARIBIC (Civil Aircraft for the Regular Investigation of the atmosphere Based on an Instrument Container) project is an innovative scientific project to study and monitor important chemical and physical processes in the Earth's atmosphere. Detailed and extensive measurements are made during long-distance flights on board the Airbus A340-600 "Leverkusen" (http://www.flightradar24.com/data/airplanes/D-AIHE/). An airfreight container with automated scientific apparatus, connected to an air and particle (aerosol) inlet underneath the aircraft, is deployed on the aircraft. In contrast to MOZAIC, CARIBIC is installed on only one aircraft, but it measures a much wider spectrum of atmospheric constituents. Both CARIBIC and MOZAIC are integrated in IAGOS. Data exist from 1998–2002 and from 2004 onward, and can be requested from the CARIBIC project.
Space shuttle observations
ATMOS observations. The Atmospheric Trace Molecule Spectroscopy experiment (ATMOS) is an infrared spectrometer (a Fourier transform interferometer) designed to study the chemical composition of the atmosphere. The ATMOS instrument has flown four times on the Space Shuttle since 1985. The predecessor to ATMOS, flown on aircraft and high-altitude balloon platforms, was born in the early 1970s out of concern for the effects of supersonic transport exhaust products on the ozone layer. The experiment was redesigned for the Space Shuttle when the potential for ozone destruction by man-made chlorofluorocarbons was discovered and the need for global measurements became crucial.
CRISTA observations. CRISTA is short for CRyogenic Infrared Spectrometers and Telescopes for the Atmosphere. It is a limb-scanning satellite experiment, designed and developed by the University of Wuppertal to measure infrared emissions of the Earth's atmosphere. Equipped with three telescopes and four spectrometers and cooled with liquid helium, CRISTA acquires global maps of temperature and atmospheric trace gases with very high horizontal and vertical resolution. The design enables the observation of small scale dynamical structures in the 15–150 km altitude region.
Satellite observations
ACE observations. The Atmospheric Chemistry Experiment (ACE) satellite, also known as SCISAT-1, is a Canadian satellite that makes measurements of the Earth's atmosphere and follows in heritage of ATMOS.
Aura observations. Aura flies in formation with the NASA EOS "A Train," a collection of several other satellites (Aqua, CALIPSO, CloudSat and the French PARASOL). Aura carries four instruments for studies of atmospheric chemistry: MLS, HIRDLS, TES and OMI.
ILAS observations. ILAS (Improved Limb Atmospheric Spectrometer), developed by MOE (the Ministry of the Environment, formerly the Environment Agency of Japan), is carried on board ADEOS (Advanced Earth Observing Satellite). On August 17, 1996, ADEOS was launched by the H-II rocket from the Tanegashima Space Center of Japan (ADEOS was renamed "MIDORI") and stopped its operation on June 30, 1997. Data obtained by ILAS are processed, archived, and distributed by NIES (National Institute for Environmental Studies).
POAM observations. The Polar Ozone and Aerosol Measurement II (POAM II) instrument was developed by the Naval Research Laboratory (NRL) to measure the vertical distribution of atmospheric ozone, water vapor, nitrogen dioxide, aerosol extinction, and temperature. POAM II measures solar extinction in nine narrow band channels, covering the spectral range from approximately 350 to 1060 nm.
Sulfate aerosol observations from SAGE and HALOE. The SAGE II (Stratospheric Aerosol and Gas Experiment II) sensor was launched into a 57 degree inclination orbit aboard the Earth Radiation Budget Satellite (ERBS) in October 1984. During each sunrise and sunset encountered by the orbiting spacecraft, the instrument uses the solar occultation technique to measure attenuated solar radiation through the Earth's limb in seven channels centered at wavelengths ranging from 0.385 to 1.02 micrometers. The retrieval of stratospheric aerosol size distributions based on HALOE multi-wavelength particle extinction measurements was described by Hervig et al. [1998]. That approach yields unimodal lognormal size distributions, which describe the aerosol concentration versus radius using three parameters: total aerosol concentration, median radius, and distribution width. The available results are based on the Hervig et al. [1998] technique, with one exception: the retrievals use sulfate refractive indices for 215 K, where Hervig et al. [1998] used room-temperature indices adjusted to stratospheric temperatures using the Lorentz-Lorenz rule. Size distributions were only retrieved at altitudes above tropospheric cloud tops. Clouds were identified using techniques described by Hervig and McHugh [1999]. The HALOE size distributions are offered in NetCDF files containing data for a single year. The results are reported on a uniform altitude grid ranging from 6 to 33 km at 0.3 km spacing; the native HALOE altitude spacing is 0.3 km, so this interpolation has little or no effect on the data. The files report profile data including altitude, pressure, temperature, aerosol concentration, median radius, distribution width, and aerosol composition. Aerosol surface area and volume densities can be easily calculated from the size distribution parameters.
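The surface-area and volume calculation mentioned above follows from the standard moment relations for a unimodal lognormal distribution: for total concentration N, median radius r_m and geometric width σ, the k-th radial moment is ⟨r^k⟩ = r_m^k · exp(k² ln²σ / 2). A hedged sketch (the units assumed here, N in cm⁻³ and r_m in µm, match common aerosol conventions but are an assumption, not the HALOE file specification):

```python
import math

def lognormal_surface_volume(n_tot, r_med, sigma_g):
    """Surface-area and volume densities of a unimodal lognormal aerosol.

    n_tot   -- total concentration (cm^-3)
    r_med   -- median radius (micrometres)
    sigma_g -- geometric standard deviation (dimensionless, > 1)

    Uses the moment relation <r^k> = r_med**k * exp(k**2 * ln(sigma_g)**2 / 2).
    """
    ln2 = math.log(sigma_g) ** 2
    surface = 4 * math.pi * n_tot * r_med ** 2 * math.exp(2 * ln2)         # um^2 cm^-3
    volume = (4 / 3) * math.pi * n_tot * r_med ** 3 * math.exp(4.5 * ln2)  # um^3 cm^-3
    return surface, volume

# Example with plausible background-stratosphere values (illustrative only):
s, v = lognormal_surface_volume(10.0, 0.1, 1.5)
print(f"surface area density: {s:.3f} um^2 cm^-3")
print(f"volume density:       {v:.4f} um^3 cm^-3")
```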
Upper Atmosphere Research Satellite (UARS) observations. Data from the UARS is available from the GES Distributed Active Archive Center (DAAC). The UARS satellite was launched in 1991 by the Space Shuttle Discovery; it weighs 13,000 pounds, carries 10 instruments, and orbits with an orbital inclination of 57 degrees. UARS measured ozone and chemical compounds found in the ozone layer which affect ozone chemistry and processes. UARS also measured winds and temperatures in the stratosphere as well as the energy input from the Sun. Together, these helped define the role of the upper atmosphere in climate and climate variability.
Related observations
Surface albedo. The surface reflectivity is of importance for atmospheric photolysis. Instruments such as the Total Ozone Mapping Spectrometer (TOMS) and the Ozone Monitoring Instrument (OMI) provide daily global fields.
See also
Acid rain
Atmospheric chemistry
Greenhouse gas
International Global Atmospheric Chemistry
Ozone
Pollution
Scientific Assessment of Ozone Depletion
External links
The British Atmospheric Data Centre.
The Cambridge Atmospheric Chemical Database is a large database in a uniform ASCII format. Each observation is augmented with the meteorological conditions such as temperature, potential temperature, geopotential height, and equivalent PV latitude.
GOME data.
The NASA Earth Science Project Office archives.
The NASA GSFC Distributed Active Archive Center.
The NASA Langley Distributed Active Archive Center.
The Network for the Detection of Stratospheric Change.
NADIR NILU's Atmospheric Database for Interactive Retrieval.
NOAA SBUV-2 data.
The World Ozone and Ultraviolet Radiation Data Centre (WOUDC).
World Ozone and Ultraviolet Radiation Data Centre on NOSA
Satellite meteorology
Atmosphere
Environmental chemistry
Environmental science databases | Atmospheric chemistry observational databases | [
"Chemistry",
"Environmental_science"
] | 2,645 | [
"Environmental chemistry",
"Environmental science databases",
"nan"
] |
4,102,989 | https://en.wikipedia.org/wiki/Sanistand | Sanistand was a female urinal manufactured by the Japanese toilet giant TOTO from 1951 to 1971 and marketed by American Standard from 1950 to 1973. It appeared in a bathroom for female athletes in the National Stadium during the 1964 Summer Olympics in Tokyo. The urinal encouraged women to urinate from a standing position, without the need to sit on a shared seat.
See also
Female urinal
Female urination device
Pollee
External links
TOTO Library article, March 20, 2000
TOTO Kids article on female urinals
Background Information on female urinals
Toilets
Products introduced in 1951
Urine
Urinals | Sanistand | [
"Biology"
] | 122 | [
"Urine",
"Excretion",
"Animal waste products",
"Toilets"
] |
4,103,373 | https://en.wikipedia.org/wiki/International%20Association%20of%20Public%20Transport | The International Association of Public Transport (; UITP) is a non-profit member-led organisation for public transport authorities, networks and operators, policy decision-makers, scientific institutes and the public transport supply and service industry, that works to advance sustainable urban mobility.
History
Founded on 17 August 1885, initially as the Union Internationale des Tramways (International Union of Tramways), the association is headquartered in Brussels, Belgium, with 13 offices around the world. With more than 1900 members in over 100 countries, UITP advocates for sustainable mobility and produces publications, oversees projects, hosts global events and brings together all those with a vested interest in advancing public transport.
Starting off, the association mainly focused on the development of tramway systems across Europe. However, as urban mobility increased, the scope of the association expanded multinationally. This expansion introduced the integration of buses, railways, metros, etc.
Model
The International Association of Public Transport (UITP) envisions a future where public transportation systems are more sustainable, accessible, and integrated with new technologies. Key strategies include increasing the use of zero-emission vehicles, promoting multimodal transportation options, and enhancing digitalization for better efficiency and passenger experience. UITP also emphasizes the importance of partnerships between the public and private sectors to meet evolving urban mobility needs and reduce carbon footprints. These plans align with global efforts to combat climate change and make cities more livable.
Organization
UITP represents an international network of more than 1,900 member companies in over 100 countries and covers all modes of public transport: metro, light rail, regional and suburban railways, bus, trolleybus, taxi and ride-hailing, and waterborne transport. It also represents collective transport in a broader sense, with active committees and working bodies on digitalisation, I.T., sustainable development, design and culture, human resources, transport economics, security, and more.
UITP is headquartered in Brussels, Belgium, with thirteen regional and liaison offices worldwide, located in Abidjan, Casablanca, Dubai, Hong Kong, Istanbul, Johannesburg, New York, São Paulo, Singapore, Mexico & Central America, New Delhi, and Auckland.
The General Secretariat is managed by Mohamed Mezghani, who has been working for more than 30 years in public transport and urban mobility-related fields and became the association’s Secretary General in January 2018. He previously served in a number of internal positions, including as UITP Deputy Secretary General. His mandate was renewed for a second term, beginning in January 2023.
The President of UITP is Renée Amilcar, the General Manager of OC Transpo in Ottawa, who was voted into office in June 2023 as the association’s first female President. Joining the City of Ottawa as the General Manager in 2021, Renée oversees many projects in her daily role, including the electrification of Ottawa’s transit fleet and the deployment of 350 zero-emission buses.
Being a nonprofit, funding partially comes through memberships and collaborations with companies and individuals. The association selects its members based on their role and contributions to the public transport sector. Membership is open to a wide range of stakeholders, including public transport operators, authorities, policymakers, researchers, and industry suppliers. The organization emphasizes collaboration and innovation, bringing together those who are committed to advancing sustainable urban mobility. Key members include prominent transport authorities like Transport for London (TfL) and industry leaders such as Siemens Mobility, Alstom, and Bombardier.
Activities
UITP gathers and analyses facts and figures to provide quantitative and qualitative information on key aspects of public transport and urban mobility.
UITP manages an online information center, MyLibrary, which gives access to the full texts of UITP’s studies and conference papers, as well as references to books, articles, and websites. A picture library and statistics on public transport operators are also available.
UITP carries out studies, projects, and surveys; the results are made available in brochures and reports.
UITP leads projects for international institutions, such as the European Commission. Under the framework of these projects, UITP launches and participates in thematic networks of mobility experts on public transport policy and organisation.
UITP issues official positions on global mobility issues, representing the views of the sector.
UITP tries to engage a number of international bodies, such as the United Nations (UNEP, UNDESA, UNFCCC, UNHABITAT), the World Bank, and European institutions.
UITP organises training courses, workshops, and seminars for public transport experts.
UITP is a member of the Group of Representative Bodies.
See also
List of metro systems
Sustainable transport
Bibliography
Loo, B. P. Y., & Tsang, K. W. (2021). "Future directions for public transport policy" Transport Reviews, 41(3), 270-290
Mohamed Mezghani - Agenda contributor. World Economic Forum. (n.d.). https://www.weforum.org/agenda/authors/mohamed-mezghani/
Surveys. UITP. (2018). https://uitp.org/surveys/
The future of public transport is safe and inclusive: UITP at TRA 2024. UITP. (2022, April 22). https://www.uitp.org/news/the-future-of-public-transport-is-safe-and-inclusive-uitp-at-tra-2024/
References
Sustainable transport
Sustainable urban planning
Transportation planning
Public transport advocacy organizations
Environmental organisations based in Belgium
International environmental organizations
International transport organizations
Trade associations based in Belgium
Organizations established in 1885 | International Association of Public Transport | [
"Physics"
] | 1,144 | [
"Physical systems",
"Transport",
"Sustainable transport"
] |
4,103,495 | https://en.wikipedia.org/wiki/Nuclear%20gene | A nuclear gene is a gene that has its DNA nucleotide sequence physically situated within the cell nucleus of a eukaryotic organism. This term is employed to differentiate nuclear genes, which are located in the cell nucleus, from genes that are found in mitochondria or chloroplasts. The vast majority of genes in eukaryotes are nuclear.
Endosymbiotic theory
Mitochondria and plastids evolved from free-living prokaryotes into current cytoplasmic organelles through endosymbiotic evolution. Mitochondria are thought to be necessary for eukaryotic life to exist. They are known as the cell's powerhouses because they provide the majority of the energy or ATP required by the cell. The mitochondrial genome (mtDNA) is replicated separately from the host genome. Human mtDNA codes for 13 proteins, most of which are involved in oxidative phosphorylation (OXPHOS). The nuclear genome encodes the remaining mitochondrial proteins, which are then transported into the mitochondria. The genomes of these organelles have become far smaller than those of their free-living predecessors. This is mostly due to the widespread transfer of genes from prokaryote progenitors to the nuclear genome, followed by their elimination from organelle genomes. On evolutionary timescales, the continuous entry of organelle DNA into the nucleus has provided novel nuclear genes. Furthermore, mitochondria depend on nuclear genes for essential protein production, as they cannot generate all necessary proteins independently.
Endosymbiotic organelle interactions
Though separated from one another within the cell, nuclear genes and those of mitochondria and chloroplasts can affect each other in a number of ways. Nuclear genes play major roles in the expression of chloroplast genes and mitochondrial genes. Additionally, gene products of mitochondria can themselves affect the expression of genes within the cell nucleus. This can be done through metabolites as well as through certain peptides trans-locating from the mitochondria to the nucleus, where they can then affect gene expression.
Structure
Eukaryotic genomes are packaged into distinct higher-order chromatin structures whose organization is closely related to gene expression. Chromatin compresses the genome to fit into the cell nucleus, while still ensuring that genes can be accessed when needed, such as during gene transcription, replication, and DNA repair. Genome function rests on the underlying relationship between nuclear organization and the complex mechanisms and biochemical pathways that can affect the expression of individual genes within the genome. The remaining mitochondrial proteins, metabolic enzymes, DNA and RNA polymerases, ribosomal proteins, and mtDNA regulatory factors are all encoded by nuclear genes. Because nuclear genes constitute the genetic foundation of all eukaryotic organisms, anything that might change their genetic expression has a direct impact on the organism's cellular genotypes and phenotypes. The nucleus also contains a number of distinct subnuclear foci known as nuclear bodies, which are dynamically controlled structures that help numerous nuclear processes run more efficiently. Active genes, for instance, might migrate from chromosomal regions and concentrate into subnuclear foci known as transcription factories.
Protein synthesis
The majority of proteins in a cell are the product of messenger RNA transcribed from nuclear genes, including most of the proteins of the organelles, which are produced in the cytoplasm like all nuclear gene products and then transported to the organelle. Genes in the nucleus are arranged in a linear fashion upon chromosomes, which serve as the scaffold for replication and the regulation of gene expression. As such, they are usually under strict copy-number control, and replicate a single time per cell cycle. Anucleate cells such as platelets do not possess nuclear DNA and therefore must have alternative sources for the RNA that they need to generate proteins. Among the nuclear genome's 3.3 billion DNA base pairs in humans, one good example of a nuclear gene is MDH1, the malate dehydrogenase 1 gene. Active in various metabolic pathways, including the citric acid cycle, MDH1 is a protein-coding gene that encodes an enzyme catalyzing the NAD/NADH-dependent, reversible oxidation of malate to oxaloacetate. This gene codes for the cytosolic isozyme, which participates in the malate-aspartate shuttle, allowing malate to cross the mitochondrial membrane and be converted to oxaloacetate for further cellular functions. MDH1 is just one of many genes illustrating the central role nuclear genes play in an organism's physiology. Although non-nuclear genes also contribute, the role of nuclear genes acting in coordination with non-nuclear genes is fundamental.
Significance
Many nuclear-derived transcription factors have played a role in respiratory chain expression. These factors may have also contributed to the regulation of mitochondrial functions. Nuclear respiratory factor 1 (NRF-1) binds to genes encoding respiratory proteins, to the gene for the rate-limiting enzyme in heme biosynthesis, and to elements involved in the replication and transcription of mitochondrial DNA (mtDNA). The second nuclear respiratory factor (NRF-2) is required for maximal production of cytochrome c oxidase subunits IV (COXIV) and Vb (COXVb).
The studying of gene sequences for the purpose of speciation and determining genetic similarity is just one of the many uses of modern day genetics, and the role that both types of genes have in that process is important. Though both nuclear genes and those within endosymbiotic organelles provide the genetic makeup of an organism, there are distinct features that can be better observed when looking at one compared to the other. Mitochondrial DNA is useful in the study of speciation as it tends to be the first to evolve in the development of a new species, which is different from nuclear genes' chromosomes that can be examined and analyzed individually, each giving its own potential answer as to the speciation of a relatively recently evolved organism.
Low-copy nuclear genes in plants are valuable for improving phylogenetic reconstructions, especially when universal markers like Chloroplast DNA, or cpDNA and Nuclear ribosomal DNA, or nrDNA fall short. Challenges in using these genes include limited universal markers and the complexity of gene families. Nonetheless, they are essential for resolving close species relationships and understanding plant phylogenetic studies. While using low-copy nuclear genes requires additional lab work, advances in sequencing and cloning techniques have made it more accessible. Fast-evolving introns in these genes can offer crucial phylogenetic insights near species boundaries. This approach, along with the analysis of developmentally important genes, enhances the study of plant diversity and evolution.
As nuclear genes are the genetic basis of all eukaryotic organisms, anything that can affect their expression therefore directly affects characteristics about that organism on a cellular level. The interactions between the genes of endosymbiotic organelles like mitochondria and chloroplasts are just a few of the many factors that can act on the nuclear genome.
References
Genes
Molecular biology | Nuclear gene | [
"Chemistry",
"Biology"
] | 1,479 | [
"Biochemistry",
"Molecular biology"
] |
4,103,640 | https://en.wikipedia.org/wiki/Probabilistic%20forecasting | Probabilistic forecasting summarizes what is known about, or opinions about, future events. In contrast to single-valued forecasts (such as forecasting that the maximum temperature at a given site on a given day will be 23 degrees Celsius, or that the result in a given football match will be a no-score draw), probabilistic forecasts assign a probability to each of a number of different outcomes, and the complete set of probabilities represents a probability forecast. Thus, probabilistic forecasting is a type of probabilistic classification.
Weather forecasting represents a service in which probability forecasts are sometimes published for public consumption, although it may also be used by weather forecasters as the basis of a simpler type of forecast. For example, forecasters may combine their own experience together with computer-generated probability forecasts to construct a forecast of the type "we expect heavy rainfall".
Sports betting is another field of application where probabilistic forecasting can play a role. The pre-race odds published for a horse race can be considered to correspond to a summary of bettors' opinions about the likely outcome of a race, although this needs to be tempered with caution, as bookmakers' profits need to be taken into account. In sports betting, probability forecasts may not be published as such, but may underlie bookmakers' activities in setting pay-off rates, etc.
Weather forecasting
Probabilistic forecasting is used in weather forecasting in a number of ways. One of the simplest is the publication of rainfall forecasts in the form of a probability of precipitation.
Ensembles
The probability information is typically derived by using several numerical model runs, with slightly varying initial conditions. This technique is usually referred to as ensemble forecasting by an Ensemble Prediction System (EPS). EPS does not produce a full forecast probability distribution over all possible events, and it is possible to use purely statistical or hybrid statistical/numerical methods to do this. For example, temperature can take on a theoretically infinite number of possible values (events); a statistical method would produce a distribution assigning a probability value to every possible temperature. Implausibly high or low temperatures would then have close to zero probability values.
If it were possible to run the model for every possible set of initial conditions, each with an associated probability, then according to how many members (i.e., individual model runs) of the ensemble predict a certain event, one could compute the actual conditional probability of the given event. In practice, forecasters try to guess a small number of perturbations (usually around 20) that they deem are most likely to yield distinct weather outcomes. Two common techniques for this purpose are breeding vectors (BV) and singular vectors (SV). This technique is not guaranteed to yield an ensemble distribution identical to the actual forecast distribution, but attaining such probabilistic information is one goal of the choice of initial perturbations. Other variants of ensemble forecasting systems that have no immediate probabilistic interpretation include those that assemble the forecasts produced by different numerical weather prediction systems.
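The fraction-of-members estimate described above can be sketched in a few lines; the ensemble values and event threshold here are hypothetical, standing in for the 20-or-so perturbed model runs of a typical EPS:

```python
# Hypothetical ensemble of daily rainfall forecasts (mm) for one site,
# one value per perturbed model run (20 members, as in a typical EPS).
ensemble_rainfall = [0.0, 1.2, 3.5, 8.1, 12.4, 0.7, 5.9, 15.2, 2.3, 9.8,
                     4.4, 11.0, 0.2, 6.6, 7.3, 13.9, 3.0, 10.5, 1.8, 5.1]

def event_probability(members, threshold):
    """Estimate P(rainfall > threshold) as the fraction of ensemble
    members that predict the event."""
    return sum(1 for m in members if m > threshold) / len(members)

print(event_probability(ensemble_rainfall, 10.0))  # → 0.25
```

In practice the raw member fraction is often further calibrated against observations, since the ensemble spread is not guaranteed to match the true forecast distribution.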
Examples
Canada was one of the first countries to broadcast probabilistic forecasts, giving chances of precipitation in percentages. As an example of fully probabilistic forecasts, distribution forecasts of rainfall amounts by purely statistical methods have recently been developed whose performance is competitive with hybrid EPS/statistical forecasts of daily rainfall amounts.
Probabilistic forecasting has also been used in combination with neural networks for energy generation. This is done via improved weather forecasting using probabilistic intervals to account for uncertainties in wind and solar forecasting, as opposed to traditional techniques such as point forecasting.
Economic forecasting
Macroeconomic forecasting is the process of making predictions about the economy for key variables such as GDP and inflation, amongst others, and is generally presented as point forecasts. One of the problems with point forecasts is that they do not convey forecast uncertainties, and this is where the role of probability forecasting may be helpful. Most forecasters would attach probabilities to a range of alternative outcomes or scenarios outside of their central forecasts. These probabilities provide a broader assessment of the risk attached to their central forecasts and are influenced by unexpected or extreme shifts in key variables.
Prominent examples of probability forecasting are those undertaken in surveys whereby forecasters are asked, in addition to their central forecasts, for their probability estimates within a specified range. The Monetary Authority of Singapore (MAS) is one such organisation which publishes probability forecasts in its quarterly MAS Survey of Professional Forecasters. Another is Consensus Economics, a macroeconomic survey firm, which publishes a special survey on forecast probabilities each January in its Consensus Forecasts, Asia Pacific Consensus Forecasts and Eastern Europe Consensus Forecasts publications.
Besides survey firms covering this subject, probability forecasts are also a topic of academic research. This was discussed in a 2000 research paper by Anthony Garratt, Kevin Lee, M. Hashem Pesaran and Yongcheol Shin entitled 'Forecast Uncertainties in Macroeconometric Modelling: An Application to the UK Economy'. The MAS released an article on the topic in its Macroeconomic Review in October 2015 called A Brief Survey of Density Forecasting in Macroeconomics.
Energy forecasting
Probabilistic forecasts have not been investigated extensively to date in the context of energy forecasting. However, the situation is changing. While the Global Energy Forecasting Competition (GEFCom) in 2012 was on point forecasting of electric load and wind power, the 2014 edition aimed at probabilistic forecasting of electric load, wind power, solar power and electricity prices. The top two performing teams in the price track of GEFCom2014 used variants of Quantile Regression Averaging (QRA), a technique which applies quantile regression to the point forecasts of a small number of individual forecasting models or experts, and hence allows practitioners to leverage existing point-forecasting methods.
Lumina Decision Systems has created an example probabilistic forecast of energy usage for the next 25 years using the US Department of Energy's Annual Energy Outlook (AEO) 2010.
Population forecasting
Probability forecasts have also been used in the field of population forecasting.
Assessment
Assessing probabilistic forecasts is more complex than assessing deterministic forecasts. If an ensemble-based approach is being used, the individual ensemble members need first to be combined and expressed in terms of a probability distribution. There exist probabilistic (proper) scoring rules such as the continuous ranked probability score for evaluating probabilistic forecasts. One example of such a rule is the Brier score.
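As a minimal illustration of the Brier score mentioned above: it is simply the mean squared difference between the forecast probabilities and the observed binary outcomes. The forecast and observation values below are hypothetical:

```python
def brier_score(forecast_probs, outcomes):
    """Mean squared difference between forecast probabilities and the
    observed binary outcomes (1 = event occurred, 0 = it did not).
    Lower is better; a perfect deterministic forecaster scores 0."""
    pairs = list(zip(forecast_probs, outcomes))
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

# Hypothetical week of probability-of-precipitation forecasts.
probs    = [0.9, 0.2, 0.7, 0.1, 0.5]
observed = [1,   0,   1,   0,   1]
print(round(brier_score(probs, observed), 3))  # → 0.08
```

Because the Brier score is a proper scoring rule, a forecaster minimizes their expected score by reporting their true probabilities rather than hedging toward 0 or 1.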
See also
Consensus forecast
Energy forecasting
Forecasting
Forecast skill
Global Energy Forecasting Competitions
References
External links
Online results from EPS (from the World Meteorological Organisation)
Statistical forecasting
Probability assessment
Weather forecasting
Climate and weather statistics | Probabilistic forecasting | [
"Physics"
] | 1,413 | [
"Weather",
"Physical phenomena",
"Climate and weather statistics"
] |
4,104,496 | https://en.wikipedia.org/wiki/Billy%27s%20Boots | Billy's Boots was a popular British comic strip by writer Fred Baker and artist John Gillatt, later continued by Mike Western. The original Billy's Boots was an earlier humorous series, written and drawn by Frank Purcell, which appeared in Tiger from December 23rd 1961 until July 13th 1963, with a similar premise to this later series. The later more serious Billy appeared in the first issue of Scorcher in 1970, and later moved to Tiger when the two comics merged in 1974. In 1985, Tiger in turn merged with Eagle and the strip moved again. Just a year later, Billy's adventures relocated once more, this time to Roy of the Rovers. New adventures were included in the weekly comic until May 1990 (later followed by reprints), before he switched to Best of Roy of the Rovers Monthly. The strip also appeared in annuals, including annuals for comics which had themselves ceased publication. The strip is still fondly remembered by fans of the "golden age" of British boys' comics. In Finland and Sweden, Billy's Boots was published in Buster magazine. In the UK, stories based on Billy's earliest adventures appeared in Total Football magazine until it closed in 2001, and Billy's story was also reprinted for a few months in the defunct Striker comic.
Story overview
The series concerned Billy Dane, a schoolboy and aspiring footballer, who was an extremely poor player until he discovered a pair of old-style, ankle-high football boots while cleaning his grandmother's loft. The boots, which his grandfather had bought as a souvenir, had belonged, decades before, to a famous professional striker called Charles "Dead Shot" Keen. In a manner which was never explained in the story, the boots possessed special abilities which turned Billy into a fantastic football player when he wore them. In addition to giving Billy the physical skill to score great goals, the boots also granted him the intuition to be in the right place at the right time on the pitch, leading him to feel that they had a "mind of their own".
Each week, the strip was introduced with the words, "Billy Dane found an ancient pair of football boots that used to belong to old-time soccer star, "Dead-Shot" Keen. In some strange way, the boots enabled Billy to play in the same style as Dead Shot..."
However, despite the boots' obvious importance to him, he would repeatedly lose them or have them stolen.
The boots fell apart after a few matches due to their age and could not be repaired. Fearing that he would lose his new-found ability and knowing that "Dead Shot" Keen had played for the local club, Amhurst Albion, Billy went to their ground to see if any of Keen's other boots remained there. Having secretly entered the stadium, he found the boot room and discovered another pair of Keen's old boots which, much repaired, he used for the remainder of the story.
The boots endowed Billy with sufficient ability to make regular appearances in schoolboy representative matches, appearing for Southern Schools against their Western, Northern and Eastern counterparts, and the full England Schoolboys team, with whom he travelled on tours to France and Germany.
In 1971, while playing for England in one such tour match in France, the boots split and Billy took them to a local shoe repairer's shop. When he went to collect them, the elderly owner told Billy that he recognised the boots as a pair he had made as a special order for Keen many years earlier. Billy asked him to make an identical pair, as a contingency against future damage or loss of the original boots. When Billy wore the new boots in his school's next match, they did not enable Billy to play in Keen's style, and he missed a penalty, so he had to revert to the original pair at half time with the consequent restoration of his abilities.
Billy was often able to anticipate future events in his own life by reading Keen's book The Life of Dead Shot Keen. Billy's life often mirrored Keen's, such as the time when he came on as a substitute in a school match with his team losing 0-7, and scored 8 goals himself to win the match, or when he accidentally got into trouble by being selected for both sides in a schools' cup final. He had previously read about Keen's similar experiences while turning out for his teams. He was thus able to foresee events and work out solutions to problems.
In February 1971 Billy sat his 11+. Despite his gran forbidding him to play football so he could concentrate on his schoolwork, he failed to qualify for the Grammar School, but achieved a good enough grade to attend the local Secondary School, Kenwood Technical.
Billy lived with his grandmother, but the fate of his parents was only addressed very briefly early on when a teacher offered him a lift to a match if his dad couldn't take him. Billy replied, "M-My dad's n-not alive, sir". In 1973 Billy and his grandmother moved to the village of Groundwood to live with his grandmother's elderly sister Kate, who owned a large house there.
By the early 1980s, Billy was playing as centre forward for Groundwood School, alongside pals such as Jimmy Dawson, Reg Wood, Marvin Soames and Harvey Crisp. The strip regularly involved mishaps involving his boots, which were periodically lost, stolen or damaged, resulting in Billy underperforming and thus being dropped from the school team. In several instances, he turned out for opposing sides such as "Merlin" or "Brand X", scoring against the school first team, thus embarrassing the sports teacher, Mr Harris.
During the strip's run in Eagle, the football element of the story was downplayed somewhat, focusing instead on Billy's exploits whilst on the run from a council home where he had been placed when his grandmother (with whom he lived) had been taken ill. There would often be no football action for several weeks, which was odd given that the central premise of the strip was football-based. When the strip moved to Roy of the Rovers, football once again became the central element in the strip. These years focused on playing for Groundwood School, with the emphasis often placed on whether he could help them win cup competitions rather than needing the boots to be successful.
Keen was also a skilled cricketer, and Billy discovered a pair of his old cricket boots, which had similar beneficial effects on his performance on the cricket field during the summer months.
Despite his adventures lasting for more than 20 years, Billy remained about 12 or 13 years old throughout the storyline.
In popular culture
The Wirral-based rock band Half Man Half Biscuit included the line "Is this me, or is this Dead-Shot Keen?" - in reference to Billy's oft-voiced wondering about his ability - in the song "Our Tune" on their 1991 album MacIntyre, Treadmore and Davitt.
In a review of the film Like Mike, the British magazine TV Choice stated that the film would "have some dads thinking wistfully back to the comic-strip days of Billy's Boots", years after it has ceased publication.
The 2000 film There's Only One Jimmy Grimble, starring Ray Winstone, Robert Carlyle and Lewis McKenzie as Jimmy Grimble, bears a resemblance to the strip.
The They Think It's All Over Annual 1997 featured a parody of the strip, Willie's Boots, in which the influence of the boots made Willie resemble a 1930s-era footballer in more ways than his playing ability, until he eventually dies of rickets.
Translations
Billy Dane is called:
Dutch: Sjakie Meulemans, Swedish: Benny Guldfot, Finnish: Benny Dane, Benny Kultajalka, Icelandic: Kalli í knattspyrnu (Kalli the footballer)
Dead Shot Keen is called:
Dutch: Voltreffer Vick, Swedish: Kanon-Keen, Finnish: Kanuuna-Keen Bengali (India): Bilash, Bili or Biley.
Billy's Boots used to be regularly translated into Bengali and published in the popular Bengali monthly magazine "Shuktaara" as "Billir Boot", circulated mainly in West Bengal, India. Its Bengali version also appeared in Anandamela Pujo Sonkhya (Festival edition).
Billy's Boots also was published in Turkish in the 1970s as comic series under the name "Sihirli Ayakkabılar" (Translation:Magical Shoes) in a children magazine called "Doğan Kardeş". "Dead Shot Ken" was named "Bombacı Ken" (Ken the Bomber).
References
Sources
McAlpine, Duncan, The Comic Book Price Guide 1996/97 Edition (Titan Books, 1996)
Official Roy of the Rovers website
Scorcher page at britishcomics.com
1970 comics debuts
1990 comics endings
British comic strips
Drama comics
Association football comics
Comics about children
Child characters in comics
Male characters in comics
Fictional British people
Fictional association football players
Eagle (comic) characters
Eagle comic strips
Comics characters introduced in 1970
Fictional footwear
Comics about magic
Magic items
Comics set in the United Kingdom | Billy's Boots | [
"Physics"
] | 1,874 | [
"Magic items",
"Physical objects",
"Matter"
] |
4,104,613 | https://en.wikipedia.org/wiki/Pattress | A pattress or pattress box or fitting box (in the United States and Canada, electrical wall switch box, electrical wall outlet box, electrical ceiling box, switch box, outlet box, electrical box, etc.) is the container for the space behind electrical fittings such as power outlet sockets, light switches, or fixed light fixtures. Pattresses may be designed for either surface mounting (with cabling running along the wall surface) or for embedding in the wall or skirting board. Some electricians use the term "pattress box" to describe a surface-mounted box, although simply the term "pattress" suffices.
The term "flush box" is used for a mounting box that goes inside the wall, although some use the term "wall box". Boxes for installation within timber or plasterboard walls are usually called "cavity boxes" or "plasterboard boxes". A ceiling-mounted pattress (most often used for light fixtures) is referred to as a "ceiling pattress" or "ceiling box". British English speakers also tend to say "pattress box" instead of just "pattress".
Pattress is alternatively spelt patress. The word pattress, despite being attested from the late 19th century, is still rarely found in dictionaries. It is etymologically derived from pateras (Latin for bowls, saucers). The term is not used by electricians in the United States.
Pattresses
Pattresses contain devices for input (switches) and output (sockets and fixtures), with transfer managed by junction boxes. A pattress may be made of metal or plastic. In the United Kingdom, surface-mounted boxes in particular are often made from urea-formaldehyde resin or, alternatively, PVC, and are usually white. Wall boxes are commonly made of thin galvanised metal. A pattress box is made to standard dimensions and may contain embedded bushings (in standard positions) for the attachment of wiring devices (switches and sockets). Internal pattress boxes themselves do not include the corresponding faceplates, since the devices to be contained in the box specify the required faceplate. External pattress boxes may include corresponding faceplates, limiting the devices that can be contained in the box.
Although cables may be joined inside pattress boxes, due simply to their presence at convenient points in the wiring, their main purpose is to accommodate switches and sockets. They allow switches and sockets to be recessed into the wall for a better appearance. Enclosures primarily for joining wires are called junction boxes.
Types of outlet boxes
Outlet boxes can be surface or sub-surface mounted. The latter type can be further divided by the type of wall it is intended for: sub-surface outlet boxes are available for mounting in drywall, in brick walls or in concrete walls.
North America
In North America, outlet boxes are rectangular in shape and are available in different sizes, to accommodate varying numbers of switches. Boxes for drywall are commonly available in two types: new work and old work. New work boxes are designed to be installed in a new installation. They are typically designed with nail or screw holes to attach directly to wall studs. Old work boxes are designed to attach to already-installed wall material (usually drywall). The boxes will almost always have two or more parsellas (from Latin: small wing or part). The parsellas flip out when the box screws are tightened, securing the box to the wall with the help of the four or more tabs on the front of the box.
Europe
In most of Europe, outlet boxes are round, with a standard diameter of 68 mm, to accommodate a single insert. This is often a single switch or wall socket, but inserts with two (sometimes even three) switches, a switch and an AC outlet, or two outlets (which protrude from the wall slightly more than other inserts) are available. The round shape allows the corresponding holes to be simply drilled out with a hole saw rather than requiring a rectangular cavity to be cut out. This is an advantage especially when installing outlet boxes in brick or concrete walls, which are much more common in Europe than in North America.
Boxes intended for drywall always have parsellas, similar to the North American old work type (the distinction between old and new work is not used in Europe), as the round shape of the boxes and prevalence of light-gauge steel framing in modern drywall make nailing to a stud impractical.
Most outlet boxes can be connected to form a chain, limited in length only by the availability of faceplates for the inserts; depending on the product series, between 3 and 5 inserts can be combined in this manner. Some manufacturers also produce 2-gang, 3-gang or 4-gang boxes. The center distance between two inserts is always 71 mm.
Even with those round-hole systems, the faceplates that cover them are mostly rectangular.
Surface-mounted boxes are uncommon and surface-mounted switches or AC outlets are normally used instead.
Belgium
Single gang boxes for installation in plasterboard are of the standard European type. Boxes for installation in brick walls are rectangular in shape and can be connected to form a chain, similar to their standard European counterparts. For plasterboard, 2-gang, 3-gang or 4-gang boxes are available instead. The center distance between inserts is 71 mm, as in most of the rest of Europe, for horizontal combinations; for vertical ones, it is 71, 60 or 57 mm. Single gang boxes, as well as multi-gang boxes or rows of boxes with a center distance of 71 mm, can accommodate standard European inserts.
British Isles
In the UK, and also in Ireland, outlet boxes are rectangular. Single gang boxes have roughly the same dimensions as the European box and can accommodate European inserts, but usually not vice versa. Larger boxes are also available, to accommodate a 2-gang outlet, as well as boxes to accommodate two inserts side by side. Metal boxes, uncommon in continental Europe, are available in the UK.
Italy
Italy uses rectangular boxes. Inserts consist of modules, with a center distance of approximately 22 mm horizontally and 45 mm vertically, although some can be double or even triple width, and a mounting frame. The size of the 3-module box, the most frequently used type, was derived from the North American single gang box and is similar enough to be used interchangeably, although Italian boxes are installed horizontally rather than vertically. Two-module boxes are similar in size to those used in Europe and the British Isles, and can be used interchangeably. Other sizes accommodate 4 modules, 7 modules, 2 rows of 3 modules or 3 rows of 6 modules. Single-module boxes are available for installation in metal profiles of modular office walls.
Like in the rest of Europe, there is no distinction between old and new work types, and drywall boxes always have parsellas. They are usually designed so that the cutout can be made with a 68 mm hole saw. Surface-mounted boxes are available, but some switches and AC outlets can be directly mounted to the wall surface without a box.
This type of wall box and insert is also used in Romania and parts of North Africa.
See also
Wall anchor plates are also known as pattress plates.
Junction box, an enclosure housing electrical connections
Electrical wiring in the United Kingdom
Electrical wiring in North America
Light switch
AC power plugs and sockets
References
External links
DIY Wiki Pattress page – more information on (British) pattresses and terminology
Cables
Electrical wiring | Pattress | [
"Physics",
"Engineering"
] | 1,562 | [
"Electrical systems",
"Building engineering",
"Physical systems",
"Electrical engineering",
"Electrical wiring"
] |
4,104,862 | https://en.wikipedia.org/wiki/Hemileia%20vastatrix | Hemileia vastatrix is a multicellular basidiomycete fungus of the order Pucciniales (previously also known as Uredinales) that causes coffee leaf rust (CLR), a disease affecting the coffee plant. Coffee serves as the obligate host of coffee rust, that is, the rust must have access to and come into physical contact with coffee (Coffea sp.) in order to survive.
CLR is one of the most economically important diseases of coffee, worldwide. Previous epidemics have destroyed coffee production of entire countries. In more recent history, an epidemic in Central America in 2012 reduced the region's coffee output by 16%.
The primary pathological mechanism of the fungus is a reduction in the plant's ability to derive energy through photosynthesis, by covering the leaves with fungus spores and/or causing leaves to drop from the plant. The reduction in photosynthetic ability (and thus in the plant's metabolism) reduces the quantity and quality of flower and fruit production, which ultimately reduces beverage quality.
Appearance
The mycelium with uredinia looks yellow-orange and powdery, and appears on the underside of leaves as points ~0.1 mm in diameter. Young lesions appear as chlorotic or pale yellow spots some millimetres in diameter, the older being a few centimetres in diameter. Hyphae are club-shaped with tips bearing numerous pedicels on which clusters of urediniospores are produced.
Telia are pale yellowish teliospores often produced in uredinia; teliospores more or less spherical to limoniform, 26–40 × 20–30 μm in diameter, wall hyaline to yellowish, smooth, 1 μm thick, thicker at the apex, pedicel hyaline.
Urediniospores are more or less reniform, 26–40 × 18-28 μm, with hyaline to pale yellowish wall, 1–2 μm thick, strongly warted on the convex side, smooth on the straight or concave side, warts frequently longer (3–7 μm) on spore edges.
There have been no known reports of a host capable of supporting an aecial stage of the fungus.
Life cycle
Hemileia's life cycle begins with the germination of urediniospores through germ pores in the spore. It mainly attacks the leaves and is only rarely found on young stems and fruit. Appressoria are produced, which in turn produce vesicles, from which entry into the substomatal cavity is gained. Within 24–48 hours, infection is complete. After successful infection, the leaf blade is colonized and sporulation occurs through the stomata. One lesion produces 4–6 spore crops over a 3–5 month period, releasing 300,000–400,000 spores.
There is currently no known alternate host nor reported cases of infection by basidiospores of H. vastatrix, yet the fungus is able to overcome resistance by plants and scientists do not know exactly how. The predominant hypothesis is that H. vastatrix is heteroecious, completing its life cycle on an alternate host plant which has not yet been found. An alternative hypothesis is that H. vastatrix actually represents an early-diverging autoecious rust, in which the teliospores are non-functional and vestigial, and the sexual life cycle is completed by the urediniospores. Hidden meiosis and sexual reproduction (cryptosexuality) have been found within the generally asexual urediniospores. This finding may explain why new physiological races have arisen so often and so quickly in H. vastatrix.
Control
Recent studies and research papers have shown that CLR is under-researched compared to pathogens of other cash crops and that there are many factors that can influence the incidence and severity of the disease. Therefore, an integrated approach that includes genetic, chemical, and cultural controls is the best course of action.
Resistant cultivars
The most effective and durable strategy against CLR is the use of resistant cultivars. This has a number of benefits beyond disease control and can include the reduction in use of agrochemicals as control. A reduction in chemical application also has positive economic effects for farmers by reducing the cost of production. However, in lieu of deploying new, resistant plant stock, or in the interim between initiation of a renewal program and complete renewal, other methods of control are available.
Professional research and breeding programs such as CIRAD are developing F1 hybrid coffee trees such as Starmaya that have broad genetic resistance to CLR as well as good yield and cup quality, with research showing that F1 hybrids have higher yields and cup quality than conventional Coffea arabica cultivars. Research is also being done on how to democratize the use of F1 hybrids by smallholder coffee farmers who too often can not afford to utilize F1 hybrids. For example, Starmaya is the first F1 hybrid coffee tree that can be propagated in a seed garden rather than the more complicated and expensive process of somatic embryogenesis.
Chemicals
There are social, environmental, and economic concerns associated with any chemical control of plant diseases and some of these have a more direct and immediate impact than others on a farmer's decision to use chemicals. The use of chemicals must first and foremost make economic sense, and the cost of their use can be as much as 50% of the total cost of production. For smallholder farmers, this can be cost-prohibitive. Copper-based fungicides, such as Bordeaux mixture, have proven to be effective and economical, and work best when applied at inoculum levels below 10%.
Typically copper-based mixtures are used as preventative measures and systemic fungicides are used as curative measures.
By reducing disease incidence, chemical control can help mitigate the reduction in fruit quality and quantity that is caused by the disease.
Cultural
The extended presence of water on the leaves allows H vastatrix to infect the plant more easily and therefore cultural methods can be directed at reducing the time and the amount of water that remains on leaves. Cultural methods such as pruning branches to allow more air circulation and light penetration can help dry the moisture on the leaves. Increasing spacing between rows and preventing weed growth also allows for more air circulation and light penetration.
Plant nutrition
The correct amount of plant nutrients can also play a role in host resistance. Adequate nutrition allows the plant's natural biochemical defenses to perform at optimal levels. For example, nitrogen and potassium are two critical macronutrients that help a coffee tree resist infection. Nitrogen is a critical component of chlorophyll, which is central to photosynthesis. Potassium helps to increase the thickness of a leaf's epidermis, which acts as a barrier to pathogen attack. It also aids in the recovery of tissues after an attack by H. vastatrix.
Pruning
Experiments have shown that removal of infected leaves can possibly reduce the final amount of the disease by a significant amount.
Fruit thinning
Fruit thinning combined with chemical application (cyproconazole and epoxiconazole for example) can increase effective control.
Shade
There is a complex interaction between shade, meteorological effects such as rainfall or dry periods, and aerial dispersal of rust. Researchers have found that shade may suppress spore dispersal under dry conditions but assist spore dispersal during wet conditions. The researchers acknowledge the need for further research on the topic.
Ecology
Hemileia vastatrix is an obligate parasite that lives mainly on plants of the genus Coffea; it is also capable of invading Arabidopsis thaliana, although it does not develop haustoria there.
The rust needs suitable temperatures to develop (between 16 °C and 28 °C). High altitude plantations are generally colder, so inoculum will not develop as easily as in plantations located in warmer regions. The presence of free water is required for infection to be completed. Loss of moisture after germination starts inhibits the whole infection process.
Sporulation is most influenced by temperature, humidity, and host resistance. The colonization process is not dependent on leaf wetness but is influenced greatly by temperature and by plant resistance. The main effect of temperature is to determine the length of time for the colonization process (incubation period).
Hemileia vastatrix has two fungal parasites, Verticillium haemiliae and Verticillium psalliotae.
The fungus is of East African origin, but is currently endemic to all producing regions. Coffee originates in high altitude regions of Ethiopia, Sudan, and Kenya, and the rust pathogen is believed to have originated in the same mountains. The earliest reports of the disease hail from the 1860s. It was reported first by a British explorer from regions of Kenya around Lake Victoria in 1861, from where it is believed to have spread to Asia and the Americas.
Rust was first reported in the major coffee growing regions of Sri Lanka (then called Ceylon) in 1867. The causal fungus was first fully described by the English mycologist Michael Joseph Berkeley and his collaborator Christopher Edmund Broome after an analysis of specimens of a "coffee leaf disease" collected by George H.K. Thwaites in Ceylon. Berkeley and Broome named the fungus Hemileia vastatrix, "Hemileia" referring to the half smooth characteristic of the spores and "vastatrix" for the devastating nature of the disease.
It is unknown exactly how the rust reached Ceylon from Ethiopia. Over the years that followed, the disease was recorded in India in 1870, Sumatra in 1876, Java in 1878, and the Philippines in 1889. During 1913 it crossed the African continent from Kenya to the Congo, where it was found in 1918, before spreading to West Africa, the Ivory Coast (1954), Liberia (1955), Nigeria (1962–63) and Angola (1966).
Uredospores are disseminated across long distances mainly by wind and can end up thousands of miles from where they were produced. Over short distances, uredospores are disseminated by both wind and rain splash. Other agents, such as animals, mainly insects and contaminated equipment, occasionally have been shown to be involved with dissemination.
Pathogenesis
Hemileia vastatrix affects the plant by covering part of the leaf surface area or inducing defoliation, both resulting in a reduction in the rate of photosynthesis. Because berry yield is generally linked to the amount of foliage, a reduction in photosynthesis and, more importantly, defoliation can affect yield. Continuous colonization by the pathogen depletes the plant's resources until the plant no longer has enough energy to grow or survive.
Coffee plants bred for resistance succeed because of cytological and biochemical resistance mechanisms. Such mechanisms involve transmitting signals to the infection site to stop cell function. The plants' cell degradation response frequently occurs after the formation of the first haustorium and results in rapid hypersensitive cell death. Because Hemileia vastatrix is an obligate parasite, it can no longer survive when surrounded by dead cells. This can be recognized by the presence of browning cells in local regions on a leaf.
Environment
Temperature and moisture play the largest roles in the infection rate of the coffee plant. Humidity alone is not enough for infection to occur: there must be liquid water on the leaf for the urediospores to infect, although dry urediospores can survive up to six weeks without water. Dispersal happens primarily by wind, rain, or a combination of both. Transmission over large distances is likely the result of human intervention, with spores clinging to clothes, tools, or equipment. Dispersal by insects is unlikely and therefore insignificant. Spore germination occurs only within a restricted temperature range and peaks at an intermediate optimum; appressorium formation is likewise highest at a moderate temperature and declines roughly linearly as temperature rises, until there is little to no production. Although temperature and moisture are key factors for infection, dispersal, and colonization, plant resistance is also important in determining whether Hemileia vastatrix will survive.
History
The disease coffee leaf rust (CLR) was first described and named by Berkeley and Broome in the November 1869 edition of the Gardeners' Chronicle. They used specimens sent from Sri Lanka, where the disease was already causing enormous damage to productivity. Many coffee estates in Sri Lanka were forced to collapse or convert their crops to alternatives not affected by CLR, such as tea. The planters nicknamed the disease "Devastating Emily" and it affected Asian coffee production for over twenty years. By 1890, the coffee industry in Sri Lanka was nearly destroyed, although coffee estates still exist in some areas. Historians suggest that the devastated coffee production in Sri Lanka is one of the reasons why Britons have come to prefer tea, as Sri Lanka switched to tea production as a consequence of the disease.
By the 1920s CLR was widely found across much of Africa and Asia, as well as Indonesia and Fiji. It reached Brazil in 1970 and from there it rapidly spread at a rate enabling it to infect all coffee areas in the country by 1975. From Brazil, the disease spread to most coffee-growing areas in Central and South America by 1981, hitting Costa Rica and Colombia in 1983.
As of 1990, coffee rust has become endemic in all major coffee-producing countries.
2012 coffee leaf rust epidemic
In 2012, there was a major increase in coffee rust across ten Latin American and Caribbean countries. The disease became an epidemic and the resulting crop losses led to a fall in supply, outstripping demand. Coffee prices rose as a result, although other factors such as growing demand for gourmet beans in China, Brazil, and India also contributed.
USAID estimates that between 2012 and 2014, CLR caused $1 billion in damage and affected over 2 million people in Latin America.
The reasons for the epidemic remain unclear but an emergency rust summit meeting in Guatemala in April 2013 compiled a long list of shortcomings. These included a lack of resources to control the rust, the dismissal of early warning signs, ineffective fungicide application techniques, lack of training, poor infrastructure and conflicting advice. In a keynote talk at the "Let's Talk Roya" meeting (El Salvador, November 4, 2013), Dr Peter Baker, a senior scientist at CAB International, raised several key points regarding the epidemic including the proportional lack of investment in research and development in such a high value industry and the lack of investment in new varieties in key coffee producing countries such as Colombia.
Typical coffee cultivars maintained by farmers before the epidemic included Caturra, Bourbon, Mundo Novo, and Typica, all of which are susceptible to H. vastatrix. Also before the epidemic of 2012, 82% of farms were certified organic, which limits the agrochemicals farmers can use. However, there are a number of fungicides that can be used in certified organic systems, such as copper-based Bordeaux mix as well as commercial mixtures.
Honduras
During this period, Honduras experienced a significant epidemic of CLR. 80,000 hectares of coffee farms were infected and The Honduran National Institute of Coffee (IHCAFE) estimates that 30,000 farmers lost over half of their coffee production capacity and a third of those—10,000 farmers—suffered a complete loss of coffee production capacity. Roughly 84% of coffee producers in Honduras are smallholders and are therefore more vulnerable to loss of production than estate farmers.
Further
Coffee crops in Guatemala were ruined by coffee rust, and a state of emergency was declared in February 2013.
CLR has been a problem in Mexico.
CLR disease is a serious problem in coffee plantations in Peru, where the government declared a sanitary emergency (Decreto Supremo N° 082-2013-PCM).
In late October 2020, USDA ARS detected rust on Maui. Immediately the Hawaii Department of Agriculture began inspections around the state, not just on Maui itself. They initially found plants they suspected to also be infected in Hilo on the big island; however, these plants tested negative for CLR, though it was detected on plants in the Kailua-Kona region of the island. In January 2021, additional infections were found on the islands of Oahu and Lanai, and plant quarantines went into effect as of March 2021 for interisland transport of coffee plants or parts between the four islands on which CLR has been found.
Economic impact
Coffee leaf rust (CLR) has direct and indirect economic impacts on coffee production. Direct impacts include decreased quantity and quality of yield produced by the diseased plant and the cost of inputs meant specifically to control the disease. Indirect impacts include increased costs to combat and control the disease. Methods of combating and controlling the disease include fungicide application and stumping diseased plants and replacing them with resistant breeds. Both methods include significant labor and material costs and in the case of stumping, include a years-long decline in production (coffee seedlings are not fully productive for three to five years after planting).
Due to the complexity of accurately accounting for losses attributed to CLR, there are few records quantifying yield losses. Estimates of yield loss vary by country and can range anywhere between 15 and 80%. Worldwide loss is estimated at 15%.
Some early data from Ceylon documenting the losses in the late 19th century indicate coffee production was reduced by 75%. As farmers shifted from coffee to other crops not affected by CLR, land used for growing coffee was reduced by 80%, from 68,787 to 14,170 ha.
In addition to the costs mentioned above, additional costs include research and development costs in producing resistant cultivars. These costs are normally borne by the industry, local and national governments and international aid agencies.
Colombia's National Federation of Coffee Growers (Fedecafe) set up a research lab specifically designed to find ways to stop the disease, as the country is a leading exporter of the Coffea arabica bean that is particularly prone to the disease.
References
External links
Hemileia vastatrix description at Plantvillage.com
Coffee Research Institute: Coffee rust
University of Nebraska-Lincoln: Coffee rust
The University of Hawaii page on Hemileia vastatrix
U.S.Dept.Agriculture page on Coffee Leaf Rust
Pucciniales
Coffee diseases
Fungi of Africa
Fungi of Asia
Fungi of South America
Fungi of Colombia
Fungi described in 1869
Taxa named by Miles Joseph Berkeley
Taxa named by Christopher Edmund Broome
Fungus species | Hemileia vastatrix | [
"Biology"
] | 3,791 | [
"Fungi",
"Fungus species"
] |
4,104,986 | https://en.wikipedia.org/wiki/How%20to%20Solve%20it%20by%20Computer | How to Solve it by Computer is a computer science book by R. G. Dromey, first published by Prentice-Hall in 1982.
It is occasionally used as a textbook, especially in India.
It is an introduction to the whys of algorithms and data structures.
Features of the book:
The design factors associated with problems
The creative process behind coming up with innovative solutions for algorithms and data structures
The line of reasoning behind the constraints, factors and the design choices made.
The fundamental algorithms in the book are mostly presented in pseudocode and/or Pascal notation.
See also
How to Solve It, by George Pólya, the author's mentor and inspiration for writing the book.
References
1982 non-fiction books
Algorithms
Computer science books
Heuristics
Problem solving
Prentice Hall books | How to Solve it by Computer | [
"Mathematics"
] | 160 | [
"Applied mathematics",
"Algorithms",
"Mathematical logic"
] |
4,105,321 | https://en.wikipedia.org/wiki/Ogden%27s%20lemma | In the theory of formal languages, Ogden's lemma (named after William F. Ogden) is a generalization of the pumping lemma for context-free languages.
Despite Ogden's lemma being a strengthening of the pumping lemma, it is insufficient to fully characterize the class of context-free languages. This is in contrast to the Myhill-Nerode theorem, which unlike the pumping lemma for regular languages is a necessary and sufficient condition for regularity.
Statement
We will use underlines to indicate "marked" positions.

If G is a context-free grammar and p is a constant depending only on G, then any string s of length at least p generated by G, in which at least p positions are marked, can be written as s = uvwxy such that, for some nonterminal A, S ⇒* uAy, A ⇒* vAx and A ⇒* w, where vx contains at least one marked position, vwx contains at most p marked positions, and consequently uv^nwx^ny ∈ L(G) for all n ≥ 0.
Special cases
Ogden's lemma is often stated in the following form, which can be obtained by "forgetting about" the grammar, and concentrating on the language itself:
If a language L is context-free, then there exists some number p ≥ 1 (where p may or may not be a pumping length) such that for any string s of length at least p in L and every way of "marking" p or more of the positions in s, s can be written as

s = uvwxy

with strings u, v, w, x and y, such that

vx has at least one marked position,
vwx has at most p marked positions, and
uv^nwx^ny ∈ L for all n ≥ 0.
In the special case where every position is marked, Ogden's lemma is equivalent to the pumping lemma for context-free languages. Ogden's lemma can be used to show that certain languages are not context-free in cases where the pumping lemma is not sufficient. An example is the language {a^i b^j c^k d^l : i = 0 or j = k = l}.
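For small strings these conditions can be checked exhaustively. The sketch below is purely illustrative (the helper names and the artificially small value p = 2 are ours, not from any source): treating every position as marked, the marked-position conditions reduce to plain length conditions, and the script enumerates every admissible decomposition and tests whether any of them pumps.

```python
def pumping_decompositions(s, p):
    """Yield every split s = u v w x y with |vwx| <= p and |vx| >= 1.
    With every position marked, the marked-position conditions of the
    special case reduce to these plain length conditions."""
    for i in range(len(s) + 1):                     # start of vwx
        for m in range(i, min(i + p, len(s)) + 1):  # end of vwx
            for j in range(i, m + 1):               # end of v
                for k in range(j, m + 1):           # end of w
                    u, v, w, x, y = s[:i], s[i:j], s[j:k], s[k:m], s[m:]
                    if v or x:
                        yield u, v, w, x, y

def some_split_pumps(s, p, member, ns=(0, 2, 3)):
    """True if some admissible split keeps u v^n w x^n y in the
    language for every tested value of n."""
    return any(
        all(member(u + v * n + w + x * n + y) for n in ns)
        for u, v, w, x, y in pumping_decompositions(s, p)
    )

def abc(t):
    """Membership test for {a^n b^n c^n : n >= 0}."""
    n = len(t) // 3
    return t == "a" * n + "b" * n + "c" * n

def ab(t):
    """Membership test for {a^n b^n : n >= 0}."""
    n = len(t) // 2
    return t == "a" * n + "b" * n

print(some_split_pumps("aabbcc", 2, abc))  # → False: no split survives pumping
print(some_split_pumps("aabb", 2, ab))     # → True: v = "a", x = "b" pumps
```

No admissible split of a²b²c² survives pumping (most already fail when pumped down to n = 0), which is the heart of the non-context-freeness argument for {aⁿbⁿcⁿ}; for a²b² in {aⁿbⁿ}, the split v = "a", x = "b" pumps, as the lemma guarantees for a context-free language.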
Example applications
Non-context-freeness
The special case of Ogden's lemma is often sufficient to prove some languages are not context-free. For example, {a^n b^n c^n : n ≥ 0} is a standard example of a non-context-free language, provable by marking every position and applying the special case.
Similarly, one can prove the "copy twice" language {www : w ∈ {a, b}*} is not context-free, by using Ogden's lemma on a suitably chosen and marked string.
And the example given in the last section, {a^i b^j c^k d^l : i = 0 or j = k = l}, can likewise be shown non-context-free by using Ogden's lemma on a suitably marked string, even though the plain pumping lemma cannot prove this.
Inherent ambiguity
Ogden's lemma can be used to prove the inherent ambiguity of some languages, which is implied by the title of Ogden's paper.
Example: Let L = {a^i b^j c^k : i = j or j = k}. The language L is inherently ambiguous. (Example from page 3 of Ogden's paper.)
Similar reasoning shows that related languages are inherently ambiguous, and for any CFG of such a language, letting p be the constant for Ogden's lemma, one can exhibit strings with arbitrarily many different parses. Such a language therefore has an unbounded degree of inherent ambiguity.
Undecidability
The proof can be extended to show that deciding whether a CFG is inherently ambiguous is undecidable, by reduction to the Post correspondence problem. It can also show that deciding whether a CFG has an unbounded degree of inherent ambiguity is undecidable. (page 4 of Ogden's paper)
Generalized condition
Bader and Moura have generalized the lemma to allow marking some positions that are not to be included in vx. Their dependence of the parameters was later improved by Dömösi and Kudlek. If we denote the number of such excluded positions by e, then the number d of marked positions of which we want to include some in vx must satisfy d > p^(e+1), where p is some constant that depends only on the language. The statement becomes that every s can be written as

s = uvwxy

with strings u, v, w, x and y, such that

vx has at least one marked position and no excluded position,
vwx has at most p^(e+1) marked positions, and
uv^nwx^ny ∈ L for all n ≥ 0.

Moreover, either each of u, v, w has a marked position, or each of w, x, y has a marked position.
References
Formal languages
Lemmas | Ogden's lemma | [
"Mathematics"
] | 683 | [
"Formal languages",
"Mathematical logic",
"Mathematical problems",
"Mathematical theorems",
"Lemmas"
] |
4,105,326 | https://en.wikipedia.org/wiki/GPS%20tracking%20unit | A GPS tracking unit, geotracking unit, satellite tracking unit, or simply tracker is a navigation device, normally carried by a vehicle, asset, person or animal, that uses satellite navigation to determine its movement and its WGS84/UTM geographic position (geotracking). Satellite tracking devices may send special satellite signals that are processed by a receiver.
Locations are stored in the tracking unit or transmitted to an Internet-connected device using the cellular network (GSM/GPRS/CDMA/LTE or SMS), radio, Wi-Fi, or a satellite modem embedded in the unit; satellite modems work worldwide.
GPS antenna size limits tracker size; trackers are often smaller than a half-dollar coin (diameter 30.61 mm). In 2020, tracking was a $2 billion business, in addition to military applications; in the Gulf War, 10% or more of targets used trackers. Virtually every cellphone tracks its movements.
Tracks can be map displayed in real time, using GPS tracking software and devices with GPS capability.
Architecture
A GPS tracker essentially contains a GPS module that receives the GPS signal and calculates the coordinates. For data loggers, it contains large memory to store the coordinates. Data pushers additionally contain a GSM/GPRS/CDMA/LTE modem to transmit this information to a central computer, either via SMS or via GPRS in the form of IP packets. Satellite-based GPS tracking units will operate anywhere on the globe using satellite technology such as GlobalStar or Iridium. They do not require a cellular connection.
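As a concrete illustration of the data a GPS module hands to the rest of the unit, the sketch below parses a standard NMEA 0183 $GPGGA fix sentence into decimal-degree coordinates and validates its checksum. The function names are ours, and the parser is a minimal sketch that ignores most fields and all error handling.

```python
def nmea_checksum_ok(sentence):
    # NMEA checksum = XOR of every character between '$' and '*',
    # written as two uppercase hex digits after the '*'
    body, _, given = sentence.lstrip("$").partition("*")
    calc = 0
    for ch in body:
        calc ^= ord(ch)
    return given[:2].upper() == format(calc, "02X")

def parse_gpgga(sentence):
    """Parse a $GPGGA sentence into (lat, lon) decimal degrees.
    NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm."""
    fields = sentence.split(",")
    lat = float(fields[2][:2]) + float(fields[2][2:]) / 60
    if fields[3] == "S":
        lat = -lat
    lon = float(fields[4][:3]) + float(fields[4][3:]) / 60
    if fields[5] == "W":
        lon = -lon
    return lat, lon

s = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
assert nmea_checksum_ok(s)
print(parse_gpgga(s))  # ≈ (48.1173, 11.5167)
```

The tracker firmware (or tracking server) converts such fixes into whatever log or wire format the product uses.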
Types
There are three types of GPS trackers, though most GPS-equipped phones can work in any of these modes depending on the mobile applications installed:
Data loggers
GPS loggers log the position of the device at regular intervals in internal memory. GPS loggers may have either a memory card slot or internal flash memory, and a USB port. Some act as a USB flash drive, which allows downloading the track log data for further computer analysis. The track list or point of interest list may be in GPX, KML, NMEA or other formats.
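As a sketch of what a logger's GPX output looks like, the snippet below serializes a few (time, lat, lon) fixes as a minimal GPX 1.1 track. The creator string, helper name and sample coordinates are ours; real loggers add elevation, metadata and proper XML escaping.

```python
def to_gpx(fixes):
    """Serialize (iso_time, lat, lon) fixes as a minimal GPX 1.1 track."""
    pts = "\n".join(
        f'      <trkpt lat="{lat}" lon="{lon}"><time>{t}</time></trkpt>'
        for t, lat, lon in fixes
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<gpx version="1.1" creator="logger-sketch" '
        'xmlns="http://www.topografix.com/GPX/1/1">\n'
        "  <trk>\n    <trkseg>\n" + pts + "\n    </trkseg>\n  </trk>\n</gpx>\n"
    )

fixes = [("2009-05-01T12:00:00Z", 51.5007, -0.1246),
         ("2009-05-01T12:01:00Z", 51.5010, -0.1240)]
print(to_gpx(fixes))
```

A file like this is what mapping software reads back when the logger is plugged in as a USB drive.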
Most digital cameras save the time a photo was taken. Provided the camera clock is reasonably accurate or used GPS as its time source, this time can be correlated with GPS log data, to provide an accurate location. This can be added to the Exif metadata in the picture file. Cameras with a GPS receiver built in can directly produce such a geotagged photograph.
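The correlation step can be sketched as a nearest-in-time lookup. In the example below, the helper name, sample coordinates and the 5-minute tolerance are our own illustrative choices: a photo timestamp is matched against a time-sorted GPS log.

```python
import bisect
from datetime import datetime, timedelta

def geotag(photo_time, gps_log, max_gap=timedelta(minutes=5)):
    """Return the (lat, lon) of the GPS fix closest in time to the photo,
    or None if the nearest fix is further away than max_gap.
    gps_log is a list of (datetime, lat, lon) tuples sorted by time."""
    if not gps_log:
        return None
    times = [t for t, _, _ in gps_log]
    i = bisect.bisect_left(times, photo_time)
    # the closest fix is either just before or just after the photo time
    candidates = [j for j in (i - 1, i) if 0 <= j < len(gps_log)]
    best = min(candidates, key=lambda j: abs(times[j] - photo_time))
    if abs(times[best] - photo_time) > max_gap:
        return None
    _, lat, lon = gps_log[best]
    return lat, lon

log = [
    (datetime(2009, 5, 1, 12, 0, 0), 51.5007, -0.1246),
    (datetime(2009, 5, 1, 12, 1, 0), 51.5010, -0.1240),
    (datetime(2009, 5, 1, 12, 2, 0), 51.5014, -0.1235),
]
print(geotag(datetime(2009, 5, 1, 12, 0, 50), log))  # → (51.501, -0.124)
```

Geotagging tools then write the matched coordinates into the photo's Exif metadata, as described above.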
In some private investigation cases, data loggers are used to keep track of a target vehicle. The private investigator need not follow the target too closely, and always has a backup source of data.
Data pushers
A data pusher is the most common type of GPS tracking unit, used for asset tracking, personal tracking and vehicle tracking systems. Virtually every cell phone operates in this mode per its user agreement; even when tracking is shut off or disabled, the phone stores the data for future transmission.
Also known as a "GPS beacon", this kind of device pushes (i.e. sends), at regular intervals, the position of the device as well as other information such as speed or altitude to a determined server, which can store and analyze the data instantly.
A GPS navigation device and a mobile phone sit side-by-side in the same box, powered by the same battery. At regular intervals, the phone sends a text message via SMS or GPRS, containing the data from the GPS receiver. Newer GPS-integrated smartphones running GPS tracking software can turn the phone into a data pusher (or logger) device. As of 2009, open source and proprietary applications are available for common Java ME enabled phones, iPhone, Android, Windows Mobile, and Symbian.
Most 21st-century GPS trackers provide data "push" technology, enabling sophisticated GPS tracking in business environments, specifically organizations that employ a mobile workforce, such as a commercial fleet. Typical GPS tracking systems used in commercial fleet management have two core parts: location hardware (or tracking device) and tracking software. This combination is often referred to as an Automatic Vehicle Location system. The tracking device is most often hardwired into the vehicle and connected to the CAN bus, the ignition switch, and the battery. It allows collection of extra data, which is later transferred to the GPS tracking server. There it is available for viewing, in most cases via a website accessed over the Internet, where fleet activity can be viewed live or historically using digital maps and reports.
GPS tracking systems used in commercial fleets are often configured to transmit location and telemetry input data at a set update rate or when an event (door open/close, auxiliary equipment on/off, geofence border cross) triggers the unit to transmit data. Live GPS tracking used in commercial fleets generally refers to systems that update regularly at one-minute, two-minute or five-minute intervals while the ignition status is on. Some tracking systems combine timed updates with heading change triggered updates.
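A geofence border-cross trigger of the kind described above reduces to comparing consecutive fixes against the fence. A minimal sketch for a circular fence — the depot coordinates and 500 m radius are made-up values:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def geofence_event(prev_fix, new_fix, centre, radius_m):
    """Return 'enter'/'exit' when a circular geofence border is crossed, else None."""
    was_in = haversine_m(*prev_fix, *centre) <= radius_m
    now_in = haversine_m(*new_fix, *centre) <= radius_m
    if was_in == now_in:
        return None  # no border cross, nothing to transmit
    return "enter" if now_in else "exit"

depot = (51.5074, -0.1278)  # hypothetical depot location
print(geofence_event((51.5200, -0.1278), (51.5075, -0.1279), depot, 500))  # → enter
```

On a real unit this check runs on every fix, and a non-None result triggers an immediate transmission alongside the regular timed updates.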
GPS tracking solutions such as Telematics 2.0, an IoT based telematics technology for the automotive industry, are being used by mainstream commercial auto insurance companies.
Data pullers
GPS data pullers are also known as "GPS transponders". Unlike data pushers that send the position of the devices at regular intervals (push technology), these devices are always on, and can be queried as often as required (pull technology). This technology is not in widespread use, but an example of this kind of device is a computer connected to the Internet and running gpsd.
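gpsd speaks a line-oriented JSON protocol on TCP port 2947: a client enables streaming with a `?WATCH` command and then reads `TPV` (time-position-velocity) reports. A sketch of a one-shot pull, assuming a gpsd daemon is running locally:

```python
import json
import socket

def extract_fix(line):
    """Return (lat, lon) from a gpsd TPV report line, else None."""
    try:
        report = json.loads(line)
    except json.JSONDecodeError:
        return None
    if report.get("class") == "TPV" and "lat" in report and "lon" in report:
        return report["lat"], report["lon"]
    return None

def poll_gpsd(host="localhost", port=2947, timeout=10.0):
    """Pull one position fix from a running gpsd daemon."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(b'?WATCH={"enable":true,"json":true};\n')
        for line in sock.makefile("r"):
            fix = extract_fix(line)
            if fix:
                return fix
```

The first lines gpsd sends are `VERSION` and `DEVICES` banners, which `extract_fix` skips until a TPV report with coordinates arrives; a mode-1 (no fix) TPV report carries no `lat`/`lon` keys and is skipped too.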
These can often be used in the case where the location of the tracker will only need to be known occasionally (e.g. placed in property that may be stolen, or that does not have a constant source of energy to send data on a regular basis, like freight or containers.)
Data pullers are coming into more common usage in the form of devices containing a GPS receiver and a cell phone which, when sent a special SMS message, reply to the message with their location.
Covert GPS trackers
Covert GPS trackers contain the same electronics as regular GPS trackers but are constructed in such a way as to appear to be an everyday object. One use for covert GPS trackers is for power tool protection; these devices can be concealed within power tool boxes and traced if theft occurs.
Applications
The applications of GPS trackers include:
Personal tracking
Race control: in some sports, such as gliding, participants are required to carry a tracker. In particular, this allows race officials to know if the participants are cheating, taking unexpected shortcuts, and how far apart they are. This use was illustrated in the movie Rat Race.
Law enforcement: an arrested suspect out on bail may have to wear a GPS tracker, usually an ankle monitor, as a bail condition. GPS tracking may also be ordered for persons subject to a restraining order.
Espionage/surveillance: a tracker on a person or vehicle allows movements to be tracked.
Vehicle tracking: some people use GPS Trackers to monitor activity of their own vehicle, especially in the event of a vehicle being used by a friend or family member.
GPS personal tracking devices are used in the care of the elderly and vulnerable, and can be used to track small children who may get into danger. Some devices can send text alerts to carers if the wearer moves into an unexpected place. Some devices allow users to call for assistance, and optionally allow designated carers to locate the user's position, typically within five to ten meters. Their use helps promote independent living and social inclusion for the elderly. Devices often incorporate either one-way or two-way voice communication. Some devices also allow the user to call several phone numbers using pre-programmed speed dial buttons. Trials using GPS personal tracking devices for people living with dementia are underway in several countries. Text and voice communication is usually provided by a connection to mobile telephony, but GPS devices are available that use satellite communications, which remain available even when out of mobile telephone range.
Some Internet Web 2.0 pioneers have created their own personal web pages that show their position constantly, and in real time, on a map within their website. These usually use data push from a GPS enabled cell phone or a personal GPS tracker.
Sports: the movements of ramblers, cyclists, and so on, can be tracked. Statistics such as instantaneous and average speed, and distance travelled, are logged. In the rugby union Six Nations Championship, all players wear trackers sewn into their shirts. Some rugby clubs also use GPS units on their players, as does the England Rugby Union team.
Adventure sports: GPS tracking devices such as the SPOT Satellite Messenger are available to allow the position of a person to be tracked. In particular, this allows rescue personnel to locate the carrier. These devices also allow the carrier to send messages and emergency alerts, even when out of cellular telephone range.
Monitoring employees: handheld GPS tracking devices with a built-in cellphone are used by various companies to monitor their employees, especially those engaged in fieldwork.
Lone workers: GPS is ideal for improving the safety of employees working in distant, isolated work sites. Maintenance workers, forestry workers, mining workers, and employees in similar fields may be required to work in remote areas without any contact nearby. In such scenarios, the risk to their well-being increases.
Asset tracking
Solar Powered: the advantage of some solar powered units is that they have much more power over their lifetime than battery-powered units. This gives them the advantage of reporting their position and status much more often than battery units which need to conserve energy to extend their life. Some wireless solar-powered units, such as the RailRider can report more than 20,000 times per year and work indefinitely on solar power, eliminating the need to change batteries.
Aircraft tracking
Aircraft can be tracked either by ADS-B (primarily airliners and General Aviation aircraft with ADS-B-out enabled transponder), or by FLARM data packets picked up by a network of ground stations (primarily used by General Aviation aircraft, gliders and UAVs), both of which are data pushers. ADS-B is to be superseded by ADS-C, a data puller.
Animal tracking
Animal monitoring (GPS wildlife tracking): when put on a wild animal (e.g. in a GPS collar), it allows scientists to study the animal's activities and migration patterns. Vaginal implant transmitters mark the location where pregnant females give birth. Animal tracking collars may also be put on domestic animals, to locate them in case they get lost.
Legislation
Australian law
There are no Australian federal laws governing surveillance and the legality of GPS trackers.
However, most states and territories have statutes covering the use and restrictions of tracking devices used for surveillance; at present, only Queensland and Tasmania do not have such legislation.
United States law
In the United States, the use of GPS trackers by government authorities is limited by the Fourth Amendment to the United States Constitution. So police, for example, usually require a search warrant. While police have placed GPS trackers in vehicles without a warrant, this usage was questioned in court in early 2009.
Use by private citizens is regulated in some states, such as California, where California Penal Code Section 637.7 states:
(a) No person or entity in this state shall use an electronic tracking device to determine the location or movement of a person.
(b) This section shall not apply when the registered owner, lessor, or lessee of a vehicle has consented to the use of the electronic tracking device with respect to that vehicle.
(c) This section shall not apply to the lawful use of an electronic tracking device by a law enforcement agency.
(d) As used in this section, "electronic tracking device" means any device attached to a vehicle or other movable thing that reveals its location or movement by transmission of electronic signals.
(e) A violation of this section is a misdemeanor.
(f) A violation of this section by a person, business, firm, company, association, partnership, or corporation licensed under Division 3 (commencing with Section 5000) of the Business and Professions Code shall constitute grounds for revocation of the license issued to that person, business, firm, company, association, partnership, or corporation, pursuant to the provisions that provide for the revocation of the license as set forth in Division 3 (commencing with Section 5000) of the Business and Professions Code.
Note that 637.7 pertains to all electronic tracking devices, and does not differentiate between those that rely on GPS technology or not. As the laws catch up with the times, it is plausible that all 50 states will eventually enact laws similar to those of California.
Other laws, like the common law invasion of privacy tort as well as state criminal wiretapping statutes (for example, the wiretapping statute of the Commonwealth of Massachusetts, which is extremely restrictive) potentially cover the use of GPS tracking devices by private citizens without consent of the individual being so tracked. Privacy can also be a problem when people use the devices to track the activities of a loved one. GPS tracking devices have also been put on religious statues to track the whereabouts of the statue if stolen.
In 2009, debate ensued over a Georgia proposal to outlaw hidden GPS tracking, with an exception for law enforcement officers but not for private investigators. See Georgia HB 16 - Electronic tracking device; location of person without consent (2009).
United Kingdom law
The law in the UK has not specifically addressed the use of GPS trackers, but several laws may affect the use of this technology as a surveillance tool.
Data Protection Act 1998
It is quite clear that if client instructions (written or digitally transmitted) that identify a person and a vehicle are combined with a tracker, the information gathered by the tracker becomes personal data as defined by the Data Protection Act 1998. The document "What is personal data? – A quick reference guide" published by the Information Commissioner's Office (ICO) makes clear that data identifying a living individual is personal data: if a living individual can be identified from the data, with or without additional information that may become available, it is personal data.
Identifiability
An individual is 'identified' if distinguished from other members of a group. In most cases, an individual's name, together with some other information, will be sufficient to identify them, but a person can be identified even if their name is not known. Start by looking at the means available to identify an individual and the extent to which such means are readily available to you.
Does the data 'relate to' the identifiable living individual, whether in personal or family life, business or profession?
'Relates to' means: data which identifies an individual, even without an associated name, may be personal data which is processed to learn or record something about that individual, or whose processing affects the individual. Therefore, data may 'relate to' an individual in several different ways.
Is the data 'obviously about' a particular individual?
Data 'obviously about' an individual will include their medical history, criminal record, record of work, or their achievements in a sporting activity. Data that is not 'obviously about' a particular individual may include information about their activities. Data such as personal bank statements or itemised telephone bills will be personal data about the individual operating the account or contracting for telephone services. Where data is not 'obviously about' an identifiable individual it may be helpful to consider whether the data is being processed, or could easily be processed, to learn, record or decide something about an identifiable individual. Information may be personal data where the aim, or an incidental consequence, of the processing is that one learns or records something about an identifiable individual, or the processing could affect an identifiable individual. Data from a tracker is processed to identify the individual or their activities, and it is therefore personal data within the meaning of the Data Protection Act 1998.
Any individual who wishes to gather personal data must be registered with the Information Commissioner's Office (ICO) and have a DPA number. It is a criminal offense to process data and not have a DPA number.
Trespass
It may be a civil trespass for an individual to deploy a tracker on another's car. However, in the annual inspection report of the Office of Surveillance Commissioners (OSC), the Chief Surveillance Commissioner Sir Christopher Rose stated that "putting an arm into a wheel arch or under the frame of a vehicle is straining the concept of trespass".
However, entering a person's private land to deploy a tracker is clearly a trespass which is a civil tort.
Protection from Harassment Act 1997
At times, the public misinterprets surveillance, in all its forms, as stalking. Whilst there is no specific legislation to address this kind of harassment, a long-term pattern of persistent and repeated efforts at contact with a particular victim is generally considered stalking.
The Protection of Freedoms Act 2012 created two new offenses of stalking by inserting new sections 2A and 4A into the PHA 1997. The new offences which came into force on 25 November 2012, are not retrospective. Section 2A (3) of the PHA 1997 sets out examples of acts or omissions which, in particular circumstances, are ones associated with stalking. Examples are: following a person, watching or spying on them, or forcing contact with the victim through any means, including social media.
Such behavior curtails a victim's freedom, leaving them feeling that they constantly have to be careful. In many cases, the conduct might appear innocent (if considered in isolation), but when carried out repeatedly, so as to amount to a course of conduct, it may then cause significant alarm, harassment or distress to the victim.
The examples given in section 2A (3) are not an exhaustive list but an indication of the types of behavior that may be displayed in a stalking offense.
Stalking and harassment of another or others can include a range of offenses such as those under the Protection from Harassment Act 1997; the Offences Against the Person Act 1861; the Sexual Offences Act 2003; and the Malicious Communications Act 1988.
Examples of the types of conduct often associated with stalking include direct communication; physical following; indirect contact through friends, colleagues, family or technology; or, other intrusions into the victim's privacy. The behavior curtails a victim's freedom, leaving them feeling that they constantly have to be careful.
If the subject of inquiry is aware of the tracking, then this may amount to harassment under the Protection from Harassment Act 1997. There is a case at the Royal Courts of Justice where a private investigator is being sued under this act for the use of trackers. In December 2011, a claim was brought against Richmond Day & Wilson Limited (First Defendant) and Bernard Matthews Limited (Second Defendant), Britain's leading turkey producer.
The case relates to the discovery of a tracking device found in August 2011 on a vehicle supposedly connected to Hillside Animal Sanctuary.
Regulation of Investigatory Powers Act 2000
Property Interference: The Home Office published a document entitled "Covert Surveillance and Property Interference, Revised Code of Practice, Pursuant to section 71 of the Regulation of Investigatory Powers Act 2000" where it suggests in Chapter 7, page 61 that;
General basis for lawful activity
7.1 Authorizations under section 5 of the 1994 Act or Part III of the 1997 Act should be sought wherever members of the intelligence services, the police, the services police, Serious and Organised Crime Agency (SOCA), Scottish Crime and Drug Enforcement Agency (SCDEA), HM Revenue and Customs (HMRC) or Office of Fair Trading (OFT), or persons acting on their behalf, conduct entry on, or interference with, property or with wireless telegraphy that would be otherwise unlawful.
7.2 For the purposes of this chapter, "property interference" shall be taken to include entry on, or interference with, property or with wireless telegraphy.
Example: The use of a surveillance device for providing information about the location of a vehicle may involve some physical interference with that vehicle as well as subsequent directed surveillance activity. Such an operation could be authorized by a combined authorization for property interference (under Part III of the 1997 Act) and, where appropriate, directed surveillance (under the 2000 Act). In this case, the necessity and proportionality of the property interference element of the authorization would need to be considered by the appropriate authorizing officer separately to the necessity and proportionality of obtaining private information by means of the directed surveillance.
This can be interpreted to mean that placing a tracker on a vehicle without the consent of the owner is illegal unless authorization is obtained from a Surveillance Commissioner under RIPA 2000. Since a member of the public cannot obtain such authorizations, it is therefore illegal property interference.
Another interpretation is that it is illegal to do so if you are acting under the instruction of a public authority and you do not obtain authorization. The legislation makes no mention of property interference for anyone else.
Currently, there is no legislation in place that deals with the deployment of trackers in a criminal sense except RIPA 2000 and that RIPA 2000 only applies to those agencies and persons mentioned in it.
Uses in marketing
In August 2010, Brazilian company Unilever ran an unusual promotion where GPS trackers were placed in boxes of Omo laundry detergent. Teams would then track consumers who purchased the boxes of detergent to their homes where they would be awarded a prize for their purchase. The company also launched a website (in Portuguese) to show the approximate location of the winners' homes.
See also
Automatic Packet Reporting System
Data privacy
Electronic tagging
GPS aircraft tracking
GPS navigation device
GPS tracking server
GPS watch
GPS wildlife tracking
Intelligent transportation system (ITS)
IVMS
Mobile phone
Moving map display
Radio clock#GPS clocks
Real-time locating
Telematics
Telematics 2.0
Vehicle infrastructure integration
Vehicle tracking system
References
External links
Global Positioning System
Surveillance
Geopositioning
Navigational equipment
| GPS tracking unit | [
"Technology",
"Engineering"
] | 4,451 | [
"Global Positioning System",
"Aerospace engineering",
"Wireless locating",
"Aircraft instruments"
] |
4,105,384 | https://en.wikipedia.org/wiki/Harassment%20in%20the%20United%20Kingdom | Harassment is a topic which, in the past few decades, has been taken increasingly seriously in the United Kingdom, and has been the subject of a number of pieces of legislation.
Introduction
Racial and sexual discrimination have long been unlawful under the Race Relations Acts and the Sex Discrimination Act 1975 (since repealed and replaced by the Equality Act 2010) respectively. It is only comparatively recently that specific legislation has defined harassment itself as unlawful.
Because of the recent rise in awareness of the issues involved in harassment, there have been significant rises in the number of people making claims of harassment at Employment Tribunals. If the complaint is serious, high damages may be awarded against the employer, so it is important for employers to take any allegation of harassment seriously at an early stage and take steps to resolve it quickly.
There is also legislation in place to deal with discrimination, and this legislation is distinct from that provided under the Sex Discrimination Act 1975 and the Race Relations Acts.
Definition
Under the Protection from Harassment Act 1997 - this statute makes harassment a crime and a civil wrong:
Section 1(1): 'A course of conduct which amounts to harassment, and which the defendant knows or ought to know amounts to harassment is prohibited.'
"A person must not pursue a course of conduct
'(a) which amounts to harassment of another, and
(b) which he knows or ought to know amounts to harassment of the other.'
As for what the defendant 'ought to know', the test is whether a reasonable person in possession of the same information would think it amounts to harassment (Section 1(2)). The defendant need not know that their conduct amounts to harassment of the other, so long as they ought to know that their course of conduct amounts to harassment of the other:
Per Section 1(2) - 'the person whose course of conduct is in question ought to know that it amounts to [or involves] harassment of another if a reasonable person in possession of the same information would think the course of conduct amounted to harassment of the other.'

Harassment also occurs when, on the grounds of race, disability, sex, sexual orientation, belief or religion, an employer - or their agent such as another employee or a manager - engages in unwanted conduct which has the purpose or effect of violating an individual's dignity or creating an intimidating, hostile, degrading, humiliating or offensive environment for the employee in question. This is a wide spectrum, and covers all types of harassment.
Such actions can be:
Physical conduct;
Verbal conduct; and
Non-verbal conduct.
In addition, while the conduct must be unwanted by the recipient, it does not necessarily have to be that the harasser has a motive or an intention to harass. So it is still harassment even if the harasser does not know there is harm caused by their actions.
Requirements for the Tort of Harassment
Whilst Section 2 of the Protection from Harassment Act 1997 states that harassment is a crime, the statute also provides a civil remedy for actual or apprehended harassment under section 3(1).
'In life one has to put up with a certain amount of annoyance', but 'To cross the boundary from the regrettable to the unacceptable, the gravity of the misconduct must be of an order which would sustain criminal liability under section 2.' Under the Protection from Harassment Act 1997, Section 7(2) 'References to harassing a person include alarming the person or causing the person distress.'
Elements
Alarm or distress
'As with any common English word, what amounts to harassment, alarm or distress in any given situation is a question of fact for the magistrates.'
However, whilst it is 'evident that the 1997 Act created an offence of potentially enormous scope', 'not any trivial act of harassment will do; there is a minimum level of alarm and distress which must be suffered in order to constitute harassment.' Previous case law has specified that mere alarm or distress may not be enough to make the defendant liable for harassment; the defendant's behaviour must be oppressive. Moreover, in the case of Hayes v Willoughby, harassment was described as a persistent and deliberate course of unreasonable and oppressive conduct, targeted at another person, calculated to cause 'alarm, fear and/or distress.'
Furthermore, following Pill LJ's judgment: 'To harass as defined in the Concise Oxford Dictionary, Tenth Edition, is to "torment by subjecting to constant interference or intimidation". The conduct must be unacceptable to a degree which would sustain criminal liability and also must be oppressive.'

Course of conduct
There 'must be a course of conduct, that is to say conduct on at least two occasions.' 'A single act of harassment will not amount to an offence.' Under Section 7(3): A course of conduct must involve-
(a) 'In the case of conduct in relation to a single person, conduct on at least two occasions in relation to that person, or
(b) in the case of conduct in relation to two or more persons, conduct on at least one occasion in relation to each of those persons.'
So, a single occasion is not enough. For example, in the case of R v Curtis, the defendant was in a volatile relationship with the claimant. The court 'required proof of a course of conduct', and it was held that the 'course of conduct [present here] amounted to harassment.'
What about where the conduct is not targeted at the claimant?
It is not necessary that the victim themselves be the target:
'Liability is incurred where the defendant engaged in a course of conduct which they knew, or ought to have known, amounted to harassment.
The conduct does not need to be targeted at the claimant, although it must be foreseeable that the claimant will suffer the harm.'
For example, in the case of Levi v Bates [2015], the defendant published the claimant's address and number, however, the wife of the claimant also suffered distress and alarm resultant of this. The court held that although the husband was the target of the harassment, the wife could also sue because it was foreseeable that she too would have been harassed by the defendant's conduct.
Employer's liability
An employer is liable, as is the case for many other acts, for the actions of their employees during the course of employment. Though it would be relatively easy to prove that a manager or supervisor of the recipient was harassing "during the course of employment", more proof may be required if the harasser is in a subordinate position.
Employers can avoid liability for discriminatory harassment if they can prove that they took such steps that were reasonably practical to prevent harassment from occurring.
However, employers cannot use this defence to a claim of harassment under the Protection from Harassment Act 1997, under which they will have vicarious liability for the actions of their employees.
Legislation
The United Kingdom has a "rag bag of statutes" relating to harassment.
Administration of Justice Act 1970
Section 40 of the Administration of Justice Act 1970 creates the offence of harassing a contract debtor.
Protection from Eviction Act 1977
The marginal note to section 1 of the Protection from Eviction Act 1977 refers to "harassment of occupier".
Public Order Act 1986
Section 4A of the Public Order Act 1986, inserted by the Criminal Justice and Public Order Act 1994, creates the offence of intentional harassment, alarm or distress.
Section 5 creates the offence of harassment, alarm or distress.
Protection from Harassment Act 1997
This Act was primarily created to provide protection against stalkers, but it has been used in other ways.
Under this Act, it is now an offence for a person to pursue a course of action which amounts to harassment of another individual, and that they know or ought to know amounts to harassment. Under this act the definition of harassment is behaviour which causes alarm or distress. This Act provides for a jail sentence of up to six months or a fine. There are also a variety of civil remedies that can be used including awarding of damages, and restraining orders backed by the power of arrest.
The introduction of this legislation considers 'emotional harm generally,' which was considered a 'radical change to the law.'
Employers have vicarious liability for harassment by their employees under the Protection from Harassment Act 1997, (see Majrowski v Guy's and St Thomas' NHS Trust). For employees this may provide an easier route to compensation than claims based on discrimination legislation or personal injury claims for stress at work, as the elements of harassment are likely to be easier to prove, the statutory defence is not available to the employer, and it may be easier to establish a claim for compensation. Also as the claim can be made in the County Court costs are recoverable and legal aid is available.
In Scotland the Act works slightly differently:
A jail term of up to five years in very serious cases can be imposed.
Civil remedies include damages, interdict and non-harassment orders backed by powers of arrest.
Defences
Under the Protection from Harassment Act, Section 1(3): 'Subsection (1) [or (1A)] does not apply to a course of conduct if the person who pursued it shows—
(a) that it was pursued for the purpose of preventing or detecting crime,
(b) that it was pursued under any enactment or rule of law or to comply with any condition or requirement imposed by any person under an enactment, or
(c) that in the particular circumstances the pursuit of the course of conduct was reasonable.'
This is illustrated again in the case of Hayes v Willoughby, where the defendant made allegations of fraud, embezzlement and tax evasion against his former employer and engaged in a six-year campaign, writing to the police and the Department of Trade and Industry, among others. However, after investigation there was found to be nothing behind the allegations. Yet, even after the police shared their conclusions, the defendant continued to make allegations. This amounted to harassment, as there was no further rational basis to continue his 'investigations'. Hence, the defence under Section 1(3)(a) was not upheld:[13] 'It cannot be the case that the mere existence of a belief, however absurd, in the mind of the harasser that he is detecting or preventing a possibly non-existent crime, will justify him in persisting in a course of conduct which the law characterises as oppressive. Some control mechanism is required, even if it falls well short of requiring the alleged harasser to prove that his alleged purpose was objectively reasonable.'[15] 'Before an alleged harasser can be said to have had the purpose of preventing or detecting crime, he must have sufficiently applied his mind to the matter. He must have thought rationally about the material suggesting the possibility of criminality and formed the view that the conduct said to constitute harassment was appropriate for the purpose of preventing or detecting it. If he has done these things, then he has the relevant purpose.'
Damages
See the main article: Damages
Under Section 3(2), compensatory damages may be awarded for anxiety and financial loss resultant of the harassment. Harassment incurs liability for all direct losses, and not merely those which were reasonably foreseeable.
Potential remedies also include an injunction, and if that injunction is transgressed, the claimant may apply for an arrest warrant under Section 3(3). For example, in the case of Brand v Berki, the defendant made repeated serious criminal allegations against the comedian Russell Brand. She had reported the matter to the police, who investigated and said there was no case to answer. However, she continued with the allegations in the national press and on Twitter. As a result, an interim injunction was granted pending trial.
See also
United Kingdom employment equality law
Sexual harassment
United Kingdom labour law
Workplace harassment
References
External links
Majrowski v Guy's & St Thomas' NHS Trust
Neighbours From Hell in Britain: Harassment from your Neighbour
Weaver v. NATHFE Race Discrimination Case
Anti-Harassment Club
United Kingdom
English law
Law of the United Kingdom | Harassment in the United Kingdom | [
"Biology"
] | 2,462 | [
"Harassment and bullying",
"Behavior",
"Aggression"
] |
4,105,786 | https://en.wikipedia.org/wiki/Nouvelle%20Plan%C3%A8te | Nouvelle Planète is a non-profit organization founded on Albert Schweitzer's examples, ideas and ethics; it is strictly neutral in religion and politics, and works to support small practical projects in countries in the southern hemisphere, setting up direct relations between people in the North and the South, so as to help people help themselves.
Historical origins
Nouvelle Planète grew out of the project to add an extension to the Albert Schweitzer Hospital in Lambaréné (Gabon). The founder, Willy Randin, former director of the hospital, and Maurice Lack, an architect specializing in bioclimatics, proposed a project based on renewable sources of energy. Research was conducted in this direction, but the people in charge of the Albert Schweitzer hospital were not interested by the project. Instead of simply abandoning their ideas, Lack and Randin wanted to develop the appropriate technologies with interested people in other parts of the world. To do this, they founded the Albert Schweitzer Ecological Centre (CEAS) and the Sahel Action of Schweitzer's Work, Nouvelle Planète, in Switzerland.
At the time, Willy Randin was working for a big development agency in Switzerland, and he had been able to see the extent to which citizens had the desire to understand the reality of Southern countries, and to mobilize themselves in view of backing small projects by establishing direct relations with the beneficiaries.
In 1986, given the success of Sahel Action of Schweitzer's Work, it was decided to extend activities to Haiti, then to the Amazon with contributions from Jeremy Narby, while continuing with Sahel-based projects with the CEAS. At that point, the name of the organization was changed to Nouvelle Planète.
Objectives
The objectives are
- Improve food, financial and land security, in order to increase the autonomy of populations and give them new perspectives, and face the consequences of climate change,
- Promote the rights of marginalized and vulnerable populations, particularly women and indigenous peoples, through education, training in appropriate agricultural methods, and access to and strengthening of basic services,
- Protect the environment, involving local populations and seeking the best symbiosis between humans and their environment,
- Raise awareness of international solidarity and the global challenges of the rural world, by providing information to people in Switzerland, organizing solidarity trips, and coordinating volunteer groups.
Philosophy
Nouvelle Planète is based on the ethics of Albert Schweitzer, who said: "I am life wanting to live with life that wants to live." This ethical position implies a respect for all forms of life inasmuch as it is possible; a balance between humans, animals and plants ensues.
Nouvelle Planète embraces political, economic and religious neutrality. It works with groups in the South and the North, starting from their own initiatives, from their knowledge and their know-how.
The organization feels that the problems of human development and ecology have never been worse, the gap between rich and poor countries has never been greater, and misunderstanding of these problems is growing.
People in northern countries have difficulty mobilizing themselves and demonstrating solidarity with those in need. People who are prepared to invest time, skills or money often do not know how to go about it.
References
External links
Nouvelle Planète website
Alliance website
Sustainable building
Non-profit organisations based in Switzerland
International development organizations | Nouvelle Planète | [
"Engineering"
] | 684 | [
"Construction",
"Sustainable building",
"Building engineering"
] |
4,106,274 | https://en.wikipedia.org/wiki/Standard%20components%20%28food%20processing%29 | Standard components is a food technology term: when manufacturers buy in a standard component, they use a pre-made product in the production of their food.
They help products to be consistent, and they are quick and easy to use in the batch production of food products.
Some examples are pre-made stock cubes, marzipan, icing, ready made pastry.
Usage
Manufacturers use standard components because they save time, often cost less, and help with consistency in products.
If a manufacturer is to use a standard component from another supplier, it is essential that a precise and accurate specification be produced, so that the component meets the manufacturer's standards.
Advantages
Saves preparation time.
Fewer steps in the production process
Less effort and skill required by staff
Less machinery and equipment needed
Good quality
Saves money from all aspects
Can be bought in bulk
High-quality consistency
Food preparation is hygienic
Disadvantages
Have to rely on other manufacturers to supply products
Fresh ingredients may taste better
May require special storage conditions
Less reliable than doing it yourself
Cost more to make
Can't control the nutritional value of the product
There is a larger risk of cross contamination.
GCSE food technology
References
Food Technology Nelson Thornes, 2001 pg. 144
Components
Food industry
Food ingredients | Standard components (food processing) | [
"Technology"
] | 256 | [
"Food ingredients",
"Components"
] |
4,106,763 | https://en.wikipedia.org/wiki/Donald%20MacRae%20%28astronomer%29 | Donald Alexander MacRae (1916 – December 6, 2006) was a Canadian astronomer.
Born in Halifax, Nova Scotia, he was the Chair of the Department of Astronomy (now Astronomy and Astrophysics) at the University of Toronto and Director of the David Dunlap Observatory from 1965 to 1978. He was one of a few Canadians who were early Ph.D. graduates in Astronomy from Harvard (1943), where he enrolled after graduating from the University of Toronto in 1937.
He appeared in the Academy Award-nominated NFB documentary Universe (1960) as the astronomer.
He introduced radio astronomy to Toronto, constructing a radio telescope. It was small and so worked at higher frequencies than previous radio telescopes. He saw a strong signal, but failed to publish.
He died December 6, 2006, aged 90, shortly after his wife, Margaret Malcolm.
External links
The Journal of the Royal Astronomical Society of Canada, December 1999
A Memorial Tribute in Cassiopeia, December 2006 in PDF or in HTML.
Guide to the Donald Alexander MacRae Papers 1943-1946 at the University of Chicago Special Collections Research Center
1916 births
20th-century Canadian astronomers
Canadian people of Scottish descent
Fellows of the Royal Society of Canada
People from Halifax, Nova Scotia
Harvard Graduate School of Arts and Sciences alumni
University of Toronto alumni
Academic staff of the University of Toronto
2006 deaths | Donald MacRae (astronomer) | [
"Astronomy"
] | 263 | [
"Astronomers",
"Astronomer stubs",
"Astronomy stubs"
] |
4,106,777 | https://en.wikipedia.org/wiki/Pelargonic%20acid | Pelargonic acid, also called nonanoic acid, is an organic compound with structural formula CH3(CH2)7COOH. It is a nine-carbon fatty acid. Nonanoic acid is a colorless oily liquid with an unpleasant, rancid odor. It is nearly insoluble in water, but very soluble in organic solvents. The esters and salts of pelargonic acid are called pelargonates or nonanoates.
The acid is named after the pelargonium plant, since oil from its leaves contains esters of the acid.
Preparation
Together with azelaic acid, it is produced industrially by ozonolysis of oleic acid.
Alternatively, pelargonic acid can be produced in a two-step process beginning with coupled dimerization and hydroesterification of 1,3-butadiene. This step produces a doubly unsaturated C9-ester, which can be hydrogenated to give esters of pelargonic acid.
A laboratory preparation involves permanganate oxidation of 1-decene.
Occurrence and uses
Pelargonic acid occurs naturally as esters in the oil of Pelargonium.
Synthetic esters of pelargonic acid, such as methyl pelargonate, are used as flavorings. Pelargonic acid is also used in the preparation of plasticizers and lacquers. The derivative 4-nonanoylmorpholine is an ingredient in some pepper sprays.
The ammonium salt of pelargonic acid, ammonium pelargonate, is a herbicide. It is commonly used in conjunction with glyphosate, a non-selective herbicide, for a quick burn-down effect in the control of weeds in turfgrass. It works by causing leaks in plant cell membranes, allowing chlorophyll molecules to escape the chloroplast. Under sunlight, these misplaced molecules cause immense oxidative damage to the plant.
The methyl form and ethylene glycol pelargonate act as nematicides against Meloidogyne javanica on Solanum lycopersicum, and the methyl against Heterodera glycines and M. incognita on Glycine max.
Esters of pelargonic acid are precursors to lubricants.
Pharmacological effects
Pelargonic acid may be more potent than valproic acid in treating seizures. Moreover, in contrast to valproic acid, pelargonic acid exhibited no effect on HDAC inhibition, suggesting that it is unlikely to show HDAC inhibition-related teratogenicity.
See also
List of saturated fatty acids
List of carboxylic acids
References
External links
MSDS at affymetrix.com
Alkanoic acids
Herbicides
Nematicides | Pelargonic acid | [
"Biology"
] | 583 | [
"Herbicides",
"Biocides"
] |
4,106,793 | https://en.wikipedia.org/wiki/Wiener%27s%20Tauberian%20theorem | In mathematical analysis, Wiener's tauberian theorem is any of several related results proved by Norbert Wiener in 1932. They provide a necessary and sufficient condition under which any function in L1(R) or L2(R) can be approximated by linear combinations of translations of a given function.
Informally, if the Fourier transform of a function f vanishes on a certain set Z, the Fourier transform of any linear combination of translations of f also vanishes on Z. Therefore, the linear combinations of translations of f cannot approximate a function whose Fourier transform does not vanish on Z.
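The translation-to-modulation identity behind this informal argument can be written out explicitly; the following short derivation uses only standard Fourier-transform properties:

```latex
% Translation becomes modulation under the Fourier transform:
\widehat{f(\,\cdot - a)}(\xi) = e^{-ia\xi}\,\hat{f}(\xi).
% Hence, for a finite linear combination of translates,
g(x) = \sum_k c_k\, f(x - a_k)
\;\Longrightarrow\;
\hat{g}(\xi) = \Bigl(\sum_k c_k\, e^{-ia_k\xi}\Bigr)\hat{f}(\xi),
% so \hat{g} vanishes wherever \hat{f} does, and the same property
% passes to limits of such combinations.
```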
Wiener's theorems make this precise, stating that linear combinations of translations of f are dense if and only if the zero set of the Fourier transform of f is empty (in the case of L1(R)) or of Lebesgue measure zero (in the case of L2(R)).
Gelfand reformulated Wiener's theorem in terms of commutative C*-algebras, when it states that the spectrum of the group ring L1(R) of the group R of real numbers is the dual group of R. A similar result is true when R is replaced by any locally compact abelian group.
Introduction
A typical tauberian theorem is the following result, for . If:
as
as ,
then
Generalizing, let be a given function, and be the proposition
Note that one of the hypotheses and the conclusion of the tauberian theorem has the form , respectively, with and
The second hypothesis is a "tauberian condition".
Wiener's tauberian theorems have the following structure:
If is a given function such that , , and , then holds for all "reasonable" .
Here is a "tauberian" condition on , and is a special condition on the kernel . The power of the theorem is that holds, not for a particular kernel , but for all reasonable kernels .
The Wiener condition is roughly a condition on the zeros of the Fourier transform of the kernel. For instance, for kernels of class L1(R), the condition is that the Fourier transform does not vanish anywhere. This condition is often easily seen to be a necessary condition for a tauberian theorem of this kind to hold. The key point is that this easy necessary condition is also sufficient.
The condition in L1(R)
Let f ∈ L1(R) be an integrable function. The span of its translations fa(x) = f(x + a) is dense in L1(R) if and only if the Fourier transform of f has no real zeros.
Tauberian reformulation
The following statement is equivalent to the previous result, and explains why Wiener's result is a Tauberian theorem:
Suppose the Fourier transform of f ∈ L1(R) has no real zeros, and suppose the convolution f∗h tends to zero at infinity for some h ∈ L∞(R). Then the convolution g∗h tends to zero at infinity for any g ∈ L1(R).
More generally, if f∗h(x) → A∫f as x → ∞ for some f ∈ L1(R) the Fourier transform of which has no real zeros, then also g∗h(x) → A∫g as x → ∞ for any g ∈ L1(R).
Discrete version
Wiener's theorem has a counterpart in ℓ1(Z): the span of the translations of f ∈ ℓ1(Z) is dense if and only if the Fourier series φ(θ) = Σn f(n)e^{inθ} has no real zeros. The following statements are equivalent versions of this result:
Suppose the Fourier series of f ∈ ℓ1(Z) has no real zeros, and for some bounded sequence h the convolution f∗h tends to zero at infinity. Then g∗h also tends to zero at infinity for any g ∈ ℓ1(Z).
Let φ be a function on the unit circle with absolutely convergent Fourier series. Then 1/φ has absolutely convergent Fourier series if and only if φ has no zeros.
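Wiener's 1/φ statement — that the reciprocal of a nowhere-vanishing function with absolutely convergent Fourier series again has an absolutely convergent Fourier series — can be checked numerically. The sketch below is my own illustration (the test function φ(θ) = 2 + cos θ is not from the article): it recovers the Fourier coefficients of 1/φ by FFT and observes that they decay geometrically, so their absolute sum is finite. For this particular φ the coefficients of 1/φ alternate in sign, so the absolute sum equals 1/φ(π) = 1 exactly.

```python
import numpy as np

# phi(theta) = 2 + cos(theta) has an absolutely convergent Fourier series
# (only three nonzero coefficients) and no zeros, so by Wiener's theorem
# 1/phi must also have an absolutely convergent Fourier series.
N = 4096
theta = 2 * np.pi * np.arange(N) / N
phi = 2 + np.cos(theta)

# Fourier coefficients c_n of 1/phi, approximated by the discrete transform.
c = np.fft.fft(1.0 / phi) / N

abs_sum = np.sum(np.abs(c))   # approximates sum_n |c_n|
c0 = c[0].real                # mean value of 1/phi over the circle

# Exact values: c_n = (-1)^n (2 - sqrt(3))^|n| / sqrt(3), so
# c0 = 1/sqrt(3) ~ 0.57735 and sum_n |c_n| = 1/phi(pi) = 1.
print(round(c0, 6), round(abs_sum, 6))
```

The geometric decay rate 2 − √3 ≈ 0.27 comes from the root of z² + 4z + 1 inside the unit disk, which is the generic mechanism: a nonvanishing trigonometric polynomial has reciprocal coefficients decaying exponentially.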
Gelfand showed that this is equivalent to the following property of the Wiener algebra A(T),
which he proved using the theory of Banach algebras, thereby giving a new proof of Wiener's result:
The maximal ideals of A(T) are all of the form Mx = { f ∈ A(T) : f(x) = 0 }, x ∈ T.
The condition in L2(R)
Let f ∈ L2(R) be a square-integrable function. The span of its translations fa(x) = f(x + a) is dense in L2(R) if and only if the real zeros of the Fourier transform of f form a set of zero Lebesgue measure.
The parallel statement in ℓ2(Z) is as follows: the span of translations of a sequence f ∈ ℓ2(Z) is dense if and only if the zero set of the Fourier series φ(θ) = Σn f(n)e^{inθ} has zero Lebesgue measure.
Notes
References
External links
Real analysis
Harmonic analysis
Tauberian theorems | Wiener's Tauberian theorem | [
"Mathematics"
] | 803 | [
"Theorems in mathematical analysis",
"Tauberian theorems"
] |
4,106,859 | https://en.wikipedia.org/wiki/Constant%20altitude%20plan%20position%20indicator | The constant altitude plan position indicator, better known as CAPPI, is a radar display which gives a horizontal cross-section of data at constant altitude. It was developed by the Stormy Weather Group at McGill University in Montreal to circumvent some problems with the PPI:
Altitude changing with distance to the radar.
Problems with ground echoes near the radar.
Definition and history
In 1954, McGill University obtained a new radar (CPS-9) which had a better resolution and used FASE (Fast Azimuth Slow Elevation) to program multi-angle soundings of the atmosphere.
In 1957, Langleben and Gaherty developed a scheme with FASE to keep only the data at a certain height at each angle and scan on 360 degrees. If we look at the diagram, each angle of elevation or PPI has data at height X at a certain distance from the radar. Using the data at the right distance, one forms an annular ring of data at height X. Assembling all the rings coming from the different angles gives you the CAPPI.
The CAPPI is composed of data from each angle that is at the height requested for the cross-section (bold lines in zig-zag on the left diagram). In the early days, the scan data collected were shown directly on the cathodic screen, and a photosensitive device captured each ring as it was completed. Then all those photographed rings were assembled. By 1958, East developed a real-time assembly instead of a delayed one. By the mid-1970s, computer developments made it possible to gather data in electronic form and produce CAPPIs more easily.
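The ring-selection scheme described above can be sketched in code. This is a simplified model of my own (the function names and the 0.25 km tolerance are assumptions; operational systems interpolate between angles): beam height is estimated with the usual 4/3-effective-earth-radius approximation, and for each elevation angle one keeps the slant range whose beam height is closest to the requested CAPPI altitude, giving one annular ring per angle.

```python
import numpy as np

# 4/3-effective-earth-radius beam-height approximation:
#   h(r, theta) ~ r*sin(theta) + r**2 / (2 * (4/3) * R_earth)
R_EFF = 4.0 / 3.0 * 6371.0  # effective earth radius, km

def beam_height_km(slant_range_km, elev_deg):
    """Approximate beam-centre height (km) above the radar."""
    theta = np.radians(elev_deg)
    return slant_range_km * np.sin(theta) + slant_range_km**2 / (2 * R_EFF)

def cappi_ring_ranges(elev_angles_deg, alt_km, max_range_km=250.0):
    """For each elevation angle, find the slant range whose beam height is
    closest to the requested CAPPI altitude: one annular ring per angle."""
    ranges = np.linspace(1.0, max_range_km, 2500)
    rings = {}
    for elev in elev_angles_deg:
        heights = beam_height_km(ranges, elev)
        i = np.argmin(np.abs(heights - alt_km))
        # Keep the ring only if the beam actually passes near that altitude.
        if abs(heights[i] - alt_km) < 0.25:
            rings[elev] = ranges[i]
    return rings

rings = cappi_ring_ranges([0.5, 1.5, 3.5, 7.0, 15.0], alt_km=1.5)
# Lower angles contribute rings far from the radar, higher angles close in.
```

Note how the lowest angle covers the outer ring: this is exactly why, beyond the last usable range, a CAPPI has to fall back on lowest-PPI data.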
Today, weather radars collect in real-time data on a large number of angles. Many countries such as Canada, UK and Australia, scan a large enough number of angles with their radars to have an almost continuous vertical view (taking into account the radar beam width) and produce CAPPIs. Other countries, like France and United States, use fewer angles and prefer PPIs or composite of maximum reflectivities above a point.
Usage
Above right is an example of a CAPPI at 1.5 km altitude. As the diagram of angles shows, depending on the height of the CAPPI there comes a distance beyond which no data are available. On a CAPPI, the portion beyond this distance then shows data from the lowest PPI. The higher the CAPPI is above ground, the smaller that PPI zone is.
References
Bibliography
David Atlas, Radar in Meteorology: Battan Memorial and 40th Anniversary Radar Meteorology Conference, published by American Meteorological Society, Boston, 1990, 806 pages, , AMS Code RADMET.
Yves Blanchard, Le radar, 1904–2004: histoire d'un siècle d'innovations techniques et opérationnelles, published by Ellipses, Paris, France, 2004.
Richard Doviak and Dusan S. Zrnic, Doppler Radar and Weather Observations, Second Edition, Academic Press, San Diego, Cal., 1993, 562 pages.
Roger M. Wakimoto and Ramesh Srivastava, Radar and Atmospheric Science: A Collection of Essays in Honor of David Atlas, published by the American Meteorological Society, Boston, August 2003. Series: Meteorological Monograph, Volume 30, number 52, 270 pages; AMS Code MM52.
Meteorological instrumentation and equipment
Radar meteorology
Radar | Constant altitude plan position indicator | [
"Technology",
"Engineering"
] | 688 | [
"Meteorological instrumentation and equipment",
"Measuring instruments"
] |
4,107,063 | https://en.wikipedia.org/wiki/Royal%20Astronomical%20Society%20of%20Canada | The Royal Astronomical Society of Canada (RASC) is a national, non-profit, charitable organization devoted to the advancement of astronomy and related sciences. At present, there are 30 local branches of the Society, called Centres, in towns and cities across the country from St. John's, Newfoundland, to Victoria, British Columbia, and as far north as Whitehorse, Yukon. There are about 5100 members from coast to coast to coast, and internationally. The membership is composed primarily of amateur astronomers and also includes numerous professional astronomers and astronomy educators. The RASC is the Canadian equivalent of the British Astronomical Association.
History
The RASC has its original roots in Toronto, Ontario, Canada, where in 1868 a group of friends began meeting as part of the "Toronto Astronomical Club." The club was formally incorporated as "The Astronomical and Physical Society of Toronto" in 1890, and this is considered the founding date of the Society. The club grew over time, and by 1900, surrounding communities were affiliated with the group. On 1903 March 3, the club was renamed to "The Royal Astronomical Society of Canada" after petitioning King Edward VII to use the prefix "Royal" in the group's name. At the time it had 120 members. In the more than a century since its formal incorporation, the RASC has expanded across Canada with Centres in 30 cities, reaching every province of Canada with the exception of Prince Edward Island.
Organization
Mandate
The RASC mandate is five-fold:
to stimulate interest and to promote and increase knowledge in astronomy and related sciences;
to acquire and maintain equipment, libraries, and other property necessary for the pursuit of its aims;
to publish journals, books, and other material containing information on the progress of astronomy and the work of the Society;
to receive and administer gifts, donations, and bequests from members of the Society and others;
to make contributions and render assistance to individuals and institutions engaged in the study and advancement of astronomy.
Society Office
The Society Office in Toronto employs three staff.
Board of directors
President (1-year term)
1st Vice-President (1-year term, Chair of Publications Committee and Chair of Constitution Committee)
2nd Vice-President (1-year term, Chair of Nominating Committee)
Treasurer (1-year term, Chair of Finance Committee)
National Secretary (1-year term)
Up to four (4) Directors
Executive Director (Appointed - non-voting)
National Council
Centre Representatives
At least one representative from each Centre, plus two Unattached Member reps
Past Presidents
Immediate Past President
Editors
Observer's Handbook Editor (5-year term)
Journal Editor (5-year term)
Observer's Calendar Editor (5-year term)
Bulletin Editor (5-year term)
Staff (Non-voting)
Finance Officer
Permanent Committees (Chairs)
Astroimaging
Awards (Past President)
Constitution (1st Vice-President)
Education and Public Outreach
Finance (Treasurer)
Fundraising
History
Information Technology
Light-Pollution Abatement
Membership and Development
Nominating (2nd Vice-President)
Observing
Publications (1st Vice-President)
Conduct of Business
The RASC conducts business through a Board of Directors with regular meetings, plus two scheduled meetings at the General Assembly, which is traditionally held on the May or July long weekend (GA). The GA is hosted by one of the Centres, with annual meetings alternating between eastern and western Canada. Meetings follow Robert's Rules of Order and are governed by the By-Laws of the Society.
Centres
Each of the Centres of the Society conduct a variety of activities of interest to its members and to the public. At regular meetings, well-known professional and amateur astronomers give lectures on a variety of topics of current interest. In addition, there are study and special-interest groups. Most Centres publish their own newsletters and hold their own group-observing events. Some members take part in regular observations of variable stars, lunar occultations, sunspots, meteors, comets, and other phenomena; others develop special skills such as astroimaging at workshops.
Outreach
Most Centres have public education programs, including special outreach star nights when the public is given an opportunity to look through a telescope courtesy of a RASC volunteer. In 2009, the International Year of Astronomy, many Centres were instrumental in organizing events of educational astronomy outreach for their local communities. The RASC's Light-Pollution Abatement Committee also administers Canada's Dark-sky preserve program, working with provincial and national parks to create management agreements to preserve the darkness of the nighttime sky.
Resources
Many Centres have observing equipment, libraries, and observing locations. For example, the Victoria Centre has telescopes and a large library of books and periodicals available to members in good standing. Additionally the Victoria Centre built and operates the "RASC Victoria Centre Observatory (RASC VCO)" which is located at the Dominion Astrophysical Observatory. The Society has recently purchased a robotic telescope.
Publications and awards
The RASC publishes a number of books and periodicals, and issues awards to recognize accomplishments in astronomy and outreach activities.
Recurring Publications
The annual Observer's Handbook (2021: ) can be found in observatory control rooms and astronomers' reference shelves worldwide. Published in the autumn of the year, the 352-page Handbook contains detailed information on astronomical events in the upcoming year and is an in-depth reference of significant astronomical data such as observing techniques, physical constants, and optical properties of telescopes. The first two editions were published in 1907 and 1908, respectively. For the following two years information from the Observer's Handbook was integrated into the main Journal, but it was decided eventually that the Handbook return to circulation. The 3rd edition of the Observer's Handbook was published in January 1911, with Editor C. A. Chant aiming to publish the 1912 edition in the autumn of that year. The 110th edition was published in 2017, covering events in 2018. In addition, for the first time, a USA Edition was created for the American audience, in cooperation with the Astronomical League. The publication is currently in its 113th edition published in 2020, covering the events of 2021.
The Journal of the Royal Astronomical Society of Canada (ISSN 0035-872X) (bib. code - JRASC), continuously published since 1907, is a bi-monthly periodical that features articles about Canadian astronomers, activities of the RASC and its Centres, and peer-reviewed research papers.
The Observer's Calendar (2017: ) features photos of an astronomical subject taken by amateur astronomers using CCD and other camera equipment on amateur instruments. Each photograph is given an informative caption along with comprehensive astronomical data for dates throughout each month.
Explore the Universe Guide; An Introduction to the RASC ETU Certificate Program () is a book for the casual backyard astronomer who is thinking about getting serious.
See also
Société d'astronomie de Montréal
List of astronomical societies
References
External links
Links to individual RASC Centres' Web sites.
Archival papers of Frank Scott Hogg, president and assistant editor, held at the University of Toronto Archives and Records Management Services
Archival papers of Ruth Josephine Northcott, first female chair (1942–1943) and editor (1956–1969), held at the University of Toronto Archives and Records Management Services
Astronomy organizations
Astronomy societies
Amateur astronomy organizations
Higher education in Canada
Learned societies of Canada
Professional associations based in Canada
Organizations based in Canada with royal patronage
Scientific organizations established in 1868
Astronomy in Canada
1868 establishments in Ontario | Royal Astronomical Society of Canada | [
"Astronomy"
] | 1,497 | [
"Astronomy societies",
"Amateur astronomy organizations",
"Astronomy organizations"
] |
5,470,607 | https://en.wikipedia.org/wiki/Mycangium | The term mycangium (pl., mycangia) is used in biology for special structures on the body of an animal that are adapted for the transport of symbiotic fungi (usually in spore form). This is seen in many xylophagous insects (e.g. horntails and bark beetles), which apparently derive much of their nutrition from the digestion of various fungi that are growing amidst the wood fibers. In some cases, as in ambrosia beetles (Coleoptera: Curculionidae: Scolytinae and Platypodinae), the fungi are the sole food, and the excavations in the wood are simply to make a suitable microenvironment for the fungus to grow. In other cases (e.g., the southern pine beetle, Dendroctonus frontalis), wood tissue is the main food, and fungi weaken the defense response from the host plant.
Some species of phoretic mites that ride on the beetles, have their own type of mycangium, but for historical reasons, mite taxonomists use the term acarinarium. Apart from riding on the beetles, the mites live together with them in their burrows in the wood.
Origin
These structures were first systematically described by Helene Francke-Grosmann in 1956. Lekh R. Batra then coined the word mycangia: modern Latin, from Greek myco 'fungus' + angeion 'vessel'.
Function
The most common function of mycangia is preserving and releasing symbiotic inoculum. Usually, the symbiotic inoculum in mycangia benefits its vector (typically an insect or mite), helping it adapt to a new environment or providing nutrients for the vector itself and its descendants.
For example, the ambrosia beetle (Euwallacea fornicatus) carries the symbiotic fungus Fusarium. When the beetle bores into a host plant, it releases the symbiotic fungus from its mycangium. The symbiotic fungus becomes a plant pathogen, acting to weaken the resistance of the host plant. In the meantime, the fungus grows quickly in the galleries as the main food of the beetle. After reproduction, maturing beetles fill their mycangia with the symbiont before hunting for a new host plant.
Mycangia therefore play an important role in protecting the inoculum from degradation and contamination. The structures of mycangia always resemble a pouch or a container, with caps or a small opening that reduces the possibility of contamination from outside. How mycangia release their inoculum is still unknown.
Mycangia and symbiotic inoculum
Most of the inoculum in mycangia are fungi. The symbiotic inoculum of most bark and ambrosia beetles are fungi belonging to Ophiostomatales (Ascomycota: Sordariomycetidae) and Microascales (Ascomycota: Hypocreomycetidae). Symbiotic fungi in mycangia of woodwasps are Amylostereaceae (Basidiomycota: Russulales). Symbiotic fungi in mycangia of lizard beetles are yeast (Ascomycota: Saccharomycetales). Symbiotic fungi in mycangia of ship-timber beetles are Endomyces (Ascomycota: Dipodascaceae). Symbiotic fungi in mycangia of leaf-rolling weevils are Penicillium fungi (Ascomycota: Trichocomaceae). In addition to the above primary symbiotic fungi, secondary fungi and some bacteria have been isolated from mycangia.
Mycangia in insects
Mycangia in bark and ambrosia beetles
Mycangia of bark and ambrosia beetles (Curculionidae: Scolytinae and Platypodinae) are often complex cuticular invaginations for transport of symbiotic fungi. Phloem-feeding bark beetles (Curculionidae: Scolytinae) usually have numerous small pits on the surface of their body, while ambrosia beetles (many Scolytinae and all Platypodinae), which are completely dependent on their fungal symbiont, have deep and complicated pouches. These mycangia are often equipped with glands secreting substances to support fungal spores and perhaps to nourish mycelium during transport. In many cases, the entrance to a mycangium is surrounded by tufts of setae, aiding in scraping mycelium and spores from walls of the tunnels and directing the spores into the mycangium. The mycangia of ambrosia beetles are highly diverse: different genera or tribes have different kinds of mycangia. Some are oral mycangia in the head, as in the genera Ambrosiodmus and Euwallacea; others are pronotal mycangia, as in the genera Xylosandrus and Cnestus.
Mycangia in woodwasps (horntails)
Mycangia of the woodwasps (Hymenoptera: Siricidae) were first described by Buchner. Unlike the highly diverse types found in bark and ambrosia beetles, woodwasps have only a pair of mycangia at the top of their ovipositor. When females deposit their eggs inside the host plant, they inject the symbiotic fungi from the mycangia and phytotoxic mucus from a separate, reservoir-like structure.
Mycangia in lizard beetles
One species of lizard beetle, Doubledaya bucculenta (Coleoptera: Erotylidae), has mycangia on the tergum of the eighth abdominal segment. This ovipositor-associated mycangium is only present in adult females. Before Doubledaya bucculenta deposits its eggs and injects the symbiotic microorganisms into a recently dead bamboo, it excavates a small hole through the bamboo culm.
Mycangia in ship-timber beetles
The ship-timber beetles (Coleoptera: Lymexylidae) are another family of wood-boring beetles that live with symbiotic fungi. Buchner first discovered their mycangia, located on the ventral side of the long ovipositor. These mycangia form a pair of integumental pouches, one at either side near the tip of the oviduct. When the female lays her eggs, the new eggs are coated with the fungal spores.
Mycangia in leaf-rolling weevils
Females of the leaf-rolling weevils in the genus Euops (Coleoptera: Attelabidae) store symbiotic fungi in mycangia located between the first ventral segment of the abdomen and the thorax. Unlike the ovipositor-associated mycangia of woodwasps, lizard beetles, and ship-timber beetles, the mycangium of leaf-rolling weevils is a pair of spore incubators at the anterior end of the abdomen, formed by the coxa and the metendosternite at the posterior end of the thorax.
Mycangia in stag beetles
Mycangia of the stag beetles (Coleoptera: Lucanidae) were discovered, in Japan, only this century. This ovipositor-associated mycangium is located in a dorsal fold of the integument between the last two tergal plates of adult females, and it has been examined in many species. A female everts the mycangium for the first time soon after eclosion, to retrieve the symbionts left by the larva in the pupal chamber when it emptied its gut before pupating. Later, when ovipositing, she everts it to pass the inoculum on to the next generation.
References
Symbiosis
Insect anatomy | Mycangium | [
"Biology"
] | 1,667 | [
"Biological interactions",
"Behavior",
"Symbiosis"
] |
5,471,083 | https://en.wikipedia.org/wiki/Axiom%20A | In mathematics, Smale's axiom A defines a class of dynamical systems which have been extensively studied and whose dynamics is relatively well understood. A prominent example is the Smale horseshoe map. The term "axiom A" originates with Stephen Smale. The importance of such systems is demonstrated by the chaotic hypothesis, which states that, 'for all practical purposes', a many-body thermostatted system is approximated by an Anosov system.
Definition
Let M be a smooth manifold with a diffeomorphism f: M→M. Then f is an axiom A diffeomorphism if
the following two conditions hold:
The nonwandering set of f, Ω(f), is a hyperbolic set and compact.
The set of periodic points of f is dense in Ω(f).
For surfaces, hyperbolicity of the nonwandering set implies the density of periodic points, but this is no longer true in higher dimensions. Nonetheless, axiom A diffeomorphisms are sometimes called hyperbolic diffeomorphisms, because the portion of M where the interesting dynamics occurs, namely, Ω(f), exhibits hyperbolic behavior.
Axiom A diffeomorphisms generalize Morse–Smale systems, which satisfy further restrictions (finitely many periodic points and transversality of stable and unstable submanifolds). The Smale horseshoe map is an axiom A diffeomorphism with infinitely many periodic points and positive topological entropy.
Properties
Any Anosov diffeomorphism satisfies axiom A. In this case, the whole manifold M is hyperbolic (although it is an open question whether the non-wandering set Ω(f) constitutes the whole M).
Rufus Bowen showed that the non-wandering set Ω(f) of any axiom A diffeomorphism supports a Markov partition. Thus the restriction of f to a certain generic subset of Ω(f) is conjugated to a shift of finite type.
The density of the periodic points in the non-wandering set implies its local maximality: there exists an open neighborhood U of Ω(f) such that Ω(f) = ∩n∈Z f^n(U).
Omega stability
An important property of Axiom A systems is their structural stability against small perturbations. That is, trajectories of the perturbed system remain in 1-1 topological correspondence with the unperturbed system. This property is important, in that it shows that Axiom A systems are not exceptional, but are in a sense 'robust'.
More precisely, for every C1-perturbation fε of f, its non-wandering set is formed by two compact, fε-invariant subsets Ω1 and Ω2. The first subset is homeomorphic to Ω(f) via a homeomorphism h which conjugates the restriction of f to Ω(f) with the restriction of fε to Ω1, i.e. h ∘ f|Ω(f) = fε|Ω1 ∘ h.
If Ω2 is empty then h is onto Ω(fε). If this is the case for every perturbation fε then f is called omega stable. A diffeomorphism f is omega stable if and only if it satisfies axiom A and the no-cycle condition (that an orbit, once having left an invariant subset, does not return).
See also
Ergodic flow
References
Ergodic theory
Diffeomorphisms | Axiom A | [
"Mathematics"
] | 702 | [
"Ergodic theory",
"Dynamical systems"
] |
5,472,952 | https://en.wikipedia.org/wiki/Ernst%20R.%20G.%20Eckert | Ernst Rudolph Georg Eckert (September 13, 1904 – July 8, 2004) was an Austrian American engineer and scientist who advanced the film cooling technique for aeronautical engines. He earned his Diplom Ingenieur and doctorate in 1927 and 1931, respectively, and habilitated in 1938. Eckert worked as a jet engine scientist at the Hermann Göring Aviation Research Institute near Braunschweig, Germany, then via Operation Paperclip, began jet propulsion research in 1945 at Wright-Patterson Air Force Base. In 1951, Eckert joined the University of Minnesota in the department of mechanical engineering. Eckert published over 550 scientific papers and books. The Eckert number in fluid dynamics was named after him.
In 1995 the National Academy of Engineering honored Eckert with its thirteenth Founders Award.
Eckert's son-in-law Horst Henning Winter, a specialist in rheology, is professor at UMass Amherst.
References and notes
External links
Short biography of Ernst R. G. Eckert
1904 births
2004 deaths
American aerospace engineers
American science writers
American technology writers
People from Austria-Hungary
Immigrants to the United States
German people of World War II
20th-century German physicists
NASA people
Scientists from Prague
Engineering educators
University of Minnesota faculty
Fluid dynamicists
Aerodynamicists
Czech Technical University in Prague alumni
German aerospace engineers
Operation Paperclip
20th-century Austrian engineers
20th-century American physicists
20th-century American engineers | Ernst R. G. Eckert | [
"Chemistry"
] | 289 | [
"Fluid dynamicists",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
5,473,033 | https://en.wikipedia.org/wiki/Inhabited%20set | In mathematics, a set is inhabited if there exists an element .
In classical mathematics, the property of being inhabited is equivalent to being non-empty. However, this equivalence is not valid in constructive or intuitionistic logic, and so this separate terminology is mostly used in the set theory of constructive mathematics.
Definition
In the formal language of first-order logic, a set A has the property of being inhabited if ∃x (x ∈ A).
Related definitions
A set A has the property of being empty if ∀x (x ∉ A), or equivalently ¬∃x (x ∈ A). Here x ∉ A stands for the negation ¬(x ∈ A).
A set A is non-empty if it is not empty, that is, if ¬∀x (x ∉ A), or equivalently ¬¬∃x (x ∈ A).
Theorems
Modus ponens implies (P ∧ (P → B)) → B, and taking any false proposition for B establishes that P → ¬¬P is always valid. Hence, any inhabited set is provably also non-empty.
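This one-directional implication can also be checked mechanically; the following is a minimal sketch in Lean 4 (the definition names are illustrative, not Mathlib's):

```lean
variable {α : Type} (A : α → Prop)

-- "A is inhabited": an explicit witness exists
def IsInhabited : Prop := ∃ x, A x

-- "A is non-empty": emptiness is contradictory
def IsNonEmpty : Prop := ¬ ¬ ∃ x, A x

-- Inhabited implies non-empty already in intuitionistic logic;
-- the converse would require double-negation elimination.
theorem inhabited_to_nonempty (h : IsInhabited A) : IsNonEmpty A :=
  fun hne => hne h
```

The proof term is just an application of the witness to the assumed refutation, mirroring the modus ponens argument above; no classical axiom is invoked.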
Discussion
In constructive mathematics, the double-negation elimination principle is not automatically valid. In particular, an existence statement is generally stronger than its double-negated form. The latter merely expresses that the existence cannot be ruled out, in the strong sense that it cannot consistently be negated. In a constructive reading, in order for ∃x φ(x) to hold for some formula φ, it is necessary for a specific value of x satisfying φ to be constructed or known. Likewise, the negation of a universally quantified statement is in general weaker than an existential quantification of a negated statement. In turn, a set may be proven to be non-empty without one being able to prove it is inhabited.
Examples
Sets that come with an explicitly given element are inhabited, that element serving as the witness. The empty set is empty and thus not inhabited. Naturally, the example section thus focuses on non-empty sets that are not provably inhabited.
It is easy to give such examples by using the axiom of separation, as with it logical statements can always be translated to set-theoretical ones. For example, with a subset defined as B = {x ∈ {0} | P} for a proposition P, the proposition P may always equivalently be stated as 0 ∈ B. The double-negated existence claim of an entity with a certain property can be expressed by stating that the set of entities with that property is non-empty.
Example relating to excluded middle
Define a subset B of {0, 1} via

B = {x ∈ {0, 1} | (x = 0 ∧ P) ∨ (x = 1 ∧ ¬P)}

for some proposition P. Clearly 0 ∈ B ↔ P and 1 ∈ B ↔ ¬P, and from the principle of non-contradiction one concludes ¬(0 ∈ B ∧ 1 ∈ B). Further, (P ∨ ¬P) → (0 ∈ B ∨ 1 ∈ B), and in turn
(P ∨ ¬P) → ∃x (x ∈ B). Already minimal logic proves ¬¬(P ∨ ¬P), the double-negation of any excluded middle statement. So by performing two contrapositions on the previous implication, one establishes ¬¬∃x (x ∈ B). In words: it cannot consistently be ruled out that one of the numbers 0 and 1 inhabits B. In particular, the latter can be weakened to saying B is proven non-empty.
As example statements for P, consider famously theory-independent statements such as the continuum hypothesis, the consistency of the sound theory at hand, or, informally, an unknowable claim about the past or future. By design, these are chosen to be unprovable. A variant of this is to consider mathematical propositions that are merely not yet established - see also Brouwerian counterexamples.
Knowledge of the validity of either P or ¬P is equivalent to knowledge about membership in B as above, and cannot be obtained.
Given that neither P nor ¬P can be proven in the theory, it will also not prove B to be inhabited by some particular number. Further, a constructive framework with the disjunction property then cannot prove ∃x (x ∈ B) either. There is no evidence for 0 ∈ B, nor for 1 ∈ B, and constructive unprovability of their disjunction reflects this. Nonetheless, since ruling out excluded middle is provenly always inconsistent, it is also established that B is not empty. Classical logic adopts excluded middle axiomatically, spoiling a constructive reading.
Example relating to choice
There are various easily characterized sets whose existence is not provable in ZF, but which are implied to exist by the full axiom of choice. As such, that axiom is itself independent of ZF. It in fact contradicts other potential axioms for a set theory. Further, it also contradicts constructive principles in a set theory context. A theory that does not permit excluded middle also does not validate the full function-existence principle.
In ZF, the axiom of choice is equivalent to the statement that every vector space has a basis. So more concretely, consider the question of the existence of a Hamel basis of the real numbers over the rational numbers. This object is elusive in the sense that there are different models that negate and validate its existence, respectively. So it is also consistent to just postulate that existence cannot be ruled out here, in the sense that it cannot consistently be negated. Again, that postulate may be expressed as saying that the set of such Hamel bases is non-empty. Over a constructive theory, such a postulate is weaker than the plain existence postulate, but (by design) is still strong enough to negate all propositions that would imply the non-existence of a Hamel basis.
Model theory
Because inhabited sets are the same as non-empty sets in classical logic, it is not possible to produce a model in the classical sense that contains a non-empty set A but does not satisfy "A is inhabited".

However, it is possible to construct a Kripke model that differentiates between the two notions. Since an implication is true in every Kripke model if and only if it is provable in intuitionistic logic, this indeed establishes that one cannot intuitionistically prove that "A is non-empty" implies "A is inhabited".
See also
Type inhabitation in type theory.
References
D. Bridges and F. Richman. 1987. Varieties of Constructive Mathematics. Oxford University Press.
Basic concepts in set theory
Concepts in logic
Constructivism (mathematics)
Mathematical objects
Set theory | Inhabited set | [
"Mathematics"
] | 1,140 | [
"Set theory",
"Mathematical logic",
"Mathematical objects",
"Basic concepts in set theory",
"Constructivism (mathematics)"
] |
5,473,402 | https://en.wikipedia.org/wiki/Pair%20distribution%20function | The pair distribution function describes the distribution of distances between pairs of particles contained within a given volume. Mathematically, if a and b are two particles, the pair distribution function of b with respect to a, denoted by is the probability of finding the particle b at distance from a, with a taken as the origin of coordinates.
Overview
The pair distribution function is used to describe the distribution of objects within a medium (for example, oranges in a crate or nitrogen molecules in a gas cylinder). If the medium is homogeneous (i.e. every spatial location has identical properties), then there is an equal probability density for finding an object at any position r:

p(r) = 1/V,

where V is the volume of the container. On the other hand, the likelihood of finding pairs of objects at given positions (i.e. the two-body probability density p(r1, r2)) is not uniform. For example, pairs of hard balls must be separated by at least the diameter of a ball. The pair distribution function g(r1, r2) is obtained by scaling the two-body probability density function by the total number of objects and the size of the container; in the common case where the number of objects in the container is large, this gives

g(r1, r2) = V^2 p(r1, r2).
Simple models and general properties
The simplest possible pair distribution function assumes that all object locations are mutually independent, giving

g(r) = 1,

where r is the separation between a pair of objects. However, this is inaccurate in the case of hard objects as discussed above, because it does not account for the minimum separation required between objects. The hole-correction (HC) approximation provides a better model:

g(r) = 0 for r < D, and g(r) = 1 for r ≥ D,

where D is the diameter of one of the objects.
Although the HC approximation gives a reasonable description of sparsely packed objects, it breaks down for dense packing. This may be illustrated by considering a box completely filled by identical hard balls so that each ball touches its neighbours. In this case, every pair of balls in the box is separated by a distance of exactly nD, where n is a positive whole number. The pair distribution for a volume completely filled by hard spheres is therefore a set of Dirac delta functions of the form

g(r) ∝ Σn δ(r − nD).
Finally, it may be noted that a pair of objects which are separated by a large distance have no influence on each other's position (provided that the container is not completely filled). Therefore,

g(r) → 1 as r → ∞.
In general, a pair distribution function will take a form somewhere between the sparsely packed (HC approximation) and the densely packed (delta function) models, depending on the packing density.
Radial distribution function
Of special practical importance is the radial distribution function, which is independent of orientation. It is a major descriptor for the atomic structure of amorphous materials (glasses, polymers) and liquids. The radial distribution function can be calculated directly from physical measurements like light scattering or x-ray powder diffraction by performing a Fourier Transform.
In statistical mechanics, the pair distribution function is computed as an ensemble average over the positions of all pairs of particles.
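As an illustration of how a radial distribution function is estimated in practice, the following is a minimal sketch (all names and parameters here are illustrative): histogram the pairwise separations of randomly placed particles and normalize by the ideal-gas expectation for each spherical shell, so that uncorrelated positions give g(r) ≈ 1.

```python
import numpy as np

# Estimate g(r) for N ideal-gas particles in a cubic box of edge L by
# histogramming pair separations (no periodic boundaries, for simplicity).
rng = np.random.default_rng(0)
N, L = 500, 10.0
pos = rng.uniform(0.0, L, size=(N, 3))

# all pairwise distances, keeping each unordered pair once
diff = pos[:, None, :] - pos[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))
iu = np.triu_indices(N, k=1)
r = dist[iu]

edges = np.linspace(0.0, L / 2, 41)
counts, _ = np.histogram(r, bins=edges)
shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
density = N / L ** 3
# expected number of pairs per shell for an ideal gas: (N/2) * density * V_shell
g = counts / (N / 2 * density * shell_vol)
# for uncorrelated positions, g(r) fluctuates around 1 at small r
# (boundary effects pull it somewhat below 1 at larger r)
```

A real analysis would use periodic boundary conditions and minimum-image distances to remove the boundary bias.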
Applications
Thin Film Pair Distribution Function
When thin films are disordered, as they often are in electronic devices, pair distribution analysis is used to probe the strain and structure–property relationships of the material or composition, properties that cannot be accessed in the bulk or crystalline form. A radial-distribution method exists that can view the local structure of a disordered thin film of GeSe2, but its creators noted the need for a better method to view the mid-range order of disordered films. The thin-film pair distribution function (tfPDF) technique uses a statistical distribution of a material's mid-range order that makes important details such as disorder visible. In this technique, 2D data from a scattering measurement is integrated and Fourier transformed into 1D data that gives the probability of bond distances in that material. tfPDF works best in conjunction with other characterization methods such as transmission electron microscopy. Although still a developing methodology, tfPDF can give complete structure–property relationships through a reliable characterization technique.
See also
classical-map hypernetted-chain method
References
Fischer-Colbrie, Bienenstock, Fuoss, Marcus. Phys. Rev. B (1988) 38, 12388
Jensen, K. M., Billinge, S. J. (2015). IUCrJ, 2(5), 481-489.
Statistical mechanics
Condensed matter physics | Pair distribution function | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 867 | [
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Statistical mechanics",
"Matter"
] |
5,473,827 | https://en.wikipedia.org/wiki/Parachlamydiaceae | Parachlamydiaceae is a family of bacteria in the order Chlamydiales. Species in this family have a Chlamydia–like cycle of replication and their ribosomal RNA genes are 80–90% identical to ribosomal genes in the Chlamydiaceae. The Parachlamydiaceae naturally infect amoebae and can be grown in cultured Vero cells. The Parachlamydiaceae are not recognized by monoclonal antibodies that detect Chlamydiaceae lipopolysaccharide.
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI).
Unassigned species:
"Ca. Mesochlamydia elodeae" Corsaro et al. 2012
"Ca. Metachlamydia lacustris" Corsaro et al. 2010
Isolated Endosymbionts include:
Hall's coccus
P9
UV-7
endosymbiont of Acanthamoeba sp. TUME1
endosymbiont of Acanthamoeba sp. UWC22
endosymbiont of Acanthamoeba sp. UWE1
Uncultured lineages include:
Neochlamydia turtle type 1
environmental Neochlamydia
corvenA4
cvC15
cvC7
cvE5
Parachlamydia acanthamoebae has variable Gram staining characteristics and is mesophilic. Trophozoites of Acanthamoeba hosting these strains were isolated from asymptomatic women in Germany and also in an outbreak of humidifier fever (‘Hall’s coccus’) in Vermont USA. Four patients from Nova Scotia whose sera recognized Hall's coccus did not show serological cross-reaction with antigens from the Chlamydiaceae.
See also
List of bacterial orders
List of bacteria genera
Notes
Metachlamydia lacustris and Protochlamydia species were found at the National Center for Biotechnology Information (NCBI) but have no standing with the Bacteriological Code (1990 and subsequent Revision) as detailed by List of Prokaryotic names with Standing in Nomenclature (LPSN) as a result of the following reasons:
• No pure culture isolated or available for prokaryotes.
• Not validly published because the effective publication only documents deposit of the type strain in a single recognized culture collection.
• Not approved and published by the International Journal of Systematic Bacteriology or the International Journal of Systematic and Evolutionary Microbiology (IJSB/IJSEM).
References
Chlamydiota | Parachlamydiaceae | [
"Biology"
] | 555 | [
"Bacteria stubs",
"Bacteria"
] |
5,474,018 | https://en.wikipedia.org/wiki/Auxiliary-field%20Monte%20Carlo | Auxiliary-field Monte Carlo is a method that allows the calculation, by use of Monte Carlo techniques, of averages of operators in many-body quantum mechanical (Blankenbecler 1981, Ceperley 1977) or classical problems (Baeurle 2004, Baeurle 2003, Baeurle 2002a).
Reweighting procedure and numerical sign problem
The distinctive ingredient of "auxiliary-field Monte Carlo" is the fact that the interactions are decoupled by means of the application of the Hubbard–Stratonovich transformation, which permits the reformulation of many-body theory in terms of a scalar auxiliary-field representation. This reduces the many-body problem to the calculation of a sum or integral over all possible auxiliary-field configurations. In this sense, there is a trade-off: instead of dealing with one very complicated many-body problem, one faces the calculation of an infinite number of simple external-field problems.
It is here, as in other related methods, that Monte Carlo enters the game in the guise of importance sampling: the large sum over auxiliary-field configurations is performed by sampling over the most important ones, with a certain probability. In classical statistical physics, this probability is usually given by the (positive semi-definite) Boltzmann factor. Similar factors arise also in quantum field theories; however, these can have indefinite sign (especially in the case of Fermions) or even be complex-valued, which precludes their direct interpretation as probabilities. In these cases, one has to resort to a reweighting procedure (i.e., interpret the absolute value as probability and multiply the sign or phase to the observable) to get a strictly positive reference distribution suitable for Monte Carlo sampling. However, it is well known that, in specific parameter ranges of the model under consideration, the oscillatory nature of the weight function can lead to a bad statistical convergence of the numerical integration procedure. The problem is known as the numerical sign problem and can be alleviated with analytical and numerical convergence acceleration procedures (Baeurle 2002, Baeurle 2003a).
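The reweighting procedure can be illustrated with a one-dimensional toy integral (entirely illustrative, not the many-body algorithm): for an oscillatory weight w(x) = cos(x)·exp(−x²/2), sample from the strictly positive Gaussian reference and move the sign-carrying factor into the estimator.

```python
import numpy as np

# Reweighted Monte Carlo estimate of <x^2> under w(x) = cos(x) * exp(-x^2/2):
#   <O> = E[O * cos(x)] / E[cos(x)],  x drawn from the Gaussian reference.
rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)

phase = np.cos(x)                  # oscillatory (sign-carrying) factor
num = np.mean(x ** 2 * phase)
den = np.mean(phase)               # average "sign"; values near zero signal
                                   # a severe sign problem (poor convergence)
estimate = num / den
# exact results for this toy: E[cos(x)] = exp(-1/2), weighted <x^2> = 0
```

The size of the denominator is the diagnostic: as the average sign or phase shrinks, the statistical error of the ratio blows up, which is the numerical sign problem described above.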
See also
Quantum Monte Carlo
References
Implementations
ALF
QUEST
QMCPACK
External links
Theory and Computation of Advanced Materials and Sensors Group
Quantum mechanics
Monte Carlo methods
Quantum Monte Carlo | Auxiliary-field Monte Carlo | [
"Physics",
"Chemistry"
] | 464 | [
"Monte Carlo methods",
"Quantum chemistry",
"Quantum Monte Carlo",
"Computational physics"
] |
5,474,047 | https://en.wikipedia.org/wiki/EnergyGuide | The EnergyGuide provides consumers in the United States information about the energy consumption, efficiency, and operating costs of appliances and consumer products.
Clothes washers, dishwashers, refrigerators, freezers, televisions, water heaters, window air conditioners, mini-split air conditioners, central air conditioners, furnaces, boilers, heat pumps, and other electronic appliances are all required to have EnergyGuide labels. The label must show the model number, the size, and key features, and prominently display a graph placing the annual operating cost within the range for similar models, together with the estimated yearly energy cost.
Appliance energy labeling was mandated by the Energy Policy and Conservation Act of 1975, which directed the Federal Trade Commission to "develop and administer a mandatory energy labeling program covering major appliances, equipment, and lighting." The first appliance labeling rule was established in 1979 and all products were required to carry the label starting in 1980.
Energy Star is a similar labeling program, but requires more stringent efficiency standards for an appliance to become qualified, and is not a required program, but rather a voluntary one.
See also
EnerGuide – A similar label used in Canada which also includes a whole-house evaluation
European Union energy label – A similar label used in European Union
Energy rating label – A similar label in Australia & New Zealand
References
Certification marks
Product certification
Energy conservation in the United States
Environmental certification marks | EnergyGuide | [
"Mathematics"
] | 285 | [
"Symbols",
"Certification marks"
] |
5,474,056 | https://en.wikipedia.org/wiki/Distributed%20design%20patterns | In software engineering, a distributed design pattern is a design pattern focused on distributed computing problems.
Classification
Distributed design patterns can be divided into several groups:
Distributed communication patterns
Security and reliability patterns
Event driven patterns
Saga pattern
Examples
MapReduce
Bulk synchronous parallel
Remote Session
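Of the examples above, MapReduce is the easiest to sketch on a single machine; a minimal illustration of the pattern's three phases follows (function names are hypothetical, not from any particular framework):

```python
from collections import defaultdict

# MapReduce pattern in miniature: map records to (key, value) pairs,
# shuffle (group) by key, then reduce each key's values independently.
def map_phase(records, mapper):
    for record in records:
        yield from mapper(record)

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    return {key: reducer(values) for key, values in groups.items()}

# the classic word-count example
docs = ["to be or not to be", "to do is to be"]
mapper = lambda doc: ((word, 1) for word in doc.split())
counts = reduce_phase(shuffle(map_phase(docs, mapper)), sum)
# counts["to"] == 4, counts["be"] == 3
```

In a distributed deployment the map and reduce phases run on many workers and the shuffle moves data between them over the network; the single-process version only shows the data flow.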
See also
Software engineering
List of software engineering topics
References
Software design patterns
Distributed computing architecture | Distributed design patterns | [
"Technology"
] | 71 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
5,474,092 | https://en.wikipedia.org/wiki/Waddlia | Waddlia is a genus of bacteria in its own family, Waddliaceae. Species in this genus have a Chlamydia-like cycle of replication and their ribosomal RNA genes are 80–90% identical to ribosomal genes in the Chlamydiaceae.
The type species is Waddlia chondrophila strain WSU 86-1044T, which was isolated from the tissues of a first-trimester aborted bovine fetus. Isolated in 1986, this species was originally characterized as a Rickettsia. DNA sequencing of the ribosomal genes corrected the characterization. Another W. chondrophila strain, 2032/99, was found along with Neospora caninum in a septic stillborn calf.
Waddlia chondrophila may be linked to miscarriages in pregnant women. A study found Waddlia chondrophila present in the placenta and vagina of 32 women, 10 of whom had miscarried. It is hypothesized that the bacterium grows in placental cells, damaging the placenta.
The species Waddlia malaysiensis G817 has been proposed. W. malaysiensis was identified in the urine of Malaysian fruit bats (Eonycteris spelaea).
See also
List of bacterial orders
List of bacteria genera
References
Chlamydiota
Bacteria genera | Waddlia | [
"Biology"
] | 282 | [
"Bacteria stubs",
"Bacteria"
] |
5,474,276 | https://en.wikipedia.org/wiki/Clinton%20Haines | Clinton 'Clint' Haines (10 April 1976 – 10 April 1997) was an Australian computer hacker. He was also known as Harry McBungus, TaLoN and Terminator-Z.
Haines attended Ipswich Grammar School. He wrote his first computer virus in assembly language using the A86 assembler in the early 1990s.
Haines was responsible for the viruses NoFrills, Dudley, X-Fungus/PuKE, Daemaen and 1984. NoFrills infected the Australian Taxation Office (ATO). It was described by anti-virus company manager Len Grooves as "totally unimpressive". Grooves added: "This is a very average virus...It could have been written by any first-year computer student. In fact, it had serious design faults and programming bugs. I would not hire the writer." Nevertheless, the ATO decided to turn off all of its 15,000 computers until the virus was eradicated, to avoid the infection spreading.
His virus Dudley also infected the computers of Telecom Australia, shutting down their system in two hours. The Dudley virus was a variant of the NoFrills code with the text [Oi Dudley!][PuKE].
Haines died from a heroin overdose in 1997, in Saint Lucia, Brisbane, celebrating his 21st birthday. At the time of his death he was completing an undergraduate science degree in microbiology at the University of Queensland. A computer virus was written in his honour (RIP Terminator-Z by VLAD). The virus, named 'Memorial', pays acknowledgement to Haines by placing a message on an infected user's screen.
References
1976 births
1997 deaths
Deaths by heroin overdose in Australia
Hackers
People from Brisbane
Accidental deaths in Queensland
People educated at Ipswich Grammar School | Clinton Haines | [
"Technology"
] | 363 | [
"Lists of people in STEM fields",
"Hackers"
] |
5,475,830 | https://en.wikipedia.org/wiki/SGPIO | Serial general-purpose input/output (SGPIO) is a four-signal (or four-wire) bus used between a host bus adapter (HBA) and a backplane. Of the four signals, three are driven by the HBA and one by the backplane. Typically, the HBA is a storage controller located inside a server, desktop, rack or workstation computer that interfaces with hard disk drives or solid-state drives to store and retrieve data. It is considered an extension of the general-purpose input/output (GPIO) concept.
The SGPIO specification is maintained by the Small Form Factor Committee in the SFF-8485 standard. The International Blinking Pattern Interpretation indicates how SGPIO signals are interpreted into blinking light-emitting diodes (LEDs) on disk arrays and storage back-planes.
History
SGPIO was developed as an engineering collaboration between American Megatrends Inc, at the time makers of back-planes, and LSI-Logic in 2004. SGPIO was later published by the SFF committee as specification SFF-8485.
Host bus adapters
The SGPIO bus consists of four electrical signals; it typically originates from a host bus adapter (HBA). iPass connectors (usually SFF-8087 or SFF-8484) carry both the SAS/SATA electrical connections between the HBA and the hard drives and the four SGPIO signals.
Backplanes with SGPIO bus interface
A backplane is a circuit board with connectors and power circuitry into which hard drives are attached; they can have multiple slots, each of which can be populated with a hard drive. Typically the back-plane is equipped with LEDs which by their color and activity, indicate the slot's status; typically, a slot's LED will emit a particular color or blink pattern to indicate its current status.
SGPIO interpretation and LED blinking patterns
Although many hardware vendors define their own proprietary LED blinking pattern, the common standard for SGPIO interpretation and LED blinking pattern can be found in the IBPI specification.
On back-planes, vendors use typically 2 or 3 LEDs per slot – in both implementations a green LED indicates presence and/or activity – for back-planes with 2 LEDs per slot, the second LED indicates Status whereas in back-planes with 3 LEDs the second and third indicate Locate and Fail.
Electrical characteristics of the SGPIO bus
The SGPIO bus consists of 4 signal lines and originates at the HBA, referred to as the initiator and ends at a back-plane, referred to as the target. If a back-plane (or target) is not present the HBA may still drive the bus without any harm to the system; if one does exist, it can communicate back to the HBA using the 4th wire.
The SGPIO bus is an open collector bus with 2.0 kΩ pull-up resistors located at the HBA and the back-plane – as on any open collector bus information is transferred by devices on the bus pulling the lines to ground (GND) using an open collector transistor or open drain FET.
Signal lines of the SGPIO bus
SClock
The SGPIO bus has a dedicated clock line driven by the initiator (its maximum clock rate is 100 kHz), although many implementations use slower ones (typically 48 kHz).
SLoad
This line is synchronous to the clock and is used to indicate the start of a new frame of data; a new SGPIO frame is indicated by SLoad being high at a rising edge of a clock after having been low for at least 5 clock cycles. The following 4 falling clock edges after a start condition is used to carry a 4-bit value from the HBA to the back-plane; the definition of this value is proprietary and varies between system vendors.
SDataOut
This line carries 3 bits of data from the HBA to the backplane: the first bit typically carries activity; the second bit carries locate; and the third bit carries fail. A low value for the first bit indicates no activity and a high value indicates activity.
SDataIn
This line is used by the back-plane and indicates some condition on the back-plane back to the HBA. The first bit being high commonly indicates the presence of a drive. The two following bits are typically unused, and driven low. Because this line would be high for all 3 bits when no backplane is connected, an HBA can detect the presence of a back-plane by the second or third bit of the SDataIn being driven low.
The SDataIn and SDataOut sequences then repeat, with 3 clocks per drive, until the last drive is reached, and the cycle starts over again.
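The per-drive layout of SDataOut described above can be sketched as follows (a hypothetical illustration of the three-bits-per-slot framing only; clocking, SLoad start conditions, and the function name are not from the SFF-8485 text):

```python
# Build the SDataOut bit stream: three bits per drive slot, in the order
# activity, locate, fail, concatenated slot by slot.
def sdataout_bits(drives):
    """drives: list of (activity, locate, fail) booleans, one per slot."""
    bits = []
    for activity, locate, fail in drives:
        bits.extend([int(activity), int(locate), int(fail)])
    return bits

# four slots: slot 0 shows activity, slot 2 is flagged as failed
stream = sdataout_bits([(True, False, False),
                        (False, False, False),
                        (False, False, True),
                        (False, False, False)])
# 12 bits total: 3 bits for each of the 4 drives
```

A back-plane controller would clock these bits against SClock after the SLoad start condition and translate them into LED blink patterns per the IBPI interpretation.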
SGPIO implementation
There are varieties in how the SGPIO bus is implemented between vendors of HBAs and storage controllers - some vendors will send a continuous stream of data which is advantageous to quickly update the LEDs on a backplane after cables are removed and re-inserted, while others send data only when there is a need to update the LED pattern.
Adoption of the SGPIO specification
SGPIO and the SGPIO spec. is generally adopted and implemented in products from most major HBA and storage controller vendors such as LSI, Intel, Adaptec, Nvidia, Broadcom, Marvell Technology Group and PMC-Sierra. Most products shipping with support for SAS and SATA drives support this standard.
SGPIO timeout conditions
The SGPIO spec calls for the target to turn off all indicators when SClock, SLoad and SDataOut have been high for 64 ms; in practice this is not consistently followed by all vendors. Also, in some vendors' implementations the clock may be halted sporadically or stopped during or between cycles. Another – rather impractical – variation between vendors is the state in which the clock is left after a cycle.
Backplane Implementations of the SGPIO bus
The idea behind this specification was to be able to use low-cost CPLDs or microcontrollers on a back-plane to drive LEDs; in practice, it has been found that there are variations in timing and interpretation of the bits between vendors, so a simple CPLD would only work for a specific implementation thoroughly tested with one product from one vendor. A microcontroller is more applicable for this purpose, although the custom 4-wire SGPIO bus is not implemented as a hardware peripheral on them, and sampling the four bit lines with GPIOs at 100 kHz is too demanding for many low-cost microcontrollers to handle while driving LEDs and other functions simultaneously. The length of the bit stream varies between HBAs and storage controllers; some vendors will stop the bit stream when reaching the desired drive, while others will clock it all the way through. Some SAS expanders' bit streams may be as long as 108 (36×3) bits.
The safest implementation which ensures compatibility between all HBA and storage controller vendors is to use an ASIC, specifically, a combination of a microcontroller core with a hardware SGPIO interface; this concept was patented in 2006 by AMI and implemented in a series of backplane controller chips named the MG9071, MG9072, MG9077, and MG9082.
These chips will receive 1 or 2 SGPIO streams and drive LEDs accordingly; the latest chip from AMI, the MG9077, can be configured by pull-up and pull-down resistors to adopt to 16 different configurations of SGPIO buses and drive the LEDs accordingly. Since the availability of these chips from AMI, major OEMs including NEC, Hitachi, Supermicro, IBM, Sun Microsystems, and others are using them on their back-planes to receive the SGPIO streams from a variety of HBA vendors and on-board controller chips to consistently drive LEDs with a pre-determined blinking pattern.
External links
SFF-8485 Specification for Serial GPIO (SGPIO) Bus
SFF Documents (Documents & Specifications)
Serial buses
Communications protocols
SCSI
"Technology"
] | 1,663 | [
"Computer standards",
"Communications protocols"
] |
5,475,989 | https://en.wikipedia.org/wiki/Application%20profile | In the information sciences, an application profile consists of a set of metadata elements, policies, and guidelines defined for a particular application.
The elements may come from one or more element sets, thus allowing a given application to meet its functional requirements by using metadata from several element sets, including locally defined sets. For example, a given application might choose a subset of the Dublin Core that meets its needs, or may include elements from the Dublin Core, another element set, and several locally defined elements, all combined in a single schema. An application profile is not complete without documentation that defines the policies and best practices appropriate to the application. As another example, the legal document standard Akoma Ntoso has universal scope and is very flexible, which creates the risk of ambiguous representations. Therefore, when AKN is to be used in a local domain, it can be advisable to reduce the overall flexibility and complexity by specifying a uniform usage of a subset of AKN XML elements for the given use case.
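As a minimal sketch, an application profile can be represented as a required/optional element set with simple policy checks; the element names below mirror Dublin Core, while the local element name, the profile layout, and the policy rules are hypothetical illustrations rather than any standard API:

```python
# Sketch of an application profile: a mandated subset of Dublin Core
# elements plus one locally defined element, with simple policy checks.
# The "local:" prefix and the rules themselves are hypothetical.

PROFILE = {
    "required": {"title", "creator", "date"},      # DC subset this app mandates
    "optional": {"subject", "local:shelf_code"},   # includes a local element
}

def validate(record: dict) -> list:
    """Return a list of policy violations for a metadata record."""
    errors = []
    for element in PROFILE["required"]:
        if element not in record:
            errors.append(f"missing required element: {element}")
    allowed = PROFILE["required"] | PROFILE["optional"]
    for element in record:
        if element not in allowed:
            errors.append(f"element not in profile: {element}")
    return errors
```

A record drawn only from the profile's elements validates cleanly, while a record using an element outside the profile (or missing a required one) is flagged, which is exactly the "policies and guidelines" role the documentation of a profile plays.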
Advantages
Defines an application-appropriate set of properties in a public and communicable manner. This permits the building of loosely coupled systems (i.e. independent of each other's detailed specifications) that still offer powerful capabilities.
Disadvantages
Narrow application scope, which may limit a profile's widespread applicability and also limits the likely synergy from re-use of tools from other projects outside that scope.
Compared to the Dublin Core refinement approach (where a core property set may be made more specific, in a backwards-compatible manner), use of application profiles requires that applications must at least recognise these profiles and their roots. Even if the profile is based simply on Dublin Core, which the application already understands, this is of no use unless the application also recognises that this profile is treatable as Dublin Core.
Example profiles
Bath Profile
An International Z39.50 Specification for Library Applications and Resource Discovery
e-GMS
the UK e-Government Metadata Standard. An application profile of Dublin Core.
References
Metadata | Application profile | [
"Technology"
] | 408 | [
"Metadata",
"Data"
] |
5,476,081 | https://en.wikipedia.org/wiki/Flora%20Brasiliensis | Flora Brasiliensis is a book published between 1840 and 1906 by the editors Carl Friedrich Philipp von Martius, August Wilhelm Eichler, Ignatz Urban and many others. It contains taxonomic treatments of 22,767 species, mostly Brazilian angiosperms.
The work was begun by Stephan Endlicher and Martius.
Von Martius completed 46 of the 130 fascicles before his death in 1868, with the monograph being completed in 1906.
It was published by the Missouri Botanical Garden.
Book's structure
The flora's volumes are an attempt to systematically categorise the known plants of the region.
15 volumes
40 parts
10,367 pages
See also
Historia naturalis palmarum
References
External links
Flora Brasiliensis in English
Brazil
Botany in South America
Flora of Brazil
Books about Brazil
1906 non-fiction books
19th-century books in Latin
20th-century books in Latin | Flora Brasiliensis | [
"Biology"
] | 176 | [
"Flora",
"Florae (publication)"
] |
5,477,059 | https://en.wikipedia.org/wiki/Scale%20space%20implementation | In the areas of computer vision, image analysis and signal processing, the notion of scale-space representation is used for processing measurement data at multiple scales, and specifically enhance or suppress image features over different ranges of scale (see the article on scale space). A special type of scale-space representation is provided by the Gaussian scale space, where the image data in N dimensions is subjected to smoothing by Gaussian convolution. Most of the theory for Gaussian scale space deals with continuous images, whereas one when implementing this theory will have to face the fact that most measurement data are discrete. Hence, the theoretical problem arises concerning how to discretize the continuous theory while either preserving or well approximating the desirable theoretical properties that lead to the choice of the Gaussian kernel (see the article on scale-space axioms). This article describes basic approaches for this that have been developed in the literature, see also for an in-depth treatment regarding the topic of approximating the Gaussian smoothing operation and the Gaussian derivative computations in scale-space theory.
Statement of the problem
The Gaussian scale-space representation of an N-dimensional continuous signal f_C(x), x \in \mathbb{R}^N,
is obtained by convolving f_C with an N-dimensional Gaussian kernel:
L(x; t) = (g(\cdot; t) * f_C)(x).
In other words:
L(x; t) = \int_{\xi \in \mathbb{R}^N} f_C(x - \xi) \, g(\xi; t) \, d\xi, \qquad g(\xi; t) = \frac{1}{(2\pi t)^{N/2}} e^{-(\xi_1^2 + \cdots + \xi_N^2)/(2t)}.
However, for implementation, this definition is impractical, since it is continuous. When applying the scale space concept to a discrete signal fD, different approaches can be taken. This article is a brief summary of some of the most frequently used methods.
Separability
Using the separability property of the Gaussian kernel
the N-dimensional convolution operation can be decomposed into a set of separable smoothing steps with a one-dimensional Gaussian kernel G along each dimension
where
G(x; t) = \frac{1}{\sqrt{2\pi t}} e^{-x^2/(2t)},
and the standard deviation of the Gaussian σ is related to the scale parameter t according to t = σ².
Separability will be assumed in all that follows, even when the kernel is not exactly Gaussian, since separation of the dimensions is the most practical way to implement multidimensional smoothing, especially at larger scales. Therefore, the rest of the article focuses on the one-dimensional case.
The sampled Gaussian kernel
When implementing the one-dimensional smoothing step in practice, the presumably simplest approach is to convolve the discrete signal fD with a sampled Gaussian kernel:
L(n; t) = \sum_{m=-\infty}^{\infty} f_D(n - m) \, G(m; t),
where
G(m; t) = \frac{1}{\sqrt{2\pi t}} e^{-m^2/(2t)}
(with t = σ²), which in turn is truncated at the ends to give a filter with finite impulse response
L(n; t) = \sum_{m=-M}^{M} f_D(n - m) \, G(m; t)
for M chosen sufficiently large (see error function) such that
\sum_{|m| > M} G(m; t) < \varepsilon.
A common choice is to set M to a constant C times the standard deviation of the Gaussian kernel:
M = C \sigma = C \sqrt{t},
where C is often chosen somewhere between 3 and 6.
Using the sampled Gaussian kernel can, however, lead to implementation problems, in particular when computing higher-order derivatives at finer scales by applying sampled derivatives of Gaussian kernels. When accuracy and robustness are primary design criteria, alternative implementation approaches should therefore be considered.
For small values of ε (10−6 to 10−8) the errors introduced by truncating the Gaussian are usually negligible. For larger values of ε, however, there are many better alternatives to a rectangular window function. For example, for a given number of points, a Hamming window, Blackman window, or Kaiser window will do less damage to the spectral and other properties of the Gaussian than a simple truncation will. Notwithstanding this, since the Gaussian kernel decreases rapidly at the tails, the main recommendation is still to use a sufficiently small value of ε such that the truncation effects are no longer important.
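The truncated sampled-Gaussian approach above can be sketched in a few lines of pure Python; the function names and the default choice C = 6 are illustrative, not from any particular library:

```python
import math

def sampled_gaussian(sigma, C=6.0):
    """Sampled Gaussian kernel G(m; t), t = sigma**2, truncated at
    M = ceil(C * sigma); with C = 6 the discarded tail mass is ~1e-9."""
    M = math.ceil(C * sigma)
    t = sigma * sigma
    return [math.exp(-m * m / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)
            for m in range(-M, M + 1)]

def convolve(signal, kernel):
    """Plain 1-D convolution with zero padding (kernel length odd)."""
    M = len(kernel) // 2
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for m in range(-M, M + 1):
            if 0 <= n - m < len(signal):
                acc += kernel[m + M] * signal[n - m]
        out.append(acc)
    return out
```

For moderate and large σ the kernel sums to unity to high accuracy, so smoothing an impulse conserves its total mass; it is at fine scales and for higher-order derivatives that the sampled Gaussian becomes problematic, as noted above.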
The discrete Gaussian kernel
A more refined approach is to convolve the original signal with the discrete Gaussian kernel T(n, t):
L(n; t) = \sum_{m=-\infty}^{\infty} f_D(n - m) \, T(m; t),
where
T(n; t) = e^{-t} I_n(t),
and I_n(t) denotes the modified Bessel functions of integer order, n. This is the discrete counterpart of the continuous Gaussian in that it is the solution to the discrete diffusion equation (discrete space, continuous time), just as the continuous Gaussian is the solution to the continuous diffusion equation.
This filter can be truncated in the spatial domain as for the sampled Gaussian
or can be implemented in the Fourier domain using a closed-form expression for its discrete-time Fourier transform:
\hat{T}(\theta; t) = \sum_{n=-\infty}^{\infty} T(n; t) \, e^{-i\theta n} = e^{t(\cos\theta - 1)}.
With this frequency-domain approach, the scale-space properties transfer exactly to the discrete domain, or with excellent approximation using periodic extension and a suitably long discrete Fourier transform to approximate the discrete-time Fourier transform of the signal being smoothed. Moreover, higher-order derivative approximations can be computed in a straightforward manner (and preserving scale-space properties) by applying small support central difference operators to the discrete scale space representation.
As with the sampled Gaussian, a plain truncation of the infinite impulse response will in most cases be a sufficient approximation for small values of ε, while for larger values of ε it is better to use either a decomposition of the discrete Gaussian into a cascade of generalized binomial filters or alternatively to construct a finite approximate kernel by multiplying by a window function. If ε has been chosen too large such that effects of the truncation error begin to appear (for example as spurious extrema or spurious responses to higher-order derivative operators), then the options are to decrease the value of ε such that a larger finite kernel is used, with cutoff where the support is very small, or to use a tapered window.
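The discrete Gaussian T(n, t) = e^{−t} I_n(t) can be computed directly from the power series of the modified Bessel functions, avoiding any special-function library. The sketch below (hypothetical function names, pure Python) lets one check two characteristic properties: the kernel sums to one, and its variance equals t exactly:

```python
import math

def bessel_i(n, t, terms=60):
    """Modified Bessel function I_n(t) of integer order, via its power
    series; uses I_{-n}(t) = I_n(t) for integer n."""
    n = abs(n)
    total = 0.0
    for m in range(terms):
        total += (t / 2.0) ** (2 * m + n) / (math.factorial(m) * math.factorial(m + n))
    return total

def discrete_gaussian(t, M):
    """Truncated discrete Gaussian kernel T(n, t) = exp(-t) I_n(t), |n| <= M."""
    return [math.exp(-t) * bessel_i(n, t) for n in range(-M, M + 1)]
```

Unlike the sampled Gaussian, whose variance only approximates t, the discrete Gaussian's second moment is exactly t at every scale, which is one way the scale-space semi-group property transfers exactly to the discrete domain.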
Recursive filters
Since computational efficiency is often important, low-order recursive filters are often used for scale-space smoothing. For example, Young and van Vliet use a third-order recursive filter with one real pole and a pair of complex poles, applied forward and backward to make a sixth-order symmetric approximation to the Gaussian with low computational complexity for any smoothing scale.
By relaxing a few of the axioms, Lindeberg concluded that good smoothing filters would be "normalized Pólya frequency sequences", a family of discrete kernels that includes all filters with real poles at 0 < Z < 1 and/or Z > 1, as well as with real zeros at Z < 0. For symmetry, which leads to approximate directional homogeneity, these filters must be further restricted to pairs of poles and zeros that lead to zero-phase filters.
To match the transfer function curvature at zero frequency of the discrete Gaussian, which ensures an approximate semi-group property of additive t, two poles at
Z = 1 + \frac{1 \pm \sqrt{2t + 1}}{t}
can be applied forward and backwards, for symmetry and stability. This filter is the simplest implementation of a normalized Pólya frequency sequence kernel that works for any smoothing scale, but it is not as excellent an approximation to the Gaussian as Young and van Vliet's filter, which is not normalized Pólya frequency sequence, due to its complex poles.
The transfer function, H1, of a symmetric pole-pair recursive filter is closely related to the discrete-time Fourier transform of the discrete Gaussian kernel via first-order approximation of the exponential:
H_1(z) = \frac{1}{1 - \frac{t}{2}(z - 2 + z^{-1})} \approx e^{\frac{t}{2}(z - 2 + z^{-1})},
where the t parameter here is related to the stable pole position Z = p via:
t = \frac{2p}{(1 - p)^2}.
Furthermore, such filters with N pairs of poles, such as the two pole pairs illustrated in this section, are an even better approximation to the exponential:
where the stable pole positions are adjusted by solving:
The impulse responses of these filters are not very close to gaussian unless more than two pole pairs are used. However, even with only one or two pole pairs per scale, a signal successively smoothed at increasing scales will be very close to a gaussian-smoothed signal. The semi-group property is poorly approximated when too few pole pairs are used.
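A minimal sketch of forward-backward recursive smoothing with a single real pole is given below; it assumes the curvature-matching relation t = 2p/(1 − p)² between scale and stable pole position (which reduces to p ≈ t/2 for small t), and the function names are illustrative. With this relation, the variance of the zero-phase impulse response equals t exactly, even though its shape is only roughly Gaussian:

```python
def pole_for_scale(t):
    """Stable pole p in (0, 1) obtained by solving the assumed
    curvature-matching relation t = 2p / (1 - p)**2 for |p| < 1."""
    return (t + 1.0 - (2.0 * t + 1.0) ** 0.5) / t

def smooth(signal, t):
    """Forward-backward one-pole recursive smoothing (zero-phase)."""
    p = pole_for_scale(t)
    out = list(signal)
    prev = 0.0
    for i in range(len(out)):            # causal pass
        prev = (1.0 - p) * out[i] + p * prev
        out[i] = prev
    prev = 0.0
    for i in reversed(range(len(out))):  # anti-causal pass
        prev = (1.0 - p) * out[i] + p * prev
        out[i] = prev
    return out
```

Each pass is O(n) regardless of scale, which is the main attraction of recursive filters over FIR convolution at large t.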
Scale-space axioms that are still satisfied by these filters are:
linearity
shift invariance (integer shifts)
non-creation of local extrema (zero-crossings) in one dimension
non-enhancement of local extrema in any number of dimensions
positivity
normalization
The following are only approximately satisfied, the approximation being better for larger numbers of pole pairs:
existence of an infinitesimal generator A (the infinitesimal generator of the discrete Gaussian, or a filter approximating it, approximately maps a recursive filter response to one of infinitesimally larger t)
the semi-group structure with the associated cascade smoothing property (this property is approximated by considering kernels to be equivalent when they have the same t value, even if they are not quite equal)
rotational symmetry
scale invariance
This recursive filter method and variations to compute both the Gaussian smoothing as well as Gaussian derivatives has been described by several authors. Tan et al. have analyzed and compared some of these approaches, and have pointed out that the Young and van Vliet filters are a cascade (multiplication) of forward and backward filters, while the Deriche and the Jin et al. filters are sums of forward and backward filters.
At fine scales, the recursive filtering approach as well as other separable approaches are not guaranteed to give the best possible approximation to rotational symmetry, so non-separable implementations for 2D images may be considered as an alternative.
When computing several derivatives in the N-jet simultaneously, discrete scale-space smoothing with the discrete analogue of the Gaussian kernel, or with a recursive filter approximation, followed by small support difference operators, may be both faster and more accurate than computing recursive approximations of each derivative operator.
Finite-impulse-response (FIR) smoothers
For small scales, a low-order FIR filter may be a better smoothing filter than a recursive filter. The symmetric 3-kernel {t/2, 1 − t, t/2}, for t ≤ 0.5, smooths to a scale of t using a pair of real zeros at Z < 0, and approaches the discrete Gaussian in the limit of small t. In fact, with infinitesimal t, either this two-zero filter or the two-pole filter with poles at Z = t/2 and Z = 2/t can be used as the infinitesimal generator for the discrete Gaussian kernels described above.
The FIR filter's zeros can be combined with the recursive filter's poles to make a general high-quality smoothing filter. For example, if the smoothing process is to always apply a biquadratic (two-pole, two-zero) filter forward then backwards on each row of data (and on each column in the 2D case), the poles and zeros can each do a part of the smoothing. The zeros limit out at t = 0.5 per pair (zeros at Z = –1), so for large scales the poles do most of the work. At finer scales, the combination makes an excellent approximation to the discrete Gaussian if the poles and zeros each do about half the smoothing. The t values for each portion of the smoothing (poles, zeros, forward and backward multiple applications, etc.) are additive, in accordance with the approximate semi-group property.
The FIR filter transfer function is closely related to the discrete Gaussian's DTFT, just as was the recursive filter's. For a single pair of zeros, the transfer function is
H(z) = 1 + \frac{t}{2}(z - 2 + z^{-1}) \approx e^{\frac{t}{2}(z - 2 + z^{-1})},
where the t parameter here is related to the zero positions Z = z via:
t = \frac{-2z}{(1 - z)^2},
and we require t ≤ 0.5 to keep the transfer function non-negative.
Furthermore, such filters with N pairs of zeros, are an even better approximation to the exponential and extend to higher values of t :
where the stable zero positions are adjusted by solving:
These FIR and pole-zero filters are valid scale-space kernels, satisfying the same axioms as the all-pole recursive filters.
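Assuming the symmetric 3-kernel takes the form {t/2, 1 − t, t/2} (the form whose variance is exactly t), the additivity of t under cascading can be checked numerically; the helper names below are illustrative:

```python
def fir3(t):
    """Symmetric 3-tap smoothing kernel with variance exactly t
    (assumed form [t/2, 1 - t, t/2]; requires t <= 0.5 so that all
    taps are non-negative)."""
    assert 0.0 < t <= 0.5
    return [t / 2.0, 1.0 - t, t / 2.0]

def cascade(k1, k2):
    """Convolve two kernels: cascading smoothers adds their t values."""
    out = [0.0] * (len(k1) + len(k2) - 1)
    for i, a in enumerate(k1):
        for j, b in enumerate(k2):
            out[i + j] += a * b
    return out

def variance(kernel):
    """Second central moment of an odd-length kernel about its center."""
    c = (len(kernel) - 1) / 2.0
    mean = sum((i - c) * w for i, w in enumerate(kernel))
    return sum((i - c - mean) ** 2 * w for i, w in enumerate(kernel))
```

Cascading fir3(0.3) with fir3(0.2) yields a 5-tap kernel of variance 0.5, in line with the approximate semi-group property of additive t discussed above.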
Real-time implementation within pyramids and discrete approximation of scale-normalized derivatives
Regarding the topic of automatic scale selection based on normalized derivatives, pyramid approximations are frequently used to obtain real-time performance. The appropriateness of approximating scale-space operations within a pyramid originates from the fact that repeated cascade smoothing with generalized binomial kernels leads to equivalent smoothing kernels that under reasonable conditions approach the Gaussian. Furthermore, the binomial kernels (or more generally the class of generalized binomial kernels) can be shown to constitute the unique class of finite-support kernels that guarantee non-creation of local extrema or zero-crossings with increasing scale (see the article on multi-scale approaches for details). Special care may, however, need to be taken to avoid discretization artifacts.
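The repeated binomial smoothing underlying pyramid implementations can be sketched as follows; this is a minimal illustration with the [1/4, 1/2, 1/4] kernel (each pass adding variance 1/2, with the iterated kernel approaching a Gaussian by the central limit theorem), and the function names, number of passes, and border-replication choice are illustrative:

```python
def binomial_smooth(signal, passes=1):
    """Repeated [1/4, 1/2, 1/4] smoothing; each pass adds variance 1/2."""
    out = list(signal)
    for _ in range(passes):
        padded = [out[0]] + out + [out[-1]]   # replicate borders
        out = [0.25 * padded[i - 1] + 0.5 * padded[i] + 0.25 * padded[i + 1]
               for i in range(1, len(padded) - 1)]
    return out

def pyramid_level(signal, passes=2):
    """One pyramid reduction: smooth, then subsample by a factor of two."""
    return binomial_smooth(signal, passes)[::2]
```

Since binomial smoothing never creates new local extrema in 1-D, repeated application before each subsampling step preserves the qualitative scale-space structure while keeping the computation cheap.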
Other multi-scale approaches
For one-dimensional kernels, there is a well-developed theory of multi-scale approaches, concerning filters that do not create new local extrema or new zero-crossings with increasing scales. For continuous signals, filters with real poles in the s-plane are within this class, while for discrete signals the above-described recursive and FIR filters satisfy these criteria. Combined with the strict requirement of a continuous semi-group structure, the continuous Gaussian and the discrete Gaussian constitute the unique choice for continuous and discrete signals.
There are many other multi-scale signal processing, image processing and data compression techniques, using wavelets and a variety of other kernels, that do not exploit or require the same requirements as scale space descriptions do; that is, they do not depend on a coarser scale not generating a new extremum that was not present at a finer scale (in 1D) or non-enhancement of local extrema between adjacent scale levels (in any number of dimensions).
See also
Scale space
Pyramid (image processing)
Multi-scale approaches
Gaussian filter
External links
pyscsp: Scale-space toolbox for Python at GitHub (including implementations of different methods for approximating Gaussian smoothing for discrete data)
References
Image processing
Computer vision
Gaussian function | Scale space implementation | [
"Engineering"
] | 2,848 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
5,477,219 | https://en.wikipedia.org/wiki/Bone%20exercise%20monitor | A bone exercise monitor is an instrument which is used to measure and analyze the bone strengthening qualities of physical activity and to help in prevention of osteoporosis with physical activity and exercise.
The bone strengthening quality of physical exercise is very difficult to assess and monitor due to the great variability of intensity of exercise modes and individual differences in exercise patterns. Bone exercise monitors utilize accelerometers for measurement, and the collected data is analyzed with a specially developed algorithm.
Bone is stimulated by acceleration and deceleration forces, also known as G-forces, that cause impacts on the body; these impacts promote bone growth by adding new bone and by improving its architectural strength.
The Bone Exercise Monitor is worn on the hip during the daily chores or during exercise. The monitor measures the accelerations and deceleration of the body and analyzes the results. The daily (and weekly) achieved bone exercise is shown on the monitor's display.
Bibliography
Aki Vainionpää, Raija Korpelainen, Juhani Leppäluoto and Timo Jämsä. (2005) "Effects of high-impact exercise on bone mineral density: a randomized controlled trial in premenopausal women". Osteoporosis International. Volume 16, Number 2. pp. 191–97.
A. Vainionpää, R. Korpelainen, E. Vihriälä, A. Rinta–Paavola, J. Leppäluoto and T. Jämsä (2006) "Intensity of exercise is associated with bone density change in premenopausal women". Osteoporosis International. Volume 17, Number 3, pp. 455–63.
Timo Jämsä, Aki Vainionpää, Raija Korpelainen, Erkki Vihriälä and Juhani Leppäluoto (2006) "Effect of daily physical activity on proximal femur". Clinical Biomechanics. Volume 21, Issue 1, pp. 1–7.
References
External links
Newtest Oy - The manufacturer of the Bone Exercise Monitor
Exercise equipment
Physiological instruments | Bone exercise monitor | [
"Technology",
"Engineering"
] | 429 | [
"Physiological instruments",
"Measuring instruments"
] |
5,477,257 | https://en.wikipedia.org/wiki/Polymelia | Polymelia is a birth defect in which an affected individual has more than the usual number of limbs. It is a type of dysmelia. In humans and most land-dwelling vertebrates, this means having five or more limbs. The extra limb is most commonly shrunken and/or deformed. The term is from Greek πολυ- "many", μέλεα "limbs".
Sometimes an embryo started as conjoined twins, but one twin degenerated completely except for one or more limbs, which end up attached to the other twin.
Sometimes small extra legs between the normal legs are caused by the body axis forking in the dipygus condition.
Notomelia (from Greek for "back-limb-condition") is polymelia where the extra limb is rooted along or near the midline of the back. Notomelia has been reported in Angus cattle often enough to be of concern to farmers.
Cephalomelia (from Greek for "head-limb-condition") is polymelia where the extra limb is rooted on the head.
Origin
Tetrapod legs evolved in the Devonian or Carboniferous geological period from the pectoral fins and pelvic fins of their crossopterygian fish ancestors.
Fish fins develop along a "fin line", which runs from the back of the head along the midline of the back, round the end of the tail, and forwards along the underside of the tail, and at the cloaca splits into left and right fin lines which run forwards to the gills. In the paired ventral part of the fin line, normally only the pectoral and pelvic fins survive (but the Devonian acanthodian fish Mesacanthus developed a third pair of paired fins); but along the non-paired parts of the fin line, other fins develop.
In tetrapods, only the four paired fins normally persisted, and became the four legs. Notomelia and cephalomelia are atavistic reappearances of dorsal fins. Some other cases of polymelia are extra development along the paired part of the fin lines, or along the ventral posterior non-paired part of the fin line.
Notable cases
Humans
1529: A male child was born in Germany on January 9 with all four limbs duplicated at the elbows. He was described by Ambroise Pare in Of Monsters & Prodigies.
1889: Frank Lentini, an Italian-American sideshow performer, was born with a third leg, as well as a fourth foot and two sets of genitals
1995: Somali baby girl born with three left arms.
March 2006: a baby boy identified only as Jie-jie was born in Shanghai with a fully formed third arm: he had two full-sized left arms, one ventral to the other. This is the only documented case of a child born with a fully formed supernumerary arm. It is an example of an extra limb on a normal body axis.
November 6, 2007: doctors at Bangalore's Sparsh Hospital in Bangalore, India successfully completed surgery on a two-year-old girl named Lakshmi Tatma who was born with four arms and four legs; this was not true polymelia but a case of ischiopagus conjoined twins where one twin's head had disappeared during development.
Other animals
A four-legged chicken was born at Brendle Farms in Somerset, Pennsylvania, in 2005. The story was carried on the major TV network news programs and USA Today. The bird was found living normally among the rest of the chickens after 18 months. She was adopted and named Henrietta by the farm owner's 13-year-old daughter, Ashley, who refuses to sell the chicken. The second (hind) legs are fully formed but non-functional.
Four-legged ducks are occasionally hatched, such as 'Stumpy', an individual born in February 2007 on a farm in Hampshire, England.
Frogs in the US sometimes are affected by polymelia when attacked in the tadpole stage by the Ribeiroia parasite.
A puppy, known as Lilly, was born in the United States with a fully formed fifth leg jutting out between her hind legs. She was initially set to be sold to a freak show, but was instead bought by a dog lover who had the extra leg removed.
An eight-legged lamb has been reported on at least two occasions including in 1898 and 2018.
In mythology
Many mythological creatures like dragons, winged horses, and griffins have six limbs: four legs and two wings. The dragon's science is discussed in Dragons: A Fantasy Made Real. Additionally, angels are often depicted with two arms, two legs, and two wings.
In Greek Mythology:
The Hekatonkheires were said to each have one hundred hands.
The Gegenees were a race of giants with six arms.
The centaurs had six limbs: four horse legs and two human arms.
Sleipnir, Odin's horse in Norse mythology, has eight normal horse legs, and is usually depicted with limbs twinned at the shoulder or hip.
Several Hindu deities are depicted with multiple arms and sometimes also multiple legs.
In popular culture
Edward Albee's stage play The Man Who Had Three Arms tells the story of a fictional individual who was normal at birth but eventually sprouted a third functional arm, protruding from between his shoulder blades. After several years of living with three arms, the extra limb was reabsorbed into his body and the man became physically normal again. In Albee's play, the title character is extremely angry that we (the audience) seem to be much more interested in the period of his life when he had three arms, rather than his normal life before and after that interval.
Monty Python's Flying Circus performed a skit about a man with three buttocks. He believes that he has been invited to be interviewed on television because he is a nice person, and is dismayed to learn that he has only been invited because the interviewer is curious about his unusual condition.
The Dark Backward is a 1991 comedy film directed and written by Adam Rifkin, which features Judd Nelson as an unfunny garbage man who pursues a stand-up comedy career. When the "comedian" grows a third arm out of his back, he becomes an overnight hit.
Justice, the main antagonist in the cartoon Afro Samurai, has a fully formed extra arm protruding from his right shoulder.
Paco, a playable character in Brawlout, is a four-armed frog- and shark-like creature.
Spiral, character in the X-Men stories of Marvel Comics, has six arms.
A 6-armed Spider-Man frequently appears as an alternate reality incarnation of the character, and is sometimes referred to as "Polymelian".
In the Mortal Kombat series, the Shokan is a race of humanoids known to have four arms.
Ibid: A Life is a fictional biography of Jonathan Blashette, a man with three legs.
Zaphod Beeblebrox, a character in Douglas Adams' Hitchhiker's Guide to the Galaxy, has a third arm as well as a second head.
The Pokémon Machamp (the second evolution of Machop) appears as a wrestler with four arms.
Ryomen Sukuna, the antagonist of the manga series Jujutsu Kaisen, appears as an oversized human with four arms and a malformed body.
In Undertale, Muffet is a spider lady with six arms.
See also
Dipygus
Dysmelia
Polydactyly
Polysyndactyly
Supernumerary body parts
References
Sources
Avian Diseases, 1985 Jan-Mar; 29(1): pp. 244-5. Polymelia in a broiler chicken., Anderson WI, Langheinrich KA, McCaskey PC.: "A polymelus monster was observed in a 7-week-old slaughterhouse chicken. The supernumerary limbs were smaller than the normal appendages but contained an equal number of digits.".
External links
2013: child with two left legs
Pathology
Rare diseases
Congenital disorders
Supernumerary body parts | Polymelia | [
"Biology"
] | 1,683 | [
"Pathology"
] |
5,477,326 | https://en.wikipedia.org/wiki/Egosurfing | Egosurfing (also vanity searching, egosearching, egogoogling, autogoogling, self-googling) is the practice of searching for one's own name, or pseudonym on a popular search engine in order to review the results. Similarly, an egosurfer is one who surfs the Internet for their own name to see what information appears. It has become increasingly popular with the rise of Internet search engines, as well as free blogging and web-hosting services. Though Google is the search engine most commonly mentioned when referring to egosurfing, other widely known search engines include Yahoo, Bing, and DuckDuckGo.
The term was coined by Sean Carton in 1995 and first appeared in print as an entry in Gareth Branwyn's March 1995 Jargon Watch column in Wired.
Egosurfing is employed by many people for a variety of reasons. According to a study by the Pew Internet & American life project, 47% of American adult Internet users have undertaken a vanity search in Google or another search engine. Some egosurf purely for entertainment, such as finding celebrities with the same name. However, many people egosurf as a means of online reputation management. Egosurfing can be used to find data spills, released information that is undesirable to have in the public eye. By searching one's own name in an online search engine, one can take on the perspective of a stranger attempting to find out personal information. Some egosurf in order to conceal personal images or information from potential employers, clients, identity thieves and the like. Similarly, some use egosurfing to maintain a positive public image and to achieve self-promotion.
Many social networking sites, such as Facebook, allow users to make their profiles "searchable," meaning that their profile will appear in the appropriate search results. As a result, those seeking to maintain their privacy often egosurf in order to ensure that their profile does not appear in search engine results. As more people create online personas, many feel the need to more cautiously monitor their digital footprint, including information that they have not chosen to share online, such as telephone numbers and public records.
Although personal information available online can be difficult to remove, in 2009 Google introduced a feature allowing users to create a small box listing personal information such as name, occupation and location that appears on the first page of results when their name is searched. The box links to a full profile page, similar to one seen on Facebook. This Google profile can be linked to other social networking sites, such as one's blog, company website or Twitter feed. The more information that one includes on one's Google profile, the higher one's informational box will rank in the results, thus essentially encouraging one to post personal information online and continue egosurfing.
See also
Kibozing – prior to the existence of search engines, a similar practice existed on Usenet, known as kibozing after James "Kibo" Parry, who was well known for replying in a surreal fashion to anyone who mentioned his name, on any newsgroup.
References
External links
People Search Andrew Czernek, Google Knol.
Auto Googling, Chas Jones, at Wired.com
Jargon Watch at Wired.com
egoSurf: egoSurf without the guilt.
Self
Google
Names
Social networks
Internet terminology
1995 neologisms | Egosurfing | [
"Technology"
] | 699 | [
"Computing terminology",
"Internet terminology"
] |
5,477,402 | https://en.wikipedia.org/wiki/DCMU | DCMU (3-(3,4-dichlorophenyl)-1,1-dimethylurea) is an algicide and herbicide of the aryl urea class that inhibits photosynthesis. It was introduced by Bayer in 1954 under the trade name of Diuron.
History
In 1952, chemists at E. I. du Pont de Nemours and Company patented a series of aryl urea derivatives as herbicides. Several compounds covered by this patent were commercialized as herbicides: chlortoluron (3-chloro-4-methylphenyl) and DCMU, the (3,4-dichlorophenyl) example. Subsequently, over thirty related urea analogs with the same mechanism of action reached the market worldwide.
Synthesis
As described in the du Pont patent, the starting material is 3,4-dichloroaniline, which is treated with phosgene to form an isocyanate derivative. This is subsequently reacted with dimethylamine to give the final product.
Aryl-NH2 + COCl2 → Aryl-NCO
Aryl-NCO + NH(CH3)2 → Aryl-NHCON(CH3)2
Mechanism of action
DCMU is a very specific and sensitive inhibitor of photosynthesis. It blocks the QB plastoquinone binding site of photosystem II, disallowing the electron flow from photosystem II to plastoquinone. This interrupts the photosynthetic electron transport chain in photosynthesis and thus reduces the ability of the plant to turn light energy into chemical energy (ATP and reductant potential).
DCMU only blocks electron flow from photosystem II, it has no effect on photosystem I or other reactions in photosynthesis, such as light absorption or carbon fixation in the Calvin cycle.
However, because it blocks electrons produced from water oxidation in PS II from entering the plastoquinone pool, "linear" photosynthesis is effectively shut down, as there are no available electrons to exit the photosynthetic electron flow cycle for reduction of NADP+ to NADPH.
In fact, it was found that DCMU not only does not inhibit the cyclic photosynthetic pathway, but, under certain circumstances, actually stimulates it.
Because of these effects, DCMU is often used to study energy flow in photosynthesis.
Toxicity
DCMU (Diuron) has been characterized as a known/likely human carcinogen based on animal testing.
References
Herbicides
Ureas
Anilines
Chlorobenzene derivatives
Suspected carcinogens | DCMU | [
"Chemistry",
"Biology"
] | 553 | [
"Organic compounds",
"Herbicides",
"Biocides",
"Ureas"
] |
5,477,631 | https://en.wikipedia.org/wiki/International%20Association%20for%20Hydro-Environment%20Engineering%20and%20Research | The International Association for Hydro-Environment Engineering and Research (IAHR), founded in 1935, is a worldwide, non-profit, independent organisation of engineers and water specialists working in fields related to the hydro-environment and in particular with reference to hydraulics and its practical application. IAHR was called the International Association of Hydraulic Engineering and Research until 2009.
Activities range from river and maritime hydraulics to water resources development, flood risk management and eco-hydraulics, through to ice engineering, hydroinformatics and continuing education and training. IAHR stimulates and promotes both research and its application, and by so doing strives to contribute to sustainable development, the optimisation of world water resources management and industrial flow processes. IAHR accomplishes its goals by a wide variety of member activities including: working groups, research agenda, congresses, specialty conferences, workshops and short courses; Journals, Monographs and Proceedings; by collaborating with international organisations such as UN Water, UNESCO, WMO, IDNDR, GWP, ICSU; and by co-operation with other water-related national and international organisations.
IAHR publishes several international scientific journals in collaboration with Taylor & Francis and Elsevier – the Journal of Hydraulic Research, the Journal of River Basin Management, the Journal of Water Engineering and Research, the Revista Iberoamericana del Agua RIBAGUA jointly with the World Council of Civil Engineers (WCCE), the Journal of Ecohydraulics and the Journal of Hydro-Environment Engineering and Research with the Korean Water Resources Association. It also publishes Hydrolink, a quarterly magazine that is now free access.
The activities of IAHR are carried out by two full-time professional secretariats with offices in Madrid, Spain, which is hosted by the consortium Spain Water (CEDEX, Direccion General del Agua, Direccion General de Costas, MAPAMA, Spain), and in Beijing, China, hosted by IWHR.
The governing body of the association is a council elected by member ballot every two years. The current president is Prof. Joseph Hun-wei Lee (Hong Kong, China). The current vice-presidents are: Prof. Silke Wieprecht (Germany), Dr. Robert Ettema (USA), and Prof. Hyoseop Woo (South Korea). Dr. Ramon Gutierrez-Serret and Dr. Peng Jing are secretaries general.
IAHR is a Scientific Associate of the International Council for Science (ICSU) and is a partner organisation of UN-Water.
The IAHR World Congress is one of the most important activities of the International Association for Hydro-Environment Engineering and Research (IAHR) which typically attracts between 800 and 1500 participants from around the world. The 2022 IAHR World Congress, under the overall theme "From Snow to Sea", took place in Granada, Spain.
Publications
IAHR publishes the Journal of Hydraulic Research in partnership with Taylor & Francis.
IAHR publishes the International Journal of River Basin Management together with the International Association of Hydrological Sciences and INBO and in partnership with Taylor & Francis.
IAHR publishes the International Journal of Applied Water Engineering and Research together with the World Council of Civil Engineers and in partnership with Taylor & Francis.
The IAHR Asia Pacific Division publishes the Journal of Hydro-Environment Research in collaboration with the KWRA, Korean Water Resources Association and Elsevier
The IAHR Latin America Division publishes the Revista Iberoamericana del Agua in collaboration with the World Council of Civil Engineers (WCCE)
References
Hydraulic engineering organizations
Members of the International Council for Science
Organizations established in 1935
Engineering societies
International organisations based in Spain
International scientific organizations
1935 establishments in the Netherlands
Organisations based in Madrid
Members of the International Science Council | International Association for Hydro-Environment Engineering and Research | [
"Engineering"
] | 756 | [
"Engineering societies",
"Hydraulic engineering organizations",
"Civil engineering organizations"
] |
5,477,889 | https://en.wikipedia.org/wiki/Verigy | Verigy Ltd was a Cupertino, California-based semiconductor automatic test equipment manufacturer. The company existed as a business within Hewlett-Packard before it was spun off in 2006 as a standalone company. It was purchased by Advantest in 2011.
History
Verigy was started by Hewlett-Packard, reported to David Packard in its early days, and was spun off from Agilent Technologies in 2006. The company went public on the NASDAQ in June 2006. The CEO was Keith Barnes, who later became Chairman and CEO. The CFO was Bob Nikl. In 2011 Mr. Barnes moved to Chairman of the Board of Directors and Jorge Titinger became CEO and President. The company's NASDAQ symbol was VRGY.
Verigy designs, develops, manufactures, sells and services advanced semiconductor test systems for the flash memory, high-speed memory and system-on-chip (SoC) markets. Verigy's products are used worldwide in design validation, characterization, and high-volume manufacturing test. The company began doing business as Verigy on June 1, 2006 with its global headquarters located in Singapore.
On December 6, 2007, Verigy announced the acquisition of Inovys, a privately held company that provides software for design debug, failure analysis and yield acceleration for complex semiconductor devices and processes.
On June 15, 2009, Verigy acquired Touchdown Technologies, a privately held company that designs, manufactures, and supports advanced Microelectromechanical systems probe cards to support the wafer test needs of semiconductor manufacturers worldwide.
On November 18, 2010, Verigy announced its intent to merge with LTX-Credence. On December 7, 2010, Advantest of Japan made an all-cash offer for the company, announcing that it planned to acquire Verigy in March 2011, topping LTX-Credence's bid. On July 4, 2011, after two reviews of the transaction by the Department of Justice, the company announced that Advantest Corporation had completed its acquisition of Verigy in an all-cash deal valued at $1.1 billion. The resulting company was the largest manufacturer of semiconductor test equipment in the world. Trading of Verigy ordinary shares was subsequently suspended.
Products
The main test system platforms offered by Verigy were V101 for the low-cost IC market; V6000 for the flash and DRAM memory market; V93000 for the SoC/SiP market; and V93000 HSM for the high-speed memory market.
In addition it offered software for design debug, failure analysis and yield acceleration.
Market listing and competition
Verigy announced its initial public offering of 8.5 million shares of common stock on June 13, 2006, priced at $15.00 per share, and was listed on the NASDAQ National Market under the ticker symbol VRGY. Agilent spun off the remaining Verigy stock to its shareholders in November 2006. Trading of its shares was suspended upon its sale to Advantest in 2011.
Verigy's principal competitors in the ATE business were Teradyne and its 2011 prospective merger partner, LTX-Credence.
References
External links
Verigy website
Equipment semiconductor companies
Companies formerly listed on the Nasdaq
Electronic test equipment manufacturers
Electronics companies established in 2006
Electronics companies disestablished in 2011
2006 establishments in California
2011 disestablishments in California | Verigy | [
"Engineering"
] | 707 | [
"Equipment semiconductor companies",
"Semiconductor fabrication equipment"
] |
5,478,118 | https://en.wikipedia.org/wiki/Lisofylline | Lisofylline (LSF) is a synthetic small molecule with novel anti-inflammatory properties. LSF can effectively prevent type 1 diabetes in preclinical models and improves the function and viability of isolated or transplanted pancreatic islets. It is a metabolite of pentoxifylline.
LSF also improves cellular mitochondrial function and blocks interleukin-12 (IL-12) signaling and STAT-4 activation in target cells and tissues. IL-12 and STAT-4 activation are important pathways linked to inflammation and autoimmune damage to insulin-producing cells. Therefore, LSF and related analogs could provide a new therapeutic approach to prevent or reverse type 1 diabetes. LSF also directly reduces glucose-induced changes in human kidney cells, suggesting that LSF and analogs have the potential to treat the complications associated with diabetes.
Synthesis
The R enantiomer of the pentoxifylline analogue in which the ketone has been reduced to an alcohol shows enhanced activity as an inhibitor of acetyl CoA over the parent drug.
Further reading
References
External links
University of Virginia Research Announcement
National Institute of Health on Lisofylline
Metabolism of lisofylline in the human liver
Anti-inflammatory agents
Xanthines
Anti-interleukin drugs | Lisofylline | [
"Chemistry"
] | 275 | [
"Alkaloids by chemical classification",
"Xanthines"
] |
5,478,160 | https://en.wikipedia.org/wiki/CLEO%20%28particle%20detector%29 | CLEO was a general purpose particle detector at the Cornell Electron Storage Ring (CESR), and the name of the collaboration of physicists who operated the detector. The name CLEO is not an acronym; it is short for Cleopatra and was chosen to go with CESR (pronounced Caesar). CESR was a particle accelerator designed to collide electrons and positrons at a center-of-mass energy of approximately 10 GeV. The energy of the accelerator was chosen before the first three bottom quark Upsilon resonances were discovered between 9.4 GeV and 10.4 GeV in 1977. The fourth Υ resonance, the Υ(4S), was slightly above the threshold for, and therefore ideal for the study of, B meson production.
CLEO was a hermetic detector that in all of its versions consisted of a tracking system inside a solenoid magnet, a calorimeter, particle identification systems, and a muon detector. The detector underwent five major upgrades over the course of its thirty-year lifetime, both to upgrade the capabilities of the detector and to optimize it for the study of B mesons. The CLEO I detector began collecting data in October 1979, and CLEO-c finished collecting data on March 3, 2008.
CLEO initially measured the properties of the Υ(1–3S) resonances below the threshold for producing B mesons. Increasing amounts of accelerator time were spent at the Υ(4S) as the collaboration became more interested in the study of B mesons.
Once the CUSB experiment was discontinued in the late 1980s, CLEO spent most of its time at the Υ(4S) and measured many important properties of the B mesons.
While CLEO was studying the B mesons, it was also able to measure the properties of D mesons and tau leptons, and discover many new charm hadrons. When the BaBar and Belle B factories began to collect large amounts of data in the early 2000s, CLEO was no longer able to make competitive measurements of B mesons. CLEO revisited the Υ(1-3S) resonances, then underwent its last upgrade to CLEO-c. CESR ran at lower energies and CLEO measured many properties of the ψ resonances and D mesons. CLEO was the longest running experiment in the history of particle physics.
History
Proposal and construction
Cornell University had built a series of synchrotrons since the 1940s. The 10 GeV synchrotron in operation during the 1970s had conducted a number of experiments, but it ran at much lower energy than the 20 GeV linear accelerator at SLAC. As late as October 1974, Cornell planned to upgrade the synchrotron to reach energies of 25 GeV and build a new synchrotron to reach 40 GeV. After the discovery of the J/Ψ in November 1974 demonstrated that interesting physics could be done with an electron-positron collider, Cornell submitted a proposal in 1975 for an electron-positron collider operating up to center-of-mass energies of 16 GeV using the existing synchrotron tunnel. An accelerator at 16 GeV would explore the energy region between that of the SPEAR accelerator and the PEP and PETRA accelerators. CESR and CLEO were approved in 1977 and mostly finished by 1979. CLEO was built in the large experimental hall at the south end of CESR; a smaller detector named CUSB (for Columbia University-Stony Brook) was built at the north interaction region.
Between the proposal for and construction of CESR and CLEO, Fermilab discovered the Υ resonances and suggested that as many as three states existed. The Υ(1S) and Υ(2S) were confirmed at the DORIS accelerator. The first order of business once CESR was running was to find the Υs. CLEO and CUSB found the Υ(1S) shortly after beginning to collect data, and used the mass difference from DORIS to quickly find the Υ(2S). CESR's higher beam energies allowed CLEO and CUSB to find the more massive Υ(3S) and discover the Υ(4S). Furthermore, the presence of an excess of electrons and muons at the Υ(4S) indicated that it decayed to B mesons. CLEO proceeded to publish over sixty papers using the original CLEO I configuration of the detector.
CLEO had competition in the measurement of B mesons, particularly from the ARGUS collaboration. The CLEO collaboration was worried that the ARGUS detector at DESY would outperform CLEO, so it began to plan for an upgrade. The improved detector would use a new drift chamber for tracking and dE/dx measurements, a cesium iodide calorimeter inside a new solenoid magnet, time of flight counters, and new muon detectors. The new drift chamber (DR2) had the same outer radius as the original drift chamber to allow it to be installed before the other components were ready.
CLEO collected data for two years in the CLEO I.V configuration: new drift chamber, ten layer vertex detector (VD) inside the drift chamber, three layer straw tube drift chamber insert (IV) inside the VD, and a prototype CsI calorimeter replacing one of the original pole-tip shower detectors. The highlight of the CLEO I.V era was the observation of semi-leptonic B decays to charmless final states, submitted less than three weeks before a similar observation from ARGUS. The shutdown for the installation of DR2 allowed ARGUS to beat CLEO to the observation of B mixing, which was the most cited measurement of any of the symmetric B experiments.
CLEO II
CLEO shut down in April 1988 to begin the remainder of the CLEO II installation, and finished the upgrade in August 1989. A six layer straw chamber precision tracker (PT) replaced the IV, and the time-of-flight detectors, CsI calorimeter, solenoid magnet and iron, and muon chambers were all installed. This would be the CLEO II configuration of the detector. During the CLEO II era, the collaboration observed the flavor-changing neutral current decays B+,0→ K*+,0 γ and b → s γ. Decays of B mesons to two charmless mesons were also discovered during CLEO II. These decays were of interest because of the possibility of observing CP violation in decays such as K±π0, although such a measurement would require large amounts of data.
Observation of time-dependent asymmetries in the production of certain flavor-symmetric final states (such as J/Ψ K) was an easier way to detect CP violation in B mesons, both theoretically and experimentally. An asymmetric accelerator, one in which the electrons and positrons had different energies, was necessary to measure the time difference between B0 and 0 decays. CESR and CLEO submitted a proposal to build a low energy ring in the existing tunnel and upgrade the CLEO II detector with NSF funding. SLAC also submitted a proposal to build a B factory with DOE funds. The initial designs were first reviewed in 1991, but DOE and NSF agreed that insufficient funds were available to build either facility and a decision on which one to build was postponed. The proposals were reconsidered in 1993, this time with both facilities competing for DOE money. In October 1993, it was announced that the B factory would be built at SLAC.
After losing the competition for the B factory, CESR and CLEO proceeded with a two-part plan to upgrade the accelerator and the detector. The first phase was the upgrade to the CLEO II.V configuration between May and October 1995, which included a silicon detector to replace the PT and a change of the gas mixture in the drift chamber from an argon-ethane mix to a helium-propane mix. The silicon detector provided excellent vertex resolution, allowing precise measurements of D0, D+, Ds and τ lifetimes and D mixing. The drift chamber had better efficiency and momentum resolution.
CLEO III
The second phase of the upgrade included new superconducting quadrupoles near the detector. The VD and DR2 detectors would need to be replaced to make room for the quadrupole magnets. A new silicon detector and particle identification chamber would also be included in the CLEO-III configuration.
The CLEO III upgrade replaced the drift chamber and silicon detector and added a ring-imaging Cherenkov (RICH) detector for enhanced particle identification. The CLEO III drift chamber (DR3) achieved the same momentum resolution as the CLEO II.V drift chamber, despite having a shorter lever arm to accommodate the RICH detector. The mass of the CLEO III endplates was also reduced to allow better resolution in the endcap calorimeters.
CLEO II.V had stopped collecting data in February 1999. The RICH detector was installed beginning in June 1999, and DR3 was installed immediately afterwards. The silicon detector was to be installed next, but it was still being built. An engineering run was taken until the silicon detector was ready for installation in February 2000. CLEO III collected 6 fb−1 of data at the Υ(4S) and another 2 fb−1 below the Υ(4S).
With the advent of the high luminosity BaBar and Belle experiments, CLEO could no longer make competitive measurements of most of the properties of the B mesons. CLEO decided to study the various bottom and charm quarkonia states and charm mesons. The program began by revisiting the Υ states below the B meson threshold and the last data collected with the CLEO-III detector was at the Υ(1-3S) resonances.
CLEO-c
CLEO-c was the final version of the detector, and it was optimized for taking data at the reduced beam energies needed for studies of the charm quark. It replaced the CLEO III silicon detector, which suffered from lower-than-expected efficiency, with a six layer, all stereo drift chamber (ZD). CLEO-c also operated with the solenoid magnet at a reduced magnetic field of 1 T to improve the detection of low momentum charged particles. The low particle multiplicities at these energies allowed efficient reconstruction of D mesons. CLEO-c measured properties of the D mesons that served as inputs to the measurements made by the B factories. It also measured many of the quarkonia states that helped verify lattice QCD calculations.
Detector
CLEO's subdetectors performed three main tasks: tracking of charged particles, calorimetry of neutral particles and electrons, and identification of charged-particle type.
Tracking
CLEO has always used a solenoid magnet to allow the measurement of charged particles. The original CLEO design called for a superconducting solenoid, but it was clear that one could not be built in time. A conventional 0.42 T solenoid was installed first, then replaced by the superconducting magnet in September 1981. The superconducting coil was designed to operate at 1.2 T, but it was never operated above 1.0 T. A new magnet was built for the CLEO II upgrade and was placed between the calorimeter and the muon detector. It operated at 1.5 T until CLEO-c, when the magnetic field was reduced to 1.0 T.
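The physics behind all of these solenoidal trackers is the standard relation between an axial magnetic field, a track's radius of curvature, and its transverse momentum: pT [GeV/c] ≈ 0.3 · B [T] · R [m] for a unit-charge particle. A minimal sketch of that relation (the field strengths are the ones quoted above; the 1 m radius is an invented example, not a CLEO number):

```python
def pt_from_curvature(b_tesla: float, radius_m: float) -> float:
    """Transverse momentum (GeV/c) of a unit-charge track with radius
    of curvature radius_m in an axial field b_tesla, using the
    standard tracking relation pT = 0.3 * B * R."""
    return 0.3 * b_tesla * radius_m

# The same 1 m radius of curvature corresponds to different momenta in
# the CLEO II field (1.5 T) and the reduced CLEO-c field (1.0 T):
print(pt_from_curvature(1.5, 1.0))  # ~0.45 GeV/c
print(pt_from_curvature(1.0, 1.0))  # ~0.30 GeV/c
```

The lower CLEO-c field trades momentum resolution at high pT for larger curvature (hence better reconstruction) of the low-momentum tracks mentioned above.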
Wire chambers
The original CLEO detector used three separate tracking chambers. The innermost chamber (IZ) was a three layer proportional wire chamber that occupied the region between a radius of 9 cm and 17 cm. Each layer had 240 anode wires to measure track azimuth and 144 cathode strip hoops 5 mm wide inside and outside the anode wires (864 cathode strips total) to measure track z.
The CLEO I drift chamber (DR) was immediately outside the IZ and occupied the region between a radius of 17.3 cm and 95 cm. It consisted of seventeen layers of 11.3 mm × 10.0 mm cells with 42.5 mm between the layers, for a total of 5304 cells. There were two layers of field wires for every layer of sense wires. The odd-numbered layers were axial layers, and the even-numbered layers were alternating stereo layers.
The last CLEO I dedicated tracking chamber was the planar outer Z drift chamber (OZ) between the solenoid magnet and the dE/dx chambers. It consisted of three layers separated radially by 2.5 cm. The innermost layer was perpendicular to the beamline, and the outer two layers were at ±10° relative to the innermost chamber to provide some azimuthal tracking information. Each octant was equipped with an OZ chamber.
A new drift chamber, DR2, was built to replace the original drift chamber. The new drift chamber had the same outer radius as the original one so that it could be installed before the rest of the CLEO II upgrades were ready. DR2 was a 51 layer detector, with a 000+000- axial/stereo layer arrangement. DR2 had only one layer of field wires between each layer of sense wires, allowing many more layers to fit in the allotted space. The axial sense wires had a half-cell stagger to help resolve the left-right ambiguity of the original drift chamber. The inner and outer field layers of the chamber were cathode strips to make measurements of the longitudinal coordinate of tracks. DR2 was also designed to make dE/dx measurements in addition to tracking measurements.
The IZ chamber was replaced with a ten-layer drift chamber (VD) in 1984. When the beampipe radius was reduced from 7.5 to 5.0 cm in 1986, a three-layer straw chamber (IV) was built to occupy the newly available space. The IV was replaced during the CLEO II upgrade with a five-layer straw tube with a 3.5 cm inner radius.
The CLEO III drift chamber (DR3) was designed to have similar performance as the CLEO II/II.V drift chamber even though it would be smaller to allow space for the RICH detector. The innermost sixteen layers were axial, and the outermost 31 layers were grouped in alternating stereo four-layer superlayers. The outer wall of the drift chamber was instrumented with 1 cm wide cathode pads to provide additional z measurements.
The last drift chamber built for CLEO was the inner drift chamber ZD for the CLEO-c upgrade. Its six layer, all stereo layer design would provide longitudinal measurements of low-momentum tracks that would not reach stereo layers of the main drift chamber. With the exception of the larger stereo angle and smaller cell size, the ZD design was very similar to the DR3 design.
Silicon detectors
CLEO built its first silicon vertex detector for the CLEO II.V upgrade. The silicon detector was a three-layer device, arranged in octants. The innermost layer was at a radius of 2.4 cm and the outermost layer was at a radius of 4.7 cm. A total of 96 silicon wafers were used, with a total of 26208 readout channels.
The CLEO III upgrade included a new four layer, double-sided silicon vertex detector. It was made of 447 identical 1 in × 2 in wafers with a 50 micrometre strip pitch on the r-φ side and a 100 micrometre pitch on the z side. The silicon detector achieved 85% efficiency after installation, but soon began to suffer increasingly large inefficiencies. The inefficiencies were found in roughly semi-circular regions on the wafers. The silicon detector was replaced for CLEO-c because of its poor performance, the reduced need for vertexing capabilities, and the desire to minimize the material near the beampipe.
Calorimetry
CLEO I had three separate calorimeters. All used layers of proportional tubes interleaved with sheets of lead. The octant shower detectors were outside the time-of-flight detectors in each of the octants. Each octant detector had 44 layers of proportional tubes, alternating parallel and perpendicular to the beampipe. Wires were ganged together to reduce the number of readout channels for a total of 774 gangs. The octant end shower detectors were sixteen layer devices placed at either end of the dE/dx chambers. The layers followed an azimuthal, positive stereo, azimuthal, negative stereo pattern. The stereo wires were parallel to the slanted sides of the detector. The layers were ganged in a similar fashion as the octant shower detectors. The pole tip shower detector was placed between the ends of the drift chamber and the pole tips of the magnet flux return. The pole tip shower detector had 21 layers, with seven groups of vertical, +120°, -120° layers. The shower detector on each side was built in two halves to allow access to the beampipe.
The calorimetry was significantly improved during the CLEO II upgrade. The new electromagnetic calorimeter used 7784 CsI crystals doped with thallium. Each crystal was roughly 30 cm deep and had a 5 cm × 5 cm face. The central region of the calorimeter was a cylinder placed between the drift chamber and the solenoid magnet, and two endcap calorimeters were placed at either end of the drift chamber. The crystals in the endcap were oriented parallel to the beam line. The crystals in the central calorimeter faced a point displaced from the interaction point both longitudinally and transversely by a few centimeters to avoid inefficiencies from particles passing between neighboring crystals. The calorimeter primarily measured the energy of photons or electrons, however it was also used to detect antineutrons. All versions of the detector from CLEO-II through CLEO-c used the CsI calorimeter.
Particle identification
Five types of long-lived, charged particles are produced at CLEO: electrons, pions, muons, kaons and protons. Proper identification of each of these types significantly improves the capabilities of the detector. Particle identification was done by both dedicated subdetectors and by the calorimeter and drift chamber.
The outer portion of the CLEO detector was divided into independent octants that were primarily dedicated to charged particle identification. No clear consensus was reached on the choice of technology for particle identification, therefore two octants were equipped with dE/dx ionization chambers, two octants were equipped with high pressure gas Cerenkov detectors, and four octants were equipped with low pressure gas Cerenkov detectors. The dE/dx system demonstrated superior particle identification performance and aided in tracking, therefore in September 1981 all eight octants were equipped with dE/dx chambers. The dE/dx chambers measured the ionization of charged particles as they passed through a multiwire proportional chamber (MWPC). Each dE/dx octant was made with 124 separate modules, and each module contained 117 wires. Groups of ten modules were ganged together to minimize the number of readout channels. The first two and last two modules were not instrumented, therefore each octant had twelve cells.
The time-of-flight detector was directly outside the dE/dx chambers. It identified a charged particle by measuring its velocity and comparing it to the momentum measurement from the tracking chambers. Scintillating bars were arranged parallel to the beamline, with six bars for each half of the octant. The six bars in each octant half overlapped to avoid having any uninstrumented regions. The scintillation photons were detected by photomultiplier tubes. Each bar was 2.03 m × 0.312 m× 0.025 m.
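The identification principle of such a time-of-flight system can be written down directly: the flight time over a known path gives β = L/(tc), and combining β with the momentum measured in the tracking chambers yields the particle's mass, m = p·sqrt(1/β² − 1). A hedged sketch of that logic (the path length and momentum below are invented round numbers, not CLEO parameters):

```python
import math

C_M_PER_NS = 0.299792458  # speed of light in m/ns

def mass_from_tof(p_gev: float, path_m: float, t_ns: float) -> float:
    """Mass (GeV/c^2) from momentum (GeV/c), flight path (m), and
    flight time (ns): beta = L/(t*c), then m = p*sqrt(1/beta^2 - 1)."""
    beta = path_m / (t_ns * C_M_PER_NS)
    if not 0.0 < beta < 1.0:
        raise ValueError("unphysical velocity")
    return p_gev * math.sqrt(1.0 / beta**2 - 1.0)

def flight_time_ns(mass_gev: float, p_gev: float, path_m: float) -> float:
    """Expected flight time for a particle of given mass and momentum."""
    beta = p_gev / math.hypot(p_gev, mass_gev)
    return path_m / (beta * C_M_PER_NS)

# Kaon-pion separation for a 1 GeV/c track over a 1.5 m path is about
# half a nanosecond -- the scale a TOF's timing resolution must beat.
dt_ns = flight_time_ns(0.494, 1.0, 1.5) - flight_time_ns(0.140, 1.0, 1.5)
print(round(dt_ns * 1000))  # separation in picoseconds
```

The separation shrinks as momentum grows (β → 1 for all species), which is why TOF identification is most useful for low-momentum tracks.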
The CLEO I muon drift chambers were the outermost detectors. Two layers of muon detectors were outside the magnet iron on either end of CLEO. The barrel region had two additional layers of muon chambers after 15 cm and 30 cm of magnet iron. The muon detectors were between 4 and 10 radiation lengths deep and were sensitive to muons with energies of at least 1-2 GeV. The magnet yoke weighed 580 tons, and each of four movable carts at each corner of the detector weighed 240 tons, for a total of 1540 tons.
CLEO II used time-of-flight detectors between the drift chamber and the calorimeter, one in the barrel region, the other in the endcap region. The barrel region consisted of 64 Bicron bars with light guides leading to photomultiplier tubes outside the magnetic field region. A similar system covered the endcap region. The TOF system had a timing resolution of 150 ps. The central and endcap TOF detectors combined covered 97% of the solid angle.
The CLEO I muon detector was far away enough from the interaction region that in-flight decays of pions and kaons were a significant background. The more compact structure of the CLEO II detector allowed the muon detectors to be moved closer to the interaction point. Three layers of muon detectors were placed behind layers of iron absorbers. The streamer counters were read out from each end to determine the z position.
The CLEO III upgrade included the addition of the RICH subdetector, a dedicated particle identification subdetector. The RICH detector was required to be less than 20 cm in the radial direction, between the drift chamber and the calorimeter, and less than 12% of a radiation length. The RICH detector used the Cerenkov radiation of charged particles to measure their velocity. Combined with the momentum measurement from the tracking detectors, the mass of the particle, and therefore its identity, could be determined. Charged particles produced Cerenkov light as they pass through a LiF window. Fourteen rings of thirty LiF crystals comprised the radiator of the RICH, and the four centermost rings had a sawtooth pattern to prevent total internal reflection of the Cerenkov photons. The photons traveled through a nitrogen expansion volume, which allowed the cone angle to be precisely determined. The photons were detected by 7.5 mm × 8.0 mm cathode pads in a multi-wire chamber containing a methane-triethylamine gas mixture.
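The inference the RICH performs follows from the Cherenkov relation cos θc = 1/(nβ): the measured ring gives the emission angle, hence β, which combined with the tracker's momentum determines the mass. A sketch of that round trip, with an invented refractive index n = 1.5 standing in for the actual LiF dispersion:

```python
import math

def beta_from_cherenkov(theta_rad: float, n: float) -> float:
    """Velocity beta from the Cherenkov emission angle,
    inverting cos(theta) = 1/(n*beta)."""
    return 1.0 / (n * math.cos(theta_rad))

def mass_from_beta(p_gev: float, beta: float) -> float:
    """Mass (GeV/c^2) from momentum (GeV/c) and velocity beta."""
    return p_gev * math.sqrt(1.0 / beta**2 - 1.0)

# Round trip for a 1 GeV/c kaon: compute the angle its ring would show,
# then recover the kaon mass from that angle plus the momentum.
n = 1.5                       # illustrative index, not the LiF value
p, m_kaon = 1.0, 0.494
beta_true = p / math.hypot(p, m_kaon)
theta = math.acos(1.0 / (n * beta_true))
print(mass_from_beta(p, beta_from_cherenkov(theta, n)))  # ~0.494
```

In practice the angle is reconstructed from many detected photons per ring, and the real n varies with photon wavelength, but the mass inference has this form.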
Physics program
CLEO has published over 200 articles in Physical Review Letters and more than 180 articles in Physical Review. The reports of inclusive and exclusive b → s γ have both been cited over 500 times. B physics was usually CLEO's top priority, but the collaboration has made measurements across a wide spectrum of particle physics topics.
B mesons
CLEO's most cited paper reported the first measurement of the flavor-changing neutral current decay b→sγ. The measurement agreed well with the Standard Model and placed significant constraints on numerous beyond the Standard Model proposals, such as charged Higgs and anomalous WWγ couplings. The analogous exclusive decay B+,0→ K*+,0 γ was also measured. CLEO and ARGUS reported nearly simultaneous measurements of inclusive charmless semileptonic B meson decays, which directly established a non-zero value of the CKM matrix element |Vub|. Exclusive charmless semileptonic B meson decays were first observed by CLEO six years later in the modes B → πlν, ρlν, and were used to determine |Vub|. CLEO also discovered many of the hadronic analogs: B+,0→ K*(892)+π−, φ K(*), K+π0, K0π0, π+π−, π+ρ0, π+ρ−, π+ω, η K*, η′ K and K0π+, K+π−. These charmless hadronic decay modes can probe CP violation and are sensitive to the angles α and γ of the unitarity triangle. Finally, CLEO observed many exclusive charmed decays of B mesons, including several that are sensitive to |Vcb|: B→ D(*)K*−, B̄0→ D*0π0, B→ Λπ−, Λπ+π−, B̄0→ D*0π+π+π−π−, B̄0→ D*ρ′−, B0→ D*−pπ+, D*−p, B→ J/Ψ φ K, B0→ D*+D*−, and B+→ D̄0 K+.
Charm hadrons
Although CLEO ran mainly near the Υ(4S) to study B mesons, it was also competitive with experiments designed to study charm hadrons. The first measurement of charm hadron properties by CLEO was the observation of the Ds. CLEO measured a mass of 1970±7 MeV, considerably lower than previous observations at 2030±60 MeV and 2020±10 MeV. CLEO discovered the DsJ(2573) and the DsJ(2463). CLEO was the first experiment to measure the doubly Cabibbo suppressed decay D0→ K+π−, and CLEO performed Dalitz analyses of D0,+ in several decay modes. CLEO studied the D*(2010)+, making the first measurement of its width and the most precise measurement of the D*-D0 mass difference. CLEO-c made many of the most accurate measurements of D meson branching ratios in inclusive channels, μ+νμ, semileptonic decays, and hadronic decays. These branching fractions are important inputs to B meson measurements at BaBar and Belle. CLEO first observed the purely leptonic decay Ds+→ μ+νμ, which provided an experimental measure of the decay constant fDs. CLEO-c made the most precise measurements of fD+ and fDs. These decay constants are in turn a key input to the interpretation of other measurements, such as B mixing. Other Ds decay modes discovered by CLEO are pn̄, ωπ+, η ρ+, η'ρ+, φρ+, η π+, η'π+, and φ l ν. CLEO discovered many charmed baryons and discovered or improved the measurement of many charmed baryon decay modes. Before BaBar and Belle began discovering new charm baryons in 2005, CLEO had discovered thirteen of the twenty known charm baryons: Ξc, Ξc(2790), Ξc(2815), Ξc, Σc(2520), Ξc(2645), Ξc(2645), and Λc(2593). Charmed baryon decay modes discovered at CLEO are Ωc0→ Ω−e+νe; Λc+→ pK̄0η, Ληπ+, Σ+η, Σ*+η, Λ0K+, Σ+π0, Σ+ω, Λπ+π+π−π0, Λωπ+; and Ξc+→ Ξ0e+νe.
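The link between a purely leptonic branching fraction and the decay constant can be made concrete. In the Standard Model the width is Γ(Ds+→ℓ+ν) = (G_F²/8π) f_Ds² m_ℓ² m_Ds (1 − m_ℓ²/m_Ds²)² |Vcs|². A short Python sketch with rough, PDG-style illustrative inputs (these are not CLEO's measured numbers):

```python
import math

# Standard-Model width for Ds+ -> mu+ nu:
#   Gamma = (G_F^2 / 8 pi) * f_Ds^2 * m_mu^2 * m_Ds * (1 - m_mu^2/m_Ds^2)^2 * |Vcs|^2
G_F    = 1.1663787e-5   # Fermi constant, GeV^-2
f_Ds   = 0.250          # decay constant, GeV (illustrative)
m_mu   = 0.10566        # muon mass, GeV
m_Ds   = 1.9683         # Ds+ mass, GeV
V_cs   = 0.973          # CKM element (illustrative)
tau_Ds = 5.04e-13       # Ds+ lifetime, s (illustrative)
hbar   = 6.5821e-25     # GeV * s

gamma = (G_F**2 / (8 * math.pi)) * f_Ds**2 * m_mu**2 * m_Ds \
        * (1 - m_mu**2 / m_Ds**2)**2 * V_cs**2   # partial width, GeV
br = gamma * tau_Ds / hbar                        # branching fraction
print(f"BR(Ds+ -> mu+ nu) ~ {br:.2e}")           # a few times 10^-3
```

Inverting this relation, a measured branching fraction and lifetime yield f_Ds directly, which is how the CLEO observation constrained the decay constant.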
Quarkonium
Quarkonium states provide experimental input for lattice QCD and non-relativistic QCD calculations. CLEO studied the Υ system until the end of the CUSB and CUSB-II experiments, then returned to the Υ system with the CLEO III detector. CLEO-c studied the lower mass ψ states. CLEO and CUSB published their first papers back-to-back, reporting observation of the first three Υ states. Earlier claims of the Υ(3S) relied on fits of one peak with three components; CLEO and CUSB's observation of three well separated peaks dispelled any remaining doubt about the existence of the Υ(3S). The Υ(4S) was discovered shortly after by CLEO and CUSB and was interpreted as decaying to B mesons because of its large decay width. An excess of electrons and muons at the Υ(4S) demonstrated the existence of weak decays and confirmed the interpretation of the Υ(4S) decaying to B mesons. CLEO and CUSB later reported the existence of the Υ(5S) and Υ(6S) states.
CLEO I through CLEO II had significant competition in Υ physics, primarily from the CUSB, Crystal Ball and ARGUS experiments. CLEO was able, however, to observe a number of Υ(1S) decays: τ+τ−, J/Ψ X and γ X with X = π+, π0, 2π+, π+K+, π+p, 2K+, 3π+, 2π+K+, and 2π+p. The radiative decays are sensitive to the production of glueballs.
CLEO collected more data at the Υ(1-3S) resonances at the end of the CLEO III era. CLEO III discovered the Υ(1D) state, the χb1,2(2P)→ωΥ(1S) transitions, and Υ(3S)→τ+τ− decays among others.
CLEO-c measured many of the properties of the charmonium states. Highlights include confirmation of ηc', confirmation of Y(4260), pseudoscalar-vector decays of ψ(2S), ψ(2S)→J/ψ decays, observation of thirteen new hadronic decays of ψ(2S), observation of hc(1P1), and measurement of the mass and branching fractions of η in ψ(2S)→J/ψ decay.
Tau leptons
CLEO discovered six decay modes of the τ: K−π0ντ, e−ν̄eντγ, π−π−π+ηντ, π−π0π0ηντ, f1π−ντ, K−ηντ, and K−ωντ.
CLEO measured the lifetime of the τ three times with a precision comparable to or better than any other measurement at the time. CLEO also measured the mass of the τ twice. CLEO set limits on the mass of ντ several times, although the CLEO limit was never the most stringent one. CLEO's measurements of the Michel parameters were the most precise of their time, many by a substantial margin.
Other measurements
CLEO has studied two-photon physics, where both an electron and positron radiate a photon. The two photons interact to produce either a vector meson or hadron-antihadron pairs. CLEO published measurements of both the vector meson process and the hadron-antihadron process.
CLEO performed an energy scan for center-of-mass energies between 7 GeV and 10 GeV to measure the hadronic cross section ratio R = σ(e+e− → hadrons)/σ(e+e− → μ+μ−). CLEO made the first measurements of the π+ and K+ electromagnetic form factors for Q2 > 4 GeV2.
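At leading order in the parton model, R simply counts the quark flavors that are kinematically accessible, weighted by the squares of their charges and the number of colors. A small Python sketch (the pair-production thresholds are rough illustrative values):

```python
from fractions import Fraction

# Quark charges (units of e) and rough pair-production thresholds (GeV), illustrative
QUARKS = {"u": (Fraction(2, 3), 0.0), "d": (Fraction(-1, 3), 0.0),
          "s": (Fraction(-1, 3), 1.0), "c": (Fraction(2, 3), 3.8),
          "b": (Fraction(-1, 3), 10.6)}
N_COLORS = 3

def r_ratio(sqrt_s):
    """Leading-order parton-model R = sigma(e+e- -> hadrons)/sigma(e+e- -> mu+mu-)."""
    return N_COLORS * sum(q**2 for q, thr in QUARKS.values() if sqrt_s > thr)

print(r_ratio(9.0))   # u, d, s, c open: 3*(4/9 + 1/9 + 1/9 + 4/9) = 10/3
```

Between 7 and 10 GeV the naive prediction is R = 10/3; the measured value tests both the color factor and QCD corrections.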
Finally, CLEO has performed searches for Higgs bosons and other beyond Standard Model particles: axions, magnetic monopoles, neutralinos, fractionally charged particles, bottom squarks, and familons.
Collaboration
Initial design of a detector for the south interaction region of CESR began in 1975. Physicists from Harvard University, Syracuse University and the University of Rochester had worked at the Cornell synchrotron, and were natural choices as collaborators with Cornell. They were joined by groups from Rutgers University and Vanderbilt University, along with collaborators from LeMoyne College and Ithaca College. Additional institutions were assigned responsibility for detector components as they joined the collaboration. Cornell appointed physicists to oversee development of the portion of the detector inside the magnet, the portion outside the magnet, and the magnet itself. The structure of the collaboration was designed to avoid perceived shortcomings at SLAC, where SLAC physicists were felt to dominate operations by virtue of their access to the accelerator, the detector, and the computing and machine facilities. Collaborators were free to work on the analysis of their choosing, and the approval of results for publication was by collaboration-wide vote. The spokesperson (later spokespeople) was also selected by a collaboration-wide vote in which graduate students took part. The other officers in the collaboration were an analysis coordinator and a run manager, later joined by a software coordinator.
The first CLEO paper listed 73 authors from eight institutions. Cornell University, Syracuse University and the University of Rochester have been members of CLEO for its entire history, and forty-two institutions have been members of CLEO at one time. The collaboration was its largest in 1996 at 212 members, before collaborators began to move to the BaBar and Belle experiments. The largest number of authors to appear on a CLEO paper was 226. A paper published near the time CLEO stopped taking data had 123 authors.
Notes
References
AIP Study of Multi-Institutional Collaborations Phase I: High-Energy Physics
Particle detectors | CLEO (particle detector) | [
"Technology",
"Engineering"
] | 6,779 | [
"Particle detectors",
"Measuring instruments"
] |
5,478,196 | https://en.wikipedia.org/wiki/Sudden%20ionospheric%20disturbance | A sudden ionospheric disturbance (SID) is any one of several ionospheric perturbations, resulting from abnormally high ionization/plasma density in the D region of the ionosphere and caused by a solar flare and/or solar particle event (SPE). The SID results in a sudden increase in radio-wave absorption that is most severe in the upper medium frequency (MF) and lower high frequency (HF) ranges, and as a result often interrupts or interferes with telecommunications systems.
Discovery
The Dellinger effect, or sometimes Mögel–Dellinger effect, is another name for a sudden ionospheric disturbance. The effect was first described by the German physicist Hans Mögel (1900-1944) in 1930 and independently discovered by John Howard Dellinger around 1935. The fadeouts are characterized by sudden onset and a recovery that takes minutes or hours.
Cause
When a solar flare occurs on the Sun a blast of intense ultraviolet (UV) and x-ray (sometimes even gamma ray) radiation hits the dayside of the Earth after a propagation time of about 8 minutes. This high energy radiation is absorbed by atmospheric particles, raising them to excited states and knocking electrons free in the process of photoionization. The low altitude ionospheric layers (D region and E region) immediately increase in density over the entire dayside. The ionospheric disturbance enhances VLF radio propagation. Scientists on the ground can use this enhancement to detect solar flares; by monitoring the signal strength of a distant VLF transmitter, sudden ionospheric disturbances (SIDs) are recorded and indicate when solar flares have taken place. The small geomagnetic effect in the lower ionosphere appears as a small hook on magnetic records and is therefore called "geomagnetic crochet effect" or "sudden field effect".
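The flare-detection scheme described above amounts to watching the received strength of a distant VLF transmitter for a sudden jump. A toy Python sketch on synthetic data (the threshold and sample values are illustrative, not taken from any real monitoring station):

```python
# Flag sudden enhancements in a (synthetic) VLF signal-strength series.
# The 3 dB jump threshold and the data below are illustrative assumptions.

def detect_sid(samples, jump_db=3.0):
    """Return indices where the signal rises by more than jump_db between samples."""
    return [i for i in range(1, len(samples))
            if samples[i] - samples[i - 1] > jump_db]

quiet = [40.0, 40.2, 39.9, 40.1]          # quiet daytime baseline, dB
flare = [46.0, 45.5, 44.0, 42.0, 40.5]    # sudden onset, slow recovery
signal = quiet + flare

print(detect_sid(signal))  # [4] -- onset at the first flare sample
```

Real monitors must additionally reject the slow sunrise/sunset variation of the VLF signal, which a simple sample-to-sample threshold like this ignores.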
Effects on radio waves
Short wave radio waves (in the HF range) are absorbed by the increased ionization in the low altitude D-region of the ionosphere, causing a complete blackout of radio communications. This is called a short wave fadeout (SWF). These fadeouts last from a few minutes to a few hours and are most severe in the equatorial regions, where the Sun is most directly overhead. Although high frequency signals suffer a fadeout because of the enhanced D-layer, a sudden ionospheric disturbance enhances long wave (VLF) radio propagation. SIDs are observed and recorded by monitoring the signal strength of a distant VLF transmitter.
A whole array of sub-classes of SIDs exist, detectable by different techniques at various wavelengths: the short-wave fadeout (SWF), the SPA (Sudden Phase Anomaly), SFD (Sudden Frequency Deviation), SCNA (Sudden Cosmic Noise Absorption), SEA (Sudden Enhancement of Atmospherics), etc.
See also
Ionospheric storm
Space weather
References
External links
AAVSO SID Monitoring Program
Further information on SID monitoring
Space Weather Monitors- Stanford SOLAR Center
Amateur SID monitoring station
SID monitoring using Spectrum Lab
NASA - Carrington Super Flare NASA May 6, 2008
Ionosphere
Planetary science
Radio frequency propagation
Space science
Space weather | Sudden ionospheric disturbance | [
"Physics",
"Astronomy"
] | 644 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Outer space",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Space science",
"Waves",
"Planetary science",
"Astronomical sub-disciplines"
] |