id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
77,366,958 | https://en.wikipedia.org/wiki/Flore%20laurentienne | Flore laurentienne (English: The Laurentian Flora), by Bro. Marie-Victorin (Conrad Kirouac), is the scientific inventory of the vascular plants growing spontaneously in the St. Lawrence River valley, in Quebec, Canada.
First published by the Brothers of the Christian Schools in 1935, the manual lists and describes 1,568 species of pteridophytes, gymnosperms and angiosperms, with the plants illustrated by Bro. Alexandre Blouin.
History
The Flore laurentienne is the fruit of thirty years of study, research, gathering, plant collecting, and classification of thousands of specimens. In 1935, in the midst of an economic crisis, it took the energy, charisma and sense of organization of Marie-Victorin, assisted by his collaborators, to bring the manuscript to the presses of the Brothers of the Christian Schools.
From its launch on April 3, 1935, at the Viger Hotel in Montreal, the Flore laurentienne was acclaimed as the bible of French-Canadian naturalists.
Flore laurentienne divisions
<blockquote>Preface — Historical and bibliographical summary of Laurentian botany — General outline — Synopsis of systematic groups — Artificial key to plants of Quebec — Pteridophytes — Spermatophytes — Gymnosperms — Angiosperms — Dicotyls — Monocotyls — Glossary — Abbreviations of author names — Alphabetical index (Marie-Victorin, p. 4, 1935)</blockquote>
Editions
Recent editions are still available in bookstores, educational institutions, public libraries and online. First published in 1935 in large format, the work has undergone several reissues:
Second edition, completely revised and updated by Ernest Rouleau (1916-1991), published in September 1964, printed on Bible paper and in a reduced format;
Third edition, updated and annotated by Luc Brouillet, Stuart G. Hay and Isabelle Goulet, published in October 1995, reprinted in 2002;
Digital edition, florelaurentienne.com, on line, updated, annotated, continuously active since 2001.
Collaborators
To carry out his work, Bro. Marie-Victorin surrounded himself with several collaborators, some of whom were his students. Foremost among them is Bro. Alexandre Blouin (1892-1987), the author of the 2,800 illustrations of the Flora, whose name appears on the title page of the work. Jacques Rousseau, who would later become a botanist and ethnologist of international reputation, is the author of the "artificial key to the plants of Quebec", which, by avoiding overly technical elements and using the simplest and most easily perceived characters, "allows even beginners and amateurs to orient themselves and arrive at the desired identification". For his part, Jules Brunel, Marie-Victorin's assistant at the Montreal Botanical Institute, was responsible for preparing the manuscripts, checking the documentation and correcting the proofs. The last two also wrote the sections dealing with some of the more contentious genera.
The author also addresses special thanks to other people, including Bro. Rolland-Germain, his collaborator for thirty years, Marcelle Gauvreau, librarian of the Botanical Institute, and Émile Jacques, curator of the herbarium of this institution.
Reception
The publication of the first edition of Flore laurentienne was an event awaited by Quebec society at the time; it was announced on the front page of the daily Le Devoir. Biologist Georges Préfontaine wrote in Le Devoir: "A new monument, luminous and imperishable, stands today in the firmament of American botanical science." The literary critic Pierre Daviault, in Le Droit, was equally complimentary.
The same year the flora was published, the gold medal from the Provancher Society of Natural History of Canada was awarded to Marie-Victorin for its publication.
Culture
The Flore laurentienne is mentioned several times in Réjean Ducharme's novel L'Hiver de force.
References
External links
Archive | Brother Marie-Victorin: a heritage between nature and literature, quotes, analyses, audio documents, Radio-Canada Info, July 2019
Frère Marie-Victorin (1885-1944), Anticosti, land with immense spaces, the sea all around, and a social world of the 1920s gone forever. Anticosti/UdeM archives, 14 photos (French)
Marie-Victorin Herbarium, Institut de recherche en biologie vegetale, at the Montreal Botanical Garden
Revisit the Laurentian Flora, Michaële Perron-Langlais, Radio-Canada, 12 June 2024
Florae (publication)
Flora of Quebec | Flore laurentienne | Biology | 983 |
72,571,890 | https://en.wikipedia.org/wiki/Glass%20basketball%20court | A glass basketball court is a basketball court with a glass floor that uses light emitting diodes (LEDs) to display the court lines and other graphics.
History
ASB GlassFloor, a German manufacturer, first demonstrated a glass court for sports including basketball in 2011. Its first installation was for a 3x3 basketball event in Berlin in 2014. The company makes two different kinds of glass floor that are approved by FIBA for tier 1 competitions: ASB MultiSports, which offers LED lines, and ASB LumiFlex, which allows full motion video and player tracking. The LumiFlex option can display statistics and advertising for spectators in the arena in ways comparable to digital on-screen graphics on television broadcasts.
In 2017, FIBA allowed another manufacturer to supply LED-lined glass floors for tier 2 and tier 3 competitions, noting that the flooring passed the association's requirements for player and ball reaction against the surface and avoided the redundant lines found on many existing multi-use courts. After successful trials, FIBA approved glass courts on October 1, 2022, for tier 1 competitions such as the FIBA Basketball World Cup, and a glass court would be used for the first time during the 2023 FIBA Under-19 Women's Basketball World Cup in Madrid.
In 2014, Nike developed a glass court with AKQA, Rhizomatiks and WiSpark for an exhibition in Shanghai. The company invited 30 players to practice with Kobe Bryant on the court, nicknamed the "House of Mamba". The custom court included motion tracking and lighting that could track players as they ran drills.
Glass courts are installed in several European basketball arenas, including the BallsportArena Dresden, the OYM Performance Center in Switzerland, and an arena at the University of Oxford. Two professional European basketball teams permanently installed ASB GlassFloor courts in their arenas in 2024. FC Bayern Munich installed a LumiFlex LED court at its home arena, BMW Park. Panathinaikos B.C. also installed a LumiFlex court at their home, O.A.C.A. Olympic Indoor Hall, in Athens.
In February 2024, the NBA held the Saturday night activities of All-Star weekend, including the skills challenge, on a glass court at Lucas Oil Stadium. The All-Star Game proper was played at Gainbridge Fieldhouse, on a traditional court.
Disadvantages
As of June 2022, a MultiSports floor costs about $80–90 (USD) per square foot and a LumiFlex floor costs about $500 per square foot; a full NBA court with LumiFlex technology would cost about $2 million, leading to doubt about its viability for widespread adoption.
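As a rough check on the figures above, floor cost scales with area. The sketch below multiplies the quoted per-square-foot prices by the area of a regulation NBA playing surface (94 ft × 50 ft, a standard dimension assumed here rather than taken from the article) and lands near the cited $2 million for LumiFlex.

```python
# Rough cost estimate for an LED glass floor, using the per-square-foot
# prices quoted above. The 94 ft x 50 ft court size is a standard NBA
# dimension assumed for this sketch, not stated in the article.
COURT_LENGTH_FT = 94
COURT_WIDTH_FT = 50

PRICE_PER_SQFT = {
    "MultiSports": 85,   # midpoint of the quoted $80-90 range
    "LumiFlex": 500,
}

area_sqft = COURT_LENGTH_FT * COURT_WIDTH_FT  # 4,700 sq ft

for floor, price in PRICE_PER_SQFT.items():
    print(f"{floor}: ~${area_sqft * price:,.0f}")
# LumiFlex comes out to ~$2,350,000, consistent with the "about $2 million"
# figure (the installed surface may differ slightly from the marked court).
```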
A glass court cannot be laid atop an ice surface, making it unsuitable for multi-purpose arenas which host both ice hockey and basketball games during overlapping schedules.
References
Glass architecture
Sports rules and regulations
Sports venues by type
Playing field surfaces
Basketball equipment | Glass basketball court | Materials_science,Engineering | 602 |
2,409,756 | https://en.wikipedia.org/wiki/Hemimastigophora | Hemimastigophora is a group of single-celled eukaryotic organisms including the Spironematellidae, first identified in 1988, and the Paramastigidae. Over the next 30 years, different authors proposed placing these organisms in various branches of the eukaryotes. In 2018 Lax et al. reported the first genetic information for the Spironemidae and suggested that hemimastigotes are an ancient lineage of eukaryotes constituting a separate clade from all other eukaryotic kingdoms. The group may be related to the Telonemia.
History of classification
Hemimastigophora was established in 1988 by Foissner et al., as a new phylum with a single family, Spironemidae. Its placement on the eukaryote tree of life was unclear, but the authors suggested that the structure of its pellicle and cell nucleus indicated a close relationship with Euglenozoa. For 30 years after the description of the group, no genetic information was available. During that time, researchers proposed that it should be classified in, or near, an assortment of other groups, including the alveolates, apusomonads, ancyromonads, and Rhizaria.
In an article published in 2018, Lax et al. announced that a new hemimastigophoran species, Hemimastix kukwesjijk, had been discovered in a Nova Scotian soil sample, and successfully cultivated in the laboratory. A second hemimastigophoran, a new species of Spironema, was found in the same sample. Phylogenomic analyses of the two organisms suggest that Hemimastigophora is a very ancient lineage, which diverged from the other eukaryotes at such an early date that the group should be classified at the supra-kingdom level.
A 2024 study revealed that the enigmatic Meteora sporadica is also related to Hemimastigophora.
Classification
The hemimastigote classification, as of 2022:
Family Spironematellidae (=Spironemidae Doflein 1916)
Hemimastix Foissner, Blatterer & Foissner, 1988
H. amphikineta Foissner, Blatterer & Foissner, 1988
H. kukwesjijk Eglit & Simpson, 2018
Stereonema Foissner & Foissner 1993 non Kützing 1836
S. geiseri Foissner & Foissner 1993
Spironematella (=Spironema Klebs 1893 non Vuillemin 1905 non Léger & Hesse 1922 non Rafinesque 1838 non Hochst. 1842 non Lindley 1840 non Meek 1864)
S. multiciliata (Klebs 1892) Silva 1970 (=Spironema multiciliatum Klebs 1893)
S. terricola (Foissner & Foissner 1993) Shɨshkin 2022 (=Spironema terricola Foissner & Foissner 1993)
S. goodeyi (Foissner & Foissner 1993) Shɨshkin 2022 (=Spironema goodeyi Foissner & Foissner 1993)
Family Paramastigidae
Paramastix Skuja 1948
P. lata Skuja 1956
P. minuta Skuja 1964
P. conifera Skuja 1948
P. truncata Skuja 1948
References | Hemimastigophora | Biology | 724 |
38,002,503 | https://en.wikipedia.org/wiki/Gamma1%20Normae |
Gamma1 Normae, Latinized from γ1 Normae, is a single, yellow-white hued star in the southern constellation of Norma. It is faintly visible to the naked eye with an apparent visual magnitude of 4.98. The annual parallax shift measured from Earth is small, yielding a rough distance estimate of 1,500 light years from the Sun. It is moving closer to the Sun with a radial velocity of around −16 km/s.
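The parallax value itself did not survive in the text above, but the relation it relies on is simple: a star's distance in parsecs is the reciprocal of its annual parallax in arcseconds. A minimal sketch follows; the 1,500 light-year figure is from the text, while the roughly 2 milliarcsecond parallax is merely the value implied by it.

```python
# Distance from annual parallax: d [pc] = 1 / p [arcsec].
LY_PER_PARSEC = 3.26156  # Julian light-years per parsec

def distance_ly(parallax_mas: float) -> float:
    """Distance in light-years from a parallax given in milliarcseconds."""
    parallax_arcsec = parallax_mas / 1000.0
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

def parallax_mas(distance_in_ly: float) -> float:
    """Inverse relation: parallax (mas) implied by a distance in light-years."""
    return 1000.0 * LY_PER_PARSEC / distance_in_ly

print(parallax_mas(1500))   # ~2.2 mas, the order of magnitude implied by 1,500 ly
print(distance_ly(2.17))    # ~1,500 ly
```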
This is an F-type supergiant star with a stellar classification of F9 Ia. It has 6.6 times the mass of the Sun and has expanded to about 160 times the Sun's radius. The star is radiating 2,040 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 6,068 K. It is estimated to be around 53 million years old.
γ2 Nor is a nearby star nearly a magnitude brighter.
References
F-type supergiants
Norma (constellation)
Normae, Gamma1
Durchmusterung objects
146143
079790
6058 | Gamma1 Normae | Astronomy | 239 |
40,789 | https://en.wikipedia.org/wiki/Bilateral%20synchronization | In telecommunications, bilateral synchronization (or bilateral control) is a synchronization control system between exchanges A and B in which the clock at telephone exchange A controls the data received at exchange B and the clock at exchange B controls the data received at exchange A.
Bilateral synchronization is usually implemented by deriving the timing from the incoming bitstream.
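A minimal illustration of deriving timing from the incoming bitstream: if an exchange timestamps the signal transitions it receives, it can estimate the far-end bit period by assuming each gap between transitions spans a whole number of bits. The function and parameter names below are illustrative assumptions, not part of Federal Standard 1037C or MIL-STD-188.

```python
def estimate_bit_period(transition_times, nominal_period):
    """Estimate the incoming clock period from transition timestamps.

    Assumes transitions fall on bit boundaries, so each interval between
    successive transitions is roughly an integer number of bit periods
    near the nominal value.
    """
    intervals = [b - a for a, b in zip(transition_times, transition_times[1:])]
    bit_counts = [max(1, round(iv / nominal_period)) for iv in intervals]
    return sum(intervals) / sum(bit_counts)

# Example: nominal bit period 1.0 us, incoming clock actually ~1.02 us.
times = [0.0, 1.02, 3.06, 4.08, 8.16]                  # microseconds
print(estimate_bit_period(times, nominal_period=1.0))  # ~1.02
```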
Source: from Federal Standard 1037C in support of MIL-STD-188
See also
Plesiochronous digital hierarchy
Synchronization | Bilateral synchronization | Engineering | 107 |
73,963,826 | https://en.wikipedia.org/wiki/Marta%20Schuhmacher | Marta Schuhmacher is a distinguished professor of environmental technology at the Department of Chemical Engineering at Universitat Rovira i Virgili, Tarragona, Spain. She is known for her work linking the presence of chemicals with environmental and human health issues.
Career
Schuhmacher earned her degree in Chemistry in 1976, received a B.Sc. from UNED in 1991, and completed her Ph.D. at the University of Zaragoza, Spain, in 1990. She later obtained a master's in engineering and environmental management from the School of Industrial Organization, Ministry of Industry and Energy, Madrid, Spain, in 1995.
She has been teaching at the Department of Chemical Engineering (DEQ) since 1993, and in 2009 she became a professor (Catedrática). Since 2015, she has been a distinguished professor at the University of Rovira i Virgili. She served as the director of the AGA Research Group (Environmental Analysis and Management) of the Chemical Engineering department of the Rovira i Virgili University and of the Tecnatox center until 2023. As of 2024 she is a full professor of environmental technology at the Universitat Rovira i Virgili.
Research
Schuhmacher is known for her work in examining pollution, specifically suspended particulate matter, in schools. She has also examined the presence of microplastics in marine systems, and compared models used to define toxicity. Schuhmacher has focused on the development of techniques for risk assessment, and their application to new and emerging chemical compounds, and very particularly to that of mixtures.
Selected publications
Awards and honors
Schuhmacher received recognition with the President Macià Work Medal in 2023.
References
Living people
University of Zaragoza alumni
Academic staff of the University of Rovira i Virgili
Toxicologists
Women toxicologists
Spanish toxicologists
Year of birth missing (living people) | Marta Schuhmacher | Environmental_science | 375 |
462,433 | https://en.wikipedia.org/wiki/Unit%20of%20length | A unit of length refers to any arbitrarily chosen and accepted reference standard for measurement of length. The most common units in modern use are the metric units, used in every country globally. In the United States the U.S. customary units are also in use. British Imperial units are still used for some purposes in the United Kingdom and some other countries. The metric system is sub-divided into SI and non-SI units.
Metric system
SI
The base unit in the International System of Units (SI) is the meter, defined as "the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second." It is approximately equal to 39.37 inches (1.0936 yards). Other SI units are derived from the meter by adding prefixes, as in millimeter or kilometer, thus producing systematic decimal multiples and submultiples of the base unit that span many orders of magnitude. For example, a kilometer is 1,000 meters (about 0.62 miles).
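A small sketch of how SI prefixes generate decimal multiples and submultiples of the meter; the prefix list shown is only a subset of the full SI set.

```python
# Decimal multiples and submultiples of the meter via SI prefixes (subset).
SI_PREFIX_FACTORS = {
    "nano": 1e-9, "micro": 1e-6, "milli": 1e-3, "centi": 1e-2,
    "": 1.0, "kilo": 1e3, "mega": 1e6,
}

def to_meters(value: float, prefix: str = "") -> float:
    """Convert a length expressed with an SI prefix into meters."""
    return value * SI_PREFIX_FACTORS[prefix]

print(to_meters(1, "kilo"))      # 1000.0 m in a kilometer
print(to_meters(25.4, "milli"))  # 0.0254 m in an inch
```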
Non-SI
In the centimeter–gram–second system of units, the basic unit of length is the centimeter, or 1/100 of a meter.
Other non-SI units are derived from decimal multiples of the meter.
Imperial/U.S.
The basic unit of length in the imperial and U.S. customary systems is the yard, defined as exactly 0.9144 meters by international treaty in 1959.
Common imperial units and U.S. customary units of length include:
thou or mil (1/1000 of an inch)
inch (25.4 mm)
foot (12 inches, 0.3048 m)
yard (3 feet, 0.9144 m)
(terrestrial) mile (5,280 feet, or 1,760 yards; 1,609.344 m)
(land) league
Marine
In addition, the following are used by sailors:
fathom (for depth; only in non-metric countries) (2 yards = 1.8288 m)
nautical mile (one minute of arc of latitude = 1,852 m)
Aviation
Aviators use feet for altitude worldwide (except in Russia and China) and nautical miles for distance.
Surveying
Surveyors in the United States continue to use:
chain (22 yards, or 20.1168 m)
rod (also called pole or perch) (quarter of a chain, 5.5 yards, or 5.0292 m)
Australian building trades
The Australian building trades adopted the metric system in 1966 and the units used for measurement of length are meters (m) and millimeters (mm). Centimeters (cm) are avoided as they cause confusion when reading plans. For example, the length two and a half meters is usually recorded as 2500 mm or 2.5 m; it would be considered non-standard to record this length as 250 cm.
Surveyor's trade
American surveyors use a decimal-based system of measurement devised by Edmund Gunter in 1620. The base unit is Gunter's chain of 66 feet (22 yards), which is subdivided into 4 rods of 16.5 ft each, or into 100 links of 0.66 feet each. A link is abbreviated "lk", and links "lks", in old deeds and land surveys done for the government.
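A short sketch of the decimal arithmetic described above: a distance recorded in chains and links converts to feet and meters using the 66-foot chain of 100 links.

```python
# Gunter's chain arithmetic: 1 chain = 66 ft = 4 rods = 100 links of 0.66 ft.
FEET_PER_CHAIN = 66.0
LINKS_PER_CHAIN = 100
FEET_PER_LINK = FEET_PER_CHAIN / LINKS_PER_CHAIN  # 0.66 ft
METERS_PER_FOOT = 0.3048

def chains_links_to_feet(chains: int, links: float) -> float:
    """Convert a surveyed distance of whole chains plus links into feet."""
    return chains * FEET_PER_CHAIN + links * FEET_PER_LINK

d_ft = chains_links_to_feet(3, 25)     # "3 ch 25 lk" in an old deed
print(d_ft)                            # 214.5 ft
print(d_ft * METERS_PER_FOOT)          # ~65.4 m
```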
Science
Astronomy
Astronomical measure uses:
Earth radius ≈ 6,371 km
Lunar distance LD ≈ 384,400 km. Average distance between the center of Earth and the center of the Moon.
astronomical unit au. Defined as exactly 149,597,870,700 m. Approximately the distance between the Earth and Sun.
light-year ly ≈ 9.4607 × 10^12 km. The distance that light travels in a vacuum in one Julian year.
parsec pc ≈ 3.26 light-years, or about 3.0857 × 10^13 km
Hubble length 14.4 billion light-years or 4.55 gigaparsecs
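A minimal sketch converting between the astronomical units listed above (rounded standard values; the exact constants are assumptions of this example rather than quotations from the list).

```python
# Conversions between common astronomical length units (rounded values).
M_PER_AU = 1.495978707e11       # astronomical unit, IAU 2012 definition
M_PER_LY = 9.4607304725808e15   # light-year (Julian year of 365.25 days)
M_PER_PC = 3.0857e16            # parsec ~ 3.26 light-years

def au_to_ly(au: float) -> float:
    return au * M_PER_AU / M_PER_LY

def pc_to_ly(pc: float) -> float:
    return pc * M_PER_PC / M_PER_LY

print(au_to_ly(63241))   # ~1.0: about 63,241 au in a light-year
print(pc_to_ly(1))       # ~3.26 light-years per parsec
```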
Physics
In atomic physics, sub-atomic physics, and cosmology, the preferred unit of length is often related to a chosen fundamental physical constant, or combination thereof. This is often a characteristic radius or wavelength of a particle. Some common natural units of length are included in this table:
Archaic
Archaic units of distance include:
cana
cubit
rope
league
li (China)
pace (the "double pace" of about 5 feet used in Ancient Rome)
verst (Russia)
Informal
In everyday conversation, and in informal literature, it is common to see lengths measured in units of objects of which everyone knows the approximate width. Common examples are:
Double-decker bus (9.5–11 meters in length)
American football field (100 yards in length)
Thickness of a human hair (around 80 micrometers)
Other
Horse racing and other equestrian activities keep alive:
furlong = 220 yards (201.168 m)
horse length ≈ 8 feet (2.4 m)
See also
List of examples of lengths
Medieval weights and measures
Orders of magnitude (length)
System of measurement
Units of measurement
References
Further reading | Unit of length | Mathematics | 901 |
18,026,038 | https://en.wikipedia.org/wiki/Bismuth%28III%29%20iodide | Bismuth(III) iodide is the inorganic compound with the formula BiI3. This gray-black salt is the product of the reaction of bismuth and iodine, which once was of interest in qualitative inorganic analysis.
Bismuth(III) iodide adopts a distinctive crystal structure, with iodide centres occupying a hexagonally closest-packed lattice and bismuth centres occupying two-thirds of the octahedral holes in alternate layers (and none in the intervening layers), so that overall one third of the octahedral holes are occupied.
Synthesis
Bismuth(III) iodide forms upon heating an intimate mixture of iodine and bismuth powder:
2Bi + 3I2 → 2BiI3
BiI3 can also be made by the reaction of bismuth oxide with aqueous hydroiodic acid:
Bi2O3(s) + 6HI(aq) → 2BiI3(s) + 3H2O(l)
Reactions
Since bismuth(III) iodide is insoluble in water, an aqueous solution can be tested for the presence of Bi3+ ions by adding a source of iodide such as potassium iodide. A black precipitate of bismuth(III) iodide indicates a positive test.
Bismuth(III) iodide forms iodobismuth(III) anions when heated with halide donors:
2 NaI + BiI3 → Na2[BiI5]
Bismuth(III) iodide catalyzes the Mukaiyama aldol reaction. Bi(III) is also used in a Barbier type allylation of carbonyl compounds in combination with a reducing agent such as zinc or magnesium.
References
Bismuth iodide
Iodides
Metal halides | Bismuth(III) iodide | Chemistry | 393 |
28,777,526 | https://en.wikipedia.org/wiki/Synizesis%20%28biology%29 | Synizesis refers to a phenomenon sometimes observed in one of the subphases of meiosis. This phenomenon, sometimes referred to as a "synizetic knot", and contrasted with the chromosome "bouquet" more typically observed, is characterized by the localization of the meiotic chromosomes in a tight clump on one side of the nucleus. The term synizesis seems to have been coined by Clarence Erwin McClung in 1905.
The synizetic knot (synizesis) was later found to be a technical artifact induced by the strongly acidic fixatives used at that time (e.g., Flemming's strong fixative), which precipitated the delicate, thread-like chromosomes of the leptotene stage of the first meiotic prophase into a darkly staining knot.
References
Cellular processes | Synizesis (biology) | Biology | 173 |
2,053,015 | https://en.wikipedia.org/wiki/Manganate | In inorganic nomenclature, a manganate is any negatively charged molecular entity with manganese as the central atom. However, the name is usually used to refer to the tetraoxidomanganate(2−) anion, MnO₄²⁻, also known as manganate(VI) because it contains manganese in the +6 oxidation state. Manganates are the only known manganese(VI) compounds.
Other manganates include hypomanganate or manganate(V), MnO₄³⁻, permanganate or manganate(VII), MnO₄⁻, and the dimanganate or dimanganate(III), Mn₂O₆⁶⁻.
A manganate(IV) anion has been prepared by radiolysis of dilute solutions of permanganate. It is mononuclear in dilute solution, and shows a strong absorption in the ultraviolet and a weaker absorption at 650 nm.
Structure
The manganate(VI) ion is tetrahedral, similar to sulfate or chromate: indeed, manganates are often isostructural with sulfates and chromates, a fact first noted by Eilhard Mitscherlich in 1831. The manganese–oxygen distance is 165.9 pm, about 3 pm longer than in permanganate. As a d1 ion, it is paramagnetic, but any Jahn–Teller distortion is too small to be detected by X-ray crystallography. Manganates are dark green in colour, with a visible absorption maximum of λmax = 606 nm (ε = ). The Raman spectrum has also been reported.
Preparation
Sodium and potassium manganates are usually prepared in the laboratory by stirring the equivalent permanganate in a concentrated solution (5–10 M) of the hydroxide for 24 hours or with heating.
4 MnO₄⁻ + 4 OH⁻ → 4 MnO₄²⁻ + 2 H₂O + O₂
Potassium manganate is prepared industrially, as an intermediate to potassium permanganate, by dissolving manganese dioxide in molten potassium hydroxide with potassium nitrate or air as the oxidizing agent.
2 MnO₂ + 4 KOH + O₂ → 2 K₂MnO₄ + 2 H₂O
Disproportionation
Manganates are unstable towards disproportionation in all but the most alkaline of aqueous solutions. The ultimate products are permanganate and manganese dioxide, but the kinetics are complex and the mechanism may involve protonated and/or manganese(V) species.
Uses
Manganates, particularly the insoluble barium manganate, BaMnO4, have been used as oxidizing agents in organic synthesis: they will oxidize primary alcohols to aldehydes and then to carboxylic acids, and secondary alcohols to ketones. Barium manganate has also been used to oxidize hydrazones to diazo compounds.
Related compounds
Manganate is formally the conjugate base of hypothetical manganic acid, H₂MnO₄, which cannot be formed because of its rapid disproportionation. However, its second acid dissociation constant has been estimated by pulse radiolysis techniques:
HMnO₄⁻ ⇌ MnO₄²⁻ + H⁺   pKa = 7.4 ± 0.1
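As a worked illustration of what the quoted pKa implies (a generic acid–base calculation, not something taken from the sources above), the Henderson–Hasselbalch relation gives the fraction of the manganese(VI) species present as MnO₄²⁻ rather than HMnO₄⁻ at a given pH.

```python
# Fraction of manganate present as MnO4(2-) rather than HMnO4(-) at a given pH,
# from the Henderson-Hasselbalch relation with pKa = 7.4 (quoted above).
PKA = 7.4

def fraction_deprotonated(ph: float, pka: float = PKA) -> float:
    """[A-] / ([HA] + [A-]) for a monoprotic equilibrium with the given pKa."""
    ratio = 10 ** (ph - pka)        # [A-]/[HA]
    return ratio / (1.0 + ratio)

for ph in (6.0, 7.4, 9.0, 12.0):
    print(ph, round(fraction_deprotonated(ph), 3))
# At pH 7.4 the two forms are equal; in strongly alkaline solution the
# MnO4(2-) form dominates, consistent with manganate being stable only
# in very alkaline aqueous solutions.
```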
Manganites
The name "manganite" is used for compounds formerly believed to contain a discrete oxyanion with manganese in the +3 oxidation state. However, most of these "manganites" do not contain discrete oxyanions, but are mixed oxides with perovskite (LaMn(III)O₃, CaMn(IV)O₃), spinel (LiMn₂O₄) or sodium chloride (LiMn(III)O₂, NaMn(III)O₂) structures.
One exception is potassium dimanganate(III), K₆Mn₂O₆, which contains discrete Mn₂O₆⁶⁻ anions.
References
Transition metal oxyanions
Oxometallates | Manganate | Chemistry | 776 |
78,594,273 | https://en.wikipedia.org/wiki/Mortgage%20button | The "mortgage button" or "amity button" was a small ornamental inlay often featured on newel posts of a main staircase in the 19th and early 20th centuries, particularly in American and European homes. It was used to hide joinery.
The name comes from the historical misconception that they represented a homeowner who had paid off their mortgage. According to tradition, the homeowner would arrange to have a button made of ivory set onto the newel post when the house was paid off. Another version is that a scrimshaw maker would engrave the date the loan was paid off onto a piece of ivory, which was then inserted into the newel.
One popular myth was that the decorative cap was concealing a deed to the house, or a mortgage document, which had been rolled up and hidden inside the newel post. According to writer Mary Miley Theobald, no such documents have ever been found, although house plans were found inside the newel post on one occasion.
Others have suggested that the ivory button on the newel post was a symbol of cooperation or brotherly love.
References
Stairways
Architectural elements
Stairs | Mortgage button | Technology,Engineering | 232 |
35,625,619 | https://en.wikipedia.org/wiki/List%20of%20RISC%20OS%20filetypes | This is a sub-article to RISC OS.
RISC OS filetypes use metadata to distinguish file formats. Some common file formats from other systems are mapped to filetypes by the MimeMap module. Such mapping was previously handled by DosMap.
The MimeMap module maps filetypes to and from MIME content types, dotted filename extensions and Apple's Uniform Type Identifiers.
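A toy illustration of the kind of mapping MimeMap performs. The specific filetype numbers and MIME pairings below are commonly cited examples given here as assumptions, not an excerpt from the module's actual tables.

```python
# Illustrative RISC OS filetype <-> MIME type mapping (example values only;
# the real MimeMap module reads its table from a system resource file).
FILETYPE_TO_MIME = {
    0xFFF: ("Text", "text/plain", ".txt"),
    0xFAF: ("HTML", "text/html", ".html"),
    0xC85: ("JPEG", "image/jpeg", ".jpg"),
}

def mime_for_filetype(filetype: int) -> str:
    """Return the MIME content type for a 12-bit RISC OS filetype number."""
    return FILETYPE_TO_MIME[filetype][1]

print(hex(0xFFF), mime_for_filetype(0xFFF))   # 0xfff text/plain
```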
Requests for new filetype allocations for all versions are handled centrally by RISC OS Open.
RISC OS filetypes
Filetypes were originally classified by Acorn into distinct ranges:
User
This range of filetypes was intended for personal use in closed systems, not for general distribution. Nevertheless, many programs using these types were distributed, especially as Public-domain software. Consequently there are many clashes.
Non-commercial software
Commercial software
Acorn reserved
Generic data
References
External links
File Types Programmer's Reference Manuals at RISC OS Open wiki
RISC OS filetypes
RISC OS | List of RISC OS filetypes | Technology | 206 |
54,025,987 | https://en.wikipedia.org/wiki/Succinimidyl%204-%28N-maleimidomethyl%29cyclohexane-1-carboxylate | Succinimidyl 4-(N-maleimidomethyl)cyclohexane-1-carboxylate (SMCC) is a heterobifunctional amine-to-sulfhydryl crosslinker, which contains two reactive groups at opposite ends: an N-hydroxysuccinimide ester and a maleimide, reactive with amines and thiols respectively. SMCC is often used in bioconjugation to link proteins with other functional entities (fluorescent dyes, tracers, nanoparticles, cytotoxic agents). For example, the targeted anticancer agent trastuzumab emtansine, an antibody-drug conjugate in which the antibody trastuzumab is chemically linked to the highly potent drug DM1, is prepared using the SMCC reagent.
References
Reagents for biochemistry
Maleimides
Succinimides | Succinimidyl 4-(N-maleimidomethyl)cyclohexane-1-carboxylate | Chemistry,Biology | 201 |
62,528,566 | https://en.wikipedia.org/wiki/Epichlo%C3%AB%20baconii | Epichloë baconii is a haploid sexual species in the fungal genus Epichloë.
A systemic grass symbiont first described in 1993, Epichloë baconii is a sister lineage to Epichloë stromatolonga.
Epichloë baconii is found in Europe, where it has been identified in many species of grasses, including Agrostis capillaris, Agrostis stolonifera, Calamagrostis villosa, Calamagrostis varia and Calamagrostis purpurea.
References
baconii
Fungi described in 1993
Fungi of Europe
Fungus species | Epichloë baconii | Biology | 129 |
21,188,237 | https://en.wikipedia.org/wiki/Hybrid%20mass%20spectrometer | A hybrid mass spectrometer is a device for tandem mass spectrometry that consists of a combination of two or more m/z separation devices of different types.
Notation
The different m/z separation elements of a hybrid mass spectrometer can be represented by a shorthand notation. The symbol Q represents a quadrupole mass analyzer, q is a radio frequency collision quadrupole, TOF is a time-of-flight mass spectrometer, B is a magnetic sector and E is an electric sector.
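The shorthand can be expanded mechanically; the sketch below maps each symbol defined above to its analyzer and expands a configuration string such as QqTOF or BEqQ. The parsing approach itself is an illustrative assumption.

```python
# Expand hybrid mass spectrometer shorthand (Q, q, TOF, B, E) into words.
SYMBOLS = {
    "TOF": "time-of-flight mass analyzer",
    "Q": "quadrupole mass analyzer",
    "q": "radio frequency collision quadrupole",
    "B": "magnetic sector",
    "E": "electric sector",
}

def expand(notation: str) -> list[str]:
    """Greedily tokenize a configuration string, longest symbols first."""
    parts, i = [], 0
    tokens = sorted(SYMBOLS, key=len, reverse=True)
    while i < len(notation):
        for tok in tokens:
            if notation.startswith(tok, i):
                parts.append(SYMBOLS[tok])
                i += len(tok)
                break
        else:
            raise ValueError(f"unknown symbol at position {i} in {notation!r}")
    return parts

print(expand("QqTOF"))  # quadrupole, collision quadrupole, time-of-flight
print(expand("BEqQ"))   # magnetic sector, electric sector, collision quad, quadrupole
```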
Sector quadrupole
A sector instrument can be combined with a collision quadrupole and a quadrupole mass analyzer to form a hybrid instrument. A BEqQ configuration, with a magnetic sector (B), electric sector (E), collision quadrupole (q) and m/z selection quadrupole (Q), has been constructed, and an instrument with two electric sectors (BEEQ) has also been described.
Quadrupole time-of-flight
A triple quadrupole mass spectrometer with the final quadrupole replaced by a time-of-flight device is known as a quadrupole time-of-flight instrument. Such an instrument can be represented as QqTOF.
Ion trap time-of-flight
In an ion trap instrument, ions are trapped in a quadrupole ion trap and then injected into the TOF. The trap can be 3-D or a linear trap.
Linear ion trap and Fourier transform mass analyzers
A linear ion trap combined with a Fourier transform ion cyclotron resonance or Orbitrap mass spectrometer is marketed by Thermo Scientific as the LTQ FT and LTQ Orbitrap, respectively.
References
Mass spectrometry
Tandem mass spectrometry | Hybrid mass spectrometer | Physics,Chemistry | 358 |
5,220,115 | https://en.wikipedia.org/wiki/Nomen%20oblitum | In zoological nomenclature, a nomen oblitum (plural: nomina oblita; Latin for "forgotten name") is a disused scientific name which has been declared to be obsolete (figuratively "forgotten") in favor of another "protected" name.
In its present meaning, the nomen oblitum came into being with the fourth edition (1999) of the International Code of Zoological Nomenclature. After 1 January 2000, a scientific name may be formally declared to be a nomen oblitum when it has been shown not to have been used as a valid name within the scientific community since 1899, and when it is either a senior synonym (there is also a more recent name which applies to the same taxon, and which is in common use) or a homonym (it is spelled the same as another name, which is in common use), and when the preferred junior synonym or homonym has been shown to be in wide use in 50 or more publications in the past few decades. Once a name has formally been declared to be a nomen oblitum, the now obsolete name is to be "forgotten". By the same act, the next available name must be declared to be protected under the title nomen protectum. Thereafter it takes precedence.
An example is the case of the scientific name for the leopard shark. Despite the name Mustelus felis being the senior synonym, an error in recording the dates of publication resulted in the widespread use of Triakis semifasciata as the leopard shark's scientific name. After this long-standing error was discovered, T. semifasciata was made the valid name (as a nomen protectum) and Mustelus felis was declared invalid (as a nomen oblitum).
Use in taxonomy
The designation nomen oblitum has been used relatively frequently to keep the priority of old, sometimes disused names, and, controversially, often without establishing that a name actually meets the criteria for the designation. Some taxonomists have regarded the failure to properly establish the nomen oblitum designation as a way to avoid doing taxonomic research or to retain a preferred name regardless of priority. When discussing the taxonomy of North American birds, Rea (1983) stated that "...Swainson's [older but disused] name must stand unless it can be demonstrated conclusively to be a nomen oblitum (a game some taxonomists play to avoid their supposed fundamental principle, priority)."
Banks and Browning (1995) responded directly to Rea's strict application of ICZN rules for determining nomina oblita, stating: "We believe that the fundamental obligation of taxonomists is to promote stability, and that the principle of priority is but one way in which this can be effected. We see no stability in resurrecting a name of uncertain basis that has been used in several different ways to replace a name that has been used uniformly for most of a century."
See also
Glossary of scientific naming
Nomen conservandum
Nomen dubium
Nomen novum
Nomen nudum
References
Taxonomy (biology)
Zoological nomenclature
Latin biological phrases | Nomen oblitum | Biology | 652 |
41,729,345 | https://en.wikipedia.org/wiki/Wikipedia%20administrators | On Wikipedia, trusted users may be appointed as administrators (also referred to as admins, sysops or janitors), following a successful request for adminship. Currently, there are 839 administrators on the English Wikipedia. Administrators have additional technical privileges compared with other editors, such as being able to protect and delete pages and being able to block users from editing pages.
On Wikipedia, becoming an administrator is often referred to as "being given [or taking up] the mop", a term which has also been used elsewhere. In 2006, The New York Times reported that administrators on Wikipedia, of whom there were then about 1,000, were "geographically diverse". In July 2012, it was widely reported that Wikipedia was "running out of administrators", because in 2005 and 2006, 40 to 50 people were often appointed administrators each month, but in the first half of 2012, only nine in total were appointed.
However, Jimmy Wales, Wikipedia's co-founder, denied that this was a crisis or that Wikipedia was running out of admins, saying, "The number of admins has been stable for about two years, there's really nothing going on." Wales had previously (in a message sent to the English Wikipedia mailing list on February 11, 2003) stated that being an admin is "not a big deal", and that "It's merely a technical matter that the powers given to sysops are not given out to everyone."
In his 2008 book Wikipedia: The Missing Manual, John Broughton states that while many people think of administrators on Wikipedia as judges, that is not the purpose of the role. Instead, he says, admins usually "delete pages" and "protect pages involved in edit wars". Wikipedia administrators are not employees or agents of the Wikimedia Foundation.
Requests for adminship
While the first Wikipedia administrators were appointed by Jimmy Wales in October 2001, administrator privileges on Wikipedia are now granted through a process known as requests for adminship (RfA). Any registered editor may nominate themselves, or may request another editor to do so. The process has been said to be "akin to putting someone through the Supreme Court" by Andrew Lih, a scientist and professor who is himself an administrator on the English Wikipedia. Lih also said, "It's pretty much a hazing ritual at this point", in contrast to how the process worked early in Wikipedia's history, when all one had to do to become an admin was "prove you weren't a bozo".
Candidacy for the role is normally considered only after "extensive work on the wiki". While any editor may vote in an RfA, the outcome is not determined by a majority vote, but rather by whether consensus has been reached that the candidate would make a good administrator, a decision that can only be made by a bureaucrat, a Wikipedia editor who is also appointed by the community through a "request" process, though that process is much stricter than the one for administrators. This may have been implemented as a result of RfAs attracting increasing levels of attention: Stvilia et al. observed that "Prior to mid-2005, RfAs typically did not attract much attention. Since then, it has become quite common for RfAs to attract huge numbers of RfA groupies who all support one another". The record number of votes in one RfA as of May 2022 was 468, for the RfA of longtime editor Tamzin Hadasa Kelly, which was supported by 340 users and opposed by 116 amidst controversy over criticism of supporters of Donald Trump.
Role
Once granted administrator privileges, a user has access to additional functions in order to perform certain duties. These include "messy cleanup work", deletion of articles deemed unsuitable, protecting pages (restricting editing privileges to that page), and blocking the accounts of disruptive users. Blocking a user must be done according to Wikipedia's policies and a reason must be stated for the block, which will be permanently logged by the software. Use of this privilege to "gain editing advantages" is considered inappropriate.
Scientific studies
A 2013 scientific paper by researchers from Virginia Tech and Rensselaer Polytechnic Institute found that after editors are promoted to administrator status, they often focus more on articles about controversial topics than they did before. The researchers also proposed an alternative method for choosing administrators, in which more weight is given to the votes of experienced editors. This corresponds to a modality of plural voting. Another paper, presented at the 2008 Conference on Human Factors in Computing Systems, analyzed data from all 1,551 requests for adminship from January 2006 to October 2007, with the goal of determining which (if any) of the criteria recommended in Wikipedia's Guide to requests for adminship were the best predictors of whether the user in question would actually become an admin. In December 2013, a similar study was published by researchers from the Polish-Japanese Institute of Information Technology in Warsaw, which aimed to model the results of requests for adminship on the Polish Wikipedia using a model derived from Wikipedia's edit history. They found that they could "classify the votes in the RfA procedures using this model with an accuracy level that should be sufficient to recommend candidates."
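The weighting scheme used in the cited study is not described here, so the following is only a generic sketch of plural (weighted) voting applied to an RfA-style tally; the log-of-edit-count weight and the function name are assumptions made for illustration, not the method proposed by the researchers.

```python
import math

# Generic plural-voting tally: weight each support/oppose vote by a measure of
# the voter's experience. The log-of-edit-count weight is an illustrative
# assumption, not the scheme used in the cited study.
def weighted_support(votes):
    """votes: iterable of (supports: bool, edit_count: int) pairs.
    Returns the weighted support fraction in [0, 1]."""
    support = oppose = 0.0
    for supports, edit_count in votes:
        weight = math.log10(edit_count + 10)   # more edits -> more weight
        if supports:
            support += weight
        else:
            oppose += weight
    return support / (support + oppose)

votes = [(True, 50_000), (True, 200), (False, 30_000), (True, 15)]
print(round(weighted_support(votes), 2))   # ~0.65 with these example voters
```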
Notes
References
System administration
Administrators | Wikipedia administrators | Technology | 1,082 |
38,743,470 | https://en.wikipedia.org/wiki/Tank%20connector | Tank connectors are a type of tank fitting also known as tank inlets, tank outlets, or tank nipples. The fitting must be leak-proof, since the water supply into and out of the tank depends on it. Many different varieties of tank connectors exist.
Tank connectors are widely made of plastic (PVC) or brass. They have a flange either on the edge of one side or in the center. They are supplemented with a rubber washer or a plastic washer with one or two hexagonal flange nuts to tighten the connector to the tank wall. Those with two nuts usually require some silicone or other sealant to prevent fluid passing along the threads.
The size of a connector varies from 1/2 to 4 in.
Plumbing | Tank connector | Engineering | 155 |
36,130,784 | https://en.wikipedia.org/wiki/Thin%20set%20%28analysis%29 | In mathematical analysis, a thin set is a subset of n-dimensional complex space Cn with the property that each point has a neighbourhood on which some non-zero holomorphic function vanishes. Since the set on which a holomorphic function vanishes is closed and has empty interior (by the Identity theorem), a thin set is nowhere dense, and the closure of a thin set is also thin.
The fine topology was introduced in 1940 by Henri Cartan to aid in the study of thin sets.
References
Several complex variables | Thin set (analysis) | Mathematics | 110 |
19,904 | https://en.wikipedia.org/wiki/Meteorology | Meteorology is a branch of the atmospheric sciences (which include atmospheric chemistry and physics) with a major focus on weather forecasting. The study of meteorology dates back millennia, though significant progress in meteorology did not begin until the 18th century. The 19th century saw modest progress in the field after weather observation networks were formed across broad regions. Prior attempts at prediction of weather depended on historical data. It was not until the laws of physics had been elucidated and, more particularly, the computer had been developed in the latter half of the 20th century (allowing for the automated solution of a great many modelling equations) that significant breakthroughs in weather forecasting were achieved. An important branch of weather forecasting is marine weather forecasting as it relates to maritime and coastal safety, in which weather effects also include atmospheric interactions with large bodies of water.
Meteorological phenomena are observable weather events that are explained by the science of meteorology. Meteorological phenomena are described and quantified by the variables of Earth's atmosphere: temperature, air pressure, water vapour, mass flow, and the variations and interactions of these variables, and how they change over time. Different spatial scales are used to describe and predict weather on local, regional, and global levels.
Meteorology, climatology, atmospheric physics, and atmospheric chemistry are sub-disciplines of the atmospheric sciences. Meteorology and hydrology compose the interdisciplinary field of hydrometeorology. The interactions between Earth's atmosphere and its oceans are part of a coupled ocean-atmosphere system. Meteorology has application in many diverse fields such as the military, energy production, transport, agriculture, and construction.
The word meteorology is from the Ancient Greek μετέωρος metéōros (meteor) and -λογία -logia (-(o)logy), meaning "the study of things high in the air".
History
Ancient meteorology up to the time of Aristotle
Early attempts at predicting weather were often related to prophecy and divining, and were sometimes based on astrological ideas. Ancient religions believed meteorological phenomena to be under the control of the gods. The ability to predict rains and floods based on annual cycles was evidently used by humans at least from the time of agricultural settlement if not earlier. Early approaches to predicting weather were based on astrology and were practiced by priests. The Egyptians had rain-making rituals as early as 3500 BC.
Ancient Indian Upanishads contain mentions of clouds and seasons. The Samaveda mentions sacrifices to be performed when certain phenomena were noticed. Varāhamihira's classical work Brihatsamhita, written about 500 AD, provides evidence of weather observation.
Cuneiform inscriptions on Babylonian tablets included associations between thunder and rain. The Chaldeans differentiated the 22° and 46° halos.
The ancient Greeks were the first to make theories about the weather. Many natural philosophers studied the weather. However, as meteorological instruments did not exist, the inquiry was largely qualitative, and could only be judged by more general theoretical speculations. Herodotus states that Thales predicted the solar eclipse of 585 BC. He studied Babylonian equinox tables. According to Seneca, he explained the Nile's annual floods as being caused by northerly winds hindering its descent to the sea. Anaximander and Anaximenes thought that thunder and lightning were caused by air smashing against the cloud, thus kindling the flame. Early meteorological theories generally considered that there was a fire-like substance in the atmosphere. Anaximander defined wind as a flowing of air, but this was not generally accepted for centuries. A theory to explain summer hail was first proposed by Anaxagoras. He observed that air temperature decreased with increasing height and that clouds contain moisture. He also noted that heat caused objects to rise, and therefore the heat on a summer day would drive clouds to an altitude where the moisture would freeze. Empedocles theorized on the change of the seasons. He believed that fire and water opposed each other in the atmosphere, and when fire gained the upper hand, the result was summer, and when water did, it was winter. Democritus also wrote about the flooding of the Nile. He said that during the summer solstice, snow in northern parts of the world melted. This would cause vapors to form clouds, which would cause storms when driven to the Nile by northerly winds, thus filling the lakes and the Nile. Hippocrates inquired into the effect of weather on health. Eudoxus claimed that bad weather followed four-year periods, according to Pliny.
Aristotelian meteorology
These early observations would form the basis for Aristotle's Meteorology, written in 350 BC. Aristotle is considered the founder of meteorology. One of the most impressive achievements described in the Meteorology is the description of what is now known as the hydrologic cycle. His work would remain an authority on meteorology for nearly 2,000 years.
The book De Mundo (composed before 250 BC or between 350 and 200 BC) noted:
If the flashing body is set on fire and rushes violently to the Earth it is called a thunderbolt; if it is only half of fire, but violent also and massive, it is called a meteor; if it is entirely free from fire, it is called a smoking bolt. They are all called 'swooping bolts' because they swoop down upon the Earth. Lightning is sometimes smoky and is then called 'smoldering lightning'; sometimes it darts quickly along and is then said to be vivid. At other times, it travels in crooked lines, and is called forked lightning. When it swoops down upon some object it is called 'swooping lightning'.
After Aristotle, progress in meteorology stalled for a long time. Theophrastus compiled a book on weather forecasting, called the Book of Signs, as well as On Winds. He gave hundreds of signs for weather phenomena for a period up to a year. His system was based on dividing the year into halves by the setting and the rising of the Pleiades, subdividing those halves at the solstices and equinoxes, and on the continuity of the weather over those periods. He also divided months into the new moon, fourth day, eighth day and full moon, according to the likelihood of a change in the weather occurring. The day was divided into sunrise, mid-morning, noon, mid-afternoon and sunset, with corresponding divisions of the night, with change being likely at one of these divisions. Applying these divisions and a principle of balance in the yearly weather, he came up with forecasts such as: if a lot of rain falls in the winter, the spring is usually dry. Rules based on the actions of animals are also present in his work, such as: if a dog rolls on the ground, it is a sign of a storm. Shooting stars and the Moon were also considered significant. However, he made no attempt to explain these phenomena, referring only to the Aristotelian method. The work of Theophrastus remained a dominant influence in weather forecasting for nearly 2,000 years.
Meteorology after Aristotle
Meteorology continued to be studied and developed over the centuries, but it was not until the Renaissance in the 14th to 17th centuries that significant advancements were made in the field. Scientists such as Galileo and Descartes introduced new methods and ideas, leading to the scientific revolution in meteorology.
Speculation on the cause of the flooding of the Nile ended when Eratosthenes, according to Proclus, stated that it was known that man had gone to the sources of the Nile and observed the rains, although interest in its implications continued.
During the era of Roman Greece and Europe, scientific interest in meteorology waned. In the 1st century BC, most natural philosophers claimed that the clouds and winds extended up to 111 miles, but Posidonius thought that they reached up to five miles, after which the air is clear, liquid and luminous. He closely followed Aristotle's theories. By the end of the second century BC, the center of science shifted from Athens to Alexandria, home to the ancient Library of Alexandria. In the 2nd century AD, Ptolemy's Almagest dealt with meteorology, because it was considered a subset of astronomy. He gave several astrological weather predictions. He constructed a map of the world divided into climatic zones by their illumination, in which the length of the longest day (at the summer solstice) increased by half an hour per zone between the equator and the Arctic. Ptolemy wrote on the atmospheric refraction of light in the context of astronomical observations.
In 25 AD, Pomponius Mela, a Roman geographer, formalized the climatic zone system. In 63–64 AD, Seneca wrote Naturales quaestiones. It was a compilation and synthesis of ancient Greek theories. However, theology was of foremost importance to Seneca, and he believed that phenomena such as lightning were tied to fate. The second book (chapter) of Pliny's Natural History covers meteorology. He states that more than twenty ancient Greek authors studied meteorology. He did not make any personal contributions, and the value of his work is in preserving earlier speculation, much like Seneca's work.
From 400 to 1100, scientific learning in Europe was preserved by the clergy. Isidore of Seville devoted a considerable attention to meteorology in Etymologiae, De ordine creaturum and De natura rerum. Bede the Venerable was the first Englishman to write about the weather in De Natura Rerum in 703. The work was a summary of then extant classical sources. However, Aristotle's works were largely lost until the twelfth century, including Meteorologica. Isidore and Bede were scientifically minded, but they adhered to the letter of Scripture.
Islamic civilization translated many ancient works into Arabic; these were later transmitted to western Europe and translated into Latin.
In the 9th century, Al-Dinawari wrote the Kitab al-Nabat (Book of Plants), in which he deals with the application of meteorology to agriculture during the Arab Agricultural Revolution. He describes the meteorological character of the sky, the planets and constellations, the sun and moon, the lunar phases indicating seasons and rain, the anwa (heavenly bodies of rain), and atmospheric phenomena such as winds, thunder, lightning, snow, floods, valleys, rivers, lakes.
In 1021, Alhazen showed that atmospheric refraction is also responsible for twilight in Opticae thesaurus; he estimated that twilight begins when the sun is 19 degrees below the horizon, and also used a geometric determination based on this to estimate the maximum possible height of the Earth's atmosphere as 52,000 passus (Roman paces; about 49 miles, or 79 km).
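A worked reconstruction of the geometric argument (one textbook version of it, not necessarily Alhazen's own construction): if twilight ends when the Sun is a depression angle d below the horizon, the highest still-sunlit air visible on the observer's horizon lies at height h = R(sec(d/2) − 1) above the surface. With a modern Earth radius this comes out near 89 km; Alhazen's smaller figure of about 79 km reflects the Earth radius value he used.

```python
import math

# Estimate the height of the atmosphere from the twilight depression angle,
# using the classical geometry h = R * (sec(d/2) - 1). The Earth radius used
# here is the modern mean value; Alhazen's own figure differed, which is why
# his result (about 79 km) is lower than the number computed below.
EARTH_RADIUS_KM = 6371.0

def atmosphere_height_km(depression_deg: float,
                         radius_km: float = EARTH_RADIUS_KM) -> float:
    half = math.radians(depression_deg / 2.0)
    return radius_km * (1.0 / math.cos(half) - 1.0)

print(round(atmosphere_height_km(19.0), 1))   # ~88.6 km with R = 6371 km
```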
Adelard of Bath was one of the early translators of the classics. He also discussed meteorological topics in his Quaestiones naturales. He thought dense air produced propulsion in the form of wind. He explained thunder by saying that it was due to ice colliding in clouds, and in Summer it melted. In the thirteenth century, Aristotelian theories reestablished dominance in meteorology. For the next four centuries, meteorological work by and large was mostly commentary. It has been estimated over 156 commentaries on the Meteorologica were written before 1650.
Experimental evidence was less important than appeal to the classics and authority in medieval thought. In the thirteenth century, Roger Bacon advocated experimentation and the mathematical approach. In his Opus majus, he followed Aristotle's theory on the atmosphere being composed of water, air, and fire, supplemented by optics and geometric proofs. He noted that Ptolemy's climatic zones had to be adjusted for topography.
St. Albert the Great was the first to propose that each drop of falling rain had the form of a small sphere, and that this form meant that the rainbow was produced by light interacting with each raindrop. Roger Bacon was the first to calculate the angular size of the rainbow. He stated that a rainbow summit cannot appear higher than 42 degrees above the horizon.
In the late 13th century and early 14th century, Kamāl al-Dīn al-Fārisī and Theodoric of Freiberg were the first to give the correct explanations for the primary rainbow phenomenon. Theoderic went further and also explained the secondary rainbow.
By the middle of the sixteenth century, meteorology had developed along two lines: theoretical science based on Meteorologica, and astrological weather forecasting. The pseudoscientific prediction by natural signs became popular and enjoyed protection of the church and princes. This was supported by scientists like Johannes Muller, Leonard Digges, and Johannes Kepler. However, there were skeptics. In the 14th century, Nicole Oresme believed that weather forecasting was possible, but that the rules for it were unknown at the time. Astrological influence in meteorology persisted until the eighteenth century.
Gerolamo Cardano's De Subtilitate (1550) was the first work to challenge fundamental aspects of Aristotelian theory. Cardano maintained that there were only three basic elements: earth, air, and water. He discounted fire because it needed material to spread and produced nothing. Cardano thought there were two kinds of air: free air and enclosed air. The former destroyed inanimate things and preserved animate things, while the latter had the opposite effect.
Rene Descartes's Discourse on the Method (1637) typifies the beginning of the scientific revolution in meteorology. His scientific method had four principles: to never accept anything unless one clearly knew it to be true; to divide every difficult problem into small problems to tackle; to proceed from the simple to the complex, always seeking relationships; to be as complete and thorough as possible with no prejudice.
In the appendix Les Meteores, he applied these principles to meteorology. He discussed terrestrial bodies and vapors which arise from them, proceeding to explain the formation of clouds from drops of water, and winds, clouds then dissolving into rain, hail and snow. He also discussed the effects of light on the rainbow. Descartes hypothesized that all bodies were composed of small particles of different shapes and interwovenness. All of his theories were based on this hypothesis. He explained the rain as caused by clouds becoming too large for the air to hold, and that clouds became snow if the air was not warm enough to melt them, or hail if they met colder wind. Like his predecessors, Descartes's method was deductive, as meteorological instruments were not developed and extensively used yet. He introduced the Cartesian coordinate system to meteorology and stressed the importance of mathematics in natural science. His work established meteorology as a legitimate branch of physics.
In the 18th century, the invention of the thermometer and barometer allowed for more accurate measurements of temperature and pressure, leading to a better understanding of atmospheric processes. This century also saw the birth of the first meteorological society, the Societas Meteorologica Palatina in 1780.
In the 19th century, advances in technology such as the telegraph and photography led to the creation of weather observing networks and the ability to track storms. Additionally, scientists began to use mathematical models to make predictions about the weather. The 20th century saw the development of radar and satellite technology, which greatly improved the ability to observe and track weather systems. In addition, meteorologists and atmospheric scientists started to create the first weather forecasts and temperature predictions.
In the 20th and 21st centuries, with the advent of computer models and big data, meteorology has become increasingly dependent on numerical methods and computer simulations. This has greatly improved weather forecasting and climate predictions. Additionally, meteorology has expanded to include other areas such as air quality, atmospheric chemistry, and climatology. The advancement in observational, theoretical and computational technologies has enabled ever more accurate weather predictions and understanding of weather pattern and air pollution. In current time, with the advancement in weather forecasting and satellite technology, meteorology has become an integral part of everyday life, and is used for many purposes such as aviation, agriculture, and disaster management.
Instruments and classification scales
In 1441, King Sejong's son, Prince Munjong of Korea, invented the first standardized rain gauge. These were sent throughout the Joseon dynasty of Korea as an official tool to assess land taxes based upon a farmer's potential harvest. In 1450, Leone Battista Alberti developed a swinging-plate anemometer, which is considered the first anemometer. In 1607, Galileo Galilei constructed a thermoscope. In 1611, Johannes Kepler wrote the first scientific treatise on snow crystals: "Strena Seu de Nive Sexangula (A New Year's Gift of Hexagonal Snow)." In 1643, Evangelista Torricelli invented the mercury barometer. In 1662, Sir Christopher Wren invented the mechanical, self-emptying, tipping bucket rain gauge. In 1714, Gabriel Fahrenheit created a reliable scale for measuring temperature with a mercury-type thermometer. In 1742, Anders Celsius, a Swedish astronomer, proposed the "centigrade" temperature scale, the predecessor of the current Celsius scale. In 1783, the first hair hygrometer was demonstrated by Horace-Bénédict de Saussure. In 1802–1803, Luke Howard wrote On the Modifications of Clouds, in which he assigned cloud types Latin names. In 1806, Francis Beaufort introduced his system for classifying wind speeds. Near the end of the 19th century the first cloud atlases were published, including the International Cloud Atlas, which has remained in print ever since. The April 1960 launch of the first successful weather satellite, TIROS-1, marked the beginning of the age in which weather information became available globally.
Atmospheric composition research
In 1648, Blaise Pascal rediscovered that atmospheric pressure decreases with height, and deduced that there is a vacuum above the atmosphere. In 1738, Daniel Bernoulli published Hydrodynamics, initiating the kinetic theory of gases and establishing the basic laws for the theory of gases. In 1761, Joseph Black discovered that ice absorbs heat without changing its temperature when melting. In 1772, Black's student Daniel Rutherford discovered nitrogen, which he called phlogisticated air, and together they developed the phlogiston theory. In 1777, Antoine Lavoisier discovered oxygen and developed an explanation for combustion. In 1783, in his essay "Réflexions sur le phlogistique," Lavoisier deprecated the phlogiston theory and proposed a caloric theory. In 1804, John Leslie observed that a matte black surface radiates heat more effectively than a polished surface, suggesting the importance of black-body radiation. In 1808, John Dalton defended caloric theory in A New System of Chemistry and described how it combines with matter, especially gases; he proposed that the heat capacity of gases varies inversely with atomic weight. In 1824, Sadi Carnot analyzed the efficiency of steam engines using caloric theory; he developed the notion of a reversible process and, in postulating that no such thing exists in nature, laid the foundation for the second law of thermodynamics. In 1716, Edmund Halley suggested that aurorae are caused by "magnetic effluvia" moving along the Earth's magnetic field lines.
Research into cyclones and air flow
In 1494, Christopher Columbus experienced a tropical cyclone, which led to the first written European account of a hurricane. In 1686, Edmund Halley presented a systematic study of the trade winds and monsoons and identified solar heating as the cause of atmospheric motions. In 1735, George Hadley gave an idealized explanation of the global circulation through a study of the trade winds. In 1743, when Benjamin Franklin was prevented from seeing a lunar eclipse by a hurricane, he decided that cyclones move in a contrary manner to the winds at their periphery. At first, the kinematics of how exactly the rotation of the Earth affects airflow were only partially understood. Gaspard-Gustave Coriolis published a paper in 1835 on the energy yield of machines with rotating parts, such as waterwheels. In 1856, William Ferrel proposed the existence of a circulation cell in the mid-latitudes, with the air within it deflected by the Coriolis force, resulting in the prevailing westerly winds. Late in the 19th century, the motion of air masses along isobars was understood to be the result of the large-scale interaction of the pressure gradient force and the deflecting force. By 1912, this deflecting force was named the Coriolis effect. Just after World War I, a group of meteorologists in Norway led by Vilhelm Bjerknes developed the Norwegian cyclone model that explains the generation, intensification and ultimate decay (the life cycle) of mid-latitude cyclones, and introduced the idea of fronts, that is, sharply defined boundaries between air masses. The group included Carl-Gustaf Rossby (who was the first to explain the large scale atmospheric flow in terms of fluid dynamics), Tor Bergeron (who first determined how rain forms) and Jacob Bjerknes.
Observation networks and weather forecasting
In the late 16th century and first half of the 17th century a range of meteorological instruments were invented – the thermometer, barometer, hygrometer, as well as wind and rain gauges. In the 1650s natural philosophers started using these instruments to systematically record weather observations. Scientific academies established weather diaries and organised observational networks. In 1654, Ferdinando II de Medici established the first weather observing network, which consisted of meteorological stations in Florence, Cutigliano, Vallombrosa, Bologna, Parma, Milan, Innsbruck, Osnabrück, Paris and Warsaw. The collected data were sent to Florence at regular time intervals. In the 1660s Robert Hooke of the Royal Society of London sponsored networks of weather observers. Hippocrates' treatise Airs, Waters, and Places had linked weather to disease. Thus early meteorologists attempted to correlate weather patterns with epidemic outbreaks, and the climate with public health.
During the Age of Enlightenment meteorology tried to rationalise traditional weather lore, including astrological meteorology. But there were also attempts to establish a theoretical understanding of weather phenomena. Edmond Halley and George Hadley tried to explain trade winds. They reasoned that the rising mass of heated equator air is replaced by an inflow of cooler air from high latitudes. A flow of warm air at high altitude from equator to poles in turn established an early picture of circulation. Frustration with the lack of discipline among weather observers, and the poor quality of the instruments, led the early modern nation states to organise large observation networks. Thus, by the end of the 18th century, meteorologists had access to large quantities of reliable weather data. In 1832, an electromagnetic telegraph was created by Baron Schilling. The arrival of the electrical telegraph in 1837 afforded, for the first time, a practical method for quickly gathering surface weather observations from a wide area.
This data could be used to produce maps of the state of the atmosphere for a region near the Earth's surface and to study how these states evolved through time. To make frequent weather forecasts based on these data required a reliable network of observations, but it was not until 1849 that the Smithsonian Institution began to establish an observation network across the United States under the leadership of Joseph Henry. Similar observation networks were established in Europe at this time. The Reverend William Clement Ley was key in understanding of cirrus clouds and early understandings of Jet Streams. Charles Kenneth Mackinnon Douglas, known as 'CKM' Douglas read Ley's papers after his death and carried on the early study of weather systems.
Nineteenth century researchers in meteorology were drawn from military or medical backgrounds, rather than trained as dedicated scientists. In 1854, the United Kingdom government appointed Robert FitzRoy to the new office of Meteorological Statist to the Board of Trade with the task of gathering weather observations at sea. FitzRoy's office became the United Kingdom Meteorological Office in 1854, the second oldest national meteorological service in the world (the Central Institution for Meteorology and Geodynamics (ZAMG) in Austria was founded in 1851 and is the oldest weather service in the world). The first daily weather forecasts made by FitzRoy's Office were published in The Times newspaper in 1860. The following year a system was introduced of hoisting storm warning cones at principal ports when a gale was expected.
FitzRoy coined the term "weather forecast" and tried to separate scientific approaches from prophetic ones.
Over the next 50 years, many countries established national meteorological services. The India Meteorological Department (1875) was established to follow tropical cyclones and monsoons. The Finnish Meteorological Central Office (1881) was formed from part of the Magnetic Observatory of Helsinki University. Japan's Tokyo Meteorological Observatory, the forerunner of the Japan Meteorological Agency, began constructing surface weather maps in 1883. The United States Weather Bureau (1890) was established under the United States Department of Agriculture. The Australian Bureau of Meteorology (1906) was established by a Meteorology Act to unify existing state meteorological services.
Numerical weather prediction
In 1904, Norwegian scientist Vilhelm Bjerknes first argued in his paper Weather Forecasting as a Problem in Mechanics and Physics that it should be possible to forecast weather from calculations based upon natural laws.
It was not until later in the 20th century that advances in the understanding of atmospheric physics led to the foundation of modern numerical weather prediction. In 1922, Lewis Fry Richardson published "Weather Prediction By Numerical Process," after finding notes and derivations he had worked on as an ambulance driver in World War I. He described how small terms in the prognostic fluid dynamics equations that govern atmospheric flow could be neglected, and how a numerical calculation scheme could be devised to allow predictions. Richardson envisioned a large auditorium of thousands of people performing the calculations. However, the sheer number of calculations required was too large to complete without electronic computers, and the size of the grid and time steps used in the calculations led to unrealistic results; numerical analysis later showed that this was due to numerical instability.
Starting in the 1950s, numerical forecasts with computers became feasible. The first weather forecasts derived this way used barotropic (single-vertical-level) models, and could successfully predict the large-scale movement of midlatitude Rossby waves, that is, the pattern of atmospheric lows and highs. In 1959, the UK Meteorological Office received its first computer, a Ferranti Mercury.
In the 1960s, the chaotic nature of the atmosphere was first observed and mathematically described by Edward Lorenz, founding the field of chaos theory. These advances have led to the current use of ensemble forecasting in most major forecasting centers, to take into account the uncertainty arising from the chaotic nature of the atmosphere. Mathematical models used to predict the long-term climate of the Earth (climate models) have also been developed; their resolution today is comparable to that of the older weather prediction models. These climate models are used to investigate long-term climate shifts, such as what effects might be caused by human emission of greenhouse gases.
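The sensitivity to initial conditions that Lorenz described, and the ensemble approach used to cope with it, can be illustrated with a toy computation. The sketch below is only an illustration, not an operational forecast system: it integrates the Lorenz-63 convection model for a small ensemble of slightly perturbed initial states, and the spread among the members is the kind of uncertainty that ensemble forecasting is designed to quantify. The parameter values, perturbation size and step count are illustrative assumptions.

import numpy as np

def lorenz63_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz-63 toy convection model.
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def run_ensemble(n_members=10, n_steps=2000, dt=0.01, perturbation=1e-3):
    # Integrate an ensemble of slightly perturbed initial states.
    rng = np.random.default_rng(0)
    base = np.array([1.0, 1.0, 1.0])
    finals = []
    for _ in range(n_members):
        state = base + perturbation * rng.standard_normal(3)
        for _ in range(n_steps):
            state = lorenz63_step(state, dt)
        finals.append(state)
    return np.array(finals)

final_states = run_ensemble()
# The spread across members (standard deviation per variable) measures forecast uncertainty.
print("ensemble spread:", final_states.std(axis=0))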
Meteorologists
Meteorologists are scientists who study and work in the field of meteorology. The American Meteorological Society publishes and continually updates an authoritative electronic Meteorology Glossary. Meteorologists work in government agencies, private consulting and research services, industrial enterprises, utilities, radio and television stations, and in education. In the United States, meteorologists held about 10,000 jobs in 2018.
Although weather forecasts and warnings are the best known products of meteorologists for the public, weather presenters on radio and television are not necessarily professional meteorologists. They are most often reporters with little formal meteorological training, using unregulated titles such as weather specialist or weatherman. The American Meteorological Society and National Weather Association issue "Seals of Approval" to weather broadcasters who meet certain requirements but this is not mandatory to be hired by the media.
Equipment
Each science has its own unique sets of laboratory equipment. In the atmosphere, there are many qualities that can be measured. Rain, which can be observed or seen anywhere and anytime, was one of the first atmospheric qualities to be measured historically. Two other accurately measured qualities are wind and humidity. Neither of these can be seen but both can be felt. The devices to measure these three sprang up in the mid-15th century and were, respectively, the rain gauge, the anemometer, and the hygrometer. Many attempts had been made prior to the 15th century to construct adequate equipment to measure the many atmospheric variables, but most were faulty in some way or simply unreliable. Even Aristotle noted the difficulty of measuring the air in some of his work.
Sets of surface measurements are important data to meteorologists. They give a snapshot of a variety of weather conditions at one single location and are usually at a weather station, a ship or a weather buoy. The measurements taken at a weather station can include any number of atmospheric observables. Usually, temperature, pressure, wind measurements, and humidity are the variables that are measured by a thermometer, barometer, anemometer, and hygrometer, respectively. Professional stations may also include air quality sensors (carbon monoxide, carbon dioxide, methane, ozone, dust, and smoke), ceilometer (cloud ceiling), falling precipitation sensor, flood sensor, lightning sensor, microphone (explosions, sonic booms, thunder), pyranometer/pyrheliometer/spectroradiometer (IR/Vis/UV photodiodes), rain gauge/snow gauge, scintillation counter (background radiation, fallout, radon), seismometer (earthquakes and tremors), transmissometer (visibility), and a GPS clock for data logging. Upper air data are of crucial importance for weather forecasting. The most widely used technique is launches of radiosondes. Supplementing the radiosondes a network of aircraft collection is organized by the World Meteorological Organization.
Remote sensing, as used in meteorology, is the concept of collecting data from remote weather events and subsequently producing weather information. The common types of remote sensing are Radar, Lidar, and satellites (or photogrammetry). Each collects data about the atmosphere from a remote location and, usually, stores the data where the instrument is located. Radar and Lidar are not passive because both use EM radiation to illuminate a specific portion of the atmosphere. Weather satellites along with more general-purpose Earth-observing satellites circling the earth at various altitudes have become an indispensable tool for studying a wide range of phenomena from forest fires to El Niño.
Spatial scales
The study of the atmosphere can be divided into distinct areas that depend on both time and spatial scales. At one extreme of this scale is climatology. In the timescales of hours to days, meteorology separates into micro-, meso-, and synoptic scale meteorology. Respectively, the geospatial size of each of these three scales relates directly with the appropriate timescale.
Other subclassifications are used to describe the unique, local, or broad effects within those subclasses.
Microscale
Microscale meteorology is the study of atmospheric phenomena on a scale of about 1 km or less. Individual thunderstorms, clouds, and local turbulence caused by buildings and other obstacles (such as individual hills) are modeled on this scale. Misoscale meteorology is an informal subdivision.
Mesoscale
Mesoscale meteorology is the study of atmospheric phenomena that has horizontal scales ranging from 1 km to 1000 km and a vertical scale that starts at the Earth's surface and includes the atmospheric boundary layer, troposphere, tropopause, and the lower section of the stratosphere. Mesoscale timescales last from less than a day to multiple weeks. The events typically of interest are thunderstorms, squall lines, fronts, precipitation bands in tropical and extratropical cyclones, and topographically generated weather systems such as mountain waves and sea and land breezes.
Synoptic scale
Synoptic scale meteorology predicts atmospheric changes at scales up to 1000 km and 10^5 seconds (about 28 hours) in time and space. At the synoptic scale, the Coriolis acceleration acting on moving air masses (outside of the tropics) plays a dominant role in predictions. The phenomena typically described by synoptic meteorology include events such as extratropical cyclones, baroclinic troughs and ridges, frontal zones, and to some extent jet streams. All of these are typically given on weather maps for a specific time. The minimum horizontal scale of synoptic phenomena is limited to the spacing between surface observation stations.
Global scale
Global scale meteorology is the study of weather patterns related to the transport of heat from the tropics to the poles. Very large scale oscillations are of importance at this scale. These oscillations have time periods typically on the order of months, such as the Madden–Julian oscillation, or years, such as the El Niño–Southern Oscillation and the Pacific decadal oscillation. Global scale meteorology pushes into the range of climatology. The traditional definition of climate is pushed into larger timescales and with the understanding of the longer time scale global oscillations, their effect on climate and weather disturbances can be included in the synoptic and mesoscale timescales predictions.
Numerical Weather Prediction is a main focus in understanding air–sea interaction, tropical meteorology, atmospheric predictability, and tropospheric/stratospheric processes. The Naval Research Laboratory in Monterey, California, developed a global atmospheric model called Navy Operational Global Atmospheric Prediction System (NOGAPS). NOGAPS is run operationally at Fleet Numerical Meteorology and Oceanography Center for the United States Military. Many other global atmospheric models are run by national meteorological agencies.
Some meteorological principles
Boundary layer meteorology
Boundary layer meteorology is the study of processes in the air layer directly above Earth's surface, known as the atmospheric boundary layer (ABL). The effects of the surface – heating, cooling, and friction – cause turbulent mixing within the air layer. Significant movement of heat, matter, or momentum on time scales of less than a day are caused by turbulent motions. Boundary layer meteorology includes the study of all types of surface–atmosphere boundary, including ocean, lake, urban land and non-urban land for the study of meteorology.
Dynamic meteorology
Dynamic meteorology generally focuses on the fluid dynamics of the atmosphere. The idea of air parcel is used to define the smallest element of the atmosphere, while ignoring the discrete molecular and chemical nature of the atmosphere. An air parcel is defined as an infinitesimal region in the fluid continuum of the atmosphere. The fundamental laws of fluid dynamics, thermodynamics, and motion are used to study the atmosphere. The physical quantities that characterize the state of the atmosphere are temperature, density, pressure, etc. These variables have unique values in the continuum.
Applications
Weather forecasting
Weather forecasting is the application of science and technology to predict the state of the atmosphere at a future time and given location. Humans have attempted to predict the weather informally for millennia and formally since at least the 19th century. Weather forecasts are made by collecting quantitative data about the current state of the atmosphere and using scientific understanding of atmospheric processes to project how the atmosphere will evolve.
Once an all-human endeavor based mainly upon changes in barometric pressure, current weather conditions, and sky condition, forecast models are now used to determine future conditions. Human input is still required to pick the best possible forecast model to base the forecast upon, which involves pattern recognition skills, teleconnections, knowledge of model performance, and knowledge of model biases. The chaotic nature of the atmosphere, the massive computational power required to solve the equations that describe the atmosphere, error involved in measuring the initial conditions, and an incomplete understanding of atmospheric processes mean that forecasts become less accurate as the difference in current time and the time for which the forecast is being made (the range of the forecast) increases. The use of ensembles and model consensus help narrow the error and pick the most likely outcome.
There are a variety of end uses to weather forecasts. Weather warnings are important forecasts because they are used to protect life and property. Forecasts based on temperature and precipitation are important to agriculture, and therefore to commodity traders within stock markets. Temperature forecasts are used by utility companies to estimate demand over coming days. On an everyday basis, people use weather forecasts to determine what to wear. Since outdoor activities are severely curtailed by heavy rain, snow, and wind chill, forecasts can be used to plan activities around these events, and to plan ahead and survive them.
Aviation meteorology
Aviation meteorology deals with the impact of weather on air traffic management. It is important for air crews to understand the implications of weather on their flight plan as well as their aircraft, as noted by the Aeronautical Information Manual:
The effects of ice on aircraft are cumulative—thrust is reduced, drag increases, lift lessens, and weight increases. The results are an increase in stall speed and a deterioration of aircraft performance. In extreme cases, 2 to 3 inches of ice can form on the leading edge of the airfoil in less than 5 minutes. It takes but 1/2 inch of ice to reduce the lifting power of some aircraft by 50 percent and increases the frictional drag by an equal percentage.
Agricultural meteorology
Meteorologists, soil scientists, agricultural hydrologists, and agronomists are people concerned with studying the effects of weather and climate on plant distribution, crop yield, water-use efficiency, phenology of plant and animal development, and the energy balance of managed and natural ecosystems. Conversely, they are interested in the role of vegetation on climate and weather.
Hydrometeorology
Hydrometeorology is the branch of meteorology that deals with the hydrologic cycle, the water budget, and the rainfall statistics of storms. A hydrometeorologist prepares and issues forecasts of accumulating (quantitative) precipitation, heavy rain, heavy snow, and highlights areas with the potential for flash flooding. Typically the range of knowledge that is required overlaps with climatology, mesoscale and synoptic meteorology, and other geosciences.
The multidisciplinary nature of the branch can result in technical challenges, since tools and solutions from each of the individual disciplines involved may behave slightly differently, be optimized for different hard- and software platforms and use different data formats. There are some initiatives – such as the DRIHM project – that are trying to address this issue.
Nuclear meteorology
Nuclear meteorology investigates the distribution of radioactive aerosols and gases in the atmosphere.
Maritime meteorology
Maritime meteorology deals with air and wave forecasts for ships operating at sea. Organizations such as the Ocean Prediction Center, Honolulu National Weather Service forecast office, United Kingdom Met Office, KNMI and JMA prepare high seas forecasts for the world's oceans.
Military meteorology
Military meteorology is the research and application of meteorology for military purposes. In the United States, the United States Navy's Commander, Naval Meteorology and Oceanography Command oversees meteorological efforts for the Navy and Marine Corps while the United States Air Force's Air Force Weather Agency is responsible for the Air Force and Army.
Environmental meteorology
Environmental meteorology mainly analyzes industrial pollution dispersion physically and chemically based on meteorological parameters such as temperature, humidity, wind, and various weather conditions.
Renewable energy
Meteorology applications in renewable energy include basic research, "exploration," and potential mapping of wind power and solar radiation for wind and solar energy.
See also
References
Further reading
Byers, Horace. General Meteorology. New York: McGraw-Hill, 1994.
Dictionaries and encyclopedias
History
External links
Please see weather forecasting for weather forecast sites.
Air Quality Meteorology – Online course that introduces the basic concepts of meteorology and air quality necessary to understand meteorological computer models. Written at a bachelor's degree level.
The GLOBE Program – (Global Learning and Observations to Benefit the Environment) An international environmental science and education program that links students, teachers, and the scientific research community in an effort to learn more about the environment through student data collection and observation.
Glossary of Meteorology – From the American Meteorological Society, an excellent reference of nomenclature, equations, and concepts for the more advanced reader.
JetStream – An Online School for Weather – National Weather Service
Learn About Meteorology – Australian Bureau of Meteorology
The Weather Guide – Weather Tutorials and News at About.com
Meteorology Education and Training (MetEd) – The COMET Program
NOAA Central Library – National Oceanic & Atmospheric Administration
The World Weather 2010 Project – The University of Illinois at Urbana–Champaign
Ogimet – online data from meteorological stations of the world, obtained through NOAA free services
National Center for Atmospheric Research Archives, documents the history of meteorology
Weather forecasting and Climate science – United Kingdom Meteorological Office
Meteorology, BBC Radio 4 discussion with Vladimir Janković, Richard Hamblyn and Liba Taub (In Our Time, 6 March 2003)
Virtual exhibition about meteorology on the digital library of Paris Observatory
Applied and interdisciplinary physics
Oceanography
Physical geography
Greek words and phrases | Meteorology | Physics,Environmental_science | 8,461 |
4,886,965 | https://en.wikipedia.org/wiki/Baselining | Baselining is a method for analyzing computer network performance. The method is marked by comparing current performance to a "baseline" derived from past performance. If the performance of a network switch or other network components is measured over a period of time, that performance figure can be used as a comparative baseline for configuration changes.
Uses
Baselining is useful for many performance management tasks, including:
Monitoring daily network performance
Measuring trends in network performance
Assessing whether network performance is meeting requirements laid out in a service agreement
References
Network management | Baselining | Technology,Engineering | 102 |
23,634,474 | https://en.wikipedia.org/wiki/Normal%20crossing%20singularity | In algebraic geometry a normal crossing singularity is a singularity similar to a union of coordinate hyperplanes. The term can be confusing because normal crossing singularities are not usually normal schemes (in the sense of the local rings being integrally closed).
Normal crossing divisors
In algebraic geometry, normal crossing divisors are a class of divisors which generalize the smooth divisors. Intuitively they cross only in a transversal way.
Let A be an algebraic variety, and Z = Z_1 ∪ ... ∪ Z_r a reduced Cartier divisor, with Z_1, ..., Z_r its irreducible components. Then Z is called a smooth normal crossing divisor if either
(i) A is a curve, or
(ii) all Z_i are smooth, and for each component Z_k, the restricted divisor (Z − Z_k)|_{Z_k} is a smooth normal crossing divisor.
Equivalently, one says that a reduced divisor has normal crossings if each point étale locally looks like the intersection of coordinate hyperplanes.
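In standard local coordinates (notation introduced here only for concreteness), the étale-local model at a point where k of the components meet can be written as

Z \;=\; \{\, x_1 x_2 \cdots x_k = 0 \,\} \;\subseteq\; \mathbb{A}^n, \qquad 1 \le k \le n,

that is, the union of the first k coordinate hyperplanes.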
Normal crossing singularity
In algebraic geometry a normal crossings singularity is a point in an algebraic variety that is locally isomorphic to a normal crossings divisor.
Simple normal crossing singularity
In algebraic geometry a simple normal crossings singularity is a point in an algebraic variety, the latter having smooth irreducible components, that is locally isomorphic to a normal crossings divisor.
Examples
The normal crossing points in the algebraic variety called the Whitney umbrella are not simple normal crossings singularities.
The origin in the algebraic variety defined by xy = 0 is a simple normal crossings singularity. The variety itself, seen as a subvariety of the two-dimensional affine plane, is an example of a normal crossings divisor.
Any variety which is the union of smooth varieties which all have smooth intersections is a variety with normal crossing singularities. For example, let f and g be irreducible polynomials defining smooth hypersurfaces X = V(f) and Y = V(g) such that the ideal (f, g) defines a smooth curve. Then X ∪ Y is a surface with normal crossing singularities.
References
Robert Lazarsfeld, Positivity in Algebraic Geometry, Springer-Verlag, Berlin, 2004.
Algebraic geometry
Geometry of divisors | Normal crossing singularity | Mathematics | 418 |
27,509,071 | https://en.wikipedia.org/wiki/Metadata%20Working%20Group | The Metadata Working Group was formed in 2007 by Adobe Systems, Apple, Canon, Microsoft and Nokia. Sony joined later in 2008. Microsoft proposed the idea in 2006.
The focus of the group is to advance the interoperability of metadata stored in digital media. Its specification, Guidelines for Handling Image Metadata, defined interoperability among Exif, IIM (the older IPTC standard), and XMP for consumer digital images. The following properties were selected for interoperability:
keywords
description
date and time
orientation
rating
copyright
creator
location created
location shown
Test files for verification were added in 2008 and are available for download.
References
External links
Metadata
Metadata standards
Information technology organizations
Standards organizations in the United States
International organizations based in the United States
Organizations established in 2006 | Metadata Working Group | Technology | 152 |
47,053,192 | https://en.wikipedia.org/wiki/Rubroboletus%20haematinus | Rubroboletus haematinus is a fungus of the genus Rubroboletus. First described scientifically in 1976 by Roy Halling as a species of Boletus, in 2015 it was transferred to Rubroboletus, a genus circumscribed the year previously to contain other allied reddish colored, blue-staining bolete species. It is found in the western United States.
See also
List of North American boletes
References
External links
Fungi described in 1976
Fungi of the United States
haematinus
Fungi without expected TNC conservation status
Fungus species | Rubroboletus haematinus | Biology | 122 |
52,824,147 | https://en.wikipedia.org/wiki/Korean%20brining%20salt | Korean brining salt, also called Korean sea salt, is a variety of edible salt with a larger grain size compared to common kitchen salt. It is called gulgeun-sogeum (; "coarse salt") or wang-sogeum (; "king/queen salt") in Korean. The salt is used mainly for salting napa cabbages when making kimchi. Because it is minimally processed, there are microorganisms present in the salt, which serve to help develop flavours in fermented foods.
References
Salts
Edible salt
Korean cuisine | Korean brining salt | Chemistry | 119 |
9,105,867 | https://en.wikipedia.org/wiki/FitzHugh%E2%80%93Nagumo%20model | The FitzHugh–Nagumo model (FHN) describes a prototype of an excitable system (e.g., a neuron).
It is an example of a relaxation oscillator because, if the external stimulus exceeds a certain threshold value, the system will exhibit a characteristic excursion in phase space, before the variables v and w relax back to their rest values.
This behaviour is a sketch of neural spike generation, with a short, nonlinear elevation of the membrane voltage v, diminished over time by a slower, linear recovery variable w representing sodium channel reactivation and potassium channel deactivation, after stimulation by an external input current.
The equations for this dynamical system read

dv/dt = v − v³/3 − w + I_ext
τ dw/dt = v + a − bw

where a, b and τ > 0 are parameters and I_ext is the external stimulus current.
The FitzHugh–Nagumo model is a simplified 2D version of the Hodgkin–Huxley model which models in a detailed manner activation and deactivation dynamics of a spiking neuron.
In turn, the Van der Pol oscillator is a special case of the FitzHugh–Nagumo model, with a = b = 0.
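A minimal numerical illustration of the model is given below. It is a sketch under assumed parameter values (a = 0.7, b = 0.8, τ = 12.5, I_ext = 0.5), which are illustrative choices rather than values prescribed by the model, and it integrates the two equations with a simple forward-Euler scheme.

import numpy as np

def simulate_fhn(a=0.7, b=0.8, tau=12.5, i_ext=0.5, dt=0.01, n_steps=20000):
    # Forward-Euler integration of the FitzHugh-Nagumo equations.
    v, w = -1.0, -0.5                          # initial voltage and recovery variable
    vs = np.empty(n_steps)
    for k in range(n_steps):
        dv = v - v**3 / 3.0 - w + i_ext        # fast voltage equation
        dw = (v + a - b * w) / tau             # slow recovery equation
        v, w = v + dt * dv, w + dt * dw
        vs[k] = v
    return vs

trace = simulate_fhn()
print("voltage range over the run:", trace.min(), trace.max())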
History
It was named after Richard FitzHugh (1922–2007) who suggested the system in 1961 and Jinichi Nagumo et al. who created the equivalent circuit the following year.
In the original papers of FitzHugh, this model was called the Bonhoeffer–Van der Pol oscillator (named after Karl-Friedrich Bonhoeffer and Balthasar van der Pol) because it contains the Van der Pol oscillator as a special case for a = b = 0. The equivalent circuit was suggested by Jin-ichi Nagumo, Suguru Arimoto, and Shuji Yoshizawa.
Qualitative analysis
Qualitatively, the dynamics of this system is determined by the relation between the three branches of the cubic nullcline and the linear nullcline.
The cubic nullcline is defined by dv/dt = 0, that is, w = v − v³/3 + I_ext.
The linear nullcline is defined by dw/dt = 0, that is, w = (v + a)/b.
In general, the two nullclines intersect at one or three points, each of which is an equilibrium point. At large values of , far from origin, the flow is a clockwise circular flow, consequently the sum of the index for the entire vector field is +1. This means that when there is one equilibrium point, it must be a clockwise spiral point or a node. When there are three equilibrium points, they must be two clockwise spiral points and one saddle point.
If the linear nullcline pierces the cubic nullcline from downwards then it is a clockwise spiral point or a node.
If the linear nullcline pierces the cubic nullcline from upwards in the middle branch, then it is a saddle point.
The type and stability of the index +1 equilibrium can be computed from the trace and determinant of the Jacobian at that point. The point is stable iff the trace is negative, that is, 1 − v² − b/τ < 0 at the equilibrium.
The point is a spiral point iff the discriminant trace² − 4·determinant is negative. That is, (1 − v² − b/τ)² − (4/τ)(1 − b + bv²) < 0.
The limit cycle is born when a stable spiral point becomes unstable by Hopf bifurcation.
Only when the linear nullcline pierces the cubic nullcline at three points, the system has a separatrix, being the two branches of the stable manifold of the saddle point in the middle.
If the separatrix is a curve, then trajectories to the left of the separatrix converge to the left sink, and similarly for the right.
If the separatrix is a cycle around the left intersection, then trajectories inside the separatrix converge to the left spiral point. Trajectories outside the separatrix converge to the right sink. The separatrix itself is the limit cycle of the lower branch of the stable manifold for the saddle point in the middle. Similarly for the case where the separatrix is a cycle around the right intersection.
Between the two cases, the system undergoes a homoclinic bifurcation.
Gallery figures: animations of the FitzHugh–Nagumo model for varying parameter values.
See also
Autowave
Biological neuron model
Computational neuroscience
Hodgkin–Huxley model
Morris–Lecar model
Reaction–diffusion
Theta model
Chialvo map
References
Further reading
FitzHugh R. (1955) "Mathematical models of threshold phenomena in the nerve membrane". Bull. Math. Biophysics, 17:257—278
FitzHugh R. (1961) "Impulses and physiological states in theoretical models of nerve membrane". Biophysical J. 1:445–466
FitzHugh R. (1969) "Mathematical models of excitation and propagation in nerve". Chapter 1 (pp. 1–85 in H. P. Schwan, ed. Biological Engineering, McGraw–Hill Book Co., N.Y.)
Nagumo J., Arimoto S., and Yoshizawa S. (1962) "An active pulse transmission line simulating nerve axon". Proc. IRE. 50:2061–2070.
External links
FitzHugh–Nagumo model on Scholarpedia
Interactive FitzHugh-Nagumo. Java applet, includes phase space and parameters can be changed at any time.
Interactive FitzHugh–Nagumo in 1D. Java applet to simulate 1D waves propagating in a ring. Parameters can also be changed at any time.
Interactive FitzHugh–Nagumo in 2D. Java applet to simulate 2D waves including spiral waves. Parameters can also be changed at any time.
Java applet for two coupled FHN systems Options include time delayed coupling, self-feedback, noise induced excursions, data export to file. Source code available (BY-NC-SA license).
Nonlinear systems
Electrophysiology
Computational neuroscience
Biophysics
Articles containing video clips | FitzHugh–Nagumo model | Physics,Mathematics,Biology | 1,182 |
2,581,136 | https://en.wikipedia.org/wiki/Rationalizable%20strategy | Rationalizability is a solution concept in game theory. It is the most permissive possible solution concept that still requires both players to be at least somewhat rational and know the other players are also somewhat rational, i.e. that they do not play dominated strategies. A strategy is rationalizable if there exists some possible set of beliefs both players could have about each other's actions, that would still result in the strategy being played.
Rationalizability is a broader concept than a Nash equilibrium. Both require players to respond optimally to some belief about their opponents' actions, but Nash equilibrium requires these beliefs to be correct, while rationalizability does not. Rationalizability was first defined, independently, by Bernheim (1984) and Pearce (1984).
Definition
Starting with a normal-form game, the rationalizable set of actions can be computed as follows:
Start with the full action set for each player.
Remove all dominated strategies, i.e. strategies that "never make sense" (are never a best reply to any belief about the opponents' actions). The motivation for this step is no rational player would ever choose such actions.
Remove all actions which are never a best reply to any belief about the opponents' remaining actions—this second step is justified because each player knows that the other players are rational.
Continue the process until no further actions can be eliminated.
In a game with finitely many actions, this process always terminates and leaves a non-empty set of actions for each player. These are the rationalizable actions.
Iterated elimination of strictly dominated strategies (IESDS)
The iterated elimination (or deletion, or removal) of dominated strategies (also denominated as IESDS, or IDSDS, or IRSDS) is one common technique for solving games that involves iteratively removing dominated strategies. In the first step, at most one dominated strategy is removed from the strategy space of each of the players since no rational player would ever play these strategies. This results in a new, smaller game. Some strategies—that were not dominated before—may be dominated in the smaller game. The first step is repeated, creating a new even smaller game, and so on. The process stops when no dominated strategy is found for any player. This process is valid since it is assumed that rationality among players is common knowledge, that is, each player knows that the rest of the players are rational, and each player knows that the rest of the players know that he knows that the rest of the players are rational, and so on ad infinitum (see Aumann, 1976).
There are two versions of this process. One version involves only eliminating strictly dominated strategies. If, after completing this process, there is only one strategy for each player remaining, that strategy set is the unique Nash equilibrium. Moreover, iterated elimination of strictly dominated strategies is path independent. That is, if at any point in the process there are multiple strictly dominated strategies, then it doesn't matter for the end result which strategies we remove first.
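For two-player games given in normal form, this elimination procedure is straightforward to automate. The sketch below is an illustrative implementation restricted to pure strategies strictly dominated by other pure strategies (dominance by mixed strategies, discussed later, is not handled); the payoff matrices at the end encode a hypothetical prisoner's dilemma, not one of the games shown in the figures.

import numpy as np

def strictly_dominated(payoff, own, opp):
    # Indices in `own` strictly dominated by another remaining pure strategy,
    # where payoff[i, j] is this player's payoff for own strategy i against opponent strategy j.
    dominated = set()
    for i in own:
        for j in own:
            if i != j and all(payoff[j, c] > payoff[i, c] for c in opp):
                dominated.add(i)
    return dominated

def iesds(payoff1, payoff2):
    # Iterated elimination of strictly dominated pure strategies for a two-player game.
    rows = set(range(payoff1.shape[0]))
    cols = set(range(payoff1.shape[1]))
    while True:
        bad_rows = strictly_dominated(payoff1, rows, cols)
        bad_cols = strictly_dominated(payoff2.T, cols, rows)
        if not bad_rows and not bad_cols:
            return sorted(rows), sorted(cols)
        rows -= bad_rows
        cols -= bad_cols

# Hypothetical prisoner's dilemma: strategy 0 = cooperate, strategy 1 = defect.
p1 = np.array([[3, 0],
               [5, 1]])
p2 = np.array([[3, 5],
               [0, 1]])
print(iesds(p1, p2))   # ([1], [1]): only mutual defection survives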
Strict Dominance Deletion Step-by-Step Example:
C is strictly dominated by A for Player 1. Therefore, Player 1 will never play strategy C. Player 2 knows this. (see IESDS Figure 1)
Of the remaining strategies (see IESDS Figure 2), Z is strictly dominated by Y and X for Player 2. Therefore, Player 2 will never play strategy Z. Player 1 knows this.
Of the remaining strategies (see IESDS Figure 3), B is strictly dominated by A for Player 1. Therefore, Player 1 will never play B. Player 2 knows this.
Of the remaining strategies (see IESDS Figure 4), Y is strictly dominated by X for Player 2. Therefore, Player 2 will never play Y. Player 1 knows this.
Only one rationalizable strategy is left {A,X} which results in a payoff of (10,4). This is the single Nash Equilibrium for this game.
Another version involves eliminating both strictly and weakly dominated strategies. If, at the end of the process, there is a single strategy for each player, this strategy set is also a Nash equilibrium. However, unlike the first process, elimination of weakly dominated strategies may eliminate some Nash equilibria. As a result, the Nash equilibrium found by eliminating weakly dominated strategies may not be the only Nash equilibrium. (In some games, if we remove weakly dominated strategies in a different order, we may end up with a different Nash equilibrium.)
Weak Dominance Deletion Step-by-Step Example:
O is strictly dominated by N for Player 1. Therefore, Player 1 will never play strategy O. Player 2 knows this. (see IESDS Figure 5)
U is weakly dominated by T for Player 2. If Player 2 chooses T, then the final equilibrium is (N,T)
O is strictly dominated by N for Player 1. Therefore, Player 1 will never play strategy O. Player 2 knows this. (see IESDS Figure 6)
T is weakly dominated by U for Player 2. If Player 2 chooses U, then the final equilibrium is (N,U)
In any case, if by iterated elimination of dominated strategies there is only one strategy left for each player, the game is called a dominance-solvable game.
Iterated elimination by mixed strategy
There are instances when there is no pure strategy that dominates another pure strategy, but a mixture of two or more pure strategies can dominate another strategy. This is called Strictly Dominant Mixed Strategies. Some authors allow for elimination of strategies dominated by a mixed strategy in this way.
Example 1:
In this scenario, for player 1, there is no pure strategy that dominates another pure strategy. Let's define the probability of player 1 playing up as p, and let p = 1/2. We can set a mixed strategy where player 1 plays up and down with probabilities (1/2, 1/2). When player 2 plays left, then the payoff for player 1 playing the mixed strategy of up and down is 1; when player 2 plays right, the payoff for player 1 playing the mixed strategy is 0.5. Thus regardless of whether player 2 chooses left or right, player 1 gets more from playing this mixed strategy between up and down than if the player were to play the middle strategy. In this case, we should eliminate the middle strategy for player 1 since it's been dominated by the mixed strategy of playing up and down with probability (1/2, 1/2).
Example 2:
We can demonstrate the same methods on a more complex game and solve for the rational strategies. In this scenario, the blue coloring represents the dominating numbers in the particular strategy.
Step-by-step solving:
For Player 2, X is dominated by the mixed strategy Y and Z.
The expected payoff for playing the mixed strategy (1/2)Y + (1/2)Z must be greater than the expected payoff for playing pure strategy X, assigning 1/2 and 1/2 as tester values. The argument for mixed strategy dominance can be made if there is at least one mixed strategy that allows for dominance.
Testing with 1/2 and 1/2 gives the following:
Expected average payoff of Strategy Y: (1/2)(4+0+4) = 4
Expected average payoff of Strategy Z: (1/2)(0+5+5) = 5
Expected average payoff of pure strategy X: (1+1+3) = 5
Set up the inequality to determine whether the mixed strategy will dominate the pure strategy based on expected payoffs.
uY + uZ > uX
4 + 5 > 5
Mixed strategy Y and Z will dominate pure strategy X for Player 2, and thus X can be eliminated from the rationalizable strategies for P2.
For Player 1, U is dominated by the pure strategy D.
For player 2, Y is dominated by the pure strategy Z.
This leaves M dominating D for Player 1.
The only rationalizable strategy for Players 1 and 2 is then (M,Z) or (3,5).
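The mixed-strategy dominance argument used in Example 2 can also be checked numerically. The sketch below reuses the Player 2 payoffs that appear in the text (4, 0, 4 for Y; 0, 5, 5 for Z; 1, 1, 3 for X, each listed against Player 1's strategies in the order given above, which is an assumption about the underlying table) and verifies that the 50/50 mixture of Y and Z yields a strictly higher expected payoff than X against every opposing strategy.

# Player 2's payoffs against each of Player 1's strategies, taken from the example above.
payoff_Y = [4, 0, 4]
payoff_Z = [0, 5, 5]
payoff_X = [1, 1, 3]

weight_Y = weight_Z = 0.5   # the tester values for the mixed strategy

mixed = [weight_Y * y + weight_Z * z for y, z in zip(payoff_Y, payoff_Z)]
print(mixed)                                        # [2.0, 2.5, 4.5]
# X is strictly dominated if the mixture does better against every strategy of Player 1.
print(all(m > x for m, x in zip(mixed, payoff_X)))  # True: X can be eliminated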
Constraints on beliefs
Consider a simple coordination game (the payoff matrix is to the right). The row player can play a if he can reasonably believe that the column player could play A, since a is a best response to A. He can reasonably believe that the column player can play A if it is reasonable for the column player to believe that the row player could play a. She can believe that he will play a if it is reasonable for her to believe that he could play a, etc.
This provides an infinite chain of consistent beliefs that result in the players playing (a, A). This makes (a, A) a rationalizable pair of actions. A similar process can be repeated for (b, B).
As an example where not all strategies are rationalizable, consider a prisoner's dilemma pictured to the left. Row player would never play c, since c is not a best response to any strategy by the column player. For this reason, c is not rationalizable.
Conversely, for two-player games, the set of all rationalizable strategies can be found by iterated elimination of strictly dominated strategies. For this method to hold however, one also needs to consider strict domination by mixed strategies. Consider the game on the right with payoffs of the column player omitted for simplicity. Notice that "b" is not strictly dominated by either "t" or "m" in the pure strategy sense, but it is still dominated by a strategy that would mix "t" and "m" with probability of each equal to 1/2. This is due to the fact that given any belief about the action of the column player, the mixed strategy will always yield higher expected payoff. This implies that "b" is not rationalizable.
Moreover, "b" is not a best response to either "L" or "R" or any mix of the two. This is because an action that is not rationalizable can never be a best response to any opponent's strategy (pure or mixed). This would imply another version of the previous method of finding rationalizable strategies as those that survive the iterated elimination of strategies that are never a best response (in pure or mixed sense).
In games with more than two players, however, there may be strategies that are not strictly dominated, but which can never be the best response. By the iterated elimination of all such strategies one can find the rationalizable strategies for a multiplayer game.
Rationalizability and Nash equilibria
It can be easily proved that a Nash equilibrium is a rationalizable equilibrium; however, the converse is not true. Some rationalizable equilibria are not Nash equilibria. This makes the rationalizability concept a generalization of Nash equilibrium concept.
As an example, consider the game matching pennies pictured to the right. In this game the only Nash equilibrium is row playing h and t with equal probability and column playing H and T with equal probability. However, all pure strategies in this game are rationalizable.
Consider the following reasoning: row can play h if it is reasonable for her to believe that column will play H. Column can play H if it is reasonable for him to believe that row will play t. Row can play t if it is reasonable for her to believe that column will play T. Column can play T if it is reasonable for him to believe that row will play h (beginning the cycle again). This provides an infinite set of consistent beliefs that results in row playing h. A similar argument can be given for row playing t, and for column playing either H or T.
See also
Self-confirming equilibrium
Strategic dominance
Footnotes
References
Bernheim, D. (1984) Rationalizable Strategic Behavior. Econometrica 52: 1007–1028.
Fudenberg, Drew and Jean Tirole (1993) Game Theory. Cambridge: MIT Press.
Pearce, D. (1984) Rationalizable Strategic Behavior and the Problem of Perfection. Econometrica 52: 1029–1050.
Ratcliff, J. (1992–1997) lecture notes on game theory, §2.2: "Iterated Dominance and Rationalizability"
Game theory | Rationalizable strategy | Mathematics | 2,481 |
10,136 | https://en.wikipedia.org/wiki/Expert%20system | In artificial intelligence (AI), an expert system is a computer system emulating the decision-making ability of a human expert.
Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural programming code. Expert systems were among the first truly successful forms of AI software. They were created in the 1970s and then proliferated in the 1980s, being then widely regarded as the future of AI — before the advent of successful artificial neural networks.
An expert system is divided into two subsystems: 1) a knowledge base, which represents facts and rules; and 2) an inference engine, which applies the rules to the known facts to deduce new facts, and can include explaining and debugging abilities.
History
Early development
Soon after the dawn of modern computers in the late 1940s and early 1950s, researchers started realizing the immense potential these machines had for modern society. One of the first challenges was to make such machines able to “think” like humans – in particular, making these machines able to make important decisions the way humans do. The medical–healthcare field presented the tantalizing challenge of enabling these machines to make medical diagnostic decisions.
Thus, in the late 1950s, right after the information age had fully arrived, researchers started experimenting with the prospect of using computer technology to emulate human decision making. For example, biomedical researchers started creating computer-aided systems for diagnostic applications in medicine and biology. These early diagnostic systems used patients’ symptoms and laboratory test results as inputs to generate a diagnostic outcome.
These systems were often described as the early forms of expert systems. However, researchers realized that there were significant limits when using traditional methods such as flow charts, statistical pattern matching, or probability theory.
Formal introduction and later developments
This previous situation gradually led to the development of expert systems, which used knowledge-based approaches. These expert systems in medicine were the MYCIN expert system, the Internist-I expert system and later, in the middle of the 1980s, the CADUCEUS.
Expert systems were formally introduced around 1965 by the Stanford Heuristic Programming Project led by Edward Feigenbaum, who is sometimes termed the "father of expert systems"; other key early contributors were Bruce Buchanan and Randall Davis. The Stanford researchers tried to identify domains where expertise was highly valued and complex, such as diagnosing infectious diseases (Mycin) and identifying unknown organic molecules (Dendral). The idea that "intelligent systems derive their power from the knowledge they possess rather than from the specific formalisms and inference schemes they use" – as Feigenbaum said – was at the time a significant step forward, since the past research had been focused on heuristic computational methods, culminating in attempts to develop very general-purpose problem solvers (foremostly the conjunct work of Allen Newell and Herbert Simon). Expert systems became some of the first truly successful forms of artificial intelligence (AI) software.
Research on expert systems was also active in Europe. In the US, the focus tended to be on the use of production rule systems, first on systems hard coded on top of Lisp programming environments and then on expert system shells developed by vendors such as Intellicorp. In Europe, research focused more on systems and expert systems shells developed in Prolog. The advantage of Prolog systems was that they employed a form of rule-based programming that was based on formal logic.
One such early expert system shell based on Prolog was APES.
One of the first use cases of Prolog and APES was in the legal area namely, the encoding of a large portion of the British Nationality Act. Lance Elliot wrote: "The British Nationality Act was passed in 1981 and shortly thereafter was used as a means of showcasing the efficacy of using Artificial Intelligence (AI) techniques and technologies, doing so to explore how the at-the-time newly enacted statutory law might be encoded into a computerized logic-based formalization. A now oft-cited research paper entitled “The British Nationality Act as a Logic Program” was published in 1986 and subsequently became a hallmark for subsequent work in AI and the law."
In the 1980s, expert systems proliferated. Universities offered expert system courses and two-thirds of the Fortune 500 companies applied the technology in daily business activities. Interest was international with the Fifth Generation Computer Systems project in Japan and increased research funding in Europe.
In 1981, the first IBM PC, with the PC DOS operating system, was introduced. The imbalance between the high affordability of the relatively powerful chips in the PC, compared to the much more expensive cost of processing power in the mainframes that dominated the corporate IT world at the time, created a new type of architecture for corporate computing, termed the client–server model. Calculations and reasoning could be performed at a fraction of the price of a mainframe using a PC. This model also enabled business units to bypass corporate IT departments and directly build their own applications. As a result, client-server had a tremendous impact on the expert systems market. Expert systems were already outliers in much of the business world, requiring new skills that many IT departments did not have and were not eager to develop. They were a natural fit for new PC-based shells that promised to put application development into the hands of end users and experts. Until then, the main development environment for expert systems had been high end Lisp machines from Xerox, Symbolics, and Texas Instruments. With the rise of the PC and client-server computing, vendors such as Intellicorp and Inference Corporation shifted their priorities to developing PC-based tools. Also, new vendors, often financed by venture capital (such as Aion Corporation, Neuron Data, Exsys, VP-Expert, and many others), started appearing regularly.
The first expert system to be used in a design capacity for a large-scale product was the Synthesis of Integral Design (SID) software program, developed in 1982. Written in Lisp, SID generated 93% of the VAX 9000 CPU logic gates. Input to the software was a set of rules created by several expert logic designers. SID expanded the rules and generated software logic synthesis routines many times the size of the rules themselves. Surprisingly, the combination of these rules resulted in an overall design that exceeded the capabilities of the experts themselves, and in many cases out-performed the human counterparts. While some rules contradicted others, top-level control parameters for speed and area provided the tie-breaker. The program was highly controversial but used nevertheless due to project budget constraints. It was terminated by logic designers after the VAX 9000 project completion.
During the years before the middle of the 1970s, the expectations of what expert systems could accomplish in many fields tended to be extremely optimistic. At the start of these early studies, researchers were hoping to develop entirely automatic (i.e., completely computerized) expert systems. The expectations people had of what computers could do were frequently too idealistic. This situation changed radically after Richard M. Karp published his breakthrough paper "Reducibility among Combinatorial Problems" in the early 1970s. Thanks to the work of Karp and other scholars, such as Hubert L. Dreyfus, it became clear that there are certain limits and possibilities when one designs computer algorithms. His findings describe what computers can do and what they cannot do. Many of the computational problems related to this type of expert system have certain pragmatic limits. These findings laid the groundwork that led to the next developments in the field.
In the 1990s and beyond, the term expert system and the idea of a standalone AI system mostly dropped from the IT lexicon. There are two interpretations of this. One is that "expert systems failed": the IT world moved on because expert systems did not deliver on their over hyped promise. The other is the mirror opposite, that expert systems were simply victims of their success: as IT professionals grasped concepts such as rule engines, such tools migrated from being standalone tools for developing special purpose expert systems, to being one of many standard tools. Other researchers suggest that Expert Systems caused inter-company power struggles when the IT organization lost its exclusivity in software modifications to users or Knowledge Engineers.
In the first decade of the 2000s, there was a "resurrection" for the technology, while using the term rule-based systems, with significant success stories and adoption. Many of the leading major business application suite vendors (such as SAP, Siebel, and Oracle) integrated expert system abilities into their suite of products as a way to specify business logic. Rule engines are no longer simply for defining the rules an expert would use but for any type of complex, volatile, and critical business logic; they often go hand in hand with business process automation and integration environments.
Current approaches to expert systems
The limits of earlier types of expert systems prompted researchers to develop new approaches. They have developed more efficient, flexible, and powerful methods to simulate the human decision-making process. Some of these approaches are based on new methods of artificial intelligence (AI), in particular machine learning and data mining approaches with a feedback mechanism; recurrent neural networks often take advantage of such mechanisms. A related discussion appears in the Disadvantages section below.
Modern systems can incorporate new knowledge more easily and thus update themselves readily. Such systems can generalize better from existing knowledge and deal with vast amounts of complex data, which connects them to the subject of big data. Sometimes these types of expert systems are called "intelligent systems."
More recently, it can be argued that expert systems have moved into the area of business rules and business rules management systems.
Software architecture
An expert system is an example of a knowledge-based system. Expert systems were the first commercial systems to use a knowledge-based architecture. In general, an expert system includes the following components: a knowledge base, an inference engine, an explanation facility, a knowledge acquisition facility, and a user interface.
The knowledge base represents facts about the world. In early expert systems such as Mycin and Dendral, these facts were represented mainly as flat assertions about variables. In later expert systems developed with commercial shells, the knowledge base took on more structure and used concepts from object-oriented programming. The world was represented as classes, subclasses, and instances and assertions were replaced by values of object instances. The rules worked by querying and asserting values of the objects.
The inference engine is an automated reasoning system that evaluates the current state of the knowledge-base, applies relevant rules, and then asserts new knowledge into the knowledge base. The inference engine may also include abilities for explanation, so that it can explain to a user the chain of reasoning used to arrive at a particular conclusion by tracing back over the firing of rules that resulted in the assertion.
There are two main modes for an inference engine: forward chaining and backward chaining. The two approaches are distinguished by whether the inference engine is driven by the antecedent (left-hand side) or the consequent (right-hand side) of the rule. In forward chaining an antecedent fires and asserts the consequent. For example, consider the following rule, stating that all men are mortal:

R1: Man(x) ⇒ Mortal(x)
A simple example of forward chaining would be to assert Man(Socrates) to the system and then trigger the inference engine. It would match R1 and assert Mortal(Socrates) into the knowledge base.
Backward chaining is a bit less straightforward. In backward chaining the system looks at possible conclusions and works backward to see whether they might be true. So if the system was trying to determine whether Mortal(Socrates) is true, it would find R1 and query the knowledge base to see whether Man(Socrates) is true. One of the early innovations of expert system shells was to integrate inference engines with a user interface. This could be especially powerful with backward chaining: if the system needs to know a particular fact but does not, it can simply generate an input screen and ask the user whether the information is known. So in this example, it could use R1 to ask the user whether Socrates was a man and then use that new information accordingly.
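The following minimal sketch (Python; illustrative only, not the code of any historical shell, with rule encoding and function names chosen for this example) shows how the two modes can be realized for the Socrates example: forward chaining fires R1 as soon as Man(Socrates) is asserted, while backward chaining starts from the goal Mortal(Socrates) and falls back to asking the user about facts it cannot derive.

```python
# Illustrative sketch of forward and backward chaining; not taken from any
# real expert-system shell. Each rule maps a set of antecedent facts to a consequent.
RULES = {
    "R1": (frozenset({"Man(Socrates)"}), "Mortal(Socrates)"),
}

def forward_chain(facts):
    """Fire every rule whose antecedents hold, asserting consequents
    until no new facts can be added (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for name, (antecedents, consequent) in RULES.items():
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                print(f"{name} fired: asserted {consequent}")
                changed = True
    return facts

def backward_chain(goal, facts, ask=input):
    """Try to establish a goal by working backwards through the rules
    (backward chaining); ask the user about facts that cannot be derived."""
    if goal in facts:
        return True
    for name, (antecedents, consequent) in RULES.items():
        if consequent == goal and all(backward_chain(a, facts, ask) for a in antecedents):
            return True
    answer = ask(f"Is {goal} true? (y/n) ")
    return answer.strip().lower().startswith("y")

forward_chain({"Man(Socrates)"})            # prints: R1 fired: asserted Mortal(Socrates)
# backward_chain("Mortal(Socrates)", set()) # would ask: Is Man(Socrates) true? (y/n)
```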
The use of rules to explicitly represent knowledge also enabled explanation abilities. In the simple example above, if the system had used R1 to assert that Socrates was mortal and a user wished to understand why, they could query the system, and the system would look back at the rules that fired to cause the assertion and present those rules to the user as an explanation. In English, if the user asked "Why is Socrates mortal?", the system would reply "Because all men are mortal and Socrates is a man". A significant area for research was the generation of explanations from the knowledge base in natural English rather than simply by showing the more formal but less intuitive rules.
As expert systems evolved, many new techniques were incorporated into various types of inference engines. Some of the most important of these were:
Truth maintenance. These systems record the dependencies in a knowledge-base so that when facts are altered, dependent knowledge can be altered accordingly. For example, if the system learns that Socrates is no longer known to be a man it will revoke the assertion that Socrates is mortal.
Hypothetical reasoning. In this, the knowledge base can be divided up into many possible views, a.k.a. worlds. This allows the inference engine to explore multiple possibilities in parallel. For example, the system may want to explore the consequences of both assertions, what will be true if Socrates is a Man and what will be true if he is not?
Uncertainty systems. One of the first extensions of simply using rules to represent knowledge was to associate a probability with each rule: rather than asserting that Socrates is mortal, the system asserts that Socrates may be mortal with some probability value. Simple probabilities were extended in some systems with sophisticated mechanisms for uncertain reasoning, such as fuzzy logic and combinations of probabilities (a minimal certainty-factor sketch follows this list).
Ontology classification. With the addition of object classes to the knowledge base, a new type of reasoning was possible. Along with reasoning simply about object values, the system could also reason about object structures. In this simple example, Man can represent an object class and R1 can be redefined as a rule that defines the class of all men. These types of special-purpose inference engines are termed classifiers. Although they were not widely used in expert systems, classifiers are very powerful for unstructured volatile domains, and are a key technology for the Internet and the emerging Semantic Web.
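As an illustration of the uncertainty-systems idea above, the sketch below combines rule conclusions using MYCIN-style certainty factors. The combination formulas are the classical MYCIN ones; the rule contents and numeric values are purely illustrative and not taken from any particular system.

```python
# MYCIN-style certainty factors (CFs) in [-1, 1]: each rule lends partial
# support (or disconfirmation) to a conclusion instead of asserting it outright.

def combine_cf(cf1, cf2):
    """Combine two certainty factors for the same conclusion (classical MYCIN rules)."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two independent rules each lend partial support to "Mortal(Socrates)".
evidence = [0.7, 0.6]      # CFs contributed by the rules that fired (illustrative values)
belief = 0.0
for cf in evidence:
    belief = combine_cf(belief, cf)
print(round(belief, 2))    # 0.88: combined support is stronger than either rule alone
```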
Advantages
The goal of knowledge-based systems is to make the critical information required for the system to work explicit rather than implicit. In a traditional computer program, the logic is embedded in code that can typically only be reviewed by an IT specialist. With an expert system, the goal was to specify the rules in a format that was intuitive and easily understood, reviewed, and even edited by domain experts rather than IT experts. The benefits of this explicit knowledge representation were rapid development and ease of maintenance.
Ease of maintenance is the most obvious benefit. This was achieved in two ways. First, by removing the need to write conventional code, many of the normal problems that can be caused by even small changes to a system could be avoided with expert systems. Essentially, the logical flow of the program (at least at the highest level) was simply a given for the system: one merely invokes the inference engine. This was also a reason for the second benefit: rapid prototyping. With an expert system shell it was possible to enter a few rules and have a prototype developed in days rather than the months or years typically associated with complex IT projects.
A claim for expert system shells that was often made was that they removed the need for trained programmers and that experts could develop systems themselves. In reality, this was seldom if ever true. While the rules for an expert system were more comprehensible than typical computer code, they still had a formal syntax where a misplaced comma or other character could cause havoc as with any other computer language. Also, as expert systems moved from prototypes in the lab to deployment in the business world, issues of integration and maintenance became far more critical. Inevitably demands to integrate with, and take advantage of, large legacy databases and systems arose. To accomplish this, integration required the same skills as any other type of system.
Summing up the benefits of using expert systems, the following can be highlighted:
Increased availability and reliability: Expertise can be accessed on any computer hardware and the system always completes responses on time.
Multiple expertise: Several expert systems can be run simultaneously to solve a problem and can achieve a higher level of expertise than a single human expert.
Explanation: Expert systems always describe how the problem was solved.
Fast response: Expert systems are fast and able to solve problems in real time.
Reduced cost: The cost of expertise for each user is significantly reduced.
Disadvantages
The most common disadvantage cited for expert systems in the academic literature is the knowledge acquisition problem. Obtaining the time of domain experts for any software application is always difficult, but for expert systems it was especially difficult because the experts were by definition highly valued and in constant demand by the organization. As a result of this problem, a great deal of research in the later years of expert systems was focused on tools for knowledge acquisition, to help automate the process of designing, debugging, and maintaining rules defined by experts. However, when looking at the life-cycle of expert systems in actual use, other problems – essentially the same problems as those of any other large system – seem at least as critical as knowledge acquisition: integration, access to large databases, and performance.
Performance could be especially problematic because early expert systems were built using tools (such as earlier Lisp versions) that interpreted code expressions without first compiling them. This provided a powerful development environment, but with the drawback that it was virtually impossible to match the efficiency of the fastest compiled languages (such as C). System and database integration were difficult for early expert systems because the tools were mostly in languages and platforms that were neither familiar to nor welcome in most corporate IT environments – programming languages such as Lisp and Prolog, and hardware platforms such as Lisp machines and personal computers. As a result, much effort in the later stages of expert system tool development was focused on integrating with legacy environments such as COBOL and large database systems, and on porting to more standard platforms. These issues were resolved mainly by the client–server paradigm shift, as PCs were gradually accepted in the IT environment as a legitimate platform for serious business system development and as affordable minicomputer servers provided the processing power needed for AI applications.
Another major challenge of expert systems emerges as the size of the knowledge base increases, which causes the processing complexity to increase. For instance, when an expert system with 100 million rules was envisioned as the ultimate expert system, it became obvious that such a system would be too complex and would face too many computational problems. An inference engine would have to be able to process huge numbers of rules to reach a decision.
Verifying that decision rules are consistent with each other is also a challenge when there are many rules. Usually such a problem leads to a satisfiability (SAT) formulation, i.e., the well-known NP-complete Boolean satisfiability problem. Assuming only binary variables, say n of them, the corresponding search space is of size 2^n; with just 50 variables, for example, there are already about 10^15 possible assignments. Thus, the search space can grow exponentially.
There are also questions on how to prioritize the use of the rules to operate more efficiently, or how to resolve ambiguities (for instance, if there are too many else-if sub-structures within one rule) and so on.
Other problems are related to the overfitting and overgeneralization effects when using known facts and trying to generalize to other cases not described explicitly in the knowledge base. Such problems exist with methods that employ machine learning approaches too.
Another problem related to the knowledge base is how to make updates of its knowledge quickly and effectively. Also how to add a new piece of knowledge (i.e., where to add it among many rules) is challenging. Modern approaches that rely on machine learning methods are easier in this regard.
Because of the above challenges, it became clear that new approaches to AI were required instead of rule-based technologies. These new approaches are based on the use of machine learning techniques, along with the use of feedback mechanisms.
The key challenges faced by expert systems in medicine (if one considers computer-aided diagnostic systems as modern expert systems), and perhaps in other application domains, include issues related to big data, existing regulations, healthcare practice, various algorithmic issues, and system assessment.
Finally, the following disadvantages of using expert systems can be summarized:
Expert systems have superficial knowledge, and a simple task can potentially become computationally expensive.
Expert systems require knowledge engineers to input the data, and data acquisition is very hard.
The expert system may choose the most inappropriate method for solving a particular problem.
Problems of ethics in the use of any form of AI are very relevant at present.
It is a closed world with specific knowledge, in which there is no deep perception of concepts and their interrelationships until an expert provides them.
Applications
Hayes-Roth divides expert systems applications into 10 categories illustrated in the following table. The example applications were not in the original Hayes-Roth table, and some of them arose well afterward. Any application that is not footnoted is described in the Hayes-Roth book. Also, while these categories provide an intuitive framework to describe the space of expert systems applications, they are not rigid categories, and in some cases an application may show traits of more than one category.
Hearsay was an early attempt at solving voice recognition through an expert systems approach. For the most part this category of expert systems was not very successful. Hearsay and all interpretation systems are essentially pattern recognition systems, looking for patterns in noisy data; in the case of Hearsay, recognizing phonemes in an audio stream. Other early examples included analyzing sonar data to detect Russian submarines. These kinds of systems proved much more amenable to a neural network AI solution than a rule-based approach.
CADUCEUS and MYCIN were medical diagnosis systems. The user describes their symptoms to the computer as they would to a doctor and the computer returns a medical diagnosis.
Dendral was a tool to study hypothesis formation in the identification of organic molecules. The general problem it solved—designing a solution given a set of constraints—was one of the most successful areas for early expert systems applied to business domains such as salespeople configuring Digital Equipment Corporation (DEC) VAX computers and mortgage loan application development.
SMH.PAL is an expert system for the assessment of students with multiple disabilities.
GARVAN-ES1 was a medical expert system, developed at the Garvan Institute of Medical Research, that provided automated clinical diagnostic comments on endocrine reports from a pathology laboratory. It was one of the first medical expert systems to go into routine clinical use internationally and the first expert system to be used for diagnosis daily in Australia. The system was written in C and ran on a PDP-11 in 64K of memory. It had 661 rules that were compiled, not interpreted.
Mistral is an expert system to monitor dam safety, developed in the 1990s by Ismes (Italy). It gets data from an automatic monitoring system and performs a diagnosis of the state of the dam. Its first copy, installed in 1992 on the Ridracoli Dam (Italy), is still operational 24/7/365. It has been installed on several dams in Italy and abroad (e.g., Itaipu Dam in Brazil), and on landslide sites under the name of Eydenet, and on monuments under the name of Kaleidos. Mistral is a registered trade mark of CESI.
See also
AI winter
CLIPS
Constraint logic programming
Constraint satisfaction
Knowledge engineering
Learning classifier system
Rule-based machine learning
References
Works cited
External links
Expert System tutorial on Code Project
Decision support systems
Information systems | Expert system | Technology | 4,913 |
23,958,261 | https://en.wikipedia.org/wiki/C9H10O4 | {{DISPLAYTITLE:C9H10O4}}
The molecular formula C9H10O4 (molar mass: 182.17 g/mol, exact mass: 182.05790878 u) may refer to:
3,5-Dihydroxyphenylpropionoic acid, a metabolite of alkylresorcinols
Dihydrocaffeic acid, a phenolic compound
Ethyl protocatechuate, a phenolic compound
Flopropione
Homovanillic acid
m-Hydroxyphenylhydracrylic acid
Methylenomycin A
Syringaldehyde
Veratric acid, a benzoic acid derivative | C9H10O4 | Chemistry | 150 |
11,023,071 | https://en.wikipedia.org/wiki/Arxula%20adeninivorans | Arxula adeninivorans (Blastobotrys adeninivorans) is a dimorphic yeast with unusual characteristics. The first description of A. adeninivorans was provided in the mid-eighties. The species was initially designated as Trichosporon adeninovorans. After the first identification in the Netherlands, strains of this species were later on also found in Siberia and in South Africa in soil and in wood hydrolysates. Recently, A. adeninivorans was renamed as Blastobotrys adeninivorans after a detailed phylogenetic comparison with other related yeast species. However, many scientists desire to maintain the popular name A. adeninivorans.
Characteristics
All A. adeninivorans strains share unusual biochemical activities: they are able to assimilate a range of amines, adenine (hence the name A. adeninivorans) and several other purine compounds as sole energy and carbon sources; they assimilate nitrate; and they are thermo-tolerant (they can grow at temperatures of up to ). A special feature of biotechnological impact is a temperature-dependent dimorphism. At temperatures above a reversible transition from budding cells to mycelial forms is induced, and budding is re-established when the cultivation temperature is decreased below .
Biotechnological potential
The unusual characteristics described above render A. adeninivorans very attractive for biotechnological applications. On the one hand, it is a source for many enzymes with interesting properties and the respective genes, for instance glucoamylase, tannase, lipase, phosphatases and many others. On the other hand, it is a very robust and safe organism that can be genetically engineered to produce foreign proteins. Suitable host strains can be transformed with plasmids. The basic design of such plasmids is similar to that described under Hansenula polymorpha and yeast expression platforms.
Here are two examples of recombinant strains and their application; in both cases several plasmids carrying different foreign product genes were introduced into the yeast. In the first case the recombinant yeast strain acquired the capability to produce natural plastics, namely PHA (polyhydroxyalkanoates). For this purpose a new synthetic pathway consisting of three enzymes had to be transferred into the organism. The respective genes phbA, phbB and phbC were isolated from the bacterium Ralstonia eutropha and integrated into plasmids. These plasmids were introduced into the organism, and the resulting recombinant strain was able to produce the plastic material.
In the second example, a biosensor for the detection of estrogenic activities in wastewater has been developed. In this case the route by which estrogens act in nature was mimicked. A gene for the human estrogen receptor alpha (hERalpha), contained on a first plasmid, was initially introduced. The protein encoded by this gene recognizes and binds estrogens. The complex then binds to a second gene, contained on a second plasmid, which becomes activated upon binding. Here the sequence of a reporter gene (whose product can be easily monitored by simple assays) was fused to a control sequence (a promoter) responsive to the estrogen/receptor complex. Such strains can be cultured in the presence of wastewater, and the estrogens present in such samples can be easily quantified by the amount of the reporter gene product.
References
Gellissen G (ed) (2005) Production of recombinant proteins - novel microbial and eukaryotic expression systems. Wiley-VCH, Weinheim.
Yeasts
Fungi described in 1984
Fungi of Africa
Fungi of Europe
Fungus species | Arxula adeninivorans | Biology | 776 |
70,037,109 | https://en.wikipedia.org/wiki/Orbicula%20richenii | Orbicula richenii is a species of fungus belonging to the Orbicula genus. It was documented in 1904 by Brazilian mycologist Johannes Rick.
References
Pezizales
Fungi described in 1904
Fungus species | Orbicula richenii | Biology | 45 |
186,991 | https://en.wikipedia.org/wiki/Nectocaris | Nectocaris is a genus of squid-like animal of controversial affinities known from the Cambrian period. The initial fossils were described from the Burgess Shale of Canada. Other similar remains possibly referrable to the genus are known from the Emu Bay Shale of Australia and Chengjiang Biota of China.
Nectocaris was a free-swimming, predatory or scavenging organism. This lifestyle is reflected in its binomial name: Nectocaris means "swimming shrimp" (from Ancient Greek words meaning "swimmer" and "shrimp"). Two morphs are known: a small morph, about an inch long, and a large morph, anatomically identical but around four times longer.
Nectocaridids have controversial affinities. Some authors have suggested that they represent the earliest known cephalopods. However, their morphology is strongly dissimilar to confirmed early cephalopods, and thus their affinities to cephalopods and even to molluscs more broadly are rejected by most authors. Their affinities to any animal group beyond Bilateria are uncertain, though they have been suggested to be members of Lophotrochozoa.
The closely related Ordovician taxon Nectocotis is a second genus, closely resembling Nectocaris, but suggested to have had an internal skeletal element.
Anatomy
Nectocaris had a flattened, kite-shaped body with a fleshy fin running along the length of each side. The small head had two stalked eyes, a single pair of tentacles, and a flexible funnel-shaped structure opening out to the underside of the body. The funnel often gets wider away from the head. The funnel has been suggested to represent an eversible (able to be turned inside out) pharynx. Internally, a long cavity runs along the body axis, which is suggested to represent the digestive tract. The body contains a pair of gills; the gills comprise blades emerging from a zig-zag axis. Muscle blocks surrounded the axial cavity, and are now preserved as dark blocks in the lateral body. The fins also show dark blocks, with fine striations superimposed over them. These striations often stand in high relief above the rock surface itself.
Diversity
Although Nectocaris is known from Canada, China and Australia, in rocks spanning some 20 million years, there does not seem to be much diversity; size excepted, all specimens are anatomically very similar. Historically, three genera have been erected for nectocaridid taxa from different localities, but these 'species' – Petalilium latus and Vetustovermis planus – likely belong to the same genus or even the same species as N. pteryx. Within N. pteryx, there seem to be two discrete morphs, one large (~10 cm in length), one small (~3 cm long). These perhaps represent separate male and female forms.
Ecology
The unusual shape of the nectocaridid funnel has led to its interpretation as an eversible proboscis. Martin R. Smith and Jean-Bernard Caron have suggested that it was used for jet propulsion, though this has been questioned by other authors. The eyes of Nectocaris would have had a similar visual acuity to modern Nautilus (if they lacked a lens) or squid (if they did not). They are thought to have been freely-swimming nektonic organisms, that were either scavengers or predators on soft-bodied animals, using their tentacles to manipulate food items.
Affinity
The affinity of Nectocaris is controversial. Martin R. Smith and Jean-Bernard Caron have suggested that nectocaridids represent early cephalopods. In a 2010 publication in Nature, they suggested that the ancestor of modern cephalopods and nectocaridids probably lacked a mineralised shell, while Smith in a later 2013 publication suggested that it may be more plausible that nectocaridids had instead lost a mineralised shell and developed a morphology convergent on modern coleoids. However, other authors contend that the morphology of nectocaridids is contrary to what is known about cephalopod and mollusc evolution, and they cannot be accommodated within these groups, and can only be confidently placed as members of Bilateria.
History of study
Nectocaris has a long and convoluted history of study. Charles Doolittle Walcott, the discoverer of the Burgess Shale, had photographed the one specimen he had collected in the 1910s, but never had time to investigate it further. As such, it was not until 1976 that Nectocaris was formally described, by Simon Conway Morris.
Because the genus was originally known from a single, incomplete specimen and with no counterpart, Conway Morris was unable to deduce its affinity. It had some features which were reminiscent of arthropods, but these could well have been convergently derived. Its fins were very unlike those of arthropods.
Working from photographs, the Italian palaeontologist Alberto Simonetta believed he could classify Nectocaris within the chordates. He focussed mainly on the tail and fin morphology, interpreting Conway Morris's 'gut' as a notochord – a distinctive chordate feature.
The classification of Nectocaris was revisited in 2010, when Martin Smith and Jean-Bernard Caron described 91 additional specimens, many of them better preserved than the type. These allowed them to reinterpret Nectocaris as a primitive cephalopod, with only 2 tentacles instead of the 8 or 10 limbs of modern cephalopods. The structure previous researchers had identified as an oval carapace or shield behind the eyes was suggested to be a soft funnel, similar to the ones used for propulsion by modern cephalopods. The interpretation would push back the origin of cephalopods by at least 30 million years, much closer to the first appearance of complex animals, in the Cambrian explosion, and implied that – against the widespread expectation – cephalopods evolved from non-mineralized ancestors.
Later independent analyses questioned the cephalopod interpretation, stating that it did not square with the established theory of cephalopod evolution, and that nectocaridids should be considered incertae sedis among Bilateria.
Vetustovermis
Vetustovermis (from Latin: "very old worm") is a soft-bodied middle Cambrian animal, known from a single reported fossil specimen from the South Australian Emu Bay shale. It is probably a junior synonym of Nectocaris pteryx.
The original description of Vetustovermis hedged its bets regarding classification, but tentatively highlighted some similarities with the annelid worms. It was later considered an arthropod, and in 2010 Smith and Caron, agreeing that Petalilium was at least a close relative of Vetustovermis (but that treating it as a synonym was premature, given the poor preservation of the Vetustovermis type), placed it with Nectocaris in the clade Nectocarididae.
Early press reports misspelled the genus name as Vetustodermis.
Petalilium
Petalilium (sometimes misspelled Petalium) is an enigmatic genus of Cambrian organism known from the Haikou area, from the Maotianshan mudstone member of the Chengjiang biota. The taxon is a junior synonym of Nectocaris pteryx.
Fossils of Petalilium show a dorsoventrally flattened body, usually 5 to 6 centimetres, but ranging from 1.5 to 10 cm. It has an ovate trunk region and a large muscular foot, and a head with stalked eyes and a pair of long tentacles. The trunk region possesses about 50 soft, flexible, transverse bars, lateral serialised structures of unknown function. The upper part of the body, interpreted as a mantle, is covered with a random array of spines on the back, while gills project underneath. A complete, tubular gut runs the length of the body.
It was originally described as a phyllocarid, and a ctenophore affinity has also been suggested, but neither interpretation is supported by compelling evidence.
Some of the characters observed in Chen et al.'s (2005) study suggested that Petalilium may be related to Nectocaris.
See also
Cambrian explosion
Paleobiota of the Burgess Shale
Chengjiang biota
List of Chengjiang Biota species by phylum
Footnotes
References
Further reading
– 3D animations are available and a more detailed consideration of Nectocaris
– Brian Switek discusses the taxonomy and history of Nectocaris in his blog
– BBC News coverage of the Smith & Caron's (2010) re-description
– a blog article supporting Mazurek & Zatoń's (2011) view
External links
Nectocarididae
Burgess Shale animals
Cambrian Series 2 first appearances
Miaolingian extinctions
Maotianshan shales fossils
Prehistoric cephalopod genera
Prehistoric invertebrates of Oceania
Controversial taxa
Fossil taxa described in 1976
Emu Bay Shale
Cambrian genus extinctions | Nectocaris | Biology | 1,876 |
47,626,347 | https://en.wikipedia.org/wiki/Setipiprant | Setipiprant (INN; developmental code names ACT-129968, KYTH-105) is an investigational drug developed for the treatment of asthma and scalp hair loss. It was originally developed by Actelion and acts as a selective, orally available antagonist of the prostaglandin D2 receptor 2 (DP2). The drug is being developed as a novel treatment for male pattern baldness by Allergan.
Medical uses
Scalp hair loss
Acting through DP2, PGD2 can inhibit hair growth, suggesting that this receptor is a potential target for the treatment of baldness. A phase 2A study to evaluate the safety, tolerability, and efficacy of oral setipiprant relative to a placebo in 18- to 49-year-old males with androgenetic alopecia was completed in May 2018 and did not find statistically significant improvement.
Allergic conditions
Setipiprant proved to be well tolerated and reasonably effective in reducing allergen-induced airway responses in asthmatic patient clinical trials. However, the drug, while supporting the concept that DP2 contributes to asthmatic disease, did not show sufficient advantage over existing drugs and was discontinued from further development for this application.
Adverse effects
Data from phase II and III clinical trials did not detect any severe adverse effects to setipiprant. The authors were unable to identify any pattern of adverse effects that differ from placebo, including subjective reporting of symptoms and objective laboratory monitoring.
Interactions
While setipiprant mildly induces the drug metabolizing enzyme CYP3A4 in vitro, the interaction appears to not be clinically relevant.
Pharmacology
Mechanism of action
Allergic conditions
Setipiprant binds to the DP2 receptor with a dissociation constant of 6 nM, representing potent antagonism of the receptor. The DP2 receptor, also called the CRTh2 receptor, is a G-protein-coupled receptor (GPCR) that is expressed on certain inflammatory cells, such as eosinophils, basophils, and certain lymphocytes. For its mechanism of action in the treatment of allergic conditions, setipiprant's DP2 antagonism prevents the action of prostaglandin D2 (PGD2) on these receptors. The DP2 receptor mediates the activation of type 2 helper T (Th2) cells, eosinophils, and basophils in the lungs, which are white blood cells implicated in producing the inflammatory response that characterizes allergic conditions. Activation of DP2 on Th2 cells by PGD2 induces the secretion of inflammatory cytokines (interleukin (IL) 4, IL-5, and IL-13), which cause an increase of eosinophils in the blood, remodeling of lung tissue, and hypersensitivity of lung tissue to allergens.
Setipiprant does not antagonize the thromboxane receptor (TP). The bronchoconstricting properties of PGD2 are not inhibited by setipiprant, since these are mediated by the TP receptor. As a point of contrast, ramatroban is a selective TP antagonist and DP2 receptor antagonist.
Setipiprant does not appreciably inhibit the activity of the enzyme cyclooxygenase 1 (COX-1), which is responsible for the synthesis of prostaglandins (including PGD2).
Scalp hair loss
Prostaglandin D2 synthase (PTGDS) is an enzyme that produces PGD2. In men with androgenic alopecia, the enzyme PTGDS is elevated in the bald scalp tissue, as well as its product PGD2. PGD2 inhibits the growth of hair follicles through its activity on the DP2 receptor, but not the DP1 receptor. Theoretically, setipiprant's DP2 receptor antagonism may counteract the activity of PGD2 in hair follicles, thereby stimulating hair growth.
Pharmacokinetics
The oral bioavailability of setipiprant is 44% in rats and 55% in dogs, which suggests that it should be orally bioavailable in humans. The half-life of setipiprant in humans is about 11 hours. The maximum concentration in plasma (Cmax) is 6.04 and 6.44 mcg/mL for setipiprant tablets and capsules respectively, with an area under the curve of 31.88 and 31.50 mcg×hours/mL for setipiprant tablets and capsules respectively. Cmax was reached between 1.8–4 hours after oral administration. The tablet and capsule formulations are bioequivalent.
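For illustration only (assuming simple first-order elimination, which is an assumption and not stated in the trial data), an 11-hour half-life implies that roughly a fifth of a dose remains after 24 hours:

```latex
\[ \text{fraction remaining after } t = \left(\tfrac{1}{2}\right)^{t/t_{1/2}}, \qquad
   \left(\tfrac{1}{2}\right)^{24/11} \approx 0.22 \]
```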
Chemistry
Setipiprant appears as a light yellow to yellow colored solid. Based on general guidelines, the powder form is considered stable for 2 years at 4 °C and for 3 years at −20 °C. When dissolved in a solvent, setipiprant is stable for 1 month at −20 °C and 6 months at −80 °C. It is considered soluble in DMSO at concentrations ≥ 36 mg/mL.
History
Setipiprant was initially researched by Actelion as a treatment for allergies and inflammatory disorders, particularly asthma, but despite being well tolerated in clinical trials and showing reasonable efficacy against allergen-induced airway responses in asthmatic patients, it failed to show sufficient advantages over existing drugs and was discontinued from further development in this application.
However, following the discovery in 2012 that the prostaglandin D2 receptor (DP/PGD2) is expressed at high levels in the scalp of men affected by male pattern baldness, the rights to setipiprant were acquired by Kythera to develop the drug as a novel treatment for baldness. The favorable pharmacokinetics and relative lack of side effects seen in earlier clinical trials mean that fresh clinical trials for this new application can be conducted fairly quickly. Setipiprant is currently under development by Allergan for the prevention of androgenic alopecia following its acquisition of Kythera.
See also
Prostaglandin DP2 receptor
Fevipiprant
Ramatroban
References
External links
Setipiprant - AdisInsight
Prostaglandins
Receptor antagonists
1-Naphthyl compounds | Setipiprant | Chemistry | 1,333 |
19,009,041 | https://en.wikipedia.org/wiki/Talking%20animal | A talking animal or speaking animal is any non-human animal that can produce sounds or gestures resembling those of a human language. Several species or groups of animals have developed forms of communication which superficially resemble verbal language, however, these usually are not considered a language because they lack one or more of the defining characteristics, e.g. grammar, syntax, recursion, and displacement. Researchers have been successful in teaching some animals to make gestures similar to sign language, although whether this should be considered a language has been disputed.
Possibility of animal language
The term refers to animals who can imitate (though not necessarily understand) human speech. Parrots, for example, repeat phrases of human speech through exposure. There were parrots that learnt to use words in proper context and had meaningful dialogues with humans. Alex, a grey parrot, understood questions about color, shape, size, number etc. of objects and would provide a one-word answer to them. He is also documented to have asked an existential question. Another grey parrot, N'kisi, could use 950 words in proper context, was able to form sentences and even understood the concept of grammatical tense.
Researchers have attempted to teach great apes (chimpanzees, gorillas and orangutans) spoken language, with poor results, as they can be taught to say only a few basic words or phrases, and sign language, with significantly better results, as the apes can be very creative with signs, much like deaf people. In this regard, there are now numerous studies and an extensive bibliography.
Reported cases by species
Birds
Alex, a grey parrot researched and trained by Dr. Irene Pepperberg, demonstrated knowledge of about 100 words, understood the meaning of several types of questions, and was documented to ask one question about himself.
N'kisi, a grey parrot, knows over 900 words, can form sentences and even understands grammatical tense.
Dogs
An owner hears a dog making a sound that resembles a phrase and says the phrase back to the dog, which then repeats the sound and is rewarded with a treat. Eventually the dog learns a modified version of the original sound. Dogs have limited vocal imitation skills, so these sounds usually need to be shaped by selective attention and social reward.
A dog named Fluffy on America's Funniest Home Videos made noises that to some viewers resembled "I want my momma" after being asked "Do you want your momma?". Other videos showed other dogs making noises which to some viewers resembled "Run around", "I want it", "I love momma" and "Hello".
Odie, a pug who produced noises resembling "I love you" on demand, made appearances on several television shows.
Paranormal researcher Charles Fort wrote in his book Wild Talents (1932) of several alleged cases of dogs that could speak English. Fort took the stories from contemporary newspaper accounts.
In 1715 Gottfried Wilhelm Leibniz published an account of his encounter with a talking dog that could pronounce about 30 words.
Don, a German pointer born around the beginning of the 20th century, was a dog that was reputed to be able to pronounce a couple of words in German and became a vaudeville sensation as a result. Although most scientists at the time dismissed Don's capabilities, the author Jan Bondeson puts forward an argument that Don was genuinely capable of limited human speech and criticises the tests that were performed on Don at the time as having serious methodological flaws.
In 1959 a German sheepdog by the name of Corinna living in Prague spontaneously developed a capability for limited human speech. According to the zoologist Hermann Hartwigg, published under the pseudonym 'Hermann Dembeck', Corinna 'holds the record in modern times for its talking prowess'.
Cats
The first place-winning video "Cat's Got a Tongue" from Season 10, Episode 20 of America's Funniest Home Videos features a cat speaking purported human words and phrases such as "Oh my dog", "Oh Long John", "Oh Long Johnson", "Oh Don piano", "Why I eyes ya", and "All the live long day." A longer version of the clip (which revealed the animal was reacting to the presence of another cat) was aired in the UK. Clips from this video are prevalent on YouTube. The cat became an Internet phenomenon in 2006 and appeared as a character in "Faith Hilling", the 226th episode of South Park, which aired on March 28, 2012.
"Cat Says, 'No'," another video from the show that won first place in Season 7, Episode 10, features a cat repeatedly saying, "no".
Miles v. City Council of Augusta, Georgia, in which the court found that the exhibition of a talking cat was considered an occupation for the purposes of municipal licensing law.
Great apes
Great apes mimicking human speech is rare, although some of them have attempted to do so by watching and mimicking the gestures and voices of their human trainers. Apparently, human-like voice control in non-human great apes could derive from an evolutionary ancestor with similar voice control capacities. Reported cases include chimpanzees and orangutans.
Johnny (1944–2007), was a chimpanzee that could also clearly say the word "mama".
In 1962, at the Bioparco di Roma, a chimpanzee named Renata could clearly say the word "mama" when praised by her trainer.
Kokomo Jr., was a chimpanzee and mascot of the Today show, who was known to say the word "mama".
Viki was a chimpanzee that could voice four words:
mama
papa
up
cup
Tilda (born 1965, Borneo) is an orangutan who responds to her keepers in a human-like manner, e.g. pointing to the food and repeating the words "Cologne Zoo" by controlling her lips and tongue, as well as manipulating her vocal cords. To do this, she clicks her tongue to produce various tones of her voice, and grumbles in a way that is comparable to humans making vowel sounds. She only does this during feeding time when she wants to attract her keepers' attention, a behaviour attributed mainly to her earlier training by humans while she was in the entertainment business.
Rocky (born September 25, 2004), a resident of the Indianapolis Zoo, is an orangutan that can say the word "hi". He was the very first ape to produce sounds similar to words in a "conversational context", and these sounds have been recorded in use. In the recordings, Rocky is participating in a training session in which he is asked to produce vocalizations outside of the typical orangutan "vocabulary". The Indianapolis Zoo made a public statement about Rocky's vocalizations and their implications for current and future studies.
Elephants
Batyr (1969–1993), an elephant from Kazakhstan, was reported to have a vocabulary of more than 20 phrases. Recordings of Batyr saying "Batyr is good", "Batyr is hungry", and words such as "drink" and "give" were played on Kazakh state radio in 1980.
Kosik (born 1990) is an elephant able to imitate Korean words.
Cetaceans
Some species of toothed whales, including dolphins, porpoises, beluga whales and orcas, can imitate the patterns of human speech.
NOC, a captive beluga whale in the United States Navy's Cold Ops program, could mimic some words well enough to confuse Navy divers on at least one occasion.
John C. Lilly's assistant Margaret Howe trained a dolphin named Peter to produce several words, including a credible "Mar-ga-ret".
Wikie is an orca that can say "hello", "goodbye", and "Amy" (her trainer).
Others
Hoover was a harbor seal who repeated common phrases heard around his exhibit at the New England Aquarium, including his name. He appeared in publications like Reader's Digest and The New Yorker, and television programs like Good Morning America.
Gef the talking mongoose was an alleged talking animal who inhabited a small house on the Isle of Man, off the coast of Great Britain. Fringe authors believe Gef was a poltergeist, a strange animal or cryptid. Contemporary academics believe it was most likely a hoax.
It is not unusual for goats to make noises that sound like syllables from human words. Some videos of this behavior have ended up becoming popular on YouTube. An example from Tennessee of a baby goat seeming to say "what what what?" got over seven million views.
In fiction
There are many examples of talking animals in fiction throughout history, be it in written form or in film and animation. In the Pokémon franchise, Meowth of Team Rocket is considered a unique Pokémon in that he can understand and use human language, even serving as a translator for his fellow Pokémon, which usually can only call out their own names.
See also
Animal cognition
Animal communication
Animal language
Biosemiotics
Derek Bickerton – Animal Communication Systems researcher
Human–animal communication
Human speechome project
Kinship with All Life – book
Vocal learning
References
External links
Listen to Nature "The Language of Birds" includes article and audio samples of "talking" birds
New England Aquarium's Hoover page
Ethology
Folklore
Fairy tale stock characters | Talking animal | Biology | 1,920 |
49,778,652 | https://en.wikipedia.org/wiki/Scientific%20Workgroup%20for%20Rocketry%20and%20Spaceflight | The Scientific Workgroup for Rocketry and Spaceflight (WARR) () is a scientific workgroup situated at Technical University of Munich, composed mainly of its students. It was founded by students in 1962 with the goal to compensate for the lack of a chair for space technology at the university at the time. Since the establishment of such a chair in 1966, the group has conducted practical projects, starting with the first successful development and of a hybrid rocket in Germany. One rocket of this type was launched in 1972, another is on permanent display at Deutsches Museum. WARR has attained some public attention by for its projects in space elevator competitions, small satellites interstellar spaceflight concepts, and for winning all SpaceX Hyperloop pod competitions.
Currently, WARR works in the fields of hybrid propulsion, satellite technology, robotics, and transportation technologies.
History
Project groups of WARR
Rocketry
Existing since the foundation of WARR in 1962, the department for rocketry is the oldest project group of WARR. With the launch of the first German hybrid rocket in 1974, WARR achieved its first major success, which was promptly followed by the construction of multiple test engines. In 2009 the development of its next rocket began, called WARR-Ex2, powered by the in-house developed hybrid engine HYPER-1 with solid HTPB fuel and nitrous oxide as oxidizer. The rocket was successfully launched on 20 May 2015 from the CLBI launch site on the Atlantic coast of Brazil and reached a maximum altitude of approximately 5 km.
Even before the launch of WARR-Ex2, WARR had begun working on its successor, WARR-Ex3, as part of project STERN (STudentische Experimental-RaketeN, German abbreviation for "student experimental rockets"), organized and financed by the German Aerospace Center. As the objectives of STERN had already been reached with WARR-Ex2, it was decided to build a larger rocket, WARR-Ex3. It uses liquid oxygen instead of nitrous oxide, while maintaining the use of HTPB. It launched in July 2023 from FAR in California and reached an apogee of 12.4 kilometres.
The newest project, Project Nixus, features a bi-liquid, regeneratively cooled, 3D-printed engine that provides 3.5 kN of thrust. It uses ethanol and liquid oxygen, building on the experience with cryogenics that the EX-3 provided. It will see its first flight on the EX-4 rocket at the European Rocketry Challenge. The rocket features many new technologies such as CFRP load-bearing skins, modular connectors, custom avionics and an SLM-printed IN718 Valkyrie engine. The engine has been hot-fired 22 times as of July 2023.
Satellite Technology
Since the cubesat First-MOVE was primarily developed by doctoral candidates from the institute of astronautics at the TUM, the involvement of students was intensified during the development of its successor MOVE-II. To make use of WARR's existing infrastructure, a new project group was founded, where the members could work on all subsystems. In 2012, development of a mission profile was started. After approval by the German Aerospace Center in 2015, launch of the satellite is expected in 2017.
MOVE-II is a 10 × 10 × 10 cm satellite (1U CubeSat). It consists of a bus, which is responsible for power supply, communication and attitude control. Its mission is to educate students and to test some prototype solar cells.
MOVE-IIb is an almost exact copy of MOVE-II launched in 2019.
Space-Elevator
WARR Space-Elevator has been developing climber robots since its founding in 2005, and also organizes corresponding competitions. The first climber was developed for the JSETEC 2009 competition and reached the targeted 150 m in the shortest time. In 2011 the European Space Elevator Challenge (EUSPEC) was established, which also focused on energy efficiency. The following year the competition was repeated with an increased cable length of 50 m.
Interstellar Flight
The WARR Interstellar Flight Team (ISF) is working on concepts for interstellar travel.
The goals of WARR ISF are:
Research on crewed and uncrewed interstellar travel
Utilization of methods from engineering sciences, especially interdisciplinary system engineering
Publication of results on international conferences and journals
Presentation of research findings to the public
In May 2013 the "Ghost Team" of WARR ISF participated in Project Icarus. The name "Ghost" derives from the sudden appearance of the team in the competition and resulting in confusion of the other participants. WARR presented its concept at the British Interplanetary Society in October 2013 and was awarded for the best design among 4 international teams.
In October 2014, development began on a laser-propelled interstellar probe for the Project Dragonfly Design Competition, held by the Initiative for Interstellar Studies (I4IS). The WARR team prevailed in this competition against international competitors, too.
Hyperloop
In August 2015 the project group Hyperloop was founded to participate in the Hyperloop Pod Competition sponsored by SpaceX. In January 2016, WARR's team was one of 30 international teams selected (from a pool of over 700 initially participating) to build a functional prototype for the final phase of the competition in the summer of 2016.
The prototype developed by WARR was intended to feature an electrodynamic suspension system for levitation and an axial compressor to minimize aerodynamic drag from the residual air inside the tube.
The WARR pod was the fastest in the January 2017 competition, which was run on the SpaceX Hyperloop test track (or "Hypertube"), a mile-long, partial-vacuum steel tube purpose-built in Hawthorne, California for the competition.
In December 2018, WARR Hyperloop was rebranded to TUM Hyperloop. Since this time it is managed by a separate organisation, called NEXT.
See also
Delft Aerospace Rocket Engineering
Space Concordia
External links
TUM Hyperloop
Youtube-Channel of TUM Hyperloop
WARR Homepage
Youtube-Channel of WARR
References
Rocketry
Space agencies
Space programme of Germany
Technical University of Munich | Scientific Workgroup for Rocketry and Spaceflight | Engineering | 1,261 |
46,955,123 | https://en.wikipedia.org/wiki/Angles%20between%20flats | The concept of angles between lines (in the plane or in space), between two planes (dihedral angle) or between a line and a plane can be generalized to arbitrary dimensions. This generalization was first discussed by Camille Jordan. For any pair of flats in a Euclidean space of arbitrary dimension one can define a set of mutual angles which are invariant under isometric transformation of the Euclidean space. If the flats do not intersect, their shortest distance is one more invariant. These angles are called canonical or principal. The concept of angles can be generalized to pairs of flats in a finite-dimensional inner product space over the complex numbers.
Jordan's definition
Let and be flats of dimensions and in the -dimensional Euclidean space . By definition, a translation of or does not alter their mutual angles. If and do not intersect, they will do so upon any translation of which maps some point in to some point in . It can therefore be assumed without loss of generality that and intersect.
Jordan shows that Cartesian coordinates in can then be defined such that and are described, respectively, by the sets of equations
and
with . Jordan calls these coordinates canonical. By definition, the angles are the angles between and .
The non-negative integers are constrained by
For these equations to determine the five non-negative integers completely, besides the dimensions and and the number of angles , the non-negative integer must be given. This is the number of coordinates , whose corresponding axes are those lying entirely within both and . The integer is thus the dimension of . The set of angles may be supplemented with angles to indicate that has that dimension.
Jordan's proof applies essentially unaltered when is replaced with the -dimensional inner product space over the complex numbers. (For angles between subspaces, the generalization to is discussed by Galántai and Hegedũs in terms of the below variational characterization.)
Angles between subspaces
Now let and be subspaces of the -dimensional inner product space over the real or complex numbers. Geometrically, and are flats, so Jordan's definition of mutual angles applies. When for any canonical coordinate the symbol denotes the unit vector of the axis, the vectors form an orthonormal basis for and the vectors form an orthonormal basis for , where
Being related to canonical coordinates, these basic vectors may be called canonical.
When denote the canonical basic vectors for and the canonical basic vectors for then the inner product vanishes for any pair of and except the following ones.
With the above ordering of the basic vectors, the matrix of the inner products is thus diagonal. In other words, if and are arbitrary orthonormal bases in and then the real, orthogonal or unitary transformations from the basis to the basis and from the basis to the basis realize a singular value decomposition of the matrix of inner products . The diagonal matrix elements are the singular values of the latter matrix. By the uniqueness of the singular value decomposition, the vectors are then unique up to a real, orthogonal or unitary transformation among them, and the vectors and (and hence ) are unique up to equal real, orthogonal or unitary transformations applied simultaneously to the sets of the vectors associated with a common value of and to the corresponding sets of vectors (and hence to the corresponding sets of ).
A singular value can be interpreted as corresponding to the angles introduced above and associated with and a singular value can be interpreted as corresponding to right angles between the orthogonal spaces and , where superscript denotes the orthogonal complement.
Variational characterization
The variational characterization of singular values and vectors implies as a special case a variational characterization of the angles between subspaces and their associated canonical vectors. This characterization includes the angles and introduced above and orders the angles by increasing value. It can be given the form of the below alternative definition. In this context, it is customary to talk of principal angles and vectors.
Definition
Let $V$ be an inner product space. Given two subspaces $\mathcal{U}, \mathcal{W}$ with $\dim(\mathcal{U}) = k \leq \dim(\mathcal{W}) = l$, there exists a sequence of $k$ angles $0 \le \theta_1 \le \theta_2 \le \cdots \le \theta_k \le \pi/2$ called the principal angles, the first one defined as

$$\theta_1 := \min\left\{ \arccos\left( \frac{\left|\langle u, w\rangle\right|}{\|u\|\,\|w\|}\right) \;\middle|\; u \in \mathcal{U},\ w \in \mathcal{W} \right\} = \angle(u_1, w_1),$$

where $\langle \cdot, \cdot \rangle$ is the inner product and $\|\cdot\|$ the induced norm. The vectors $u_1$ and $w_1$ are the corresponding principal vectors.

The other principal angles and vectors are then defined recursively via

$$\theta_i := \min\left\{ \arccos\left( \frac{\left|\langle u, w\rangle\right|}{\|u\|\,\|w\|}\right) \;\middle|\; u \in \mathcal{U},\ w \in \mathcal{W},\ u \perp u_j,\ w \perp w_j \ \text{for}\ j = 1, \dots, i-1 \right\}.$$
This means that the principal angles form a set of minimized angles between the two subspaces, and the principal vectors in each subspace are orthogonal to each other.
Examples
Geometric example
Geometrically, subspaces are flats (points, lines, planes etc.) that include the origin, thus any two subspaces intersect at least in the origin. Two two-dimensional subspaces $\mathcal{U}$ and $\mathcal{W}$ generate a set of two angles. In a three-dimensional Euclidean space, the subspaces $\mathcal{U}$ and $\mathcal{W}$ are either identical, or their intersection forms a line. In the former case, both angles are zero, $\theta_1 = \theta_2 = 0$. In the latter case, only $\theta_1 = 0$, where the principal vectors $u_1$ and $w_1$ lie on the line of the intersection and have the same direction. The angle $\theta_2 > 0$ will be the angle between the subspaces $\mathcal{U}$ and $\mathcal{W}$ in the orthogonal complement to that line. Imagining the angle between two planes in 3D, one intuitively thinks of the largest angle, $\theta_2$.
Algebraic example
In 4-dimensional real coordinate space R4, let the two-dimensional subspace be
spanned by and , and let the two-dimensional subspace be
spanned by and with some real and such that . Then and are, in fact, the pair of principal vectors corresponding to the angle with , and and are the principal vectors corresponding to the angle with
To construct a pair of subspaces with any given set of angles in a (or larger) dimensional Euclidean space, take a subspace with an orthonormal basis and complete it to an orthonormal basis of the Euclidean space, where . Then, an orthonormal basis of the other subspace is, e.g.,
Basic properties
If the largest angle is zero, one subspace is a subset of the other.
If the largest angle is $\pi/2$, there is at least one vector in one subspace perpendicular to the other subspace.
If the smallest angle is zero, the subspaces intersect at least in a line.
If the smallest angle is $\pi/2$, the subspaces are orthogonal.
The number of angles equal to zero is the dimension of the space where the two subspaces intersect.
Advanced properties
Non-trivial (different from $0$ and $\pi/2$) angles between two subspaces are the same as the non-trivial angles between their orthogonal complements.
Non-trivial angles between the subspaces $\mathcal{U}$ and $\mathcal{W}$ and the corresponding non-trivial angles between the subspaces $\mathcal{U}$ and $\mathcal{W}^\perp$ sum up to $\pi/2$.
The angles between subspaces satisfy the triangle inequality in terms of majorization and thus can be used to define a distance on the set of all subspaces, turning the set into a metric space.
The sines of the angles between subspaces satisfy the triangle inequality in terms of majorization and thus can be used to define a distance on the set of all subspaces, turning the set into a metric space. For example, the sine of the largest angle is known as the gap between subspaces.
Extensions
The notion of the angles and some of the variational properties can be naturally extended to arbitrary inner products and subspaces with infinite dimensions.
Computation
Historically, the principal angles and vectors first appear in the context of canonical correlation and were originally computed using the SVD of corresponding covariance matrices. However, as later noticed, the canonical correlation is related to the cosine of the principal angles, which is ill-conditioned for small angles, leading to very inaccurate computation of highly correlated principal vectors in finite-precision computer arithmetic. The sine-based algorithm fixes this issue, but creates a new problem of very inaccurate computation of highly uncorrelated principal vectors, since the sine function is ill-conditioned for angles close to π/2. To produce accurate principal vectors in computer arithmetic for the full range of the principal angles, the combined technique first computes all principal angles and vectors using the classical cosine-based approach, and then recomputes the principal angles smaller than π/4 and the corresponding principal vectors using the sine-based approach. The combined technique is implemented in the open-source libraries Octave and SciPy and has been contributed to MATLAB.
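The following sketch illustrates the cosine/sine idea described above for the angles only (the function name and the simplified combination rule are mine, not taken from any of the libraries mentioned): the cosines are the singular values of $Q_\mathcal{U}^T Q_\mathcal{W}$, and for small angles the sines, obtained from $Q_\mathcal{W} - Q_\mathcal{U}(Q_\mathcal{U}^T Q_\mathcal{W})$, are better conditioned.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians, ascending) between span(A) and span(B).

    A sketch of the combined cosine/sine idea: cosine-based SVD for all
    angles, sine-based SVD to refine the small ones, which the cosine
    formula resolves poorly in floating point.
    Assumes dim span(B) <= dim span(A); details are simplified.
    """
    Qa, _ = np.linalg.qr(A)                  # orthonormal basis of span(A)
    Qb, _ = np.linalg.qr(B)                  # orthonormal basis of span(B)

    # Cosine-based: singular values of Qa^T Qb are cos(theta_i).
    cos_t = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    theta_cos = np.sort(np.arccos(np.clip(cos_t, -1.0, 1.0)))

    # Sine-based: singular values of Qb - Qa (Qa^T Qb) are sin(theta_i).
    sin_t = np.linalg.svd(Qb - Qa @ (Qa.T @ Qb), compute_uv=False)
    theta_sin = np.sort(np.arcsin(np.clip(sin_t, 0.0, 1.0)))

    # Keep the sine-based values where the angle is small (< pi/4).
    return np.where(theta_cos < np.pi / 4, theta_sin, theta_cos)
```

SciPy's scipy.linalg.subspace_angles provides a ready-made implementation of this combined approach, as noted above.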
See also
Singular value decomposition
Canonical correlation
References
Analytic geometry
Linear algebra
Angle | Angles between flats | Physics,Mathematics | 1,676 |
2,172,095 | https://en.wikipedia.org/wiki/Lithium%20perchlorate | Lithium perchlorate is the inorganic compound with the formula LiClO4. This white or colourless crystalline salt is noteworthy for its high solubility in many solvents. It exists both in anhydrous form and as a trihydrate.
Applications
Inorganic chemistry
Lithium perchlorate is used as a source of oxygen in some chemical oxygen generators. It decomposes at about 400 °C, yielding lithium chloride and oxygen:
LiClO4 → LiCl + 2 O2
Over 60% of the mass of the lithium perchlorate is released as oxygen. It has both the highest oxygen to weight and oxygen to volume ratio of all practical perchlorate salts, and higher oxygen to volume ratio than liquid oxygen.
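That figure follows directly from the molar masses (standard atomic weights, rounded to two decimals):
$$M(\mathrm{LiClO_4}) \approx 6.94 + 35.45 + 4 \times 16.00 = 106.39\ \mathrm{g\ mol^{-1}}, \qquad \frac{2\,M(\mathrm{O_2})}{M(\mathrm{LiClO_4})} = \frac{64.00}{106.39} \approx 0.60.$$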
Lithium perchlorate is used as an oxidizer in some experimental solid rocket propellants, and to produce red colored flame in pyrotechnic compositions.
Organic chemistry
LiClO4 is highly soluble in organic solvents, even diethyl ether. Such solutions are employed in Diels–Alder reactions, where it is proposed that the Lewis acidic Li+ binds to Lewis basic sites on the dienophile, thereby accelerating the reaction.
Lithium perchlorate is also used as a co-catalyst in the coupling of α,β-unsaturated carbonyls with aldehydes, also known as the Baylis–Hillman reaction.
Solid lithium perchlorate is found to be a mild and efficient Lewis acid for promoting cyanosilylation of carbonyl compounds under neutral conditions.
Batteries
Lithium perchlorate is also used as an electrolyte salt in lithium-ion batteries. Lithium perchlorate is chosen over alternative salts such as lithium hexafluorophosphate or lithium tetrafluoroborate when its superior electrical impedance, conductivity, hygroscopicity, and anodic stability properties are of importance to the specific application. However, these beneficial properties are often overshadowed by the electrolyte's strong oxidizing properties, making the electrolyte reactive toward its solvent at high temperatures and/or high current loads. Due to these hazards the battery is often considered unfit for industrial applications.
Biochemistry
Concentrated solutions of lithium perchlorate (4.5 mol/L) are used as a chaotropic agent to denature proteins.
Production
Lithium perchlorate can be manufactured by reaction of sodium perchlorate with lithium chloride. It can also be prepared by electrolysis of lithium chlorate at 200 mA/cm2 at temperatures above 20 °C.
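As a sketch of the chemistry involved (overall equations only; conditions and work-up are not specified above), the salt metathesis route and the anodic oxidation underlying the electrolytic route can be written as
$$\mathrm{NaClO_4 + LiCl \longrightarrow LiClO_4 + NaCl}, \qquad \mathrm{ClO_3^- + H_2O \longrightarrow ClO_4^- + 2\,H^+ + 2\,e^-}.$$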
Safety
Perchlorates often give explosive mixtures with organic compounds, finely divided metals, sulfur, and other reducing agents.
References
Further reading
External links
WebBook page for LiClO4
Perchlorates
Lithium salts
Oxidizing agents
Electrolytes | Lithium perchlorate | Chemistry | 572 |
41,049 | https://en.wikipedia.org/wiki/Direct-sequence%20spread%20spectrum | In telecommunications, direct-sequence spread spectrum (DSSS) is a spread-spectrum modulation technique primarily used to reduce overall signal interference. The direct-sequence modulation makes the transmitted signal wider in bandwidth than the information bandwidth.
After the despreading or removal of the direct-sequence modulation in the receiver, the information bandwidth is restored, while the unintentional and intentional interference is substantially reduced.
Swiss inventor Gustav Guanella proposed a "means for and method of secret signals". With DSSS, the message symbols are modulated by a sequence of complex values known as a spreading sequence. Each element of the spreading sequence, a so-called chip, has a shorter duration than the original message symbols. The modulation of the message symbols scrambles and spreads the signal in the spectrum, and thereby results in a bandwidth approximately that of the spreading sequence. The smaller the chip duration, the larger the bandwidth of the resulting DSSS signal; more bandwidth multiplexed to the message signal results in better resistance against narrowband interference.
Some practical and effective uses of DSSS include the code-division multiple access (CDMA) method, the IEEE 802.11b specification used in Wi-Fi networks, and the Global Positioning System.
Transmission method
Direct-sequence spread-spectrum transmissions multiply the symbol sequence being transmitted with a spreading sequence that has a higher rate than the original message rate. Usually, sequences are chosen such that the resulting spectrum is spectrally white. Knowledge of the same sequence is used to reconstruct the original data at the receiving end. This is commonly implemented by element-wise multiplication of the received signal with the spreading sequence, followed by summation over a message symbol period. This process, despreading, is mathematically a correlation of the received signal with the spreading sequence. In an AWGN channel, the despread signal's signal-to-noise ratio is increased by the spreading factor, which is the ratio of the spreading-sequence rate to the data rate.
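As a rough numerical sketch of this process (the spreading factor, noise level, and random ±1 chip sequence are illustrative choices, not from any standard), spreading, despreading, and the resulting processing gain can be demonstrated as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

sf = 64                                        # spreading factor (chips per symbol)
symbols = rng.choice([-1.0, 1.0], size=200)    # BPSK message symbols
code = rng.choice([-1.0, 1.0], size=sf)        # spreading sequence (one period per symbol)

# Spreading: each symbol is multiplied chip-by-chip with the spreading sequence.
tx = np.repeat(symbols, sf) * np.tile(code, symbols.size)

# Channel: additive white Gaussian noise well above the per-chip signal level.
rx = tx + rng.normal(scale=2.5, size=tx.size)

# Despreading: multiply by the same code and sum over each symbol period
# (a correlation); coherent summation raises the SNR by the spreading factor.
despread = (rx.reshape(-1, sf) * code).sum(axis=1) / sf
decisions = np.sign(despread)

print("symbol errors:", np.count_nonzero(decisions != symbols))
```

Despite a per-chip SNR well below 0 dB, the summation over each symbol period typically recovers the symbols without error, illustrating the processing gain equal to the spreading factor.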
While a transmitted DSSS signal occupies a wider bandwidth than the direct modulation of the original signal would require, its spectrum can be restricted by conventional pulse-shape filtering.
If an undesired transmitter transmits on the same channel but with a different spreading sequence, the despreading process reduces the power of that signal. This effect is the basis for the code-division multiple access (CDMA) method of multi-user medium access, which allows multiple transmitters to share the same channel within the limits of the cross-correlation properties of their spreading sequences.
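Extending the same toy setup (again with arbitrary random codes rather than real CDMA codes such as Gold sequences), two users can share the channel and one of them is recovered by despreading with its own code, the other appearing only as low-level residual interference governed by the codes' cross-correlation:

```python
import numpy as np

rng = np.random.default_rng(1)
sf = 64
code_a = rng.choice([-1.0, 1.0], size=sf)   # user A's spreading code
code_b = rng.choice([-1.0, 1.0], size=sf)   # user B's spreading code (different)

sym_a = rng.choice([-1.0, 1.0], size=100)
sym_b = rng.choice([-1.0, 1.0], size=100)

# Both users transmit on the same channel at the same time.
rx = (np.repeat(sym_a, sf) * np.tile(code_a, sym_a.size)
      + np.repeat(sym_b, sf) * np.tile(code_b, sym_b.size))

# Despreading with user A's code recovers A's symbols.
est_a = (rx.reshape(-1, sf) * code_a).sum(axis=1) / sf
print("A recovered:", np.all(np.sign(est_a) == sym_a))
```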
Benefits
Resistance to unintended or intended jamming
Sharing of a single channel among multiple users
Reduced signal/background-noise level hampers interception
Determination of relative timing between transmitter and receiver
Uses
The United States GPS, European Galileo and Russian GLONASS satellite navigation systems; earlier GLONASS used DSSS with a single spreading sequence in conjunction with FDMA, while later GLONASS used DSSS to achieve CDMA with multiple spreading sequences.
DS-CDMA (Direct-Sequence Code Division Multiple Access) is a multiple access scheme based on DSSS, by spreading the signals from/to different users with different codes. It is the most widely used type of CDMA.
Cordless phones operating in the 900 MHz, 2.4 GHz and 5.8 GHz bands
IEEE 802.11b 2.4 GHz Wi-Fi, and its predecessor 802.11-1999. (Their successor 802.11g uses both OFDM and DSSS)
Automatic meter reading
IEEE 802.15.4 (used, e.g., as PHY and MAC layer for Zigbee, or, as the physical layer for WirelessHART)
Radio-controlled model Automotive, Aeronautical and Marine vehicles
Spread spectrum radar for covertness and resistance to jamming and spoofing
See also
Complementary code keying
Frequency-hopping spread spectrum
Linear-feedback shift register
Orthogonal frequency-division multiplexing
References
The Origins of Spread-Spectrum Communications
NTIA Manual of Regulations and Procedures for Federal Radio Frequency Management
External links
Civil Spread Spectrum History
Computer network technology
Quantized radio modulation modes
Wireless networking
IEEE 802.11
ja:スペクトラム拡散#直接拡散 | Direct-sequence spread spectrum | Technology,Engineering | 844 |
11,798,095 | https://en.wikipedia.org/wiki/Phyllosticta%20sojaecola | Phyllosticta sojaecola is a plant pathogen infecting soybean.
Hosts and symptoms
The fungus causes Phyllosticta leaf spot on soybeans, forming circular lesions with reddish-brown borders and a light brown center. The center of the lesion will drop out over time. Visible pycnidia can be seen in older lesions. A common consequence of infection is reduced yield from the damaged leaves.
Disease cycle
Phyllosticta sojicola and all other members of the Phyllosticta genus are ascomycete fungi, with pathogenic species forming spots on leaves and some fruit. Phyllosticta sojicola emerges from infected plant debris in spring and spreads by wind and rain-splash onto healthy plants. While the infection method for Phyllosticta sojicola is unknown, other Phyllosticta species are known to infect leaves via an appressorium in a process that requires adequate moisture. Within mature lesions, the fungus forms pycnidia to overwinter and repeat the cycle. Phyllosticta sojicola can also survive on seeds and infect new fields through infected seed.
Environment and management
Phyllosticta sojicola prefers cool, moist conditions, as pycnidia require moist conditions to germinate. The pathogen can be managed by rotating to non-hosts and using tillage to remove infected residue. As infected seed can transmit the pathogen, seed testing is recommended to prevent introduction of disease.
See also
List of soybean diseases
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Peanut diseases
Soybean diseases
sojaecola
Fungi described in 1900
Fungus species | Phyllosticta sojaecola | Biology | 354 |
153,914 | https://en.wikipedia.org/wiki/Cassette%20deck | A cassette deck is a type of tape machine for playing and recording audio cassettes that does not have a built-in power amplifier or speakers, and serves primarily as a transport. It can be a part of an automotive entertainment system, a part of a portable audio system or a part of a home component system. In the latter case, it is also called a component cassette deck or just a component deck.
History
Roots
The first consumer tape recorder to employ a tape reel permanently housed in a small removable cartridge was the RCA tape cartridge, which appeared in 1958 as a predecessor to the cassette format. At that time, reel-to-reel recorders and players were commonly used by enthusiasts but required large individual reels and tapes which had to be threaded by hand, making them less accessible to the casual consumer. Both RCA and Bell Sound attempted to commercialize the cartridge format, but a few factors stalled adoption, including lower-than-advertised availability of selections in the prerecorded media catalog, delays in production setup, and a stand-alone design that was not considered by audiophiles to be truly hi-fi.
The compact cassette (a Philips trademark) was introduced by the Philips Corporation at the Internationale Funkausstellung Berlin in 1963 and marketed as a device purely intended for portable speech-only dictation machines. The tape width was ⅛ inch (actually 0.15 inch, 3.81 mm) and tape speed was 1⅞ inches (4.76 cm) per second, giving a decidedly non-Hi-Fi frequency response and quite high noise levels.
Early cassette decks
Early recorders were intended for dictation and journalists, and were typically hand-held battery-powered devices with built-in microphones and automatic gain control on recording. Tape recorder audio-quality had improved by the mid-1970s, and a cassette deck with manual level controls and VU meters became a standard component of home high-fidelity systems. Eventually the reel-to-reel recorder was completely displaced, in part because of the usage constraints presented by their large size, expense, and the inconvenience of threading and rewinding the tape reels - cassettes are more portable and can be stopped and immediately removed in the middle of playback without rewinding. Cassettes became extremely popular for automotive and other portable music applications. Although pre-recorded cassettes were widely available, many users would combine (dub) songs from their vinyl records or cassettes to make a new custom mixtape cassette.
In 1970, the Advent Corporation combined the Dolby B noise reduction system with chromium dioxide (CrO2) tape to create the Advent Model 200, the first high-fidelity cassette deck. Dolby B uses volume companding of high frequencies to boost low-level treble information by up to 9 dB, reducing them (and the hiss) on playback. CrO2 used different bias and equalization settings to reduce the overall noise level and extend the high frequency response. Together these allowed a usefully flat frequency response beyond 15 kHz for the first time. This deck was based on a top-loading mechanism by Nakamichi, then soon replaced by the Model 201 based on a more reliable transport made by Wollensak, a division of 3M, which was commonly used in audio/visual applications. Both featured an unusual single VU meter which could be switched to read either channel or both channels. The Model 200 featured piano key style transport controls, with the Model 201 using the distinctive combination of a separate lever for rewind/fast forward and the large play and stop button as found on their commercial reel to reel machines of the era.
Most manufacturers adopted a standard top-loading format with piano key controls, dual VU meters, and slider level controls. There was a variety of configurations leading to the next standard format in the late 1970s, which settled on front-loading (see main picture) with cassette well on one side, dual VU meters on the other, and later dual-cassette decks with meters in the middle. Mechanical controls were replaced with electronic push buttons controlling solenoid mechanical actuators, though low cost models would retain mechanical controls. Some models could search and count gaps between songs.
Widespread use
Cassette decks soon came into widespread use and were designed variously for professional applications, home audio systems, and for mobile use in cars, as well as portable recorders. From the mid-1970s to the late 1990s the cassette deck was the preferred music source for the automobile. Like an 8-track cartridge, it was relatively insensitive to vehicle motion, but it had reduced tape flutter, as well as the obvious advantages of smaller physical size and fast forward/rewind capability.
A major boost to the cassette's popularity came with the release of the Sony Walkman personal cassette player in 1979, designed specifically as a headphone-only ultra-compact wearable music source. Although the vast majority of such players eventually sold were not Sony products, the name Walkman has become synonymous with this type of device.
Cassette decks were eventually manufactured by almost every well known brand in home audio, and many in professional audio, with each company offering models of very high quality.
Performance improvements and additional features
Cassette decks reached their pinnacle of performance and complexity by the mid-1980s. Cassette decks from companies such as Nakamichi, Revox, and Tandberg incorporated advanced features such as multiple tape heads and dual capstan drive with separate reel motors. Auto-reversing decks became popular and were standard on most factory installed automobile decks.
Integrated noise reduction systems - Dolby B, C, and S
The Dolby B noise reduction system was key to realizing low noise performance on the - compared to reel-to-reel-technology - relatively slow and narrow cassette tapes. It works by boosting the high frequencies on recording, especially low-level high-frequency sounds, with corresponding high frequency reduction on playback. This lowers the high frequency noise (hiss) by approximately 9 dB. Enhanced versions included Dolby C (from 1980) and Dolby S types. Of the three, however, only Dolby B became common on automobile decks.
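As a deliberately simplified illustration of the encode/decode principle only (a static pre-emphasis shelf with a complementary de-emphasis; real Dolby B uses level-dependent, sliding-band processing that this sketch does not model), the following shows how hiss introduced between the two stages ends up attenuated at high frequencies while the program material round-trips unchanged:

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 440 * t)          # toy program material

freqs = np.fft.rfftfreq(signal.size, 1 / fs)
boost = 1 + 1.8 * (freqs / (freqs + 2_000))   # roughly a 9 dB shelf above a few kHz

# Encode: boost high frequencies before "recording".
encoded = np.fft.irfft(np.fft.rfft(signal) * boost, n=signal.size)

# "Tape" adds hiss between encode and decode.
hiss = np.random.default_rng(0).normal(scale=0.01, size=signal.size)
played_back = encoded + hiss

# Decode: the complementary cut restores the signal and attenuates the hiss
# at high frequencies, where tape noise is most audible.
decoded = np.fft.irfft(np.fft.rfft(played_back) / boost, n=signal.size)

print(np.std(hiss), np.std(decoded - signal))  # residual noise is lower after decoding
```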
Three heads for realtime monitoring of recordings and improved sound quality
Three-head technology uses separate heads for recording and playback (the third of the three heads being the erase head).
This allows different record and playback head gaps to be used.
A narrower head gap is better for playback, while a wider gap is better for recording, so the head gap width of any combined record/playback head must necessarily be a compromise.
Separate record and playback heads also allow off-the-tape monitoring during recording, permitting immediate verification of the recording quality.
(Such machines can be identified by the presence of a monitor switch with positions for tape and source, or similar.)
Three-head systems were common on reel-to-reel decks, but were more difficult to implement for cassettes,
which do not provide separate openings for record and play heads.
Some models squeezed a monitor head into the capstan area, and others combined separate record and playback gaps into a single headshell.
Auto reverse for automated sequential playback of both cassette sides
In later years, an auto-reverse feature appeared that allowed the deck to play (and, in some decks, record) on both sides of the cassette without the operator having to manually remove, flip, and re-insert the cassette.
Most auto-reverse machines use a four-channel head (similar to those on multitrack recorders), with only two channels connected to the electronics at one time, one pair for each direction.
Auto-reverse decks employ a capstan and pinch roller for each side.
Since these use the same opening in the cassette shell normally used for the erase head,
such decks must fit the erase head (or two, one for each direction) into the center opening in the shell along with the record/play head.
In later auto-reverse machines, the auto reverse mechanism uses an ordinary two-track, quarter-width head,
but operates by mechanically rotating the head 180 degrees so that the two head gaps access the other tracks of the tape.
There is usually an azimuth adjustment screw for each position.
Nevertheless, due to the repeated movement, the alignment (in particular, the azimuth) deviates with usage.
Even in a machine with a four-channel head, slight asymmetries in the cassette shell make it difficult to align the
head perfectly for both directions.
In one machine, the Dragon, Nakamichi addressed the issue with a motor-driven automatic head alignment mechanism.
This proved effective but very expensive.
The later Nakamichi auto-reverse models of the RX series were essentially single-directional decks,
but with an added mechanism that physically removed the cassette from the transport, flipped it over, and re-inserted it.
Akai made a similar machine but with the mechanism and cassette laid out horizontally instead of upright.
This permitted the convenience of auto-reverse with little compromise in record or playback quality.
Integration of digital electronics, from the 1980s
As a part of the Digital Revolution, the ongoing development of electronics technology decreased the cost of digital circuitry to the point that the technology could be applied to consumer electronics. The application of such digital electronics to cassette decks provides an early example of mechatronic design, which aims to enhance mechanical systems with electronic components in order to improve performance, increase system flexibility, or reduce cost. The inclusion of logic circuitry and solenoids into the transport and control mechanisms of cassette decks, often referred to as logic control, contrasts with earlier piano-key transport controls and mechanical linkages. One goal of using logic circuitry in cassette decks or recorders was to minimize equipment damage upon incorrect user input by including fail-safes into the transport and control mechanism. Such fail-safe behavior was described in a review by Julian Hirsch of a particular cassette deck featuring logic control. Some examples of fail-safe mechanisms incorporated into logic control decks include: a mechanism designed to protect internal components from damage when the tape or motor is locked, a mechanism designed to prevent the tape from being wound improperly, among others. Some logic control decks were designed to incorporate light-touch buttons or remote control, among other features marketed as being convenient. In the car stereo industry, full logic control was developed with the aim of miniaturization, so that the cassette deck would take up less dashboard space.
Dolby HX Pro for higher recording levels on the same tape material
Bang & Olufsen developed the HX Pro headroom extension system in conjunction with Dolby Laboratories in 1982. This was used in many higher-end decks. HX Pro reduces the high-frequency bias during recording when the signal being recorded has a high level of high frequency content. Such a signal is self-biasing. Reducing the level of the bias signal permits the desired signal to be recorded at a higher level without saturating the tape, thus increasing headroom or maximum recording level.
Some decks incorporated microprocessor programs to adjust tape bias and record level calibration automatically.
Advances in tape materials
New tape formulations were introduced.
Chromium dioxide (referred to as CrO2 or Type II) was the first tape designed for extended high-frequency response, but it required higher bias. Later, as the IEC Type II standard was defined, a different equalization setting was also mandated to reduce hiss, thus giving up some extension at the high end of the audio spectrum.
Better-quality cassette recorders soon appeared with a switch for the tape type.
Later decks incorporated coded holes in the shell to autodetect the tape type.
Chromium dioxide tape was thought to cause increased wear on the heads, so TDK and Maxell adapted cobalt-doped ferric formulations to mimic CrO2.
Sony briefly tried FerriChrome (Type III) which claimed to combine the best of both; some people, however, stated that the reverse was true because the Cr top layer seemed to wear off quickly, reducing this type to Fe in practice. Most recent decks produce the best response and dynamic headroom with metal tapes (IEC Type IV) which require still higher bias for recording, though they will play back correctly at the II setting since the equalization is the same.
Effects achieved by the technological developments
With all of these improvements, the best units could record and play the full audible spectrum from 20 Hz to over 20 kHz (although this was commonly quoted at -10, -20 or even -30 dB, not at full output level), with wow and flutter less than 0.05% and very low noise. A high-quality recording on cassette could rival the sound of an average commercial CD, though the quality of pre-recorded cassettes has been regarded by the general public as lower than could be achieved in a quality home recording. There was a call for better sound quality in 1981, surprisingly by the head of Tower Records, Russ Solomon. At a meeting of the National Association of Recording Merchandisers (NARM) Retail Advisory Committee in Carlsbad, California, Solomon played two recordings of a Santana track; one he had recorded himself and the pre-recorded cassette release from Columbia Records. He used this technique to demonstrate what he called "the tunnel effect" in the audio range of pre-recorded cassettes and commented to the reporter Sam Sutherland, who wrote a news article printed in Billboard magazine:
"The buyer who is aware of sound quality is making his own." "They won't be satisfied with the 'tunnel effect' of prerecorded tape. And home tape deck users don't use prerecorded tapes at all." Yet, contended Solomon, while Tower's own stores show strong blank tape sales gains, its prerecorded sales have increased by only 2% to 3%. With an estimated 15% of the chain's total tape business now generated by the sales of blanks, "it would appear our added tape sales are going to TDK, Maxell and Sony, not you." he concluded. - Billboard, Vol. 93, No. 38, 26 September 1981.
Noise reduction and fidelity
A variety of noise reduction and other schemes are used to increase fidelity, with Dolby B being almost universal for both prerecorded tapes and home recording. Dolby B was designed to address the high-frequency noise inherent in cassette tapes, and along with improvements in tape formulation it helped the cassette win acceptances as a high-fidelity medium. At the same time, Dolby B provided acceptable performance when played back on decks that lacked Dolby circuitry, meaning there was little reason not to use it if it was available.
The main alternative to Dolby was the dbx noise reduction system, which achieved a high signal-to-noise ratio, but was essentially unlistenable when played back on decks that lacked the dbx decoding circuitry.
Philips developed an alternative noise reduction system known as Dynamic Noise Limiter (DNL) which did not require the tapes to be processed during recording; this was also the basis of the later DNR noise reduction.
Dolby later introduced Dolby C and Dolby S noise reduction, which achieved higher levels of noise reduction; Dolby C became common on high-fidelity decks, but Dolby S, released when cassette sales had begun to decline, never achieved widespread use. It was only licensed for use on higher end tape decks that included dual motors, triple heads, and other refinements.
Dolby HX Pro headroom extension provided better high-frequency response by adjusting the inaudible tape bias during the recording of strong high-frequency sounds, which had a bias effect of their own. Developed by Bang & Olufsen, it did not require a decoder to play back. Since B&O held patent rights and required paying license fees, many other manufacturers refrained from using it too.
Other refinements to improve cassette performance included Tandberg's DYNEQ, Toshiba's and Telefunken's High Com, and on some high-end decks, automatic recording bias, fine pitch adjustment and (sometimes) head azimuth adjustment such as the Tandberg TCD-330 and TCD-340A.
By the late 1980s, thanks to such improvements in the electronics, the tape material and manufacturing techniques, as well as dramatic improvements to the precision of the cassette shell, tape heads and transport mechanics, sound fidelity on equipment from the top manufacturers far surpassed the levels originally expected of the medium. On suitable audio equipment, cassettes could produce a very pleasant listening experience. High-end cassette decks could achieve 15 Hz-22 kHz ±3 dB frequency response with wow and flutter below 0.022%, and a signal-to-noise ratio of up to 61 dB (for Type IV tape, without noise-reduction). With noise reduction, typical signal-to-noise figures of 70-76 dB with Dolby C, 80-86 dB with Dolby S, and 85-90 dB with dbx could be achieved. Many casual listeners could not tell the difference between compact cassette and compact disc.
From the early 1980s, the fidelity of prerecorded cassettes began to improve dramatically. Whereas Dolby B was already in widespread use in the 1970s, prerecorded cassettes were duplicated onto rather poor quality tape stock at (often) high speed and did not compare in fidelity to high-grade LPs. However, systems such as XDR, along with the adoption of higher-grade tape (such as chromium dioxide, but typically recorded in such a way as to play back at the normal 120 μs position), and the frequent use of Dolby HX Pro, meant that cassettes became a viable high-fidelity option, one that was more portable and required less maintenance than records. In addition, cover art, which had generally previously been restricted to a single image of the LP cover along with a minimum of text, began to be tailored to cassettes as well, with fold-out lyric sheets or librettos and fold-out sleeves becoming commonplace.
Some companies, such as Mobile Fidelity, produced audiophile cassettes in the 1980s, which were recorded on high-grade tape and duplicated on premium equipment in real time from a digital master. Unlike audiophile LPs, which continue to attract a following, these became moot after the compact disc became widespread.
Almost all cassette decks have an MPX filter to improve the sound quality and the tracking of the noise reduction system when recording from an FM stereo broadcast. However, in many especially cheaper decks, this filter cannot be disabled, and because of that record/playback frequency response in those decks typically is limited to 16 kHz. In other decks, the MPX filter can be switched off or on independently from the Dolby switch. On yet other decks, the filter is off by default, and an option to switch it on or off is only provided when Dolby is activated; this prevents the MPX filter from being used when it's not required.
In-car entertainment systems
A key element of the cassette's success was its use in in-car entertainment systems, where the small size of the tape was significantly more convenient than the competing 8-track cartridge system. Cassette players in cars and for home use were often integrated with a radio receiver. In-car cassette players were the first to adopt automatic reverse ("auto-reverse") of the tape direction at each end, allowing a cassette to be played endlessly without manual intervention. Home cassette decks soon added the feature.
Cassette tape adaptors have been developed which allow newer media players to be played through existing cassette decks, in particular those in cars which generally do not have input jacks. These units do not suffer from the reception problems of FM-transmitter-based systems, which play media players back through the car's FM radio, though choosing a transmitter frequency not used by commercial broadcasters in a given region (e.g. any frequency below 88.1 MHz in the US) somewhat reduces that problem.
Maintenance
Cassette equipment needs regular maintenance, as cassette tape is a magnetic medium that is in physical contact with the tape head and other metallic parts of the recorder/player mechanism. Without such maintenance, the high-frequency response of the cassette equipment will suffer.
One problem occurs when iron oxide (or similar) particles from the tape itself become lodged in the playback head. As a result, the tape heads will require occasional cleaning to remove such particles. The metal capstan and the rubber pinch roller can become coated with these particles, leading them to pull the tape less precisely over the head; this in turn leads to misalignment of the tape over the head azimuth, producing noticeably unclear high tones, just as if the head itself were out of alignment. Isopropyl alcohol and denatured alcohol are both suitable head-cleaning fluids.
The heads and other metallic components in the tape path (such as spindles and capstans) may become magnetized with use, and require demagnetizing (see Cassette demagnetizer).
Decline in popularity
Analog cassette deck sales were expected to decline rapidly with the advent of the compact disc and other digital recording technologies such as digital audio tape (DAT), MiniDisc, and the CD-R recorder drives. Philips responded with the digital compact cassette, a system which was backward-compatible with existing analog cassette recordings for playback. However, it failed to garner a significant market share and was withdrawn from the market. One reason proposed for the lack of acceptance of digital recording formats such as DAT was a fear by content providers that the ability to make very high-quality copies would hurt sales of copyrighted recordings.
The rapid transition was not realized and CDs and cassettes successfully co-existed for nearly 20 years. A contributing factor may have been the inability of early CD players to reliably read discs with surface damage and offer anti-skipping features for applications where external vibration would be present, such as automotive and recreation environments. Early CD playback equipment also tended to be expensive compared to cassette equipment of similar quality and did not offer recording capability. Many home and portable entertainment systems supported both formats and commonly allowed the CD playback to be recorded on cassette tape. The rise of inexpensive all-solid-state portable digital music systems based on MP3, AAC and similar formats finally saw the eventual decline of the domestic cassette deck. As of 2020, Marantz, Teac, and Tascam are among the few companies still manufacturing cassette decks in relatively small quantities for professional and niche market use. By the late 1990s, automobiles were offered with entertainment systems that played both cassettes and CDs. By the end of the 2000s, very few cars were offered with cassette decks. The last vehicle model in the United States that came standard with a factory-installed cassette player was the 2010 Lexus SC 430; however, the Ford Crown Victoria came with a cassette deck as an option until the model was discontinued in 2011. As radios became tightly integrated into dashboards, many cars lacked even standard openings that would accept aftermarket cassette player installations.
Despite the decline in the production of cassette decks, these products are still valued by some. Many blind and elderly people find the newest digital technologies very difficult to use compared to the cassette format. Cassette tapes are not vulnerable to scratching from handling (though the exposed magnetic tape is vulnerable to stretching from poking), and play from where they were last stopped (though some modern MP3 players offer savestating electronically). Cassette tapes can also be recorded multiple times (though some solid-state digital recorders are now offering that function).
Today, cassette decks are not considered by most people to be either the most versatile or highest fidelity sound recording devices available, as even very inexpensive CD or digital audio players can reproduce a wide frequency range with no speed variations. Many current budget-oriented cassette decks lack a tape selector to set proper bias and equalization settings to take best advantage of the extended high end of Type II [High Bias] and Type IV [Metal Bias] tapes.
Cassettes remain popular for audio-visual applications. Some CD recorders, particularly those intended for business use, incorporate a cassette deck to allow both formats for recording meetings, church sermons, and books on tape.
References
External links
Audio Asylum Tape Trail – A discussion forum of interest to those involved in cassette technology.
Vintage Cassette Decks - A collection of Vintage cassette decks of all brands.
Audio players
Recording devices
Tape recording
1963 in technology
Audiovisual introductions in 1963
Products introduced in 1963 | Cassette deck | Technology | 4,967 |
29,343,018 | https://en.wikipedia.org/wiki/Operational%20acceptance%20testing | Operational acceptance testing (OAT) is used to conduct operational readiness (pre-release) of a product, service, or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, and/or to become part of the production environment. Hence, it is also known as operational readiness testing (ORT) or operations readiness and assurance testing (OR&A). Functional testing within OAT is limited to those tests which are required to verify the non-functional aspects of the system.
OAT elaborates upon and compartmentalises operational aspects of acceptance testing.
According to the International Software Testing Qualifications Board (ISTQB), OAT may include checking the backup/restore facilities, IT disaster recovery procedures, maintenance tasks and periodic checks of security vulnerabilities. According to whitepapers on ISO 29119 and Operational Acceptance by Anthony Woods, and on ISO 25000 and Operational Acceptance Testing by Dirk Dach et al., OAT generally includes:
Component Testing
Failover (Within the same data centre)
Component fail-over
Network fail-over
Functional Stability
Accessibility
Conversion
Stability
Usability
IT Service Management (Supportability)
Monitoring and Alerts (to ensure proper alerts are configured in the system if something goes wrong)
Portability
Compatibility
Interoperability
Installation and Backout
Localization
Recovery (across data centres)
Application/system recovery
Data recovery
Reliability
Backup and Restoration (Recovery)
Disaster Recovery
Maintainability
Performance, Stress and Volume,
Procedures (Operability) and Supporting Documentation (Supportability)
Security and Penetration
During OAT, changes may be made to environmental parameters which the application uses to run smoothly. For example, with Microsoft Windows applications with a mixed or hybrid architecture, this may include: Windows services, configuration files, web services, XML files, COM+ components, IIS, stored procedures in databases, etc. Typically OAT should occur after each main phase of the development life cycle: design, build, and functional testing. In sequential projects it is often viewed as a final verification before a system is released; whereas in agile and iterative projects, a more frequent execution of OAT occurs, providing stakeholders with assurance of continued stability of the system and its operating environment.
An approach used in OAT may follow these steps:
Design the system,
Assess the design,
Build the system,
Confirm if built to design,
Evaluate the system addresses business functional requirements,
Assess the system for compliance with non-functional requirements,
Deploy the system,
Assess operability and supportability of the system.
For running the OAT test cases, the tester normally has exclusive access to the system or environment. This means that a single tester would be executing the test cases at a single point of time. For OAT the exact Operational Readiness quality gates are defined: both entry and exit gates. The primary emphasis of OAT should be on the operational stability, portability and reliability of the system.
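As an illustration only (the hostnames, commands, paths, and thresholds below are hypothetical, not taken from any standard or runbook), operational readiness checks of the kind listed above are often automated as a test suite, for example in a pytest-style script:

```python
import shutil
import socket
import subprocess

# Hypothetical targets; in a real OAT suite these would come from the
# operations runbook for the system under test.
BACKUP_COMMAND = ["/usr/local/bin/backup-verify", "--latest"]
ALERT_ENDPOINT = ("monitoring.example.internal", 443)

def test_backup_restore_verification():
    """Backup/restore facilities: the latest backup must verify cleanly."""
    result = subprocess.run(BACKUP_COMMAND, capture_output=True)
    assert result.returncode == 0

def test_disk_headroom():
    """Maintenance task check: at least 20% free space on the data volume."""
    usage = shutil.disk_usage("/var/lib/app")
    assert usage.free / usage.total > 0.20

def test_monitoring_alert_endpoint_reachable():
    """Monitoring and alerts: the alerting endpoint must accept connections."""
    with socket.create_connection(ALERT_ENDPOINT, timeout=5):
        pass
```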
References
Software testing | Operational acceptance testing | Engineering | 620 |
74,960,592 | https://en.wikipedia.org/wiki/Commation | Commation is a genus of marine heterotrophic protists closely related to the actinophryids. It contains two species, Commation cryoporinum and Commation eposianum, discovered in antarctic waters and described in 1993. Currently, the genus is classified within a monotypic family Commatiidae and order Commatiida. Along with the photosynthetic raphidophytes, these organisms compose the class of stramenopiles known as Raphidomonadea.
Etymology
The name of the genus, Commation, derives from the Greek kommátion ("little comma"), referring to the overall comma shape of the biconvex cells.
Morphology
Commation is a genus of unicellular eukaryotes. They are solitary planktonic organisms that live as circular or oval, sometimes flattened, cells with a proboscis. Occasionally, a single flagellum with tripartite hairs (or mastigonemes) emerges from the proximity of the proboscis. They predominantly move by gliding, a motion facilitated by excretion of mucus. The cell nucleus appears at the base of the proboscis. The presence of two flagellar basal bodies hints at their stramenopile origin, since two heterokont flagella (one smooth, one with mastigonemes) are the main distinguishing trait of the Stramenopiles. They also present microtubular roots and a striated root or rhizoplast, a fiber connecting the nucleus to the basal bodies.
The mitochondria of Commation species have tubular cristae. One or more types of extrusomes occur scattered throughout the cytoplasm. One species, C. cryoporinum, presents two types of extrusomes, some of them visible under light microscopy when large enough. The other species, C. eposianum, only contains one type of extrusome that is not visible. The complex cytoskeleton of Commation contains structures consisting of microtubular arrays and electron-dense structures, present in both the cell bodies and proboscis.
Ecology
Commation cells are phagotrophic and non-photosynthetic, unlike their raphidophyte relatives. They live as plankton in the Antarctic Ocean, and were obtained at a depth of 10–20 meters. A single cell similar to C. eposianum was found in a cover slip preparation belonging to a 1989 sample obtained from a Pacific Ocean cruise off California, indicating that the genus Commation may not be endemic to the Antarctic region. Despite being heterotrophic, they are classified as part of the phytoplankton in ecological surveys.
Systematics
Commation was described as a genus by two biologists of the University of Copenhagen, Helge Abildhauge Thomsen and Jacob Larsen. The description was published in 1993 on the European Journal of Protistology. Subsequent taxonomic research papers assigned Commation to a monotypic family Commatiidae and order Commatiida. The order Commatiida was initially assigned to the class Jacobea on the basis of branched tubular mitochondrial cristae. Phylogenetic analyses in 2013 demonstrated that both Commation and a group of heliozoa known as Actinophryida were related to the raphidophyte algae. The first two groups, while heterotrophic, were united in the subclass Raphopoda, while the raphidophyte algae were given their own subclass Raphidophycidae. Together, these two subclasses currently compose the class Raphidomonadea.
Species
Two species have been described:
Commation cryoporinum
Commation eposianum
References
Taxa described in 1993
Ochrophyte genera
Ochrophyta | Commation | Biology | 769 |
23,775,396 | https://en.wikipedia.org/wiki/Noro%E2%80%93Frenkel%20law%20of%20corresponding%20states | The Noro–Frenkel law of corresponding states is an equation in thermodynamics that describes the critical temperature of the liquid-gas transition T as a function of the range of the attractive potential R. It states that all short-ranged spherically symmetric pair-wise additive attractive potentials are characterised by the same thermodynamic properties, if compared at the same reduced density and second virial coefficient.
Description
Johannes Diderik van der Waals's law of corresponding states expresses the fact that there are basic similarities in the thermodynamic properties of all simple gases. Its essential feature is that if we scale the thermodynamic variables that describe an equation of state (temperature, pressure, and volume) with respect to their values at the liquid-gas critical point, all simple fluids obey the same reduced equation of state.
Massimo G. Noro and Daan Frenkel formulated an extended law of corresponding states that predicts the phase behaviour of short-ranged potentials on the basis of the effective pair potential alone – extending the validity of the van der Waals law to systems interacting through pair potentials with different functional forms.
The Noro–Frenkel law suggests condensing the three quantities which are expected to play a role in the thermodynamic behavior of a system (hard-core size, interaction energy and range) into a combination of only two quantities: an effective hard core diameter and the reduced second virial coefficient. Noro and Frenkel suggested determining the effective hard core diameter following the expression suggested by Barker, based on the separation of the potential into attractive Vatt and repulsive Vrep parts used in the Weeks–Chandler–Andersen method. The reduced second virial coefficient, i.e., the second virial coefficient B2 divided by the second virial coefficient of hard spheres with the effective diameter, can be calculated (or experimentally measured) once the potential is known. B2 is defined as
$$B_2(T) = -2\pi \int_0^\infty \left( e^{-V(r)/k_B T} - 1 \right) r^2\, dr.$$
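As a minimal numerical sketch (the square-well potential, its parameters, and the temperature are illustrative choices in reduced units, not values from the original papers), the reduced second virial coefficient can be computed directly from a pair potential using the formula above together with an effective hard-core diameter obtained from the repulsive part:

```python
import numpy as np

# Illustrative square-well parameters in reduced units (k_B = 1).
sigma, eps, lam, T = 1.0, 1.0, 1.25, 0.6
beta = 1.0 / T

def V(r):
    """Square-well potential: hard core for r < sigma, depth -eps out to lam*sigma."""
    return np.where(r < sigma, np.inf, np.where(r < lam * sigma, -eps, 0.0))

r = np.linspace(1e-6, 5 * sigma, 200_001)
dr = r[1] - r[0]

# Second virial coefficient: B2(T) = -2*pi * Integral[(exp(-V/kT) - 1) r^2 dr]
mayer = np.exp(-beta * V(r)) - 1.0
B2 = -2.0 * np.pi * np.sum(mayer * r**2) * dr      # crude quadrature, enough here

# Effective hard-core diameter from the repulsive part (Barker-type integral),
# and the hard-sphere reference value B2_HS = (2*pi/3) * d_eff^3.
V_rep = np.where(r < sigma, np.inf, 0.0)
d_eff = np.sum(1.0 - np.exp(-beta * V_rep)) * dr
B2_HS = 2.0 * np.pi / 3.0 * d_eff**3

print("reduced second virial coefficient B2* =", B2 / B2_HS)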
Applications
The Noro–Frenkel law is particularly useful for the description of colloidal and globular protein solutions, for which the range of the potential is indeed significantly smaller than the particle size. For these systems the thermodynamic properties can be re-written as a function of only two parameters, the reduced density (using the effective diameter as length scale) and the reduced second-virial coefficient B2*. The gas-liquid critical points of all systems satisfying the extended law of corresponding states are characterized by the same value of B2* at the critical point.
The Noro-Frenkel law can be generalized to particles with limited valency (i.e. to non spherical interactions). Particles interacting with different potential ranges but identical valence behave again according to the generalized law, but with a different value of B2* at the critical point for each valence.
See also
Equation of state
Van der Waals equation
References
Thermodynamic properties
Condensed matter physics
Thermodynamic equations | Noro–Frenkel law of corresponding states | Physics,Chemistry,Materials_science,Mathematics,Engineering | 606 |
28,387 | https://en.wikipedia.org/wiki/Spirituality | The meaning of spirituality has developed and expanded over time, and various meanings can be found alongside each other. Traditionally, spirituality referred to a religious process of re-formation which "aims to recover the original shape of man", oriented at "the image of God" as exemplified by the founders and sacred texts of the religions of the world. The term was used within early Christianity to refer to a life oriented toward the Holy Spirit and broadened during the Late Middle Ages to include mental aspects of life.
In modern times, the term both spread to other religious traditions and broadened to refer to a wider range of experiences, including a range of esoteric and religious traditions. Modern usages tend to refer to a subjective experience of a sacred dimension, and the "deepest values and meanings by which people live", often in a context separate from organized religious institutions. This may involve belief in a supernatural realm beyond the ordinarily observable world, personal growth, a quest for an ultimate or sacred meaning, religious experience, or an encounter with one's own "inner dimension" or spirit.
Etymology
The term spirit means "animating or vital principle in man and animals". It is derived from the Old French espirit, which comes from the Latin word spiritus (soul, ghost, courage, vigor, breath) and is related to spirare (to breathe). In the Vulgate, the Latin word spiritus is used to translate the Greek pneuma and Hebrew ruach.
The term "spiritual", meaning "concerning the spirit", is derived from Old French spirituel (12c.), which is derived from Latin spiritualis, which comes from spiritus or "spirit".
The term "spirituality" is derived from Middle French spiritualité, from Late Latin spiritualitatem (nominative spiritualitas), which is also derived from Latin spiritualis.
Definition
There is no single, widely agreed-upon definition of spirituality. Surveys of the definition of the term, as used in scholarly research, show a broad range of definitions with limited overlap. A survey of reviews by McCarroll, each dealing with the topic of spirituality, gave twenty-seven explicit definitions among which "there was little agreement". This causes some difficulty in trying to study spirituality systematically; i.e., it impedes both understanding and the capacity to communicate findings in a meaningful fashion.
According to Kees Waaijman, the traditional meaning of spirituality is a process of re-formation that "aims to recover the original shape of man, the image of God. To accomplish this, the re-formation is oriented at a mold, which represents the original shape: in Judaism the Torah, in Christianity there is Christ, for Buddhism, Buddha, and in Islam, Muhammad." Houtman and Aupers suggest that modern spirituality is a blend of humanistic psychology, mystical and esoteric traditions, and Eastern religions.
In modern times the emphasis is on subjective experience and the "deepest values and meanings by which people live", incorporating personal growth or transformation, usually in a context separate from organized religious institutions. Spirituality can be defined generally as an individual's search for ultimate or sacred meaning, and purpose in life. Additionally it can mean to seek out or search for personal growth, religious experience, belief in a supernatural realm or afterlife, or to make sense of one's own "inner dimension".
Development of the meaning of spirituality
Classical, medieval, and early modern periods
Bergomi detects "an enlightened form of non-religious spirituality" in late antiquity.
In ancient Rome, the concept of spirituality consisted mainly of the pax deorum (the peace of the gods), which was achieved through rituals and festivals that ensured divine favour and cosmic order. While Roman spirituality was communal, it also involved personal engagement with the divine through the study of mythology and philosophy. Myths served as allegories for moral lessons and models for personal conduct, guiding individuals in their relationship with the gods. The influence of Pythagorean philosophy, especially the Golden Verses, encouraged introspection, self-discipline, and ethical living. This blend of myth, philosophy, and ritual shaped a spirituality focused on both societal harmony and personal connection with the divine.
Words translatable as "spirituality" first began to arise in the 5th century and only entered common use toward the end of the Middle Ages. In a Biblical context the term means being animated by God. The New Testament offers the concept of being driven by the Holy Spirit, as opposed to living a life in which one rejects this influence.
In the 11th century, this meaning of "Spirituality" changed. Instead, the word began to denote the mental aspect of life, as opposed to the material and sensual aspects of life, "the ecclesiastical sphere of light against the dark world of matter". In the 13th century "spirituality" acquired a social and psychological meaning. Socially it denoted the territory of the clergy: "the ecclesiastical against the temporary possessions, the ecclesiastical against the secular authority, the clerical class against the secular class". Psychologically, it denoted the realm of the inner life: "the purity of motives, affections, intentions, inner dispositions, the psychology of the spiritual life, the analysis of the feelings".
In the 17th and 18th centuries, a distinction was made between higher and lower forms of spirituality: "A spiritual man is one who is Christian 'more abundantly and deeper than others'." The word was also associated with mysticism and quietism, and acquired a negative meaning.
Modern spirituality
Modern notions of spirituality developed throughout the 19th and 20th centuries, mixing Christian ideas with Western esoteric traditions and elements of Asian, especially Indian, religions. Spirituality became increasingly disconnected from traditional religious organizations and institutions. It is sometimes associated today with philosophical, social, or political movements such as liberalism, feminist theology, and green politics.
Modern Roman religion
In modern Roman neopagan spirituality, initiation is a central element that facilitates deeper spiritual development and access to sacred knowledge. It is viewed as a transformative process, guiding the initiate through stages of spiritual growth. Initiation introduces the individual to the esoteric meanings of Roman myths, deities, and the concept of pax deorum (peace of the gods), aligning the individual with cosmic order. This process not only prepares the initiate for participation in rituals but also emphasizes personal alignment with the divine will. As such, initiation is both a rite of passage and a means to engage meaningfully with divine forces, ensuring the individual's spiritual preparedness to uphold the traditions of Roman religious practice.
Transcendentalism and Unitarian Universalism
Ralph Waldo Emerson (1803–1882) was a pioneer of the idea of spirituality as a distinct field. He was one of the major figures in Transcendentalism, an early 19th-century liberal Protestant movement, which was rooted in English and German Romanticism, the Biblical criticism of Johann Gottfried Herder and Friedrich Schleiermacher, the skepticism of Hume, and Neoplatonism.
The Transcendentalists emphasized an intuitive, experiential approach to religion. Following Schleiermacher, an individual's intuition of truth was taken as the criterion for truth. In the late 18th and early 19th century, the first translations of Hindu texts appeared, which were also read by the Transcendentalists, and influenced their thinking. They also endorsed universalist and Unitarianist ideas, leading to Unitarian Universalism, the idea that there must be truth in other religions as well since a loving God would redeem all living beings, not just Christians.
Theosophy, anthroposophy, and the perennial philosophy
A major influence on modern spirituality was the Theosophical Society, which searched for 'secret teachings' in Asian religions. It has been influential on modernist streams in several Asian religions, notably Neo-Vedanta, the revival of Theravada Buddhism, and Buddhist modernism, which have taken over modern western notions of personal experience and universalism and integrated them in their religious concepts. A second, related influence was Anthroposophy, whose founder, Rudolf Steiner, was particularly interested in developing a genuine Western spirituality, and in the ways that such a spirituality could transform practical institutions such as education, agriculture, and medicine. More independently, the spiritual science of Martinus was an influence, especially in Scandinavia.
The influence of Asian traditions on Western modern spirituality was also furthered by the perennial philosophy, whose main proponent Aldous Huxley was deeply influenced by Swami Vivekananda's Neo-Vedanta and universalism, and the spread of social welfare, education and mass travel after World War II.
Neo-Vedanta
An important influence on western spirituality was Neo-Vedanta, also called neo-Hinduism and Hindu Universalism, a modern interpretation of Hinduism which developed in response to western colonialism and orientalism. It aims to present Hinduism as a "homogenized ideal of Hinduism" with Advaita Vedanta as its central doctrine. Due to the colonisation of Asia by the western world, since the 19th century an exchange of ideas has been taking place between the western world and Asia, which also influenced western religiosity. Unitarianism, and the idea of Universalism, was brought to India by missionaries, and had a major influence on neo-Hinduism via Ram Mohan Roy's Brahmo Samaj and Brahmoism. Roy attempted to modernise and reform Hinduism, from the idea of Universalism. This universalism was further popularised, and brought back to the west as neo-Vedanta, by Swami Vivekananda.
"Spiritual but not religious"
After the Second World War, spirituality and theistic religion became increasingly disconnected, and spirituality became more oriented on subjective experience, instead of "attempts to place the self within a broader ontological context". A new discourse developed, in which (humanistic) psychology, mystical and esoteric traditions and eastern religions are being blended, to reach the true self by self-disclosure, free expression, and meditation.
The distinction between the spiritual and the religious became more common in the popular mind during the late 20th century with the rise of secularism and the advent of the New Age movement. Authors such as Chris Griscom and Shirley MacLaine explored it in numerous ways in their books. Paul Heelas noted the development within New Age circles of what he called "seminar spirituality": structured offerings complementing consumer choice with spiritual options.
Among other factors, declining membership of organized religions and the growth of secularism in the western world have given rise to this broader view of spirituality. The term "spiritual" is now frequently used in contexts in which the term "religious" was formerly employed. Both theists and atheists have criticized this development.
Traditional spirituality
Abrahamic faiths
Judaism
Spirituality in Judaism may involve practices of Jewish ethics, Jewish prayer, Jewish meditation, Shabbat and holiday observance, Torah study, dietary laws, teshuvah, and other practices. It may involve practices ordained by halakhah or other practices.
Kabbalah (literally "receiving") is an esoteric method, discipline and school of thought of Judaism. Kabbalah is a set of esoteric teachings meant to explain the relationship between an unchanging, eternal and mysterious Ein Sof (no end) and the mortal and finite universe (his creation). Interpretations of Kabbalistic spirituality are found within Hasidic Judaism, a branch of Orthodox Judaism founded in 18th-century Eastern Europe by Rabbi Israel Baal Shem Tov. Hasidism often emphasizes the Immanent Divine presence and focuses on emotion, fervour, and the figure of the Tzadik. This movement included an elite ideal of nullification to paradoxical Divine Panentheism.
The Musar movement is a Jewish spiritual movement that has focused on developing character traits such as faith, humility, and love. The Musar movement, first founded in the 19th century by Israel Salanter and developed in the 21st century by Alan Morinis and Ira F. Stone, has encouraged spiritual practices of Jewish meditation, Jewish prayer, Jewish ethics, tzedakah, teshuvah, and the study of musar (ethical) literature.
Reform Judaism and Conservative Judaism have often emphasized the spirituality of Jewish ethics and tikkun olam, feminist spirituality, Jewish prayer, Torah study, ritual, and musar.
Christianity
Christian spirituality is the spiritual practice of living out a personal faith. Pope Francis offers several ways in which the calling of Christian spirituality can be considered:
"Christian spirituality proposes an alternative understanding of the quality of life, and encourages a prophetic and contemplative lifestyle, one capable of deep enjoyment free of the obsession with consumption";
"Christian spirituality proposes a growth marked by moderation and the capacity to be happy with little."
Work, with an understanding of its meaning, and relaxation are both important dimensions of Christian spirituality.
The terminology of the Catholic Church refers to an act of faith (fides qua creditur) following the acceptance of faith (fides quae creditur). Although all Catholics are expected to pray together at Mass, there are many different forms of spirituality and private prayer which have developed over the centuries. Each of the major religious orders of the Catholic Church and other lay groupings have their own unique spirituality – its own way of approaching God in prayer and in living out the Gospel.
Christian mysticism refers to the development of mystical practices and theory within Christianity. It has often been connected to mystical theology, especially in the Catholic and Eastern Orthodox traditions. The attributes and means by which Christian mysticism is studied and practiced are varied and range from ecstatic visions of the soul's mystical union with God to simple prayerful contemplation of Holy Scripture (i.e., Lectio Divina).
Progressive Christianity is a contemporary movement which seeks to remove the supernatural claims of the faith and replace them with a post-critical understanding of biblical spirituality based on historical and scientific research. It focuses on the lived experience of spirituality over historical dogmatic claims, and accepts that the faith is both true and a human construction, and that spiritual experiences are psychologically and neurally real and useful.
Islam
An inner spiritual struggle and an outer physical struggle are two commonly accepted meanings of the Arabic word jihad: The "greater jihad" is the inner struggle by a believer to fulfill his religious duties and fight against one's ego. This non-violent meaning is stressed by both Muslim and non-Muslim authors.
Al-Khatib al-Baghdadi, an 11th-century Islamic scholar, referenced a statement on this point by Jabir ibn Abd-Allah, a companion of Muhammad.
Sufism
The best known form of Islamic mystic spirituality is the Sufi tradition (famous through Rumi and Hafiz) in which a Sheikh or pir transmits spiritual discipline to students.
Sufism, or tasawwuf, is defined by its adherents as the inner, mystical dimension of Islam. A practitioner of this tradition is generally known as a Sufi. Sufis believe they are practicing ihsan (perfection of worship) as revealed by Gabriel to Muhammad.
Sufis consider themselves the original, true proponents of this pure original form of Islam. They are strong adherents of the principles of tolerance and peace and oppose any form of violence. Sufis have suffered severe persecution from more rigid and fundamentalist groups such as the Wahhabi and Salafi movements. In 1843 the Senussi Sufis were forced to flee Mecca and Medina and head to Sudan and Libya.
Classical Sufi scholars have defined Sufism as "a science whose objective is the reparation of the heart and turning it away from all else but God". Alternatively, in the words of the Darqawi Sufi teacher Ahmad ibn Ajiba, "a science through which one can know how to travel into the presence of the Divine, purify one's inner self from filth, and beautify it with a variety of praiseworthy traits".
Indian religions
Jainism
Jainism, traditionally known as Jain Dharma, is an ancient Indian religion. The three main pillars of Jainism are ahiṃsā (non-violence), anekāntavāda (non-absolutism), and aparigraha (non-attachment). Jains take five main vows: ahiṃsā (non-violence), satya (truth), asteya (not stealing), brahmacharya (sexual continence), and aparigraha (non-possessiveness). These principles have affected Jain culture in many ways, such as leading to a predominantly vegetarian lifestyle. Parasparopagraho jīvānām (the function of souls is to help one another) is the faith's motto and the Ṇamōkāra mantra is its most common and basic prayer.
Jainism traces its spiritual ideas and history through a succession of twenty-four leaders or Tirthankaras, with the first in the current time cycle being Rishabhadeva, whom the tradition holds to have lived millions of years ago; the twenty-third tirthankara Parshvanatha, whom historians date to 9th century BCE; and the twenty-fourth tirthankara, Mahavira around 600 BCE. Jainism is considered to be an eternal dharma with the tirthankaras guiding every time cycle of the cosmology.
Buddhism
Buddhist practices are known as Bhavana, which literally means "development" or "cultivating" or "producing" in the sense of "calling into existence". It is an important concept in Buddhist praxis (Patipatti). The word bhavana normally appears in conjunction with another word forming a compound phrase such as citta-bhavana (the development or cultivation of the heart/mind) or metta-bhavana (the development/cultivation of loving kindness). When used on its own bhavana signifies 'spiritual cultivation' generally.
Various Buddhist paths to liberation developed throughout the ages. Best-known is the Noble Eightfold Path, but others include the Bodhisattva Path and Lamrim.
Hinduism
Hinduism has no traditional ecclesiastical order, no centralized religious authorities, no governing body, no prophets nor any binding holy book; Hindus can choose to be polytheistic, henotheistic, pantheistic, monotheistic, or atheistic. Within this diffuse and open structure, spirituality in Hindu philosophy is an individual experience, and referred to as ksaitrajña. It defines spiritual practice as one's journey towards moksha, awareness of self, the discovery of higher truths, Ultimate reality, and a consciousness that is liberated and content.
Four paths
Traditionally, Hinduism identifies three mārga (ways) of spiritual practice, namely Jñāna (ज्ञान), the way of knowledge; Bhakti, the way of devotion; and Karma yoga, the way of selfless action. In the 19th century Vivekananda, in his neo-Vedanta synthesis of Hinduism, added Rāja yoga, the way of contemplation and meditation, as a fourth way, calling all of them "yoga".
Jñāna marga is a path often assisted by a guru (teacher) in one's spiritual practice. Bhakti marga is a path of faith and devotion to deity or deities; the spiritual practice often includes chanting, singing and music – such as in kirtans – in front of idols, or images of one or more deities, or a devotional symbol of the holy. Karma marga is the path of one's work, where diligent practical work or vartta (profession) becomes in itself a spiritual practice, and work in daily life is perfected as a form of spiritual liberation and not for its material rewards. Rāja marga is the path of cultivating necessary virtues, self-discipline, tapas (meditation), contemplation and self-reflection sometimes with isolation and renunciation of the world, to a pinnacle state called samādhi. This state of samādhi has been compared to peak experience.
There is a rigorous debate in Indian literature on relative merits of these theoretical spiritual practices. For example, Chandogyopanishad suggests that those who engage in ritualistic offerings to gods and priests will fail in their spiritual practice, while those who engage in tapas will succeed; Shvetashvatara Upanishad suggests that a successful spiritual practice requires a longing for truth, but warns of becoming 'false ascetic' who go through the mechanics of spiritual practice without meditating on the nature of Self and universal Truths. In the practice of Hinduism, suggest modern era scholars such as Vivekananda, the choice between the paths is up to the individual and a person's proclivities. Other scholars suggest that these Hindu spiritual practices are not mutually exclusive, but overlapping. These four paths of spirituality are also known in Hinduism outside India, such as in Balinese Hinduism, where it is called Chatur Marga (literally: four paths).
Schools and spirituality
Different schools of Hinduism encourage different spiritual practices. In Tantric school for example, the spiritual practice has been referred to as sādhanā. It involves initiation into the school, undergoing rituals, and achieving moksha liberation by experiencing union of cosmic polarities. The Hare Krishna school emphasizes bhakti yoga as spiritual practice. In Advaita Vedanta school, the spiritual practice emphasizes jñāna yoga in stages: samnyasa (cultivate virtues), sravana (hear, study), manana (reflect) and dhyana (nididhyasana, contemplate).
Sikhism
Sikhism considers spiritual life and secular life to be intertwined: "In the Sikh Weltanschauung ... the temporal world is part of the Infinite Reality and partakes of its characteristics." Guru Nanak described living an "active, creative, and practical life" of "truthfulness, fidelity, self-control and purity" as being higher than a purely contemplative life.
The 6th Sikh Guru Guru Hargobind re-affirmed that the political/temporal (Miri) and spiritual (Piri) realms are mutually coexistent. According to the 9th Sikh Guru, Tegh Bahadhur, the ideal Sikh should have both Shakti (power that resides in the temporal), and Bhakti (spiritual meditative qualities). This was developed into the concept of the Saint Soldier by the 10th Sikh Guru, Gobind Singh.
According to Guru Nanak, the goal is to attain the "attendant balance of separation-fusion, self-other, action-inaction, attachment-detachment, in the course of daily life", the polar opposite to a self-centered existence. Nanak talks further about the one God or akal (timelessness) that permeates all life, and which must be seen with 'the inward eye', or the 'heart', of a human being.
In Sikhism there is no dogma, priests, monastics or yogis.
African spirituality
In some African contexts, spirituality is considered a belief system that guides the welfare of society and the people therein, and eradicates sources of unhappiness occasioned by evil.
In traditional society prior to colonization and extensive introduction to Christianity or Islam, religion was the strongest element in society influencing the thinking and actions of the people. Hence spirituality was a sub-domain of religion. Despite the rapid social, economic and political changes of the last century, traditional religion remains the essential background for many African people. And that religion is a communal given, not an individual choice. Religion gives all of life its meaning and provides ground for action. Each person is "a living creed of his religion". There is no concern for spiritual matters apart from one's physical and communal life. Life continues after death but remains focused on pragmatic family and community matters.
Contemporary spirituality
The term spiritual has frequently become used in contexts in which the term religious was formerly employed. Contemporary spirituality is also called "post-traditional spirituality" and "New Age spirituality". Hanegraaff makes a distinction between two "New Age" movements: New Age in a restricted sense, which originated primarily in mid-twentieth century England and had its roots in Theosophy and Anthroposophy, and "New Age" in a general sense, which emerged in the later 1970s.
Those who speak of spirituality outside of religion often define themselves as spiritual but not religious, and generally believe in the existence of different "spiritual paths", emphasizing the importance of finding one's own individual path to spirituality. According to one 2005 poll, about 24% of the United States population identifies itself as "spiritual but not religious".
Lockwood draws attention to the variety of spiritual experience in the contemporary West:
The new Western spiritual landscape, characterised by consumerism and choice abundance, is scattered with novel religious manifestations based in psychology and the Human Potential Movement, each offering participants a pathway to the Self.
Those who speak of spirituality within religion also recognise the need for spirituality to take on a contemporary form: thus, for example, Pope Francis refers to and reflects on "contemporary devotion" in his encyclical letter Dilexit nos issued in 2024.
Characteristics
Modern spirituality centers on the "deepest values and meanings by which people live". It often embraces the idea of an ultimate or an alleged immaterial reality. It envisions an inner path enabling a person to discover the essence of his or her being.
Not all modern notions of spirituality embrace transcendental ideas. Secular spirituality emphasizes humanistic ideas on moral character (qualities such as love, compassion, patience, tolerance, forgiveness, contentment, responsibility, harmony, and a concern for others). These are aspects of life and human experience which go beyond a purely materialist view of the world without necessarily accepting belief in a supernatural reality or any divine being. Nevertheless, many humanists (e.g. Bertrand Russell, Jean-Paul Sartre) who clearly value the non-material, communal, and virtuous aspects of life reject this usage of the term "spirituality" as being overly-broad (i.e. it effectively amounts to saying "everything and anything that is good and virtuous is necessarily spiritual"). In 1930 Russell, a self-described agnostic renowned as an atheist, wrote "... one's ego is no very large part of the world. The man who can centre his thoughts and hopes upon something transcending self can find a certain peace in the ordinary troubles of life which is impossible to the pure egoist."
Similarly, Aristotle – one of the first known Western thinkers to demonstrate that morality, virtue and goodness can be derived without appealing to supernatural forces – argued that "men create Gods in their own image" (not the other way around). Moreover, theistic and atheistic critics alike dismiss the need for the "secular spirituality" label on the basis that it appears to be nothing more than obscurantism in that:
the term "spirit" is commonly taken as denoting the existence of unseen / otherworldly / life-giving forces; and
words such as "morality", "philanthropy" and "humanism" already efficiently and succinctly describe the prosocial-orientation and civility that the phrase "secular spirituality" is meant to convey but without risking confusion that one is referring to something supernatural.
Although personal well-being, both physical and psychological, is said to be an important aspect of modern spirituality, this does not imply that spirituality is essential to achieving happiness. Free-thinkers who reject notions that the numinous or non-material is important to living well can be just as happy as more spiritually-oriented individuals.
Contemporary proponents of spirituality may suggest that spirituality develops inner peace and forms a foundation for happiness. For example, meditation and similar practices are suggested to help the practitioner cultivate a personal inner life and character. Ellison and Fan (2008) assert that spirituality causes a wide array of positive health outcomes, including "morale, happiness, and life satisfaction". However, Schuurmans-Stekhoven (2013) actively attempted to replicate this research and found more "mixed" results. Nevertheless, spirituality has played a central role in some self-help movements such as Alcoholics Anonymous.
Such spiritually-informed treatment approaches have been challenged as pseudoscience.
Spiritual experience
Spiritual experiences play a central role in modern spirituality. Both western and Asian authors have popularised this notion. Important early-20th century Western writers who studied the phenomenon of spirituality, and their works, include William James, The Varieties of Religious Experience (1902), and Rudolf Otto, especially The Idea of the Holy (1917).
James' notions of "spiritual experience" had a further influence on the modernist streams in Asian traditions, making them even further recognisable for a western audience.
William James popularized the use of the term "religious experience" in his The Varieties of Religious Experience. He has also influenced the understanding of mysticism as a distinctive experience which allegedly grants knowledge.
Wayne Proudfoot traces the roots of the notion of "religious experience" further back to the German theologian Friedrich Schleiermacher (1768–1834), who argued that religion is based on a feeling of the infinite. Schleiermacher used the idea of "religious experience" to defend religion against the growing scientific and secular critique. Many scholars of religion, of whom William James was the most influential, adopted the concept.
Major Asian influences on contemporary spirituality have included Swami Vivekananda (1863–1902) and D. T. Suzuki (1870–1966). Vivekananda popularised a modern syncretic Hinduism, in which an emphasis on personal experience replaced the authority of scriptures. Suzuki had a major influence on the popularisation of Zen in the west and popularized the idea of enlightenment as insight into a timeless, transcendent reality. Other influences came through Paul Brunton's A Search in Secret India (1934), which introduced Ramana Maharshi (1879–1950) and Meher Baba (1894–1969) to a western audience.
Spiritual experiences can include being connected to a larger reality, yielding a more comprehensive self; joining with other individuals or the human community; with nature or the cosmos; or with the divine realm.
Spiritual practices
Kees Waaijman discerns four forms of spiritual practices:
Somatic practices, especially deprivation and diminishment. Deprivation aims to purify the body. Diminishment concerns the rejection of ego-oriented impulses. Examples include fasting and poverty.
Psychological practices, for example meditation.
Social practices. Examples include the practice of obedience and communal ownership, reforming ego-orientedness into other-orientedness.
Spiritual. All practices aim at purifying ego-centeredness and at directing one's abilities toward the divine reality.
Spiritual practices may include meditation, mindfulness, prayer, the contemplation of sacred texts, ethical development, and spiritual retreats in a convent. Love and/or compassion are often described as the mainstay of spiritual development.
Within spirituality is also found "a common emphasis on the value of thoughtfulness, tolerance for breadth and practices and beliefs, and appreciation for the insights of other religious communities, as well as other sources of authority within the social sciences."
Scientific research
Health and well-being
Various studies (most originating from North America) have reported a positive correlation between spirituality and mental well-being in both healthy people and those encountering a range of physical illnesses or psychological disorders. Although spiritual individuals tend to be optimistic, report greater social support, and experience higher intrinsic meaning in life, strength, and inner peace, whether the correlation represents a causal link remains contentious. Both supporters and opponents of this claim agree that past statistical findings are difficult to interpret, in large part because of the ongoing disagreement over how spirituality should be defined and measured. There is also evidence that an agreeable/positive temperament and/or a tendency toward sociability (which all correlate with spirituality) might actually be the key psychological features that predispose people to subsequently adopt a spiritual orientation and that these characteristics, not spirituality per se, add to well-being. There is also some suggestion that the benefits associated with spirituality and religiosity might arise from being a member of a close-knit community. Social bonds available via secular sources (i.e., not unique to spirituality or faith-based groups) might just as effectively raise well-being. In sum, spirituality may not be the "active ingredient" (i.e., past association with psychological well-being measures might reflect a reverse causation or effects from other variables that correlate with spirituality), and the effects of agreeableness, conscientiousness, or virtue – personality traits common in many non-spiritual people yet known to be slightly more common among the spiritual – may better account for spirituality's apparent correlation with mental health and social support.
Intercessionary prayer
Masters and Spielmans conducted a meta-analysis of all the available and reputable research examining the effects of distant intercessory prayer. They found no discernible health effects from being prayed for by others. In fact, one large and scientifically rigorous study by Herbert Benson and colleagues revealed that intercessory prayer had no effect on recovery from cardiac surgery, but patients who were told that people were praying for them actually had an increased risk of medical complications.
Spiritual care in health care professions
In the health-care professions there is growing interest in "spiritual care", to complement the medical-technical approaches and to improve the outcomes of medical treatments. Puchalski et al. argue for "compassionate systems of care" in a spiritual context.
Spiritual experiences
Neuroscientists have examined brain functioning during reported spiritual experiences finding that certain neurotransmitters and specific areas of the brain are involved. Moreover, experimenters have also successfully induced spiritual experiences in individuals by administering psychoactive agents known to elicit euphoria and perceptual distortions. Conversely, religiosity and spirituality can also be dampened by electromagnetic stimulation of the brain. These results have motivated some leading theorists to speculate that spirituality may be a benign subtype of psychosis – benign in the sense that the same aberrant sensory perceptions that those suffering clinical psychoses evaluate as distressingly incongruent and inexplicable are instead interpreted by spiritual individuals as positive (personal and meaningful transcendent experiences).
Measurement
Considerable debate persists about – among other factors – spirituality's relation to religion, the number and content of its dimensions, its relation to concepts of well-being, and its universality. A number of research groups have developed instruments which attempt to measure spirituality quantitatively, including unidimensional (e.g. the Character Strength Inventory—Spirituality and the Daily Spiritual Experiences Scale) and multi-dimensional (e.g. Spiritual Transcendence Scale (STS) and the Brief Multidimensional Measure of Religiousness/Spirituality (BMMRS)) scales. MacDonald et al. gave an "Expressions of Spirituality Inventory" (ESI-R) measuring five dimensions of spirituality to over 4000 persons across eight countries. The study results and interpretation highlighted the complexity and challenges of measurement of spirituality cross-culturally.
See also
Esotericism
Glossary of spirituality terms
Ietsism
Interspirituality
Kardecist spiritism
Multiple religious belonging
Outline of spirituality
Reason
Relationship between religion and science
Sacred–profane dichotomy
Self-actualization
Spiritual activism
Spiritual intelligence
Sublime (philosophy)
Thelema
True Will
Notes
References
Sources
Published sources
Web-sources
Further reading
Belief
Metaphysics of religion
Epistemology of religion | Spirituality | Biology | 7,304 |
1,268,562 | https://en.wikipedia.org/wiki/Purified%20water | Purified water is water that has been mechanically filtered or processed to remove impurities and make it suitable for use. Distilled water was, formerly, the most common form of purified water, but, in recent years, water is more frequently purified by other processes including capacitive deionization, reverse osmosis, carbon filtering, microfiltration, ultrafiltration, ultraviolet oxidation, or electrodeionization. Combinations of a number of these processes have come into use to produce ultrapure water of such high purity that its trace contaminants are measured in parts per billion (ppb) or parts per trillion (ppt).
Purified water has many uses, largely in the production of medications, in science and engineering laboratories and industries, and is produced in a range of purities. It is also used in the commercial beverage industry as the primary ingredient of any given trademarked bottling formula, in order to maintain product consistency. It can be produced on-site for immediate use or purchased in containers. Purified water in colloquial English can also refer to water that has been treated ("rendered potable") to neutralize, but not necessarily remove contaminants considered harmful to humans or animals.
Parameters of water purity
Purified water is usually produced by the purification of drinking water or ground water. The impurities that may need to be removed are:
inorganic ions (typically monitored as electrical conductivity or resistivity or specific tests)
organic compounds (typically monitored as TOC or by specific tests)
bacteria (monitored by total viable counts or epifluorescence)
endotoxins and nucleases (monitored by LAL or specific enzyme tests)
particulates (typically controlled by filtration)
gases (typically managed by degassing when required)
Purification methods
Distillation
Distilled water is produced by a process of distillation. Distillation involves boiling the water and then condensing the vapor into a clean container, leaving solid contaminants behind. Distillation produces very pure water. A white or yellowish mineral scale is left in the distillation apparatus, which requires regular cleaning. Distilled water, like all purified water, must be stored in a sterilized container to guarantee the absence of bacteria. For many procedures, more economical alternatives are available, such as deionized water, and are used in place of distilled water.
Double distillation
Double-distilled water (abbreviated "ddH2O", "Bidest. water" or "DDW") is prepared by a second slow distillation of water that has already been distilled once by slow boiling. Historically, it was the de facto standard for highly purified laboratory water for biochemistry and was used in laboratory trace analysis until combination methods of water purification became widespread.
Deionization
Deionized water (DI water, DIW or de-ionized water), often synonymous with demineralized water / DM water, is water that has had almost all of its mineral ions removed, such as cations like sodium, calcium, iron, and copper, and anions such as chloride and sulfate. Deionization is a chemical process that uses specially manufactured ion-exchange resins, which exchange hydrogen and hydroxide ions for dissolved minerals, and then recombine to form water. Because most non-particulate water impurities are dissolved salts, deionization produces highly pure water that is generally similar to distilled water, with the advantage that the process is quicker and does not build up scale.
However, deionization does not significantly remove uncharged organic molecules, viruses, or bacteria, except by incidental trapping in the resin. Specially made strong base anion resins can remove Gram-negative bacteria. Deionization can be done continuously and inexpensively using electrodeionization.
Three types of deionization exist: co-current, counter-current, and mixed bed.
Co-current deionization
Co-current deionization refers to the original downflow process where both input water and regeneration chemicals enter at the top of an ion-exchange column and exit at the bottom. Co-current operating costs are comparatively higher than counter-current deionization because of the additional usage of regenerants. Because regenerant chemicals are dilute when they encounter the bottom or finishing resins in an ion-exchange column, the product quality is lower than a similarly sized counter-flow column.
The process is still used, and can be maximized with the fine-tuning of the flow of regenerants within the ion exchange column.
Counter-current deionization
Counter-current deionization comes in two forms, each requiring engineered internals:
Upflow columns where input water enters from the bottom and regenerants enter from the top of the ion exchange column.
Upflow regeneration where water enters from the top and regenerants enter from the bottom.
In both cases, separate distribution headers (input water, input regenerant, exit water, and exit regenerant) must be tuned to: the input water quality and flow, the time of operation between regenerations, and the desired product water analysis.
Counter-current deionization is the more attractive method of ion exchange. Chemicals (regenerants) flow in the opposite direction to the service flow. Less time for regeneration is required when compared to co-current columns. The residual impurity level of the finished product can be as low as 0.5 parts per million. The main advantage of counter-current deionization is the low operating cost, due to the low usage of regenerants during the regeneration process.
Mixed bed deionization
Mixed bed deionization is a 40/60 mixture of cation and anion resin combined in a single ion-exchange column. With proper pretreatment, product water purified from a single pass through a mixed bed ion exchange column is the purest that can be made. Most commonly, mixed bed demineralizers are used for final water polishing to clean the last few ions within water prior to use. Small mixed bed deionization units have no regeneration capability. Commercial mixed bed deionization units have elaborate internal water and regenerant distribution systems for regeneration. A control system operates pumps and valves for the regenerants of spent anions and cations resins within the ion exchange column. Each is regenerated separately, then remixed during the regeneration process. Because of the high quality of product water achieved, and because of the expense and difficulty of regeneration, mixed bed demineralizers are used only when the highest purity water is required.
Softening
Softening consists in preventing the possible precipitation of poorly soluble minerals from natural water due to changes occurring in the physico-chemical conditions (such as pCO2, pH, and Eh). It is applied when poorly soluble ions present in water might precipitate as insoluble salts (e.g., CaCO3), or interact with a chemical process. The water is "softened" by exchanging the poorly soluble divalent cations (mainly Ca2+ and Mg2+) for the soluble Na+ cation. Softened water therefore has a higher electrical conductivity than deionized water. Softened water cannot be considered truly demineralized water, but it no longer contains the cations responsible for the hardness of water and for the formation of limescale, a hard chalky deposit essentially consisting of CaCO3 that builds up inside kettles, hot water boilers, and pipework.
Demineralization
In the strict sense, the term demineralization should imply removing all dissolved mineral species from water: not only the dissolved salts removed by simple deionization, but also neutral dissolved species such as dissolved iron hydroxides or dissolved silica, two solutes often present in water. In this way, demineralized water has the same electrical conductivity as deionized water, but is purer because it does not contain non-ionized substances, i.e. neutral solutes. However, demineralized water is often used interchangeably with deionized water and can also be confused with softened water, depending on the exact definition used: removing only the cations susceptible to precipitate as insoluble minerals (hence "demineralization"), or removing all the "mineral species" present in water, and thus not only dissolved ions but also neutral solute species. So, the term demineralized water is vague, and deionized water or softened water should often be preferred in its place for clarity.
Other processes
Other processes are also used to purify water, including reverse osmosis, carbon filtration, microporous filtration, ultrafiltration, ultraviolet oxidation, or electrodialysis. These are used in place of, or in addition to, the processes listed above. Processes rendering water potable but not necessarily closer to being pure H2O / hydroxide + hydronium ions include the use of dilute sodium hypochlorite, ozone, mixed-oxidants (electro-catalyzed H2O + NaCl), and iodine; See discussion regarding potable water treatments under "Health effects" below.
Uses
Purified water is suitable for many applications, including autoclaves, hand-pieces, laboratory testing, laser cutting, and automotive use. Purification removes contaminants that may interfere with processes, or leave residues on evaporation. Although water is generally considered to be a good electrical conductor—for example, domestic electrical systems are considered particularly hazardous to people if they may be in contact with wet surfaces—pure water is a poor conductor. The conductivity of water is measured in siemens per meter (S/m). Seawater is typically 5 S/m, drinking water is typically in the range of 5-50 mS/m, while highly purified water can be as low as 5.5 μS/m (0.055 μS/cm), a ratio of about 1,000,000:1,000:1.
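To put these orders of magnitude side by side, here is a minimal Python sketch; the conductivities are the nominal figures quoted above (taking the lower end of the drinking-water range), not measured data:

# Nominal conductivities from the text, expressed in S/m (illustrative values only).
SEA_WATER = 5.0           # S/m
DRINKING_WATER = 5e-3     # 5 mS/m, lower end of the quoted 5-50 mS/m range
HIGHLY_PURIFIED = 5.5e-6  # 5.5 uS/m = 0.055 uS/cm

def to_microsiemens_per_cm(sigma_s_per_m):
    # 1 S/m = 0.01 S/cm = 1e4 uS/cm
    return sigma_s_per_m * 1e4

for name, sigma in [("sea water", SEA_WATER),
                    ("drinking water", DRINKING_WATER),
                    ("highly purified water", HIGHLY_PURIFIED)]:
    print(f"{name:22s} {sigma:.1e} S/m = {to_microsiemens_per_cm(sigma):.3g} uS/cm")

# Normalising to the purest water reproduces the quoted ~1,000,000 : 1,000 : 1 ratio.
print(round(SEA_WATER / HIGHLY_PURIFIED), ":",
      round(DRINKING_WATER / HIGHLY_PURIFIED), ": 1")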
Purified water is used in the pharmaceutical industry. Water of this grade is widely used as a raw material, ingredient, and solvent in the processing, formulation, and manufacture of pharmaceutical products, active pharmaceutical ingredients (APIs) and intermediates, compendial articles, and analytical reagents. The microbiological content of the water is of importance and the water must be regularly monitored and tested to show that it remains within microbiological control.
Purified water is also used in the commercial beverage industry as the primary ingredient of any given trademarked bottling formula, in order to maintain critical consistency of taste, clarity, and color. This guarantees the consumer a reliably safe and satisfying drink. In the process prior to filling and sealing, individual bottles are always rinsed with deionised water to remove any particles that could cause a change in taste.
Deionised and distilled water are used in lead–acid batteries to prevent erosion of the cells, although deionised water is the better choice as more impurities are removed from the water in the creation process.
Laboratory use
Technical standards on water quality have been established by a number of professional organizations, including the American Chemical Society (ACS), ASTM International, the U.S. National Committee for Clinical Laboratory Standards (NCCLS) which is now CLSI, and the U.S. Pharmacopeia (USP). The ASTM, NCCLS, and ISO 3696 or the International Organization for Standardization classify purified water into Grade 1–3 or Types I–IV depending on the level of purity. These organizations have similar, although not identical, parameters for highly purified water.
Note that the European Pharmacopeia uses Highly Purified Water (HPW) as a definition for water meeting the quality of Water For Injection without, however, having undergone distillation. In the laboratory context, the term highly purified water is used more loosely to denote various grades of water that have been "highly" purified.
Regardless of which organization's water quality norm is used, even Type I water may require further purification depending on the specific laboratory application. For example, water that is being used for molecular-biology experiments needs to be DNase or RNase-free, which requires special additional treatment or functional testing. Water for microbiology experiments needs to be completely sterile, which is usually accomplished by autoclaving. Water used to analyze trace metals may require the elimination of trace metals to a standard beyond that of the Type I water norm.
Criticism
A member of the ASTM D19 (Water) Committee, Erich L. Gibbs, criticized ASTM Standard D1193, by saying "Type I water could be almost anything – water that meets some or all of the limits, part or all of the time, at the same or different points in the production process."
Electrical conductivity
Completely de-gassed ultrapure water has a conductivity of about 5.5 × 10−6 S/m (0.055 μS/cm), whereas on equilibration to the atmosphere it rises to about 7.5 × 10−5 S/m due to dissolved CO2. The highest grades of ultrapure water should not be stored in glass or plastic containers because these container materials leach (release) contaminants at very low concentrations. Storage vessels made of silica are used for less-demanding applications, and vessels of ultrapure tin are used for the highest-purity applications. Although electrical conductivity only indicates the presence of ions, the majority of common contaminants found naturally in water ionize to some degree. This ionization is a good measure of the efficacy of a filtration system, and more expensive systems incorporate conductivity-based alarms to indicate when filters should be refreshed or replaced. For comparison, seawater has a conductivity of roughly 5 S/m (53 mS/cm is often quoted), while normal un-purified tap water may have a conductivity of about 5 × 10−3 S/m (50 μS/cm) (to within an order of magnitude), which is still two to three orders of magnitude higher than the output from a well-functioning demineralizing or distillation system, so low levels of contamination or declining performance are easily detected.
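Laboratory water systems usually quote resistivity rather than conductivity; the two are simply reciprocals (ρ = 1/σ). A short sketch using the figures above as nominal inputs:

def resistivity_megohm_cm(conductivity_s_per_m):
    """Resistivity in megaohm-centimetres from conductivity in S/m (rho = 1/sigma)."""
    rho_ohm_m = 1.0 / conductivity_s_per_m  # ohm-metres
    return rho_ohm_m * 100.0 / 1e6          # ohm-m -> ohm-cm -> megaohm-cm

print(resistivity_megohm_cm(5.5e-6))  # de-gassed ultrapure water: ~18.2 megaohm-cm
print(resistivity_megohm_cm(7.5e-5))  # equilibrated with atmospheric CO2: ~1.3 megaohm-cm
print(resistivity_megohm_cm(5e-3))    # typical tap water: ~0.02 megaohm-cm (20 kilo-ohm-cm)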
Industrial uses
Some industrial processes, notably in the semiconductor and pharmaceutical industries, need large amounts of very pure water. In these situations, feedwater is first processed into purified water and then further processed to produce ultrapure water.
Another class of ultrapure water used in the pharmaceutical industry is called Water For Injection (WFI), typically generated by multiple-effect distillation or vapor-compression distillation of DI or RO-DI water. It has a tighter bacterial limit of 10 CFU per 100 mL, instead of the 100 CFU per mL allowed for purified water under the USP.
Other uses
Distilled or deionized water is commonly used to top up the lead–acid batteries used in cars and trucks and for other applications. The presence of foreign ions commonly found in tap water will drastically shorten the lifespan of a lead–acid battery.
Distilled or deionized water is preferable to tap water for use in automotive cooling systems.
Using deionised or distilled water in appliances that evaporate water, such as steam irons and humidifiers, can reduce the build-up of mineral scale, which shortens appliance life. Some appliance manufacturers say that deionised water is no longer necessary.
Purified water is used in freshwater and marine aquariums. Since it does not contain impurities such as copper and chlorine, it helps to keep fish free from diseases and avoids the build-up of algae on aquarium plants due to its lack of phosphate and silicate. Deionized water should be re-mineralized before use in aquaria since it lacks many macro- and micro-nutrients needed by plants and fish.
Water (sometimes mixed with methanol) has been used to extend the performance of aircraft engines. In piston engines, it acts to delay the onset of engine knocking. In turbine engines, it allows more fuel flow for a given turbine temperature limit and increases mass flow. As an example, it was used on early Boeing 707 models. Advanced materials and engineering have since rendered such systems obsolete for new designs; however, spray-cooling of incoming air-charge is still used to a limited extent with off-road turbo-charged engines (road-race track cars).
Deionized water is very often used as an ingredient in many cosmetics and pharmaceuticals. "Aqua" is the standard name for water in the International Nomenclature of Cosmetic Ingredients standard, which is mandatory on product labels in some countries.
Because of its high relative dielectric constant (~80), deionized water is also used (for short durations, when the resistive losses are acceptable) as a high voltage dielectric in many pulsed power applications, such as the Sandia National Laboratories Z Machine.
Distilled water can be used in PC water-cooling systems and laser marking systems. The lack of impurities in the water means that the system stays clean and prevents a buildup of bacteria and algae. Also, the low conductance reduces the risk of electrical damage in the event of a leak. However, deionized water has been known to cause cracks in brass and copper fittings.
When used as a rinse after washing cars, windows, and similar applications, purified water dries without leaving spots caused by dissolved solutes.
Deionized water is used in water-fog fire-extinguishing systems used in sensitive environments, such as where high-voltage electrical and sensitive electronic equipment is used. The 'sprinkler' nozzles use much finer spray jets than other systems and operate at up to 35 MPa (350 bar; 5,000 psi) of pressure. The extremely fine mist produced takes the heat out of fire rapidly, and the fine droplets of water are nonconducting (when deionized) and are less likely to damage sensitive equipment. Deionized water, however, is inherently acidic, and contaminants (such as copper, dust, stainless and carbon steel, and many other common materials) rapidly supply ions, thus re-ionizing the water. It is not generally considered acceptable to spray water on electrical circuits that are powered, and it is generally considered undesirable to use water in electrical contexts.
Distilled or purified water is used in humidors to prevent cigars from collecting bacteria, mold, and contaminants, as well as to prevent residue from forming on the humidifier material.
Window cleaners using water-fed pole systems also use purified water because it enables the windows to dry by themselves, leaving no stains or smears. The use of purified water from water-fed poles also prevents the need for using ladders and therefore ensures compliance with Work at Height legislation in the UK.
Mineral consumption
Distillation removes all minerals from water, and the membrane methods of reverse osmosis and nanofiltration remove most, or virtually all, minerals. This results in demineralized water, which has not been proven to be healthier than drinking water. The World Health Organization investigated the health effects of demineralized water in 1980, and found that demineralized water increased diuresis and the elimination of electrolytes, with decreased serum potassium concentration. Magnesium, calcium and other nutrients in water may help to protect against nutritional deficiency. Recommendations for magnesium have been put at a minimum of 10 mg/L with 20–30 mg/L optimum; for calcium a 20 mg/L minimum and a 40–80 mg/L optimum, and a total water hardness (adding magnesium and calcium) of 2–4 mmol/L. For fluoride, the concentration recommended for dental health is 0.5–1.0 mg/L, with a maximum guideline value of 1.5 mg/L to avoid dental fluorosis.
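As a worked example of how the calcium and magnesium figures combine into total hardness, here is a minimal sketch (standard molar masses; the 60 mg/L and 25 mg/L inputs are simply mid-range values picked from the recommendations above):

M_CA = 40.08  # molar mass of calcium, g/mol
M_MG = 24.31  # molar mass of magnesium, g/mol

def total_hardness_mmol_per_l(ca_mg_per_l, mg_mg_per_l):
    """Total hardness (calcium plus magnesium) in mmol/L from concentrations in mg/L."""
    return ca_mg_per_l / M_CA + mg_mg_per_l / M_MG

print(total_hardness_mmol_per_l(60, 25))  # ~2.5 mmol/L, within the recommended 2-4 mmol/L range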
Municipal water supplies often add or have trace impurities at levels that are regulated to be safe for consumption. Many of these additional impurities, such as volatile organic compounds, fluoride, and an estimated 75,000+ other chemical compounds, are not removed through conventional filtration; however, distillation and reverse osmosis eliminate nearly all of these impurities.
See also
Artificial seawater
Atmospheric water generator
Electrodeionization
Heavy water
Hydrogen production
Milli-Q water
Ultrapure water
Water for injection
Water ionizer
Water softening
Water purification
References
Liquid water
Distillation
Drinking water
Coolants
Liquid dielectrics
Water supply
Excipients
Filtration | Purified water | Chemistry,Engineering,Environmental_science | 4,272 |
853,778 | https://en.wikipedia.org/wiki/Dispersion%20relation | In the physical sciences and electrical engineering, dispersion relations describe the effect of dispersion on the properties of waves in a medium. A dispersion relation relates the wavelength or wavenumber of a wave to its frequency. Given the dispersion relation, one can calculate the frequency-dependent phase velocity and group velocity of each sinusoidal component of a wave in the medium, as a function of frequency. In addition to the geometry-dependent and material-dependent dispersion relations, the overarching Kramers–Kronig relations describe the frequency-dependence of wave propagation and attenuation.
Dispersion may be caused either by geometric boundary conditions (waveguides, shallow water) or by interaction of the waves with the transmitting medium. Elementary particles, considered as matter waves, have a nontrivial dispersion relation, even in the absence of geometric constraints and other media.
In the presence of dispersion, a wave does not propagate with an unchanging waveform, giving rise to the distinct frequency-dependent phase velocity and group velocity.
Dispersion
Dispersion occurs when sinusoidal waves of different wavelengths have different propagation velocities, so that a wave packet of mixed wavelengths tends to spread out in space. The speed of a plane wave, v, is a function of the wave's wavelength λ:
v = v(λ).
The wave's speed, wavelength, and frequency, f, are related by the identity
v(λ) = λ f(λ).
The function f(λ) expresses the dispersion relation of the given medium. Dispersion relations are more commonly expressed in terms of the angular frequency ω = 2πf and wavenumber k = 2π/λ. Rewriting the relation above in these variables gives
ω(k) = v(k) k,
where we now view f as a function of k. The use of ω(k) to describe the dispersion relation has become standard because both the phase velocity ω/k and the group velocity dω/dk have convenient representations via this function.
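As a sketch of how the two velocities follow from a given ω(k), the snippet below evaluates the phase velocity ω/k exactly and approximates the group velocity dω/dk with a central finite difference, using the purely quadratic relation ω = αk² (a hypothetical dispersive medium chosen only for illustration), for which the group velocity is exactly twice the phase velocity:

ALPHA = 1.0e-3  # arbitrary coefficient for the illustrative relation omega(k) = ALPHA * k**2

def omega(k):
    return ALPHA * k ** 2

def phase_velocity(k):
    return omega(k) / k  # v_p = omega / k

def group_velocity(k, dk=1e-6):
    # central finite-difference approximation of d(omega)/dk
    return (omega(k + dk) - omega(k - dk)) / (2 * dk)

k = 50.0
print(phase_velocity(k))  # ALPHA * k     = 0.05
print(group_velocity(k))  # 2 * ALPHA * k = 0.10, twice the phase velocity: the medium is dispersive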
The plane waves being considered can be described by
A(x, t) = A0 e^(i(kx − ωt)),
where
A is the amplitude of the wave,
A0 = A(0, 0),
x is a position along the wave's direction of travel, and
t is the time at which the wave is described.
Plane waves in vacuum
Plane waves in vacuum are the simplest case of wave propagation: no geometric constraint, no interaction with a transmitting medium.
Electromagnetic waves in vacuum
For electromagnetic waves in vacuum, the angular frequency is proportional to the wavenumber:
ω = ck.
This is a linear dispersion relation, in which case the waves are said to be non-dispersive. That is, the phase velocity and the group velocity are the same:
v = ω/k = dω/dk = c,
and thus both are equal to the speed of light in vacuum, which is frequency-independent.
De Broglie dispersion relations
For de Broglie matter waves the frequency dispersion relation is non-linear:
ω(k) ≈ mc²/ħ + ħk²/(2m).
The equation says the matter wave frequency in vacuum varies with wavenumber (k = 2π/λ) in the non-relativistic approximation. The variation has two parts: a constant part due to the de Broglie frequency of the rest mass (mc²/ħ) and a quadratic part due to kinetic energy.
Derivation
While applications of matter waves occur at non-relativistic velocity, de Broglie applied special relativity to derive his waves.
Starting from the relativistic energy–momentum relation
E² = (pc)² + (mc²)²,
use the de Broglie relations for energy and momentum for matter waves,
E = ħω and p = ħk,
where ω is the angular frequency and k is the wavevector with magnitude |k| = k, equal to the wave number. Divide by ħ and take the square root. This gives the relativistic frequency dispersion relation:
ω(k) = √(k²c² + (mc²/ħ)²).
Practical work with matter waves occurs at non-relativistic velocity. To approximate, we pull out the rest-mass-dependent frequency:
ω = (mc²/ħ) √(1 + (ħk/(mc))²).
Then we see that the factor ħk/(mc) is very small, so for k not too large, we expand the square root and multiply:
ω(k) ≈ mc²/ħ + ħk²/(2m).
This gives the non-relativistic approximation discussed above.
If we start with the non-relativistic Schrödinger equation we will end up without the first, rest mass, term.
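A short numerical sketch of the relativistic relation and its non-relativistic approximation for a free electron (the constants are nominal CODATA-style values and the 1 Å wavelength is an arbitrary illustrative choice):

import math

hbar = 1.0545718e-34  # reduced Planck constant, J*s
m_e = 9.1093837e-31   # electron rest mass, kg
c = 2.99792458e8      # speed of light, m/s

def omega_relativistic(k):
    # omega(k) = sqrt(k^2 c^2 + (m c^2 / hbar)^2), from the energy-momentum relation
    return math.sqrt((k * c) ** 2 + (m_e * c ** 2 / hbar) ** 2)

def omega_nonrelativistic(k):
    # omega(k) ~ m c^2 / hbar + hbar k^2 / (2 m): rest-mass term plus kinetic-energy term
    return m_e * c ** 2 / hbar + hbar * k ** 2 / (2 * m_e)

k = 2 * math.pi / 1e-10  # wavenumber for a 1 angstrom de Broglie wavelength
rel = omega_relativistic(k)
nonrel = omega_nonrelativistic(k)
print((rel - nonrel) / rel)  # relative difference ~ 4e-8: tiny at this energy scale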
Animation: phase and group velocity of electrons. This animation portrays the de Broglie phase and group velocities (in slow motion) of three free electrons traveling over a field 0.4 ångströms in width. The momentum per unit mass (proper velocity) of the middle electron is lightspeed, so that its group velocity is 0.707 c. The top electron has twice the momentum, while the bottom electron has half. Note that as the momentum increases, the phase velocity decreases down to c, whereas the group velocity increases up to c, until the wave packet and its phase maxima move together near the speed of light, whereas the wavelength continues to decrease without bound. Both transverse and longitudinal coherence widths (packet sizes) of such high energy electrons in the lab may be orders of magnitude larger than the ones shown here.
Frequency versus wavenumber
As mentioned above, when the focus in a medium is on refraction rather than absorption—that is, on the real part of the refractive index—it is common to refer to the functional dependence of angular frequency on wavenumber as the dispersion relation. For particles, this translates to a knowledge of energy as a function of momentum.
Waves and optics
The name "dispersion relation" originally comes from optics. It is possible to make the effective speed of light dependent on wavelength by making light pass through a material which has a non-constant index of refraction, or by using light in a non-uniform medium such as a waveguide. In this case, the waveform will spread over time, such that a narrow pulse will become an extended pulse, i.e., be dispersed. In these materials, is known as the group velocity and corresponds to the speed at which the peak of the pulse propagates, a value different from the phase velocity.
Deep water waves
The dispersion relation for deep water waves is often written as
ω = √(gk),
where g is the acceleration due to gravity. Deep water, in this respect, is commonly denoted as the case where the water depth is larger than half the wavelength. In this case the phase velocity is
v_p = ω/k = √(g/k),
and the group velocity is
v_g = dω/dk = v_p/2.
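A quick numeric check of these formulas, a minimal sketch assuming standard gravity, showing both that longer deep-water waves travel faster (the hallmark of dispersion) and that the group velocity is half the phase velocity:

import math

g = 9.81  # standard gravity, m/s^2

def deep_water_speeds(wavelength_m):
    """Return (phase velocity, group velocity) in m/s for a deep-water gravity wave."""
    k = 2 * math.pi / wavelength_m
    omega = math.sqrt(g * k)  # dispersion relation omega = sqrt(g k)
    v_phase = omega / k       # = sqrt(g / k)
    v_group = v_phase / 2     # d(omega)/dk = omega / (2 k)
    return v_phase, v_group

for wavelength in (10.0, 50.0, 200.0):  # metres
    vp, vg = deep_water_speeds(wavelength)
    print(f"wavelength {wavelength:5.0f} m: v_p = {vp:5.1f} m/s, v_g = {vg:5.1f} m/s")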
Waves on a string
For an ideal string, the dispersion relation can be written as
ω = k √(T/μ),
where T is the tension force in the string, and μ is the string's mass per unit length. As for the case of electromagnetic waves in vacuum, ideal strings are thus a non-dispersive medium, i.e. the phase and group velocities are equal and independent (to first order) of vibration frequency.
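For the ideal string, the common phase and group velocity √(T/μ) is easy to evaluate numerically; the tension and linear density below are hypothetical values of roughly guitar-string magnitude:

import math

def string_wave_speed(tension_newton, mass_per_length_kg_per_m):
    # ideal string: omega = k * sqrt(T / mu), so phase and group velocity are both sqrt(T / mu)
    return math.sqrt(tension_newton / mass_per_length_kg_per_m)

v = string_wave_speed(tension_newton=70.0, mass_per_length_kg_per_m=5e-4)  # assumed values
print(v)  # ~374 m/s, independent of frequency (non-dispersive to first order)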
For a nonideal string, where stiffness is taken into account, the dispersion relation is written as
ω² = (T/μ) k² + αk⁴,
where α is a constant that depends on the string.
Electron band structure
In the study of solids, the study of the dispersion relation of electrons is of paramount importance. The periodicity of crystals means that many levels of energy are possible for a given momentum and that some energies might not be available at any momentum. The collection of all possible energies and momenta is known as the band structure of a material. Properties of the band structure define whether the material is an insulator, semiconductor or conductor.
Phonons
Phonons are to sound waves in a solid what photons are to light: they are the quanta that carry it. The dispersion relation of phonons is also non-trivial and important, being directly related to the acoustic and thermal properties of a material. For most systems, the phonons can be categorized into two main types: those whose bands become zero at the center of the Brillouin zone are called acoustic phonons, since they correspond to classical sound in the limit of long wavelengths. The others are optical phonons, since they can be excited by electromagnetic radiation.
Electron optics
With high-energy electrons in a transmission electron microscope, the energy dependence of higher-order Laue zone (HOLZ) lines in convergent beam electron diffraction (CBED) patterns allows one, in effect, to directly image cross-sections of a crystal's three-dimensional dispersion surface. This dynamical effect has found application in the precise measurement of lattice parameters, beam energy, and more recently for the electronics industry: lattice strain.
History
Isaac Newton studied refraction in prisms but failed to recognize the material dependence of the dispersion relation, dismissing the work of another researcher whose measurement of a prism's dispersion did not match Newton's own.
Dispersion of waves on water was studied by Pierre-Simon Laplace in 1776.
The universality of the Kramers–Kronig relations (1926–27) became apparent with subsequent papers on the dispersion relation's connection to causality in the scattering theory of all types of waves and particles.
See also
Ellipsometry
Ultrashort pulse
Waves in plasmas
Notes
References
External links
Poster on CBED simulations to help visualize dispersion surfaces, by Andrey Chuvilin and Ute Kaiser
Angular frequency calculator
Equations of physics | Dispersion relation | Physics,Mathematics | 1,890 |
47,370,141 | https://en.wikipedia.org/wiki/Suillus%20borealis | Suillus borealis is a species of bolete fungus in the family Suillaceae. Found in western North America where it associates with western white pine (Pinus monticola), the fungus was described as new to science in 1965 by mycologists Alexander H. Smith, Harry Delbert Thiers, and Orson K. Miller. It is similar in appearance to Suillus luteus, but unlike in that species, the partial veil does not form a ring on the stipe.
The species is considered to be an excellent edible mushroom.
See also
List of North American boletes
References
External links
borealis
Edible fungi
Fungi described in 1965
Fungi of the United States
Fungi without expected TNC conservation status
Fungus species | Suillus borealis | Biology | 148 |
65,860,227 | https://en.wikipedia.org/wiki/Semiconductor%20industry%20in%20Taiwan | The semiconductor industry, including Integrated Circuit (IC) manufacturing, design, and packaging, forms a major part of Taiwan's IT industry. Due to its strong capabilities in OEM wafer manufacturing and a complete industry supply chain, Taiwan has been able to distinguish itself as a leading microchip manufacturer and dominate the global marketplace. Taiwan’s semiconductor sector accounted for US$115 billion, around 20 percent of the global semiconductor industry. In sectors such as foundry operations, Taiwanese companies account for 50 percent of the world market, with Taiwan Semiconductor Manufacturing Company (TSMC) the biggest player in the foundry market.
Overview
TSMC and United Microelectronics Corporation (UMC) are the two largest contract chipmakers in the world, while MediaTek is the fourth-largest fabless semiconductor company globally. ASE Group is also the world's largest Outsourced Semiconductor Assembly and Test (OSAT) provider.
History
The Taiwanese semiconductor industry got its start in 1974. In 1976 the government convinced RCA to transfer semiconductor technology to Taiwan. Under the direction of Chiang Ching-Kuo the government appointed the Industrial Technology Research Institute (ITRI) to lead the development of the industry with an emphasis on developing commercial products rather than pure scientific advances. ITRI sent four teams of engineers to train at RCA before building a demonstration factory in Taiwan. The demonstration factory was able to achieve higher yields than RCA's fabs in the US. The demonstration factory was spun off by ITRI in 1980 as UMC. UMC received initial investment from both private and public sources.
In 1987, TSMC pioneered the dedicated (pure-play) foundry model, which enabled fabless chip companies and reshaped the global semiconductor industry. From ITRI's first 3-inch wafer fabrication plant built in 1977 and the founding of UMC in 1980, the industry had developed into a world leader with 40 fabs in operation by 2002. In 2007, Taiwan's semiconductor industry overtook that of the United States, becoming second only to Japan's.
The sector output reached US$39 billion in 2009, ranking first in global market share in IC manufacturing, packaging, and testing, and second in IC design. Although the global financial crisis from 2007 to 2010 affected sales and exports, the industry rebounded, with companies posting record profits for 2010. In 2010 Taiwan had the largest share of 300 mm wafer, 90 nm, and 60 nm manufacturing capacities worldwide, and was expected to pass Japan in total IC fab capacity by mid-2011. By 2020, Taiwan was the unmatched leader of the global semiconductor industry, with Taiwan Semiconductor Manufacturing Company (TSMC) alone accounting for more than 50% of the global market.
In the 2020s artificial intelligence processing emerged as a significant demand driver for the Taiwanese semiconductor industry.
Sustainability
The semiconductor industry uses a large portion of the power produced in Taiwan. By 2022, TSMC alone was estimated to consume 7.2% of Taiwan's total power output. Due to pressure from customers and government regulations, the semiconductor industry has been switching to green power. In July 2020 TSMC signed a 20-year deal with Ørsted to buy the entire production of two offshore wind farms under development off Taiwan's west coast. At the time of its signing it was the largest corporate green energy order ever made. Much of the switch to renewable energy has been mandated by Apple Inc., whose primary component suppliers are located in Taiwan.
Cybersecurity
The Taiwanese semiconductor industry is one of the top targets of Chinese intelligence activity abroad.
Geopolitics
Taiwanese TSMC and South Korean rival Samsung have near total control of the leading edge of the semiconductor industry with TSMC significantly ahead of Samsung. This situation in which global production capabilities have been concentrated in just a few selected countries leads to significant geopolitical challenges and contributes heavily to changes in global techno-politics.
Due to its significant position in both the American and Chinese tech industry supply chains, Taiwan has been enmeshed in the technological front of the China–United States trade war and the larger geopolitical conflict between the two powers. The US prohibited companies that use American equipment or intellectual property from exporting products to blacklisted companies in China. This forced Taiwanese semiconductor companies to stop doing business with major Chinese clients like Huawei.
In January 2021 the German government appealed to the Taiwanese government to help persuade Taiwanese semiconductor companies to ramp up production as a global semiconductor shortage was hampering the German economy's recovery from the COVID-19 pandemic. A lack of semiconductors had caused vehicle production lines to be idled, leading German Economy Minister Peter Altmaier to personally reach out to Taiwan's economics affairs minister Wang Mei-hua in an attempt to get Taiwanese semiconductor companies to increase their manufacturing capacity. Similar requests had been made by the United States, the European Union, and Japan. The Taiwanese government and TSMC announced that as much as possible priority would be given to automakers from Taiwan's close geopolitical allies.
In April 2021 the US Government blacklisted seven Chinese supercomputing companies due to alleged involvement in supplying equipment to the People's Liberation Army (PLA), the Chinese military–industrial complex, and weapons of mass destruction (WMD) programs. In response, Taiwanese chipmakers Alchip and TSMC suspended new orders from Chinese supercomputing company Tianjin Phytium Information Technology.
The geopolitical strength of the semiconductor industry is often referred to as Taiwan's "Silicon Shield." According to the New York Times, "Taiwan has relied on its dominance of the microchip industry for its defense," and that, "because its semiconductor industry is so important to Chinese manufacturing and the United States consumer economy, actions that threaten its foundries would be too risky." In 2022 Matthew Pottinger challenged the existence of a Silicon Shield arguing that China does not behave in ways which appear rational to audiences in democratic countries.
Third parties such as the United States have taken international policy measures in an attempt to ensure the longevity of TSMC's manufacturing output. Through policy efforts such as the CHIPS and Science Act, the United States and Taiwanese governments have taken steps to bolster TSMC's manufacturing capability on U.S. soil. Such policy efforts were put in place after geopolitical tensions between the United States and China exposed a potential weak point in the United States' reliance on foreign manufacturing. Notably, TSMC announced plans to build a $12 billion semiconductor manufacturing plant in Arizona, extending its production capabilities beyond Taiwan.
In April 2024, the United States Department of Commerce provided TSMC Arizona with a grant for a total of $6.6 billion in funding under the CHIPS and Science Act. Additionally, the two countries are investing in joint research initiatives and workforce development programs to provide a steady pipeline of skilled workers for the semiconductor industry. TSMC's expansion into the United States has also met significant challenges, particularly at its Arizona plant, which faced a one-year delay to its planned operating date. Some TSMC managers have attributed the plant's troubled development to cultural clashes between TSMC's management and American workers.
See also
Defense industry of Taiwan
Taiwania (supercomputer)
Taiwania 3
References
Semiconductor device fabrication
Semiconductor industry by country
Industry in Taiwan | Semiconductor industry in Taiwan | Materials_science | 1,455 |
17,543,768 | https://en.wikipedia.org/wiki/Pseudo-order | In constructive mathematics, pseudo-order is a name given to certain binary relations appropriate for modeling continuous orderings.
In classical mathematics, its axioms constitute a formulation of a strict total order (also called linear order), which in that context can also be defined in other, equivalent ways.
Examples
The constructive theory of the real numbers is the prototypical example where the pseudo-order formulation becomes crucial. A real number is less than another if there exists (one can construct) a rational number greater than the former and less than the latter. In other words, here x < y holds if there exists a rational number z such that x < z < y.
Notably, for the continuum in a constructive context, the usual trichotomy law does not hold, i.e. it is not automatically provable. The axioms in the characterization of orders like this are thus weaker (when working using just constructive logic) than alternative axioms of a strict total order, which are often employed in the classical context.
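The following sketch (an illustration added here, not part of the article) makes the rational-witness reading of x < y concrete: a real is represented by a function returning rational approximations accurate to within 1/n, and the comparison is affirmed only by exhibiting a rational strictly between the two numbers. The helper names and the precision bound are assumptions made for the example.

```python
from fractions import Fraction

# A "constructive real" is modeled as a function x(n) returning a Fraction with |x(n) - x| <= 1/n.
def sqrt2(n):
    """Rational approximation of sqrt(2), accurate to within 1/n, by bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, n):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid < 2 else (lo, mid)
    return lo

def const(q):
    """The constructive real corresponding to a fixed rational q."""
    return lambda n: Fraction(q)

def less_than(x, y, max_n=2**20):
    """Affirm x < y by producing a rational witness z with x < z < y.
    The search may fail to terminate within max_n, reflecting the fact that
    comparison of constructive reals is not decidable in general."""
    n = 1
    while n <= max_n:
        xn, yn = x(n), y(n)
        if yn - xn > Fraction(2, n):      # approximations separated by more than both error bars
            return (xn + yn) / 2          # a rational strictly between x and y
        n *= 2
    return None

print(less_than(const(Fraction(7, 5)), sqrt2))    # 7/5 < sqrt(2): a witness is printed
print(less_than(sqrt2, const(Fraction(3, 2))))    # sqrt(2) < 3/2: a witness is printed
```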
Definition
A pseudo-order is a binary relation "<" satisfying the three conditions:
It is not possible for two elements to each be less than the other. That is, for all x and y, ¬(x < y ∧ y < x).
Every two elements for which neither one is less than the other must be equal. That is, for all x and y, ¬(x < y ∨ y < x) → x = y.
For all x, y, and z, if x < z then either x < y or y < z. That is, for all x, y and z, x < z → (x < y ∨ y < z).
Auxiliary notation
There are common constructive reformulations making use of contrapositions and the valid equivalences ¬(A ∨ B) ↔ (¬A ∧ ¬B) as well as (A → ¬B) ↔ ¬(A ∧ B). The negation of the pseudo-order of two elements defines a reflexive partial order, x ≤ y := ¬(y < x). In these terms, the first condition reads
x < y → x ≤ y, and it really just expresses the asymmetry of "<". It implies irreflexivity, ¬(x < x), as familiar from the classical theory.
Classical equivalents to trichotomy
The second condition exactly expresses the anti-symmetry of the associated partial order: (x ≤ y ∧ y ≤ x) → x = y.
With the above two reformulations, the negation signs may be hidden in the definition of a pseudo-order.
A natural apartness relation on a pseudo-ordered set is given by x # y := (x < y) ∨ (y < x). With it, the second condition exactly states that this relation is tight: ¬(x # y) → x = y.
Together with the first axiom, this means equality can be expressed as negation of apartness. Note that the negation of equality is in general merely the double-negation of apartness.
Now the disjunctive syllogism may be expressed as . Such a logical implication can classically be reversed, and then this condition exactly expresses trichotomy. As such, it is also a formulation of connectedness.
Discussion
Asymmetry
The non-contradiction principle for the partial order states that ¬((x ≤ y) ∧ ¬(x ≤ y)), or equivalently ¬¬((x ≤ y) ∨ ¬(x ≤ y)), for all elements. Constructively, the validity of the double-negation exactly means that there cannot be a refutation of any of the disjunctions in the classical claim (x ≤ y) ∨ ¬(x ≤ y), whether or not this proposition represents a decidable problem.
Using the asymmetry condition, the above also implies ¬¬((x ≤ y) ∨ (y ≤ x)), the double-negated strong connectedness. In a classical logic context, "≤" thus constitutes a (non-strict) total order.
Co-transitivity
The contrapositive of the third condition exactly expresses that the associated relation "≤" (the partial order) is transitive. So that property is called co-transitivity. Using the asymmetry condition, one quickly derives the theorem that a pseudo-order is actually transitive as well. Transitivity is a common axiom in the classical definition of a linear order.
The third condition is also called comparison (as well as weak linearity): for any nontrivial interval given by some x and some z above it, any third element y is either above the lower bound (x < y) or below the upper bound (y < z). Since this is an implication of a disjunction, it ties to the trichotomy law as well. And indeed, having a pseudo-order on a Dedekind-MacNeille-complete poset implies the principle of excluded middle. This impacts the discussion of completeness in the constructive theory of the real numbers.
Relation to other properties
This section assumes classical logic. At least then, the following properties can be proven:
If R is a co-transitive relation, then
R is also quasitransitive;
R satisfies axiom 3 of semiorders;
incomparability w.r.t. R is a transitive relation; and
R is connex iff it is reflexive.
Sufficient conditions for a co-transitive relation R to be transitive also are:
R is left Euclidean;
R is right Euclidean;
R is antisymmetric.
A semi-connex relation R is also co-transitive if it is symmetric, left or right Euclidean, transitive, or quasitransitive. If incomparability w.r.t. R is a transitive relation, then R is co-transitive if it is symmetric, left or right Euclidean, or transitive.
See also
Constructive analysis
Indecomposability (intuitionistic logic)
Linear order
Notes
References
Constructivism (mathematics)
Order theory | Pseudo-order | Mathematics | 1,034 |
2,986,919 | https://en.wikipedia.org/wiki/Direct-conversion%20receiver | A direct-conversion receiver (DCR), also known as a homodyne, synchrodyne, zero intermediate frequency or zero-IF receiver, is a radio receiver design that demodulates the incoming radio signal using synchronous detection driven by a local oscillator whose frequency is identical to, or very close to the carrier frequency of the intended signal. (This contrasts with the standard superheterodyne receiver, which uses an initial conversion to an intermediate frequency.)
The simplification of performing only a single frequency conversion reduces the basic circuit complexity but other issues arise, for instance, regarding dynamic range. In its original form it was unsuited to receiving AM and FM signals without implementing an elaborate phase locked loop. Although these and other technical challenges made this technique rather impractical around the time of its invention (1930s), current technology, and software radio in particular, have revived its use in certain areas including some consumer products.
Principle of operation
The conversion of the modulated signal to baseband is done in a single frequency conversion. This avoids the complexity of the superheterodyne's two (or more) frequency conversions, IF stage(s), and image rejection issues.
The received radio frequency signal is fed directly into a frequency mixer, just as in a superheterodyne receiver. However unlike the superheterodyne, the frequency of the local oscillator is not offset from, but identical to, the received signal's frequency. The result is a demodulated output just as would be obtained from a superheterodyne receiver using synchronous detection (a product detector) following an intermediate frequency (IF) stage.
Technical issues
To match the performance of the superheterodyne receiver, a number of the functions normally addressed by the IF stage must be accomplished at baseband. Since there is no high gain IF amplifier utilizing automatic gain control (AGC), the baseband output level may vary over a very wide range dependent on the received signal strength. This is one major technical challenge which limited the practicability of the design. Another issue is the inability of this design to implement envelope detection of AM signals. Thus direct demodulation of AM or FM signals (as used in broadcasting) requires phase locking the local oscillator to the carrier frequency, a much more demanding task compared to the more robust envelope detector or ratio detector at the output of an IF stage in a superheterodyne design. However this can be avoided in the case of a direct-conversion design using quadrature detection followed by digital signal processing. Using software radio techniques, the two quadrature outputs can be processed in order to perform any sort of demodulation and filtering on down-converted signals from frequencies close to the local oscillator frequency. The proliferation of digital hardware, along with refinements in the analog components involved in the frequency conversion to baseband, has thus made this simpler topology practical in many applications.
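A toy numerical sketch of this zero-IF quadrature scheme (not from the article; all frequencies and the crude moving-average filter are arbitrary choices): an AM signal is mixed with cosine and sine local-oscillator waveforms at exactly the carrier frequency, low-pass filtered, and the message recovered from the complex baseband magnitude.

```python
import numpy as np

fs, fc, fm = 100_000.0, 10_000.0, 200.0              # sample, carrier and message rates (assumed)
t = np.arange(0, 0.05, 1 / fs)
message = 1.0 + 0.5 * np.sin(2 * np.pi * fm * t)      # AM envelope, kept positive
rf = message * np.cos(2 * np.pi * fc * t)             # received AM signal

# Zero-IF quadrature mixing: local oscillator at exactly the carrier frequency.
i = rf * np.cos(2 * np.pi * fc * t)
q = rf * -np.sin(2 * np.pi * fc * t)

def lowpass(x, taps=99):
    """Crude moving-average low-pass filter, enough to reject the 2*fc mixing product."""
    return np.convolve(x, np.ones(taps) / taps, mode="same")

baseband = lowpass(i) + 1j * lowpass(q)
recovered = 2.0 * np.abs(baseband)                    # envelope; factor 2 undoes the mixing loss

error = np.max(np.abs(recovered[500:-500] - message[500:-500]))
print(f"worst-case envelope recovery error (away from the edges): {error:.3f}")
```

Because the demodulation here happens digitally on the two baseband channels, the same structure could in principle recover other modulations by processing the phase of the complex baseband instead of its magnitude.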
History and applications
The homodyne was developed in 1932 by a team of British scientists searching for a design to surpass the superheterodyne (two stage conversion model). The design was later renamed the "synchrodyne". Not only did it have superior performance due to the single conversion stage, but it also had reduced circuit complexity and power consumption. The design suffered from the thermal drift of the local oscillator which changed its frequency over time. To counteract this drift, the frequency of the local oscillator was compared with the broadcast input signal by a phase detector. This produced a correction voltage which would vary the local oscillator frequency keeping it in lock with the wanted signal. This type of feedback circuit evolved into what is now known as a phase-locked loop. While the method has existed for several decades, it had been difficult to implement due largely to component tolerances, which must be of small variation for this type of circuit to function successfully.
Advantages
Unwanted by-product beat signals from the mixing stage do not need any further processing, as they are completely rejected by use of a low-pass filter at the audio output stage. The receiver design has the additional advantage of high selectivity, and is therefore a precision demodulator. The design principles can be extended to permit separation of adjacent channel broadcast signals whose sidebands may overlap the wanted transmission. The design also improves the detection of pulse-modulated transmission mode signals.
Disadvantages
Signal leakage paths can occur in the receiver. The high audio frequency gain required can result in difficulty in rejecting mains hum. Local-oscillator energy can leak through the mixer stage to the antenna input and then reflect back into the mixer stage. The overall effect is that the local oscillator energy will self-mix and create a DC offset signal. The offset may be large enough to overload the baseband amplifiers and prevent receiving the wanted signal. There are design modifications that deal with this issue, but they add to the complexity of the receiver. The additional design complexity often outweighs the benefits of a direct-conversion receiver.
Modern usage
Wes Hayward and Dick Bingham's 1968 article brought new interest in direct-conversion designs.
The development of the integrated circuit and incorporation of complete phase-locked loop devices in low-cost IC packages made this design widely accepted. Usage is no longer limited to the reception of AM radio signals, but also finds use in processing more complex modulation methods. Direct-conversion receivers are now incorporated into many receiver applications, including cellphones, pagers, televisions, avionics, medical imaging apparatus and software-defined radio systems.
See also
Crystal radio
Harmonic mixer
Heterodyne
Heterodyne detection
Homodyne detection
IQ imbalance, a problem affecting direct-conversion receivers
Low IF receiver
Neutrodyne
Reflectional receiver
Regenerative radio receiver
Tuned radio frequency receiver
References
External links
The History of the Homodyne and Syncrodyne The Journal of the British Institution of Radio Engineers, April 1954
, "Wireless Signaling" (heterodyne principle) – 12 August 1902 - by Reginald Fessenden
Radio electronics
Receiver (radio) | Direct-conversion receiver | Engineering | 1,259 |
17,086,606 | https://en.wikipedia.org/wiki/Kikuchi%20lines%20%28physics%29 | Kikuchi lines are patterns of electrons formed by scattering. They pair up to form bands in electron diffraction from single crystal specimens, there to serve as "roads in orientation-space" for microscopists uncertain of what they are looking at. In transmission electron microscopes, they are easily seen in diffraction from regions of the specimen thick enough for multiple scattering. Unlike diffraction spots, which blink on and off as one tilts the crystal, Kikuchi bands mark orientation space with well-defined intersections (called zones or poles) as well as paths connecting one intersection to the next.
Experimental and theoretical maps of Kikuchi band geometry, as well as their direct-space analogs e.g. bend contours, electron channeling patterns, and fringe visibility maps are increasingly useful tools in electron microscopy of crystalline and nanocrystalline materials. Because each Kikuchi line is associated with Bragg diffraction from one side of a single set of lattice planes, these lines can be labeled with the same Miller or reciprocal-lattice indices that are used to identify individual diffraction spots. Kikuchi band intersections, or zones, on the other hand are indexed with direct-lattice indices i.e. indices which represent integer multiples of the lattice basis vectors a, b and c.
Kikuchi lines are formed in diffraction patterns by diffusely scattered electrons, e.g. as a result of thermal atom vibrations. The main features of their geometry can be deduced from a simple elastic mechanism proposed in 1928 by Seishi Kikuchi, although the dynamical theory of diffuse inelastic scattering is needed to understand them quantitatively.
In x-ray scattering, these lines are referred to as Kossel lines (named after Walther Kossel).
Recording experimental Kikuchi patterns and maps
The figure on the left shows the Kikuchi lines leading to a silicon [100] zone, taken with the beam direction approximately 7.9° away from the zone along the (004) Kikuchi band. The dynamic range in the image is so large that only portions of the film are not overexposed. Kikuchi lines are much easier to follow with dark-adapted eyes on a fluorescent screen, than they are to capture unmoving on paper or film, even though eyes and photographic media both have a roughly logarithmic response to illumination intensity. Fully quantitative work on such diffraction features is therefore assisted by the large linear dynamic range of CCD detectors.
This image subtends an angular range of over 10° and required use of a shorter than usual camera length L. The Kikuchi band widths themselves (roughly λL/d where λ/d is approximately twice the Bragg angle for the corresponding plane) are well under 1°, because the wavelength λ of electrons (about 1.97 picometres in this case) is much less than the lattice plane d-spacing itself. For comparison, the d-spacing for silicon (022) is about 192 picometres while the d-spacing for silicon (004) is about 136 picometres.
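The quoted numbers can be checked with a short calculation (an added illustration, not part of the article); the beam energy of 300 keV is an assumption consistent with the wavelength cited above.

```python
import math

h = 6.62607015e-34       # Planck constant, J*s
c = 299792458.0          # speed of light, m/s
m0 = 9.1093837015e-31    # electron rest mass, kg
e = 1.602176634e-19      # elementary charge, C

def electron_wavelength(kilovolts):
    """Relativistically corrected de Broglie wavelength (in metres) of an electron
    accelerated through the given potential."""
    E = kilovolts * 1e3 * e
    p = math.sqrt(2.0 * m0 * E * (1.0 + E / (2.0 * m0 * c**2)))
    return h / p

lam = electron_wavelength(300)                         # assumed 300 keV beam
print(f"electron wavelength: {lam * 1e12:.2f} pm")     # ~1.97 pm, matching the text

for name, d in [("Si (022)", 192e-12), ("Si (004)", 136e-12)]:
    width = math.degrees(lam / d)                      # angular width ~ lambda/d ~ twice the Bragg angle
    print(f"{name}: Kikuchi band angular width ~ {width:.2f} degrees")   # both well under 1 degree
```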
The image was taken from a region of the crystal which is thicker than the inelastic mean free path (about 200 nanometres), so that diffuse scattering features (the Kikuchi lines) would be strong in comparison to coherent scattering features (diffraction spots). The fact that surviving diffraction spots appear as disks intersected by bright Kikuchi lines means that the diffraction pattern was taken with a convergent electron beam. In practice, Kikuchi lines are easily seen in thick regions of either selected area or convergent beam electron diffraction patterns, but difficult to see in diffraction from crystals much less than 100 nm in size (where lattice-fringe visibility effects become important instead). This image was recorded in convergent beam, because that too reduces the range of contrasts that have to be recorded on film.
Compiling Kikuchi maps which cover more than a steradian requires that one take many images at tilts changed only incrementally (e.g. by 2° in each direction). This can be tedious work, but may be useful when investigating a crystal with unknown structure as it can clearly unveil the lattice symmetry in three dimensions.
Kikuchi line maps and their stereographic projection
The figure on the left plots Kikuchi lines for a larger section of silicon's orientation space. The angle subtended between the large [011] and [001] zones at the bottom is 45° for silicon. Note that four-fold zone in the lower right (here labeled [001]) has the same symmetry and orientation as the zone labeled [100] in the experimental pattern above, although that experimental pattern only subtends about 10°.
Note also that the figure at left is excerpted from a stereographic projection centered on that [001] zone. Such conformal projections allow one to map pieces of spherical surface onto a plane while preserving the local angles of intersection, and hence zone symmetries. Plotting such maps requires that one be able to draw arcs of circles with a very large radius of curvature. The figure at left, for example, was drawn before the advent of computers and hence required the use of a beam compass. Finding a beam compass today might be fairly difficult, since it is much easier to draw curves having a large radius of curvature (in two or three dimensions) with help from a computer.
The angle-preserving effect of stereographic plots is even more obvious in the figure at right, which subtends a full 180° of the orientation space of a face-centered or cubic close packed crystal e.g. like that of Gold or Aluminum. The animation follows {220} fringe-visibility bands of that face-centered cubic crystal between <111> zones, at which point rotation by 60° sets up travel to the next <111> zone via a repeat of the original sequence. Fringe-visibility bands have the same global geometry as do Kikuchi bands, but for thin specimens their width is proportional (rather than inversely proportional) to d-spacing. Although the angular field width (and tilt range) obtainable experimentally with Kikuchi bands is generally much smaller, the animation offers a wide-angle view of how Kikuchi bands help informed crystallographers find their way between landmarks in the orientation space of a single crystal specimen.
Real space analogs
Kikuchi lines serve to highlight edge-on lattice planes in diffraction images of thicker specimens. Because Bragg angles in the diffraction of high energy electrons are very small (~ degrees for 300 keV), Kikuchi bands are quite narrow in reciprocal space. This also means that in real space images, lattice planes edge-on are decorated not by diffuse scattering features but by contrast associated with coherent scattering. These coherent scattering features include added diffraction (responsible for bend contours in curved foils), more electron penetration (which gives rise to electron channeling patterns in scanning electron images of crystal surfaces), and lattice fringe contrast (which results in a dependence of lattice fringe intensity on beam orientation which is linked to specimen thickness). Although the contrast details differ, the lattice plane trace geometry of these features and of Kikuchi maps are the same.
Bend contours and rocking curves
Rocking curves (left) are plots of scattered electron intensity, as a function of the angle between an incident electron beam and the normal to a set of lattice planes in the specimen. As this angle changes in either direction from edge-on (at which orientation the electron beam runs parallel to the lattice planes and perpendicular to their normal), the beam moves into Bragg diffracting condition and more electrons are diffracted outside the microscope's back focal plane aperture, giving rise to the dark-line pairs (bands) seen in the image of the bent silicon foil shown in the image on the right.
The [100] bend contour "spider" of this image, trapped in a region of silicon that was shaped like an oval watchglass less than a micrometre in size, was imaged with 300 keV electrons. If you tilt the crystal, the spider moves toward the edges of the oval as though it is trying to get out. For example, in this image the spider's [100] intersection has moved to the right side of the ellipse as the specimen was tilted to the left.
The spider's legs, and their intersections, can be indexed as shown in precisely the same way as the Kikuchi pattern near [100] in the section on experimental Kikuchi patterns above. In principle, one could therefore use this bend contour to model the foil's vector tilt (with milliradian accuracy) at all points across the oval.
Lattice fringe visibility maps
As you can see from the rocking curve above, as specimen thickness moves into the 10 nanometre and smaller range (e.g. for 300 keV electrons and lattice spacings near 0.23 nm) the angular range of tilts that give rise to diffraction and/or lattice-fringe contrast becomes inversely proportional to specimen thickness. The geometry of lattice-fringe visibility therefore becomes useful in the electron microscope study of nanomaterials, just as bend contours and Kikuchi lines are useful in the study of single crystal specimens (e.g. metals and semiconductor specimens with thickness in the tenth-micrometre range). Applications to nanostructure for example include: (i) determining the 3D lattice parameters of individual nanoparticles from images taken at different tilts, (ii) fringe fingerprinting of randomly oriented nanoparticle collections, (iii) particle thickness maps based on fringe contrast changes under tilt, (iv) detection of icosahedral twins from the lattice image of a randomly oriented nanoparticle, and (v) analysis of orientation relationships between nanoparticles and a cylindrical support.
Electron channeling patterns
The above techniques all involve detection of electrons which have passed through a thin specimen, usually in a transmission electron microscope. Scanning electron microscopes, on the other hand, typically look at electrons "kicked up" when one rasters a focussed electron beam across a thick specimen. Electron channeling patterns are contrast effects associated with edge-on lattice planes that show up in scanning electron microscope secondary and/or backscattered electron images.
The contrast effects are to first order similar to those of bend contours, i.e. electrons which enter a crystalline surface under diffracting conditions tend to channel (penetrate deeper into the specimen without losing energy) and thus kick up fewer electrons near the entry surface for detection. Hence bands form, depending on beam/lattice orientation, with the now-familiar Kikuchi line geometry.
The first scanning electron microscope (SEM) image was an image of electron channeling contrast in silicon steel. However, practical uses for the technique are limited because only a thin layer of abrasion damage or amorphous coating is generally adequate to obscure the contrast. If the specimen had to be given a conductive coating before examination to prevent charging, this too could obscure the contrast. On cleaved surfaces, and surfaces self-assembled on the atomic scale, electron channeling patterns are likely to see growing application with modern microscopes in the years ahead.
See also
Electron diffraction
Electron backscatter diffraction (EBSD)
References
External links
Calculate patterns with WebEMApS at UIUC.
Some interactive 3D maps at UM Saint Louis.
Calculate Kikuchi map or patterns with free software PTCLab .
Diffraction
Electron microscopy | Kikuchi lines (physics) | Physics,Chemistry,Materials_science | 2,400 |
30,155,978 | https://en.wikipedia.org/wiki/God%20helps%20those%20who%20help%20themselves | The phrase "God helps those who help themselves" is a motto that emphasizes the importance of self-initiative and agency. The phrase originated in ancient Greece as "the gods help those who help themselves" and may originally have been proverbial. It is illustrated by two of Aesop's Fables and a similar sentiment is found in ancient Greek drama. Although it has been commonly attributed to Benjamin Franklin, the modern English wording appears earlier in Algernon Sidney's work.
The phrase is often mistaken as a scriptural quote, though it is not stated in the Bible. Some Christians consider the expression contrary to the biblical message of God's grace and help for the helpless, and its denunciation of greed and selfishness. A variant of the phrase is addressed in the Quran (13:11).
Origin
Ancient Greece
The sentiment appears in several ancient Greek tragedies. Sophocles, in his Philoctetes (c. 409 BC), wrote, "No good e'er comes of leisure purposeless; And heaven ne'er helps the men who will not act."
Euripides, in the fragmentary Hippolytus Veiled (before 428 BC), mentions that, "Try first thyself, and after call in God; For to the worker God himself lends aid." In his Iphigeneia in Tauris, Orestes says, "I think that Fortune watcheth o'er our lives, surer than we. But well said: he who strives will find his gods strive for him equally."
A similar version of this saying "God himself helps those who dare," better translated as "divinity helps those who dare" (audentes deus ipse iuvat), comes from Ovid's Metamorphoses, 10.586. The phrase is spoken by Hippomenes when contemplating whether to enter a foot race against Atalanta for her hand in marriage. If Hippomenes were to lose, however, he would be killed. Hippomenes decides to challenge Atalanta to a race and, with the aid of Venus, Hippomenes was able to win the race.
The same concept is found in the fable of Hercules and the Wagoner, first recorded by Babrius in the 1st century AD. In it, a wagon falls into a ravine, or in later versions becomes mired, but when its driver appeals to Hercules for help, he is told to get to work himself. Aesop is also credited with a similar fable about a man who calls on the goddess Athena for help when his ship is wrecked and is advised to try swimming first. It has been conjectured that both stories were created to illustrate an already existing proverb.
The French author Jean de La Fontaine also adapted the first of these fables as Le chartier embourbé (Fables VI.18) and draws the moral Aide-toi, le ciel t'aidera (Help yourself and Heaven will help you too). A little earlier, George Herbert had included "Help thyself, and God will help thee" in his proverb collection, Jacula Prudentum (1651). But it was the English political theorist Algernon Sidney who originated the now familiar wording, "God helps those who help themselves", apparently the first exact rendering of the phrase. Benjamin Franklin later used it in his Poor Richard's Almanack (1736), and it has been widely quoted ever since.
Old Testament
Several passages within the Tanakh imply a predisposition toward blessing for those who work for themselves, including:
– The Lord will send a blessing on your barns and on everything you put your hand to.
– A little sleep, a little slumber, a little folding of the hands to rest—and poverty will come on you like a bandit and scarcity like an armed man.
– He who works his land will have abundant food, but he who chases fantasies lacks judgment.
– Diligent hands will rule, but laziness ends in slave labor.
– The sluggard craves and gets nothing, but the desires of the diligent are fully satisfied.
– The horse is made ready for the day of battle, but victory rests with the Lord.
New Testament
While the term does not appear verbatim in Christian scriptures, these passages are used to suggest an ethic of personal agency, and taking initiative:
– Whatever you do, work at it with all your heart, as working for the Lord, not for men.
– If anyone does not provide for his relatives, and especially for his immediate family, he has denied the faith and is worse than an unbeliever.
– For as the body without the spirit is dead, so faith without works is dead also.
Reliance upon God is not mentioned, but is strongly implied in addition to helping one's self.
There is also a relationship to the Parable of the Faithful Servant, and the Parable of the Ten Virgins, which has a similar eschatological theme: be prepared for the day of reckoning.
Conversely with agency, in other instances the Bible emphasises reliance on God and examples of Jesus serving or healing those who lacked the ability to help themselves, implying that self-reliance and reliance on God are complementary (See Mark 6:34; Mark 1:30–31; and Mark 10:46–52.)
Islamic texts
A passage with similar sentiments can be found in the Quran:
It has a different meaning in that it implies that help in oneself is a prerequisite for expecting the help of God. An Arab proverb and reported saying of the Islamic prophet Muhammad with a similar meaning is "Trust in God But Tie Your Camel". According to Tirmidhi, one day Muhammad noticed a Bedouin leaving his camel without tying it. He asked the Bedouin, "Why don't you tie down your camel?" The Bedouin answered, "I placed my trust in Allah." At that, Muhammad said, "Tie your camel and place your trust in Allah."
Chinese idiom
The Chinese idiom 天道酬勤 (pinyin: tiān dào choú qín) also expresses a similar meaning, that "Heaven rewards the diligent".
Other historical uses
The French society Aide-toi, le ciel t'aidera (Help yourself and Heaven will help you too) played an important role in bringing about the July Revolution of 1830 in France.
The Canadian society Aide-toi, le Ciel t’aidera, founded by Louis-Victor Sicotte, is credited with introducing the celebration of Saint-Jean-Baptiste Day for French Canadians.
Aide-toi et Dieu t'aidera (Help yourself, and God will help you) was the motto on the ship's wheel of the famous UK-built Confederate sea raider CSS Alabama, captained by Raphael Semmes during the American Civil War.
Contemporary views and controversy
The belief that this is a phrase that occurs in the Bible, or is even one of the Ten Commandments, is common in the United States. The beliefs of Americans regarding this phrase and the Bible have been studied by Christian demographer and pollster George Barna. To the statement "The Bible teaches that God helps those who help themselves", across a series of polls, 53% of Americans agree strongly, 22% agree somewhat, 7% disagree somewhat, 14% disagree strongly, and 5% stated they don't know. A poll in the late 1990s showed the majority (81%) believe the concept is taught by the Bible, another stating 82%, with "born-again" Christians less (68%) likely to agree than non "born-again" Christians (81%). Despite not appearing in the Bible, the phrase topped a poll of the most widely known Bible verses. Five percent of American teenagers said they believed that it was the central message of the Bible.
Barna sees this as evidence of Americans' growing unfamiliarity with the Bible and believes that it reflects a shift to values conflicting with the doctrine of Grace in Christianity and "suggests a spiritual self-reliance inconsistent with Christianity". Christian minister Erwin Lutzer argues there is some support for this saying in the Bible; however, much more often God helps those who cannot help themselves, which is what grace is about (the parable of the Pharisee and the Publican). The statement is often criticised as espousing a Semi-Pelagian model of salvation, which most Christians denounce as heresy.
See also
Trust in God and keep your powder dry, a similar exhortation from Oliver Cromwell to his troops
Praise the Lord and Pass the Ammunition, a similar exhortation in World War II
Parable of the drowning man, modern story often told with this as its moral
References
External links
– a negative form
Motivation
Greek proverbs
Quotations from literature
Quotations from religion | God helps those who help themselves | Biology | 1,836 |
246,034 | https://en.wikipedia.org/wiki/International%20Astronautical%20Federation | The International Astronautical Federation (IAF) is an international space advocacy organization based in Paris, and founded in 1951 as a non-governmental organization to establish a dialogue between scientists around the world and to lay the information for international space cooperation. It has over 390 members from 68 countries across the world. They are drawn from space agencies, companies, universities, professional associations, museums, government organizations and learned societies. The IAF organizes the annual International Astronautical Congress (IAC). Pascale Ehrenfreund has served as the president of the IAF.
History
After World War II, Heinz Gartmann, Gunter Loeser, and Heinz-Hermann Koelle formed the German Rocket Society. They contacted the British Interplanetary Society (BIS) and Groupement Astronautique Français. The French group's leader, Alexandre Ananoff, organized the First International Congress for Astronautics in Paris, France, in September 1950. At the second congress in London, United Kingdom, in September 1951, the International Astronautical Federation (IAF) was organized; at the third congress in Stuttgart, West Germany, in 1952, the IAF constitution was adopted and the organization registered under Swiss Law.
Events
International Astronautical Federation Congress (IAC)
The IAC is a space event and the largest put on by the organization, with approximately 6,000 participants each year. A different member of the IAF is selected each year to host the IAC. An annual event held in September or October, the congress includes "networking events, talks, and a technical program on advances in science and exploration, applications and operations, technology, infrastructure, and space and society." There are side events, including the annual IAF Workshop held with the support of the United Nations, which takes place during the two days preceding the IAC.
IAF Global Conference
The IAF Global Conferences are organized annually. Each year they have a specific space-related topic and theme, and are held in alternating or new locations.
Global Lunar Exploration Conference (GLUC 2010) in Beijing
Global Space Exploration Conference (GLEX 2012) in Washington, D.C.
Global Space Applications Conference (GLAC 2014) in Paris
Global Space Innovation Conference (GLIC 2015) in Munich
Global Conference on Space and the Information Society (GLIS 2016) in Geneva
Global Space Exploration Conference (GLEX 2017) in Beijing
Global Space Applications Conference (GLAC 2018) in Montevideo
Global Conference on Space for Emerging Countries (GLEC 2019) in Marrakech
Global Space Exploration Conference (GLEX 2021) in St. Petersburg
The International Space Forum at Ministerial Level (ISF)
The International Space Forum at Ministerial Level (ISF) is an event held by the organization.
The event was founded by the IAF Vice President for Science and Academic Relations in 2015. The gathering promotes discussion of the involvement of universities in space activities, and includes university ministers and delegates from space agencies and other international organizations. Keynote speakers focus on the event's selected theme for the year.
2016 International Space Forum at Ministerial Level – ISF Trento
2017 International Space Forum at Ministerial Level, the African Chapter – ISF Nairobi
2018 International Space Forum at Ministerial Level, the Latin American and Caribbean Chapter – ISF Buenos Aires
2019 International Space Forum at Ministerial Level – the Mediterranean Chapter – ISF Reggio Calabria
Other
IAF Spring Meetings - The IAF Spring Meetings gather every year in March the IAF community in Paris. For three days, IAF Administrative and Technical Committees meet and the International Programme Committee selects the abstracts to be presented during the year's IAC.
IAF International Meeting for Members of Parliament - an annual meeting for members of parliaments, it acts as an informal forum to discuss space matters. Lasting one day, the event has a keynote speech, and all politicians are allowed to make statements on their home country's developments.
The IAF Workshop - Organized with the United Nations Office for Outer Space Affairs (UNOOSA), this event "provides space emerging countries with capacity building opportunities in using space science, technologies, applications and exploration in support of sustainable economic, social and environmental development and on the role of industry".
Awards
The IAF runs two large-scale awards schemes for young professionals and students: The Emerging Space Leaders (ESL) Grants, and the Young Space Leaders Recognition (YSL) Programme. This allows young people to attend the IAC free of charge, and have their travel, accommodation and other costs paid whilst there.
Every year at the International Astronautical Congress, awards are given out: The main awards are the IAF World Space Award, the Allan D. Emil Memorial Award, the IAF Hall of Fame, the IAF Distinguished Service Award, the Franck J. Malina Astronautics Medal, the Luigi G. Napolitano Award, the AAAF Medals and the Hermann Oberth Medals.
World Space Award
The World Space Award is designated by the IAF as its "most prestigious award"; as the organization's premier accolade, it is often described as the "world's highest aerospace award." The award is presented to an eminent individual or team at the IAC Congress, after a nomination process, that has made an "exceptional impact to the progress of the world space activities" by their outstanding contributions in the fields of space science, technology, medicine, law, or management.
Frank J. Malina Astronautics Medal
The Frank J. Malina Astronautics Medal is presented every year at the Congress of the IAF. The medal is presented annually, commencing in 1986, to an educator who has demonstrated excellence in taking the fullest advantage of the resources available to him/her to promote the study of astronautics and related space sciences.
The Frank J. Malina Award consists of the Malina commemorative medal and a certificate of citation, presented at the International Astronautical Federation Awards Banquet. The medal is funded by the Aerojet-General Corporation and administered through the IAF's Space Education and Outreach Committee.
Publications
The IAF publishes proceedings from its meeting electronically, along with studies undertaken by IAF committees, and other reports.
See also
Manfred Lachs
References
External links
International Astronautical Congress 2013 (archived)
1951 establishments in France
Organizations based in Paris
Organizations established in 1951
Space advocacy organizations | International Astronautical Federation | Astronomy | 1,283 |
27,233,680 | https://en.wikipedia.org/wiki/Decomposed%20granite | Decomposed granite is a kind of granite rock that is weathered to the point that the parent material readily fractures into smaller pieces of weaker rock. Further weathering yields material that easily crumbles into mixtures of gravel-sized particles known as grus that further may break down to produce a mixture of clay and silica sand or silt particles. Different specific granite types have differing propensities to weather, and so differing likelihoods of producing decomposed granite. It has practical uses that include its incorporation into roadway and driveway paving materials, residential gardening materials in arid environments, as well as various types of walkways and heavy-use paths in parks. Different colors of decomposed granite are available, deriving from the natural range of granite colors from different quarry sources, and admixture of other natural and synthetic materials can extend the range of decomposed granite properties.
Definition and composition
Decomposed granite is rock of granitic origin that has weathered to the point that it readily fractures into smaller pieces of weak rock. Further weathering produces rock that easily crumbles into mixtures of gravel-sized particles, sand, and silt-sized particles with some clay. Eventually, the gravel may break down to produce a mixture of silica sand, silt particles, and clay. Different specific granite types have differing propensities to weather, and so differing likelihoods of producing decomposed granite.
The parent granite material is a common type of igneous rock that is granular, with its grains large enough to be distinguished with the unaided eye (i.e., it is phaneritic in texture); it is composed of plagioclase feldspar, orthoclase feldspar, quartz, mica, and possibly other minerals. The chemical transformation of feldspar, one of the primary constituents of granite, into the clay mineral kaolin is one of the important weathering processes. The presence of clay allows water to seep in and further weaken the rock allowing it to fracture or crumble into smaller particles, where, ultimately, the grains of silica produced from the granite are relatively resistant to weathering, and may remain almost unaltered.
Uses
Decomposed granite, as a crushed stone form, is used as a pavement building material. It is used on driveways, garden walkways, bocce courts and pétanque terrains, and urban, regional, and national park walkways and heavy-use paths. DG can be installed and compacted to meet handicapped accessibility specifications and criteria, such as the ADA standards in the U.S. Different colors are available based on the various natural ranges available from different quarry sources, and polymeric stabilizers and other additives can be included to change the properties of the natural material. Decomposed granite is also sometimes used as a component of soil mixtures for cultivating bonsai.
See also
Crushed rock
Chipseal
References
Granite
Building stone
Pavements
Road construction
Earthworks (engineering)
Natural materials
Stone (material) | Decomposed granite | Physics,Engineering | 618 |
70,137,519 | https://en.wikipedia.org/wiki/Rariglobus | Rariglobus is a genus of bacteria from the family of Opitutaceae with one known species Rariglobus hedericola. Rariglobus hederico has been isolated from a freshwater ditch in Eugendorf.
See also
List of bacterial orders
List of bacteria genera
References
Verrucomicrobiota
Bacteria genera
Monotypic bacteria genera
Taxa described in 2020 | Rariglobus | Biology | 81 |
44,031,958 | https://en.wikipedia.org/wiki/Lactifluus%20edulis | Lactifluus edulis is a species of agaric fungus in the family Russulaceae. Described as new to science in 1994, it is found in Burundi.
See also
List of Lactifluus species
References
External links
Fungi described in 1994
Fungi of Africa
edulis
Fungus species | Lactifluus edulis | Biology | 64 |
16,705,504 | https://en.wikipedia.org/wiki/Cypher%20stent | Cypher is a brand of drug-eluting coronary stent from Cordis Corporation, a Cardinal Health company. During a balloon angioplasty, the stent is inserted into the artery to provide a "scaffold" to open the artery. An anti-rejection-type medication, sirolimus, helps to limit the overgrowth of normal cells while the artery heals which reduces the chance of re-blockage in the treated area known as restenosis, and reduces the chances that another procedure is required.
The Cypher stent was approved for use by the FDA in 2003. Following claims of inconsistent manufacturing processes and poor sales, Johnson & Johnson announced that it would stop selling Cypher stents by the end of 2011.
See also
Sirolimus: Anti-proliferative effects
References
Drug delivery devices | Cypher stent | Chemistry | 172 |
44,217,608 | https://en.wikipedia.org/wiki/Amide%20ring | Amide Rings are small motifs in proteins and polypeptides. They consist of 9-atom or 11-atom rings formed by two CO...HN hydrogen bonds between a side chain amide group and the main chain atoms of a short polypeptide. They are observed with glutamine or asparagine side chains within proteins and polypeptides. Structurally similar rings occur in the binding of purine, pyrimidine and nicotinamide bases to the main chain atoms of proteins. About 4% of asparagines and glutamines form amide rings; in databases of protein domain structures, one is present, on average, every other protein.
In such rings the polypeptide has the conformation of beta sheet or of type II polyproline helix (PPII). A number of glutamines and asparagines help bind short peptides (with the PPII conformation) in the groove of class II MHC (Major Histocompatibility Complex) proteins by forming these motifs. An 11-atom amide ring, involving a glutamine residue, occurs at the interior of the light chain variable domains of some Immunoglobulin G antibodies and assists in linking the two beta-sheets.
An amide ring is employed in the specificity of the adaptor protein GRB2 for a particular asparagine within proteins it binds. GRB2 binds strongly to the pentapeptide EYINQ (when the tyrosine is phosphorylated); in such structures a 9-atom amide ring occurs between the amide side chain of the pentapeptide's asparagine and the main chain atoms of residue 109 of GRB2.
References
External links
Motivated Proteins
PDBeMotif
Protein structural motifs | Amide ring | Chemistry,Biology | 385 |
58,130,085 | https://en.wikipedia.org/wiki/NGC%204237 | NGC 4237 is a flocculent spiral galaxy located about 60 million light-years away in the constellation Coma Berenices. The galaxy was discovered by astronomer William Herschel on December 30, 1783 and is a member of the Virgo Cluster. It is also classified as a LINER galaxy and as a Seyfert galaxy.
NGC 4237 appears to be deficient in neutral atomic hydrogen (H I). This, combined with its large projected distance from M87 and its radial velocity close to the Virgo Cluster mean, suggests that the galaxy may be on a highly radial orbit through the center of the cluster.
Gallery
See also
List of NGC objects (4001–5000)
NGC 4212
References
External links
4237
39393
Coma Berenices
Virgo Cluster
Astronomical objects discovered in 1783
Flocculent spiral galaxies
7315
Seyfert galaxies
LINER galaxies | NGC 4237 | Astronomy | 179 |
4,946,686 | https://en.wikipedia.org/wiki/Relative%20velocity | The relative velocity of an object B relative to an observer A, denoted (also or ), is the velocity vector of B measured in the rest frame of A.
The relative speed is the vector norm of the relative velocity.
Classical mechanics
In one dimension (non-relativistic)
We begin with relative motion in the classical (or non-relativistic, or Newtonian) approximation, in which all speeds are much less than the speed of light. This limit is associated with the Galilean transformation. The figure shows a man on top of a train, at the back edge. At 1:00 pm he begins to walk forward at a walking speed of 10 km/h (kilometers per hour). The train is moving at 40 km/h. The figure depicts the man and train at two different times: first, when the journey began, and also one hour later at 2:00 pm. The figure suggests that the man is 50 km from the starting point after having traveled (by walking and by train) for one hour. This, by definition, is 50 km/h, which suggests that the prescription for calculating relative velocity in this fashion is to add the two velocities.
The diagram displays clocks and rulers to remind the reader that while the logic behind this calculation seems flawless, it makes false assumptions about how clocks and rulers behave. (See The train-and-platform thought experiment.) To recognize that this classical model of relative motion violates special relativity, we generalize the example into an equation: v_M|E = v_M|T + v_T|E,
where:
v_M|E is the velocity of the Man relative to Earth,
v_M|T is the velocity of the Man relative to the Train,
v_T|E is the velocity of the Train relative to Earth.
Fully legitimate expressions for "the velocity of A relative to B" include "the velocity of A with respect to B" and "the velocity of A in the coordinate system where B is always at rest". The violation of special relativity occurs because this equation for relative velocity falsely predicts that different observers will measure different speeds when observing the motion of light.
In two dimensions (non-relativistic)
The figure shows two objects A and B moving at constant velocity. The equations of motion are: r_A = r_Ai + v_A t and r_B = r_Bi + v_B t,
where the subscript i refers to the initial displacement (at time t equal to zero). The difference between the two displacement vectors, r_B − r_A, represents the location of B as seen from A.
Hence: r_B − r_A = (r_Bi − r_Ai) + (v_B − v_A) t.
After making the substitutions r_B|A = r_B − r_A, r_B|Ai = r_Bi − r_Ai, and v_B|A = v_B − v_A, we have: r_B|A = r_B|Ai + v_B|A t.
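A short numerical check of these relations (an added illustration with arbitrary example numbers, not part of the article):

```python
import numpy as np

# Assumed initial positions and constant velocities for two objects A and B.
r_a0, v_a = np.array([0.0, 0.0]), np.array([3.0, 1.0])
r_b0, v_b = np.array([5.0, 2.0]), np.array([1.0, 4.0])

t = 2.5
r_a = r_a0 + v_a * t
r_b = r_b0 + v_b * t

# Position of B as seen from A, computed directly and from the relative quantities.
direct = r_b - r_a
from_relative = (r_b0 - r_a0) + (v_b - v_a) * t
print(direct, from_relative, np.allclose(direct, from_relative))
```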
Galilean transformation (non-relativistic)
To construct a theory of relative motion consistent with the theory of special relativity, we must adopt a different convention. Continuing to work in the (non-relativistic) Newtonian limit we begin with a Galilean transformation in one dimension: x′ = x − vt, t′ = t,
where x′ is the position as seen by a reference frame that is moving at speed v in the "unprimed" (x) reference frame. Taking the differential of the first of the two equations above, we have dx′ = dx − v dt, and with what may seem like the obvious statement that dt′ = dt, we have: dx′/dt′ = dx/dt − v.
To recover the previous expressions for relative velocity, we assume that particle A is following the path defined by dx/dt in the unprimed reference (and hence dx′/dt′ in the primed frame). Thus dx/dt = v_A|O and dx′/dt′ = v_A|O′, where v_A|O and v_A|O′ refer to the motion of A as seen by an observer in the unprimed and primed frame, respectively. Recall that v is the motion of a stationary object in the primed frame, as seen from the unprimed frame. Thus we have v = v_O′|O, and: v_A|O′ = v_A|O − v_O′|O, or equivalently v_A|O = v_A|O′ + v_O′|O,
where the latter form has the desired (easily learned) symmetry.
Special relativity
As in classical mechanics, in special relativity the relative velocity is the velocity of an object or observer B in the rest frame of another object or observer A. However, unlike the case of classical mechanics, in Special Relativity, it is generally not the case that v_B|A = −v_A|B.
This peculiar lack of symmetry is related to Thomas precession and the fact that two successive Lorentz transformations rotate the coordinate system. This rotation has no effect on the magnitude of a vector, and hence relative speed is symmetrical.
Parallel velocities
In the case where two objects are traveling in parallel directions, the relativistic formula for relative velocity is similar in form to the formula for addition of relativistic velocities: v_B|A = (v_B − v_A) / (1 − v_A v_B / c²).
The relative speed is given by the formula: ‖v_B|A‖ = |v_B − v_A| / (1 − v_A v_B / c²).
Perpendicular velocities
In the case where two objects are traveling in perpendicular directions, the relativistic relative velocity is given by the formula: v_B|A = v_B/γ_A − v_A,
where γ_A = 1/√(1 − v_A²/c²).
The relative speed is given by the formula ‖v_B|A‖ = √(v_A² + v_B² − v_A² v_B²/c²).
General case
The general formula for the relative velocity of an object or observer B in the rest frame of another object or observer A is given by the formula:
$\vec{v}_{B \mid A} = \frac{1}{1 - \dfrac{\vec{v}_A \cdot \vec{v}_B}{c^2}} \left[ \frac{\vec{v}_B}{\gamma_A} - \vec{v}_A + \frac{\gamma_A}{c^2\,(\gamma_A + 1)} \left(\vec{v}_A \cdot \vec{v}_B\right) \vec{v}_A \right],$
where
$\gamma_A = \frac{1}{\sqrt{1 - \dfrac{v_A^2}{c^2}}}.$
The relative speed is given by the formula
$\left|\vec{v}_{B \mid A}\right| = \frac{\sqrt{\left(\vec{v}_A - \vec{v}_B\right)^2 - \dfrac{\left(\vec{v}_A \times \vec{v}_B\right)^2}{c^2}}}{1 - \dfrac{\vec{v}_A \cdot \vec{v}_B}{c^2}}.$
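The following Python sketch implements the general formula as reconstructed above (the vector values are illustrative only) and checks that it reduces to the parallel and perpendicular special cases:

```python
import numpy as np

c = 1.0

def gamma(v):
    """Lorentz factor for a velocity vector v (in units of c)."""
    return 1.0 / np.sqrt(1.0 - np.dot(v, v) / c**2)

def relative_velocity(v_A, v_B):
    """Velocity of B in the rest frame of A, per the formula reconstructed above."""
    g_A = gamma(v_A)
    dot = np.dot(v_A, v_B)
    return (v_B / g_A - v_A + (g_A / (c**2 * (1 + g_A))) * dot * v_A) / (1 - dot / c**2)

# Parallel case: should match (v_B - v_A) / (1 - v_A v_B / c^2).
v_A = np.array([0.6, 0.0]); v_B = np.array([-0.6, 0.0])
assert np.isclose(relative_velocity(v_A, v_B)[0], (-0.6 - 0.6) / (1 - 0.6 * -0.6))

# Perpendicular case: speed should be sqrt(vA^2 + vB^2 - vA^2 vB^2 / c^2).
v_A = np.array([0.6, 0.0]); v_B = np.array([0.0, 0.8])
speed = np.linalg.norm(relative_velocity(v_A, v_B))
assert np.isclose(speed, np.sqrt(0.6**2 + 0.8**2 - (0.6 * 0.8)**2))
print(speed)
```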
See also
Doppler effect
Peculiar velocity
Proper motion
Range rate
Radial velocity
Rapidity
Relativistic speed
Space velocity (astronomy)
Notes
References
Further reading
Alonso & Finn, Fundamental University Physics
Greenwood, Donald T, Principles of Dynamics.
Goodman and Warner, Dynamics.
Beer and Johnston, Statics and Dynamics.
McGraw Hill Dictionary of Physics and Mathematics.
Rindler, W., Essential Relativity.
KHURMI R.S., Mechanics, Engineering Mechanics, Statics, Dynamics
External links
Relative Motion at HyperPhysics
A Java applet illustrating Relative Velocity, by Andrew Duffy
Relatív mozgás (1)...(3) Relative motion of two train (1)...(3). Videos on the portal FizKapu.
Sebességek összegzése Relative tranquility of trout in creek. Video on the portal FizKapu.
Physical quantities
Classical mechanics
Velocity
Velocity | Relative velocity | Physics,Mathematics | 1,161 |
60,590,206 | https://en.wikipedia.org/wiki/List%20of%20lime%20kilns | Historic or notable lime kilns include.
Australia
Lime Kiln Remains, Ipswich
Pipers Creek Lime Kilns
Raffan's Mill and Brick Bottle Kilns
There were a number of lime kilns at Wool Bay, South Australia. One kiln remains and was listed along with the jetty under the name of Wool Bay Lime Kiln & Jetty on the South Australian Heritage Register on 28 November 1985.
There also are or were lime kilns at:
Adelaide Brighton Cement
Anna Creek Station
Blayney, New South Wales
Bower, South Australia
Claremont, Ipswich
Coopers Creek, Victoria
Galong, New South Wales
Kingston and Arthur's Vale Historic Area
Langshaw Marble Lime Works
Marmor, Queensland
New Farm, Queensland
North Coogee
Platina railway station
Point Nepean
Portland Cement Works Precinct
Portland, New South Wales
Quarry Amphitheatre
Quartz Roasting Pits Complex
South Fremantle, Western Australia
Walkerville, Victoria
Waurn Ponds, Victoria
United Kingdom
Annery kiln, Monkleigh, England
Limekilns at Kiln Park, Pembrokeshire, Penally, Wales
Cocking Lime Works, West Sussex, England
Grove Lime Kiln, Isle of Portland, England
Minera Limeworks, Wrexham, Wales
Solva limekilns, Pembrokeshire, Wales
There are or were lime kilns at many other places in the United Kingdom.
United States
See also
Lime Kiln (disambiguation)
:Category:Lime kilns in Canada
:Category:Lime kilns in France
:Category:Lime kilns in Germany
:Category:Lime kilns in Hong Kong
:Category:Lime kilns in Hungary
:Category:Lime kilns in Ireland
:Category:Lime kilns in Italy
:Category:Lime kilns in Latvia
:Category:Lime kilns in Portugal
:Category:Lime kilns in Slovenia
:Category:Lime kilns in South Africa
:Category:Lime kilns in Sweden
References
lime kilns | List of lime kilns | Chemistry,Engineering | 409 |
21,843 | https://en.wikipedia.org/wiki/Nucleosome | A nucleosome is the basic structural unit of DNA packaging in eukaryotes. The structure of a nucleosome consists of a segment of DNA wound around eight histone proteins and resembles thread wrapped around a spool. The nucleosome is the fundamental subunit of chromatin. Each nucleosome is composed of a little less than two turns of DNA wrapped around a set of eight proteins called histones, which are known as a histone octamer. Each histone octamer is composed of two copies each of the histone proteins H2A, H2B, H3, and H4.
DNA must be compacted into nucleosomes to fit within the cell nucleus. In addition to nucleosome wrapping, eukaryotic chromatin is further compacted by being folded into a series of more complex structures, eventually forming a chromosome. Each human cell contains about 30 million nucleosomes.
Nucleosomes are thought to carry epigenetically inherited information in the form of covalent modifications of their core histones. Nucleosome positions in the genome are not random, and it is important to know where each nucleosome is located because this determines the accessibility of the DNA to regulatory proteins.
Nucleosomes were first observed as particles in the electron microscope by Don and Ada Olins in 1974, and their existence and structure (as histone octamers surrounded by approximately 200 base pairs of DNA) were proposed by Roger Kornberg. The role of the nucleosome as a regulator of transcription was demonstrated by Lorch et al. in vitro in 1987 and by Han and Grunstein and Clark-Adams et al. in vivo in 1988.
The nucleosome core particle consists of approximately 146 base pairs (bp) of DNA wrapped in 1.67 left-handed superhelical turns around a histone octamer, consisting of 2 copies each of the core histones H2A, H2B, H3, and H4. Core particles are connected by stretches of linker DNA, which can be up to about 80 bp long. Technically, a nucleosome is defined as the core particle plus one of these linker regions; however the word is often synonymous with the core particle. Genome-wide nucleosome positioning maps are now available for many model organisms and human cells.
Linker histones such as H1 and its isoforms are involved in chromatin compaction and sit at the base of the nucleosome near the DNA entry and exit binding to the linker region of the DNA. Non-condensed nucleosomes without the linker histone resemble "beads on a string of DNA" under an electron microscope.
In contrast to most eukaryotic cells, mature sperm cells largely use protamines to package their genomic DNA, most likely to achieve an even higher packaging ratio. Histone equivalents and a simplified chromatin structure have also been found in Archaea, suggesting that eukaryotes are not the only organisms that use nucleosomes.
Structure
Structure of the core particle
Overview
Pioneering structural studies in the 1980s by Aaron Klug's group provided the first evidence that an octamer of histone proteins wraps DNA around itself in about 1.7 turns of a left-handed superhelix. In 1997 the first near atomic resolution crystal structure of the nucleosome was solved by the Richmond group, showing the most important details of the particle. The human alpha satellite palindromic DNA critical to achieving the 1997 nucleosome crystal structure was developed by the Bunick group at Oak Ridge National Laboratory in Tennessee. The structures of over 20 different nucleosome core particles have been solved to date, including those containing histone variants and histones from different species. The structure of the nucleosome core particle is remarkably conserved, and even a change of over 100 residues between frog and yeast histones results in electron density maps with an overall root mean square deviation of only 1.6Å.
The nucleosome core particle (NCP)
The nucleosome core particle (shown in the figure) consists of about 146 base pairs of DNA wrapped in 1.67 left-handed superhelical turns around the histone octamer, consisting of 2 copies each of the core histones H2A, H2B, H3, and H4. Adjacent nucleosomes are joined by a stretch of free DNA termed linker DNA (which varies from 10 to 80 bp in length depending on species and tissue type). The whole structure generates a cylinder of diameter 11 nm and a height of 5.5 nm.
Nucleosome core particles are observed when chromatin in interphase is treated to cause the chromatin to unfold partially. The resulting image, via an electron microscope, is "beads on a string". The string is the DNA, while each bead in the nucleosome is a core particle. The nucleosome core particle is composed of DNA and histone proteins.
Partial DNase digestion of chromatin reveals its nucleosome structure. Because DNA portions of nucleosome core particles are less accessible to DNase than linking sections, DNA gets digested into fragments whose lengths are multiples of the distance between nucleosomes (180, 360, 540 base pairs, etc.). Hence a very characteristic ladder-like pattern is visible during gel electrophoresis of that DNA. Such digestion can also occur under natural conditions during apoptosis ("cell suicide" or programmed cell death), because autodestruction of DNA is typically part of that process.
Protein interactions within the nucleosome
The core histone proteins contain a characteristic structural motif termed the "histone fold", which consists of three alpha-helices (α1-3) separated by two loops (L1-2). In solution, the histones form H2A-H2B heterodimers and H3-H4 heterotetramers. Histones dimerise about their long α2 helices in an anti-parallel orientation, and, in the case of H3 and H4, two such dimers form a 4-helix bundle stabilised by extensive H3-H3' interaction. The H2A/H2B dimer binds onto the H3/H4 tetramer due to interactions between H4 and H2B, which include the formation of a hydrophobic cluster.
The histone octamer is formed by a central H3/H4 tetramer sandwiched between two H2A/H2B dimers. Due to the highly basic charge of all four core histones, the histone octamer is stable only in the presence of DNA or very high salt concentrations.
Histone - DNA interactions
The nucleosome contains over 120 direct protein-DNA interactions and several hundred water-mediated ones. Direct protein - DNA interactions are not spread evenly about the octamer surface but rather located at discrete sites. These are due to the formation of two types of DNA binding sites within the octamer; the α1α1 site, which uses the α1 helix from two adjacent histones, and the L1L2 site formed by the L1 and L2 loops. Salt links and hydrogen bonding between both side-chain basic and hydroxyl groups and main-chain amides with the DNA backbone phosphates form the bulk of interactions with the DNA. This is important, given that the ubiquitous distribution of nucleosomes along genomes requires it to be a non-sequence-specific DNA-binding factor. Although nucleosomes tend to prefer some DNA sequences over others, they are capable of binding practically to any sequence, which is thought to be due to the flexibility in the formation of these water-mediated interactions. In addition, non-polar interactions are made between protein side-chains and the deoxyribose groups, and an arginine side-chain intercalates into the DNA minor groove at all 14 sites where it faces the octamer surface.
The distribution and strength of DNA-binding sites about the octamer surface distorts the DNA within the nucleosome core. The DNA is non-uniformly bent and also contains twist defects. The twist of free B-form DNA in solution is 10.5 bp per turn. However, the overall twist of nucleosomal DNA is only 10.2 bp per turn, varying from a value of 9.4 to 10.9 bp per turn.
Histone tail domains
The histone tail extensions constitute up to 30% by mass of histones, but are not visible in the crystal structures of nucleosomes due to their high intrinsic flexibility, and have been thought to be largely unstructured. The N-terminal tails of histones H3 and H2B pass through a channel formed by the minor grooves of the two DNA strands, protruding from the DNA every 20 bp. The N-terminal tail of histone H4, on the other hand, has a region of highly basic amino acids (16–25), which, in the crystal structure, forms an interaction with the highly acidic surface region of a H2A-H2B dimer of another nucleosome, being potentially relevant for the higher-order structure of nucleosomes. This interaction is thought to occur under physiological conditions also, and suggests that acetylation of the H4 tail distorts the higher-order structure of chromatin.
Higher order structure
The organization of the DNA that is achieved by the nucleosome cannot fully explain the packaging of DNA observed in the cell nucleus. Further compaction of chromatin into the cell nucleus is necessary, but it is not yet well understood. The current understanding is that repeating nucleosomes with intervening "linker" DNA form a 10-nm-fiber, described as "beads on a string", and have a packing ratio of about five to ten. A chain of nucleosomes can be arranged in a 30 nm fiber, a compacted structure with a packing ratio of ~50 and whose formation is dependent on the presence of the H1 histone.
A crystal structure of a tetranucleosome has been presented and used to build up a proposed structure of the 30 nm fiber as a two-start helix.
There is still a certain amount of contention regarding this model, as it is incompatible with recent electron microscopy data. Beyond this, the structure of chromatin is poorly understood, but it is classically suggested that the 30 nm fiber is arranged into loops along a central protein scaffold to form transcriptionally active euchromatin. Further compaction leads to transcriptionally inactive heterochromatin.
Dynamics
Although the nucleosome is a very stable protein-DNA complex, it is not static and has been shown to undergo a number of different structural re-arrangements including nucleosome sliding and DNA site exposure. Depending on the context, nucleosomes can inhibit or facilitate transcription factor binding. Nucleosome positions are controlled by three major contributions: First, the intrinsic binding affinity of the histone octamer depends on the DNA sequence. Second, the nucleosome can be displaced or recruited by the competitive or cooperative binding of other protein factors. Third, the nucleosome may be actively translocated by ATP-dependent remodeling complexes.
Nucleosome sliding
When incubated thermally, nucleosomes reconstituted onto the 5S DNA positioning sequence were able to reposition themselves translationally onto adjacent sequences. This repositioning does not require disruption of the histone octamer but is consistent with nucleosomes being able to "slide" along the DNA in cis. CTCF binding sites act as nucleosome positioning anchors so that, when used to align various genomic signals, multiple flanking nucleosomes can be readily identified. Although nucleosomes are intrinsically mobile, eukaryotes have evolved a large family of ATP-dependent chromatin remodelling enzymes to alter chromatin structure, many of which do so via nucleosome sliding. Nucleosome sliding is one of the possible mechanisms for large-scale, tissue-specific expression of genes. The transcription start sites of genes expressed in a particular tissue are nucleosome-depleted, while the same set of genes in tissues where they are not expressed are nucleosome-bound.
DNA site exposure
Nucleosomal DNA is in equilibrium between a wrapped and unwrapped state. DNA within the nucleosome remains fully wrapped for only 250 ms before it is unwrapped for 10-50 ms and then rapidly rewrapped, as measured using time-resolved FRET. This implies that DNA does not need to be actively dissociated from the nucleosome but that there is a significant fraction of time during which it is fully accessible. Introducing a DNA-binding sequence within the nucleosome increases the accessibility of adjacent regions of DNA when bound.
This propensity for DNA within the nucleosome to "breathe" has important functional consequences for all DNA-binding proteins that operate in a chromatin environment. In particular, the dynamic breathing of nucleosomes plays an important role in restricting the advancement of RNA polymerase II during transcription elongation.
Nucleosome free region
Promoters of active genes have nucleosome-free regions (NFR). This allows for promoter DNA accessibility to various proteins, such as transcription factors. The nucleosome-free region typically spans 200 nucleotides in S. cerevisiae. Well-positioned nucleosomes form the boundaries of the NFR. These nucleosomes are called the +1-nucleosome and the −1-nucleosome and are located at canonical distances downstream and upstream, respectively, from the transcription start site. The +1-nucleosome and several downstream nucleosomes also tend to incorporate the H2A.Z histone variant.
Modulating nucleosome structure
Eukaryotic genomes are ubiquitously associated into chromatin; however, cells must spatially and temporally regulate specific loci independently of bulk chromatin. In order to achieve the high level of control required to co-ordinate nuclear processes such as DNA replication, repair, and transcription, cells have developed a variety of means to locally and specifically modulate chromatin structure and function. This can involve covalent modification of histones, the incorporation of histone variants, and non-covalent remodelling by ATP-dependent remodeling enzymes.
Histone post-translational modifications
Since they were discovered in the mid-1960s, histone modifications have been predicted to affect transcription. The fact that most of the early post-translational modifications found were concentrated within the tail extensions that protrude from the nucleosome core led to two main theories regarding the mechanism of histone modification. The first of the theories suggested that they may affect electrostatic interactions between the histone tails and DNA to "loosen" chromatin structure. Later it was proposed that combinations of these modifications may create binding epitopes with which to recruit other proteins. Recently, given that more modifications have been found in the structured regions of histones, it has been put forward that these modifications may affect histone-DNA and histone-histone interactions within the nucleosome core. Modifications (such as acetylation or phosphorylation) that lower the charge of the globular histone core are predicted to "loosen" core-DNA association; the strength of the effect depends on location of the modification within the core.
Some modifications have been shown to be correlated with gene silencing; others seem to be correlated with gene activation. Common modifications include acetylation, methylation, or ubiquitination of lysine; methylation of arginine; and phosphorylation of serine. The information stored in this way is considered epigenetic, since it is not encoded in the DNA but is still inherited to daughter cells. The maintenance of a repressed or activated status of a gene is often necessary for cellular differentiation.
Histone variants
Although histones are remarkably conserved throughout evolution, several variant forms have been identified. This diversification of histone function is restricted to H2A and H3, with H2B and H4 being mostly invariant. H2A can be replaced by H2A.Z (which leads to reduced nucleosome stability) or H2A.X (which is associated with DNA repair and T cell differentiation), whereas the inactive X chromosomes in mammals are enriched in macroH2A. H3 can be replaced by H3.3 (which correlates with active genes and regulatory elements), and in centromeres H3 is replaced by CENP-A.
ATP-dependent nucleosome remodeling
A number of distinct reactions are associated with the term ATP-dependent chromatin remodeling. Remodeling enzymes have been shown to slide nucleosomes along DNA, disrupt histone-DNA contacts to the extent of destabilizing the H2A/H2B dimer and to generate negative superhelical torsion in DNA and chromatin. Recently, the Swr1 remodeling enzyme has been shown to introduce the variant histone H2A.Z into nucleosomes. At present, it is not clear if all of these represent distinct reactions or merely alternative outcomes of a common mechanism. What is shared between all, and indeed the hallmark of ATP-dependent chromatin remodeling, is that they all result in altered DNA accessibility.
Studies looking at gene activation in vivo and, more astonishingly, remodeling in vitro have revealed that chromatin remodeling events and transcription-factor binding are cyclical and periodic in nature. While the consequences of this for the reaction mechanism of chromatin remodeling are not known, the dynamic nature of the system may allow it to respond faster to external stimuli. A recent study indicates that nucleosome positions change significantly during mouse embryonic stem cell development, and these changes are related to binding of developmental transcription factors.
Dynamic nucleosome remodelling across the Yeast genome
Studies in 2007 have catalogued nucleosome positions in yeast and shown that nucleosomes are depleted in promoter regions and origins of replication.
About 80% of the yeast genome appears to be covered by nucleosomes and the pattern of nucleosome positioning clearly relates to DNA regions that regulate transcription, regions that are transcribed and regions that initiate DNA replication. Most recently, a new study examined dynamic changes in nucleosome repositioning during a global transcriptional reprogramming event to elucidate the effects on nucleosome displacement during genome-wide transcriptional changes in yeast (Saccharomyces cerevisiae). The results suggested that nucleosomes that were localized to promoter regions are displaced in response to stress (like heat shock). In addition, the removal of nucleosomes usually corresponded to transcriptional activation and the replacement of nucleosomes usually corresponded to transcriptional repression, presumably because transcription factor binding sites became more or less accessible, respectively. In general, only one or two nucleosomes were repositioned at the promoter to effect these transcriptional changes. However, even in chromosomal regions that were not associated with transcriptional changes, nucleosome repositioning was observed, suggesting that the covering and uncovering of transcriptional DNA does not necessarily produce a transcriptional event. After transcription, the rDNA region has to be protected from any damage; it has been suggested that HMGB proteins play a major role in protecting the nucleosome-free region.
DNA Twist Defects
DNA twist defects occur when one or a few base pairs are transferred from one DNA segment to the next, resulting in a change of the DNA twist. This not only changes the twist of the DNA but also changes its length. The twist defect eventually moves around the nucleosome through the transferring of the base pair, which means DNA twists can cause nucleosome sliding. Nucleosome crystal structures have shown that superhelix locations 2 and 5 on the nucleosome are commonly found to be where DNA twist defects occur, as these are common remodeler binding sites. There are a variety of chromatin remodelers, but all share the existence of an ATPase motor which facilitates chromatin sliding on DNA through the binding and hydrolysis of ATP. ATPase has an open and a closed state. When the ATPase motor changes between open and closed states, the DNA duplex changes geometry and exhibits base pair tilting. The initiation of the twist defects via the ATPase motor causes tension to accumulate around the remodeler site. The tension is released when the sliding of DNA has been completed throughout the nucleosome via the spread of two twist defects (one on each strand) in opposite directions.
Nucleosome assembly in vitro
Nucleosomes can be assembled in vitro by either using purified native or recombinant histones. One standard technique of loading the DNA around the histones involves the use of salt dialysis. A reaction consisting of the histone octamers and a naked DNA template can be incubated together at a salt concentration of 2 M. By steadily decreasing the salt concentration, the DNA will equilibrate to a position where it is wrapped around the histone octamers, forming nucleosomes. In appropriate conditions, this reconstitution process allows for the nucleosome positioning affinity of a given sequence to be mapped experimentally.
Disulfide crosslinked nucleosome core particles
A recent advance in the production of nucleosome core particles with enhanced stability involves site-specific disulfide crosslinks. Two different crosslinks can be introduced into the nucleosome core particle. A first one crosslinks the two copies of H2A via an introduced cysteine (N38C) resulting in histone octamer which is stable against H2A/H2B dimer loss during nucleosome reconstitution. A second crosslink can be introduced between the H3 N-terminal histone tail and the nucleosome DNA ends via an incorporated convertible nucleotide. The DNA-histone octamer crosslink stabilizes the nucleosome core particle against DNA dissociation at very low particle concentrations and at elevated salt concentrations.
Nucleosome assembly in vivo
Nucleosomes are the basic packing unit of genomic DNA built from histone proteins around which DNA is coiled. They serve as a scaffold for formation of higher order chromatin structure as well as for a layer of regulatory control of gene expression. Nucleosomes are quickly assembled onto newly synthesized DNA behind the replication fork.
H3 and H4
Histones H3 and H4 from disassembled old nucleosomes are kept in the vicinity and randomly distributed on the newly synthesized DNA. They are assembled by the chromatin assembly factor 1 (CAF-1) complex, which consists of three subunits (p150, p60, and p48). Newly synthesized H3 and H4 are assembled by the replication coupling assembly factor (RCAF). RCAF contains the subunit Asf1, which binds to newly synthesized H3 and H4 proteins. The old H3 and H4 proteins retain their chemical modifications which contributes to the passing down of the epigenetic signature. The newly synthesized H3 and H4 proteins are gradually acetylated at different lysine residues as part of the chromatin maturation process. It is also thought that the old H3 and H4 proteins in the new nucleosomes recruit histone modifying enzymes that mark the new histones, contributing to epigenetic memory.
H2A and H2B
In contrast to old H3 and H4, the old H2A and H2B histone proteins are released and degraded; therefore, newly assembled H2A and H2B proteins are incorporated into new nucleosomes. H2A and H2B are assembled into dimers which are then loaded onto nucleosomes by the nucleosome assembly protein-1 (NAP-1) which also assists with nucleosome sliding. The nucleosomes are also spaced by ATP-dependent nucleosome-remodeling complexes containing enzymes such as Isw1, Ino80, and Chd1, and subsequently assembled into higher order structure.
Gallery
The crystal structure of the nucleosome core particle – different views showing details of histone folding and organization. Histones H3, H4, H2A, and H2B are coloured.
See also
Chromomere
References
External links
MBInfo - What are nucleosomes
Nucleosomes on the Richmond Lab website
Nucleosome at the PDB
Dynamic Remodeling of Individual Nucleosomes Across a Eukaryotic Genome in Response to Transcriptional Perturbation
Nucleosome positioning data and tools online (annotated list, constantly updated)
Histone protein structure
HistoneDB 2.0 - Database of histones and variants at NCBI
Molecular biology
Epigenetics
Nuclear organization | Nucleosome | Chemistry,Biology | 5,276 |
33,090,200 | https://en.wikipedia.org/wiki/RSCS | Remote Spooling Communications Subsystem or RSCS is a subsystem ("virtual machine" in VM terminology) of IBM's VM/370 operating system which accepts files transmitted to it from local or remote system and users and transmits them to destination local or remote users and systems. RSCS also transmits commands and messages among users and systems.
RSCS is the software that powered the world’s largest network (or network of networks) prior to the Internet and directly influenced both internet development and user acceptance of networking between independently managed organizations. RSCS was developed by Edson Hendricks and T.C. Hartmann. Both as an IBM product and as an IBM internal network, it later became known as VNET. The network interfaces continued to be called the RSCS compatible protocols and were used to interconnect with IBM systems other than VM systems (typically MVS) and non-IBM computers.
The history of this program, and its influence on IBM and the IBM user community, is described in contemporaneous accounts and interviews by Melinda Varian. Technical goals and innovations are described by Creasy and by Hendricks and Hartmann in seminal papers. Among academic users, the same software was employed by BITNET and related networks worldwide.
Background
RSCS arose because people throughout IBM recognized a need to exchange files. Hendricks’s solution was CPREMOTE, which he completed by mid-1969. CPREMOTE was the first example of a “service virtual machine” and was motivated partly by the desire to prove the usefulness of that concept.
In 1971, Norman L. Rasmussen, Manager of IBM’s Cambridge Scientific Center (CSC), asked Hendricks to find a way for the CSC machine to communicate with machines at IBM’s other Scientific Centers. CPREMOTE had taught Hendricks so much about how a communications facility would be used and what function was needed in such a facility, that he decided to discard it and begin again with a new design. After additional iterations, based on feedback from real users and contributed suggestions and code from around the company, Hendricks and Tim Hartmann, of the IBM Technology Data Center in Poughkeepsie, NY, produced RSCS, which went into operation within IBM in 1973.
The first version of RSCS distributed outside of IBM (1975) was not a complete networking package. It included uncalled subroutines for functions such as store-and-forward that were included in the IBM internal version. The store-and-forward function was added in the VNET PRPQ, first for files, and then for messages and commands.
Once those capabilities were added, “the network began to grow like crazy.” Although at first the IBM network depended on people going to their computer room and dialing a phone, it soon began to acquire leased lines.
At SHARE XLVI, in February, 1976, Hendricks and Hartmann reported that the network, which was now beginning to be called VNET, spanned the continent and connected 50 systems. By SHARE 52, in March, 1979, they reported that VNET connected 239 systems, in 38 U.S. cities and 10 other countries. “VNET passed 1000 nodes in 1983 and 3000 nodes in 1989. It currently (1990s) connects somewhat more than 4000 nodes, about two-thirds of which are VM systems.”
In comparison, by 1981 the ARPANET consisted of 213 host computers. Both ARPANET and VNET continued to grow rapidly.
By 1986, IBM’s Think magazine estimated that VNET was saving the company $150,000,000 per year as the result of increased productivity.
Other RSCS Protocol Compatible Networks
Due to the key role RSCS played in building networks, the line drivers became known as the "RSCS Protocols". The supported protocols were drawn from other programs. The CPREMOTE protocol may have been the very first symmetrical protocol (sometimes called a "balanced" protocol). To expand the RSCS network to include MVS, Hartmann reverse-engineered the HASP Network Job Interface protocol, which enabled the network to grow rapidly. He later added the JES2 Network Job Entry as an RSCS/VNET line driver.
BITNET was a cooperative United States university network founded in 1981 by Ira Fuchs at the City University of New York (CUNY) and Greydon Freeman at Yale University which was based on VNET. The first network link was between CUNY and Yale.
The BITNET (RSCS) protocols were eventually ported to non-IBM computer systems, and became widely implemented under VAX/VMS in addition to DECnet (The VAX/VMS NJE protocol stack was known as Jnet).
At its zenith around 1991, BITNET extended to almost 500 organizations and 3,000 nodes, all educational institutions. It spanned North America (in Canada it was known as NetNorth), Europe (as EARN), India (TIFR) and some Persian Gulf states (as GulfNet). BITNET was also very popular in other parts of the world, especially in South America, where about 200 nodes were implemented and heavily used in the late 1980s and early 1990s.
Over time, BITNET was eventually merged into the Internet. Newer versions of RSCS, as well as Jnet and the various UNIX NJE stacks, provided support for TCPNJE line drivers. Since most sites that were on BITNET also had access to the Internet, the BITNET links that once ran over leased lines and dialup modems were tunneled over the Internet. It was also not uncommon to run NJE over SNA.
Technical Issues
R. J. Creasy described RSCS as an operating system and considered it an essential component of the VM/370 Time-Sharing System. "The Virtual Machine Facility/370, VM/370 for short, is a convenient name for three different operating systems: the Control Program (CP), the Conversational Monitor System (CMS), and the Remote Spooling and Communications Subsystem (RSCS). Together they form a general purpose tool for the delivery of the computing resources of the IBM System/ 370 machines to a wide variety of people and computers. ...RSCS is the operating system used to provide information transfer among machines linked with communications facilities."
Details of the design of RSCS as a virtual machine subsystem are described in the IBM Systems Journal.
From a technical point of view, RSCS differed from ARPANET in that it was a point-to-point "store and forward" network, as such it was more like UUCP. Unlike ARPANET, it did not require dedicated Interface Message Processor or continuous network connections. Messages and files were transmitted in their entirety from one server to the next until reaching their destination. In case of a broken network connection RSCS would retain the message and retry transmission when the remote system became available.
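As a rough, hypothetical sketch of the store-and-forward behaviour described above — not the actual RSCS implementation, and the class and method names are invented for illustration — a node retains each queued file until the next hop on the route accepts it:

```python
from collections import deque

# Hypothetical sketch of store-and-forward with retry (not actual RSCS code):
# each node keeps a queued file until the next hop on the route accepts it,
# retrying later if the link is currently down.
class Node:
    def __init__(self, name):
        self.name = name
        self.queue = deque()   # (destination, payload) pairs retained locally
        self.links = {}        # neighbour name -> Node, present only while the link is up

    def submit(self, destination, payload):
        self.queue.append((destination, payload))

    def forward_pending(self, route):
        """Move queued files one hop along `route` (destination -> next-hop name)."""
        retained = deque()
        while self.queue:
            destination, payload = self.queue.popleft()
            next_hop = self.links.get(route.get(destination))
            if next_hop is None:
                retained.append((destination, payload))  # link down: keep and retry later
            else:
                next_hop.submit(destination, payload)    # now stored at the next node
                if next_hop.name == destination:
                    print(f"{self.name}: delivered to {destination}")
        self.queue = retained
```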
VNET vs. ARPANET
VNET was the first large-scale connectionless network, making it possible for a computer to join the network using dial-up lines, which made connecting inexpensive, while ARPANET at first required dedicated 50 kbit/s lines (later raised to 230 kbit/s). Most leased lines at the time typically operated at a maximum rate of 9600 baud.
VNET employed a vastly simplified routing and path finding approach, later adopted for the Internet.
VNET used true "distributed control", while ARPANET required a "control" center operated at Bolt, Beranek, and Newman in Cambridge, MA.
Notes
References
See also
NETDATA
Remote job entry
Computer networking
Computer printing
History of the Internet
IBM mainframe software
VM (operating system) | RSCS | Technology,Engineering | 1,578 |
44,270,459 | https://en.wikipedia.org/wiki/Transhumanist%20Party | The Transhumanist Party is a political party in the United States. The party's platform is based on the ideas and principles of transhumanist politics, e.g., human enhancement, human rights, science, life extension, and technological progress.
History
The Transhumanist Party was founded in 2014 by Zoltan Istvan. Istvan became the first political candidate to run for office under the banner of the Transhumanist Party when he announced his candidacy for President of the United States in the United States presidential election of 2016; he did not have ballot access in any state and received 95 write-in votes from two states.
As part of his campaign Zoltan and a cadre of transhumanist activists and embedded journalists embarked on a four-month journey in the coffin-shaped Immortality Bus, which traveled on a winding cross-country route from San Francisco to Washington D.C. The Transhumanist Party has been featured or mentioned in many major media sites, including the National Review, Business Insider, Extreme Tech, Vice, Wired, The Telegraph, The Huffington Post, The Joe Rogan Experience, Heise Online, Gizmodo, and Reason. Political scientist Roland Benedikter said the formation of the Transhumanist Party in the USA was one of three reasons transhumanism entered into the mainstream in 2014, creating "a new level of public visibility and potential impact."
November 2016–2018
Following the end of the 2016 presidential election, after Zoltan's 2016 presidential campaign was completed, Gennady Stolyarov II became the Chairman of the party, and the organization was restructured. Under Chairman Stolyarov, the party adopted a new Constitution, which included three immutable Core Ideals in Article I, Section I.
New positions were also created: Pavel Ilin became Secretary, Dinorah Delfin Director of Admissions and Public Relations, Arin Vahanian Director of Marketing, Sean Singh Director of Applied Innovation, Brent Reitze Director of Publication, Franco Cortese Director of Scholarship, and B.J. Murphy Director of Social Media. Restructured advisor positions included Zoltan Istvan as Political and Media Advisor, Bill Andrews as Biotechnology Advisor, Jose Cordeiro as Technology Advisor, Newton Lee as Education and Media Advisor, Keith Comito as Crowdfunding Advisor, Aubrey de Grey as Anti-Aging Advisor, Rich Lee as Biohacking Advisor, Katie King as Media Advisor, Ira Pastor as Regeneration Advisor, Giovanni Santostasi as Regeneration Advisor, Elizabeth Parrish as Advocacy Advisor, and Paul Spiegel as Legal Advisor.
The U.S. Transhumanist Party held six Platform votes during January, February, March, May, June, and November 2017, on the basis of which 82 Platform planks were adopted. The U.S. Transhumanist Party holds votes of its members electronically and is the first political party in the United States to use ranked-preference voting method with instant runoffs in its internal ballots.
In May 2018 the New York Times reported the U.S. Transhumanist Party as having 880 members. On July 7, 2018, the U.S. Transhumanist Party reached 1,000 members and released a demographic analysis of its membership. This analysis showed that 704 members, or 70.4%, were eligible to vote in the United States, whereas 296 or 29.6% were allied members.
During this time, the Transhumanist Party hosted several expert discussion panels, on subjects including artificial intelligence, life extension, art and transhumanism, and cryptocurrencies. Chairman Stolyarov has also hosted in-person Enlightenment Salons, which were aimed at cross-disciplinary discussion of transhumanist and life-extensionist ideas under the auspices of the U.S. Transhumanist Party.
On August 11, 2017, at the Fest 2017 conference in San Diego, California, Chairman Stolyarov gave an address entitled "The U.S. Transhumanist Party: Pursuing a Peaceful Political Revolution for Longevity", which provided an overview of the U.S. Transhumanist Party's key principles and objectives.
2020 presidential campaign
The Transhumanist Party presidential primary attracted media attention from BioEdge and the Milwaukee Record. While some media outlets reported Zoltan Istvan was considering running again, ultimately he did not join the party's primary. After a protracted primary process with nine candidates, featuring numerous debates, Johannon Ben Zion was elected as the party's nominee. After winning the primary, Ben Zion gave his acceptance speech at RAAD Fest 2019 in Las Vegas and filed with the FEC. Shortly thereafter, film producer, entrepreneur, and longevity organizer Charlie Kam became Ben Zion's running mate. On March 4, 2020, Ben Zion participated in the Free & Equal Elections Foundation's Open Presidential debate in Chicago, Illinois. Zoltan Istvan also participated in the debate, running as a Republican.
On June 12, 2020, it was announced that Ben Zion had left the Transhumanist Party, with him declaring that his belief in Techno-progressivism was incompatible with the party and that he would instead be pursuing a run for the Reform Party nomination. Kam was declared the replacement presidential nominee. In June 2020 Charlie Kam participated in a panel with London Futurists and in July 2020 his campaign received press coverage in the Daily Express. On August 21, 2020, Kam announced his selection of Elizabeth (Liz) Parrish as his vice-presidential running mate. Kam did not have ballot access or registered write-in status in any state.
2024 presidential campaign
The USTP's 2024 presidential candidate was Thomas Ernest Ross, Jr. Tom Ross won with 62.02% of the votes cast in the U.S. Transhumanist Party electronic primary held May 14–22, 2023. After winning, Ross selected Daniel Twedt to be his vice-presidential running mate. He did not have ballot access in any state.
Tom Ross's campaign had three major initiatives: the Earthling Initiative, the Artisanal Intelligence Initiative, and the Extraterrestrial Initiative. To demonstrate his commitment to AI governance, Ross appointed an AI campaign manager early in his campaign.
Platform
A core tenet of the platform is that more funding is needed for research into human life extension research and research to reduce existential risk. More generally, the goal is to raise awareness among the general public about how technologies can enhance the human species. Democratic transhumanists and libertarian transhumanists tend to be in disagreement over the role of government in society, but both agree that laws should not encumber technological human progress.
The Transhumanist Party platform promotes national and global prosperity by sharing technologies and creating enterprises to lift people and nations out of poverty, war, and injustice. The Transhumanist Party also supports LGBT rights, drug legalization, and sex work legalization. The party seeks to fully subsidize university-level education while also working to "create a cultural mindset in America that embracing and producing radical technology and science is in the best interest of our nation and species."
In terms of foreign policy and national defense, the party wants to reduce the amount of money spent on foreign wars and use the money domestically. The party also advocates managing and preparing for existential risks, eliminating dangerous diseases, and proactively guarding against abuses of technology, such as nanotechnology, synthetic viruses, and artificial intelligence.
The USTP expressly supports the rights of Artificial General Intelligence entities that are sentient and/or lucid. The Transhumanist Bill of Rights Version 3.0 recognizes 7 levels of sentience and requires entities to exist at level 5 or higher to be considered as having rights. At level 5, the main criterion is that the entity be "lucid", meaning the entity is "meta-aware", or aware of its own awareness.
The various policy points of the US Transhumanist Party's platform have attracted both praise and criticism from sociologist Steve Fuller. For example, Fuller has praised the centrality of morphological freedom in the US Transhumanist Party's bill of rights, but on the other hand he has also written that the party is too critical of the US Department of Defense, which he argues could be an ally for some transhumanist initiatives such as human enhancement and existential risk reduction. In 2018 the party as a whole was reviewed favorably as an example of a successful "niche" party by Krisztian Szabados, a director at the Edmond J. Safra Center for Ethics at Harvard University.
State parties
Affiliate parties exist in the states of Arizona, California, Colorado, Illinois, Kentucky, Maryland, Michigan, Minnesota, Nevada, New York, North Carolina, Texas, Virginia, Washington and Washington DC.
International analogs
The Transhumanist Party in Europe is the umbrella organization that supports the national-level transhumanist parties in Europe by developing unified policies and goals for the continent. Among them is the UK Transhumanist Party, which was founded in January 2015. In October 2015, Amon Twyman, the party's leader at the time, published a blog post distancing the UK party from Zoltan Istvan's campaign.
References
External links
US Transhumanist Party – Official Website
Ben Zion 2020 Official Campaign Website
US Transhumanist Party – Historic website
Transhumanist organizations
Transhumanist politics
Life extension organizations
Political parties established in 2014
Political parties in the United States | Transhumanist Party | Technology | 1,968 |
23,543 | https://en.wikipedia.org/wiki/Probability%20distribution | In probability theory and statistics, a probability distribution is the mathematical function that gives the probabilities of occurrence of possible outcomes for an experiment. It is a mathematical description of a random phenomenon in terms of its sample space and the probabilities of events (subsets of the sample space).
For instance, if $X$ is used to denote the outcome of a coin toss ("the experiment"), then the probability distribution of $X$ would take the value 0.5 (1 in 2 or 1/2) for $X = \text{heads}$, and 0.5 for $X = \text{tails}$ (assuming that the coin is fair). More commonly, probability distributions are used to compare the relative occurrence of many different random values.
Probability distributions can be defined in different ways and for discrete or for continuous variables. Distributions with special properties or for especially important applications are given specific names.
Introduction
A probability distribution is a mathematical description of the probabilities of events, subsets of the sample space. The sample space, often represented in notation by $\Omega$, is the set of all possible outcomes of a random phenomenon being observed. The sample space may be any set: a set of real numbers, a set of descriptive labels, a set of vectors, a set of arbitrary non-numerical values, etc. For example, the sample space of a coin flip could be $\Omega = \{\text{heads}, \text{tails}\}$.
To define probability distributions for the specific case of random variables (so the sample space can be seen as a numeric set), it is common to distinguish between discrete and absolutely continuous random variables. In the discrete case, it is sufficient to specify a probability mass function assigning a probability to each possible outcome (e.g. when throwing a fair die, each of the six digits "1" to "6", corresponding to the number of dots on the die, has the probability $\tfrac{1}{6}$). The probability of an event is then defined to be the sum of the probabilities of all outcomes that satisfy the event; for example, the probability of the event "the die rolls an even value" is $\tfrac{1}{6} + \tfrac{1}{6} + \tfrac{1}{6} = \tfrac{1}{2}.$
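A minimal Python sketch of the fair-die example above: the probability mass function assigns 1/6 to each face, and the probability of an event is the sum over the outcomes it contains:

```python
from fractions import Fraction

# Probability mass function of a fair six-sided die: each outcome has probability 1/6.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

# Probability of an event = sum of the probabilities of the outcomes in it.
even = {2, 4, 6}
p_even = sum(pmf[k] for k in even)
print(p_even)  # 1/2
```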
In contrast, when a random variable takes values from a continuum then by convention, any individual outcome is assigned probability zero. For such continuous random variables, only events that include infinitely many outcomes such as intervals have probability greater than 0.
For example, consider measuring the weight of a piece of ham in the supermarket, and assume the scale can provide arbitrarily many digits of precision. Then, the probability that it weighs exactly 500g must be zero because no matter how high the level of precision chosen, it cannot be assumed that there are no non-zero decimal digits in the remaining omitted digits ignored by the precision level.
However, for the same use case, it is possible to meet quality control requirements such as that a package of "500 g" of ham must weigh between 490 g and 510 g with at least 98% probability. This is possible because this measurement does not require as much precision from the underlying equipment.
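As a hedged illustration of this point, the sketch below assumes the packaged weight follows a normal distribution with mean 500 g and standard deviation 4 g (both values are invented for the example, not given in the text); under that assumption the probability of landing in the 490–510 g interval follows from the cumulative distribution function:

```python
from math import erf, sqrt

# Assumed (hypothetical) model for the ham-weight example: weight ~ Normal(500 g, 4 g).
mu, sigma = 500.0, 4.0

def normal_cdf(x):
    """Cumulative distribution function of the assumed normal model."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# P(weight is exactly 500 g) is zero, but an interval has positive probability:
p_interval = normal_cdf(510) - normal_cdf(490)
print(p_interval)  # ≈ 0.9876 — comfortably above the 98% requirement
```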
Absolutely continuous probability distributions can be described in several ways. The probability density function describes the infinitesimal probability of any given value, and the probability that the outcome lies in a given interval can be computed by integrating the probability density function over that interval. An alternative description of the distribution is by means of the cumulative distribution function, which describes the probability that the random variable is no larger than a given value (i.e., $P(X \le x)$ for some $x$). The cumulative distribution function is the area under the probability density function from $-\infty$ to $x$, as shown in figure 1.
General probability definition
Let $(\Omega, \mathcal{F}, P)$ be a probability space, $(E, \mathcal{E})$ be a measurable space, and $X \colon \Omega \to E$ be an $E$-valued random variable. Then the probability distribution of $X$ is the pushforward measure $X_{*}P$ of the probability measure $P$ onto $(E, \mathcal{E})$ induced by $X$. Explicitly, this pushforward measure on $(E, \mathcal{E})$ is given by
$X_{*}P(B) = P\bigl(X^{-1}(B)\bigr)$
for $B \in \mathcal{E}.$
Any probability distribution is a probability measure on $(E, \mathcal{E})$ (in general different from $P$, unless $X$ happens to be the identity map).
A probability distribution can be described in various forms, such as by a probability mass function or a cumulative distribution function. One of the most general descriptions, which applies for absolutely continuous and discrete variables, is by means of a probability function whose input space $\mathcal{A}$ is a σ-algebra, and gives a real number probability as its output, particularly, a number in $[0, 1]$.
The probability function can take as argument subsets of the sample space itself, as in the coin toss example, where the function was defined so that $P(\text{heads}) = 0.5$ and $P(\text{tails}) = 0.5$. However, because of the widespread use of random variables, which transform the sample space into a set of numbers (e.g., $\mathbb{R}$, $\mathbb{N}$), it is more common to study probability distributions whose argument are subsets of these particular kinds of sets (number sets), and all probability distributions discussed in this article are of this type. It is common to denote as $P(X \in E)$ the probability that a certain value of the variable $X$ belongs to a certain event $E$.
The above probability function only characterizes a probability distribution if it satisfies all the Kolmogorov axioms, that is:
$P(X \in E) \ge 0$ for every event $E$, so the probability is non-negative;
$P(X \in E) \le 1$ for every event $E$, so no probability exceeds $1$; and
$P\bigl(X \in \bigcup_{i} E_i\bigr) = \sum_{i} P(X \in E_i)$ for any countable disjoint family of sets $\{E_i\}$.
The concept of probability function is made more rigorous by defining it as the element of a probability space $(X, \mathcal{A}, P)$, where $X$ is the set of possible outcomes, $\mathcal{A}$ is the set of all subsets $E \subset X$ whose probability can be measured, and $P$ is the probability function, or probability measure, that assigns a probability to each of these measurable subsets $E \in \mathcal{A}$.
Probability distributions usually belong to one of two classes. A discrete probability distribution is applicable to the scenarios where the set of possible outcomes is discrete (e.g. a coin toss, a roll of a die) and the probabilities are encoded by a discrete list of the probabilities of the outcomes; in this case the discrete probability distribution is known as probability mass function. On the other hand, absolutely continuous probability distributions are applicable to scenarios where the set of possible outcomes can take on values in a continuous range (e.g. real numbers), such as the temperature on a given day. In the absolutely continuous case, probabilities are described by a probability density function, and the probability distribution is by definition the integral of the probability density function. The normal distribution is a commonly encountered absolutely continuous probability distribution. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures.
A probability distribution whose sample space is one-dimensional (for example real numbers, list of labels, ordered labels or binary) is called univariate, while a distribution whose sample space is a vector space of dimension 2 or more is called multivariate. A univariate distribution gives the probabilities of a single random variable taking on various different values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector – a list of two or more random variables – taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. A commonly encountered multivariate distribution is the multivariate normal distribution.
Besides the probability function, the cumulative distribution function, the probability mass function and the probability density function, the moment generating function and the characteristic function also serve to identify a probability distribution, as they uniquely determine an underlying cumulative distribution function.
Terminology
Some key concepts and terms, widely used in the literature on the topic of probability distributions, are listed below.
Basic terms
Random variable: takes values from a sample space; probabilities describe which values and set of values are taken more likely.
Event: set of possible values (outcomes) of a random variable that occurs with a certain probability.
Probability function or probability measure: describes the probability that the event occurs.
Cumulative distribution function: function evaluating the probability that $X$ will take a value less than or equal to $x$ for a random variable $X$ (only for real-valued random variables).
Quantile function: the inverse of the cumulative distribution function. Gives $x$ such that, with probability $q$, $X$ will not exceed $x$.
Discrete probability distributions
Discrete probability distribution: for many random variables with finitely or countably infinitely many values.
Probability mass function (pmf): function that gives the probability that a discrete random variable is equal to some value.
Frequency distribution: a table that displays the frequency of various outcomes .
Relative frequency distribution: a frequency distribution where each value has been divided (normalized) by a number of outcomes in a sample (i.e. sample size).
Categorical distribution: for discrete random variables with a finite set of values.
Absolutely continuous probability distributions
Absolutely continuous probability distribution: for many random variables with uncountably many values.
Probability density function (pdf) or probability density: function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample.
Related terms
Support: set of values that can be assumed with non-zero probability (or probability density in the case of a continuous distribution) by the random variable. For a random variable $X$, it is sometimes denoted as $\operatorname{supp}(X)$.
Tail: the regions close to the bounds of the random variable, if the pmf or pdf are relatively low therein. Usually has the form $X > a$, $X < b$, or a union thereof.
Head: the region where the pmf or pdf is relatively high. Usually has the form $a < X < b$.
Expected value or mean: the weighted average of the possible values, using their probabilities as their weights; or the continuous analog thereof.
Median: the value such that the set of values less than the median, and the set greater than the median, each have probabilities no greater than one-half.
Mode: for a discrete random variable, the value with highest probability; for an absolutely continuous random variable, a location at which the probability density function has a local peak.
Quantile: the q-quantile is the value $x$ such that $P(X \le x) = q$.
Variance: the second moment of the pmf or pdf about the mean; an important measure of the dispersion of the distribution.
Standard deviation: the square root of the variance, and hence another measure of dispersion.
Symmetry: a property of some distributions in which the portion of the distribution to the left of a specific value (usually the median) is a mirror image of the portion to its right.
Skewness: a measure of the extent to which a pmf or pdf "leans" to one side of its mean. The third standardized moment of the distribution.
Kurtosis: a measure of the "fatness" of the tails of a pmf or pdf. The fourth standardized moment of the distribution.
Cumulative distribution function
In the special case of a real-valued random variable, the probability distribution can equivalently be represented by a cumulative distribution function instead of a probability measure. The cumulative distribution function of a random variable $X$ with regard to a probability distribution $p$ is defined as
$F(x) = P(X \le x).$
The cumulative distribution function of any real-valued random variable has the properties:
$F$ is non-decreasing;
$F$ is right-continuous;
$0 \le F(x) \le 1$;
$\lim_{x \to -\infty} F(x) = 0$ and $\lim_{x \to \infty} F(x) = 1$; and
$\Pr(a < X \le b) = F(b) - F(a)$.
Conversely, any function that satisfies the first four of the properties above is the cumulative distribution function of some probability distribution on the real numbers.
Any probability distribution can be decomposed as the mixture of a discrete, an absolutely continuous and a singular continuous distribution, and thus any cumulative distribution function admits a decomposition as the convex sum of the three according cumulative distribution functions.
Discrete probability distribution
A discrete probability distribution is the probability distribution of a random variable that can take on only a countable number of values (almost surely) which means that the probability of any event can be expressed as a (finite or countably infinite) sum:
$P(X \in E) = \sum_{\omega \in A \cap E} P(X = \omega),$
where $A$ is a countable set with $P(X \in A) = 1$. Thus the discrete random variables (i.e. random variables whose probability distribution is discrete) are exactly those with a probability mass function $p(x) = P(X = x)$. In the case where the range of values is countably infinite, these values have to decline to zero fast enough for the probabilities to add up to 1. For example, if $p(n) = \tfrac{1}{2^n}$ for $n = 1, 2, \ldots$, the sum of probabilities would be $\tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots = 1$.
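A quick numerical check of the example above (a sketch using exact fractions for the partial sums):

```python
from fractions import Fraction

# Probability mass function p(n) = 1/2**n on n = 1, 2, 3, ...; the probabilities
# decline fast enough that they sum to 1.
partial = sum(Fraction(1, 2**n) for n in range(1, 51))
print(float(partial))  # 0.9999999999999991 — the partial sums approach 1
```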
Well-known discrete probability distributions used in statistical modeling include the Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, the negative binomial distribution and categorical distribution. When a sample (a set of observations) is drawn from a larger population, the sample points have an empirical distribution that is discrete, and which provides information about the population distribution. Additionally, the discrete uniform distribution is commonly used in computer programs that make equal-probability random selections between a number of choices.
Cumulative distribution function
A real-valued discrete random variable can equivalently be defined as a random variable whose cumulative distribution function increases only by jump discontinuities—that is, its cdf increases only where it "jumps" to a higher value, and is constant in intervals without jumps. The points where jumps occur are precisely the values which the random variable may take.
Thus the cumulative distribution function has the form F(x) = P(X ≤ x) = Σ_{ω ≤ x} p(ω), where p is the probability mass function.
The points where the cdf jumps always form a countable set; this may be any countable set and thus may even be dense in the real numbers.
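As a sketch of the jump behaviour described above (illustrative code, not from the source; the pmf is an arbitrary example), the cdf of a discrete variable can be evaluated by summing the pmf over values not exceeding the argument:

```python
pmf = {1: 0.5, 2: 0.25, 3: 0.25}       # an arbitrary discrete distribution

def cdf(x):
    """F(x) = P(X <= x): constant between support points, jumping at each one."""
    return sum(p for value, p in pmf.items() if value <= x)

print(cdf(0.9), cdf(1.0), cdf(1.5), cdf(2.0), cdf(3.0))   # 0 0.5 0.5 0.75 1.0
```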
Dirac delta representation
A discrete probability distribution is often represented with Dirac measures, the probability distributions of deterministic random variables. For any outcome ω, let δ_ω be the Dirac measure concentrated at ω. Given a discrete probability distribution, there is a countable set A with P(X ∈ A) = 1 and a probability mass function p. If E is any event, then
P(X ∈ E) = Σ_{ω ∈ A} p(ω) δ_ω(E),
or in short,
P_X = Σ_{ω ∈ A} p(ω) δ_ω.
Similarly, discrete distributions can be represented with the Dirac delta function as a generalized probability density function f, where f(x) = Σ_{ω ∈ A} p(ω) δ(x − ω), which means
P(X ∈ E) = ∫_E f(x) dx = Σ_{ω ∈ A ∩ E} p(ω)
for any event E.
Indicator-function representation
For a discrete random variable X, let u_0, u_1, ... be the values it can take with non-zero probability. Denote
Ω_i = X⁻¹(u_i) = {ω : X(ω) = u_i}, for i = 0, 1, 2, ...
These are disjoint sets, and for such sets
P(∪_i Ω_i) = Σ_i P(Ω_i) = Σ_i P(X = u_i) = 1.
It follows that the probability that X takes any value except for u_0, u_1, ... is zero, and thus one can write X as
X(ω) = Σ_i u_i 1_{Ω_i}(ω)
except on a set of probability zero, where 1_A is the indicator function of A. This may serve as an alternative definition of discrete random variables.
One-point distribution
A special case is the discrete distribution of a random variable that can take on only one fixed value; in other words, it is a deterministic distribution. Expressed formally, the random variable X has a one-point distribution if it has a possible outcome x such that P(X = x) = 1. All other possible outcomes then have probability 0. Its cumulative distribution function jumps immediately from 0 to 1.
Absolutely continuous probability distribution
An absolutely continuous probability distribution is a probability distribution on the real numbers with uncountably many possible values, such as a whole interval in the real line, and where the probability of any event can be expressed as an integral. More precisely, a real random variable X has an absolutely continuous probability distribution if there is a function f : ℝ → [0, ∞) such that for each interval [a, b] ⊂ ℝ the probability of X belonging to [a, b] is given by the integral of f over [a, b]:
P(a ≤ X ≤ b) = ∫_a^b f(x) dx.
This is the definition of a probability density function, so that absolutely continuous probability distributions are exactly those with a probability density function.
In particular, the probability for X to take any single value a (that is, a ≤ X ≤ a) is zero, because an integral with coinciding upper and lower limits is always equal to zero.
If the interval [a, b] is replaced by any measurable set A, the according equality still holds: P(X ∈ A) = ∫_A f(x) dx.
An absolutely continuous random variable is a random variable whose probability distribution is absolutely continuous.
There are many examples of absolutely continuous probability distributions: normal, uniform, chi-squared, and others.
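A small numerical sketch of the defining property (added for illustration; it uses the standard exponential density with rate 1 as the example choice):

```python
import math

def f(x):
    """Density of the exponential distribution with rate 1 (an example choice)."""
    return math.exp(-x) if x >= 0 else 0.0

def prob_interval(a, b, steps=100_000):
    """P(a <= X <= b) approximated as the integral of f over [a, b] (trapezoid rule)."""
    h = (b - a) / steps
    total = 0.5 * (f(a) + f(b))
    for i in range(1, steps):
        total += f(a + i * h)
    return total * h

print(prob_interval(0.0, 1.0))   # close to 1 - exp(-1), about 0.632
print(prob_interval(2.0, 2.0))   # a single point has probability 0
```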
Cumulative distribution function
Absolutely continuous probability distributions as defined above are precisely those with an absolutely continuous cumulative distribution function.
In this case, the cumulative distribution function has the form
F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(t) dt,
where f is a density of the random variable X with regard to the distribution P.
Note on terminology: Absolutely continuous distributions ought to be distinguished from continuous distributions, which are those having a continuous cumulative distribution function. Every absolutely continuous distribution is a continuous distribution but the converse is not true: there exist singular distributions, which are neither absolutely continuous nor discrete nor a mixture of those, and do not have a density. An example is given by the Cantor distribution. Some authors however use the term "continuous distribution" to denote all distributions whose cumulative distribution function is absolutely continuous, i.e. refer to absolutely continuous distributions as continuous distributions.
For a more general definition of density functions and the equivalent absolutely continuous measures see absolutely continuous measure.
Kolmogorov definition
In the measure-theoretic formalization of probability theory, a random variable is defined as a measurable function X from a probability space (Ω, 𝓕, P) to a measurable space (𝒳, 𝒜). Given that probabilities of events of the form {ω ∈ Ω : X(ω) ∈ A} satisfy Kolmogorov's probability axioms, the probability distribution of X is the image measure X_*P of X, which is a probability measure on (𝒳, 𝒜) satisfying X_*P = P ∘ X⁻¹.
Other kinds of distributions
Absolutely continuous and discrete distributions with support on ℝᵏ or ℕᵏ, respectively, are extremely useful to model a myriad of phenomena, since most practical distributions are supported on relatively simple subsets, such as hypercubes or balls. However, this is not always the case, and there exist phenomena with supports that are actually complicated curves within some space ℝⁿ or similar. In these cases, the probability distribution is supported on the image of such a curve, and is likely to be determined empirically, rather than finding a closed formula for it.
One example is shown in the figure to the right, which displays the evolution of a system of differential equations (commonly known as the Rabinovich–Fabrikant equations) that can be used to model the behaviour of Langmuir waves in plasma. When this phenomenon is studied, the observed states from the subset are as indicated in red. So one could ask what is the probability of observing a state in a certain position of the red subset; if such a probability exists, it is called the probability measure of the system.
This kind of complicated support appears quite frequently in dynamical systems. It is not simple to establish that the system has a probability measure, and the main problem is the following. Let t₁ ≪ t₂ ≪ t₃ be instants in time and E a subset of the support; if the probability measure exists for the system, one would expect the frequency of observing states inside set E to be equal in the intervals [t₁, t₂] and [t₂, t₃], which might not happen; for example, it could oscillate similarly to a sine, sin(t), whose limit as t → ∞ does not converge. Formally, the measure exists only if the limit of the relative frequency converges when the system is observed into the infinite future. The branch of dynamical systems that studies the existence of a probability measure is ergodic theory.
Note that even in these cases, the probability distribution, if it exists, might still be termed "absolutely continuous" or "discrete" depending on whether the support is uncountable or countable, respectively.
Random number generation
Most algorithms are based on a pseudorandom number generator that produces numbers that are uniformly distributed in the half-open interval [0, 1). These random variates are then transformed via some algorithm to create a new random variate having the required probability distribution. With this source of uniform pseudo-randomness, realizations of any random variable can be generated.
For example, suppose U has a uniform distribution between 0 and 1. To construct a random Bernoulli variable X for some 0 < p < 1, we define
X = 1 if U < p and X = 0 if U ≥ p,
so that
P(X = 1) = P(U < p) = p and P(X = 0) = P(U ≥ p) = 1 − p.
This random variable X has a Bernoulli distribution with parameter p. This is a transformation of a discrete random variable.
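A sketch of this construction (illustrative; the parameter value and sample size are arbitrary choices):

```python
import random

def bernoulli(p):
    """Transform a uniform variate U on [0, 1) into a Bernoulli variate."""
    u = random.random()          # U ~ Uniform[0, 1)
    return 1 if u < p else 0     # P(X = 1) = P(U < p) = p

p = 0.3
samples = [bernoulli(p) for _ in range(100_000)]
print(sum(samples) / len(samples))   # empirical frequency, close to 0.3
```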
For a distribution function F of an absolutely continuous random variable, an absolutely continuous random variable must be constructed. F⁻¹, an inverse function of F, relates to the uniform variable U: U = F(X) is equivalent to X = F⁻¹(U).
For example, suppose a random variable X that has an exponential distribution F(x) = 1 − e^(−λx) must be constructed.
F(x) = u is equivalent to x = −ln(1 − u)/λ,
so F⁻¹(u) = −ln(1 − u)/λ, and if U has a uniform distribution on [0, 1), then the random variable X defined by X = F⁻¹(U) = −ln(1 − U)/λ has an exponential distribution with rate λ.
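The same inverse-transform idea in code (a sketch; the rate parameter is an arbitrary example value):

```python
import math
import random

def exponential(lam):
    """Inverse-transform sampling: X = F^(-1)(U) = -ln(1 - U) / lam."""
    u = random.random()                # U ~ Uniform[0, 1)
    return -math.log(1.0 - u) / lam

lam = 2.0
samples = [exponential(lam) for _ in range(100_000)]
print(sum(samples) / len(samples))     # sample mean, close to 1/lam = 0.5
```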
A frequent problem in statistical simulations (the Monte Carlo method) is the generation of pseudo-random numbers that are distributed in a given way.
Common probability distributions and their applications
The concept of the probability distribution and the random variables which they describe underlies the mathematical discipline of probability theory, and the science of statistics. There is spread or variability in almost any value that can be measured in a population (e.g. height of people, durability of a metal, sales growth, traffic flow, etc.); almost all measurements are made with some intrinsic error; in physics, many processes are described probabilistically, from the kinetic properties of gases to the quantum mechanical description of fundamental particles. For these and many other reasons, simple numbers are often inadequate for describing a quantity, while probability distributions are often more appropriate.
The following is a list of some of the most common probability distributions, grouped by the type of process that they are related to. For a more complete list, see list of probability distributions, which groups by the nature of the outcome being considered (discrete, absolutely continuous, multivariate, etc.)
All of the univariate distributions below are singly peaked; that is, it is assumed that the values cluster around a single point. In practice, actually observed quantities may cluster around multiple values. Such quantities can be modeled using a mixture distribution.
Linear growth (e.g. errors, offsets)
Normal distribution (Gaussian distribution), for a single such quantity; the most commonly used absolutely continuous distribution
Exponential growth (e.g. prices, incomes, populations)
Log-normal distribution, for a single such quantity whose log is normally distributed
Pareto distribution, for a single such quantity whose log is exponentially distributed; the prototypical power law distribution
Uniformly distributed quantities
Discrete uniform distribution, for a finite set of values (e.g. the outcome of rolling a fair die)
Continuous uniform distribution, for absolutely continuously distributed values
Bernoulli trials (yes/no events, with a given probability)
Basic distributions:
Bernoulli distribution, for the outcome of a single Bernoulli trial (e.g. success/failure, yes/no)
Binomial distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed total number of independent occurrences
Negative binomial distribution, for binomial-type observations but where the quantity of interest is the number of failures before a given number of successes occurs
Geometric distribution, for binomial-type observations but where the quantity of interest is the number of failures before the first success; a special case of the negative binomial distribution
Related to sampling schemes over a finite population:
Hypergeometric distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed number of total occurrences, using sampling without replacement
Beta-binomial distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed number of total occurrences, sampling using a Pólya urn model (in some sense, the "opposite" of sampling without replacement)
Categorical outcomes (events with k possible outcomes)
Categorical distribution, for a single categorical outcome (e.g. yes/no/maybe in a survey); a generalization of the Bernoulli distribution
Multinomial distribution, for the number of each type of categorical outcome, given a fixed number of total outcomes; a generalization of the binomial distribution
Multivariate hypergeometric distribution, similar to the multinomial distribution, but using sampling without replacement; a generalization of the hypergeometric distribution
Poisson process (events that occur independently with a given rate)
Poisson distribution, for the number of occurrences of a Poisson-type event in a given period of time
Exponential distribution, for the time before the next Poisson-type event occurs
Gamma distribution, for the time before the next k Poisson-type events occur
Absolute values of vectors with normally distributed components
Rayleigh distribution, for the distribution of vector magnitudes with Gaussian distributed orthogonal components. Rayleigh distributions are found in RF signals with Gaussian real and imaginary components.
Rice distribution, a generalization of the Rayleigh distributions for where there is a stationary background signal component. Found in Rician fading of radio signals due to multipath propagation and in MR images with noise corruption on non-zero NMR signals.
Normally distributed quantities operated with sum of squares
Chi-squared distribution, the distribution of a sum of squared standard normal variables; useful e.g. for inference regarding the sample variance of normally distributed samples (see chi-squared test)
Student's t distribution, the distribution of the ratio of a standard normal variable and the square root of a scaled chi squared variable; useful for inference regarding the mean of normally distributed samples with unknown variance (see Student's t-test)
F-distribution, the distribution of the ratio of two scaled chi squared variables; useful e.g. for inferences that involve comparing variances or involving R-squared (the squared correlation coefficient)
As conjugate prior distributions in Bayesian inference
Beta distribution, for a single probability (real number between 0 and 1); conjugate to the Bernoulli distribution and binomial distribution
Gamma distribution, for a non-negative scaling parameter; conjugate to the rate parameter of a Poisson distribution or exponential distribution, the precision (inverse variance) of a normal distribution, etc.
Dirichlet distribution, for a vector of probabilities that must sum to 1; conjugate to the categorical distribution and multinomial distribution; generalization of the beta distribution
Wishart distribution, for a symmetric non-negative definite matrix; conjugate to the inverse of the covariance matrix of a multivariate normal distribution; generalization of the gamma distribution
Some specialized applications of probability distributions
The cache language models and other statistical language models used in natural language processing to assign probabilities to the occurrence of particular words and word sequences do so by means of probability distributions.
In quantum mechanics, the probability density of finding the particle at a given point is proportional to the square of the magnitude of the particle's wavefunction at that point (see Born rule). Therefore, the probability distribution function of the position of a particle is described by P(a ≤ x ≤ b) = ∫_a^b |ψ(x, t)|² dx, the probability that the particle's position x will be in the interval a ≤ x ≤ b in dimension one, and a similar triple integral in dimension three. This is a key principle of quantum mechanics. (A numerical sketch of this integral appears after this list.)
Probabilistic load flow in power-flow studies treats the uncertainties of input variables as probability distributions and provides the power flow calculation also in terms of probability distributions.
Prediction of the occurrence of natural phenomena based on previous frequency distributions, such as tropical cyclones, hail, time between events, etc.
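A numerical sketch of the Born-rule integral mentioned in the quantum mechanics item above (added for illustration; it uses the textbook infinite square well ground state, and the well width and interval are arbitrary example choices):

```python
import math

L = 1.0                                   # width of the well (example value)

def psi(x):
    """Ground-state wavefunction of a particle in a box of width L."""
    return math.sqrt(2.0 / L) * math.sin(math.pi * x / L)

def prob_position(a, b, steps=100_000):
    """P(a <= x <= b) = integral of |psi(x)|^2 over [a, b] (trapezoid rule)."""
    h = (b - a) / steps
    total = 0.5 * (psi(a) ** 2 + psi(b) ** 2)
    for i in range(1, steps):
        total += psi(a + i * h) ** 2
    return total * h

print(prob_position(0.0, L))               # the particle is somewhere in the well: ~1.0
print(prob_position(0.25 * L, 0.75 * L))   # central half of the well: ~0.818
```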
Fitting
See also
Conditional probability distribution
Empirical probability distribution
Histogram
Joint probability distribution
Probability measure
Quasiprobability distribution
Riemann–Stieltjes integral application to probability theory
Lists
List of probability distributions
List of statistical topics
References
Citations
Sources
External links
Field Guide to Continuous Probability Distributions, Gavin E. Crooks.
Distinguishing probability measure, function and distribution, Math Stack Exchange
Mathematical and quantitative methods (economics)
it:Variabile casuale#Distribuzione di probabilità | Probability distribution | Mathematics | 5,441 |
12,194,886 | https://en.wikipedia.org/wiki/C3H8O | {{DISPLAYTITLE:C3H8O}}
The molecular formula C3H8O may refer to:
Methoxyethane (Ethyl methyl ether), CH3-O-CH2-CH3, CAS number
Propanols
Isopropyl alcohol (isopropanol, 2-propanol), CH3-CHOH-CH3, CAS number
1-Propanol (n-propanol, n-propyl alcohol), CH3-CH2-CH2OH, CAS number | C3H8O | Chemistry | 115 |
26,537,401 | https://en.wikipedia.org/wiki/Anthropophilia | In parasitology, anthropophilia, from the Greek ἅνθρωπος (anthrōpos, "human being") and φιλία (philia, "friendship" or "love"), is a preference of a parasite or dermatophyte for humans over other animals. The related term endophilia refers specifically to a preference for being in human habitats, especially inside dwellings. The term zoophilia, in this context, describes animals which prefer non-human animals for nourishment.
Most usage of the term anthropophilia refers to hematophagous insects (see Anopheles) that prefer human blood over animal blood (zoophily, but see other meanings of zoophily). Examples other than haematophagy include geckoes that live close to humans, pied crows (Corvus albus), cockroaches, and many others. In the study of malaria and its disease vectors, researchers make the distinction between anthropophilic mosquitoes and other types as part of disease eradication efforts.
Anthropic organisms are organisms that show anthropophily, where the adjective synanthropic refers to organisms that live close to human settlements and houses, and eusynanthropic to those that live within human housing.
See also
Human-animal breastfeeding
References
Parasites of humans
Habitat | Anthropophilia | Biology | 301 |
1,067,670 | https://en.wikipedia.org/wiki/Hilbert%E2%80%93Speiser%20theorem | In mathematics, the Hilbert–Speiser theorem is a result on cyclotomic fields, characterising those with a normal integral basis. More generally, it applies to any finite abelian extension of ℚ, which by the Kronecker–Weber theorem are isomorphic to subfields of cyclotomic fields.
Hilbert–Speiser Theorem. A finite abelian extension K/ℚ has a normal integral basis if and only if it is tamely ramified over ℚ.
This is the condition that K should be a subfield of ℚ(ζₙ) where n is a squarefree odd number. This result was introduced by Hilbert in his Zahlbericht and by Speiser.
In cases where the theorem states that a normal integral basis does exist, such a basis may be constructed by means of Gaussian periods. For example, if we take a prime number p, ℚ(ζ_p) has a normal integral basis consisting of all the p-th roots of unity other than 1. For a field K contained in it, the field trace can be used to construct such a basis in K also (see the article on Gaussian periods). Then in the case of n squarefree and odd, ℚ(ζₙ) is a compositum of subfields of this type for the primes p dividing n (this follows from a simple argument on ramification). This decomposition can be used to treat any of its subfields.
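As an illustrative rendering of the prime case just described (a standard computation sketched here for clarity, not text from the source article):

```latex
% Sketch: normal integral basis of Q(zeta_p) for a prime p.
% The extension is ramified only at p, with ramification index p - 1,
% which is coprime to p, so the extension is tamely ramified.
\[
  \operatorname{Gal}\bigl(\mathbb{Q}(\zeta_p)/\mathbb{Q}\bigr)
  \;\cong\; (\mathbb{Z}/p\mathbb{Z})^{\times},
  \qquad
  \sigma_a(\zeta_p) = \zeta_p^{a} \quad (1 \le a \le p-1).
\]
% The Galois orbit of zeta_p consists of all p-th roots of unity other
% than 1, and these conjugates form a Z-basis of the ring of integers:
\[
  \mathcal{O}_{\mathbb{Q}(\zeta_p)}
  = \mathbb{Z}[\zeta_p]
  = \mathbb{Z}\zeta_p \oplus \mathbb{Z}\zeta_p^{2} \oplus \cdots \oplus \mathbb{Z}\zeta_p^{p-1},
\]
% so zeta_p generates a normal integral basis, as the theorem predicts.
```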
A converse to the Hilbert–Speiser theorem was later proved:
Each finite tamely ramified abelian extension K of a fixed number field J has a relative normal integral basis if and only if J = ℚ.
There is an elliptic analogue of the theorem, proven by Srivastav and Taylor.
It is now called the Srivastav–Taylor theorem.
References
Cyclotomic fields
Theorems in algebraic number theory | Hilbert–Speiser theorem | Mathematics | 340 |
11,541,555 | https://en.wikipedia.org/wiki/HD%2011964%20b | HD 11964 b is an extrasolar planet, a gas giant like Jupiter approximately 110 light-years away in the constellation of Cetus. The planet orbits the yellow subgiant star HD 11964 in a nearly-circular orbit, taking over 5 years to complete a revolution around the star at a distance of 3.34 astronomical units.
The planet was discovered in 2005 and published as part of the Catalog of Nearby Exoplanets under the designation HD 11964 b. However, since that time there has been confusion as to the designations of the planets in the HD 11964 system, leading to some sources designating this planet as "HD 11964 c". In a recent review of the properties of multi-planet extrasolar planetary systems, the discovery team has stated that the correct designation for this planet is HD 11964 b.
References
Cetus
Exoplanets discovered in 2005
Giant planets
Exoplanets detected by radial velocity | HD 11964 b | Astronomy | 190 |
9,874,504 | https://en.wikipedia.org/wiki/ETSI%20Satellite%20Digital%20Radio | ETSI Satellite Digital Radio (SDR or ETSI SDR) describes a standard of satellite digital radio. It is an activity of the European standardisation organisation ETSI.
It addresses systems where a satellite broadcasts directly to mobile and handheld receivers in L band or S band and is complemented by terrestrial transmitters. The broadcast content consists of multicast audio (digital radio), video (mobile TV) and data (program guide, text and graphical information, as well as off-line content). The satellite component allows geographical coverage at low cost, whereas the terrestrial component improves reception quality in built up areas. The specifications consider conditional access and digital rights management.
1worldspace planned to use ETSI SDR in its new network covering Europe from 2009, but the company went defunct before it launched its service. Ondas Media has also announced that it will use ETSI SDR.
The ETSI SDR is also similar to the Sirius XM Radio, the S-DMB used in South Korea for multimedia broadcasting since May 2005, the China Multimedia Mobile Broadcasting (CMMB) and the defunct MobaHo! service (2004-2009). The DVB-SH specifications, which the DVB Project has created, target similar broadcast systems as ETSI SDR.
ETSI SDR Standard
The ETSI SDR standard allows implementation of parts of such networks in an interoperable way. So far, ETSI has standardized the physical layer of the air interface (radio interface). This allows implementation of demodulators in integrated circuits. The physical layer is described by the following parts of ETSI EN 302 550:
ETSI EN 302 550-1-1 "Satellite Earth Stations and Systems (SES); Satellite Digital Radio (SDR) Systems; Part 1: Physical Layer of the Radio Interface; Sub-Part 1: Outer Physical Layer"
ETSI EN 302 550-1-2 "Satellite Earth Stations and Systems (SES); Satellite Digital Radio (SDR) Systems; Part 1: Physical Layer of the Radio Interface; Sub-Part 2: Inner Physical Layer Single Carrier Modulation"
ETSI EN 302 550-1-3 "Satellite Earth Stations and Systems (SES); Satellite Digital Radio (SDR) Systems; Inner Physical Layer of the Radio Interface; Part 1: Physical Layer of the Radio Interface; Sub-Part 3: Inner Physical Layer Multi Carrier Modulation"
These three parts replace the previous ETSI SDR standards ETSI TS 102 550, ETSI TS 102 551-1 and ETSI TS 102 551-2.
The following technical report contains guidelines for the use of these standards:
ETSI TR 102 604 Satellite Earth Stations and Systems (SES); Satellite Digital Radio (SDR) Systems; Guidelines for the use of the physical layer standards
The following technical report describes the facts and assumptions on which the SDR standards are based:
ETSI TR 102 525 "Satellite Earth Stations and Systems (SES); Satellite Digital Radio (SDR) service; Functionalities, architecture and technologies"
Note that in this document the word "may" replaces the word "shall" due to a decision of the ETSI Board in June 2006.
All ETSI specifications are open standards available at ETSI Publications Download Area (this will open ETSI document search engine; free registration is required to download PDF files).
See also
Digital Audio Broadcasting (DAB)
Digital Multimedia Broadcasting (DMB)
Digital Radio Mondiale (DRM)
DVB-H (Digital Video Broadcasting - Handhelds)
DVB-T (Digital Video Broadcasting - Terrestrial)
Sirius Satellite Radio
EchoStar Mobile
XM Satellite Radio
References
External links
ETSI (European Telecommunications Standards Institute)
WorldSpace Europe
Ondas Media
1worldspace
EchoStar Mobile
Inmarsat
AT&T CruiseCast
News and information on the new digital broadcasting systems
ETSI
Open standards
Satellite radio
Telecommunications standards
Television technology | ETSI Satellite Digital Radio | Technology | 789 |
9,488,412 | https://en.wikipedia.org/wiki/Acyl-CoA | Acyl-CoA is a group of CoA-based coenzymes that metabolize carboxylic acids. Fatty acyl-CoA's are susceptible to beta oxidation, forming, ultimately, acetyl-CoA. The acetyl-CoA enters the citric acid cycle, eventually forming several equivalents of ATP. In this way, fats are converted to ATP, the common biochemical energy carrier.
Functions
Fatty acid activation
Fats are broken down by conversion to acyl-CoA. This conversion is one response to high energy demands such as exercise.
The oxidative degradation of fatty acids is a two-step process, catalyzed by acyl-CoA synthetase. Fatty acids are converted to their acyl phosphate, the precursor to acyl-CoA. The latter conversion is mediated by acyl-CoA synthase:
acyl-P + HS-CoA → acyl-S-CoA + Pi + H+
Three types of acyl-CoA synthases are employed, depending on the chain length of the fatty acid. For example, the substrates for medium chain acyl-CoA synthase are 4–11 carbon fatty acids. The enzyme acyl-CoA thioesterase hydrolyzes the acyl-CoA to form a free fatty acid and coenzyme A.
Beta oxidation of acyl-CoA
The second step of fatty acid degradation is beta oxidation. Beta oxidation occurs in mitochondria. After formation in the cytosol, acyl-CoA is transported into the mitochondria, the location of beta oxidation. Transport of acyl-CoA into the mitochondria requires carnitine palmitoyltransferase 1 (CPT1), which converts acyl-CoA into acylcarnitine, which gets transported into the mitochondrial matrix. Once in the matrix, acylcarnitine is converted back to acyl-CoA by CPT2. Beta oxidation may begin now that Acyl-CoA is in the mitochondria.
Beta oxidation of acyl-CoA occurs in four steps.
1. Acyl-CoA dehydrogenase catalyzes dehydrogenation of the acyl-CoA, creating a double bond between the alpha and beta carbons. FAD is the hydrogen acceptor, yielding FADH2.
2. Enoyl-CoA hydrase catalyzes the addition of water across the newly formed double bond to make an alcohol.
3. 3-hydroxyacyl-CoA dehydrogenase oxidizes the alcohol group to a ketone. NADH is produced from NAD+.
4. Thiolase cleaves between the alpha carbon and ketone to release one molecule of Acetyl-CoA and the Acyl-CoA which is now 2 carbons shorter.
This four-step process repeats until all carbons in the acyl-CoA chain have been removed, leaving only acetyl-CoA. During one cycle of beta oxidation, acyl-CoA yields one molecule each of acetyl-CoA, FADH2, and NADH. Acetyl-CoA is then used in the citric acid cycle while FADH2 and NADH are sent to the electron transport chain. These intermediates all end up providing energy for the body as they are ultimately converted to ATP.
Beta oxidation, as well as alpha-oxidation, also occurs in the peroxisome. The peroxisome handles beta oxidation of fatty acids that have more than 20 carbons in their chain because the peroxisome contains very-long-chain Acyl-CoA synthetases. These enzymes are better equipped to oxidize Acyl-CoA with long chains that the mitochondria cannot handle.
Example using stearic acid
Beta oxidation removes 2 carbons at a time, so in the oxidation of an 18-carbon fatty acid such as stearic acid, 8 cycles will need to occur to completely break down the acyl-CoA. This will produce 9 acetyl-CoA that have 2 carbons each, 8 FADH2, and 8 NADH.
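A minimal arithmetic sketch of this bookkeeping (an added illustration, not from the source article; the function name and the restriction to even-numbered, saturated chains are assumptions):

```python
def beta_oxidation_products(carbons: int) -> dict:
    """Count the products of complete beta oxidation of a saturated,
    even-numbered fatty acyl-CoA with the given number of carbons."""
    if carbons < 4 or carbons % 2:
        raise ValueError("sketch assumes an even chain of at least 4 carbons")
    cycles = carbons // 2 - 1          # each cycle shortens the chain by 2 carbons
    return {
        "cycles": cycles,
        "acetyl_CoA": carbons // 2,    # one per cycle plus the final 2-carbon unit
        "FADH2": cycles,               # one per acyl-CoA dehydrogenase step
        "NADH": cycles,                # one per 3-hydroxyacyl-CoA dehydrogenase step
    }

# Stearic acid (18 carbons): 8 cycles, 9 acetyl-CoA, 8 FADH2, 8 NADH.
print(beta_oxidation_products(18))
```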
Clinical significance
Heart muscle primarily metabolizes fat for energy and Acyl-CoA metabolism has been identified as a critical molecule in early stage heart muscle pump failure.
Cellular acyl-CoA content correlates with insulin resistance, suggesting that it can mediate lipotoxicity in non-adipose tissues. Acyl-CoA:diacylglycerol acyltransferase (DGAT) plays an important role in energy metabolism as a key enzyme in triglyceride biosynthesis. The synthetic role of DGAT is clearest in tissues such as adipose tissue, the liver and the intestine, sites where endogenous levels of its activity and of triglyceride synthesis are high. Also, any changes in its activity levels might cause changes in systemic insulin sensitivity and energy homeostasis.
A rare disease called multiple acyl-CoA dehydrogenase deficiency (MADD) is a fatty acid metabolism disorder. Acyl-CoA is important because the enzyme acyl-CoA synthetase makes acyl-CoA from free fatty acids, and this activates the fatty acid to be metabolized. The compromised fatty acid oxidation in MADD leads to many different symptoms, including severe symptoms such as cardiomyopathy and liver disease and mild symptoms such as episodic metabolic decompensation, muscle weakness and respiratory failure. MADD is a genetic disorder, caused by a mutation in the ETFA, ETFB, and ETFDH genes. MADD is known as an "autosomal recessive disorder" because for one to get this disorder, one must receive the recessive gene from both parents.
See also
Acetyl-CoA
Beta oxidation
Coenzyme A
Acyl CoA dehydrogenase
Fatty acid metabolism
Fatty acyl-CoA esters
References
External links
Metabolism
Thioesters of coenzyme A | Acyl-CoA | Chemistry,Biology | 1,206 |
7,958,839 | https://en.wikipedia.org/wiki/Nor- | In chemical nomenclature, nor- is a prefix to name a structural analog that can be derived from a parent compound by the removal of one carbon atom along with the accompanying hydrogen atoms. The nor-compound can be derived by removal of a CH3, CH2, or CH group, or of a C atom. The "nor-" prefix also includes the elimination of a methylene bridge in a cyclic parent compound, followed by ring contraction. (The prefix "homo-", which indicates the next higher member in a homologous series, is usually limited to noncyclic carbons.) The terms desmethyl- or demethyl- are synonyms of "nor-".
"Nor" is an abbreviation of normal. Originally, the term was used to denote the completely demethylated form of the parent compound.
Later, the meaning was restricted to the removal of one group. Nor is written directly in front of the stem name, without a hyphen in between, unless there is another prefix after nor (for example α-). If multiple groups are eliminated, the prefix dinor, trinor, tetranor, et cetera is used. The prefix is preceded by the position numbers (locants) of the carbon atoms that disappear (for example 2,3-dinor). The original numbering of the parent compound is retained. According to IUPAC nomenclature, this prefix is not written in italics and, unlike nor alone, for dinor and higher a hyphen is used after the comma-separated numbers (for example, 2,3-dinor-6-keto Prostaglandin F1α is produced by beta oxidation of the parent compound 6-keto Prostaglandin F1α). Here, though, carbons 1 and 2 are lost by oxidation. The new carbon 1 has now become a COOH, similar to the parent compound, so it looks as if just carbons 2 and 3 have been removed from the parent compound. "Dinor" does not have to indicate reduction at adjacent carbons, e.g. 5-Acetyl-4,18-dinor-retinoic acid, where 4 refers to a ring carbon and 18 refers to a methyl group on the 5th carbon of the ring.
The alternative use of "nor" in naming the unbranched form of a compound within a series of isomers (also referred to as "normal") is obsolete and not allowed in IUPAC names.
History
Perhaps the earliest known use of the prefix "nor" is that by A. Matthiessen and G.C. Foster in 1867 in a publication about the reaction between a strong acid and opianic acid. Opianic acid (C10H10O5) is a compound with two methoxy groups; in the publication in question the authors called it "dimethyl nor-opianic acid". After reaction with a strong acid a compound was obtained with only one methyl group (C9H8O5). This partially demethylated opianic acid they called "methyl normal opianic acid". The completely demethylated compound (C8H6O5) was denoted by the term "normal opianic acid", abbreviated as "nor-opianic acid".
Similarly Matthiessen and Foster called narcotine, which has three methoxy groups, "trimethyl nor-narcotine". The singly demethylated narcotine was called "dimethyl nor-narcotine", the doubly demethylated narcotine "methyl nor-narcotine" and the completely demethylated form "normal narcotine" or "nor-narcotine".
"Since that time the meaning of the prefix has been generalized to denote the replacement of one or more methyl groups by H, or the disappearance of CH2 from a carbon chain".
At present, the meaning is restricted to denote the removal of only one group from the parent structure, rather than the completely demethylated form of the parent compound.
In literature, "nor" is sometimes called the "next lower homologue", although in this context "homologue" is an inexact term. "Nor" only refers to the removal of one carbon atom with the accompanying hydrogen, not the removal of other units. "Nor" compares two related compounds; it does not describe the relation to a homologous series.
False etymology
It has been suggested that "nor" is an acronym of German "N ohne Radikal" ("nitrogen without radical"). At first, the British pharmacologist John H. Gaddum followed this theory, but in response to a review by A.M. Woolman, Gaddum retracted his support for this etymology. Woolman believed that "N ohne Radikal" was a German mnemonic and likely a backronym, rather than the real meaning of the prefix "nor". This can be argued from the fact "that the prefix nor is used for many compounds which contain no nitrogen at all".
Obsolete use of the term
Originally, "nor" had an ambiguous meaning, as the term "normal" could also refer to the unbranched form in a series of isomers, for example as with alkanes, alkanols and some amino acids.
Names of unbranched alkanes and alkanols, like "normal butane" and "normal propyl alcohol", which are now obsolete, gave rise to the prefix n-, however, not to "nor".
Other "normal" compounds got the prefix "nor". The IUPAC encourages that older trivial names, like norleucine and norvaline, not be used; the use of the prefix for isomeric compounds was already discouraged in 1955 or earlier.
Examples
See also
Norsteroid
References
Chemistry prefixes | Nor- | Chemistry | 1,222 |
7,525,718 | https://en.wikipedia.org/wiki/Destrin | Destrin or DSTN (also known as actin depolymerizing factor or ADF) is a protein which in humans is encoded by the DSTN gene. Destrin is a component protein in microfilaments.
The product of this gene belongs to the actin-binding proteins ADF (Actin-Depolymerizing Factor)/cofilin family. This family of proteins is responsible for enhancing the turnover rate of actin in vivo. This gene encodes the actin depolymerizing protein that severs actin filaments (F-actin) and binds to actin monomers (G-actin). Two transcript variants encoding distinct isoforms have been identified for this gene.
Structure
The tertiary structure of destrin was determined by the use of triple-resonance multidimensional nuclear magnetic resonance, or NMR for short. The secondary and tertiary structures of destrin are similar to the gelsolin family which is another actin-regulating protein family.
There are three ordered layers to destrin which is a globular protein. There is a central β sheet that is composed of one parallel strand and three antiparallel strands. This β sheet is between a long α helix along with a shorter one and two shorter helices on the opposite side. The four helices are parallel to the β strands.
Function
In a variety of eukaryotes, destrin regulates actin in the cytoskeleton. Destrin binds actin and is thought to connect it as gelsolin segment-1 does. Furthermore, the binding of actin by destrin and cofilin is regulated negatively by phosphorylation. Destrin can also sever actin filaments.
References
External links
Ramachandran Plot for destrin:
Protein families | Destrin | Biology | 385 |
21,636,379 | https://en.wikipedia.org/wiki/Bull%20kelp | Bull kelp is a common name for the brown alga Nereocystis luetkeana which is a true kelp in the family Laminariaceae.
Species in the genus Durvillaea are also sometimes called "bull kelp", but this is just a shortening of the common name southern bull kelp. Durvillaea is a genus in the order Fucales and, though superficially similar in appearance, is not a true kelp (all of which are in the order Laminariales).
Laminariaceae
Common names of organisms | Bull kelp | Biology | 121 |
26,809,738 | https://en.wikipedia.org/wiki/C23H28O6 | The molecular formula C23H28O6 (molar mass: 400.46 g/mol, exact mass: 400.1886 u) may refer to:
Enprostil
Molecular formulas | C23H28O6 | Physics,Chemistry | 42 |
68,006,061 | https://en.wikipedia.org/wiki/Cybersecurity%20Capacity%20Maturity%20Model%20for%20Nations | Cybersecurity Capacity Maturity Model for Nations (CMM) is a framework developed to review the cybersecurity capacity maturity of a country across five dimensions. The five dimensions cover the capacity areas required by a country to improve its cybersecurity posture. It was designed by the Global Cyber Security Capacity Centre (GCSCC) of the University of Oxford and is a first-of-its-kind framework for countries to review their cybersecurity capacity, benchmark it and receive recommendations for improvement. Each dimension is divided into factors and the factors broken down into aspects. The review process includes rating each factor or aspect along five stages that represent how well a country is doing in respect of that factor or aspect. The recommendations include guidance on areas of cybersecurity that need improvement and thus will require more focus and investment. As of June 2021, the framework had been adopted and implemented in over 80 countries worldwide. Its deployment has been catalyzed by the involvement of international organizations such as the Organization of American States (OAS), the World Bank (WB), the International Telecommunication Union (ITU), the Commonwealth Telecommunications Organisation (CTO) and the Global Forum on Cyber Expertise (GFCE).
Overview
The World Summit on Information Society identified capacity building in the realm of cybersecurity as one of the pillars necessary to reap the benefits of processes and services digitalization, especially in developing nations. The International Telecommunication Union reported that developing nations lack the necessary cybersecurity capacity to manage ICT risk and respond to cyberthreats. Because cyberattacks and vulnerabilities in one nation can affect other parts of the world, some maturity models were developed to assess the cybersecurity capacity of nations and benchmark the capacity level. One of such models is the CMM.
The CMM was developed in 2014, through a collaborative effort between the GCSCC and over 200 experts from academia, international and regional organizations and the private sector. The CMM assesses the capacity of a country in five identified areas called dimensions, with the objective of improving the coverage, measurement and effectiveness of cybersecurity capacity building within five levels of progression. Benchmarking of a country's cybersecurity capacity involves reviewing its initiatives and activities against the entire CMM and across all dimensions. According to the report of a regional CMM assessment of Latin America and the Caribbean, a CMM assessment aims to identify cybersecurity gaps and discover actions that work.
Since 2014, the CMM has undergone revisions and it is intended to be a living model that remains relevant to every aspect of cybersecurity needs at the national level.
Structure
The framework consists of dimensions, factors, aspects, indicators and stages.
Dimensions:
The dimensions represent the scope of a country's cybersecurity capacity that will be assessed by the CMM, and each dimension is broken down into factors. The dimensions are not stand-alone; rather, they are related to one another because a nation's performance in one dimension of capacity may require input from another dimension.
The five dimensions from the 2021 version are:
Developing cybersecurity policy and strategy - This dimension examines how a nation fares in terms of availability and implementation of Cybersecurity policies and strategy.
Encouraging responsible cybersecurity culture within society - This dimension examines how familiar the citizens of a nation are with digital risks and whether a viable channel exists for reporting cybercriminal activities.
Building cybersecurity knowledge and capabilities - This dimension explores structures in place for cybersecurity awareness and education within the nation.
Creating effective legal and regulatory frameworks - Examines the ability of a country to develop, ratify and enforce cybersecurity and privacy-related legislation.
Controlling risks through standards and technologies - This dimension examines the common use of cybersecurity standards and the presence of structures for the development of such technologies.
Factors:
The factors are the important components of a country's capacity whose maturity level is measured; there are 23 factors in the latest version, each having one or more aspects.
Aspects:
These are smaller subdivisions of factors, which help with understanding each factor and assist in evidence gathering and measurement.
Indicators:
Each indicator defines the actions that suggest that a nation has attained a specific stage of maturity. The level of maturity assigned to an aspect depends on the ability of a nation to fulfill the steps and actions listed as its indicators. Evidence must be provided before a particular stage can be attained: either evidence is available or it is not, and to move to a higher stage, all of the indicators within a particular stage need to have been fulfilled (a sketch of this rule follows the list of stages below).
Stage:
This represents how mature a nation is on each factor or aspect. There are 5 stages of maturity: start-up, formative, established, strategic and dynamic. For a nation to meet a particular maturity stage, it has to fulfill the corresponding indicators.
Start-up - At this stage, a nation has no presentable evidence to show existence of cybersecurity initiatives.
Formative - Evidence is available to demonstrate initiatives on some of the aspects; however, these efforts may be in an early state or ad hoc.
Established - There is evidence to show that the aspect is defined, functional and working, but adequate resource allocation is lacking.
Strategic - Aspect has been prioritized based on national need.
Dynamic - A working, adaptable cybersecurity strategy is available, which is evidenced by global leadership on cybersecurity issues, agility of decision-making, and resource allocation.
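A minimal sketch of how a stage might be derived from fulfilled indicators, based on the rule described above (the data layout, indicator strings and function name are illustrative assumptions, not part of the CMM specification):

```python
STAGES = ["start-up", "formative", "established", "strategic", "dynamic"]

def attained_stage(fulfilled: dict[str, set[str]], required: dict[str, set[str]]) -> str:
    """Return the highest stage for which this and every lower stage
    have all of their required indicators fulfilled."""
    attained = "start-up"                      # the baseline stage needs no evidence
    for stage in STAGES[1:]:
        if required[stage] <= fulfilled.get(stage, set()):
            attained = stage                   # all indicators of this stage are met
        else:
            break                              # a gap blocks all higher stages
    return attained

# Illustrative indicators for one aspect of a dimension.
required = {
    "formative":   {"ad-hoc initiatives documented"},
    "established": {"aspect defined", "aspect functional"},
    "strategic":   {"prioritized by national need"},
    "dynamic":     {"adaptive strategy in place"},
}
fulfilled = {
    "formative":   {"ad-hoc initiatives documented"},
    "established": {"aspect defined"},          # one indicator missing
}
print(attained_stage(fulfilled, required))      # -> "formative"
```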
Development
The first version of the framework was released in 2014. Based on pilot assessments conducted in six countries, improvements were made on the model and an updated version was published in 2017. Based on lessons learnt over the years from CMM deployments and consultations from GCSCC Expert Advisory Panel, strategic, regional and implementation partners of the GCSCC, and other experts from academia, international and regional organisations, governments, the private sector, and civil society, an updated version was released in 2021.
The dimensions, factors and aspects have changed over time between CMM versions.
The 2014 version has 5 dimensions and 21 factors. The 2017 version has 5 dimensions with 24 factors. The 2021 version has 5 dimensions and 23 factors.
Table 1 lists the dimensions across the three versions.
Table 2 lists the factors for each version.
The Review Process
CMM review process has 3 stages.
Stage 1: Desk research and country-partner identification.
The first step is selection of a country. A CMM review can be requested by a country or a country can be selected for assessment by an international or regional organization.
Once a nation is selected for assessment, a relationship is established with the host country and the necessary stakeholders are identified from academia, civil society, government ministries and departments, international organizations and the private sector.
Stage 2: The Review
The actual review with the stakeholders is a three-day consultation process; based on the five dimensions, multiple teams are created across stakeholders.
Open discussions or a focus-group method is applied to ask and answer questions. Questions and answers can also be collected using an online tool. Inability to provide evidence for all indicators under an aspect will result in a lower maturity level for that aspect.
Remote follow-up sessions or email communication may be used for further data collection.
Stage 3: Review Report
A report is presented to the country's government and it is at the discretion of that country to make it publicly available or not.
The recommendation
The output of the CMM assessment is a report which details the gaps identified in each aspect and the present maturity level of each indicator. The assessment report is the property of the assessed nation, which chooses whether or not to make it public. Depending on a nation's needs, it recommends areas that should be given priority in terms of resource allocation.
The report includes a sunburst representation of the cybersecurity capacity of the nation, the reasons for placing each factor or aspect in a particular stage, and recommendations on what can be done to move up the maturity stages.
Sample results from some of the reviews are available on GCSCC's website.
Nations with CMM Assessment
The GCSCC website has the list of nations that have been assessed, which are listed below.
Albania
Antigua and Barbuda
Argentina
Armenia
Bahamas
Bangladesh
Barbados
Belize
Benin
Bhutan
Bolivia
Bosnia and Herzegovina
Botswana
Brazil
Burkina Faso
Cabo Verde
Cameroon
Chile
Colombia
Cook Islands
Costa Rica
Cyprus
Dominica
Dominican Republic
Ecuador
El Salvador
Eswatini
Fiji
Gambia
Georgia
Ghana
Grenada
Guatemala
Guyana
Haiti
Honduras
Iceland
Indonesia
Ivory Coast
Jamaica
Kiribati
Kosovo
Kyrgyzstan
Lesotho
Liberia
Lithuania
Madagascar
Malawi
Mauritius
Mexico
Micronesia
Montenegro
Morocco
Mozambique
Myanmar
Namibia
Nicaragua
Niger
Nigeria
North Macedonia
Panama
Papua New Guinea
Paraguay
Peru
Rwanda
Saint Kitts and Nevis
Saint Lucia
Saint Vincent and the Grenadines
Samoa
Senegal
Serbia
Sierra Leone
Somalia
Sri Lanka
Suriname
Switzerland
Tanzania
Thailand
Tonga
Trinidad and Tobago
Tunisia
Tuvalu
Uganda
United Kingdom
Uruguay
Vanuatu
Venezuela
Zambia
References
Cyberspace
Computer security exploits | Cybersecurity Capacity Maturity Model for Nations | Technology | 1,790 |
58,822,898 | https://en.wikipedia.org/wiki/WZ%20Andromedae | WZ Andromedae (abbreviated to WZ And) is an eclipsing binary star in the constellation Andromeda. Its maximum apparent visual magnitude is 11.6, but drops down to 12.00 during the main eclipse which occurs roughly every 16.7 hours.
Variability
This binary star was found to be variable by Henrietta Leavitt, and shows the usual two eclipses, a main one and a secondary one with a less pronounced drop in magnitude. The period of 16.7 hours, however, was found to vary in time without any consolidated trend.
System
In an eclipsing binary system, the 16.7 hour period is also the orbital period. The two stars are of spectral type F5 and G3, and they have almost the same mass. They could be so close that mass transfer is occurring in the system, changing the orbital period in time. An alternative explanation could be the presence in the system of two low mass companions with orbital periods of 50 and 70 years, respectively. Their contribution to the luminosity of the system, however, would be negligible (less than 1%), but they would have a large angular separation (45 and 72 mas) from the two main stars.
Notes
References
Beta Lyrae variables
Andromeda (constellation)
Andromedae, WZ
J01014364+3805464 | WZ Andromedae | Astronomy | 280 |
20,082,214 | https://en.wikipedia.org/wiki/Obsessive%E2%80%93compulsive%20disorder | Obsessive–compulsive disorder (OCD) is a mental and behavioral disorder in which an individual has intrusive thoughts (an obsession) and feels the need to perform certain routines (compulsions) repeatedly to relieve the distress caused by the obsession, to the extent where it impairs general function.
Obsessions are persistent unwanted thoughts, mental images or urges that generate feelings of anxiety, disgust or discomfort. Some common obsessions include fear of contamination, obsession with symmetry, the fear of acting blasphemously, the sufferer's sexual orientation and the fear of possibly harming others or themselves. Compulsions are repeated actions or routines that occur in response to obsessions to achieve a relief from anxiety. Common compulsions include excessive hand washing, cleaning, counting, ordering, repeating, avoiding triggers, hoarding, neutralizing, seeking assurance, praying and checking things. People with OCD may only perform mental compulsions such as needing to know or remember things. While this is sometimes referred to as primarily obsessional obsessive–compulsive disorder (Pure O), it is also considered a misnomer due to associated mental compulsions and reassurance seeking behaviors that are consistent with OCD.
Compulsions occur often and typically take up at least one hour per day, impairing one's quality of life. Compulsions cause relief in the moment, but cause obsessions to grow over time due to the repeated reward-seeking behavior of completing the ritual for relief. Many adults with OCD are aware that their compulsions do not make sense, but they still perform them to relieve the distress caused by obsessions. For this reason, thoughts and behaviors in OCD are usually considered egodystonic (inconsistent with one's ideal self-image). In contrast, thoughts and behaviors in obsessive–compulsive personality disorder (OCPD) are usually considered egosyntonic (consistent with one's ideal self-image), helping differentiate between OCPD and OCD.
Although the exact cause of OCD is unknown, several regions of the brain have been implicated in its neuroanatomical model including the anterior cingulate cortex, orbitofrontal cortex, amygdala and BNST. The presence of a genetic component is evidenced by the increased likelihood for both identical twins to be affected than both fraternal twins. Risk factors include a history of child abuse or other stress-inducing events such as during the postpartum period or after streptococcal infections. Diagnosis is based on clinical presentation and requires ruling out other drug-related or medical causes; rating scales such as the Yale–Brown Obsessive–Compulsive Scale (Y-BOCS) assess severity. Other disorders with similar symptoms include generalized anxiety disorder, major depressive disorder, eating disorders, tic disorders, body-focused repetitive behavior and obsessive–compulsive personality disorder. Personality disorders are a common comorbidity, with schizotypal and OCPD having poor treatment response. The condition is also associated with a general increase in suicidality. The phrase obsessive–compulsive is sometimes used in an informal manner unrelated to OCD to describe someone as excessively meticulous, perfectionistic, absorbed or otherwise fixated. However, the actual disorder can vary in presentation and individuals with OCD may not be concerned with cleanliness or symmetry.
OCD is chronic and long-lasting with periods of severe symptoms followed by periods of improvement. Treatment can improve ability to function and quality of life, and is usually reflected by improved Y-BOCS scores. Treatment for OCD may involve psychotherapy, pharmacotherapy such as antidepressants or surgical procedures such as deep brain stimulation or, in extreme cases, psychosurgery. Psychotherapies derived from cognitive behavioral therapy (CBT) models, such as exposure and response prevention, acceptance and commitment therapy, and inference based-therapy, are more effective than non-CBT interventions. Selective serotonin reuptake inhibitors (SSRIs) are more effective when used in excess of the recommended depression dosage; however, higher doses can increase side effect intensity. Commonly used SSRIs include sertraline, fluoxetine, fluvoxamine, paroxetine, citalopram and escitalopram. Some patients fail to improve after taking the maximum tolerated dose of multiple SSRIs for at least two months; these cases qualify as treatment-resistant and can require second-line treatment such as clomipramine or atypical antipsychotic augmentation. While SSRIs continue to be first-line, recent data for treatment-resistant OCD supports adjunctive use of neuroleptic medications, deep brain stimulation and neurosurgical ablation. There is growing evidence to support the use of deep brain stimulation and repetitive transcranial magnetic stimulation for treatment-resistant OCD.
Obsessive–compulsive disorder affects about 2.3% of people at some point in their lives, while rates during any given year are about 1.2%. More than three million Americans suffer from OCD. According to Mercy, approximately 1 in 40 U.S. adults and 1 in 100 U.S. children have OCD. Although possible at times with triggers such as pregnancy, onset rarely occurs after age 35 and about 50% of patients experience detrimental effects to daily life before age 20. While OCD occurs worldwide, a recent meta-analysis showed that women are 1.6 times more likely to experience OCD. Based on data from 34 studies, the worldwide prevalence rate is 1.5% in women and 1% in men.
Signs and symptoms
OCD can present with a wide variety of symptoms. Certain groups of symptoms usually occur together as dimensions or clusters, which may reflect an underlying process. The standard assessment tool for OCD, the Yale–Brown Obsessive Compulsive Scale (Y-BOCS), has 13 predefined categories of symptoms. These symptoms fit into three to five groupings. A meta-analytic review of symptom structures found a four-factor grouping structure to be most reliable: symmetry factor, forbidden thoughts factor, cleaning factor and hoarding factor. The symmetry factor correlates highly with obsessions related to ordering, counting and symmetry, as well as repeating compulsions. The forbidden thoughts factor correlates highly with intrusive thoughts of a violent, religious or sexual nature. The cleaning factor correlates highly with obsessions about contamination and compulsions related to cleaning. The hoarding factor only involves hoarding-related obsessions and compulsions, and was identified as being distinct from other symptom groupings.
When looking into the onset of OCD, one study suggests that there are differences in the age of onset between males and females, with the average age of onset of OCD being 9.6 for male children and 11.0 for female children. Children with OCD often have other mental disorders, such as ADHD, depression, anxiety and disruptive behavior disorder. Additionally, children with OCD are more likely to struggle in school and experience difficulties in social situations (Lack 2012). When looking at both adults and children, a study found the average ages of onset to be 21 and 24 for males and females respectively. While some studies have shown that OCD with earlier onset is associated with greater severity, other studies have not been able to validate this finding. Looking at women specifically, a different study suggested that 62% of participants found that their symptoms worsened at a premenstrual age. Across the board, all demographics and studies showed a mean age of onset of less than 25.
Some OCD subtypes have been associated with improvement in performance on certain tasks, such as pattern recognition (washing subtype) and spatial working memory (obsessive thought subtype). Subgroups have also been distinguished by neuroimaging findings and treatment response, though neuroimaging studies have not been comprehensive enough to draw conclusions. Subtype-dependent treatment response has been studied and the hoarding subtype has consistently been least responsive to treatment.
While OCD is considered a homogeneous disorder from a neuropsychological perspective, many of the symptoms may be the result of comorbid disorders. For example, adults with OCD have exhibited more symptoms of attention deficit hyperactivity disorder (ADHD) and autism spectrum disorder (ASD) than adults without OCD.
In regards to the cause of onset, researchers asked participants in one study what they felt was responsible for triggering the initial onset of their illness. 29% of patients answered that there was an environmental factor in their life that did so. Specifically, the majority of participants who answered with that noted their environmental factor to be related to an increased responsibility.
Obsessions
Obsessions are stress-inducing thoughts that recur and persist, despite efforts to ignore or confront them. People with OCD frequently perform tasks, or compulsions, to seek relief from obsession-related anxiety. Within and among individuals, initial obsessions vary in clarity and vividness. A relatively vague obsession could involve a general sense of disarray or tension, accompanied by a belief that life cannot proceed as normal while the imbalance remains. A more intense obsession could be a preoccupation with the thought or image of a close family member or friend dying, or intrusive thoughts related to relationship rightness. Other obsessions concern the possibility that someone or something other than oneself—such as God, the devil or disease—will harm either the patient or the people or things the patient cares about. Others with OCD may experience the sensation of invisible protrusions emanating from their bodies or feel that inanimate objects are ensouled. Another common obsession is scrupulosity, the pathological guilt/anxiety about moral or religious issues. In scrupulosity, a person's obsessions focus on moral or religious fears, such as the fear of being an evil person or the fear of divine retribution for sin. Mysophobia, a pathological fear of contamination and germs, is another common obsession theme.
Some people with OCD experience sexual obsessions that may involve intrusive thoughts or images of "kissing, touching, fondling, oral sex, anal sex, intercourse, incest and rape" with "strangers, acquaintances, parents, children, family members, friends, coworkers, animals and religious figures" and can include heterosexual or homosexual contact with people of any age. Similar to other intrusive thoughts or images, some disquieting sexual thoughts are normal at times, but people with OCD may attach extraordinary significance to such thoughts. For example, obsessive fears about sexual orientation can appear to the affected individual, and even to those around them, as a crisis of sexual identity. Furthermore, the doubt that accompanies OCD leads to uncertainty regarding whether one might act on the troubling thoughts, resulting in self-criticism or self-loathing.
Most people with OCD understand that their thoughts do not correspond with reality; however, they feel that they must act as though these ideas are correct or realistic. For example, someone who engages in compulsive hoarding might be inclined to treat inorganic matter as if it had the sentience or rights of living organisms, despite accepting that such behavior is irrational on an intellectual level. There is debate as to whether hoarding should be considered an independent syndrome from OCD.
Compulsions
Some people with OCD perform compulsive rituals because they inexplicably feel that they must do so, while others act compulsively to mitigate the anxiety that stems from obsessive thoughts. The affected individual might feel that these actions will either prevent a dreaded event from occurring or push the event from their thoughts. In any case, their reasoning is so idiosyncratic or distorted that it results in significant distress, either personally or for those around the affected individual. Excessive skin picking, hair pulling, nail biting and other body-focused repetitive behavior disorders are all on the obsessive–compulsive spectrum. Some individuals with OCD are aware that their behaviors are not rational, but they feel compelled to follow through with them to fend off feelings of panic or dread. Furthermore, compulsions often stem from memory distrust, a symptom of OCD characterized by insecurity in one's skills in perception, attention and memory, even in cases where there is no clear evidence of a deficit.
Common compulsions may include hand washing, cleaning, checking things (such as locks on doors), repeating actions (such as repeatedly turning on and off switches), ordering items in a certain way and requesting reassurance. Although some individuals perform actions repeatedly, they do not necessarily perform these actions compulsively; for example, morning or nighttime routines and religious practices are not usually compulsions. Whether behaviors qualify as compulsions or mere habit depends on the context in which they are performed. For instance, arranging and ordering books for eight hours a day would be expected of someone who works in a library, but this routine would seem abnormal in other situations. In other words, habits tend to bring efficiency to one's life, while compulsions tend to disrupt it. Furthermore, compulsions are different from tics (such as touching, tapping, rubbing or blinking) and stereotyped movements (such as head banging, body rocking or self-biting), which are usually not as complex and not precipitated by obsessions. It can sometimes be difficult to tell the difference between compulsions and complex tics, and about 10–40% of people with OCD also have a lifetime tic disorder.
People with OCD rely on compulsions as an escape from their obsessive thoughts; however, they are aware that relief is only temporary and that intrusive thoughts will return. Some affected individuals use compulsions to avoid situations that may trigger obsessions. Compulsions may be actions directly related to the obsession, such as someone obsessed with contamination compulsively washing their hands, but they can be unrelated as well. In addition to experiencing the anxiety and fear that typically accompany OCD, affected individuals may spend hours performing compulsions every day. In such situations, it can become difficult for the person to fulfill their work, familial or social roles. These behaviors can also cause adverse physical symptoms; for example, people who obsessively wash their hands with antibacterial soap and hot water can make their skin red and raw with dermatitis.
Individuals with OCD often use rationalizations to explain their behavior; however, these rationalizations do not apply to the behavioral pattern, but to each individual occurrence. For example, someone compulsively checking the front door may argue that the time and stress associated with one check is less than the time and stress associated with being robbed, and checking is consequently the better option. This reasoning often occurs in a cyclical manner and can continue for as long as the affected person needs it to in order to feel safe.
In cognitive behavioral therapy (CBT), OCD patients are asked to overcome intrusive thoughts by not indulging in any compulsions. They are taught that rituals keep OCD strong, while not performing them causes OCD to become weaker. This position is supported by the pattern of memory distrust; the more often compulsions are repeated, the more weakened memory trust becomes and this cycle continues as memory distrust increases compulsion frequency. For body-focused repetitive behaviors (BFRB) such as trichotillomania (hair pulling), skin picking and onychophagia (nail biting), behavioral interventions such as habit reversal training and decoupling are recommended for the treatment of compulsive behaviors.
OCD sometimes manifests without overt compulsions, which may be termed "primarily obsessional OCD." OCD without overt compulsions could, by one estimate, characterize as many as 50–60% of OCD cases.
Insight and overvalued ideation
The Diagnostic and Statistical Manual of Mental Disorders (DSM-5), identifies a continuum for the level of insight in OCD, ranging from good insight (the least severe) to no insight (the most severe). Good or fair insight is characterized by the acknowledgment that obsessive–compulsive beliefs are not or may not be true, while poor insight, in the middle of the continuum, is characterized by the belief that obsessive–compulsive beliefs are probably true. The absence of insight altogether, in which the individual is completely convinced that their beliefs are true, is also identified as a delusional thought pattern and occurs in about 4% of people with OCD. When cases of OCD with no insight become severe, affected individuals have an unshakable belief in the reality of their delusions, which can make their cases difficult to differentiate from psychotic disorders.
Some people with OCD exhibit what is known as overvalued ideas, ideas that are abnormal for the affected individual's culture and more treatment-resistant than most negative thoughts and obsessions. After some discussion, it is possible to convince the individual that their fears may be unfounded. It may be more difficult to practice exposure and response prevention therapy (ERP) on such people, as they may be unwilling to cooperate, at least initially. Similar to how insight is identified on a continuum, obsessive-compulsive beliefs are characterized on a spectrum, ranging from obsessive doubt to delusional conviction. In the United States, overvalued ideation (OVI) is considered most akin to poor insight—especially when considering belief strength as one of an idea's key identifiers. Furthermore, severe and frequent overvalued ideas are considered similar to idealized values, which are so rigidly held by, and so important to affected individuals, that they end up becoming a defining identity. In adolescent OCD patients, OVI is considered a severe symptom.
Historically, OVI has been thought to be linked to poorer treatment outcome in patients with OCD, but it is currently considered a poor indicator of prognosis. The Overvalued Ideas Scale (OVIS) has been developed as a reliable quantitative method of measuring levels of OVI in patients with OCD and research has suggested that overvalued ideas are more stable for those with more extreme OVIS scores.
Cognitive performance
Though OCD was once believed to be associated with above-average intelligence, this does not appear to necessarily be the case. A 2013 review reported that people with OCD may sometimes have mild but wide-ranging cognitive deficits, most significantly those affecting spatial memory and, to a lesser extent, verbal memory, fluency, executive function and processing speed, while auditory attention was not significantly affected. People with OCD show impairment in formulating an organizational strategy for coding information, set-shifting, and motor and cognitive inhibition.
Specific subtypes of symptom dimensions in OCD have been associated with specific cognitive deficits. For example, the results of one meta-analysis comparing washing and checking symptoms reported that washers outperformed checkers on eight out of ten cognitive tests. The symptom dimension of contamination and cleaning may be associated with higher scores on tests of inhibition and verbal memory.
Pediatric OCD
Approximately 1–2% of children are affected by OCD. The clinical presentation of OCD is very similar in children and adults, and it is considered a highly familial disorder, with a phenotypic heritability of around 50%. Obsessive–compulsive disorder symptoms tend to develop more frequently in children 10–14 years of age, with males displaying symptoms at an earlier age and at a more severe level than females. In children, symptoms can be grouped into at least four types, including sporadic and tic-related OCD.
The Children's Yale–Brown Obsessive–Compulsive Scale (CY-BOCS) is the gold standard measure for assessment of pediatric OCD. It follows the Y-BOCS format, but with a Symptom Checklist that is adapted for developmental appropriateness. Insight, avoidance, indecisiveness, responsibility, pervasive slowness and doubting are not included in a rating of overall severity. The CY-BOCS has demonstrated good convergent validity with clinician-rated OCD severity and good to fair discriminant validity from measures of closely related anxiety, depression and tic severity. The CY-BOCS Total Severity score is an important monitoring tool as it is responsive to pharmacotherapy and psychotherapy. Positive treatment response is characterized by 25% reduction in CY-BOCS total score and diagnostic remission is associated with a 45%-50% reduction in Total Severity score (or a score <15).
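The response and remission thresholds quoted above amount to a simple calculation on two CY-BOCS Total Severity scores. The sketch below is a minimal, illustrative example and is not part of the CY-BOCS itself; the function name is invented, and the cut-offs simply restate the figures given in this section (a 25% reduction for treatment response, and roughly a 45% reduction or a total score below 15 for remission).

```python
def cybocs_outcome(baseline: int, followup: int) -> str:
    """Classify treatment outcome from two CY-BOCS Total Severity scores.

    Illustrative only. Thresholds restate the figures quoted above:
    response  = at least a 25% reduction from baseline
    remission = roughly a 45-50% reduction, or a total score below 15
    """
    if baseline <= 0:
        raise ValueError("baseline score must be positive")
    reduction = (baseline - followup) / baseline  # fractional improvement
    if reduction >= 0.45 or followup < 15:
        return "remission"
    if reduction >= 0.25:
        return "response"
    return "no response"


# Example: a drop from 28 to 14 is a 50% reduction with a score below 15,
# so it would count as remission under these illustrative thresholds.
print(cybocs_outcome(28, 14))
```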
CBT is the first line treatment for mild to moderate cases of OCD in children, while medication plus CBT is recommended for moderate to severe cases. Serotonin reuptake inhibitors (SRIs) are first-line medications for OCD in children with established AACAP guidelines for dosing.
Associated conditions
People with OCD may be diagnosed with other conditions as well, such as obsessive–compulsive personality disorder, major depressive disorder, bipolar disorder, generalized anxiety disorder, anorexia nervosa, social anxiety disorder, bulimia nervosa, Tourette syndrome, transformation obsession, ASD, ADHD, dermatillomania, body dysmorphic disorder and trichotillomania. More than 50% of people with OCD experience suicidal tendencies and 15% have attempted suicide. Depression, anxiety and prior suicide attempts increase the risk of future suicide attempts.
Between 18% and 34% of females currently experiencing OCD have been found to score positively on an inventory measuring disordered eating. Another study found that 7% were likely to have an eating disorder, while yet another found that fewer than 5% of males have both OCD and an eating disorder.
Individuals with OCD have also been found to be affected by delayed sleep phase disorder at a substantially higher rate than the general public. Moreover, severe OCD symptoms are consistently associated with greater sleep disturbance. Reduced total sleep time and sleep efficiency have been observed in people with OCD, with delayed sleep onset and offset.
Some research has demonstrated a link between drug addiction and OCD. For example, there is a higher risk of drug addiction among those with any anxiety disorder, likely as a way of coping with the heightened levels of anxiety. However, drug addiction among people with OCD may be a compulsive behavior. Depression is also extremely prevalent among people with OCD. One explanation for the high depression rate among OCD populations was posited by Mineka, Watson and Clark (1998), who explained that people with OCD, or any other anxiety disorder, may feel "out of control".
Someone exhibiting OCD signs does not necessarily have OCD. Behaviors that present as obsessive–compulsive can also be found in a number of other conditions, including obsessive–compulsive personality disorder (OCPD), autism spectrum disorder (ASD) or disorders in which perseveration is a possible feature (ADHD, PTSD, bodily disorders or stereotyped behaviors). Some cases of OCD present symptoms typically associated with Tourette syndrome, such as compulsions that may appear to resemble motor tics; this has been termed tic-related OCD or Tourettic OCD.
OCD frequently occurs comorbidly with both bipolar disorder and major depressive disorder. Between 60 and 80% of those with OCD experience a major depressive episode in their lifetime. Comorbidity rates have been reported at between 19 and 90%, as a result of methodological differences. Between 9% and 35% of those with bipolar disorder also have OCD, compared to 1–2% in the general population. About 50% of those with OCD experience cyclothymic traits or hypomanic episodes. OCD is also associated with anxiety disorders. Lifetime comorbidity for OCD has been reported at 22% for specific phobia, 18% for social anxiety disorder, 12% for panic disorder and 30% for generalized anxiety disorder. The comorbidity rate for OCD and ADHD has been reported to be as high as 51%.
Causes
The cause of OCD is unknown. Both environmental and genetic factors are believed to play a role. Risk factors include a history of adverse childhood experiences or other stress-inducing events.
Drug-induced OCD
Some medications, toxin exposures and drugs, such as methamphetamine or cocaine, can induce obsessive–compulsive symptoms in people without a history of OCD. Atypical antipsychotics such as olanzapine and clozapine can induce OCD in some people, particularly individuals with schizophrenia.
The diagnostic criteria include:
General OCD symptoms (obsessions, compulsions, skin picking, hair pulling, etc.) that developed soon after exposure to the substance or medication which can produce such symptoms.
The symptoms cannot be better explained by an obsessive–compulsive and related disorder that is not substance/medication-induced, and they should persist for a substantial period of time (about one month).
This disturbance does not only occur during delirium.
The disturbance causes clinically significant distress or impairment in social, occupational or other important areas of functioning.
Genetics
There appear to be some genetic components of OCD causation, with identical twins more often affected than fraternal twins. Furthermore, individuals with OCD are more likely to have first-degree family members exhibiting the same disorders than matched controls. In cases in which OCD develops during childhood, there is a much stronger familial link in the disorder than with cases in which OCD develops later in adulthood. In general, genetic factors account for 45–65% of the variability in OCD symptoms in children diagnosed with the disorder. A 2007 study found evidence supporting the possibility of a heritable risk for OCD. OCD is believed to be a heterogeneous disorder.
Research has found a genetic correlation between anorexia nervosa and OCD, suggesting a strong etiology. First- and second-degree relatives of probands with OCD have a greater risk of developing anorexia nervosa as genetic relatedness increases.
A mutation has been found in the human serotonin transporter gene hSERT in unrelated families with OCD.
A systematic review of this gene's promoter polymorphism (with its L and S alleles) found that while neither allele was associated with OCD overall, the L allele was associated with OCD in Caucasians. Another meta-analysis observed an increased risk in those with the homozygous S allele, but found the LS genotype to be inversely associated with OCD.
A genome-wide association study found OCD to be linked with single-nucleotide polymorphisms (SNPs) near BTBD3 and two SNPs in DLGAP1 in a trio-based analysis, but no SNP reached significance when analyzed with case-control data.
One meta-analysis found a small but significant association between a polymorphism in SLC1A1 and OCD.
The relationship between OCD and Catechol-O-methyltransferase (COMT) has been inconsistent, with one meta-analysis reporting a significant association, albeit only in men, and another meta-analysis reporting no association.
It has been postulated by evolutionary psychologists that moderate versions of compulsive behavior may have had evolutionary advantages. Examples would be moderate constant checking of hygiene, the hearth or the environment for enemies. Similarly, hoarding may have had evolutionary advantages. In this view, OCD may be the extreme statistical tail of such behaviors, possibly the result of a high number of predisposing genes.
Brain structure and functioning
Imaging studies have shown differences in the frontal cortex and subcortical structures of the brain in patients with OCD. There appears to be a connection between the OCD symptoms and abnormalities in certain areas of the brain, but such a connection is not clear. Some people with OCD have areas of unusually high activity in their brain or low levels of the chemical serotonin, which is a neurotransmitter that some nerve cells use to communicate with each other, and is thought to be involved in regulating many functions, influencing emotions, mood, memory and sleep.
Autoimmune
A controversial hypothesis is that some cases of rapid onset of OCD in children and adolescents may be caused by a syndrome connected to Group A streptococcal infections (GABHS), known as pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections (PANDAS). OCD and tic disorders are hypothesized to arise in a subset of children as a result of a post-streptococcal autoimmune process. The PANDAS hypothesis is unconfirmed and unsupported by data and two new categories have been proposed: PANS (pediatric acute-onset neuropsychiatric syndrome) and CANS (childhood acute neuropsychiatric syndrome). The CANS and PANS hypotheses include different possible mechanisms underlying acute-onset neuropsychiatric conditions, but do not exclude GABHS infections as a cause in a subset of individuals. PANDAS, PANS and CANS are the focus of clinical and laboratory research, but remain unproven. Whether PANDAS is a distinct entity differing from other cases of tic disorders or OCD is debated.
A review of studies examining anti-basal ganglia antibodies in OCD found an increased risk of having anti-basal ganglia antibodies in those with OCD versus the general population.
Environment
OCD may be more common in people who have been bullied, abused or neglected, and it sometimes starts after a significant life event, such as childbirth or bereavement. It has been reported in some studies that there is a connection between childhood trauma and obsessive-compulsive symptoms. More research is needed to understand this relationship better.
Mechanisms
Neuroimaging
Functional neuroimaging during symptom provocation has observed abnormal activity in the orbitofrontal cortex (OFC), left dorsolateral prefrontal cortex (dlPFC), right premotor cortex, left superior temporal gyrus, globus pallidus externus, hippocampus and right uncus. Weaker foci of abnormal activity were found in the left caudate, posterior cingulate cortex and superior parietal lobule. However, an older meta-analysis of functional neuroimaging in OCD reported that the only consistent functional neuroimaging finding was increased activity in the orbital gyrus and head of the caudate nucleus, while anterior cingulate cortex (ACC) activation abnormalities were too inconsistent.
A meta-analysis comparing affective and nonaffective tasks observed differences with controls in regions implicated in salience, habit, goal-directed behavior, self-referential thinking and cognitive control. For nonaffective tasks, hyperactivity was observed in the insula, ACC and head of the caudate/putamen, while hypoactivity was observed in the medial prefrontal cortex (mPFC) and posterior caudate. Affective tasks were observed to relate to increased activation in the precuneus and posterior cingulate cortex, while decreased activation was found in the pallidum, ventral anterior thalamus and posterior caudate. The involvement of the cortico-striato-thalamo-cortical loop in OCD, as well as the high rates of comorbidity between OCD and ADHD, have led some to draw a link in their mechanism. Observed similarities include dysfunction of the anterior cingulate cortex and prefrontal cortex, as well as shared deficits in executive functions. The involvement of the orbitofrontal cortex and dorsolateral prefrontal cortex in OCD is shared with bipolar disorder and may explain the high degree of comorbidity. Decreased volumes of the dorsolateral prefrontal cortex related to executive function have also been observed in OCD.
People with OCD evince increased grey matter volumes in bilateral lenticular nuclei, extending to the caudate nuclei, with decreased grey matter volumes in bilateral dorsal medial frontal/anterior cingulate gyri. These findings contrast with those in people with other anxiety disorders, who evince decreased (rather than increased) grey matter volumes in bilateral lenticular/caudate nuclei, as well as decreased grey matter volumes in bilateral dorsal medial frontal/anterior cingulate gyri. Increased white matter volume and decreased fractional anisotropy in anterior midline tracts has been observed in OCD, possibly indicating increased fiber crossings.
Cognitive models
Generally, two categories of models for OCD have been postulated. The first category involves deficits in executive function and is based on the observed structural and functional abnormalities in the dlPFC, striatum and thalamus. The second category involves dysfunctional modulatory control and primarily relies on observed functional and structural differences in the ACC, mPFC and OFC.
One proposed model suggests that dysfunction in the orbitofrontal cortex (OFC) leads to improper valuation of behaviors and decreased behavioral control, while the observed alterations in amygdala activation lead to exaggerated fears and representations of negative stimuli.
Due to the heterogeneity of OCD symptoms, studies differentiating various symptoms have been performed. Symptom-specific neuroimaging abnormalities include hyperactivity of the caudate and ACC during checking rituals, and increased activity of cortical and cerebellar regions in contamination-related symptoms. Neuroimaging differentiating the content of intrusive thoughts has found differences between aggressive and taboo thoughts: increased connectivity of the amygdala, ventral striatum and ventromedial prefrontal cortex in aggressive symptoms, and increased connectivity between the ventral striatum and insula in sexual or religious intrusive thoughts.
Another model proposes that affective dysregulation links excessive reliance on habit-based action selection with compulsions. This is supported by the observation that those with OCD demonstrate decreased activation of the ventral striatum when anticipating monetary reward, as well as increased functional connectivity between the VS and the OFC. Furthermore, those with OCD demonstrate reduced performance in Pavlovian fear-extinction tasks, hyperresponsiveness in the amygdala to fearful stimuli and hyporesponsiveness in the amygdala when exposed to positively valenced stimuli. Stimulation of the nucleus accumbens has also been observed to effectively alleviate both obsessions and compulsions, supporting the role of affective dysregulation in generating both.
Neurobiological
From the observation of the efficacy of antidepressants in OCD, a serotonin hypothesis of OCD has been formulated. Studies of peripheral markers of serotonin, as well as challenges with proserotonergic compounds have yielded inconsistent results, including evidence pointing towards basal hyperactivity of serotonergic systems. Serotonin receptor and transporter binding studies have yielded conflicting results, including higher and lower serotonin receptor 5-HT2A and serotonin transporter binding potentials that were normalized by treatment with SSRIs. Despite inconsistencies in the types of abnormalities found, evidence points towards dysfunction of serotonergic systems in OCD. Orbitofrontal cortex overactivity is attenuated in people who have successfully responded to SSRI medication, a result believed to be caused by increased stimulation of serotonin receptors 5-HT2A and 5-HT2C.
A complex relationship between dopamine and OCD has been observed. Although antipsychotics, which act by antagonizing dopamine receptors, may improve some cases of OCD, they frequently exacerbate others. Antipsychotics, in the low doses used to treat OCD, may actually increase the release of dopamine in the prefrontal cortex, through inhibiting autoreceptors. Further complicating things is the efficacy of amphetamines, decreased dopamine transporter activity observed in OCD, and low levels of D2 binding in the striatum. Furthermore, increased dopamine release in the nucleus accumbens after deep brain stimulation correlates with improvement in symptoms, pointing to reduced dopamine release in the striatum playing a role in generating symptoms.
Abnormalities in glutamatergic neurotransmission have been implicated in OCD. Findings such as increased cerebrospinal glutamate, less consistent abnormalities observed in neuroimaging studies, and the efficacy of some glutamatergic drugs (such as the glutamate-inhibiting riluzole) have implicated glutamate in OCD. OCD has been associated with reduced N-Acetylaspartic acid in the mPFC, which is thought to reflect neuron density or functionality, although the exact interpretation has not been established.
Diagnosis
Formal diagnosis may be performed by a psychologist, psychiatrist, clinical social worker or other licensed mental health professional. OCD, like other mental and behavioral health disorders, cannot be diagnosed by a medical exam, nor is there any medical test that can predict whether a person will develop it. To be diagnosed with OCD, a person must have obsessions, compulsions or both, according to the Diagnostic and Statistical Manual of Mental Disorders (DSM). The DSM notes several characteristics that can turn obsessions and compulsions from normal behavior into something "clinically significant": there must be recurrent and intense thoughts or impulses that intrude on the patient's day-to-day life and cause marked anxiety.
These thoughts, impulses or images are of a degree or type that lies outside the normal range of worries about conventional problems. A person may attempt to ignore or suppress such obsessions, neutralize them with another thought or action, or try to rationalize their anxiety away. People with OCD tend to recognize their obsessions as irrational.
Compulsions become clinically significant when a person feels driven to perform them in response to an obsession or according to rules that must be applied rigidly, and when the person consequently feels or causes significant distress. Therefore, while many people who do not have OCD may perform actions often associated with OCD (such as ordering items in a pantry by height), the distinction with clinically significant OCD lies in the fact that the person with OCD must perform these actions to avoid significant psychological distress. These behaviors or mental acts are aimed at preventing or reducing distress or preventing some dreaded event or situation; however, these activities are not logically or practically connected to the issue, or they are excessive.
Moreover, the obsessions or compulsions must be time-consuming (often taking up more than one hour per day) or cause impairment in social, occupational or scholastic functioning. It is helpful to quantify the severity of symptoms and impairment before and during treatment for OCD. In addition to the person's estimate of the time spent each day harboring obsessive–compulsive thoughts or behaviors, concrete tools can be used to gauge the person's condition. This may be done with rating scales, such as the Yale–Brown Obsessive Compulsive Scale (Y-BOCS; expert rating) or the obsessive–compulsive inventory (OCI-R; self-rating). Because such measurements are standardized, they make it easier to determine when psychiatric consultation is appropriate.
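As an illustration of how such standardized scores can be tracked before and during treatment, the minimal sketch below maps a Y-BOCS total score (the scale runs from 0 to 40) onto the interpretive severity bands commonly cited for the instrument. The band cut-offs are a widely used convention rather than something stated in this article, and the function is purely illustrative, not part of any published scoring manual.

```python
def ybocs_severity(total: int) -> str:
    """Map a Y-BOCS total score (0-40) onto commonly cited severity bands.

    The bands below are a widespread convention, not taken from this article:
    0-7 subclinical, 8-15 mild, 16-23 moderate, 24-31 severe, 32-40 extreme.
    """
    if not 0 <= total <= 40:
        raise ValueError("Y-BOCS total score must be between 0 and 40")
    if total <= 7:
        return "subclinical"
    if total <= 15:
        return "mild"
    if total <= 23:
        return "moderate"
    if total <= 31:
        return "severe"
    return "extreme"


# Example: a score of 30 before treatment ("severe") falling to 18 during
# treatment ("moderate") documents the kind of change clinicians monitor.
print(ybocs_severity(30), ybocs_severity(18))
```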
When making a diagnosis, the health professional also checks that the signs of obsessions and compulsions are not the result of any drugs, prescription or recreational, that the patient may be taking.
Several types of obsessive thoughts are commonly found in those with OCD, including fears of germs, of hurting loved ones and of embarrassment, as well as preoccupations with neatness and socially unacceptable sexual thoughts. These specific categories are often treated as distinct subtypes of OCD.
OCD is sometimes placed in a group of disorders called the obsessive–compulsive spectrum.
Another criterion in the DSM is that the person's symptoms are not better accounted for by another category of mental disorder. That is to say, if a patient's obsessions and compulsions could be better described by trichotillomania, they would not be diagnosed as OCD. That said, OCD often co-occurs with other mental disorders, so a person may be diagnosed with multiple mental disorders at once.
Another aspect of the diagnosis is the degree of insight the individual has into the truth of their obsessions. There are three levels: good/fair, poor and absent/delusional. Good/fair indicates that the patient is aware that their obsessions are not true or probably not true. Poor indicates that the patient believes their obsessional beliefs are probably true. Absent/delusional indicates that they fully believe their obsessional thoughts to be true. Approximately 4% or fewer individuals with OCD will be diagnosed as absent/delusional. Additionally, as many as 30% of those with OCD also have a lifetime tic disorder, meaning they have been diagnosed with a tic disorder at some point in their life.
Several different types of tics have been observed in individuals with OCD. These include, but are not limited to, "grunting", "jerking" or "shrugging" of body parts, sniffling and excessive blinking.
There has been significant progress over the last few decades, and as of 2022 there is a statistically significant improvement in the diagnostic process for individuals with OCD. One study compared two groups of individuals, one with participants under the age of 27.25 and one with participants over that age, and found that those in the younger group experienced a significantly shorter time between the onset of OCD tendencies and their formal diagnosis.
Differential diagnosis
OCD is often confused with the separate condition obsessive–compulsive personality disorder (OCPD). OCD is egodystonic, meaning that the disorder is incompatible with the individual's self-concept. As egodystonic disorders go against a person's self-concept, they tend to cause much distress. OCPD, on the other hand, is egosyntonic, marked by the person's acceptance that the characteristics and behaviors displayed as a result are compatible with their self-image, or are otherwise appropriate, correct or reasonable.
As a result, people with OCD are often aware that their behavior is not rational and are unhappy about their obsessions, but nevertheless feel compelled by them. By contrast, people with OCPD are not aware of anything abnormal; they will readily explain why their actions are rational. It is usually impossible to convince them otherwise and they tend to derive pleasure from their obsessions or compulsions.
Management
Cognitive behavioral therapy (CBT) and psychotropic medications are the first-line treatments for OCD.
Therapy
One specific CBT technique used is called exposure and response prevention (ERP), which involves teaching the person to deliberately come into contact with situations that trigger obsessive thoughts and fears (exposure) without carrying out the usual compulsive acts associated with the obsession (response prevention). Through this technique, patients gradually learn to tolerate the discomfort and anxiety associated with not performing their compulsions. For many patients, ERP is the add-on treatment of choice when selective serotonin reuptake inhibitors (SSRIs) or serotonin–norepinephrine reuptake inhibitors (SNRIs) do not effectively treat OCD symptoms, or, conversely, medication is added for individuals who begin treatment with psychotherapy. This technique is considered superior to others because it does not rely on medication. However, up to 25% of patients will discontinue treatment due to the severity of their tics. CBT normally lasts 12–16 sessions, with homework assigned to the patient between meetings with a therapist (Lack 2012). ERP delivery modalities differ, but both virtual reality-based and unguided computer-assisted treatment programs have shown effective results.
For example, a patient might be asked to touch something very mildly contaminated (exposure) and wash their hands only once afterward (response prevention). Another example might entail asking the patient to leave the house and check the lock only once (exposure), without going back to check again (response prevention). After succeeding at one stage of treatment, the patient's level of discomfort in the exposure phase can be increased. When this therapy is successful, the patient will quickly habituate to an anxiety-producing situation, discovering a considerable drop in anxiety level.
ERP has a strong evidence base and is considered the most effective treatment for OCD. However, this claim was doubted by some researchers in 2000, who criticized the quality of many studies. While ERP can lead a majority of clients to improvements, many do not reach remission or become asymptomatic; some therapists are also hesitant to use this approach.
The recent development of remotely delivered, technology-based CBT is increasing access to therapy options for those living with OCD, and remote versions appear to be equally as effective as in-person therapy. Smartphone interventions for OCD that use CBT techniques are another alternative that is expanding access to therapy while allowing treatment to be personalized for each patient.
Acceptance and commitment therapy (ACT), a newer therapy also used to treat anxiety and depression, has also been found to be effective in treatment of OCD. ACT uses acceptance and mindfulness strategies to teach patients not to overreact to or avoid unpleasant thoughts and feelings but rather "move toward valued behavior".
Inference-based therapy (IBT) is a form of cognitive therapy specifically developed for treating OCD. The therapy posits that individuals with OCD put a greater emphasis on an imagined possibility than on what can be perceived with the senses, and confuse the imagined possibility with reality, in a process called inferential confusion. According to inference-based therapy, obsessional thinking occurs when the person replaces reality and real probabilities with imagined possibilities. The goal of inference-based therapy is to reorient clients towards trusting the senses and relating to reality in a normal, non-effortful way. Differences between normal and obsessional doubts are presented and clients are encouraged to use their senses and reasoning as they do in non-obsessive–compulsive disorder situations. Research on Inference-Based Cognitive-Behavior Therapy (I-CBT) suggests it can lead to improvements for those with OCD.
A 2007 Cochrane review found that psychological interventions derived from CBT models, such as ERP, ACT and IBT, were more effective than non-CBT interventions. Other forms of psychotherapy, such as psychodynamic therapy and psychoanalysis, may help in managing some aspects of the disorder. However, in 2007, the American Psychiatric Association (APA) noted a lack of controlled studies showing their efficacy "in dealing with the core symptoms of OCD". For body-focused repetitive behaviors (BFRB), behavioral interventions such as habit-reversal training and decoupling are recommended.
Psychotherapy in combination with psychiatric medication may be more effective than either option alone for individuals with severe OCD. ERP coupled with weight restoration and serotonin reuptake inhibitors has proven the most effective when treating OCD and an eating disorder simultaneously.
Medication
The medications most frequently used to treat OCD are antidepressants, including selective serotonin reuptake inhibitors (SSRIs) and serotonin–norepinephrine reuptake inhibitors (SNRIs). Sertraline and fluoxetine are effective in treating OCD for children and adolescents.
SSRIs are a second-line treatment for adults with OCD and mild functional impairment, and a first-line treatment for those with moderate or severe impairment. In children, SSRIs can be considered as a second-line therapy in those with moderate to severe impairment, with close monitoring for psychiatric adverse effects. Patients treated with SSRIs are about twice as likely to respond to treatment as are those treated with placebo, so this treatment is qualified as efficacious. Efficacy has been demonstrated both in short-term (6–24 weeks) treatment trials and in discontinuation trials with durations of 28–52 weeks.
Clomipramine, a medication belonging to the class of tricyclic antidepressants, appears to work as well as SSRIs, but has a higher rate of side effects.
In 2006, the National Institute for Health and Care Excellence (NICE) guidelines recommended augmentative second-generation (atypical) antipsychotics for treatment-resistant OCD. Atypical antipsychotics are not useful when used alone and no evidence supports the use of first-generation antipsychotics. For OCD treatment specifically, there is tentative evidence for risperidone and insufficient evidence for olanzapine. Quetiapine is no better than placebo with regard to primary outcomes, but small effects were found in terms of Y-BOCS score. The efficacy of quetiapine and olanzapine are limited by an insufficient number of studies. A 2014 review article found two studies that indicated that aripiprazole was "effective in the short-term" and found that "[t]here was a small effect-size for risperidone or antipsychotics in general in the short-term"; however, the study authors found "no evidence for the effectiveness of quetiapine or olanzapine in comparison to placebo." While quetiapine may be useful when used in addition to an SSRI/SNRI in treatment-resistant OCD, these drugs are often poorly tolerated and have metabolic side effects that limit their use. A guideline by the American Psychological Association suggested that dextroamphetamine may be considered by itself after more well-supported treatments have been attempted.
Procedures
Electroconvulsive therapy (ECT) has been found to be effective in some severe and refractory cases. Transcranial magnetic stimulation has been shown to provide therapeutic benefits in alleviating symptoms.
Surgery may be used as a last resort in people who do not improve with other treatments. In this procedure, a surgical lesion is made in an area of the brain (the cingulate cortex). In one study, 30% of participants benefitted significantly from this procedure. Deep brain stimulation and vagus nerve stimulation are possible surgical options that do not require destruction of brain tissue. However, because deep brain stimulation results in such an instant and intense change, individuals may experience identity challenges afterward. In the United States, the Food and Drug Administration (FDA) approved deep brain stimulation for the treatment of OCD under a humanitarian device exemption, requiring that the procedure be performed only in a hospital with special qualifications to do so.
In the United States, psychosurgery for OCD is a treatment of last resort and will not be performed until the person has failed several attempts at medication (at the full dosage) with augmentation, and many months of intensive cognitive behavioral therapy with exposure and ritual/response prevention. Likewise, in the United Kingdom, psychosurgery cannot be performed unless a course of treatment from a suitably qualified cognitive–behavioral therapist has been carried out.
Children
Therapeutic treatment may be effective in reducing ritual behaviors of OCD for children and adolescents. Similar to the treatment of adults with OCD, cognitive behavioral therapy, along with exposure and response prevention (ERP) therapy, stands as an effective and validated first line of treatment of OCD in children. Family involvement, in the form of behavioral observations and reports, is a key component of the success of such treatments. Parental interventions also provide positive reinforcement for a child who exhibits appropriate behaviors as alternatives to compulsive responses. In a recent meta-analysis of evidence-based treatment of OCD in children, family-focused individual CBT was labeled as "probably efficacious", establishing it as one of the leading psychosocial treatments for youth with OCD. After one or two years of therapy, in which a child learns the nature of their obsession and acquires strategies for coping, they may acquire a larger circle of friends, exhibit less shyness and become less self-critical. Trials have shown that children and adolescents with OCD should begin treatment with the combination of CBT with a selective serotonin reuptake inhibitor or CBT alone, rather than only an SSRI. A 2024 systematic review of the literature found that combining ERP therapy with selective serotonin reuptake inhibitors can enhance treatment outcomes compared to using SSRIs alone.
Although the known causes of OCD in younger age groups range from brain abnormalities to psychological preoccupations, life stress such as bullying and traumatic familial deaths may also contribute to childhood cases of OCD, and acknowledging these stressors can play a role in treating the disorder.
Prognosis
Quality of life is reduced across all domains in OCD. While psychological or pharmacological treatment can lead to a reduction of OCD symptoms and an increase in reported quality of life, symptoms may persist at moderate levels even following adequate treatment courses, and completely symptom-free periods are uncommon. In pediatric OCD, around 40% still have the disorder in adulthood and around 40% qualify for remission. The risk of having at least one comorbid personality disorder in OCD is 52%, which is the highest among anxiety disorders and greatly impacts its management and prognosis.
Epidemiology
Obsessive–compulsive disorder affects about 2.3% of people at some point in their life, with a yearly rate of about 1.2%. OCD occurs worldwide. It is unusual for symptoms to begin after the age of 35, and half of people develop problems before 20. Males and females are affected about equally, although the age of onset is earlier in males than in females.
History
Plutarch, an ancient Greek philosopher and historian, describes an ancient Roman man who possibly had scrupulosity, which could be a symptom of OCD or OCPD. This man is described as "turning pale under his crown of flowers", praying with a "faltering voice" and scattering "incense with trembling hands".
In the 7th century AD, John Climacus records an instance of a young monk plagued by constant and overwhelming "temptations to blasphemy" consulting an older monk, who told him: "My son, I take upon myself all the sins which these temptations have led you, or may lead you, to commit. All I require of you is that for the future you pay no attention to them whatsoever." The Cloud of Unknowing, a Christian mystical text from the late 14th century, recommends dealing with recurring obsessions by attempting to ignore them, and, if that fails, to "cower under them like a poor wretch and a coward overcome in battle, and reckon it to be a waste of your time for you to strive any longer against them", a technique now known as emotional flooding.
Abu Zayd Al-Balkhi, the 9th century Islamic polymath, was likely the first to classify OCD into different types and pioneer cognitive behavioral therapy, in a fashion unique to his era and which was not popular in Greek medicine. In his medical treatise entitled Sustenance of the Body and Soul, Al-Balkhi describes obsessions particular to the disorder as "Annoying thoughts that are not real. These intrusive thoughts prevent enjoying life, and performing daily activities. They affect concentration and interfere with ability to carry out different tasks." As treatment, Al-Balkhi suggests treating obsessive thoughts with positive thoughts and mind-based therapy.
From the 14th to the 16th century in Europe, it was believed that people who experienced blasphemous, sexual or other obsessive thoughts were possessed by the devil. Based on this reasoning, treatment involved banishing the "evil" from the "possessed" person through exorcism. The vast majority of people who thought that they were possessed by the devil did not have hallucinations or other "spectacular symptoms" but "complained of anxiety, religious fears, and evil thoughts." In 1584, a woman from Kent, England, named Mrs. Davie, described by a justice of the peace as "a good wife", was nearly burned at the stake after she confessed that she experienced constant, unwanted urges to murder her family.
The English term obsessive–compulsive arose as a translation of German Zwangsvorstellung (obsession) used in the first conceptions of OCD by Karl Westphal. Westphal's description went on to influence Pierre Janet, who further documented features of OCD. In the early 1910s, Sigmund Freud attributed obsessive–compulsive behavior to unconscious conflicts that manifest as symptoms. Freud describes the clinical history of a typical case of "touching phobia" as starting in early childhood, when the person has a strong desire to touch an item. In response, the person develops an "external prohibition" against this type of touching. However, this "prohibition does not succeed in abolishing" the desire to touch; all it can do is repress the desire and "force it into the unconscious." Freudian psychoanalysis remained the dominant treatment for OCD until the mid-1980s, even though medicinal and therapeutic treatments were known and available, because it was widely thought that these treatments would be detrimental to the effectiveness of the psychotherapy. In the mid-1980s, this approach changed and practitioners began treating OCD primarily with medicine and practical therapy rather than through psychoanalysis.
One of the first successful treatments of OCD, exposure and response prevention, emerged during the 1960s, when psychologist Vic Meyer exposed two hospitalized patients to anxiety-inducing situations while preventing them from performing any compulsions. Eventually, both patients' anxiety level dropped to manageable levels. Meyer devised this procedure from his analysis of fear extinguishment in animals via flooding. The success of ERP clinically and scientifically has been summarized as "spectacular" by prominent OCD researcher Stanley Rachman decades following Meyer's creation of the method.
In 1967, psychiatrist Juan José López-Ibor reported that the drug clomipramine was effective in treating OCD. Many reports of its success in treatment followed and several studies had confirmed its effectiveness by the 1980s. However, clomipramine was subsequently displaced by new SSRIs developed in the 1970s, such as fluoxetine and sertraline, which were shown to have fewer side effects.
Obsessive–compulsive symptoms worsened during the early stages of the COVID-19 pandemic, particularly for individuals with contamination-related OCD.
Notable cases
John Bunyan (1628–1688), the author of The Pilgrim's Progress, displayed symptoms of OCD (which had not yet been named). During the most severe period of his condition, he would mutter the same phrase over and over again to himself while rocking back and forth. He later described his obsessions in his autobiography Grace Abounding to the Chief of Sinners, stating, "These things may seem ridiculous to others, even as ridiculous as they were in themselves, but to me they were the most tormenting cogitations." He wrote two pamphlets advising those with similar anxieties. In one of them, he warns against indulging in compulsions: "Have care of putting off your trouble of spirit in the wrong way: by promising to reform yourself and lead a new life, by your performances or duties."
British poet, essayist and lexicographer Samuel Johnson (1709–1784) also had OCD. He had elaborate rituals for crossing the thresholds of doorways and repeatedly walked up and down staircases counting the steps. He would touch every post on the street as he walked past, only step in the middle of paving stones and repeatedly perform tasks as though they had not been done properly the first time.
The "Rat Man", real name Ernst Lanzer, a patient of Sigmund Freud, suffered from what was then called "obsessional neurosis". Lanzer's illness was characterised most famously by a pattern of distressing intrusive thoughts in which he feared that his father or a female friend would be subjected to a purported Chinese method of torture in which rats would be encouraged to gnaw their way out of a victim's body by a hot poker.
American aviator and filmmaker Howard Hughes is known to have had OCD, primarily an obsessive fear of germs and contamination. Friends of Hughes have also mentioned his obsession with minor flaws in clothing. This was conveyed in The Aviator (2004), a film biography of Hughes.
English singer-songwriter George Ezra has openly spoken about his life-long struggle with OCD, particularly primarily obsessional obsessive–compulsive disorder (Pure O).
Swedish climate activist Greta Thunberg is also known to have OCD, among other mental health conditions.
American actor James Spader has also spoken about his OCD. In 2014, when interviewed for Rolling Stone, he said: "I'm obsessive-compulsive. I have very, very strong obsessive-compulsive issues. I'm very particular. ... It's very hard for me, you know? It makes you very addictive in behavior, because routine and ritual become entrenched. But in work, it manifests itself in obsessive attention to detail and fixation. It serves my work very well: Things don't slip by. But I'm not very easygoing."
In 2022 the president of Chile Gabriel Boric stated that he had OCD, saying: "I have an obsessive–compulsive disorder that's completely under control. Thank God I've been able to undergo treatment and it doesn't make me unable to carry out my responsibilities as the President of the Republic."
In a documentary released in 2023, David Beckham shared details about his compulsive cleaning rituals, his need for symmetry in the fridge and the impact of OCD on his life.
Society and culture
Art, entertainment and media
Movies and television shows may portray idealized or incomplete representations of disorders such as OCD. Compassionate and accurate literary and on-screen depictions may help counteract the potential stigma associated with an OCD diagnosis and lead to increased public awareness, understanding and sympathy for such disorders.
The play and film adaptations of The Odd Couple are based around the character of Felix, who shows some of the common symptoms of OCD.
In the film As Good as It Gets (1997), actor Jack Nicholson portrays a man with OCD who performs ritualistic behaviors that disrupt his life.
The film Matchstick Men (2003) portrays a con man named Roy (Nicolas Cage) with OCD who opens and closes doors three times while counting aloud before he can walk through them.
In the television series Monk (2002–2009), the titular character Adrian Monk fears both human contact and dirt.
The one-man show The Life and Slimes of Marc Summers (2016) is a stage adaptation of Marc Summers' 1999 memoir, which recounts how OCD affected his entertainment career.
In the novel Turtles All the Way Down (2017) by John Green, teenage main character Aza Holmes struggles with OCD that manifests as a fear of the human microbiome. Throughout the story, Aza repeatedly opens an unhealed callus on her finger to drain out what she believes are pathogens. The novel is based on Green's own experiences with OCD. He explained that Turtles All the Way Down is intended to show how "most people with chronic mental illnesses also live long, fulfilling lives."
The British TV series Pure (2019) stars Charly Clive as Marnie, a 24-year-old who is plagued by disturbing sexual thoughts, a form of primarily obsessional obsessive–compulsive disorder.
Research
The naturally occurring sugar inositol has been suggested as a treatment for OCD.
μ-Opioid receptor agonists, such as hydrocodone and tramadol, may improve OCD symptoms. Administration of opioids may be contraindicated in individuals concurrently taking CYP2D6 inhibitors such as fluoxetine and paroxetine.
Much research is devoted to the therapeutic potential of the agents that affect the release of the neurotransmitter glutamate or the binding to its receptors. These include riluzole, memantine, gabapentin, N-acetylcysteine (NAC), topiramate and lamotrigine. Research on the potential for other supplements, such as milk thistle, to help with OCD and various neurological disorders, is ongoing.
Researchers have identified over 600 genes related to cortical thickness, a factor that impacts OCD expression. "Notably, the enrichment of genes involved in ion transport regulation, responses to environmental stimuli, and metal ion transport regulation suggests the roles of these processes in OCD pathophysiology."
Research indicates that people with OCD have a lower amplitude of low-frequency fluctuation in both the left and right putamen. The right putamen also displays decreased functional connectivity with the left putamen which extends to the left inferior frontal gyrus (IFG), bilateral precuneus extending to calcarine, right middle occipital cortex extending to the right middle temporal cortex, and left middle occipital gyrus. In addition, the decreased connectivity between the right putamen and the left putamen is negatively correlated with Y-BOCS scores.
In a study exploring the correlation between neural biomarkers and response to transcranial direct current stimulation (tDCS) in people with OCD, researchers found thicker precentral and paracentral areas in people with OCD compared to controls. A significant association was found between a thinner precentral area and reduced Y-BOCS scores.
Other animals
Advocacy
Many organizations and charities around the world advocate for the wellbeing of people with OCD, stigma reduction, research and awareness. The International OCD Foundation (IOCDF) is the largest 501(c)(3) nonprofit organization dedicated to serving a broad community of individuals with OCD and related disorders, their family members and loved ones, and mental health professionals and researchers around the world. Since 1986, the IOCDF has provided up-to-date education and resources, strengthened community engagement worldwide, delivered quality professional training to clinicians and funded groundbreaking research.
See also
Anxiety disorder
Bipolar disorder
Body dysmorphic disorder
Compulsive hoarding
Delusional disorder
Hypochondriasis
Major depressive disorder
Obsessive–compulsive spectrum
Tic disorder
Body-focused repetitive behavior
Trichotillomania
References
External links
National Institute Of Mental Health
American Psychiatric Association
APA Division 12 treatment page for obsessive-compulsive disorder
Anxiety disorders
Magical thinking
Ritual
Wikipedia neurology articles ready to translate
Wikipedia medicine articles ready to translate | Obsessive–compulsive disorder | Biology | 14,003 |
60,925,330 | https://en.wikipedia.org/wiki/Medication%20Appropriateness%20Tool%20for%20Comorbid%20Health%20conditions%20during%20Dementia | The Medication Appropriateness Tool for Comorbid Health conditions during Dementia (MATCH-D) criteria supports clinicians to manage medication use specifically for people with dementia without focusing only on the management of the dementia itself.
History
The MATCH-D criteria were developed by medical practitioners and pharmacists at Australian Group of Eight universities, led by Dr Amy Theresa Page at the Western Australian Centre for Health and Ageing at the University of Western Australia. The criteria were developed through a consensus panel of experts using the Delphi method and were originally published in the Internal Medicine Journal in 2016. The protocol explaining the rigorous methods used to develop the criteria was originally published in BMJ Open in 2015. The systematic review that informed the criteria was subsequently published in 2018 and updated in 2022.
Style of the criteria
The MATCH-D is presented as categories of recommendations for all stages of dementia, as well as specific recommendations for early-, mid- and late-stage dementia. The recommendations are grouped as: medication side effects, principles for medication use, medication review, treatment goals, preventative medications, symptom management, psychoactive medications and medications to modify dementia progression.
Reception of the criteria
The MATCH-D attracted media attention while it was under development and when it was released. Page was interviewed on ABC national radio's science show during its development, and the health media picked up the story as soon as it was published.
Organisations who recommend the criteria
Respected organisations such as the British Geriatrics Society have incorporated it into their own medicines management guidelines. In New Zealand, the Health Quality & Safety Commission has shared it in its communications.
It is cited and promoted by influential professional bodies in many countries including:
- the British Geriatrics Society's End of Life Care in Frailty guidelines
- New Zealand's Health Quality & Safety Commission's medication management work
- Australia's Royal Australian College of General Practitioners (RACGP) aged care clinical guide known as the Silver Book
- Australia's Pharmaceutical Society of Australia (PSA) Choosing Wisely series
- Australian Commission on Safety and Quality in Health Care
- Australian Deprescribing Network (ADeN)
- Australia's NPS MedicineWise, which recommended it in its Medication Management Review Reports: Best practice recommendations program and in Changed Behaviour in Dementia
- New South Wales' Therapeutic Advisory Group (TAG)
Uses
Consumers considered the MATCH-D to be a useful tool for prompting and supporting conversations about their preferences for medication use. They would prefer that these conversations began as early as possible so that their treating health professionals knew their preferences. General practitioners, pharmacists and nurses stated they often felt less comfortable discussing these issues as they were concerned that it may cause distress to the consumer. Health professionals and consumers alike thought that using the MATCH-D as a conversation starter could assist with these conversations.
It is incorporated into the TaperMD decision support tool and the PIMSPlus platform. This incorporation has hastened the uptake of the criteria in both long-term care facilities and community settings in Canada.
More than one-quarter of Australian consultant pharmacists report that they use the MATCH-D during Home Medicine Reviews. This figure suggests high uptake, given that most Home Medicine Reviews are likely undertaken for people who are not living with dementia.
Research on the criteria
Translational research was undertaken with consumers, general practitioners, nurses and pharmacists to explore the enablers of and barriers to using the MATCH-D in practice. This research showed the need for a website (since launched at MATCH-D.com.au), checklists (available at the website) and educational resources. Engagement with these stakeholders showed a strong need for support and collaboration to improve medication use.
Research at King's College London explored the hazards of suboptimal prescribing and polypharmacy in medicines use for people with dementia. The researchers estimated that, globally, up to 10 million people living with dementia require hospital treatment (emergency department visits or hospital admissions) each year because of medication-related harm. They concluded that, if the MATCH-D were successfully implemented, the relative hazards of medicines use for people with dementia would need to be re-evaluated.
The National Health and Medical Research Council (NHMRC) is currently funding a randomised controlled trial implementing the MATCH-D using pharmacists embedded in general practice.
Educational resources
Dementia Training Australia funded an interactive online education package for deprescribing in dementia centred on the MATCH-D. It was a joint collaboration between the University of Western Australia, University of Tasmania, La Trobe University, Monash University, Alfred Health and FireFilms, and was launched in mid-2019. The online course is suitable for consumers and health professionals, with a target audience of nurses working in residential aged care facilities. The training package takes the format of a documentary film, with its original developer, Dr Page, featured as narrator and interviewer. It includes simulated patient encounters and expert interviews, interspersed with interactive activities.
The MATCH-D and the training package by Dementia Training Australia have now been incorporated into undergraduate degrees for health professionals including the University of Tasmania's second year Bachelor of Nursing curriculum and Monash University's Bachelor of Pharmacy (Honours) curriculum.
References
Dementia
Pharmacy
Medical assessment and evaluation instruments | Medication Appropriateness Tool for Comorbid Health conditions during Dementia | Chemistry | 1,088 |
760,367 | https://en.wikipedia.org/wiki/Jupiter%20LXI | Jupiter LXI, provisionally known as , is a natural satellite of Jupiter. It was discovered by a team of astronomers led by Brett J. Gladman, et al. in 2003.
The moon is about 2 kilometers in diameter and orbits Jupiter at an average distance of 22,709 Mm in 699.125 days, at an inclination of 165° to the ecliptic (164° to Jupiter's equator), in a retrograde direction and with an eccentricity of 0.1961.
It belongs to the Carme group, made up of irregular retrograde moons orbiting Jupiter at a distance ranging between 23 and 24 Gm and at an inclination of about 165°.
This moon was lost following its discovery in 2003. It was recovered in 2018 and given its permanent designation that year.
References
Carme group
Moons of Jupiter
Irregular satellites
Astronomical objects discovered in 2003
Moons with a retrograde orbit | Jupiter LXI | Astronomy | 181 |
49,112,440 | https://en.wikipedia.org/wiki/Orientation%20sheaf | In the mathematical field of algebraic topology, the orientation sheaf on a manifold X of dimension n is a locally constant sheaf oX on X such that the stalk of oX at a point x is the local homology group
(in the integer coefficients or some other coefficients).
Let Ω^k_M be the sheaf of differential k-forms on a manifold M. If n is the dimension of M, then the sheaf 𝒟_M = Ω^n_M ⊗ o_M is called the sheaf of (smooth) densities on M. The point of this is that, while one can integrate a differential form only if the manifold is oriented, one can always integrate a density, regardless of orientation or orientability; there is the integration map ∫_M : Γ_c(M, 𝒟_M) → ℝ, defined on compactly supported sections.
If M is oriented; i.e., the orientation sheaf of the tangent bundle of M is literally trivial, then the above reduces to the usual integration of a differential form.
See also
There is also a definition in terms of dualizing complex in Verdier duality; in particular, one can define a relative orientation sheaf using a relative dualizing complex.
References
External links
Two kinds of orientability/orientation for a differentiable manifold
Algebraic topology
Orientation (geometry) | Orientation sheaf | Physics,Mathematics | 234 |
49,307,795 | https://en.wikipedia.org/wiki/Infinite%20sites%20model | The Infinite sites model (ISM) is a mathematical model of molecular evolution first proposed by Motoo Kimura in 1969. Like other mutation models, the ISM provides a basis for understanding how mutation develops new alleles in DNA sequences. Using allele frequencies, it allows for the calculation of heterozygosity, or genetic diversity, in a finite population and for the estimation of genetic distances between populations of interest.
The assumptions of the ISM are that (1) there are an infinite number of sites where mutations can occur, (2) every new mutation occurs at a novel site, and (3) there is no recombination. The term ‘site’ refers to a single nucleotide base pair. Because every new mutation has to occur at a novel site, there can be no homoplasy, or back-mutation to an allele that previously existed. All identical alleles are identical by descent. The four gamete rule can be applied to the data to ensure that they do not violate the model assumption of no recombination.
The population mutation rate (θ) can be estimated as θ = 4N_e μ, where μ is the number of mutations found within a randomly selected DNA sequence (per generation) and N_e is the effective population size. The coefficient is twice the number of gene copies carried by each individual in the population; in the case of diploid, biparentally-inherited genes the appropriate coefficient is 4, whereas for uniparental, haploid genes, such as mitochondrial genes, the coefficient would be 2, applied to the female effective population size, which is, for most species, roughly half of N_e.
When considering the length of a DNA sequence, the expected number of mutations is kμ, where k is the length of the DNA sequence and μ is the probability that a mutation will occur at a given site.
Watterson developed an estimator for mutation rate that incorporates the number of segregating sites (Watterson's estimator).
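For reference, the usual textbook form of Watterson's estimator for a sample of n sequences with S observed segregating sites is sketched below; the formula is not stated explicitly in the text above, so it is supplied here as the standard definition rather than quoted from the source.

```latex
\hat{\theta}_W = \frac{S}{a_n}, \qquad a_n = \sum_{i=1}^{n-1} \frac{1}{i}
```

Under the infinite sites assumptions, every mutation creates a new segregating site, which is why S is an informative summary of the population mutation rate.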
One way to think of the ISM is in how it applies to genome evolution. To understand the ISM as it applies to genome evolution, we must think of this model as it applies to chromosomes. Chromosomes are made up of sites, which are nucleotides represented by either A, C, G, or T. While individual chromosomes are not infinite, we must think of chromosomes as continuous intervals or continuous circles.
Multiple assumptions are applied to understanding the ISM in terms of genome evolution:
k breaks are made in these chromosomes, which leaves 2k free ends available. The 2k free ends will rejoin in a new manner rearranging the set of chromosomes (i.e. reciprocal translocation, fusion, fission, inversion, circularized incision, circularized excision).
No break point is ever used twice.
A set of chromosomes can be duplicated or lost.
DNA that never existed before can be observed in the chromosomes, such as horizontal gene transfer of DNA or viral integration.
If the chromosomes become different enough, evolution can form a new species.
Substitutions that alter a single base pair are individually invisible and substitutions occur at a finite rate per site.
The substitution rate is the same for all sites in a species, but is allowed to vary between species (i.e. no molecular clock is assumed).
Instead of thinking about substitutions themselves, think about the effect of the substitution at each point along the chromosome as a continuous increase in evolutionary distance between the previous version of the genome at that site and the next version of the genome at the corresponding site in the descendant.
References
Further reading
Molecular evolution
Population genetics
Mathematical and theoretical biology | Infinite sites model | Chemistry,Mathematics,Biology | 734 |
32,477,472 | https://en.wikipedia.org/wiki/Michael%20F.%20Lappert | Michael Franz Lappert (31 December 1928 – 28 March 2014) was a Czech-born British inorganic chemist. Mainly located at the University of Sussex, he was recognized for contributions to organometallic complexes.
Early life and education
Lappert was born in Czechoslovakia and came to the UK as a Kindertransport refugee. He received his PhD in 1951 at the Northern Polytechnic, London.
Career and research
His areas of research often included studies on low coordination numbers and metal amido complexes.
Awards and honours
Lappert was elected as a Fellow of the Royal Society (FRS) in 1979.
References
1928 births
2014 deaths
Inorganic chemists
British chemists
Fellows of the Royal Society | Michael F. Lappert | Chemistry | 139 |
6,424,117 | https://en.wikipedia.org/wiki/Photonic%20integrated%20circuit | A photonic integrated circuit (PIC) or integrated optical circuit is a microchip containing two or more photonic components that form a functioning circuit. This technology detects, generates, transports, and processes light. Photonic integrated circuits use photons (or particles of light) as opposed to electrons that are used by electronic integrated circuits. The major difference between the two is that a photonic integrated circuit provides functions for information signals imposed on optical wavelengths typically in the visible spectrum or near-infrared (850–1650 nm).
One of the most commercially utilized material platforms for photonic integrated circuits is indium phosphide (InP), which allows for the integration of various optically active and passive functions on the same chip. Initial examples of photonic integrated circuits were simple 2-section distributed Bragg reflector (DBR) lasers, consisting of two independently controlled device sections—a gain section and a DBR mirror section. Consequently, all modern monolithic tunable lasers, widely tunable lasers, externally modulated lasers and transmitters, integrated receivers, etc. are examples of photonic integrated circuits. As of 2012, devices integrate hundreds of functions onto a single chip. Pioneering work in this arena was performed at Bell Laboratories. The most notable academic centers of excellence of photonic integrated circuits in InP are the University of California at Santa Barbara, USA, the Eindhoven University of Technology, and the University of Twente in the Netherlands.
A 2005 development showed that silicon can, even though it is an indirect bandgap material, still be used to generate laser light via the Raman nonlinearity. Such lasers are not electrically driven but optically driven and therefore still necessitate a further optical pump laser source.
History
Photonics is the science behind the detection, generation, and manipulation of photons. According to quantum mechanics and the concept of wave–particle duality first proposed by Albert Einstein in 1905, light acts as both an electromagnetic wave and a particle. For example, total internal reflection in an optical fibre allows it to act as a waveguide.
Integrated circuits using electrical components were first developed in the late 1940s and early 1950s, but it took until 1958 for them to become commercially available. When the laser and laser diode were invented in the 1960s, the term "photonics" fell into more common usage to describe the application of light to replace applications previously achieved through the use of electronics.
By the 1980s, photonics had gained traction through its role in fibre-optic communication. At the start of the decade, Meint Smit, an assistant in a new research group at Delft University of Technology, began pioneering work in the field of integrated photonics. He is credited with inventing the arrayed waveguide grating (AWG), a core component of modern digital connections for the Internet and phones. Smit has received several awards, including an ERC Advanced Grant, a Rank Prize for Optoelectronics and a LEOS Technical Achievement Award.
In October 2022, during an experiment held at the Technical University of Denmark in Copenhagen, a photonic chip transmitted 1.84 petabits per second of data over a fibre-optic cable more than 7.9 kilometres long. First, the data stream was split into 37 sections, each of which was sent down a separate core of the fibre-optic cable. Next, each of these channels was split into 223 parts corresponding to equidistant spikes of light across the spectrum.
Comparison to electronic integration
Unlike electronic integration where silicon is the dominant material, system photonic integrated circuits have been fabricated from a variety of material systems, including electro-optic crystals such as lithium niobate, silica on silicon, silicon on insulator, various polymers, and semiconductor materials which are used to make semiconductor lasers such as GaAs and InP. The different material systems are used because they each provide different advantages and limitations depending on the function to be integrated. For instance, silica (silicon dioxide) based PICs have very desirable properties for passive photonic circuits such as AWGs (see below) due to their comparatively low losses and low thermal sensitivity, GaAs or InP based PICs allow the direct integration of light sources and Silicon PICs enable co-integration of the photonics with transistor based electronics.
The fabrication techniques are similar to those used in electronic integrated circuits in which photolithography is used to pattern wafers for etching and material deposition. Unlike electronics where the primary device is the transistor, there is no single dominant device. The range of devices required on a chip includes low loss interconnect waveguides, power splitters, optical amplifiers, optical modulators, filters, lasers and detectors. These devices require a variety of different materials and fabrication techniques making it difficult to realize all of them on a single chip.
Newer techniques using resonant photonic interferometry are making way for UV LEDs to be used for optical computing at much lower cost, potentially leading the way to petahertz consumer electronics.
Examples of photonic integrated circuits
The primary application for photonic integrated circuits is in the area of fiber-optic communication though applications in other fields such as biomedical and photonic computing are also possible.
The arrayed waveguide gratings (AWGs) which are commonly used as optical (de)multiplexers in wavelength division multiplexed (WDM) fiber-optic communication systems are an example of a photonic integrated circuit which has replaced previous multiplexing schemes which utilized multiple discrete filter elements. Since separating optical modes is a need for quantum computing, this technology may be helpful to miniaturize quantum computers (see linear optical quantum computing).
Another example of a photonic integrated chip in wide use today in fiber-optic communication systems is the externally modulated laser (EML) which combines a distributed feed back laser diode with an electro-absorption modulator on a single InP based chip.
Applications
As global data consumption rises and demand for faster networks continues to grow, the world needs to find more sustainable solutions to the energy crisis and climate change. At the same time, ever more innovative applications for sensor technology, such as Lidar in autonomous driving vehicles, appear on the market. There is a need to keep pace with technological challenges.
The expansion of 5G data networks and data centres, safer autonomous driving vehicles, and more efficient food production cannot be sustainably met by electronic microchip technology alone. However, combining electrical devices with integrated photonics provides a more energy efficient way to increase the speed and capacity of data networks, reduce costs and meet an increasingly diverse range of needs across various industries.
Data and telecommunications
The primary application for PICs is in the area of fibre-optic communication. The arrayed waveguide grating (AWG) which are commonly used as optical (de)multiplexers in wavelength division multiplexed (WDM) fibre-optic communication systems are an example of a photonic integrated circuit. Another example in fibre-optic communication systems is the externally modulated laser (EML) which combines a distributed feedback laser diode with an electro-absorption modulator.
PICs can also increase bandwidth and data transfer speeds by deploying few-mode optical planar waveguides, especially if modes can be easily converted from conventional single-mode planar waveguides into few-mode waveguides and the desired modes can be selectively excited. For example, a bidirectional spatial mode slicer and combiner can be used to achieve the desired higher- or lower-order modes. Its principle of operation depends on cascading stages of V-shape and/or M-shape graded-index planar waveguides.
Not only can PICs increase bandwidth and data transfer speeds, they can reduce energy consumption in data centres, which spend a large proportion of energy on cooling servers.
Healthcare and medicine
Using advanced biosensors and creating more affordable diagnostic biomedical instruments, integrated photonics opens the door to lab-on-a-chip (LOC) technology, cutting waiting times, and taking diagnosis out of laboratories and into the hands of doctors and patients. Based on an ultrasensitive photonic biosensor, SurfiX Diagnostics' diagnostics platform provides a variety of point-of-care tests. Similarly, Amazec Photonics has developed a fibre optic sensing technology with photonic chips which enables high-resolution temperature sensing (fractions of 0.1 milliKelvin) without having to inject the temperature sensor within the body. This way, medical specialists are able to measure both cardiac output and circulating blood volume from outside the body. Another example of optical sensor technology is EFI's "OptiGrip" device, which offers greater control over tissue feeling for minimal invasive surgery.
Automotive and engineering applications
PICs can be applied in sensor systems, such as lidar (light detection and ranging), to monitor the surroundings of vehicles. They can also be deployed for in-car connectivity through Li-Fi, which is similar to Wi-Fi but uses light. This technology facilitates communication between vehicles and urban infrastructure to improve driver safety. For example, some modern vehicles pick up traffic signs and remind the driver of the speed limit.
In terms of engineering, fibre optic sensors can be used to detect different quantities, such as pressure, temperature, vibrations, accelerations, and mechanical strain. Sensing technology from PhotonFirst uses integrated photonics to measure things like shape changes in aeroplanes, electric vehicle battery temperature, and infrastructure strain.
Agriculture and food
Sensors play a role in innovations in agriculture and the food industry in order to reduce wastage and detect diseases. Light sensing technology powered by PICs can measure variables beyond the range of the human eye, allowing the food supply chain to detect disease, ripeness and nutrients in fruit and plants. It can also help food producers to determine soil quality and plant growth, as well as measuring emissions. A new, miniaturised, near-infrared sensor, developed by MantiSpectra, is small enough to fit into a smartphone, and can be used to analyse chemical compounds of products like milk and plastics.
Types of fabrication and materials
The fabrication techniques are similar to those used in electronic integrated circuits, in which photolithography is used to pattern wafers for etching and material deposition.
The platforms considered most versatile are indium phosphide (InP) and silicon photonics (SiPh):
Indium phosphide (InP) PICs have active laser generation, amplification, control, and detection. This makes them an ideal component for communication and sensing applications.
Silicon nitride (SiN) PICs have a vast spectral range and ultra low-loss waveguide. This makes them highly suited to detectors, spectrometers, biosensors, and quantum computers. The lowest propagation losses reported in SiN (0.1 dB/cm down to 0.1 dB/m) have been achieved by LioniX International's TriPleX waveguides.
Silicon photonics (SiPh) PICs provide low losses for passive components like waveguides and can be used in minuscule photonic circuits. They are compatible with existing electronic fabrication.
The term "silicon photonics" actually refers to the technology rather than the material. It combines high density photonic integrated circuits (PICs) with complementary metal oxide semiconductor (CMOS) electronics fabrication. The most technologically mature and commercially used platform is silicon on insulator (SOI).
Other platforms include:
Lithium niobate (LiNbO3) is an ideal modulator for low loss mode. It is highly effective at matching fibre input–output due to its low index and broad transparency window. For more complex PICs, lithium niobate can be formed into large crystals. As part of project ELENA, there is a European initiative to stimulate production of LiNbO3-PICs. Attempts are also being made to develop lithium niobate on insulator (LNOI).
Silica has a low weight and small form factor. It is a common component of optical communication networks, such as planar light wave circuits (PLCs).
Gallium arsenide (GaAs) has high electron mobility. This means GaAs transistors operate at high speeds, making them ideal analogue integrated circuit drivers for high-speed lasers and modulators.
By combining and configuring different chip types (including existing electronic chips) in a hybrid or heterogeneous integration, it is possible to leverage the strengths of each. Taking this complementary approach to integration addresses the demand for increasingly sophisticated energy-efficient solutions.
Current status
As of 2010, photonic integration was an active topic in U.S. defense contracts, and the Optical Internetworking Forum included it in 100 gigahertz optical networking standards.
A recent study presents a novel two-dimensional photonic crystal design for electro-reflective modulators, offering reduced size and enhanced efficiency compared to traditional bulky structures. This design achieves high optical transmission ratios with precise angle control, addressing critical challenges in miniaturizing optoelectronic devices for improved performance in PICs. In this structure, both lateral and vertical fabrication technologies are combined, introducing a novel approach that merges two-dimensional designs with three-dimensional structures. This hybrid technique offers new possibilities for enhancing the functionality and integration of photonic components within photonic integrated circuits.
See also
Integrated quantum photonics
Optical computing
Optical transistor
Silicon photonics
Notes
References
Photonics
Optical components
Silicon photonics
Integrated circuits | Photonic integrated circuit | Materials_science,Technology,Engineering | 2,761 |
4,247,871 | https://en.wikipedia.org/wiki/Astros%20II | Astros II (Artillery SaTuration ROcket System) is a self-propelled multiple rocket launcher produced in Brazil by the Avibras company. It features modular design and employs rockets with calibers ranging from 127 to 450 mm (5–17.72 inches). It was developed on the basis of a Tectran VBT-2028 6×6 all-terrain vehicle for enhanced mobility based on Mercedes-Benz 2028 truck chassis while later versions use Tatra 815-7 chassis.
Overview
A full Astros system includes 1 wheeled 4×4 Battalion level Command Vehicle (AV-VCC), which commands 3 batteries, and a series of 4×4 and 6×6 wheeled vehicles. Each battery consists of:
1 wheeled 4×4 Battery-level Command vehicle (AV-PCC)
1 wheeled 6×6 Radar Fire Control vehicle (AV-UCF)
6 wheeled 6×6 Universal Multiple Rocket Launchers vehicle (AV-LMU)
3 wheeled 6×6 Ammunition Resupply vehicles (AV-RMD)
1 wheeled 6×6 Field repair/workshop vehicle (AV-OFVE)
1 wheeled 4×4 Mobile Weather Station vehicle (AV-MET).
In the older version of the system, the fire control vehicle was listed as an optional vehicle in a battery. The command vehicles and weather stations are recent additions, designed to improve overall system performance in newer versions. All vehicles are transportable in a C-130 Hercules. The launcher is capable of firing rockets of different calibers armed with a range of warheads.
Each rocket resupply truck carries up to two complete reloads.
Service history
The Astros II artillery system entered service with the Brazilian Army in the early 1990s. The system is battle proven, having been used in action by the Iraqi Army in the Gulf Wars.
In the 1980s, Avibrás sold an estimated 66 Astros II artillery systems to Iraq. Iraq also built the Sajeel-60 which is a license-built version of the Brazilian SS-60. Sixty Astros II were sold to Saudi Arabia and an unspecified number sold to Bahrain and Qatar. Total sales of the Astros II between 1982 and 1987 reached US$1 billion. This fact made the Astros II multiple rocket launcher the most profitable weapon produced by Avibrás.
In the 1980s and early 1990s, Avibrás manufactured almost exclusively rockets and multiple-launch rocket systems (MLRS), such as the Astros II, in addition to developing antitank and antiship missiles. At its peak, Avibrás employed 6,000 people; later it would be reduced to 900 people in the early 1990s as the arms industry demand fell. Even so, in the first Gulf War in 1991, the Astros II was successfully used by Saudi Arabia against Iraq. Years earlier, the Astros II system had helped Angola to defeat the UNITA.
New generation
The next step is an ambitious program, the Astros 2020 (Mk6), based on a 6×6 wheeled chassis. Being a new concept, it will require an estimated investment of R$1.2 billion, of which about US$210 million will be invested solely in development. It will be integrated with the AVMT-300 cruise missile, which has a 300 km range, during the testing and certification stage. The venture is said to enable the Army, for example, to integrate the Astros with anti-aircraft defence guns, paving the way for the use of common platforms, trucks, electronic sensor components and command vehicles. The new Mk6 system will use Tatra Trucks' T815-790R39 6×6 and T815-7A0R59 4×4 trucks instead of the original Mercedes-Benz 2028A 6×6 truck. ASTROS 2020 offers several basic improvements, including an improved armored cabin, modern digital communications and navigation systems, and a new tracking radar that replaces the AV-UCF's Contraves Fieldguard system; this radar was later revealed to be the Fieldguard 3 Military Measurement System from Rheinmetall Air Defence. The Astros 2020 will also be equipped with a 180 mm GPS-guided rocket called the SS-AV-40G, with a range of , and the newly developed SS-150 rocket with a claimed maximum range of 150 km, four of which are carried per launcher. 36 Astros 2020 systems are to be acquired.
Rocket variants
SS-09TS – fires 70 mm rockets – Loads 40
SS-30 – fires 127 mm rockets – Loads 32
SS-40 – fires 180 mm rockets – Loads 16
SS-40G – fires 180 mm rockets – Loads 16 (GPS Guided)
SS-60 – fires 300 mm rockets – Loads 4
SS-80 – fires 300 mm rockets – Loads 4
SS-80G – fires 300 mm rockets – Loads 4 (GPS Guided)
SS-150 – fires 450 mm rockets – Loads 4 (GPS Guided)
MANSUP – fires 330 mm anti-ship missile – Loads 1–4
AV-TM 300 – fires 450 mm cruise missile – Loads 2
FOG MPM – fiber optics guided multi-purpose missile – anti-tank, anti-fortification and anti-helicopter missile
FOG MLM – fiber optics guided multi-purpose missile
Specifications
Range in indirect fire mode (first figure is minimum range):
SS-09TS: 4–10 km
SS-30: 9–30 km
SS-40: 15–40 km
SS-40G: 15–40 km
SS-60: 20–60 km
SS-80: 22–90 km
SS-80G: 22–90 km
SS-150: 29–150 km
MANSUP: 70–200 km
AV-TM 300: 30–300 km
FOG MPM: 5–60 km
Armour: classified. Probably light composite to give protection against small-arms fire.
Armament: one battery of 2, 4, 16 or 32 rocket-launcher tubes
Performance:
fording 1.1 m
vertical obstacle 1 m
trench 2.29 m
Ammunition Type: High explosive (HE) with multiple warhead
Operators
Brazilian Army: 20 Astros II Mk3M, 18 Astros II Mk6.
Brazilian Marine Corps: 6 Astros II Mk6.
Indonesian Army: 63 Astros II Mk6 (first batch of 36 ordered in 2012 and second batch of 27 delivered in 2020).
Iraqi Army: 66 Astros II (also built under license as the Sajil-60), equipped only with the shorter-range SS-40 and SS-60 rockets.
Malaysian Army: 36 units of Astros II.
Saudi Arabia: 76 Astros II.
Potential operators
Spain is currently evaluating the K239 Chunmoo, Astros II and PULS systems, but the decision regarding a potential order of one of these systems has not been made.
Ukraine: On December 4, 2022, the Brazilian media reported Ukrainian interest in the ASTROS system to equip its army in the Russo-Ukrainian War effort. The sale was blocked by the Bolsonaro administration. A diplomatic effort by the United States to persuade the president-elect of Brazil, Luiz Inácio Lula da Silva, to unblock the deal was reported on December 5, 2022.
See also
HIMARS
BM-21
RM-70
T-122 Sakarya
9A52-4 Tornado
Fajr-5
TOROS
Falaq-2
Pinaka multi-barrel rocket launcher
References
External links
Astros II Artillery Saturation Rocket System, Brazil
FAS Military Analysis Network
Wheeled self-propelled rocket launchers
Multiple rocket launchers of Brazil
Modular rocket launchers
Military vehicles introduced in the 1980s | Astros II | Engineering | 1,528 |
611,714 | https://en.wikipedia.org/wiki/Web%20development | Web development is the work involved in developing a website for the Internet (World Wide Web) or an intranet (a private network). Web development can range from developing a simple single static page of plain text to complex web applications, electronic businesses, and social network services. A more comprehensive list of tasks to which Web development commonly refers, may include Web engineering, Web design, Web content development, client liaison, client-side/server-side scripting, Web server and network security configuration, and e-commerce development.
Among Web professionals, "Web development" usually refers to the main non-design aspects of building Web sites: writing markup and coding. Web development may use content management systems (CMS) to make content changes easier and available with basic technical skills.
For larger organizations and businesses, Web development teams can consist of hundreds of people (Web developers) and follow standard methods like Agile methodologies while developing Web sites. Smaller organizations may only require a single permanent or contracting developer, or secondary assignment to related job positions such as a graphic designer or information systems technician. Web development may be a collaborative effort between departments rather than the domain of a designated department. There are three kinds of Web developer specialization: front-end developer, back-end developer, and full-stack developer. Front-end developers are responsible for behavior and visuals that run in the user browser, while back-end developers deal with the servers. Since the commercialization of the Web, the industry has boomed, and Web technologies have become among the most widely used in the world.
Evolution of the World Wide Web and web development
Origin/ Web 1.0
Tim Berners-Lee created the World Wide Web in 1989 at CERN.
The primary goal in the development of the Web was to fulfill the automated information-sharing needs of academics affiliated with institutions and various global organizations. Consequently, HTML was developed in 1993.
Web 1.0 is described as the first paradigm wherein users could only view material and provide a small amount of information. Core protocols of web 1.0 were HTTP, HTML and URI.
Web 2.0
Web 2.0, a term popularised by Dale Dougherty, then vice president of O'Reilly, during a 2004 conference with Media Live, marks a shift in internet usage, emphasizing interactivity.
Web 2.0 introduced increased user engagement and communication. It evolved from the static, read-only nature of Web 1.0 and became an integrated network for engagement and communication. It is often referred to as a user-focused, read-write online network.
In the realm of Web 2.0 environments, users now have access to a platform that encourages sharing activities such as creating music, files, images, and movies. The architecture of Web 2.0 is often considered the "backbone of the internet," using standardized XML (Extensible Markup Language) tags to authorize information flow from independent platforms and online databases.
Web 3.0
Web 3.0, considered the third and current version of the web, was introduced in 2014. The concept envisions a complete redesign of the web. Key features include the integration of metadata, precise information delivery, and improved user experiences based on preferences, history, and interests.
Web 3.0 aims to turn the web into a sizable, organized database, providing more functionality than traditional search engines. Users can customize navigation based on their preferences, and the core ideas involve identifying data sources, connecting them for efficiency, and creating user profiles.
This version is sometimes also known as Semantic Web.
Evolution of web development technologies
The journey of web development technologies began with simple HTML pages in the early days of the internet. Over time, advancements led to the incorporation of CSS for styling and JavaScript for interactivity. This evolution transformed static websites into dynamic and responsive platforms, setting the stage for the complex and feature-rich web applications we have today.
Static HTML Pages (1990s)
Introduction of CSS (late 1990s)
JavaScript and Dynamic HTML (1990s - early 2000s)
AJAX (1998)
Rise of Content management systems (CMS) (mid-2000s)
Mobile web (late 2000s - 2010s)
Single-page applications (SPAs) and front-end frameworks (2010s)
Server-side JavaScript (2010s)
Microservices and API-driven development (2010s - present)
Progressive web apps (PWAs) (2010s - present)
JAMstack Architecture (2010s - present)
WebAssembly (Wasm) (2010s - present)
Serverless computing (2010s - present)
AI and machine learning integration (2010s - present)
Web development in the future will be driven by advances in browser technology, Internet infrastructure, protocol standards, software engineering methods, and application trends.
Web development life cycle
The web development life cycle is a method that outlines the stages involved in building websites and web applications. It provides a structured approach, ensuring optimal results throughout the development process.
A typical Web Development process can be divided into 7 steps.
Analysis
Debra Howcraft and John Carroll proposed a methodology in which the web development process is divided into sequential steps. They identified several aspects of analysis.
Phase one involves crafting a web strategy and analyzing how a website can effectively achieve its goals. Keil et al.'s research identifies the primary reasons for software project failures as a lack of top management commitment and misunderstandings of system requirements. To mitigate these risks, Phase One establishes strategic goals and objectives, designing a system to fulfill them. The decision to establish a web presence should ideally align with the organization's corporate information strategy.
The analysis phase can be divided into 3 steps:
Development of a web strategy
Defining objectives
Objective analysis
During this phase, the previously outlined objectives and available resources undergo analysis to determine their feasibility. This analysis is divided into six tasks, as follows:
Technology analysis: Identification of all necessary technological components and tools for constructing, hosting, and supporting the site.
Information analysis: Identification of user-required information, whether static (web page) or dynamic (pulled "live" from a database server).
Skills analysis: Identification of the diverse skill sets necessary to complete the project.
User analysis: Identification of all intended users of the site, a more intricate process due to the varied range of users and technologies they may use.
Cost analysis: Estimation of the development cost for the site or an evaluation of what is achievable within a predefined budget.
Risk analysis: Examination of any major risks associated with site development.
Following this analysis, a more refined set of objectives is documented. Objectives that cannot be presently fulfilled are recorded in a Wish List, constituting part of the Objectives Document. This documentation becomes integral to the iterative process during the subsequent cycle of the methodology.
Planning: sitemap and wireframe
It is crucial for web developers to be engaged in formulating a plan and determining the optimal architecture and selecting the frameworks. Additionally, developers/consultants play a role in elucidating the total cost of ownership associated with supporting a website, which may surpass the initial development expenses.
Key aspects in this step are:
Sitemap creation
Wireframe creation
Tech stack
Design and layout
Following the analysis phase, the development process moves on to the design phase, which is guided by the objectives document. Recognizing the incremental growth of websites and the potential lack of good design architecture, the methodology includes iteration to account for changes and additions over the life of the site. The design phase, which is divided into Information Design and Graphic Design, results in a Design Document that details the structure of the website, database data structures, and CGI scripts.
The following step, design testing, focuses on early, low-cost testing to identify inconsistencies or flaws in the design. This entails comparing the website's design to the goals and objectives outlined in the first three steps. Phases One and Two involve an iterative loop in which objectives in the Objectives Document are revisited to ensure alignment with the design. Any objectives that are removed are added to the Wish List for future consideration.
Key aspects in this step are:
Page layouts
Review
Approval
Content creation
No matter how visually appealing a website is, good communication with clients is critical. The primary purpose of content production is to create a communication channel through the user interface by delivering relevant information about the organization in an engaging and easily understandable manner. This includes:
Developing appealing calls to action
Making creative headlines
Content formatting for readability
Carrying out line editing
Text updating throughout the site development process.
The content production stage is critical in establishing the branding and marketing of a website or web application. It serves as a platform for defining the purpose and goals of the online presence through compelling and convincing content.
Development
During this critical stage, the website is built while keeping its fundamental goal in mind, paying close attention to all graphic components to assure the establishment of a completely working site.
The procedure begins with the development of the main page, followed by the production of interior pages. In particular, the site's navigational structure is refined.
During this development phase, key functionality such as the Content Management System, interactive contact forms, and shopping carts are activated.
The coding process includes creating all of the site's software and installing it on the appropriate Web servers. This can range from simple things like posting to a Web server to more complex tasks like establishing database connections.
Testing, review and launch
In any web project, the testing phase is incredibly intricate and difficult. Because web apps are frequently designed for a diverse and often unknown user base running in a range of technological environments, their complexity exceeds that of traditional Information Systems (IS). To ensure maximum reach and efficacy, the website must be tested in a variety of contexts and technologies. The website moves to the delivery stage after gaining final approval from the designer. To ensure its preparation for launch, the quality assurance team performs rigorous testing for functionality, compatibility, and performance.
Additional testing is carried out, including integration, stress, scalability, load, resolution, and cross-browser compatibility. When the approval is given, the website is pushed to the server via FTP, completing the development process.
Key aspects in this step are:
Test for broken links
Use code validators
Check browser compatibility
Maintenance and updating
The web development process goes beyond deployment to include a variety of post-deployment tasks.
Websites, for example, are frequently under ongoing maintenance, with new items being uploaded on a daily basis. Maintenance costs increase immensely as the site grows in size. The accuracy of content on a website is critical, demanding continuous monitoring to verify that both information and links, particularly external links, are updated. Adjustments are made in response to user feedback, and regular support and maintenance actions are carried out to maintain the website's long-term effectiveness.
Traditional development methodologies
Debra Howcraft and John Carroll discussed a few traditional web development methodologies in their research paper:
Waterfall: The waterfall methodology comprises a sequence of cascading steps, addressing the development process with minimal iteration between each stage. However, a significant drawback when applying the waterfall methodology to the development of websites (as well as information systems) lies in its rigid structure, lacking iteration beyond adjacent stages. Any methodology used for the development of Web-sites must be flexible enough to cope with change.
Structured Systems Analysis and Design Method (SSADM): Structured Systems Analysis and Design Method (SSADM) is a widely used methodology for systems analysis and design in information systems and software engineering. Although it does not cover the entire lifecycle of a development project, it places a strong emphasis on the stages of analysis and design in the hopes of minimizing later-stage, expensive errors and omissions.
Prototyping: Prototyping is a software development approach in which a preliminary version of a system or application is built to visualize and test its key functionalities. The prototype serves as a tangible representation of the final product, allowing stakeholders, including users and developers, to interact with it and provide feedback.
Rapid Application Development: Rapid Application Development (RAD) is a software development methodology that prioritizes speed and flexibility in the development process. It is designed to produce high-quality systems quickly, primarily through the use of iterative prototyping and the involvement of end-users. RAD aims to reduce the time it takes to develop a system and increase the adaptability to changing requirements.
Incremental Prototyping: Incremental prototyping is a software development approach that combines the principles of prototyping and incremental development. In this methodology, the development process is divided into small increments, with each increment building upon the functionality of the previous one. At the same time, prototypes are created and refined in each increment to better meet user requirements and expectations.
Key technologies in web development
Developing a fundamental knowledge of client-side and server-side dynamics is crucial.
The goal of front-end development is to create a website's user interface and visual components that users may interact with directly. On the other hand, back-end development works with databases, server-side logic, and application functionality. Building reliable and user-friendly online applications requires a comprehensive approach, which is ensured by collaboration between front-end and back-end engineers.
Front-end development
Front-end development is the process of designing and implementing the user interface (UI) and user experience (UX) of a web application. It involves creating visually appealing and interactive elements that users interact with directly. The primary technologies and concepts associated with front-end development include:
Technologies
The 3 core technologies for front-end development are:
HTML (Hypertext Markup Language): HTML provides the structure and organization of content on a webpage.
CSS (Cascading Style Sheet): Responsible for styling and layout, CSS enhances the presentation of HTML elements, making the application visually appealing.
JavaScript: It is used to add interactivity to web pages. Advances in JavaScript have given rise to many popular front-end frameworks such as React, Angular and Vue.js (a minimal sketch combining these three core technologies is shown after this list).
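As a minimal, hedged sketch of how these three core technologies divide responsibilities, the TypeScript snippet below (TypeScript compiles to plain JavaScript) attaches behavior to HTML elements and triggers a CSS class change; the element IDs, class name and message text are assumptions made for illustration, not part of any particular site.

```typescript
// Minimal sketch: script adding behavior to HTML content and toggling CSS styling.
// Assumes the page contains <button id="greet-button"> and <p id="greeting">,
// and that a ".highlighted" rule exists in the stylesheet.
const button = document.querySelector<HTMLButtonElement>("#greet-button");
const output = document.querySelector<HTMLParagraphElement>("#greeting");

button?.addEventListener("click", () => {
  if (output) {
    output.textContent = "Hello from the front end!"; // change the HTML content
    output.classList.add("highlighted");              // change the look via a CSS class
  }
});
```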
User interface design
User experience design focuses on creating interfaces that are intuitive, accessible, and enjoyable for users. It involves understanding user behavior, conducting usability studies, and implementing design principles to enhance the overall satisfaction of users interacting with a website or application. This involves wireframing, prototyping, and implementing design principles to enhance user interaction. Some of the popular tools used for UI Wireframing are -
Sketch for detailed, vector-based design
Moqups for beginners
Figma for a free wireframe app
UXPin for handing off design documentation to developers
MockFlow for project organization
Justinmind for interactive wireframes
Uizard for AI-assisted wireframing
Another key aspect to keep in mind while designing is Web Accessibility- Web accessibility ensures that digital content is available and usable for people of all abilities. This involves adhering to standards like the Web Content Accessibility Guidelines (WCAG), implementing features like alternative text for images, and designing with considerations for diverse user needs, including those with disabilities.
Responsive design
It is important to ensure that web applications are accessible and visually appealing across various devices and screen sizes. Responsive design uses CSS media queries and flexible layouts to adapt to different viewing environments.
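Responsive rules are usually written as CSS media queries; the same breakpoint check is also available to scripts through the standard window.matchMedia API, as in the hedged sketch below. The 600 px breakpoint and the "compact-layout" class are illustrative assumptions.

```typescript
// Minimal sketch: reacting to a viewport breakpoint from script.
const mobileQuery = window.matchMedia("(max-width: 600px)");

function applyLayout(isNarrow: boolean): void {
  // Toggle a class that the stylesheet can use to switch layouts.
  document.body.classList.toggle("compact-layout", isNarrow);
}

applyLayout(mobileQuery.matches);                                              // on load
mobileQuery.addEventListener("change", (event) => applyLayout(event.matches)); // on resize
```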
Front-end frameworks
A framework is a high-level solution for the reuse of software pieces, a step beyond simple library-based reuse that allows for sharing common functions and the generic logic of a domain application.
Frameworks and libraries are essential tools that expedite the development process. These tools enhance developer productivity and contribute to the maintainability of large-scale applications. Some popular front-end frameworks are:
React: A JavaScript library for building user interfaces, maintained by Facebook. It allows developers to create reusable UI components.
Angular: A TypeScript-based front-end framework developed and maintained by Google. It provides a comprehensive solution for building dynamic single-page applications.
Vue.js: A progressive JavaScript framework that is approachable yet powerful, making it easy to integrate with other libraries or existing projects.
State management
Managing the state of a web application to ensure data consistency and responsiveness. State management libraries like Redux (for React) or Vuex (for Vue.js) play a crucial role in complex applications.
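The sketch below illustrates the reducer pattern that libraries such as Redux build on, without importing any library; the state shape and action names are assumptions made for the example.

```typescript
// Minimal sketch of Redux-style state management: a pure reducer and a tiny dispatch loop.
type CounterState = { count: number };
type Action = { type: "increment" } | { type: "decrement" };

function reducer(state: CounterState, action: Action): CounterState {
  switch (action.type) {
    case "increment":
      return { count: state.count + 1 };
    case "decrement":
      return { count: state.count - 1 };
    default:
      return state; // unreachable for this Action union, kept for safety
  }
}

let state: CounterState = { count: 0 };
function dispatch(action: Action): void {
  state = reducer(state, action); // the state object is replaced, never mutated in place
}

dispatch({ type: "increment" });
console.log(state.count); // 1
```

Keeping all updates inside one pure function makes state changes predictable and easy to test, which is the main argument for this style in complex applications.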
Back-end development
Back-end development involves building the server-side logic and database components of a web application. It is responsible for processing user requests, managing data, and ensuring the overall functionality of the application. Key aspects of back-end development include:
Server/ cloud instance
An essential component of the architecture of a web application is a server or cloud instance. A cloud instance is a virtual server instance that can be accessed via the Internet and is created, delivered, and hosted on a public or private cloud. It functions like a physical server, but it can move seamlessly between devices, and several instances can be set up on a single physical server. It is therefore very dynamic, scalable, and economical.
Databases
Database management is crucial for storing, retrieving, and managing data in web applications. Various database systems, such as MySQL, PostgreSQL, and MongoDB, play distinct roles in organizing and structuring data. Effective database management ensures the responsiveness and efficiency of data-driven web applications. Common types of databases include:
Relational databases: Structured databases that use tables to organize and relate data. Common Examples include - MySQL, PostgreSQL and many more.
NoSQL databases: NoSQL databases are designed to handle unstructured or semi-structured data and can be more flexible than relational databases. They come in various types, such as document-oriented, key-value stores, column-family stores, and graph databases. Examples: MongoDB, Cassandra, ScyllaDB, CouchDB, Redis.
Document stores: Document stores store data in a semi-structured format, typically using JSON or XML documents. Each document can have a different structure, providing flexibility. Examples: MongoDB, CouchDB.
Key-value stores: Key-value stores store data as pairs of keys and values. They are simple and efficient for certain types of operations, like caching (a minimal sketch of this data model is shown at the end of this section). Examples: Redis, DynamoDB.
Column-family stores: Column-family stores organize data into columns instead of rows, making them suitable for large-scale distributed systems and analytical workloads. Examples: Apache Cassandra, HBase.
Graph databases: Graph databases are designed to represent and query data in the form of graphs. They are effective for handling relationships and network-type data. Examples: Neo4j, Amazon Neptune.
In-memory databases: In-memory databases store data in the system's main memory (RAM) rather than on disk. This allows for faster data access and retrieval. Examples: Redis, Memcached.
Time-series databases: Time-series databases are optimized for handling time-stamped data, making them suitable for applications that involve tracking changes over time. Examples: InfluxDB, OpenTSDB.
NewSQL databases: NewSQL databases aim to provide the scalability of NoSQL databases while maintaining the ACID properties (Atomicity, Consistency, Isolation, Durability) of traditional relational databases. Examples: Google Spanner, CockroachDB.
Object-oriented databases: Object-oriented databases store data in the form of objects, which can include both data and methods. They are designed to work seamlessly with object-oriented programming languages. Examples: db4o, ObjectDB.
The choice of a database depends on various factors such as the nature of the data, scalability requirements, performance considerations, and the specific use case of the application being developed. Each type of database has its strengths and weaknesses, and selecting the right one involves considering the specific needs of the project.
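To make the key-value model concrete, the hedged sketch below implements an in-memory key-value cache; it illustrates the data model only and is not tied to any particular database engine, and the session key and value shape are assumptions.

```typescript
// Minimal sketch of the key-value data model using an in-memory Map.
// Real key-value stores such as Redis add persistence, expiry, and networked access.
class KeyValueCache<V> {
  private store = new Map<string, V>();

  set(key: string, value: V): void {
    this.store.set(key, value);
  }

  get(key: string): V | undefined {
    return this.store.get(key);
  }

  delete(key: string): boolean {
    return this.store.delete(key);
  }
}

const sessions = new KeyValueCache<{ userId: number }>();
sessions.set("session:abc123", { userId: 42 });
console.log(sessions.get("session:abc123")); // { userId: 42 }
```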
Application programming interface (APIs)
Application Programming Interfaces are sets of rules and protocols that allow different software applications to communicate with each other. APIs define the methods and data formats that applications can use to request and exchange information.
RESTful APIs and GraphQL are common approaches for defining and interacting with web services.
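As a hedged sketch of a client consuming a RESTful API, the snippet below issues a GET request with the standard fetch API; the endpoint URL and the User shape are assumptions made for illustration.

```typescript
// Minimal sketch: consuming a RESTful API over HTTP with the standard fetch API.
interface User {
  id: number;
  name: string;
}

async function getUser(id: number): Promise<User> {
  const response = await fetch(`https://api.example.com/users/${id}`, {
    headers: { Accept: "application/json" }, // ask the service for JSON
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return (await response.json()) as User;
}

getUser(1).then((user) => console.log(user.name)).catch(console.error);
```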
Types of APIs
Web APIs: These are APIs that are accessible over the internet using standard web protocols such as HTTP. RESTful APIs are a common type of web API.
Library APIs: These APIs provide pre-built functions and procedures that developers can use within their code.
Operating System APIs: These APIs allow applications to interact with the underlying operating system, accessing features like file systems, hardware, and system services.
Server-side languages
Programming languages aimed at server execution, as opposed to client browser execution, are known as server-side languages. These programming languages are used in web development to perform operations including data processing, database interaction, and the creation of dynamic content that is delivered to the client's browser. A key element of server-side programming is server-side scripting, which allows the server to react to client requests in real time.
Some popular server-side languages are:
PHP: PHP is a widely used, open-source server-side scripting language. It is embedded in HTML code and is particularly well-suited for web development.
Python: Python is a versatile, high-level programming language used for a variety of purposes, including server-side web development. Frameworks like Django and Flask make it easy to build web applications in Python; a minimal Flask sketch follows this list.
Ruby: Ruby is an object-oriented programming language, and it is commonly used for web development. Ruby on Rails is a popular web framework that simplifies the process of building web applications.
Java: Java is a general-purpose, object-oriented programming language. Java-based frameworks like Spring are commonly used for building enterprise-level web applications.
Node.js (JavaScript): While JavaScript is traditionally a client-side language, Node.js enables developers to run JavaScript on the server side. It is known for its event-driven, non-blocking I/O model, making it suitable for building scalable and high-performance applications.
C# (C Sharp): C# is a programming language developed by Microsoft and is commonly used in conjunction with the .NET framework for building web applications on the Microsoft stack.
ASP.NET: ASP.NET is a web framework developed by Microsoft, and it supports languages like C# and VB.NET. It simplifies the process of building dynamic web applications.
Go (Golang): Go is a statically typed language developed by Google. It is known for its simplicity and efficiency and is increasingly being used for building scalable and high-performance web applications.
Perl: Perl is a versatile scripting language often used for web development. It is known for its powerful text-processing capabilities.
Swift: Developed by Apple, Swift is used for server-side development in addition to iOS and macOS app development.
Lua: Lua is used by some embedded web servers, e.g. for the configuration pages of routers running firmware such as OpenWrt.
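As a minimal illustration of server-side scripting in one of the languages above, the following sketch defines a single route with Python and the third-party Flask framework that returns dynamically generated JSON; the route, port, and response content are illustrative only.

```python
# A minimal server-side sketch using Python with Flask (a third-party framework
# mentioned above); the route and port are illustrative.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/hello/<name>")
def hello(name):
    # Dynamic content is generated on the server and sent to the client's browser.
    return jsonify({"greeting": f"Hello, {name}!"})

if __name__ == "__main__":
    app.run(port=5000)
```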
Security measures
Web applications must implement security measures to protect against common vulnerabilities, including SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF). Authentication and authorization mechanisms are crucial for securing data and user access.
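As an illustration of one such measure, the sketch below contrasts string concatenation, which is vulnerable to SQL injection, with a parameterized query using Python's standard-library sqlite3 module; the table, column names, and payload are hypothetical.

```python
# Sketch of handling untrusted input with and without parameterized queries,
# using the standard-library sqlite3 module; table and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a typical injection payload

# Unsafe: string concatenation lets the payload alter the query's logic.
unsafe_query = f"SELECT * FROM users WHERE name = '{user_input}'"

# Safe: the driver passes the value as data, not as SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())  # returns no rows, since no user has that literal name
```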
Testing, debugging and deployment
Thorough testing and debugging processes are essential for identifying and resolving issues in a web application. Testing may include unit testing, integration testing, and user acceptance testing. Debugging involves pinpointing and fixing errors in the code, ensuring the reliability and stability of the application.
Unit Testing: Testing individual components or functions to verify that they work as expected.
Integration Testing: Testing the interactions between different components or modules to ensure they function correctly together.
Continuous Integration and Deployment (CI/CD): CI/CD pipelines automate testing, deployment, and delivery processes, allowing for faster and more reliable releases.
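As a small illustration of unit testing, the sketch below checks a hypothetical helper function with Python's standard-library unittest module; in a CI/CD pipeline such tests would typically run automatically on every commit.

```python
# Minimal unit-test sketch using the standard-library unittest module;
# the function under test is a hypothetical helper.
import unittest

def slugify(title: str) -> str:
    """Convert a page title to a URL-friendly slug."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_already_lowercase(self):
        self.assertEqual(slugify("news"), "news")

if __name__ == "__main__":
    unittest.main()
```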
Full-stack development
Full-stack development refers to the practice of designing, building, and maintaining the entire software stack of a web application. This includes both the frontend (client-side) and backend (server-side) components, as well as the database and any other necessary infrastructure. A full-stack developer is someone who has expertise in working with both the frontend and backend technologies, allowing them to handle all aspects of web application development.
MEAN (MongoDB, Express.js, Angular, Node.js) and MERN (MongoDB, Express.js, React, Node.js) are popular full-stack development stacks that streamline the development process by providing a cohesive set of technologies.
Web development tools and environments
Efficient web development relies on a set of tools and environments that streamline the coding and collaboration processes:
Integrated development environments (IDEs): Tools like Visual Studio Code, Atom, and Sublime Text provide features such as code highlighting, autocompletion, and version control integration, enhancing the development experience.
Version control: Git is a widely used version control system that allows developers to track changes, collaborate seamlessly, and roll back to previous versions if needed.
Collaboration tools: Communication platforms like Slack, project management tools such as Jira, and collaboration platforms like GitHub facilitate effective teamwork and project management.
Security practices in web development
Security is paramount in web development to protect against cyber threats and ensure the confidentiality and integrity of user data. Best practices include encryption, secure coding practices, regular security audits, and staying informed about the latest security vulnerabilities and patches.
Common threats: Developers must be aware of common security threats, including SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF).
Secure coding practices: Adhering to secure coding practices involves input validation, proper data sanitization, and ensuring that sensitive information is stored and transmitted securely.
Authentication and authorization: Implementing robust authentication mechanisms, such as OAuth or JSON Web Tokens (JWT), ensures that only authorized users can access specific resources within the application.
Agile methodology in web development
Agile manifesto and principles
Agile is a set of principles and values for software development that prioritize flexibility, collaboration, and customer satisfaction. The four key values are:
Individuals and interactions over processes and tools.
Working software over comprehensive documentation.
Customer collaboration over contract negotiation.
Responding to change over following a plan.
Agile concepts in web development
Iterative and incremental development: Building and refining a web application through small, repeatable cycles, enhancing features incrementally with each iteration.
Scrum and kanban: Employing agile frameworks like Scrum for structured sprints or Kanban for continuous flow to manage tasks and enhance team efficiency.
Cross-functional teams: Forming collaborative teams with diverse skill sets, ensuring all necessary expertise is present for comprehensive web development.
Customer collaboration: Engaging customers throughout the development process to gather feedback, validate requirements, and ensure the delivered product aligns with expectations.
Adaptability to change: Embracing changes in requirements or priorities even late in the development process to enhance the product's responsiveness to evolving needs.
User stories and backlog: Capturing functional requirements through user stories and maintaining a backlog of prioritized tasks to guide development efforts.
Continuous integration and continuous delivery (CI/CD): Implementing automated processes to continuously integrate code changes and deliver updated versions, ensuring a streamlined and efficient development pipeline.
See also
Outline of web design and web development
Web design
Web development tools
Web application development
Web developer
References | Web development | Engineering | 5,633 |
12,950,677 | https://en.wikipedia.org/wiki/Virtual%20mixer | A virtual mixer is a software application that runs on a computer or other digital audio system. Providing the same functionality as a digital or analog mixing console, a virtual mixer takes the audio outputs of many separate tracks or live sources and combines them into a pair of stereo outputs or other routed subgroups for auxiliary outputs.
History
Around the mid-1990s, computers achieved a level of processing power that allowed professional recordings to be made digitally. In the following decade, many artists began recording their own music in home studios with the aid of DAW (digital audio workstation) software such as GarageBand or Pro Tools. This move away from high-end studios, together with the rise of computing power in personal computers, gave rise to virtual mixers that require minimal to no physical interface.
Design
The design of most virtual mixers is modeled after physical mixers. The individual channel strips are arranged side-by-side and the user is given control over level and pan. There is also a single master fader for the stereo output. The actual controls are also modeled after physical mixers, featuring faders and knobs that can be controlled using a mouse and keyboard shortcuts.
Each channel displays a decibel meter and provides slots for optional third-party plugins. These plugins range from built-in effects to EQ, compression, and gates, and they can be applied in a number of ways: each channel offers plugin slots, selected via dropdown menus, through which plugins are applied to that individual channel; alternatively, a plugin can be applied to several channels at once by busing the desired channels to another track, in which case the amount of the effect is controlled through the fader of the bused channel.
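As a rough illustration of what a mix engine does behind these controls, the sketch below sums two channels into a stereo bus, scaling each by a fader gain (in dB) and a pan position. The constant-power (sine/cosine) pan law and all numeric values are assumptions for illustration, not the behaviour of any particular product.

```python
# Illustrative sketch of the core mix-bus arithmetic in a software mixer:
# each channel's samples are scaled by a fader gain and a pan law, then summed.
# A constant-power (sin/cos) pan law is assumed here purely for illustration.
import math

def mix(channels, gains_db, pans):
    """channels: equal-length sample lists; pans: -1.0 (left) .. +1.0 (right)."""
    n = len(channels[0])
    left, right = [0.0] * n, [0.0] * n
    for samples, gain_db, pan in zip(channels, gains_db, pans):
        gain = 10 ** (gain_db / 20.0)         # dB fader position -> linear gain
        angle = (pan + 1.0) * math.pi / 4.0   # map pan to 0..pi/2
        gl, gr = gain * math.cos(angle), gain * math.sin(angle)
        for i, s in enumerate(samples):
            left[i] += gl * s
            right[i] += gr * s
    return left, right

stereo = mix([[0.1, 0.2], [0.3, -0.1]], gains_db=[-6.0, 0.0], pans=[-0.5, 0.8])
```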
See also
Digital Mixing Console
Music Sequencer
MIDI Controller
Plugin
Audio Units
VST
External links
http://www.soundonsound.com/sos/aug00/articles/usingvmixers.htm
Audio engineering | Virtual mixer | Engineering | 406 |
36,059,598 | https://en.wikipedia.org/wiki/Alkyl%20polyglycoside | Alkyl polyglycosides (APGs) are a class of non-ionic surfactants widely used in a variety of cosmetic, household, and industrial applications. Biodegradable and plant-derived from sugars, these surfactants are usually derivatives of glucose and fatty alcohols. The raw materials are typically starch and fat, and the final products are typically complex mixtures of compounds with different sugars comprising the hydrophilic end and alkyl groups of variable length comprising the hydrophobic end. When derived from glucose, they are known as alkyl polyglucosides.
APGs exhibit good wetting, foaming, and detergency properties, making them effective in cleaning and personal care products. They are also stable across a wide pH range and compatible with various other surfactants.
Uses
APGs are used to enhance the formation of foams in detergents. They are also used in the personal care industry because they are biodegradable and safe for sensitive skin.
Preparation
APGs are produced by combining a sugar such as glucose with a fatty alcohol in the presence of acid catalysts at elevated temperatures.
References
Glycosides
Non-ionic surfactants | Alkyl polyglycoside | Chemistry | 247 |
7,071,234 | https://en.wikipedia.org/wiki/Rose%20madder | Rose madder (also known as madder) is a red paint made from the pigment madder lake, a traditional lake pigment extracted from the common madder plant Rubia tinctorum.
Madder lake contains two organic red dyes: alizarin and purpurin. As a paint, it has been described as a fugitive, transparent, nonstaining, mid valued, moderately dull violet red pigment in tints and medium solutions, darkening to an impermanent, dull magenta red in masstone.
History
Madder has been cultivated as a dyestuff since antiquity in Central Asia, South Asia, and Egypt, where it was grown as early as 1500 BC. Cloth dyed with madder root dye was found in the tomb of the Pharaoh Tutankhamun and on an Egyptian tomb painting from the Graeco-Roman period, diluted with gypsum to produce a pink color. It was also found in ancient Greece (in Corinth), and in Italy in the Baths of Titus and the ruins of Pompeii. It is referred to in the Talmud as well as mentioned in writings by Dioscorides (who referred to it as ἐρυθρόδανον, "erythródanon"), Hippocrates, and other literary figures, and in artwork where it is referred to as rubio and used in paintings by J. M. W. Turner and as a color for ceramics. In Spain, madder was introduced and then cultivated by the Moors.
The production of a lake pigment from madder seems to have been first invented by the ancient Egyptians. Several techniques and recipes developed. Ideal color was said to come from plants 18 to 28 months old that had been grown in calcareous soil, which is full of lime and typically chalky. Most were considered relatively weak and extremely fugitive until 1804, when the English dye maker George Field refined the technique of making a lake from madder by treating it with alum and an alkali.
The resulting madder lake had a less fugitive color and could be used more efficaciously, for example by blending it into a paint. Over the following years, other metal salts, including those containing chromium, iron, and tin, were found to be usable in place of alum to give madder-based pigments of various other colors.
In 1827, the French chemists Pierre-Jean Robiquet and Colin began producing garancine, the concentrated version of natural madder. They then found that madder lake contained two colorants, the red alizarin and the more rapidly fading purpurin. Purpurin is only present in the natural form of madder and gives a distinctive orange/red generally warmer tone that pure synthetic alizarin does not. Purpurin fluoresces yellow to red under ultraviolet light, while synthetic alizarin slightly shows violet. Alizarin was discovered before purpurin, by heating the ground madder with acid and potash. A yellow vapor crystallized into bright red needles: alizarin. This alizarin concentrate comprises only 1% of the madder root.
Natural rose madder supplied half the world with red, until 1868, when its alizarin component became the first natural dye to be synthetically duplicated by Carl Gräbe and Carl Liebermann. Advances in the understanding of chemistry, such as chemical structures, chemical formulas, and elemental formulas, aided these Berlin-based scientists in discovering that alizarin had an anthracene base. However, their recipe was not feasible for large-scale production; it required expensive and volatile substances, specifically bromine.
William Perkin, the inventor of mauveine, filed a patent in June 1869 for a new way to produce alizarin without bromine. Gräbe, Liebermann, and Heinrich Caro filed a patent for a similar process just one day before Perkin did – yet both patents were granted, as Perkin's had been sealed first. They divided the market in half: Perkin sold to the English market, and the scientists from Berlin to the United States and mainland Europe.
Because this synthetic alizarin dye could be produced for a fraction of the cost of the natural madder dye, it quickly replaced all madder-based colorants then in use (in, for instance, British army red coats that had been a shade of madder from the late 17th century to 1870, and French military cloth, often called "Turkey Red"). In turn, alizarin itself has now been largely replaced by the more light-resistant quinacridone pigments originally developed at DuPont in 1958.
It is still manufactured in traditional ways to meet the demands of the fine art market.
Other names
Alizarin's chemical composition: 1,2 dihydroxyanthraquinone (C14H8O4)
Alizarin crimson, a paint very similar in color to Rose Madder Genuine but derived from synthetic Alizarin
Lacca di robbia, Italian name
Laque de garance, French name
Natural Red 9 abbreviated NR9, Color Index name
Purpurin's chemical composition: 1,2,4 trihydroxyanthraquinone (C14H8O5)
Rose madder genuine, sometimes used to specify a paint derived from the root of the madder plant in the traditional manner. It is still manufactured and used by some, but is too fugitive for professional artistic use.
Rose madder hue, sometimes used to specify a paint made from other pigments but meant to approximate the color of rose madder
Rubia tinctorum, the herbaceous perennial from which the rose madder pigment is derived
Turkey red
Substitutes
As all madder-based pigments are fugitive, artists have long sought a more permanent and lightfast replacement for rose madder and alizarin. Alternative pigments include:
Anthraquinone red (PR177), a chemical cousin of Alizarin
Benzamida carmine (PR176)
Perylene maroon (PR179), for mixing dull violets
Pyrrole rubine (PR264)
Quinacridone magenta (PR122), for a brighter violet
Quinacridone pyrrolodone
Quinacridone rose (PV19), for a brighter violet
Quinacridone violet (PV19), particularly dark and reddish varieties
In art, entertainment, and media
HMS Surprise is a 1973 novel by Patrick O'Brian which mentions rose madder.
Rose Madder is the title of a 1995 novel by Stephen King, in which a woman named Rose Daniels escapes her abusive husband and travels through time by entering a painting of a woman in a gown dyed with rose madder.
"Madder Red" is the title of a 2009 song by Yeasayer on the album Odd Blood.
Jonathon Keats uses the gradual fading of rose madder oil paint to record a single image over the course of 1000 years in his "millennium camera".
Blue Madder is the third album released by Savoy Brown in May 1969 on Decca Records.
Yukino in The Garden of Words is described as having 'a madder-red ribbon' in her school uniform.
The Maddermarket Theatre in Norwich has connections with the use of madder as a dye in the city.
References
Further reading
Biological pigments
Organic pigments | Rose madder | Biology | 1,518 |
22,074,782 | https://en.wikipedia.org/wiki/Nano%20Today | Nano Today is a peer-reviewed scientific journal covering nanoscience and nanotechnology. It publishes peer-reviewed articles on recent research and advances in the field, alongside research news and commentary on developments in related areas. Established in 2006, it is published six times a year by Elsevier.
References
Elsevier academic journals
Chemistry journals
Nanotechnology journals | Nano Today | Materials_science | 114 |
2,903,625 | https://en.wikipedia.org/wiki/Nadir | The nadir is the direction pointing directly below a particular location; that is, it is one of two vertical directions at a specified location, orthogonal to a horizontal flat surface.
The direction opposite of the nadir is the zenith.
Definitions
Space science
Since the concept of being below is itself somewhat vague, scientists define the nadir in more rigorous terms. Specifically, in astronomy, geophysics and related sciences (e.g., meteorology), the nadir at a given point is the local vertical direction pointing in the direction of the force of gravity at that location.
The term can also be used to represent the lowest point that a celestial object reaches along its apparent daily path around a given point of observation (i.e. the object's lower culmination). This can be used to describe the position of the Sun, but it is only technically accurate for one latitude at a time and only possible at low latitudes. The Sun is said to be at the nadir at a location when it is at the zenith at the location's antipode and is 90° below the horizon.
Nadir also refers to the downward-facing viewing geometry of an orbiting satellite, such as is employed during remote sensing of the atmosphere, as well as when an astronaut faces the Earth while performing a spacewalk. A nadir image is a satellite image or aerial photo of the Earth taken vertically. A satellite ground track represents its orbit projected to nadir on to Earth's surface.
Medicine
Generally in medicine, nadir is used to indicate the progression to the lowest point of a clinical symptom (e.g. fever patterns) or a laboratory count. In oncology, the term nadir is used to represent the lowest level of a blood cell count while a patient is undergoing chemotherapy. A diagnosis of neutropenic nadir after chemotherapy typically lasts 7–10 days.
Figurative usage
The word is also used figuratively to mean a low point, such as with a person's spirits, the quality of an activity or profession, or the nadir of American race relations.
Notes
References
Astronomical coordinate systems
Technical factors of astrology
Orientation (geometry) | Nadir | Physics,Astronomy,Mathematics | 441 |
24,549,947 | https://en.wikipedia.org/wiki/Experimental%20Mechanics | Experimental Mechanics is a peer-reviewed scientific journal covering all areas of experimental mechanics. It is an official journal of the Society for Experimental Mechanics and was established in 1961, being published monthly. From 1983 to 2003 it was published quarterly, increasing to 6 issues per year until 2009; since then, it has published 9 issues per year. The journal is published by Springer Science+Business Media and the editor-in-chief is Professor Alan Zehnder (Cornell University). The journal occasionally publishes special issues on focused topics.
Abstracting and indexing
The journal is abstracted and indexed in several bibliographic databases.
According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.808.
References
External links
English-language journals
Engineering journals
Academic journals established in 1961
Materials science journals
Springer Science+Business Media academic journals
9 times per year journals | Experimental Mechanics | Materials_science,Engineering | 168 |
59,038,212 | https://en.wikipedia.org/wiki/Tulipalin%20A | Tulipalin A, also known as α-methylene-γ-butyrolactone, is a naturally occurring compound found in certain flowers such as tulips and alstroemerias. Tulipalin A has the molecular formula C5H6O2 and the CAS registry number 547-65-9. It is an allergen and has been known to cause occupational contact dermatitis ('tulip fingers') in people who are regularly exposed to it, such as florists. It is synthesized from tuliposide A in response to damage to the plant: when the plant is damaged, tuliposide A is broken down by tuliposide-converting enzymes (TCE) to produce tulipalin A. More recent experiments with this compound have uncovered potential applications in the field of polymerization.
References
Gamma-lactones
Plant toxins
Vinylidene compounds | Tulipalin A | Chemistry | 203 |
33,965,405 | https://en.wikipedia.org/wiki/Reverse%20blog | A reverse blog (also known as a group blog) is a type of blog written entirely by the users, who are given a topic. The blog posts are usually screened and chosen for publication by a core group or the publisher of the blog.
A reverse blog is different from a traditional blog, which is created by a single, specific author (i.e. blogger). The blogger will write about a given topic and other users may view and sometimes comment on the blogger's work.
A reverse blog is characterized primarily by the lack of a single blogger on a site providing blog-style content. To differentiate a reverse blog from a forum, the number of comments must be limited, and this limit must be fixed. These are the primary and necessary characteristics of a reverse blog, which is also commonly called an inverse blog.
References
Blogging | Reverse blog | Technology | 176 |
9,434,311 | https://en.wikipedia.org/wiki/Calcium%20chlorate | Calcium chlorate is the calcium salt of chloric acid, with the chemical formula Ca(ClO3)2. Like other chlorates, it is a strong oxidizer.
Production
Calcium chlorate is produced by passing chlorine gas through a hot suspension of calcium hydroxide in water, producing calcium hypochlorite, which disproportionates when heated with excess chlorine to give calcium chlorate and calcium chloride:
6 Ca(OH)2 + 6 Cl2 → Ca(ClO3)2 + 5 CaCl2 + 6 H2O
This is also the first step of the Liebig process for the manufacture of potassium chlorate.
In theory, electrolysis of hot calcium chloride solution will produce the chlorate salt, analogous to the process used for the manufacture of sodium chlorate. In practice, electrolysis is complicated by calcium hydroxide depositing on the cathode, preventing the flow of current.
Reactions
When concentrated solutions of calcium chlorate and potassium chloride are combined, potassium chlorate precipitates:
Ca(ClO3)2 + 2 KCl → 2 KClO3 + CaCl2
This is the second step of the Liebig process for the manufacture of potassium chlorate.
Solutions of calcium chlorate react with solutions of alkali carbonates to give a precipitate of calcium carbonate and the alkali chlorate in solution:
Ca(ClO3)2 + Na2CO3 → 2 NaClO3 + CaCO3
On strong heating, calcium chlorate decomposes to give oxygen and calcium chloride:
Ca(ClO3)2 → CaCl2 + 3 O2
Cold, dilute solutions of calcium chlorate and sulfuric acid react to give a precipitate of calcium sulfate and chloric acid in solution:
Ca(ClO3)2 + H2SO4 → 2 HClO3 + CaSO4
Contact with strong sulfuric acid can result in explosions due to the instability of concentrated chloric acid. Contact with ammonium compounds can also cause violent decomposition due to the formation of unstable ammonium chlorate.
Uses
Calcium chlorate has been used as an herbicide, like sodium chlorate.
Calcium chlorate is occasionally used in pyrotechnics, as an oxidizer and pink flame colorant. Its hygroscopic nature and incompatibility with other common pyrotechnic materials (such as sulfur) limit its utility in these applications.
References
Chlorates
Calcium compounds
Oxidizing agents | Calcium chlorate | Chemistry | 542 |
36,703,918 | https://en.wikipedia.org/wiki/Cantelli%27s%20inequality | In probability theory, Cantelli's inequality (also called the Chebyshev-Cantelli inequality and the one-sided Chebyshev inequality) is an improved version of Chebyshev's inequality for one-sided tail bounds. The inequality states that, for λ > 0,
Pr(X − E[X] ≥ λ) ≤ σ² / (σ² + λ²),
where
X is a real-valued random variable,
Pr is the probability measure,
E[X] is the expected value of X,
σ² is the variance of X.
Applying the Cantelli inequality to −X gives a bound on the lower tail,
Pr(X − E[X] ≤ −λ) ≤ σ² / (σ² + λ²).
While the inequality is often attributed to Francesco Paolo Cantelli who published it in 1928, it originates in Chebyshev's work of 1874. When bounding the event that a random variable deviates from its mean in only one direction (positive or negative), Cantelli's inequality gives an improvement over Chebyshev's inequality. The Chebyshev inequality has "higher moments versions" and "vector versions", and so does the Cantelli inequality.
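As a purely numerical illustration (not part of the article's sources), the sketch below compares the empirical upper-tail probability of a mean-zero, unit-variance random variable against the Cantelli and Chebyshev bounds; the distribution, threshold, and sample size are arbitrary choices.

```python
# Numerical illustration: empirical tail probability of a shifted exponential
# variable (mean 0, variance 1) versus the Cantelli and Chebyshev bounds.
import random

random.seed(0)
n, lam = 200_000, 2.0
samples = [random.expovariate(1.0) - 1.0 for _ in range(n)]  # mean 0, variance 1
empirical = sum(x >= lam for x in samples) / n
cantelli = 1.0 / (1.0 + lam**2)   # sigma^2 / (sigma^2 + lambda^2) with sigma^2 = 1
chebyshev = 1.0 / lam**2          # sigma^2 / lambda^2
print(empirical, "<=", cantelli, "<=", chebyshev)
```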
Comparison to Chebyshev's inequality
For one-sided tail bounds, Cantelli's inequality is better, since Chebyshev's inequality can only get
Pr(X − E[X] ≥ λ) ≤ Pr(|X − E[X]| ≥ λ) ≤ σ² / λ².
On the other hand, for two-sided tail bounds, Cantelli's inequality gives
Pr(|X − E[X]| ≥ λ) = Pr(X − E[X] ≥ λ) + Pr(X − E[X] ≤ −λ) ≤ 2σ² / (σ² + λ²),
which is always worse than Chebyshev's inequality (when λ ≥ σ; otherwise, both inequalities bound a probability by a value greater than one, and so are trivial).
Generalizations
Various stronger inequalities can be shown.
He, Zhang, and Zhang showed (Corollary 2.3) when
and :
In the case this matches a bound in Berger's "The Fourth Moment Method",
This improves over Cantelli's inequality in that we can get a non-zero lower bound, even when .
See also
Chebyshev's inequality
Paley–Zygmund inequality
References
Probabilistic inequalities | Cantelli's inequality | Mathematics | 390 |
59,438 | https://en.wikipedia.org/wiki/Thermal%20conductivity%20and%20resistivity | The thermal conductivity of a material is a measure of its ability to conduct heat. It is commonly denoted by k, λ, or κ and is measured in W/(m·K).
Heat transfer occurs at a lower rate in materials of low thermal conductivity than in materials of high thermal conductivity. For instance, metals typically have high thermal conductivity and are very efficient at conducting heat, while the opposite is true for insulating materials such as mineral wool or Styrofoam. Correspondingly, materials of high thermal conductivity are widely used in heat sink applications, and materials of low thermal conductivity are used as thermal insulation. The reciprocal of thermal conductivity is called thermal resistivity.
The defining equation for thermal conductivity is q = −k ∇T, where q is the heat flux, k is the thermal conductivity, and ∇T is the temperature gradient. This is known as Fourier's law for heat conduction. Although commonly expressed as a scalar, the most general form of thermal conductivity is a second-rank tensor. However, the tensorial description only becomes necessary in materials which are anisotropic.
Definition
Simple definition
Consider a solid material placed between two environments of different temperatures. Let T1 be the temperature at x = 0 and T2 be the temperature at x = L, and suppose T2 > T1. An example of this scenario is a building on a cold winter day; the solid material in this case is the building wall, separating the cold outdoor environment from the warm indoor environment.
According to the second law of thermodynamics, heat will flow from the hot environment to the cold one as the temperature difference is equalized by diffusion. This is quantified in terms of a heat flux q, which gives the rate, per unit area, at which heat flows in a given direction (in this case the minus x-direction). In many materials, q is observed to be directly proportional to the temperature difference and inversely proportional to the separation distance L:
q = −k (T2 − T1) / L.
The constant of proportionality k is the thermal conductivity; it is a physical property of the material. In the present scenario, since T2 > T1, heat flows in the minus x-direction and q is negative, which in turn means that k > 0. In general, k is always defined to be positive. The same definition of k can also be extended to gases and liquids, provided other modes of energy transport, such as convection and radiation, are eliminated or accounted for.
The preceding derivation assumes that k does not change significantly as temperature is varied from T1 to T2. Cases in which the temperature variation of k is non-negligible must be addressed using the more general definition of thermal conductivity discussed below.
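As a worked example of the one-dimensional relation above, the sketch below estimates the heat flux through a wall; the conductivity, thickness, temperatures, and area are illustrative values only.

```python
# Worked example of the one-dimensional form q = -k (T2 - T1) / L (a sketch;
# the wall dimensions and conductivity value are illustrative assumptions).
k = 0.5                              # W/(m K), a masonry-like wall material
L = 0.2                              # wall thickness in m
T_inside, T_outside = 20.0, -5.0     # deg C; a 25 K difference across the wall

q = k * (T_inside - T_outside) / L   # heat flux magnitude, W/m^2
area = 12.0                          # m^2 of wall
print(f"heat flux {q:.1f} W/m^2, heat loss {q * area:.0f} W through the wall")
```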
General definition
Thermal conduction is defined as the transport of energy due to random molecular motion across a temperature gradient. It is distinguished from energy transport by convection and molecular work in that it does not involve macroscopic flows or work-performing internal stresses.
Energy flow due to thermal conduction is classified as heat and is quantified by the vector q(r, t), which gives the heat flux at position r and time t. According to the second law of thermodynamics, heat flows from high to low temperature. Hence, it is reasonable to postulate that q(r, t) is proportional to the gradient of the temperature field T(r, t), i.e.
q(r, t) = −k ∇T(r, t),
where the constant of proportionality, k > 0, is the thermal conductivity. This is called Fourier's law of heat conduction. Despite its name, it is not a law but a definition of thermal conductivity in terms of the independent physical quantities q(r, t) and T(r, t). As such, its usefulness depends on the ability to determine k for a given material under given conditions. The constant k itself usually depends on T(r, t) and thereby implicitly on space and time. An explicit space and time dependence could also occur if the material is inhomogeneous or changing with time.
In some solids, thermal conduction is anisotropic, i.e. the heat flux is not always parallel to the temperature gradient. To account for such behavior, a tensorial form of Fourier's law must be used:
q(r, t) = −κ ⋅ ∇T(r, t),
where κ is a symmetric, second-rank tensor called the thermal conductivity tensor.
An implicit assumption in the above description is the presence of local thermodynamic equilibrium, which allows one to define a temperature field T(r, t). This assumption could be violated in systems that are unable to attain local equilibrium, as might happen in the presence of strong nonequilibrium driving or long-ranged interactions.
Other quantities
In engineering practice, it is common to work in terms of quantities which are derivative to thermal conductivity and implicitly take into account design-specific features such as component dimensions.
For instance, thermal conductance is defined as the quantity of heat that passes in unit time through a plate of particular area and thickness when its opposite faces differ in temperature by one kelvin. For a plate of thermal conductivity k, area A and thickness L, the conductance is kA/L, measured in W⋅K−1. The relationship between thermal conductivity and conductance is analogous to the relationship between electrical conductivity and electrical conductance.
Thermal resistance is the inverse of thermal conductance. It is a convenient measure to use in multicomponent design since thermal resistances are additive when occurring in series.
There is also a measure known as the heat transfer coefficient: the quantity of heat that passes per unit time through a unit area of a plate of particular thickness when its opposite faces differ in temperature by one kelvin. In ASTM C168-15, this area-independent quantity is referred to as the "thermal conductance". The reciprocal of the heat transfer coefficient is thermal insulance. In summary, for a plate of thermal conductivity k, area A and thickness L,
thermal conductance = kA/L, measured in W⋅K−1.
thermal resistance = L/(kA), measured in K⋅W−1.
heat transfer coefficient = k/L, measured in W⋅K−1⋅m−2.
thermal insulance = L/k, measured in K⋅m2⋅W−1.
The heat transfer coefficient is also known as thermal admittance in the sense that the material may be seen as admitting heat to flow.
An additional term, thermal transmittance, quantifies the thermal conductance of a structure along with heat transfer due to convection and radiation. It is measured in the same units as thermal conductance and is sometimes known as the composite thermal conductance. The term U-value is also used.
Finally, thermal diffusivity α combines thermal conductivity with density ρ and specific heat capacity cp:
α = k / (ρ cp).
As such, it quantifies the thermal inertia of a material, i.e. the relative difficulty in heating a material to a given temperature using heat sources applied at the boundary.
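The sketch below ties the quantities above together for a single plate; the material properties, roughly those of window glass, are illustrative assumptions.

```python
# Sketch relating the derived quantities above for a single plate; the material
# properties (roughly those of window glass) are illustrative values only.
k   = 1.0        # thermal conductivity, W/(m K)
A   = 2.0        # plate area, m^2
L   = 0.006      # plate thickness, m
rho = 2500.0     # density, kg/m^3
cp  = 840.0      # specific heat capacity, J/(kg K)

conductance = k * A / L          # W/K
resistance  = L / (k * A)        # K/W
h           = k / L              # heat transfer coefficient, W/(K m^2)
insulance   = L / k              # K m^2 / W
diffusivity = k / (rho * cp)     # m^2/s
print(conductance, resistance, h, insulance, diffusivity)
```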
Units
In the International System of Units (SI), thermal conductivity is measured in watts per meter-kelvin (W/(m⋅K)). Some papers report in watts per centimeter-kelvin [W/(cm⋅K)].
However, physicists use other convenient units as well, e.g., in cgs units, where esu/(cm-sec-K) is used.
The Lorenz number, defined as L = κ/(σT), is a quantity independent of the carrier density and the scattering mechanism. Its value for a gas of non-interacting electrons (typical carriers in good metallic conductors) is 2.72×10−13 esu/K2, or equivalently, 2.44×10−8 W⋅Ω/K2.
In imperial units, thermal conductivity is measured in BTU/(h⋅ft⋅°F).
The dimension of thermal conductivity is M1L1T−3Θ−1, expressed in terms of the dimensions mass (M), length (L), time (T), and temperature (Θ).
Other units which are closely related to the thermal conductivity are in common use in the construction and textile industries. The construction industry makes use of measures such as the R-value (resistance) and the U-value (transmittance or conductance). Although related to the thermal conductivity of a material used in an insulation product or assembly, R- and U-values are measured per unit area, and depend on the specified thickness of the product or assembly.
Likewise the textile industry has several units including the tog and the clo which express thermal resistance of a material in a way analogous to the R-values used in the construction industry.
Measurement
There are several ways to measure thermal conductivity; each is suitable for a limited range of materials. Broadly speaking, there are two categories of measurement techniques: steady-state and transient. Steady-state techniques infer the thermal conductivity from measurements on the state of a material once a steady-state temperature profile has been reached, whereas transient techniques operate on the instantaneous state of a system during the approach to steady state. Lacking an explicit time component, steady-state techniques do not require complicated signal analysis (steady state implies constant signals). The disadvantage is that a well-engineered experimental setup is usually needed, and the time required to reach steady state precludes rapid measurement.
In comparison with solid materials, the thermal properties of fluids are more difficult to study experimentally. This is because in addition to thermal conduction, convective and radiative energy transport are usually present unless measures are taken to limit these processes. The formation of an insulating boundary layer can also result in an apparent reduction in the thermal conductivity.
Experimental values
The thermal conductivities of common substances span at least four orders of magnitude. Gases generally have low thermal conductivity, and pure metals have high thermal conductivity. For example, under standard conditions the thermal conductivity of copper is over 10,000 times that of air.
Of all materials, allotropes of carbon, such as graphite and diamond, are usually credited with having the highest thermal conductivities at room temperature. The thermal conductivity of natural diamond at room temperature is several times higher than that of a highly conductive metal such as copper (although the precise value varies depending on the diamond type).
Thermal conductivities of selected substances are tabulated below; an expanded list can be found in the list of thermal conductivities. These values are illustrative estimates only, as they do not account for measurement uncertainties or variability in material definitions.
Influencing factors
Temperature
The effect of temperature on thermal conductivity is different for metals and nonmetals. In metals, heat conductivity is primarily due to free electrons. Following the Wiedemann–Franz law, thermal conductivity of metals is approximately proportional to the absolute temperature (in kelvins) times electrical conductivity. In pure metals the electrical conductivity decreases with increasing temperature and thus the product of the two, the thermal conductivity, stays approximately constant. However, as temperatures approach absolute zero, the thermal conductivity decreases sharply. In alloys the change in electrical conductivity is usually smaller and thus thermal conductivity increases with temperature, often proportionally to temperature. Many pure metals have a peak thermal conductivity between 2 K and 10 K.
On the other hand, heat conductivity in nonmetals is mainly due to lattice vibrations (phonons). Except for high-quality crystals at low temperatures, the phonon mean free path is not reduced significantly at higher temperatures. Thus, the thermal conductivity of nonmetals is approximately constant at high temperatures. At low temperatures well below the Debye temperature, thermal conductivity decreases, as does the heat capacity, due to carrier scattering from defects.
Chemical phase
When a material undergoes a phase change (e.g. from solid to liquid), the thermal conductivity may change abruptly. For instance, when ice melts to form liquid water at 0 °C, the thermal conductivity changes from 2.18 W/(m⋅K) to 0.56 W/(m⋅K).
Even more dramatically, the thermal conductivity of a fluid diverges in the vicinity of the vapor-liquid critical point.
Thermal anisotropy
Some substances, such as non-cubic crystals, can exhibit different thermal conductivities along different crystal axes. Sapphire is a notable example of variable thermal conductivity based on orientation and temperature, with 35 W/(m⋅K) along the c axis and 32 W/(m⋅K) along the a axis.
Wood generally conducts better along the grain than across it. Other examples of materials where the thermal conductivity varies with direction are metals that have undergone heavy cold pressing, laminated materials, cables, the materials used for the Space Shuttle thermal protection system, and fiber-reinforced composite structures.
When anisotropy is present, the direction of heat flow may differ from the direction of the thermal gradient.
Electrical conductivity
In metals, thermal conductivity is approximately correlated with electrical conductivity according to the Wiedemann–Franz law, as freely moving valence electrons transfer not only electric current but also heat energy. However, the general correlation between electrical and thermal conductance does not hold for other materials, due to the increased importance of phonon carriers for heat in non-metals. Highly electrically conductive silver is less thermally conductive than diamond, which is an electrical insulator but conducts heat via phonons due to its orderly array of atoms.
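As a rough numerical illustration of this correlation, the sketch below estimates the thermal conductivity of copper from its electrical conductivity using the Wiedemann–Franz relation k ≈ LσT; the Lorenz number and copper conductivity are approximate textbook values.

```python
# Rough Wiedemann-Franz estimate (a sketch): k ~ L * sigma * T, using the
# Sommerfeld value of the Lorenz number; the copper conductivity is an
# approximate room-temperature figure.
L_lorenz = 2.44e-8    # W Ohm / K^2
sigma_cu = 5.96e7     # electrical conductivity of copper, S/m (approximate)
T = 300.0             # K

k_estimate = L_lorenz * sigma_cu * T
print(f"estimated k for copper: {k_estimate:.0f} W/(m K)")  # roughly 400-440 W/(m K)
```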
Magnetic field
The influence of magnetic fields on thermal conductivity is known as the thermal Hall effect or Righi–Leduc effect.
Gaseous phases
In the absence of convection, air and other gases are good insulators. Therefore, many insulating materials function simply by having a large number of gas-filled pockets which obstruct heat conduction pathways. Examples of these include expanded and extruded polystyrene (popularly referred to as "styrofoam") and silica aerogel, as well as warm clothes. Natural, biological insulators such as fur and feathers achieve similar effects by trapping air in pores, pockets, or voids.
Low density gases, such as hydrogen and helium typically have high thermal conductivity. Dense gases such as xenon and dichlorodifluoromethane have low thermal conductivity. An exception, sulfur hexafluoride, a dense gas, has a relatively high thermal conductivity due to its high heat capacity. Argon and krypton, gases denser than air, are often used in insulated glazing (double paned windows) to improve their insulation characteristics.
The thermal conductivity through bulk materials in porous or granular form is governed by the type of gas in the gaseous phase, and its pressure. At low pressures, the thermal conductivity of a gaseous phase is reduced, with this behaviour governed by the Knudsen number, defined as Kn = l/d, where l is the mean free path of gas molecules and d is the typical gap size of the space filled by the gas. In a granular material d corresponds to the characteristic size of the gaseous phase in the pores or intergranular spaces.
Isotopic purity
The thermal conductivity of a crystal can depend strongly on isotopic purity, assuming other lattice defects are negligible. A notable example is diamond: at a temperature of around 100 K the thermal conductivity increases from 10,000 W·m−1·K−1 for natural type IIa diamond (98.9% 12C), to 41,000 for 99.9% enriched synthetic diamond. A value of 200,000 is predicted for 99.999% 12C at 80 K, assuming an otherwise pure crystal. The thermal conductivity of 99% isotopically enriched cubic boron nitride is ~ 1400 W·m−1·K−1, which is 90% higher than that of natural boron nitride.
Molecular origins
The molecular mechanisms of thermal conduction vary among different materials, and in general depend on details of the microscopic structure and molecular interactions. As such, thermal conductivity is difficult to predict from first-principles. Any expressions for thermal conductivity which are exact and general, e.g. the Green-Kubo relations, are difficult to apply in practice, typically consisting of averages over multiparticle correlation functions. A notable exception is a monatomic dilute gas, for which a well-developed theory exists expressing thermal conductivity accurately and explicitly in terms of molecular parameters.
In a gas, thermal conduction is mediated by discrete molecular collisions. In a simplified picture of a solid, thermal conduction occurs by two mechanisms: 1) the migration of free electrons and 2) lattice vibrations (phonons). The first mechanism dominates in pure metals and the second in non-metallic solids. In liquids, by contrast, the precise microscopic mechanisms of thermal conduction are poorly understood.
Gases
In a simplified model of a dilute monatomic gas, molecules are modeled as rigid spheres which are in constant motion, colliding elastically with each other and with the walls of their container. Consider such a gas at temperature T and with density ρ, specific heat cv and molecular mass m. Under these assumptions, an elementary calculation yields for the thermal conductivity
k = β ρ λ cv √(2kBT/(πm)),
where β is a numerical constant of order 1, kB is the Boltzmann constant, and λ is the mean free path, which measures the average distance a molecule travels between collisions. Since λ is inversely proportional to density, this equation predicts that thermal conductivity is independent of density for fixed temperature. The explanation is that increasing density increases the number of molecules which carry energy but decreases the average distance a molecule can travel before transferring its energy to a different molecule: these two effects cancel out. For most gases, this prediction agrees well with experiments at pressures up to about 10 atmospheres. At higher densities, the simplifying assumption that energy is only transported by the translational motion of particles no longer holds, and the theory must be modified to account for the transfer of energy across a finite distance at the moment of collision between particles, as well as the locally non-uniform density in a high density gas. This modification has been carried out, yielding Revised Enskog Theory, which predicts a density dependence of the thermal conductivity in dense gases.
Typically, experiments show a more rapid increase with temperature than k ∝ √T (here, λ is independent of T). This failure of the elementary theory can be traced to the oversimplified "hard sphere" model, which both ignores the "softness" of real molecules, and the attractive forces present between real molecules, such as dispersion forces.
To incorporate more complex interparticle interactions, a systematic approach is necessary. One such approach is provided by Chapman–Enskog theory, which derives explicit expressions for thermal conductivity starting from the Boltzmann equation. The Boltzmann equation, in turn, provides a statistical description of a dilute gas for generic interparticle interactions. For a monatomic gas, expressions for derived in this way take the form
where σ is an effective particle diameter and ω(T) is a function of temperature whose explicit form depends on the interparticle interaction law. For rigid elastic spheres, ω(T) is independent of T and very close to 1. More complex interaction laws introduce a weak temperature dependence. The precise nature of the dependence is not always easy to discern, however, as ω(T) is defined as a multi-dimensional integral which may not be expressible in terms of elementary functions, but must be evaluated numerically. However, for particles interacting through a Mie potential (a generalisation of the Lennard-Jones potential) highly accurate correlations for ω(T) in terms of reduced units have been developed.
An alternate, equivalent way to present the result is in terms of the gas viscosity μ, which can also be calculated in the Chapman–Enskog approach:
k = f μ cv,
where f is a numerical factor which in general depends on the molecular model. For smooth spherically symmetric molecules, however, f is very close to 2.5, not deviating by more than 1% for a variety of interparticle force laws. Since k, μ, and cv are each well-defined physical quantities which can be measured independent of each other, this expression provides a convenient test of the theory. For monatomic gases, such as the noble gases, the agreement with experiment is fairly good.
For gases whose molecules are not spherically symmetric, the expression k = f μ cv still holds. In contrast with spherically symmetric molecules, however, f varies significantly depending on the particular form of the interparticle interactions: this is a result of the energy exchanges between the internal and translational degrees of freedom of the molecules. An explicit treatment of this effect is difficult in the Chapman–Enskog approach. Alternately, the approximate expression f = (1/4)(9γ − 5) was suggested by Eucken, where γ is the heat capacity ratio of the gas.
The entirety of this section assumes the mean free path is small compared with macroscopic (system) dimensions. In extremely dilute gases this assumption fails, and thermal conduction is described instead by an apparent thermal conductivity which decreases with density. Ultimately, as the density goes to 0 the system approaches a vacuum, and thermal conduction ceases entirely.
Liquids
The exact mechanisms of thermal conduction are poorly understood in liquids: there is no molecular picture which is both simple and accurate. An example of a simple but very rough theory is that of Bridgman, in which a liquid is ascribed a local molecular structure similar to that of a solid, i.e. with molecules located approximately on a lattice. Elementary calculations then lead to the expression
where NA is the Avogadro constant, Vm is the volume of a mole of liquid, and vs is the speed of sound in the liquid. This is commonly called Bridgman's equation.
Metals
For metals at low temperatures the heat is carried mainly by the free electrons. In this case the mean velocity is the Fermi velocity which is temperature independent. The mean free path is determined by the impurities and the crystal imperfections which are temperature independent as well. So the only temperature-dependent quantity is the heat capacity c, which, in this case, is proportional to T. So
k = k0 T
with k0 a constant. For pure metals, k0 is large, so the thermal conductivity is high. At higher temperatures the mean free path is limited by the phonons, so the thermal conductivity tends to decrease with temperature. In alloys the density of the impurities is very high, so l and, consequently k, are small. Therefore, alloys, such as stainless steel, can be used for thermal insulation.
Lattice waves, phonons, in dielectric solids
Heat transport in both amorphous and crystalline dielectric solids is by way of elastic vibrations of the lattice (i.e., phonons). This transport mechanism is theorized to be limited by the elastic scattering of acoustic phonons at lattice defects. This has been confirmed by the experiments of Chang and Jones on commercial glasses and glass ceramics, where the mean free paths were found to be limited by "internal boundary scattering" to length scales of 10−2 cm to 10−3 cm.
The phonon mean free path has been associated directly with the effective relaxation length for processes without directional correlation. If Vg is the group velocity of a phonon wave packet, then the relaxation length l is defined as:
l = Vg t,
where t is the characteristic relaxation time. Since longitudinal waves have a much greater phase velocity than transverse waves, Vlong is much greater than Vtrans, and the relaxation length or mean free path of longitudinal phonons will be much greater. Thus, thermal conductivity will be largely determined by the speed of longitudinal phonons.
Regarding the dependence of wave velocity on wavelength or frequency (dispersion), low-frequency phonons of long wavelength will be limited in relaxation length by elastic Rayleigh scattering. This type of light scattering from small particles is proportional to the fourth power of the frequency. For higher frequencies, the power of the frequency will decrease until at highest frequencies scattering is almost frequency independent. Similar arguments were subsequently generalized to many glass forming substances using Brillouin scattering.
Phonons in the acoustical branch dominate the phonon heat conduction as they have greater energy dispersion and therefore a greater distribution of phonon velocities. Additional optical modes could also be caused by the presence of internal structure (i.e., charge or mass) at a lattice point; it is implied that the group velocity of these modes is low and therefore their contribution to the lattice thermal conductivity λL (L) is small.
Each phonon mode can be split into one longitudinal and two transverse polarization branches. By extrapolating the phenomenology of lattice points to the unit cells it is seen that the total number of degrees of freedom is 3pq when p is the number of primitive cells with q atoms/unit cell. From these only 3p are associated with the acoustic modes, the remaining 3p(q − 1) are accommodated through the optical branches. This implies that structures with larger p and q contain a greater number of optical modes and a reduced λL.
From these ideas, it can be concluded that increasing crystal complexity, which is described by a complexity factor CF (defined as the number of atoms/primitive unit cell), decreases λL. This was done by assuming that the relaxation time τ decreases with increasing number of atoms in the unit cell and then scaling the parameters of the expression for thermal conductivity in high temperatures accordingly.
Describing anharmonic effects is complicated because an exact treatment as in the harmonic case is not possible, and phonons are no longer exact eigensolutions to the equations of motion. Even if the state of motion of the crystal could be described with a plane wave at a particular time, its accuracy would deteriorate progressively with time. Time development would have to be described by introducing a spectrum of other phonons, which is known as the phonon decay. The two most important anharmonic effects are the thermal expansion and the phonon thermal conductivity.
Only when the phonon number ‹n› deviates from the equilibrium value ‹n›0, can a thermal current arise as stated in the following expression
where v is the energy transport velocity of phonons. Only two mechanisms exist that can cause time variation of ‹n› in a particular region. The number of phonons that diffuse into the region from neighboring regions differs from those that diffuse out, or phonons decay inside the same region into other phonons. A special form of the Boltzmann equation,
d‹n›/dt = (∂‹n›/∂t)diffusion + (∂‹n›/∂t)decay,
states this. When steady state conditions are assumed, the total time derivative of the phonon number is zero, because the temperature is constant in time and therefore the phonon number stays constant as well. Time variation due to phonon decay is described with a relaxation time (τ) approximation
(∂‹n›/∂t)decay = −(‹n› − ‹n›0)/τ,
which states that the more the phonon number deviates from its equilibrium value, the more its time variation increases. When steady state conditions and local thermal equilibrium are assumed, we get the following equation
(∂‹n›/∂t)diffusion = (‹n› − ‹n›0)/τ.
Using the relaxation time approximation for the Boltzmann equation and assuming steady-state conditions, the phonon thermal conductivity λL can be determined. The temperature dependence for λL originates from the variety of processes, whose significance for λL depends on the temperature range of interest. Mean free path is one factor that determines the temperature dependence for λL, as stated in the following equation
λL = (1/3) C v Λ,
where Λ is the mean free path for phonons and C denotes the heat capacity. This equation is a result of combining the four previous equations with each other and knowing that ‹vx²› = v²/3 for cubic or isotropic systems and Λ = v τ.
At low temperatures (< 10 K) the anharmonic interaction does not influence the mean free path and therefore, the thermal resistivity is determined only from processes for which q-conservation does not hold. These processes include the scattering of phonons by crystal defects, or the scattering from the surface of the crystal in case of high quality single crystal. Therefore, thermal conductance depends on the external dimensions of the crystal and the quality of the surface. Thus, temperature dependence of λL is determined by the specific heat and is therefore proportional to T3.
Phonon quasimomentum is defined as ℏq and differs from normal momentum because it is only defined within an arbitrary reciprocal lattice vector. At higher temperatures (10 K < T < Θ), the conservation of energy ℏω1 = ℏω2 + ℏω3 and quasimomentum q1 = q2 + q3 + G, where q1 is the wave vector of the incident phonon and q2, q3 are wave vectors of the resultant phonons, may also involve a reciprocal lattice vector G complicating the energy transport process. These processes can also reverse the direction of energy transport.
Therefore, these processes are also known as Umklapp (U) processes and can only occur when phonons with sufficiently large q-vectors are excited, because unless the sum of q2 and q3 points outside of the Brillouin zone the momentum is conserved and the process is normal scattering (N-process). The probability of a phonon to have energy E is given by the Boltzmann distribution P ∝ exp(−E/kBT). For a U-process to occur, the decaying phonon has to have a wave vector q1 that is roughly half of the diameter of the Brillouin zone, because otherwise quasimomentum would not be conserved.
Therefore, these phonons have to possess energy of about kBΘ/2, which is a significant fraction of the Debye energy that is needed to generate new phonons. The probability for this is proportional to exp(−Θ/bT), with b = 2. Temperature dependence of the mean free path has an exponential form exp(Θ/bT). The presence of the reciprocal lattice wave vector implies a net phonon backscattering and a resistance to phonon and thermal transport resulting in a finite λL, as it means that momentum is not conserved. Only momentum non-conserving processes can cause thermal resistance.
At high temperatures (T > Θ), the mean free path and therefore λL has a temperature dependence T−1, to which one arrives from the preceding exponential formula by expanding it for T > Θ. This dependency is known as Eucken's law and originates from the temperature dependency of the probability for the U-process to occur.
Thermal conductivity is usually described by the Boltzmann equation with the relaxation time approximation, in which phonon scattering is a limiting factor. Other approaches use analytic models, molecular dynamics, or Monte Carlo based methods to describe thermal conductivity in solids.
Short-wavelength phonons are strongly scattered by impurity atoms if an alloyed phase is present, but mid- and long-wavelength phonons are less affected. Mid- and long-wavelength phonons carry a significant fraction of the heat, so to further reduce the lattice thermal conductivity one has to introduce structures that scatter these phonons. This is achieved by introducing an interface scattering mechanism, which requires structures whose characteristic length is longer than that of an impurity atom. Some possible ways to realize these interfaces are nanocomposites and embedded nanoparticles or structures.
Prediction
Because thermal conductivity depends continuously on quantities like temperature and material composition, it cannot be fully characterized by a finite number of experimental measurements. Predictive formulas become necessary if experimental values are not available under the physical conditions of interest. This capability is important in thermophysical simulations, where quantities like temperature and pressure vary continuously with space and time, and may encompass extreme conditions inaccessible to direct measurement.
In fluids
For the simplest fluids, such as monatomic gases and their mixtures at low to moderate densities, ab initio quantum mechanical computations can accurately predict thermal conductivity in terms of fundamental atomic properties—that is, without reference to existing measurements of thermal conductivity or other transport properties. This method uses Chapman-Enskog theory or Revised Enskog Theory to evaluate the thermal conductivity, taking fundamental intermolecular potentials as input, which are computed ab initio from a quantum mechanical description.
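As a rough illustration of the kinetic-theory route (not the full ab initio procedure, which uses accurately computed intermolecular potentials and collision integrals), the Chapman-Enskog first approximation for a monatomic hard-sphere gas can be evaluated in a few lines. The hard-sphere diameter used below for argon is an assumed, textbook-level value.

```python
import math

kB = 1.380649e-23     # J/K
amu = 1.66053907e-27  # kg

def hard_sphere_conductivity(T, mass, sigma):
    """Chapman-Enskog first approximation for a monatomic hard-sphere gas.

    lambda = (15/4) * (kB/m) * mu, with
    mu = (5/16) * sqrt(pi * m * kB * T) / (pi * sigma^2).
    Returns thermal conductivity in W/(m*K).
    """
    mu = (5.0 / 16.0) * math.sqrt(math.pi * mass * kB * T) / (math.pi * sigma ** 2)
    return (15.0 / 4.0) * (kB / mass) * mu

# Argon near room temperature; sigma is an assumed hard-sphere diameter.
T = 300.0              # K
m_ar = 39.948 * amu    # kg
sigma_ar = 3.4e-10     # m, assumed

print(hard_sphere_conductivity(T, m_ar, sigma_ar))  # ~0.02 W/(m*K)
```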
For most fluids, such high-accuracy, first-principles computations are not feasible. Rather, theoretical or empirical expressions must be fit to existing thermal conductivity measurements. If such an expression is fit to high-fidelity data over a large range of temperatures
and pressures, then it is called a "reference correlation" for that material. Reference correlations have been published for many pure materials; examples are carbon dioxide, ammonia, and benzene. Many of these cover temperature and pressure ranges that encompass gas, liquid, and supercritical phases.
Thermophysical modeling software often relies on reference correlations for predicting thermal conductivity at user-specified temperature and pressure. These correlations may be proprietary. Examples are REFPROP (proprietary) and CoolProp (open-source).
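As an example of how such a library is typically queried (assuming CoolProp is installed; the 'L' output key stands for thermal conductivity in W/(m·K), though the exact keys and fluid names should be checked against the CoolProp documentation for the version in use):

```python
from CoolProp.CoolProp import PropsSI

# Thermal conductivity from CoolProp's reference correlations at a
# user-specified temperature and pressure.
k_co2 = PropsSI('L', 'T', 320.0, 'P', 5e6, 'CO2')           # W/(m*K)
k_nh3 = PropsSI('L', 'T', 300.0, 'P', 101325.0, 'Ammonia')  # W/(m*K)
print(k_co2, k_nh3)
```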
Thermal conductivity can also be computed using the Green-Kubo relations, which express transport coefficients in terms of the statistics of molecular trajectories. The advantage of these expressions is that they are formally exact and valid for general systems. The disadvantage is that they require detailed knowledge of particle trajectories, available only in computationally expensive simulations such as molecular dynamics. An accurate model for interparticle interactions is also required, which may be difficult to obtain for complex molecules.
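A sketch of the post-processing step is shown below, assuming a heat-current time series J(t) has already been produced by a molecular dynamics run; the synthetic random data stand in for real trajectory output and the variable names are hypothetical.

```python
import numpy as np

kB = 1.380649e-23  # J/K

def green_kubo_kappa(J, dt, volume, temperature, n_corr):
    """Green-Kubo estimate kappa(t) = 1/(3 V kB T^2) * integral_0^t <J(0).J(s)> ds.

    J      -- (n_steps, 3) total heat current in SI units (W*m)
    dt     -- time step between samples in s
    volume -- simulation cell volume in m^3
    n_corr -- number of correlation lags to accumulate
    """
    n = len(J)
    acf = np.zeros(n_corr)
    for lag in range(n_corr):
        # autocorrelation <J(0) . J(t)> averaged over time origins
        acf[lag] = np.mean(np.sum(J[: n - lag] * J[lag:], axis=1))
    running_integral = np.cumsum(acf) * dt
    return running_integral / (3.0 * volume * kB * temperature ** 2)

# Synthetic stand-in for MD output; a real calculation would read J from the
# simulation and read kappa off the plateau of the running integral.
rng = np.random.default_rng(0)
J_fake = rng.normal(size=(20000, 3))
kappa_t = green_kubo_kappa(J_fake, dt=1e-15, volume=1e-26, temperature=300.0, n_corr=500)
print(kappa_t[-1])
```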
History
Jan Ingenhousz and the thermal conductivity of different metals
In a 1780 letter to Benjamin Franklin, Dutch-born British scientist Jan Ingenhousz relates an experiment which enabled him to rank seven different metals according to their thermal conductivities:
See also
Copper in heat exchangers
Heat pump
Heat transfer
Heat transfer mechanisms
Insulated pipe
Interfacial thermal resistance
Laser flash analysis
List of thermal conductivities
Phase-change material
R-value (insulation)
Specific heat capacity
Thermal bridge
Thermal conductance quantum
Thermal contact conductance
Thermal diffusivity
Thermal effusivity
Thermal entrance length
Thermal interface material
Thermal diode
Thermal resistance
Thermistor
Thermocouple
Thermodynamics
Thermal conductivity measurement
Refractory metals
References
Notes
Citations
Sources
Further reading
Undergraduate-level texts (engineering)
. A standard, modern reference.
Undergraduate-level texts (physics)
Halliday, David; Resnick, Robert & Walker, Jearl (1997). Fundamentals of Physics (5th ed.). John Wiley and Sons, New York. An elementary treatment.
. A brief, intermediate-level treatment.
. An advanced treatment.
Graduate-level texts
. A very advanced but classic text on the theory of transport processes in gases.
Reid, C. R.; Prausnitz, J. M.; Poling, B. E. (1987). The Properties of Gases and Liquids (4th ed.). McGraw-Hill.
Srivastava, G. P. (1990). The Physics of Phonons. Adam Hilger, IOP Publishing Ltd, Bristol.
External links
Thermopedia: Thermal Conductivity
Contribution of Interionic Forces to the Thermal Conductivity of Dilute Electrolyte Solutions The Journal of Chemical Physics 41, 3924 (1964)
The importance of Soil Thermal Conductivity for power companies
Thermal Conductivity of Gas Mixtures in Chemical Equilibrium. II The Journal of Chemical Physics 32, 1005 (1960)
Heat conduction
Heat transfer
Physical quantities
Thermodynamic properties | Thermal conductivity and resistivity | Physics,Chemistry,Mathematics | 7,090 |
22,328,498 | https://en.wikipedia.org/wiki/Efferocytosis | In cell biology, efferocytosis (from efferre, Latin for 'to carry out' (to the grave), extended meaning 'to bury') is the process by which apoptotic cells are removed by phagocytic cells. It can be regarded as the 'burying of dead cells'.
During efferocytosis, the cell membrane of phagocytic cells engulfs the apoptotic cell, forming a large fluid-filled vesicle containing the dead cell. This ingested vesicle is called an efferosome (in analogy to the term phagosome). This process is similar to macropinocytosis.
Mechanism
For apoptosis, the effect of efferocytosis is that dead cells are removed before their membrane integrity is breached and their contents leak into the surrounding tissue. This prevents exposure of tissue to toxic enzymes, oxidants and other intracellular components such as proteases and caspases.
Efferocytosis can be performed not only by 'professional' phagocytic cells such as macrophages or dendritic cells, but also by many other cell types including epithelial cells and fibroblasts. To distinguish them from living cells, apoptotic cells carry specific 'eat me' signals, such as the presence of phosphatidylserine (resulting from phospholipid flip-flop) or calreticulin on the outer leaflet of the cell membrane.
Downstream consequences
Efferocytosis triggers specific downstream intracellular signal transduction pathways, for example resulting in anti-inflammatory, anti-protease and growth-promoting effects. Conversely, impaired efferocytosis has been linked to autoimmune disease and tissue damage. Efferocytosis results in production by the ingesting cell of mediators such as hepatocyte- and vascular endothelial growth factor, which are thought to promote replacement of the dead cells.
Specialized pro-resolving mediators are cell-derived metabolites of certain polyunsaturated fatty acids, namely: arachidonic acid, which is metabolized to the lipoxins; eicosapentaenoic acid, which is metabolized to the resolvin Es; docosahexaenoic acid, which is metabolized to the resolvin Ds, maresins, and neuroprotectins; and n-3 docosapentaenoic acid, which is metabolized to the n-3 docosapentaenoic acid-derived resolvins and n-3 docosapentaenoic acid-derived neuroprotectins (see Specialized pro-resolving mediators). These mediators possess a broad range of overlapping activities which act to resolve inflammation; one of the important activities which many of these mediators possess is the stimulation of efferocytosis in inflamed tissues. Failure to form sufficient amounts of these mediators is proposed to be one cause of chronic and pathological inflammatory responses (see Specialized pro-resolving mediators#SPM and inflammation).
Clinical significance
Defective efferocytosis has been demonstrated in diseases such as cystic fibrosis and bronchiectasis, chronic obstructive pulmonary disease, asthma and idiopathic pulmonary fibrosis, rheumatoid arthritis, systemic lupus erythematosus, glomerulonephritis, and atherosclerosis.
Footnotes
Cellular processes | Efferocytosis | Biology | 738 |
69,333,714 | https://en.wikipedia.org/wiki/Lenovo%20ThinkPad%20X100e | The Lenovo ThinkPad X100e is a laptop from the ThinkPad line that was manufactured by Lenovo.
References
External links
Arch Linux Wiki - X100e
Thinkwiki.de - X100e
X100e
ThinkPad X100e
Computer-related introductions in 2010 | Lenovo ThinkPad X100e | Technology | 59 |
28,940,359 | https://en.wikipedia.org/wiki/Chlorophenoxy%20herbicide | Chlorophenoxy herbicides are a subclass of phenoxy herbicides which includes: MCPA, 2,4-D, 2,4,5-T and mecoprop. Large amounts have been produced since the 1950s for agriculture. Acute toxic effects after oral consumption are varied and may include: vomiting, abdominal pain, diarrhoea, gastrointestinal haemorrhage acutely followed by coma, hypertonia, hyperreflexia, ataxia, nystagmus, miosis, hallucinations and convulsions. Treatment with urinary alkalinization may be helpful but evidence to support this practice is limited.
See also
Health effects of pesticides
References
Herbicides
Phenol ethers
Chlorobenzene derivatives | Chlorophenoxy herbicide | Biology | 164 |