In taxonomy, kleptotype is an unofficial term for a stolen or unrightfully displaced type specimen or part of a type specimen. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
The term is composed of klepto- , from the Ancient Greek κλέπτω (kléptō) meaning "to steal", [ 5 ] [ 6 ] and -type referring to type specimens . It translates to "stolen type". [ citation needed ]
During the Second World War, biological collections, such as the herbarium in Berlin, were destroyed, leading to the loss of type specimens. [ 7 ] [ 8 ] In some cases only kleptotypes survived the destruction, as the type material had been removed from its original collection. [ 8 ] For instance, the type of Taxus celebica was thought to have been destroyed during the Second World War, but a kleptotype survived the war in Stockholm. [ 9 ]
Kleptotypes have been taken by researchers, who subsequently added their unauthorised type duplicates to their own collections. [ 10 ]
Taking kleptotypes has been criticised as destructive, wasteful, and unethical. The displacement of type material complicates the work of taxonomists, as species identities may become ambiguous when type material is missing. [ 10 ] It can cause problems, as researchers have to search multiple collections to get a complete perspective on the displaced material. [ 11 ] [ 10 ] To combat this issue it has been proposed to weigh specimens before loaning types, and to identify loss of material by comparing the type's weight upon return. [ 12 ] Also, in some herbaria, such as the herbarium at Kew, specimens are glued to the herbarium sheets to hinder the removal of plant material. However, this also makes it difficult to handle the specimens. [ 13 ]
The International Code of Nomenclature for algae, fungi, and plants (ICN) does not explicitly prohibit the removal of material from type specimens; however, it strongly recommends that type specimens be conserved properly. [ 2 ] It is paramount that types remain intact, as they are an irreplaceable resource [ 11 ] and point of reference. | https://en.wikipedia.org/wiki/Kleptotype |
The Klerer–May System is a programming language developed in the mid-1960s, oriented to numerical scientific programming, whose most notable feature is its two-dimensional syntax based on traditional mathematical notation .
For input and output, the Klerer–May system used a Friden Flexowriter modified to allow half-line motions for subscripts and superscripts. [ 1 ] The character set included digits, upper-case letters, subsets of 14 lower-case Latin letters and 18 Greek letters, arithmetic operators ( + − × / | ) and punctuation ( . , ( ) ), and eight special line-drawing characters (resembling ╲ ╱ ⎜ _ ⎨ ⎬ ˘ ⁔ ) used to construct multi-line brackets and symbols for summation , products , roots , and for multi-line division or fractions. [ 2 ] The system was intended to be forgiving of input mistakes, and easy to learn; its reference manual was only two pages. [ 3 ]
The system was developed by Melvin Klerer and Jack May at Columbia University 's Hudson Laboratories in Dobbs Ferry, New York , for the Office of Naval Research , and ran on GE-200 series computers. [ 2 ]
| https://en.wikipedia.org/wiki/Klerer–May_System |
The Klimisch score is a method of assessing the reliability of toxicological studies, mainly for regulatory purposes. It was proposed by H.J. Klimisch, M. Andreae and U. Tillmann of the chemical company BASF in 1997 in a paper entitled A Systematic Approach for Evaluating the Quality of Experimental Toxicological and Ecotoxicological Data , published in Regulatory Toxicology and Pharmacology . [ 1 ] It assigns studies to one of four categories as follows:
1. Reliable without restriction
2. Reliable with restrictions
3. Not reliable
4. Not assignable
The applicable guidelines are primarily the OECD Guidelines for the Testing of Chemicals and the EU Test Methods , along with other such methods. Often studies are performed to more than one test guideline where the guidelines agree on the requirements. GLP is Good Laboratory Practice .
The scoring system is the standard method used in EU regulatory schemes (e.g. the REACH Regulation ). Generally, only Klimisch scores of 1 or 2 can be used by themselves to cover an endpoint. However, Klimisch score 3 and 4 data can still be used as supporting studies or as part of a weight of evidence approach. The Klimisch score can be found as a standard field within the IUCLID database.
ECHA has produced guidance on how to assess the reliability of data. [ 2 ]
The Klimisch score has been criticized for favoring studies conducted under Good Laboratory Practice guidelines, which are mostly industry-funded studies. [ 3 ] A study rated reliable according to the Klimisch score can actually be highly flawed. [ 4 ] The Klimisch score does not assess a number of study design criteria, such as randomization , blinding , and sample size calculation. [ 5 ]
The ToxRTool was developed to assist with Klimisch scoring. [ 6 ] [ 7 ] | https://en.wikipedia.org/wiki/Klimisch_score |
In mathematics, the Kline sphere characterization, named after John Robert Kline , is a topological characterization of a two-dimensional sphere in terms of what sort of subset separates it. Its proof was one of the first notable accomplishments of R. H. Bing ; Bing gave an alternate proof using brick partitioning in his paper Complementary domains of continuous curves . [ 1 ]
A simple closed curve in a two-dimensional sphere (for instance, its equator) separates the sphere into two pieces upon removal. If one removes a pair of points from a sphere, however, the remainder is connected . Kline's sphere characterization states that the converse is true: if a nondegenerate locally connected metric continuum is separated by every simple closed curve but by no pair of points, then it is a two-dimensional sphere. | https://en.wikipedia.org/wiki/Kline_sphere_characterization |
In petrophysics a Klinkenberg correction is a procedure for the calibration of permeability data obtained from a minipermeameter device. A more accurate correction factor can be obtained using the Knudsen correction . When using nitrogen gas for core plug measurements, the Klinkenberg correction is usually necessary due to the so-called Klinkenberg gas slippage effect. This takes place when the dimensions of the pore space approach the mean free path of the gas.
Under steady-state and laminar flow conditions, Klinkenberg [ 1 ] demonstrated that the permeability of porous media to gases is approximately a linear function of the reciprocal pressure.
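In symbols — using the standard form of the Klinkenberg relation, where $k_g$ is the apparent gas permeability, $k_\infty$ the Klinkenberg-corrected (equivalent liquid) permeability, $b$ the gas slip factor, and $\bar{P}$ the mean flowing pressure — the relation reads:

$$k_g = k_\infty\left(1 + \frac{b}{\bar{P}}\right) = k_\infty + k_\infty b\,\frac{1}{\bar{P}},$$

so a plot of measured $k_g$ against $1/\bar{P}$ is approximately a straight line whose intercept (the infinite-pressure limit) is $k_\infty$.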
When Klinkenberg defined the interactions to be considered, he supposed the existence of a layer (sometimes called the Knudsen layer), thinner than the molecular mean free path, adjacent to the pore wall, where only molecule-wall collisions occur and collisions among molecules can be ignored. Thus the slippage velocity, as obtained from Klinkenberg's approach, captures the contribution of molecule-wall interactions, and when this velocity is zero the Poiseuille velocity profile (which results from molecule-molecule interactions) is recovered. However, Klinkenberg's formulation ignores the transition flow region, where neither molecule-molecule nor molecule-wall interactions can be neglected because both play a relevant role. [ 2 ] The validity of Klinkenberg's linear function of the reciprocal pressure depends on the Knudsen number; for Knudsen numbers from 0.01 to 0.1 the Klinkenberg approach is acceptable.
Permeability is measured in the laboratory by encasing a core plug of known length and diameter in an air-tight sleeve (the Hassler Sleeve). A fluid of known viscosity is injected into the core plug while mounted in a steel chamber. The samples are either full diameter core samples that are intervals of whole core cut, typically 6 inches long, or 1-in plugs drilled from the cores. The pressure drop across the sample and the flow rate are measured and permeability is calculated using Darcy's law .
Normally, either nitrogen or brine can be used as a fluid. When high rates of flow can be maintained, the results are comparable. At low rates, air permeability will be higher than brine permeability. This is because gas does not adhere to the pore walls as liquid does, and the slippage of gases along the pore walls gives rise to an apparent dependence of permeability on pressure. This is called the Klinkenberg effect, and it is especially important in low-permeable rocks.
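In practice the correction is obtained by measuring the apparent gas permeability at several mean pressures and extrapolating to infinite pressure. A minimal sketch of that fit, using NumPy and hypothetical readings (all numbers invented for illustration):

```python
import numpy as np

# Hypothetical readings: mean pressure (atm) and measured (apparent)
# gas permeability (mD) at each pressure.
mean_pressure = np.array([1.0, 2.0, 4.0, 8.0])   # P-bar
gas_perm      = np.array([9.0, 7.0, 6.0, 5.5])   # k_g

# Klinkenberg model: k_g = k_inf + (k_inf * b) * (1 / P-bar).
# A straight-line fit of k_g against 1/P-bar gives the corrected
# permeability k_inf as the intercept (the 1/P-bar -> 0 limit).
x = 1.0 / mean_pressure
slope, intercept = np.polyfit(x, gas_perm, 1)

k_inf = intercept          # Klinkenberg-corrected permeability
b = slope / k_inf          # slip factor

print(f"k_inf = {k_inf:.2f} mD, b = {b:.2f} atm")
```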
In probe permeametry (mini-permeameter) measurement, nitrogen gas is injected from the probe into the core through a probe sealed to a core slab by a gasket . The gas flows from the end of a small-diameter tube that is sealed against the core surface. The pressure in the probe and the corresponding volumetric gas flow rate are measured together. The gas permeability is determined by the equation:
Where,
What can be obtained from a minipermeameter measurement is thus the gas permeability. Gas slippage occurs during the measurement because nitrogen is injected quickly from the probe into the core, and it is very difficult to reach equilibrium in such a short time span. Therefore, to obtain a permeability equivalent to the brine permeability at formation conditions, the Klinkenberg calibration is necessary. | https://en.wikipedia.org/wiki/Klinkenberg_correction |
In the theory of chemical reactivity , the Klopman–Salem equation describes the energetic change that occurs when two species approach each other in the course of a reaction and begin to interact, as their associated molecular orbitals begin to overlap with each other and atoms bearing partial charges begin to experience attractive or repulsive electrostatic forces. First described independently by Gilles Klopman [ 1 ] and Lionel Salem [ 2 ] in 1968, this relationship provides a mathematical basis for the key assumptions of frontier molecular orbital theory (i.e., theory of HOMO–LUMO interactions) and hard soft acid base (HSAB) theory . Conceptually, it highlights the importance of considering both electrostatic interactions and orbital interactions (and weighing the relative significance of each) when rationalizing the selectivity or reactivity of a chemical process.
In modern form, [ 3 ] the Klopman–Salem equation is commonly given as:
$$\Delta E = \Big(-\sum_{a,b}(q_a+q_b)\beta_{ab}S_{ab}\Big) + \Big(\sum_{k<\ell}\frac{Q_k Q_\ell}{\varepsilon R_{k\ell}}\Big) + \Big(\sum_r^{\mathrm{occ.}}\sum_s^{\mathrm{unocc.}} - \sum_s^{\mathrm{occ.}}\sum_r^{\mathrm{unocc.}} \frac{2\big(\sum_{a,b}c_{ra}c_{sb}\beta_{ab}\big)^2}{E_r - E_s}\Big),$$
where:
$q_a$ is the electron population in atomic orbital $a$,
$\beta_{ab}$, $S_{ab}$ are the resonance and overlap integrals for the interaction of atomic orbitals $a$ and $b$,
$Q_k$ is the total charge on atom $k$,
$\varepsilon$ is the local dielectric constant,
$R_{k\ell}$ is the distance between the nuclei of atoms $k$ and $\ell$,
$c_{ra}$ is the coefficient of atomic orbital $a$ in molecular orbital $r$, and
$E_r$ is the energy of molecular orbital $r$.
Broadly speaking, the first term describes the closed-shell repulsion of the occupied molecular orbitals of the reactants (contribution from four-electron filled–filled interactions, exchange interactions or Pauli repulsion [ 4 ] ). The second term describes the coulombic attraction or repulsion between the atoms of the reactants (contribution from ionic interactions, electrostatic effects or coulombic interactions ). Finally, the third term accounts for all possible interactions between the occupied and unoccupied molecular orbitals of the reactants (contribution from two-electron filled–unfilled interactions, stereoelectronic effects or electron delocalization [ 5 ] ). Although conceptually useful, the Klopman–Salem equation seldom serves as the basis for energetic analysis in modern quantum chemical calculations.
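As a toy numerical illustration of how the third (orbital-interaction) term behaves — the helper function and every number below are invented placeholders, not values for any real system:

```python
# Toy evaluation of the third (orbital-interaction) term of the
# Klopman-Salem equation for one occupied/unoccupied orbital pair:
#   dE = 2 * (sum_{a,b} c_ra * c_sb * beta_ab)^2 / (E_r - E_s)

def orbital_interaction_term(c_r, c_s, beta, E_r, E_s):
    """Stabilization from one filled (r) / unfilled (s) orbital pair."""
    coupling = sum(cr * cs * b for cr, cs, b in zip(c_r, c_s, beta))
    return 2 * coupling ** 2 / (E_r - E_s)

# Same coupling strength, two different energy gaps (energies in eV):
small_gap = orbital_interaction_term([0.7], [0.7], [-2.0], E_r=-9.0, E_s=-1.0)
large_gap = orbital_interaction_term([0.7], [0.7], [-2.0], E_r=-9.0, E_s=+3.0)

print(small_gap, large_gap)   # -0.24 vs -0.16 (eV)
# The energetically closer pair gives the larger stabilization, which is
# why analysis can often focus on the HOMO-LUMO pair alone.
```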
Because of the difference in MO energies appearing in the denominator of the third term, energetically close orbitals make the biggest contribution. Hence, approximately speaking, analysis can often be simplified by considering only the highest occupied and lowest unoccupied molecular orbitals of the reactants (the HOMO–LUMO interaction in frontier molecular orbital theory). [ 6 ] The relative contributions of the second (ionic) and third (covalent) terms play an important role in justifying HSAB theory, with hard–hard interactions governed by the ionic term and soft-soft interactions governed by the covalent term. [ 7 ] | https://en.wikipedia.org/wiki/Klopman–Salem_equation |
Klotz Digital AG was a manufacturer of audio media products based in Munich, Germany ; it was founded in 1990 and acquired by United Screens Media AG in 2009. The company was active in the two business segments Public Address and Radio & TV Broadcast . Its products include systems for radio broadcast, television broadcast, live sound , public address , and commercial sound.
Klotz Digital was founded in 1990 by Thomas Klotz.
The company's products were first used in live sound installations and later in the 1990s found their way into broadcast facilities. In 2002 the company entered into the public address market with a digital public address product line named Varizone . [ 1 ] The live sound, broadcast, and public address markets were the main markets for the company.
At the end of 2009, Klotz Digital AG was acquired by United Screens Media AG. Thomas Klotz resigned from his position, and Dr. Andreas Gruettner, then CEO of United Screens Media AG, was appointed Klotz Digital's new CEO. [ 2 ] The company was then renamed QPhonics AG; after its insolvency in 2013 it was turned into a company named Qphonics GmbH, which itself went into insolvency in 2015.
Klotz Communications GmbH, a new company founded by Thomas Klotz and his partner Andre Sauer, purchased the assets of the former Qphonics GmbH from the company's insolvency lawyers. Klotz Communications is now the sole owner of all intellectual property, including hardware and software, and controls all licensing, maintenance and upgrades. [ 3 ] [ 4 ]
The broadcast products range from stand-alone on-air mixing consoles for radio and TV stations to a suite of products to enable efficient workflows in large broadcast facilities and production studios. | https://en.wikipedia.org/wiki/Klotz_Digital |
A kludge or kluge ( / k l ʌ dʒ , k l uː dʒ / ) is a workaround or makeshift solution that is clumsy, inelegant, inefficient, difficult to extend, and hard to maintain. This term is used in diverse fields such as computer science , aerospace engineering , Internet slang , evolutionary neuroscience , animation and government. It is similar in meaning to the naval term jury rig .
The word has alternate spellings ( kludge and kluge ), pronunciations ( / k l ʌ dʒ / and / k l uː dʒ / , rhyming with judge and stooge , respectively), and several proposed etymologies .
The Oxford English Dictionary (2nd ed., 1989), cites Jackson W. Granholm's 1962 "How to Design a Kludge" article [ 1 ] in the American computer magazine Datamation . [ 2 ]
kludge /kluːdʒ/ Also kluge . (J. W. Granholm's jocular invention: see first quot.; cf. also bodge v., fudge v.) 'An ill-assorted collection of poorly-matching parts, forming a distressing whole' (Granholm); esp. in Computing , a machine, system, or program that has been improvised or 'bodged' together; a hastily improvised and poorly thought-out solution to a fault or 'bug'. ...
OED defines these two kludge cognates as: bodge 'to patch or mend clumsily' and fudge 'to fit together or adjust in a clumsy, makeshift, or dishonest manner'. The OED entry also includes the verb kludge ('to improvise with a kludge or kludges') and kludgemanship ('skill in designing or applying kludges').
Granholm humorously imagined a fictitious source for the term: [ 1 ]
Phineas Burling is the chief calligrapher with the Fink and Wiggles Publishing Company, Inc. ... According to Burling, the word "kludge" first appeared in the English language in the early fifteen-hundreds. ...
The word "kludge" is, according to Burling, derived from the same root as the German klug (Dutch kloog , Swedish klag , Danish klog , Gothic klaugen , Lettish [Latvian] kladnis and Sanskrit veklaunn ), originally meaning 'smart' or 'witty'. In the typical machinations of language in evolutionary growth, the word "kludge" eventually came to mean 'not so smart' or 'pretty ridiculous' .... Today "kludge" forms one of the most beloved words in design terminology, and it stands ready for handy application to the work of anyone who gins up 110-volt circuitry to plug into the 220 VAC source. The building of a kludge, however, is not work for amateurs.
Although OED accepts Granholm's coinage of the term (not the fanciful pseudo-etymology quoted above), there are examples of its use before the 1960s.
American Yiddish speakers use klug ( קלוג ) to mean 'too smart by half', the reflected meaning of German klug ('clever'). This may explain the idea of 'clever but clumsy and temporary', as well as the pronunciation variation from German. [ 3 ] A reasonable translation of kludge into German yields Krücke i.e. 'crutch' (cf. bridge vs. Brücke ).
Cf. German Kloß ('dumpling', 'clod', diminutive Klößchen ), Low Saxon klut , klute , Dutch kluit , [ 4 ] perhaps related to Low German diminutive klütje ('dumpling', 'clod'), standard Danish kludder ('mess, disorder, clutter') and Danish Jutland dialect klyt ('piece of bad workmanship'). [ 5 ]
Arguments against the derivation from German klug :
An alternative etymology [ 6 ] suggests that the kludge spelling in particular derives ultimately from a word in Scots (a language closely related to English): cludge or cludgie/cludgey meaning 'toilet' (in either the room or device sense), [ 7 ] with the kluge spelling possibly deriving from German, until the two terms were confused in the mid-20th century, as British and American (respectively) military slang. [ 6 ] (See below .)
The Jargon File (a.k.a. The New Hacker's Dictionary ), a glossary of computer programmer slang maintained by Eric S. Raymond , differentiates kludge from kluge and cites usage examples pre-dating 1962. Kluge seems to have the sense of 'overcomplicated', while kludge has only the sense of 'poorly done'. [ 6 ]
kludge /kluhj/
This Jargon File entry notes that kludge apparently derives via British military slang from Scots cludge/cludgie ('toilet'), and became confused with American kluge during or after World War II. [ 6 ]
kluge : /klooj/ [from the German klug , 'clever'; poss. related to Polish & Russian klucz ('a key, a hint, a main point')]
This entry notes kluge , which is now often spelled kludge , "was the original spelling, reported around computers as far back as the mid-1950s and, at that time, used exclusively of hardware kluges". [ 6 ]
Kluge "was common Navy slang in the World War II era for any piece of electronics that worked well on shore but consistently failed at sea". [ 6 ] A summary of a 1947 article in the New York Folklore Quarterly states: [ 8 ] [ 9 ]
On being drafted into the navy, Murgatroyd gave his profession as "kluge maker" .... Whenever Murgatroyd was asked what he was doing, he said he was making a kluge, and actually he was one of the world's best kluge makers. Not wanting to seem ignorant, his superiors kept giving him commendations and promotions. ... One day ... the admiral asked him what a kluge was – the first person ever to do so. Murgatroyd said it was hard to explain, but he would make one so the admiral could see what it was. After a couple of days, he returned with a complex object.
"Interesting," said the admiral, "but what does it do?" In reply, Murgatroyd dropped it over the side of the ship. As the thing sank, it went "kluge".
The Jargon File further includes kluge around , 'to avoid a bug or difficult condition by inserting a kluge', and kluge up , 'to lash together a quick hack to perform a task'.
After Granholm's 1962 article popularized the kludge variant, both were interchangeably used and confused. The Jargon File concludes: [ 6 ]
The result of this history is a tangle. Many younger U.S. hackers pronounce the word as /klooj/ but spell it, incorrectly for its meaning and pronunciation, as 'kludge'. ... British hackers mostly learned /kluhj/ orally, use it in a restricted negative sense and are at least consistent. European hackers have mostly learned the word from written American sources and tend to pronounce it /kluhj/ but use the wider American meaning! Some observers consider this mess appropriate in view of the word's meaning.
In aerospace , a kludge was a temporary design using separate, commonly available components that were not flightworthy, in order to prove out the design and enable concurrent software development while the integrated components were developed and manufactured. The term was in common enough use to appear in a fictional movie about the US space program. [ 10 ]
Perhaps the ultimate kludge was the first US space station , Skylab . Its two major components, the Saturn Workshop and the Apollo Telescope Mount , began development as separate projects (the SWS was kludged from the S-IVB stage of the Saturn IB and Saturn V launch vehicles; the ATM was kludged from an early design for the descent stage of the Apollo Lunar Module ). Later the SWS and ATM were folded into the Apollo Applications Program , but the components were to have been launched separately, then docked in orbit. In the final design, the SWS and ATM were launched together, but for the single-launch concept to work, the ATM had to pivot 90 degrees on a truss structure from its launch position to its on-orbit orientation, clearing the way for the crew to dock its Apollo Command/Service Module at the axial docking port of the Multiple Docking Adapter.
The Airlock Module's manufacturer, McDonnell Douglas , even recycled the hatch design from its Gemini spacecraft and kludged what was originally designed for the conical Gemini Command Module onto the cylindrical Skylab Airlock Module. The Skylab project, managed by the National Aeronautics and Space Administration 's Marshall Space Flight Center , was seen by the Manned Spacecraft Center (later Johnson Space Center ) as an invasion of its historical role as the NASA center for manned spaceflight. Thus, MSC personnel missed no opportunity to disparage the Skylab project, calling it "the kludge". [ 11 ]
In modern computing terminology, a "kludge" (or often a " hack ") is a solution to a problem, the performance of a task, or a system fix which is inefficient, inelegant ("hacky"), or even incomprehensible, but which somehow works. It is similar to a workaround , but quick. To "kludge around something" is to avoid a bug or difficulty by building a kludge, perhaps exploiting properties of the bug itself. A kludge is often used to modify a working system while avoiding fundamental changes, or to ensure backwards compatibility. Hack can also be used with a positive connotation, for a quick solution to a frustrating problem. [ 12 ] [ 13 ]
A kludge is often used to fix an unanticipated problem in an earlier kludge; this is essentially a kind of cruft .
A solution might be a kludge if it fails in corner cases . An intimate knowledge of the problem domain and execution environment is typically required to build a corner-case kludge. More commonly, a kludge is a heuristic which was expected to work almost always, but ends up failing often.
A 1960s Soviet anecdote tells of a computer part which needed a slightly delayed signal to work. Rather than setting up a timing system, the kludge was to connect long coils of internal wires to slow the electrical signal.
Another type of kludge is the evasion of an unknown problem or bug in a computer program . Rather than continue to struggle to diagnose and fix the bug, the programmer may write additional code to compensate. For example, if a variable keeps ending up doubled, a kludge may be to add later code that divides by two rather than to search for the original incorrect computation.
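A minimal, entirely invented illustration of this pattern in code — both the bug and the compensating kludge are placeholders:

```python
# Somewhere upstream, a buggy routine double-counts each order total.
def summarize_orders(orders):
    total = 0
    for amount in orders:
        total += amount
        total += amount   # BUG: each amount is added twice
    return total

# The kludge: rather than finding and fixing the doubling above,
# later code quietly compensates for it.
def report_revenue(orders):
    return summarize_orders(orders) / 2   # kludge: undo the double-count

print(report_revenue([10, 20, 30]))  # 60.0 -- "correct", for now
```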
In computer networking, use of NAT (Network Address Translation) (RFC 1918) or PAT (Port Address Translation) to cope with the shortage of IPv4 addresses is an example of a kludge.
In FidoNet terminology, kludge refers to a piece of control data embedded inside a message.
The kludge or kluge metaphor has been adapted in fields such as evolutionary neuroscience , particularly in reference to the human brain .
The neuroscientist David Linden discusses how intelligent design proponents have misconstrued brain anatomy: [ 14 ]
The transcendent aspects of our human experience, the things that touch our emotional and cognitive core, were not given to us by a Great Engineer. These are not the latest design features of an impeccably crafted brain. Rather, at every turn, brain design has been a kludge, a workaround, a jumble, a pastiche. The things we hold highest in our human experience (love, memory, dreams, and a predisposition for religious thought) result from a particular agglomeration of ad hoc solutions that have been piled on through millions of years of evolution history. It's not that we have fundamentally human thoughts and feelings despite the kludgy design of the brain as molded by the twists and turns of evolutionary history. Rather, we have them precisely because of that history.
The research psychologist Gary Marcus 's book Kluge: The Haphazard Construction of the Human Mind compares evolutionary kluges with engineering ones, like manifold vacuum-powered windshield wipers – when accelerating or driving uphill, "Your wipers slowed to a crawl, or even stopped working altogether." Marcus described a biological kluge: [ 15 ]
For instance, the vertebrate eye's retina that is installed backward, facing the back of the head rather than the front. As a result, all kinds of stuff gets in its way, including a bunch of wiring that passes through the eye and leaves us with a pair of blind spots , one in each eye.
In John Varley 's 1985 short story "Press Enter_", the antagonist, a reclusive hacker, adopts the identity Charles Kluge.
In the science fiction television series Andromeda , genetically engineered human beings called Nietzscheans use the term disparagingly to refer to genetically unmodified humans.
In a 2012 article, political scientist Steven Teles used the term "kludgeocracy" to criticize the complexity of social welfare policy in the United States. Teles argues that institutional and political obstacles to passing legislation often drive policy makers to accept expedient fixes rather than carefully thought out reforms. [ 16 ] [ 17 ] | https://en.wikipedia.org/wiki/Kludge |
The Klumpke-Roberts Award , one of seven international and national awards for service to astronomy and astronomy education given by the Astronomical Society of the Pacific , was established from a bequest by astronomer Dorothea Klumpke-Roberts to honor her husband Isaac Roberts and her parents.
It recognizes outstanding contributions to the public understanding and appreciation of astronomy. [ 1 ] It is open to "individuals involved in science, education, writing/publishing, broadcasting, astronomy popularization, the arts, or other pursuits" from all nations and is the most prestigious award of its kind.
| https://en.wikipedia.org/wiki/Klumpke-Roberts_Award |
In stereochemistry , the Klyne–Prelog system (named for William Klyne and Vladimir Prelog ) for describing conformations about a single bond offers a more systematic means to unambiguously name complex structures, where the torsional or dihedral angles are not found to occur in 60° increments. [ 1 ] Klyne notation views the placement of the substituent on the front atom as being in regions of space called anti/syn and clinal/periplanar relative to a reference group on the rear atom. A plus (+) or minus (−) sign is placed at the front to indicate the sign of the dihedral angle. Anti or syn indicates the substituents are on opposite sides or the same side, respectively. Clinal substituents are found within 30° of either side of a dihedral angle of 60° (from 30° to 90°), 120° (90°–150°), 240° (210°–270°), or 300° (270°–330°). Periplanar substituents are found within 30° of either 0° (330°–30°) or 180° (150°–210°). Juxtaposing the designations produces the following terms for the conformers of butane (see Alkane stereochemistry for an explanation of conformation nomenclature): gauche butane is syn-clinal ( +sc or −sc , depending on the enantiomer ), anti butane is anti-periplanar , and eclipsed butane is syn-periplanar . [ 2 ]
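These angular ranges translate directly into a small classifier; the helper below (the function name and boundary conventions are our own choices) maps a dihedral angle in degrees to its Klyne–Prelog descriptor:

```python
def klyne_prelog(angle):
    """Classify a dihedral angle (degrees) into its Klyne-Prelog region.
    The angle is reduced to (-180, 180]; positive angles give '+'."""
    a = angle % 360
    if a > 180:
        a -= 360                  # now a is in (-180, 180]
    mag, sign = abs(a), '+' if a >= 0 else '-'
    if mag <= 30:
        return 'sp'               # syn-periplanar
    if mag < 90:
        return sign + 'sc'        # syn-clinal (gauche)
    if mag <= 150:
        return sign + 'ac'        # anti-clinal
    return 'ap'                   # anti-periplanar

print(klyne_prelog(60))    # +sc  (gauche butane)
print(klyne_prelog(180))   # ap   (anti butane)
print(klyne_prelog(0))     # sp   (eclipsed butane)
```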
| https://en.wikipedia.org/wiki/Klyne–Prelog_system |
In mathematics , a partially ordered set P is said to have Knaster's condition upwards (sometimes property (K) ) if any uncountable subset A of P has an upwards-linked uncountable subset. An analogous definition applies to Knaster's condition downwards .
The property is named after Polish mathematician Bronisław Knaster .
Knaster's condition implies the countable chain condition (ccc), and it is sometimes used in conjunction with a weaker form of Martin's axiom , where the ccc requirement is replaced with Knaster's condition. Not unlike ccc, Knaster's condition is also sometimes used as a property of a topological space , in which case it means that the topology (as in, the family of all open sets), ordered by inclusion, satisfies the condition.
Furthermore, assuming MA ( ω 1 {\displaystyle \omega _{1}} ), ccc implies Knaster's condition, making the two equivalent.
| https://en.wikipedia.org/wiki/Knaster's_condition |
The Knaster–Kuratowski–Mazurkiewicz lemma is a basic result in mathematical fixed-point theory published in 1929 by Knaster , Kuratowski and Mazurkiewicz . [ 1 ]
The KKM lemma can be proved from Sperner's lemma and can be used to prove the Brouwer fixed-point theorem .
Let Δ n − 1 {\displaystyle \Delta _{n-1}} be an ( n − 1 ) {\displaystyle (n-1)} -dimensional simplex with n vertices labeled as 1 , … , n {\displaystyle 1,\ldots ,n} .
A KKM covering is defined as a set C 1 , … , C n {\displaystyle C_{1},\ldots ,C_{n}} of closed sets such that for any I ⊆ { 1 , … , n } {\displaystyle I\subseteq \{1,\ldots ,n\}} , the convex hull of the vertices corresponding to I {\displaystyle I} is covered by ⋃ i ∈ I C i {\displaystyle \bigcup _{i\in I}C_{i}} .
The KKM lemma says that in every KKM covering, the common intersection of all n sets is nonempty , i.e.: $\bigcap_{i=1}^{n} C_i \neq \emptyset$.
When $n=3$, the KKM lemma considers the simplex $\Delta_2$, which is a triangle whose vertices can be labeled 1, 2 and 3. We are given three closed sets $C_1, C_2, C_3$ such that:
$C_1$ covers vertex 1, $C_2$ covers vertex 2, and $C_3$ covers vertex 3;
the edge between vertices 1 and 2 is covered by $C_1 \cup C_2$, the edge between vertices 2 and 3 is covered by $C_2 \cup C_3$, and the edge between vertices 1 and 3 is covered by $C_1 \cup C_3$;
the whole triangle is covered by $C_1 \cup C_2 \cup C_3$.
The KKM lemma states that the sets C 1 , C 2 , C 3 {\displaystyle C_{1},C_{2},C_{3}} have at least one point in common.
The lemma is illustrated by the picture on the right, in which set #1 is blue, set #2 is red and set #3 is green. The KKM requirements are satisfied, since: each vertex is covered by the set of its own label, each edge is covered by the union of the two sets labeled by its endpoints, and the triangle is covered by the union of all three sets.
The KKM lemma states that there is a point covered by all three colors simultaneously; such a point is clearly visible in the picture.
Note that it is important that all sets are closed, i.e., contain their boundary. If, for example, the red set is not closed, then it is possible that the central point is contained only in the blue and green sets, and then the intersection of all three sets may be empty.
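A concrete KKM covering for a quick machine check is $C_i = \{x : x_i \geq 1/n\}$ in barycentric coordinates (our choice for illustration): on the face spanned by $I$ the coordinates indexed by $I$ sum to 1, so some $x_i \geq 1/|I| \geq 1/n$, and the barycenter lies in every $C_i$. The sketch below verifies both facts on a grid:

```python
import itertools

n = 3          # triangle: barycentric coordinates (x1, x2, x3)
STEPS = 60     # grid resolution

# Closed sets C_i = { x in simplex : x_i >= 1/n } -- a KKM covering.
def in_C(i, x):
    return x[i] >= 1.0 / n

# Grid over the simplex via integer compositions of STEPS.
def simplex_grid():
    for c in itertools.product(range(STEPS + 1), repeat=n - 1):
        if sum(c) <= STEPS:
            yield tuple(v / STEPS for v in c) + ((STEPS - sum(c)) / STEPS,)

# KKM condition: every point of the face spanned by I is in some C_i, i in I.
for I in itertools.chain.from_iterable(
        itertools.combinations(range(n), r) for r in range(1, n + 1)):
    for x in simplex_grid():
        if all(x[j] == 0 for j in range(n) if j not in I):  # x lies on face I
            assert any(in_C(i, x) for i in I)

# Common intersection is nonempty: the barycenter lies in every C_i.
barycenter = tuple(1.0 / n for _ in range(n))
assert all(in_C(i, barycenter) for i in range(n))
print("KKM covering verified; common point:", barycenter)
```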
There are several fixed-point theorems which come in three equivalent variants: an algebraic topology variant, a combinatorial variant and a set-covering variant. Each variant can be proved separately using totally different arguments, but each variant can also be reduced to the other variants in its row. Additionally, each result in the top row can be deduced from the one below it in the same column. [ 2 ]
David Gale proved the following generalization of the KKM lemma. [ 3 ] Suppose that, instead of one KKM covering, we have n different KKM coverings: $C_1^1, \ldots, C_n^1, \ldots, C_1^n, \ldots, C_n^n$. Then, there exists a permutation $\pi$ of the coverings with a non-empty intersection, i.e.: $\bigcap_{i=1}^{n} C_i^{\pi(i)} \neq \emptyset$.
The name "rainbow KKM lemma" is inspired by Gale's description of his lemma:
"A colloquial statement of this result is... if each of three people paint a triangle red, white and blue according to the KKM rules, then there will be a point which is in the red set of one person, the white set of another, the blue of the third". [ 3 ]
The rainbow KKM lemma can be proved using a rainbow generalization of Sperner's lemma . [ 4 ]
The original KKM lemma follows from the rainbow KKM lemma by simply picking n identical coverings.
A connector of a simplex is a connected set that touches all n faces of the simplex.
A connector-free covering is a covering C 1 , … , C n {\displaystyle C_{1},\ldots ,C_{n}} in which no C i {\displaystyle C_{i}} contains a connector.
Any KKM covering is a connector-free covering, since in a KKM covering, no C i {\displaystyle C_{i}} even touches all n faces. However, there are connector-free coverings that are not KKM coverings. An example is illustrated at the right. There, the red set touches all three faces, but it does not contain any connector, since no connected component of it touches all three faces.
A theorem of Ravindra Bapat , generalizing Sperner's lemma , [ 5 ] : chapter 16, pp. 257–261 implies the KKM lemma extends to connector-free coverings (he proved his theorem for n = 3 {\displaystyle n=3} ).
The connector-free variant also has a permutation variant, so that both these generalizations can be used simultaneously.
The KKMS theorem is a generalization of the KKM lemma by Lloyd Shapley . It is useful in economics , especially in cooperative game theory . [ 6 ]
While a KKM covering contains n closed sets, a KKMS covering contains 2 n − 1 {\displaystyle 2^{n}-1} closed sets - indexed by the nonempty subsets of [ n ] {\displaystyle [n]} (equivalently: by nonempty faces of Δ n − 1 {\displaystyle \Delta _{n-1}} ). For any I ⊆ [ n ] {\displaystyle I\subseteq [n]} , the convex hull of the vertices corresponding to I {\displaystyle I} should be covered by the union of sets corresponding to subsets of I {\displaystyle I} , that is:
$$\operatorname{conv}(\{v_i : i \in I\}) \subseteq \bigcup_{J \subseteq I} C_J.$$
Any KKM covering is a special case of a KKMS covering. In a KKM covering, the n sets corresponding to singletons are nonempty, while the other sets are empty. However, there are many other KKMS coverings.
In general, it is not true that the common intersection of all $2^n - 1$ sets in a KKMS covering is nonempty; this is illustrated by the special case of a KKM covering, in which most sets are empty.
The KKMS theorem says that, in every KKMS covering, there is a balanced collection $B$ of subsets of $[n]$, such that the intersection of the sets indexed by $B$ is nonempty: [ 7 ] $\bigcap_{J \in B} C_J \neq \emptyset$.
It remains to explain what a "balanced collection" is. A collection $B$ of subsets of $[n]$ is called balanced if there is a weight function on $B$ (assigning a weight $w_J \geq 0$ to every $J \in B$), such that, for each element $i \in [n]$, the sum of weights of all subsets containing $i$ is exactly 1. For example, suppose $n = 3$. Then: the collection {{1}, {2}, {3}} is balanced (choose all weights to be 1); the collection {{1,2}, {2,3}, {3,1}} is balanced (choose all weights to be 1/2); and the collection {{1,2}, {3}} is balanced (choose all weights to be 1).
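Balancedness with given weights is a one-line check; the sketch below (the helper name is ours) verifies the examples above with exact rational weights:

```python
from fractions import Fraction

def is_balanced(collection, weights, n):
    """Check that for every element i in {1..n}, the weights of the
    sets containing i sum to exactly 1."""
    return all(
        sum(w for S, w in zip(collection, weights) if i in S) == 1
        for i in range(1, n + 1)
    )

half = Fraction(1, 2)
print(is_balanced([{1}, {2}, {3}],          [1, 1, 1],          n=3))  # True
print(is_balanced([{1, 2}, {2, 3}, {3, 1}], [half, half, half], n=3))  # True
print(is_balanced([{1, 2}, {3}],            [1, 1],             n=3))  # True
print(is_balanced([{1, 2}, {1, 3}],         [half, half],       n=3))  # False
```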
In hypergraph terminology , a collection B is balanced with respect to its ground-set V , iff the hypergraph with vertex-set V and edge-set B admits a perfect fractional matching.
The KKMS theorem implies the KKM lemma. [ 7 ] Suppose we have a KKM covering $C_i$, for $i = 1, \ldots, n$. Construct a KKMS covering $C'_J$ as follows: $C'_{\{i\}} = C_i$ for each $i \in [n]$, and $C'_J = \emptyset$ whenever $|J| \geq 2$.
The KKM condition on the original covering C i {\displaystyle C_{i}} implies the KKMS condition on the new covering C J ′ {\displaystyle C'_{J}} . Therefore, there exists a balanced collection such that the corresponding sets in the new covering have nonempty intersection. But the only possible balanced collection is the collection of all singletons; hence, the original covering has nonempty intersection.
The KKMS theorem has various proofs. [ 8 ] [ 9 ] [ 10 ]
Reny and Wooders proved that the balanced set can also be chosen to be partnered . [ 11 ]
Zhou proved a variant of the KKMS theorem where the covering consists of open sets rather than closed sets. [ 12 ]
Hidetoshi Komiya generalized the KKMS theorem from simplices to polytopes . [ 9 ] Let P be any compact convex polytope. Let $\textrm{Faces}(P)$ be the set of nonempty faces of P . A Komiya covering of P is a family of closed sets $\{C_F : F \in \textrm{Faces}(P)\}$ such that for every face $F \in \textrm{Faces}(P)$: $F \subseteq \bigcup_{G \subseteq F,\, G \in \textrm{Faces}(P)} C_G$. Komiya's theorem says that for every Komiya covering of P , there is a balanced collection $B \subseteq \textrm{Faces}(P)$, such that the intersection of the sets indexed by $B$ is nonempty: [ 7 ] $\bigcap_{F \in B} C_F \neq \emptyset$.
Komiya's theorem also generalizes the definition of a balanced collection: instead of requiring that there is a weight function on $B$ such that the sum of weights near each vertex of P is 1, we start by choosing any set of points $\textbf{b} = \{b^F : F \in \textrm{Faces}(P),\, b^F \in F\}$. A collection $B \subseteq \textrm{Faces}(P)$ is called balanced with respect to $\textbf{b}$ iff $b^P \in \operatorname{conv}\{b^F : F \in B\}$, that is, the point assigned to the entire polytope P is a convex combination of the points assigned to the faces in the collection B .
The KKMS theorem is a special case of Komiya's theorem in which the polytope $P = \Delta_{n-1}$ and $b^F$ is the barycenter of the face F (in particular, $b^P$ is the barycenter of $\Delta_{n-1}$, which is the point $(1/n, \ldots, 1/n)$).
Oleg R. Musin proved several generalizations of the KKM lemma and KKMS theorem, with boundary conditions on the coverings. The boundary conditions are related to homotopy . [ 13 ] [ 14 ] | https://en.wikipedia.org/wiki/Knaster–Kuratowski–Mazurkiewicz_lemma |
In the mathematical areas of order and lattice theory , the Knaster–Tarski theorem , named after Bronisław Knaster and Alfred Tarski , states the following: Let $(L, \leq)$ be a complete lattice and let $f\colon L \to L$ be an order-preserving (monotone) function. Then the set of fixed points of f in L also forms a complete lattice under $\leq$.
It was Tarski who stated the result in its most general form, [ 1 ] and so the theorem is often known as Tarski's fixed-point theorem . Some time earlier, Knaster and Tarski established the result for the special case where L is the lattice of subsets of a set, the power set lattice. [ 2 ]
The theorem has important applications in formal semantics of programming languages and abstract interpretation , as well as in game theory .
A kind of converse of this theorem was proved by Anne C. Davis : If every order-preserving function f : L → L on a lattice L has a fixed point, then L is a complete lattice. [ 3 ]
Since complete lattices cannot be empty (they must contain a supremum and infimum of the empty set), the theorem in particular guarantees the existence of at least one fixed point of f , and even the existence of a least fixed point (or greatest fixed point ). In many practical cases, this is the most important implication of the theorem.
The least fixpoint of f is the least element x such that f ( x ) = x , or, equivalently, such that f ( x ) ≤ x ; the dual holds for the greatest fixpoint , the greatest element x such that f ( x ) = x .
If f (lim x n ) = lim f ( x n ) for all ascending sequences x n , then the least fixpoint of f is lim f n (0) where 0 is the least element of L , thus giving a more "constructive" version of the theorem. (See: Kleene fixed-point theorem .) More generally, if f is monotonic, then the least fixpoint of f is the stationary limit of f α (0), taking α over the ordinals , where f α is defined by transfinite induction : f α+1 = f ( f α ) and f γ for a limit ordinal γ is the least upper bound of the f β for all β ordinals less than γ. [ 4 ] The dual theorem holds for the greatest fixpoint.
For example, in theoretical computer science , least fixed points of monotonic functions are used to define program semantics , see Least fixed point § Denotational semantics for an example. Often a more specialized version of the theorem is used, where L is assumed to be the lattice of all subsets of a certain set ordered by subset inclusion . This reflects the fact that in many applications only such lattices are considered. One then usually is looking for the smallest set that has the property of being a fixed point of the function f . Abstract interpretation makes ample use of the Knaster–Tarski theorem and the formulas giving the least and greatest fixpoints.
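A minimal sketch of this on the powerset lattice — the graph, seed set, and helper names are our own illustration — computes a least fixed point by Kleene iteration from the bottom element:

```python
def least_fixpoint(f, bottom=frozenset()):
    """Iterate a monotone function on the powerset lattice from the
    bottom element until the value stops changing (Kleene iteration)."""
    x = bottom
    while True:
        nxt = f(x)
        if nxt == x:
            return x
        x = nxt

# Example monotone function: one step of reachability in a digraph.
edges = {1: [2], 2: [3], 3: [3], 4: [1]}
seeds = frozenset({1})

def step(s):
    # f(S) = seeds U successors(S); monotone, since unions preserve order.
    return seeds | frozenset(v for u in s for v in edges.get(u, []))

print(least_fixpoint(step))   # frozenset({1, 2, 3}) -- reachable from 1
```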
The Knaster–Tarski theorem can be used to give a simple proof of the Cantor–Bernstein–Schroeder theorem [ 5 ] [ 6 ] and it is also used in establishing the Banach–Tarski paradox .
Weaker versions of the Knaster–Tarski theorem can be formulated for ordered sets, but involve more complicated assumptions. For example: [ citation needed ]
This can be applied to obtain various theorems on invariant sets , e.g. Ok's theorem:
In particular, using the Knaster-Tarski principle one can develop the theory of global attractors for noncontractive discontinuous (multivalued) iterated function systems . For weakly contractive iterated function systems the Kantorovich theorem (known also as Tarski-Kantorovich fixpoint principle) suffices.
Other applications of fixed-point principles for ordered sets come from the theory of differential , integral and operator equations.
Let us restate the theorem.
For a complete lattice $\langle L, \leq \rangle$ and a monotone function $f\colon L \rightarrow L$ on L , the set of all fixpoints of f is also a complete lattice $\langle P, \leq \rangle$, with:
$\bigvee P = \bigvee \{x \in L \mid x \leq f(x)\}$, the greatest fixpoint of f ;
$\bigwedge P = \bigwedge \{x \in L \mid f(x) \leq x\}$, the least fixpoint of f .
Proof. We begin by showing that P has both a least element and a greatest element. Let D = { x | x ≤ f ( x )} and x ∈ D (we know that at least 0 L belongs to D ). Then because f is monotone we have f ( x ) ≤ f ( f ( x )) , that is f ( x ) ∈ D .
Now let u = ⋁ D {\displaystyle u=\bigvee D} ( u exists because D ⊆ L and L is a complete lattice). Then for all x ∈ D it is true that x ≤ u and f ( x ) ≤ f ( u ) , so x ≤ f ( x ) ≤ f ( u ) . Therefore, f ( u ) is an upper bound of D , but u is the least upper bound, so u ≤ f ( u ) , i.e. u ∈ D . Then f ( u ) ∈ D (because f ( u ) ≤ f ( f ( u ))) and so f ( u ) ≤ u from which follows f ( u ) = u . Because every fixpoint is in D we have that u is the greatest fixpoint of f .
The function f is monotone on the dual (complete) lattice $\langle L^{op}, \geq \rangle$. As we have just proved, its greatest fixpoint exists. It is the least fixpoint of f on L , so P has least and greatest elements; that is, more generally, every monotone function on a complete lattice has a least fixpoint and a greatest fixpoint.
For a , b in L we write [ a , b ] for the closed interval with bounds a and b : { x ∈ L | a ≤ x ≤ b } . If a ≤ b , then ⟨[ a , b ], ≤⟩ is a complete lattice.
It remains to be proven that P is a complete lattice. Let 1 L = ⋁ L {\displaystyle 1_{L}=\bigvee L} , W ⊆ P and w = ⋁ W {\displaystyle w=\bigvee W} . We show that f ([ w , 1 L ]) ⊆ [ w , 1 L ] . Indeed, for every x ∈ W we have x = f ( x ) and since w is the least upper bound of W , x ≤ f ( w ) . In particular w ≤ f ( w ) . Then from y ∈ [ w , 1 L ] follows that w ≤ f ( w ) ≤ f ( y ) , giving f ( y ) ∈ [ w , 1 L ] or simply f ([ w , 1 L ]) ⊆ [ w , 1 L ] . This allows us to look at f as a function on the complete lattice [ w , 1 L ]. Then it has a least fixpoint there, giving us the least upper bound of W . We've shown that an arbitrary subset of P has a supremum, that is, P is a complete lattice.
Chang, Lyuu and Ti [ 7 ] present an algorithm for finding a Tarski fixed-point in a totally-ordered lattice, when the order-preserving function is given by a value oracle . Their algorithm requires O ( log L ) {\displaystyle O(\log L)} queries, where L is the number of elements in the lattice. In contrast, for a general lattice (given as an oracle), they prove a lower bound of Ω ( L ) {\displaystyle \Omega (L)} queries.
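For intuition, the totally ordered case can be sketched as a binary search that maintains the invariant lo ≤ f(lo) and f(hi) ≤ hi, so a fixed point always survives in the interval (this is our illustration of the idea, not the authors' pseudocode):

```python
def tarski_fixpoint_chain(f, lo, hi):
    """Find a fixed point of an order-preserving f on the chain
    {lo, ..., hi}, assuming lo <= f(lo) and f(hi) <= hi.
    Each query halves the interval: O(log L) oracle calls."""
    while lo < hi:
        mid = (lo + hi) // 2
        v = f(mid)
        if v > mid:
            lo = mid + 1   # a fixed point still exists above mid
        elif v < mid:
            hi = mid - 1   # a fixed point still exists below mid
        else:
            return mid
    return lo

# Example: f is monotone on {0,...,100}; its unique fixed point is 25.
f = lambda x: min(x + 5, 25) if x < 25 else (max(25, x - 3) if x > 25 else 25)
print(tarski_fixpoint_chain(f, 0, 100))  # 25
```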
Deng, Qi and Ye [ 8 ] present several algorithms for finding a Tarski fixed-point. They consider two kinds of lattices: componentwise ordering and lexicographic ordering . They consider two kinds of input for the function f : value oracle , or a polynomial function. Their algorithms have the following runtime complexity (where d is the number of dimensions, and N i is the number of elements in dimension i ):
The algorithms are based on binary search . On the other hand, determining whether a given fixed point is unique is computationally hard:
For d =2, for componentwise lattice and a value-oracle, the complexity of O ( log 2 L ) {\displaystyle O(\log ^{2}L)} is optimal. [ 9 ] But for d >2, there are faster algorithms:
Tarski's fixed-point theorem has applications to supermodular games . [ 8 ] A supermodular game (also called a game of strategic complements [ 12 ] ) is a game in which the utility function of each player has increasing differences , so the best response of a player is a weakly-increasing function of other players' strategies. For example, consider a game of competition between two firms. Each firm has to decide how much money to spend on research. In general, if one firm spends more on research, the other firm's best response is to spend more on research too. Some common games can be modeled as supermodular games, for example Cournot competition , Bertrand competition and Investment Games .
Because the best-response functions are monotone, Tarski's fixed-point theorem can be used to prove the existence of a pure-strategy Nash equilibrium (PNE) in a supermodular game. Moreover, Topkis [ 13 ] showed that the set of PNE of a supermodular game is a complete lattice, so the game has a "smallest" PNE and a "largest" PNE.
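A small sketch of this machinery — the payoffs and strategy sets below are invented for illustration — finds the smallest and largest PNE by monotone best-response iteration from the bottom and top strategy profiles:

```python
A = range(6)  # each player's strategy set {0,...,5}, a finite chain

def u(ai, aj):
    # Illustrative supermodular payoff: increasing differences in (ai, aj).
    return ai * aj + 2 * ai - ai ** 2

def best_response(aj, prefer_low):
    best = max(u(ai, aj) for ai in A)
    cands = [ai for ai in A if u(ai, aj) == best]
    return min(cands) if prefer_low else max(cands)

def extreme_pne(prefer_low):
    # Start at the bottom (or top) profile; monotone best-response
    # iteration converges to the smallest (or largest) pure Nash eq.
    a = (min(A), min(A)) if prefer_low else (max(A), max(A))
    while True:
        b = (best_response(a[1], prefer_low), best_response(a[0], prefer_low))
        if b == a:
            return a
        a = b

print(extreme_pne(True))   # (1, 1): smallest PNE for this payoff
print(extreme_pne(False))  # (3, 3): largest PNE
```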
Echenique [ 14 ] presents an algorithm for finding all PNE in a supermodular game. His algorithm first uses best-response sequences to find the smallest and largest PNE; then, he removes some strategies and repeats, until all PNE are found. His algorithm is exponential in the worst case, but runs fast in practice. Deng, Qi and Ye [ 8 ] show that a PNE can be computed efficiently by finding a Tarski fixed-point of an order-preserving mapping associated with the game. | https://en.wikipedia.org/wiki/Knaster–Tarski_theorem |
A kneader reactor (or kneading reactor ) is a device used for mixing and kneading substances with high viscosity . Many industries, such as food processing , utilize kneader reactors to produce goods, for example polymers or chewing gum . Although the machine has existed for decades, kneader reactors have only recently gained popularity in the processing industry .
The kneading reactor is a horizontal mixing machine with two Sigma, or Z-type blades. These blades are driven by separate gears at different speeds, one running 1.5 times faster than the other. The reactor has one powerful motor and a speed reducer to drive the two blades. The kneader reactor usually has a W-type barrel with a hydraulic tilt that turns it, and a heating jacket outside.
The kneader reactor processes very high-viscosity materials such as chewing gum , dough , toffee , Plasticine , rubber , silicone , adhesive or resin. These materials have a viscosity of approximately 1,000,000 cP . They are mixed with reactants such as liquids, powders or slurries ; the reaction mass does not undergo a physical phase change while the reaction takes place.
If a phase change does occur during processing, the conventional technology requires the use of diluents (or dilutants). Diluents are solvents which decrease the viscosity of the reaction mass, enabling mixing in the reactor, and help to control the reaction temperature.
More recently, manufacturers have sought technological solutions that allow synthesis in the concentrated phase, minimizing or eliminating the use of solvents and thus intensifying the process. This "dry" process is possible in a kneader reactor.
The Sigma kneader was developed by Heinz List , a pioneer of modern industrial processing technology. List recognized that processing in the concentrated phase with little to no solvent, also known as "dry processing", would increase process yield per unit volume and would therefore be more profitable. List developed the reactor to overcome the technical complexities of processing in the concentrated phase.
Kneader reactors offer a number of technological advantages for dry processing:
Kneader reactor technology has long been used for what is known as “Process Intensification”, where multiple processing steps are performed in the same unit. Such units are characterized by high yield per performance volume and also have the flexibility to produce different grades and/or products. | https://en.wikipedia.org/wiki/Kneader_reactor |
A knee wall is a short wall , typically under three feet (one metre) in height, used to support the rafters in timber roof construction. In his book A Visual Dictionary of Architecture , Francis D. K. Ching defines a knee wall as "a short wall supporting rafters at some intermediate position along their length." [ 1 ] The knee wall provides support to rafters which therefore need not be large enough to span from the ridge to the eaves . Typically the knee wall is covered with plaster or gypsum board.
The term is derived from the association with a human knee, partly bent. Knee walls are common in houses in which the ceiling on the top floor is an attic , i.e. the ceiling is the underside of the roof and slopes down on one or more sides.
Since there is no legal definition of the knee wall height, further specification is required as to how it is to be measured. It is generally accepted that the knee wall begins at the upper edge of the floor structure below. There is no uniform convention for where the knee wall ends. At its greatest extent, the knee wall extends to the imaginary intersection of the outer wall with the upper edge of the rafter. The smallest dimension results when the knee wall is understood to mean only the wall that extends above the attic floor (excluding the base purlin). Between these possibilities, other methods of measurement are in use. [ 2 ]
| https://en.wikipedia.org/wiki/Knee_wall |
A Knelson concentrator is a type of gravity concentration apparatus, predominantly used in the gold mining industry. It is used for the recovery of fine particles of free gold, meaning gold that does not require gold cyanidation for recovery. [ 1 ]
The machines utilise the principles of a centrifuge to enhance the gravitational force experienced by feed particles to effect separation based on particle density. The key components of the unit are a cone shaped "concentrate" bowl, rotated at high speed by an electric motor and a pressurized water jacket encompassing the bowl. Feed material, typically from a ball mill discharge or cyclone underflow bleed, is fed as a slurry toward the centre of the bowl from above. The feed slurry contacts the base plate of the vessel and, due to its rotation, is thrust outward. The outer extremities of the concentrate bowl house a series of ribs and between each pair of ribs is a groove. During operation the lighter material flows upward over the grooves and heavy mineral particles (usually of economic value) become trapped within them. Pressurized water is injected through a series of tangential water inlets along the perimeter of each groove to maintain a fluidized bed of particles in which heavy mineral particles can be efficiently concentrated. The Knelson concentrator typically operates as a batch process, with lighter gangue material being continuously discharged via overflow and a heavy mineral concentrate periodically removed by flushing the bowl with water.
The first Knelson concentrator was developed by Byron Knelson in 1980. [ 2 ]
Knelson concentrators are used in a number of gold mines operated by AngloGold Ashanti , like Geita , Sunrise Dam and the Serra Grande Gold Mine, as well as Barrick's Bulyanhulu Gold Mine and the Tomingley Gold Operation (TGO). [ 3 ] | https://en.wikipedia.org/wiki/Knelson_concentrator |
In mathematics , the Kneser theorem can refer to two distinct theorems in the field of ordinary differential equations :
Consider an ordinary linear homogeneous differential equation of the form
$$y'' + q(x)\,y = 0$$
with $q\colon [0, +\infty) \to \mathbb{R}$ continuous.
We say this equation is oscillating if it has a solution y with infinitely many zeros, and non-oscillating otherwise.
The theorem states [ 1 ] that the equation is non-oscillating if
$$\limsup_{x \to +\infty} x^2 q(x) < \tfrac{1}{4}$$
and oscillating if
$$\liminf_{x \to +\infty} x^2 q(x) > \tfrac{1}{4}.$$
To illustrate the theorem consider
$$q(x) = \left(\tfrac{1}{4} - a\right) x^{-2} \quad \text{for} \quad x > 0,$$
where $a$ is real and non-zero. According to the theorem, solutions will be oscillating or not depending on whether $a$ is positive (non-oscillating) or negative (oscillating), because
$$\limsup_{x \to +\infty} x^2 q(x) = \liminf_{x \to +\infty} x^2 q(x) = \tfrac{1}{4} - a.$$
To find the solutions for this choice of $q(x)$, and verify the theorem for this example, substitute the 'Ansatz'
$$y(x) = x^r,$$
which gives
$$r(r-1) + \tfrac{1}{4} - a = \left(r - \tfrac{1}{2}\right)^2 - a = 0,$$
so that $r = \tfrac{1}{2} \pm \sqrt{a}$.
This means that (for non-zero $a$) the general solution is
$$y(x) = A\, x^{\frac{1}{2} + \sqrt{a}} + B\, x^{\frac{1}{2} - \sqrt{a}},$$
where $A$ and $B$ are arbitrary constants.
It is not hard to see that for positive $a$ the solutions do not oscillate, while for negative $a = -\omega^2$ the identity
$$x^{\frac{1}{2} \pm i\omega} = x^{\frac{1}{2}}\, e^{\pm i\omega \ln x} = x^{\frac{1}{2}}\big(\cos(\omega \ln x) \pm i \sin(\omega \ln x)\big)$$
shows that they do.
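The dichotomy can also be checked numerically — the sketch below integrates the equation for one positive and one negative value of $a$ with SciPy and counts sign changes of the solution; the endpoints, values of $a$, and tolerances are arbitrary choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

def count_zeros(a, x_end=1e4):
    # y'' + (1/4 - a) x^{-2} y = 0, integrated from x = 1 with y(1)=0, y'(1)=1.
    rhs = lambda x, y: [y[1], -(0.25 - a) / x**2 * y[0]]
    xs = np.linspace(1.0, x_end, 200_000)
    sol = solve_ivp(rhs, (1.0, x_end), [0.0, 1.0], t_eval=xs, rtol=1e-9)
    y = sol.y[0]
    return int(np.sum(np.sign(y[:-1]) * np.sign(y[1:]) < 0))

print(count_zeros(a=+1.0))   # 0: non-oscillating (y ~ x^{3/2} stays positive)
print(count_zeros(a=-1.0))   # 2: oscillating, zeros at x = e^{k*pi} in range
```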
The general result follows from this example by the Sturm–Picone comparison theorem .
There are many extensions to this result, such as the Gesztesy–Ünal criterion. [ 2 ]
While Peano's existence theorem guarantees the existence of solutions of certain initial values problems with continuous right hand side, H. Kneser's theorem deals with the topology of the set of those solutions. Precisely, H. Kneser's theorem states the following: [ 3 ] [ 4 ]
Let $f\colon \mathbb{R} \times \mathbb{R}^n \rightarrow \mathbb{R}^n$ be a continuous function on the region $\mathcal{R} := [t_0, t_0 + a] \times \{x \in \mathbb{R}^n : \Vert x - x_0 \Vert \leq b\}$, and such that $|f(t,x)| \leq M$ for all $(t,x) \in \mathcal{R}$.
Given a real number $c$ satisfying $t_0 < c \leq t_0 + \min(a, b/M)$, define the set $S_c$ as the set of points $x_c$ for which there is a solution $x = x(t)$ of $\dot{x} = f(t,x)$ such that $x(t_0) = x_0$ and $x(c) = x_c$. Then $S_c$ is a closed and connected set. | https://en.wikipedia.org/wiki/Kneser's_theorem_(differential_equations) |
In graph theory , the Kneser graph K ( n , k ) (alternatively KG n , k ) is the graph whose vertices correspond to the k -element subsets of a set of n elements , and where two vertices are adjacent if and only if the two corresponding sets are disjoint . Kneser graphs are named after Martin Kneser , who first investigated them in 1956.
The Kneser graph K ( n , 1) is the complete graph on n vertices.
The Kneser graph K ( n , 2) is the complement of the line graph of the complete graph on n vertices.
The Kneser graph K (2 n − 1, n − 1) is the odd graph O n ; in particular O 3 = K (5, 2) is the Petersen graph (see top right figure).
The Kneser graph O 4 = K (7, 3) , visualized on the right.
The Kneser graph K ( n , k ) {\displaystyle K(n,k)} has ( n k ) {\displaystyle {\tbinom {n}{k}}} vertices. Each vertex has exactly ( n − k k ) {\displaystyle {\tbinom {n-k}{k}}} neighbors.
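These counts can be checked directly from the definition; a minimal sketch in Python, using K (5, 2) — the Petersen graph — as the test case:

```python
from itertools import combinations
from math import comb

# Build K(n, k) from the definition: vertices are k-subsets of an n-set,
# edges join disjoint subsets.
def kneser_graph(n, k):
    vertices = [frozenset(c) for c in combinations(range(n), k)]
    edges = [(u, v) for u, v in combinations(vertices, 2) if not (u & v)]
    return vertices, edges

vertices, edges = kneser_graph(5, 2)          # the Petersen graph
print(len(vertices) == comb(5, 2))            # True: C(5,2) = 10 vertices
print(2 * len(edges) // len(vertices) == comb(5 - 2, 2))  # True: 3-regular
```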
The Kneser graph is vertex transitive and arc transitive . When k = 2 {\displaystyle k=2} , the Kneser graph is a strongly regular graph , with parameters ( ( n 2 ) , ( n − 2 2 ) , ( n − 4 2 ) , ( n − 3 2 ) ) {\displaystyle ({\tbinom {n}{2}},{\tbinom {n-2}{2}},{\tbinom {n-4}{2}},{\tbinom {n-3}{2}})} . However, it is not strongly regular when k > 2 {\displaystyle k>2} , as different pairs of nonadjacent vertices have different numbers of common neighbors depending on the size of the intersection of the corresponding pairs of sets.
Because Kneser graphs are regular and edge-transitive , their vertex connectivity equals their degree , except for K ( 2 k , k ) {\displaystyle K(2k,k)} which is disconnected . More precisely, the connectivity of K ( n , k ) {\displaystyle K(n,k)} is ( n − k k ) , {\displaystyle {\tbinom {n-k}{k}},} the same as the number of neighbors per vertex. [ 1 ]
As Kneser ( 1956 ) conjectured , the chromatic number of the Kneser graph K ( n , k ) {\displaystyle K(n,k)} for n ≥ 2 k {\displaystyle n\geq 2k} is exactly n − 2 k + 2 ; for instance, the Petersen graph requires three colors in any proper coloring . This conjecture was proved in several ways.
In contrast, the fractional chromatic number of these graphs is n / k {\displaystyle n/k} . [ 6 ] When n < 2 k {\displaystyle n<2k} , K ( n , k ) {\displaystyle K(n,k)} has no edges and its chromatic number is 1. When n = 2 k {\displaystyle n=2k} , the graph is a perfect matching and its chromatic number is 2.
It is well-known that the Petersen graph is not Hamiltonian , but it was long conjectured that this was the sole exception and that every other connected Kneser graph K ( n , k ) is Hamiltonian.
In 2003, Chen showed that the Kneser graph K ( n , k ) contains a Hamiltonian cycle if [ 7 ]
n ≥ 1 2 ( 3 k + 1 + 5 k 2 − 2 k + 1 ) . {\displaystyle n\geq {\tfrac {1}{2}}\left(3k+1+{\sqrt {5k^{2}-2k+1}}\right).}
Since
1 2 ( 3 k + 1 + 5 k 2 − 2 k + 1 ) < 1 2 ( 3 + 5 ) k + 1 {\displaystyle {\tfrac {1}{2}}\left(3k+1+{\sqrt {5k^{2}-2k+1}}\right)<{\tfrac {1}{2}}\left(3+{\sqrt {5}}\right)k+1}
holds for all k {\displaystyle k} , this condition is satisfied if
n ≥ 1 2 ( 3 + 5 ) k + 1 ≈ 2.62 k + 1. {\displaystyle n\geq {\tfrac {1}{2}}\left(3+{\sqrt {5}}\right)k+1\approx 2.62k+1.}
Around the same time, Shields showed (computationally) that, except the Petersen graph, all connected Kneser graphs K ( n , k ) with n ≤ 27 are Hamiltonian. [ 8 ]
In 2021, Mütze, Nummenpalo, and Walczak proved that the Kneser graph K ( n , k ) contains a Hamiltonian cycle if there exists a non-negative integer a {\displaystyle a} such that n = 2 k + 2 a {\displaystyle n=2k+2^{a}} . [ 9 ] In particular, the odd graph O n has a Hamiltonian cycle if n ≥ 4 . Finally, in 2023, Merino, Mütze and Namrata completed the proof of the conjecture. [ 10 ]
When n < 3 k , the Kneser graph K ( n , k ) contains no triangles. More generally, when n < ck it does not contain cliques of size c , whereas it does contain such cliques when n ≥ ck . Moreover, although the Kneser graph always contains cycles of length four whenever n ≥ 2 k + 2 , for values of n close to 2 k the shortest odd cycle may have variable length. [ 11 ]
The diameter of a connected Kneser graph K ( n , k ) is [ 12 ] ⌈ k − 1 n − 2 k ⌉ + 1. {\displaystyle \left\lceil {\frac {k-1}{n-2k}}\right\rceil +1.}
The spectrum of the Kneser graph K ( n , k ) consists of k + 1 distinct eigenvalues : λ j = ( − 1 ) j ( n − k − j k − j ) , j = 0 , … , k . {\displaystyle \lambda _{j}=(-1)^{j}{\binom {n-k-j}{k-j}},\qquad j=0,\ldots ,k.} Moreover λ j {\displaystyle \lambda _{j}} occurs with multiplicity ( n j ) − ( n j − 1 ) {\displaystyle {\tbinom {n}{j}}-{\tbinom {n}{j-1}}} for j > 0 {\displaystyle j>0} and λ 0 {\displaystyle \lambda _{0}} has multiplicity 1. [ 13 ]
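These eigenvalues and multiplicities can be checked numerically for K (5, 2); a sketch using numpy's eigvalsh on the adjacency matrix built from the subset definition:

```python
import numpy as np
from itertools import combinations
from math import comb

n, k = 5, 2
verts = [frozenset(c) for c in combinations(range(n), k)]
A = np.array([[0 if u & v else 1 for v in verts] for u in verts])
eigs = np.sort(np.linalg.eigvalsh(A))
print(np.round(eigs, 6))
# expected: -2 with multiplicity C(5,1)-C(5,0)=4, 1 with C(5,2)-C(5,1)=5, 3 once
print([(-1) ** j * comb(n - k - j, k - j) for j in range(k + 1)])  # [3, -2, 1]
```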
The Erdős–Ko–Rado theorem states that the independence number of the Kneser graph K ( n , k ) for n ≥ 2 k {\displaystyle n\geq 2k} is α ( K ( n , k ) ) = ( n − 1 k − 1 ) . {\displaystyle \alpha (K(n,k))={\binom {n-1}{k-1}}.}
The Johnson graph J ( n , k ) is the graph whose vertices are the k -element subsets of an n -element set, two vertices being adjacent when they meet in a ( k − 1) -element set. The Johnson graph J ( n , 2) is the complement of the Kneser graph K ( n , 2) . Johnson graphs are closely related to the Johnson scheme , both of which are named after Selmer M. Johnson .
The generalized Kneser graph K ( n , k , s ) has the same vertex set as the Kneser graph K ( n , k ) , but connects two vertices whenever they correspond to sets that intersect in s or fewer items. [ 11 ] Thus K ( n , k , 0) = K ( n , k ) .
The bipartite Kneser graph H ( n , k ) has as vertices the sets of k and n − k items drawn from a collection of n elements. Two vertices are connected by an edge whenever one set is a subset of the other. Like the Kneser graph it is vertex transitive with degree ( n − k k ) . {\displaystyle {\tbinom {n-k}{k}}.} The bipartite Kneser graph can be formed as a bipartite double cover of K ( n , k ) in which one makes two copies of each vertex and replaces each edge by a pair of edges connecting corresponding pairs of vertices. [ 14 ] The bipartite Kneser graph H (5, 2) is the Desargues graph and the bipartite Kneser graph H ( n , 1) is a crown graph . | https://en.wikipedia.org/wiki/Kneser_conjecture |
A knight's tour is a sequence of moves of a knight on a chessboard such that the knight visits every square exactly once. If the knight ends on a square that is one knight's move from the beginning square (so that it could tour the board again immediately, following the same path), the tour is "closed", or "re-entrant"; otherwise, it is "open". [ 1 ] [ 2 ]
The knight's tour problem is the mathematical problem of finding a knight's tour. Creating a program to find a knight's tour is a common problem given to computer science students. [ 3 ] Variations of the knight's tour problem involve chessboards of different sizes than the usual 8 × 8 , as well as irregular (non-rectangular) boards.
The knight's tour problem is an instance of the more general Hamiltonian path problem in graph theory . The problem of finding a closed knight's tour is similarly an instance of the Hamiltonian cycle problem . Unlike the general Hamiltonian path problem, the knight's tour problem can be solved in linear time . [ 4 ]
The earliest known reference to the knight's tour problem dates back to the 9th century AD. In Rudrata 's Kavyalankara [ 5 ] (5.15), a Sanskrit work on Poetics, the pattern of a knight's tour on a half-board has been presented as an elaborate poetic figure ( citra-alaṅkāra ) called the turagapadabandha or 'arrangement in the steps of a horse'. The same verse in four lines of eight syllables each can be read from left to right or by following the path of the knight on tour. Since the Indic writing systems used for Sanskrit are syllabic, each syllable can be thought of as representing a square on a chessboard. Rudrata's example is as follows:
transliterated:
For example, the first line can be read from left to right or by moving from the first square to the second line, third syllable (2.3) and then to 1.5 to 2.7 to 4.8 to 3.6 to 4.4 to 3.2.
The Sri Vaishnava poet and philosopher Vedanta Desika , during the 14th century, in his 1,008-verse magnum opus praising the deity Ranganatha 's divine sandals of Srirangam , Paduka Sahasram (in chapter 30: Chitra Paddhati ) has composed two consecutive Sanskrit verses containing 32 letters each (in Anushtubh meter) where the second verse can be derived from the first verse by performing a Knight's tour on a 4 × 8 board, starting from the top-left corner. [ 6 ] The transliterated 19th verse is as follows:
The numbers below, arranged on the 4 × 8 board, give the order in which the knight visits the square holding each syllable of the verse:
(1) (30) (9) (20) (3) (24) (11) (26)
(16) (19) (2) (29) (10) (27) (4) (23)
(31) (8) (17) (14) (21) (6) (25) (12)
(18) (15) (32) (7) (28) (13) (22) (5)
The 20th verse that can be obtained by performing Knight's tour on the above verse is as follows:
sThi thA sa ma ya rA ja thpA
ga tha rA mA dha kE ga vi |
dhu ran ha sAm sa nna thA dhA
sA dhyA thA pa ka rA sa rA ||
It is believed that Desika composed all 1,008 verses (including the special Chaturanga Turanga Padabandham mentioned above) in a single night as a challenge. [ 7 ]
A tour reported in the fifth book of the Bhagavantabhaskara by Bhat Nilakantha, a cyclopedic work in Sanskrit on ritual, law and politics, written either about 1600 or about 1700, describes three knight's tours. The tours are not only reentrant but also symmetrical, and the verses are based on the same tour, starting from different squares. [ 8 ] Nilakantha's work is an extraordinary achievement, being a fully symmetric closed tour, predating the work of Euler (1759) by at least 60 years.
After Nilakantha, one of the first mathematicians to investigate the knight's tour was Leonhard Euler . The first procedure for completing the knight's tour was Warnsdorf's rule, first described in 1823 by H. C. von Warnsdorf.
In the 20th century, the Oulipo group of writers used it, among many others. The most notable example is the 10 × 10 knight's tour which sets the order of the chapters in Georges Perec 's novel Life a User's Manual .
The sixth game of the World Chess Championship 2010 between Viswanathan Anand and Veselin Topalov saw Anand making 13 consecutive knight moves (albeit using both knights); online commentators jested that Anand was trying to solve the knight's tour problem during the game.
Schwenk [ 10 ] proved that for any m × n board with m ≤ n , a closed knight's tour is always possible unless one or more of these three conditions are met: m and n are both odd; m = 1, 2, or 4; or m = 3 and n = 4, 6, or 8.
Cull et al. and Conrad et al. proved that on any rectangular board whose smaller dimension is at least 5, there is a (possibly open) knight's tour. [ 4 ] [ 11 ] For any m × n board with m ≤ n , a (possibly open) knight's tour is always possible unless one or more of these three conditions are met: m = 1 or 2; m = 3 and n = 3, 5, or 6; or m = 4 and n = 4.
On an 8 × 8 board, there are exactly 26,534,728,821,064 directed closed tours (i.e. two tours along the same path that travel in opposite directions are counted separately, as are rotations and reflections ). [ 14 ] [ 15 ] [ 16 ] The number of undirected closed tours is half this number, since every tour can be traced in reverse. There are 9,862 undirected closed tours on a 6 × 6 board. [ 17 ]
There are several ways to find a knight's tour on a given board with a computer. Some of these methods are algorithms , while others are heuristics .
A brute-force search for a knight's tour is impractical on all but the smallest boards. [ 18 ] On an 8 × 8 board, for instance, there are 13,267,364,410,532 knight's tours, [ 14 ] and a much greater number of sequences of knight moves of the same length. It is well beyond the capacity of modern computers (or networks of computers) to perform operations on such a large set. However, the size of this number is not indicative of the difficulty of the problem, which can be solved "by using human insight and ingenuity ... without much difficulty." [ 18 ]
By dividing the board into smaller pieces, constructing tours on each piece, and patching the pieces together, one can construct tours on most rectangular boards in linear time – that is, in a time proportional to the number of squares on the board. [ 11 ] [ 19 ]
Warnsdorf's rule is a heuristic for finding a single knight's tour. The knight is moved so that it always proceeds to the square from which the knight will have the fewest onward moves. When calculating the number of onward moves for each candidate square, we do not count moves that revisit any square already visited. It is possible to have two or more choices for which the number of onward moves is equal; there are various methods for breaking such ties, including one devised by Pohl [ 20 ] and another by Squirrel and Cull. [ 21 ]
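A minimal sketch of the rule (ties are broken arbitrarily by list order here, so this bare version can occasionally get stuck; the tie-breaking refinements mentioned above address that):

```python
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def warnsdorf_tour(n, start=(0, 0)):
    """Greedy open tour on an n x n board; returns None if the rule sticks."""
    def onward(square, visited):
        # number of unvisited squares reachable from `square`
        return sum(1 for dr, dc in MOVES
                   if 0 <= square[0] + dr < n and 0 <= square[1] + dc < n
                   and (square[0] + dr, square[1] + dc) not in visited)

    visited, tour = {start}, [start]
    r, c = start
    while len(tour) < n * n:
        candidates = [(r + dr, c + dc) for dr, dc in MOVES
                      if 0 <= r + dr < n and 0 <= c + dc < n
                      and (r + dr, c + dc) not in visited]
        if not candidates:
            return None                      # heuristic got stuck
        # move to the candidate with the fewest onward moves
        r, c = min(candidates, key=lambda sq: onward(sq, visited))
        visited.add((r, c))
        tour.append((r, c))
    return tour

tour = warnsdorf_tour(8)
print(tour[:5] if tour else "stuck")         # first five squares of a tour
```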
This rule may also more generally be applied to any graph. In graph-theoretic terms, each move is made to the adjacent vertex with the least degree . [ 22 ] Although the Hamiltonian path problem is NP-hard in general, on many graphs that occur in practice this heuristic is able to successfully locate a solution in linear time . [ 20 ] The knight's tour is such a special case. [ 23 ]
The heuristic was first described in "Des Rösselsprungs einfachste und allgemeinste Lösung" by H. C. von Warnsdorf in 1823. [ 23 ]
A computer program that finds a knight's tour for any starting position using Warnsdorf's rule was written by Gordon Horsington and published in 1984 in the book Century/Acorn User Book of Computer Puzzles . [ 24 ]
The knight's tour problem also lends itself to being solved by a neural network implementation. [ 25 ] The network is set up such that every legal knight's move is represented by a neuron , and each neuron is initialized randomly to be either "active" or "inactive" (output of 1 or 0), with 1 implying that the neuron is part of the solution. Each neuron also has a state function (described below) which is initialized to 0.
When the network is allowed to run, each neuron can change its state and output based on the states and outputs of its neighbors (those exactly one knight's move away) according to the following transition rules:
U t + 1 ( N i , j ) = U t ( N i , j ) + 2 − ∑ N ∈ G ( N i , j ) V t ( N ) {\displaystyle U_{t+1}(N_{i,j})=U_{t}(N_{i,j})+2-\sum _{N\in G(N_{i,j})}V_{t}(N)}
V t + 1 ( N i , j ) = { 1 , U t + 1 ( N i , j ) > 3 0 , U t + 1 ( N i , j ) < 0 V t ( N i , j ) , otherwise {\displaystyle V_{t+1}(N_{i,j})={\begin{cases}1,&U_{t+1}(N_{i,j})>3\\0,&U_{t+1}(N_{i,j})<0\\V_{t}(N_{i,j}),&{\text{otherwise}}\end{cases}}}
where t {\displaystyle t} represents discrete intervals of time, U ( N i , j ) {\displaystyle U(N_{i,j})} is the state of the neuron connecting square i {\displaystyle i} to square j {\displaystyle j} , V ( N i , j ) {\displaystyle V(N_{i,j})} is the output of the neuron from i {\displaystyle i} to j {\displaystyle j} , and G ( N i , j ) {\displaystyle G(N_{i,j})} is the set of neighbors of the neuron.
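A compact sketch of this scheme, with one neuron per legal knight move (an undirected edge) and neighbours taken to be the moves sharing an endpoint; the board size, iteration cap, and in-place update order are arbitrary implementation choices:

```python
import random
from itertools import combinations

random.seed(1)
n = 6                                        # board size (illustrative)
squares = [(r, c) for r in range(n) for c in range(n)]

def knight_adjacent(a, b):
    return {abs(a[0] - b[0]), abs(a[1] - b[1])} == {1, 2}

# One neuron per legal move; neighbours are moves sharing an endpoint.
edges = [e for e in combinations(squares, 2) if knight_adjacent(*e)]
neighbours = {e: [f for f in edges if f != e and set(e) & set(f)] for e in edges}
U = {e: 0 for e in edges}                    # states initialized to 0
V = {e: random.randint(0, 1) for e in edges} # outputs initialized randomly

for _ in range(500):                         # iterate the transition rules
    changed = False
    for e in edges:                          # updated in place, for brevity
        U[e] += 2 - sum(V[f] for f in neighbours[e])
        new_v = 1 if U[e] > 3 else 0 if U[e] < 0 else V[e]
        changed, V[e] = changed or new_v != V[e], new_v
    if not changed:                          # converged: active edges form a
        break                                # tour or independent circuits
print(sum(V.values()), "active moves after the run")
```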
Although divergent cases are possible, the network should eventually converge, which occurs when no neuron changes its state from time t {\displaystyle t} to t + 1 {\displaystyle t+1} . When the network converges, either the network encodes a knight's tour or a series of two or more independent circuits within the same board. | https://en.wikipedia.org/wiki/Knight's_tour |
The Knight shift is a shift in the nuclear magnetic resonance (NMR) frequency of a paramagnetic substance first published in 1949 by the UC Berkeley physicist Walter D. Knight . [ 1 ] [ 2 ] [ 3 ]
For an ensemble of N spins in a magnetic induction field B → {\displaystyle {\vec {B}}} , the nuclear Hamiltonian for the Knight shift is expressed in Cartesian form by: [ 4 ]
H ^ KS = − ∑ i N γ i ⋅ I → ^ i ⋅ K ^ i ⋅ B → {\displaystyle {{\hat {\mathcal {H}}}_{\text{KS}}}=-\sum \limits _{\mathit {i}}^{N}{{{\gamma }_{\mathit {i}}}\cdot {{\hat {\vec {I}}}_{\mathit {i}}}\cdot {{\hat {\mathbf {K} }}_{\mathit {i}}}\cdot {\vec {B}}}} , where for the i th spin γ i {\displaystyle {\gamma }_{\mathit {i}}} is the gyromagnetic ratio , I → ^ i {\displaystyle {{\hat {\vec {I}}}_{\mathit {i}}}} is a vector of the Cartesian nuclear angular momentum operators , the K ^ i = ( K x x K x y K x z K y x K y y K y z K z x K z y K z z ) {\displaystyle {{\hat {\mathbf {K} }}_{i}}=\left({\begin{matrix}{{K}_{xx}}&{{K}_{xy}}&{{K}_{xz}}\\{{K}_{yx}}&{{K}_{yy}}&{{K}_{yz}}\\{{K}_{zx}}&{{K}_{zy}}&{{K}_{zz}}\\\end{matrix}}\right)} matrix is a second- rank tensor similar to the chemical shift shielding tensor.
The Knight shift refers to the relative shift K in NMR frequency for atoms in a metal (e.g. sodium) compared with the same atoms in a nonmetallic environment (e.g. sodium chloride ). The observed shift reflects the local magnetic field produced at the sodium nucleus by the magnetization of the conduction electrons. The average local field in sodium augments the applied resonance field by approximately one part per 1000. In nonmetallic sodium chloride the local field is negligible in comparison.
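The one-part-per-1000 figure translates into an easily estimated frequency offset; the numbers below (the ²³Na gyromagnetic ratio and a representative shift) are approximate, for scale only:

```python
# Order-of-magnitude estimate of the Knight shift in metallic sodium.
gamma_over_2pi = 11.262e6   # 23Na gyromagnetic ratio / 2pi, Hz per tesla (approx.)
B = 1.0                     # applied field, tesla
K = 1.0e-3                  # Knight shift ~ one part per 1000 (approx.)

f0 = gamma_over_2pi * B     # bare resonance frequency
print(f0 / 1e6, "MHz bare;", K * f0 / 1e3, "kHz Knight shift")
```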
The Knight shift is due to the conduction electrons in metals. They introduce an "extra" effective field at the nuclear site, due to the spin orientations of the conduction electrons in the presence of an external field. This is responsible for the shift observed in the nuclear magnetic resonance. The shift comes from two sources: one is the Pauli paramagnetic spin susceptibility, the other is the s-component of the wavefunctions at the nucleus.
Depending on the electronic structure, the Knight shift may be temperature dependent. However, in metals which normally have a broad featureless electronic density of states, Knight shifts are temperature independent.
| https://en.wikipedia.org/wiki/Knight_shift |
In economics , Knightian uncertainty is a lack of any quantifiable knowledge about some possible occurrence, as opposed to the presence of quantifiable risk (e.g., that in statistical noise or a parameter's confidence interval). The concept acknowledges some fundamental degree of ignorance, a limit to knowledge, and an essential unpredictability of future events.
Knightian uncertainty is named after University of Chicago economist Frank Knight (1885–1972), who distinguished risk and uncertainty in his 1921 work Risk, Uncertainty, and Profit: [ 1 ]
"Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated.... The essential fact is that 'risk' means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomenon depending on which of the two is really present and operating.... It will appear that a measurable uncertainty, or 'risk' proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all."
In this matter Knight's own views were widely shared by key economists [ 2 ] in the 1920s and 1930s, who played a central role in distinguishing the effects of risk from those of uncertainty. They were particularly concerned with the different impacts on human behavior as economic agents: entrepreneurs invest for quantifiable risk and return; savers may mistrust potential future inflation.
Whilst Frank Knight's seminal book [ 1 ] elaborated the problem, his focus was on how uncertainty generates imperfect market structures and explains actual profits. Work on estimating and mitigating uncertainty was continued by G. L. S. Shackle who later followed up with Potential Surprise Theory. [ 3 ] [ 4 ] However, the concept is largely informal and there is no single best formal system of probability and belief to represent Knightian uncertainty. Economists and management scientists continue to look at practical methodologies for decision under different types of uncertainty.
The difference between predictable variation and unpredictable variation is one of the fundamental issues in the philosophy of probability , and different probability interpretations treat predictable and unpredictable variation differently. The debate about the distinction has a long history.
The Ellsberg paradox is based on the difference between these two types of imperfect knowledge, and the problems it poses for utility theory – one is faced with an urn that contains 30 red balls and 60 balls that are either all yellow or all black, and one then draws a ball from the urn. This poses both uncertainty – whether the non-red balls are all yellow or all black – and probability – whether the ball is red or non-red, which is 1 ⁄ 3 vs. 2 ⁄ 3 . Expressed preferences in choices faced with this situation reveal that people do not treat these types of imperfect knowledge the same. This difference in treatment is also termed " ambiguity aversion ".
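A toy enumeration of the urn makes the asymmetry explicit (illustrative only):

```python
# 30 red balls plus 60 balls that are either all yellow or all black.
compositions = {"all yellow": {"red": 30, "yellow": 60, "black": 0},
                "all black":  {"red": 30, "yellow": 0, "black": 60}}
for name, counts in compositions.items():
    total = sum(counts.values())
    # P(red) is the same known 1/3 either way; P(yellow) swings from 0 to 2/3.
    print(name, counts["red"] / total, counts["yellow"] / total)
```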
A black swan event , as analyzed by Nassim Nicholas Taleb , is an important and inherently unpredictable event that, once occurred, is rationalized with the benefit of hindsight. Another position of the black swan theory is that appropriate preparation for these events is frequently hindered by the pretense of knowledge of all the risks; in other words, Knightian uncertainty is presumed to not exist in day-to-day affairs, often with disastrous consequences. Taleb asserts that Knightian risk does not exist in the real world, and instead finds gradations of computable risk. [ 5 ]
Saras Sarasvathy has proposed effectuation as a way to manage Knightian uncertainty, based on her study of serial entrepreneurs, and summarised her findings in five principles. (Brian D Smith (2014-07-25), "Managing Pharma's Uncertainty".) | https://en.wikipedia.org/wiki/Knightian_uncertainty |
Knockdown resistance , also called kdr , describes cases of resistance to diphenylethane (e.g. DDT ) and pyrethroid [ 1 ] insecticides in insects and other arthropods that result from reduced sensitivity of the nervous system caused by point mutations in the insect population's genetic makeup. Such mutative resistance is characterized by the presence of kdr alleles in the insect's genome . Knockdown resistance, first identified and characterized in the house fly ( Musca domestica ) in the 1950s, remains a threat to the continued usefulness of pyrethroids in the control of many pest species. Research since 1990 has provided a wealth of new information on the molecular basis of knockdown resistance. [ 2 ] | https://en.wikipedia.org/wiki/Knockdown_resistance |
Knockdown texture is a drywall finishing style. It is a mottled texture: it shows more variation than a simple flat finish, but less than orange peel or popcorn texture.
Knockdown texture is created by watering down joint compound to a soupy consistency. A trowel is then used to apply the joint compound. The joint compound will begin to form stalactites as it dries. The trowel is then run over the surface of the drywall, knocking off the stalactites and leaving the mottled finish.
A much more common, and faster technique is to apply the texture mud (which is slightly different from joint compound, in that it has less shrinkage upon drying) with a texture machine – a compressor and a texture spray hopper which sprays mud instead of paint. This applies what is referred to as a splatter coat. The use of a compressor allows this to be applied to walls as well as ceilings. When knocking this down, the mud is allowed to dry for a short period, then skimmed with a knockdown knife – a large, usually plastic (to reduce noticeable edges) knife.
Knockdown texture reduces construction costs because it conceals imperfections in the drywall that would otherwise require additional, more expensive stages of sanding and priming by drywall installers. | https://en.wikipedia.org/wiki/Knockdown_texture |
A knockout moss is a kind of genetically modified moss . One or more of the moss's specific genes are deleted or inactivated (" knocked out "), for example by gene targeting or other methods. After the deletion of a gene, the knockout moss has lost the trait encoded by this gene. Thus, the function of this gene can be inferred. This scientific approach is called reverse genetics because the scientist wants to understand the function of a specific gene. In classical genetics , the scientist starts with a phenotype of interest and searches for the gene that causes this phenotype. Knockout mosses are relevant for basic research in biology as well as in biotechnology .
The targeted deletion or alteration of genes relies on the integration of a DNA strand at a specific and predictable position into the genome of the host cell. This DNA strand must be engineered in such a way that both ends are identical to this specific gene locus . This is a prerequisite for being efficiently integrated via homologous recombination (HR). This is similar to the process used for creating knockout mice .
So far, this method of gene targeting in land plants has been carried out in the mosses Physcomitrella patens and Ceratodon purpureus , [ 2 ] since in these non-seed plant species the efficiency of HR is several orders of magnitude higher than in seed plants . [ 3 ]
Knockout mosses are stored at and distributed by a specialized biobank , the International Moss Stock Center .
For altering moss genes in a targeted way, the DNA-construct needs to be incubated together with moss protoplasts and with polyethylene glycol (PEG). Because mosses are haploid organisms, the regenerating moss filaments ( protonemata ) can be directly assayed for gene targeting within six weeks when utilizing PCR methods. [ 4 ]
The first scientific publication in which knockout moss was used to identify the function of a hitherto-unknown gene appeared in 1998, and was authored by Ralf Reski and coworkers. They deleted the ftsZ -gene and thus functionally identified the first gene pivotal for the division of an organelle in any eukaryote . [ 5 ]
Physcomitrella plants were engineered with multiple knockouts to prevent the plant-specific glycosylation of proteins, an important post-translational modification . These knockout mosses are used to produce complex biopharmaceuticals in the field of molecular farming . [ 6 ]
In cooperation with the chemical company BASF , Ralf Reski and coworkers established a collection of knockout mosses to use for gene identification. [ 1 ] [ 7 ] | https://en.wikipedia.org/wiki/Knockout_moss |
In organic chemistry , the Knoevenagel condensation ( pronounced [ˈknøːvənaːɡl̩] ) reaction is a type of chemical reaction named after German chemist Emil Knoevenagel . It is a modification of the aldol condensation . [ 1 ] [ 2 ]
A Knoevenagel condensation is a nucleophilic addition of an active hydrogen compound to a carbonyl group followed by a dehydration reaction in which a molecule of water is eliminated (hence condensation ). The product is often an α,β-unsaturated ketone (a conjugated enone ).
In this reaction the carbonyl group is an aldehyde or a ketone . The catalyst is usually a weakly basic amine . The active hydrogen component has the forms: [ 3 ]
where Z is an electron withdrawing group . Z must be powerful enough to facilitate deprotonation to the enolate ion even with a mild base. Using a strong base in this reaction would induce self-condensation of the aldehyde or ketone.
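As an illustration only (this example is not taken from the text above): the condensation of benzaldehyde with malononitrile — an active-hydrogen compound with Z = CN on both sides — can be sketched with an RDKit reaction SMARTS. The pattern below is a simplification that jumps straight to the dehydrated product:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Knoevenagel condensation: benzaldehyde + malononitrile -> benzylidenemalononitrile.
# The SMARTS encodes the overall transformation (loss of water implied).
rxn = AllChem.ReactionFromSmarts(
    "[CX3H1:1](=O)[c:2].[CH2:3]([C:4]#N)[C:5]#N"
    ">>[C:1](=[C:3]([C:4]#N)[C:5]#N)[c:2]"
)
benzaldehyde = Chem.MolFromSmiles("O=Cc1ccccc1")
malononitrile = Chem.MolFromSmiles("N#CCC#N")
products = rxn.RunReactants((benzaldehyde, malononitrile))
print(Chem.MolToSmiles(products[0][0]))   # benzylidenemalononitrile
```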
The Hantzsch pyridine synthesis , the Gewald reaction and the Feist–Benary furan synthesis all contain a Knoevenagel reaction step. The reaction also led to the discovery of CS gas .
The Doebner modification of the Knoevenagel condensation entails the use of pyridine as a solvent when at least one of the electron-withdrawing groups on the nucleophile is a carboxylic acid , for example, with malonic acid . Under these conditions the condensation is accompanied by decarboxylation . [ 4 ] For example, the reaction of acrolein and malonic acid in pyridine gives trans -2,4-pentadienoic acid with one carboxylic acid group and not two. [ 5 ] Sorbic acid can be prepared similarly by replacing acrolein with crotonaldehyde . [ 6 ]
A Knoevenagel condensation is demonstrated in the reaction of 2-methoxybenzaldehyde 1 with the thiobarbituric acid 2 in ethanol using piperidine as a base. [ 7 ] The resulting enone 3 is a charge transfer complex molecule.
The Knoevenagel condensation is a key step in the commercial production of the antimalarial drug lumefantrine (a component of Coartem ): [ 8 ]
The initial reaction product is a 50:50 mixture of E and Z isomers but because both isomers equilibrate rapidly around their common hydroxyl precursor, the more stable Z-isomer can eventually be obtained.
A multicomponent reaction featuring a Knoevenagel condensation is demonstrated in this MORE synthesis with cyclohexanone , malononitrile and 3-amino-1,2,4-triazole : [ 9 ]
The Weiss–Cook reaction consists of the synthesis of cis-bicyclo[3.3.0]octane-3,7-dione employing an acetonedicarboxylic acid ester and a diacyl (1,2-diketone). The mechanism operates in the same way as the Knoevenagel condensation: [ 10 ] | https://en.wikipedia.org/wiki/Knoevenagel_condensation |
The Knorr pyrrole synthesis is a widely used chemical reaction that synthesizes substituted pyrroles (3) . [ 1 ] [ 2 ] [ 3 ] The method involves the reaction of an α- amino - ketone (1) and a compound containing an electron-withdrawing group (e.g. an ester as shown) α to a carbonyl group (2) . [ 4 ]
The mechanism requires zinc and acetic acid as catalysts. It will proceed at room temperature.
Because α-aminoketones self-condense very easily, they must be prepared in situ . The usual way of doing this is from the relevant oxime , via the Neber rearrangement . [ 5 ] [ 6 ]
The original Knorr synthesis employed two equivalents of ethyl acetoacetate , one of which was converted to ethyl 2-oximinoacetoacetate by dissolving it in glacial acetic acid , and slowly adding one equivalent of saturated aqueous sodium nitrite , under external cooling. Zinc dust was then stirred in, reducing the oxime group to the amine. This reduction consumes two equivalents of zinc and four equivalents of acetic acid.
Modern practice is to add the oxime solution resulting from the nitrosation and the zinc dust gradually to a well-stirred solution of ethyl acetoacetate in glacial acetic acid. The reaction is exothermic , and the mixture can reach the boiling point, if external cooling is not applied. The resulting product, diethyl 3,5-dimethylpyrrole-2,4-dicarboxylate, has been called Knorr's Pyrrole ever since. In the Scheme above, R 2 = COOEt, and R 1 = R 3 = Me represent this original reaction.
Knorr's pyrrole can be derivatized in a number of useful manners. One equivalent of sodium hydroxide will saponify the 2-ester selectively. Dissolving Knorr's pyrrole in concentrated sulfuric acid , and then pouring the resulting solution into water will hydrolyze the 4-ester group selectively. The 5-methyl group can be variously oxidized to chloromethyl, aldehyde, or carboxylic acid functionality by the use of stoichiometric sulfuryl chloride in glacial acetic acid. [ 7 ] Alternatively, the nitrogen atom can be alkylated. The two ester positions can be more smoothly differentiated by incorporating benzyl or tert -butyl groups via the corresponding acetoacetate esters. Benzyl groups can be removed by catalytic hydrogenolysis over palladium on carbon , and tertiary-butyl groups can be removed by treatment with trifluoroacetic acid , or boiling aqueous acetic acid. R 1 and R 3 (as well as R 2 and "Et") can be varied by the application of appropriate β-ketoesters readily made by a synthesis emanating from acid chlorides , Meldrum's acid , and the alcohol of one's choice. Ethyl and benzyl esters are easily made thereby, and the reaction is noteworthy in that even the highly hindered tert -butyl alcohol gives very high yields in this synthesis. [ 8 ]
Levi and Zanetti extended the Knorr synthesis in 1894 to the use of acetylacetone (2,4-pentanedione) in reaction with ethyl 2-oximinoacetoacetate. The result was ethyl 4-acetyl-3,5-dimethylpyrrole-2-carboxylate, where R 1 = R 3 = Me and R 2 = COMe. [ 9 ] The 4-acetyl group could easily be converted to a 4-ethyl group by Wolff–Kishner reduction (hydrazine and alkali, heated), by hydrogenolysis, or by the use of diborane . Benzyl or tert -butyl acetoacetates also work well in this system, and with close temperature control, the tert -butyl system gives a very high yield (close to 80%). [ 10 ] N , N -dialkyl pyrrole-2- and/or 4-carboxamides may be prepared by the use of N , N -dialkyl acetoacetamides in the synthesis. Even thioesters have been successfully prepared, using the method. [ 11 ] As for the nitrosation of β-ketoesters, despite the numerous literature specifications of tight temperature control on the nitrosation, the reaction behaves almost like a titration, and the mixture can be allowed to reach even 40 °C without significantly impacting the final yield.
The mechanism of the Knorr pyrrole synthesis begins with condensation of the amine and ketone to give an imine. The imine then tautomerizes to an enamine, followed by cyclization, elimination of water, and isomerization to the pyrrole.
There are a number of important syntheses of pyrroles that are operated in the manner of the Knorr Synthesis, despite having mechanisms of very different connectivity between the starting materials and the pyrrolic product.
Hans Fischer and Emmy Fink found that Zanetti's synthesis from 2,4-pentanedione and ethyl 2-oximinoacetoacetate gave ethyl 3,5-dimethylpyrrole-2-carboxylate as a trace byproduct. Similarly, 3-ketobutyraldehyde diethyl acetal led to the formation of ethyl 5-methylpyrrole-2-carboxylate. Both of these products resulted from the loss of the acetyl group from the inferred ethyl 2-aminoacetoacetate intermediate. An important product of the Fischer-Fink synthesis was ethyl 4,5-dimethylpyrrole-2-carboxylate, made from ethyl 2-oximinoacetoacetate and 2-methyl-3-oxobutanal, in turn made by the Claisen condensation of 2-butanone with ethyl formate . [ 12 ]
George Kleinspehn reported that the Fischer–Fink connectivity could be forced to occur exclusively, by the use of diethyl oximinomalonate in the synthesis, with 2,4-pentanedione, or its 3-alkyl substituted derivatives. Yields were high, around 60%, and this synthesis eventually came to be one of the most important in the repertory. [ 13 ] Yields were significantly improved, by the use of preformed diethyl aminomalonate (prepared by the hydrogenolysis of diethyl oximinomalonate in ethanol, over Pd/C), and adding a mixture of diethyl aminomalonate and the β-diketone to actively boiling glacial acetic acid. [ 14 ]
Meanwhile, Johnson had extended the Fischer-Fink synthesis by reacting 2-oximinoacetoacetate esters (ethyl, benzyl, or tertiary-butyl), with 3-alkyl substituted 2,4-pentanediones. [ 15 ] The Kleinspehn synthesis was extended under David Dolphin by the use of unsymmetrical β-diketones (such as 3-alkyl substituted 2,4-hexanediones), which preferentially reacted initially at the less hindered acetyl group and afforded the corresponding 5-methylpyrrole-2-carboxylate esters. N , N -Dialkyl 2-oximinoacetoacetamides also were found to give pyrroles when reacted under Knorr conditions with 3-substituted-2,4-pentanediones, in yields comparable to the corresponding esters (around 45%). However, when unsymmetrical diketones were used, it was found that the acetyl group from the acetoacetamide was retained in the product, and one of the acyl groups from the diketone had been lost. [ 16 ] This same mechanism occurs to a minor extent in the acetoacetate ester systems, and had previously been detected radiochemically by Harbuck and Rapoport . [ 17 ] Most of the above-described syntheses have application in the synthesis of porphyrins, bile pigments, and dipyrrins. | https://en.wikipedia.org/wiki/Knorr_pyrrole_synthesis |
The Knorr quinoline synthesis is an intramolecular organic reaction converting a β-ketoanilide to a 2-hydroxyquinoline using sulfuric acid . This reaction was first described by Ludwig Knorr (1859–1921) in 1886. [ 1 ]
The reaction is a type of electrophilic aromatic substitution accompanied by elimination of water. A 1964 study found that with certain reaction conditions formation of a 4-hydroxyquinoline is a competing reaction. [ 2 ] For instance, the compound benzoylacetanilide ( 1 ) forms the 2-hydroxyquinoline ( 2 ) in a large excess of polyphosphoric acid (PPA) but 4-hydroxyquinoline 3 when the amount of PPA is small. A reaction mechanism identified a N,O-dicationic intermediate A with excess acid capable of ring-closing and a monocationic intermediate B which fragments to aniline and (ultimately to) acetophenone . Aniline reacts with another equivalent of benzoylacetanilide before forming the 4-hydroxyquinoline.
A 2007 study [ 3 ] revised the reaction mechanism, showing that, based on NMR spectroscopy and theoretical calculations, an O,O-dicationic intermediate (a superelectrophile ) is favored compared to the N,O-dicationic intermediate. For preparative purposes triflic acid is recommended. | https://en.wikipedia.org/wiki/Knorr_quinoline_synthesis |
In the mathematical field of knot theory , a knot polynomial is a knot invariant in the form of a polynomial whose coefficients encode some of the properties of a given knot .
The first knot polynomial, the Alexander polynomial , was introduced by James Waddell Alexander II in 1923. Other knot polynomials were not found until almost 60 years later.
In the 1960s, John Conway came up with a skein relation for a version of the Alexander polynomial, usually referred to as the Alexander–Conway polynomial . The significance of this skein relation was not realized until the early 1980s, when Vaughan Jones discovered the Jones polynomial . This led to the discovery of more knot polynomials, such as the so-called HOMFLY polynomial .
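For reference, the Alexander–Conway polynomial ∇ is determined by the following skein relation (a standard formulation, supplied here for context):
∇ ( unknot ) = 1 , ∇ ( L + ) − ∇ ( L − ) = z ∇ ( L 0 ) {\displaystyle \nabla ({\text{unknot}})=1,\qquad \nabla (L_{+})-\nabla (L_{-})=z\,\nabla (L_{0})}
where L + , L − and L 0 denote links whose diagrams differ only at one crossing: a positive crossing, a negative crossing, and the oriented smoothing, respectively.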
Soon after Jones' discovery, Louis Kauffman noticed the Jones polynomial could be computed by means of a partition function (state-sum model), which involved the bracket polynomial , an invariant of framed knots . This opened up avenues of research linking knot theory and statistical mechanics .
In the late 1980s, two related breakthroughs were made. Edward Witten demonstrated that the Jones polynomial, and similar Jones-type invariants, had an interpretation in Chern–Simons theory . Viktor Vasilyev and Mikhail Goussarov started the theory of finite type invariants of knots. The coefficients of the previously named polynomials are known to be of finite type (after perhaps a suitable "change of variables").
In recent years, the Alexander polynomial has been shown to be related to Floer homology . The graded Euler characteristic of the knot Floer homology of Peter Ozsváth and Zoltan Szabó is the Alexander polynomial.
Alexander–Briggs notation organizes knots by their crossing number.
Alexander polynomials and Conway polynomials cannot distinguish the left-trefoil knot from the right-trefoil knot.
The same situation arises for the granny knot and the square knot , since the connected sum of knots in R 3 {\displaystyle \mathbb {R} ^{3}} corresponds to the product of their knot polynomials . | https://en.wikipedia.org/wiki/Knot_polynomial |
Knotted column (also serpent column) is an architectural element, consisting of a pair of columns joined together by a "flat knot". [ 1 ] [ 2 ] The column was particularly used during the Romanesque period when it spread in a wide geographical area between Northern Italy, Bavaria, and Burgundy, and was particularly associated with the work of the Comacine masters and the Cistercian order .
The knot is a symbol of the double human and divine nature of Christ, as well as of the Father and the Son united by the Holy Spirit.
One of the oldest examples is the pulpit of the parish church of San Pietro di Gropina, perhaps from the Lombard era. One of its supports is made up of a pair of knotted columns. There are also examples in which four columns are knotted together, as in the Trento Cathedral .
According to some scholars, however, the origin of the knotted column would be Byzantine, [ 3 ] [ 4 ] as evidenced by a column found and exhibited at the Provincial Museum of Torcello (Venice). | https://en.wikipedia.org/wiki/Knotted_column |
Single Chain Cyclized/Knotted Polymers are a new class of polymer architecture with a general structure consisting of multiple intramolecular cyclization units within a single polymer chain . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] Such a structure was synthesized via the controlled polymerization of multivinyl monomers, which was first reported in Dr. Wenxin Wang's research lab. These multiple intramolecular cyclized/knotted units mimic the characteristics of complex knots found in proteins and DNA which provide some elasticity to these structures. [ 7 ] [ 8 ] Of note, 85% of elasticity in natural rubber is due to knot-like structures within its molecular chain. [ 9 ] [ 10 ] An intramolecular cyclization reaction is where the growing polymer chain reacts with a vinyl functional group on its own chain, rather than with another growing chain in the reaction system. In this way the growing polymer chain covalently links to itself in a fashion similar to that of a knot in a piece of string. As such, single chain cyclized/knotted polymers consist of many of these links (intramolecularly cyclized), as opposed to other polymer architectures including branched and crosslinked polymers that are formed by two or more polymer chains in combination.
Linear polymers can also fold into knotted topologies via non-covalent linkages. Knots and slipknots have been identified in naturally evolved polymers such as proteins as well. Circuit topology and knot theory formalise and classify such molecular conformations.
A simple modification to atom transfer radical polymerization (ATRP) was introduced in 2007 [ 11 ] to kinetically control the polymerization by increasing the ratio of inactive copper(II) catalyst to active copper(I) catalyst. The modification to this strategy is termed deactivation enhanced ATRP, whereby different ratios of copper(II)/copper(I) are added. Alternatively a copper(II) catalyst may be used in the presence of small amounts of a reducing agent such as ascorbic acid to produce low percentages of copper(I) in situ and to control the ratio of copper (II)/copper (I). [ 1 ] [ 3 ] Deactivation enhanced ATRP features the decrease of the instantaneous kinetic chain length ν as defined by: ν = R p R deac = k p [ M ] [ P ] k deac [ Cu II ] [ P ] = k p [ M ] k deac [ Cu II ] {\displaystyle \nu ={\frac {R_{\text{p}}}{R_{\text{deac}}}}={\frac {k_{\text{p}}[{\text{M}}][{\text{P}}]}{k_{\text{deac}}[{\text{Cu}}^{\text{II}}][{\text{P}}]}}={\frac {k_{\text{p}}[{\text{M}}]}{k_{\text{deac}}[{{\text{Cu}}^{\text{II}}}]}}} , meaning an average number of monomer units are added to a propagating chain end during each activation/deactivation cycle, [ 12 ] The resulting chain growth rate is slowed down to allow sufficient control over the reaction thus greatly increasing the percentage of multi-vinyl monomers in the reaction system (even up to 100 percent (homopolymerization)).
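A toy calculation of the instantaneous kinetic chain length defined above; the rate constants and concentrations are hypothetical placeholders, chosen only to show how raising [Cu II ] shortens ν:

```python
# nu = k_p [M] / (k_deac [Cu(II)]); all numbers below are hypothetical.
k_p = 1.0e3        # propagation rate constant, L mol^-1 s^-1 (placeholder)
k_deac = 1.0e7     # deactivation rate constant, L mol^-1 s^-1 (placeholder)
M = 0.5            # monomer concentration, mol/L (placeholder)

for cu2 in (1e-5, 1e-4, 1e-3):          # increasing Cu(II) concentration
    nu = k_p * M / (k_deac * cu2)
    print(f"[Cu(II)] = {cu2:.0e} M -> nu = {nu:.2f} monomer units per cycle")
```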
Typically, single chain cyclized/knotted polymers are synthesized by deactivation enhanced ATRP of multivinyl monomers via kinetically controlled strategy. There are several main reactions during this polymerization process: initiation, activation, deactivation, chain propagation, intramolecular cyclization and intermolecular crosslinking. The polymerization process is explained in Figure 2.
In a similar way to normal ATRP, the polymerization is started by initiation to produce a free radical , followed by chain propagation and reversible activation/deactivation equilibrium. Unlike the polymerization of single vinyl monomers, for the polymerization of multivinyl monomers, the chain propagation occurs between the active centres and one of the vinyl groups from the free monomers. Therefore, multiple unreacted pendent vinyl groups are introduced into the linear primary polymer chains, resulting in a high local/spatial vinyl concentration. As the chain grows, the propagating centre reacts with their own pendent vinyl groups to form intramolecular cyclized rings (i.e. intramolecular cyclization). The unique alternating chain propagation/intramolecular cyclization process eventually leads to the single chain cyclized/knotted polymer architecture.
It is worth noting that, due to the multiple reactive sites of the multivinyl monomers, many unreacted pendent vinyl groups are introduced into the linear primary polymer chains. These pendent vinyl groups have the potential to react with propagating active centres either from their own polymer chain or from others. Therefore, both intramolecular cyclization and intermolecular crosslinking might occur in this process.
Using the deactivation enhanced strategy, a relatively small instantaneous kinetic chain length limits the number of vinyl groups that can be added to a propagating chain end during each activation/deactivation cycle and thus keeps the polymer chains growing in a limited space. In this way, unlike what happens in free radical polymerization (FRP) , the formation of huge polymer chains and large-scale combinations at early reaction stages is avoided. Therefore, a small instantaneous kinetic chain length is the prerequisite for further manipulation of intramolecular cyclization or intermolecular crosslinking. Based on the small instantaneous kinetic chain length, regulation of different chain dimensions and concentrations would lead to distinct reaction types. A low ratio of initiator to monomer would result in the formation of longer chains but of a lower chain concentration. This scenario no doubt increases the chances of intramolecular cyclization due to the high local/spatial vinyl concentration within the growth boundary. Although the opportunity for intermolecular reactions can increase as the polymer chains grow, the likelihood of this occurring at the early stage of reactions is minimal due to the low chain concentration, which is why single chain cyclized/knotted polymers can form. In contrast, a high initiator concentration not only diminishes the chain dimension during the linear-growth phase, thus suppressing the intramolecular cyclization, but also increases the chain concentration within the system, so that pendent vinyl groups in one chain are more likely to fall into the growth boundary of another chain. Once the monomers are converted to short chains, the intermolecular combination increases and allows the formation of hyperbranched structures with a high density of branching and vinyl functional groups. [ 3 ]
Single chain cyclized polymers consist of multiple cyclized rings which afford them some unique properties, including high density, low intrinsic viscosity , low translational friction coefficients , high glass transition temperatures, [ 13 ] [ 14 ] and excellent elasticity of the formed network. [ 15 ] In particular, an abundance of internal space makes the single chain cyclized polymers ideal candidates as efficient cargo-carriers.
It is well established that the macromolecular structure of nonviral gene delivery vectors alters their transfection efficacy and cytotoxicity. The cyclized structure has been proven to reduce cytotoxicity and increase circulation time for drug and gene delivery applications. [ 16 ] [ 17 ] [ 18 ] The unique structure of cyclizing chains provides single chain cyclized polymers with a different mode of interaction between the polymer and plasmid DNA, and results in a general trend of higher transfection capabilities than branched polymers. [ 19 ] [ 20 ] Moreover, due to the nature of the single chain structure, this cyclized polymer can "untie" to a linear chain under reducing conditions. Transfection profiles on astrocytes comparing 25 kDa- PEI , SuperFect ® and Lipofectamine®2000 with the cyclized polymer showed greater efficiency and cell viability for the latter, maintaining neural cell viability above 80% four days post transfection. | https://en.wikipedia.org/wiki/Knotted_polymers |
Knotted proteins are proteins whose backbones entangle themselves in a knot. One can imagine pulling a protein chain from both termini, as though pulling a string from both ends. When a knotted protein is “pulled” from both termini, it does not get disentangled. Knotted proteins are very rare, making up only about one percent of the proteins in the Protein Data Bank , and their folding mechanisms and function are not well understood. Although there are experimental and theoretical studies that hint to some answers, systematic answers to these questions have not yet been found.
Although a number of computational methods have been developed to detect protein knots, there are still no completely automatic methods to detect protein knots without manual intervention, due to missing residues or chain breaks in the X-ray structures or nonstandard PDB formats.
Most of the knots discovered in proteins are deep trefoil (3 1 ) knots . Figure-eight knots (4 1 ) , three-twist knots (5 2 ) , Stevedore knots (6 1 ) and septafoil knots (7 1 ) [ 1 ] have also been discovered. Recently, the use of machine learning techniques for predicting protein structure resulted in a highly accurate prediction of a 6 3 knot . [ 2 ] Furthermore, using the same techniques, composite knots (namely 3 1 #3 1 ) were found [ 3 ] and crystallised. [ 4 ]
Mathematically, a knot is defined as a subset of three-dimensional space that is homeomorphic to a circle. [ 6 ] According to this definition, a knot must exist in a closed loop, while knotted proteins instead exist within open, unclosed chains. In order to apply mathematical knot theory to knotted proteins, various strategies can be used to create an artificial closed loop. One such strategy is to choose a point in space at infinite distance to be connected to the protein's N- and C-termini through a virtual bond, thus the protein can be treated as a closed loop. Another such strategy is to use stochastic methods that create random closures.
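One of the simplest closure strategies — joining both termini to a single distant point — can be sketched as follows; the centroid-based choice of direction is one reasonable convention, not a canonical one:

```python
import numpy as np

# Close an open backbone by adding one far-away point and returning to the
# start, so the chain becomes a loop amenable to mathematical knot detection.
def close_chain(coords, factor=100.0):
    coords = np.asarray(coords, dtype=float)
    centroid = coords.mean(axis=0)
    mid = 0.5 * (coords[0] + coords[-1])     # midpoint of the two termini
    direction = mid - centroid
    norm = np.linalg.norm(direction)
    if norm == 0:
        direction = np.array([0.0, 0.0, 1.0])  # degenerate case: pick any axis
    else:
        direction = direction / norm
    far_point = centroid + factor * direction
    return np.vstack([coords, far_point, coords[0]])

loop = close_chain(np.random.rand(50, 3))
print(loop.shape)   # (52, 3): 50 backbone points, the far point, the closure
```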
The depth of a protein knot relates to the ability of the protein to resist unknotting. A deep knot is preserved even when a considerable number of residues are removed from either end. The higher the number of residues that can be removed without destroying the knot, the deeper the knot.
Considering how knots may be produced with a string, the folding of knotted proteins should involve first the formation of a loop, and then the threading of one terminus through the loop. This is the only topological way that the trefoil knot can be formed. For more complex knots, it is theoretically possible for the loop to twist multiple times around itself, meaning that one end of the chain wraps around at least once before threading occurs. It has also been observed in a theoretical study that a 6 1 knot can form by the C-terminus threading through a loop, and another loop flipping over the first loop, as well as by the C-terminus threading through both the loops which have previously flipped over each other. [ 7 ]
The folding of knotted proteins may be explained by interaction of the nascent chain with the ribosome. In particular, the affinity of the chain to the ribosome surface may result in the creation of a loop, which may then be threaded by the nascent chain. Such a mechanism was shown to be plausible for one of the most deeply knotted proteins known. [ 8 ]
There have been experimental studies involving YibK and YbeA, knotted proteins containing trefoil knots. It has been established that these knotted proteins fold slowly, and that the knotting in folding is the rate-limiting step. [ 9 ] In another experimental study, a 91-residue-long protein was attached to the termini of YibK and YbeA. [ 10 ] Attaching the protein to both termini produces a deep knot with about 125 removable residues on each terminus before the knot is destroyed. Yet it was seen that the resulting proteins could fold spontaneously. The attached proteins were shown to fold more quickly than YibK and YbeA themselves, so during folding they are expected to act as plugs at either end of YibK and YbeA. It was found that attaching the protein to the N-terminus did not alter the folding speed, but the attachment to the C-terminus slowed folding down, suggesting that the threading event happens at the C-terminus. Chaperones, although they facilitate protein knotting, are not crucial for proteins' self-tying. [ 10 ] [ 11 ]
The class of knotted proteins contains only structures for which the backbone, after closure, forms a knotted loop. However, some proteins contain "internal knots" called slipknots, i.e. unknotted structures containing a knotted subchain. [ 12 ] Another topologically complex structure is the link formed by covalent loops, closed by disulfide bridges. [ 13 ] [ 14 ] Three types of disulfide-based links were identified in proteins: two versions of the Hopf link (differing in chirality) and one version of the Solomon link . Another complex structure arising from closing part of the chain with a covalent bridge is the complex lasso protein, in which the covalent loop is threaded by the chain one or more times. [ 15 ] Yet another complex structure arising from disulfide bridges is the cystine knot , in which two disulfide bridges form a closed, covalent loop that is threaded by a third chain. The term "knot" in the name of the motif is misleading, as the motif does not contain any knotted closed cycle. Moreover, the formation of cystine knots is in general not different from the folding of an unknotted protein.
Apart from closing only one chain, one may perform also the chain closure procedure for all the chains present in the crystal structure. In some cases one obtains the non-trivially linked structures, called probabilistic links. [ 16 ]
One can also consider loops in proteins formed by pieces of the main chain and the disulfide bridges and interactions via ions. Such loops can also be knotted, or form links, even within structures with an unknotted main chain. [ 17 ] [ 18 ]
Marc L. Mansfield proposed in 1994 that there can be knots in proteins. [ 19 ] He gave unknot scores to proteins by constructing a sphere centered at the center of mass of the alpha carbons of the backbone, with a radius twice the distance between the center of mass and the Cα that is the farthest away from the center of mass, and by sampling two random points on the surface of the sphere. He connected the two points by tracing a geodesic on the surface of the sphere (arcs of great circles), and then connected each end of the protein chain with one of these points. Repeating this procedure 100 times and counting the times where the knot is destroyed in the mathematical sense yields the unknot score. Human carbonic anhydrase was identified to have a low unknot score (22). Upon visually inspecting the structure, it was seen that the knot was shallow, meaning that the removal of a few residues from either end destroys the knot.
In 2000, William R. Taylor identified a deep knot in acetohydroxy acid isomeroreductase ( PDB ID: 1YVE), by using an algorithm that smooths protein chains and makes knots more visible. [ 20 ] The algorithm keeps both termini fixed, and iteratively assigns to the coordinates of each residue the average of the coordinates of the neighboring residues. It has to be made sure that the chains do not pass through each other, otherwise the crossings and therefore the knot might get destroyed. If there is no knot, the algorithm eventually produces a straight line that joins both termini.
Some proposals about the function of knots have been that knots might increase thermal and kinetic stability. One particular suggestion was that for the human ubiquitin hydrolase, which contains a 5 2 knot, the presence of the knot might be preventing it from being pulled into the proteasome. [ 21 ] Because it is a deubiquitinating enzyme, it is often found in proximity to proteins soon to be degraded by the proteasome, and therefore it faces the danger of being degraded itself. Therefore, the presence of the knot might be functioning as a plug that prevents this. This notion was further analyzed on other proteins like YbeA and YibK with computer simulations. [ 22 ] The knots seem to tighten when they are pulled into a pore, and depending on the force with which they are pulled in, they either get stuck and block the pore, the likelihood of which increases with stronger pulling forces, or in the case of a small pulling force they might get disentangled as one terminus is pulled out of the knot. For deeper knots, it is more likely that the pore will be blocked, as there are too many residues that need to be pulled through the knot. In another theoretical study, [ 23 ] it was found that the modeled knotted protein was not thermally stable, but it was kinetically stable. It was also shown that the knot in proteins creates regions at the boundary of hydrophobic and hydrophilic parts of the chain, characteristic of active sites. [ 24 ] This may explain why over 80% of knotted proteins are enzymes. [ 25 ] Another study shows that knotted and slipknotted proteins constitute a significant fraction of membrane proteins. They comprise one of the largest groups of secondary active transporters. [ 26 ]
Some local programs and a number of web servers are available, providing convenient query services for knotted structures and analysis tools for detecting protein knots. | https://en.wikipedia.org/wiki/Knotted_protein |
The knower paradox is a paradox belonging to the family of the paradoxes of self-reference (like the liar paradox ). Informally, it consists in considering a sentence saying of itself that it is not known, and apparently deriving the contradiction that such sentence is both not known and known.
A version of the paradox occurs already in chapter 9 of Thomas Bradwardine ’s Insolubilia . [ 1 ] In the wake of the modern discussion of the paradoxes of self-reference, the paradox has been rediscovered (and dubbed with its current name) by the US logicians and philosophers David Kaplan and Richard Montague , [ 2 ] and is now considered an important paradox in the area. [ 3 ] The paradox bears connections with other epistemic paradoxes such as the hangman paradox and the paradox of knowability .
The notion of knowledge seems to be governed by the principle that knowledge is factive :
(KF): If 'A' is known, then A
(where we use single quotes to refer to the linguistic expression inside the quotes and where 'is known' is short for 'is known by someone at some time'). It also seems to be governed by the principle that proof yields knowledge:
(PK): If 'A' has been proved, then 'A' is known
Consider however the sentence:
(K): (K) is not known
Assume for reductio ad absurdum that (K) is known. Then, by (KF), (K) is not known, and so, by reductio ad absurdum , we can conclude that (K) is not known. Now, this conclusion, which is the sentence (K) itself, depends on no undischarged assumptions, and so has just been proved. Therefore, by (PK), we can further conclude that (K) is known. Putting the two conclusions together, we have the contradiction that (K) is both not known and known.
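The derivation can be laid out step by step. The following is an informal sketch (not a formal calculus), writing K('(K)') for "'(K)' is known":

```latex
\begin{array}{lll}
1. & K(\text{'(K)'})       & \text{assumption, for reductio}\\
2. & \neg K(\text{'(K)'})  & \text{from 1 by (KF), since (K) says that (K) is not known}\\
3. & \neg K(\text{'(K)'})  & \text{reductio from the contradiction between 1 and 2}\\
4. & K(\text{'(K)'})       & \text{from 3 by (PK), since line 3 proves (K) from no assumptions}\\
5. & \bot                  & \text{from 3 and 4: (K) is both not known and known}
\end{array}
```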
Since, given the diagonal lemma , every sufficiently strong theory will have to accept something like (K), absurdity can only be avoided either by rejecting one of the two principles of knowledge (KF) and (PK) or by rejecting classical logic (which validates the reasoning from (KF) and (PK) to absurdity). The first kind of strategy subdivides into several alternatives. One approach takes its inspiration from the hierarchy of truth predicates familiar from Alfred Tarski 's work on the Liar paradox and constructs a similar hierarchy of knowledge predicates. [ 4 ] Another approach upholds a single knowledge predicate but takes the paradox to call into doubt either the unrestricted validity of (PK) [ 5 ] or at least knowledge of (KF). [ 6 ] The second kind of strategy also subdivides into several alternatives. One approach rejects the law of excluded middle and consequently reductio ad absurdum . [ 7 ] Another approach upholds reductio ad absurdum and thus accepts the conclusion that (K) is both not known and known, thereby rejecting the law of non-contradiction . [ 8 ] | https://en.wikipedia.org/wiki/Knower_paradox
Knowledge-based processors (KBPs) are used for processing packets in computer networks . They are designed to increase the performance of IPv6 networks and, by supporting the build-out of IPv6, to provide the means for faster and more secure networking.
All networks are required to perform the following functions:
All of the above functions must occur at high speeds in advanced networks. Knowledge-based processors contain embedded databases that store the information required to process packets travelling through a network at wire speed. They are a recent addition to intelligent networking that allows these functions to occur at high speed while also lowering power consumption. [ citation needed ]
Knowledge-based processors currently target the 3rd layer of the 7-layer OSI model which is devoted to packet processing. [ citation needed ]
The advantages that knowledge-based processors offer include the ability to execute multiple simultaneous decision-making processes for a range of network-aware processing functions. These include routing, Quality of Service (QoS), access control for both security and billing, and the forwarding of voice/video packets. These functions improve the performance of advanced Internet applications in IPv6 networks such as VOD (video on demand), VoIP (voice over Internet protocol), and the streaming of video and audio. [ citation needed ]
Knowledge-based processors use a variety of techniques to improve network functioning, such as parallel processing, deep pipelining, and advanced power management. Improvements in each of these areas allow existing components to carry out their functions at wire speed more efficiently, thus improving the performance of the overall network. [ citation needed ]
The databases in a knowledge-based processor include classification tables, forwarding tables, and exact-match tables, all of which are utilized by the CPU and network processors.
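For instance, forwarding tables are queried by longest-prefix match on the destination address. The sketch below is a toy software illustration of that lookup, with hypothetical table contents; a knowledge-based processor performs the equivalent search in dedicated hardware at wire speed.

```python
import ipaddress

# Hypothetical IPv6 forwarding table: prefix -> next hop.
FORWARDING_TABLE = {
    ipaddress.ip_network("2001:db8::/32"): "port-1",
    ipaddress.ip_network("2001:db8:aa00::/40"): "port-2",
    ipaddress.ip_network("::/0"): "default-gateway",
}

def lookup(dst):
    """Return the next hop for the longest prefix containing dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in FORWARDING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return FORWARDING_TABLE[best]

print(lookup("2001:db8:aaff::1"))  # -> port-2 (the /40 beats the /32)
```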
Knowledge-based processors mainly process packet headers (approximately 20% of the packet), which enables network awareness. Content processors, by contrast, allow for packet payload inspection (the remaining 80% of the packet is data) and therefore must search "deeper" into the packet. | https://en.wikipedia.org/wiki/Knowledge-based_processor
A knowledge-based system ( KBS ) is a computer program that reasons and uses a knowledge base to solve complex problems . Knowledge-based systems were the focus of early artificial intelligence researchers in the 1980s. The term can refer to a broad range of systems. However, all knowledge-based systems have two defining components: an attempt to represent knowledge explicitly, called a knowledge base , and a reasoning system that allows them to derive new knowledge, known as an inference engine .
The knowledge base contains domain-specific facts and rules [ 1 ] about a problem domain (rather than knowledge implicitly embedded in procedural code, as in a conventional computer program). In addition, the knowledge may be structured by means of a subsumption ontology , frames , conceptual graph , or logical assertions. [ 2 ]
The inference engine uses general-purpose reasoning methods to infer new knowledge and to solve problems in the problem domain. Most commonly, it employs forward chaining or backward chaining . Other approaches include the use of automated theorem proving , logic programming , blackboard systems , and term rewriting systems such as Constraint Handling Rules (CHR). These more formal approaches are covered in detail in the Wikipedia article on knowledge representation and reasoning .
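As a minimal illustration, forward chaining repeatedly fires any rule whose premises are already in the knowledge base until nothing new can be derived. The sketch below uses a hypothetical two-rule base:

```python
# Rules are (premises, conclusion) pairs; the rule base and facts are hypothetical.
RULES = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "order_chest_xray"),
]

def forward_chain(facts, rules):
    """Fire rules whose premises hold until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # derive new knowledge
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}, RULES))
```

Backward chaining would instead start from a goal, say order_chest_xray, and work backwards through the rules towards the known facts.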
The term "knowledge-based system" was often used interchangeably with " expert system ", possibly because almost all of the earliest knowledge-based systems were designed for expert tasks. However, these terms tell us about different aspects of a system:
Today, virtually all expert systems are knowledge-based, whereas knowledge-based system architecture is used in a wide range of types of system designed for a variety of tasks.
The first knowledge-based systems were primarily rule-based expert systems. These represented facts about the world as simple assertions in a flat database and used domain-specific rules to reason about these assertions, and then to add to them. One of the most famous of these early systems was Mycin , a program for medical diagnosis.
Representing knowledge explicitly via rules had several advantages:
Later [ when? ] architectures for knowledge-based reasoning, such as the BB1 blackboard architecture (a blackboard system ), [ 4 ] allowed the reasoning process itself to be affected by new inferences, providing meta-level reasoning. BB1 allowed the problem-solving process itself to be monitored. Different kinds of problem-solving (e.g., top-down, bottom-up, and opportunistic) could be selectively mixed based on the current state of problem solving. Essentially, the problem-solver was used to solve both a domain-level problem and its own control problem, which could depend on the former.
Other examples of knowledge-based system architectures supporting meta-level reasoning are MRS [ 5 ] and SOAR .
In the 1980s and 1990s, in addition to expert systems, other applications of knowledge-based systems included real-time process control, [ 6 ] intelligent tutoring systems, [ 7 ] and problem-solvers for specific domains such as protein structure analysis, [ 8 ] construction-site layout, [ 9 ] and computer system fault diagnosis. [ 10 ]
As knowledge-based systems became more complex, the techniques used to represent the knowledge base became more sophisticated and included logic, term-rewriting systems, conceptual graphs, and frames .
Frames, for example, are a way of representing world knowledge using techniques that can be seen as analogous to object-oriented programming , specifically classes and subclasses, hierarchies and relations between classes, and behavior [ clarification needed ] of objects. With the knowledge base more structured, reasoning could occur not only by independent rules and logical inference, but also based on interactions within the knowledge base itself. For example, procedures stored as daemons on [ clarification needed ] objects could fire and replicate the chaining behavior of rules. [ 11 ]
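A rough illustration of the analogy: the Python sketch below (all names hypothetical) models a frame as a set of slots with an attached daemon procedure that fires when a slot is written, replicating the chaining behavior of a rule.

```python
class Frame:
    """A tiny frame: named slots plus daemon procedures that fire on writes."""
    def __init__(self, **slots):
        self.slots = dict(slots)
        self.daemons = {}                # slot name -> callback

    def on_write(self, slot, callback):
        self.daemons[slot] = callback

    def set(self, slot, value):
        self.slots[slot] = value
        if slot in self.daemons:         # the daemon fires, like a chained rule
            self.daemons[slot](self)

bird = Frame(covering="feathers", locomotion="flies")
penguin = Frame(**bird.slots)            # subclass-like: inherit the parent's slots
penguin.on_write("locomotion", lambda f: f.slots.update(habitat="antarctic"))
penguin.set("locomotion", "swims")       # overriding a slot triggers the daemon
print(penguin.slots)
```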
Another advancement in the 1990s was the development of special-purpose automated reasoning systems called classifiers . Rather than statically declaring the subsumption relations in a knowledge base, a classifier allows the developer to simply declare facts about the world and let the classifier deduce the relations. In this way a classifier can also play the role of an inference engine. [ 12 ]
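The sketch below shows that idea in miniature, with concepts declared only as attribute sets (all names hypothetical). Real description-logic classifiers are far more expressive, but the point is the same: subsumption is deduced rather than declared.

```python
# Concepts are declared only by their attributes; subsumption is deduced.
CONCEPTS = {
    "vehicle": {"movable"},
    "car": {"movable", "wheeled", "motorised"},
    "bicycle": {"movable", "wheeled"},
}

def subsumes(general, specific):
    """A more general concept imposes a subset of the specific one's attributes."""
    return CONCEPTS[general] <= CONCEPTS[specific]

# The classifier recovers the hierarchy from the declarations alone:
for g in CONCEPTS:
    for s in CONCEPTS:
        if g != s and subsumes(g, s):
            print(g, "subsumes", s)   # vehicle/car, vehicle/bicycle, bicycle/car
```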
The most recent [ as of? ] advancement of knowledge-based systems was to adopt the technologies, especially a kind of logic called description logic , for the development of systems that use the internet. The internet often has to deal with complex, unstructured data that cannot be relied on to fit a specific data model. The technology of knowledge-based systems, and especially the ability to classify objects on demand, is ideal for such systems. The model for these kinds of knowledge-based internet systems is known as the Semantic Web . [ 13 ] | https://en.wikipedia.org/wiki/Knowledge-based_systems |
kgb (stylized in lower case) is a privately held , New York –based company that provides directory assistance and enhanced information services across Europe and North America. It describes itself as "the world’s largest independent provider of directory assistance and enhanced information services." Founded in 1992 by Robert Pines under the name INFONXX , the company rebranded in 2008. [ 2 ] The term knowledge generation bureau is from an advertising copy line, and is not the name of the company, which is kgb.
In 2003, after the UK yellow pages directory market had been opened, kgb launched 118 118 (UK) , a UK directory enquiries provider that assists customers with telephone number enquiries and general queries.
After the success of 118 118 in the UK, kgb launched 118 218 in France when the French market was opened. Thanks to a successful advertising campaign, it quickly became, and remains, the most-used directory assistance service in France, with around 200,000 calls per day. Its 118218.fr website was among the top 100 French websites by traffic. [ citation needed ]
In December 2008, kgb acquired Texperts , a United Kingdom –based firm, in order to benefit from their "innovative software platform and industry experience." [ citation needed ] Shortly afterwards, in January 2009, kgb launched a new suite of products in the United States, providing answers to customers’ questions through multiple platforms. The first is through a mobile search service known as 542542 (kgbkgb). It launched January 5, 2009, following the launch of the similar 118118 "Ask Us Anything" service in the United Kingdom. [ 2 ] In February 2013, after investigation by the US Department of Labor , a judge ordered kgb to pay $1.3 million to internet researchers who had been sharply underpaid. [ 3 ]
The company has also entered the internet group buying space with a business called kgbdeals. kgbdeals seeks to offer lifestyle deals to consumers in the US, UK, France and Italy.
542542 is an "Ask Anything 2-way text service", whereby United States users can submit questions via SMS for a cost of $1.49 per question. [ 4 ] [ 5 ]
In January 2013, kgb USA agreed in a settlement to pay $1.3 million in unpaid minimum wage and overtime wages to 14,000 current and former employees. [ 6 ] The lawsuit alleged that from January 19, 2009 to December 4, 2012, kgb USA repeatedly violated the Fair Labor Standards Act (FLSA) by (1) misclassifying the Special Agents as independent contractors instead of employees, (2) failing to pay minimum wage and overtime amounts, and (3) failing to make, keep, and preserve adequate and accurate employment-related records of the “Special Agents.” | https://en.wikipedia.org/wiki/Knowledge_Generation_Bureau
Knowledge inertia ( KI ) is a concept in knowledge management . The term, initially proposed by Shu-hsien Liao, comprises a two-dimensional model which incorporates experience inertia and learning inertia. [ 1 ] Later, a third dimension, thinking inertia, was added on the basis of theoretical exploration of the existing concepts of experience inertia and learning inertia. [ 2 ]
One of the central problems in knowledge management related to organizational learning is dealing with " inertia ". Individuals may also exhibit a natural tendency toward inertia when facing problems during the utilization of knowledge. Inertia in technical jargon means inactivity or torpor; in an organizational learning context it may be understood as a slowdown in organizational learning-related activities. In fact, there are many other kinds of organizational inertia, e.g., innovation inertia, workforce inertia, productivity inertia, decision inertia, and emotional inertia, each with a different meaning in its own context. Some organization theorists have adopted the definition proposed by Liao (2002) [ 1 ] to extend its use in organizational learning studies.
Knowledge inertia (KI) may be defined as a problem-solving strategy that relies on old, redundant, stagnant knowledge and past experience without recourse to new knowledge and experience . Inertia is a concept in physics used to explain the state of an object remaining stationary or in uniform motion. Organizational theorists adopted this concept of inertia and applied it to different contexts, which resulted in the emergence of diverse concepts such as organizational inertia , consumer inertia, outsourcing inertia, [ 3 ] and cognitive inertia . Not every instance of knowledge inertia results in a negative outcome: one study suggested that knowledge inertia could positively affect a firm's product innovation. [ 4 ]
Knowledge inertia stems from the use of routine problem-solving procedures that rely on redundant, stagnant knowledge and past experience without recourse to new knowledge and new thinking processes. Different methodologies exist for the diverse types of knowledge and can be applied to manage knowledge efficiently. Since KI is a component of knowledge management, it is essential to consider the circulation of the various knowledge types when trying to avoid inertia. The theory of KI studies the extent to which an organization's problem-solving ability is inhibited. Numerous factors can act as enablers or inhibitors of the problem-solving abilities of an individual or an organization . Knowledge inertia in the context of problem solving may therefore require inputs from all these diverse knowledge types, or it may require learning, new thinking, and new experience. The emergence of new ideas to supplement existing knowledge, and the assimilation of those ideas, can help in avoiding the use of stagnant, outdated information when attempting to solve problems. [ 2 ] [ 5 ] | https://en.wikipedia.org/wiki/Knowledge_inertia
A knowledge production mode is a term from the sociology of science which refers to the way (scientific) knowledge is produced. So far, three modes have been conceptualized. Mode 1 production of knowledge is knowledge production motivated by scientific knowledge alone ( basic research ) which is not primarily concerned by the applicability of its findings. Mode 1 is founded on a conceptualization of science as separated into discrete disciplines (e.g., a biologist does not bother about chemistry). Mode 2 was coined in 1994 in juxtaposition to Mode 1 by Michael Gibbons , Camille Limoges , Helga Nowotny , Simon Schwartzman , Peter Scott and Martin Trow . [ 1 ] In Mode 2, multidisciplinary teams are brought together for short periods of time to work on specific problems in the real world for knowledge production ( applied research ) in the knowledge society . Mode 2 can be explained by the way research funds are distributed among scientists and how scientists focus on obtaining these funds in terms of five basic features: knowledge produced in the context of application; transdisciplinarity; heterogeneity and organizational diversity; social accountability and reflexivity; and quality control. [ 2 ] [ 3 ] Subsequently, Carayannis and Campbell described a Mode 3 knowledge in 2006. [ 4 ]
Gibbons and colleagues argued that a new form of knowledge production began emerging in the mid-20th century that was context-driven, problem-focused and interdisciplinary. It involved multidisciplinary teams that worked together for short periods of time on specific problems in the real world. Gibbons and his colleagues labelled this "Mode 2" knowledge production. He and his colleagues distinguished this from traditional research, labelled 'Mode 1', which is academic, investigator-initiated and discipline-based knowledge production. [ 1 ] [ 5 ] In support, Limoges wrote, "We now speak of 'context-driven' research, meaning 'research carried out in a context of application, arising from the very work of problem solving and not governed by the paradigms of traditional disciplines of knowledge." [ 6 ] John Ziman drew a similar distinction between academic science and post-academic science, [ 7 ] and in 2001 Helga Nowotny , Peter Scott and Michael Gibbons extended their analysis to the implications of Mode 2 knowledge production for society. [ 8 ]
Mode 1 is characterized by theory building and testing within a discipline towards the aim of universal knowledge , while Mode 2 is characterized by knowledge produced for application. In the type of knowledge acquired, Mode 1 knowledge is universal law , primarily cognitive, while Mode 2 knowledge is particular and situational, and in Mode 1 data is context free but in Mode 2 contextually embedded. In Mode 1, the knowledge is validated by logic and measurement , together with consistency of prediction and control, while in Mode 2 knowledge is validated by experiential, collaborative, and transdisciplinary processes. In Mode 1, the researcher's role is to be a detached, neutral observer, while in Mode 2 the researcher is a socially accountable, immersed and reflexive actor or change agent. [ 5 ]
Carayannis and Campbell describe a Mode 3 knowledge, which emphasizes the coexistence and co-development of diverse knowledge and innovation modes, at the individual (micro or local), structural and organizational (meso or institutional), and systemic (macro or global) levels. It describes mutual interdisciplinary and transdisciplinary knowledge via concepts such as, at the micro level, creative milieus and entrepreneurs and employees, at the meso level, knowledge clusters, innovation networks, entrepreneurial universities, and academic firms, and at the macro level, the quadruple and quintuple innovation helix framework , the "democracy of knowledge" (knowledge within a democratic system), and "democratic capitalism" (capitalism within a democratic system). [ 9 ] [ 10 ] [ 11 ]
While the theory of knowledge production modes and especially the notion of Mode 2 knowledge production have attracted considerable interest, the theory has not been universally accepted in the terms put forth by Gibbons and colleagues. Scholars in science policy studies have pointed to three types of problems with the concept of Mode 2: its empirical validity, its conceptual strength, and its political value. [ 12 ]
Concerning the empirical validity of the Mode 2 claims, Etzkowitz and Leydesdorff [ 13 ] argue that:
The so-called Mode 2 is not new; it is the original format of science (or art) before its academic institutionalization in the 19th century. Another question to be answered is why Mode 1 has arisen after Mode 2: the original organizational and institutional basis of science, consisting of networks and invisible colleges. Where have these ideas, of the scientist as the isolated individual and of science separated from the interests of society, come from? Mode 2 represents the material base of science, how it actually operates. Mode 1 is a construct, built upon that base in order to justify autonomy for science, especially in an earlier era when it was still a fragile institution and needed all the help it could get (references omitted).
Thus, Mode 1 is essentially a theoretical construct, not a description of actual scientific research, as the boundaries between different disciplines and "basic" and "applied research" have always been blurred. [ 14 ] In the same article, Etzkowitz and Leydesdorff [ 15 ] use the notion of the triple helix of the nation state (government), academia (university) and industry to explain innovation, the development of new technology and knowledge transfer. Etzkowitz and Leydesdorff argue, "The Triple Helix overlay provides a model at the level of social structure for the explanation of Mode 2 as an historically emerging structure for the production of scientific knowledge, and its relation to Mode 1." [ 16 ]
Steve Fuller similarly criticized the "modists" view of the history of science for wrongly giving the impression that Mode 1 dates back to the seventeenth-century Scientific Revolution whereas Mode 2 is traced to the end of either World War II or the Cold War, when in fact the two modes were institutionalized within a generation of each other (the third and fourth quarters of the nineteenth century, respectively). Fuller claims that the Kaiser Wilhelm Institutes in Germany, jointly funded by the state, industry and the universities, predated today's "triple helix" institutions by an entire century. [ 17 ]
Regarding the conceptual strength of Mode 2, it has been argued that the coherence of its five features is questionable, as there might be a great deal of multidisciplinary, application-oriented research that does not show organizational diversity or novel types of quality control. [ 18 ] Moreover, Mode 2 lends itself to a normative reading, and authors have criticized the way Gibbons and his co-authors seem to blend descriptive and normative elements. According to Godin, the Mode 2 approach is more a political ideology than a descriptive theory. [ 19 ] Similarly, Shinn complains: "Instead of theory or data, the New Production of Knowledge—both book and concept—seems tinged with political commitment". [ 20 ]
One of the fields which has implemented mode-based knowledge production research most enthusiastically is that of management and organization studies. MacLean, MacIntosh and Grant offer a review of Mode 2 management research, [ 21 ] while MacIntosh, Bonnet, and Eikeland review the ways in which Mode 2-influenced management research affects the lives of those working in organizations; [ 22 ] Mode 2's implications have also been considered in terms of business processes. [ 23 ] The role of the different knowledge production modes has been considered in diverse fields, for example evidence-based policy making, [ 24 ] fisheries, [ 25 ] entrepreneurship and innovation, [ 26 ] medical research, [ 27 ] science diplomacy , [ 28 ] sustainability science, [ 29 ] and working life research. [ 30 ] | https://en.wikipedia.org/wiki/Knowledge_production_modes
A known error is a software bug that has not been fixed, but whose root cause is known and which either has little disruptive impact on the end user or has a known workaround. [ 1 ]
Tested systems are often described as "free from known errors" in recognition that complex systems cannot be proven to be error free. [ 2 ]
In IT Operations known errors may be logged in a system's known error database ( KEDB ), which is then used to prioritize changes and to develop customer-support reference information where a workaround exists. [ 3 ] In the ITIL framework the KEDB is part of the Problem Management process.
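Schemas differ between tools, but a known-error record typically ties a symptom to its root cause and workaround. A hypothetical minimal record, sketched as a Python dictionary:

```python
# Hypothetical shape of a known-error record; real KEDB schemas vary by tool.
known_error = {
    "id": "KE-0042",
    "symptom": "Export to PDF fails for reports over 500 pages",
    "root_cause": "Renderer exhausts its in-memory buffer on large documents",
    "workaround": "Split the report and export it in chunks of at most 400 pages",
    "status": "open",                      # no fix scheduled; workaround documented
    "linked_incidents": ["INC-1314", "INC-1377"],
}
```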
| https://en.wikipedia.org/wiki/Known_error
In crystal growth , a Knudsen cell is an effusion evaporator source for elements with relatively low partial pressure (e.g. Ga, Al, Hg, As). Because it is easy to control the temperature of the evaporating material in Knudsen cells, they are commonly used in molecular-beam epitaxy .
The Knudsen effusion cell was developed by Martin Knudsen (1871–1949). A typical Knudsen cell contains a crucible (made of pyrolytic boron nitride , quartz , tungsten or graphite ), heating filaments (often made of metal tantalum ), water cooling system, heat shields , and an orifice shutter.
The Knudsen cell is used to measure the vapor pressure of solids with very low vapor pressures. Such a solid forms a vapor at low pressure by sublimation . The vapor slowly effuses through the pinhole, and the loss of mass is proportional to the vapor pressure and can be used to determine this pressure. [ 1 ] The heat of sublimation can also be determined by measuring the vapor pressure as a function of temperature, using the Clausius–Clapeyron relation . [ 2 ]
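A sketch of the vapor-pressure calculation, assuming the standard Hertz–Knudsen effusion relation p = (Δm / (A·t)) · √(2πRT/M), the usual working equation for Knudsen effusion measurements (an assumption here, as the passage above does not spell the formula out); all numbers are hypothetical.

```python
from math import pi, sqrt

R = 8.314  # J/(mol*K)

def vapor_pressure(mass_loss_kg, orifice_area_m2, time_s, temp_K, molar_mass_kg):
    """Hertz-Knudsen effusion relation: p = (dm/(A*t)) * sqrt(2*pi*R*T/M)."""
    mass_flux = mass_loss_kg / (orifice_area_m2 * time_s)
    return mass_flux * sqrt(2 * pi * R * temp_K / molar_mass_kg)

# Hypothetical run: 1.2 mg effused in 2 h through a 1 mm diameter orifice at
# 400 K, for a substance with molar mass 100 g/mol:
area = pi * (0.5e-3) ** 2
print(vapor_pressure(1.2e-6, area, 7200, 400.0, 0.100))  # ~0.1 Pa
```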
| https://en.wikipedia.org/wiki/Knudsen_cell
In fluid dynamics , the Knudsen equation is used to describe how gas flows through a tube in free molecular flow . When the mean free path of the molecules in the gas is larger than or equal to the diameter of the tube , the molecules will interact more often with the walls of the tube than with each other. For typical tube dimensions, this occurs only in high or ultrahigh vacuum.
The equation was developed by Martin Hans Christian Knudsen (1871–1949), a Danish physicist who taught and conducted research at the Technical University of Denmark .
For a cylindrical tube, the Knudsen equation is: [ 1 ]
where:
For nitrogen (or air) at room temperature, the conductivity C {\displaystyle C} (in liters per second) of a tube can be calculated from this equation: [ 2 ] | https://en.wikipedia.org/wiki/Knudsen_equation |
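As a point of reference, the sketch below uses the standard long-tube molecular-flow conductance, C = (π/12)·v̄·d³/l with v̄ the mean molecular speed; this standard form is an assumption quoted from general vacuum practice, not necessarily the exact expression cited above.

```python
from math import pi, sqrt

R = 8.314  # J/(mol*K)

def tube_conductance(diameter_m, length_m, temp_K=293.0, molar_mass_kg=0.029):
    """Molecular-flow conductance of a long round tube, C = (pi/12)*vbar*d^3/l.

    Standard long-tube result (assumed here); valid only for length >> diameter.
    Returns conductance in m^3/s.
    """
    v_bar = sqrt(8 * R * temp_K / (pi * molar_mass_kg))  # mean molecular speed
    return pi * v_bar * diameter_m ** 3 / (12 * length_m)

# Air at room temperature, tube of 2 cm diameter and 100 cm length:
print(tube_conductance(0.02, 1.0) * 1000, "L/s")  # ~0.97, i.e. C ~ 12.1*d^3/l (cm units)
```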
Free molecular flow describes the fluid dynamics of gas where the mean free path of the molecules is larger than the size of the chamber or of the object under test. For tubes/objects of the size of several cm, this means pressures well below 10 −3 mbar . This is also called the regime of high vacuum , or even ultra-high vacuum . This is opposed to viscous flow encountered at higher pressures. [ 1 ] The presence of free molecular flow can be calculated, at least in estimation, with the Knudsen number (Kn). [ 2 ] If Kn > 10, the system is in free molecular flow, [ 3 ] also known as Knudsen flow. [ 4 ] Knudsen flow has been defined as the transitional range between viscous flow and molecular flow, which is significant in the medium vacuum range where λ ≈ d. [ 5 ]
Gas flow can be grouped into four regimes: for Kn ≤ 0.001, flow is continuous and the Navier–Stokes equations are applicable; for 0.001 < Kn < 0.1, slip flow occurs; for 0.1 ≤ Kn < 10, transitional flow occurs; and for Kn ≥ 10, free molecular flow occurs. [ 6 ]
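These thresholds amount to a simple lookup; a minimal sketch using the values quoted above:

```python
def flow_regime(kn):
    """Classify gas flow by Knudsen number, using the thresholds quoted above."""
    if kn <= 0.001:
        return "continuum (Navier-Stokes applies)"
    if kn < 0.1:
        return "slip flow"
    if kn < 10:
        return "transitional flow"
    return "free molecular flow"

for kn in (1e-4, 0.01, 1.0, 50.0):
    print(kn, "->", flow_regime(kn))
```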
In free molecular flow, the pressure of the remaining gas can be considered effectively zero. Thus, boiling points do not depend on the residual pressure. The flow can be considered as individual particles moving in straight lines. Practically, the "vapor" cannot move around bends or into spaces behind obstacles, as the molecules simply hit the tube wall. This implies conventional pumps cannot be used, as they rely on viscous flow and fluid pressure. Instead, special sorption pumps , ion pumps and momentum transfer pumps, i.e. turbomolecular pumps , are used.
Free molecular flow occurs in various processes such as molecular distillation , ultra-high vacuum equipment such as particle accelerators , and naturally in outer space .
The definition of a free molecular flow depends on the distance scale under consideration. For example, in the interplanetary medium , the plasma is in a free molecular flow regime in scales less than 1 AU ; thus, planets and moons are effectively under particle bombardment. However, on larger scales, fluid-like behavior is observed, because the probability of collisions between particles becomes significant.
Knudsen flow describes the movement of fluids with a Knudsen number near unity, that is, where the characteristic length in the flow space is of the same order of magnitude as the mean free path . Depending on the source, a range of 0.1 < Kn < 10 is given for Knudsen flow. Other names for this flow regime are intermediate, transitional, or slip flow, since it represents a transition state between free molecular flow and viscous flow . Thus the flow of fluids under Knudsen flow conditions is established both by molecular phenomena and by the viscosity. [ 7 ]
For a gas passing through small holes in a thin wall in the Knudsen-flow regime, the number of molecules that pass through a hole is proportional to the pressure of the gas and inversely proportional to the square root of its molecular mass. It is therefore possible to effect a partial separation of a mixture of gases if the components have different molecular masses. The technique is used to separate isotopic mixtures , such as uranium , using gaseous diffusion through porous membranes. [ 8 ] It has also been successfully demonstrated for use in hydrogen production , as a technique for separating hydrogen from the gaseous product mixture created when water is heated at high temperatures using solar or other energy sources. [ 9 ]
| https://en.wikipedia.org/wiki/Knudsen_flow
A Knudsen gas is a gas in a state of such low density that the average distance travelled by the gas molecules between collisions ( mean free path ) is greater than the diameter of the receptacle that contains it. [ 1 ] If the mean free path is much greater than the diameter, the flow regime is dominated by collisions between the gas molecules and the walls of the receptacle, rather than intermolecular collisions with each other. [ 2 ] It is named after Martin Knudsen .
For a Knudsen gas, the Knudsen number must be greater than 1. The Knudsen number can be defined as:
K n = λ L {\displaystyle {\rm {{Kn}={\frac {\lambda }{L}}}}}
where
λ {\displaystyle \lambda } is the mean free path [m]
L {\displaystyle L} is the diameter of the receptacle [m].
When 10 − 1 < K n < 10 {\displaystyle 10^{-1}<{\rm {{Kn}<10}}} , the flow regime of the gas is transitional flow . In this regime the intermolecular collisions between gas particles are not yet negligible compared to collisions with the wall. However when K n > 10 {\displaystyle {\rm {{Kn}>10}}} , the flow regime is free molecular flow , so the intermolecular collisions between the particles are negligible compared to the collisions with the wall. [ 3 ]
For example, consider a receptacle of air at room temperature and pressure, in which the mean free path is 68 nm. [ 4 ] If the diameter of the receptacle is less than 68 nm, the Knudsen number would be greater than 1, and this sample of air would be considered a Knudsen gas. It would not be a Knudsen gas if the diameter of the receptacle were greater than 68 nm.
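A small sketch of that check, assuming the standard hard-sphere mean free path λ = k_B·T / (√2·π·d²·p); the effective molecular diameter of about 3.6 × 10⁻¹⁰ m used below is an assumed value that roughly reproduces the quoted 68 nm figure for air.

```python
from math import pi, sqrt

K_B = 1.380649e-23  # J/K

def mean_free_path(temp_K, pressure_Pa, molecule_diameter_m):
    """Hard-sphere mean free path: lambda = k_B*T / (sqrt(2)*pi*d^2*p)."""
    return K_B * temp_K / (sqrt(2) * pi * molecule_diameter_m ** 2 * pressure_Pa)

def is_knudsen_gas(receptacle_m, temp_K, pressure_Pa, molecule_diameter_m):
    """Kn = lambda / L > 1 marks a Knudsen gas."""
    return mean_free_path(temp_K, pressure_Pa, molecule_diameter_m) / receptacle_m > 1

d_air = 3.6e-10  # assumed effective hard-sphere diameter of an "air" molecule
print(mean_free_path(293.0, 101325.0, d_air))          # ~6.9e-8 m, i.e. ~68 nm
print(is_knudsen_gas(50e-9, 293.0, 101325.0, d_air))   # True: 50 nm receptacle
print(is_knudsen_gas(1e-6, 293.0, 101325.0, d_air))    # False: 1 um receptacle
```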
| https://en.wikipedia.org/wiki/Knudsen_gas
The Knudsen layer , also known as evaporation layer, is the thin layer of vapor near a liquid or solid. It is named after Danish physicist Martin Knudsen (1871–1949).
At the interface of a vapor and a liquid/solid, the gas interaction with the liquid/solid dominates the gas behavior, and the gas is, very locally, not in equilibrium. [ 1 ] This region, several mean free path lengths thick, is called the Knudsen layer. [ 2 ]
The Knudsen layer thickness can be approximated by l c {\displaystyle l_{c}} , given by [ 3 ]
{\displaystyle l_{c}={\frac {kT_{s}}{{\sqrt {2}}\pi d^{2}p_{s}}}}
where k {\displaystyle k} is the Boltzmann constant , T s {\displaystyle T_{s}} is the temperature, d {\displaystyle d} is the molecular diameter and p s {\displaystyle p_{s}} is the pressure.
One application of the Knudsen layer is in the comae of comets . It has been used especially in the coma chemistry model (ComChem model). [ 4 ]
| https://en.wikipedia.org/wiki/Knudsen_layer
The Knudsen number ( Kn ) is a dimensionless number defined as the ratio of the molecular mean free path length to a representative physical length scale . This length scale could be, for example, the radius of a body in a fluid. The number is named after Danish physicist Martin Knudsen (1871–1949).
The Knudsen number helps determine whether statistical mechanics or the continuum mechanics formulation of fluid dynamics should be used to model a situation. If the Knudsen number is near or greater than one, the mean free path of a molecule is comparable to a length scale of the problem, and the continuum assumption of fluid mechanics is no longer a good approximation. In such cases, statistical methods should be used.
The Knudsen number is a dimensionless number defined as
{\displaystyle \mathrm {Kn} ={\frac {\lambda }{L}}}
where λ {\displaystyle \lambda } is the mean free path [m] and L {\displaystyle L} is the representative physical length scale [m].
The representative length scale considered, L {\displaystyle L} , may correspond to various physical traits of a system, but most commonly relates to a gap length over which thermal transport or mass transport occurs through a gas phase. This is the case in porous and granular materials, where the thermal transport through a gas phase depends highly on its pressure and the consequent mean free path of molecules in this phase. [ 1 ] For a Boltzmann gas , the mean free path may be readily calculated, so that
{\displaystyle \lambda ={\frac {k_{\text{B}}T}{{\sqrt {2}}\pi d^{2}p}}}
where k B {\displaystyle k_{\text{B}}} is the Boltzmann constant, T {\displaystyle T} is the temperature, d {\displaystyle d} is the particle diameter, and p {\displaystyle p} is the pressure.
If the temperature is increased, but the volume kept constant, then the Knudsen number (and the mean free path) doesn't change (for an ideal gas ). In this case, the density stays the same. If the temperature is increased, and the pressure kept constant, then the gas expands and therefore its density decreases. In this case, the mean free path increases and so does the Knudsen number. Hence, it may be helpful to keep in mind that the mean free path (and therefore the Knudsen number) is really dependent on the thermodynamic variable density (proportional to the reciprocal of density), and only indirectly on temperature and pressure.
For particle dynamics in the atmosphere , and assuming standard temperature and pressure , i.e. 0 °C and 1 atm, we have λ {\displaystyle \lambda } ≈ 8 × 10 −8 m (80 nm).
The Knudsen number can be related to the Mach number and the Reynolds number .
Using the dynamic viscosity
{\displaystyle \mu ={\tfrac {1}{2}}\rho {\bar {c}}\lambda }
with the average molecule speed (from the Maxwell–Boltzmann distribution )
{\displaystyle {\bar {c}}={\sqrt {\frac {8k_{\text{B}}T}{\pi m}}}}
the mean free path is determined as follows: [ 2 ]
{\displaystyle \lambda ={\frac {\mu }{\rho }}{\sqrt {\frac {\pi m}{2k_{\text{B}}T}}}}
Dividing through by L (some characteristic length), the Knudsen number is obtained:
{\displaystyle \mathrm {Kn} ={\frac {\mu }{\rho L}}{\sqrt {\frac {\pi m}{2k_{\text{B}}T}}}}
where k B {\displaystyle k_{\text{B}}} is the Boltzmann constant, T {\displaystyle T} is the temperature, ρ {\displaystyle \rho } is the fluid density, and m {\displaystyle m} is the molecular mass.
The dimensionless Mach number can be written as
{\displaystyle \mathrm {Ma} ={\frac {U_{\infty }}{c_{\text{s}}}}}
where the speed of sound is given by
{\displaystyle c_{\text{s}}={\sqrt {\frac {\gamma RT}{M}}}={\sqrt {\frac {\gamma k_{\text{B}}T}{m}}}}
where U ∞ {\displaystyle U_{\infty }} is the freestream speed, R {\displaystyle R} is the universal gas constant, M {\displaystyle M} is the molar mass, and γ {\displaystyle \gamma } is the ratio of specific heats.
The dimensionless Reynolds number can be written as
{\displaystyle \mathrm {Re} ={\frac {\rho U_{\infty }L}{\mu }}}
Dividing the Mach number by the Reynolds number
{\displaystyle {\frac {\mathrm {Ma} }{\mathrm {Re} }}={\frac {\mu }{\rho Lc_{\text{s}}}}={\frac {\mu }{\rho L}}{\sqrt {\frac {m}{\gamma k_{\text{B}}T}}}}
and multiplying by γ π 2 {\displaystyle {\sqrt {\frac {\gamma \pi }{2}}}} yields the Knudsen number:
{\displaystyle {\frac {\mathrm {Ma} }{\mathrm {Re} }}{\sqrt {\frac {\gamma \pi }{2}}}={\frac {\mu }{\rho L}}{\sqrt {\frac {\pi m}{2k_{\text{B}}T}}}=\mathrm {Kn} }
The Mach, Reynolds and Knudsen numbers are therefore related by
{\displaystyle \mathrm {Kn} ={\frac {\mathrm {Ma} }{\mathrm {Re} }}{\sqrt {\frac {\gamma \pi }{2}}}}
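A quick numerical check of this relation, with representative (hypothetical) air-like values:

```python
from math import pi, sqrt

K_B = 1.380649e-23  # J/K

# Hypothetical but representative air-like values:
T, rho, mu = 300.0, 1.2, 1.8e-5   # temperature (K), density (kg/m^3), viscosity (Pa*s)
m, gamma = 4.8e-26, 1.4           # molecular mass (kg), ratio of specific heats
L, U = 1e-6, 100.0                # characteristic length (m), flow speed (m/s)

# Knudsen number directly from the mean free path expression above:
kn_direct = mu / (rho * L) * sqrt(pi * m / (2 * K_B * T))

# Knudsen number via the Mach and Reynolds numbers:
c_s = sqrt(gamma * K_B * T / m)   # speed of sound
ma, re = U / c_s, rho * U * L / mu
kn_from_ma_re = ma / re * sqrt(gamma * pi / 2)

print(kn_direct, kn_from_ma_re)   # the two values agree by construction
```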
The Knudsen number can be used to determine the rarefaction of a flow: [ 3 ] [ 4 ]
This regime classification is empirical and problem dependent but has proven useful to adequately model flows. [ 3 ] [ 6 ]
Problems with high Knudsen numbers include the calculation of the motion of a dust particle through the lower atmosphere and the motion of a satellite through the exosphere . One of the most widely used applications for the Knudsen number is in microfluidics and MEMS device design where flows range from continuum to free-molecular. [ 3 ] In recent years, it has been applied in other disciplines such as transport in porous media, e.g., petroleum reservoirs. [ 4 ] Movements of fluids in situations with a high Knudsen number are said to exhibit Knudsen flow , also called free molecular flow . [ citation needed ]
Airflow around an aircraft such as an airliner has a low Knudsen number, placing it firmly in the realm of continuum mechanics. Using the Knudsen number, an adjustment for Stokes' law can be made via the Cunningham correction factor , a drag-force correction that accounts for slip in small particles (i.e., d p < 5 μm). The flow of water through a nozzle will usually be a situation with a low Knudsen number. [ 5 ]
Mixtures of gases with different molecular masses can be partly separated by sending the mixture through small holes in a thin wall, because the number of molecules that pass through a hole is proportional to the pressure of the gas and inversely proportional to the square root of its molecular mass. The technique has been used to separate isotopic mixtures, such as uranium , using porous membranes. [ 7 ] It has also been successfully demonstrated for use in hydrogen production from water. [ 8 ]
The Knudsen number also plays an important role in thermal conduction in gases. For insulation materials, for example, where gases are contained under low pressure, the Knudsen number should be as high as possible to ensure low thermal conductivity . [ 9 ] | https://en.wikipedia.org/wiki/Knudsen_number |
The Knudsen paradox has been observed in experiments of channel flow with varying channel width or, equivalently, different pressures. [ 1 ] If the normalized mass flux through the channel is plotted against the Knudsen number based on the channel width, a distinct minimum is observed around K n = 0.8 {\displaystyle Kn=0.8} . This behaviour is paradoxical because, based on the Navier–Stokes equations , one would expect the mass flux to decrease with increasing Knudsen number. The minimum can be understood intuitively by considering the two extreme cases of very small and very large Knudsen number. For very small Kn the viscosity vanishes and a fully developed steady-state channel flow shows infinite flux. On the other hand, the particles stop interacting for large Knudsen numbers. Because of the constant acceleration due to the external force, the steady state again shows infinite flux. [ 2 ]
| https://en.wikipedia.org/wiki/Knudsen_paradox
Knut Rønningen (20 July 1938 – 3 February 2020) was a Norwegian biotechnologist.
He was born in Tylldalen . He studied at the Norwegian College of Agriculture , Edinburgh and Cornell University , before finally taking his dr.agric. degree at the Norwegian College of Agriculture in 1970. He became a professor at the Swedish University of Agricultural Sciences in 1972, then at the Norwegian School of Veterinary Science in 1981. Here he also served as prorector from 1989 to 1995. [ 1 ]
He was a member of the Norwegian Academy of Science and Letters from 1987. [ 2 ] Among his board memberships, he chaired the board of Matforsk . [ 1 ]
| https://en.wikipedia.org/wiki/Knut_Rønningen
Simpath is an algorithm introduced by Donald Knuth that constructs a zero-suppressed decision diagram (ZDD) representing all simple paths between two vertices in a given graph. [ 1 ] [ 2 ]
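Simpath's output is a compressed representation of the whole path family. For intuition only, the sketch below enumerates the same family of simple paths by naive depth-first search; this is explicitly not Knuth's algorithm, since Simpath builds a ZDD that represents the set without listing it, which can be exponentially more compact.

```python
def simple_paths(graph, s, t):
    """Enumerate all simple s-t paths in an undirected graph (adjacency dict).

    A naive DFS stand-in for illustration; Knuth's Simpath represents the
    same family compactly as a zero-suppressed decision diagram instead.
    """
    paths, stack = [], [(s, [s])]
    while stack:
        node, path = stack.pop()
        if node == t:
            paths.append(path)
            continue
        for nxt in graph[node]:
            if nxt not in path:          # "simple": no repeated vertices
                stack.append((nxt, path + [nxt]))
    return paths

square = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}  # a 4-cycle
print(simple_paths(square, 1, 4))  # the two paths [1, 2, 4] and [1, 3, 4]
```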
| https://en.wikipedia.org/wiki/Knuth's_Simpath_algorithm
In number theory , an n - Knödel number for a given positive integer n is a composite number m with the property that each i < m coprime to m satisfies i m − n ≡ 1 ( mod m ) {\displaystyle i^{m-n}\equiv 1{\pmod {m}}} . [ 1 ] The concept is named after Walter Knödel . [ citation needed ]
The set of all n -Knödel numbers is denoted K n . [ 1 ] The special case K 1 is the Carmichael numbers . [ 1 ] There are infinitely many n -Knödel numbers for a given n .
Due to Euler's theorem every composite number m is an n -Knödel number for n = m − φ ( m ) {\displaystyle n=m-\varphi (m)} where φ {\displaystyle \varphi } is Euler's totient function .
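A direct sketch of the definition, usable for checking small candidates:

```python
from math import gcd

def is_knoedel(m, n):
    """True if m is an n-Knödel number: m composite and i**(m-n) == 1 (mod m)
    for every i < m that is coprime to m."""
    composite = m > 3 and any(m % d == 0 for d in range(2, int(m ** 0.5) + 1))
    if not composite or m <= n:
        return False
    return all(pow(i, m - n, m) == 1 for i in range(1, m) if gcd(i, m) == 1)

# The 1-Knödel numbers are exactly the Carmichael numbers:
print([m for m in range(2, 600) if is_knoedel(m, 1)])  # [561]
print([m for m in range(2, 40) if is_knoedel(m, 2)])   # small 2-Knödel numbers
```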
| https://en.wikipedia.org/wiki/Knödel_number
The Kobon triangle problem is an unsolved problem in combinatorial geometry first stated by Kobon Fujimura (1903–1983). The problem asks for the largest number N ( k ) of nonoverlapping triangles whose sides lie on an arrangement of k lines . Variations of the problem consider the projective plane rather than the Euclidean plane, and require that the triangles not be crossed by any other lines of the arrangement. [ 1 ]
Saburo Tamura proved that the number of nonoverlapping triangles realizable by k {\displaystyle k} lines is at most ⌊ k ( k − 2 ) / 3 ⌋ {\displaystyle \lfloor k(k-2)/3\rfloor } . G. Clément and J. Bader proved more strongly that this bound cannot be achieved when k {\displaystyle k} is congruent to 0 or 2 (mod 6). [ 2 ] The maximum number of triangles is therefore at most one less in these cases. The same bounds can be equivalently stated, without use of the floor function , as: { 1 3 k ( k − 2 ) when k ≡ 3 , 5 ( mod 6 ) ; 1 3 ( k + 1 ) ( k − 3 ) when k ≡ 0 , 2 ( mod 6 ) ; 1 3 ( k 2 − 2 k − 2 ) when k ≡ 1 , 4 ( mod 6 ) . {\displaystyle {\begin{cases}{\frac {1}{3}}k(k-2)&{\text{when }}k\equiv 3,5{\pmod {6}};\\{\frac {1}{3}}(k+1)(k-3)&{\text{when }}k\equiv 0,2{\pmod {6}};\\{\frac {1}{3}}(k^{2}-2k-2)&{\text{when }}k\equiv 1,4{\pmod {6}}.\end{cases}}}
Solutions yielding this number of triangles are known when k {\displaystyle k} is 3, 4, 5, 6, 7, 8, 9, 13, 15, 17, 19, 21, 25 or 29. [ 3 ] [ 4 ] [ 5 ] For k = 10, 11 and 12, the best solutions known reach a number of triangles one less than this upper bound.
Furthermore, the upper bound for even values of k {\displaystyle k} can be improved: ⌊ k ( k − 7 / 3 ) / 3 ⌋ {\displaystyle \lfloor k(k-7/3)/3\rfloor } . [ 4 ] This bound can be reached for 10, 12 and 16. [ 3 ]
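Putting the quoted bounds together gives a small helper (a sketch of the bounds stated above, not a published reference implementation):

```python
def kobon_upper_bound(k):
    """Best upper bound on N(k) quoted above: Tamura's floor(k*(k-2)/3),
    minus 1 when k = 0 or 2 (mod 6) (Clement and Bader), and additionally
    floor(k*(k - 7/3)/3) for even k."""
    bound = k * (k - 2) // 3
    if k % 6 in (0, 2):
        bound -= 1
    if k % 2 == 0:
        bound = min(bound, k * (3 * k - 7) // 9)  # k*(k - 7/3)/3, floored
    return bound

print([kobon_upper_bound(k) for k in range(3, 14)])
# [1, 2, 5, 7, 11, 15, 21, 25, 33, 38, 47]
```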
The following bounds are known:
The version of the problem in the projective plane allows more triangles. In this version, it is convenient to include the line at infinity as one of the given lines, after which the triangles appear in three forms:
For instance, an arrangement of five finite lines forming a pentagram , together with a sixth line at infinity, has ten triangles: five in the pentagram, and five more bounded by pairs of rays.
D. Forge and J. L. Ramirez Alfonsin provided a method for going from an arrangement in the projective plane with k > 3 {\displaystyle k>3} lines and 1 3 k ( k − 1 ) {\displaystyle {\tfrac {1}{3}}k(k-1)} triangles (the maximum possible for k > 3 {\displaystyle k>3} ), with certain additional properties, to another solution with K = 2 k − 1 {\displaystyle K=2k-1} lines and 1 3 K ( K − 1 ) {\displaystyle {\tfrac {1}{3}}K(K-1)} triangles (again maximum), with the same additional properties. As they observe, it is possible to start this method with the projective arrangement of six lines and ten triangles described above, producing optimal projective arrangements whose numbers of lines are 6, 11, 21, 41, 81, …, that is, numbers of the form 5 ⋅ 2 i + 1 {\displaystyle 5\cdot 2^{i}+1} .
Thus, in the projective case, there are infinitely many different numbers of lines for which an optimal solution is known. [ 1 ] | https://en.wikipedia.org/wiki/Kobon_triangle_problem |
Koch's postulates ( / k ɒ x / KOKH ) [ 2 ] are four criteria designed to establish a causal relationship between a microbe and a disease . The postulates were formulated by Robert Koch and Friedrich Loeffler in 1884, based on earlier concepts described by Jakob Henle , and the statements were refined and published by Koch in 1890. [ 3 ] Koch applied the postulates to describe the etiology of cholera and tuberculosis , both of which are now ascribed to bacteria . The postulates have been controversially generalized to other diseases. More modern concepts in microbial pathogenesis cannot be examined using Koch's postulates, including viruses (which are obligate intracellular parasites ) and asymptomatic carriers . They have largely been supplanted by other criteria such as the Bradford Hill criteria for infectious disease causality in modern public health and the Molecular Koch's postulates for microbial pathogenesis. [ 4 ]
Koch's four postulates are: [ 5 ]
1. The microorganism must be found in abundance in all organisms suffering from the disease, but should not be found in healthy organisms.
2. The microorganism must be isolated from a diseased organism and grown in pure culture.
3. The cultured microorganism should cause disease when introduced into a healthy organism.
4. The microorganism must be reisolated from the inoculated, diseased experimental host and identified as identical to the original specific causative agent.
However, Koch later abandoned the universalist requirement of the first postulate when he discovered asymptomatic carriers of cholera [ 6 ] and, later, of typhoid fever . [ 7 ] Subclinical infections and asymptomatic carriers are now known to be a common feature of many infectious diseases, especially viral diseases such as polio , herpes simplex , HIV/AIDS , hepatitis C , and COVID-19 . For example, poliovirus only causes paralysis in a small percentage of those infected. [ 7 ]
The second postulate does not apply to pathogens incapable of growing in pure culture. For example, viruses are dependent on entering and hijacking host cells to use their resources for growth and reproduction, incapable of growing alone. [ 8 ]
The third postulate specifies "should", rather than "must", because Koch's experiments with tuberculosis and cholera showed that not all organisms exposed to an infectious agent will acquire the infection. [ 9 ] Some individuals may avoid infection by maintaining their health for proper immune functioning, acquiring immunity from previous exposure or vaccination, or through genetic immunity, such as sickle cell trait conferring resistance to malaria . [ 10 ]
Other exceptions to Koch's postulates include evidence that some pathogens can cause several diseases, such as the varicella-zoster virus causing chickenpox and shingles . Conversely, diseases like meningitis can be caused by a variety of bacterial, viral, fungal, and parasitic pathogens. [ 11 ]
Robert Koch developed the postulates based on pathogens that could be isolated using 19th century methods. [ 12 ] Nonetheless, Koch was already aware that the causative agent of cholera, Vibrio cholerae , could be found in both sick and healthy people, invalidating his first postulate. [ 6 ] [ 9 ] Since the 1950s, Koch's postulates have been treated as obsolete for epidemiology research, but they are still taught to emphasize historical approaches to determining the microbial causative agents of disease. [ 3 ] [ 13 ]
Koch formulated his postulates too early in the history of virology to recognize that many viruses do not cause illness in all infected individuals, a requirement of the first postulate. HIV/AIDS denialism includes claims that the viral spread of HIV/AIDS violates Koch's second postulate, despite that criticism being applicable to all viruses. Nonetheless, HIV/AIDS fulfills all of the other postulates with all AIDS patients being HIV-positive and laboratory workers exposed to HIV eventually developing the same symptoms of AIDS. [ 14 ] Similarly, evidence that some oncovirus infections can contribute to cancers has been unfairly criticized for failing to fulfill criteria developed before viruses were fully understood as host-dependent. [ 15 ]
The bacterial pathogen Staphylococcus aureus showcases lethal synergy with the opportunistic fungus Candida albicans by using the latter's extracellular matrix to protect itself from host immune cells and antibiotic compounds. [ 16 ] Biofilm -producing species clump individual cells together on solid or liquid surfaces; they grow poorly in pure culture, and those that survive may be too weak to cause disease if transferred to a healthy organism, violating the second and third postulates. [ 17 ]
Physicians Barry Marshall and Robin Warren argued that Helicobacter pylori contributes to peptic ulcer disease , but throughout the early 1980s the scientific community rejected their findings because not all H. pylori infections cause peptic ulcers, violating the first postulate. [ 18 ]
Priority effects are another major concern: the success of pathogenic bacteria depends on the other species already colonizing a habitat, since the earliest resident microbes establish the environmental conditions and can provide colonization resistance against certain species. [ 19 ]
In 1988, microbiologist Stanley Falkow developed a set of three Molecular Koch's postulates for identifying the microbial genes encoding virulence factors . First, the phenotype of a disease symptom must be associated with a specific genotype only found in pathogenic strains. Second, that symptom should not be present when the associated gene is inactivated. Third, the symptom should return when the gene is reactivated. [ 20 ]
Modern DNA sequencing allows researchers to identify whether the genes of specific pathogens are only present in infected hosts, offering a modified approach for determining correlation between viruses and certain diseases. Since viruses cannot grow in axenic cultures, requiring a host cell to hijack for growth and replication, scientists are limited to analyzing which viral genes contribute to host diseases. Additionally, this method has supported correlations between prions (pathogenic misfolded proteins) and conditions like Creutzfeldt–Jakob disease because Koch's postulates are focused on foreign microorganisms, rather than the results of host mutations. [ 21 ] | https://en.wikipedia.org/wiki/Koch's_postulates |
The Koch reaction is an organic reaction for the synthesis of tertiary carboxylic acids from alcohols or alkenes and carbon monoxide . Koch acids commonly produced industrially include pivalic acid , 2,2-dimethylbutyric acid and 2,2-dimethylpentanoic acid. [ 1 ] The Koch reaction employs carbon monoxide as a reagent and can therefore be classified as a carbonylation . The carbonylated product is converted to a carboxylic acid, so in this respect the Koch reaction can also be classified as a carboxylation .
Pivalic acid is produced from isobutene using the Koch reaction, [ 2 ] as are several other branched carboxylic acids. An estimated 150,000 tonnes of "Koch acids" and their derivatives are produced annually. [ 2 ]
Koch–Haaf-type reactions have been used to carboxylate adamantanes. [ 3 ] [ 4 ] [ 5 ]
The reaction is a strongly acid -catalyzed carbonylation and typically proceeds under pressures of CO and at elevated temperatures. The commercially important synthesis of pivalic acid from isobutene operates near 50 °C and 5 MPa (50 atm). Generally the reaction is conducted with strong mineral acids such as sulfuric acid , HF , or phosphoric acid in combination with BF 3 . [ 6 ]
Formic acid , which readily decomposes to carbon monoxide in the presence of acids, can be used instead of carbon monoxide. This method is referred to as the Koch–Haaf reaction . This variation allows reactions to run near room temperature and atmospheric pressure . [ 7 ]
The mechanism has been intensively scrutinized. [ 8 ] It involves generation of a tertiary carbenium ion , which binds carbon monoxide. The resulting acylium ion is then hydrolysed to the tertiary carboxylic acid :
The carbenium ion can be produced either by protonation of an alkene or protonation/elimination of a tertiary alcohol:
Standard acid catalysts are sulfuric acid or a mixture of BF 3 and HF .
Although the use of acidic ionic liquids for the Koch reaction requires relatively high temperatures and pressures (8 MPa and 430 K in one 2006 study [ 9 ] ), acidic ionic solutions themselves can be reused with only a very slight decrease in yield, and the reactions can be carried out biphasically to ensure easy separation of products.
A large number of transition-metal carbonyl cation catalysts have also been investigated for use in Koch-like reactions: Cu(I), [ 10 ] Au(I) [ 11 ] and Pd(I) [ 12 ] carbonyl cation catalysts dissolved in sulfuric acid can allow the reaction to proceed at room temperature and atmospheric pressure. Use of a nickel tetracarbonyl catalyst with CO and water as the nucleophile is known as the Reppe carbonylation , and there are many variations on this type of metal-mediated carbonylation used in industry, particularly in the Monsanto and Cativa processes , which convert methanol to acetic acid using acid catalysts and carbon monoxide in the presence of metal catalysts.
Because of the use of strong mineral acids , industrial implementation of the Koch reaction is complicated by equipment corrosion , separation procedures for products, and the difficulty of managing large amounts of waste acid. Several acid resins [ 13 ] [ 14 ] and acidic ionic liquids [ 9 ] have been investigated to discover whether Koch acids can be synthesized under milder conditions.
Koch reactions can involve a large number of side products, although high yields are generally possible (Koch and Haaf reported yields of over 80% for several alcohols in their 1958 paper). Carbocation rearrangements , etherization (in case an alcohol is used as a substrate, instead of an alkene), and occasionally substrate C N+1 carboxylic acids are observed due to fragmentation and dimerization of carbon monoxide-derived carbenium ions, especially since each step of the reaction is reversible. [ 15 ] Alkyl sulfuric acids are also known to be possible side products, but are usually eliminated by the excess sulfuric acid used. | https://en.wikipedia.org/wiki/Koch_reaction |
The Kochi reaction is an organic reaction for the decarboxylation of carboxylic acids to alkyl halides with lead(IV) acetate and a lithium halide. [ 1 ]
The reaction is a variation of the Hunsdiecker reaction .
| https://en.wikipedia.org/wiki/Kochi_reaction
The French Louis Pasteur (1822–1895) and German Robert Koch (1843–1910) are the two greatest figures in medical microbiology and in establishing acceptance of the germ theory of disease (germ theory). [ 1 ] In 1882, fueled by national rivalry and a language barrier, the tension between Pasteur and the younger Koch erupted into an acute conflict. [ 1 ]
Pasteur had already discovered molecular chirality , investigated fermentation , refuted spontaneous generation , inspired Lister 's introduction of antisepsis to surgery, introduced pasteurization to France's wine industry, answered the silkworm diseases blighting France's silkworm industry, attenuated a Pasteurella species of bacteria to develop vaccine to chicken cholera (1879), and introduced anthrax vaccine (1881). [ 1 ]
Koch had transformed bacteriology by introducing the technique of pure culture , whereby he established the microbial cause of the disease anthrax (1876), had introduced both staining and solid culture plates to bacteriology (1881), had identified the microbial cause of tuberculosis (1882), had incidentally popularized Koch's postulates for identifying the microbial cause of a disease, and would later identify the microbial cause of cholera (1883).
Although Koch briefly, and thereafter his bacteriological followers, regarded a bacterial species' properties as unalterable, [ 2 ] Pasteur's modification of virulence to develop vaccines demonstrated this doctrine's falsity. [ 1 ] At an 1882 conference, a mistranslated term from French to German during Pasteur's lecture triggered Koch's indignation, whereupon Koch's two bacteriologist colleagues, Friedrich Loeffler and Georg Gaffky , published denigration of the entirety of Pasteur's research on anthrax since 1877. [ 1 ]
Germany had unified by way of its victory in the Franco-Prussian War (1870–71), seizing Alsace-Lorraine from France. Pasteur was professor in the University of Strasbourg , located in Alsace, where he married the daughter of the rector. Jean Baptiste Pasteur, the only son of Louis and Marie Pasteur, was a soldier in the Franco-Prussian War. The tone set by this war contributed to the rivalry between Koch and Pasteur. [ 1 ] The "German Problem", as Germany increasingly gained scientific, technological, and industrial dominance, fed tensions among European nations. [ 3 ] Germ theory's applications were embedded in the heightening quest by France, Germany, Britain, and Italy to colonize Africa and Asia with the aid of tropical medicine , [ 4 ] a new variant of colonial medicine, [ 5 ] while medical scientists in respective nations vied to lead advances. [ 6 ]
In 1863, influenced by Pasteur's research on fermentation, fellow Frenchman Casimir Davaine mostly explained the cause of anthrax, but Davaine's explanation was opposed by those who opposed the idea that infection with a microorganism could explain it. In 1840, Jakob Henle had proposed microorganism infections caused diseases, and in 1875 German botanist Ferdinand Cohn weighed in on a controversy in microbiology by declaring that the elementary unit was the cell and that each form of bacteria was constant and naturally divided from the other forms. Influenced by Henle and by Cohn, Koch developed a pure culture of the bacteria described by Davaine, traced its spore stage, inoculated it into animals, and showed it caused anthrax. Pasteur called this a "remarkable achievement". [ 7 ] In pure culture, bacteria tend to keep constant traits, and Koch reported having already observed constancy.
Pasteur undertook investigation, yet gave much credit to Davaine. Meanwhile, Pasteur's researchers always reported variation in their cultures. In 1879, Henri Toussaint identified a bacterial species involved in chicken cholera and named the genus in honor of Pasteur, Pasteurella . In Pasteur's laboratory, a culture of Pasteurella multocida was left out over a weekend exposed to air, and Pasteur and Emile Roux noticed upon return to the laboratory that its virulence to chickens was diminished. Pasteur applied the discovery to develop chicken cholera vaccine, introduced in a public experiment, an empirical challenge to the stance of Koch's bacteriologists that bacterial traits were unalterable. [ 1 ]
From 1878 to 1880, when publishing on anthrax, Pasteur referred to the bacteria by the name given it by Frenchman Davaine , but in one footnote called it " Bacillus anthracis of the Germans". [ 1 ] In July 1880 Toussaint reported developing a technique of chemical deactivation to produce anthrax vaccine that successfully protected dogs and cattle, and was praised by the Academy of Science, but Pasteur attacked the feat—chemical deactivation and not virulence attenuation to make a vaccine—as impossible. [ 8 ] Pasteur soon introduced his own anthrax vaccine in a highly successful public experiment, and entered commerce with it. [ 8 ] Pasteur was criticized by Koch and colleagues. [ 8 ] [ 9 ] (Pasteur had not used attenuation, but secretly used Toussaint's technique.) [ 8 ] [ 10 ] [ 11 ]
In 1883, responding to a cholera epidemic in Alexandria , Egypt, both Pasteur and Koch sent missions vying to identify its cause. Koch returned victorious, whereupon Pasteur switched research direction and began development of rabies vaccine. [ 6 ] As to public health, Koch's bacteriologists feuded with Max von Pettenkofer , whose miasmatic theory claimed the bacterium was but one causal factor among at least several. Von Pettenkofer stubbornly opposed water treatment, and the massive cholera epidemic in Hamburg, Germany, in 1892 devastated his position, after which German public health was grounded on Koch's bacteriology. [ 12 ] Meanwhile, Pasteur led the introduction of pasteurization in France.
Rabies , uncommon but excruciating and almost invariably fatal, was dreaded. Amid the anthrax vaccine's success, Pasteur introduced rabies vaccine (1885), the first human vaccine since Jenner 's smallpox vaccine (1796). On 6 July 1885, the vaccine was tested on 9-year-old Joseph Meister , who had been bitten by a rabid dog; Meister did not develop rabies, and Pasteur was called a hero. [ 13 ] (Even without vaccination, not everyone bitten by a rabid dog develops rabies.) After other apparently successful cases, donations poured in from across the globe, funding the establishment of the Pasteur Institute , the globe's first biomedical institute, which opened in Paris in 1888. [ 14 ]
The Pasteur Institute trained military physicians in colonial medicine, although the French government soon took over this role. [ 6 ] The success of Pasteur's modification of bacterial virulence inspired confidence in the universality of Pasteurian science, though Pasteur's researchers preferred the term microbiology over the term bacteriology . [ 6 ] Koch discouraged use of rabies vaccine, [ 1 ] whose production later became a premise for opening Pasteur Institutes abroad, as in Shanghai , China . [ 15 ] The first overseas Pasteur Institute was opened by Albert Calmette in Saigon in French Indochina in 1891, although Pasteur's nephew Adrien Loir was already planning to open one in Australia . [ 6 ]
In 1882, Koch reported identification of the tubercle bacillus as the cause of tuberculosis , [ 16 ] cementing germ theory. Koch took his research into a new direction— applied research —to develop a tuberculosis treatment and use the profits to found his own research institute, autonomous from government. [ 17 ] In 1890 Koch introduced the intended drug, tuberculin , but it soon proved ineffective, and accounts of deaths followed in news press. [ 18 ] Amid Koch's reluctance to disclose tuberculin's formula, Koch's reputation sustained damage, but Koch retained lasting acclaim and received the 1905 Nobel Prize in Physiology or Medicine "for his investigations and discoveries in relation to tuberculosis". [ 19 ] Koch accepted government's offer to direct the Institute for Infectious Diseases (1891), in Berlin , a prestigious position but not the kind of institute that Koch had sought. [ 17 ] It was later renamed the Robert Koch Institute , which remains a government organization.
The monomorphist doctrine of Koch's bacteriologists suggested public health interventions to eliminate bacteria, whereas Pasteur's acceptance of variation suggested attenuating bacterial virulence in the laboratory to develop vaccines. [ 1 ] Although inspired by Pasteur's applications suggesting medicine's potential, American physicians traveled to Germany to learn Koch's bacteriology as basic science , [ 20 ] though Pasteur emphasized the fuzzy boundary between basic science and applied science . [ 1 ]
From 1876 to 1878, the American William Henry Welch trained in pathology in Germany, and in 1879 opened America's first scientific laboratory, a pathology laboratory in Bellevue 's medical school in New York. [ 21 ] While in Germany, Welch had met John Shaw Billings , who had been appointed by Daniel Coit Gilman , the first president of the newly forming Johns Hopkins University , to plan Hopkins' hospital and medical school. [ 21 ] Named the medical school's first dean in 1883, [ 21 ] Welch promptly traveled for training in Koch's bacteriology, and returned to America eager to transform medicine with the "secrets of nature". [ 22 ] Hopkins medical school opened in 1894 with Welch emphasizing Koch's bacteriology, [ 22 ] which became the foundation of modern medicine. [ 1 ] [ 23 ]
As "dean of American medicine", William H Welch became the first scientific director of Rockefeller Institute for Medical Research (1901), and appointed his former Hopkins student Simon Flexner the first director of pathology and bacteriology laboratories. Aided by the " Flexner report ", published in 1910 while Welch was president of the American Medical Association , Welch's view of science and medicine became the national standard, [ 24 ] a transformation of American medical education completed around 1930. [ 25 ] As first dean of America's first public health school, founded in 1916 at Hopkins , Welch set the standard for public health, and with Simon Flexner exported the Hopkins model internationally.
Although Pasteur died in 1895, eventually over thirty official Pasteur Institutes opened across the globe. [ 26 ] Pasteur's team had planned in 1885 to open a rabies-treatment facility in St. Louis , Missouri , and an American Pasteur Institute in New York City , but the plans were abandoned, and America has never hosted an official Pasteur Institute. [ 26 ]
A number of American copycats appeared, however, starting with "Chicago Pasteur Institute" in 1890, and "New York Pasteur Institute" in 1891. [ 26 ] In 1897 a "Pasteur Institute" opened in Baltimore , in 1900 in Pittsburgh and St. Louis , in 1903 in Ann Arbor and Austin , and in 1904 perhaps in Philadelphia . [ 26 ] In 1908, Georgia Department of Public Health opened a "Pasteur Department" in Atlanta , California State Hygienic Laboratory opened a "Pasteur Division" in Berkeley , and a "Pasteur Institute" opened in Washington, D.C. [ 26 ]
In 1900, Paul Gibier , the French medical scientist who opened the "New York Pasteur Institute", died accidentally, but his nephew, George Gibier Rambaud , continued it on a reduced scale until he closed it when the US Medical Corps commissioned him overseas in 1918. [ 26 ] While MDs ascended in American public health, it was thought that "the greatest contribution of all, the foundation upon which modern sanitary science is built, was made by Pasteur." [ 27 ]
Koch was celebrated by the American medical community, including by Welch, when at last Koch visited America in 1908. Soon, however, America was influenced by the British and French view that although their denizens appreciated Germany's progress in science and arts, Germany was elitist and dismissive while socially and politically antiquated, so authoritarian and aggressive as to resemble medieval tyranny. [ 28 ]
In 1917, when America entered World War I (1914–18), the US government seized German-owned property and assets, including Bayer AG 's American trademarks and the 80% of Merck & Co's shares owned by George Merck . [ 29 ] [ 30 ] Welch exhibited gratuitous anti-German bias despite the debt of his own career, and thus of American medicine, to Germany, [ 31 ] especially to Koch's bacteriology. [ 23 ]
After World War II (1939–45), and more of the "German Problem", Merck & Co became the global leader in vaccinology .
Tuberculin 's main use rapidly became the detection of M. tuberculosis infection, a use that remains today, but this use soon revealed that in London 9 of 10 individuals were infected, whereas only 1 in 10 of the infected developed the disease. [ 32 ] In 1901 at the London Congress on Tuberculosis, Koch stated on theoretical grounds that M. bovis , which infects cows, was not transmissible to humans. [ 33 ] British attendees disagreed, and later Theobald Smith and the English Royal Commission empirically established that M. bovis was transmissible and could result in human disease. [ 33 ] Though widely considered ineffective, tuberculin remained in use as a treatment until the 1940s, and may have had some effectiveness. [ 34 ]
Milk pasteurization became popular in America around 1920. [ 26 ] In 1921 Albert Calmette and Camille Guérin of the Pasteur Institute introduced the tuberculosis vaccine BCG , though the virulence of its strains varied in the late 1920s. [ 35 ] BCG vaccine was not used in the public health of America, which virtually eliminated tuberculosis without it. BCG vaccine's effectiveness in preventing tuberculosis remains uncertain, [ 36 ] but it appears to confer nonspecific survival gains, [ 36 ] perhaps by preventing leprosy, and it is used as a cancer treatment. [ 37 ]
Pasteur had highlighted a new threat: microorganisms benign to humans passing among and multiplying in nonhuman animals while gaining new virulence for humans. That threat is thought still to loom, and to have materialized, approximately, with AIDS . [ 1 ] Although Koch's postulates are often inapplicable, they remain heuristic , and the authority of "fulfilling Koch's postulates" is still invoked in medical science, though often in modified form, [ 38 ] as in the identification of HIV-1 as the cause of AIDS or the identification of SARS coronavirus as the cause of SARS . [ 39 ] [ 40 ] [ 41 ]
Germ theory's stance that the "germ" was the disease's necessary and sufficient cause , the single factor both required and complete to result in the disease, proved false. [ 12 ] Germ theory gradually evolved to include other factors, whereupon it came to resemble miasmatic theory , which had had to recognize bacteria as a causal factor, and so the two competing explanations merged without a true, decisive victor. [ 12 ] Twentieth-century philosophy, inspired by revolutions in physics , the establishment of molecular biology , and advances in epidemiology , revealed that any claim of a single causal factor both necessary (required) and sufficient (complete), the cause, is untenable. [ 42 ] [ 43 ] The French-born microbiologist René Dubos , a biographer of Pasteur, discussed tuberculosis to illustrate disease's social causes and the failure of germ theory, [ 44 ] [ 45 ] whose apparent successes were aided by improvements in nutrition and living conditions but sparked scientific research that brought a wealth of new understandings. [ 46 ] | https://en.wikipedia.org/wiki/Koch–Pasteur_rivalry |
Kodaikanal mercury poisoning is a proven case of mercury contamination at the hill station of Kodaikanal , Tamil Nadu , India by Hindustan Unilever in the process of making mercury thermometers for export around the world. The exposé of the environmental abuse led to the closure of the factory in 2001 and opened up a series of issues in India such as corporate liability , corporate accountability and corporate negligence .
The mercury contamination in Kodaikanal originated at a thermometer factory owned by Hindustan Unilever . Unilever acquired the thermometer factory from cosmetics maker Pond's India Ltd. Pond's had moved the factory from the United States to India in 1982, after the plant owned there by its parent, Chesebrough-Pond's, had to be dismantled amid increased awareness of polluting industries in developed countries. In 1987, Pond's India and the thermometer factory went to Hindustan Unilever when it acquired Chesebrough-Pond's globally. [ 2 ]
The factory imported mercury from the United States and exported finished thermometers to markets in the United States and Europe. Around 2001, a number of workers at the factory began complaining of kidney and related ailments. Public interest groups such as the Tamil Nadu Alliance Against Mercury (TNAAC) alleged that the company had been disposing of mercury waste without following proper protocols. In early 2001, public interest groups unearthed a pile of broken glass thermometers with remains of mercury in part of the shola forest, which they suspected had come from the company. [ 3 ] In March, a public protest led by the local workers' union and the international environmental organisation Greenpeace forced the company to shut down the factory. Soon the company admitted that it had disposed of mercury-contaminated waste. [ 4 ] [ 5 ] The company said in its 2002 annual report and its latest Sustainability Report that it did not dump glass waste contaminated with mercury on the land behind its factory, but that a quantity of 5.3 metric tonnes of glass containing 0.15% residual mercury had been sold to a scrap recycler located about three kilometers from the factory, in breach of the company's procedures. Quoting a report prepared by an international environmental consultant, Unilever said there was no health effect on the workers of the factory [ 6 ] or any impact on the environment. [ 7 ] This is hotly contested by a book published by Pan Macmillan in 2023, Heavy Metal: How a Global Corporation Poisoned Kodaikanal , authored by veteran journalist-turned-public-policy leader Ameer Shahul . [ 8 ]
Once the factory was shut down, public interest groups demanded the return of the remaining mercury waste to the United States for recycling, remediation of the factory site, and redress of the workers' health complaints. Local groups and the workers' union, under the leadership of Greenpeace , made representations to the company, regulatory bodies, and the government, besides initiating legal action against the company. [ 9 ]
Greenpeace campaigner Ameer Shahul led the collaboration of public affairs groups and workers in forcing the company to collect 290 tonnes of dumped mercury waste from the shola forest and send it back to the United States for recycling in 2003. [ 10 ] [ 11 ] This was widely hailed by the media as 'reverse dumping'. [ 12 ] Later Greenpeace campaigners Ameer Shahul and Navroz Mody led the groups in lobbying for remediation of the site [ 13 ] and initiated an investigation by the Department of Atomic Energy of the Government of India , which found that the free mercury level in the atmosphere of Kodaikanal was 1,000 times higher than under normal conditions. [ 14 ] [ 15 ] [ 16 ] Analysis of water, sediment and fish samples collected from Kodaikanal Lake by a team of scientists of the Department of Atomic Energy showed elevated levels of mercury four years after the stoppage of mercury emissions. [ 14 ] A series of scientific studies have also been carried out by governmental and non-governmental organisations to determine the extent of damage caused to the environment and to the people who were exposed to mercury in the factory. [ 17 ]
Greenpeace and workers' unions continued to mount pressure on the company to take responsibility for the dumping it had committed and for damaging a pristine environment. [ 18 ] They asked the regulatory bodies to prosecute the company. [ 19 ] With these demands, public interest groups led by Greenpeace campaign head Shahul disrupted the annual general body meeting of Hindustan Unilever in 2004. [ 20 ] Consequently, the company began working with the regulatory body, the Tamil Nadu Pollution Control Board (TNPCB), to remediate the soil and to decontaminate and scrap the thermometer-making equipment at the Kodaikanal site. The company appointed the National Environmental Engineering Research Institute (NEERI) to finalise the scope for remediation, which was vehemently opposed by environmentalists. In 2006, the plant, machinery and materials used in thermometer manufacturing at the site were decontaminated and disposed of as scrap to industrial recyclers. In the following year, NEERI conducted trials at the factory for remediation of the contaminated soil on site, and recommended a remediation protocol of soil washing and thermal retorting. These were hotly contested by environmental groups under the leadership of Nityanand Jayaraman. Ultimately, the TNPCB recommended a remediation standard of up to 20 mg/kg of mercury concentration in soil, meaning that 95% of the samples analysed after the remediation process should contain less than 20 mg/kg. Pre-remediation work started in May 2009. [ 21 ]
Public interest groups contested the soil clean-up criteria and alleged that the TNPCB was helping Unilever clean up to lower standards to cut costs. The acceptable mercury level suggested by the TNPCB was at least 20 times higher than what Unilever would have been required to meet had it caused the same contamination in the United Kingdom, where it is based. The groups also called for transparency and public participation in deciding the levels of clean-up and in the clean-up process itself.
After the shutdown of the factory, health specialists from the Bangalore-based Community Health Centre conducted a survey among its former workers. [ 22 ] It found that former workers of the factory had visible signs of mercury poisoning, such as gum and skin allergies and related problems, 'which appeared to be due to exposure to mercury'.
The company claims [ 23 ] [ 24 ] that comprehensive occupational safety and health systems existed at the Kodaikanal factory prior to its closure in 2001. Internal monitoring within the factory and external audits carried out by statutory authorities during the operations of the factory showed that there were no adverse health effects to the workers on account of their employment at the factory. It says there had been a comprehensive medical examination conducted by a panel of doctors using a questionnaire developed by Mine Safety and Health Administration (MSHA) of the United States Department of Labor ; a study by the Certifying Surgeon from the Inspectorate of Factories; an assessment by P N Viswanathan of Indian Institute of Toxicology Research (IITR); a study by Tom van Teuenbroek of TNO; and a study by IITR, formerly known as Industrial Toxicology Research Centre(ITRC) as directed by a Monitoring Committee set up by the Supreme Court of India .
The company says its conclusions of its occupational health surveillance were also endorsed by the All India Institute of Medical Sciences (AIIMS) and the National Institute of Occupational Health (NIOH).
In February 2006, a group of ex-employees of the factory approached the Madras High Court seeking directions for conducting a fresh health survey and providing economic rehabilitation. A year later, the Madras High Court constituted a five-member expert committee, with representatives from ITRC, AIIMS and NIOH, to decide whether the alleged health conditions of the workers and their families were related to mercury exposure, and to recommend whether there was need for a new health study. The committee, after examining the ex-workers, questioning Hindustan Unilever Limited (HUL) officials and visiting the factory in October 2007, submitted its report suggesting that there was "no sufficient evidence to link the current clinical condition of the factory workers to the mercury exposure in the factory in the past". Accepting the report, the Madras High Court ruled out the need for any fresh health study. In the meantime, the Ministry of Labour and Employment , also a respondent in the case before the Madras High Court, conducted a detailed study through a team of experts from various fields, which found prima facie evidence to suggest that not only the workers of the factory, but even the children of the workers, had suffered because of exposure to mercury. The Ministry submitted its report to the Madras High Court in 2011. It also recommended setting up a board to examine the extent of damage or disability suffered by workers and their children because of exposure to mercury; based on the board's assessment, workers could approach the Employment Compensation Commissioner to seek compensation.
In March 2016, Hindustan Unilever entered into an out-of-court settlement with its ex-employees to provide an "undisclosed" ex-gratia payment, in addition to long-term health and well-being benefits, to 511 of its former workers of the thermometer factory who were exposed to toxic mercury vapour. Accordingly, the ex-employees withdrew the 'class action litigation' before the Madras High Court and the High Court of Justice , London. [ 25 ] | https://en.wikipedia.org/wiki/Kodaikanal_mercury_poisoning |
A kodecyte (ko•de•cyte) is a living cell that has been modified (koded) by the incorporation of one or more function-spacer-lipid constructs (FSL constructs) [ 1 ] [ 2 ] [ 3 ] to gain a new or novel biological, chemical or technological function. The cell is modified by the lipid tail of the FSL construct incorporating into the bilipid membrane of the cell.
All kodecytes retain their normal vitality and functionality while gaining the new function of the inserted FSL constructs. The combination of dispersibility in biocompatible media, spontaneous incorporation into cell membranes, and apparent low toxicity, makes FSL constructs suitable as research tools and for the development of new diagnostic and therapeutic applications.
Kode FSL constructs consist of three components: [ 3 ] [ 4 ] a functional moiety (F), a spacer (S) and a lipid (L).
Function groups on FSL constructs that can be used to create kodecytes include saccharides (including ABO blood group -related determinants, [ 4 ] [ 5 ] [ 6 ] sialic acids , hyaluronin polysaccharides ), fluorophores , [ 7 ] [ 8 ] biotin , [ 9 ] and a range of peptides . [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ]
Although kodecytes are created by modifying natural cells, they are different from natural cells. For example, FSL constructs, influenced by the composition of the lipid tail, are laterally mobile in the membrane and some FSL constructs may also cluster due to the characteristics of the functional group (F). [ 1 ] As FSL constructs are anchored in the membrane via a lipid tail (L) it is believed they do not participate in signal transduction, but may be designed to act as agonists or antagonists of the initial binding event. FSL constructs will not actively pass through the plasma membrane but may enter the cell via membrane invagination and endocytosis. [ 7 ]
The "koding" of cells is stable (subject to the rate of turnover of the membrane components). FSL constructs will remain in the membrane of inactive cells (e.g. red blood cells) for the life of the cell provided it is stored in lipid free media. [ 7 ] In the peripheral circulation FSL constructs are observed to be lost from red cell kodecytes at a rate of about 1% per hour. [ 9 ] [ 19 ] The initial "koding" dose and the minimum level required for detection determine how long the presence of "kodecytes" in the circulation can be monitored. For red blood "kodecytes" reliable monitoring of the presence of the "kodecytes" for up to 3 days post intravenous administration has been demonstrated in small mammals. [ 9 ]
The spacer (S) of an FSL construct has been selected so as to have negligible cross-reactivity with serum antibodies, so kodecytes can be used with undiluted serum. By increasing the length of the FSL spacer from 1.9 to 7.2 nm, it has been shown that sensitivity can improve two-fold in red cell agglutination-based kodecyte assays. However, increasing the size of the spacer further, from 7.2 to 11.5 nm, did not result in any further enhancement. [ 1 ]
FSL constructs, when in solution ( saline ) and in contact with cells, will spontaneously incorporate into cell membranes. [ 20 ] The methodology involves simply preparing a solution of FSL constructs in the range of 1–1000 μg / mL , with the concentration used determining the amount of antigen present on the kodecyte. The ability to control antigen levels on the outside of a kodecyte has allowed for the manufacture of quality-control sensitivity systems [ 2 ] and serologic teaching kits incorporating the entire range of serologic agglutination reactions. [ 21 ] The actual concentration will depend on the construct and the quantity of construct required in the membrane. One part of FSL solution is added to one part of cells (up to 100% suspension ) and they are incubated at a set temperature within the range of 4–37 °C (39–99 °F), depending on the temperature compatibility of the cells being modified. The higher the temperature, the faster the rate of FSL insertion into the membrane. For red blood cells, incubation for 2 hours at 37 °C achieves >95% FSL insertion, with at least 50% insertion being achieved within 20 minutes. In general, for carbohydrate-based FSL insertion into red blood cells, incubation for 4 hours at room temperature or 20 hours at 4 °C is similar to one hour at 37 °C. [ 20 ] The resultant kodecytes do not require washing; however, this option should be considered if an excess of FSL construct is used in the koding process.
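The stated equivalences lend themselves to a small lookup helper. The sketch below is illustrative only: the function name and structure are not from the Kode literature, room temperature is taken as 20 °C, and the temperature–time pairs are those quoted above for carbohydrate-based FSL insertion into red blood cells.

```python
# Equivalent koding conditions quoted above; dict and function names
# are hypothetical, introduced here for illustration.
EQUIVALENT_CONDITIONS = {
    37: "1 hour (2 hours gives >95% insertion; at least 50% within 20 minutes)",
    20: "4 hours (room temperature)",
    4: "20 hours",
}

def suggested_incubation(temperature_celsius):
    """Return the roughly equivalent incubation time for a koding
    temperature, or raise if no equivalence is stated in the text."""
    try:
        return EQUIVALENT_CONDITIONS[temperature_celsius]
    except KeyError:
        raise ValueError("no equivalence stated for this temperature")

print(suggested_incubation(37))
```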
Kodecytes can also be created in vivo by injection of constructs directly into the circulation. [ 19 ] However this process will modify all cells in contact with the constructs and usually require significantly more construct than in vitro preparation, as FSL constructs will preferentially associate with free lipids. [ 19 ] The in vivo creation of kodecytes is untargeted and FSL constructs will insert into all cells non-specifically, but may show a preference for some cell types.
Diagnostic serological analyses [ 4 ] including flow cytometry [ 5 ] and scanning electron microscopy usually cannot distinguish kodecytes from unmodified cells. However, when compared with natural cells there does appear to be a difference between IgM and IgG antibody reactivities when the functional group (F) is a monomeric peptide antigen. IgM antibodies appear to react poorly with kodecytes made with FSL peptides. [ 10 ] [ 17 ] Furthermore, FSL constructs may present a restricted antigen/epitope and may not react with a monoclonal antibody unless the FSL construct and monoclonal antibody are complementary. [ 10 ] [ 17 ]
Kodecytes can be studied using standard histological techniques. Kodecytes can be fixed after koding, provided the functional moiety (F) of the FSL construct is compatible with the fixative. However, freeze-cut or formalin-fixed freeze-cut tissues are required, because the lipid-based FSL constructs (and other glycolipids) will be leached from the kodecytes in paraffin-embedded samples during the deparaffination steps. [ 20 ]
Koded membranes are described by the construct and the concentration of FSL (in μg / mL ) used to create them. [ 20 ] For example, kodecytes created with a 100 μg/mL solution of FSL-A would be termed A100 kodecytes. If multiple FSL constructs were used then the definition is expanded accordingly, e.g. A100+B300 kodecytes are created with a solution containing 100 μg/mL solution of FSL-A and 300 μg/mL solution of FSL-B. The "+" symbol is used to separate the construct mixes, e.g. A100+B300. If FSL concentrations are constant then the μg/mL component of the terminology can be dropped, e.g. A kodecytes. Alternatively unrelated constructs such as FSL-A and FSL-biotin will create A+biotin kodecytes, etc. If different cells are used in the same study then inclusion of the cell type into the name is recommended, e.g. RBC A100 kodecytes vs WBC A100 kodecytes, or platelet A100 kodecytes, etc.
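The convention is mechanical enough to express as a short helper function. This is an illustrative sketch only: the function name and argument structure are invented, while the formatting rules (concentration suffix in μg/mL, "+" separator, optional cell-type prefix) follow the text above.

```python
def kodecyte_name(constructs, cell_type=None, show_concentration=True):
    """constructs: list of (FSL construct label, concentration in ug/mL)."""
    parts = [
        f"{label}{conc if show_concentration else ''}"
        for label, conc in constructs
    ]
    name = "+".join(parts) + " kodecytes"
    return f"{cell_type} {name}" if cell_type else name

print(kodecyte_name([("A", 100)]))                    # A100 kodecytes
print(kodecyte_name([("A", 100), ("B", 300)]))        # A100+B300 kodecytes
print(kodecyte_name([("A", 100)], cell_type="RBC"))   # RBC A100 kodecytes
print(kodecyte_name([("A", 100)], show_concentration=False))  # A kodecytes
```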
Kode Technology has been used for the in vitro modification of murine embryos , spermatozoa , zebra fish , epithelial / endometrial cells and red blood cells [ 3 ] [ 4 ] [ 5 ] [ 8 ] [ 11 ] [ 12 ] [ 22 ] to create cellular quality controls systems, [ 2 ] [ 3 ] [ 10 ] serologic kits (teaching), [ 21 ] [ 23 ] rare antigen expression, add infectious markers onto cells, [ 3 ] [ 13 ] [ 18 ] modified cell adhesion/interaction/separation/immobilisation, [ 3 ] [ 7 ] [ 9 ] and labelling. [ 5 ] [ 8 ] It has also been intravascularly infused for in vivo modification of blood cells and neutralisation of circulating antibodies [ 3 ] [ 19 ] [ 24 ] and in in vivo imaging of circulating bone marrow kodecytes in zebrafish. [ 25 ] Kode FSL constructs have also been applied to non-biological surfaces such as modified cellulose, paper, [ 22 ] silica, polymers, natural fibers, glass and metals and has been shown to be ultra-fast in labelling these surfaces. [ 3 ] [ 26 ] | https://en.wikipedia.org/wiki/Kodecyte |
Kodenshi AUK Group is a conglomerate of two companies, Kodenshi Corporation based in Kyoto, Japan and AUK Corporation based in Iksan , South Korea .
Kodenshi Corporation was established in May 1972 in Uji-shi, Kyoto, Japan as a semiconductor-producing company. The company has been active in the optical semiconductor market for some forty years through continual research, development, production, and sales of solar cells. Since its establishment, Kodenshi Corp has been developing new forms of photodiodes , light-receiving phototransistor elements, red LEDs , photo ICs, and other devices. In 1980 the company expanded its base to Iksan, Korea, and then in 1992 to Shenyang, China . [ 1 ]
The AUK Corp. was founded in 1984 in Iksan, South Korea and has since become a global electronic component company. AUK Corp. engages in the research, development, and provision of nonmemory semiconductor products primarily in Korea, Hong Kong, Japan, and Singapore. However, "as of July 1, 2010, AUK Corp was acquired by Kodenshi Korea Corp. & Knowledge*On, Inc. in a reverse merger transaction." [ 2 ] Nakajima Hirokazu is CEO as of 2011. [ 3 ]
Kodenshi America Inc. is a branch office of the international optoelectronic and semiconductor manufacturer Kodenshi AUK Group. This subsidiary is in charge of marketing, developing plans for new products, and targeting top manufacturers in its territory. It was established in 2011 in San Diego , CA, employs 3 people, and has an annual revenue of about $327,369. [ 4 ]
Territories covered by Kodenshi America, Inc. are: | https://en.wikipedia.org/wiki/Kodenshi_AUK_Group |
The term kodecyte is used to describe cells with detectable Function-Spacer-Lipid (FSL) constructs, [ 1 ] [ 2 ] [ 3 ] [ 4 ] and in concert, the term kodevirion (pronounced co-da-virion), is used to describe virions with detectable FSL constructs. [ 5 ] [ 6 ]
The method for labeling virions with FSL constructs is simple and non-covalent, involving only incubation of the virion with the FSL construct in saline for a few hours; nothing further is required. [ 5 ] [ 6 ] The FSL construct will spontaneously, stably and quantitatively incorporate into the virion membrane . Virions have been labelled with fluorescent (FSL-FLRO4) and radioactive iodine (FSL-125I) constructs. FSL-FLRO4 was shown to label virions in a dose-dependent manner and could be visualized by flow cytometry either directly, or indirectly if the virion had bound to the cell or fused with the cell membrane. [ 6 ] FSLs do not appear to significantly affect the virions' infectivity or their ability to bind target cells, probably because they integrate into the membrane without exposing the virion to chemical agents or covalent modification [ citation needed ] . | https://en.wikipedia.org/wiki/Kodevirion |
In operator algebra , the Koecher–Vinberg theorem is a reconstruction theorem for real Jordan algebras . It was proved independently by Max Koecher in 1957 [ 1 ] and Ernest Vinberg in 1961. [ 2 ] It provides a one-to-one correspondence between formally real Jordan algebras and so-called domains of positivity. Thus it links operator algebraic and convex order theoretic views on state spaces of physical systems.
A convex cone C is called regular if a = 0 whenever both a and −a are in the closure of C .
A convex cone C in a vector space A with an inner product has a dual cone C* = { a ∈ A : ⟨a, b⟩ > 0 for all b ∈ C }. The cone is called self-dual when C = C*. It is called homogeneous when, for any two points a, b ∈ C , there is a real linear transformation T : A → A that restricts to a bijection C → C and satisfies T(a) = b .
The Koecher–Vinberg theorem now states that these properties precisely characterize the positive cones of Jordan algebras.
Theorem : There is a one-to-one correspondence between formally real Jordan algebras and convex cones that are open, regular, homogeneous, and self-dual.
Convex cones satisfying these four properties are called domains of positivity or symmetric cones . The domain of positivity associated with a real Jordan algebra A is the interior of the 'positive' cone A_+ = { a² : a ∈ A }.
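As a concrete illustration (chosen here; it is not discussed in the sources above), the 2×2 real symmetric matrices with the Jordan product a∘b = (ab + ba)/2 form a formally real Jordan algebra whose positive cone {a²} is the cone of positive semidefinite matrices. The following numerical sketch, an illustration rather than a proof, checks two facets of the correspondence for the trace inner product ⟨a, b⟩ = tr(ab).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_symmetric(n=2):
    m = rng.standard_normal((n, n))
    return (m + m.T) / 2

for _ in range(1000):
    a, b = random_symmetric(), random_symmetric()
    # Squares of symmetric matrices are positive semidefinite ...
    assert np.all(np.linalg.eigvalsh(a @ a) >= -1e-12)
    # ... and the trace pairing of two squares is non-negative,
    # consistent with self-duality of the positive cone.
    assert np.trace((a @ a) @ (b @ b)) >= -1e-12

print("1000 random checks passed")
```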
For a proof, see Koecher (1999) [ 3 ] or Faraut & Koranyi (1994) . [ 4 ] | https://en.wikipedia.org/wiki/Koecher–Vinberg_theorem |
Koedoe , subtitled African Protected Area Conservation and Science , is a peer-reviewed open access scientific journal covering biology , ecology , and biodiversity conservation in Africa. It was established in 1958. Koedoe is Afrikaans for Kudu .
| https://en.wikipedia.org/wiki/Koedoe |
The Koelsch radical (also known as Koelsch's radical and 1,3-bisdiphenylene-2-phenylallyl or α,γ-bisdiphenylene-β-phenylallyl , abbreviated BDPA ) [ 1 ] [ 2 ] is a chemical compound that is an unusually stable carbon -centered radical , due to its resonance structures.
BDPA is an unusually stable radical compound due to the extent to which its electrons are delocalized through resonance structures. The unpaired electron is located predominantly at the 1 and 3 positions. [ 3 ] Steric effects from the biphenyl substituents also contribute to the compound's stability. [ 4 ]
BDPA and closely related compounds are used as molecular standards in electron paramagnetic resonance (EPR) and electron nuclear double resonance (ENDOR) experiments, [ 5 ] [ 6 ] and as a polarizing agent in dynamic nuclear polarization (DNP) nuclear magnetic resonance (NMR) experiments. [ 7 ] [ 8 ] Because BDPA itself is hydrophobic , derivatives have been developed that are more soluble in aqueous solution . [ 9 ]
The compound was first synthesized by C. Frederick Koelsch while he was a postdoctoral fellow at Harvard University in the 1930s. He attempted to publish a paper describing the compound, but the paper was rejected on the grounds that the described properties, particularly stability, were unlikely to be those of a radical. Subsequent experimental evidence and quantum mechanics calculations suggested his interpretation of the original experiment was correct, resulting in the publication of the paper in 1957, nearly 25 years after the original experiments. [ 1 ] [ 10 ] [ 11 ] Although the original report described stability on the order of years, modern experiments suggest that this family of compounds, while unusually stable for radicals, shows measurable degradation in months after preparation. [ 7 ] | https://en.wikipedia.org/wiki/Koelsch_radical |
Koenig's manometric flame apparatus was a laboratory instrument invented in 1862 by the German physicist Rudolph Koenig , and used to visualize sound waves . It was the nearest equivalent of the modern oscilloscope in the late nineteenth and early twentieth centuries.
The manometric flame apparatus consisted of a chamber which acted in the same way as a modern microphone . Sound from the source to be measured was concentrated by means of a horn or tube into one half of the capsule chamber. The chamber was divided in two by an elastic diaphragm , usually rubber. The sound caused the diaphragm to vibrate which modulated a flow of flammable illumination gas passing through the other half of the chamber. The illumination gas was passed to a Bunsen burner , the flame of which would then increase or decrease in size at the same frequency as the sound source. [ 1 ] [ 2 ]
The change in flame size was too fast to be easily seen with the naked eye, and a stroboscope — usually in the form of a rotating many sided mirror — was used to view the flame. The frequency of the sound could then be calculated from the apparent distance between the flame images in the mirror and the known speed of its rotation. [ 1 ] [ 2 ]
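The frequency calculation admits a simple idealization. Suppose the mirror has n sides and turns at R revolutions per second; each facet then redraws the flame image n·R times per second, so if k flame pulsations are counted within one facet's sweep, the sound frequency is roughly f ≈ k·n·R. The counting model and the numbers below are assumptions for illustration only, not a description of Koenig's own procedure.

```python
def sound_frequency(images_per_sweep, mirror_sides, revs_per_second):
    # Sweep time per facet is 1 / (mirror_sides * revs_per_second) seconds,
    # so the pulsation count per sweep scales up by that factor.
    return images_per_sweep * mirror_sides * revs_per_second

# e.g. 11 flame images per facet of a 4-sided mirror at 10 rev/s:
print(sound_frequency(11, 4, 10), "Hz")  # 440 Hz
```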
Alexander Graham Bell used this type of equipment to study the performance of his microphones and demonstrated it in his display at the 1876 Philadelphia Centennial Exhibition . He replaced the rubber diaphragm with an iron disc which was driven by an electromagnet with current fed from a microphone. This apparatus was capable of giving quantitative measures of the performance of his microphones. [ 1 ] [ 3 ]
A type of Fourier analyzer can be constructed by connecting a number of manometric flame capsules each to a Helmholtz resonator tuned to either the fundamental frequency of the sound to be analyzed, or one of its harmonics . The flames produced from each capsule are then an indication of the strength of each of the Fourier components of the sound. [ 4 ]
1. The Koenig manometric flame apparatus. Jim & Rhoda Morris, SciTechAntiques. Accessed March 2008.
2. Manometric Flame Apparatus. Kenyon College, Gambier, Ohio. Accessed March 2008.
3. Fourier Analysis. Kenyon College, Gambier, Ohio. Accessed March 2008.
4. Flame manometer. Case Western Reserve University Physics Department. Accessed March 2008. | https://en.wikipedia.org/wiki/Koenig's_manometric_flame_apparatus |
In mathematics , the Koenigs function is a function arising in complex analysis and dynamical systems . Introduced in 1884 by the French mathematician Gabriel Koenigs , it gives a canonical representation as dilations of a univalent holomorphic mapping , or a semigroup of mappings, of the unit disk in the complex numbers into itself.
Let D be the unit disk in the complex numbers. Let f be a holomorphic function mapping D into itself, fixing the point 0, with f not identically 0 and f not an automorphism of D , i.e. a Möbius transformation defined by a matrix in SU(1,1).
By the Denjoy–Wolff theorem , f leaves invariant each disk |z| < r and the iterates of f converge uniformly on compacta to 0: in fact for 0 < r < 1,

|f(z)| ≤ M(r) |z|

for |z| ≤ r with M(r) < 1. Moreover f′(0) = λ with 0 < |λ| < 1.
Koenigs (1884) proved that there is a unique holomorphic function h defined on D , called the Koenigs function , such that h(0) = 0, h′(0) = 1 and Schröder's equation is satisfied:

h(f(z)) = λ h(z).
The function h is the uniform limit on compacta of the normalized iterates, g_n(z) = λ^{−n} f^n(z).
Moreover, if f is univalent, so is h . [ 1 ] [ 2 ]
As a consequence, when f (and hence h ) are univalent, D can be identified with the open domain U = h ( D ) . Under this conformal identification, the mapping f becomes multiplication by λ , a dilation on U .
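A standard worked example (chosen here for illustration; it is not discussed in the sources above) is f(z) = z/(2 − z), which maps D into itself, fixes 0, and has λ = f′(0) = 1/2; its Koenigs function is known in closed form, h(z) = z/(1 − z). The sketch below checks the convergence of the normalized iterates and Schröder's equation numerically.

```python
def f(z):
    return z / (2 - z)          # univalent self-map of the disk, f(0) = 0

def g(z, n, lam=0.5):
    # normalized iterate g_n(z) = lam**-n * f^n(z)
    for _ in range(n):
        z = f(z)
    return z / lam**n

z = 0.3 + 0.4j
h = z / (1 - z)                 # closed-form Koenigs function
for n in (1, 5, 10, 20):
    print(n, abs(g(z, n) - h))  # error shrinks toward 0

# Schroeder's equation h(f(z)) = lam * h(z):
hf = f(z) / (1 - f(z))
print(abs(hf - 0.5 * h))        # ~ 1e-16
```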
Let f_t(z) be a semigroup of holomorphic univalent mappings of D into itself fixing 0, defined for t ∈ [0, ∞), such that

f_s ∘ f_t = f_{s+t} for all s, t ≥ 0, with f_0 the identity mapping.
Each f_s with s > 0 has the same Koenigs function, cf. iterated function . In fact, if h is the Koenigs function of f = f_1 , then h(f_s(z)) satisfies Schröder's equation and hence is proportional to h .
Taking derivatives at z = 0 gives

h(f_s(z)) = λ(s) h(z), where λ(s) = f_s′(0).

Hence h is the Koenigs function of f_s .
On the domain U = h(D) , the maps f_s become multiplication by λ(s) = f_s′(0), a continuous semigroup.
So λ(s) = e^{μs}, where μ is a uniquely determined solution of e^μ = λ with Re μ < 0. It follows that the semigroup is differentiable at 0. Let

v(z) = ∂_t f_t(z) |_{t=0},

a holomorphic function on D with v(0) = 0 and v′(0) = μ.
Then

h(f_t(z)) = e^{μt} h(z),

so that

h′(f_t(z)) ∂_t f_t(z) = μ e^{μt} h(z) = μ h(f_t(z)),

and hence

∂_t f_t(z) = v(f_t(z)), with v(z) = μ h(z)/h′(z),

the flow equation for a vector field.
Restricting to the case with 0 < λ < 1, the domain h(D) must be starlike , so that

Re ( z h′(z) / h(z) ) ≥ 0.
Since the same result holds for the reciprocal,

Re ( h(z) / (z h′(z)) ) ≥ 0,

so that v(z) satisfies the conditions of Berkson & Porta (1978) : v(z) = −z p(z) with Re p(z) ≥ 0 on D .
Conversely, reversing the above steps, any holomorphic vector field v(z) satisfying these conditions is associated to a semigroup f_t , with

h(f_t(z)) = e^{μt} h(z). | https://en.wikipedia.org/wiki/Koenigs_function |
The Koenigsberger ratio is the proportion of remanent magnetization relative to induced magnetization in natural rocks. [ 1 ] It was first described by J.G. Koenigsberger [ de ] . [ 2 ] It is a dimensionless parameter often used in geophysical exploration to describe the magnetic characteristics of a geological body for help in interpreting magnetic anomaly patterns.
Q = \frac{M_{\text{rem}}}{M_{\text{ind}}} = \frac{M_{\text{rem}}}{\chi H} [ 1 ]
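A minimal worked example of the definition, with invented sample values (SI units: magnetizations and H in A/m, susceptibility dimensionless):

```python
def koenigsberger_ratio(m_rem, chi, h):
    """Q = M_rem / M_ind, with induced magnetization M_ind = chi * H."""
    return m_rem / (chi * h)

# e.g. remanence 2.0 A/m, susceptibility 0.05, geomagnetic field ~40 A/m:
q = koenigsberger_ratio(m_rem=2.0, chi=0.05, h=40.0)
print(f"Q = {q:.2f}")  # Q = 1.00: remanent and induced parts are equal
```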
The total magnetization of a rock is the sum of its natural remanent magnetization and the magnetization induced by the ambient geomagnetic field . Thus, a Koenigsberger ratio, Q , greater than 1 indicates that the remanence properties contribute the majority of the total magnetization of the rock. [ 3 ] | https://en.wikipedia.org/wiki/Koenigsberger_ratio |
The Koenigs–Knorr reaction in organic chemistry is the substitution reaction of a glycosyl halide with an alcohol to give a glycoside . It is one of the oldest glycosylation reactions. It is named after Wilhelm Koenigs (1851–1906), a student of von Baeyer and fellow student with Hermann Emil Fischer , and Edward Knorr, a student of Koenigs.
In its original form, Koenigs and Knorr treated acetobromoglucose with alcohols in the presence of silver carbonate . [ 1 ] Shortly afterwards Fischer and Armstrong reported very similar findings. [ 2 ]
In the above example, the stereochemical outcome is determined by the presence of the neighboring group at C2 that lends anchimeric assistance , resulting in the formation of a 1,2-trans stereochemical arrangement. Esters (e.g. acetyl , benzoyl , pivalyl ) generally provide good anchimeric assistance, whereas ethers (e.g. benzyl , methyl etc.) do not, leading to mixtures of stereoisomers .
In the first step of the mechanism , the glycosyl bromide reacts with silver carbonate; elimination of silver bromide, with formation of the carbonate anion, gives the oxocarbenium ion. From this structure a dioxolanium ring is formed, which is attacked by methanol via an SN 2 mechanism at the anomeric carbon atom. This attack leads to the inversion. After deprotonation of the intermediate oxonium ion, the product glycoside is formed. [ 3 ]
The reaction can also be applied to carbohydrates with other protecting groups. In oligosaccharide synthesis , other carbohydrates are used in place of methanol; these have been modified with protecting groups in such a way that only one hydroxyl group is accessible.
The method was later transferred by Emil Fischer and Burckhardt Helferich to chloro-substituted purines , thus producing synthetic nucleosides for the first time. It was later improved and modified by numerous chemists.
Generally, the Koenigs–Knorr reaction refers to the use of glycosyl chlorides, bromides and more recently iodides as glycosyl donors. The Koenigs–Knorr reaction can be performed with alternative promoters such as various heavy metal salts including mercuric bromide / mercuric oxide , mercuric cyanide and silver triflate . [ 4 ] [ 5 ] When mercury salts are used, the reaction is normally called the Helferich method .
Other glycosidation methods are Fischer glycosidation , use of glycosyl acetates , thioglycosides , glycosyl trichloroacetimidates , glycosyl fluorides or n-pentenyl glycosides as glycosyl donors , or intramolecular aglycon delivery . | https://en.wikipedia.org/wiki/Koenigs–Knorr_reaction |
A Kofler bench (also Kofler heating bar, Kofler hot bar, or Kofler hot bench; in German, Kofler-Heizbank [ 1 ] ) is a metal strip with a temperature gradient (ranging from room temperature to 300 °C). Any substance can be placed on a section of the strip, revealing its thermal behaviour at the temperature at that point. [ 2 ] The gradient is engineered to be approximately linear. [ 3 ]
This melting-point apparatus for use with a microscope was developed by the Austrian pharmacognosist Ludwig Kofler (30 November 1891, Dornbirn – 23 August 1951, Innsbruck ) and his wife, the mineralogist Adelheid Kofler . The Koflers and Mayrhofer published "Mikroskopische Methoden in der Mikrochemie" in 1936 [Kofler, L., A. Kofler and Mayrhofer, A. (1936)], and Kofler and Kofler published "Thermomikromethoden" in 1954 [Kofler L., and A. Kofler (1954)]. [ 4 ] [ 5 ] The integration of microscope and Kofler bench is known as the Kofler hot-stage microscope. [ 6 ]
Kofler, his wife Adelheid, and their colleague, Maria Kuhnert-Brandstätter , investigated numerous organic molecules, and published some 250 papers describing their work. [ 7 ]
Thermomicroscopy, introduced by Ludwig and Adelheid Kofler and developed further by Maria Kuhnert-Brandstätter (1919–2011) and Walter C. McCrone , was used for studying the phases of solid drug substances. | https://en.wikipedia.org/wiki/Kofler_bench |
In condensed matter physics , a Kohn anomaly (also called the Kohn effect [ 1 ] ) is an anomaly in the dispersion relation of a phonon branch in a metal.
For a specific wavevector , the frequency (and thus the energy ) of the associated phonon is considerably lowered, and there is a discontinuity in its derivative . In extreme cases (which can happen in low-dimensional materials), the energy of this phonon is zero, meaning that a static distortion of the lattice appears. This is one explanation for charge density waves in solids. The wavevectors at which a Kohn anomaly is possible are the nesting vectors of the Fermi surface , that is, vectors that connect many points of the Fermi surface (for a one-dimensional chain of atoms or a spherical Fermi surface this vector would be 2k_F). The electron–phonon interaction causes a rigid shift of the Fermi sphere and a failure of the Born–Oppenheimer approximation , since the electrons no longer follow the ionic motion adiabatically.
In the phonon spectrum of a metal, a Kohn anomaly is a discontinuity in the derivative of the dispersion relation that is produced by the abrupt change in the screening of lattice vibrations by conduction electrons. It can occur at any point in the Brillouin zone , because 2k_F is unrelated to crystal symmetry. In one dimension it is equivalent to a Peierls instability , and it is similar to the Jahn–Teller effect seen in molecular systems.
Kohn anomalies arise together with Friedel oscillations when one considers the Lindhard theory instead of the Thomas–Fermi approximation in order to find an expression for the dielectric function of a homogeneous electron gas. The expression for the real part Re(ε(q, ω)) of the reciprocal-space dielectric function obtained following the Lindhard theory includes a logarithmic term that is singular at q = 2k_F, where k_F is the Fermi wavevector . Although this singularity is quite small in reciprocal space, if one takes the Fourier transform and passes into real space, the Gibbs phenomenon causes a strong oscillation of Re(ε(r, ω)) in the proximity of the singularity mentioned above. In the context of phonon dispersion relations , these oscillations appear as a vertical tangent in the plot of ω²(q), called the Kohn anomalies.
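The singularity can be made concrete with the standard static Lindhard function of the three-dimensional electron gas (the textbook form, quoted here as an assumption rather than taken from this article). With x = q/2k_F, the response is proportional to F(x) = 1/2 + (1 − x²)/(4x)·ln|(1 + x)/(1 − x)|, which is continuous at x = 1 but has a logarithmically divergent derivative there.

```python
import numpy as np

def lindhard(x):
    # Static Lindhard function of the 3D electron gas, x = q / (2 k_F)
    return 0.5 + (1 - x**2) / (4 * x) * np.log(abs((1 + x) / (1 - x)))

for eps in (1e-2, 1e-4, 1e-6):
    slope = (lindhard(1 + eps) - lindhard(1 - eps)) / (2 * eps)
    print(f"x = 1 +/- {eps:.0e}: numerical slope = {slope:.2f}")
# The slope grows without bound as eps -> 0: this is the kink behind
# the Kohn anomaly in the phonon dispersion at q = 2 k_F.
```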
Many different systems exhibit Kohn anomalies, including graphene , [ 2 ] bulk metals, [ 3 ] and many low-dimensional systems (the reason involves the condition q = 2k_F, which depends on the topology of the Fermi surface ). However, it is important to emphasize that only materials showing metallic behaviour can exhibit a Kohn anomaly, since the model emerges from a homogeneous electron gas approximation. [ 4 ] [ 5 ]
The anomaly is named for Walter Kohn , who first proposed it in 1959. [ 6 ] | https://en.wikipedia.org/wiki/Kohn_anomaly |
Kohn–Luttinger superconductivity is a theoretical mechanism for unconventional superconductivity proposed by Walter Kohn and Joaquin Mazdak Luttinger [ 1 ] based on Friedel oscillations . In contrast to BCS theory , in which Cooper pairs are formed due to the electron–phonon interaction, the Kohn–Luttinger mechanism is based on the fact that the screened Coulomb interaction oscillates as cos(2k_F r + φ)/r³ and can create a Cooper instability for non-zero angular momentum ℓ.

Since the Kohn–Luttinger mechanism does not require any additional interactions beyond Coulomb interactions, it can lead to superconductivity in any electronic system.
However, the estimated critical temperature T_c for a Kohn–Luttinger superconductor is exponential in −ℓ⁴ and thus is extremely small. For example, for metals the critical temperature is given by [ 1 ]

\frac{k_{\rm B} T_{\rm c}}{E_{\rm F}} = \exp(-(2\ell)^{4}),

where k_B is the Boltzmann constant and E_F is the Fermi energy . However, Kohn and Luttinger conjectured that nonspherical Fermi surfaces and variation of parameters may enhance the effect. Indeed, it is proposed that the Kohn–Luttinger mechanism is responsible for superconductivity in rhombohedral graphene , [ 2 ] [ 3 ] which has an annular Fermi surface.
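Evaluating this estimate for the first few angular-momentum channels makes the point numerically (a direct plug-in of the formula above, nothing more):

```python
import math

for l in (1, 2, 3):
    ratio = math.exp(-(2 * l) ** 4)  # k_B * T_c / E_F
    print(f"l = {l}: exp(-{(2 * l) ** 4}) ~= {ratio:.3e}")
# l = 1: exp(-16)   ~ 1.1e-07
# l = 2: exp(-256)  ~ 6.6e-112
# l = 3: exp(-1296) underflows to 0.0 in double precision
```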
| https://en.wikipedia.org/wiki/Kohn–Luttinger_superconductivity |
The Kohn-Sham equations are a set of mathematical equations used in quantum mechanics to simplify the complex problem of understanding how electrons behave in atoms and molecules. They introduce fictitious non-interacting electrons and use them to find the most stable arrangement of electrons, which helps scientists understand and predict the properties of matter at the atomic and molecular scale.
In physics and quantum chemistry , specifically density functional theory , the Kohn–Sham equation is the non-interacting Schrödinger equation (more precisely, a Schrödinger-like equation) of a fictitious system (the " Kohn–Sham system ") of non-interacting particles (typically electrons) that generate the same density as any given system of interacting particles. [ 1 ] [ 2 ]
In the Kohn–Sham theory the introduction of the noninteracting kinetic energy functional T s into the energy expression leads, upon functional differentiation, to a collection of one-particle equations whose solutions are the Kohn–Sham orbitals.
The Kohn–Sham equation is defined by a local effective (fictitious) external potential in which the non-interacting particles move, typically denoted as v_s(r) or v_eff(r), called the Kohn–Sham potential . If the particles in the Kohn–Sham system are non-interacting fermions (non-fermionic density functional theory has been researched [ 3 ] [ 4 ] ), the Kohn–Sham wavefunction is a single Slater determinant constructed from a set of orbitals that are the lowest-energy solutions to

\left(-\frac{\hbar^{2}}{2m}\nabla^{2} + v_{\text{eff}}(\mathbf{r})\right)\varphi_{i}(\mathbf{r}) = \varepsilon_{i}\varphi_{i}(\mathbf{r}).
This eigenvalue equation is the typical representation of the Kohn–Sham equations . Here ε_i is the orbital energy of the corresponding Kohn–Sham orbital φ_i, and the density for an N -particle system is

\rho(\mathbf{r}) = \sum_{i}^{N} |\varphi_{i}(\mathbf{r})|^{2}.
The Kohn–Sham equations are named after Walter Kohn and Lu Jeu Sham , who introduced the concept at the University of California, San Diego , in 1965.
Kohn received a Nobel Prize in Chemistry in 1998 for the Kohn–Sham equations and other work related to density functional theory (DFT). [ 5 ]
In Kohn–Sham density functional theory, the total energy of a system is expressed as a functional of the charge density as

E[\rho] = T_{s}[\rho] + \int d\mathbf{r}\, v_{\text{ext}}(\mathbf{r})\rho(\mathbf{r}) + E_{\text{H}}[\rho] + E_{\text{xc}}[\rho],

where T_s is the Kohn–Sham kinetic energy , which is expressed in terms of the Kohn–Sham orbitals as

T_{s}[\rho] = \sum_{i=1}^{N} \int d\mathbf{r}\, \varphi_{i}^{*}(\mathbf{r}) \left(-\frac{\hbar^{2}}{2m}\nabla^{2}\right) \varphi_{i}(\mathbf{r}),

v_ext is the external potential acting on the interacting system (at minimum, for a molecular system, the electron–nuclei interaction), and E_H is the Hartree (or Coulomb) energy

E_{\text{H}}[\rho] = \frac{e^{2}}{2} \int d\mathbf{r} \int d\mathbf{r}'\, \frac{\rho(\mathbf{r})\rho(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|},
and E_xc is the exchange–correlation energy. The Kohn–Sham equations are found by varying the total energy expression with respect to a set of Kohn–Sham orbitals, subject to the constraint that they are orthogonal; [ 6 ] this yields a time-independent Schrödinger equation with a scalar potential equal to the Kohn–Sham potential

v_{\text{eff}}(\mathbf{r}) = v_{\text{ext}}(\mathbf{r}) + e^{2} \int \frac{\rho(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, d\mathbf{r}' + \frac{\delta E_{\text{xc}}[\rho]}{\delta \rho(\mathbf{r})},
where the last term
{\displaystyle v_{\text{xc}}(\mathbf {r} )\equiv {\frac {\delta E_{\text{xc}}[\rho ]}{\delta \rho (\mathbf {r} )}},}
is the exchange–correlation potential. This term, and the corresponding energy expression, are the only unknowns in the Kohn–Sham approach to density functional theory. An approximation that does not vary the orbitals is Harris functional theory.
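Because E_xc (and hence v_xc) is the only unknown, practical calculations approximate it and solve the equations self-consistently: the occupied orbitals determine ρ, which determines v_eff, which in turn determines new orbitals. The following Python fragment is a schematic sketch of such a loop; the soft-Coulomb Hartree kernel and the LDA-like exchange expression are placeholder choices for a 1D toy model, not prescriptions from the text.

```python
# Schematic self-consistent-field (SCF) loop for a 1D toy Kohn-Sham problem,
# in atomic units; placeholder Hartree kernel and exchange term.
import numpy as np

n_grid, box, N = 300, 10.0, 2
x = np.linspace(-box / 2, box / 2, n_grid)
dx = x[1] - x[0]
kin_diag = np.full(n_grid, 1.0 / dx**2)
kin_off = np.full(n_grid - 1, -0.5 / dx**2)
v_ext = 0.5 * x**2

rho = np.full(n_grid, N / box)                    # crude initial density
for it in range(200):
    # Hartree potential with a softened 1D Coulomb kernel (placeholder)
    v_H = dx * (rho / np.sqrt((x[:, None] - x[None, :]) ** 2 + 1.0)).sum(axis=1)
    v_xc = -((3.0 * rho / np.pi) ** (1.0 / 3.0))  # LDA-exchange-like placeholder
    H = np.diag(kin_diag + v_ext + v_H + v_xc) + np.diag(kin_off, 1) + np.diag(kin_off, -1)
    eps, phi = np.linalg.eigh(H)
    rho_new = (phi[:, :N] ** 2).sum(axis=1) / dx  # normalized so it integrates to N
    if np.abs(rho_new - rho).max() < 1e-8:
        break
    rho = 0.5 * rho + 0.5 * rho_new               # linear mixing for stability

print("SCF iterations:", it + 1, " occupied energies:", eps[:N])
```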
The Kohn–Sham orbital energies ε i , in general, have little physical meaning (see Koopmans' theorem ). The sum of the orbital energies is related to the total energy as
{\displaystyle E=\sum _{i}^{N}\varepsilon _{i}-E_{\text{H}}[\rho ]+E_{\text{xc}}[\rho ]-\int {\frac {\delta E_{\text{xc}}[\rho ]}{\delta \rho (\mathbf {r} )}}\rho (\mathbf {r} )\,d\mathbf {r} .}
Because the orbital energies are non-unique in the more general restricted open-shell case, this equation only holds true for specific choices of orbital energies (see Koopmans' theorem ). | https://en.wikipedia.org/wiki/Kohn–Sham_equations |
The Koide formula is an unexplained empirical equation discovered by Yoshio Koide in 1981. In its original form it was not fully empirical but part of a proposed model for the masses of quarks and leptons, as well as the CKM angles. What survives of that model is an observation about the masses of the three charged leptons ; later authors have extended the relation to neutrinos , quarks , and other families of particles . [ 1 ] : 64–66
The Koide formula is {\displaystyle Q={\frac {m_{\text{e}}+m_{\mu }+m_{\tau }}{\left({\sqrt {m_{\text{e}}}}+{\sqrt {m_{\mu }}}+{\sqrt {m_{\tau }}}\right)^{2}}},}
where the masses of the electron , muon , and tau are measured respectively as m_e = 0.510 998 950 00 (15) MeV/c² , m_μ = 105.658 3755 (23) MeV/c² , and m_τ = 1 776.93(09) MeV/c² ; the digits in parentheses are the uncertainties in the last digits. [ 2 ] This gives Q = 0.666 664 46 (508) . [ a ]
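The quoted value of Q can be checked directly from the central mass values above (a quick sketch; uncertainties are ignored):

```python
# Koide ratio from the central charged-lepton masses (MeV/c^2)
from math import sqrt

m_e, m_mu, m_tau = 0.51099895000, 105.6583755, 1776.93
Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2
print(Q)   # ~0.6666645, to be compared with 2/3 = 0.6666667
```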
No matter what masses are chosen to stand in place of the electron, muon, and tau, the ratio Q is constrained to 1/3 ≤ Q < 1 . The upper bound follows from the fact that the square roots are necessarily positive, and the lower bound follows from the Cauchy–Bunyakovsky–Schwarz inequality . The experimentally determined value, 2/3 , lies at the center of the mathematically allowed range. Note, however, that if the requirement of positive roots is dropped, it is possible to fit an extra tuple in the quark sector (the one with strange, charm and bottom).
The mystery lies in the physical value. The result is peculiar not only in that three ostensibly arbitrary numbers give a simple fraction, but also in that, for the electron, muon, and tau, Q is exactly halfway between the two extremes of all possible combinations: 1/3 (if the three masses were equal) and 1 (if one mass dwarfs the other two). Q is a dimensionless quantity , so the relation holds regardless of which unit is used to express the magnitudes of the masses.
Robert Foot also interpreted the Koide formula as a geometrical relation, in which the value 1/(3Q) is the squared cosine of the angle between the vector ( √m_e , √m_μ , √m_τ ) and the vector (1, 1, 1) (see Dot product ). [ 3 ] That angle is almost exactly 45 degrees: θ = 45.000° ± 0.001°. [ 3 ]
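The angle itself follows from the same central mass values (an illustrative check):

```python
# Angle between (sqrt(m_e), sqrt(m_mu), sqrt(m_tau)) and (1, 1, 1)
from math import sqrt, acos, degrees

v = [sqrt(m) for m in (0.51099895000, 105.6583755, 1776.93)]
cos_theta = sum(v) / (sqrt(3) * sqrt(sum(c * c for c in v)))
print(degrees(acos(cos_theta)))   # ~45.000 degrees
```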
When the formula is assumed to hold exactly ( Q = 2/3 ), it may be used to predict the tau mass from the (more precisely known) electron and muon masses; that prediction is m_τ = 1776.969 MeV/c² . [ 4 ]
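The prediction can be reproduced by substituting x = √m_τ, which turns the exact relation Q = 2/3 into a quadratic equation in x whose larger root is the physical solution (a sketch; the helper names s and c are ad hoc):

```python
# Tau mass predicted from m_e and m_mu assuming Q = 2/3 exactly:
# x^2 - 4 s x + (3 c - 2 s^2) = 0, with s = sqrt(m_e) + sqrt(m_mu), c = m_e + m_mu
from math import sqrt

m_e, m_mu = 0.51099895000, 105.6583755
s = sqrt(m_e) + sqrt(m_mu)
c = m_e + m_mu
x = 2 * s + sqrt(6 * s * s - 3 * c)   # larger root of the quadratic
print(x * x)                          # ~1776.97 MeV/c^2
```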
While the original formula arose in the context of preon models, other ways have been found to derive it (both by Sumino and by Koide – see references below). As a whole, however, understanding remains incomplete. Similar matches have been found for triplets of quarks depending on running masses. [ 5 ] [ 6 ] [ 7 ] With alternating quarks, chaining Koide equations for consecutive triplets, it is possible to reach a result of 173.263 947 (6) GeV/ c 2 for the mass of the top quark . [ 8 ]
The Koide relation exhibits permutation symmetry among the three charged lepton masses m_e , m_μ , and m_τ . [ 9 ] This means that the value of Q remains unchanged under any interchange of these masses. Since the relation depends on the sum of the masses and the sum of their square roots, any permutation of m_e , m_μ , and m_τ leaves Q invariant: {\displaystyle Q={\frac {m_{\text{e}}+m_{\mu }+m_{\tau }}{\left({\sqrt {m_{\text{e}}}}+{\sqrt {m_{\mu }}}+{\sqrt {m_{\tau }}}\right)^{2}}}={\frac {m_{\sigma ({\text{e}})}+m_{\sigma (\mu )}+m_{\sigma (\tau )}}{\left({\sqrt {m_{\sigma ({\text{e}})}}}+{\sqrt {m_{\sigma (\mu )}}}+{\sqrt {m_{\sigma (\tau )}}}\right)^{2}}}} for any permutation σ of { e, μ, τ }.
The Koide relation is scale invariant; that is, multiplying each mass by a common constant λ does not affect the value of Q . Let m′_i = λm_i for i = e, μ, τ. Then: {\displaystyle {\begin{aligned}Q'&={\frac {m'_{\text{e}}+m'_{\mu }+m'_{\tau }}{\left({\sqrt {m'_{\text{e}}}}+{\sqrt {m'_{\mu }}}+{\sqrt {m'_{\tau }}}\right)^{2}}}\\&={\frac {\lambda m_{\text{e}}+\lambda m_{\mu }+\lambda m_{\tau }}{\left({\sqrt {\lambda m_{\text{e}}}}+{\sqrt {\lambda m_{\mu }}}+{\sqrt {\lambda m_{\tau }}}\right)^{2}}}\\&={\frac {\lambda (m_{\text{e}}+m_{\mu }+m_{\tau })}{\left({\sqrt {\lambda }}({\sqrt {m_{\text{e}}}}+{\sqrt {m_{\mu }}}+{\sqrt {m_{\tau }}})\right)^{2}}}\\&={\frac {\lambda (m_{\text{e}}+m_{\mu }+m_{\tau })}{\lambda \left({\sqrt {m_{\text{e}}}}+{\sqrt {m_{\mu }}}+{\sqrt {m_{\tau }}}\right)^{2}}}\\&={\frac {m_{\text{e}}+m_{\mu }+m_{\tau }}{\left({\sqrt {m_{\text{e}}}}+{\sqrt {m_{\mu }}}+{\sqrt {m_{\tau }}}\right)^{2}}}\\&=Q\end{aligned}}}
Therefore, Q {\displaystyle Q} remains unchanged under scaling of the masses by a common factor.
Carl Brannen has proposed [ 4 ] that the lepton masses are given by the squares of the eigenvalues of a circulant matrix with real eigenvalues, corresponding to the relation {\displaystyle {\sqrt {m_{n}}}=\mu \left(1+2\eta \cos \left({\frac {2\pi n}{3}}+\delta \right)\right),\qquad n=1,2,3,}
which can be fit to experimental data with η 2 = 0.500003(23) (corresponding to the Koide relation) and phase δ = 0.2222220(19), which is almost exactly 2 / 9 . However, the experimental data are in conflict with simultaneous equality of η 2 = 1 / 2 and δ = 2 / 9 . [ 4 ]
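A short sketch recovers the phase from the measured masses, under the simplifying assumptions that η² = 1/2 holds exactly and that the tau corresponds to the cos δ term of the parametrization:

```python
# Brannen-type phase delta extracted from the central lepton masses
from math import sqrt, acos

roots = [sqrt(m) for m in (0.51099895000, 105.6583755, 1776.93)]  # MeV/c^2
mu = sum(roots) / 3
delta = acos((roots[2] / mu - 1) / sqrt(2))   # tau term: 1 + sqrt(2) cos(delta)
print(delta, 2 / 9)                           # ~0.222 vs 2/9 = 0.2222...
```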
This kind of relation has also been proposed for the quark families, with phases equal to low-energy values 2 / 27 = 2 / 9 × 1 / 3 and 4 / 27 = 2 / 9 × 2 / 3 , hinting at a relation with the charge of the particle family ( 1 / 3 and 2 / 3 for quarks vs. 3 / 3 = 1 for the leptons, where 1 / 3 × 2 / 3 × 3 / 3 ≈ δ ) . [ 10 ]
The original derivation [ 11 ] postulates m_{e_i} ∝ (z_0 + z_i)² with the conditions {\displaystyle z_{1}+z_{2}+z_{3}=0\qquad {\text{and}}\qquad z_{0}^{2}={\tfrac {1}{3}}\left(z_{1}^{2}+z_{2}^{2}+z_{3}^{2}\right),}
from which the formula follows. In addition, masses for neutrinos and down quarks were postulated to be proportional to z_i² , while masses for up quarks were postulated to be proportional to (z_0 + 2z_i)² .
The published model [ 12 ] justifies the first condition as part of a symmetry breaking scheme, and the second one as a "flavor charge" for preons in the interaction that causes this symmetry breaking.
Note that in matrix form, with M = AA† and A = Z_0 + Z , the equations are simply tr Z = 0 and tr Z_0² = tr Z² .
There are similar formulae which relate other masses.
Quark masses depend on the energy scale used to measure them, which makes an analysis more complicated. [ 13 ]
Taking the heaviest three quarks, charm ( 1.275 ± 0.03 GeV/c² ), bottom ( 4.180 ± 0.04 GeV/c² ) and top ( 173.0 ± 0.40 GeV/c² ), and ignoring their uncertainties, one arrives at Q ≈ 0.669, the value cited by F. G. Cao (2012). [ 14 ]
This was noticed by Rodejohann and Zhang in the preprint of their 2011 article, [ 15 ] but the observation was removed in the published version, [ 5 ] so the first published mention is in 2012 from Cao. [ 14 ]
The relation
is published as part of the analysis of Rivero, [ 16 ] who notes (footnote 3 in the reference) that an increase of the value for charm mass makes both equations, heavy and middle , exact.
The masses of the lightest quarks, up ( 2.2 ± 0.4 MeV/c² ), down ( 4.7 ± 0.3 MeV/c² ), and strange ( 95.0 ± 4.0 MeV/c² ), without using their experimental uncertainties, yield Q ≈ 0.57,
a value also cited by Cao in the same article. [ 14 ] An older article by H. Harari et al. [ 17 ] calculates theoretical values for the up, down and strange quarks, coincidentally matching the later Koide formula, albeit with a massless up quark.
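Both quark values can be reproduced from the central masses quoted above (a sketch; uncertainties are ignored, and the masses are scale-dependent as noted above):

```python
# Koide-type ratio for the heavy and light quark triples
from math import sqrt

def koide_q(masses):
    return sum(masses) / sum(sqrt(m) for m in masses) ** 2

print(koide_q([1.275, 4.18, 173.0]))   # charm, bottom, top (GeV): ~0.669
print(koide_q([2.2, 4.7, 95.0]))       # up, down, strange (MeV):  ~0.57
```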
This could be considered the first appearance of a Koide-type formula in the literature.
In quantum field theory , quantities such as coupling constants and masses "run" with the energy scale. [ 18 ] That is, their value depends on the energy scale at which the observation occurs, in a way described by a renormalization group equation (RGE). [ 19 ] One usually expects relationships between such quantities to be simple at high energies (where some symmetry is unbroken ) but not at low energies, where the RG flow will have produced complicated deviations from the high-energy relation. The Koide relation is exact (within experimental error) for the pole masses , which are low-energy quantities defined at different energy scales. For this reason, many physicists regard the relation as "numerology" . [ 20 ]
However, the Japanese physicist Yukinari Sumino has proposed mechanisms to explain origins of the charged lepton spectrum as well as the Koide formula, e.g., by constructing an effective field theory with a new gauge symmetry that causes the pole masses to exactly satisfy the relation. [ 21 ] Koide has published his opinions concerning Sumino's model. [ 22 ] [ 23 ] François Goffinet's doctoral thesis gives a discussion on pole masses and how the Koide formula can be reformulated to avoid using square roots for the masses. [ 24 ]
A cubic equation usually arises in symmetry breaking when solving for the Higgs vacuum, and is a natural object when considering three generations of particles. This involves finding the eigenvalues of a 3 × 3 mass matrix.
For this example, consider a characteristic polynomial
with roots m_j , j = 1, 2, 3, that must be real and positive.
To derive the Koide relation, let m ≡ x² ; the resulting polynomial can then be factored into
or
The elementary symmetric polynomials of the roots must reproduce the corresponding coefficients from the polynomial that they solve, so x_1 + x_2 + x_3 = ±3n and x_1 x_2 + x_2 x_3 + x_3 x_1 = (3/2) n² . Taking the ratio of these symmetric polynomials, but squaring the first so as to divide out the unknown parameter n , we get a Koide-type formula: regardless of the value of n , the solutions to the cubic equation for x must satisfy
so
and
Converting back to √m = x ,
For the relativistic case, Goffinet's dissertation presented a similar method to build a polynomial with only even powers of m . {\displaystyle m.}
Koide proposed that an explanation for the formula could be a Higgs particle with U(3) flavour charge Φ^{ab̄} given by:
with the charged lepton mass terms given by ψ̄Φ²ψ . [ 25 ] Such a potential is minimised when the masses fit the Koide formula. Minimisation does not fix the mass scale, which would have to be given by additional terms of the potential, so the Koide formula might indicate the existence of additional scalar particles beyond the Standard Model's Higgs boson .
In fact, one such Higgs potential would be precisely {\displaystyle V(\Phi )=\det[(\Phi -{\sqrt {m_{\text{e}}}})]^{2}+\det[(\Phi -{\sqrt {m_{\mu }}})]^{2}+\det[(\Phi -{\sqrt {m_{\tau }}})]^{2},} which, when the determinants are expanded in terms of traces, simplifies using the Koide relations. | https://en.wikipedia.org/wiki/Koide_formula
Koinophilia is an evolutionary hypothesis proposing that during sexual selection , animals preferentially seek mates with a minimum of unusual or mutant features, with respect to function, appearance and behavior. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] Koinophilia is intended to explain the clustering of sexual organisms into species and other issues described by Darwin's dilemma . [ 3 ] [ 4 ] [ 5 ] The term derives from the Greek word koinos , meaning "common" or "that which is shared", and philia , meaning "fondness".
Natural selection causes beneficial inherited features to become more common at the expense of their disadvantageous counterparts. The koinophilia hypothesis proposes that a sexually-reproducing animal would therefore be expected to avoid individuals with rare or unusual features, and to prefer to mate with individuals displaying a predominance of common or average features. [ 2 ] [ 3 ] Mutants with peculiar features would be avoided because most mutations that manifest themselves as changes in appearance, functionality or behavior are disadvantageous. [ 7 ] Because it is impossible to judge whether a new mutation is beneficial (or might be advantageous in the unforeseeable future) or not, koinophilic animals avoid them all, at the cost of avoiding the very occasional potentially beneficial mutation. [ 8 ] Thus, koinophilia, although not infallible in its ability to distinguish fit from unfit mates, is a good strategy when choosing a mate . A koinophilic choice ensures that offspring are likely to inherit a suite of features and attributes that have served all the members of the species well in the past. [ 3 ]
Koinophilia differs from the "like prefers like" mating pattern of assortative mating . [ 9 ] [ 10 ] If like preferred like, leucistic animals (such as white peacocks) would be sexually attracted to one another, and a leucistic subspecies would come into being. Koinophilia predicts that this is unlikely because leucistic animals are attracted to the average in the same way as all the other members of their species. Since non-leucistic animals are not attracted to leucism, few leucistic individuals find mates, and leucistic lineages will rarely form.
Koinophilia provides simple explanations for the almost universal canalization of sexual creatures into species, [ 3 ] [ 4 ] [ 5 ] the rarity of transitional forms between species (between both extant and fossil species), [ 3 ] [ 4 ] evolutionary stasis , punctuated equilibria , [ 3 ] [ 4 ] [ 5 ] and the evolution of cooperation . [ 11 ] [ 12 ] Koinophilia might also contribute to the maintenance of sexual reproduction , preventing its reversion to the much simpler asexual form of reproduction . [ 13 ] [ 14 ]
The koinophilia hypothesis is supported by the findings of Judith Langlois and her co-workers. [ 2 ] [ 15 ] [ 16 ] [ 17 ] They found that the average of two human faces was more attractive than either of the faces from which that average was derived. [ 18 ] The more faces (of the same gender and age) that were used in the averaging process the more attractive and appealing the average face became. [ 19 ] This work into averageness [ 2 ] [ 15 ] [ 16 ] [ 20 ] supports koinophilia as an explanation of what constitutes a beautiful face. [ 17 ] [ 21 ] [ 22 ]
Biologists from Darwin onwards [ 23 ] have puzzled over how evolution produces species whose adult members look extraordinarily alike, and distinctively different from the members of other species. Lions and leopards are, for instance, both large carnivores that inhabit the same general environment, and hunt much the same prey, but look quite different. The question is why intermediates do not exist. [ 7 ] [ 24 ]
This is the "horizontal" dimension of a two-dimensional problem, [ 28 ] [ 29 ] referring to the almost complete absence of transitional or intermediate forms between present-day species (e.g. between lions, leopards, and cheetahs). [ 24 ] [ 30 ] [ 31 ]
The "vertical" dimension concerns the fossil record. Fossil species are frequently remarkably stable over extremely long periods of geological time, despite continental drift, major climate changes, and mass extinctions. [ 32 ] [ 33 ] [ 34 ] When a change in form occurs, it tends to be abrupt in geological terms, again producing phenotypic gaps (i.e. an absence of intermediate forms), but now between successive species, which then often co-exist for long periods of time. Thus the fossil record suggests that evolution occurs in bursts, interspersed by long periods of evolutionary stagnation in so-called punctuated equilibria . [ 32 ] Why this is so has been an evolutionary enigma ever since Darwin first recognized the problem . [ 23 ] [ 34 ] [ 35 ]
Koinophilia could explain both the horizontal and vertical manifestations of speciation , and why it, as a general rule, involves the entire external appearance of the animals concerned. [ 3 ] [ 4 ] [ 5 ] Since koinophilia affects the entire external appearance, the members of an interbreeding group are driven to look alike in every detail. [ 25 ] [ 36 ] Each interbreeding group will rapidly develop its own characteristic appearance. [ 5 ] An individual from one group which wanders into another group will consequently be recognized as different, and will be discriminated against during the mating season. Reproductive isolation induced by koinophilia might thus be the first crucial step in the development of, ultimately, physiological, anatomical and behavioral barriers to hybridization, and thus, ultimately, full specieshood. Koinophilia will thereafter defend that species' appearance and behavior against invasion by unusual or unfamiliar forms (which might arise by immigration or mutation), and thus be a paradigm of punctuated equilibria (or the "vertical" aspect of the speciation problem). [ 3 ] [ 4 ]
Evolution can be extremely rapid, as shown by the creation of domesticated animals and plants in a very short period of geological time, spanning only a few tens of thousands of years, by humans with little or no knowledge of genetics. Maize , Zea mays , for instance, was created in Mexico in only a few thousand years, starting about 7 000 to 12 000 years ago. [ 37 ] This raises the question of why the long term rate of evolution is far slower than is theoretically possible. [ 7 ] [ 32 ] [ 34 ] [ 38 ]
Evolution is imposed on species or groups. It is not planned or striven for in some Lamarckist way. [ 39 ] [ 40 ] The mutations on which the process depends are random events, and, except for the " silent mutations " which do not affect the functionality or appearance of the carrier, are thus usually disadvantageous, and their chance of proving to be useful in the future is vanishingly small. Therefore, while a species or group might benefit by being able to adapt to a new environment through the accumulation of a wide range of genetic variation, this is to the detriment of the individuals who have to carry these mutations until a small, unpredictable minority of them ultimately contributes to such an adaptation. Thus, the capability to evolve would be a group adaptation , a concept that has been discredited by, among others, George C. Williams , [ 41 ] John Maynard Smith [ 42 ] and Richard Dawkins , [ 43 ] [ 44 ] [ 45 ] [ 46 ] because it is not to the benefit of the individual.
Consequently, sexual individuals would be expected to avoid transmitting mutations to their progeny by avoiding mates with strange or unusual characteristics. [ 1 ] [ 2 ] [ 3 ] [ 5 ] Mutations that therefore affect the external appearance and habits of their carriers will seldom be passed on to the next and subsequent generations. They will therefore seldom be tested by natural selection. Evolutionary change in a large population with a wide choice of mates, will, therefore, come to a virtual standstill. The only mutations that can accumulate in a population are ones that have no noticeable effect on the outward appearance and functionality of their bearers (they are thus termed " silent " or " neutral mutations ").
The restraint koinophilia exerts on phenotypic change suggests that evolution can only occur if mutant mates cannot be avoided as a result of a severe scarcity of potential mates. This is most likely to occur in small restricted communities , such as on small islands, in remote valleys, lakes, river systems, caves, [ 9 ] or during periods of glaciation , [ 47 ] or following mass extinctions , when sudden bursts of evolution can be expected. [ 48 ] Under these circumstances, not only is the choice of mates severely restricted, but population bottlenecks , founder effects , genetic drift and inbreeding cause rapid, random changes in the isolated population's genetic composition. [ 9 ] Furthermore, hybridization with a related species trapped in the same isolate might introduce additional genetic changes. [ 49 ] [ 50 ] If an isolated population such as this survives its genetic upheavals , and subsequently expands into an unoccupied niche, or into a niche in which it has an advantage over its competitors, a new species, or subspecies, will have come in being. In geological terms this will be an abrupt event. A resumption of avoiding mutant mates will, thereafter, result, once again, in evolutionary stagnation.
Thus the fossil record of an evolutionary progression typically consists of species that suddenly appear, and ultimately disappear hundreds of thousands or millions of years later, without any change in external appearance. [ 33 ] [ 35 ] [ 48 ] [ 51 ] Graphically, these fossil species are represented by horizontal lines, whose lengths depict how long each of them existed. The horizontality of the lines illustrates the unchanging appearance of each of the fossil species depicted on the graph. During each species' existence new species appear at random intervals, each also lasting many hundreds of thousands of years before disappearing without a change in appearance. The degree of relatedness and the lines of descent of these concurrent species is generally impossible to determine. This is illustrated in the following diagram depicting the evolution of modern humans from the time that the hominins separated from the line that led to the evolution of our closest living primate relatives, the chimpanzees . [ 51 ]
This proposal, that population bottlenecks are possibly the primary generators of the variation that fuels evolution, predicts that evolution will usually occur in intermittent, relatively large scale morphological steps, interspersed with prolonged periods of evolutionary stagnation, [ 52 ] instead of in a continuous series of finely graded changes. [ 53 ] However, it makes a further prediction. [ 4 ] Darwin emphasized that the shared biologically useless oddities and incongruities that characterize a species are signs of an evolutionary history – something that would not be expected if a bird's wing, for instance, was engineered de novo , as argued by his detractors. [ 54 ] The present model predicts that, in addition to vestiges which reflect an organism's evolutionary heritage, all the members of a given species will also bear the stamp of their isolationary past – arbitrary, random features, accumulated through founder effects , genetic drift and the other genetic consequences of sexual reproduction in small, isolated communities . [ 4 ] [ 55 ] Thus all lions, African and Asian, have a highly characteristic black tuft of fur at the end of their tails, which is difficult to explain in terms of an adaptation, or as a vestige from an early feline , or more ancient ancestor. The unique, often color- and pattern-rich plumage of each of today's wide variety of bird species presents a similar evolutionary enigma. This richly varied array of phenotypes is more easily explained as the products of isolates, subsequently defended by koinophilia, than as assemblies of very diverse evolutionary relics, or as sets of uniquely evolved adaptations.
Co-operation is any group behavior that benefits the individuals more than if they were to act as independent agents.
However selfish individuals can exploit the co-operativeness of others by not taking part in the group activity, but still enjoying its benefits. For instance, a selfish individual which does not join the hunting pack and share in its risks, but nevertheless shares in the spoils, has a fitness advantage over the other members of the pack. Thus, although a group of co-operative individuals is fitter than an equivalent group of selfish individuals, selfish individuals interspersed among a community of co-operators are always fitter than their hosts. They will raise, on average, more offspring than their hosts, and will ultimately replace them. [ 43 ] [ 44 ] [ 45 ] [ 46 ]
If, however, the selfish individuals are ostracized, and rejected as mates, because of their deviant and unusual behavior, then their evolutionary advantage becomes an evolutionary liability. [ 3 ] Co-operation then becomes evolutionarily stable . [ 11 ] [ 12 ]
The best-documented creations of new species in the laboratory were performed in the late 1980s. William Rice and G.W. Salt bred fruit flies, Drosophila melanogaster , using a maze with three different choices of habitat, such as light/dark and wet/dry. Each generation was placed into the maze, and the groups of flies that came out of two of the eight exits were set apart to breed with each other in their respective groups. After thirty-five generations, the two groups and their offspring were isolated reproductively because of their strong habitat preferences: they mated only within the areas they preferred, and so did not mate with flies that preferred the other areas. [ 56 ] The history of such attempts is described in Rice and Hostert (1993). [ 57 ] [ 58 ]
Diane Dodd used a laboratory experiment to show how reproductive isolation can evolve in Drosophila pseudoobscura fruit flies after several generations by placing them in different media, starch-based or maltose-based. [ 59 ]
Dodd's experiment has been easy for many others to replicate, including with other kinds of fruit flies and foods. [ 60 ]
The carrion crow ( Corvus corone ) and hooded crow ( Corvus cornix ) are two closely related species whose geographical distribution across Europe is illustrated in the accompanying diagram. It is believed that this distribution might have resulted from the glaciation cycles during the Pleistocene , which caused the parent population to split into isolates which subsequently re-expanded their ranges when the climate warmed, causing secondary contact. [ 47 ] [ 61 ] Jelmer W. Poelstra and coworkers sequenced almost the entire genomes of both species in populations at varying distances from the contact zone and found that the two species were genetically identical, both in their DNA and in its expression (in the form of RNA), except for the lack of expression of a small portion (<0.28%) of the genome (situated on avian chromosome 18) in the hooded crow, which imparts the lighter plumage coloration on its torso. [ 47 ] Thus the two species can viably hybridize, and occasionally do so at the contact zone, but the all-black carrion crows on the one side of the contact zone mate almost exclusively with other all-black carrion crows, while the same occurs among the hooded crows on the other side of the contact zone. It is therefore clear that it is only the outward appearance of the two species that inhibits hybridization. [ 47 ] [ 61 ] The authors attribute this to assortative mating , whose advantage is unclear, and which would be expected to lead to the rapid appearance of streams of new lineages, and possibly even species, through mutual attraction between mutants. Unnikrishnan and Akhila [ 62 ] propose, instead, that koinophilia is a more precise explanation for the resistance to hybridization across the contact zone, despite the absence of physiological, anatomical or genetic barriers to such hybridization.
William B. Miller, [ 5 ] in an extensive recent (2013) review of koinophilia theory, notes that while it provides precise explanations for the grouping of sexual animals into species, their unchanging persistence in the fossil record over long periods of time, and the phenotypic gaps between species, both fossil and extant, it represents a major departure from the widely accepted view that beneficial mutations spread, ultimately, to the whole, or some portion of the population (causing it to evolve gene by gene). [ 63 ] [ 64 ] Darwin recognized that this process had no inherent, or inevitable propensity to produce species. [ 24 ] [ 23 ] Instead populations would be in a perpetual state of transition both in time and space . [ 24 ] [ 23 ] They would, at any given moment, consist of individuals with varying numbers of beneficial characteristics that may or may not have reached them from their various points of origin in the population, and neutral features will have a scattering determined by random mechanisms such as genetic drift . [ 65 ] [ 66 ] [ 67 ]
He also notes that koinophilia provides no explanation as to how the physiological, anatomical and genetic causes of reproductive isolation come about; it is only behavioral reproductive isolation that is addressed by koinophilia. It is furthermore difficult to see how koinophilia might apply to plants and to certain marine creatures that discharge their gametes into the environment, where they meet up and fuse, it seems, entirely at random (within conspecific confines). However, when pollen from several compatible donors is used to pollinate stigmata, the donors typically do not sire equal numbers of seeds. [ 68 ] Marshall and Diggle state that the existence of some kind of non-random seed paternity is, in fact, not in question in flowering plants. How this occurs remains unknown. Pollen choice is one of the possibilities, [ 68 ] taking into account that 50% of the pollen grain's haploid genome is expressed during its tube's growth towards the ovule. [ 69 ]
The apparent preference of the females of certain, particularly bird, species for exaggerated male ornaments , such as the peacock's tail, [ 7 ] [ 70 ] [ 71 ] is not easily reconciled with the concept of koinophilia. | https://en.wikipedia.org/wiki/Koinophilia |
The Kolbe electrolysis or Kolbe reaction is an organic reaction named after Hermann Kolbe . [ 1 ] The Kolbe reaction is formally a decarboxylative dimerisation of two carboxylic acids (or carboxylate ions ). The overall reaction is: 2 RCOO⁻ → R–R + 2 CO₂ + 2 e⁻
If a mixture of two different carboxylates is used, all possible coupling combinations are generally observed among the organic products:
The reaction mechanism involves a two-stage radical process: electrochemical decarboxylation gives radical intermediates, which then combine to form a covalent bond. [ 2 ] As an example, electrolysis of acetic acid yields ethane and carbon dioxide : 2 CH₃COO⁻ → CH₃–CH₃ + 2 CO₂ + 2 e⁻
Another example is the synthesis of 2,7-dimethyl-2,7-dinitrooctane from 4-methyl-4-nitrovaleric acid: [ 3 ]
The Kolbe reaction has also been occasionally used in cross-coupling reactions .
In 2022, it was discovered that the Kolbe electrolysis is enhanced if an alternating square wave current is used instead of a direct current . [ 4 ] [ 5 ]
Kolbe electrolysis has a few industrial applications. [ 6 ] In one example, sebacic acid has been produced commercially by Kolbe electrolysis of adipic acid . [ 7 ]
Kolbe electrolysis has been examined for converting biomass into biodiesel [ 8 ] [ 9 ] and for grafting of carbon electrodes. [ 10 ] [ 11 ] | https://en.wikipedia.org/wiki/Kolbe_electrolysis |
The Kolbe nitrile synthesis is a method for the preparation of alkyl nitriles by reaction of the corresponding alkyl halide with a metal cyanide . [ 1 ] A side product for this reaction is the formation of an isonitrile because the cyanide ion is an ambident nucleophile . The reaction is named after Hermann Kolbe .
The ratio of product isomers depends on the solvent and the reaction mechanism , and can be predicted by Kornblum's rule . With alkali cyanides such as sodium cyanide and polar solvents, the reaction occurs by an S N 2 mechanism via the more nucleophilic carbon atom of the cyanide ion. [ citation needed ] [ dubious – discuss ]
This type of reaction, together with dimethyl sulfoxide as a solvent, is a convenient method for the synthesis of nitriles. [ 2 ] The use of DMSO was a major advancement in the development of this reaction, as it works for more sterically hindered electrophiles (secondary and neopentyl halides) without rearrangement side-reactions. [ citation needed ] | https://en.wikipedia.org/wiki/Kolbe_nitrile_synthesis
The Kolbe–Schmitt reaction or Kolbe process (named after Hermann Kolbe and Rudolf Schmitt ) is a carboxylation chemical reaction that proceeds by treating phenol with sodium hydroxide to form sodium phenoxide , [ 1 ] then heating sodium phenoxide with carbon dioxide under pressure (100 atm , 125 °C), then treating the product with sulfuric acid . The final product is an aromatic hydroxy acid which is also known as salicylic acid (the precursor to aspirin ). [ 2 ] [ 3 ] [ 4 ] [ 5 ]
By using potassium hydroxide , 4-hydroxybenzoic acid is accessible, an important precursor for the versatile paraben class of biocides used e.g. in personal care products.
The methodology is also used in the industrial synthesis of 3-hydroxy-2-naphthoic acid ; the regiochemistry of the carboxylation in this case is sensitive to temperature. [ 6 ]
The Kolbe–Schmitt reaction proceeds via the nucleophilic addition of a phenoxide , classically sodium phenoxide (NaOC 6 H 5 ), to carbon dioxide to give the salicylate.
The final step is the reaction ( protonation ) of the salicylate anion with an acid to form the desired hydroxybenzoic acids (the ortho isomer being salicylic acid, the para isomer 4-hydroxybenzoic acid).
| https://en.wikipedia.org/wiki/Kolbe–Schmitt_reaction
In probability theory , Kolmogorov's inequality is a so-called "maximal inequality " that gives a bound on the probability that the partial sums of a finite collection of independent random variables exceed some specified bound.
Let X_1 , ..., X_n : Ω → R be independent random variables defined on a common probability space (Ω, F , Pr), with expected value E[ X_k ] = 0 and variance Var[ X_k ] < +∞ for k = 1, ..., n . Then, for each λ > 0, {\displaystyle \Pr \left(\max _{1\leq k\leq n}|S_{k}|\geq \lambda \right)\leq {\frac {1}{\lambda ^{2}}}\operatorname {Var} [S_{n}]={\frac {1}{\lambda ^{2}}}\sum _{k=1}^{n}\operatorname {Var} [X_{k}],}
where S k = X 1 + ... + X k .
The convenience of this result is that it bounds the worst-case deviation of a random walk at any intermediate point of time using only the variance at the end of the time interval.
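A Monte Carlo illustration (not part of the original text): for a simple ±1 random walk, the empirical probability that the running maximum of |S_k| reaches λ can be compared with the bound Var(S_n)/λ².

```python
# Empirical check of Kolmogorov's inequality for a +/-1 random walk
import numpy as np

rng = np.random.default_rng(0)
n, lam, trials = 100, 25.0, 100_000
steps = rng.choice([-1.0, 1.0], size=(trials, n))        # Var(S_n) = n
max_abs = np.abs(np.cumsum(steps, axis=1)).max(axis=1)   # max_k |S_k|
print(np.mean(max_abs >= lam), "<=", n / lam**2)         # e.g. ~0.02 <= 0.16
```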
The following argument employs discrete martingales .
As argued in the discussion of Doob's martingale inequality , the sequence S_1 , S_2 , …, S_n is a martingale.
Define (Z_i)_{i=0}^{n} as follows. Let Z_0 = 0 and Z_i = S_{min(i, τ)} for all i ≥ 1, where τ = min{ i : |S_i| ≥ λ } is the first time the walk leaves (−λ, λ) (with τ = n if that never happens); that is, Z is the walk S stopped at time τ.
Then (Z_i)_{i=0}^{n} is also a martingale.
For any martingale M_i with M_0 = 0, we have that
{\displaystyle {\begin{aligned}\sum _{i=1}^{n}{\text{E}}[(M_{i}-M_{i-1})^{2}]&=\sum _{i=1}^{n}{\text{E}}[M_{i}^{2}-2M_{i}M_{i-1}+M_{i-1}^{2}]\\&=\sum _{i=1}^{n}{\text{E}}\left[M_{i}^{2}-2(M_{i-1}+M_{i}-M_{i-1})M_{i-1}+M_{i-1}^{2}\right]\\&=\sum _{i=1}^{n}{\text{E}}\left[M_{i}^{2}-M_{i-1}^{2}\right]-2{\text{E}}\left[M_{i-1}(M_{i}-M_{i-1})\right]\\&={\text{E}}[M_{n}^{2}]-{\text{E}}[M_{0}^{2}]={\text{E}}[M_{n}^{2}].\end{aligned}}}
Applying this result to the martingales (Z_i)_{i=0}^{n} and (S_i)_{i=0}^{n} , we have
{\displaystyle {\begin{aligned}{\text{Pr}}\left(\max _{1\leq i\leq n}|S_{i}|\geq \lambda \right)&={\text{Pr}}[|Z_{n}|\geq \lambda ]\\&\leq {\frac {1}{\lambda ^{2}}}{\text{E}}[Z_{n}^{2}]={\frac {1}{\lambda ^{2}}}\sum _{i=1}^{n}{\text{E}}[(Z_{i}-Z_{i-1})^{2}]\\&\leq {\frac {1}{\lambda ^{2}}}\sum _{i=1}^{n}{\text{E}}[(S_{i}-S_{i-1})^{2}]={\frac {1}{\lambda ^{2}}}{\text{E}}[S_{n}^{2}]={\frac {1}{\lambda ^{2}}}{\text{Var}}[S_{n}]\end{aligned}}}
where the first inequality follows by Chebyshev's inequality .
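The identity used above for the sum of squared martingale increments can be checked numerically on a toy martingale (illustrative only):

```python
# For a martingale with M_0 = 0: sum_i E[(M_i - M_{i-1})^2] = E[M_n^2].
# With +/-1 increments both sides should be close to n.
import numpy as np

rng = np.random.default_rng(3)
n, trials = 50, 200_000
inc = rng.choice([-1.0, 1.0], size=(trials, n))
M = np.cumsum(inc, axis=1)
lhs = np.sum(np.mean(inc**2, axis=0))   # sum of E[(M_i - M_{i-1})^2]
rhs = np.mean(M[:, -1] ** 2)            # E[M_n^2]
print(lhs, rhs)                         # both ~50
```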
This inequality was generalized by Hájek and Rényi in 1955.
This article incorporates material from Kolmogorov's inequality on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License . | https://en.wikipedia.org/wiki/Kolmogorov's_inequality |
In probability theory , Kolmogorov's three-series theorem , named after Andrey Kolmogorov , gives a criterion for the almost sure convergence of an infinite series of random variables in terms of the convergence of three different series involving properties of their probability distributions . Kolmogorov's three-series theorem, combined with Kronecker's lemma , can be used to give a relatively easy proof of the strong law of large numbers . [ 1 ]
Let (X_n)_{n∈N} be independent random variables . The random series ∑_{n=1}^∞ X_n converges almost surely in R if the following conditions hold for some A > 0, and only if the following conditions hold for any A > 0, where Y_n denotes the truncated variable Y_n = X_n 1{ |X_n| ≤ A }:
(i) ∑_{n=1}^∞ Pr(|X_n| ≥ A) converges;
(ii) ∑_{n=1}^∞ E[Y_n] converges;
(iii) ∑_{n=1}^∞ Var(Y_n) converges.
Condition (i) and Borel–Cantelli give that X_n = Y_n for all large n , almost surely . Hence ∑_{n=1}^∞ X_n converges if and only if ∑_{n=1}^∞ Y_n converges. Conditions (ii)–(iii) and Kolmogorov's two-series theorem give the almost sure convergence of ∑_{n=1}^∞ Y_n .
Suppose that ∑ n = 1 ∞ X n {\displaystyle \textstyle \sum _{n=1}^{\infty }X_{n}} converges almost surely.
Without condition (i), by Borel–Cantelli there would exist some A > 0 such that { |X_n| ≥ A } occurs for infinitely many n , almost surely. But then the series would diverge. Therefore, we must have condition (i).
We see that condition (iii) implies condition (ii): Kolmogorov's two-series theorem along with condition (i) applied to the case A = 1 gives the convergence of ∑_{n=1}^∞ (Y_n − E[Y_n]) . So given the convergence of ∑_{n=1}^∞ Y_n , we have that ∑_{n=1}^∞ E[Y_n] converges, so condition (ii) is implied.
Thus, it only remains to demonstrate the necessity of condition (iii), and we will have obtained the full result. It is equivalent to check condition (iii) for the series ∑_{n=1}^∞ Z_n = ∑_{n=1}^∞ (Y_n − Y′_n), where for each n , Y_n and Y′_n are IID — that is, to employ the assumption that E[Y_n] = 0 — since Z_n is a sequence of random variables bounded by 2, converging almost surely, and with var(Z_n) = 2 var(Y_n). So we wish to check that if ∑_{n=1}^∞ Z_n converges, then ∑_{n=1}^∞ var(Z_n) converges as well. This is a special case of a more general result from martingale theory with summands equal to the increments of a martingale sequence and the same conditions ( E[Z_n] = 0; the series of the variances is converging; and the summands are bounded ). [ 2 ] [ 3 ] [ 4 ]
As an illustration of the theorem, consider the example of the harmonic series with random signs: {\displaystyle \sum _{n=1}^{\infty }\pm {\frac {1}{n}}.}
Here, " ± {\displaystyle \pm } " means that each term 1 / n {\displaystyle 1/n} is taken with a random sign that is either 1 {\displaystyle 1} or − 1 {\displaystyle -1} with respective probabilities 1 / 2 , 1 / 2 {\displaystyle 1/2,\ 1/2} , and all random signs are chosen independently. Let X n {\displaystyle X_{n}} in the theorem denote a random variable that takes the values 1 / n {\displaystyle 1/n} and − 1 / n {\displaystyle -1/n} with equal probabilities. With A = 2 {\displaystyle A=2} the summands of the first two series are identically zero and var(Y n )= n − 2 {\displaystyle n^{-2}} . The conditions of the theorem are then satisfied, so it follows that the harmonic series with random signs converges almost surely. On the other hand, the analogous series of (for example) square root reciprocals with random signs, namely
diverges almost surely, since condition (iii) in the theorem is not satisfied for any A . Note that this is different from the behavior of the analogous series with alternating signs, ∑_{n=1}^∞ (−1)^n/√n , which does converge. | https://en.wikipedia.org/wiki/Kolmogorov's_three-series_theorem
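A simulation can illustrate (though not prove) the contrast between the two random-sign series above: the spread of the partial sums over the late terms stays small for the 1/n series but remains large for the 1/√n series.

```python
# Partial sums of sum(+/- 1/n) versus sum(+/- 1/sqrt(n))
import numpy as np

rng = np.random.default_rng(1)
n_terms, trials = 100_000, 8
signs = rng.choice([-1.0, 1.0], size=(trials, n_terms))
n = np.arange(1, n_terms + 1)

harmonic = np.cumsum(signs / n, axis=1)
sqrt_series = np.cumsum(signs / np.sqrt(n), axis=1)
tail = n_terms // 10
print("1/n       late spread:", np.ptp(harmonic[:, tail:], axis=1).max())    # small
print("1/sqrt(n) late spread:", np.ptp(sqrt_series[:, tail:], axis=1).max()) # large
```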
In probability theory , Kolmogorov's two-series theorem is a result about the convergence of random series. It follows from Kolmogorov's inequality and is used in one proof of the strong law of large numbers .
Let (X_n)_{n=1}^∞ be independent random variables with expected values E[X_n] = μ_n and variances Var(X_n) = σ_n² , such that ∑_{n=1}^∞ μ_n converges in R and ∑_{n=1}^∞ σ_n² converges in R . Then ∑_{n=1}^∞ X_n converges in R almost surely .
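A numerical illustration of the statement (an assumption-laden sketch: μ_n = 2⁻ⁿ and σ_n² = n⁻² are chosen so that both series converge, with Gaussian X_n):

```python
# Sample paths of the partial sums settle down when both series converge
import numpy as np

rng = np.random.default_rng(2)
N, paths = 10_000, 5
n = np.arange(1, N + 1)
X = rng.normal(loc=0.5 ** n, scale=1.0 / n, size=(paths, N))
S = np.cumsum(X, axis=1)
print(S[:, -1])                       # endpoints of a few sample paths
print(np.ptp(S[:, N // 2:], axis=1))  # late-time wobble is small for each path
```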
Assume WLOG μ_n = 0 (otherwise consider X_n − μ_n ). Set S_N = ∑_{n=1}^N X_n , and we will see that lim sup_N S_N − lim inf_N S_N = 0 with probability 1.
For every m ∈ N , {\displaystyle \limsup _{N\to \infty }S_{N}-\liminf _{N\to \infty }S_{N}=\limsup _{N\to \infty }\left(S_{N}-S_{m}\right)-\liminf _{N\to \infty }\left(S_{N}-S_{m}\right)\leq 2\max _{k\in \mathbb {N} }\left|\sum _{i=1}^{k}X_{m+i}\right|}
Thus, for every m ∈ N and ε > 0, {\displaystyle {\begin{aligned}\mathbb {P} \left(\limsup _{N\to \infty }\left(S_{N}-S_{m}\right)-\liminf _{N\to \infty }\left(S_{N}-S_{m}\right)\geq \epsilon \right)&\leq \mathbb {P} \left(2\max _{k\in \mathbb {N} }\left|\sum _{i=1}^{k}X_{m+i}\right|\geq \epsilon \ \right)\\&=\mathbb {P} \left(\max _{k\in \mathbb {N} }\left|\sum _{i=1}^{k}X_{m+i}\right|\geq {\frac {\epsilon }{2}}\ \right)\\&\leq \limsup _{N\to \infty }4\epsilon ^{-2}\sum _{i=m+1}^{m+N}\sigma _{i}^{2}\\&=4\epsilon ^{-2}\lim _{N\to \infty }\sum _{i=m+1}^{m+N}\sigma _{i}^{2}\end{aligned}}}
where the second inequality follows from Kolmogorov's inequality .
By the assumption that ∑_{n=1}^∞ σ_n² converges, the last term tends to 0 as m → ∞, for every ε > 0. Hence lim sup_N S_N − lim inf_N S_N = 0 almost surely, and the partial sums S_N converge almost surely. | https://en.wikipedia.org/wiki/Kolmogorov's_two-series_theorem
In probability theory , Kolmogorov's zero–one law , named in honor of Andrey Nikolaevich Kolmogorov , specifies that a certain type of event , namely a tail event of independent σ-algebras , will either almost surely happen or almost surely not happen; that is, the probability of such an event occurring is zero or one.
Tail events are defined in terms of countably infinite families of σ-algebras. For illustrative purposes, we present here the special case in which each sigma algebra is generated by a random variable X_k for k ∈ N . Let F be the sigma-algebra generated jointly by all of the X_k . Then, a tail event F ∈ F is an event the occurrence of which cannot depend on the outcome of a finite subfamily of these random variables. (Note: F belonging to F implies that membership in F is uniquely determined by the values of the X_k , but the latter condition is strictly weaker and does not suffice to prove the zero–one law.) For example, the event that the sequence of the X_k converges, and the event that its sum converges, are both tail events. If the X_k are, for example, all Bernoulli-distributed, then the event that there are infinitely many k ∈ N such that X_k = X_{k+1} = ⋯ = X_{k+100} = 1 is a tail event. If each X_k models the outcome of the k -th coin toss in a modeled, infinite sequence of coin tosses, this means that a sequence of 100 consecutive heads occurring infinitely many times is a tail event in this model.
Tail events are precisely those events whose occurrence can still be determined if an arbitrarily large but finite initial segment of the X k {\displaystyle X_{k}} is removed.
In many situations, it can be easy to apply Kolmogorov's zero–one law to show that some event has probability 0 or 1, but surprisingly hard to determine which of these two extreme values is the correct one.
A more general statement of Kolmogorov's zero–one law holds for sequences of independent σ-algebras. Let (Ω, F , P ) be a probability space and let F_n be a sequence of σ-algebras contained in F . Let {\displaystyle G_{n}=\sigma \left(\bigcup _{k=n}^{\infty }F_{k}\right)}
be the smallest σ-algebra containing F_n , F_{n+1} , .... The terminal σ-algebra of the F_n is defined as {\displaystyle {\mathcal {T}}((F_{n})_{n\in \mathbb {N} })=\bigcap _{n=1}^{\infty }G_{n}.}
Kolmogorov's zero–one law asserts that, if the F_n are stochastically independent, then for any event E ∈ T((F_n)_{n∈N}) , one has either P ( E ) = 0 or P ( E ) = 1.
The statement of the law in terms of random variables is obtained from the latter by taking each F_n to be the σ-algebra generated by the random variable X_n . A tail event is then by definition an event which is measurable with respect to the σ-algebra generated by all X_n , but which is independent of any finite number of X_n . That is, a tail event is precisely an element of the terminal σ-algebra ⋂_{n=1}^∞ G_n .
An invertible measure-preserving transformation on a standard probability space that obeys the 0-1 law is called a Kolmogorov automorphism . [ clarification needed ] All Bernoulli automorphisms are Kolmogorov automorphisms but not vice versa . The presence of an infinite cluster in the context of percolation theory also obeys the 0-1 law.
Let {X_n}_n be a sequence of independent random variables. Then the event { lim_{n→∞} ∑_{k=1}^n X_k exists } is a tail event. Thus by the Kolmogorov 0–1 law, it has probability either 0 or 1. Note that independence is required for the zero–one law to hold. Without independence we can consider a sequence that is either (0, 0, 0, …) or (1, 1, 1, …) with probability 1/2 each. In this case the sum converges with probability 1/2 , so the zero–one conclusion fails. | https://en.wikipedia.org/wiki/Kolmogorov's_zero–one_law
In 1973, Andrey Kolmogorov proposed a non-probabilistic approach to statistics and model selection . Let each datum be a finite binary string and a model be a finite set of binary strings. Consider model classes consisting of models of given maximal Kolmogorov complexity .
The Kolmogorov structure function of an individual data string expresses the relation between the complexity-level constraint on a model class and the least log-cardinality of a model in the class containing the data. The structure function determines all stochastic properties of the individual data string: for every constrained model class it determines the individual best-fitting model in the class, irrespective of whether the true model is in the model class considered or not. In the classical case we talk about a set of data with a probability distribution, and the properties are those of the expectations. In contrast, here we deal with individual data strings and the properties of the particular string in focus. In this setting, a property holds with certainty rather than with high probability as in the classical case. The Kolmogorov structure function precisely quantifies the goodness-of-fit of an individual model with respect to individual data.
The Kolmogorov structure function is used in the algorithmic information theory , also known as the theory of Kolmogorov complexity, for describing the structure of a string by use of models of increasing complexity.
The structure function was originally proposed by Kolmogorov in 1973 at a Soviet information theory symposium in Tallinn, but the results were not published at the time [ 1 ] p. 182. They were, however, announced in [ 2 ] in 1974, the only written record by Kolmogorov himself. One of his last scientific statements is (translated from the original Russian by L.A. Levin):
To each constructive object corresponds a function Φ_x(k) of a natural number k—the log of minimal cardinality of x-containing sets that allow definitions of complexity at most k. If the element x itself allows a simple definition, then the function Φ drops to 0 even for small k. Lacking such definition, the element is "random" in a negative sense. But it is positively "probabilistically random" only when function Φ having taken the value Φ_0 at a relatively small k = k_0 , then changes approximately as Φ(k) = Φ_0 − (k − k_0) .
It is discussed in Cover and Thomas. [ 1 ] It is extensively studied in Vereshchagin and Vitányi [ 3 ] where also the main properties are resolved.
The Kolmogorov structure function can be written as {\displaystyle h_{x}(\alpha )=\min _{S}\{\log |S|:x\in S,\ K(S)\leq \alpha \},}
where x is a binary string of length n with x ∈ S , where S is a contemplated model (a set of n -length strings) for x , K(S) is the Kolmogorov complexity of S , and α is a nonnegative integer value bounding the complexity of the contemplated S 's. Clearly, this function is nonincreasing and reaches log |{x}| = 0 for α = K(x) + c , where c is the required number of bits to change x into {x} and K(x) is the Kolmogorov complexity of x .
We define a set S {\displaystyle S} containing x {\displaystyle x} such that
The function h_x(α) never decreases more than a fixed independent constant below the diagonal called the sufficiency line L , defined by {\displaystyle L(\alpha )=K(x)-\alpha .}
It is approached to within a constant distance by the graph of h_x for certain arguments (for instance, for α = K(x) + c ). For these α 's we have α + h_x(α) = K(x) + O(1) , and the associated model S (witness for h_x(α) ) is called an optimal set for x ; its description, of K(S) ≤ α bits, is therefore an algorithmic sufficient statistic . We write 'algorithmic' for 'Kolmogorov complexity' by convention. The main properties of an algorithmic sufficient statistic are the following: if S is an algorithmic sufficient statistic for x , then {\displaystyle K(S)+\log |S|=K(x)+O(1).}
That is, the two-part description of x using the model S and, as data-to-model code, the index of x in the enumeration of S in log |S| bits, is as concise as the shortest one-part code of x in K(x) bits. This can be easily seen as follows:
using straightforward inequalities and the sufficiency property, we find that K(x|S) = log |S| + O(1) . (For example, given S ∋ x , we can describe x self-delimitingly (you can determine its end) in log |S| + O(1) bits.) Therefore, the randomness deficiency log |S| − K(x|S) of x in S is a constant, which means that x is a typical (random) element of S. However, there can be models S containing x that are not sufficient statistics. An algorithmic sufficient statistic S for x has the additional property, apart from being a model of best fit, that K(x, S) = K(x) + O(1) , and therefore, by the Kolmogorov complexity symmetry of information (the information about x in S is about the same as the information about S in x ), we have K(S|x*) = O(1) : the algorithmic sufficient statistic S is a model of best fit that is almost completely determined by x . ( x* is a shortest program for x .) The algorithmic sufficient statistic associated with the least such α is called the algorithmic minimal sufficient statistic .
With respect to the picture: the MDL structure function λ_x(α) is explained below. The goodness-of-fit structure function β_x(α) is the least randomness deficiency (see above) of any model S ∋ x for x such that K(S) ≤ α . This structure function gives the goodness-of-fit of a model S (containing x ) for the string x . When it is low the model fits well, and when it is high the model does not fit well. If β_x(α) = 0 for some α , then there is a typical model S ∋ x for x such that K(S) ≤ α and x is typical (random) for S . That is, S is the best-fitting model for x . For more details see [ 1 ] and especially [ 3 ] and [ 4 ].
Within the constraints that the graph goes down at an angle of at least 45 degrees, and that it starts at $n$ and ends approximately at $K(x)$, every graph (up to an $O(\log n)$ additive term in argument and value) is realized by the structure function of some data $x$, and vice versa. Where the graph first hits the diagonal, the argument (complexity) is that of the minimum sufficient statistic. Determining this place is incomputable. See [ 3 ].
It is proved that at each level $\alpha$ of complexity the structure function allows us to select the best model $S$ for the individual string $x$ within a strip of $O(\log n)$ with certainty, not with great probability. [ 3 ]
The minimum description length (MDL) function: the length of the minimal two-part code for $x$, consisting of the model cost $K(S)$ and the length of the index of $x$ in $S$, in the model class of sets of given maximal Kolmogorov complexity $\alpha$ (the complexity of $S$ upper bounded by $\alpha$), is given by the MDL function or constrained MDL estimator:
$$\lambda_x(\alpha) = \min_{S}\{\Lambda(S) : S \ni x,\ K(S) \le \alpha\},$$
where $\Lambda(S) = \log|S| + K(S) \ge K(x) - O(1)$ is the total length of the two-part code of $x$ with help of the model $S$.
It is proved that at each level $\alpha$ of complexity the structure function allows us to select the best model $S$ for the individual string $x$ within a strip of $O(\log n)$ with certainty, not with great probability. [ 3 ]
The mathematics developed above were taken as the foundation of MDL by its inventor Jorma Rissanen . [ 5 ]
For every computable probability distribution $P$ it can be proved [ 6 ] that
$$K(x) \le -\log P(x) + K(P) + O(\log n).$$
For example, if $P$ is some computable distribution on the set $S$ of strings of length $n$, then each $x \in S$ has probability $P(x) = \exp(O(\log n))/|S| = n^{O(1)}/|S|$. Kolmogorov's structure function becomes
$$h'_x(\alpha) = \min_{P}\{-\log P(x) : P(x) > 0,\ K(P) \le \alpha\},$$
where $x$ is a binary string of length $n$ with $-\log P(x) > 0$, $P$ is a contemplated model (a computable probability distribution on $n$-length strings) for $x$, $K(P)$ is the Kolmogorov complexity of $P$, and $\alpha$ is an integer value bounding the complexity of the contemplated $P$'s. Clearly, this function is non-increasing and reaches $\log|\{x\}| = 0$ for $\alpha = K(x)+c$, where $c$ is the required number of bits to change $x$ into $\{x\}$ and $K(x)$ is the Kolmogorov complexity of $x$. Then $h'_x(\alpha) = h_x(\alpha) + O(\log n)$. For every complexity level $\alpha$, the function $h'_x(\alpha)$ is the Kolmogorov complexity version of the maximum likelihood (ML).
It is proved that at each level $\alpha$ of complexity the structure function allows us to select the best model $S$ for the individual string $x$ within a strip of $O(\log n)$ with certainty, not with great probability. [ 3 ]
The MDL function: the length of the minimal two-part code for $x$, consisting of the model cost $K(P)$ and the length $-\log P(x)$, in the model class of computable probability mass functions of given maximal Kolmogorov complexity $\alpha$ (the complexity of $P$ upper bounded by $\alpha$), is given by the MDL function or constrained MDL estimator:
$$\lambda'_x(\alpha) = \min_{P}\{\Lambda(P) : P(x) > 0,\ K(P) \le \alpha\},$$
where $\Lambda(P) = -\log P(x) + K(P) \ge K(x) - O(1)$ is the total length of the two-part code of $x$ with help of the model $P$.
It is proved that at each level $\alpha$ of complexity the MDL function allows us to select the best model $P$ for the individual string $x$ within a strip of $O(\log n)$ with certainty, not with great probability. [ 3 ]
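The two-part code length $\Lambda(S) = K(S) + \log|S|$ can be made concrete with a toy computation. Anticipating the remark at the end of this article that a good real-world compressor is often used as a practical stand-in for the incomputable $K$, the following sketch adopts that assumption; the Hamming-weight model class and all parameter values are illustrative choices, not taken from the source.

```python
import math
import random
import zlib

# Two-part description of a binary string x with the finite-set model
# S_k = {all length-n strings with exactly k ones}:
#   model cost ~ compressed length of the pair (n, k)  [stand-in for K(S_k)]
#   data cost  = log2 |S_k| = log2 C(n, k)             [index of x inside S_k]
random.seed(0)
n = 1000
x = "".join("1" if random.random() < 0.1 else "0" for _ in range(n))
k = x.count("1")

model_bits = 8 * len(zlib.compress(f"{n},{k}".encode()))
data_bits = math.log2(math.comb(n, k))
one_part_bits = 8 * len(zlib.compress(x.encode()))  # stand-in for K(x)

print(f"two-part code: {model_bits} + {data_bits:.0f} "
      f"= {model_bits + data_bits:.0f} bits")
print(f"one-part compressed length of x: {one_part_bits} bits")
```

For a typical string of this biased source, the data cost is close to $n$ times the binary entropy of $k/n$; comparing the two printed lengths shows how much of the string's regularity the simple Hamming-weight model captures.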
It turns out that the approach can be extended to a theory of rate distortion of individual finite sequences
and denoising of individual finite sequences [ 7 ] using Kolmogorov complexity. Experiments using real compressor programs have been carried out with success. [ 8 ] Here the assumption is that for natural data the Kolmogorov complexity is not far from the length of a compressed version using a good compressor. | https://en.wikipedia.org/wiki/Kolmogorov_structure_function |
The Kolmogorov–Arnold–Moser ( KAM ) theorem is a result in dynamical systems about the persistence of quasiperiodic motions under small perturbations. The theorem partly resolves the small-divisor problem that arises in the perturbation theory of classical mechanics .
The problem is whether or not a small perturbation of a conservative dynamical system results in a lasting quasiperiodic orbit . The original breakthrough to this problem was given by Andrey Kolmogorov in 1954. [ 1 ] This was rigorously proved and extended by Jürgen Moser in 1962 [ 2 ] (for smooth twist maps ) and Vladimir Arnold in 1963 [ 3 ] (for analytic Hamiltonian systems ), and the general result is known as the KAM theorem.
Arnold originally thought that this theorem could apply to the motions of the Solar System or other instances of the n -body problem , but it turned out to work only for the three-body problem because of a degeneracy in his formulation of the problem for larger numbers of bodies. Later, Gabriella Pinzari showed how to eliminate this degeneracy by developing a rotation-invariant version of the theorem. [ 4 ]
The KAM theorem is usually stated in terms of trajectories in phase space of an integrable Hamiltonian system . The motion of an integrable system is confined to an invariant torus (a doughnut -shaped surface). Different initial conditions of the integrable Hamiltonian system will trace different invariant tori in phase space. Plotting the coordinates of an integrable system would show that they are quasiperiodic.
The KAM theorem states that if the system is subjected to a weak nonlinear perturbation, some of the invariant tori are deformed and survive, i.e. there is a map from the original manifold to the deformed one that is continuous in the perturbation. Conversely, other invariant tori are destroyed: even arbitrarily small perturbations cause the manifold to no longer be invariant and there exists no such map to nearby manifolds. Surviving tori meet the non-resonance condition, i.e., they have “sufficiently irrational” frequencies. This implies that the motion on the deformed torus continues to be quasiperiodic , with the independent periods changed (as a consequence of the non-degeneracy condition). The KAM theorem quantifies the level of perturbation that can be applied for this to be true.
Those KAM tori that are destroyed by perturbation become invariant Cantor sets , named Cantori by Ian C. Percival in 1979. [ 5 ]
The non-resonance and non-degeneracy conditions of the KAM theorem become increasingly difficult to satisfy for systems with more degrees of freedom. As the number of dimensions of the system increases, the volume occupied by the tori decreases.
As the perturbation increases and the smooth curves disintegrate, we move from KAM theory to Aubry–Mather theory, which requires less stringent hypotheses and works with the Cantor-like sets.
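Moser's result covers area-preserving twist maps, and the prototypical numerical illustration of surviving versus destroyed tori is the Chirikov standard map (a standard textbook example, not discussed in this article). A minimal sketch, with illustrative parameter values:

```python
import numpy as np

# Chirikov standard map: p' = p + K*sin(theta), theta' = theta + p' (mod 2*pi).
# For small kick strength K most orbits trace smooth invariant curves
# (surviving tori); for large K many of these curves disintegrate into
# chaotic bands, the regime addressed by Aubry-Mather theory.
def orbit_momenta(theta, p, K, n_steps=5000):
    out = np.empty(n_steps)
    for i in range(n_steps):
        p = p + K * np.sin(theta)
        theta = (theta + p) % (2.0 * np.pi)
        out[i] = p % (2.0 * np.pi)
    return out

for K in (0.3, 1.5):
    momenta = orbit_momenta(theta=0.1, p=2.0, K=K)
    # Momentum spread along one orbit is a crude proxy: tiny on an
    # invariant curve, large in a chaotic band.
    print(f"K = {K}: momentum spread = {momenta.std():.3f}")
```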
The existence of a KAM theorem for perturbations of quantum many-body integrable systems is still an open question, although it is believed that arbitrarily small perturbations will destroy integrability in the infinite size limit.
An important consequence of the KAM theorem is that for a large set of initial conditions the motion remains perpetually quasiperiodic. [ which? ]
The methods introduced by Kolmogorov, Arnold, and Moser have developed into a large body of results related to quasiperiodic motions, now known as KAM theory . Notably, it has been extended to non-Hamiltonian systems (starting with Moser), to non-perturbative situations (as in the work of Michael Herman ) and to systems with fast and slow frequencies (as in the work of Mikhail B. Sevryuk).
A manifold $\mathcal{T}^d$ invariant under the action of a flow $\phi^t$ is called an invariant $d$-torus if there exists a diffeomorphism $\varphi: \mathcal{T}^d \to \mathbb{T}^d$ onto the standard $d$-torus $\mathbb{T}^d := \mathbb{S}^1 \times \mathbb{S}^1 \times \cdots \times \mathbb{S}^1$ ($d$ factors) such that the resulting motion on $\mathbb{T}^d$ is uniform linear but not static, i.e. $\mathrm{d}\varphi/\mathrm{d}t = \omega$, where $\omega \in \mathbb{R}^d$ is a non-zero constant vector, called the frequency vector.
If the frequency vector $\omega$ is rationally independent (non-resonant) and, moreover, badly approximable by rationals in the sense of a Diophantine condition $|\omega \cdot k| \ge \gamma\,\|k\|^{-\tau}$ for all $k \in \mathbb{Z}^d \setminus \{0\}$ and some constants $\gamma, \tau > 0$, then the invariant $d$-torus $\mathcal{T}^d$ ($d \ge 2$) is called a KAM torus. The $d = 1$ case is normally excluded in classical KAM theory because it does not involve small divisors. | https://en.wikipedia.org/wiki/Kolmogorov–Arnold–Moser_theorem
In general relativity, the Komar superpotential, [ 1 ] corresponding to the invariance of the Hilbert–Einstein Lagrangian $\mathcal{L}_{\mathrm{G}} = \frac{1}{2\kappa} R \sqrt{-g}\, \mathrm{d}^4 x$, is the tensor density:
associated with a vector field $\xi = \xi^\rho \partial_\rho$, and where $\nabla_\sigma$ denotes the covariant derivative with respect to the Levi-Civita connection.
The Komar two-form:
where $\mathrm{d}x_{\alpha\beta} = \iota_{\partial_\alpha}\mathrm{d}x_{\beta} = \iota_{\partial_\alpha}\iota_{\partial_\beta}\mathrm{d}^4 x$ denotes the interior product, generalizes to an arbitrary vector field $\xi$ the Komar superpotential above, which was originally derived for timelike Killing vector fields.
The Komar superpotential is affected by the anomalous factor problem: when computed, for example, on the Kerr–Newman solution, it produces the correct angular momentum but just one half of the expected mass. [ 2 ]
This relativity -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Komar_superpotential |
Kombucha (also tea mushroom , tea fungus , or Manchurian mushroom when referring to the culture ; Latin name Medusomyces gisevii ) [ 1 ] is a fermented , effervescent , sweetened black tea drink. Sometimes the beverage is called kombucha tea to distinguish it from the culture of bacteria and yeast . [ 2 ] Juice, spices, fruit, or other flavorings are often added. Commercial kombucha contains minimal amounts of alcohol .
Kombucha is thought to have originated in China, where the drink is traditional. [ 3 ] [ 4 ] By the early 20th century it spread to Russia, then other parts of Eastern Europe and Germany. [ 5 ] Kombucha is now homebrewed globally, and also bottled and sold commercially. [ 1 ] The global kombucha market was worth approximately US$1.7 billion as of 2019. [ 6 ]
Kombucha is produced by symbiotic fermentation of sugared tea using a symbiotic culture of bacteria and yeast ( SCOBY ) commonly called a "mother" or "mushroom". The microbial populations in a SCOBY vary. The yeast component generally includes Saccharomyces cerevisiae , along with other species; the bacterial component almost always includes Gluconacetobacter xylinus to oxidize yeast-produced alcohols to acetic acid (and other acids). [ 7 ] Although the SCOBY is commonly called "tea fungus" or "mushroom", it is actually "a symbiotic growth of acetic acid bacteria and osmophilic yeast species in a zoogleal mat [ biofilm ]". [ 1 ] The living bacteria are said to be probiotic , one of the reasons for the popularity of the drink. [ 8 ] [ 9 ]
Numerous health benefits have been claimed to correlate with drinking kombucha; [ 10 ] there is little evidence to support any of these claims. [ 8 ] [ 10 ] [ 11 ] The beverage has caused rare serious adverse effects , possibly arising from contamination during home preparation . [ 12 ] [ 13 ] It is not recommended for therapeutic purposes . [ 10 ] [ 14 ]
Kombucha likely originated in the Bohai Sea region of China. [ 4 ] It spread to Russia before reaching Europe and gained popularity in the United States in the early 21st century. [ 15 ] [ 16 ] [ 17 ] In the intervening years, its popularity in the West eclipsed its popularity in most parts of China, where it remains less known, [ 18 ] though consumption is increasing in many East Asian countries. [ 19 ] With an alcohol content under 0.5%, it is not federally regulated in the U.S. [ 20 ] [ 21 ]
Prior to 2015, some commercially available kombucha brands were found to contain alcohol content exceeding this threshold, sparking the development of new testing methods. [ 22 ] With rising popularity in developed countries in the early 21st century, kombucha sales increased after it was marketed as an alternative to beer and other alcoholic drinks in restaurants and pubs . [ 23 ]
According to the market research firm Grand View Research, [ clarification needed ] kombucha had a global market size of US$1.67 billion as of 2019, and this is expected to grow to US$9.7 billion by 2030. [ 6 ]
The etymology of kombucha is uncertain, but it is believed to be a misapplied loanword from Japanese. [ 24 ] English speakers may have confused the Japanese word konbucha with kōcha kinoko ( 紅茶キノコ , 'black tea mushroom') , popularized around 1975. [ 25 ] [ 26 ]
In Japanese, the term konbu-cha ( 昆布茶 , ' kelp tea ') refers to a kelp tea made with konbu (an edible kelp from the family Laminariaceae ) and is a completely different beverage from the fermented tea usually associated with kombucha elsewhere in the world. [ 27 ]
Merriam-Webster 's Dictionary suggests kombucha in English arose from misapplication of Japanese words like konbucha , kobucha ' tea made from kelp ', konbu , from kobu 'kelp', + cha ' tea '. [ 28 ] The American Heritage Dictionary notes the term might have originated from the belief that the gelatinous film of kombucha resembled seaweed. [ 29 ]
The first known use in the English language of the word appeared in the British Chemical Abstracts in 1928. [ 30 ]
A kombucha culture is a symbiotic culture of bacteria and yeast (SCOBY), similar to mother of vinegar , containing one or more species each of bacteria and yeasts, which form a zoogleal mat [ 31 ] known as a "mother". [ 1 ] There is a broad spectrum of yeast species spanning several genera reported to be present in kombucha cultures, including species of Zygosaccharomyces , Candida, Kloeckera/Hanseniaspora , Torulaspora , Pichia , Brettanomyces/Dekkera , Saccharomyces , Lachancea , Saccharomycoides , Schizosaccharomyces , Kluyveromyces, Starmera, Eremothecium, Merimbla, Sugiyamaella. [ 32 ] [ 33 ] [ 34 ]
The bacterial component of kombucha comprises several species, almost always including the acetic acid bacteria Komagataeibacter xylinus (formerly Gluconacetobacter xylinus ), which ferments alcohols produced by the yeasts into acetic and other acids, increasing the acidity and limiting ethanol content. [ 35 ] [ citation needed ] The population of bacteria and yeasts found to produce acetic acid has been reported to increase for the first 4 days of fermentation, decreasing thereafter. [ 36 ] K. xylinus produces bacterial cellulose , and is reportedly responsible for most or all of the physical structure of the "mother", which may have been selectively encouraged over time for firmer (denser) and more robust cultures by brewers. [ 37 ] [ non-primary source needed ] The highest diversity of kombucha bacteria was found to be on the 7th day of fermentation with the diversity being less in the SCOBY. Acetobacteraceae dominate 88 percent of the bacterial community of the SCOBY. [ 34 ] The acetic acid bacteria in kombucha are aerobic , meaning that they require oxygen for their growth and activity. [ 32 ] Hence, the bacteria initially migrate and assemble at the air interface, followed by the excretion of bacterial cellulose after about 2 days. [ 38 ]
The mixed, presumably mutualistic culture has been further described as being lichenous, in accord with the reported presence of the known lichenous natural product usnic acid , though as of 2015, no report appears indicating the standard cyanobacterial species of lichens in association with kombucha fungal components. [ 39 ]
Kombucha is made by adding the kombucha culture into a broth of sugared tea. [ 1 ] The sugar serves as a nutrient for the SCOBY that allows for bacterial growth in the tea. [ citation needed ] Sucrose is converted, biochemically, into fructose and glucose, and these into gluconic acid and acetic acid. [ 15 ] In addition, kombucha contains enzymes and amino acids , polyphenols , and various other organic acids which vary between preparations. [ citation needed ]
Other specific components include ethanol (see below), glucuronic acid , glycerol , lactic acid , and usnic acid (a hepatotoxin, see above). [ 40 ] [ 41 ] [ 42 ]
The alcohol content of kombucha is usually less than 0.5%, but increases with extended fermentation times. [ 43 ] Some tests have found commercial kombuchas with alcohol contents ranging from undetectable to 4%. [ 44 ] The concentration of alcohol, specifically ethanol, increases initially but then begins to decrease when acetic acid bacteria use it to produce acetic acid. [ 34 ] Over-fermentation generates high amounts of acids, similar to vinegar. [ 1 ] The pH of the drink is typically about 3.5. [ 10 ]
Kombucha tea is 95% water and contains 4% carbohydrates and several B vitamins , such as thiamin , riboflavin , niacin , and vitamin B 6 . [ 45 ]
Kombucha can be prepared at home or commercially. [ 1 ] It is made by dissolving sugar in non-chlorinated boiling water. Tea leaves are then steeped in the hot sugar water and discarded. The sweetened tea is cooled and the SCOBY culture is added. The mixture is then poured into a sterilized beaker along with previously fermented kombucha tea to lower the pH . This technique is known as "backslopping". [ 46 ] The container is covered with a paper towel or breathable fabric to prevent insects, such as fruit flies, from contaminating the kombucha.
The tea is left to ferment for a period of up to 10 to 14 days at room temperature (18 °C to 26 °C). A new "daughter" SCOBY will form on the surface of the tea to the diameter of the container. After fermentation is completed, the SCOBY is removed and stored along with a small amount of the newly fermented tea. The remaining kombucha is strained and bottled for a secondary ferment for a few days or stored at a temperature of 4 °C. [ 1 ]
Commercially bottled kombucha became available in the late 1990s. [ 47 ] In 2010, elevated alcohol levels were found in many bottled kombucha products, leading retailers including Whole Foods to pull the drinks from store shelves temporarily. [ 48 ] In response, kombucha suppliers reformulated their products to have lower alcohol levels. [ 49 ]
By 2014, US sales of bottled kombucha were $400 million, $350 million of which was by Millennium Products, Inc. which sells GT's Kombucha . [ 50 ] In 2014, several companies that make and sell kombucha formed a trade organization , Kombucha Brewers International. [ 51 ] In 2016, PepsiCo purchased kombucha maker KeVita for approximately $200 million. [ 52 ] In the US, sales of kombucha and other fermented drinks rose by 37 percent in 2017. [ 23 ] Beer companies like Full Sail Brewing Company and Molson Coors Beverage Company produce kombucha by themselves or via subsidiaries. [ 53 ]
As of 2021, the drink had some popularity in India's National Capital Region , partly due to its success in the west. [ 54 ]
Some commercial kombucha producers sell what they call "hard kombucha" with an alcohol content of over 5 percent. [ 53 ] [ 55 ]
Kombucha is promoted with many claims for health benefits, from alleviating hemorrhoids to combating cancer. [ 56 ] Although people may drink kombucha for such supposed health effects (attributed first to the protective impact of tea itself, and to fermentation products including glucuronic acid, acetic acid, polyphenols, phenols, and B-complex vitamins such as folic acid [ 57 ] : 15 ), there is no clinical proof that it provides any benefit. [ 1 ] [ 8 ] [ 58 ] [ 59 ] In a 2003 review, physician Edzard Ernst characterized kombucha as an "extreme example" of an unconventional remedy because of the disparity between implausible, wide-ranging health claims and the potential risks of the product. [ 10 ] It concluded that the proposed, unsubstantiated therapeutic claims did not outweigh known risks, and that kombucha should not be recommended for therapeutic use , being in a class of "remedies that only seem to benefit those who sell them". [ 10 ]
Reports of adverse effects related to kombucha consumption are rare, but may be underreported, according to a 2003 review. [ 10 ] The American Cancer Society said in 2009 that "serious side effects and occasional deaths have been associated with drinking Kombucha tea." [ 13 ] Because kombucha is commonly a homemade fermentation, caution should be taken, as pathogenic microorganisms can contaminate the tea during preparation. [ 14 ] [ 32 ] The risk of proliferation of bacteria associated with botulinum toxin is one reason that the pH of kombucha must be low, as Clostridium botulinum struggles to proliferate below pH 4.6. [ 60 ] [ 61 ]
Adverse effects associated with kombucha consumption may include severe hepatic (liver) and renal (kidney) toxicity as well as metabolic acidosis . [ 62 ] [ 63 ] [ 64 ]
Some adverse health effects may arise from the acidity of the tea causing acidosis , and brewers are cautioned to avoid over-fermentation. [ 12 ] [ 65 ] [ 43 ] Other adverse effects may be a result of bacterial or fungal contamination during the brewing process. [ 43 ] Some studies have found the hepatotoxin usnic acid in kombucha, although it is not known whether the cases of liver damage are due to usnic acid or to some other toxin. [ 63 ] [ 39 ]
Drinking kombucha can be harmful for people with preexisting ailments. [ 66 ] Due to its microbial sourcing and possible non-sterile packaging, kombucha is not recommended for people with poor immune function, [ 12 ] women who are pregnant or nursing, or children under 4 years old, [ 43 ] as it may compromise immune responses or stomach acidity in these susceptible populations. [ 12 ] Certain drugs should not be taken with kombucha because of its small alcohol content. [ 67 ]
A 2019 review enumerated numerous potential health risks (including hyponatremia, lactic acidosis, toxic hepatitis, etc. [ 59 ] : 68 ), but said "kombucha is not considered harmful if about 4 oz [120 mL] per day is consumed by healthy individuals; potential risks are associated with a low pH brew leaching heavy metals from containers, excessive consumption of highly acidic kombucha, or consumption by individuals with pre-existing health conditions." [ 59 ]
Kombucha contains a small amount of caffeine . [ 68 ] [ 69 ]
Kombucha culture, when dried, becomes a leather-like textile known as a microbial cellulose that can be molded onto forms to create seamless clothing. [ 70 ] [ 71 ] Using different broth media such as coffee, black tea, and green tea to grow the kombucha culture results in different textile colors, although the textile can also be dyed using other plant-based dyes. [ 72 ] Different growth media and dyes also change the textile's feel and texture. [ 72 ] Dried and processed SCOBY has been investigated as a leather substitute. [ 73 ] Additionally, the SCOBY itself can be dried and eaten as a sweet or savory snack. [ 74 ] | https://en.wikipedia.org/wiki/Kombucha |
Komlós' theorem is a theorem from probability theory and mathematical analysis about the Cesàro convergence of a subsequence of a sequence of random variables (or functions), and of all its further subsequences, to an integrable random variable (or function). It is also an existence theorem for such an integrable random variable (or function). There exist a probabilistic and an analytical version for finite measure spaces.
The theorem was proven in 1967 by János Komlós . [ 1 ] There exists also a generalization from 1970 by Srishti D. Chatterji . [ 2 ]
Let $(\Omega, \mathcal{F}, P)$ be a probability space and $\xi_1, \xi_2, \dots$ be a sequence of real-valued random variables defined on this space with $\sup_n \mathbb{E}[|\xi_n|] < \infty$.
Then there exist a random variable $\psi \in L^1(P)$ and a subsequence $(\eta_k) = (\xi_{n_k})$, such that for every arbitrary further subsequence $(\tilde{\eta}_n) = (\eta_{k_n})$, as $n \to \infty$,
$$\frac{1}{n}\sum_{i=1}^{n} \tilde{\eta}_i \to \psi$$
$P$-almost surely.
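A quick numerical illustration of the probabilistic statement (for an i.i.d. integrable sequence the strong law of large numbers already yields the conclusion with the identity subsequence, so this sketch merely visualizes the Cesàro convergence the theorem guarantees; the distribution and sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.exponential(scale=2.0, size=100_000)  # sup_n E|xi_n| = 2 < infinity
cesaro = np.cumsum(xi) / np.arange(1, xi.size + 1)
for n in (10, 1_000, 100_000):
    # Cesaro means approach psi = E[xi] = 2 almost surely
    print(f"n = {n:>7}: Cesaro mean = {cesaro[n - 1]:.4f}")
```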
Let $(E, \mathcal{A}, \mu)$ be a finite measure space and $f_1, f_2, \dots$ be a sequence of real-valued functions in $L^1(\mu)$ with $\sup_n \int_E |f_n|\, \mathrm{d}\mu < \infty$. Then there exist a function $\upsilon \in L^1(\mu)$ and a subsequence $(g_k) = (f_{n_k})$ such that for every arbitrary further subsequence $(\tilde{g}_n) = (g_{k_n})$, as $n \to \infty$,
$$\frac{1}{n}\sum_{i=1}^{n} \tilde{g}_i \to \upsilon$$
$\mu$-almost everywhere.
So the theorem says that the sequence $(\eta_k)$ and all its subsequences converge in the Cesàro sense. | https://en.wikipedia.org/wiki/Komlós'_theorem
In the mathematical theory of non-standard positional numeral systems , the Komornik–Loreti constant is a mathematical constant that represents the smallest base q for which the number 1 has a unique representation, called its q -development. The constant is named after Vilmos Komornik and Paola Loreti , who defined it in 1998. [ 1 ]
Given a real number $q > 1$, the series
$$x = \sum_{n=0}^{\infty} a_n q^{-n}$$
is called the $q$-expansion, or $\beta$-expansion, of the positive real number $x$ if, for all $n \ge 0$, $0 \le a_n \le \lfloor q \rfloor$, where $\lfloor q \rfloor$ is the floor function and $a_n$ need not be an integer. Any real number $x$ such that $0 \le x \le q\lfloor q \rfloor/(q-1)$ has such an expansion, as can be found using the greedy algorithm.
The special case of $x = 1$, $a_0 = 0$, and $a_n = 0$ or $1$ for $n \ge 1$ is sometimes called a $q$-development. Taking $a_n = 1$ for all $n \ge 1$ gives the only 2-development. However, for almost all $1 < q < 2$, there are an infinite number of different $q$-developments. Even more surprisingly though, there exist exceptional $q \in (1,2)$ for which there exists only a single $q$-development. Furthermore, there is a smallest number $1 < q < 2$ known as the Komornik–Loreti constant for which there exists a unique $q$-development. [ 2 ]
The Komornik–Loreti constant is the value $q$ such that
$$\sum_{k=1}^{\infty} \frac{t_k}{q^k} = 1,$$
where $t_k$ is the Thue–Morse sequence, i.e., $t_k$ is the parity of the number of 1's in the binary representation of $k$. It has approximate value $1.787231650\ldots$
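Since the left-hand side is strictly decreasing in $q$ on $(1,2)$, the constant can be computed by bisection; a minimal sketch (the truncation depth and bracketing interval are arbitrary choices):

```python
def thue_morse(k: int) -> int:
    # t_k = parity of the number of 1s in the binary representation of k
    return bin(k).count("1") % 2

def f(q: float, terms: int = 300) -> float:
    # f(q) = sum_{k>=1} t_k * q**(-k) - 1; strictly decreasing for q in (1, 2)
    return sum(thue_morse(k) * q ** -k for k in range(1, terms)) - 1.0

lo, hi = 1.1, 2.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0:   # the sum still exceeds 1, so q is too small
        lo = mid
    else:
        hi = mid
print(mid)  # approximately 1.787231650..., the Komornik-Loreti constant
```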
The constant $q$ is also the unique positive real solution to the equation
This constant is transcendental . [ 4 ] | https://en.wikipedia.org/wiki/Komornik–Loreti_constant |
The Kompaneyets equation is a non-relativistic, Fokker–Planck-type kinetic equation for the photon number density of photons interacting with an electron gas via Compton scattering; it was first derived by Alexander Kompaneyets in 1949 and published in 1957 after declassification. [ 1 ] [ 2 ] The Kompaneyets equation describes how an initial photon distribution relaxes to the equilibrium Bose–Einstein distribution. Kompaneyets pointed out that the radiation field on its own cannot reach the equilibrium distribution, since the Maxwell equations are linear, but that it needs to exchange energy with the electron gas. The Kompaneyets equation has been used as a basis for analysis of the Sunyaev–Zeldovich effect. [ 3 ]
Consider a non-relativistic electron bath at an equilibrium temperature $T_e$, i.e., $k_B T_e \ll m_e c^2$, where $m_e$ is the electron mass. Let there be a low-frequency radiation field that satisfies the soft-photon approximation, i.e., $\hbar\omega \ll m_e c^2$, where $\omega$ is the photon frequency. Then the energy exchange in any collision between a photon and an electron will be small. Assuming homogeneity and isotropy and expanding the collision integral of the Boltzmann equation in terms of the small energy exchange, one obtains the Kompaneyets equation. [ 4 ]
The Kompaneyets equation for the photon number density $n(\omega, t)$ reads [ 5 ] [ 6 ]
$$\frac{\partial n}{\partial t} = \frac{\sigma_T n_e \hbar}{m_e c}\, \frac{1}{\omega^2}\, \frac{\partial}{\partial \omega}\left[\omega^4\left(\frac{k_B T_e}{\hbar}\,\frac{\partial n}{\partial \omega} + n + n^2\right)\right],$$
where $\sigma_T$ is the total Thomson cross-section and $n_e$ is the electron number density; $\lambda_e = 1/(n_e \sigma_T)$ is the Compton range or the scattering mean free path. As is evident, the equation can be written in the form of the continuity equation
If we introduce the rescalings
$$x = \frac{\hbar\omega}{k_B T_e}, \qquad y = \frac{k_B T_e}{m_e c^2}\,\frac{c\,t}{\lambda_e},$$
the equation can be brought to the form
$$\frac{\partial n}{\partial y} = \frac{1}{x^2}\,\frac{\partial}{\partial x}\left[x^4\left(\frac{\partial n}{\partial x} + n + n^2\right)\right].$$
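A conservative explicit finite-difference sketch of this rescaled equation (grid, initial condition, and step size are illustrative choices, not from the source; the flux is evaluated at cell faces so that the photon number discussed below changes only through boundary terms):

```python
import numpy as np

# dn/dy = (1/x^2) d/dx [ x^4 ( dn/dx + n + n^2 ) ]
x = np.linspace(0.05, 10.0, 200)
dx = x[1] - x[0]
xf = 0.5 * (x[1:] + x[:-1])          # cell faces
n = np.exp(-((x - 4.0) ** 2))        # arbitrary smooth initial occupation

def step(n, dy):
    dndx = np.diff(n) / dx
    nf = 0.5 * (n[1:] + n[:-1])
    flux = xf ** 4 * (dndx + nf + nf ** 2)
    out = n.copy()
    out[1:-1] += dy * np.diff(flux) / (dx * x[1:-1] ** 2)
    return out

photon_number = lambda m: np.sum(x ** 2 * m) * dx
N0 = photon_number(n)
dy = 0.2 * dx ** 2 / x[-1] ** 2      # rough explicit stability bound
for _ in range(5000):
    n = step(n, dy)
print("relative photon-number drift:", photon_number(n) / N0 - 1.0)
```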
The Kompaneyets equation conserves the photon number
$$N = \frac{V}{\pi^2 c^3}\int_0^{\infty} \omega^2\, n(\omega)\,\mathrm{d}\omega,$$
where $V$ is a sufficiently large volume, since the energy exchange between photon and electron is small. Furthermore, the equilibrium distribution of the Kompaneyets equation is the Bose–Einstein distribution for the photon gas,
$$n(x) = \frac{1}{e^{x+\mu} - 1},$$
with a constant $\mu \ge 0$ fixed by the conserved photon number. | https://en.wikipedia.org/wiki/Kompaneyets_equation
Komputeko is an online project of the non-profit youth organization E@I (“Education@Internet”) with the goal of bringing together parallel computer terminology from various dictionaries in order to facilitate access to and comparison between different translations and thus promote exact use of language and counteract the (often sloppy) usage of linguistic borrowings from American English. Komputeko is short for the Esperanto noun phrase "Pri kompu tila te rmino ko lekto", meaning "collection of computer terms". The dictionary is written in five languages ( Esperanto , English , Dutch , German and French ), and there are plans to expand it into other languages. A preliminary version with a few other languages already exists. [ 1 ]
The Esperanto dictionaries and word lists on which Komputeko is based are the Komputada Leksikono ("Computing Lexicon") by Sergio Pokrovskij (Сергей Покровский), the crowd-sourced Reta Vortaro ("Internet Dictionary", ReVo), the Plena Ilustrita Vortaro de Esperanto ("Complete Illustrated Dictionary of Esperanto", PIV), the Internet mini-dictionary of the Flandra Esperanto-Ligo, the Techniczny Słownik Polsko Esperancki ("Polish-Esperanto Technical Dictionary") by Jerzy Wałaszek, the three-volume Pekoteko collection of terminology, Bill Walker's Komputilo Vortolisto [ 2 ] and a Dutch-Esperanto dictionary. It also takes into account the terminology used in articles from the Esperanto Wikipedia.
The promoter of the project, Yves Nevelsteen, among other things, joined the Esperanto translation team for the open-source productivity suite OpenOffice.org , the social networking site Ipernity and the content management system Drupal in order to make these teams' work product more widely available through Komputeko.
Among scholars who have acknowledged the utility of the Komputeko project are John C. Wells , who authored both the Teach Yourself Books' Concise Esperanto and English Dictionary (1969) and the concise yet comprehensive English-Esperanto-English Dictionary (Mondial, 2010), and Paul Peeraerts, who translated the interface of Ipernity and Facebook into Esperanto and who has served as editorial secretary of the Esperanto-language monthly Monato . Others who have availed themselves of Komputeko include Cindy McKee's KDE and Joomla translation teams, Esperanto Wikipedia founder Chuck Smith's Drupal translation and the former Amikumu projects, Tim Morley's OpenOffice.org translation team, Guillaume Savaton's GNOME translation team, the translation teams for Plone and Xfce , and Joop Kiefte's Ubuntu translation team. | https://en.wikipedia.org/wiki/Komputeko |
In physics, the Kondo effect describes the scattering of conduction electrons in a metal due to magnetic impurities, resulting in a characteristic change, namely a minimum, in electrical resistivity with temperature. [ 1 ] The cause of the effect was first explained by Jun Kondo, who applied third-order perturbation theory to the problem to account for scattering of s-orbital conduction electrons off d-orbital electrons localized at impurities (Kondo model). Kondo's calculation predicted that the scattering rate and the resulting part of the resistivity should increase logarithmically as the temperature approaches 0 K. [ 2 ] Extended to a lattice of magnetic impurities, the Kondo effect likely explains the formation of heavy fermions and Kondo insulators in intermetallic compounds, especially those involving rare earth elements such as cerium, praseodymium, and ytterbium, and actinide elements such as uranium. The Kondo effect has also been observed in quantum dot systems.
The dependence of the resistivity $\rho$ on temperature $T$, including the Kondo effect, is written as
$$\rho(T) = \rho_0 + aT^2 + c_m \ln\frac{\mu}{T} + bT^5,$$
where $\rho_0$ is the residual resistivity, the term $aT^2$ shows the contribution from the Fermi liquid properties, and the term $bT^5$ is from the lattice vibrations; $a$, $b$, $c_m$ and $\mu$ are constants independent of temperature. Jun Kondo derived the third term, with its logarithmic dependence on temperature, and the experimentally observed concentration dependence.
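A short numerical check of this formula with made-up coefficients (all values illustrative) locates the resistance minimum:

```python
import numpy as np

rho0, a, b, c_m, mu = 1.0, 1e-4, 1e-9, 0.05, 300.0
T = np.linspace(0.5, 40.0, 4000)
rho = rho0 + a * T**2 + c_m * np.log(mu / T) + b * T**5
print(f"resistance minimum near T = {T[np.argmin(rho)]:.2f} K")
# Setting d(rho)/dT = 0 gives 2*a*T^2 + 5*b*T^5 = c_m, so when b is small
# the minimum sits near sqrt(c_m / (2*a)):
print(f"analytic estimate: {np.sqrt(c_m / (2 * a)):.2f} K")
```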
In 1930, Walther Meissner and B. Voigt [ 3 ] [ 4 ] observed that the resistivity of nominally pure gold reaches a minimum at 10 K, and similarly for nominally pure Cu at 2 K. Similar results were discovered in other metals. [ 5 ] Kondo described the three puzzling aspects that frustrated previous researchers who tried to explain the effect: [ 6 ] [ 7 ]
Experiments in the 1960s by Myriam Sarachik at Bell Laboratories showed that the phenomenon was caused by magnetic impurities in nominally pure metals. [ 8 ] When Kondo sent a preview of his paper to Sarachik, she confirmed that the data fit the theory. [ 9 ]
Kondo's solution was derived using perturbation theory resulting in a divergence as the temperature approaches 0 K, but later methods used non-perturbative techniques to refine his result. These improvements produced a finite resistivity but retained the feature of a resistance minimum at a non-zero temperature. One defines the Kondo temperature as the energy scale limiting the validity of the Kondo results. The Anderson impurity model and accompanying Wilsonian renormalization theory were an important contribution to understanding the underlying physics of the problem. [ 10 ] Based on the Schrieffer–Wolff transformation , it was shown that the Kondo model lies in the strong coupling regime of the Anderson impurity model. The Schrieffer–Wolff transformation [ 11 ] projects out the high energy charge excitations in the Anderson impurity model, obtaining the Kondo model as an effective Hamiltonian.
The Kondo effect can be considered as an example of asymptotic freedom , i.e. a situation where the coupling becomes non-perturbatively strong at low temperatures and low energies. In the Kondo problem, the coupling refers to the interaction between the localized magnetic impurities and the itinerant electrons.
Extended to a lattice of magnetic ions, the Kondo effect likely explains the formation of heavy fermions and Kondo insulators in intermetallic compounds, especially those involving rare earth elements such as cerium , praseodymium , and ytterbium , and actinide elements such as uranium . In heavy fermion materials, the non-perturbative growth of the interaction leads to quasi-electrons with masses up to thousands of times the free electron mass, i.e., the electrons are dramatically slowed by the interactions. In a number of instances they are superconductors . It is believed that a manifestation of the Kondo effect is necessary for understanding the unusual metallic delta-phase of plutonium . [ citation needed ]
The Kondo effect has been observed in quantum dot systems. [ 12 ] [ 13 ] In such systems, a quantum dot with at least one unpaired electron behaves as a magnetic impurity, and when the dot is coupled to a metallic conduction band, the conduction electrons can scatter off the dot. This is completely analogous to the more traditional case of a magnetic impurity in a metal.
Band-structure hybridization and flat band topology in Kondo insulators have been imaged in angle-resolved photoemission spectroscopy experiments. [ 14 ] [ 15 ] [ 16 ]
In 2012, Beri and Cooper proposed that a topological Kondo effect could be found with Majorana fermions, [ 17 ] while it has been shown that quantum simulations with ultracold atoms may also demonstrate the effect. [ 18 ]
In 2017, teams from the Vienna University of Technology and Rice University conducted experiments into the development of new materials made from the metals cerium, bismuth and palladium in specific combinations and theoretical work experimenting with models of such structures, respectively. The results of the experiments were published in December 2017 [ 19 ] and, together with the theoretical work, [ 20 ] led to the discovery of a new state, [ 21 ] a correlation-driven Weyl semimetal. The team dubbed this new quantum material Weyl-Kondo semimetal. | https://en.wikipedia.org/wiki/Kondo_effect
In solid-state physics, Kondo insulators (also referred to as Kondo semiconductors and heavy fermion semiconductors) are understood as materials with strongly correlated electrons that open up a narrow band gap (on the order of 10 meV) at low temperatures, with the chemical potential lying in the gap, whereas in heavy fermion materials the chemical potential is located in the conduction band.
The band gap opens up at low temperatures due to hybridization of localized electrons (mostly f-electrons) with conduction electrons, a correlation effect known as the Kondo effect. As a consequence, a transition from metallic behavior to insulating behavior is seen in resistivity measurements. The band gap could be either direct or indirect. The most studied Kondo insulators are FeSi, Ce 3 Bi 4 Pt 3 , SmB 6 , YbB 12 , and CeNiSn, although as of 2016 there were over a dozen known Kondo insulators. [ 1 ]
In 1969, Menth et al. found no magnetic ordering in SmB 6 down to 0.35 K and a change from metallic to insulating behavior in the resistivity measurement with decreasing temperature. They interpreted this phenomenon as a change of the electronic configuration of Sm. [ 2 ]
In 1992, Gabriel Aeppli and Zachary Fisk found a descriptive way to explain the physical properties of Ce 3 Bi 4 Pt 3 and CeNiSn. They called the materials Kondo insulators, showing Kondo lattice behavior near room temperature, but becoming semiconducting with very small energy gaps (a few Kelvin to a few tens of Kelvin) when decreasing the temperature. [ 3 ]
At high temperatures the localized f-electrons form independent local magnetic moments. According to the Kondo effect, the dc-resistivity of Kondo insulators shows a logarithmic temperature-dependence. At low temperatures, the local magnetic moments are screened by the sea of conduction electrons, forming a so-called Kondo resonance. The interaction of the conduction band with the f-orbitals results in a hybridization and an energy gap $\epsilon_{\mathrm{g}}$. If the chemical potential lies in the hybridization gap, an insulating behavior can be seen in the dc-resistivity at low temperatures.
In recent times, angle-resolved photoemission spectroscopy experiments provided direct imaging of band-structure, hybridization and flat band topology in Kondo insulators and related compounds. [ 4 ] | https://en.wikipedia.org/wiki/Kondo_insulator |
The Kondo model (sometimes referred to as the s-d model) is a model for a single localized quantum impurity coupled to a large reservoir of delocalized and noninteracting electrons. The quantum impurity is represented by a spin-1/2 particle, and is coupled to a continuous band of noninteracting electrons by an antiferromagnetic exchange coupling $J$. The Kondo model is used as a model for metals containing magnetic impurities, as well as quantum dot systems. [ 1 ]
The Kondo Hamiltonian is given by
$$H = \sum_{k,\sigma} \epsilon_k\, c^{\dagger}_{k\sigma} c_{k\sigma} - J\, \mathbf{S}\cdot\mathbf{s}_0,$$
where $\mathbf{S}$ is the spin-1/2 operator representing the impurity, and
$$\mathbf{s}_0 = \sum_{k,k'}\sum_{\alpha,\beta} c^{\dagger}_{k\alpha}\,\frac{\boldsymbol{\sigma}_{\alpha\beta}}{2}\,c_{k'\beta}$$
is the local spin-density of the noninteracting band at the impurity site ($\boldsymbol{\sigma}$ are the Pauli matrices). In the Kondo problem, $J < 0$, i.e. the exchange coupling is antiferromagnetic.
Jun Kondo applied third-order perturbation theory to the Kondo model and showed that the resistivity of the model diverges logarithmically as the temperature goes to zero. [ 2 ] This explained why metal samples containing magnetic impurities have a resistance minimum (see Kondo effect ). The problem of finding a solution to the Kondo model which did not contain this unphysical divergence became known as the Kondo problem.
A number of methods were used to attempt to solve the Kondo problem. Phillip Anderson devised a perturbative renormalization group method, known as Poor Man's Scaling, which involves perturbatively eliminating excitations to the edges of the noninteracting band. [ 3 ] This method indicated that, as temperature is decreased, the effective coupling between the spin and the band, J e f f {\displaystyle J_{\mathrm {eff} }} , increases without limit. As this method is perturbative in J, it becomes invalid when J becomes large, so this method did not truly solve the Kondo problem, although it did hint at the way forward.
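The flow Anderson found can be reproduced in a few lines. In a common convention the one-loop equation for the dimensionless antiferromagnetic coupling $g = \rho\,|J|$ reads $\mathrm{d}g/\mathrm{d}\ln D = -2g^2$ as the bandwidth $D$ is reduced, so $g$ grows and reaches strong coupling near the scale $D_0\, e^{-1/(2g_0)}$, one definition of the Kondo temperature. Factors of 2 vary between references, so the sketch below is illustrative rather than a statement of the sources cited here:

```python
import numpy as np

g0, D0 = 0.05, 1.0                   # bare coupling and bandwidth
g, D, dlnD = g0, D0, 1e-4
while g < 1.0 and D > 1e-12:
    g += 2.0 * g * g * dlnD          # dg = -2 g^2 d(ln D), ln D decreasing
    D *= np.exp(-dlnD)
print(f"coupling reaches O(1) near D ~ {D:.2e}")
print(f"analytic Kondo scale D0*exp(-1/(2*g0)) = {D0 * np.exp(-1/(2*g0)):.2e}")
# The two scales agree up to the arbitrary O(1) stopping criterion.
```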
The Kondo problem was finally solved when Kenneth Wilson applied the numerical renormalization group to the Kondo model and showed that the resistivity goes to a constant as temperature goes to zero. [ 4 ]
There are many variants of the Kondo model. For instance, the spin-1/2 can be replaced by a spin-1 or even a greater spin. The two-channel Kondo model is a variant of the Kondo model which has the spin-1/2 coupled to two independent noninteracting bands. All these models have been solved by Bethe Ansatz . [ 5 ] One can also consider the ferromagnetic Kondo model (i.e. the standard Kondo model with J > 0).
The Kondo model is intimately related to the Anderson impurity model , as can be shown by Schrieffer–Wolff transformation . [ 6 ] | https://en.wikipedia.org/wiki/Kondo_model |
Kong Inc. is a software company that provides open-source platforms and cloud services for managing, monitoring, and scaling application programming interfaces (APIs) and microservices . Some of the products offered by Kong Inc. include: Kong Gateway, an open-source API gateway; Kong Enterprise, an API platform that is built on top of Kong Gateway; Kong Konnect, a service connectivity platform; Kuma, an open-source service mesh; Kong Mesh, an enterprise-grade service mesh that is built on top of Kuma; and Insomnia, an open-source API design and testing tool.
The original product was first developed in 2009 in Milan, Italy, and the company was first incorporated in the US as Mashape, Inc. The original project was a mash-up (web application hybrid) platform used to aggregate different functions and UI elements from third-party products and services. While developing the product the team dealt with a number of APIs, which inspired the founders to create a unified hub to organize the growing market of APIs. In November 2010 Mashape appeared online as an Alpha product [ 1 ] and launched in private Beta in June 2011. [ 2 ] The team raised its first funds in the US. [ 3 ] In 2010, Mashape received its first angel funding and a further $1,500,000 in seed funding. [ 4 ] [ 5 ] Mashape then rejected some acquisition offers in 2011. [ 6 ]
In 2015 Mashape launched an open source project called Kong, which later helped the company secure $18 million in Series B funding. Subsequently, with the intention of pivoting the company's focus to its new Kong business, Mashape sold its API Marketplace to RapidAPI, [ 7 ] [ 8 ] and rebranded the company as Kong Inc. [ 9 ] [ 10 ]
In 2019, Kong acquired Insomnia, an open-source API testing platform. [ 11 ]
In 2015, Mashape released Kong, an open-source management layer for APIs and Microservices that is built on top of Nginx and Cassandra / PostgreSQL with the claim of improving performance and reliability. Kong is the main engine of Mashape's marketplace. [ 12 ] | https://en.wikipedia.org/wiki/Kong_Inc. |
Konrad Osterwalder (born June 3, 1942) is a Swiss mathematician and physicist, former Undersecretary-General of the United Nations , former Rector of the United Nations University (UNU), [ 1 ] and Rector Emeritus of the Swiss Federal Institute of Technology Zurich ( ETH Zurich ). He is known for the Osterwalder–Schrader theorem .
Osterwalder was appointed to the position of United Nations Under Secretary General and United Nations University Rector by United Nations Secretary-General Ban Ki-moon May 2007 [ 2 ] and served until 28 February 2013. He succeeded Prof. Hans van Ginkel from the Netherlands to be the fifth Rector of the United Nations University.
He is credited with turning the United Nations University into a world-leading institution, ranked #5 and #6 in two categories according to the 2012 Global Go To Think Tank Rankings. [ 3 ] He was responsible for ensuring that UNU's charter was amended by the United Nations General Assembly [ 4 ] in 2009, allowing the United Nations University to grant degrees; for introducing UNU's degree programmes; and for creating a new concept in education, research and development by introducing the twin institute programmes, a concept that is changing the way that development, aid and capacity building are approached both by developed countries and by developing and least developed countries.
In March 2000, following the Bologna Declaration by 28 European education ministers, the European University Association and the Comite de Liaison within the National Rectors' Conference convened the Convention of European Higher Education in Salamanca, Spain, hereinafter referred to as the "Salamanca Process", with the aim of discussing the Bologna Declaration and delivering an overall, univocal response to the Council of Ministers. Professor Osterwalder, Rector of ETH, was chosen by the conference as the Rapporteur of the Salamanca Process and the voice of higher education institutions. The meeting concluded with a declaration and a report that laid the basis of higher education reform within the Bologna process and the EU. In addition, the two conveners of the conference formed the European University Association.
Konrad Osterwalder was born in Frauenfeld , Thurgau , Switzerland , in June 1942. He studied at the Swiss Federal Institute of Technology (Eidgenössische Technische Hochschule; ETH) in Zurich, where he earned a Diploma in theoretical physics in 1965 and a Doctorate in theoretical physics in 1970. He is married to Verena Osterwalder-Bollag, an analytical therapist. They have three children.
After one year with the Courant Institute of Mathematical Sciences, New York University, he accepted a research position at Harvard University with Arthur Jaffe in 1971. He remained on the faculty of Harvard for seven years, and was promoted to Assistant Professor for Mathematical Physics in 1973 and Associate Professor for Mathematical Physics in 1976. In 1977, he returned to Switzerland upon being appointed a full Professor for Mathematical Physics at ETH Zurich. His doctoral students include Felix Finster and Emil J. Straube .
During his tenure at ETH Zurich, Osterwalder served as Head of the Department of Mathematics (1986–1990) and Head of the Planning Committee (1990–1995), and was founder of the Centro Stefano Franscini seminar center in Ascona. He was appointed Rector of ETH in 1995 and held that post for 12 years. From November 2006 through August 2007, he also served concurrently as ETH President pro tempore.
On 1 September 2007, Osterwalder joined the United Nations University as its fifth rector. In that role, he held the rank of Under-Secretary-General of the United Nations.
Osterwalder's research focused on the mathematical structure of relativistic quantum field theory as well as on elementary particle physics and statistical mechanics. During his long and distinguished career, he has been a Visiting Fellow/Guest Professor at several prominent universities around the world, including the Institut des Hautes Études Scientifiques (IHES; Bures-sur-Yvette, France); Harvard University; University of Texas (Austin); Max Planck Institute for Physics and Astrophysics (Munich), Università La Sapienza (Rome); Università di Napoli; Waseda University; and Weizmann Institute of Science (Rehovot, Israel).
Since 2014 - member of International Scientific Council of Tomsk Polytechnic University. [ 5 ]
Osterwalder's career encompasses service on many advisory boards, committees and associations, including as
Osterwalder has been a recipient of many honours and prizes including: | https://en.wikipedia.org/wiki/Konrad_Osterwalder |
Konrad Seppelt (born September 2, 1944 in Leipzig) [ 1 ] is an academic, author, professor and former vice president of the Free University of Berlin .
A random selection of Prof Seppelt's publications: | https://en.wikipedia.org/wiki/Konrad_Seppelt |
Konstantinos Drosatos (Greek: Κωνσταντίνος Δροσάτος), born in Athens, Greece, is a Greek-American molecular biologist, who is the Ohio Eminent Scholar and Professor of Pharmacology and Systems Physiology at the University of Cincinnati College of Medicine in Cincinnati, Ohio, U.S. His parents were Georgios Drosatos and Sofia Drosatou; his family originates in Partheni, Euboea , Greece. [ 1 ]
Drosatos received his B.Sc. from the department of biology at the Aristotle University of Thessaloniki , Greece in 2000. In 2000, he continued with graduate studies at the Molecular Biology-Biomedicine graduate program of the department of biology and the medical school of the University of Crete . He received his M.Sc. in 2002 and his Ph.D. in molecular biology-biomedicine in 2007. During his graduate studies (2002–2007) he was a visiting research scholar in the laboratory of Vassilis I. Zannis [ 2 ] at Boston University Medical School. Following his graduation with a PhD in molecular biology-biomedicine in 2007, he joined the laboratory of Ira J. Goldberg at Columbia University , where he pursued post-doctoral training until 2012, [ 3 ] when he was promoted to associate research scientist in the department of medicine at Columbia University. In 2014 he joined the faculty of the Lewis Katz School of Medicine at Temple University as an assistant professor in pharmacology and in 2020, he was promoted to associate professor with tenure in cardiovascular sciences (primary affiliation). In 2022, he was recruited at the University of Cincinnati College of Medicine, which he joined as the Ohio Eminent Scholar and Professor of Pharmacology and Systems Physiology [ 4 ]
The research in his laboratory focuses on cardiovascular and systemic metabolism and particularly on signaling mechanisms that link cardiac stress in diabetes, sepsis and ischemia with altered myocardial fatty acid metabolism. His published work focuses on the transcriptional regulation of proteins that underlie lipoprotein metabolism, cardiac and systemic fatty acid metabolism, and mitochondrial function. His work has identified the role of Krüppel-like factor 5 (KLF5) in the regulation of cardiac fatty acid metabolism in diabetes [ 5 ] [ 6 ] and ischemic heart failure, [ 7 ] as well as how cardiac lipotoxicity leads to cardiac dysfunction, [ 8 ] [ 9 ] and the importance of cardiac fatty acid oxidation and mitochondrial integrity for the treatment of cardiac dysfunction in sepsis. [ 10 ] [ 11 ] [ 12 ] | https://en.wikipedia.org/wiki/Konstantinos_Drosatos |
Koopmans' theorem states that in closed-shell Hartree–Fock theory (HF), the first ionization energy of a molecular system is equal to the negative of the orbital energy of the highest occupied molecular orbital ( HOMO ). This theorem is named after Tjalling Koopmans , who published this result in 1934 for atoms. [ 1 ]
Koopmans' theorem is exact in the context of restricted Hartree–Fock theory if it is assumed that the orbitals of the ion are identical to those of the neutral molecule (the frozen orbital approximation [ 2 ] ). Ionization energies calculated this way are in qualitative agreement with experiment – the first ionization energy of small molecules is often calculated with an error of less than two electron volts . [ 3 ] [ 4 ] [ 5 ] Therefore, the validity of Koopmans' theorem is intimately tied to the accuracy of the underlying Hartree–Fock wavefunction. [ citation needed ] The two main sources of error are orbital relaxation, which refers to the changes in the Fock operator and Hartree–Fock orbitals when changing the number of electrons in the system, and electron correlation , referring to the validity of representing the entire many-body wavefunction using the Hartree–Fock wavefunction, i.e. a single Slater determinant composed of orbitals that are the eigenfunctions of the corresponding self-consistent Fock operator.
Empirical comparisons with experimental values and higher-quality ab initio calculations suggest that in many cases, but not all, the energetic corrections due to relaxation effects nearly cancel the corrections due to electron correlation. [ 6 ] [ 7 ]
A similar theorem (Janak's theorem) exists in density functional theory (DFT) for relating the exact first vertical ionization energy and electron affinity to the HOMO and LUMO energies, although both the derivation and the precise statement differ from that of Koopmans' theorem. [ 8 ] Ionization energies calculated from DFT orbital energies are usually poorer than those of Koopmans' theorem, with errors much larger than two electron volts possible depending on the exchange-correlation approximation employed. [ 3 ] [ 4 ] The LUMO energy shows little correlation with the electron affinity with typical approximations. [ 9 ] The error in the DFT counterpart of Koopmans' theorem is a result of the approximation employed for the exchange correlation energy functional so that, unlike in HF theory, there is the possibility of improved results with the development of better approximations.
While Koopmans' theorem was originally stated for calculating ionization energies from restricted (closed-shell) Hartree–Fock wavefunctions, the term has since taken on a more generalized meaning as a way of using orbital energies to calculate energy changes due to changes in the number of electrons in a system.
Koopmans’ theorem applies to the removal of an electron from any occupied molecular orbital to form a positive ion. Removal of the electron from different occupied molecular orbitals leads to the ion in different electronic states. The lowest of these states is the ground state and this often, but not always, arises from removal of the electron from the HOMO. The other states are excited electronic states.
For example, the electronic configuration of the H 2 O molecule is (1a 1 ) 2 (2a 1 ) 2 (1b 2 ) 2 (3a 1 ) 2 (1b 1 ) 2 , [ 10 ] where the symbols a 1 , b 2 and b 1 are orbital labels based on molecular symmetry . From Koopmans’ theorem the energy of the 1b 1 HOMO corresponds to the ionization energy to form the H 2 O + ion in its ground state (1a 1 ) 2 (2a 1 ) 2 (1b 2 ) 2 (3a 1 ) 2 (1b 1 ) 1 . The energy of the second-highest MO 3a 1 refers to the ion in the excited state (1a 1 ) 2 (2a 1 ) 2 (1b 2 ) 2 (3a 1 ) 1 (1b 1 ) 2 , and so on. In this case the order of the ion electronic states corresponds to the order of the orbital energies. Excited-state ionization energies can be measured by photoelectron spectroscopy .
For H 2 O, the near-Hartree–Fock orbital energies (with sign changed) of these orbitals are 1a 1 559.5, 2a 1 36.7, 1b 2 19.5, 3a 1 15.9 and 1b 1 13.8 eV . The corresponding ionization energies are 539.7, 32.2, 18.5, 14.7 and 12.6 eV. [ 10 ] As explained above, the deviations are due to the effects of orbital relaxation as well as differences in electron correlation energy between the molecular and the various ionized states.
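A quick tabulation of the quoted values makes the size of these errors visible (a trivial sketch; the numbers are exactly those given above, in eV):

```python
# Compare near-Hartree-Fock orbital energies (sign changed) with measured
# ionization energies of H2O, using the values quoted in the text (eV).
orbitals   = ["1a1", "2a1", "1b2", "3a1", "1b1"]
minus_eps  = [559.5, 36.7, 19.5, 15.9, 13.8]
experiment = [539.7, 32.2, 18.5, 14.7, 12.6]

for label, koop, exp_ie in zip(orbitals, minus_eps, experiment):
    print(f"{label}: Koopmans {koop:6.1f} eV, exp. {exp_ie:6.1f} eV, "
          f"error {koop - exp_ie:+5.1f} eV")
```

The error is largest for the core 1a 1 orbital, where orbital relaxation upon ionization is strongest.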
For N 2 in contrast, the order of orbital energies is not identical to the order of ionization energies. Near-Hartree–Fock calculations with a large basis set indicate that the 1π u bonding orbital is the HOMO. However the lowest ionization energy corresponds to removal of an electron from the 3σ g bonding orbital. In this case the deviation is attributed primarily to the difference in correlation energy between the two orbitals. [ 11 ]
It is sometimes claimed [ 12 ] that Koopmans' theorem also allows the calculation of electron affinities as the energy of the lowest unoccupied molecular orbitals ( LUMO ) of the respective systems. However, Koopmans' original paper makes no claim with regard to the significance of eigenvalues of the Fock operator other than that corresponding to the HOMO . Nevertheless, it is straightforward to generalize the original statement of Koopmans' to calculate the electron affinity in this sense.
Calculations of electron affinities using this statement of Koopmans' theorem have been criticized [ 13 ] on the grounds that virtual (unoccupied) orbitals do not have well-founded physical interpretations, and that their orbital energies are very sensitive to the choice of basis set used in the calculation. As the basis set becomes more complete, more and more "molecular" orbitals that are not really on the molecule of interest will appear, and care must be taken not to use these orbitals for estimating electron affinities.
Comparisons with experiment and higher-quality calculations show that electron affinities predicted in this manner are generally quite poor.
Koopmans' theorem is also applicable to open-shell systems; however, the orbital energies (eigenvalues of the Roothaan equations) must be corrected, as was shown in the 1970s. [ 14 ] [ 15 ] Despite this early work, application of Koopmans' theorem to open-shell systems continued to cause confusion; for example, it was stated that Koopmans' theorem can only be applied for removing the unpaired electron. [ 16 ] Later, the validity of Koopmans' theorem for ROHF was revisited and several procedures for obtaining meaningful orbital energies were reported. [ 17 ] [ 18 ] [ 19 ] [ 20 ] The spin up (alpha) and spin down (beta) orbital energies do not necessarily have to be the same. [ 21 ]
Kohn–Sham (KS) density functional theory (KS-DFT) admits its own version of Koopmans' theorem (sometimes called the DFT-Koopmans' theorem ) very similar in spirit to that of Hartree–Fock theory. The theorem equates the first (vertical) ionization energy $I$ of a system of $N$ electrons to the negative of the corresponding KS HOMO energy $\epsilon_H$. More generally, this relation is true even when the KS system describes a zero-temperature ensemble with a non-integer number of electrons $N - \delta N$ for integer $N$ and $\delta N \to 0$. When considering $N + \delta N$ electrons, the infinitesimal excess charge enters the KS LUMO of the $N$-electron system, but then the exact KS potential jumps by a constant known as the "derivative discontinuity". [ 22 ] It can be argued that the vertical electron affinity is equal exactly to the negative of the sum of the LUMO energy and the derivative discontinuity. [ 22 ] [ 23 ] [ 24 ] [ 25 ]
Unlike the approximate status of Koopmans' theorem in Hartree–Fock theory (because of the neglect of orbital relaxation), in the exact KS mapping the theorem is exact, including the effect of orbital relaxation. A sketch of the proof of this exact relation proceeds in three stages. First, for any finite system $I$ determines the $|\mathbf{r}| \to \infty$ asymptotic form of the density, which decays as $n(\mathbf{r}) \to \exp\!\left(-\tfrac{2\sqrt{2 m_{\rm e} I}}{\hbar}\,|\mathbf{r}|\right)$. [ 22 ] [ 26 ] Next, as a corollary (since the physically interacting system has the same density as the KS system), both must have the same ionization energy. Finally, since the KS potential is zero at infinity, the ionization energy of the KS system is, by definition, the negative of its HOMO energy, i.e., $\epsilon_H = -I$. [ 27 ] [ 28 ]
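The three stages can be written compactly as follows (a restatement of the argument above in display form, not an independent derivation):

```latex
% Stage 1: the exact density of any finite N-electron system decays with a
%   rate set by the ionization energy I.
% Stage 2: the KS density has the same decay, and its slowest-decaying
%   contribution comes from the HOMO orbital phi_H with energy eps_H < 0.
% Stage 3: matching the two exponents (with v_KS -> 0 at infinity) gives
%   eps_H = -I.
\begin{align*}
  n(\mathbf r) &\;\xrightarrow{\,|\mathbf r|\to\infty\,}\;
     e^{-2\sqrt{2 m_{\rm e} I}\,|\mathbf r|/\hbar},
  \qquad
  n_{\mathrm{KS}}(\mathbf r) \sim |\phi_H(\mathbf r)|^2 \sim
     e^{-2\sqrt{-2 m_{\rm e}\,\epsilon_H}\,|\mathbf r|/\hbar},\\[2pt]
  n &= n_{\mathrm{KS}}
  \quad\Longrightarrow\quad
  \sqrt{2 m_{\rm e} I} = \sqrt{-2 m_{\rm e}\,\epsilon_H}
  \quad\Longrightarrow\quad
  \epsilon_H = -I .
\end{align*}
```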
While these are exact statements in the formalism of DFT, the use of approximate exchange-correlation potentials makes the calculated energies approximate and often the orbital energies are very different from the corresponding ionization energies (even by several eV!). [ 29 ]
A tuning procedure is able to "impose" Koopmans' theorem on DFT approximations, thereby improving many of its related predictions in actual applications. [ 29 ] [ 30 ] In approximate DFTs one can estimate to a high degree of accuracy the deviation from Koopmans' theorem using the concept of energy curvature. [ 31 ] It provides excitation energies to zeroth order [ 32 ] and $\frac{\partial E}{\partial n_i} = \varepsilon_i$. [ 8 ] [ 33 ]
The concept of molecular orbitals and a Koopmans-like picture of ionization or electron attachment processes can be extended to correlated many-body wavefunctions by introducing Dyson orbitals. [ 34 ] [ 35 ] Dyson orbitals are defined as the generalized overlap between an $N$-electron molecular wavefunction and the $(N-1)$-electron wavefunction of the ionized system (or the $(N+1)$-electron wavefunction of an electron-attached system): $$\phi^{\mathrm{d}}(x_1) = \sqrt{N} \int \Psi^{N}(x_1, x_2, \ldots, x_N)\,\Psi^{N-1\,*}(x_2, \ldots, x_N)\,dx_2 \cdots dx_N .$$
Hartree–Fock canonical orbitals are Dyson orbitals computed for the Hartree–Fock wavefunction of the $N$-electron system and the Koopmans approximation of the $N \pm 1$-electron system. When correlated wavefunctions are used, Dyson orbitals include correlation and orbital relaxation effects. Dyson orbitals contain all information about the initial and final states of the system needed to compute experimentally observable quantities, such as total and differential photoionization/photodetachment cross sections. | https://en.wikipedia.org/wiki/Koopmans'_theorem
The Koopman–von Neumann (KvN) theory is a description of classical mechanics as an operatorial theory similar to quantum mechanics , based on a Hilbert space of complex , square-integrable wavefunctions. As its name suggests, the KvN theory is related to work [ 1 ] [ 2 ] : 220 by Bernard Koopman and John von Neumann .
Statistical mechanics describes macroscopic systems in terms of statistical ensembles , such as the macroscopic properties of an ideal gas . Ergodic theory is a branch of mathematics arising from the study of statistical mechanics.
The origins of the Koopman–von Neumann theory are tightly connected with the rise [ when? ] of ergodic theory as an independent branch of mathematics, in particular with Ludwig Boltzmann 's ergodic hypothesis .
In 1931, Koopman observed that the phase space of the classical system can be converted into a Hilbert space. [ 3 ] According to this formulation, functions representing physical observables become vectors, with an inner product defined in terms of a natural integration rule over the system's probability density on phase space. This reformulation makes it possible to draw interesting conclusions about the evolution of physical observables from Stone's theorem , which had been proved shortly before. This finding inspired von Neumann to apply the novel formalism to the ergodic problem in 1932. [ 4 ] [ 5 ] Subsequently, he published several seminal results in modern ergodic theory, including the proof of his mean ergodic theorem .
The Koopman–von Neumann treatment was further developed over time by Mário Schenberg in 1952–1953, [ 6 ] [ 7 ] by Angelo Loinger in 1962, [ 8 ] by Giacomo Della Riccia and Norbert Wiener in 1966, [ 9 ] and by E. C. George Sudarshan in 1976. [ 10 ]
In the approach of Koopman and von Neumann (KvN), dynamics in phase space is described by a (classical) probability density, recovered from an underlying wavefunction – the Koopman–von Neumann wavefunction – as the square of its absolute value (more precisely, as the amplitude multiplied with its own complex conjugate ). This stands in analogy to the Born rule in quantum mechanics. In the KvN framework, observables are represented by commuting self-adjoint operators acting on the Hilbert space of KvN wavefunctions. The commutativity physically implies that all observables are simultaneously measurable. Contrast this with quantum mechanics, where observables need not commute, which underlies the uncertainty principle , Kochen–Specker theorem , and Bell inequalities . [ 11 ]
The KvN wavefunction is postulated to evolve according to exactly the same Liouville equation as the classical probability density. From this postulate it can be shown that indeed probability density dynamics is recovered.
In classical statistical mechanics, the probability density (with respect to Liouville measure ) obeys the Liouville equation [ 12 ] [ 13 ] $$i\frac{\partial}{\partial t}\rho(x,p,t) = \hat{L}\,\rho(x,p,t)$$ with the self-adjoint Liouvillian $$\hat{L} = -i\frac{\partial H(x,p)}{\partial p}\frac{\partial}{\partial x} + i\frac{\partial H(x,p)}{\partial x}\frac{\partial}{\partial p},$$ where $H(x,p)$ denotes the classical Hamiltonian (i.e. the Liouvillian is $i$ times the Hamiltonian vector field considered as a first-order differential operator).
The same dynamical equation is postulated for the KvN wavefunction, $$i\frac{\partial}{\partial t}\psi(x,p,t) = \hat{L}\,\psi(x,p,t),$$ thus $$\frac{\partial}{\partial t}\psi(x,p,t) = \left[-\frac{\partial H(x,p)}{\partial p}\frac{\partial}{\partial x} + \frac{\partial H(x,p)}{\partial x}\frac{\partial}{\partial p}\right]\psi(x,p,t),$$ and for its complex conjugate $$\frac{\partial}{\partial t}\psi^{*}(x,p,t) = \left[-\frac{\partial H(x,p)}{\partial p}\frac{\partial}{\partial x} + \frac{\partial H(x,p)}{\partial x}\frac{\partial}{\partial p}\right]\psi^{*}(x,p,t).$$ From $\rho(x,p,t) = \psi^{*}(x,p,t)\,\psi(x,p,t)$ it follows, using the product rule, that $$\frac{\partial}{\partial t}\rho(x,p,t) = \left[-\frac{\partial H(x,p)}{\partial p}\frac{\partial}{\partial x} + \frac{\partial H(x,p)}{\partial x}\frac{\partial}{\partial p}\right]\rho(x,p,t),$$ which proves that probability-density dynamics can be recovered from the KvN wavefunction. [ 12 ] [ 13 ]
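This recovery can be checked numerically. The sketch below (illustrative, not taken from the cited sources) uses the harmonic oscillator $H = (p^2 + x^2)/2$ with $m = \omega = 1$, for which the Liouville flow is a rigid rotation of phase space; transporting the KvN wavefunction along the classical characteristics reproduces the classical motion of the density, e.g. the mean position of a Gaussian started at $(x,p) = (1,0)$ oscillates as $\langle x\rangle(t) = \cos t$:

```python
# Sketch: for H = (p^2 + x^2)/2 the Liouville flow rotates phase space, so the
# KvN wavefunction is transported along classical characteristics. We verify
# that the density |psi|^2 then follows the classical motion: a Gaussian
# centred at (x, p) = (1, 0) satisfies <x>(t) = cos(t).
import numpy as np

xs = np.linspace(-6, 6, 241)
ps = np.linspace(-6, 6, 241)
x, p = np.meshgrid(xs, ps)
dxdp = (xs[1] - xs[0]) * (ps[1] - ps[0])

def psi0(x, p):
    # Gaussian amplitude times an arbitrary phase -- the phase is physically
    # irrelevant in KvN mechanics and drops out of |psi|^2.
    g = np.exp(-((x - 1.0) ** 2 + p ** 2) / 2.0)
    return g * np.exp(1j * np.sin(x * p))

for t in [0.0, 0.5, 1.0, 2.0]:
    # Backward characteristics of xdot = p, pdot = -x (phase-space rotation):
    # the point (x, p) at time t originated from (x0, p0) at time 0.
    x0 = x * np.cos(t) - p * np.sin(t)
    p0 = p * np.cos(t) + x * np.sin(t)
    rho = np.abs(psi0(x0, p0)) ** 2      # density from the transported KvN wf
    rho /= rho.sum() * dxdp              # normalize on the grid
    mean_x = (x * rho).sum() * dxdp
    print(f"t = {t:.1f}:  <x> = {mean_x:+.4f}   cos(t) = {np.cos(t):+.4f}")
```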
Conversely, it is possible to start from operator postulates, similar to the Hilbert space axioms of quantum mechanics , and derive the equation of motion by specifying how expectation values evolve. [ 14 ]
The relevant axioms are that as in quantum mechanics (i) the states of a system are represented by normalized vectors of a complex Hilbert space, and the observables are given by self-adjoint operators acting on that space, (ii) the expectation value of an observable is obtained in the same manner as the expectation value in quantum mechanics , (iii) the probabilities of measuring certain values of some observables are calculated by the Born rule , and (iv) the state space of a composite system is the tensor product of the subsystems' spaces.
The above axioms (i) to (iv) are stated with the inner product written in the bra–ket notation .
These axioms allow us to recover the formalism of both classical and quantum mechanics. [ 14 ] Specifically, under the assumption that the classical position and momentum operators commute , the Liouville equation for the KvN wavefunction is recovered from averaged Newton's laws of motion . However, if the coordinate and momentum obey the canonical commutation relation , the Schrödinger equation of quantum mechanics is obtained.
We begin from the following equations for the expectation values of the coordinate $x$ and momentum $p$, $$\frac{d}{dt}\langle x \rangle = \left\langle \frac{p}{m} \right\rangle, \qquad \frac{d}{dt}\langle p \rangle = \langle -U'(x) \rangle,$$ i.e., Newton's laws of motion averaged over the ensemble. With the help of the operator axioms , they can be rewritten as $$\frac{d}{dt}\langle \Psi(t)|\hat{x}|\Psi(t)\rangle = \left\langle \Psi(t)\left|\frac{\hat{p}}{m}\right|\Psi(t)\right\rangle, \qquad \frac{d}{dt}\langle \Psi(t)|\hat{p}|\Psi(t)\rangle = \langle \Psi(t)|-U'(\hat{x})|\Psi(t)\rangle.$$ Notice a close resemblance to the Ehrenfest theorems of quantum mechanics. Applying the product rule leads to $$\langle d\Psi/dt|\hat{x}|\Psi\rangle + \langle \Psi|\hat{x}|d\Psi/dt\rangle = \left\langle \Psi\left|\frac{\hat{p}}{m}\right|\Psi\right\rangle, \qquad \langle d\Psi/dt|\hat{p}|\Psi\rangle + \langle \Psi|\hat{p}|d\Psi/dt\rangle = \langle \Psi|-U'(\hat{x})|\Psi\rangle,$$
into which we substitute a consequence of Stone's theorem , $i\,|d\Psi(t)/dt\rangle = \hat{L}\,|\Psi(t)\rangle$, and obtain $$\langle \Psi(t)|\,i[\hat{L},\hat{x}]\,|\Psi(t)\rangle = \left\langle \Psi(t)\left|\frac{\hat{p}}{m}\right|\Psi(t)\right\rangle, \qquad \langle \Psi(t)|\,i[\hat{L},\hat{p}]\,|\Psi(t)\rangle = \langle \Psi(t)|-U'(\hat{x})|\Psi(t)\rangle.$$ Since these identities must be valid for any initial state, the averaging can be dropped, and the system of commutator equations for the unknown $\hat{L}$ is derived: $$i[\hat{L},\hat{x}] = \frac{\hat{p}}{m}, \qquad i[\hat{L},\hat{p}] = -U'(\hat{x}).$$
Assume that the coordinate and momentum commute [ x ^ , p ^ ] = 0 {\displaystyle [{\hat {x}},{\hat {p}}]=0} . This assumption physically means that the classical particle's coordinate and momentum can be measured simultaneously, implying absence of the uncertainty principle .
The solution $\hat{L}$ cannot simply be of the form $\hat{L} = L(\hat{x},\hat{p})$, because this would imply the contradictions $im[L(\hat{x},\hat{p}),\hat{x}] = 0 = \hat{p}$ and $i[L(\hat{x},\hat{p}),\hat{p}] = 0 = -U'(\hat{x})$. Therefore, we must utilize additional operators $\hat{\lambda}_x$ and $\hat{\lambda}_p$ obeying the KvN algebra $$[\hat{x},\hat{\lambda}_x] = i, \qquad [\hat{p},\hat{\lambda}_p] = i, \qquad [\hat{x},\hat{p}] = [\hat{x},\hat{\lambda}_p] = [\hat{p},\hat{\lambda}_x] = [\hat{\lambda}_x,\hat{\lambda}_p] = 0.$$ The need to employ these auxiliary operators arises because all classical observables commute. Now we seek $\hat{L}$ in the form $\hat{L} = L(\hat{x},\hat{\lambda}_x,\hat{p},\hat{\lambda}_p)$. Utilizing the KvN algebra, the commutator equations for $L$ can be converted into the following differential equations: [ 14 ] [ 16 ] $$\frac{\partial L}{\partial \lambda_x} = \frac{p}{m}, \qquad \frac{\partial L}{\partial \lambda_p} = -U'(x).$$ Whence, we conclude that the classical KvN wavefunction $|\Psi(t)\rangle$ evolves according to the Schrödinger-like equation of motion $$i\frac{d}{dt}|\Psi(t)\rangle = \hat{L}\,|\Psi(t)\rangle, \qquad \hat{L} = \frac{\hat{p}\,\hat{\lambda}_x}{m} - U'(\hat{x})\,\hat{\lambda}_p.$$
Let us explicitly show that the KvN dynamical equation is equivalent to classical Liouville mechanics .
Since $\hat{x}$ and $\hat{p}$ commute, they share the common eigenvectors $|x,p\rangle$, $$\hat{x}\,|x,p\rangle = x\,|x,p\rangle, \qquad \hat{p}\,|x,p\rangle = p\,|x,p\rangle,$$ with the resolution of the identity $1 = \int dx\,dp\,|x,p\rangle\langle x,p|$. Then, one obtains from the KvN algebra $$\langle x,p|\hat{\lambda}_x|\Psi\rangle = -i\frac{\partial}{\partial x}\langle x,p|\Psi\rangle, \qquad \langle x,p|\hat{\lambda}_p|\Psi\rangle = -i\frac{\partial}{\partial p}\langle x,p|\Psi\rangle.$$ Projecting the KvN dynamical equation onto $\langle x,p|$, we get the equation of motion for the KvN wavefunction in the xp-representation: $$\frac{\partial}{\partial t}\langle x,p|\Psi(t)\rangle = \left[-\frac{p}{m}\frac{\partial}{\partial x} + U'(x)\frac{\partial}{\partial p}\right]\langle x,p|\Psi(t)\rangle.$$
The quantity $\langle x,p|\Psi(t)\rangle$ is the probability amplitude for a classical particle to be at point $x$ with momentum $p$ at time $t$. According to the axioms above, the probability density is given by $\rho(x,p;t) = |\langle x,p|\Psi(t)\rangle|^2$. Utilizing the identity $$\frac{\partial}{\partial t}\rho(x,p;t) = \langle \Psi(t)|x,p\rangle \frac{\partial}{\partial t}\langle x,p|\Psi(t)\rangle + \langle x,p|\Psi(t)\rangle \left(\frac{\partial}{\partial t}\langle x,p|\Psi(t)\rangle\right)^{*}$$ as well as the xp-representation of the KvN dynamical equation, we recover the classical Liouville equation.
Moreover, according to the operator axioms and the eigenvector relations above, $$\langle A \rangle = \langle \Psi(t)|A(\hat{x},\hat{p})|\Psi(t)\rangle = \int dx\,dp\,\langle \Psi(t)|x,p\rangle\, A(x,p)\,\langle x,p|\Psi(t)\rangle = \int dx\,dp\, A(x,p)\,\rho(x,p;t).$$ Therefore, the rule for calculating averages of an observable $A(x,p)$ in classical statistical mechanics has been recovered from the operator axioms with the additional assumption $[\hat{x},\hat{p}] = 0$. As a result, the phase of a classical wave function does not contribute to observable averages. Contrary to quantum mechanics, the phase of a KvN wave function is physically irrelevant. Hence, the nonexistence of the double-slit experiment [ 13 ] [ 17 ] [ 18 ] as well as the Aharonov–Bohm effect [ 19 ] is established in KvN mechanics.
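The physical irrelevance of the KvN phase can be demonstrated directly (a minimal sketch, not drawn from the cited references): multiplying the wavefunction by an arbitrary local phase leaves every classical average unchanged, since $\langle A\rangle$ depends on $|\psi|^2$ only.

```python
# Sketch: the phase of a KvN wavefunction does not affect any observable
# average, because <A> = integral of A(x,p) |psi(x,p)|^2 dx dp.
import numpy as np

x = np.linspace(-5, 5, 401)
p = np.linspace(-5, 5, 401)
X, P = np.meshgrid(x, p)

psi = np.exp(-(X**2 + P**2) / 2).astype(complex)
psi_phase = psi * np.exp(1j * (3 * X - 2 * P + X * P))  # arbitrary local phase

def average(A, wf):
    w = np.abs(wf) ** 2
    return (A * w).sum() / w.sum()

A = X**2 + np.sin(P)   # an arbitrary classical observable A(x, p)
# The two averages agree to machine precision.
print(average(A, psi), average(A, psi_phase))
```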
Projecting KvN dynamical eq onto the common eigenvector of the operators x ^ {\displaystyle {\hat {x}}} and λ ^ p {\displaystyle {\hat {\lambda }}_{p}} (i.e., x λ p {\displaystyle x\lambda _{p}} -representation), one obtains classical mechanics in the doubled configuration space, [ 20 ] whose generalization leads [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] to the phase space formulation of quantum mechanics .
As in the derivation of classical mechanics above, we begin from the following equations for averages of the coordinate $x$ and momentum $p$: $$\frac{d}{dt}\langle x \rangle = \left\langle \frac{p}{m} \right\rangle, \qquad \frac{d}{dt}\langle p \rangle = \langle -U'(x) \rangle.$$ With the help of the operator axioms , they can be rewritten as $$\frac{d}{dt}\langle \Psi(t)|\hat{x}|\Psi(t)\rangle = \left\langle \Psi(t)\left|\frac{\hat{p}}{m}\right|\Psi(t)\right\rangle, \qquad \frac{d}{dt}\langle \Psi(t)|\hat{p}|\Psi(t)\rangle = \langle \Psi(t)|-U'(\hat{x})|\Psi(t)\rangle.$$ These are the Ehrenfest theorems of quantum mechanics. Applying the product rule and substituting a consequence of Stone's theorem , $$i\hbar\,|d\Psi(t)/dt\rangle = \hat{H}\,|\Psi(t)\rangle,$$ where $\hbar$ is introduced as a normalization constant to balance dimensionality, and noting that these identities must be valid for any initial state, the averaging can be dropped and the system of commutator equations for the unknown quantum generator of motion $\hat{H}$ is derived: $$\frac{i}{\hbar}[\hat{H},\hat{x}] = \frac{\hat{p}}{m}, \qquad \frac{i}{\hbar}[\hat{H},\hat{p}] = -U'(\hat{x}).$$
Contrary to the case of classical mechanics , we now assume that the observables of coordinate and momentum obey the canonical commutation relation $[\hat{x},\hat{p}] = i\hbar$. Setting $\hat{H} = H(\hat{x},\hat{p})$, the commutator equations can be converted into the differential equations [ 14 ] [ 16 ] $$\frac{\partial H(x,p)}{\partial p} = \frac{p}{m}, \qquad \frac{\partial H(x,p)}{\partial x} = U'(x),$$ whose solution is the familiar quantum Hamiltonian $$\hat{H} = \frac{\hat{p}^2}{2m} + U(\hat{x}).$$
Whence, the Schrödinger equation is derived from the Ehrenfest theorems by assuming the canonical commutation relation between coordinate and momentum. This derivation, as well as the derivation of classical KvN mechanics, shows that the difference between quantum and classical mechanics essentially boils down to the value of the commutator $[\hat{x},\hat{p}]$.
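The quantum side of this statement can be checked numerically in a truncated matrix representation (an illustrative sketch with $\hbar = m = \omega = 1$ in a harmonic-oscillator basis; the finite basis is faithful only away from the truncation edge):

```python
# Sketch: build x and p in a truncated harmonic-oscillator basis (hbar = m =
# omega = 1) and check [x, p] = i together with the Ehrenfest relation
# dx/dt = (i/hbar)[H, x] = p/m.
import numpy as np

N = 60
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)   # lowering operator: a|n> = sqrt(n)|n-1>
ad = a.conj().T                    # raising operator

x = (a + ad) / np.sqrt(2)
p = 1j * (ad - a) / np.sqrt(2)
H = ad @ a + 0.5 * np.eye(N)       # H = a^dag a + 1/2

comm = x @ p - p @ x               # should be i * identity
ehrenfest = 1j * (H @ x - x @ H)   # (i/hbar)[H, x], should equal p

# Compare away from the truncation edge, where the finite basis is faithful.
k = N - 5
print(np.max(np.abs(comm[:k, :k] - 1j * np.eye(N)[:k, :k])))  # ~1e-15
print(np.max(np.abs(ehrenfest[:k, :k] - p[:k, :k])))          # ~1e-15
```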
In the Hilbert space and operator formulation of classical mechanics, the Koopman–von Neumann wavefunction takes the form of a superposition of eigenstates, and measurement collapses the KvN wavefunction to the eigenstate associated with the measurement result, in analogy to the wave function collapse of quantum mechanics.
However, it can be shown that for Koopman–von Neumann classical mechanics non-selective measurements leave the KvN wavefunction unchanged. [ 12 ]
The KvN dynamical equation ( KvN dynamical eq in xp ) and the Liouville equation ( Liouville eq ) are first-order linear partial differential equations . One recovers Newton's laws of motion by applying the method of characteristics to either of these equations. Hence, the key difference between KvN and Liouville mechanics lies in the weighting of individual trajectories: arbitrary weights, underlying the classical wave function, can be utilized in KvN mechanics, while only positive weights, representing the probability density, are permitted in Liouville mechanics.
Being explicitly based on the Hilbert space language, the KvN classical mechanics adopts many techniques from quantum mechanics, for example, perturbation and diagram techniques [ 25 ] as well as functional integral methods . [ 26 ] [ 27 ] [ 28 ] The KvN approach is very general, and it has been extended to dissipative systems , [ 29 ] relativistic mechanics , [ 30 ] and classical field theories . [ 14 ] [ 31 ] [ 32 ] [ 33 ]
The KvN approach is fruitful in studies on the quantum-classical correspondence [ 14 ] [ 15 ] [ 34 ] [ 35 ] [ 36 ] as it reveals that the Hilbert space formulation is not exclusively quantum mechanical. [ 37 ] Even Dirac spinors are not exclusively quantum, as they are utilized in the relativistic generalization of KvN mechanics. [ 30 ] Like the better-known phase space formulation of quantum mechanics, the KvN approach can be understood as an attempt to bring classical and quantum mechanics into a common mathematical framework. In fact, the time evolution of the Wigner function approaches, in the classical limit, the time evolution of the KvN wavefunction of a classical particle. [ 30 ] [ 38 ] However, a mathematical resemblance to quantum mechanics does not imply the presence of hallmark quantum effects. In particular, the impossibility of the double-slit experiment [ 13 ] [ 17 ] [ 18 ] and of the Aharonov–Bohm effect [ 19 ] is explicitly demonstrated in the KvN framework. | https://en.wikipedia.org/wiki/Koopman–von_Neumann_classical_mechanics
Kopin Liu ( Chinese : 劉國平 ; born 25 January 1949) is a Taiwanese physical chemist.
Liu graduated from National Tsing Hua University in 1971 and subsequently pursued his doctorate at Ohio State University in the United States. He began his research career at the Georgia Institute of Technology , then moved to Argonne National Laboratory , where he spent over a decade until 1993. Since returning to Taiwan, Liu has held various positions at Academia Sinica . [ 1 ] In 2000, he mentioned that working at Academia Sinica meant a large pay cut, but he was driven to educate and mentor future Taiwanese scientists while conducting research. Liu received two consecutive five-year grants as a fellow of the Foundation for the Advancement of Outstanding Scholarship, an organization founded by Yuan T. Lee in 1994. [ 1 ] [ 2 ] In 1998, Liu was granted fellowship by the American Physical Society . Equivalent honors were bestowed by The World Academy of Sciences in 2005 and the Royal Society of Chemistry in 2013. He became a member of the Academia Sinica in 2004 and the European Academy of Sciences in 2018. Liu has served as distinguished research chair professor in the department of physics at National Taiwan University since 2010. From 2010 to 2012, Liu was honorary chair professor at National Tsing Hua University. He is a 2011 recipient of the Humboldt Research Award . [ 1 ] | https://en.wikipedia.org/wiki/Kopin_Liu
The Kopp–Etchells effect is a sparkling ring or disk that is sometimes produced by rotary-wing aircraft when operating in sandy conditions, particularly near the ground at night. The name was coined by photographer Michael Yon to honor two soldiers who were killed in combat; Benjamin Kopp, a US Army Ranger, and Joseph Etchells, a British soldier. Both were killed in combat in Sangin, Afghanistan in July 2009. [ 1 ]
Other names that have been used to describe this phenomenon include scintillation, [ 2 ] halo effect, [ 3 ] pixie dust, [ 4 ] and corona effect. [ 5 ]
Helicopter rotors are fitted with abrasion shields along their leading edges to protect the blades. These abrasion strips are often made of titanium , stainless steel, or nickel alloys, which are very hard, but not as hard as sand. When a helicopter flies low to the ground in sandy environments, sand can strike the metal abrasion strip and cause erosion, which produces a visible corona or halo around the rotor blades. The effect is caused by the pyrophoric oxidation of the ablated metal particles. [ 6 ] [ 7 ]
In this way, the Kopp–Etchells effect is similar to the sparks made by a grinder , which are also due to pyrophoricity. [ 8 ] When a speck of metal is chipped off the rotor, it is heated by rapid oxidation. This occurs because its freshly exposed surface reacts with oxygen to produce heat. If the particle is sufficiently small, then its mass is small compared to its surface area, and so heat is generated faster than it can be dissipated. This causes the particle to become so hot that it reaches its ignition temperature. At that point, the metal continues to burn freely. [ 9 ]
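The surface-to-mass argument can be sketched with a toy energy balance (all numbers below are hypothetical placeholders chosen only to show the 1/r scaling; none come from the cited sources):

```python
# Toy sketch of the surface-to-mass argument: a freshly exposed metal particle
# gains heat through its surface (area ~ r^2) but stores it in its bulk
# (mass ~ r^3), so the initial heating rate dT/dt = 3*q / (rho * c_p * r)
# scales as 1/r. All parameter values are hypothetical, not measured data.
import math

q_surface = 5.0e5    # W/m^2, assumed net oxidation heating of a fresh surface
rho       = 4500.0   # kg/m^3, titanium-like density
c_p       = 520.0    # J/(kg K), titanium-like specific heat

for r in [10e-6, 1e-6, 0.1e-6]:               # particle radius in metres
    area  = 4 * math.pi * r**2
    mass  = rho * (4 / 3) * math.pi * r**3
    dT_dt = q_surface * area / (mass * c_p)   # initial heating rate, K/s
    print(f"r = {r * 1e6:5.1f} um  ->  dT/dt ~ {dT_dt:.2e} K/s")
```

Under these (assumed) numbers, a ten-fold reduction in radius gives a ten-fold faster temperature rise, which is why only sufficiently small chips reach their ignition temperature before dissipating the heat.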
Abrasion strips made of titanium produce the brightest sparks, [ 2 ] [ 10 ] and the intensity increases with the size and concentration of sand grains in the air. [ 11 ]
Sand particles are more likely to hit the rotor when the rotorcraft is near the ground. This occurs because sand is blown into the air by the downwash and then carried to the top of the rotor disk by a vortex of air. This process is called recirculation and can lead to a complete brownout in severe situations. [ 5 ] The Kopp–Etchells effect is not necessarily associated with takeoff and landing operations. It has been observed without night vision goggles at altitudes as high as 1700 ft . [ 11 ]
The effect is often, but incorrectly, believed to be an electrical phenomenon, either as a result of static electricity as in St. Elmo's Fire , or due to the interaction of sand with the rotor ( triboelectric effect ), or a piezoelectric property of quartz sand. [ 12 ]
Mechanical action has been considered, whereby impact with the sand particles may cause photoluminescence . [ 13 ] Additionally, mechanisms relating to triboluminescence , chemiluminescence , and electroluminescence have been suggested. [ 3 ]
Yet another incorrect theory is that the extreme speed of the helicopter blades pushes sand particles out of the way so fast that they burn up like meteors in the atmosphere due to adiabatic heating. [ 1 ]
Groundcrew have mistaken the phenomenon for fire or other malfunctions. [ 11 ]
The erosion associated with the Kopp–Etchells effect presents costly maintenance and logistics problems, and is an example of foreign object damage (FOD). [ 11 ]
Sand hitting the moving rotor blades represents a security risk because of the highly visible ring it produces, which places military operations at a tactical disadvantage when trying to remain concealed in darkness. [ 11 ]
The light from the Kopp–Etchells effect can interfere with the pilot's ability to see, especially when using night vision equipment. This may cause difficulty with landing safely, and produce spatial disorientation . [ 4 ] | https://en.wikipedia.org/wiki/Kopp–Etchells_effect |
The Korea Atomic Energy Research Institute ( KAERI ; Korean : 한국원자력연구원 ) in Daejeon, South Korea was established in 1959 as the sole professional research-oriented institute for nuclear power in South Korea , and has rapidly built a reputation for research and development in various fields.
KAERI was established in 1959 as the Atomic Energy Research Institute (national research institute).
KAERI has made significant contributions to the nation's nuclear technology development. After Korea achieved self-reliance in nuclear core technologies, KAERI transferred highly developed technologies to local industries for practical applications. | https://en.wikipedia.org/wiki/Korea_Atomic_Energy_Research_Institute
The Korea Institute of Nuclear Safety ( KINS ; Korean : 한국원자력안전기술원 ) is a government-funded technical expert organization in Daejeon, South Korea , which aims to develop and implement regulations for nuclear safety .
KINS was established in February 1990, and supports the Nuclear Safety and Security Commission in technical aspects of nuclear safety regulation; "including safety reviews, inspections, education, and safety research, based on technical knowledge and accumulated regulatory experience." [ 1 ] [ 2 ] [ 3 ]
The current president of KINS is Sok Chul Kim. [ 1 ] [ 2 ] [ 3 ] As of June 2016, KINS consists of two offices, eight divisions, and 44 departments/teams with 523 persons. [ 4 ]
In 2008, in collaboration with the IAEA , KINS established the International Nuclear Safety School, an initiative to promote nuclear safety training. The school is the IAEA's designated training center for the region. [ 2 ] | https://en.wikipedia.org/wiki/Korea_Institute_of_Nuclear_Safety |
Korea Research Institute of Chemical Technology (KRICT) is the national chemical research institute for the Republic of Korea. It has performed research & development and public infrastructure services in chemistry and related convergence technologies. KRICT was established in 1976 and is a National Research Council of Science & Technology member. [ 3 ] It is located at 141, Gajeong-ro, Yuseong-gu, Daejeon.
KRICT was established in September 1976. In January 1999, its affiliation changed to the Ministry of Science and Technology under the Office of the Prime Minister. In March 2000, the Korea Chemical Bank was established, and in January 2001 the organization's name was changed to KRICT. The institute's affiliation changed again to the National Research Council of Science & Technology under the Ministry of Science, ICT and Future Planning (currently the Ministry of Science and ICT) in June 2014, and finally to the National Research Council of Science & Technology under the Ministry of Science and ICT in July 2017.
Further expansions of the institute include: the Korea Institute of Toxicology as an annex of KRICT (January 2002); the KRICT-CNU Graduate School of Drug Discovery and Development (May 2011); the New Chemical Commercialization Research Center in Ulsan, currently the Research Center for Advanced Specialty Chemicals (March 2012); the Biochemistry Commercialization Center in Ulsan, currently the Center for Bio-based Chemistry (March 2016); and the Carbon Neutral Demo-Plant Center in Yeosu (December 2023).
- UST-KRICT School: Cultivating global chemical convergence researchers.
- Korea Chemical Bank: Establishment of infrastructure through collection and management of new drug material compounds
The division contributes to realizing a low-carbon society, a hydrogen economy, and the chemical utilization of plastics in the chemical industry by developing eco-friendly process technology for the value-up of low-grade chemical resources and new energy-saving production technology for basic chemicals.
The division strengthens national industrial competitiveness and enhances quality of life through the development of advanced materials technology for IoT devices and off-grid energy conversion and storage.
The division secures new drug pipelines by driving innovation in fundamental technologies, achieves medical innovation, and improves quality of life by developing technology for the treatment and control of infectious and rare diseases.
The division develops advanced fine chemical material technologies and platform technologies for waste reduction bioplastics to create new growth engines and stimulate the economy.
The division provides industrial technology solutions and contributes to citizens' safety through the chemical library, data platform, and infrastructure for chemical industries and academia, and the development of related technologies. | https://en.wikipedia.org/wiki/Korea_Research_Institute_of_Chemical_Technology |
The Korean Chemical Society (KCS) was founded on July 7, 1946. It is a non-profit corporation that aims to contribute to chemical scholarship, technological development, education, and the spread of chemical knowledge. About 7,000 members are active in universities, laboratories, industry, and schools, and about 140 organizations and about 30 special member companies participate in KCS.
KCS is run by a steering committee and comprises 12 branches, 12 departments, 3 editing committees, and a board of trustees.
KCS publishes several journals, including the “Journal of the Korean Chemical Society”, the “Bulletin of the Korean Chemical Society” (English, monthly, SCI-indexed since 1981), “ Chemistry: An Asian Journal ”, “ Physical Chemistry Chemical Physics ”, and “ChemWorld”, the newsletter of KCS.
The Korean Chemical Society works continuously with other international chemical societies and is a member organization of IUPAC and FACS.
The Bulletin of the Korean Chemical Society is a flagship journal of KCS, representing basic and applied chemical science and appealing to a broad international readership in the chemical community.
As an official journal of KCS, it reaches the chemical community worldwide. It is strictly peer-reviewed and welcomes papers written in English.
The Bull. Korean Chem. Soc. is jointly published with Wiley-VCH and is read by chemists of all disciplines. The journal is indexed in the SCI, reflecting its global recognition.
KCS holds annual meetings and several conferences. KCS also hosts KChO, the Korean Chemistry Olympiad, a well-known chemistry competition in Korea that serves as the selection test for the Korean team at IChO. KChO comprises two summer schools and two winter schools. Any high school student (from age 16) may take the school entrance exam. After entering a seasonal school, students receive lessons from professors and take exams, including experimental tests. For further information, visit Korean Chemistry Olympiad . This is part of the Chemical Education and Outreach Program; compiling chemical terms and the nomenclature of chemical compounds in Korean is another part of this program, which KCS carries out as a national affiliated organization of the IUPAC.
Furthermore, on Korea's Science Day, April 21, designated by the government in 1967, KCS holds a “Contest for Chemical Poetry and Painting” for secondary school students and a “Contest for Chemical Posters” for elementary and secondary school students. Award-winning works can be found at http://new.kcsnet.or.kr/contest_poem_result .
KCS established the “Carbon Culture Award” in 2012 to recognize and promote understanding of the importance of carbon.
This article about a chemistry organization is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Korean_Chemical_Society |