In chemistry and physics, valence electrons are electrons in the outermost shell of an atom that can participate in the formation of a chemical bond if the outermost shell is not closed. In a single covalent bond, a shared pair forms, with both atoms in the bond each contributing one valence electron.
The presence of valence electrons can determine the element's chemical properties, such as its valence – whether it may bond with other elements and, if so, how readily and with how many. In this way, a given element's reactivity is highly dependent upon its electronic configuration. For a main-group element, a valence electron can exist only in the outermost electron shell; for a transition metal, a valence electron can also be in an inner shell.
An atom with a closed shell of valence electrons (corresponding to a noble gas configuration) tends to be chemically inert. Atoms with one or two valence electrons more than a closed shell are highly reactive due to the relatively low energy needed to remove the extra valence electrons to form a positive ion. An atom with one or two electrons fewer than a closed shell is reactive due to its tendency either to gain the missing valence electrons and form a negative ion, or else to share valence electrons and form a covalent bond.
Like a core electron, a valence electron can absorb or release energy in the form of a photon. An energy gain can trigger the electron to move (jump) to an outer shell; this is known as atomic excitation. The electron can even break free from its associated atom's shell entirely; this is ionization to form a positive ion. When an electron loses energy (thereby causing a photon to be emitted), it can move to an inner shell that is not fully occupied.
The electrons that determine valence – how an atom reacts chemically – are those with the highest energy.
For a main-group element, the valence electrons are defined as those electrons residing in the electronic shell of highest principal quantum number n. [1] Thus, the number of valence electrons that it may have depends on the electron configuration in a simple way. For example, the electronic configuration of phosphorus (P) is 1s²2s²2p⁶3s²3p³, so that there are 5 valence electrons (3s²3p³), corresponding to a maximum valence for P of 5, as in the molecule PF₅; this configuration is normally abbreviated to [Ne]3s²3p³, where [Ne] signifies the core electrons whose configuration is identical to that of the noble gas neon.
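This counting rule for main-group elements can be sketched in a few lines of Python. The helper below is hypothetical (not part of any chemistry library); it takes a full electron configuration written in plain ASCII, such as "1s2 2s2 2p6 3s2 3p3" for phosphorus, and counts the electrons in the shell of highest principal quantum number n:

```python
import re

def valence_electrons_main_group(configuration: str) -> int:
    """Count electrons in the shell of highest principal quantum number n.

    Expects a full (non-abbreviated) configuration such as
    "1s2 2s2 2p6 3s2 3p3" for phosphorus.
    """
    # Each subshell token is: principal quantum number, subshell letter, electron count.
    subshells = re.findall(r"(\d)([spdf])(\d+)", configuration)
    highest_n = max(int(n) for n, _, _ in subshells)
    return sum(int(count) for n, _, count in subshells if int(n) == highest_n)

# Phosphorus: the n = 3 shell holds 3s2 3p3, i.e. 5 valence electrons,
# matching the maximum valence of P in PF5.
print(valence_electrons_main_group("1s2 2s2 2p6 3s2 3p3"))  # 5
```

For a closed-shell atom such as neon ("1s2 2s2 2p6") the same function returns 8, the full octet.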
However, transition elements have (n−1)d energy levels that are very close in energy to the ns level. [2] So, in contrast to main-group elements, a valence electron for a transition metal is defined as an electron that resides outside a noble-gas core. [3] Thus, generally, the d electrons in transition metals behave as valence electrons although they are not in the outermost shell. For example, manganese (Mn) has the configuration 1s²2s²2p⁶3s²3p⁶4s²3d⁵; this is abbreviated to [Ar]4s²3d⁵, where [Ar] denotes a core configuration identical to that of the noble gas argon. In this atom, a 3d electron has energy similar to that of a 4s electron, and much higher than that of a 3s or 3p electron. In effect, there are possibly seven valence electrons (4s²3d⁵) outside the argon-like core; this is consistent with the chemical fact that manganese can have an oxidation state as high as +7 (in the permanganate ion, MnO₄⁻). (But note that merely having that number of valence electrons does not imply that the corresponding oxidation state will exist. For example, fluorine is not known in oxidation state +7; and although the maximum known number of valence electrons is 16, in ytterbium and nobelium, no oxidation state higher than +9 is known for any element.)
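The transition-metal definition – count every electron outside the noble-gas core – is even simpler to sketch. The function below is a hypothetical helper that sums the electrons listed after the core symbol in an abbreviated configuration such as "[Ar] 4s2 3d5":

```python
import re

def valence_electrons_outside_core(abbrev_configuration: str) -> int:
    """Sum all electrons listed outside the noble-gas core.

    Expects an abbreviated configuration such as "[Ar] 4s2 3d5" for manganese;
    the bracketed core symbol contributes no matches and is ignored.
    """
    counts = re.findall(r"\d[spdf](\d+)", abbrev_configuration)
    return sum(int(count) for count in counts)

# Manganese, [Ar] 4s2 3d5 -> 7 valence electrons,
# consistent with its maximum oxidation state of +7 in permanganate.
print(valence_electrons_outside_core("[Ar] 4s2 3d5"))  # 7
```

Applied to a main-group abbreviation such as "[Ne] 3s2 3p3" it returns 5, agreeing with the highest-n rule for phosphorus.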
The farther right in each transition metal series, the lower the energy of an electron in a d subshell and the less such an electron has valence properties. Thus, although a nickel atom has, in principle, ten valence electrons (4s²3d⁸), its oxidation state never exceeds four. For zinc, the 3d subshell is complete in all known compounds, although it does contribute to the valence band in some compounds. [4] Similar patterns hold for the (n−2)f energy levels of inner transition metals.
The d electron count is an alternative tool for understanding the chemistry of a transition metal.
The number of valence electrons of an element can be determined by the periodic table group (vertical column) in which the element is categorized. In groups 1–12, the group number matches the number of valence electrons; in groups 13–18, the units digit of the group number matches the number of valence electrons. (Helium is the sole exception.) [5]
Helium is an exception: despite having a 1s² configuration with two valence electrons, and thus having some similarities with the alkaline earth metals with their ns² valence configurations, its shell is completely full and hence it is chemically very inert; it is usually placed in group 18 with the other noble gases.
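The group-number rule above, with helium as the stated exception, can be written as a short Python sketch; `valence_electrons_from_group` is a hypothetical helper, not a standard library function:

```python
def valence_electrons_from_group(group: int, element: str = "") -> int:
    """Group-number rule: groups 1-12 use the group number itself;
    groups 13-18 use its units digit. Helium (group 18) is the sole
    exception, with only two valence electrons."""
    if element == "He":
        return 2
    if not 1 <= group <= 18:
        raise ValueError("group must be between 1 and 18")
    return group if group <= 12 else group - 10

print(valence_electrons_from_group(1))         # 1 (alkali metals, e.g. sodium)
print(valence_electrons_from_group(17))        # 7 (halogens)
print(valence_electrons_from_group(18, "He"))  # 2 (helium exception)
```

Note that for transition metals this gives the number of potentially available valence electrons (e.g. 7 for manganese in group 7), not an oxidation state that must actually occur.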
The valence shell is the set of orbitals which are energetically accessible for accepting electrons to form chemical bonds .
For main-group elements, the valence shell consists of the ns and np orbitals in the outermost electron shell. For transition metals, the orbitals of the incomplete (n−1)d subshell are included, and for lanthanides and actinides, the incomplete (n−2)f and (n−1)d subshells. The orbitals involved can be in an inner electron shell and do not all correspond to the same electron shell or principal quantum number n in a given element, but they are all at similar energies. [5]
As a general rule, a main-group element (except hydrogen or helium) tends to react to form an s²p⁶ electron configuration. This tendency is called the octet rule, because each bonded atom has 8 valence electrons including shared electrons. Similarly, a transition metal tends to react to form a d¹⁰s²p⁶ electron configuration. This tendency is called the 18-electron rule, because each bonded atom has 18 valence electrons including shared electrons.
The heavy group 2 elements calcium, strontium, and barium can use the (n−1)d subshell as well, giving them some similarities to transition metals. [7] [8] [9]
The number of valence electrons in an atom governs its bonding behavior. Therefore, elements whose atoms have the same number of valence electrons are often grouped together in the periodic table of the elements, especially if they also have the same types of valence orbitals. [ 10 ]
The most reactive kind of metallic element is an alkali metal of group 1 (e.g., sodium or potassium); this is because such an atom has only a single valence electron. During the formation of an ionic bond, which provides the necessary ionization energy, this one valence electron is easily lost to form a positive ion (cation) with a closed shell (e.g., Na⁺ or K⁺). An alkaline earth metal of group 2 (e.g., magnesium) is somewhat less reactive, because each atom must lose two valence electrons to form a positive ion with a closed shell (e.g., Mg²⁺). [citation needed]
Within each group (each periodic table column) of metals, reactivity increases with each lower row of the table (from a light element to a heavier element), because a heavier element has more electron shells than a lighter element; a heavier element's valence electrons exist at higher principal quantum numbers (they are farther away from the nucleus of the atom, and are thus at higher potential energies, which means they are less tightly bound). [ citation needed ]
A nonmetal atom tends to attract additional valence electrons to attain a full valence shell; this can be achieved in one of two ways: an atom can either share electrons with a neighboring atom (a covalent bond), or it can remove electrons from another atom (an ionic bond). The most reactive kind of nonmetal element is a halogen (e.g., fluorine (F) or chlorine (Cl)). Such an atom has the electron configuration s²p⁵; this requires only one additional valence electron to form a closed shell. To form an ionic bond, a halogen atom can remove an electron from another atom in order to form an anion (e.g., F⁻, Cl⁻, etc.). To form a covalent bond, one electron from the halogen and one electron from another atom form a shared pair (e.g., in the molecule H–F, the line represents a shared pair of valence electrons, one from H and one from F). [citation needed]
Within each group of nonmetals, reactivity decreases with each lower row of the table (from a light element to a heavy element) in the periodic table, because the valence electrons are at progressively higher energies and thus progressively less tightly bound. In fact, oxygen (the lightest element in group 16) is the most reactive nonmetal after fluorine, even though it is not a halogen, because the valence shells of the heavier halogens are at higher principal quantum numbers.
In these simple cases where the octet rule is obeyed, the valence of an atom equals the number of electrons gained, lost, or shared in order to form the stable octet. However, there are also many molecules that are exceptions , and for which the valence is less clearly defined.
Valence electrons are also responsible for the bonding in the pure chemical elements, and whether their electrical conductivity is characteristic of metals, semiconductors, or insulators.
[Figure: periodic table with background color showing the bonding of simple substances – metallic, network covalent, molecular covalent, single atoms, or unknown; where several allotropes exist, the most stable is considered.]
Metallic elements generally have high electrical conductivity when in the solid state. In each row of the periodic table , the metals occur to the left of the nonmetals, and thus a metal has fewer possible valence electrons than a nonmetal. However, a valence electron of a metal atom has a small ionization energy , and in the solid-state this valence electron is relatively free to leave one atom in order to associate with another nearby. This situation characterises metallic bonding . Such a "free" electron can be moved under the influence of an electric field , and its motion constitutes an electric current ; it is responsible for the electrical conductivity of the metal. Copper , aluminium , silver , and gold are examples of good conductors.
A nonmetallic element has low electrical conductivity; it acts as an insulator . Such an element is found toward the right of the periodic table, and it has a valence shell that is at least half full (the exception is boron ). Its ionization energy is large; an electron cannot leave an atom easily when an electric field is applied, and thus such an element can conduct only very small electric currents. Examples of solid elemental insulators are diamond (an allotrope of carbon ) and sulfur . These form covalently bonded structures, either with covalent bonds extending across the whole structure (as in diamond) or with individual covalent molecules weakly attracted to each other by intermolecular forces (as in sulfur). (The noble gases remain as single atoms, but those also experience intermolecular forces of attraction, that become stronger as the group is descended: helium boils at −269 °C, while radon boils at −61.7 °C.)
A solid compound containing metals can also be an insulator if the valence electrons of the metal atoms are used to form ionic bonds . For example, although elemental sodium is a metal, solid sodium chloride is an insulator, because the valence electron of sodium is transferred to chlorine to form an ionic bond, and thus that electron cannot be moved easily.
A semiconductor has an electrical conductivity that is intermediate between that of a metal and that of a nonmetal; a semiconductor also differs from a metal in that a semiconductor's conductivity increases with temperature . The typical elemental semiconductors are silicon and germanium , each atom of which has four valence electrons. The properties of semiconductors are best explained using band theory , as a consequence of a small energy gap between a valence band (which contains the valence electrons at absolute zero) and a conduction band (to which valence electrons are excited by thermal energy).
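The temperature dependence described above can be illustrated with the standard band-theory relation for the intrinsic carrier density, n_i ∝ T^(3/2)·exp(−E_g / 2kT). This is a textbook formula rather than one stated in the article, and the silicon band gap used below (≈1.12 eV) is an approximate literature value:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def relative_carrier_density(band_gap_ev: float, t_kelvin: float) -> float:
    """Intrinsic carrier density up to a material-dependent constant:
    n_i ∝ T^(3/2) * exp(-Eg / (2 k T)), the standard band-theory result."""
    return t_kelvin ** 1.5 * math.exp(-band_gap_ev / (2 * K_B * t_kelvin))

# Silicon (Eg ~ 1.12 eV): warming from 300 K to 350 K multiplies the
# carrier density severalfold, which is why a semiconductor's
# conductivity increases with temperature, unlike a metal's.
ratio = relative_carrier_density(1.12, 350) / relative_carrier_density(1.12, 300)
print(ratio > 10)  # True
```

The same formula also shows why germanium (smaller gap, ≈0.67 eV) has far more thermally excited carriers than silicon at the same temperature.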
|
https://en.wikipedia.org/wiki/Valence_electron
|
In organic chemistry , two molecules are valence isomers when they are constitutional isomers that can interconvert through pericyclic reactions . [ 1 ] [ 2 ]
There are many valence isomers one can draw for the formula C₆H₆ of benzene. Some were originally proposed for benzene itself before its actual structure was known. Others were later synthesized in the laboratory. Some have been observed to isomerize to benzene, whereas others tend to undergo other reactions instead, or isomerize by routes other than pericyclic reactions.
Valence isomers are not restricted to isomers of benzene; they are also seen in the series (CH)₈. Due to the larger number of units, the number of possible valence isomers is also greater: at least 21 are known.
Perhaps no pair of valence isomers differ more strongly in appearance than colourless naphthalene and the intensely violet azulene.
|
https://en.wikipedia.org/wiki/Valence_isomer
|
Valencia Joyner Koomson is an American electrical engineer . She is an associate professor in the Department of Electrical and Computer Engineering with secondary appointments in the Department of Computer Science and the Jonathan M. Tisch College of Civic Life at Tufts University . She is the principal investigator for the Advanced Integrated Circuits and Systems Lab at Tufts University . [ 1 ]
Koomson was born in Washington, DC and graduated from Benjamin Banneker Academic High School . Her parents, Otis and Vernese Joyner, moved to Washington DC during the Great Migration after living for years as sharecroppers in Wilson County, North Carolina . Her family history can be traced back to the Antebellum South era. Her oldest known relative is Hagar Atkinson, an enslaved African woman whose name is recorded in the will of a plantation owner in Johnston County , North Carolina established in 1746. [ 2 ]
Koomson attended the Massachusetts Institute of Technology, graduating with a BS in electrical engineering and computer science in 1998 and a Master of Engineering in 1999. [3] She earned her Master of Philosophy from the University of Cambridge in 2000, followed by her PhD in electrical engineering from the same institution in 2003. [citation needed]
Koomson was an adjunct professor at Howard University from 2004 to 2005, and during that period was a Senior Research Engineer at the University of Southern California 's Information Sciences Institute (USC/ISI). She was a visiting professor at Rensselaer Polytechnic Institute and Boston University in 2008 and 2013, respectively. Koomson joined Tufts University in 2005 as an assistant professor and became an associate professor in 2011. In 2020, Koomson was named an MLK Visiting Professor at MIT for the academic year 2020/2021. [ 4 ]
Her Advanced Integrated Circuits and Systems Lab continues to research the design and implementation of innovative high-performance, low-power microsystems, with a focus on integrating heterogeneous devices and materials (optical, RF, bio/chemical) with silicon circuit architectures to address challenges in high-speed wireless communication, biomedical imaging, and sensing. [5] Recently, Koomson has focused on addressing racial bias in medical devices and algorithms, including the pulse oximeter, a device that became widely used by the public during the COVID-19 pandemic. [6] [7] [8] [9] [10] [11] She has been addressing this concern through the development of technology designed to measure a person's skin tone, allowing the pulse oximeter to emit more light so that individuals with higher melanin levels receive a more accurate reading. Koomson has also been actively engaged with policymakers and scientists, advocating for an FDA review of the biases linked to pulse oximeters. [12] This effort played a pivotal role in orchestrating an FDA forum, which gathered in late 2022 to address the issue. She told The Tufts Admission Magazine, "I spent one summer contacting our congressional delegation in Massachusetts to ensure lawmakers are aware of these issues and talking to their staff members who focus on health policy. Senator Warren led the charge in 2021 to urge the Food and Drug Administration (FDA) to review this." [13] In addition to her work with medical devices, Koomson played a crucial role in a collaborative team that developed a hybrid VLC/RF parking automation system. [14] [15] [16] [17] [18] [better source needed]
|
https://en.wikipedia.org/wiki/Valencia_Koomson
|
The Valency Interaction Formula (VIF) provides a way of drawing and interpreting the molecular structural formula based on molecular orbital theory. Valency points (VP), dots drawn on a page, represent valence orbitals. Valency interactions (VI), which connect the dots, show interactions between these valence orbitals. The theory was developed by the Turkish quantum chemist Oktay Sinanoğlu in the early 1980s and first published in 1983. It amounted to a new language for quantum mechanics, grounded in an exact treatment of Hilbert space. It also addressed the problem Paul Dirac was working on at the time of his death in 1984: the hidden symmetries of Hilbert space responsible for accidental degeneracies that do not arise from spatial symmetry. Sinanoğlu showed that a solution was possible only with the tools of topology. VIF theory also connected the delocalized and localized molecular orbital schemes into a unified form in an elegant way.
Chemical deductions are made from a VIF picture with the application of two pictorial rules. These are linear transformations applied to the VIF structural formula as a quantum operator . Transformation by the two rules preserves invariants crucial to the characterization of the molecular electronic properties, the numbers of bonding, non-bonding , and anti-bonding orbitals and/or the numbers of doubly, singly, and unoccupied valence orbitals. The two pictorial rules relate all pictures with the same electronic properties as characterized by these invariants.
A thorough presentation of VIF is available through the open-access journal Symmetry. [1]
|
https://en.wikipedia.org/wiki/Valency_interaction_formula
|
Valentin Fyodorovich Turchin ( Russian : Валенти́н Фёдорович Турчи́н , 14 February 1931 – 7 April 2010) was a Soviet and American physicist, cybernetician , and computer scientist. He developed the Refal programming language , the theory of metasystem transitions and the notion of supercompilation . He was a pioneer in artificial intelligence and a proponent of the global brain hypothesis.
Turchin was born in 1931 in Podolsk, Soviet Union. In 1952, he graduated from Moscow University with a degree in theoretical physics, and he received his Ph.D. in 1957. After working on neutron and solid-state physics at the Institute of Physics and Power Engineering in Obninsk, in 1964 he accepted a position at the Keldysh Institute of Applied Mathematics in Moscow. There he worked on statistical regularization methods and authored Refal, one of the first AI languages and the AI language of choice in the Soviet Union.
In the 1960s, Turchin became politically active. In the fall of 1968, he wrote the pamphlet The Inertia of Fear, which circulated quite widely in samizdat; in 1976 it began to circulate in Moscow under the title The Inertia of Fear: Socialism and Totalitarianism. [2] Following its publication in the underground press, he lost his research laboratory. [3] In 1970 he authored The Phenomenon of Science, [4] a grand cybernetic meta-theory of universal evolution, which broadened and deepened the earlier work. By 1973, Turchin had founded the Moscow chapter of Amnesty International with Andrey Tverdokhlebov and was working closely with the well-known physicist and Soviet dissident Andrei Sakharov. In 1974 he lost his position at the Institute and was persecuted by the KGB. Facing almost certain imprisonment, he and his family were forced to emigrate from the Soviet Union in 1977.
He went to New York, where he joined the faculty of the City College of New York in 1979. In 1990, together with Cliff Joslyn and Francis Heylighen , he founded the Principia Cybernetica Project , a worldwide organization devoted to the collaborative development of an evolutionary-cybernetic philosophy. In 1998, he co-founded the software start-up SuperCompilers, LLC. He retired from his post as Professor of Computer Science at City College in 1999. A resident of Oakland, New Jersey , [ 5 ] he died there on 7 April 2010. [ 1 ]
He had two sons: Peter Turchin (a specialist in population dynamics and the mathematical modeling of historical dynamics) and Dimitri Turchin.
The philosophical core of Turchin's scientific work is the concept of the metasystem transition , which denotes the evolutionary process through which higher levels of control emerge in system structure and function.
Turchin uses this concept to provide a global theory of evolution and a coherent social systems theory, to develop a complete cybernetic philosophical and ethical system, and to build a constructivist foundation for mathematics.
Using the Refal language, he implemented the supercompiler, a unified method for program transformation and optimization based on a metasystem transition. [6]
|
https://en.wikipedia.org/wiki/Valentin_Turchin
|
Valerie Charlton made props, models, and special effects for movies in the 1970s and 1980s, especially those by the Monty Python troupe and Terry Gilliam.
Her partner was Julian Doyle , who also worked on many of the same movies. [ 1 ]
|
https://en.wikipedia.org/wiki/Valerie_Charlton
|
Valerie Mizrahi FRS (born 1958) is a South African molecular biologist . [ 1 ]
The daughter of Morris and Etty Mizrahi, she was born in Harare , Zimbabwe and was educated there. Her family is a Sephardi Jewish family from the Greek island of Rhodes . [ 2 ]
She went on to earn a BSc in chemistry and mathematics and then a PhD in chemistry at the University of Cape Town. [3] From 1983 to 1986, she pursued post-doctoral studies at Pennsylvania State University. Mizrahi then worked in research and development for the pharmaceutical company Smith, Kline & French. [1] In 1989, she established a research unit at the South African Institute for Medical Research and the University of the Witwatersrand, remaining there until 2010. Her research has focused on the treatment of tuberculosis and on drug resistance. [4] In 2011, she became director of the Institute of Infectious Disease and Molecular Medicine at the University of Cape Town. [5] Mizrahi is director of a research unit of the South African Medical Research Council and leads the University of Cape Town branch of the Centre of Excellence in Biomedical TB Research. [3]
Mizrahi received the L'Oréal-UNESCO Award for Women in Science in 2000. In 2006, she received the Gold Medal from the South African Society for Biochemistry and Molecular Biology for her contributions to the field and the Department of Science and Technology's Distinguished Woman Scientist Award. She is a fellow of the Royal Society of South Africa , a member of the Academy of Science of South Africa [ 1 ] and a fellow of the American Academy of Microbiology since 2009. [ 6 ] She was named to the Order of Mapungubwe in 2007. From 2000 to 2010, she was an International Research Scholar of the Howard Hughes Medical Institute ; in 2012, she was named a Senior International Research Scholar for the Institute until 2017. [ 3 ] In 2013, she was awarded the Institut de France 's Christophe Mérieux Prize for her work in tuberculosis research. [ 7 ] Mizrahi was elected a Fellow of the Royal Society in 2023. [ 8 ]
Mizrahi has two daughters, and her father is the honorary president of the Johannesburg Sephardic Hebrew Congregation. She grew up speaking Judeo-Spanish at home. [2]
|
https://en.wikipedia.org/wiki/Valerie_Mizrahi
|
Valeriu Rudic (born 18 February 1947 in Talmaza ) is a Moldovan microbiologist , chemist , biochemist , and pharmacist . He is a titular member of the Academy of Sciences of Moldova and has made significant contributions to scientific research and the development of new biotechnological products.
Rudic was born on 18 February 1947 in the Moldovan village of Talmaza. [1] He graduated from high school with gold-medal honors and attended the Nicolae Testemițanu State University of Medicine and Pharmacy in Chișinău, where he earned his medical degree with distinction, completing his doctorate in medical science in 1974. [1]
Rudic began his scientific career as a researcher at the Institute of Hygiene and Epidemiology , later becoming a lecturer and then a professor of genetics at the Moldova State University . [ 1 ] Over his career, he has held many academic positions, including the Director of the Institute of Microbiology and Biotechnology of the Academy of Sciences of Moldova and the Head of the Department of Microbiology , Virology , and Immunology at the Nicolae Testemițanu State University of Medicine and Pharmacy . [ 1 ] He has also undertaken scientific internships , most notably at the University of California, Berkeley between 1982–1983. [ 1 ]
Rudic's work has led to the development of biotechnological products, the most notable being a biological preparation named BioR, a biologically active substance derived from the cyanobacterium Spirulina platensis, which is used in various pharmaceutical products due to its antioxidant and therapeutic properties. [2] [3] The request to register BioR as a trademark was met with an objection from the French company Dior, which claimed that BioR is very similar to its Parfums Christian Dior line and would cause confusion among customers. [4] Rudic's products also include Levobior ointment and Angenol gel for oral and maxillofacial conditions, as well as Imunobior and Aterobior, nutritional supplements with antioxidant and metabolic regulatory effects. [2]
Rudic has published over 1,270 scientific works, including seven monographs and 16 textbooks, and has registered around 300 patents in Moldova, Russia, and Romania. [1] Furthermore, he founded a scientific school in phycobiotechnology, through which he has supervised over 43 doctoral theses, including 9 habilitation doctorates. [1]
Rudic is the recipient of many national and international awards, medals, and honorary titles, including the Order of Honour and the Order of Work Glory in Moldova, [1] as well as being a titular member of the Academy of Sciences of Moldova. [5] He has also been awarded over 200 gold medals at international innovation and technology exhibitions, awards from Eureka, INPEX, and Palexpo, and prizes from the World Intellectual Property Organization and various other international bodies. [1]
|
https://en.wikipedia.org/wiki/Valeriu_Rudic
|
Valery I. Levitas is a Ukrainian mechanics and materials scientist, academic, and author. He is an Anson Marston Distinguished Professor and Murray Harpole Chair in Engineering at Iowa State University [1] and was a faculty scientist at the Ames National Laboratory. [2]
Levitas is best known for his work on the mechanics of materials and on stress- and strain-induced phase transformations and chemical reactions. His publications have appeared in academic journals including Science, Nature Communications, and Nano Letters, [3] as well as in monographs such as Large Deformation of Materials with Complex Rheological Properties at Normal and High Pressure. [4] He is the recipient of the 2018 Khan International Award for outstanding contributions to the field of plasticity. [5]
Levitas earned his M.S. in mechanical engineering from Kiev Polytechnic Institute in 1978, followed by a PhD in materials science and engineering from the Institute for Superhard Materials in 1981. In 1988, he completed a Doctor of Science degree in continuum mechanics at the Institute of Electronic Machine Building, and in 1995 he obtained his Doctor-Engineer habilitation in continuum mechanics from the University of Hannover. [1]
Levitas began his academic career in 1978 at the Institute for Superhard Materials of the Ukrainian Academy of Sciences in Kiev, serving as an engineer from 1978 to 1981 and then as a junior researcher from 1981 to 1984. During his tenure at the institute, he led a research group of researchers and students from 1982 to 1994, while also holding the positions of senior researcher from 1984 to 1988 and leading researcher from 1989 to 1994. He additionally founded the private research firm Strength in 1988. From 1993 to 1995, he was a Humboldt Research Fellow at the Institute of Structural and Computational Mechanics at the University of Hannover; from 1995 to 1999, he continued at the same institution as a research and visiting professor. In 1999, he moved to Texas Tech University, where he was an associate professor in the Department of Mechanical Engineering until 2002, and then a professor until 2008. He was also the founding director of the Center for Mechanochemistry and Synthesis of New Materials from 2002 until 2007. From 2008 to 2017, he served as the Schafer 2050 Challenge Professor in both the Department of Aerospace Engineering and the Department of Mechanical Engineering at Iowa State University. [1] Between 2017 and 2023, he was the Vance Coffman Faculty Chair Professor in Aerospace Engineering, and since 2023 he has held the Murray Harpole Chair in Engineering; he has also been an Anson Marston Distinguished Professor in Engineering since 2018, all in the same departments. In addition, he served as a faculty scientist at the Ames National Laboratory within the US Department of Energy from 2008 to 2023. [2]
Since 2002 he has also run the research and consulting firm Material Modeling. [ 2 ]
Levitas' research has focused on the interplay between plasticity and phase transformations across various scales through the creation of various methodologies. [6] [7] He pioneered the field of theoretical high-pressure mechanochemistry [8] through the development of a comprehensive four-scale theory and simulations [7] spanning from first principles [9] and molecular dynamics [10] to nano- and microscale phase-field approaches [11] [12] and macroscale treatment. [13] His work includes coupling theoretical frameworks with quantitative in-situ experiments using synchrotron radiation facilities to investigate phase transformations and plastic flow in various materials under high pressure and large deformations. [10] [11] These efforts resulted in the identification of novel phenomena and phases, methods for controlling phase transformations, and the search for new high-pressure materials. Additionally, his research has contributed to the determination of material properties such as transformational, structural, deformational, and frictional characteristics from high-throughput heterogeneous sample fields. [14] [15] His research team discovered and harnessed the phenomenon of "rotational plastic instability" to lower the pressure required for producing superhard cubic BN from 55 to 5.6 GPa. [16] In addition, their theoretical insights enabled a reduction in the graphite-to-diamond transformation pressure from 70 to 0.7 GPa through shear-induced plasticity. [17] Moreover, his team unveiled a new amorphous phase of SiC, [18] the self-blow-up phase transformation-induced plasticity-heating process explaining deep-focus earthquakes, [19] the pressure self-focusing effect, [20] and virtual melting at temperatures as much as 5000 K below the melting point as a novel mechanism of solid phase transformation, stress relaxation, and plastic flow. [21] [22] Furthermore, his group introduced a mechanochemical melt-dispersion mechanism to explain unusual phenomena in the combustion of Al particles at nano and micro scales, proposing significant advancements in particle synthesis, including the creation of prestressed particles, to enhance their energetic performance. [23] He also advanced the phase-field approach to various phase transformations, dislocation evolution, fracture, surface-induced phenomena, and their interactions by introducing advanced mechanics, large-strain formulations, and strict requirements, and by extending it to larger sample scales. [6] [11] [12]
Levitas holds patents for 11 inventions, mostly related to the development of high-pressure apparatuses for diamond synthesis and physical studies, including a rotational diamond anvil cell. [ 24 ]
|
https://en.wikipedia.org/wiki/Valery_I._Levitas
|
Valid Time Event Code ( VTEC ) is a code used by the National Weather Service , a part of National Oceanic and Atmospheric Administration (NOAA) of the United States government , to identify products / events. [ 1 ]
|
https://en.wikipedia.org/wiki/Valid_Time_Event_Code
|
Validation and verification are procedures that ensure that medical devices fulfil their intended purpose. Validation or verification is generally needed when a health facility acquires a new device to perform medical tests .
The main difference between the two is that validation is focused on ensuring that the device meets the needs and requirements of its intended users and the intended use environment, whereas verification is focused on ensuring that the device meets its specified design requirements.
For instance, a regulatory agency (such as CE or FDA ) may ensure that a product has been validated for general use before approval. An individual laboratory that introduces such an approved medical device may then not need to perform its own validation, but it generally still needs to perform verification to ensure that the device works correctly. [ 1 ]
Standards for validation and verification of medical laboratories are outlined in the international standard ISO 15189 , in addition to national and regional regulations. [ 1 ]
As per United States federal regulations, the following analytical tests need to be done by a medical laboratory that introduces a new testing device:
To establish a reference range, the Clinical and Laboratory Standards Institute (CLSI) recommends testing at least 120 patient samples. In contrast, for the verification of a reference range, it is recommended to use a total of 40 samples, 20 from healthy men and 20 from healthy women, and the results should be compared to the published reference range. The results should be evenly spread throughout the published reference range rather than clustered at one end. The published reference range can be accepted for use if 95% of the results fall within it. Otherwise, the laboratory needs to establish its own reference range. [ 4 ]
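The 40-sample verification rule above can be sketched in a few lines. This is a minimal illustration, not a validated clinical tool: the function name and the example sodium values are invented for this sketch, and the CLSI expectation that results be evenly spread across the range would need a separate check.

```python
# Verify a published reference range: accept it if at least 95% of the
# laboratory's own results (CLSI suggests 40: 20 men, 20 women) fall inside.
def verify_reference_range(results, low, high, threshold=0.95):
    """Return (fraction_within, accepted) for a published range [low, high]."""
    within = sum(1 for x in results if low <= x <= high)
    fraction = within / len(results)
    return fraction, fraction >= threshold

# Hypothetical example: 40 serum sodium results (mmol/L) checked against a
# published range of 135-145 mmol/L. Values are fabricated for illustration.
results = [136, 138, 140, 142, 144] * 8   # 40 results, spread across the range
fraction, accepted = verify_reference_range(results, 135, 145)
print(f"{fraction:.0%} within range; accepted: {accepted}")
```

With 40 samples the 95% criterion means at most two results may fall outside the published range; a third out-of-range result tips the decision toward establishing a local range instead.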
|
https://en.wikipedia.org/wiki/Validation_and_verification_(medical_devices)
|
The Valley of Peace initiative is an effort to promote economic cooperation between Israel , Jordan , and Palestine based around efforts and joint projects in the Arava/Arabah Valley , along which runs the southern portion of the Israel - Jordan border. It received the personal attention and support of Shimon Peres , President of Israel . The initiative involved ongoing joint efforts by regional leaders to launch joint new industrial and economic projects, which will create new local businesses and job growth, and promote ongoing cooperation. [ 1 ] [ 2 ] [ 3 ]
This effort also fits with other new trends and efforts within Israeli and Palestinian society to promote reconciliation based on joint economic effort and dialogue between both groups.
One major component of this plan is the construction and operation of Qualifying Industrial Zones . These are industrial facilities within Jordan and Egypt which can serve as centers of collaborative effort.
The idea for this project began in 2005, when Israel, Jordan and the Palestinian Authority asked the World Bank to analyze the feasibility of this idea. [ 4 ]
The formal proposal for the Valley of Peace initiative began with a joint proposal in 2008 to build a canal between the Red and Dead Seas, desalinating the water, producing hydroelectric power and yielding profits, clean water, jobs and potentially unprecedented regional cooperation. [ 3 ] The study concluded in 2013, and an agreement was signed in 2013 by Israel, Jordan, and the Palestinian Authority to move ahead with the plan. [ 4 ]
In February 2015, Israel and Jordan signed an agreement to exchange water and jointly convey Red Sea brine to the shrinking Dead Sea. The agreement was reported to be worth about $800 million. It was the result of a memorandum of understanding signed among Israeli, Jordanian and Palestinian officials on December 9, 2013, in Washington. Under this agreement, Jordan and Israel will share the potable water produced by a future desalination plant in Aqaba, while a pipeline will supply saltwater to the Dead Sea. [ 5 ] [ 6 ]
In December 2015, Israel and Jordan formally released the technical plans to move ahead with this project. [ 4 ]
A new desalination plant to be built near the Jordanian tourist resort of Aqaba would convert salt water from the Red Sea into fresh water for use in southern Israel and southern Jordan; each region would get eight billion to 13 billion gallons a year. This process would produce about as much brine as a waste product; the brine would be piped more than 100 miles to help replenish the Dead Sea, already known for its high salt content. This would reinforce the status of the Dead Sea as an important economic resource to both nations, in multiple areas including tourism, industry and business. [ 4 ] [ 6 ]
In July 2017, Israel and the Palestinian Authority announced a new deal to provide drinking water for millions of Palestinians. This was part of the larger deal between Israel, Jordan and Palestine to build a 220-kilometer (137-mile) pipeline to convey water from the Red Sea to the Dead Sea. One benefit of this canal would be to replenish the dwindling Dead Sea. The water in the canal would also generate electricity for local towns and power a desalination plant to produce drinking water. [ 7 ] [ 8 ]
The effort first got attention in 2008, when various regional leaders began to promote this set of ideas.
One major part of the plan includes the private sector development of a $3 billion, 166 km-long (103-mile) canal system along the Arava, known as the Two Seas Canal . This engineering scheme would bring Red Sea water to the Dead Sea and could provide additional projects and benefits to the region and increase cooperation between Israelis , Jordanians , and Palestinians , through greater development and economic integration. [ 1 ] [ 9 ] [ 10 ] Some environmentalists have criticized the plan, saying that rehabilitation of the Jordan River would be a better way to save the Dead Sea, and would bring less disruption. [ 11 ] [ 12 ]
This Valley of Peace is part of a 20-kilometer [23-mile] corridor being proposed by Israeli President Shimon Peres for regional economic development. About 420 km of the corridor runs along the Jordanian border, with no border fences, and another 100 km touches on the Palestinian territories. Other projects involve the German , Japanese , and Turkish governments and are slated to create up to a million new jobs in Israel and the West Bank. [ 1 ]
As first proposed in 2008, the plan encompassed a number of items. Some possible future developments along the canal may include convention centers, hotels for up to 200,000 people, restaurants, parks, and artificial lakes and lagoons, and greenhouses for winter fruits and vegetables. A high-speed train line and highway would run along the canal allowing travel between the Dead and Red Seas within an hour. The area may also become a free-trade zone, thus attracting investment from around the world. [ 1 ]
The canal might also include a major desalination plant. In May 2008, it was announced that this project was getting close to being implemented. [ 1 ] [ 3 ]
PIEFZA is a Palestinian economic organization designed to promote participation in the industrial parks which will be created by this effort. [ 13 ] The project will also include a number of other separate efforts and projects, including:
In July 2006, Japan announced a plan for peace called " Corridor for Peace and Prosperity ", which would be based on common economic development and effort, rather than on continuous contention over land. [ 31 ] Shimon Peres gave this idea much attention during his participation in an international conference in New York in September 2006 which was organized by former U.S. President Bill Clinton . [ 32 ]
In March 2007, at a two-day conference in Tokyo which included officials from Japan , Israel and the Palestinian Authority , Japan discussed its plan for peace based on common economic development and effort. Both sides stated their support. [ 33 ]
In March 2007, the Israeli Cabinet officially decided to adopt the Peace Valley plan , which would entail promotion of and cooperation on economic development for Palestinians. [ 33 ] [ 34 ] [ 35 ] However, some news reports indicated there was little chance of movement due to lack of attention by Prime Minister Ehud Olmert and the government of Israel. [ 36 ]
In his inaugural speech in July 2007, Peres mentioned this effort, and asserted that there was great potential for cooperation among Israel, Palestinians, and Jordan . He also noted this might mean positive support from Persian Gulf states. [ 37 ] In August 2007, Peres met with several Israeli businessmen to discuss ways to press the plan forward. [ 38 ] Peres stated that the plan might have many positive effects which might help promote peace. [ 39 ]
In August 2007, Foreign Ministers of Israel, Jordan, the Palestinian Authority, and Japan met in Jericho , and formally agreed to go ahead with this plan. [ 40 ] A ceremony took place that month in Jericho formally launching the project. It was attended by Israeli Foreign Minister Tzipi Livni , Palestinian negotiator Saeb Erekat , Japanese Foreign Minister Taro Aso and Jordanian Foreign Minister Abdul-Ilah Khatib . [ 41 ]
In January 2008, Peres announced that the plan had moved closer to realization, as new details were announced for implementation of joint economic effort in four locations in the West Bank . This included specific plans for industrial projects, a jointly built university, and investments from several countries, including Japan, Turkey and Germany . [ 2 ] Peres discussed this with Tony Blair during Blair's visit to the Mideast in February 2008. [ 42 ] Peres said that efforts were moving ahead. [ 43 ]
USAID and the World Bank have reviewed many of the specific proposals in depth, and issued a critique of many strengths and weaknesses of the plan. [ 44 ] In May 2008 Tony Blair announced a new plan for peace and for Palestinian rights, based heavily on the ideas on the Peace Valley plan. [ 45 ]
In May 2008, Peres hosted a conference in celebration of Israel's 60th anniversary, called "Facing Tomorrow". [ 46 ] [ 47 ] He addressed numerous issues related to Israel's future. He discussed the Peace Valley initiative with numerous foreign leaders. [ 48 ] President George Bush expressed support for the idea. [ 49 ] Peres said that the initiative could bring lasting peace and transformation to the region. Regarding Palestinians, he said,
"They haven't established a proper government and they don't have an army. We can't unite them and we can't divide them. We can't help them politically. We can only help them economically. Today, it's possible to coordinate economic aid with both the Jordanians and the Palestinians." [ 48 ]
Benjamin Netanyahu , a former Finance Minister of Israel and the former Prime Minister of Israel, repeatedly made public statements during the 2009 Israeli elections advocating an approach to peace based on economic cooperation and joint effort, rather than continuous contention over political and diplomatic issues. [ 50 ] He raised these ideas in discussions with U.S. Secretary of State Condoleezza Rice . [ 51 ] Netanyahu continued to advocate these ideas as the Israeli elections drew nearer, and planned to execute them after assuming office. [ 52 ]
Netanyahu has said:
"Right now, the peace talks are based on only one thing, only on peace talks. It makes no sense at this point to talk about the most intractable issues. It's Jerusalem or bust, or right of return or bust. That has led to failure and is likely to lead to failure again....We must weave an economic peace alongside a political process. That means that we have to strengthen the moderate parts of the Palestinian economy by handing rapid growth in those areas, rapid economic growth that gives a stake for peace for the ordinary Palestinians." [ 50 ]
Similarly, in a Jerusalem Post interview, Tony Blair, the special envoy for the Quartet, said in May 2009:
Question: ...we're hearing about a determination to build from the bottom up with the Palestinians, including assurances that economic projects that had been stymied will now be advanced...
Blair: ...you have to build from the bottom up as well as negotiate from the top down...because once you take the three "headings" - politics, economics and security... Each of these things take decisions...it will become apparent, whether Israelis are prepared to build from the bottom up, and whether Palestinians understand that Israel will only tolerate a Palestinian state that is a stable and secure neighbor...
...people ask me, why are you bothered about whether there's a bit of agri-industrial thing around Jericho. And I say, because it matters. The detail on the ground really matters. Just supposing you've [created the conditions] in the Jericho area to exploit the [tourism] potential it has got. You're creating a whole set of stake-holders who, when it comes to those difficult concessions, are going to say, "We want the state." They are then believing in a reality, not a shibboleth... [ 53 ]
The Bethlehem Small Enterprise Center opened in early 2008 with funding from Germany, and has helped Palestinian small businesses in various areas, such as helping printers to improve software and olive wood craftsmen to market their products. [ 54 ]
In 2008, the economy in the West Bank improved gradually. Economic growth for the occupied areas reached about 4-5% and unemployment dropped about 3%. Israeli figures indicated that wages in the West Bank rose more than 20% in 2008 and trade rose about 35%. Tourism in Bethlehem increased to about twice its previous levels, and tourism increased by 50% in Jericho. [ 54 ] In 2009, an economic boom began with growth reaching 7 percent, higher than in Israel or the West. Tourism to Bethlehem, which had doubled to 1 million in 2008, rose to nearly 1.5 million in 2009. New car imports increased by 44 percent. New shopping malls opened in Jenin and Nablus. Palestinian developers are planning to build the first modern Palestinian city, Rawabi . [ 55 ]
In 2009, efforts continued to build Palestinian local institutions and governments from the ground up. Much of this work was done by Tony Blair and U.S. General Keith Dayton . Some analysts saw this as a more substantial way to lay a groundwork for viable institutions and for local peace. [ 56 ]
|
https://en.wikipedia.org/wiki/Valley_of_Peace_initiative
|
In nuclear physics , the valley of stability (also called the belt of stability , nuclear valley , energy valley , or beta stability valley ) is a characterization of the stability of nuclides to radioactivity based on their binding energy. [ 1 ] Nuclides are composed of protons and neutrons . The shape of the valley refers to the profile of binding energy as a function of the numbers of neutrons and protons, with the lowest part of the valley corresponding to the region of most stable nuclei . [ 2 ] The line of stable nuclides down the center of the valley of stability is known as the line of beta stability . The sides of the valley correspond to increasing instability to beta decay (β − or β + ). The decay of a nuclide becomes more energetically favorable the further it is from the line of beta stability. The boundaries of the valley correspond to the nuclear drip lines , where nuclides become so unstable they emit single protons or single neutrons . Regions of instability within the valley at high atomic number also include radioactive decay by alpha radiation or spontaneous fission . The shape of the valley is roughly an elongated paraboloid corresponding to the nuclide binding energies as a function of neutron and atomic numbers. [ 1 ]
The nuclides within the valley of stability encompass the entire table of nuclides . The chart of those nuclides is also known as a Segrè chart, after the physicist Emilio Segrè . [ 3 ] The Segrè chart may be considered a map of the nuclear valley. The region of proton and neutron combinations outside of the valley of stability is referred to as the sea of instability. [ 4 ] [ 5 ]
Scientists have long searched for long-lived heavy isotopes outside of the valley of stability, [ 6 ] [ 7 ] [ 8 ] whose existence was hypothesized by Glenn T. Seaborg in the late 1960s. [ 9 ] [ 10 ] These relatively stable nuclides are expected to have particular configurations of " magic " atomic and neutron numbers , and to form a so-called island of stability .
All atomic nuclei are composed of protons and neutrons bound together by the nuclear force . There are 286 primordial nuclides that occur naturally on earth, each corresponding to a unique number of protons, called the atomic number , Z , and a unique number of neutrons, called the neutron number , N . The mass number , A , of a nuclide is the sum of atomic and neutron numbers, A = Z + N . Not all nuclides are stable, however. According to Byrne, [ 3 ] stable nuclides are defined as those having a half-life greater than 10 18 years, and there are many combinations of protons and neutrons that form nuclides that are unstable. A common example of an unstable nuclide is carbon-14, which decays by beta decay into nitrogen-14 with a half-life of about 5,730 years:

14 C → 14 N + e − + ν̄ e
In this form of decay, the original element becomes a new chemical element in a process known as nuclear transmutation and a beta particle and an electron antineutrino are emitted. An essential property of this and all nuclide decays is that the total energy of the decay product is less than that of the original nuclide. The difference between the initial and final nuclide binding energies is carried away by the kinetic energies of the decay products, often the beta particle and its associated neutrino. [ 3 ]
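The 5,730-year half-life above implies the usual exponential decay law: the fraction of the original nuclide remaining after time t is 0.5 raised to t divided by the half-life. A minimal sketch:

```python
# Fraction of a radioactive nuclide remaining after t years, given its
# half-life: 0.5 ** (t / t_half). The 5,730-year value is carbon-14's
# half-life from the text above.
def fraction_remaining(t_years, t_half_years=5730.0):
    return 0.5 ** (t_years / t_half_years)

for t in (5730, 11460, 17190):
    print(t, "years:", round(fraction_remaining(t), 4))
```

After one half-life half the carbon-14 remains, after two half-lives a quarter, and so on; this is the basis of radiocarbon dating.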
The concept of the valley of stability is a way of organizing all of the nuclides according to binding energy as a function of neutron and proton numbers. [ 1 ] Most stable nuclides have roughly equal numbers of protons and neutrons, so the line for which Z = N forms a rough initial line defining stable nuclides. The greater the number of protons, the more neutrons are required to stabilize a nuclide; nuclides with larger values for Z require an even larger number of neutrons, N > Z , to be stable. The valley of stability is formed by the negative of binding energy, the binding energy being the energy required to break apart the nuclide into its proton and neutron components. The stable nuclides have high binding energy, and these nuclides lie along the bottom of the valley of stability. Nuclides with weaker binding energy have combinations of N and Z that lie off the line of stability and further up the sides of the valley. Unstable nuclides can be formed in nuclear reactors or supernovas , for example. Such nuclides often decay in sequences of reactions called decay chains that take the resulting nuclides sequentially down the slopes of the valley of stability. The sequence of decays takes nuclides toward greater binding energies, and the nuclides terminating the chain are stable. [ 1 ] The valley of stability provides both a conceptual approach for organizing the myriad stable and unstable nuclides into a coherent picture and an intuitive way to understand how and why sequences of radioactive decay occur. [ 1 ]
The protons and neutrons that comprise an atomic nucleus behave almost identically within the nucleus. The approximate symmetry of isospin treats these particles as identical, but in a different quantum state. This symmetry is only approximate, however, and the nuclear force that binds nucleons together is a complicated function depending on nucleon type, spin state, electric charge, momentum, etc., with contributions from non- central forces . The nuclear force is not a fundamental force of nature, but a consequence of the residual effects of the strong force that surround the nucleons. One consequence of these complications is that although deuterium , a bound state of a proton (p) and a neutron (n), is stable, exotic nuclides such as the diproton or dineutron are unbound. [ 11 ] The nuclear force is not sufficiently strong to form either p-p or n-n bound states, or equivalently, the nuclear force does not form a potential well deep enough to bind these identical nucleons. [ citation needed ]
Stable nuclides require approximately equal numbers of protons and neutrons. The stable nuclide carbon-12 ( 12 C) is composed of six neutrons and six protons, for example. Protons have a positive charge, hence within a nuclide with many protons there are large repulsive forces between protons arising from the Coulomb force . By acting to separate protons from one another, the neutrons within a nuclide play an essential role in stabilizing nuclides. With increasing atomic number, even greater numbers of neutrons are required to obtain stability. The heaviest stable element, lead (Pb), has many more neutrons than protons. The stable nuclide 206 Pb has Z = 82 and N = 124, for example. For this reason, the valley of stability does not follow the line Z = N for A larger than 40 ( Z = 20 is the element calcium ). [ 3 ] Neutron number increases along the line of beta stability at a faster rate than atomic number.
The line of beta stability follows a particular curve of neutron–proton ratio , corresponding to the most stable nuclides. On one side of the valley of stability, this ratio is small, corresponding to an excess of protons over neutrons in the nuclides. These nuclides tend to be unstable to β + decay or electron capture, since such decay converts a proton to a neutron. The decay serves to move the nuclides toward a more stable neutron-proton ratio. On the other side of the valley of stability, this ratio is large, corresponding to an excess of neutrons over protons in the nuclides. These nuclides tend to be unstable to β − decay, since such decay converts neutrons to protons. On this side of the valley of stability, β − decay also serves to move nuclides toward a more stable neutron-proton ratio.
The mass of an atomic nucleus is given by

m = Z m p + N m n − E B /c^2

where m p and m n are the rest masses of a proton and a neutron, respectively, and E B is the total binding energy of the nucleus. The mass–energy equivalence is used here. The binding energy is subtracted from the sum of the proton and neutron masses because the mass of the nucleus is less than that sum. This property, called the mass defect , is necessary for a stable nucleus; within a nucleus, the nucleons are trapped by a potential well . A semi-empirical mass formula states that the binding energy will take the form

E B = a V A − a S A^(2/3) − a C Z(Z − 1)/A^(1/3) − a A (N − Z)^2/A + δ(A, Z)

E B is often divided by the mass number to obtain the binding energy per nucleon for comparisons of binding energies between nuclides. Each of the terms in this formula has a theoretical basis. The coefficients a V , a S , a C , a A , and a coefficient that appears in the formula for δ(A, Z) are determined empirically.
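A minimal numeric sketch of the semi-empirical mass formula follows. The coefficient values (in MeV) are one common empirical fit and are an assumption of this example; published fits differ slightly.

```python
# Semi-empirical mass formula (SEMF) coefficients in MeV: volume, surface,
# Coulomb, asymmetry, and pairing terms. Values are an assumed common fit.
A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(Z, A):
    """Total binding energy E_B in MeV for a nuclide with Z protons, A nucleons."""
    N = A - Z
    delta = 0.0
    if Z % 2 == 0 and N % 2 == 0:
        delta = A_P / A**0.5       # even-even: extra binding
    elif Z % 2 == 1 and N % 2 == 1:
        delta = -A_P / A**0.5      # odd-odd: reduced binding
    return (A_V * A
            - A_S * A**(2 / 3)
            - A_C * Z * (Z - 1) / A**(1 / 3)
            - A_A * (N - Z)**2 / A
            + delta)

for name, Z, A in [("Fe-56", 26, 56), ("U-238", 92, 238)]:
    print(name, round(binding_energy(Z, A) / A, 2), "MeV per nucleon")
```

With these coefficients the formula gives roughly 8.8 MeV per nucleon for iron-56 and 7.6 MeV per nucleon for uranium-238, close to the measured values and consistent with iron lying near the bottom of the valley.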
The binding energy expression gives a quantitative estimate for the neutron–proton ratio. The energy is a quadratic expression in Z that is minimized when the neutron–proton ratio is N/Z ≈ 1 + (a C /2 a A ) A^(2/3). This equation for the neutron–proton ratio shows that in stable nuclides the number of neutrons is greater than the number of protons by a factor that scales as A^(2/3).
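The minimizing ratio above is easy to evaluate numerically. The coefficient values (in MeV) are an assumed common fit, as in any simple SEMF estimate, and the predicted proton number follows from Z = A/(1 + N/Z):

```python
# Most-stable neutron-proton ratio from the SEMF minimization:
# N/Z ≈ 1 + (a_C / 2 a_A) * A**(2/3). Coefficient values are assumed.
a_C, a_A = 0.711, 23.7

def stable_ratio(A):
    return 1 + (a_C / (2 * a_A)) * A**(2 / 3)

for A in (16, 56, 125, 238):
    ratio = stable_ratio(A)
    Z = round(A / (1 + ratio))   # since N = A - Z and N/Z = ratio
    print("A =", A, " N/Z ≈", round(ratio, 3), " predicted Z ≈", Z)
```

For light nuclides the ratio stays near 1 (oxygen-16), while for A = 125 it is about 1.375 and for A = 238 about 1.58; the predicted Z lands on, or within one unit of, the actual most-stable element at each mass number (O, Fe, Te, U).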
The figure at right shows the average binding energy per nucleon as a function of atomic mass number along the line of beta stability, that is, along the bottom of the valley of stability. For very small atomic mass number (H, He, Li), binding energy per nucleon is small, and this energy increases rapidly with atomic mass number. Nickel-62 (28 protons, 34 neutrons) has the highest mean binding energy of all nuclides, while iron-58 (26 protons, 32 neutrons) and iron-56 (26 protons, 30 neutrons) are a close second and third. [ 13 ] These nuclides lie at the very bottom of the valley of stability. From this bottom, the average binding energy per nucleon slowly decreases with increasing atomic mass number. The heavy nuclide 238 U is not stable, but is slow to decay with a half-life of 4.5 billion years. [ 1 ] It has relatively small binding energy per nucleon.
For β − decay, nuclear reactions have the generic form

(A, Z) X → (A, Z + 1) X′ + e − + ν̄ e

where A and Z are the mass number and atomic number of the decaying nucleus, and X and X′ are the initial and final nuclides, respectively. For β + decay, the generic form is

(A, Z) X → (A, Z − 1) X′ + e + + ν e
These reactions correspond to the decay of a neutron to a proton, or the decay of a proton to a neutron, within the nucleus, respectively. These reactions begin on one side or the other of the valley of stability, and the directions of the reactions are to move the initial nuclides down the valley walls towards a region of greater stability, that is, toward greater binding energy.
The figure at right shows the average binding energy per nucleon across the valley of stability for nuclides with mass number A = 125. [ 15 ] At the bottom of this curve is tellurium ( 52 Te), which is stable. Nuclides to the left of 52 Te are unstable with an excess of neutrons, while those on the right are unstable with an excess of protons. A nuclide on the left therefore undergoes β − decay, which converts a neutron to a proton, hence shifts the nuclide to the right and toward greater stability. A nuclide on the right similarly undergoes β + decay, which shifts the nuclide to the left and toward greater stability.
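The A = 125 cross-section of the valley can be sketched by scanning Z at fixed A and finding the minimum of a semi-empirical mass (nucleon rest energies minus binding energy). The coefficients and rest-mass values (all in MeV) are assumptions of this sketch, as in any simple SEMF fit:

```python
# Scan the A = 125 isobar for the proton number of minimum mass, i.e. the
# bottom of the valley cross-section. Coefficient values (MeV) are assumed.
A_V, A_S, A_C, A_A = 15.75, 17.8, 0.711, 23.7
M_P, M_N = 938.272, 939.565   # proton and neutron rest energies, MeV

def mass(Z, A):
    N = A - Z
    e_b = (A_V * A - A_S * A**(2 / 3)
           - A_C * Z * (Z - 1) / A**(1 / 3)
           - A_A * (N - Z)**2 / A)      # pairing term is 0 for odd A
    return Z * M_P + N * M_N - e_b

A = 125
best_z = min(range(40, 65), key=lambda Z: mass(Z, A))
print("most stable Z for A =", A, "is about", best_z)
```

With these coefficients the minimum lands within one unit of the actual stable isobar 125 Te (Z = 52); such one-unit offsets are typical of the simple formula. Nuclides on either side of the minimum sit higher on the parabola and β-decay toward it, exactly as described above.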
Heavy nuclides are susceptible to α decay, and these nuclear reactions have the generic form

(A, Z) X → (A − 4, Z − 2) X′ + 4 He
As in β decay, the decay product X′ has greater binding energy and it is closer to the middle of the valley of stability. The α particle carries away two neutrons and two protons, leaving a lighter nuclide. Since heavy nuclides have many more neutrons than protons, α decay increases a nuclide's neutron-proton ratio.
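Whether α decay is energetically allowed can be estimated from binding energies: Q ≈ E B (daughter) + E B (α) − E B (parent). In the sketch below the SEMF coefficients (MeV) are an assumed common fit, and the α-particle binding energy of 28.3 MeV is the measured value for helium-4:

```python
# Estimate the alpha-decay Q-value from SEMF binding energies.
A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18
E_B_ALPHA = 28.3   # measured binding energy of He-4, MeV

def binding_energy(Z, A):
    N = A - Z
    delta = 0.0
    if Z % 2 == 0 and N % 2 == 0:
        delta = A_P / A**0.5
    elif Z % 2 == 1 and N % 2 == 1:
        delta = -A_P / A**0.5
    return (A_V * A - A_S * A**(2 / 3)
            - A_C * Z * (Z - 1) / A**(1 / 3)
            - A_A * (N - Z)**2 / A + delta)

def q_alpha(Z, A):
    """Q-value for (A, Z) X -> (A-4, Z-2) X' + alpha, in MeV."""
    return binding_energy(Z - 2, A - 4) + E_B_ALPHA - binding_energy(Z, A)

print("U-238:", round(q_alpha(92, 238), 1), "MeV")   # positive: alpha decay allowed
print("Fe-56:", round(q_alpha(26, 56), 1), "MeV")    # negative: forbidden
```

With these values the estimate for 238 U comes out a few MeV positive (the measured Q is about 4.27 MeV), while for 56 Fe it is clearly negative, which is why α decay is confined to heavy nuclides.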
The boundaries of the valley of stability, that is, the upper limits of the valley walls, are the neutron drip line on the neutron-rich side, and the proton drip line on the proton-rich side. The nucleon drip lines are at the extremes of the neutron-proton ratio. At neutron–proton ratios beyond the drip lines, no nuclei can exist. The location of the neutron drip line is not well known for most of the Segrè chart, whereas the proton and alpha drip lines have been measured for a wide range of elements. Drip lines are defined for protons, neutrons, and alpha particles, and these all play important roles in nuclear physics.
The difference in binding energy between neighboring nuclides increases as the sides of the valley of stability are ascended, and correspondingly the nuclide half-lives decrease, as indicated in the figure above. If one were to add nucleons one at a time to a given nuclide, the process would eventually lead to a newly formed nuclide so unstable that it promptly decays by emitting a proton (or neutron). Colloquially speaking, the nucleon has 'leaked' or 'dripped' out of the nucleus, giving rise to the term "drip line".
Proton emission is not seen in naturally occurring nuclides. Proton emitters can be produced via nuclear reactions , usually utilizing linear particle accelerators (linac). Although prompt (i.e. not beta-delayed) proton emission was observed from an isomer in cobalt-53 as early as 1969, no other proton-emitting states were found until 1981, when the proton radioactive ground states of lutetium-151 and thulium-147 were observed at experiments at the GSI in West Germany. [ 16 ] Research in the field flourished after this breakthrough, and to date more than 25 nuclides have been found to exhibit proton emission. The study of proton emission has aided the understanding of nuclear deformation, masses and structure, and it is an example of quantum tunneling .
Two examples of nuclides that emit neutrons are beryllium-13 (mean life 2.7 × 10 −21 s ) and helium-5 ( 7 × 10 −22 s ). Since only a neutron is lost in this process, the atom does not gain or lose any protons, and so it does not become an atom of a different element. Instead, the atom will become a new isotope of the original element, such as beryllium-13 becoming beryllium-12 after emitting one of its neutrons. [ 17 ]
In nuclear engineering , a prompt neutron is a neutron emitted almost instantaneously by a nuclear fission event, emerging directly from the fission of an unstable fissionable or fissile heavy nucleus. Delayed neutrons , by contrast, are emitted after beta decay of one of the fission products , at times ranging from a few milliseconds to a few minutes after the fission event. [ 18 ] The U.S. Nuclear Regulatory Commission defines a prompt neutron as a neutron emerging from fission within 10 −14 seconds. [ 19 ]
The island of stability is a region outside the valley of stability where it is predicted that a set of heavy isotopes with near magic numbers of protons and neutrons will locally reverse the trend of decreasing stability in elements heavier than uranium .
The hypothesis for the island of stability is based upon the nuclear shell model , which implies that the atomic nucleus is built up in "shells" in a manner similar to the structure of the much larger electron shells in atoms. In both cases, shells are just groups of quantum energy levels that are relatively close to each other. Energy levels from quantum states in two different shells will be separated by a relatively large energy gap. So when the number of neutrons and protons completely fills the energy levels of a given shell in the nucleus, the binding energy per nucleon will reach a local maximum and thus that particular configuration will have a longer lifetime than nearby isotopes that do not possess filled shells. [ 20 ]
A filled shell would have " magic numbers " of neutrons and protons. One possible magic number of neutrons for spherical nuclei is 184, and some possible matching proton numbers are 114, 120 and 126. These configurations imply that the most stable spherical isotopes would be flerovium -298, unbinilium -304 and unbihexium -310. Of particular note is 298 Fl, which would be " doubly magic " (both its proton number of 114 and neutron number of 184 are thought to be magic). This doubly magic configuration is the most likely to have a very long half-life. The next lighter doubly magic spherical nucleus is lead -208, the heaviest known stable nucleus and most stable heavy metal.
The valley of stability can be helpful in interpreting and understanding properties of nuclear decay processes such as decay chains and nuclear fission .
Radioactive decay often proceeds via a sequence of steps known as a decay chain. For example, 238 U decays to 234 Th, which decays to 234m Pa, and so on, eventually reaching 206 Pb.
With each step of this sequence of reactions, energy is released and the decay products move further down the valley of stability towards the line of beta stability. 206 Pb is stable and lies on the line of beta stability.
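The composition of such a chain is fixed by conservation of mass number and charge: only alpha decay changes A (by 4), while beta-minus decay raises Z by 1. A short sketch recovers the step counts for the 238 U → 206 Pb chain:

```python
# Conservation bookkeeping for a decay chain: alpha decay lowers (A, Z) by
# (4, 2); beta-minus decay leaves A fixed and raises Z by 1.
def chain_steps(a_start, z_start, a_end, z_end):
    """Return the (alpha, beta-minus) decay counts required by A and Z conservation."""
    alphas, rem = divmod(a_start - a_end, 4)   # only alpha decay changes A
    assert rem == 0, "mass-number difference must be a multiple of 4"
    betas = z_end - (z_start - 2 * alphas)     # beta decays make up the Z balance
    return alphas, betas

print(chain_steps(238, 92, 206, 82))  # (8, 6): eight alpha and six beta decays
```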
The fission processes that occur within nuclear reactors are accompanied by the release of neutrons that sustain the chain reaction . Fission occurs when a heavy nuclide such as uranium-235 absorbs a neutron and breaks into nuclides of lighter elements such as barium or krypton , usually with the release of additional neutrons. Like all nuclides with a high atomic number, these uranium nuclei require many neutrons to bolster their stability, so they have a large neutron-proton ratio ( N / Z ). The nuclei resulting from a fission ( fission products ) inherit a similar N / Z , but have atomic numbers that are approximately half that of uranium. [ 1 ] Isotopes with the atomic number of the fission products and an N / Z near that of uranium or other fissionable nuclei have too many neutrons to be stable; this neutron excess is why multiple free neutrons but no free protons are usually emitted in the fission process, and it is also why many fission product nuclei undergo a long chain of β − decays, each of which converts a nucleus N / Z to ( N − 1)/( Z + 1), where N and Z are, respectively, the numbers of neutrons and protons contained in the nucleus.
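The β − chain described above can be sketched for one concrete mass chain. The fragment 135 Te (a real fission product) and its stable endpoint 135 Ba are used here as an illustrative instance:

```python
# Sketch: a neutron-rich fission fragment beta-decays step by step,
# (N, Z) -> (N - 1, Z + 1), until it reaches a stable nuclide.
def beta_chain(n, z, stable_z):
    """Yield the (N, Z) pairs along a beta-minus chain until Z reaches stable_z."""
    steps = [(n, z)]
    while z < stable_z:
        n, z = n - 1, z + 1   # beta-minus: one neutron becomes a proton
        steps.append((n, z))
    return steps

# 135Te (Z=52, N=83) -> 135I -> 135Xe -> 135Cs -> 135Ba (Z=56, N=79)
chain = beta_chain(83, 52, 56)
print(chain[0], "->", chain[-1], f"({len(chain) - 1} beta decays)")
```

Note how the mass number 135 is conserved at every step while the N/Z ratio drops from 83/52 ≈ 1.60 toward the valley of stability.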
When fission reactions are sustained at a given rate, such as in a liquid-cooled or solid fuel nuclear reactor, the nuclear fuel in the system produces many antineutrinos for each fission that has occurred. These antineutrinos come from the decay of fission products that, as their nuclei progress down a β − decay chain toward the valley of stability, emit an antineutrino along with each β − particle. In 1956, Reines and Cowan exploited the (anticipated) intense flux of antineutrinos from a nuclear reactor in the design of an experiment to detect and confirm the existence of these elusive particles. [ 21 ]
https://en.wikipedia.org/wiki/Valley_of_stability
Valproate ( valproic acid , VPA , sodium valproate , and valproate semisodium forms) are medications primarily used to treat epilepsy and bipolar disorder and prevent migraine headaches . [ 6 ] They are useful for the prevention of seizures in those with absence seizures , partial seizures , and generalized seizures . [ 6 ] They can be given intravenously or by mouth, and the tablet forms exist in both long- and short-acting formulations. [ 6 ]
Common side effects of valproate include nausea, vomiting, somnolence , and dry mouth. [ 6 ] Serious side effects can include liver failure , and regular monitoring of liver function tests is therefore recommended. [ 6 ] Other serious risks include pancreatitis and an increased suicide risk. [ 6 ] Valproate is known to cause serious abnormalities or birth defects in the unborn child if taken during pregnancy , [ 6 ] [ 7 ] and is contra-indicated for women of childbearing age unless the drug is essential to their medical condition and the person is also prescribed a contraceptive . [ 6 ] [ 8 ] [ 3 ] Reproductive warnings have also been issued for men using the drug. [ 9 ] The United States Food and Drug Administration has indicated a black box warning given the frequency and severity of the side effects and teratogenicity . [ 3 ] Additionally, there is also a black box warning due to risk of hepatotoxicity and pancreatitis. [ 10 ] As of 2022 the drug was still prescribed in the UK to potentially pregnant women, but use declined by 51% from 2018–19 to 2020–21. [ 11 ] Valproate has been in use in Japan for the prophylaxis of migraine since 2011, [ 12 ] and is approved there as an antimanic and antiseizure agent as well. [ 13 ] In the UK, valproate is approved for bipolar mania and epilepsy; both valproate and divalproex are approved, although divalproex sodium [ 14 ] is known there as valproate semisodium. [ 15 ]
Valproate's precise mechanism of action is unclear. [ 6 ] [ 16 ] Proposed mechanisms include affecting GABA levels, blocking voltage-gated sodium channels , inhibiting histone deacetylases , and increasing LEF1 . [ 17 ] [ 18 ] [ 19 ] Valproic acid is a branched short-chain fatty acid (SCFA), a derivative of valeric acid . [ 17 ]
Valproate was originally synthesized in 1881 and came into medical use in 1962. [ 20 ] It is on the World Health Organization's List of Essential Medicines . [ 21 ] It is available as a generic medication . [ 6 ] In 2022, it was the 174th most commonly prescribed medication in the United States, with more than 2 million prescriptions. [ 22 ] [ 23 ]
Valproate or valproic acid is used primarily to treat epilepsy and bipolar disorder and to prevent migraine headaches . [ 24 ]
Valproate has a broad spectrum of anticonvulsant activity, although it is primarily used as a first-line treatment for tonic–clonic seizures , absence seizures and myoclonic seizures and as a second-line treatment for partial seizures and infantile spasms . [ 24 ] [ 25 ] It has also been successfully given intravenously to treat status epilepticus . [ 26 ] [ 27 ]
In the US, valproic acid is also prescribed as an anti-epileptic drug indicated for the treatment of manic episodes associated with bipolar disorder; monotherapy and adjunctive therapy of complex partial seizures and simple and complex absence seizures; adjunctive therapy in people with multiple seizure types that include absence seizures. [ 3 ] [ 4 ]
Valproate products are used to treat manic or mixed episodes of bipolar disorder . [ 28 ] [ 29 ]
A 2016 systematic review compared the efficacy of valproate as an add-on for people with schizophrenia . [ 30 ]
Based upon five case reports, valproic acid may have efficacy in controlling the symptoms of the dopamine dysregulation syndrome that arise from the treatment of Parkinson's disease with levodopa . [ 31 ] [ 32 ] [ 33 ]
Valproate is not commonly used to prevent or treat migraine headaches , but it may be prescribed if other medications are not effective. [ 34 ]
The medication has been tested in the treatment of AIDS and cancer , owing to its histone-deacetylase-inhibiting effects . [ citation needed ] It has cardioprotective, kidney-protective, anti-inflammatory, and antimicrobial effects. [ 35 ]
Contraindications include:
Most common adverse effects include: [ 3 ]
Serious adverse effects include: [ 3 ]
Valproic acid has a black box warning for hepatotoxicity , pancreatitis , and fetal abnormalities. [ 3 ]
There is evidence that valproic acid may cause premature growth plate ossification in children and adolescents, resulting in decreased height. [ 39 ] [ 40 ] [ 41 ] Valproic acid can also cause mydriasis , a dilation of the pupils. [ 42 ] There is evidence that shows valproic acid may increase the chance of polycystic ovary syndrome (PCOS) in women with epilepsy or bipolar disorder. Studies have shown this risk of PCOS is higher in women with epilepsy compared to those with bipolar disorder. [ 43 ] Weight gain is also possible. [ 44 ]
Valproate may cause increased somnolence in the elderly. In a trial of valproate in elderly patients with dementia , a significantly higher proportion of valproate patients had somnolence compared to placebo. In approximately one-half of such patients, there was associated reduced nutritional intake and weight loss. [ 3 ]
Excessive amounts of valproic acid can result in somnolence , tremor , stupor , respiratory depression , coma , metabolic acidosis , and death. In general, serum or plasma valproic acid concentrations are in a range of 20–100 mg/L during controlled therapy, but may reach 150–1500 mg/L following acute poisoning. Monitoring of the serum level is often accomplished using commercial immunoassay techniques, although some laboratories employ gas or liquid chromatography. [ 50 ] In contrast to other antiepileptic drugs , at present there is little favorable evidence for salivary therapeutic drug monitoring. Salivary levels of valproic acid correlate poorly with serum levels, partly due to valproate's weak acid property (p K a of 4.9). [ 51 ]
In severe intoxication, hemoperfusion or hemofiltration can be an effective means of hastening elimination of the drug from the body. [ 52 ] [ 53 ] Supportive therapy should be given to all patients experiencing an overdose and urine output should be monitored. [ 3 ] Supplemental L -carnitine is indicated in patients having an acute overdose [ 54 ] [ 55 ] and also prophylactically [ 54 ] in high risk patients. Acetyl- L -carnitine lowers hyperammonemia less markedly [ 56 ] than L -carnitine .
Valproate inhibits CYP2C9 , glucuronyl transferase , and epoxide hydrolase and is highly protein bound and hence may interact with drugs that are substrates for any of these enzymes or are highly protein bound themselves. [ 36 ] It may also potentiate the CNS depressant effects of alcohol. [ 36 ] It should not be given in conjunction with other antiepileptics due to the potential for reduced clearance of other antiepileptics (including carbamazepine , lamotrigine , phenytoin and phenobarbitone ) and itself. [ 36 ] It may also interact with: [ 3 ] [ 36 ] [ 57 ]
Although the mechanism of action of valproate is not fully understood, [ 36 ] traditionally, its anticonvulsant effect has been attributed to the blockade of voltage-gated sodium channels and increased brain levels of the inhibitory synaptic neurotransmitter gamma-aminobutyric acid (GABA). [ 36 ] The GABAergic effect is also believed to contribute towards the anti-manic properties of valproate. [ 36 ] In animals, sodium valproate raises cerebral and cerebellar levels of GABA, possibly by inhibiting GABA degradative enzymes, such as GABA transaminase , succinate-semialdehyde dehydrogenase and by inhibiting the re-uptake of GABA by neuronal cells. [ 36 ] Prevention of neurotransmitter-induced hyperexcitability of nerve cells via Kv7.2 channel and AKAP5 may also contribute to its mechanism. [ 58 ] Valproate has been shown to protect against a seizure-induced reduction in phosphatidylinositol (3,4,5)-trisphosphate (PIP3) as a potential therapeutic mechanism. [ 59 ]
Valproate is a histone deacetylase inhibitor . By inhibiting histone deacetylase , it promotes more transcriptionally active chromatin structures; that is, it exerts an epigenetic effect. This has been demonstrated in mice: valproic acid–induced histone hyperacetylation affected brain function in the next generation of mice through changes in sperm DNA methylation. [ 60 ] Intermediate molecules include VEGF , BDNF , and GDNF . [ 61 ] [ 62 ]
Valproic acid has been found to be an antagonist of the androgen and progesterone receptors , and hence as a nonsteroidal antiandrogen and antiprogestogen , at concentrations much lower than therapeutic serum levels. [ 63 ] In addition, the drug has been identified as a potent aromatase inhibitor , and suppresses estrogen concentrations. [ 64 ] These actions are likely to be involved in the reproductive endocrine disturbances seen with valproic acid treatment. [ 63 ] [ 64 ]
Valproic acid has been found to directly stimulate androgen biosynthesis in the gonads via inhibition of histone deacetylases and has been associated with hyperandrogenism in women and increased 4-androstenedione levels in men. [ 65 ] [ 66 ] High rates of polycystic ovary syndrome and menstrual disorders have also been observed in women treated with valproic acid. [ 66 ]
Taken by mouth, valproate is rapidly and virtually completely absorbed from the gut. [ 67 ] When in the bloodstream, 80–90% of the substance is bound to plasma proteins , mainly albumin . Protein binding is saturable: it decreases with increasing valproate concentration, low albumin concentrations, the patient's age, additional use of other drugs such as aspirin , as well as liver and kidney impairment. [ 69 ] [ 70 ] Concentrations in the cerebrospinal fluid and in breast milk are 1 to 10% of blood plasma concentrations. [ 67 ]
The vast majority of valproate metabolism occurs in the liver . [ 71 ] Valproate is known to be metabolized by the cytochrome P450 enzymes CYP2A6 , CYP2B6 , CYP2C9 , and CYP3A5 . [ 71 ] It is also known to be metabolized by the UDP-glucuronosyltransferase enzymes UGT1A3 , UGT1A4 , UGT1A6 , UGT1A8 , UGT1A9 , UGT1A10 , UGT2B7 , and UGT2B15 . [ 71 ] Some of the known metabolites of valproate by these enzymes and uncharacterized enzymes include (see image): [ 71 ]
All in all, over 20 metabolites are known. [ 67 ]
In adult patients taking valproate alone, 30–50% of an administered dose is excreted in urine as the glucuronide conjugate. [ 71 ] The other major pathway in the metabolism of valproate is mitochondrial beta oxidation, which typically accounts for over 40% of an administered dose. [ 71 ] Typically, less than 20% of an administered dose is eliminated by other oxidative mechanisms. [ 71 ] Less than 3% of an administered dose of valproate is excreted unchanged (i.e., as valproate) in urine. [ 71 ] Only a small amount is excreted via the faeces. [ 67 ] Elimination half-life is 16±3 hours and can decrease to 4–9 hours when combined with enzyme inducers . [ 67 ] [ 70 ]
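First-order elimination with the 16-hour half-life quoted above can be sketched as simple exponential decay; the 250 mg starting amount and sampling times below are illustrative assumptions, not dosing guidance:

```python
# First-order (exponential) elimination: amount halves every half-life.
# The 16 h half-life is the figure quoted in the text; dose and times
# are arbitrary illustrative values.
def remaining(dose_mg: float, hours: float, half_life_h: float = 16.0) -> float:
    """Amount of drug remaining after first-order elimination."""
    return dose_mg * 0.5 ** (hours / half_life_h)

print(remaining(250, 16))  # 125.0  (one half-life)
print(remaining(250, 48))  # 31.25  (three half-lives: 250 / 8)
```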
Valproic acid is a branched short-chain fatty acid and the 2- n - propyl derivative of valeric acid . [ 17 ]
Valproic acid was first synthesized in 1882 by Beverly S. Burton as an analogue of valeric acid , found naturally in valerian . [ 72 ] Valproic acid is a carboxylic acid , a clear liquid at room temperature. For many decades, its only use was in laboratories as a "metabolically inert" solvent for organic compounds. In 1962, the French researcher Pierre Eymard serendipitously discovered the anticonvulsant properties of valproic acid while using it as a vehicle for a number of other compounds that were being screened for antiseizure activity. He found it prevented pentylenetetrazol -induced convulsions in laboratory rats . [ 73 ] It was approved as an antiepileptic drug in 1967 in France and has become the most widely prescribed antiepileptic drug worldwide. [ 74 ] Valproic acid has also been used for migraine prophylaxis and bipolar disorder. [ 75 ]
Valproate is available as a generic medication . [ 6 ]
In 2012, pharmaceutical company Abbott paid $1.6 billion in fines to US federal and state governments for illegal promotion of off-label uses for Depakote, including the sedation of elderly nursing home residents. [ 106 ] [ 107 ]
Some studies have suggested that valproate may reopen the critical period for learning absolute pitch and possibly other skills such as language. [ 108 ] [ 109 ]
Valproate exists in two main molecular variants: sodium valproate and valproic acid without sodium (often referred to simply as valproate ). A mixture of the two is termed semisodium valproate . It is unclear whether there is any difference in efficacy between these variants, except that about 10% more mass of sodium valproate is needed than of valproic acid without sodium to compensate for the sodium itself. [ 112 ] In the USA, Europe and many other countries, all three variants of valproate are sold: valproic acid, sodium valproate and valproate semisodium, also known as divalproex sodium; the latter is believed to have fewer gastrointestinal side-effects. [ 15 ] [ 113 ] Divalproex sodium tablets are a formulation comprising valproate sodium and valproic acid in a 1:1 molar relationship.
Magnesium valproate is also available in China . [ 114 ] [ 115 ]
Valproate is a negative ion. The conjugate acid of valproate is valproic acid (VPA). Valproic acid is fully ionized into valproate at the physiologic pH of the human body, and valproate is the active form of the drug. Sodium valproate is the sodium salt of valproic acid. Divalproex sodium is a coordination complex composed of equal parts of valproic acid and sodium valproate. [ 116 ]
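The claim that valproic acid is fully ionized at physiologic pH follows from the Henderson–Hasselbalch equation with the pKa of 4.9 quoted earlier in the article; a quick sketch:

```python
# Henderson-Hasselbalch: fraction of a weak acid present as its conjugate
# base (valproate) at a given pH, using the pKa of 4.9 quoted in the text.
def fraction_ionized(ph: float, pka: float) -> float:
    ratio = 10 ** (ph - pka)          # [A-] / [HA]
    return ratio / (1 + ratio)

print(f"{fraction_ionized(7.4, 4.9):.4f}")  # 0.9968: essentially fully ionized
```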
Branded products include:
All the above formulations are Pharmac -subsidised. [ 120 ]
In much of Europe, Dépakine and Depakine Chrono (tablets) are equivalent to Epilim and Epilim Chrono above.
Depalept and Depalept Chrono (extended release tablets) are equivalent to Epilim and Epilim Chrono above. Manufactured and distributed by Sanofi-Aventis .
A 2023 systematic review of the literature identified only one study in which valproate was evaluated in the treatment of seizures in infants aged 1 to 36 months. In a randomized control trial, valproate alone was found to show poorer outcomes for infants than valproate plus levetiracetam in terms of reduction of seizures, freedom from seizures, daily living ability, quality of life, and cognitive abilities. [ 123 ]
https://en.wikipedia.org/wiki/Valproate
In geometry , a valuation is a finitely additive function from a collection of subsets of a set X {\displaystyle X} to an abelian semigroup .
For example, Lebesgue measure is a valuation on finite unions of convex bodies of R n . {\displaystyle \mathbb {R} ^{n}.} Other examples of valuations on finite unions of convex bodies of R n {\displaystyle \mathbb {R} ^{n}} are surface area , mean width, and Euler characteristic .
In geometry, continuity (or smoothness ) conditions are often imposed on valuations, but there are also purely discrete facets of the theory. In fact, the concept of valuation has its origin in the dissection theory of polytopes and in particular Hilbert's third problem , which has grown into a rich theory reliant on tools from abstract algebra.
Let X {\displaystyle X} be a set, and let S {\displaystyle {\mathcal {S}}} be a collection of subsets of X . {\displaystyle X.} A function ϕ {\displaystyle \phi } on S {\displaystyle {\mathcal {S}}} with values in an abelian semigroup R {\displaystyle R} is called a valuation if it satisfies ϕ ( A ∪ B ) + ϕ ( A ∩ B ) = ϕ ( A ) + ϕ ( B ) {\displaystyle \phi (A\cup B)+\phi (A\cap B)=\phi (A)+\phi (B)} whenever A , {\displaystyle A,} B , {\displaystyle B,} A ∪ B , {\displaystyle A\cup B,} and A ∩ B {\displaystyle A\cap B} are elements of S . {\displaystyle {\mathcal {S}}.} If ∅ ∈ S , {\displaystyle \emptyset \in {\mathcal {S}},} then one always assumes ϕ ( ∅ ) = 0. {\displaystyle \phi (\emptyset )=0.}
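The defining identity can be checked concretely for the simplest example, 1-dimensional Lebesgue measure (interval length) on overlapping intervals:

```python
# Sketch: interval length satisfies the valuation identity
# phi(A ∪ B) + phi(A ∩ B) = phi(A) + phi(B) when A, B, and their union
# are all intervals.
def length(iv):
    a, b = iv
    return max(0.0, b - a)

def intersect(p, q):
    return (max(p[0], q[0]), min(p[1], q[1]))

def union_length(p, q):
    # Valid when p and q overlap, so the union is again an interval.
    return length((min(p[0], q[0]), max(p[1], q[1])))

A, B = (0.0, 2.0), (1.0, 3.0)
lhs = union_length(A, B) + length(intersect(A, B))
rhs = length(A) + length(B)
print(lhs == rhs, lhs)  # True 4.0
```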
Some common examples of S {\displaystyle {\mathcal {S}}} are
Let K ( R n ) {\displaystyle {\mathcal {K}}(\mathbb {R} ^{n})} be the set of convex bodies in R n . {\displaystyle \mathbb {R} ^{n}.} Then some valuations on K ( R n ) {\displaystyle {\mathcal {K}}(\mathbb {R} ^{n})} are
Some other valuations are
From here on, let V = R n {\displaystyle V=\mathbb {R} ^{n}} , let K ( V ) {\displaystyle {\mathcal {K}}(V)} be the set of convex bodies in V {\displaystyle V} , and let ϕ {\displaystyle \phi } be a valuation on K ( V ) {\displaystyle {\mathcal {K}}(V)} .
We say ϕ {\displaystyle \phi } is translation invariant if, for all K ∈ K ( V ) {\displaystyle K\in {\mathcal {K}}(V)} and x ∈ V {\displaystyle x\in V} , we have ϕ ( K + x ) = ϕ ( K ) {\displaystyle \phi (K+x)=\phi (K)} .
Let ( K , L ) ∈ K ( V ) 2 {\displaystyle (K,L)\in {\mathcal {K}}(V)^{2}} . The Hausdorff distance d H ( K , L ) {\displaystyle d_{H}(K,L)} is defined as d H ( K , L ) = inf { ε > 0 : K ⊂ L ε and L ⊂ K ε } , {\displaystyle d_{H}(K,L)=\inf\{\varepsilon >0:K\subset L_{\varepsilon }{\text{ and }}L\subset K_{\varepsilon }\},} where K ε {\displaystyle K_{\varepsilon }} is the ε {\displaystyle \varepsilon } -neighborhood of K {\displaystyle K} under some Euclidean inner product. Equipped with this metric, K ( V ) {\displaystyle {\mathcal {K}}(V)} is a locally compact space .
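For finite point sets (e.g. the vertex sets of polytopes) the infimum in the definition is attained, and the Hausdorff distance reduces to a max of point-to-set distances; a minimal sketch:

```python
# Hausdorff distance between two finite point sets in the plane,
# computed as max over both directions of the point-to-set distance.
from math import dist

def hausdorff(P, Q):
    d = lambda S, T: max(min(dist(s, t) for t in T) for s in S)
    return max(d(P, Q), d(Q, P))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = [(2, 0), (3, 0), (3, 1), (2, 1)]   # the square translated by (2, 0)
print(hausdorff(square, shifted))  # 2.0
```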
The space of continuous, translation-invariant valuations from K ( V ) {\displaystyle {\mathcal {K}}(V)} to C {\displaystyle \mathbb {C} } is denoted by Val ( V ) . {\displaystyle \operatorname {Val} (V).}
The topology on Val ( V ) {\displaystyle \operatorname {Val} (V)} is the topology of uniform convergence on compact subsets of K ( V ) . {\displaystyle {\mathcal {K}}(V).} Equipped with the norm ‖ ϕ ‖ = max { | ϕ ( K ) | : K ⊂ B } , {\displaystyle \|\phi \|=\max\{|\phi (K)|:K\subset B\},} where B ⊂ V {\displaystyle B\subset V} is a bounded subset with nonempty interior, Val ( V ) {\displaystyle \operatorname {Val} (V)} is a Banach space .
A translation-invariant continuous valuation ϕ ∈ Val ( V ) {\displaystyle \phi \in \operatorname {Val} (V)} is said to be i {\displaystyle i} -homogeneous if ϕ ( λ K ) = λ i ϕ ( K ) {\displaystyle \phi (\lambda K)=\lambda ^{i}\phi (K)} for all λ > 0 {\displaystyle \lambda >0} and K ∈ K ( V ) . {\displaystyle K\in {\mathcal {K}}(V).} The subset Val i ( V ) {\displaystyle \operatorname {Val} _{i}(V)} of i {\displaystyle i} -homogeneous valuations is a vector subspace of Val ( V ) . {\displaystyle \operatorname {Val} (V).} McMullen's decomposition theorem [ 1 ] states that
Val ( V ) = ⨁ i = 0 n Val i ( V ) , n = dim V . {\displaystyle \operatorname {Val} (V)=\bigoplus _{i=0}^{n}\operatorname {Val} _{i}(V),\qquad n=\dim V.}
In particular, the degree of a homogeneous valuation is always an integer between 0 {\displaystyle 0} and n = dim V . {\displaystyle n=\operatorname {dim} V.}
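The homogeneity degrees in McMullen's decomposition can be observed on a concrete body: for a box in R³, volume, surface area, and Euler characteristic are homogeneous of degrees 3, 2, and 0 respectively. A small numerical check:

```python
# Sketch: classical valuations on the cube [0, s]^3 are homogeneous of the
# degrees predicted by McMullen's decomposition: volume (i = 3),
# surface area (i = 2), and Euler characteristic (i = 0).
def box_valuations(s):
    return {3: s ** 3, 2: 6 * s ** 2, 0: 1}   # volume, surface area, Euler char

lam = 2.5
base, scaled = box_valuations(1.0), box_valuations(lam)
for i in base:
    assert abs(scaled[i] - lam ** i * base[i]) < 1e-12
print("phi(lam K) = lam^i phi(K) holds for i in", sorted(base))
```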
Valuations are not only graded by the degree of homogeneity, but also by the parity with respect to the reflection through the origin, namely Val i = Val i + ⊕ Val i − , {\displaystyle \operatorname {Val} _{i}=\operatorname {Val} _{i}^{+}\oplus \operatorname {Val} _{i}^{-},} where ϕ ∈ Val i ϵ {\displaystyle \phi \in \operatorname {Val} _{i}^{\epsilon }} with ϵ ∈ { + , − } {\displaystyle \epsilon \in \{+,-\}} if and only if ϕ ( − K ) = ϵ ϕ ( K ) {\displaystyle \phi (-K)=\epsilon \phi (K)} for all convex bodies K . {\displaystyle K.} The elements of Val i + {\displaystyle \operatorname {Val} _{i}^{+}} and Val i − {\displaystyle \operatorname {Val} _{i}^{-}} are said to be even and odd , respectively.
It is a simple fact that Val 0 ( V ) {\displaystyle \operatorname {Val} _{0}(V)} is 1 {\displaystyle 1} -dimensional and spanned by the Euler characteristic χ , {\displaystyle \chi ,} that is, consists of the constant valuations on K ( V ) . {\displaystyle {\mathcal {K}}(V).}
In 1957 Hadwiger [ 2 ] proved that Val n ( V ) {\displaystyle \operatorname {Val} _{n}(V)} (where n = dim V {\displaystyle n=\dim V} ) coincides with the 1 {\displaystyle 1} -dimensional space of Lebesgue measures on V . {\displaystyle V.}
A valuation ϕ ∈ Val ( R n ) {\displaystyle \phi \in \operatorname {Val} (\mathbb {R} ^{n})} is simple if ϕ ( K ) = 0 {\displaystyle \phi (K)=0} for all convex bodies with dim K < n . {\displaystyle \dim K<n.} Schneider [ 3 ] in 1996 described all simple valuations on R n {\displaystyle \mathbb {R} ^{n}} : they are given by ϕ ( K ) = c vol ( K ) + ∫ S n − 1 f ( θ ) d σ K ( θ ) , {\displaystyle \phi (K)=c\operatorname {vol} (K)+\int _{S^{n-1}}f(\theta )d\sigma _{K}(\theta ),} where c ∈ C , {\displaystyle c\in \mathbb {C} ,} f ∈ C ( S n − 1 ) {\displaystyle f\in C(S^{n-1})} is an arbitrary odd function on the unit sphere S n − 1 ⊂ R n , {\displaystyle S^{n-1}\subset \mathbb {R} ^{n},} and σ K {\displaystyle \sigma _{K}} is the surface area measure of K . {\displaystyle K.} In particular, any simple valuation is the sum of an n {\displaystyle n} - and an ( n − 1 ) {\displaystyle (n-1)} -homogeneous valuation. This in turn implies that an i {\displaystyle i} -homogeneous valuation is uniquely determined by its restrictions to all ( i + 1 ) {\displaystyle (i+1)} -dimensional subspaces.
The Klain embedding is a linear injection of Val i + ( V ) , {\displaystyle \operatorname {Val} _{i}^{+}(V),} the space of even i {\displaystyle i} -homogeneous valuations, into the space of continuous sections of a canonical complex line bundle over the Grassmannian Gr i ( V ) {\displaystyle \operatorname {Gr} _{i}(V)} of i {\displaystyle i} -dimensional linear subspaces of V . {\displaystyle V.} Its construction is based on Hadwiger's characterization [ 2 ] of n {\displaystyle n} -homogeneous valuations. If ϕ ∈ Val i ( V ) {\displaystyle \phi \in \operatorname {Val} _{i}(V)} and E ∈ Gr i ( V ) , {\displaystyle E\in \operatorname {Gr} _{i}(V),} then the restriction ϕ | E {\displaystyle \phi |_{E}} is an element of Val i ( E ) , {\displaystyle \operatorname {Val} _{i}(E),} and by Hadwiger's theorem it is a Lebesgue measure.
Hence Kl ϕ ( E ) = ϕ | E {\displaystyle \operatorname {Kl} _{\phi }(E)=\phi |_{E}} defines a continuous section of the line bundle D e n s {\displaystyle Dens} over Gr i ( V ) {\displaystyle \operatorname {Gr} _{i}(V)} with fiber over E {\displaystyle E} equal to the 1 {\displaystyle 1} -dimensional space Dens ( E ) {\displaystyle \operatorname {Dens} (E)} of densities (Lebesgue measures) on E . {\displaystyle E.}
Theorem (Klain [ 4 ] ). The linear map Kl : Val i + ( V ) → C ( Gr i ( V ) , Dens ) {\displaystyle \operatorname {Kl} :\operatorname {Val} _{i}^{+}(V)\to C(\operatorname {Gr} _{i}(V),\operatorname {Dens} )} is injective.
A different injection, known as the Schneider embedding, exists for odd valuations. It is based on Schneider's description of simple valuations. [ 3 ] It is a linear injection of Val i − ( V ) , {\displaystyle \operatorname {Val} _{i}^{-}(V),} the space of odd i {\displaystyle i} -homogeneous valuations, into a certain quotient of the space of continuous sections of a line bundle over the partial flag manifold of cooriented pairs ( F i ⊂ E i + 1 ) . {\displaystyle (F^{i}\subset E^{i+1}).} Its definition is reminiscent of the Klain embedding, but more involved. Details can be found in [ 5 ] .
The Goodey-Weil embedding is a linear injection of Val i {\displaystyle \operatorname {Val} _{i}} into the space of distributions on the i {\displaystyle i} -fold product of the ( n − 1 ) {\displaystyle (n-1)} -dimensional sphere. It is nothing but the Schwartz kernel of a natural polarization that any ϕ ∈ Val k ( V ) {\displaystyle \phi \in \operatorname {Val} _{k}(V)} admits, namely as a functional on the k {\displaystyle k} -fold product of C 2 ( S n − 1 ) , {\displaystyle C^{2}(S^{n-1}),} the latter space of functions having the geometric meaning of differences of support functions of smooth convex bodies. For details, see [ 5 ] .
The classical theorems of Hadwiger, Schneider and McMullen give fairly explicit descriptions of valuations that are homogeneous of degree 1 , {\displaystyle 1,} n − 1 , {\displaystyle n-1,} and n = dim V . {\displaystyle n=\operatorname {dim} V.} But for degrees 1 < i < n − 1 {\displaystyle 1<i<n-1} very little was known before the turn of the 21st century. McMullen's conjecture is the statement that the valuations ϕ A ( K ) = vol n ( K + A ) , A ∈ K ( V ) , {\displaystyle \phi _{A}(K)=\operatorname {vol} _{n}(K+A),\qquad A\in {\mathcal {K}}(V),} span a dense subspace of Val ( V ) . {\displaystyle \operatorname {Val} (V).} McMullen's conjecture was confirmed by Alesker in a much stronger form, which became known as the Irreducibility Theorem:
Theorem (Alesker [ 6 ] ). For every 0 ≤ i ≤ n , {\displaystyle 0\leq i\leq n,} the natural action of G L ( V ) {\displaystyle GL(V)} on the spaces Val i + ( V ) {\displaystyle \operatorname {Val} _{i}^{+}(V)} and Val i − ( V ) {\displaystyle \operatorname {Val} _{i}^{-}(V)} is irreducible.
Here the action of the general linear group G L ( V ) {\displaystyle GL(V)} on Val ( V ) {\displaystyle \operatorname {Val} (V)} is given by ( g ⋅ ϕ ) ( K ) = ϕ ( g − 1 K ) . {\displaystyle (g\cdot \phi )(K)=\phi (g^{-1}K).} The proof of the Irreducibility Theorem is based on the embedding theorems of the previous section and Beilinson-Bernstein localization .
A valuation ϕ ∈ Val ( V ) {\displaystyle \phi \in \operatorname {Val} (V)} is called smooth if the map g ↦ g ⋅ ϕ {\displaystyle g\mapsto g\cdot \phi } from G L ( V ) {\displaystyle GL(V)} to Val ( V ) {\displaystyle \operatorname {Val} (V)} is smooth. In other words, ϕ {\displaystyle \phi } is smooth if and only if ϕ {\displaystyle \phi } is a smooth vector of the natural representation of G L ( V ) {\displaystyle GL(V)} on Val ( V ) . {\displaystyle \operatorname {Val} (V).} The space of smooth valuations Val ∞ ( V ) {\displaystyle \operatorname {Val} ^{\infty }(V)} is dense in Val ( V ) {\displaystyle \operatorname {Val} (V)} ; it comes equipped with a natural Fréchet-space topology, which is finer than the one induced from Val ( V ) . {\displaystyle \operatorname {Val} (V).}
For every (complex-valued) smooth function f {\displaystyle f} on Gr i ( R n ) , {\displaystyle \operatorname {Gr} _{i}(\mathbb {R} ^{n}),} ϕ ( K ) = ∫ Gr i ( R n ) vol i ( P E K ) f ( E ) d E , {\displaystyle \phi (K)=\int _{\operatorname {Gr} _{i}(\mathbb {R} ^{n})}\operatorname {vol} _{i}(P_{E}K)f(E)dE,} where P E : R n → E {\displaystyle P_{E}:\mathbb {R} ^{n}\to E} denotes the orthogonal projection and d E {\displaystyle dE} is the Haar measure, defines a smooth even valuation of degree i . {\displaystyle i.} It follows from the Irreducibility Theorem, in combination with the Casselman-Wallach theorem, that any smooth even valuation can be represented in this way. Such a representation is sometimes called a Crofton formula .
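A Monte Carlo instance of a Crofton-type formula in the plane is Cauchy's projection formula: the perimeter of a convex body equals π times the mean length of its 1-dimensional orthogonal projections over uniformly random directions. A sketch for the unit square, whose projected width in direction θ is |cos θ| + |sin θ|:

```python
import math
import random

# Monte Carlo check of Cauchy's projection formula for the unit square:
# perimeter = pi * (mean projected width over random directions).
random.seed(0)

def mean_projection_unit_square(samples=200_000):
    total = 0.0
    for _ in range(samples):
        t = random.uniform(0.0, math.pi)
        total += abs(math.cos(t)) + abs(math.sin(t))  # width of projected [0,1]^2
    return total / samples

estimate = math.pi * mean_projection_unit_square()
print(round(estimate, 2))  # close to 4.0, the perimeter of the unit square
```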
For any (complex-valued) smooth differential form ω ∈ Ω n − 1 ( R n × S n − 1 ) {\displaystyle \omega \in \Omega ^{n-1}(\mathbb {R} ^{n}\times S^{n-1})} that is invariant under all the translations ( x , u ) ↦ ( x + t , u ) {\displaystyle (x,u)\mapsto (x+t,u)} and every number c ∈ C , {\displaystyle c\in \mathbb {C} ,} integration over the normal cycle defines a smooth valuation: ϕ ( K ) = c vol n ( K ) + ∫ N ( K ) ω . {\displaystyle \phi (K)=c\operatorname {vol} _{n}(K)+\int _{N(K)}\omega .}
As a set, the normal cycle N ( K ) {\displaystyle N(K)} consists of the outward unit normals to K . {\displaystyle K.} The Irreducibility Theorem implies that every smooth valuation is of this form.
There are several natural operations defined on the subspace of smooth valuations Val ∞ ( V ) ⊂ Val ( V ) . {\displaystyle \operatorname {Val} ^{\infty }(V)\subset \operatorname {Val} (V).} The most important one is the product of two smooth valuations. Together with pullback and pushforward, this operation extends to valuations on manifolds.
Let V , W {\displaystyle V,W} be finite-dimensional real vector spaces.
There exists a bilinear map, called the exterior product, ⊠ : Val ∞ ( V ) × Val ∞ ( W ) → Val ( V × W ) {\displaystyle \boxtimes :\operatorname {Val} ^{\infty }(V)\times \operatorname {Val} ^{\infty }(W)\to \operatorname {Val} (V\times W)} which is uniquely characterized by the following two properties:
ϕ ⊠ ψ = ( vol V ⊠ vol W ) ( ∙ + A × B ) . {\displaystyle \phi \boxtimes \psi =(\operatorname {vol} _{V}\boxtimes \operatorname {vol} _{W})(\bullet +A\times B).}
The product of two smooth valuations ϕ , ψ ∈ Val ∞ ( V ) {\displaystyle \phi ,\psi \in \operatorname {Val} ^{\infty }(V)} is defined by ( ϕ ⋅ ψ ) ( K ) = ( ϕ ⊠ ψ ) ( Δ ( K ) ) , {\displaystyle (\phi \cdot \psi )(K)=(\phi \boxtimes \psi )(\Delta (K)),} where Δ : V → V × V {\displaystyle \Delta :V\to V\times V} is the diagonal embedding. The product is a continuous map Val ∞ ( V ) × Val ∞ ( V ) → Val ∞ ( V ) . {\displaystyle \operatorname {Val} ^{\infty }(V)\times \operatorname {Val} ^{\infty }(V)\to \operatorname {Val} ^{\infty }(V).} Equipped with this product, Val ∞ ( V ) {\displaystyle \operatorname {Val} ^{\infty }(V)} becomes a commutative associative graded algebra with the Euler characteristic as the multiplicative identity.
By a theorem of Alesker, the restriction of the product Val k ∞ ( V ) × Val n − k ∞ ( V ) → Val n ∞ ( V ) = Dens ( V ) {\displaystyle \operatorname {Val} _{k}^{\infty }(V)\times \operatorname {Val} _{n-k}^{\infty }(V)\to \operatorname {Val} _{n}^{\infty }(V)=\operatorname {Dens} (V)} is a non-degenerate pairing. This motivates the definition of the k {\displaystyle k} -homogeneous generalized valuation , denoted Val k − ∞ ( V ) , {\displaystyle \operatorname {Val} _{k}^{-\infty }(V),} as Val n − k ∞ ( V ) ∗ ⊗ Dens ( V ) , {\displaystyle \operatorname {Val} _{n-k}^{\infty }(V)^{*}\otimes \operatorname {Dens} (V),} topologized with the weak topology. By the Alesker-Poincaré duality, there is a natural dense inclusion Val k ∞ ( V ) ↪ Val k − ∞ ( V ) . {\displaystyle \operatorname {Val} _{k}^{\infty }(V)\hookrightarrow \operatorname {Val} _{k}^{-\infty }(V).}
Convolution is a natural product on Val ∞ ( V ) ⊗ Dens ( V ∗ ) . {\displaystyle \operatorname {Val} ^{\infty }(V)\otimes \operatorname {Dens} (V^{*}).} For simplicity, we fix a density vol {\displaystyle \operatorname {vol} } on V {\displaystyle V} to trivialize the second factor. Define for fixed A , B ∈ K ( V ) {\displaystyle A,B\in {\mathcal {K}}(V)} with smooth boundary and strictly positive Gauss curvature vol ( ∙ + A ) ∗ vol ( ∙ + B ) = vol ( ∙ + A + B ) . {\displaystyle \operatorname {vol} (\bullet +A)\ast \operatorname {vol} (\bullet +B)=\operatorname {vol} (\bullet +A+B).} There is then a unique extension by continuity to a map Val ∞ ( V ) × Val ∞ ( V ) → Val ∞ ( V ) , {\displaystyle \operatorname {Val} ^{\infty }(V)\times \operatorname {Val} ^{\infty }(V)\to \operatorname {Val} ^{\infty }(V),} called the convolution.
Unlike the product, convolution respects the co-grading, namely if ϕ ∈ Val n − i ∞ ( V ) , {\displaystyle \phi \in \operatorname {Val} _{n-i}^{\infty }(V),} ψ ∈ Val n − j ∞ ( V ) , {\displaystyle \psi \in \operatorname {Val} _{n-j}^{\infty }(V),} then ϕ ∗ ψ ∈ Val n − i − j ∞ ( V ) . {\displaystyle \phi \ast \psi \in \operatorname {Val} _{n-i-j}^{\infty }(V).}
For instance, let V ( K 1 , … , K n ) {\displaystyle V(K_{1},\ldots ,K_{n})} denote the mixed volume of the convex bodies K 1 , … , K n ⊂ R n . {\displaystyle K_{1},\ldots ,K_{n}\subset \mathbb {R} ^{n}.} If convex bodies A 1 , … , A n − i {\displaystyle A_{1},\dots ,A_{n-i}} in R n {\displaystyle \mathbb {R} ^{n}} with smooth boundary and strictly positive Gauss curvature are fixed, then ϕ ( K ) = V ( K [ i ] , A 1 , … , A n − i ) {\displaystyle \phi (K)=V(K[i],A_{1},\dots ,A_{n-i})} defines a smooth valuation of degree i . {\displaystyle i.} The convolution of two such valuations is V ( ∙ [ i ] , A 1 , … , A n − i ) ∗ V ( ∙ [ j ] , B 1 , … , B n − j ) = c i , j V ( ∙ [ i + j − n ] , A 1 , … , A n − i , B 1 , … , B n − j ) , {\displaystyle V(\bullet [i],A_{1},\dots ,A_{n-i})\ast V(\bullet [j],B_{1},\dots ,B_{n-j})=c_{i,j}V(\bullet [i+j-n],A_{1},\dots ,A_{n-i},B_{1},\dots ,B_{n-j}),} where c i , j {\displaystyle c_{i,j}} is a constant depending only on i , j , n . {\displaystyle i,j,n.} (Note that the number of slots filled by convex bodies is ( n − i ) + ( n − j ) , {\displaystyle (n-i)+(n-j),} so the remaining degree is i + j − n , {\displaystyle i+j-n,} consistent with the co-grading above.)
The Alesker-Fourier transform is a natural, G L ( V ) {\displaystyle GL(V)} -equivariant isomorphism of complex-valued valuations F : Val ∞ ( V ) → Val ∞ ( V ∗ ) ⊗ Dens ( V ) , {\displaystyle \mathbb {F} :\operatorname {Val} ^{\infty }(V)\to \operatorname {Val} ^{\infty }(V^{*})\otimes \operatorname {Dens} (V),} discovered by Alesker and enjoying many properties resembling the classical Fourier transform, which explains its name.
It reverses the grading, namely F : Val k ∞ ( V ) → Val n − k ∞ ( V ∗ ) ⊗ Dens ( V ) , {\displaystyle \mathbb {F} :\operatorname {Val} _{k}^{\infty }(V)\to \operatorname {Val} _{n-k}^{\infty }(V^{*})\otimes \operatorname {Dens} (V),} and intertwines the product and the convolution: F ( ϕ ⋅ ψ ) = F ϕ ∗ F ψ . {\displaystyle \mathbb {F} (\phi \cdot \psi )=\mathbb {F} \phi \ast \mathbb {F} \psi .}
Fixing for simplicity a Euclidean structure to identify V = V ∗ , {\displaystyle V=V^{*},} Dens ( V ) = C , {\displaystyle \operatorname {Dens} (V)=\mathbb {C} ,} we have the identity F 2 ϕ ( K ) = ϕ ( − K ) . {\displaystyle \mathbb {F} ^{2}\phi (K)=\phi (-K).} On even valuations, there is a simple description of the Fourier transform in terms of the Klain embedding: Kl F ϕ ( E ) = Kl ϕ ( E ⊥ ) . {\displaystyle \operatorname {Kl} _{\mathbb {F} \phi }(E)=\operatorname {Kl} _{\phi }(E^{\perp }).} In particular, even real-valued valuations remain real-valued after the Fourier transform.
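As a worked illustration of the Klain-function description above (stated with the usual Euclidean normalization of the intrinsic volumes):

```latex
% The Klain function of the k-th intrinsic volume \mu_k is the constant 1
% on \operatorname{Gr}_k, since the restriction of \mu_k to any k-plane E
% is the k-dimensional volume on E.  By the formula
%   \operatorname{Kl}_{\mathbb{F}\phi}(E) = \operatorname{Kl}_{\phi}(E^{\perp}),
% the Klain function of \mathbb{F}\mu_k is the constant 1 on
% \operatorname{Gr}_{n-k}.  Injectivity of the Klain embedding on even
% valuations then gives
\[
  \mathbb{F}\,\mu_k = \mu_{n-k},
\]
% in accordance with the reversal of the grading k \mapsto n-k.
```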
For odd valuations, the description of the Fourier transform is substantially more involved. Unlike the even case, it is no longer of purely geometric nature. For instance, the space of real-valued odd valuations is not preserved.
Given a linear map f : U → V , {\displaystyle f:U\to V,} there are induced operations of pullback f ∗ : Val ( V ) → Val ( U ) {\displaystyle f^{*}:\operatorname {Val} (V)\to \operatorname {Val} (U)} and pushforward f ∗ : Val ( U ) ⊗ Dens ( U ) ∗ → Val ( V ) ⊗ Dens ( V ) ∗ . {\displaystyle f_{*}:\operatorname {Val} (U)\otimes \operatorname {Dens} (U)^{*}\to \operatorname {Val} (V)\otimes \operatorname {Dens} (V)^{*}.} The pullback is the simpler of the two, given by f ∗ ϕ ( K ) = ϕ ( f ( K ) ) . {\displaystyle f^{*}\phi (K)=\phi (f(K)).} It evidently preserves the parity and degree of homogeneity of a valuation.
Note that the pullback does not preserve smoothness when f {\displaystyle f} is not injective.
The pushforward is harder to define formally. For simplicity, fix Lebesgue measures on U {\displaystyle U} and V . {\displaystyle V.} The pushforward is uniquely characterized by its action on valuations of the form vol ( ∙ + A ) , {\displaystyle \operatorname {vol} (\bullet +A),} for all A ∈ K ( U ) , {\displaystyle A\in {\mathcal {K}}(U),} extended by continuity to all valuations using the Irreducibility Theorem. For a surjective map f , {\displaystyle f,} f ∗ vol ( ∙ + A ) = vol ( ∙ + f ( A ) ) . {\displaystyle f_{*}\operatorname {vol} (\bullet +A)=\operatorname {vol} (\bullet +f(A)).} For an inclusion f : U ↪ V , {\displaystyle f:U\hookrightarrow V,} choose a splitting V = U ⊕ W . {\displaystyle V=U\oplus W.} Then f ∗ vol ( ∙ + A ) ( K ) = ∫ W vol ( K ∩ ( U + w ) + A ) d w . {\displaystyle f_{*}\operatorname {vol} (\bullet +A)(K)=\int _{W}\operatorname {vol} (K\cap (U+w)+A)dw.} Informally, the pushforward is dual to the pullback with respect to the Alesker-Poincaré pairing: for ϕ ∈ Val ( V ) {\displaystyle \phi \in \operatorname {Val} (V)} and ψ ∈ Val ( U ) ⊗ Dens ( U ) ∗ , {\displaystyle \psi \in \operatorname {Val} (U)\otimes \operatorname {Dens} (U)^{*},} ⟨ f ∗ ϕ , ψ ⟩ = ⟨ ϕ , f ∗ ψ ⟩ . {\displaystyle \langle f^{*}\phi ,\psi \rangle =\langle \phi ,f_{*}\psi \rangle .} However, this identity has to be interpreted with care, since the pairing is only well-defined for smooth valuations. For further details, see [ 7 ].
In a series of papers beginning in 2006, Alesker laid down the foundations for a theory of valuations on manifolds that extends the theory of valuations on convex bodies. The key observation leading to this extension is that via integration over the normal cycle ( 1 ), a smooth translation-invariant valuation may be evaluated on sets much more general than convex ones. Also ( 1 ) suggests to define smooth valuations in general by dropping the requirement that the form ω {\displaystyle \omega } be translation-invariant and by replacing the translation-invariant Lebesgue measure with an arbitrary smooth measure.
Let X {\displaystyle X} be an n-dimensional smooth manifold and let P X = P + ( T ∗ X ) {\displaystyle \mathbb {P} _{X}=\mathbb {P} _{+}(T^{*}X)} be the co-sphere bundle of X , {\displaystyle X,} that is, the oriented projectivization of the cotangent bundle.
Let P ( X ) {\displaystyle {\mathcal {P}}(X)} denote the collection of compact differentiable polyhedra in X . {\displaystyle X.} The normal cycle N ( A ) ⊂ P X {\displaystyle N(A)\subset \mathbb {P} _{X}} of A ∈ P ( X ) , {\displaystyle A\in {\mathcal {P}}(X),} which consists of the outward co-normals to A , {\displaystyle A,} is naturally a Lipschitz submanifold of dimension n − 1. {\displaystyle n-1.}
For ease of presentation we henceforth assume that X {\displaystyle X} is oriented, even though the concept of smooth valuations in fact does not depend on orientability. The space of smooth valuations V ∞ ( X ) {\displaystyle {\mathcal {V}}^{\infty }(X)} on X {\displaystyle X} consists of functions ϕ : P ( X ) → C {\displaystyle \phi :{\mathcal {P}}(X)\to \mathbb {C} } of the form ϕ ( A ) = ∫ A μ + ∫ N ( A ) ω , A ∈ P ( X ) , {\displaystyle \phi (A)=\int _{A}\mu +\int _{N(A)}\omega ,\qquad A\in {\mathcal {P}}(X),} where μ ∈ Ω n ( X ) {\displaystyle \mu \in \Omega ^{n}(X)} and ω ∈ Ω n − 1 ( P X ) {\displaystyle \omega \in \Omega ^{n-1}(\mathbb {P} _{X})} can be arbitrary.
It was shown by Alesker that the smooth valuations on open subsets of X {\displaystyle X} form a soft sheaf over X . {\displaystyle X.}
The following are examples of smooth valuations on a smooth manifold X {\displaystyle X} :
For example, let X = C P n {\displaystyle X=\mathbb {C} P^{n}} and let G r k C {\displaystyle \mathrm {Gr} _{k}^{\mathbb {C} }} denote the Grassmannian of k {\displaystyle k} -dimensional complex projective subspaces. Then ϕ ( A ) = ∫ G r k C χ ( A ∩ E ) d E , A ∈ P ( C P n ) , {\displaystyle \phi (A)=\int _{\mathrm {Gr} _{k}^{\mathbb {C} }}\chi (A\cap E)dE,\qquad A\in {\mathcal {P}}(\mathbb {C} P^{n}),} where the integration is with respect to the Haar probability measure on G r k C , {\displaystyle \mathrm {Gr} _{k}^{\mathbb {C} },} is a smooth valuation. This follows from the work of Fu. [ 10 ]
The space V ∞ ( X ) {\displaystyle {\mathcal {V}}^{\infty }(X)} admits no natural grading in general, however it carries a canonical filtration V ∞ ( X ) = W 0 ⊃ W 1 ⊃ ⋯ ⊃ W n . {\displaystyle {\mathcal {V}}^{\infty }(X)=W_{0}\supset W_{1}\supset \cdots \supset W_{n}.} Here W n {\displaystyle W_{n}} consists of the smooth measures on X , {\displaystyle X,} and W j {\displaystyle W_{j}} is given by forms ω {\displaystyle \omega } in the ideal generated by π ∗ Ω j ( X ) , {\displaystyle \pi ^{*}\Omega ^{j}(X),} where π : P X → X {\displaystyle \pi :\mathbb {P} _{X}\to X} is the canonical projection.
The associated graded vector space ⨁ i = 0 n W i / W i + 1 {\displaystyle \bigoplus _{i=0}^{n}W_{i}/W_{i+1}} is canonically isomorphic to the space of smooth sections ⨁ i = 0 n C ∞ ( X , Val i ∞ ( T X ) ) , {\displaystyle \bigoplus _{i=0}^{n}C^{\infty }(X,\operatorname {Val} _{i}^{\infty }(TX)),} where Val i ∞ ( T X ) {\displaystyle \operatorname {Val} _{i}^{\infty }(TX)} denotes the vector bundle over X {\displaystyle X} such that the fiber over a point x ∈ X {\displaystyle x\in X} is Val i ∞ ( T x X ) , {\displaystyle \operatorname {Val} _{i}^{\infty }(T_{x}X),} the space of i {\displaystyle i} -homogeneous smooth translation-invariant valuations on the tangent space T x X . {\displaystyle T_{x}X.}
The space V ∞ ( X ) {\displaystyle {\mathcal {V}}^{\infty }(X)} admits a natural product. This product is continuous, commutative, associative, compatible with the filtration: W i ⋅ W j ⊂ W i + j , {\displaystyle W_{i}\cdot W_{j}\subset W_{i+j},} and has the Euler characteristic as the identity element. It also commutes with the restriction to embedded submanifolds, and the diffeomorphism group of X {\displaystyle X} acts on V ∞ ( X ) {\displaystyle {\mathcal {V}}^{\infty }(X)} by algebra automorphisms.
For example, if X {\displaystyle X} is Riemannian, the Lipschitz-Killing valuations satisfy V i X ⋅ V j X = V i + j X . {\displaystyle V_{i}^{X}\cdot V_{j}^{X}=V_{i+j}^{X}.}
The Alesker-Poincaré duality still holds. For compact X {\displaystyle X} it says that the pairing V ∞ ( X ) × V ∞ ( X ) → C , {\displaystyle {\mathcal {V}}^{\infty }(X)\times {\mathcal {V}}^{\infty }(X)\to \mathbb {C} ,} ( ϕ , ψ ) ↦ ( ϕ ⋅ ψ ) ( X ) {\displaystyle (\phi ,\psi )\mapsto (\phi \cdot \psi )(X)} is non-degenerate. As in the translation-invariant case, this duality can be used to define generalized valuations. Unlike the translation-invariant case, no good definition of continuous valuations exists for valuations on manifolds.
The product of valuations closely reflects the geometric operation of intersection of subsets.
Informally, consider the generalized valuation χ A = χ ( A ∩ ∙ ) . {\displaystyle \chi _{A}=\chi (A\cap \bullet ).} The product is given by χ A ⋅ χ B = χ A ∩ B . {\displaystyle \chi _{A}\cdot \chi _{B}=\chi _{A\cap B}.} Now one can obtain smooth valuations by averaging generalized valuations of the form χ A ; {\displaystyle \chi _{A};} more precisely, ϕ = ∫ S χ s ( A ) d s {\displaystyle \phi =\int _{S}\chi _{s(A)}ds} is a smooth valuation if S {\displaystyle S} is a sufficiently large measured family of diffeomorphisms. Then one has ∫ S χ s ( A ) d s ⋅ ∫ S ′ χ s ′ ( B ) d s ′ = ∫ S × S ′ χ s ( A ) ∩ s ′ ( B ) d s d s ′ , {\displaystyle \int _{S}\chi _{s(A)}ds\cdot \int _{S'}\chi _{s'(B)}ds'=\int _{S\times S'}\chi _{s(A)\cap s'(B)}dsds',} see [ 11 ].
Every smooth immersion f : X → Y {\displaystyle f:X\to Y} of smooth manifolds induces a pullback map f ∗ : V ∞ ( Y ) → V ∞ ( X ) . {\displaystyle f^{*}:{\mathcal {V}}^{\infty }(Y)\to {\mathcal {V}}^{\infty }(X).} If f {\displaystyle f} is an embedding, then ( f ∗ ϕ ) ( A ) = ϕ ( f ( A ) ) , A ∈ P ( X ) . {\displaystyle (f^{*}\phi )(A)=\phi (f(A)),\qquad A\in {\mathcal {P}}(X).} The pullback is a morphism of filtered algebras.
Every smooth proper submersion f : X → Y {\displaystyle f:X\to Y} defines a pushforward map f ∗ : V ∞ ( X ) → V ∞ ( Y ) {\displaystyle f_{*}:{\mathcal {V}}^{\infty }(X)\to {\mathcal {V}}^{\infty }(Y)} by ( f ∗ ϕ ) ( A ) = ϕ ( f − 1 ( A ) ) , A ∈ P ( Y ) . {\displaystyle (f_{*}\phi )(A)=\phi (f^{-1}(A)),\qquad A\in {\mathcal {P}}(Y).} The pushforward is compatible with the filtration as well: f ∗ : W i ( X ) → W i − ( dim X − dim Y ) ( Y ) . {\displaystyle f_{*}:W_{i}(X)\to W_{i-(\dim X-\dim Y)}(Y).} For general smooth maps, one can define pullback and pushforward for generalized valuations under some restrictions.
Let M {\displaystyle M} be a Riemannian manifold and let G {\displaystyle G} be a Lie group of isometries of M {\displaystyle M} acting transitively on the sphere bundle S M . {\displaystyle SM.} Under these assumptions the space V ∞ ( M ) G {\displaystyle {\mathcal {V}}^{\infty }(M)^{G}} of G {\displaystyle G} -invariant smooth valuations on M {\displaystyle M} is finite-dimensional; let ϕ 1 , … , ϕ m {\displaystyle \phi _{1},\ldots ,\phi _{m}} be a basis. Let A , B ∈ P ( M ) {\displaystyle A,B\in {\mathcal {P}}(M)} be differentiable polyhedra in M . {\displaystyle M.} Then integrals of the form ∫ G ϕ i ( A ∩ g B ) d g {\displaystyle \int _{G}\phi _{i}(A\cap gB)dg} are expressible as linear combinations of ϕ k ( A ) ϕ l ( B ) {\displaystyle \phi _{k}(A)\phi _{l}(B)} with coefficients c i k l {\displaystyle c_{i}^{kl}} independent of A {\displaystyle A} and B {\displaystyle B} :
∫ G ϕ i ( A ∩ g B ) d g = ∑ k , l c i k l ϕ k ( A ) ϕ l ( B ) . ( 2 ) {\displaystyle \int _{G}\phi _{i}(A\cap gB)\,dg=\sum _{k,l}c_{i}^{kl}\phi _{k}(A)\phi _{l}(B).\qquad (2)}
Formulas of this type are called kinematic formulas . Their existence in this generality was proved by Fu. [ 10 ] For the three simply connected real space forms, that is, the sphere, Euclidean space, and hyperbolic space, they go back to Blaschke , Santaló , Chern , and Federer .
Describing the kinematic formulas explicitly is typically a difficult problem. In fact already in the step from real to complex space forms, considerable difficulties arise and these have only recently been resolved by Bernig, Fu, and Solanes. [ 12 ] [ 13 ] The key insight responsible for this progress is that the kinematic formulas contain the same information as the algebra of invariant valuations V ∞ ( M ) G . {\displaystyle {\mathcal {V}}^{\infty }(M)^{G}.} For a precise statement, let k G : V ∞ ( M ) G → V ∞ ( M ) G ⊗ V ∞ ( M ) G {\displaystyle k_{G}:{\mathcal {V}}^{\infty }(M)^{G}\to {\mathcal {V}}^{\infty }(M)^{G}\otimes {\mathcal {V}}^{\infty }(M)^{G}} be the kinematic operator, that is, the map determined by the kinematic formulas ( 2 ). Let pd : V ∞ ( M ) G → V ∞ ( M ) G ∗ {\displaystyle \operatorname {pd} :{\mathcal {V}}^{\infty }(M)^{G}\to {\mathcal {V}}^{\infty }(M)^{G*}} denote the Alesker-Poincaré duality, which is a linear isomorphism. Finally let m G ∗ {\displaystyle m_{G}^{*}} be the adjoint of the product map m G : V ∞ ( M ) G ⊗ V ∞ ( M ) G → V ∞ ( M ) G . {\displaystyle m_{G}:{\mathcal {V}}^{\infty }(M)^{G}\otimes {\mathcal {V}}^{\infty }(M)^{G}\to {\mathcal {V}}^{\infty }(M)^{G}.} The Fundamental theorem of algebraic integral geometry relating operations on valuations to integral geometry, states that if the Poincaré duality is used to identify V ∞ ( M ) G {\displaystyle {\mathcal {V}}^{\infty }(M)^{G}} with V ∞ ( M ) G ∗ , {\displaystyle {\mathcal {V}}^{\infty }(M)^{G*},} then k G = m G ∗ {\displaystyle k_{G}=m_{G}^{*}} :
https://en.wikipedia.org/wiki/Valuation_(geometry)
In logic and model theory , a valuation (also called a truth assignment ) is an assignment of truth values to formal sentences that follows a truth schema .
In propositional logic, there are no quantifiers, and formulas are built from propositional variables using logical connectives. In this context, a valuation begins with an assignment of a truth value to each propositional variable. This assignment can be uniquely extended to an assignment of truth values to all propositional formulas.
In first-order logic, a language consists of a collection of constant symbols, a collection of function symbols, and a collection of relation symbols. Formulas are built out of atomic formulas using logical connectives and quantifiers. A structure consists of a set ( domain of discourse ) that determines the range of the quantifiers, along with interpretations of the constant, function, and relation symbols in the language. Corresponding to each structure is a unique truth assignment for all sentences (formulas with no free variables ) in the language.
If v {\displaystyle v} is a valuation, that is, a mapping from the atoms to the set { t , f } {\displaystyle \{t,f\}} , then the double-bracket notation is commonly used to denote a valuation; that is, [ [ ϕ ] ] v = v ( ϕ ) {\displaystyle [\![\phi ]\!]_{v}=v(\phi )} for a proposition ϕ {\displaystyle \phi } . [ 1 ]
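The unique extension of a valuation from propositional variables to all formulas can be sketched as a short recursive evaluator; the tuple encoding of formulas here is an illustrative choice, not a standard library.

```python
# A minimal sketch of a propositional valuation: a truth assignment on
# atoms, extended recursively (and uniquely) to all formulas built with
# the connectives "not", "and", "or".

def evaluate(formula, v):
    """Return [[formula]]_v, the truth value of `formula` under valuation `v`.

    Formulas are nested tuples: an atom is a string, ("not", p),
    ("and", p, q), ("or", p, q).  `v` maps atom names to booleans.
    """
    if isinstance(formula, str):          # atomic proposition: look up v
        return v[formula]
    op = formula[0]
    if op == "not":
        return not evaluate(formula[1], v)
    if op == "and":
        return evaluate(formula[1], v) and evaluate(formula[2], v)
    if op == "or":
        return evaluate(formula[1], v) or evaluate(formula[2], v)
    raise ValueError(f"unknown connective: {op}")

# The valuation v(p) = True, v(q) = False extends to the formula p ∧ ¬q:
v = {"p": True, "q": False}
print(evaluate(("and", "p", ("not", "q")), v))  # True
```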
https://en.wikipedia.org/wiki/Valuation_(logic)
Value-driven design ( VDD ) is a systems engineering strategy based on microeconomics which enables multidisciplinary design optimization . Value-driven design is being developed by the American Institute of Aeronautics and Astronautics , through a program committee of government, industry and academic representatives. [ 1 ] In parallel, the U.S. Defense Advanced Research Projects Agency has promulgated an identical strategy, calling it value-centric design , on the F6 Program . At this point, the terms value-driven design and value-centric design are interchangeable. The essence of these strategies is that design choices are made to maximize system value rather than to meet performance requirements.
This is also similar to the value-driven approach of agile software development where a project's stakeholders prioritise their high-level needs (or system features) based on the perceived business value each would deliver. [ 2 ]
Value-driven design is controversial because performance requirements are a central element of systems engineering. [ 3 ] However, value-driven design supporters claim that it can improve the development of large aerospace systems by reducing or eliminating cost overruns [ 4 ] which are a major problem, according to independent auditors. [ 5 ]
Value-driven design creates an environment that enables and encourages design optimization by providing designers with an objective function and eliminating those constraints which have been expressed as performance requirements. The objective function inputs all the important attributes of the system being designed, and outputs a score. The higher the score, the better the design. [ 6 ] Describing an early version of what is now called value-driven design, George Hazelrigg said, "The purpose of this framework is to enable the assessment of a value for every design option so that options can be rationally compared and a choice taken." [ 7 ] At the whole system level, the objective function which performs this assessment of value is called a "value model." [ 8 ] The value model distinguishes value-driven design from Multi-Attribute Utility Theory applied to design. [ 9 ] Whereas in Multi-Attribute Utility Theory, an objective function is constructed from stakeholder assessments, [ 10 ] value-driven design employs economic analysis to build a value model. [ 11 ] The basis for the value model is often an expression of profit for a business, but economic value models have also been developed for other organizations, such as government. [ 8 ]
To design a system, engineers first take system attributes that would traditionally be assigned performance requirements, like the range and fuel consumption of an aircraft, and build a system value model that uses all these attributes as inputs. Next, the conceptual design is optimized to maximize the output of the value model. Then, when the system is decomposed into components, an objective function for each component is derived from the system value model through a sensitivity analysis. [ 6 ]
A workshop exercise implementing value-driven design for a GPS satellite was conducted in 2006, and may serve as an example of the process. [ 12 ]
The dichotomy between designing to performance requirements versus objective functions was raised by Herbert Simon in an essay called "The Science of Design" in 1969. [ 13 ] Simon played both sides, saying that, ideally, engineered systems should be optimized according to an objective function, but realistically this is often too hard, so that attributes would need to be satisficed , which amounted to setting performance requirements. But he included optimization techniques in his recommended curriculum for engineers, and endorsed "utility theory and statistical decision theory as a logical framework for rational choice among given alternatives".
Utility theory was given most of its current mathematical formulation by von Neumann and Morgenstern , [ 14 ] but it was the economist Kenneth Arrow who proved the Expected Utility Theorem most broadly, which says in essence that, given a choice among a set of alternatives, one should choose the alternative that provides the greatest probabilistic expectation of utility, where utility is value adjusted for risk aversion. [ 15 ]
Ralph Keeney and Howard Raiffa extended utility theory in support of decision making, [ 10 ] and Keeney developed the idea of a value model to encapsulate the calculation of utility. [ 16 ] Keeney and Raiffa also used "attributes" to describe the inputs to an evaluation process or value model.
George Hazelrigg put engineering design, business plan analysis, and decision theory together for the first time in a framework in a paper written in 1995, which was published in 1998. [ 7 ] Meanwhile, Paul Collopy independently developed a similar framework in 1997, and Harry Cook developed the S-Model for incorporating product price and demand into a profit-based objective function for design decisions. [ 17 ]
The MIT Engineering Systems Division produced a series of papers from 2000 on, many co-authored by Daniel Hastings, in which many utility formulations were used to address various forms of uncertainty in making engineering design decisions. Saleh et al. [ 18 ] is a good example of this work.
The term value-driven design was coined by James Sturges at Lockheed Martin while he was organizing a workshop that would become the Value-Driven Design Program Committee at the American Institute of Aeronautics and Astronautics (AIAA) in 2006. [ 19 ] Meanwhile, value centric design was coined independently by Owen Brown and Paul Eremenko of DARPA in the Phase 1 Broad Agency Announcement for the DARPA F6 satellite design program in 2007. [ 20 ] Castagne et al. [ 21 ] provides an example where value-driven design was used to design fuselage panels for a regional jet .
Implementation of value-driven design on large government systems, such as NASA or European Space Agency spacecraft or weapon systems, will require a government acquisition system that directs or incentivizes the contractor to employ a value model. [ 22 ] Such a system is proposed in some detail in an essay by Michael Lippitz, Sean O'Keefe, and John White. [ 23 ] They suggest that "A program office can offer a contract in which price is a function of value", where the function is derived from a value model. The price function is structured so that, in optimizing the product design in accordance with the value model, the contractor will maximize its own profit. They call this system Value Based Acquisition .
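The incentive mechanism described above can be sketched with a hypothetical linear price function; the sharing rule and every number are invented for illustration, not drawn from the proposal itself.

```python
# Hypothetical sketch of a Value Based Acquisition price function: price
# paid to the contractor depends on assessed system value.

def contract_price(delivered_value, target_value=100.0,
                   base_price=60.0, share=0.3):
    """Base price plus a share of value above (or below) a target,
    as assessed by an agreed value model."""
    return base_price + share * (delivered_value - target_value)

def contractor_profit(delivered_value, cost):
    return contract_price(delivered_value) - cost

# A design change that raises system value by 10 units at a cost of 2
# raises the contractor's profit, aligning its incentive with the value model:
print(contractor_profit(110.0, cost=52.0) - contractor_profit(100.0, cost=50.0))
```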
https://en.wikipedia.org/wiki/Value-driven_design
Value-stream mapping , also known as material- and information-flow mapping , [ 1 ] is a lean [ 2 ] -management method for analyzing the current state and designing a future state for the series of events that take a product or service from the beginning of the specific process until it reaches the customer. A value stream map is a visual [ 2 ] tool that displays all critical steps in a specific process and easily quantifies the time and volume taken at each stage. Value stream maps show the flow of both materials and information as they progress through the process. [ 3 ]
Whereas a value stream map represents a core business process that adds value to a material product, a value chain diagram shows an overview of all activities within a company. [ 3 ] Other business activities may be represented in "value stream diagrams" and/or other kinds of diagram that represent business processes that create and use business data.
The purpose of value-stream mapping is to identify and remove or reduce "waste" in value streams, [ 2 ] thereby increasing the efficiency of a given value stream. Waste removal is intended to increase productivity by creating leaner operations which in turn make waste and quality problems easier to identify. [ 4 ]
Value-stream mapping has supporting methods that are often used in lean environments to analyze and design flows at the system level (across multiple processes).
Although value-stream mapping is often associated with manufacturing, it is also used in logistics, supply chain, service related industries, healthcare, [ 5 ] [ 6 ] software development , [ 7 ] [ 8 ] product development , [ 9 ] project management, [ 2 ] and administrative and office processes. [ 10 ]
Daniel T. Jones (1995) identifies seven commonly accepted types of waste. These terms are updated from Toyota's operating model " The Toyota Way " ( Toyota Production System, TPS ) original nomenclature ( muda ): [ 11 ] transportation, inventory, motion, waiting, overproduction, over-processing, and defects.
Yasuhiro Monden (1994) identifies three types of operations: [ 12 ] non-value adding (NVA), necessary but non-value adding (NNVA), and value adding (VA).
NNVA activities may also be referred to as "sustaining non-value adding", i.e. they have to be done, or they are necessary to sustain the business but do not contribute to customer requirements. [ 13 ]
For additional views on waste, see Lean manufacturing .
There are two kinds of value stream maps, current state and future state . The current state value stream map is used to determine what the process currently looks like, the future state value stream map focuses on what the process will ideally look like after process improvements have occurred to the value stream. [ 3 ]
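The quantification a current-state map supports is typically a summary of total lead time versus value-added time. A minimal sketch, with step names and times that are hypothetical example data:

```python
# Timing summary for a current-state value stream map.  Each entry is
# (step name, value-adding seconds, waiting seconds before the step);
# the data below is invented for illustration.

steps = [
    ("stamp",    40,  3600),
    ("weld",     55,  7200),
    ("assemble", 120, 1800),
    ("ship",     0,   86400),
]

value_added = sum(v for _, v, _ in steps)          # time spent adding value
lead_time = sum(v + w for _, v, w in steps)        # total door-to-door time

print(f"value-added time: {value_added} s")
print(f"total lead time:  {lead_time} s")
print(f"activity ratio:   {value_added / lead_time:.2%}")
```

A ratio this low (well under 1%) is the usual motivation for targeting waiting time rather than the process steps themselves.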
The current state value stream map must be created before the future state map and is created by observing the process and tracking the information and material flow. [ 14 ] The value stream map is then created using the following symbols: [ 15 ]
In a build-to-the-standard form, Shigeo Shingo [ 16 ] suggests that the value-adding steps be drawn across the centre of the map and the non–value-adding steps be represented in vertical lines at right angles to the value stream. Thus, the activities become easily separated into the value stream, which is the focus of one type of attention, and the "waste" steps, another type. He calls the value stream the process and the non-value streams the operations. The thinking here is that the non–value-adding steps are often preparatory or tidying up to the value-adding step and are closely associated with the person or machine/workstation that executes that value-adding step. Therefore, each vertical line is the "story" of a person or workstation whilst the horizontal line represents the "story" of the product being created.
Value-stream mapping is a recognised method used as part of Lean Six Sigma methodologies. [ 17 ]
Value-stream mapping analyzes both material (artifact) and information flow. [ 18 ] The following two resources exemplify the use of VSM in the context of software process improvement in industrial settings:
Hines and Rich (1997) defined seven value-stream mapping tools. [ 21 ] These are:
https://en.wikipedia.org/wiki/Value-stream_mapping
In mathematics, value may refer to several strongly related notions.
In general, a mathematical value may be any definite mathematical object . In elementary mathematics , this is most often a number – for example, a real number such as π or an integer such as 42.
For example, if the function f is defined by f ( x ) = 2 x ² − 3 x + 1 , then assigning the value 3 to its argument x yields the function value 10, since f (3) = 2·3² − 3·3 + 1 = 10 .
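The same evaluation, as a short program:

```python
# Evaluate f(x) = 2x^2 - 3x + 1 at x = 3.
def f(x):
    return 2 * x**2 - 3 * x + 1

print(f(3))  # 10
```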
If the variable, expression or function only assumes real values, it is called real-valued . Likewise, a complex-valued variable, expression or function only assumes complex values .
https://en.wikipedia.org/wiki/Value_(mathematics)
Value engineering ( VE ) is a systematic analysis of the functions of various components and materials to lower the cost of goods, products and services with a tolerable loss of performance or functionality. Value, as defined, is the ratio of function to cost . Value can therefore be manipulated by either improving the function or reducing the cost . It is a primary tenet of value engineering that basic functions be preserved and not be reduced as a consequence of pursuing value improvements. [ 3 ] The term "value management" is sometimes used as a synonym of "value engineering", and both promote the planning and delivery of projects with improved performance. [ 4 ]
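The function-to-cost ratio can be sketched as a simple comparison of design alternatives; the function scores and costs below are invented example data, and scoring "function" numerically is itself a modeling choice.

```python
# Value engineering's working definition: value = function / cost.
# Alternatives with hypothetical (function score, cost) pairs.

def value_ratio(function_score, cost):
    return function_score / cost

alternatives = {
    "baseline": (80.0, 100.0),
    "cheaper":  (76.0, 80.0),   # small function loss, larger cost cut
    "upgraded": (95.0, 130.0),
}

best = max(alternatives, key=lambda k: value_ratio(*alternatives[k]))
for name, (fscore, cost) in alternatives.items():
    print(name, round(value_ratio(fscore, cost), 3))
print("best value:", best)
```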
The reasoning behind value engineering is as follows: if marketers expect a product to become practically or stylistically obsolete within a specific length of time, they can design it to only last for that specific lifetime. The products could be built with higher-grade components, but with value engineering they are not because this would impose an unnecessary cost on the manufacturer, and to a limited extent also an increased cost on the purchaser. Value engineering will reduce these costs. A company will typically use the least expensive components that satisfy the product's lifetime projections at a risk of product and company reputation.
Due to the very short life spans that often result from this "value engineering technique", however, planned obsolescence has become associated with product deterioration and inferior quality. Vance Packard once claimed this practice gave engineering as a whole a bad name, as it directed creative engineering energies toward short-term market ends. Philosophers such as Herbert Marcuse and Jacque Fresco have also criticized the economic and societal implications of this model.
Value engineering began at General Electric Co. during World War II . Because of the war, there were shortages of skilled labour, raw materials, and component parts. Lawrence Miles , Jerry Leftow, and Harry Erlicher at G.E. looked for acceptable substitutes. They noticed that these substitutions often reduced costs, improved the product, or both. What started out as an accident of necessity was turned into a systematic process. They called their technique "value analysis" or "value control". [ 5 ]
The U.S. Navy's Bureau of Ships established a formal program of value engineering, overseen by Miles and Raymond Fountain, also from G.E., in 1957. [ 5 ]
Since the 1970s, the US Government's General Accounting Office (GAO) has recognised the benefit of value engineering. In a 1992 statement, L. Nye Stevens, Director of Government Business Operations Issues within the GAO, referred to "considerable work" done by the GAO on value engineering and the office's recommendation that VE be adopted by "all federal construction agencies". [ 6 ]
Dr. Paul Collopy, UAH Professor, ISEEM Department, has recommended an improvement to value engineering known as Value-Driven Design. [ 7 ]
Value engineering is sometimes taught within the project management , industrial engineering or architecture body of knowledge as a technique in which the value of a system's outputs is optimized by crafting a mix of performance (function) and costs. It is based on an analysis investigating systems, equipment, facilities, services, and supplies for providing necessary functions at the lowest life-cycle cost while meeting the required targets in performance, reliability, quality, and safety.
VE follows a structured thought process based exclusively on "function", i.e. what something "does", not what it "is". For example, a screwdriver that is being used to stir a can of paint has a "function" of mixing the contents of the paint can, not its original function of driving a screw into a screw-hole. In value engineering, "functions" are always described in a two-word abridgment consisting of an active verb and a measurable noun (what is being done – the verb – and what it is being done to – the noun), phrased in the most non-prescriptive way possible. In the screwdriver and can of paint example, the most basic function would be "blend liquid", which is less prescriptive than "stir paint", a phrasing that limits both the action (stirring) and the application (only paint).
Value engineering uses rational logic (a unique "how" – "why" questioning technique) and the analysis of function to identify relationships that increase value. It is considered a quantitative method, similar to the scientific method, which focuses on hypothesis-conclusion approaches to test relationships, and to operations research, which uses model building to identify predictive relationships.
In the United States, value engineering is specifically mandated for federal agencies by section 4306 of the National Defense Authorization Act for Fiscal Year 1996 , [ 8 ] which amended the Office of Federal Procurement Policy Act (41 U.S.C. 401 et seq.):
An earlier bill, HR 281, the "Systematic Approach for Value Engineering Act" was proposed in 1990, which would have mandated the use of VE in major federally-sponsored construction, design or IT system contracts. This bill identified the objective of a value engineering review as "reducing all costs (including initial and long-term costs) and improving quality, performance, productivity, efficiency, promptness, reliability, maintainability, and aesthetics". [ 6 ] [ 9 ]
Federal Acquisition Regulation (FAR) part 48 provides direction to federal agencies on the use of VE techniques. [ 10 ] The FAR provides for
In the United Kingdom, the lawfulness of undertaking value engineering discussions with a supplier in advance of contract award is one of the issues which was highlighted during the inquiry into the Grenfell Tower fire of 2017. [ 11 ] The inquiry report was highly sceptical of the whole endeavour of value engineering:
In theory, “value engineering” involves making changes to the design or specification that reduce cost without sacrificing performance, but in our view it is in practice little more than a euphemism for reducing cost, because substituting a cheaper product for a more expensive one or altering the design or scope of the work in a way that reduces cost almost invariably involves a compromise of some kind, whether in content, performance or appearance. [ 12 ]
The Society of American Value Engineers (SAVE) was established in 1959. Since 1996, it has been known as SAVE International. [ 13 ]
|
https://en.wikipedia.org/wiki/Value_engineering
|
The value of Earth , i.e. the net worth of our planet, is a debated concept both in terms of the definition of value, as well as the scope of " Earth ". Since most of the planet's substance is not available as a resource , "earth" has been equated with the sum of all ecosystem services as evaluated in ecosystem valuation or full-cost accounting . [ 1 ]
The price of the services that the world's ecosystems provide to humans was estimated in 1997 at $33 trillion per annum, with a confidence interval ranging from $16 trillion to $54 trillion. Compared with the combined gross national product (GNP) of all countries at about the same time ($18 trillion), ecosystems would appear to be providing 1.8 times as much economic value as people are creating. [ 2 ] The details of the result have been questioned, in particular the GNP figure, which is believed to be closer to $28 trillion (which would make ecosystem services only 1.2 times as valuable), although the basic approach was readily acknowledged. [ 3 ] The World Bank gives the total gross domestic product (GDP) in 1997 as $31.435 trillion, which would roughly equal the ecosystem value. [ 4 ] Criticisms were addressed in a later publication, which gave an estimate of $125 trillion/yr for ecosystem services in 2011, making them twice as valuable as GDP, with a yearly loss of $4.3–20.2 trillion/yr. [ 5 ]
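The ratios quoted above follow directly from the cited estimates; a quick sanity check (all figures in trillions of US dollars per year, as given in the text):

```python
# Estimates from the 1997 study and its critics (US$ trillions/yr).
ecosystem_services_1997 = 33.0   # central estimate (range $16-54 trillion)
gnp_original = 18.0              # combined GNP used in the original study
gnp_revised = 28.0               # critics' revised GNP figure

print(round(ecosystem_services_1997 / gnp_original, 1))  # ratio in the study
print(round(ecosystem_services_1997 / gnp_revised, 1))   # ratio per critics
```

This reproduces the 1.8 and 1.2 multiples discussed in the text.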
The BBC has published a website that lists various types of resources on various scales together with their current estimated values from different sources, among them BBC Earth , and Tony Juniper in collaboration with The United Nations Environment Programme World Conservation Monitoring Centre (UNEP-WCMC). [ 6 ]
|
https://en.wikipedia.org/wiki/Value_of_Earth
|
Value of information (VOI or VoI) is the amount a decision maker would be willing to pay for information prior to making a decision.
VoI is sometimes distinguished into value of perfect information , also called value of clairvoyance (VoC) , and value of imperfect information . They are closely related to the widely known expected value of perfect information (EVPI) and expected value of sample information (EVSI). Note that VoI is not necessarily equal to "value of decision situation with perfect information" minus "value of current decision situation" as commonly understood.
A simple example best illustrates the concept: consider a decision situation with one decision, for example choosing a 'Vacation Activity', and one uncertainty, for example what the 'Weather Condition' will be. But we will only learn the 'Weather Condition' after we have decided on and begun the 'Vacation Activity'.
The above definition illustrates that the value of imperfect information of any uncertainty can always be framed as the value of perfect information, i.e., VoC, of another uncertainty, hence only the term VoC will be used onwards.
Consider a general decision situation [ 1 ] having n decisions ( d 1 , d 2 , d 3 , ..., d n ) and m uncertainties ( u 1 , u 2 , u 3 , ..., u m ). The rationality assumption in standard individual decision-making theory states that decisions made and information learned are not forgotten, i.e., the decision-maker has perfect recall . This assumption translates into the existence of a linear ordering of these decisions and uncertainties such that:
Consider cases where the decision-maker is enabled to know the outcome of some additional uncertainties earlier in his/her decision situation, i.e., some u i are moved to appear earlier in the ordering. In such case, VoC is quantified as the highest price which the decision-maker is willing to pay for all those moves.
The standard framework is then further generalized in the team decision analysis setting, where there is typically incomplete sharing of information among team members facing the same decision situation. In such cases, decisions made and information known might not be known when later decisions belonging to different team members are made, i.e., there might not exist a linear ordering of decisions and uncertainties satisfying the perfect recall assumption. VoC thus captures the value of being able to know "not only additional uncertainties but also additional decisions already made by other team members" before making some other decisions in the team decision situation. [ 2 ]
There are four characteristics of VoI that always hold for any decision situation:
VoC is derived strictly following its definition as the monetary amount that is just large enough to offset the additional benefit of getting more information. In other words, VoC is calculated iteratively until
A special case is when the decision-maker is risk neutral where VoC can be simply computed as
This special case is how expected value of perfect information and expected value of sample information are calculated where risk neutrality is implicitly assumed. For cases where the decision-maker is risk averse or risk seeking , this simple calculation does not necessarily yield the correct result, and iterative calculation is the only way to ensure correctness.
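In the risk-neutral case, VoC reduces to the familiar EVPI computation: compare the best expected payoff without information against the expected payoff when the best action can be chosen in each state. A minimal sketch, using an assumed two-state payoff table (the actions, states, and numbers are illustrative, not from the source):

```python
# Illustrative two-action, two-state payoff table (all values assumed).
payoff = {
    "outdoor": {"good": 100, "bad": 10},
    "indoor":  {"good": 40,  "bad": 50},
}
p = {"good": 0.6, "bad": 0.4}  # assumed state probabilities

# Without information: pick the single action with the best expected payoff.
ev_no_info = max(sum(p[s] * v[s] for s in p) for v in payoff.values())

# With perfect information: pick the best action in each state, then average.
ev_perfect = sum(p[s] * max(v[s] for v in payoff.values()) for s in p)

evpi = ev_perfect - ev_no_info
print(ev_no_info, ev_perfect, evpi)
```

With these numbers the risk-neutral decision maker would pay up to 16 for clairvoyance about the state.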
Decision trees and influence diagrams are most commonly used in representing and solving decision situations as well as associated VoC calculation. The influence diagram, in particular, is structured to accommodate team decision situations where incomplete sharing of information among team members can be represented and solved very efficiently. While decision trees are not designed to accommodate team decision situations, they can do so by augmenting them with information sets widely used in game trees .
VoC is often illustrated using the example of paying for a consultant in a business transaction, who may either be perfect ( expected value of perfect information ) or imperfect (expected value of imperfect information). [ 3 ]
In a typical consultant situation, the consultant would be paid up to cost c for their information, based on the expected cost E without the consultant and the revised cost F with the consultant's information. In a perfect information scenario, E can be defined as the sum-product of the probability of a good outcome g times its cost k , plus the probability of a bad outcome (1-g) times its cost k' , where k' > k:
E = gk + (1-g)k',
which is revised to reflect expected cost F of perfect information including consulting cost c . The perfect information case assumes the bad outcome does not occur due to the perfect information consultant.
F = g(k+c)
We then solve for values of c for which F<E to determine when to pay the consultant.
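Solving F < E for c gives the break-even fee in closed form; a small sketch (the numeric values are illustrative assumptions, not from the source):

```python
def max_consultant_fee(g, k_bad):
    """Break-even consultant fee for the perfect-information case.

    F < E means g*(k + c) < g*k + (1-g)*k_bad, which reduces to
    c < (1-g)*k_bad / g; the good-outcome cost k cancels out.
    """
    return (1 - g) * k_bad / g

# Illustrative numbers (assumptions): 90% chance of the good outcome,
# bad-outcome cost k' = 500.
print(max_consultant_fee(g=0.9, k_bad=500))
```

With these numbers, any fee below about 55.6 makes the perfect consultant worth hiring.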
In the case of a recursive decision tree , we often have an additional cost m that results from correcting the error, and the process restarts such that the expected cost will appear on both the left and right sides of our equations. [ 4 ] This is typical of hiring-rehiring decisions or value chain decisions for which assembly line components must be replaced if erroneously ordered or installed:
E = gk + (1-g)(k'+m+E)
F = g(k+c)
If the consultant is imperfect with frequency f , then the consultant cost is solved with the probability of error included:
F = g(k+c)(1-f) + g(k+c+F)f + (1-g)(1-f)(k+c+F) + (1-g)f(k'+c+m+F)
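Because E and F appear on both sides of the recursive equations above, they can be solved by simple fixed-point iteration; every recursive term is weighted by a probability strictly less than one, so the iteration converges. A sketch with illustrative parameter values (all numbers are assumptions):

```python
def solve_recursive(g, k, k_bad, m, c, f, iters=200):
    """Iterate the two self-referential cost equations:
    E = g*k + (1-g)*(k'+m+E)
    F = g*(k+c)*(1-f) + g*(k+c+F)*f
        + (1-g)*(1-f)*(k+c+F) + (1-g)*f*(k'+c+m+F)
    """
    E = F = 0.0
    for _ in range(iters):
        E = g * k + (1 - g) * (k_bad + m + E)
        F = (g * (k + c) * (1 - f)
             + g * (k + c + F) * f
             + (1 - g) * (1 - f) * (k + c + F)
             + (1 - g) * f * (k_bad + c + m + F))
    return E, F

# Illustrative parameters (assumptions): g=0.9, k=100, k'=500,
# correction cost m=50, consultant fee c=20, consultant error rate f=0.1.
E, F = solve_recursive(g=0.9, k=100, k_bad=500, m=50, c=20, f=0.1)
print(round(E, 2), round(F, 2))
```

With these assumed numbers F comes out below E, so hiring the imperfect consultant at fee 20 would pay off.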
VoI is also used in inspection and maintenance planning for engineered structures. It can be used to analyze the extent to which the value of the information collected during a structure's service life (for example, from inspections in the context of integrity management) is affected not only by random measurement errors but also by biases (systematic errors), taking the dependency between the collections into account. [ 5 ]
|
https://en.wikipedia.org/wiki/Value_of_information
|
Value sensitive design (VSD) is a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner. [ 1 ] [ 2 ] VSD originated within the field of information systems design [ 3 ] and human-computer interaction [ 4 ] to address design issues within the fields by emphasizing the ethical values of direct and indirect stakeholders . It was developed by Batya Friedman and Peter Kahn at the University of Washington starting in the late 1980s and early 1990s. Later, in 2019, Batya Friedman and David Hendry wrote a book on this topic called "Value Sensitive Design: Shaping Technology with Moral Imagination". [ 5 ] Value sensitive design takes human values into account in a well-defined manner throughout the whole process. [ 6 ] Designs are developed using an investigation consisting of three phases: conceptual, empirical and technological. [ 7 ] These investigations are intended to be iterative , allowing the designer to modify the design continuously. [ 8 ]
The VSD approach is often described as an approach that is fundamentally predicated on its ability to be modified depending on the technology, value(s), or context of use. [ 9 ] [ 10 ] Some examples of modified VSD approaches are Privacy by Design which is concerned with respecting the privacy of personally identifiable information in systems and processes. [ 11 ] Care-Centered Value Sensitive Design (CCVSD) proposed by Aimee van Wynsberghe is another example of how the VSD approach is modified to account for the values central to care for the design and development of care robots. [ 12 ]
VSD uses an iterative design process that involves three types of investigations: conceptual, empirical and technical. Conceptual investigations aim at understanding and articulating the various stakeholders of the technology, as well as their values and any values conflicts that might arise for these stakeholders through the use of the technology. Empirical investigations are qualitative or quantitative design research studies used to inform the designers' understanding of the users' values, needs, and practices. Technical investigations can involve either analysis of how people use related technologies, or the design of systems to support values identified in the conceptual and empirical investigations. [ 13 ] Friedman and Hendry account seventeen methods, including their main purpose, an overview of its function as well as key references: [ 5 ]
Two commonly cited criticisms are critiques of the heuristics of values on which VSD is built. [ 37 ] [ 38 ] These critiques have been forwarded by Le Dantec et al. [ 39 ] and Manders-Huits. [ 40 ] Le Dantec et al. argue that formulating a pre-determined list of implicated values runs the risk of ignoring important values that can be elicited from any given empirical case by mapping those values a priori. [ 39 ] Manders-Huits instead takes on the concept of ‘values’ itself within VSD as the central issue. She argues that the traditional VSD definition of values as “what a person or group of people consider important in life” is nebulous and runs the risk of conflating stakeholders' preferences with moral values. [ 40 ]
Wessel Reijers and Bert Gordijn have built upon the criticisms of Le Dantec et alia and Manders-Huits that the value heuristics of VSD are insufficient given their lack of moral commitment. [ 38 ] They propose that a heuristic of virtues stemming from a virtue ethics approach to technology design, mostly influenced by the works of Shannon Vallor , provides a more holistic approach to technology design. Steven Umbrello has criticized this approach arguing that not only can the heuristic of values be reinforced [ 41 ] but that VSD does make moral commitments to at least three universal values: human well-being, justice and dignity. [ 37 ] [ 5 ] Batya Friedman and David Hendry, in "Value Sensitive Design: Shaping Technology with Moral Imagination", argue that although earlier iterations of the VSD approach did not make explicit moral commitments, it has since evolved over the past two decades to commit to at least those three fundamental values. [ 5 ]
VSD as a standalone approach has also been criticized as being insufficient for the ethical design of artificial intelligence . [ 42 ] This criticism is predicated on the self-learning and opaque artificial intelligence techniques like those stemming from machine learning and, as a consequence, the unforeseen or unforeseeable values or disvalues that may emerge after the deployment of an AI system. Steven Umbrello and Ibo van de Poel propose a modified VSD approach that uses the Artificial Intelligence for Social Good (AI4SG) [ 43 ] factors as norms to translate abstract philosophical values into tangible design requirements. [ 44 ] What they propose is that full-lifecycle monitoring is necessary to encourage redesign in the event that unwanted values manifest themselves during the deployment of a system.
|
https://en.wikipedia.org/wiki/Value_sensitive_design
|
Value tree analysis is a multi-criteria decision-making (MCDM) method in which the decision-relevant attributes of each choice are weighted to arrive at a preference ordering for the decision maker. [ 1 ] Usually, each choice's attribute-specific values are aggregated into an overall score. Decision analysts (DAs) distinguish two types of preference. [ 2 ] Value preferences compare alternatives when there is no uncertainty, while risk preferences capture the decision maker's attitude to risk taking under uncertainty. This learning package focuses on deterministic choices, namely value theory , and in particular a decision analysis tool called a value tree. [ 2 ]
The concept of utility was first used by Daniel Bernoulli (1738) in the 1730s to explain the evaluation of the St Petersburg paradox , a specific uncertain gamble. He argued that money alone was not an adequate measure of value: for an individual, the worth of money is a non-linear function. This insight led to the emergence of utility theory , a numerical measure indicating how much value alternative choices have. With the development of decision analysis, utility came to play an important role in the explanation of economic behaviour, and utilitarian philosophers such as Bentham and Mill also used it as an instrument for building a theory of ethics. Nevertheless, there was at first no way of measuring an individual's utility function, and the theory had little practical importance. Over time, utility theory acquired a solid theoretical foundation: game theory came to be used to explain the behaviour of rational agents engaged in conflict with one another, and in 1944 John von Neumann and Oskar Morgenstern 's Theory of Games and Economic Behavior was published. Utility theory has since become one of the key tools that researchers and practitioners from statistics and operations research use to help decision makers facing hard decisions. Decision analysts distinguish two sorts of utility; the attitude of decision makers towards uncertain risk is captured by risk preference. [ 3 ]
The goal of the value tree analysis process is to offer a well-organized way to think about and discuss alternatives, and to support the subjective judgements that are critical for good decisions. The phases of the value tree analysis process are as follows:
These phases are usually extensive and iterative. For example, structuring the problem, collecting related information, and modeling the DM's preferences often require a lot of work. The DM's perception of the problem, and preferences for outcomes not previously considered, may change and evolve during this process.
The value tree is an effective technique for clarifying and improving goals and values in several respects. Tree analysis gives a visual form to problems that were previously available only in verbal form. In addition, separate aspects, thoughts and opinions are united into a single visual representation, which yields greater clarity, stimulates creative thinking, and supports constructive communication.
We take the steps below to create a value tree analysis with an example to help illustrate the steps: [ 4 ]
Step 1: Initial pool
Begin with a free brainstorming of all the values, by which we mean everything related to the decision: the goals and criteria, the demands, and anything else relevant to decision making. Write each value down on a separate piece of paper.
(A) Begin the process with several things:
(B) Once you've exhausted your thoughts after this very open phase, consider the following topics to help you come up with comprehensive values, interests, and concerns related to your decision:
Consider who is affected by the decision and what their values might be. Stakeholders may be family, friends, neighbors, society, offspring or other species; they can be anyone who might be affected by your decision, whether intentionally or not.
A lack of awareness of such intangible consequences can easily lead to decisions we later regret. Moreover, when our intuition disagrees with a thorough analysis of the decision, the cause is often an underlying intangible consequence of which we are not aware.
Step 2: Clustering
Once you run out of ideas, cluster them: move the pieces of paper around until similar ideas are gathered together.
Step 3: Labeling
Mark each group with a higher-level value that holds it together, to make each element clearer.
[Example]
As a simplified example, let us assume that some of the initial values we propose are self-actualization, family, safety, friends and health. Health, safety and self-actualization can be grouped together and labeled "SELF", while family and friends can be grouped together and labeled "OTHERS".
Step 4: Moving up the tree
Seeing whether these groups can be grouped into still larger groups
[Example]
SELF and OTHERS group into OVERALL VALUE.
Step 5: Moving down the tree
Also seeing if these groups can be divided into still smaller sub-groups.
[Example]
SELF-ACTUALIZATION could be divided into WORK and RECREATION.
Step 6: Moving across the tree
Asking whether any additional ideas can come out at a given level (moving across the tree) is another valid way to bring new ideas to the tree.
[Example]
In addition to FAMILY and FRIENDS, we could add SOCIETY.
The diagram on the right shows the final result of the (still simplified) example. Bold italic indicates the basic values that we did not write down originally, but thought of while trying to fill in the tree. [ 4 ]
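The six steps above produce a hierarchy. As an illustrative sketch (the nested-dictionary representation is our own choice, not part of the method), the simplified example tree can be stored as a data structure and traversed to list its bottom-level values:

```python
# Simplified example tree from the steps above (leaf nodes are empty dicts).
value_tree = {
    "OVERALL VALUE": {
        "SELF": {
            "HEALTH": {},
            "SAFETY": {},
            "SELF-ACTUALIZATION": {"WORK": {}, "RECREATION": {}},
        },
        "OTHERS": {"FAMILY": {}, "FRIENDS": {}, "SOCIETY": {}},
    }
}

def leaves(tree):
    """Return the bottom-level values of a value tree, left to right."""
    out = []
    for name, sub in tree.items():
        out.extend(leaves(sub) if sub else [name])
    return out

print(leaves(value_tree))
```

The bottom-level values produced by such a traversal are the attributes that later receive weights in the analysis.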
PRIME Decisions is a decision support tool that uses the PRIME method to analyze incomplete preference information. It also offers novel features in support of an interactive decision process, including an elicitation tour. Because its practitioners benefit from the explicit recognition of incomplete information, PRIME Decisions is seen as a catalyst for further applied work. [ 5 ]
Web-HIPRE, a Java applet, supports multiple criteria decision analysis and provides a common platform for individual and group decision making. Users can easily access the model and work on it simultaneously at any time. It is also possible to define links to other websites, so that other kinds of information (geographic data, or media files describing the criteria or alternatives) can be attached to the model, significantly improving the quality of decision support. [ 6 ]
Some indicators obtained through process analysis are of great help in value tree analysis, especially in the value decomposition of internal operational indicators, where the driving indicators of a first-level process indicator are usually the indicators of its second-level sub-processes. For instance, the new product launch cycle (from R&D project to production) is driven by two processes within the company: R&D and testing. A standardized R&D and testing process is a key success factor for improving the speed of innovation, and the process indicators of development cycle, test cycle and sample acceptance are the vital elements driving the new product launch cycle indicator. Combining process analysis is therefore of great significance for the decomposition of indicator values, especially for internal operational indicators. Examples of the main application areas are shown below: [ 7 ]
Allocating the engineering budget for products and projects each year is always a challenge. With value tree analysis, aspects such as strategic fit, which have no natural evaluation measure but may play a significant role in decision-making, can be included in the analysis. Furthermore, the explicit modelling of the relevant facts is likely to improve communication, and it also provides a basis for justified decisions.
Since the risk in many R&D programmes is high, a good justification may be as important as the decision itself. Value tree analysis offers a tool to support the reasoning behind the selection of an R&D programme and to model the facts affecting the decision.
For instance, the analysis of new strategies for merchandising gasoline and other products through full-facility service stations.
For instance, organization of negotiations between several parties in order to identify compromise regulations for acid rain and identify the objectives of the regulations.
Carrying out an evaluation of subcontractors and analyzing the criteria that should be used.
For instance, organizing a debate about nuclear power, aiding the decision process, and studying value differences between the decision-makers.
In addition to decision-making problems, value tree analysis also serves other purposes.
For instance, a scale which measures the worth of military targets.
As value tree analysis costs little and requires little computation, it is one of the best choices for time-sensitive variable selection in empirical pilot healthcare studies. Moreover, it offers a well-structured, strategic process for decision-making, so that pilot study and patient data constraints can be accounted for and the value for study stakeholders maximized. [ 1 ]
Value tree analysis supports creative and critical thinking and organizes thoughts in a logical way. Moreover, when a decision comes up, it can also be an effective way to reflect on one's core goals and values. With that analysis done, one can then actively look for decision opportunities. [ 8 ] [ 9 ] [ 10 ]
The software tools of value tree analysis are shown in the picture below: [ 11 ]
|
https://en.wikipedia.org/wiki/Value_tree_analysis
|
A valve RF amplifier ( UK and Aus. ) or tube amplifier ( U.S. ) is a device for electrically amplifying the power of an electrical radio frequency signal .
Low to medium power valve amplifiers for frequencies below the microwaves were largely replaced by solid state amplifiers during the 1960s and 1970s, initially for receivers and low power stages of transmitters, transmitter output stages switching to transistors somewhat later. Specially constructed valves are still in use for very high power transmitters, although rarely in new designs. [ 1 ] [ citation needed ]
Valves are high-voltage/low-current devices in comparison with transistors . Tetrode and pentode valves have very flat anode current vs. anode voltage characteristics, indicating high anode output impedance . Triodes show a stronger relationship between anode voltage and anode current.
The high working voltage makes them well suited for radio transmitters and valves remain in use today for very high power short wave radio transmitters, where solid state techniques would require many devices in parallel, and very high DC supply currents. High power solid state transmitters also require a complex combination of transformers and tuning networks, whereas a valve-based transmitter would use a single, relatively simple tuned network.
Thus while solid state high power short wave transmitters are technically possible, economic considerations still favor valves above 3 MHz and 10,000 watts.
Radio amateurs also use valve amplifiers in the 500–1500 watt range mainly for economic reasons.
Valve audio amplifiers typically amplify the entire audio range between 20 Hz and 20 kHz or higher. They use an iron core transformer to provide a suitable high impedance load to the valve(s) while driving a speaker, which is typically 8 Ohms. Audio amplifiers normally use a single valve in class A , or a pair in class B or class AB .
An RF power amplifier is tuned to a single frequency as low as 18 kHz and as high as the UHF range of frequencies, for the purpose of radio transmission or industrial heating. They use a narrow tuned circuit to provide the valve with a suitably high load impedance and feed a load that is typically 50 or 75 Ohms. RF amplifiers normally operate class C or class AB .
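The "suitably high load impedance" mentioned above can be estimated with a commonly used first approximation, R_L ≈ V_a² / (2·P_out); the tuned output network then transforms this down to the 50 or 75 ohm load. A sketch with illustrative figures (the voltage and power values are assumptions, not from the source):

```python
def anode_load_impedance(v_anode, p_out):
    """First-approximation anode load resistance for an RF power stage.

    R_L = V_a**2 / (2 * P_out) is a rule-of-thumb estimate; the tank
    circuit matches this impedance to the 50-75 ohm antenna load.
    """
    return v_anode ** 2 / (2 * p_out)

# Illustrative figures (assumptions): 2 kV anode supply, 1 kW output power.
r_l = anode_load_impedance(2000, 1000)
print(r_l)  # a load of thousands of ohms, far above the 50-ohm output
```

This illustrates why a valve stage needs an impedance-transforming network while a low-impedance solid state stage can use a broadband transformer instead.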
Although the frequency ranges for audio amplifiers and RF amplifiers overlap, the class of operation, method of output coupling and percent operational bandwidth will differ. Power valves are capable of high frequency response, up to at least 30 MHz. Indeed, many of the Directly Heated Single Ended Triode ( DH-SET ) audio amplifiers use radio transmitting valves originally designed to operate as RF amplifiers in the high frequency range. [ citation needed ]
The most efficient valve-based RF amplifiers operate class C . If used with no tuned circuit in the output, this would distort the input signal, producing harmonics. However, class C amplifiers normally use a high Q output network which removes the harmonics, leaving an undistorted sine wave identical to the input waveform. Class C is suitable only for amplifying signals with a constant amplitude, such as FM , FSK , and some CW ( Morse code ) signals. Where the amplitude of the input signal to the amplifier varies as with single-sideband modulation , amplitude modulation , video and complex digital signals, the amplifier must operate class A or AB, to preserve the envelope of the driving signal in an undistorted form. Such amplifiers are referred to as linear amplifiers .
It is also common to modify the gain of an amplifier operating class C so as to produce amplitude modulation . If done in a linear manner, this modulated amplifier is capable of low distortion. The output signal can be viewed as a product of the input RF signal and the modulating signal.
The development of FM broadcasting improved fidelity by using a greater bandwidth which was available in the VHF range, and where atmospheric noise was absent. FM also has an inherent ability to reject noise, which is mostly amplitude modulated. Valve technology suffers high-frequency limitations due to cathode-anode transit time. However, tetrodes are successfully used into the VHF range and triodes into the low GHz range. Modern FM broadcast transmitters use both valve and solid state devices, with valves tending to be more used at the highest power levels. FM transmitters operate class C with very low distortion.
Today's digital radio that carries coded data over various phase modulations (such as GMSK , QPSK , etc.) and also the increasing demand for spectrum have forced a dramatic change in the way radio is used, e.g. the cellular radio concept. Today's cellular radio and digital broadcast standards are extremely demanding in terms of the spectral envelope and out of band emissions that are acceptable (in the case of GSM for example, −70 dB or better just a few hundred kilohertz from center frequency). Digital transmitters must therefore operate in the linear modes, with much attention given to achieving low distortion.
(High voltage/high power)
Valve stages were used to amplify the received radio frequency signals, the intermediate frequencies, the video signal and the audio signals at the various points in the receiver. Historically (pre-WWII), "transmitting tubes" were among the most powerful tubes available; they were usually directly heated by thoriated filaments that glowed like light bulbs. Some tubes were built to be very rugged, capable of being driven so hard that the anode would itself glow cherry red, the anodes being machined from solid material (rather than fabricated from thin sheet) so as to withstand this without distorting when heated. Notable tubes of this type are the 845 and 211. Later beam power tubes such as the 807 and the (directly heated) 813 were also used in large numbers in (especially military) radio transmitters.
Today, radio transmitters are overwhelmingly solid state, even at microwave frequencies (cellular radio base stations). Depending on the application, however, a fair number of radio frequency amplifiers continue to use valve construction because of its simplicity: it takes several output transistors, with complex splitting and combining circuits, to equal the output power of a single valve.
Valve amplifier circuits are significantly different from broadband solid-state circuits. Solid-state devices have a very low output impedance, which allows matching via a broadband transformer covering a large range of frequencies, for example 1.8 to 30 MHz. With either class C or AB operation, these amplifiers must include low-pass filters to remove harmonics. While the appropriate low-pass filter must be switch-selected for the frequency range of interest, the result is considered a "no tune" design. Valve amplifiers instead have a tuned network that serves as both the low-pass harmonic filter and the impedance match to the output load. In either case, both solid-state and valve devices need such filtering networks before the RF signal is passed to the load.
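As a sketch of how such a tuned output network can be designed, the following uses the standard published pi-network design equations. The anode load resistance (2 kΩ), frequency (7 MHz) and loaded Q (12) are illustrative assumptions, not values from any particular amplifier; the check at the end confirms that the network, terminated in 50 Ω, presents the intended resistive load at the design frequency.

```python
import math

def pi_network(r1, r2, q, f):
    """Low-pass pi matching network (standard design equations).
    r1: load resistance the valve anode should see (ohms),
    r2: output load (ohms), q: loaded Q, f: frequency (Hz).
    Returns (C1, C2, L) in farads, farads, henries."""
    xc1 = r1 / q                                              # input shunt C reactance
    xc2 = r2 * math.sqrt((r1 / r2) / (q ** 2 + 1 - r1 / r2))  # output shunt C reactance
    xl = (q * r1 + r1 * r2 / xc2) / (q ** 2 + 1)              # series L reactance
    w = 2 * math.pi * f
    return 1 / (w * xc1), 1 / (w * xc2), xl / w

# Illustrative: match a 2 kohm anode load to 50 ohms at 7 MHz with Q = 12
c1, c2, l = pi_network(2000.0, 50.0, 12.0, 7e6)

# Sanity check: input impedance of the designed network terminated in 50 ohms
w = 2 * math.pi * 7e6
z2 = 1 / (1j * w * c2 + 1 / 50)                 # C2 shunting the load
z = 1 / (1j * w * c1 + 1 / (z2 + 1j * w * l))   # series L, then shunt C1
```

At resonance the network both transforms the 50 Ω load up to the 2 kΩ the valve wants to see and attenuates harmonics, which is why a single tank can do the job of the solid-state design's transformer plus switched filter.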
Unlike audio amplifiers, in which the analog output signal is of the same form and frequency as the input signal, RF circuits may modulate low-frequency information (audio, video, or data) onto a carrier at a much higher frequency, and the circuitry comprises several distinct stages: for example, a radio transmitter may contain an oscillator, a modulator, and a chain of amplifier stages.
The most common anode circuit is a tuned LC circuit where the anodes are connected at a voltage node . This circuit is often known as the anode tank circuit .
Examples of tetrodes used at VHF/UHF include the 4CX250B; an example of a twin tetrode is the QQV06/40A.
Neutralization is a term used in TGTP (tuned grid, tuned plate) amplifiers for the methods and circuits used for stabilization against unwanted oscillations at the operating frequency caused by the inadvertent introduction of some of the output signal back into the input circuits. This mainly occurs via the grid-to-plate capacitance, but can also come via other paths, making circuit layout important. To cancel the unwanted feedback signal, a portion of the output signal is deliberately introduced into the input circuit with the same amplitude but opposite phase.
When using a tuned circuit in the input, the network must match the driving source to the input impedance of the grid. This impedance is determined by the grid current in class C or AB2 operation. In AB1 operation, the grid circuit should be designed to avoid excessive voltage step-up: although that might provide more stage gain, as in audio designs, it increases instability and makes neutralization more critical.
In common with all three basic designs shown here, the anode of the valve is connected to a resonant LC circuit which has another inductive link which allows the RF signal to be passed to the output.
The circuit shown has been largely replaced by a Pi network which allows simpler adjustment and adds low pass filtering.
The anode current is controlled by the electrical potential (voltage) of the first grid. A DC bias is applied to the valve to ensure that the part of the transfer characteristic most suitable to the required application is used. The input signal perturbs (changes) the potential of the grid, which in turn changes the anode current (also known as the plate current).
In the RF designs shown on this page, a tuned circuit is placed between the anode and the high-voltage supply. This tuned circuit is brought to resonance, presenting a load impedance well matched to the valve and thus resulting in an efficient power transfer.
As the current flowing through the anode connection is controlled by the grid, then the current flowing through the load is also controlled by the grid.
One of the disadvantages of a tuned grid compared to other RF designs is that neutralization is required.
A passive grid circuit used at VHF/UHF frequencies might use the 4CX250B tetrode. An example of a twin tetrode would be the QQV06/40A. The tetrode has a screen grid between the anode and the first grid which, being grounded for RF, acts as a shield reducing the effective capacitance between the first grid and the anode. The combined effects of the screen grid and the grid damping resistor often allow this design to be used without neutralization. The screen found in tetrodes and pentodes greatly increases the valve's gain by reducing the effect of anode voltage on anode current.
The input signal is applied to the valve's first grid via a capacitor. The value of the grid resistor determines the gain of the amplifier stage: the higher the resistance, the greater the gain, the lower the damping effect, and the greater the risk of instability. With this type of stage, good layout is less vital.
This design normally uses a triode so valves such as the 4CX250B are not suitable for this circuit, unless the screen and control grids are joined, effectively converting the tetrode into a triode. This circuit design has been used at 1296 MHz using disk seal triode valves such as the 2C39A.
The grid is grounded and the drive is applied to the cathode through a capacitor. The heater supply must be isolated from the cathode as unlike the other designs the cathode is not connected to RF ground. Some valves, such as the 811A, are designed for "zero bias" operation and the cathode can be at ground potential for DC. Valves that require a negative grid bias can be used by putting a positive DC voltage on the cathode. This can be achieved by putting a zener diode between the cathode and ground or using a separate bias supply.
The valve interelectrode capacitance which exists between the input and output of the amplifier, together with other stray coupling, may allow enough energy to feed back into the input to cause self-oscillation in an amplifier stage. For the higher-gain designs this effect must be counteracted. Various methods exist for introducing an out-of-phase signal from the output back to the input so that the effect is cancelled. Even when the feedback is not sufficient to cause oscillation, it can produce other effects, such as difficult tuning. Therefore, neutralization can be helpful even for an amplifier that does not oscillate. Many grounded-grid amplifiers use no neutralization, but at 30 MHz adding it can smooth out the tuning.
An important part of the neutralization of a tetrode or pentode is the design of the screen grid circuit. To provide the greatest shielding effect, the screen must be well grounded at the frequency of operation. Many valves will have a "self-neutralizing" frequency somewhere in the VHF range. This results from a series resonance consisting of the screen capacitance and the inductance of the screen lead, providing a very low impedance path to ground.
Transit time effects are important at these frequencies, so feedback is not normally usable and for performance critical applications alternative linearisation techniques have to be used such as degeneration and feedforward.
Noise figure is not usually an issue for power amplifier valves; however, in receivers using valves it can be important. While such uses are obsolete, this information is included for historical interest.
Like any amplifying device, valves add noise to the signal to be amplified. Even with a hypothetical perfect amplifier, however, noise is unavoidably present due to thermal fluctuations in the signal source (usually assumed to be at room temperature, T = 295 K). Such fluctuations cause an electrical noise power of k_B·T·B, where k_B is the Boltzmann constant and B the bandwidth. Correspondingly, the voltage noise of a resistance R into an open circuit is √(4k_B·T·B·R) and the current noise into a short circuit is √(4k_B·T·B/R).
The noise figure is defined as the ratio of the noise power at the output of the amplifier relative to the noise power that would be present at the output if the amplifier were noiseless (due to amplification of thermal noise of the signal source). An equivalent definition is: noise figure is the factor by which insertion of the amplifier degrades the signal to noise ratio. It is often expressed in decibels (dB). An amplifier with a 0 dB noise figure would be perfect.
The noise properties of tubes at audio frequencies can be modeled well by a perfect noiseless tube having a source of voltage noise in series with the grid. For the EF86 tube, for example, this voltage noise is specified (see e.g., the Valvo, Telefunken or Philips data sheets) as 2 microvolts integrated over a frequency range of approximately 25 Hz to 10 kHz. (This refers to the integrated noise; see below for the frequency dependence of the noise spectral density.) This equals the voltage noise of a 25 kΩ resistor. Thus, if the signal source has an impedance of 25 kΩ or more, the noise of the tube is actually smaller than the noise of the source. For a source of 25 kΩ, the noise generated by tube and source are the same, so the total noise power at the output of the amplifier is twice the noise power at the output of the perfect amplifier. The noise figure is then two, or 3 dB. For higher impedances, such as 250 kΩ, the EF86's voltage noise is 1/√10 of the source's own noise. It therefore adds 1/10 of the noise power caused by the source, and the noise figure is 0.4 dB. For a low-impedance source of 250 Ω, on the other hand, the noise voltage contribution of the tube is 10 times larger than that of the signal source, so that the noise power is one hundred times larger than that caused by the source. The noise figure in this case is 20 dB.
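The figures in this worked example can be reproduced with a short calculation. This is a minimal sketch assuming the 2 µV EF86 figure quoted above, room temperature, and the approximate 25 Hz–10 kHz band:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 295.0            # assumed room temperature, K
B = 10e3 - 25.0      # approximate 25 Hz .. 10 kHz measurement band, Hz

def noise_figure_db(r_source, v_amp=2e-6):
    """Noise figure of a noiseless amplifier preceded by a series voltage
    noise source v_amp (the EF86's ~2 uV integrated over this band)."""
    v_source = math.sqrt(4 * K_B * T * B * r_source)  # source thermal noise voltage
    f = 1 + (v_amp / v_source) ** 2                   # uncorrelated noise powers add
    return 10 * math.log10(f)
```

With these assumptions, `noise_figure_db(25e3)` comes out close to 3 dB, `noise_figure_db(250e3)` close to 0.4 dB, and `noise_figure_db(250)` close to 20 dB, matching the worked example.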
To obtain a low noise figure, the impedance of the source can be increased by a transformer. This is eventually limited by the input capacitance of the tube, which sets a limit on how high the signal impedance can be made if a certain bandwidth is desired.
The noise voltage density of a given tube is a function of frequency. At frequencies above 10 kHz or so, it is basically constant ("white noise"). White noise is often expressed by an equivalent noise resistance, which is defined as the resistance which produces the same voltage noise as present at the tube input. For triodes, it is approximately (2-4)/g_m, where g_m is the transconductance. For pentodes, it is higher, about (5-7)/g_m. Tubes with high g_m thus tend to have lower noise at high frequencies. For example, it is 300 Ω for one half of the ECC88, 250 Ω for an E188CC (both have g_m = 12.5 mA/V) and as low as 65 Ω for a triode-connected D3a (g_m = 40 mA/V).
In the audio frequency range (below 1–100 kHz), "1/f" noise becomes dominant; its spectral density rises towards low frequencies like 1/f. (This is the reason for the relatively high noise resistance of the EF86 in the above example.) Thus, tubes with low noise at high frequency do not necessarily have low noise in the audio frequency range. For special low-noise audio tubes, the frequency at which 1/f noise takes over is reduced as far as possible, sometimes down to around a kilohertz. It can be reduced by choosing very pure materials for the cathode nickel, and running the tube at an optimized (generally low) anode current.
At radio frequencies, things are more complicated: (i) The input impedance of a tube has a real component that goes down like 1/f² (due to cathode lead inductance and transit time effects). This means the input impedance can no longer be increased arbitrarily in order to reduce the noise figure. (ii) This input resistance has its own thermal noise, just like any resistor. (The "temperature" of this resistor for noise purposes is closer to the cathode temperature than to room temperature.) Thus, the noise figure of tube amplifiers increases with frequency. At 200 MHz, a noise figure of 2.5 (or 4 dB) can be reached with the ECC2000 tube in an optimized "cascode" circuit with an optimized source impedance. At 800 MHz, tubes like the EC8010 have noise figures of about 10 dB or more. Planar triodes are better, but very early on, transistors reached noise figures substantially lower than tubes at UHF. Thus, the tuners of television sets were among the first parts of consumer electronics where transistors were used.
Semiconductor amplifiers have overwhelmingly displaced valve amplifiers for low- and medium-power applications at all frequencies.
Valves continue to be used in some high-power, high-frequency amplifiers used for short wave broadcasting, VHF and UHF TV and (VHF) FM radio, also in existing "radar, countermeasures equipment, or communications equipment" [ 7 ] using specially designed valves, such as the klystron , gyrotron , traveling-wave tube , and crossed-field amplifier ; however, new designs for such products are now invariably semiconductor-based. [ 8 ]
Valves (also known as vacuum tubes) have a very high input impedance (near infinite in most circuits) and a high output impedance. They are also high-voltage, low-current devices.
The characteristics of valves as gain devices have direct implications for their use as audio amplifiers , notably that power amplifiers need output transformers (OPTs) to translate a high-output-impedance high-voltage low-current signal into a lower-voltage high-current signal needed to drive modern low-impedance loudspeakers (cf. transistors and FETs which are relatively low voltage devices but able to carry large currents directly).
Another consequence is that, since the output of one stage is often offset by ~100 V from the input of the next stage, direct coupling is normally not possible and stages need to be coupled using a capacitor or transformer. Well-chosen coupling capacitors have little effect on the performance of the amplifier. Interstage transformer coupling is a source of distortion and phase shift, and was avoided from the 1940s for high-quality applications; transformers also add cost, bulk, and weight.
The following circuits are simplified conceptual circuits only; real-world circuits also require a smoothed or regulated power supply, a heater supply for the filaments (the details depending on whether the selected valve types are directly or indirectly heated), bypassing of the cathode resistors in many cases, and so on.
The basic gain stage for a valve amplifier is the auto-biased common cathode stage, in which an anode resistor, the valve, and a cathode resistor form a potential divider across the supply rails. The resistance of the valve varies as a function of the voltage on the grid, relative to the voltage on the cathode.
In the auto-bias configuration, the "operating point" is obtained by setting the DC potential of the input grid at zero volts relative to ground via a high-value "grid leak" resistor. The anode current is set by the value of the grid voltage relative to the cathode, and this voltage depends on the value of the resistance selected for the cathode branch of the circuit.
The anode resistor acts as the load for the circuit and is typically of the order of 3–4 times the anode resistance of the valve type in use. The output from the circuit is the voltage at the junction between the anode and anode resistor. This output varies with changes in the input voltage and is a function of the voltage amplification factor of the valve ("mu") and the values chosen for the various circuit elements.
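The stage gain can be estimated from the valve's amplification factor mu and anode resistance using the standard small-signal result. This is a minimal sketch; the 12AX7/ECC83-style parameter values below are illustrative assumptions, not a datasheet calculation:

```python
def common_cathode_gain(mu, ra, r_anode, r_cathode=0.0):
    """Small-signal voltage gain of a common cathode triode stage.
    mu: amplification factor, ra: anode (plate) resistance in ohms,
    r_anode: anode load resistor. r_cathode = 0 models a fully
    bypassed cathode resistor (no local feedback)."""
    return -mu * r_anode / (ra + r_anode + (mu + 1) * r_cathode)

# Illustrative values in the style of a 12AX7/ECC83 (mu ~ 100, ra ~ 62.5 kohm)
a_bypassed = common_cathode_gain(100, 62.5e3, 220e3)           # cathode bypassed
a_unbypassed = common_cathode_gain(100, 62.5e3, 220e3, 1.5e3)  # unbypassed, lower gain
```

The negative sign reflects the stage's phase inversion; leaving the cathode resistor unbypassed introduces local feedback, which lowers the gain (here from roughly 78 to roughly 51) while improving linearity.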
Almost all audio preamplifier circuits are built using cascaded common cathode stages.
The signal is usually coupled from stage to stage via a coupling capacitor or a transformer, although direct coupling is done in unusual cases.
The cathode resistor may or may not be bypassed with a capacitor. Feedback may also be applied to the cathode resistor.
A simple SET power amplifier can be constructed by cascading two stages, using an output transformer as the load.
Two triodes with the cathodes coupled together to form a differential pair. This stage can cancel common-mode signals (those equal on both inputs) and, if operated in class A, also has the merit of largely rejecting supply variations, since they affect both sides of the differential stage equally. Conversely, the total current drawn by the stage is almost constant (if one side instantaneously draws more, the other draws less), resulting in minimal variation in supply-rail sag and thereby removing a possible source of interstage distortion.
Two power valves (may be triodes or tetrodes) being differentially driven to form a push–pull output stage, driving a push–pull transformer load. This output stage makes much better use of the transformer core than the single-ended output stage.
A long tail is a constant-current (CC) load used as the shared cathode feed to a differential pair. In theory, the more constant the current, the better linearised the differential stage.
The CC may be approximated by a resistor dropping a large voltage, or may be generated by an active circuit (valve, transistor, or FET based).
The long-tail pair can also be used as a phase splitter . It is often used in guitar amplifiers (where it is referred to as the "phase inverter") to drive the power section.
As an alternative to the long-tail pair, the concertina uses a single triode as a variable resistance within a potential divider formed by Ra and Rk on either side of the valve. The result is that the voltage at the anode swings exactly equal and opposite to the voltage at the cathode, giving a perfectly balanced phase split. The disadvantage of this stage (cf. the differential long-tail pair) is that it does not give any gain. Using a double triode (typically octal or noval) to form a SET input buffer (giving gain) that then feeds a concertina phase splitter is a classic push–pull front end, typically followed by a driver (triode) and a (triode or pentode) output stage (in ultra-linear mode in many cases) to form the classic push–pull amplifier circuit.
The push–pull output circuit shown is a simplified variation of the Williamson topology, which comprises four stages: an input voltage amplifier, directly coupled to a concertina phase splitter, followed by a push–pull driver and a push–pull output stage.
The cascode (a contraction of the phrase cascade to cathode ) is a two-stage amplifier composed of a transconductance amplifier followed by a current buffer . In valve circuits, the cascode is often constructed from two triodes connected in series, with one operating as a common grid and thus acting as a voltage regulator , providing a nearly constant anode voltage to the other, which operates as a common cathode . This improves input-output isolation (or reverse transmission) by eliminating the Miller effect and thus contributes to a much higher bandwidth , higher input impedance , high output impedance , and higher gain than a single-triode stage.
The tetrode has a screen grid (g2) which is between the anode and the first grid and normally serves, like the cascode , to eliminate the Miller effect and therefore also allows a higher bandwidth and/or higher gain than a triode, but at the expense of linearity and noise performance.
A pentode has an additional suppressor grid (g3) to eliminate the tetrode kink. This is used for improved performance rather than extra gain and is usually not accessible externally. Some valves use aligned grids to minimise grid current, and beam plates instead of a third grid; these are known as "beam tetrodes".
It was realised that by strapping the screen to the anode, a tetrode or pentode effectively becomes a triode again (and many pentodes were specifically designed to permit this), making these late-design valves very flexible. "Triode strapped" tetrodes are often used in modern amplifier designs that are optimised for quality rather than power output.
In 1937, Alan Blumlein originated a configuration between a "triode strapped" tetrode and a normal tetrode, which connects the extra grid (screen) of a tetrode to a tap on the OPT part way between the anode voltage and the supply voltage. This electrical compromise gives gain and linearity combining the best traits of both extremes. In a 1951 engineering paper, David Hafler and Herbert Keroes determined that when the screen tap was set to approximately 43% of the anode voltage, an optimized condition occurred within the output stage, which they referred to as ultra-linear. By the late 1950s, this design had become the dominant configuration for high-fidelity PP amplifiers.
Julius Futterman pioneered a type of amplifier known as "output transformerless" (OTL). These use paralleled valves to match speaker impedances (typically 8 ohms). This design requires numerous valves, runs hot, and, because it attempts to match impedances in a way fundamentally different from a transformer [ citation needed ], often has a unique sound quality. [ citation needed ] 6080 triodes, designed for regulated power supplies, were low-impedance types sometimes pressed into transformerless use.
Some valve amplifiers use the single-ended triode (SET) topology that uses the gain device in class A. SETs are extremely simple and have low parts count. Such amplifiers are expensive because of the output transformers required.
This type of design results in an extremely simple distortion spectrum comprising a monotonically decaying series of harmonics. Some consider this distortion characteristic to be a factor in the attractiveness of the sound such designs produce. Compared with modern designs, SETs adopt a minimalist approach, often having just two stages: a single-triode voltage amplifier followed by a triode power stage. However, variations using some form of active current source or load, not considered a gain stage, are used.
The typical valve using this topology in (rare) current commercial production is the 300B , which yields about 5 watts in SE mode. Rare amplifiers of this type use valves such as the 211 or 845 , capable of about 18 watts. These valves are bright emitter transmitting valves, and have thoriated tungsten filaments which glow like light bulbs when powered.
See paragraphs further down regarding high-power commercially available SET amplifiers offering up to 40 watts with no difficulty, following the development of output transformers to overcome the above restrictions.
The pictures below are of a commercial SET amplifier, and also a prototype of a hobbyist amplifier.
One reason for SETs being (usually) limited to low power is the extreme difficulty (and consequent expense) of making an output transformer that can handle the plate current without saturating, while avoiding excessively large capacitive parasitics.
The use of differential ("push–pull") output stages cancels the standing bias current drawn through the output transformer by each of the output valves individually, greatly reducing the problem of core saturation and thus allowing more powerful amplifiers to be built while using smaller, wider-bandwidth, and cheaper transformers.
The cancellation in the differential output stage also largely cancels the (dominant) even-order harmonic distortion products of the output stage, resulting in lower THD, albeit now dominated by odd-order harmonics and no longer monotonically decaying.
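The even-order cancellation can be illustrated numerically. The transfer curve below is a toy polynomial, not a real valve characteristic; the differential output of two such devices driven in antiphase retains the odd harmonics but cancels the even ones:

```python
import cmath
import math

def plate(x):
    """Toy single-valve transfer curve with 2nd- and 3rd-order curvature."""
    return x + 0.2 * x ** 2 + 0.05 * x ** 3

n = 1024
drive = [math.sin(2 * math.pi * k / n) for k in range(n)]
single_ended = [plate(x) for x in drive]
push_pull = [plate(x) - plate(-x) for x in drive]  # two valves driven in antiphase

def harmonic(sig, m):
    """Magnitude of the m-th harmonic of sig (single DFT bin)."""
    return abs(sum(sig[k] * cmath.exp(-2j * math.pi * m * k / n)
                   for k in range(n))) / n
```

Here `harmonic(single_ended, 2)` is substantial, while `harmonic(push_pull, 2)` vanishes to rounding error; the third harmonic survives in both, matching the odd-order-dominated residue described above.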
Ideally, cancellation of even-order distortion is perfect, but in the real world it is not, even with closely matched valves. PP OPTs usually have a gap to prevent saturation, though a smaller one than required by a single-ended circuit.
Since the 1950s the vast majority of high-quality valve amplifiers, and almost all higher-power valve amplifiers have been of the push–pull type.
Push–pull output stages can use triodes for lowest Z out and best linearity, but often use tetrodes or pentodes which give greater gain and power. Many output valves such as KT88, EL34, and EL84 were specifically designed to be operated in either triode or tetrode mode, and some amplifiers can be switched between these modes. Post-Williamson, most commercial amplifiers have used tetrodes in the "ultra-linear" configuration.
Class A pure triode PP stages are sufficiently linear that they can be operated without feedback, although modest NFB to reduce distortion, reduce Z out , and control gain may be desirable. Their power efficiency is, however, much less than class AB (and, of course, class B); significantly less output power is available for the same anode dissipation.
Class A PP designs have no crossover distortion, and distortion becomes negligible as signal amplitude is reduced. The effect of this is that class A amplifiers perform extremely well with music that has a low average level and only momentary peaks, since distortion is negligible at low levels.
A disadvantage of Class A operation for power valves is a shortened life, because the valves are always fully "on" and dissipate maximum power all of the time. Signal amplifier valves not operating at high power are not affected in this way.
Power supply regulation (variation of voltage available with current drawn) is not an issue, as average current is essentially constant; AB amplifiers, which draw current dependent upon signal level, require attention to supply regulation.
Class B and AB amplifiers are more efficient than class A, and can deliver higher power output levels from a given power supply and set of valves.
However, the price for this is that they suffer from crossover distortion, of more or less constant amplitude regardless of signal amplitude. This means that class AB and B amplifiers produce their lowest distortion percentage at near maximum amplitude, with poorer distortion performance at low levels. As the circuit changes from pure class A, through AB1 and AB2, to B, open-loop crossover distortion worsens.
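The way crossover distortion grows as the signal shrinks can be illustrated with an idealised open-loop class B model; the dead-zone width and drive levels below are arbitrary illustrative assumptions:

```python
import cmath
import math

def class_b(x, vbias=0.1):
    """Idealised push-pull class B pair: each device conducts only beyond a
    small turn-on voltage, leaving a dead zone of +/- vbias at the crossover."""
    if x > vbias:
        return x - vbias
    if x < -vbias:
        return x + vbias
    return 0.0

def thd_pct(amplitude, n=2048):
    """THD (percent) of the class B model driven by a sine of given amplitude."""
    drive = [amplitude * math.sin(2 * math.pi * k / n) for k in range(n)]
    out = [class_b(x) for x in drive]
    fund = sum(out[k] * cmath.exp(-2j * math.pi * k / n) for k in range(n)) / n
    fund_rms = abs(fund) * math.sqrt(2)                  # fundamental RMS
    total_rms = math.sqrt(sum(y * y for y in out) / n)   # total RMS
    return 100 * math.sqrt(max(total_rms ** 2 - fund_rms ** 2, 0.0)) / fund_rms
```

Because the dead zone has fixed width, `thd_pct` is far larger for a small drive signal than for a large one, which is exactly the low-level behaviour described above.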
Class AB and B amplifiers use NFB to reduce open-loop distortion. Measured distortion spectra from such amplifiers [ citation needed ] show that distortion percentage is dramatically reduced by NFB, but the residual distortion is shifted towards higher harmonics.
In a class B push–pull amplifier, output valve current which must be provided by the power supply ranges from nearly zero for zero signal to a maximum at maximum signal. Consequently, for linear response to transient signal changes the power supply must have good regulation.
Only class A can be used in single-ended mode, as part of the signal would otherwise be cut off. The driver stage for class AB2 and B valve amplifiers must be capable of supplying some signal current to the power valve grids ("driving power").
The biasing of a push–pull output stage can be adjusted (at the design stage, usually not in a finished amplifier) between class A (giving best open-loop linearity) through classes AB1 and AB2, to class B (giving greatest power and efficiency from a given power supply, output valves and output transformer).
Most commercial valve amplifiers operate in Class AB1 (typically pentodes in the ultra-linear configuration), trading open-loop linearity against higher power; some run in pure class A.
The typical topology for a PP amplifier has an input stage, a phase splitter, a driver and the output stage, although there are many variations of the input stage / phase splitter, and sometimes two of the listed functions are combined in one valve stage. The dominant phase splitter topologies today are the concertina , floating paraphase , and some variation of the long-tail pair .
The gallery shows a modern home-constructed, fully differential, pure class A amplifier of about 15 watts output power without negative feedback, using 6SN7 low-power dual triodes and KT88 power tetrodes.
Because of their inability to drive low impedance loads directly, valve audio amplifiers must employ output transformers to step down the impedance to match the loudspeakers.
Output transformers are not perfect devices and will always introduce some odd harmonic distortion and amplitude variation with frequency to the output signal. In addition, transformers introduce frequency-dependent phase shifts, which limit the overall negative feedback that can be used in order to stay within the Nyquist stability criterion at high frequencies and avoid oscillation. In recent years, however, the development of improved transformer designs and winding techniques has greatly reduced these unwanted effects within the desired pass-band, moving them further out to the margins.
Following its invention by Harold Stephen Black , negative feedback (NFB) has been almost universally adopted in amplifiers of all types, to substantially reduce distortion, flatten frequency response, and reduce the effect of component variations. This is especially needed with non-class-A amplifiers.
Feedback very much reduces distortion percentage, but the distortion spectrum becomes more complex, with a far higher contribution from higher harmonics; [ 1 ] the high harmonics, if at an audible level, are much more undesirable than lower ones, [ 1 ] so that the improvement due to lower overall distortion is partly cancelled by its nature. It is reported that under some circumstances the absolute amplitude of higher harmonics may increase with feedback, although total distortion decreases. [ 1 ]
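The distortion-reducing action of feedback can be sketched with a memoryless toy nonlinearity; the gain, feedback fraction, and curvature below are arbitrary assumptions, not data for any real amplifier. The closed-loop output is solved exactly from the quadratic y = G(x − βy) + a2(x − βy)²:

```python
import cmath
import math

G, BETA, A2 = 5.0, 0.18, 0.1  # open-loop gain, feedback fraction, 2nd-order curvature

def open_loop(v):
    """Toy open-loop amplifier: linear gain plus a second-order term."""
    return G * v + A2 * v * v

def closed_loop(x):
    """Exact closed-loop output: y = open_loop(x - BETA*y), solved as a
    quadratic in u = x - BETA*y (taking the physical root near u = x)."""
    a = A2 * BETA
    b = G * BETA + 1.0
    u = (-b + math.sqrt(b * b + 4 * a * x)) / (2 * a)
    return open_loop(u)

n = 4096
drive = [0.5 * math.sin(2 * math.pi * k / n) for k in range(n)]
ol = [open_loop(x) for x in drive]
cl = [closed_loop(x) for x in drive]

def harmonic(sig, m):
    """Magnitude of the m-th harmonic of sig (single DFT bin)."""
    return abs(sum(sig[k] * cmath.exp(-2j * math.pi * m * k / n)
                   for k in range(n))) / n

h2_rel_ol = harmonic(ol, 2) / harmonic(ol, 1)  # relative 2nd harmonic, open loop
h2_rel_cl = harmonic(cl, 2) / harmonic(cl, 1)  # relative 2nd harmonic, with NFB
```

The closed-loop relative second harmonic is much smaller than the open-loop one, at the cost of reduced gain; reproducing the spectrum-reshaping effect on higher harmonics cited above would require a higher-order nonlinearity, which this simple quadratic sketch does not model.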
NFB reduces output impedance (Z out ) (which may vary as a function of frequency in some circuits). This has two important consequences:
Like any amplifying device, valves add noise to the signal to be amplified. Noise is due to device imperfections plus unavoidable temperature-dependent thermal fluctuations (systems are usually assumed to be at room temperature, T = 295 K). Thermal fluctuations cause an electrical noise power of k B T B {\displaystyle k_{B}TB} , where k B {\displaystyle k_{B}} is the Boltzmann constant and B the bandwidth. Correspondingly, the voltage noise of a resistance R into an open circuit is 4 k B ⋅ T ⋅ B ⋅ R {\displaystyle {\sqrt {4k_{B}\cdot T\cdot B\cdot R}}} and the current noise into a short circuit is 4 k B ⋅ T ⋅ B / R {\displaystyle {\sqrt {4k_{B}\cdot T\cdot B/R}}} .
The noise figure is defined as the ratio of the noise power at the output of the amplifier to the noise power that would be present at the output if the amplifier were noiseless (due to amplification of thermal noise of the signal source). An equivalent definition is: noise figure is the factor by which insertion of the amplifier degrades the signal to noise ratio. It is often expressed in decibels (dB). An amplifier with a 0 dB noise figure would be perfect.
The noise properties of valves at audio frequencies can be modelled well by a perfect noiseless valve having a source of voltage noise in series with the grid. For the EF86 low-noise audio pentode, for example, this voltage noise is specified (see e.g., the Valvo, Telefunken or Philips data sheets) as 2 microvolts integrated over a frequency range of approximately 25 Hz to 10 kHz. (This refers to the integrated noise; see below for the frequency dependence of the noise spectral density.) This equals the voltage noise of a 25 kΩ resistor. Thus, if the signal source has an impedance of 25 kΩ or more, the noise of the valve is actually smaller than the noise of the source. For a source of 25 kΩ, the noise generated by valve and source are equal; since the two sources are uncorrelated, their noise powers add, and the total noise power at the output of the amplifier is twice the noise power at the output of the perfect amplifier. The noise figure is then two, or 3 dB. For higher impedances, such as 250 kΩ, the EF86's voltage noise is 1/√10 of the source's own noise; it therefore adds 1/10 of the noise power caused by the source, and the noise figure is about 0.4 dB. For a low-impedance source of 250 Ω, on the other hand, the noise voltage contribution of the valve is 10 times larger than that of the signal source, so the noise power is one hundred times larger, and the noise figure is 20 dB.
To obtain low noise figure, the impedance of the source can be increased by a transformer. This is eventually limited by the input capacitance of the valve, which sets a limit on how high the signal impedance can be made if a certain bandwidth is desired.
The noise voltage density of a given valve is a function of frequency. At frequencies above 10 kHz or so, it is basically constant ("white noise"). White noise is often expressed by an equivalent noise resistance, which is defined as the resistance which produces the same voltage noise as present at the valve input. For triodes, it is approximately (2-3)/g_m, where g_m is the transconductance. For pentodes, it is higher, about (5-7)/g_m. Valves with high g_m thus tend to have lower noise at high frequencies.
Below some corner frequency in the audio range (anywhere from about 1 kHz to 100 kHz, depending on the valve), "1/ f " noise becomes dominant; its power density rises in proportion to 1/ f . Thus, valves with low noise at high frequency do not necessarily have low noise in the audio frequency range. For special low-noise audio valves, the frequency at which 1/ f noise takes over is pushed as low as possible, perhaps to around a kilohertz. It can be reduced by choosing very pure materials for the cathode nickel, and running the valve at an optimized (generally low) anode current.
Unlike solid-state devices, valves are assemblies of mechanical parts whose arrangement determines their functioning, and which cannot be totally rigid. If a valve is jarred, either by the equipment being moved or by acoustic vibrations from the loudspeakers or any other sound source, it will produce an output signal, as if it were some sort of microphone (the effect is consequently called microphony ). All valves are subject to this to some extent; low-level voltage amplifier valves for audio are designed to be resistant to this effect, with extra internal supports. The EF86 mentioned in the context of noise is also designed for low microphony, although its high gain would otherwise make it particularly susceptible.
For high-end audio , where cost is not the primary consideration, valve amplifiers have remained popular and indeed during the 1990s made a commercial resurgence.
Circuits designed since then in most cases remain similar to circuits from the valve age, but benefit from advances in ancillary component quality (including capacitors) as well as general progress across the electronics industry which gives designers increasingly powerful insight into circuit operation. Solid-state power supplies are more compact, efficient, and can have very good regulation.
Semiconductor power amplifiers do not have the severe limitations on output power imposed by thermionic devices; accordingly, loudspeaker design has evolved toward smaller, more convenient enclosures, trading power efficiency for compact size. The result is speakers of similar quality but smaller size that require much greater power for the same loudness than hitherto. In response, many modern valve push–pull amplifiers are more powerful than earlier designs, reflecting the need to drive inefficient speakers.
When valve amplifiers were the norm, user-adjustable "tone controls" (a simple two-band non-graphic equaliser ) and electronic filters were used to allow the listener to change frequency response according to taste and room acoustics; this has become uncommon. Some modern equipment uses graphic equalisers, but valve preamplifiers tend not to supply these facilities (except for RIAA and similar equalisation needed for vinyl and shellac discs).
Modern signal sources, unlike vinyl discs, supply line-level signals without need for equalisation. It is common to drive valve power amplifiers directly from such sources, using passive volume control and input source switching integrated into the amplifier, or via a minimalist "line level" control amplifier which is little more than passive volume and switching, plus a buffer amplifier stage to drive the interconnects.
However, there is some small demand for valve preamps and filter circuits for studio microphone amplifiers, equalising preamplifiers for vinyl discs, and exceptionally for active crossovers.
When valve amplifiers were the norm, SETs more or less disappeared from western products except for low-power designs (up to 5 watts); push–pull stages using indirectly heated triodes, or triode-connected valves such as the EL84, became standard.
However, the far east never abandoned valves, and especially the SET circuit; indeed the extreme interest in all things audiophile in Japan and other far eastern countries sustained great interest in this approach.
Since the 1990s a niche market has developed again in the west for low-power commercial SET amplifiers (up to 7 watts), notably using the 300B valve, which has become fashionable and expensive in recent years. Lower-power amplifiers based on other vintage valve types such as the 2A3 and 45 are also made.
Even more rarely, higher powered SETs are produced commercially, usually using the 211 or 845 transmitting valves, which are able to deliver 20 watts, operating at 1000 V. Notable amplifiers in this class are those from Audio Note corporation (designed in Japan), including the "Ongaku", voted amplifier of the year during the late 1990s. A very small number of hand-built products of this class sell at very high prices (from US$10,000). The Wavac 833 may be the world's most expensive hi-fi amplifier, delivering around 150 watts using an 833A valve.
Aside from this Wavac and a very few other high-power SETs, SET amplifiers usually need to be carefully paired with very efficient speakers, notably horn and transmission-line enclosures and full-range drivers such as those made by Klipsch and Lowther , which invariably have their own quirks, offsetting their advantages of very high efficiency and minimalism.
Some companies such as the Chinese company " Ming Da " make low power SETs using valves other than the 300B, such as KT90 (a development of the KT88) and up to the more powerful sister of the 845, the 805ASE, with output power of 40 watts over the full audio range from 20 Hz. This is made possible by an output transformer design which does not saturate at high levels and has high efficiency.
Mainstream modern loudspeakers give good sound quality in a compact size, but are much less power-efficient than older designs and require powerful amplifiers to drive them. This makes them unsuitable for use with valve amplifiers, particularly lower-power single-ended designs. Valve hi-fi power amplifier designs since the 1970s have therefore moved mainly to class-AB1 push–pull (PP) circuits, usually built around tetrodes or pentodes, sometimes in ultra-linear connection, with significant negative feedback.
Some class A push–pull amplifiers are made commercially. Some amplifiers can be switched between classes A and AB; some can be switched into triode mode.
Major manufacturers in the PP valve market include:
The simplicity of valve amplifiers, especially single-ended designs, makes them viable for home construction. This has some advantages:
Point-to-point hand-wiring, rather than circuit boards, tends to be used both by hobbyists and in low-volume high-end commercial construction. This style suits the ease of construction, the number of physically large, chassis-mounted components (valve sockets, large supply capacitors, transformers), and the need to twist heater wiring to minimise hum; as a side effect, the "flying" wiring minimises capacitive coupling.
One picture below shows a circuit constructed using "standard" modern industrial parts (630 V MKP capacitors and metal-film resistors). One advantage a hobbyist has over a commercial producer is the ability to use higher-quality parts that are not reliably available in production volumes (or at a commercially viable cost). For example, the "silver top getter" Sylvania brown-base 6SN7s in use in the external picture date from the 1960s.
Another picture shows exactly the same circuit constructed using Russian military production Teflon capacitors and non-inductive planar film resistors, of the same values.
The wiring of a commercial amplifier is also shown for comparison.
Very occasionally, very-high-power valves (usually designed for use in radio transmitters) from decades ago are pressed into service to create one-off SET designs (usually at very high cost). Examples include valves 211 and 833.
The main problem with these designs is constructing output transformers able to sustain the plate current and resultant flux density without core saturation over the full audio-frequency spectrum. This problem increases with power level.
Another problem is that the voltages for such amplifiers often pass well beyond 1 kV, which forms an effective disincentive to commercial products of this type.
Many modern commercial amplifiers (and some hobbyist constructions) place multiple pairs of output valves of readily obtainable types in parallel to increase power, operating from the same voltage required by a single pair. A beneficial side effect is that the output impedance of the valves, and thus the transformer turns ratio needed, is reduced, making it easier to construct a wide bandwidth transformer.
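The effect of paralleling on the transformer can be sketched numerically. The 6.6 kΩ plate-to-plate load and 8 Ω speaker below are illustrative values chosen for the example, not figures from the text; the turns ratio follows from n = √(Z_primary / Z_speaker), and paralleling m pairs divides the required primary impedance by m.

```python
import math

def turns_ratio(plate_to_plate_ohms, speaker_ohms, n_pairs=1):
    """Output-transformer turns ratio for a push-pull stage.
    Paralleling n_pairs pairs of valves divides the required primary
    impedance by n_pairs; turns ratio = sqrt(Z_primary / Z_speaker)."""
    z_primary = plate_to_plate_ohms / n_pairs
    return math.sqrt(z_primary / speaker_ohms)

# Illustrative 6.6 kOhm plate-to-plate load into an 8 ohm speaker:
print(turns_ratio(6600, 8))             # ~28.7:1 for a single pair
print(turns_ratio(6600, 8, n_pairs=2))  # ~20.3:1, easier to wind wideband
```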
Some high-power commercial amplifiers use arrays of standard valves (e.g. EL34, KT88) in the parallel push–pull (PPP) configuration (e.g. Jadis, Audio Research, McIntosh, Ampeg SVT).
Some home-constructed amplifiers use pairs of high-power transmitting valves (e.g. 813) to yield 100 watts or more of output power per pair in class AB1 (ultra-linear).
The output transformer (OPT) is a major component in all mainstream valve power amplifiers, accounting for significant cost, size, and weight. It is a compromise, balancing the needs for low stray capacitance, low losses in iron and copper, operation without saturation at the required direct current, good linearity, etc.
One approach to the problems of OPTs is to dispense with the transformer entirely and couple the amplifier directly to the loudspeaker, as is done with most solid-state amplifiers. Designs without output transformers (OTLs) were produced by Julius Futterman in the 1960s and '70s, and more recently in different embodiments by others.
Valves normally match much higher impedances than that of a loudspeaker. Low-impedance valve types and purpose-designed circuits are required. Reasonable efficiency and moderate Z out (damping factor) can be achieved.
These effects mean that OTLs have selective speaker load requirements, just like any other amplifier. Generally a speaker of at least 8 ohms is required, although larger OTLs are often quite comfortable with 4 ohm loads. Electrostatic speakers (often considered difficult to drive) often work especially well with OTLs.
The more recent and more successful OTL circuits employ an output circuit generally known as a Circlotron . The Circlotron has about one-half the output impedance of the Futterman-style (totem-pole) circuits. The Circlotron is fully symmetrical and does not require large amounts of feedback to reduce output impedance and distortion. Successful embodiments use the 6AS7G and the Russian 6C33-CB power triodes.
A common myth is that a short-circuit in an output valve may result in the loudspeaker being connected directly across the power supply and destroyed. In practice, the older Futterman-style amplifiers have been known to damage speakers, due not to shorts but to oscillation. The Circlotron amplifiers often feature direct-coupled outputs, but proper engineering (with a few well-placed fuses) ensures that damage to a speaker is no more likely than with an output transformer.
Modern OTLs are often more reliable, sound better, and are less expensive than many transformer-coupled valve approaches.
In a sense this niche is a subset of OTLs, but it merits separate treatment. An OTL for a loudspeaker must push the extremes of a valve circuit's ability to deliver relatively high currents at low voltages into a low-impedance load, whereas some headphone types have impedances high enough for normal valve types to drive reasonably as OTLs. Electrostatic loudspeakers and headphones, in particular, can be driven directly at hundreds of volts but minimal current.
Once more there are some safety issues associated with direct drive for electrostatic loudspeakers, which in extremis may use transmitting valves operating at over 1 kV. Such systems are potentially lethal.
The shunt-regulated push–pull (SRPP) amplifier and the mu-follower form a family of class-A push–pull stages using two valves. Each forms a high-gain, low-output-impedance inverting stage, providing very low non-linear distortion and excellent PSRR .
|
https://en.wikipedia.org/wiki/Valve_audio_amplifier_technical_specification
|
A valve exerciser is a device that operates a valve periodically in order to prevent it from becoming so stiff that it no longer works. Valves that are left in a static position for a long time may corrode, [ 1 ] or become blocked with mineral deposits . [ 2 ] Electronic valve exercisers can provide information on the health of a valve by monitoring the required operating torque. [ 3 ]
|
https://en.wikipedia.org/wiki/Valve_exerciser
|
Valve interstitial cells (VICs), also known as cardiac valve interstitial cells or valvular interstitial cells, are the most prevalent cells in the heart valve leaflets. They are a type of mesenchymal stem cell (MSC) and are responsible for maintaining the extracellular matrix that provides the mechanical properties of the heart valve. [ 1 ] [ 2 ]
They are present in all three layers of the heart valve: a) fibrosa, b) spongiosa, c) ventricularis. [ 3 ] [ 2 ]
VICs are found in all three layers of heart valves, while the entire structure is covered by valve endothelial cells (VECs). Each layer has a unique matrix composition: ventricularis is abundant in elastin , spongiosa is rich in proteoglycans , and fibrosa is filled with collagen . During embryogenesis , the endothelial cells that cover the primordial valve structures migrate into the underlying matrix and undergo a transformation from endothelial to mesenchymal, becoming the interstitial cells . Therefore, VICs originate from endothelial cells. [ 2 ] [ 4 ]
|
https://en.wikipedia.org/wiki/Valve_interstitial_cells
|
In electrochemistry, a valve metal is a metal which passes current in only one direction. In an electrolytic cell , it can generally function as a cathode , but not as an anode , because a (highly resistive) oxide of the metal forms under anodic conditions. [ 1 ] Common valve metals include aluminium , titanium , tantalum , and niobium . Other metals, such as tungsten , chromium , zirconium , hafnium , zinc , vanadium , bismuth and antimony , may also be considered valve metals. [ 2 ]
|
https://en.wikipedia.org/wiki/Valve_metals
|
Valve oil is a lubricant for valves of brass instruments . It is typically mostly mineral oil with a small amount of other ingredients, although synthetic oils are increasingly available.
Besides lubricating the moving parts of the valve, valve oil provides corrosion protection to the bare metal of the inner valve. While the valve piston or rotor is made of a metal which is more resistant to corrosion, the inner valve casing is typically bare brass. (The brass on the outside of a modern instrument is lacquered or plated to prevent corrosion.) The oil also completes the seal between the valve casing and the piston or rotor.
Although a clean and unoiled valve of a well-maintained instrument should move without unusual force, the inside of a musical instrument is a very inhospitable environment for a delicate valve mechanism. The musician constantly blows warm, moist air through the valve, and impurities may be blown from the musician's mouth into the instrument. Even if nothing grows in the valve, condensation and the changing temperature of the metal can cause an untreated valve to bind, possibly resulting in a stuck valve. Even a minor binding of a valve affects the playability of the instrument and is at the least very annoying. Woodwind musicians also use valve oil (called key oil, since woodwinds have keys rather than valves) to lubricate their key mechanisms and improve the spring-back action. However, woodwind players usually oil their keys only every few months, whereas some brass players lubricate their valves several times a week. [ 1 ]
There are two main valve types on brass instruments: piston and rotor. Accordingly, vendors sell different types of oil, including scented varieties; some vendors recommend up to three different oils for certain valve types. Slide oil for trombones is a very similar product. The difference between oil types is primarily viscosity . The mineral oil in valve oil is hazardous and can cause serious health problems if swallowed.
Synthetic valve oils have become more readily available. Their advantages include greater compatibility with other related oils (some mineral-based oils or their additives were known to react with each other, forming thick solids that required disassembly and complete cleaning of the valves to restore operation); slow or negligible evaporation, so fewer oilings are required; and less dissipation on contact with moisture inside the valve. Some synthetics have the additional advantage that they do not act as thinners for mineral-based slide greases (previously, mineral-based valve oil introduced by way of a tuning slide could carry slide grease with it, fouling the valve). Finally, high-viscosity synthetic oils can increase valve compression and improve feel in instruments with worn valves, returning such instruments to playable condition.
|
https://en.wikipedia.org/wiki/Valve_oil
|
The Valz Prize (Prix Valz) was awarded by the French Academy of Sciences , from 1877 through 1970, to honor advances in astronomy . [ 1 ] [ 2 ]
The Valz Prize was established in June 1874 when the widow of astronomer Benjamin Valz , [ 1 ] Marie Madeleine Julie Malhian, [ 3 ] donated 10,000 francs to establish a prize in honor of her late husband. The Valz Prize was to be awarded for work of similar stature as that honored by the pre-existing Lalande Prize . The first Valz Prize was awarded in 1877 to brothers Paul and Prosper Henry, and was for the sum of 460 francs. [ 1 ]
Save for 1924, the French Academy of Sciences awarded the Valz Prize annually from 1877 to 1943. After 1943, the prize was awarded only sporadically (only once per decade from 1950 to 1970). In 1970, the Valz Prize was combined with the Lalande Prize to create the Lalande-Valz Prize, which continued to be awarded through 1996. In 1997, that prize was combined with numerous other Academy prizes to create the Grande Médaille . [ 2 ]
|
https://en.wikipedia.org/wiki/Valz_Prize
|
The Van 't Hoff equation relates the change in the equilibrium constant , K eq , of a chemical reaction to the change in temperature , T , given the standard enthalpy change , Δ r H ⊖ , for the process. The subscript r {\displaystyle r} means "reaction" and the superscript ⊖ {\displaystyle \ominus } means "standard". It was proposed by Dutch chemist Jacobus Henricus van 't Hoff in 1884 in his book Études de Dynamique chimique ( Studies in Dynamic Chemistry ). [ 1 ]
The Van 't Hoff equation has been widely utilized to explore the changes in state functions in a thermodynamic system . The Van 't Hoff plot , which is derived from this equation, is especially effective in estimating the change in enthalpy and entropy of a chemical reaction .
The standard pressure , P 0 {\displaystyle P^{0}} , is used to define the reference state for the Van 't Hoff equation, which is [ 2 ] [ 3 ]
d d T ln K e q = Δ r H ⊖ R T 2 {\displaystyle {\frac {d}{dT}}\ln K_{\mathrm {eq} }={\frac {\Delta _{r}H^{\ominus }}{RT^{2}}}}
where ln denotes the natural logarithm , K e q {\displaystyle K_{eq}} is the thermodynamic equilibrium constant , and R is the ideal gas constant . This equation is exact at any one temperature and all pressures, derived from the requirement that the Gibbs free energy of reaction be stationary in a state of chemical equilibrium .
In practice, the equation is often integrated between two temperatures under the assumption that the standard reaction enthalpy Δ r H ⊖ {\displaystyle \Delta _{r}H^{\ominus }} is constant (and furthermore, this is also often assumed to be equal to its value at standard temperature ). Since in reality Δ r H ⊖ {\displaystyle \Delta _{r}H^{\ominus }} and the standard reaction entropy Δ r S ⊖ {\displaystyle \Delta _{r}S^{\ominus }} do vary with temperature for most processes, [ 4 ] the integrated equation is only approximate. Approximations are also made in practice to the activity coefficients within the equilibrium constant.
A major use of the integrated equation is to estimate a new equilibrium constant at a new absolute temperature, assuming a constant standard enthalpy change over the temperature range. To obtain the integrated equation, it is convenient to first rewrite the Van 't Hoff equation as [ 2 ]
d ln K eq / d(1/ T ) = −Δ r H ⊖ / R
The definite integral between temperatures T 1 and T 2 is then
ln( K 2 / K 1 ) = −(Δ r H ⊖ / R ) (1/ T 2 − 1/ T 1 )
In this equation K 1 is the equilibrium constant at absolute temperature T 1 , and K 2 is the equilibrium constant at absolute temperature T 2 .
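As a numerical sketch of the integrated equation (the reaction, its Δ r H ⊖ of −50 kJ/mol, and K 1 = 10 are hypothetical values chosen purely for illustration):

```python
import math

R_GAS = 8.314  # J/(mol K)

def k2_from_k1(k1, t1, t2, delta_h_joules):
    """Integrated Van 't Hoff equation, assuming a constant standard
    reaction enthalpy: ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1)."""
    return k1 * math.exp(-(delta_h_joules / R_GAS) * (1 / t2 - 1 / t1))

# Hypothetical exothermic reaction, dH = -50 kJ/mol, K = 10 at 298 K.
# Raising the temperature to 310 K shifts the equilibrium backwards:
k2 = k2_from_k1(10.0, 298.15, 310.15, -50e3)
print(k2)  # smaller than 10, as expected for an exothermic reaction
```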
Combining the well-known formula for the Gibbs free energy of reaction, Δ r G ⊖ = Δ r H ⊖ − T Δ r S ⊖ , where S is the entropy of the system, with the Gibbs free energy isotherm equation [ 5 ] Δ r G ⊖ = − RT ln K eq , we obtain
ln K eq = −Δ r H ⊖ / ( RT ) + Δ r S ⊖ / R
Differentiation of this expression with respect to the variable T while assuming that both Δ r H ⊖ {\displaystyle \Delta _{r}H^{\ominus }} and Δ r S ⊖ {\displaystyle \Delta _{r}S^{\ominus }} are independent of T yields the Van 't Hoff equation. These assumptions are expected to break down somewhat for large temperature variations.
Provided that Δ r H ⊖ {\displaystyle \Delta _{r}H^{\ominus }} and Δ r S ⊖ {\displaystyle \Delta _{r}S^{\ominus }} are constant, the preceding equation gives ln K as a linear function of 1 / T and hence is known as the linear form of the Van 't Hoff equation. Therefore, when the range in temperature is small enough that the standard reaction enthalpy and reaction entropy are essentially constant, a plot of the natural logarithm of the equilibrium constant versus the reciprocal temperature gives a straight line. The slope of the line may be multiplied by the gas constant R to obtain the standard enthalpy change of the reaction, and the intercept may be multiplied by R to obtain the standard entropy change.
The Van 't Hoff isotherm can be used to determine the temperature dependence of the Gibbs free energy of reaction for non-standard state reactions at a constant temperature: [ 6 ]
where Δ r G {\displaystyle \Delta _{\mathrm {r} }G} is the Gibbs free energy of reaction under non-standard states at temperature T {\displaystyle T} , Δ r G ⊖ {\displaystyle \Delta _{r}G^{\ominus }} is the Gibbs free energy for the reaction at ( T , P 0 ) {\displaystyle (T,P^{0})} , ξ {\displaystyle \xi } is the extent of reaction , and Q r is the thermodynamic reaction quotient . Since Δ r G ⊖ = − R T ln K e q {\displaystyle \Delta _{r}G^{\ominus }=-RT\ln K_{eq}} , the temperature dependence of both terms can be described by Van 't Hoff equations as a function of T . This finds applications in the field of electrochemistry, particularly in the study of the temperature dependence of voltaic cells.
The isotherm can also be used at fixed temperature to describe the Law of Mass Action . When a reaction is at equilibrium , Q r = K eq and Δ r G = 0 {\displaystyle \Delta _{\mathrm {r} }G=0} . Otherwise, the Van 't Hoff isotherm predicts the direction that the system must shift in order to achieve equilibrium; when Δ r G < 0 , the reaction moves in the forward direction, whereas when Δ r G > 0 , the reaction moves in the backwards direction. See Chemical equilibrium .
For a reversible reaction , the equilibrium constant can be measured at a variety of temperatures. These data can be plotted on a graph with ln K eq on the y -axis and 1 / T on the x -axis. The data should have a linear relationship, the equation for which can be found by fitting the data using the linear form of the Van 't Hoff equation
ln K eq = −(Δ r H ⊖ / R ) (1/ T ) + Δ r S ⊖ / R
This graph is called the "Van 't Hoff plot" and is widely used to estimate the enthalpy and entropy of a chemical reaction . From this plot, − Δ r H / R is the slope, and Δ r S / R is the intercept of the linear fit.
By measuring the equilibrium constant , K eq , at different temperatures, the Van 't Hoff plot can be used to assess a reaction when temperature changes. [ 7 ] [ 8 ] Knowing the slope and intercept from the Van 't Hoff plot, the enthalpy and entropy of a reaction can be easily obtained using Δ r H = − R × slope and Δ r S = R × intercept.
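The procedure can be sketched with an ordinary least-squares fit of ln K against 1/ T . The data below are synthetic, generated from assumed values Δ r H = −40 kJ/mol and Δ r S = −100 J/(mol K) purely to show that the fit recovers them:

```python
import math

R_GAS = 8.314  # J/(mol K)

def van_t_hoff_fit(temps_k, k_values):
    """Least-squares fit of ln K against 1/T.
    Returns (dH, dS) in J/mol and J/(mol K), using
    slope = -dH/R and intercept = dS/R."""
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(k) for k in k_values]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return -R_GAS * slope, R_GAS * intercept

# Synthetic equilibrium data from ln K = -dH/(R T) + dS/R:
temps = [280.0, 300.0, 320.0, 340.0]
ks = [math.exp(-(-40e3) / (R_GAS * t) + (-100.0) / R_GAS) for t in temps]

dh, ds = van_t_hoff_fit(temps, ks)
print(round(dh), round(ds))  # recovers -40000 and -100
```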
The Van 't Hoff plot can be used to quickly determine the enthalpy of a chemical reaction both qualitatively and quantitatively. This change in enthalpy can be positive or negative, leading to two major forms of the Van 't Hoff plot.
For an endothermic reaction , heat is absorbed, making the net enthalpy change positive. The slope of the Van 't Hoff plot is −Δ r H / R ; since Δ r H > 0 for an endothermic reaction (and the gas constant R > 0 ), the slope is negative. Thus, for an endothermic reaction, the Van 't Hoff plot should always have a negative slope.
For an exothermic reaction , heat is released, making the net enthalpy change negative. By the same definition of the slope, Δ r H < 0 implies that the slope −Δ r H / R is positive. Thus, for an exothermic reaction, the Van 't Hoff plot should always have a positive slope.
At first glance, using the fact that Δ r G ⊖ = − RT ln K = Δ r H ⊖ − T Δ r S ⊖ it would appear that two measurements of K would suffice to be able to obtain an accurate value of Δ r H ⊖ :
Δ r H ⊖ = − R (ln K 2 − ln K 1 ) / (1/ T 2 − 1/ T 1 )
where K 1 and K 2 are the equilibrium constant values obtained at temperatures T 1 and T 2 respectively. However, the precision of Δ r H ⊖ values obtained in this way is highly dependent on the precision of the measured equilibrium constant values.
The use of error propagation shows that the error in Δ r H ⊖ will be about 76 kJ/mol times the experimental uncertainty in (ln K 1 − ln K 2 ) , or about 110 kJ/mol times the uncertainty in the ln K values. Similar considerations apply to the entropy of reaction obtained from Δ r S ⊖ = 1 / T (Δ H ⊖ + RT ln K ) .
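The quoted factors can be reproduced with a short sketch, assuming (consistent with the figures above, though not stated explicitly) that the two measurements are taken 10 K apart around room temperature:

```python
import math

R_GAS = 8.314  # J/(mol K)

def enthalpy_error_factor(t1, t2):
    """From dH = -R (ln K2 - ln K1) / (1/T2 - 1/T1)
             = R * T1 * T2 / (T2 - T1) * (ln K2 - ln K1),
    the uncertainty in dH equals this factor times the
    uncertainty in (ln K2 - ln K1)."""
    return R_GAS * t1 * t2 / (t2 - t1)

# Equilibrium constants measured at 25 C and 35 C (10 K apart):
factor = enthalpy_error_factor(298.15, 308.15)
print(factor / 1e3)  # ~76 kJ/mol per unit uncertainty in (ln K2 - ln K1)

# If each ln K carries the same independent uncertainty, their difference
# carries sqrt(2) times that, giving the ~110 kJ/mol figure:
print(math.sqrt(2) * factor / 1e3)
```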
Notably, when equilibrium constants are measured at three or more temperatures, values of Δ r H ⊖ and Δ r S ⊖ are often obtained by straight-line fitting . [ 9 ] The expectation is that the error will be reduced by this procedure, although the assumption that the enthalpy and entropy of reaction are constant may or may not prove to be correct. If there is significant temperature dependence in either or both quantities, it should manifest itself in nonlinear behavior in the Van 't Hoff plot; however, more than three data points would presumably be needed in order to observe this.
In biological research, the Van 't Hoff plot is also called Van 't Hoff analysis. [ 10 ] It is most effective in determining the favored product in a reaction. It may obtain results different from direct calorimetry such as differential scanning calorimetry or isothermal titration calorimetry due to various effects other than experimental error. [ 11 ]
Assume two products B and C form in a reaction:
In this case, K eq can be defined as ratio of B to C rather than the equilibrium constant.
When B / C > 1, B is the favored product, and the data on the Van 't Hoff plot will be in the positive region.
When B / C < 1, C is the favored product, and the data on the Van 't Hoff plot will be in the negative region.
Using this information, a Van 't Hoff analysis can help determine the most suitable temperature for a favored product.
In 2010, a Van 't Hoff analysis was used to determine whether water preferentially forms a hydrogen bond with the C -terminus or the N -terminus of the amino acid proline. [ 12 ] The equilibrium constant for each reaction was found at a variety of temperatures, and a Van 't Hoff plot was created. This analysis showed that enthalpically, the water preferred to hydrogen bond to the C -terminus, but entropically it was more favorable to hydrogen bond with the N -terminus. Specifically, they found that C -terminus hydrogen bonding was favored by 4.2–6.4 kJ/mol. The N -terminus hydrogen bonding was favored by 31–43 J/(K mol).
This data alone could not conclude which site water will preferentially hydrogen-bond to, so additional experiments were used. It was determined that at lower temperatures, the enthalpically favored species, the water hydrogen-bonded to the C -terminus, was preferred. At higher temperatures, the entropically favored species, the water hydrogen-bonded to the N -terminus, was preferred.
A chemical reaction may undergo different reaction mechanisms at different temperatures. [ 13 ]
In this case, a Van 't Hoff plot with two or more linear fits may be exploited. Each linear fit has a different slope and intercept, indicating different changes in enthalpy and entropy for each distinct mechanism. The Van 't Hoff plot can thus be used to find the enthalpy and entropy change for each mechanism and the favored mechanism at different temperatures.
In the example figure, the reaction undergoes mechanism 1 at high temperature and mechanism 2 at low temperature.
If the enthalpy and entropy are roughly constant as temperature varies over a certain range, then the Van 't Hoff plot is approximately linear when plotted over that range. However, in some cases the enthalpy and entropy do change dramatically with temperature. A first-order approximation is to assume that the two different reaction products have different heat capacities. Incorporating this assumption yields an additional term c / T 2 in the expression for the equilibrium constant as a function of temperature. A polynomial fit can then be used to analyze data that exhibits a non-constant standard enthalpy of reaction: [ 14 ]
ln K eq = a + b / T + c / T 2
where a , b and c are empirical fitting coefficients, so that Δ r H ⊖ = − R ( b + 2 c / T ) and Δ r S ⊖ = R ( a − c / T 2 ).
Thus, the enthalpy and entropy of a reaction can still be determined at specific temperatures even when a temperature dependence exists.
The Van 't Hoff relation is particularly useful for the determination of the micellization enthalpy Δ H ⊖ m of surfactants from the temperature dependence of the critical micelle concentration (CMC):
However, the relation loses its validity when the aggregation number is also temperature-dependent, and the following relation should be used instead: [ 15 ]
with G N + 1 and G N being the free energies of the surfactant in a micelle with aggregation number N + 1 and N respectively. This effect is particularly relevant for nonionic ethoxylated surfactants [ 16 ] or polyoxypropylene–polyoxyethylene block copolymers (Poloxamers, Pluronics, Synperonics). [ 17 ] The extended equation can be exploited for the extraction of aggregation numbers of self-assembled micelles from differential scanning calorimetric thermograms. [ 18 ]
|
https://en.wikipedia.org/wiki/Van_'t_Hoff_equation
|
The van 't Hoff factor i (named after Dutch chemist Jacobus Henricus van 't Hoff ) is a measure of the effect of a solute on colligative properties such as osmotic pressure , relative lowering in vapor pressure , boiling-point elevation and freezing-point depression . The van 't Hoff factor is the ratio between the actual concentration of particles produced when the substance is dissolved and the formal concentration that would be expected from its chemical formula. For most non- electrolytes dissolved in water, the van 't Hoff factor is essentially 1.
For most ionic compounds dissolved in water, the van 't Hoff factor is equal to the number of discrete ions in a formula unit of the substance. This is true for ideal solutions only, as occasionally ion pairing occurs in solution. At a given instant a small percentage of the ions are paired and count as a single particle. Ion pairing occurs to some extent in all electrolyte solutions. This causes the measured van 't Hoff factor to be less than that predicted in an ideal solution. The deviation for the van 't Hoff factor tends to be greatest where the ions have multiple charges.
The factor binds osmolarity to molarity and osmolality to molality .
The degree of dissociation is the fraction of the original solute molecules that have dissociated . It is usually indicated by the Greek symbol α {\displaystyle \alpha } . There is a simple relationship between this parameter and the van 't Hoff factor. If a fraction α {\displaystyle \alpha } of the solute dissociates into n {\displaystyle n} ions, then
i = 1 + α ( n − 1)
For example, the dissociation KCl ⇌ K + + Cl − yields n = 2 {\displaystyle n=2} ions, so that i = 1 + α {\displaystyle i=1+\alpha } .
For dissociation in the absence of association, the van 't Hoff factor is: i > 1 {\displaystyle i>1} .
Similarly, if a fraction α {\displaystyle \alpha } of n {\displaystyle n} moles of solute associate to form one mole of an n -mer ( dimer , trimer , etc.), then i = 1 + α ( 1 n − 1 ) {\displaystyle i=1+\alpha \left({\tfrac {1}{n}}-1\right)} .
For the dimerisation of acetic acid in benzene :
2 moles of acetic acid associate to form 1 mole of dimer, so that i = 1 − α 2 {\displaystyle i=1-{\tfrac {\alpha }{2}}} .
For association in the absence of dissociation, the van 't Hoff factor is: i < 1 {\displaystyle i<1} .
The value of i is the actual number of particles in solution after dissociation divided by the number of formula units initially dissolved in solution; for a dilute solution it represents the number of particles per formula unit of the solute.
This quantity can be related to the osmotic coefficient g by the relation: i = n g {\displaystyle i=ng} .
|
https://en.wikipedia.org/wiki/Van_'t_Hoff_factor
|
Bond triangles or Van Arkel–Ketelaar triangles (named after Anton Eduard van Arkel and J. A. A. Ketelaar ) are triangles used for showing different compounds in varying degrees of ionic , metallic and covalent bonding .
In 1941 Van Arkel recognised three extreme materials and associated bonding types. Using 36 main group elements, such as metals, metalloids and non-metals, he placed ionic, metallic and covalent bonds on the corners of an equilateral triangle, as well as suggested intermediate species. The bond triangle shows that chemical bonds are not just particular bonds of a specific type. Rather, bond types are interconnected and different compounds have varying degrees of different bonding character (for example, covalent bonds with significant ionic character are called polar covalent bonds).
Six years later, in 1947, Ketelaar developed van Arkel's idea by adding more compounds and placing bonds on different sides of the triangle.
Many researchers developed the triangle idea further. Some (e.g. Allen's quantitative triangle) used configuration energy as the atomic parameter; others (Jensen's quantitative triangle, Norman's quantitative triangle) used electronegativity. Nowadays, electronegativity triangles are mostly used to classify the chemical bond type.
Compounds of the sp-elements that obey the octet rule , together with hydrogen, can be placed on the triangle. Unfortunately, d-elements cannot be analysed using the van Arkel–Ketelaar triangle, as their electronegativity is so high that it is taken as a constant. Plotting the average electronegativity of the two bonded elements on the x-axis and their electronegativity difference on the y-axis, one can rate the dominant bond type in a compound.
On the right side (from ionic to covalent) lie compounds with varying difference in electronegativity. Compounds of equal electronegativity, such as Cl 2 ( chlorine ), are placed in the covalent corner, while the ionic corner contains compounds with a large electronegativity difference, such as NaCl (table salt). The bottom side (from metallic to covalent) contains compounds with a varying degree of directionality in the bond.
At one extreme are metallic bonds with delocalized bonding; at the other are covalent bonds, in which the orbitals overlap in a particular direction. The left side (from ionic to metallic) is meant for delocalized bonds with varying electronegativity difference.
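As a rough illustration of this plotting scheme, the Python sketch below places a few homonuclear and binary species on the triangle using Pauling electronegativities. The threshold values are illustrative assumptions for the sketch, not standard definitions of the triangle's regions.

```python
# Illustrative sketch of van Arkel-Ketelaar coordinates.
# Pauling electronegativities; the cutoff values below are assumed
# for demonstration, not taken from any standard reference.

PAULING = {"Na": 0.93, "Cl": 3.16}

def triangle_coords(a, b):
    """x = mean electronegativity, y = electronegativity difference."""
    ena, enb = PAULING[a], PAULING[b]
    return ((ena + enb) / 2, abs(ena - enb))

def dominant_bond(a, b, ionic_cut=1.7, metallic_cut=1.8):
    """Classify the dominant bond type from the triangle position."""
    x, y = triangle_coords(a, b)
    if y > ionic_cut:
        return "ionic"
    return "covalent" if x > metallic_cut else "metallic"

print(dominant_bond("Na", "Cl"))  # ionic (mean ~2.05, difference ~2.23)
print(dominant_bond("Cl", "Cl"))  # covalent (zero difference, high mean)
print(dominant_bond("Na", "Na"))  # metallic (zero difference, low mean)
```

The two coordinates alone reproduce the qualitative corners described above; real electronegativity triangles draw continuous regions rather than sharp cutoffs.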
|
https://en.wikipedia.org/wiki/Van_Arkel–Ketelaar_triangle
|
The van Arkel–de Boer process , also known as the iodide process or crystal-bar process , was the first industrial process for the commercial production of pure ductile titanium , zirconium and some other metals. It was developed by Anton Eduard van Arkel and Jan Hendrik de Boer in 1925 for Philips . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] It is now used in the production of small quantities of ultrapure titanium and zirconium. It primarily involves the formation of the metal iodides and their subsequent decomposition to yield pure metal, for example at one of Allegheny Technologies' Albany plants. [ 6 ]
This process was superseded commercially by the Kroll process .
Impure titanium, zirconium, hafnium , vanadium , thorium or protactinium is heated in an evacuated vessel with a halogen at 50–250 °C. The patent specifically involved the intermediacy of TiI 4 and ZrI 4 , which were volatilized (leaving the impurities behind as solids).
At atmospheric pressure TiI 4 melts at 150 °C and boils at 377 °C, while ZrI 4 melts at 499 °C and boils at 600 °C. The boiling points are lower at reduced pressure. The gaseous metal tetraiodide is decomposed on a white hot tungsten filament (1400 °C). As more metal is deposited the filament conducts better and thus a greater electric current is required to maintain the temperature of the filament. The process can be performed in the span of several hours or several weeks, depending on the particular setup.
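The transport cycle described above can be summarised by two reactions, shown here schematically for titanium (a sketch consistent with the temperatures given in the text):

```latex
% Formation of the volatile iodide at moderate temperature (50--250 °C):
\mathrm{Ti_{(impure)} + 2\,I_2 \longrightarrow TiI_4}
% Decomposition at the white-hot filament (about 1400 °C):
\mathrm{TiI_4 \longrightarrow Ti_{(pure)} + 2\,I_2}
```

The net effect is transport of the metal from the impure charge to the filament, since the impurities do not form comparably volatile iodides.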
Generally, the crystal bar process can be performed using any number of metals using whichever halogen or combination of halogens is most appropriate for that sort of transport mechanism, based on the reactivities involved. The only metals it has been used to purify on an industrial scale are titanium, zirconium and hafnium, and in fact it is still in use today on a much smaller scale for special purity needs. [ citation needed ]
|
https://en.wikipedia.org/wiki/Van_Arkel–de_Boer_process
|
The van Deemter equation in chromatography , named for Jan van Deemter , relates the variance per unit length of a separation column to the linear mobile phase velocity by considering physical, kinetic, and thermodynamic properties of a separation. [ 1 ] These properties include pathways within the column, diffusion ( axial and longitudinal), and mass transfer kinetics between stationary and mobile phases. In liquid chromatography, the mobile phase velocity is taken as the exit velocity, that is, the ratio of the flow rate in ml/second to the cross-sectional area of the ‘column-exit flow path.’ For a packed column, the cross-sectional area of the column exit flow path is usually taken as 0.6 times the cross-sectional area of the column. Alternatively, the linear velocity can be taken as the ratio of the column length to the dead time. If the mobile phase is a gas, then the pressure correction must be applied. The variance per unit length of the column is taken as the ratio of the column length to the column efficiency in theoretical plates . The van Deemter equation is a hyperbolic function that predicts that there is an optimum velocity at which there will be the minimum variance per unit column length and, thence, a maximum efficiency. The van Deemter equation was the result of the first application of rate theory to the chromatography elution process.
The van Deemter equation relates the height equivalent to a theoretical plate (HETP) of a chromatographic column to the various flow and kinetic parameters which cause peak broadening, as follows:
H E T P = A + B u + C u {\displaystyle HETP=A+{\frac {B}{u}}+Cu}
where A is the eddy-diffusion term, B is the longitudinal diffusion coefficient of the eluting particles, C is the resistance-to-mass-transfer coefficient of the analyte between mobile and stationary phase, and u is the linear velocity of the mobile phase.
In open tubular capillaries , the A term will be zero as the lack of packing means channeling does not occur. In packed columns, however, multiple distinct routes ("channels") exist through the column packing, which results in band spreading. In the latter case, A will not be zero.
The form of the Van Deemter equation is such that HETP achieves a minimum value at a particular flow velocity. At this flow rate, the resolving power of the column is maximized, although in practice, the elution time is likely to be impractical. Differentiating the van Deemter equation with respect to velocity, setting the resulting expression equal to zero, and solving for the optimum velocity yields the following: u opt = B C {\displaystyle u_{\text{opt}}={\sqrt {\frac {B}{C}}}} , at which the minimum plate height is H min = A + 2 B C {\displaystyle H_{\text{min}}=A+2{\sqrt {BC}}} .
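The optimum can be checked numerically. The Python sketch below evaluates H(u) = A + B/u + C·u and its analytic minimum; the coefficient values are assumed for illustration only.

```python
# Illustrative sketch: minimising the van Deemter equation
# H(u) = A + B/u + C*u. Setting dH/du = -B/u**2 + C = 0 gives
# u_opt = sqrt(B/C) and H_min = A + 2*sqrt(B*C).
import math

def hetp(u, a, b, c):
    """Plate height at linear velocity u."""
    return a + b / u + c * u

def optimum(a, b, c):
    """Return (optimum velocity, minimum plate height)."""
    u_opt = math.sqrt(b / c)
    return u_opt, hetp(u_opt, a, b, c)

# Assumed example coefficients (orders of magnitude only):
u_opt, h_min = optimum(a=0.1, b=0.4, c=0.01)
print(round(u_opt, 3), round(h_min, 3))  # 6.325 0.226
```

Evaluating `hetp` slightly above or below `u_opt` gives a larger plate height, confirming the hyperbolic minimum described in the text.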
The plate height is given as: H E T P = L N {\displaystyle HETP={\frac {L}{N}}}
with L {\displaystyle L\,} the column length and N {\displaystyle N\,} the number of theoretical plates can be estimated from a chromatogram by analysis of the retention time t R {\displaystyle t_{R}\,} for each component and its standard deviation σ {\displaystyle \sigma \,} as a measure for peak width, provided that the elution curve represents a Gaussian curve .
In this case the plate count is given by: [ 2 ] N = ( t R σ ) 2 {\displaystyle N=\left({\frac {t_{R}}{\sigma }}\right)^{2}}
By using the more practical peak width at half height W 1 / 2 {\displaystyle W_{1/2}\,} the equation is: N = 5.54 ( t R W 1 / 2 ) 2 {\displaystyle N=5.54\left({\frac {t_{R}}{W_{1/2}}}\right)^{2}}
or with the width at the base of the peak: N = 16 ( t R W b ) 2 {\displaystyle N=16\left({\frac {t_{R}}{W_{b}}}\right)^{2}}
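The plate-count estimators are straightforward to apply to a measured peak. The sketch below uses an assumed Gaussian peak (retention time and widths invented for illustration); for a true Gaussian the three estimates agree.

```python
# Illustrative sketch: plate count N from a Gaussian peak.
# For a Gaussian, W1/2 = sigma*sqrt(8 ln 2) and Wb = 4*sigma,
# which is where the constants 5.54 and 16 come from.

def plates_from_sigma(t_r, sigma):
    return (t_r / sigma) ** 2

def plates_from_half_width(t_r, w_half):
    return 5.54 * (t_r / w_half) ** 2

def plates_from_base_width(t_r, w_base):
    return 16 * (t_r / w_base) ** 2

# Assumed peak: retention time 120 s, sigma = 2 s
print(plates_from_sigma(120, 2))              # 3600.0
print(plates_from_base_width(120, 8))         # 3600.0 (Wb = 4*sigma = 8 s)
```

For real, slightly asymmetric peaks the half-height formula is usually preferred, since the peak base is hard to locate against baseline noise.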
The Van Deemter equation can be further expanded to: [ 3 ]
Where:
The Rodrigues equation , named for Alírio Rodrigues , is an extension of the Van Deemter equation used to describe the efficiency of a bed of permeable (large-pore) particles. [ 4 ]
The equation is:
H E T P = A + B u + C ⋅ f ( λ ) ⋅ u {\displaystyle HETP=A+{\frac {B}{u}}+C\cdot f(\lambda )\cdot u}
where
and λ {\displaystyle \lambda } is the intraparticular Péclet number .
|
https://en.wikipedia.org/wiki/Van_Deemter_equation
|
Van Gieson's stain is a histological staining technique used to differentiate between collagen and other tissue elements in microscopic sections. It is a combination of two acidic dyes, picric acid and acid fuchsin , producing a distinct coloration that aids in the visualization of connective tissue . [ 1 ]
When examining histological specimens, it colors collagen fibers bright red while staining muscle and other cytoplasmic elements yellow. It was introduced in the late 19th century to histology by American psychiatrist and neuropathologist Ira Van Gieson . Van Gieson’s solution is commonly used as a counterstain in histology, sharply highlighting collagen against a yellow background. [ 1 ]
Van Gieson’s stain was first described by Ira T. Van Gieson in 1889 as a method for examining nervous system tissue. Van Gieson was a pathologist who published The laboratory notes of technical methods for the nervous system in 1889, introducing the picric–fuchsin method at that time. [ 2 ] In the early 20th century the stain was combined with other techniques. In 1908, Frederick Herman Verhoeff introduced an iron–hematoxylin stain for elastic fibers, which was combined with Van Gieson’s counterstain to form the Verhoeff–Van Gieson (VVG) stain . [ 3 ] In VVG staining, elastic fibers are stained black (by Verhoeff’s hematoxylin), collagen appears red (by Van Gieson), and cytoplasmic elements are yellow.
Van Gieson’s stain is an acidic dye mixture. It utilizes the different affinities of its two components for tissue proteins. Acid fuchsin is a large poly-ionic dye (a sulfonated triphenylmethane) [ 4 ] that strongly binds to collagen fibers in a strongly acidic solution, while picric acid (a small trinitrophenol molecule) penetrates and binds more to cytoplasmic proteins and muscle. [ 1 ] Additionally, picric acid provides the acidic pH necessary for the staining mechanism. The Van Gieson stain essentially differentiates cytoplasm and muscle from collagen. Mechanistic studies suggest that acid fuchsin molecules bind to collagen mainly via hydrogen bonds , with collagen’s triple helix staying relatively open during and after dye binding, while picric acid binds more via hydrophobic and ionic interactions in dense cytoplasmic protein networks. [ 5 ] In practice, tissue sections are often first stained with an iron hematoxylin for nuclei, then with Van Gieson solution.
Van Gieson’s stain is widely used as a counterstain to evaluate connective tissue in both histology research and pathology. In medical liver biopsies, the Hematoxylin–Van Gieson (HVG) stain is used to visualize the extent of fibrosis, as collagen appears bright pink/red. [ 6 ] When used after Verhoeff’s elastic stain it reveals elastic fibers (stained black) and collagen (stained red). [ 1 ] It differentiates between collagen and elastic fibers in tumor stroma. [ 7 ] It is often used in general pathology as a quick “connective tissue” stain for collagen and other connective tissues.
Van Gieson’s solution is frequently used in combination with other stains for greater information. In the Hematoxylin–Van Gieson (HVG) method, an iron hematoxylin is applied first, staining nuclei dark blue, followed by Van Gieson’s solution. This results in dark nuclei, red collagen, and yellow cytoplasmic elements. [ 8 ] In the Verhoeff–Van Gieson (VVG) stain, Verhoeff’s iron-hematoxylin (containing ferric chloride and iodine) is used first to stain elastic fibers black, then Van Gieson’s counterstain colors collagen red and cytoplasm yellow. [ 1 ]
Like other staining methods, Van Gieson’s stain has limitations. It may miss very thin collagen fibrils, and immature collagen can be faint or invisible with this stain; this can lead to an underestimation of collagen content. [ 1 ] The red coloration can also fade if slides are not properly fixed or stored. The picric acid–acid fuchsin mixture tends to remove or significantly weaken most hematoxylin staining, resulting in nuclei that are faint or nearly invisible under the microscope. To overcome this, an iron-mordanted hematoxylin, such as Weigert’s hematoxylin, is typically used. Iron hematoxylins are more resistant to acid decolorization and preserve nuclear detail even after exposure to Van Gieson's solution. [ 8 ]
|
https://en.wikipedia.org/wiki/Van_Gieson's_stain
|
In mathematics education , the Van Hiele model is a theory that describes how students learn geometry . The theory originated in 1957 in the doctoral dissertations of Dina van Hiele-Geldof and Pierre van Hiele (wife and husband) at Utrecht University , in the Netherlands . The Soviets did research on the theory in the 1960s and integrated their findings into their curricula. American researchers did several large studies on the van Hiele theory in the late 1970s and early 1980s, concluding that students' low van Hiele levels made it difficult to succeed in proof-oriented geometry courses and advising better preparation at earlier grade levels. [ 1 ] [ 2 ] Pierre van Hiele published Structure and Insight in 1986, further describing his theory. The model has greatly influenced geometry curricula throughout the world through emphasis on analyzing properties and classification of shapes at early grade levels. In the United States, the theory has influenced the geometry strand of the Standards published by the National Council of Teachers of Mathematics and the Common Core Standards .
The student learns by rote to operate with [mathematical] relations that he does not understand, and of which he has not seen the origin…. Therefore the system of relations is an independent construction having no rapport with other experiences of the child. This means that the student knows only what has been taught to him and what has been deduced from it. He has not learned to establish connections between the system and the sensory world. He will not know how to apply what he has learned in a new situation. - Pierre van Hiele, 1959 [ 3 ]
The best known part of the van Hiele model are the five levels which the van Hieles postulated to describe how children learn to reason in geometry. Students cannot be expected to prove geometric theorems until they have built up an extensive understanding of the systems of relationships between geometric ideas. These systems cannot be learned by rote, but must be developed through familiarity by experiencing numerous examples and counterexamples, the various properties of geometric figures, the relationships between the properties, and how these properties are ordered. The five levels postulated by the van Hieles describe how students advance through this understanding.
The five van Hiele levels are sometimes misunderstood to be descriptions of how students understand shape classification, but the levels actually describe the way that students reason about shapes and other geometric ideas. Pierre van Hiele noticed that his students tended to "plateau" at certain points in their understanding of geometry and he identified these plateau points as levels . [ 4 ] In general, these levels are a product of experience and instruction rather than age. This is in contrast to Piaget 's theory of cognitive development, which is age-dependent. A child must have enough experiences (classroom or otherwise) with these geometric ideas to move to a higher level of sophistication. Through rich experiences, children can reach Level 2 in elementary school. Without such experiences, many adults (including teachers) remain in Level 1 all their lives, even if they take a formal geometry course in secondary school. [ 5 ] The levels are as follows:
Level 0. Visualization : At this level, the focus of a child's thinking is on individual shapes, which the child is learning to classify by judging their holistic appearance. Children simply say, "That is a circle," usually without further description. Children identify prototypes of basic geometrical figures ( triangle , circle , square ). These visual prototypes are then used to identify other shapes. A shape is a circle because it looks like a sun; a shape is a rectangle because it looks like a door or a box; and so on. A square seems to be a different sort of shape than a rectangle, and a rhombus does not look like other parallelograms, so these shapes are classified completely separately in the child’s mind. Children view figures holistically without analyzing their properties. If a shape does not sufficiently resemble its prototype, the child may reject the classification. Thus, children at this stage might balk at calling a thin, wedge-shaped triangle (with sides 1, 20, 20 or sides 20, 20, 39) a "triangle", because it's so different in shape from an equilateral triangle , which is the usual prototype for "triangle". If the horizontal base of the triangle is on top and the opposing vertex below, the child may recognize it as a triangle, but claim it is "upside down". Shapes with rounded or incomplete sides may be accepted as "triangles" if they bear a holistic resemblance to an equilateral triangle. [ 6 ] Squares are called "diamonds" and not recognized as squares if their sides are oriented at 45° to the horizontal. Children at this level often believe something is true based on a single example.
Level 1. Analysis : At this level, the shapes become bearers of their properties. The objects of thought are classes of shapes, which the child has learned to analyze as having properties. A person at this level might say, "A square has 4 equal sides and 4 equal angles. Its diagonals are congruent and perpendicular, and they bisect each other." The properties are more important than the appearance of the shape. If a figure is sketched on the blackboard and the teacher claims it is intended to have congruent sides and angles, the students accept that it is a square, even if it is poorly drawn. Properties are not yet ordered at this level. Children can discuss the properties of the basic figures and recognize them by these properties, but generally do not allow categories to overlap because they understand each property in isolation from the others. For example, they will still insist that "a square is not a rectangle ." (They may introduce extraneous properties to support such beliefs, such as defining a rectangle as a shape with one pair of sides longer than the other pair of sides.) Children begin to notice many properties of shapes, but do not see the relationships between the properties; therefore they cannot reduce the list of properties to a concise definition with necessary and sufficient conditions. They usually reason inductively from several examples, but cannot yet reason deductively because they do not understand how the properties of shapes are related.
Level 2. Abstraction : At this level, properties are ordered. The objects of thought are geometric properties, which the student has learned to connect deductively. The student understands that properties are related and one set of properties may imply another property. Students can reason with simple arguments about geometric figures. A student at this level might say, " Isosceles triangles are symmetric, so their base angles must be equal." Learners recognize the relationships between types of shapes. They recognize that all squares are rectangles, but not all rectangles are squares, and they understand why squares are a type of rectangle based on an understanding of the properties of each. They can tell whether it is possible or not to have a rectangle that is, for example, also a rhombus. They understand necessary and sufficient conditions and can write concise definitions. However, they do not yet understand the intrinsic meaning of deduction. They cannot follow a complex argument, understand the place of definitions, or grasp the need for axioms, so they cannot yet understand the role of formal geometric proofs.
Level 3. Deduction : Students at this level understand the meaning of deduction. The object of thought is deductive reasoning (simple proofs), which the student learns to combine to form a system of formal proofs ( Euclidean geometry ). Learners can construct geometric proofs at a secondary school level and understand their meaning. They understand the role of undefined terms, definitions, axioms and theorems in Euclidean geometry. However, students at this level believe that axioms and definitions are fixed, rather than arbitrary, so they cannot yet conceive of non-Euclidean geometry . Geometric ideas are still understood as objects in the Euclidean plane.
Level 4. Rigor : At this level, geometry is understood at the level of a mathematician. Students understand that definitions are arbitrary and need not actually refer to any concrete realization. The object of thought is deductive geometric systems, for which the learner compares axiomatic systems . Learners can study non-Euclidean geometries with understanding. People can understand the discipline of geometry and how it differs philosophically from non-mathematical studies.
American researchers renumbered the levels as 1 to 5 so that they could add a "Level 0" which described young children who could not identify shapes at all. Both numbering systems are still in use. Some researchers also give different names to the levels.
The van Hiele levels have five properties:
1. Fixed sequence : the levels are hierarchical. Students cannot "skip" a level. [ 5 ] The van Hieles claim that much of the difficulty experienced by geometry students is due to being taught at the Deduction level when they have not yet achieved the Abstraction level.
2. Adjacency : properties which are intrinsic at one level become extrinsic at the next. (The properties are there at the Visualization level, but the student is not yet consciously aware of them until the Analysis level. Properties are in fact related at the Analysis level, but students are not yet explicitly aware of the relationships.)
3. Distinction : each level has its own linguistic symbols and network of relationships. The meaning of a linguistic symbol is more than its explicit definition; it includes the experiences the speaker associates with the given symbol. What may be "correct" at one level is not necessarily correct at another level. At Level 0 a square is something that looks like a box. At Level 2 a square is a special type of rectangle. Neither of these is a correct description of the meaning of "square" for someone reasoning at Level 1. If the student is simply handed the definition and its associated properties, without being allowed to develop meaningful experiences with the concept, the student will not be able to apply this knowledge beyond the situations used in the lesson.
4. Separation : a teacher who is reasoning at one level speaks a different "language" from a student at a lower level, preventing understanding. When a teacher speaks of a "square" she or he means a special type of rectangle. A student at Level 0 or 1 will not have the same understanding of this term. The student does not understand the teacher, and the teacher does not understand how the student is reasoning, frequently concluding that the student's answers are simply "wrong". The van Hieles believed this property was one of the main reasons for failure in geometry. Teachers believe they are expressing themselves clearly and logically, but their Level 3 or 4 reasoning is not understandable to students at lower levels, nor do the teachers understand their students’ thought processes. Ideally, the teacher and students need shared experiences behind their language.
5. Attainment : The van Hieles recommended five phases for guiding students from one level to another on a given topic: information, guided orientation, explicitation, free orientation, and integration. [ 7 ]
For Dina van Hiele-Geldof's doctoral dissertation, she conducted a teaching experiment with 12-year-olds in a Montessori secondary school in the Netherlands. She reported that by using this method she was able to raise students' levels from Level 0 to 1 in 20 lessons and from Level 1 to 2 in 50 lessons.
Using van Hiele levels as the criterion, almost half of geometry students are placed in a course in which their chances of being successful are only 50-50. — Zalman Usiskin, 1982 [ 1 ]
Researchers found that the van Hiele levels of American students are low. European researchers have found similar results for European students. [ 8 ] Many, perhaps most, American students do not achieve the Deduction level even after successfully completing a proof-oriented high school geometry course, [ 1 ] probably because material is learned by rote, as the van Hieles claimed. [ 5 ] This appears to be because American high school geometry courses assume students are already at least at Level 2, ready to move into Level 3, whereas many high school students are still at Level 1, or even Level 0. [ 1 ] See the Fixed Sequence property above.
The levels are discontinuous, as defined in the properties above, but researchers have debated as to just how discrete the levels actually are. Studies have found that many children reason at multiple levels, or intermediate levels, which appears to be in contradiction to the theory. [ 6 ] Children also advance through the levels at different rates for different concepts, depending on their exposure to the subject. They may therefore reason at one level for certain shapes, but at another level for other shapes. [ 5 ]
Some researchers [ 9 ] have found that many children at the Visualization level do not reason in a completely holistic fashion, but may focus on a single attribute, such as the equal sides of a square or the roundness of a circle. They have proposed renaming this level the syncretic level. Other modifications have also been suggested, [ 10 ] such as defining sub-levels between the main levels, though none of these modifications have yet gained popularity.
|
https://en.wikipedia.org/wiki/Van_Hiele_model
|
Van Jacobson TCP/IP Header Compression is a data compression protocol described in RFC 1144, [ 1 ] specifically designed by Van Jacobson to improve TCP/IP performance over slow serial links . Van Jacobson compression reduces the normal 40-byte TCP/IP packet headers down to 3–4 bytes for the average case; it does this by saving the state of TCP connections at both ends of a link, and only sending the differences in the header fields that change. This makes a very big difference for interactive performance on low-speed links, although it will not do anything about the processing delay inherent to most dial-up modems .
Van Jacobson Header Compression (also VJ compression, or just Header Compression) is an option in most versions of PPP . Versions of Serial Line Internet Protocol (SLIP) with VJ compression are often called CSLIP (Compressed SLIP).
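The core idea, shared per-connection state plus transmission of only the changed fields, can be sketched in a few lines of Python. This is a toy model of delta encoding, not the actual RFC 1144 wire format; the field names and values are assumptions for illustration.

```python
# Toy sketch of the idea behind VJ header compression: both ends keep
# the last header seen for a connection and exchange only the fields
# that changed since then. Not the real RFC 1144 encoding.

def compress(prev, current):
    """Return {field: new_value} for fields that differ from prev."""
    return {k: v for k, v in current.items() if prev.get(k) != v}

def decompress(prev, deltas):
    """Reconstruct the full header from saved state plus the deltas."""
    restored = dict(prev)
    restored.update(deltas)
    return restored

prev = {"seq": 1000, "ack": 500, "win": 8192, "ip_id": 42}
curr = {"seq": 1040, "ack": 500, "win": 8192, "ip_id": 43}

deltas = compress(prev, curr)
print(deltas)  # {'seq': 1040, 'ip_id': 43}
assert decompress(prev, deltas) == curr
```

In a typical interactive TCP session most header fields are constant or change by small, predictable increments, which is why the real protocol gets the average header down to a few bytes.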
|
https://en.wikipedia.org/wiki/Van_Jacobson_TCP/IP_Header_Compression
|
Van Krevelen diagrams are graphical plots developed by Dirk Willem van Krevelen (chemist and professor of fuel technology at the TU Delft ) and used to assess the origin and maturity of kerogen and petroleum . The diagram cross-plots the hydrogen : carbon atomic ratio as a function of the oxygen :carbon atomic ratio.
Beginning around 2003, the diagrams have often been used to visualize data from mass spectrometry analysis of mixtures other than kerogen and petroleum. [ 1 ] For example, the diagrams have been used in an analysis of the components of Scotch whisky . [ 2 ]
Different types of kerogen have differing potentials to produce oil during maturation . These various types of kerogen can be distinguished on a van Krevelen diagram. [ 3 ]
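The coordinates of a van Krevelen plot follow directly from elemental composition. The sketch below (an illustration, not from the article) computes the two atomic ratios from atom counts of a molecular formula:

```python
# Illustrative sketch: the point a compound occupies on a van Krevelen
# diagram, computed from its C, H and O atom counts.

def van_krevelen_point(c, h, o):
    """Return (O/C, H/C): the x and y coordinates on the diagram."""
    return o / c, h / c

# Glucose, C6H12O6:
print(van_krevelen_point(6, 12, 6))   # (1.0, 2.0)
# A saturated hydrocarbon such as hexane, C6H14:
print(van_krevelen_point(6, 14, 0))   # O/C = 0, H/C ~ 2.33
```

Maturation of kerogen moves points down and to the left on the diagram (loss of H and O relative to C), which is what makes the plot useful for assessing maturity.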
|
https://en.wikipedia.org/wiki/Van_Krevelen_diagram
|
The Van Leusen reaction is the reaction of a ketone with TosMIC leading to the formation of a nitrile . It was first described in 1977 by Van Leusen and co-workers. [ 1 ] When aldehydes are employed, the Van Leusen reaction is particularly useful to form oxazoles and imidazoles .
The reaction mechanism consists of the initial deprotonation of TosMIC, which is facile thanks to the electron-withdrawing effect of both sulfone and isocyanide groups. Attack onto the carbonyl is followed by 5-endo-dig cyclisation (following Baldwin's rules ) into a 5-membered ring.
If the substrate is an aldehyde, then elimination of the excellent tosyl leaving group can occur readily. Upon quenching, the resulting molecule is an oxazole.
If an aldimine is used, formed from the condensation of an aldehyde with an amine , then imidazoles can be generated through the same process. [ 2 ]
When ketones are used instead, elimination cannot occur; rather, a tautomerization process gives an intermediate which after a ring opening process and elimination of the tosyl group forms an N -formylated alkeneimine. This is then solvolysed by an acidic alcohol solution to give the nitrile product.
|
https://en.wikipedia.org/wiki/Van_Leusen_reaction
|
The Van Slyke determination is a chemical test for the determination of amino acids containing a primary amine group. It is named after the biochemist Donald Dexter Van Slyke (1883-1971). [ 1 ]
One of Van Slyke's first professional achievements was the quantification of amino acids by the Van Slyke determination reaction. [ 2 ] To quantify aliphatic amino acids , the sample is diluted in glycerol and then treated with a solution of sodium nitrite , water and acetic acid . The resulting diazotisation reaction produces nitrogen gas which can be observed qualitatively or measured quantitatively. [ 3 ]
Van Slyke Reaction: [ 4 ]
R − NH 2 + HONO ⟶ ROH + N 2 + H 2 O {\displaystyle {\ce {R-NH2 + HONO -> ROH + N2 + H2O}}}
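Since each primary amine group liberates one mole of N2, the amount of amino acid follows from the measured gas volume via the ideal gas law. The Python sketch below illustrates this; the measurement values are assumed for the example.

```python
# Illustrative sketch: quantifying a primary amine from the N2 volume
# evolved in the Van Slyke reaction, using n = pV/(RT).
# One mole of N2 corresponds to one mole of primary amine groups.

R = 8.314  # gas constant, J/(mol*K)

def moles_from_gas(p_pa, v_m3, t_k):
    """Moles of gas from pressure (Pa), volume (m^3), temperature (K)."""
    return p_pa * v_m3 / (R * t_k)

# Assumed measurement: 24.5 mL of N2 at 101325 Pa and 298.15 K
n = moles_from_gas(101325, 24.5e-6, 298.15)
print(f"{n * 1000:.2f} mmol of primary amine")  # 1.00 mmol
```

In the original apparatus the gas volume was read off manometrically; the arithmetic converting that reading to an amine quantity is the same.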
In addition, Van Slyke developed the so-called Van Slyke apparatus, which can be used to determine the concentration of respiratory gases in the blood , especially the concentration of sodium bicarbonate . This was of great importance for recognizing incipient acidosis in diabetic patients as early as possible, so that alkali treatment could be started. The Van Slyke apparatus became standard equipment in clinical laboratories around the world, and the results of Van Slyke's research are still used today to determine abnormalities in acid-base homeostasis . Later on, Van Slyke further improved his apparatus, increasing its accuracy and sensitivity. Using the new method, he was able to further investigate the role of gas and electrolyte equilibria in the blood and how they change in response to respiration . [ 5 ] [ 6 ] The oxygen-carrying capacity of blood is estimated by the Van Slyke gasometric method. [ 7 ]
|
https://en.wikipedia.org/wiki/Van_Slyke_determination
|
In general relativity , the van Stockum dust is an exact solution of the Einstein field equations where the gravitational field is generated by dust rotating about an axis of cylindrical symmetry. Since the density of the dust is increasing with distance from this axis, the solution is rather artificial, but as one of the simplest known solutions in general relativity, it stands as a pedagogically important example.
This solution is named after Willem Jacob van Stockum , who rediscovered it in 1938, independently of a much earlier discovery by Cornelius Lanczos in 1924. For this reason, the solution is sometimes referred to as the Lanczos–van Stockum dust.
One way of obtaining this solution is to look for a cylindrically symmetric perfect fluid solution in which the fluid exhibits rigid rotation . That is, we demand that the world lines of the fluid particles form a timelike congruence having nonzero vorticity but vanishing expansion and shear. (In fact, since dust particles feel no forces, this will turn out to be a timelike geodesic congruence, but we won't need to assume this in advance.)
A simple ansatz corresponding to this demand is expressed by the following frame field , which contains two undetermined functions of r {\displaystyle r} :
To prevent misunderstanding, we should emphasize that taking the dual coframe
gives the metric tensor in terms of the same two undetermined functions:
Multiplying out gives
We compute the Einstein tensor with respect to this frame, in terms of the two undetermined functions,
and demand that the result have the form appropriate for a perfect fluid solution with the timelike unit vector e → 0 {\displaystyle {\vec {e}}_{0}} everywhere tangent to the world line of a fluid particle. That is, we demand that
This gives the conditions
Solving for f {\displaystyle f} and then for h {\displaystyle h} gives the desired frame defining the van Stockum solution:
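For reference, the line element of the resulting solution, in the form usually quoted in the literature (sign conventions for the cross term vary), is
d s 2 = − ( d t − a r 2 d ϕ ) 2 + r 2 d ϕ 2 + e − a 2 r 2 ( d r 2 + d z 2 ) {\displaystyle ds^{2}=-(dt-ar^{2}\,d\phi )^{2}+r^{2}\,d\phi ^{2}+e^{-a^{2}r^{2}}(dr^{2}+dz^{2})}
so that g ϕ ϕ = r 2 ( 1 − a 2 r 2 ) {\displaystyle g_{\phi \phi }=r^{2}(1-a^{2}r^{2})} changes sign at r = a − 1 {\displaystyle r=a^{-1}} , where a {\displaystyle a} is the vorticity parameter discussed below.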
Note that this frame is only defined on r > 0 {\displaystyle r>0} .
Computing the Einstein tensor with respect to our frame shows that in fact the pressure vanishes , so we have a dust solution. The mass density of the dust turns out to be
Happily, this is finite on the axis of symmetry r = 0 {\displaystyle r=0} , but the density increases with radius, a feature which unfortunately severely limits possible astrophysical applications.
Solving the Killing equations shows that this spacetime admits a three-dimensional abelian Lie algebra of Killing vector fields, generated by
Here, ξ → 1 {\displaystyle {\vec {\xi }}_{1}} has nonzero vorticity, so we have a stationary spacetime invariant under translation along the world lines of the dust particles, and also under translation along the axis of cylindrical symmetry and rotation about that axis.
Note that unlike the Gödel dust solution , in the van Stockum dust the dust particles are rotating about a geometrically distinguished axis .
As promised, the expansion and shear of the timelike geodesic congruence e → 0 {\displaystyle {\vec {e}}_{0}} vanish, but the vorticity vector is
This means that even though in our comoving chart the world lines of the dust particles appear as vertical lines, in fact they are twisting about one another as the dust particles swirl about the axis of symmetry. In other words, if we follow the evolution of a small ball of dust, we find that it rotates about its own axis (parallel to r = 0 {\displaystyle r=0} ), but does not shear or expand; the latter properties define what we mean by rigid rotation . Notice that on the axis itself, the magnitude of the vorticity vector becomes simply a {\displaystyle a} .
The tidal tensor is
which shows that observers riding on the dust particles experience isotropic tidal tension in the plane of rotation. The magnetogravitic tensor is
Consider a thought experiment in which an observer riding on a dust particle sitting on the axis of symmetry looks out at dust particles with positive radial coordinate. Does he see them to be rotating , or not?
Since the top array of null geodesics is obtained simply by translating upwards the lower array, and since the three world lines are all vertical (invariant under time translation ), it might seem that the answer is "no". However, while the frame given above is an inertial frame , computing the covariant derivatives
shows that only the first vanishes identically. In other words, the remaining spatial vectors are spinning about e → 1 {\displaystyle {\vec {e}}_{1}} (i.e. about an axis parallel to the axis of cylindrical symmetry of this spacetime).
Thus, to obtain a nonspinning inertial frame we need to spin up our original frame, like this:
where θ = t q ( r ) {\displaystyle \theta =tq(r)} , with q a new undetermined function of r . Plugging in the requirement that the covariant derivatives vanish, we obtain
The new frame appears, in our comoving coordinate chart, to be spinning, but in fact it is gyrostabilized. In particular, since our observer with the green world line in the figure is presumably riding a nonspinning dust particle (otherwise spin-spin forces would be apparent in the dynamics of the dust), he in fact observes nearby radially separated dust particles to be rotating clockwise about his location with angular velocity a. This explains the physical meaning of the parameter which we found in our earlier derivation of the first frame.
( Pedantic note: alert readers will have noticed that we ignored the fact that neither of our frame fields is well defined on the axis. However, we can define a frame for an on-axis observer by an appropriate one-sided limit; this gives a discontinuous frame field, but we only need to define a frame along the world line of our on-axis observer in order to pursue the thought experiment considered in this section.)
It is worth remarking that the null geodesics spiral inwards in the above figure. This means that our on-axis observer sees the other dust particles at time-lagged locations , which is of course just what we would expect. The fact that the null geodesics appear "bent" in this chart is of course an artifact of our choice of comoving coordinates in which the world lines of the dust particles appear as vertical coordinate lines.
Let us draw the light cones for some typical events in the van Stockum dust, to see how their appearance (in our comoving cylindrical chart) depends on the radial coordinate:
As the figure shows, at r = a − 1 {\displaystyle r=a^{-1}} , the cones become tangent to the coordinate plane t = t 0 {\displaystyle t=t_{0}} , and we obtain a closed null curve (the red circle). Note that this is not a null geodesic.
As we move further outward, we can see that horizontal circles with larger radii are closed timelike curves . The paradoxical nature of these CTCs was apparently first pointed out by van Stockum: observers whose world lines form a closed timelike curve can apparently revisit or affect their own past. Even worse, there is apparently nothing to prevent such an observer from deciding, on his third lifetime, say, to stop accelerating, which would give him multiple biographies.
These closed timelike curves are not timelike geodesics, so these paradoxical observers must accelerate to experience these effects. Indeed, as we would expect, the required acceleration diverges as these timelike circles approach the null circles lying in the critical cylinder r = a − 1 {\displaystyle r=a^{-1}} .
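The causal character of these horizontal circles can be checked numerically. This sketch assumes the standard form of the metric in which the φφ-component is g φϕ = r²(1 − a²r²) in geometric units (function names are ours):

```python
# Causal character of the circles t, z, r = const in the van Stockum dust,
# assuming the standard line element with g = r**2 * (1 - (a*r)**2)
# for the phi-phi metric component (geometric units; a is the vorticity
# parameter on the axis).  g > 0: spacelike, g = 0: null, g < 0: timelike.

def g_phiphi(r, a=1.0):
    return r**2 * (1.0 - (a * r)**2)

def causal_character(r, a=1.0, tol=1e-12):
    g = g_phiphi(r, a)
    if g > tol:
        return "spacelike"
    if g < -tol:
        return "timelike"   # a closed timelike curve
    return "null"

# Inside the critical cylinder the circles are ordinary spacelike curves;
# at r = 1/a they form the closed null curve; beyond it they are CTCs.
```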
Closed timelike curves turn out to exist in many other exact solutions in general relativity, and their common appearance is one of the most troubling theoretical objections to this theory. However, very few physicists refuse to use general relativity at all on the basis of such objections; rather most take the pragmatic attitude that using general relativity makes sense whenever one can get away with it, because of the relative simplicity and well established reliability of this theory in many astrophysical situations. This is not unlike the fact that many physicists use Newtonian mechanics every day, even though they are well aware that Galilean kinematics has been "overthrown" by relativistic kinematics.
|
https://en.wikipedia.org/wiki/Van_Stockum_dust
|
In condensed matter and atomic physics , Van Vleck paramagnetism refers to a positive and temperature -independent contribution to the magnetic susceptibility of a material, derived from second order corrections to the Zeeman interaction . The quantum mechanical theory was developed by John Hasbrouck Van Vleck between the 1920s and the 1930s to explain the magnetic response of gaseous nitric oxide ( NO ) and of rare-earth salts. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Alongside other magnetic effects like Paul Langevin 's formulas for paramagnetism ( Curie's law ) and diamagnetism , Van Vleck discovered an additional paramagnetic contribution of the same order as Langevin's diamagnetism. The Van Vleck contribution is usually important for systems with shells one electron short of being half filled, and it vanishes for elements with closed shells . [ 5 ] [ 6 ]
The magnetization of a material under a small external magnetic field H {\displaystyle \mathbf {H} } is approximately described by
M = χ H {\displaystyle \mathbf {M} =\chi \mathbf {H} }
where χ {\displaystyle \chi } is the magnetic susceptibility. When a magnetic field is applied to a paramagnetic material, its magnetization is parallel to the magnetic field and χ > 0 {\displaystyle \chi >0} . For a diamagnetic material , the magnetization opposes the field, and χ < 0 {\displaystyle \chi <0} .
Experimental measurements show that most non-magnetic materials have a susceptibility that behaves in the following way:
χ = C 0 T + χ 0 {\displaystyle \chi ={\frac {C_{0}}{T}}+\chi _{0}}
where T {\displaystyle T} is the absolute temperature ; C 0 , χ 0 {\displaystyle C_{0},\chi _{0}} are constants, with C 0 ≥ 0 {\displaystyle C_{0}\geq 0} , while χ 0 {\displaystyle \chi _{0}} can be positive, negative or zero. Van Vleck paramagnetism often refers to systems where C 0 ≈ 0 {\displaystyle C_{0}\approx 0} and χ 0 > 0 {\displaystyle \chi _{0}>0} .
The Hamiltonian for an electron in a static homogeneous magnetic field H {\displaystyle \mathbf {H} } in an atom is usually composed of three terms
where μ 0 {\displaystyle \mu _{0}} is the vacuum permeability , μ B {\displaystyle \mu _{\rm {B}}} is the Bohr magneton , g {\displaystyle g} is the g-factor , e {\displaystyle e} is the elementary charge , m e {\displaystyle m_{\rm {e}}} is the electron mass , L {\displaystyle \mathbf {L} } is the orbital angular momentum operator , S {\displaystyle \mathbf {S} } the spin and r ⊥ {\displaystyle r_{\perp }} is the component of the position operator orthogonal to the magnetic field. The Hamiltonian has three terms, the first one H 0 {\displaystyle {\mathcal {H}}_{0}} is the unperturbed Hamiltonian without the magnetic field, the second one is proportional to H {\displaystyle \mathbf {H} } , and the third one is proportional to H 2 {\displaystyle H^{2}} . In order to obtain the ground state of the system, one can treat H 0 {\displaystyle {\mathcal {H}}_{0}} exactly, and treat the magnetic field dependent terms using perturbation theory. Note that for strong magnetic fields, Paschen-Back effect dominates.
First order perturbation theory on the second term of the Hamiltonian (proportional to H {\displaystyle H} ) for electrons bound to an atom, gives a positive correction to energy given by
where | g ⟩ {\displaystyle |\mathrm {g} \rangle } is the ground state, g J {\displaystyle g_{J}} is the Landé g-factor of the ground state and J = L + S {\displaystyle \mathbf {J} =\mathbf {L} +\mathbf {S} } is the total angular momentum operator (see Wigner–Eckart theorem ). This correction leads to what is known as Langevin paramagnetism (the quantum theory is sometimes called Brillouin paramagnetism), which gives a positive magnetic susceptibility. For sufficiently large temperatures, this contribution is described by Curie's law :
χ C u r i e = C 1 T {\displaystyle \chi _{\rm {Curie}}={\frac {C_{1}}{T}}}
a susceptibility that is inversely proportional to the temperature T {\displaystyle T} , where C 0 ≈ C 1 {\displaystyle C_{0}\approx C_{1}} is the material dependent Curie constant . If the ground state has no total angular momentum there is no Curie contribution and other terms dominate.
First order perturbation theory on the third term of the Hamiltonian (proportional to H 2 {\displaystyle H^{2}} ) leads to a negative response (a magnetization that opposes the magnetic field), usually known as Larmor or Langevin diamagnetism :
where C 2 {\displaystyle C_{2}} is another constant, proportional to n {\displaystyle n} , the number of atoms per unit volume, and ⟨ r 2 ⟩ {\displaystyle \langle r^{2}\rangle } is the mean squared radius of the atom. Note that the Larmor susceptibility does not depend on the temperature.
While Curie and Larmor susceptibilities were well understood from experimental measurements, J.H. Van Vleck noticed that the calculation above was incomplete. If H {\displaystyle H} is taken as the perturbation parameter, the calculation must include all orders of perturbation up to the same power of H {\displaystyle H} . As Larmor diamagnetism comes from the first order perturbation of the H 2 {\displaystyle H^{2}} term, one must also calculate the second order perturbation of the term linear in H {\displaystyle H} :
where the sum goes over all excited degenerate states | e i ⟩ {\displaystyle |\mathrm {e} _{i}\rangle } , and E e , i ( 0 ) , E g ( 0 ) {\displaystyle E_{\mathrm {e} ,i}^{(0)},E_{\mathrm {g} }^{(0)}} are the energies of the excited states and the ground state, respectively, the sum excludes the state i = 0 {\displaystyle i=0} , where | e 0 ⟩ = | g ⟩ {\displaystyle |\mathrm {e} _{0}\rangle =|\mathrm {g} \rangle } . Historically, J.H. Van Vleck called this term the "high frequency matrix elements". [ 4 ]
In this way, Van Vleck susceptibility comes from the second order energy correction, and can be written as
where n {\displaystyle n} is the number density , and S z {\displaystyle S_{z}} and L z {\displaystyle L_{z}} are the projection of the spin and orbital angular momentum in the direction of the magnetic field, respectively.
In this way, χ 0 ≈ χ V V + χ L a r m o r {\displaystyle \chi _{0}\approx \chi _{\rm {VV}}+\chi _{\rm {Larmor}}} . As the signs of the Larmor and Van Vleck susceptibilities are opposite, the sign of χ 0 {\displaystyle \chi _{0}} depends on the specific properties of the material.
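The structure of the second order sum can be illustrated with a toy calculation: given hypothetical matrix elements of the magnetic moment between the ground state and the excited states, and the corresponding energy gaps, the Van Vleck term is positive and temperature independent. All numbers below are invented for illustration:

```python
# Toy second-order (Van Vleck) susceptibility:
#   chi = 2 * n * sum_i |<e_i|mu_z|g>|**2 / (E_i - E_g)
# The matrix elements and energy gaps are hypothetical numbers.

def van_vleck_chi(n, matrix_elements, gaps):
    """n: number density; matrix_elements: |<e_i|mu_z|g>| for each
    excited state; gaps: excitation energies E_i - E_g (all > 0)."""
    return 2.0 * n * sum(m**2 / d for m, d in zip(matrix_elements, gaps))

chi = van_vleck_chi(n=1.0, matrix_elements=[0.5, 0.2], gaps=[1.0, 4.0])
# Positive (paramagnetic), and temperature independent as long as only
# the ground state is appreciably populated.
```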
For a more general system (molecules, complex systems), the paramagnetic susceptibility for an ensemble of independent magnetic moments can be written as
where
and g J ( i ) {\displaystyle g_{J}^{(i)}} is the Landé g-factor of state i . Van Vleck summarizes the results of this formula in four cases, depending on the temperature: [ 3 ]
While molecular oxygen O 2 and nitric oxide NO are similar paramagnetic gases, O 2 follows Curie's law as in case (a), while NO deviates slightly from it. In 1927, Van Vleck considered NO to be in case (d) and obtained a more precise prediction of its susceptibility using the formula above. [ 2 ] [ 4 ]
The standard example of Van Vleck paramagnetism is europium(III) oxide ( Eu 2 O 3 ), in which the trivalent europium ions have six 4f electrons. The ground state of Eu 3+ has total azimuthal quantum number j = 0 {\displaystyle j=0} , so Curie's contribution ( C 0 / T {\displaystyle C_{0}/T} ) vanishes; the first excited state, with j = 1 {\displaystyle j=1} , lies very close to the ground state (about 330 K above it) and contributes through second order corrections, as shown by Van Vleck. A similar effect is observed in samarium salts ( Sm 3+ ions). [ 7 ] [ 6 ] In the actinides , Van Vleck paramagnetism is also important in Bk 5+ and Cm 4+ , which have a localized 5f 6 configuration. [ 7 ]
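A toy two-level model captures why ions like Eu 3+ show a flat susceptibility at low temperature: the magnetic excited level (placed 330 K above a nonmagnetic ground state, as in the text) is thermally depopulated, leaving only the second order Van Vleck term. The prefactors below are hypothetical:

```python
import math

# Toy two-level model of a Van Vleck ion: a nonmagnetic j = 0 ground
# state and a magnetic excited level at gap DELTA (in kelvin).  At low T
# the Curie response of the excited level is frozen out by the Boltzmann
# factor, leaving the roughly temperature independent Van Vleck term.
# CHI_VV and C_CURIE are hypothetical, chosen for illustration.

DELTA = 330.0    # excitation gap in kelvin (first excited level)
CHI_VV = 1.0     # hypothetical second-order (Van Vleck) contribution
C_CURIE = 100.0  # hypothetical Curie constant of the excited level

def chi(T):
    p_excited = math.exp(-DELTA / T) / (1.0 + math.exp(-DELTA / T))
    return CHI_VV + (C_CURIE / T) * p_excited

# At low temperature the susceptibility is nearly flat (pure Van Vleck):
low = [chi(T) for T in (10.0, 20.0, 40.0)]
```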
|
https://en.wikipedia.org/wiki/Van_Vleck_paramagnetism
|
Van den Bergh reaction is a chemical reaction used to measure bilirubin levels in blood. [ 1 ] [ 2 ] More specifically, it determines the amount of conjugated bilirubin in the blood. The reaction produces azobilirubin .
Principle: bilirubin reacts with diazotised sulphanilic acid to produce purple coloured azobilirubin. [ 3 ] This reaction is highly useful in understanding the nature of jaundice . It was pioneered by the Dutch physician Abraham Albert Hijmans van den Bergh (1869–1943) of Utrecht . The test helps to identify the type of jaundice: the serum of the patient is mixed with diazo reagent, and if a red colour develops immediately, the result is called a direct positive. This happens if conjugated bilirubin is present.
In an indirect positive test, the patient's serum is first treated with alcohol and later mixed with diazo reagent. This causes development of a red colour. It is seen if unconjugated bilirubin is present.
If both conjugated and unconjugated bilirubin are present the reaction is termed a biphasic reaction.
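The decision logic described above can be summarized in a short sketch (the function name and return strings are ours, not standard laboratory nomenclature):

```python
# Interpretation of the van den Bergh reaction, as described above.
# immediate_red: red colour on mixing serum directly with diazo reagent;
# red_after_alcohol: red colour only after alcohol treatment of serum.

def van_den_bergh_interpretation(immediate_red, red_after_alcohol):
    if immediate_red and red_after_alcohol:
        return "biphasic: both conjugated and unconjugated bilirubin"
    if immediate_red:
        return "direct positive: conjugated bilirubin present"
    if red_after_alcohol:
        return "indirect positive: unconjugated bilirubin present"
    return "negative"
```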
|
https://en.wikipedia.org/wiki/Van_den_Bergh_reaction
|
In mathematics , the van der Corput inequality is a corollary of the Cauchy–Schwarz inequality that is useful in the study of correlations among vectors, and hence random variables . It is also useful in the study of equidistributed sequences , for example in the Weyl equidistribution estimate . Loosely stated, the van der Corput inequality asserts that if a unit vector v {\displaystyle v} in an inner product space V {\displaystyle V} is strongly correlated with many unit vectors u 1 , … , u n ∈ V {\displaystyle u_{1},\dots ,u_{n}\in V} , then many of the pairs u i , u j {\displaystyle u_{i},u_{j}} must be strongly correlated with each other. Here, the notion of correlation is made precise by the inner product of the space V {\displaystyle V} : when the absolute value of ⟨ u , v ⟩ {\displaystyle \langle u,v\rangle } is close to 1 {\displaystyle 1} , then u {\displaystyle u} and v {\displaystyle v} are considered to be strongly correlated. (More generally, if the vectors involved are not unit vectors, then strong correlation means that | ⟨ u , v ⟩ | ≈ ‖ u ‖ ‖ v ‖ {\displaystyle |\langle u,v\rangle |\approx \|u\|\|v\|} .)
Let V {\displaystyle V} be a real or complex inner product space with inner product ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } and induced norm ‖ ⋅ ‖ {\displaystyle \|\cdot \|} . Suppose that v , u 1 , … , u n ∈ V {\displaystyle v,u_{1},\dots ,u_{n}\in V} and that ‖ v ‖ = 1 {\displaystyle \|v\|=1} . Then
( ∑ i = 1 n | ⟨ v , u i ⟩ | ) 2 ≤ ∑ i , j = 1 n | ⟨ u i , u j ⟩ | {\displaystyle \left(\sum _{i=1}^{n}|\langle v,u_{i}\rangle |\right)^{2}\leq \sum _{i,j=1}^{n}|\langle u_{i},u_{j}\rangle |}
In terms of the correlation heuristic mentioned above, if v {\displaystyle v} is strongly correlated with many unit vectors u 1 , … , u n ∈ V {\displaystyle u_{1},\dots ,u_{n}\in V} , then the left-hand side of the inequality will be large, which then forces a significant proportion of the vectors u i {\displaystyle u_{i}} to be strongly correlated with one another.
We start by noticing that for any i ∈ { 1 , … , n } {\displaystyle i\in \{1,\dots ,n\}} there exists ϵ i {\displaystyle \epsilon _{i}} (real or complex) such that | ϵ i | = 1 {\displaystyle |\epsilon _{i}|=1} and | ⟨ v , u i ⟩ | = ϵ i ⟨ v , u i ⟩ {\displaystyle |\langle v,u_{i}\rangle |=\epsilon _{i}\langle v,u_{i}\rangle } . Then, taking the inner product to be linear in its second argument and applying the Cauchy–Schwarz inequality,
∑ i = 1 n | ⟨ v , u i ⟩ | = ⟨ v , ∑ i = 1 n ϵ i u i ⟩ ≤ ‖ v ‖ ‖ ∑ i = 1 n ϵ i u i ‖ = ( ∑ i , j = 1 n ϵ i ¯ ϵ j ⟨ u i , u j ⟩ ) 1 / 2 ≤ ( ∑ i , j = 1 n | ⟨ u i , u j ⟩ | ) 1 / 2 , {\displaystyle \sum _{i=1}^{n}|\langle v,u_{i}\rangle |=\left\langle v,\sum _{i=1}^{n}\epsilon _{i}u_{i}\right\rangle \leq \|v\|\left\|\sum _{i=1}^{n}\epsilon _{i}u_{i}\right\|=\left(\sum _{i,j=1}^{n}{\overline {\epsilon _{i}}}\epsilon _{j}\langle u_{i},u_{j}\rangle \right)^{1/2}\leq \left(\sum _{i,j=1}^{n}|\langle u_{i},u_{j}\rangle |\right)^{1/2},}
and squaring both sides gives the inequality.
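In the notation above, the inequality bounds the square of the summed correlations with v by the sum of pairwise correlations among the u_i, and this can be checked numerically for random unit vectors (a sketch; the dimensions and sample counts are arbitrary):

```python
import math
import random

# Numerical check of the van der Corput inequality for random unit
# vectors in R^d:  (sum_i |<v, u_i>|)**2  <=  sum_{i,j} |<u_i, u_j>|.

def unit_vector(d, rng):
    x = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(t * t for t in x))
    return [t / norm for t in x]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def check_inequality(d=5, n=8, seed=0):
    rng = random.Random(seed)
    v = unit_vector(d, rng)
    us = [unit_vector(d, rng) for _ in range(n)]
    lhs = sum(abs(dot(v, u)) for u in us) ** 2
    rhs = sum(abs(dot(ui, uj)) for ui in us for uj in us)
    return lhs <= rhs + 1e-9  # small slack for floating point

# Holds for every sample, as the Cauchy-Schwarz proof guarantees.
```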
|
https://en.wikipedia.org/wiki/Van_der_Corput_inequality
|
In mathematics , in the field of harmonic analysis ,
the van der Corput lemma is an estimate for oscillatory integrals named after the Dutch mathematician J. G. van der Corput .
The following result is stated by E. Stein : [ 1 ]
Suppose that a real-valued function ϕ ( x ) {\displaystyle \phi (x)} is smooth in an open interval ( a , b ) {\displaystyle (a,b)} ,
and that | ϕ ( k ) ( x ) | ≥ 1 {\displaystyle |\phi ^{(k)}(x)|\geq 1} for all x ∈ ( a , b ) {\displaystyle x\in (a,b)} .
Assume that either k ≥ 2 {\displaystyle k\geq 2} , or that k = 1 {\displaystyle k=1} and ϕ ′ ( x ) {\displaystyle \phi '(x)} is monotone for x ∈ ( a , b ) {\displaystyle x\in (a,b)} .
Then there is a constant c k {\displaystyle c_{k}} , which does not depend on ϕ {\displaystyle \phi } ,
such that
| ∫ a b e i λ ϕ ( x ) d x | ≤ c k | λ | − 1 / k {\displaystyle \left|\int _{a}^{b}e^{i\lambda \phi (x)}\,dx\right|\leq c_{k}\,|\lambda |^{-1/k}}
for any λ ∈ R {\displaystyle \lambda \in \mathbb {R} } .
The van der Corput lemma is closely related to the sublevel set estimates, [ 2 ] which give the upper bound on the measure of the set
where a function takes values not larger than ϵ {\displaystyle \epsilon } .
Suppose that a real-valued function ϕ ( x ) {\displaystyle \phi (x)} is smooth
on a finite or infinite interval I ⊂ R {\displaystyle I\subset \mathbb {R} } ,
and that | ϕ ( k ) ( x ) | ≥ 1 {\displaystyle |\phi ^{(k)}(x)|\geq 1} for all x ∈ I {\displaystyle x\in I} .
There is a constant c k {\displaystyle c_{k}} , which does not depend on ϕ {\displaystyle \phi } ,
such that
for any ϵ ≥ 0 {\displaystyle \epsilon \geq 0} the measure of the sublevel set { x ∈ I : | ϕ ( x ) | ≤ ϵ } {\displaystyle \{x\in I:|\phi (x)|\leq \epsilon \}} is bounded by c k ϵ 1 / k {\displaystyle c_{k}\epsilon ^{1/k}} .
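The decay can be seen numerically for the model phase φ(x) = x² (so the k = 2 case applies after rescaling): the integral of e^{iλx²} over [0, 1] shrinks roughly like λ^{-1/2}. A sketch using a simple midpoint rule (grid sizes are ours):

```python
import cmath

# Numerical illustration of van der Corput decay for phi(x) = x**2:
# I(lam) = integral_0^1 exp(i*lam*x**2) dx decays roughly like lam**(-1/2).
# Simple midpoint rule; the step count is chosen for illustration.

def oscillatory_integral(lam, n=200000):
    h = 1.0 / n
    total = 0.0 + 0.0j
    for k in range(n):
        x = (k + 0.5) * h
        total += cmath.exp(1j * lam * x * x)
    return total * h

# Quadrupling lambda roughly halves |I(lambda)|, as lam**(-1/2) predicts.
vals = [abs(oscillatory_integral(lam)) for lam in (100.0, 400.0)]
```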
|
https://en.wikipedia.org/wiki/Van_der_Corput_lemma_(harmonic_analysis)
|
The Van der Meer formula is a formula for calculating the required stone weight for armourstone under the influence of (wind) waves . This is necessary for the design of breakwaters and shoreline protection. Around 1985 it was found that the Hudson formula then in use had considerable limitations (it was only valid for permeable breakwaters and steep (storm) waves). The Dutch government agency Rijkswaterstaat therefore commissioned Deltares to carry out research towards a more complete formula. This research, conducted by Jentsje van der Meer, resulted in the Van der Meer formula in 1988, as described in his dissertation. [ 1 ] This formula reads [ 2 ] [ 3 ]
and
In this formula:
For design purposes, the value 5.2 is recommended for the coefficient c p , and 0.87 for c s . [ 2 ]
The value of P can be read from the attached graph. Until now, there has been no method for determining P other than by comparison with the accompanying pictures. Research is under way to try to determine the value of P using calculation models that can simulate the water movement in the breakwater ( OpenFOAM models).
The value of the damage number S is defined as
S = A / D n 50 2 {\displaystyle S=A/D_{n50}^{2}}
where A is the cross-sectional area of the erosion zone and D n 50 {\displaystyle D_{n50}} is the nominal median stone diameter. Permissible values for S are: [ 2 ]
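With S defined as the eroded cross-sectional area divided by the square of the nominal stone diameter, a damage calculation is a one-liner; a sketch with illustrative numbers (the comment on typical values is a commonly quoted rule of thumb, not taken from this text):

```python
# Damage number S = A / Dn50**2 for a rock-armoured slope, with A the
# eroded cross-sectional area (m^2) and Dn50 the nominal median stone
# diameter (m).  Example values are illustrative.

def damage_number(eroded_area_m2, dn50_m):
    return eroded_area_m2 / dn50_m**2

S = damage_number(eroded_area_m2=2.0, dn50_m=1.0)
# Values of S around 2-3 are often quoted as "start of damage" for
# typical slopes; smaller stones give larger S for the same erosion.
```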
|
https://en.wikipedia.org/wiki/Van_der_Meer_formula
|
The van der Pauw Method is a technique commonly used to measure the resistivity and the Hall coefficient of a sample. Its strength lies in its ability to accurately measure the properties of a sample of any arbitrary shape, as long as the sample is approximately two-dimensional (i.e. it is much thinner than it is wide), solid (no holes), and the electrodes are placed on its perimeter . The van der Pauw method employs a four-point probe placed around the perimeter of the sample, in contrast to the linear four point probe : this allows the van der Pauw method to provide an average resistivity of the sample, whereas a linear array provides the resistivity in the sensing direction. [ 1 ] This difference becomes important for anisotropic materials, which can be properly measured using the Montgomery Method , an extension of the van der Pauw Method (see, for instance, reference [ 2 ] ).
From the measurements made, the following properties of the material can be calculated:
The method was first propounded by Leo J. van der Pauw in 1958. [ 3 ]
There are five conditions that must be satisfied to use this technique: [ 4 ]
1. The sample must have a flat shape of uniform thickness.
2. The sample must not have any isolated holes.
3. The sample must be homogeneous and isotropic.
4. All four contacts must be located at the edges of the sample.
5. The area of contact of any individual contact should be at least an order of magnitude smaller than the area of the entire sample.
The second condition can be weakened. The van der Pauw technique can also be applied to samples with one hole. [ 5 ] [ 6 ]
In order to use the van der Pauw method, the sample thickness must be much less than the width and length of the sample. In order to reduce errors in the calculations, it is preferable that the sample be symmetrical. There must also be no isolated holes within the sample.
The measurements require that four ohmic contacts be placed on the sample. Certain conditions for their placement need to be met:
In addition to this, any leads from the contacts should be constructed from the same batch of wire to minimise thermoelectric effects. For the same reason, all four contacts should be of the same material.
The average resistivity of a sample is given by ρ = R S ⋅t , where the sheet resistance R S is determined as follows. For an anisotropic material, the individual resistivity components, e.g. ρ x or ρ y , can be calculated using the Montgomery method .
To make a measurement, a current is caused to flow along one edge of the sample (for instance, I 12 ) and the voltage across the opposite edge (in this case, V 34 ) is measured. From these two values, a resistance (for this example, R 12 , 34 {\displaystyle R_{12,34}} ) can be found using Ohm's law :
In his paper, van der Pauw showed that the sheet resistance of samples with arbitrary shapes can be determined from two of these resistances - one measured along a vertical edge, such as R 12 , 34 {\displaystyle R_{12,34}} , and a corresponding one measured along a horizontal edge, such as R 23 , 41 {\displaystyle R_{23,41}} . The actual sheet resistance is related to these resistances by the van der Pauw formula
The reciprocity theorem [ 7 ] tells us that R A B , C D = R C D , A B {\displaystyle R_{AB,CD}=R_{CD,AB}}
Therefore, it is possible to obtain a more precise value for the resistances R 12 , 34 {\displaystyle R_{12,34}} and R 23 , 41 {\displaystyle R_{23,41}} by making two additional measurements of their reciprocal values R 34 , 12 {\displaystyle R_{34,12}} and R 41 , 23 {\displaystyle R_{41,23}} and averaging the results.
We define R vertical = R 12 , 34 + R 34 , 12 2 {\displaystyle R_{\text{vertical}}={\frac {R_{12,34}+R_{34,12}}{2}}} and R horizontal = R 23 , 41 + R 41 , 23 2 {\displaystyle R_{\text{horizontal}}={\frac {R_{23,41}+R_{41,23}}{2}}}
Then, the van der Pauw formula becomes e − π R vertical / R S + e − π R horizontal / R S = 1 {\displaystyle e^{-\pi R_{\text{vertical}}/R_{S}}+e^{-\pi R_{\text{horizontal}}/R_{S}}=1}
A further improvement in the accuracy of the resistance values can be obtained by repeating the resistance measurements after switching polarities of both the current source and the voltage meter. Since this is still measuring the same portion of the sample, just in the opposite direction, the values of R vertical and R horizontal can still be calculated as the averages of the standard and reversed polarity measurements. The benefit of doing this is that any offset voltages, such as thermoelectric potentials due to the Seebeck effect , will be cancelled out.
Combining these methods with the reciprocal measurements from above leads to the formulas for the resistances being
and
The van der Pauw formula takes the same form as in the previous section.
Both of the above procedures check the repeatability of the measurements. If any of the reversed polarity measurements don't agree to a sufficient degree of accuracy (usually within 3%) with the corresponding standard polarity measurement, then there is probably a source of error somewhere in the setup, which should be investigated before continuing. The same principle applies to the reciprocal measurements – they should agree to a sufficient degree before they are used in any calculations.
In general, the van der Pauw formula cannot be rearranged to give the sheet resistance R S in terms of known functions. The most notable exception to this is when R vertical = R = R horizontal ; in this scenario the sheet resistance is given by
R S = π R ln ⁡ 2 {\displaystyle R_{S}={\frac {\pi R}{\ln 2}}}
The quotient π / ln ⁡ 2 {\displaystyle \pi /\ln 2} is known as the van der Pauw constant and has approximate value 4.53236. In most other scenarios, an iterative method is used to solve the van der Pauw formula numerically for R S . Unfortunately, the formula in its standard form fails the preconditions for the Banach fixed point theorem , so methods based directly on it do not work. Instead, nested intervals converge slowly but steadily. Recently, however, it has been shown that an appropriate reformulation of the van der Pauw problem (e.g., by introducing a second van der Pauw formula) makes it fully solvable by the Banach fixed point method. [ 8 ]
Alternatively, a Newton-Raphson method converges relatively quickly. To reduce the complexity of the notation, the following variables are introduced:
Then the next approximation R s + {\displaystyle R_{s}^{+}} is calculated by
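A minimal Newton-Raphson solver for the van der Pauw equation can be sketched as follows (variable names are ours); in the symmetric case it reproduces the closed form R_S = πR/ln 2:

```python
import math

# Solve the van der Pauw equation
#   exp(-pi * Rv / Rs) + exp(-pi * Rh / Rs) = 1
# for the sheet resistance Rs by Newton-Raphson iteration.

def sheet_resistance(r_vertical, r_horizontal, tol=1e-12, max_iter=100):
    # Starting guess: exact solution of the symmetric case Rv = Rh = R.
    r = 0.5 * (r_vertical + r_horizontal)
    rs = math.pi * r / math.log(2.0)
    for _ in range(max_iter):
        ev = math.exp(-math.pi * r_vertical / rs)
        eh = math.exp(-math.pi * r_horizontal / rs)
        f = ev + eh - 1.0
        # d/dRs of exp(-pi*R/Rs) is (pi*R/Rs**2) * exp(-pi*R/Rs)
        fprime = (math.pi / rs**2) * (r_vertical * ev + r_horizontal * eh)
        step = f / fprime
        rs -= step
        if abs(step) < tol * rs:
            break
    return rs

# Symmetric sample: Rs = pi * R / ln 2 (van der Pauw constant ~ 4.53236)
rs = sheet_resistance(100.0, 100.0)
```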
When a charged particle—such as an electron—is placed in a magnetic field , it experiences a Lorentz force proportional to the strength of the field and the velocity at which it is traveling through it. This force is strongest when the direction of motion is perpendicular to the direction of the magnetic field; in this case the force is
F = q v B {\displaystyle F=qvB}
where q {\displaystyle q} is the charge on the particle in coulombs , v {\displaystyle v} the velocity it is traveling at (centimeters per second ), and B {\displaystyle B} the strength of the magnetic field ( Wb /cm 2 ). Note that centimeters are often used to measure length in the semiconductor industry, which is why they are used here instead of the SI units of meters.
When a current is applied to a piece of semiconducting material, this results in a steady flow of electrons through the material (as shown in parts (a) and (b) of the accompanying figure). The velocity the electrons are traveling at is (see electric current ):
v = I n A q {\displaystyle v={\frac {I}{nAq}}}
where n {\displaystyle n} is the electron density, A {\displaystyle A} is the cross-sectional area of the material and q {\displaystyle q} the elementary charge (1.602×10 −19 coulombs ).
If an external magnetic field is then applied perpendicular to the direction of current flow, then the resulting Lorentz force will cause the electrons to accumulate at one edge of the sample (see part (c) of the figure). Combining the above two equations, and noting that q {\displaystyle q} is the charge on an electron, results in a formula for the Lorentz force experienced by the electrons:
F = I B n A {\displaystyle F={\frac {IB}{nA}}}
This accumulation will create an electric field across the material due to the uneven distribution of charge, as shown in part (d) of the figure. This in turn leads to a potential difference across the material, known as the Hall voltage V H {\displaystyle V_{H}} . The current, however, continues to only flow along the material, which indicates that the force on the electrons due to the electric field balances the Lorentz force. Since the force on an electron from an electric field ϵ {\displaystyle \epsilon } is q ϵ {\displaystyle q\epsilon } , we can say that the strength of the electric field is therefore
ϵ = v B {\displaystyle \epsilon =vB}
Finally, the magnitude of the Hall voltage is simply the strength of the electric field multiplied by the width of the material; that is,
V H = I B q n t {\displaystyle V_{H}={\frac {IB}{qnt}}}
where t {\displaystyle t} is the thickness of the material. Since the sheet density n s {\displaystyle n_{s}} is defined as the density of electrons multiplied by the thickness of the material, we can define the Hall voltage in terms of the sheet density:
V H = I B q n s {\displaystyle V_{H}={\frac {IB}{qn_{s}}}}
Two sets of measurements need to be made: one with a magnetic field in the positive z -direction as shown above, and one with it in the negative z -direction. From here on in, the voltages recorded with a positive field will have a subscript P (for example, V 13, P = V 3, P - V 1, P ) and those recorded with a negative field will have a subscript N (such as V 13, N = V 3, N - V 1, N ). For all of the measurements, the magnitude of the injected current should be kept the same; the magnitude of the magnetic field needs to be the same in both directions also.
First of all with a positive magnetic field, the current I 24 is applied to the sample and the voltage V 13, P is recorded; note that the voltages can be positive or negative. This is then repeated for I 13 and V 42, P .
As before, we can take advantage of the reciprocity theorem to provide a check on the accuracy of these measurements. If we reverse the direction of the currents (i.e. apply the current I 42 and measure V 31, P , and repeat for I 31 and V 24, P ), then V 13, P should be the same as V 31, P to within a suitably small degree of error. Similarly, V 42, P and V 24, P should agree.
Having completed the measurements, a negative magnetic field is applied in place of the positive one, and the above procedure is repeated to obtain the voltage measurements V 13, N , V 42, N , V 31, N and V 24, N .
Initially, the difference of the voltages for positive and negative magnetic fields is calculated:
V 13 = V 13, P − V 13, N
V 24 = V 24, P − V 24, N
V 31 = V 31, P − V 31, N
V 42 = V 42, P − V 42, N
The overall Hall voltage is then V H = V 13 + V 24 + V 31 + V 42 8 {\displaystyle V_{H}={\frac {V_{13}+V_{24}+V_{31}+V_{42}}{8}}}
The polarity of this Hall voltage indicates the type of material the sample is made of; if it is positive, the material is P-type, and if it is negative, the material is N-type.
The formula given in the background can then be rearranged to show that the sheet density n s = I B q | V H | {\displaystyle n_{s}={\frac {IB}{q|V_{H}|}}}
Note that the strength of the magnetic field B needs to be in units of Wb/cm 2 if n s is in cm −2 . For instance, if the strength is given in the commonly used units of teslas , it can be converted by multiplying it by 10 −4 .
The resistivity of a semiconductor material can be shown to be [ 9 ] ρ = 1 q ( n μ n + p μ p ) {\displaystyle \rho ={\frac {1}{q(n\mu _{n}+p\mu _{p})}}}
where n and p are the concentration of electrons and holes in the material respectively, and μ n and μ p are the mobility of the electrons and holes respectively.
Generally, the material is sufficiently doped that there is a difference of many orders of magnitude between the two concentrations, allowing this equation to be simplified to ρ = 1 q n m μ m {\displaystyle \rho ={\frac {1}{qn_{m}\mu _{m}}}}
where n m and μ m are the doping level and mobility of the majority carrier respectively.
If we then note that the sheet resistance R S {\displaystyle R_{S}} is the resistivity divided by the thickness of the sample, and that the sheet density n S {\displaystyle n_{S}} is the doping level multiplied by the thickness, we can divide the equation through by the thickness to get R S = ρ t = 1 q n S μ m {\displaystyle R_{S}={\frac {\rho }{t}}={\frac {1}{qn_{S}\mu _{m}}}}
This can then be rearranged to give the majority carrier mobility in terms of the previously calculated sheet resistance and sheet density: μ m = 1 q n S R S {\displaystyle \mu _{m}={\frac {1}{qn_{S}R_{S}}}}
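As a concrete illustration of the arithmetic above, the following sketch combines the averaged Hall voltage, the carrier-type test, the sheet density n s = IB/(q|V H |), and the mobility μ m = 1/(q n S R S ). The function name and all measurement values are hypothetical, not part of any standard tooling; B is in Wb/cm 2 so that n s comes out in cm −2 .

```python
# Sketch of the Hall-measurement arithmetic described above.
# All voltages (V), currents (A), and fields (Wb/cm^2) are hypothetical inputs.

Q = 1.602e-19  # elementary charge, C


def hall_analysis(v_p, v_n, current, b_field, sheet_resistance):
    """v_p, v_n: dicts of voltages for +B and -B fields, keyed '13','24','31','42'."""
    # Differences between positive-field and negative-field readings
    diffs = {k: v_p[k] - v_n[k] for k in ('13', '24', '31', '42')}
    v_hall = sum(diffs.values()) / 8.0                # overall Hall voltage
    carrier = 'P-type' if v_hall > 0 else 'N-type'    # sign gives the carrier type
    n_s = current * b_field / (Q * abs(v_hall))       # sheet density, cm^-2
    mobility = 1.0 / (Q * n_s * sheet_resistance)     # majority-carrier mobility, cm^2/(V s)
    return v_hall, carrier, n_s, mobility
```

In practice the sheet resistance fed into the last step would come from the resistivity half of the van der Pauw measurement.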
https://en.wikipedia.org/wiki/Van_der_Pauw_method
In the study of dynamical systems , the van der Pol oscillator (named for Dutch physicist Balthasar van der Pol ) is a non- conservative , oscillating system with non-linear damping . It evolves in time according to the second-order differential equation d 2 x d t 2 − μ ( 1 − x 2 ) d x d t + x = 0 , {\displaystyle {d^{2}x \over dt^{2}}-\mu (1-x^{2}){dx \over dt}+x=0,} where x is the position coordinate —which is a function of the time t —and μ is a scalar parameter indicating the nonlinearity and the strength of the damping.
The Van der Pol oscillator was originally proposed by the Dutch electrical engineer and physicist Balthasar van der Pol while he was working at Philips . [ 2 ] Van der Pol found stable oscillations, [ 3 ] which he subsequently called relaxation-oscillations [ 4 ] and are now known as a type of limit cycle , in electrical circuits employing vacuum tubes . When these circuits are driven near the limit cycle , they become entrained , i.e. the driving signal pulls the current along with it. Van der Pol and his colleague, van der Mark, reported in the September 1927 issue of Nature that at certain drive frequencies an irregular noise was heard, [ 5 ] which was later found to be the result of deterministic chaos . [ 6 ]
The Van der Pol equation has a long history of being used in both the physical and biological sciences . For instance, in biology, FitzHugh [ 7 ] and Nagumo [ 8 ] extended the equation in a planar field as a model for action potentials of neurons . The equation has also been utilised in seismology to model the two plates in a geological fault , [ 9 ] and in studies of phonation to model the right and left vocal fold oscillators. [ 10 ]
Liénard's theorem can be used to prove that the system has a limit cycle. Applying the Liénard transformation y = x − x 3 / 3 − x ˙ / μ {\displaystyle y=x-x^{3}/3-{\dot {x}}/\mu } , where the dot indicates the time derivative, the Van der Pol oscillator can be written in its two-dimensional form: [ 11 ] x ˙ = μ ( x − 1 3 x 3 − y ) , y ˙ = 1 μ x {\displaystyle {\dot {x}}=\mu \left(x-{\tfrac {1}{3}}x^{3}-y\right),\qquad {\dot {y}}={\frac {1}{\mu }}x}
Another commonly used form, based on the transformation y = x ˙ {\displaystyle y={\dot {x}}} , leads to: [ 12 ] x ˙ = y , y ˙ = μ ( 1 − x 2 ) y − x {\displaystyle {\dot {x}}=y,\qquad {\dot {y}}=\mu (1-x^{2})y-x}
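The two-dimensional form lends itself to direct numerical integration. The following is a minimal sketch (plain Python, classical fourth-order Runge–Kutta; the step size and duration are arbitrary choices) showing that trajectories settle onto the limit cycle, whose amplitude is close to 2 for small μ:

```python
# Minimal RK4 integration of the two-dimensional van der Pol system
# x' = y, y' = mu*(1 - x**2)*y - x  (the y = x-dot form above).

def vdp_rhs(state, mu):
    x, y = state
    return (y, mu * (1.0 - x * x) * y - x)

def rk4_step(state, mu, h):
    def add(s, k, c):
        return (s[0] + c * k[0], s[1] + c * k[1])
    k1 = vdp_rhs(state, mu)
    k2 = vdp_rhs(add(state, k1, h / 2), mu)
    k3 = vdp_rhs(add(state, k2, h / 2), mu)
    k4 = vdp_rhs(add(state, k3, h), mu)
    return (state[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def simulate(mu=0.1, x0=0.5, y0=0.0, h=0.01, steps=20000):
    """Integrate and return the history of x; trajectories from inside or
    outside the limit cycle both relax onto it."""
    state = (x0, y0)
    xs = []
    for _ in range(steps):
        state = rk4_step(state, mu, h)
        xs.append(state[0])
    return xs
```

Starting either inside (x0 = 0.5) or outside (x0 = 3) the cycle, the late-time amplitude of x approaches 2, in line with the limit-cycle radius noted below.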
As the damping parameter (written ε here for the near-zero regime) moves from less than zero to more than zero, the spiral sink at the origin becomes a spiral source, and a limit cycle of radius two appears "out of the blue". This is because the transition is not generic: when ε = 0 , the differential equation becomes linear and the origin becomes a circular node.
Knowing that in a Hopf bifurcation , the limit cycle should have size ∝ ε 1 / 2 , {\displaystyle \propto \varepsilon ^{1/2},} we may attempt to convert this to a Hopf bifurcation by using the change of variables u = ε 1 / 2 x , {\displaystyle u=\varepsilon ^{1/2}x,} which gives u ¨ + u + u 2 u ˙ − ε u ˙ = 0 {\displaystyle {\ddot {u}}+u+u^{2}{\dot {u}}-\varepsilon {\dot {u}}=0} This indeed is a Hopf bifurcation. [ 21 ]
One can also write a time-independent Hamiltonian formalism for the Van der Pol oscillator by augmenting it to a four-dimensional autonomous dynamical system using an auxiliary second-order nonlinear differential equation as follows: x ¨ − μ ( 1 − x 2 ) x ˙ + x = 0 , y ¨ + μ ( 1 − x 2 ) y ˙ + y = 0 {\displaystyle {\ddot {x}}-\mu (1-x^{2}){\dot {x}}+x=0,\qquad {\ddot {y}}+\mu (1-x^{2}){\dot {y}}+y=0}
Note that the dynamics of the original Van der Pol oscillator is not affected, due to the one-way coupling between the time-evolutions of the x and y variables. A Hamiltonian H for this system of equations can be shown to be [ 22 ] H = p x p y − μ ( 1 − x 2 ) y p y + x y {\displaystyle H=p_{x}p_{y}-\mu (1-x^{2})yp_{y}+xy}
where p x = y ˙ + μ ( 1 − x 2 ) y {\displaystyle p_{x}={\dot {y}}+\mu (1-x^{2})y} and p y = x ˙ {\displaystyle p_{y}={\dot {x}}} are the conjugate momenta corresponding to x and y , respectively. This may, in principle, lead to quantization of the Van der Pol oscillator. Such a Hamiltonian also connects [ 23 ] the geometric phase of the limit cycle system having time dependent parameters with the Hannay angle of the corresponding Hamiltonian system.
The quantum van der Pol oscillator, which is the quantum mechanical version of the classical van der Pol oscillator, has been proposed using a Lindblad equation to study its quantum dynamics and quantum synchronization . [ 24 ] Note the above Hamiltonian approach with an auxiliary second-order equation produces unbounded phase-space trajectories and hence cannot be used to quantize the van der Pol oscillator. In the limit of weak nonlinearity (i.e. μ→ 0) the van der Pol oscillator reduces to the Stuart–Landau equation . The Stuart–Landau equation in fact describes an entire class of limit-cycle oscillators in the weakly-nonlinear limit. The form of the classical Stuart–Landau equation is much simpler, and perhaps not surprisingly, can be quantized by a Lindblad equation which is also simpler than the Lindblad equation for the van der Pol oscillator. The quantum Stuart–Landau model has played an important role in the study of quantum synchronisation [ 25 ] [ 26 ] (where it has often been called a van der Pol oscillator although it cannot be uniquely associated with the van der Pol oscillator). The relationship between the classical Stuart–Landau model ( μ→ 0) and more general limit-cycle oscillators (arbitrary μ ) has also been demonstrated numerically in the corresponding quantum models. [ 24 ]
The forced, or driven, Van der Pol oscillator takes the 'original' function and adds a driving function A sin( ωt ) to give a differential equation of the form: d 2 x d t 2 − μ ( 1 − x 2 ) d x d t + x = A sin ⁡ ( ω t ) {\displaystyle {d^{2}x \over dt^{2}}-\mu (1-x^{2}){dx \over dt}+x=A\sin(\omega t)} where A is the amplitude of the driving function and ω is its angular frequency .
Author James Gleick described a vacuum-tube Van der Pol oscillator in his 1987 book Chaos: Making a New Science . [ 28 ] According to a New York Times article, [ 29 ] Gleick received a modern electronic Van der Pol oscillator from a reader in 1988.
https://en.wikipedia.org/wiki/Van_der_Pol_oscillator
The following table lists the Van der Waals constants (from the Van der Waals equation ) for a number of common gases and volatile liquids. [ 1 ] These constants are generally calculated from the critical pressure p c {\displaystyle p_{c}} and temperature T c {\displaystyle T_{c}} using the formulas a = 27 64 R 2 T c 2 p c {\displaystyle a={\frac {27}{64}}{\frac {R^{2}T_{c}^{2}}{p_{c}}}} and b = R T c 8 p c {\displaystyle b={\frac {RT_{c}}{8p_{c}}}} .
To convert from L 2 b a r / m o l 2 {\displaystyle \mathrm {L^{2}bar/mol^{2}} } to L 2 k P a / m o l 2 {\displaystyle \mathrm {L^{2}kPa/mol^{2}} } , multiply by 100. To convert from L 2 b a r / m o l 2 {\displaystyle \mathrm {L^{2}bar/mol^{2}} } to m 6 P a / m o l 2 {\displaystyle \mathrm {m^{6}Pa/mol^{2}} } , divide by 10. To convert from L / m o l {\displaystyle \mathrm {L/mol} } to m 3 / m o l {\displaystyle \mathrm {m^{3}/mol} } , divide by 1000.
1 J·m 3 /mol 2 = 1 m 6 ·Pa/mol 2 = 10 L 2 ·bar/mol 2
1 L 2 atm/mol 2 = 0.101325 J·m 3 /mol 2 = 0.101325 Pa·m 6 /mol 2
1 dm 3 /mol = 1 L/mol = 1 m 3 /kmol = 0.001 m 3 /mol (where kmol is kilomoles = 1000 moles)
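The constants can be generated directly from the critical-point formulas above. A short sketch, using approximate critical constants for carbon dioxide (T c ≈ 304.13 K, p c ≈ 73.8 bar; these particular numbers are illustrative inputs, not taken from the table):

```python
# a and b from the critical temperature (K) and pressure (bar), via
# a = (27/64) R^2 Tc^2 / pc  and  b = R Tc / (8 pc).

R = 0.0831446  # gas constant in L*bar/(mol*K)

def vdw_constants(t_c, p_c):
    a = 27.0 / 64.0 * R**2 * t_c**2 / p_c   # L^2*bar/mol^2
    b = R * t_c / (8.0 * p_c)               # L/mol
    return a, b

a, b = vdw_constants(304.13, 73.8)   # approximate CO2 critical point
a_si = a / 10.0                      # m^6*Pa/mol^2 (divide by 10, as noted above)
b_si = b / 1000.0                    # m^3/mol (divide by 1000, as noted above)
```

The results (a ≈ 3.66 L 2 ·bar/mol 2 , b ≈ 0.0428 L/mol) land close to the commonly tabulated CO 2 values, as expected since the tabulated constants are themselves usually derived this way.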
https://en.wikipedia.org/wiki/Van_der_Waals_constants_(data_page)
The van der Waals equation is a mathematical formula that describes the behavior of real gases . It is an equation of state that relates the pressure , volume , number of molecules , and temperature in a fluid . The equation modifies the ideal gas law in two ways: first, it considers particles to have a finite diameter (whereas an ideal gas consists of point particles); second, its particles interact with each other (unlike an ideal gas, whose particles move as though alone in the volume).
The equation is named after Dutch physicist Johannes Diderik van der Waals , who first derived it in 1873 as part of his doctoral thesis . Van der Waals based the equation on the idea that fluids are composed of discrete particles , which few scientists believed existed. However, the equation accurately predicted the behavior of a fluid around its critical point , which had been discovered a few years earlier. Its qualitative and quantitative agreement with experiments ultimately cemented its acceptance in the scientific community. These accomplishments won van der Waals the 1910 Nobel Prize in Physics . [ 1 ] Today the equation is recognized as an important model of phase change processes. [ 2 ]
One explicit way to write the van der Waals equation is: [ 3 ] [ 4 ] p = R T v − b − a v 2 {\displaystyle p={\frac {RT}{v-b}}-{\frac {a}{v^{2}}}} ( 1a )
where p {\displaystyle p} is pressure, T {\displaystyle T} is temperature, and v = V / n = N A V / N {\displaystyle v=V/n=N_{\text{A}}V/N} is molar volume , the ratio of volume, V {\displaystyle V} , to quantity of matter , n {\displaystyle n} ( N A {\displaystyle N_{\text{A}}} is the Avogadro constant and N {\displaystyle N} the number of molecules). Also a {\displaystyle a} and b {\displaystyle b} are experimentally determinable, substance-specific constants, and R = k N A {\displaystyle R=kN_{\text{A}}} is the universal gas constant . This form is useful for plotting isotherms (constant temperature curves).
Van der Waals wrote it in an equivalent form, explicit in temperature, in his Thesis [ 5 ] [ 6 ] (although he could not denote absolute temperature by its modern symbol in 1873): T = 1 R ( p + a v 2 ) ( v − b ) {\displaystyle T={\frac {1}{R}}\left(p+{\frac {a}{v^{2}}}\right)(v-b)} ( 1b )
This form is useful for plotting isobars (constant pressure curves). Writing v = V / n {\displaystyle v=V/n} , and multiplying both sides by n {\displaystyle n} it becomes the form that appears in Figure A. [ 7 ]
When van der Waals created his equation, few scientists believed that fluids were composed of rapidly moving particles. Moreover, those who thought so did not know the atomic/molecular structure. The simplest conception of a particle, and the easiest to model mathematically, was a hard sphere of volume V 0 {\displaystyle V_{0}} ; this is what van der Waals used, and he found the total excluded volume was B = 4 N V 0 {\displaystyle B=4NV_{0}} , namely 4 times the volume of all the particles. [ 8 ] [ 9 ] The constant b = B N A / N {\displaystyle b=BN_{\text{A}}/N} , has the dimension of molar volume, [ v ]. The constant a {\displaystyle a} expresses the strength of the hypothesized inter-particle attraction. Van der Waals only had Newton's law of gravitation, in which two particles are attracted in proportion to the product of their masses, as a model. Thus he argued that, in his case, the attractive pressure was proportional to the density squared. [ 10 ] The proportionality constant, a , when written in the form used above, has the dimension [ pv 2 ] (pressure times molar volume squared).
The force magnitude between two spherically symmetric molecules is written as F = − d φ / d r {\displaystyle F=-d\varphi /dr} , where φ ( r ) {\displaystyle \varphi (r)} is the pair potential function, and the force direction is along the line connecting the two mass centers. The specific functional relation is most simply characterized by a single length, σ {\displaystyle \sigma } , and a minimum energy, − ε {\displaystyle -\varepsilon } (with ε ≥ 0 {\displaystyle \varepsilon \geq 0} ). Two of the many such functions that have been suggested are shown in Fig. B. [ 11 ]
A modern theory based on statistical mechanics produces the same result for b = 4 N A [ ( 4 π / 3 ) ( σ / 2 ) 3 ] {\displaystyle b=4N_{\text{A}}[(4\pi /3)(\sigma /2)^{3}]} obtained by van der Waals and his contemporaries. It also produces a constant value for a / N A ε b {\displaystyle a/N_{\text{A}}\varepsilon b} when ε / k T {\displaystyle \varepsilon /kT} is small enough. [ 12 ] [ 13 ]
Once the constants a {\displaystyle a} and b {\displaystyle b} are known for a given substance, the van der Waals equation can be used to predict attributes like the boiling point at any given pressure, and the critical point . [ 14 ] These predictions are quantitatively accurate for only a few substances; for most simple fluids the equation is only a valuable approximation. [ 15 ] [ 16 ]
The ideal gas law follows from the van der Waals equation whenever the molar volume v {\displaystyle v} is sufficiently large (when v ≫ b {\displaystyle v\gg b} , so v − b ≈ v {\displaystyle v-b\approx v} ), or equivalently whenever the molar density, ρ = 1 / v {\displaystyle \rho =1/v} , is sufficiently small (when v ≫ ( a / p ) 1 / 2 {\displaystyle v\gg (a/p)^{1/2}} , so p + a / v 2 ≈ p {\displaystyle p+a/v^{2}\approx p} ). [ 17 ]
When v {\displaystyle v} is large enough that both inequalities are satisfied, these two approximations reduce the van der Waals equation to p = R T / v {\displaystyle p=RT/v} , or p v = R T {\displaystyle pv=RT} . With R = N A k {\displaystyle R=N_{\text{A}}k} , where k {\displaystyle k} is the Boltzmann constant, and using the definition v = V / n {\displaystyle v=V/n} given after Eq (1a), this becomes p V = N k T {\displaystyle pV=NkT} ; either of these forms expresses the ideal gas law . [ 17 ] This is unsurprising since the van der Waals equation was constructed from the ideal gas equation to obtain an equation valid beyond the low-density limit of ideal gas behavior.
What is truly remarkable is the extent to which van der Waals succeeded. Indeed, Epstein in his classic thermodynamics textbook began his discussion of the van der Waals equation by writing, "Despite its simplicity, it comprehends both the gaseous and the liquid state and brings out, in a most remarkable way, all the phenomena pertaining to the continuity of these two states". [ 17 ] Also, in Volume 5 of his Lectures on Theoretical Physics , Sommerfeld , in addition to noting that "Boltzmann [ 18 ] described van der Waals as the Newton of real gases ", [ 6 ] also wrote "It is very remarkable that the theory due to van der Waals is in a position to predict, at least qualitatively, the unstable [referring to superheated liquid, and subcooled vapor, now called metastable ] states" that are associated with the phase change process. [ 19 ]
The first to propose a volume correction to Boyle's law was Daniel Bernoulli , in the microscopic theory of his Hydrodynamica (1738); however, this model was mostly ignored. [ 20 ]
In 1857 Rudolf Clausius published The Nature of the Motion which We Call Heat . In it he derived the relation p = ( N / V ) m c 2 ¯ / 3 {\displaystyle p=(N/V)m{\overline {c^{2}}}/3} for the pressure p {\displaystyle p} in a gas, composed of particles in motion, with number density N / V {\displaystyle N/V} , mass m {\displaystyle m} , and mean square speed c 2 ¯ {\displaystyle {\overline {c^{2}}}} . He then noted that using the classical laws of Boyle and Charles, one could write m c 2 ¯ / 3 = k T {\displaystyle m{\overline {c^{2}}}/3=kT} with a constant of proportionality k {\displaystyle k} . Hence temperature was proportional to the average kinetic energy of the particles. [ 21 ] This article inspired further work based on the twin ideas that substances are composed of indivisible particles, and that heat is a consequence of the particle motion; movement that evolves according to Newton's laws. The work, known as the kinetic theory of gases , was done principally by Clausius, James Clerk Maxwell , and Ludwig Boltzmann . At about the same time, Josiah Willard Gibbs advanced the work by converting it into statistical mechanics . [ 22 ] [ 23 ]
This environment influenced Johannes Diderik van der Waals . After initially pursuing a teaching credential, he was accepted for doctoral studies at the University of Leiden under Pieter Rijke . [ 24 ] This led, in 1873, to a dissertation that provided a simple, particle-based equation that described the gas-liquid change of state, the origin of a critical temperature, and the concept of corresponding states. [ 25 ] [ 26 ] The equation is based on two premises: first, that fluids are composed of particles with non-zero volumes, and second, that at a large enough distance each particle exerts an attractive force on all other particles in its vicinity. Boltzmann called these forces van der Waals cohesive forces . [ 27 ]
In 1869 Irish professor of chemistry Thomas Andrews at Queen's University Belfast, in a paper entitled On the Continuity of the Gaseous and Liquid States of Matter , [ 28 ] displayed an experimentally obtained set of isotherms of carbonic acid (the contemporary name for carbon dioxide, CO 2 ) that showed at low temperatures a jump in density at a certain pressure, while at higher temperatures there was no abrupt change. Andrews called the isotherm at which the jump disappears the critical point . Given the similarity of the titles of this paper and van der Waals' subsequent thesis, one might think that van der Waals set out to develop a theoretical explanation of Andrews' experiments; however, this is not what happened. Van der Waals began by trying to determine a molecular attraction that appeared in Laplace's theory of capillarity , and only after establishing his equation did he test it using Andrews' results. [ 29 ] [ 30 ]
By 1877 sprays of both liquid oxygen and liquid nitrogen had been produced, and a new field of research, low-temperature physics , had been opened. The van der Waals equation played a part in all this, especially for the liquefaction of hydrogen and helium which was finally achieved in 1908. [ 31 ] From measurements of p 1 , T 1 {\displaystyle p_{1},T_{1}} and p 2 , T 2 {\displaystyle p_{2},T_{2}} in two states with the same density, the van der Waals equation produces the values [ 32 ]
b = v − R ( T 2 − T 1 ) p 2 − p 1 and a = v 2 p 2 T 1 − p 1 T 2 T 2 − T 1 . {\displaystyle b=v-{\frac {R\left(T_{2}-T_{1}\right)}{p_{2}-p_{1}}}\qquad {\text{and}}\qquad a=v^{2}{\frac {p_{2}T_{1}-p_{1}T_{2}}{T_{2}-T_{1}}}.}
Thus from two such measurements of pressure and temperature, one could determine a {\displaystyle a} and b {\displaystyle b} , and from these values calculate the expected critical pressure, temperature, and molar volume. Goodstein summarized this contribution of the van der Waals equation as follows: [ 33 ]
All this labor required considerable faith in the belief that gas–liquid systems were all basically the same, even if no one had ever seen the liquid phase. This faith arose out of the repeated success of the van der Waals theory, which is essentially a universal equation of state, independent of the details of any particular substance once it has been properly scaled. [...] As a result, not only was it possible to believe that hydrogen could be liquefied, but it was even possible to predict the necessary temperature and pressure.
Van der Waals was awarded the Nobel Prize in 1910, in recognition of the contribution of his formulation of this "equation of state for gases and liquids".
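The inversion used in the liquefaction work above is easy to check numerically: synthesize two states at the same molar volume from known constants, then recover a and b with the displayed formulas. (A plain-Python sketch; the numbers are arbitrary test values, not historical data.)

```python
# Recover a and b from (p1, T1) and (p2, T2) measured at the same molar volume v,
# using  b = v - R(T2 - T1)/(p2 - p1)  and  a = v^2 (p2*T1 - p1*T2)/(T2 - T1).

R = 0.0831446  # L*bar/(mol*K)

def recover_ab(v, p1, t1, p2, t2):
    b = v - R * (t2 - t1) / (p2 - p1)
    a = v**2 * (p2 * t1 - p1 * t2) / (t2 - t1)
    return a, b

# Synthetic data generated from known constants (arbitrary illustrative values)
a_true, b_true, v = 3.66, 0.0428, 0.2
p = lambda T: R * T / (v - b_true) - a_true / v**2   # vdW pressure at fixed v
a_rec, b_rec = recover_ab(v, p(300.0), 300.0, p(320.0), 320.0)
```

Because the vdW pressure is linear in T at fixed v, the two measurements determine a and b exactly; with real data the recovery is only as good as the equation's fit to the substance.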
The van der Waals equation has been, and remains, useful because: [ 34 ]
In addition [ 35 ]
and
Figure 1 shows four isotherms of the van der Waals equation (abbreviated as vdW) on a p , v {\displaystyle p,v} (pressure, molar volume) plane. The essential character of these curves is that they come in three forms:
The critical point can be analytically determined by equating the two partial derivatives of the vdW equation, created by differentiating Eq ( 1a ), to zero. This produces the critical values v c = 3 b {\displaystyle v_{\text{c}}=3b} and T c = 8 a / ( 27 R b ) {\displaystyle T_{\text{c}}=8a/(27Rb)} . Finally, using these values in Eq ( 1a ) gives p c = a / ( 27 b 2 ) {\displaystyle p_{\text{c}}=a/(27b^{2})} . [ 38 ] These results can also be obtained algebraically by noting that at the critical point the three roots are equal. Hence, Eqs ( 1 ) can be written as either v 3 − ( b + R T c / p c ) v 2 + ( a / p c ) v − a b / p c = 0 {\displaystyle v^{3}-(b+RT_{\text{c}}/p_{\text{c}})v^{2}+(a/p_{\text{c}})v-ab/p_{\text{c}}=0} , or ( v − v c ) 3 = 0 {\displaystyle (v-v_{\text{c}})^{3}=0} ; two forms with the same coefficients. [ 39 ] [ 40 ]
Above the critical temperature T c {\displaystyle T_{\text{c}}} , van der Waals isotherms satisfy the stability criterion that ∂ p / ∂ v | T < 0 {\displaystyle \partial p/\partial v|_{T}<0} . Below the critical temperature, each isotherm contains an interval where this condition is violated. This unstable region is the genesis of the phase change; there is a range v m i n ≤ v ≤ v m a x {\displaystyle v_{\rm {min}}\leq v\leq v_{\rm {max}}} , for which no observable states exist. The states for v < v m i n {\displaystyle v<v_{\rm {min}}} are liquid, and those for v > v m a x {\displaystyle v>v_{\rm {max}}} are vapor; the denser liquid separates and lies below the vapor due to gravity. The transition points, states with zero slope, are called spinodal points . [ 41 ] Their locus is the spinodal curve , a boundary that separates the regions of the plane for which liquid, vapor, and gas exist from a region where no observable homogeneous states exist. This spinodal curve is obtained here from the vdW equation by differentiation (or equivalently from κ T = ∞ {\displaystyle \kappa _{T}=\infty } ) as T s p = 2 a ( v − b ) 2 R v 3 p s p = a ( v − 2 b ) v 3 {\displaystyle T_{\rm {sp}}=2a{\frac {(v-b)^{2}}{Rv^{3}}}\qquad p_{\rm {sp}}={\frac {a(v-2b)}{v^{3}}}}
A projection of the spinodal curve is plotted in Figure 1 as the black dash-dot curve. It passes through the critical point, which is also a spinodal point.
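Both claims are easy to verify numerically: the critical values make ∂p/∂v vanish on the critical isotherm, and substituting v = v c into the spinodal expressions returns the critical temperature and pressure. (A plain-Python sketch; the constants are illustrative, roughly those of CO 2 .)

```python
# Check that (v_c, T_c, p_c) = (3b, 8a/(27Rb), a/(27 b^2)) sits on the vdW
# isotherm with zero slope, and that the spinodal curve
# T_sp = 2a(v-b)^2/(R v^3), p_sp = a(v-2b)/v^3 passes through it.

R = 0.0831446            # L*bar/(mol*K)
a, b = 3.66, 0.0428      # illustrative constants (roughly CO2)

def p_vdw(v, T):
    return R * T / (v - b) - a / v**2

v_c = 3 * b
T_c = 8 * a / (27 * R * b)
p_c = a / (27 * b**2)

# Central-difference slope of the critical isotherm at v_c: should be ~0
h = 1e-5
dp = (p_vdw(v_c + h, T_c) - p_vdw(v_c - h, T_c)) / (2 * h)

# Spinodal evaluated at the critical volume: should return (T_c, p_c)
T_sp = 2 * a * (v_c - b)**2 / (R * v_c**3)
p_sp = a * (v_c - 2 * b) / v_c**3
```

The same check at any v ≠ v c gives a spinodal point strictly below the critical temperature, consistent with the dash-dot curve in Figure 1.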
Using the critical values to define reduced (dimensionless) variables p r = p / p c {\displaystyle p_{r}=p/p_{\text{c}}} , T r = T / T c {\displaystyle T_{r}=T/T_{\text{c}}} , and v r = v / v c {\displaystyle v_{r}=v/v_{\text{c}}} renders the vdW equation in the dimensionless form (used to construct Fig. 1):
p r = 8 T r 3 v r − 1 − 3 v r 2 {\displaystyle p_{r}={\frac {8T_{r}}{3v_{r}-1}}-{\frac {3}{v_{r}^{2}}}}
This dimensionless form is a similarity relation; it indicates that all vdW fluids at the same T r {\displaystyle T_{r}} will plot on the same curve. It expresses the law of corresponding states which Boltzmann described as follows: [ 42 ]
All the constants characterizing the gas have dropped out of this equation. If one bases measurements on the van der Waals units [Boltzmann's name for the reduced quantities here], then he obtains the same equation of state for all gases. [...] Only the values of the critical volume, pressure, and temperature depend on the nature of the particular substance; the numbers that express the actual volume, pressure, and temperature as multiples of the critical values satisfy the same equation for all substances. In other words, the same equation relates the reduced volume, reduced pressure, and reduced temperature for all substances.
Obviously such a broad general relation is unlikely to be correct; nevertheless, the fact that one can obtain from it an essentially correct description of actual phenomena is very remarkable.
This "law" is just a special case of dimensional analysis in which an equation containing 6 dimensional quantities, p , v , T , a , b , R {\displaystyle p,v,T,a,b,R} , and 3 independent dimensions, [ p ], [ v ], [ T ], must be expressible in terms of 6 − 3 = 3 dimensionless groups. [ 43 ] Here v ∗ = b {\displaystyle v^{*}=b} is a characteristic molar volume, p ∗ = a / b 2 {\displaystyle p^{*}=a/b^{2}} a characteristic pressure, and T ∗ = a / ( R b ) {\displaystyle T^{*}=a/(Rb)} a characteristic temperature, and the 3 dimensionless groups are p / p ∗ , v / v ∗ , T / T ∗ {\displaystyle p/p^{*},v/v^{*},T/T^{*}} . According to dimensional analysis the equation must then have the form p / p ∗ = Φ ( v / v ∗ , T / T ∗ ) {\displaystyle p/p^{*}=\Phi (v/v^{*},T/T^{*})} , a general similarity relation. In his discussion of the vdW equation, Sommerfeld also mentioned this point. [ 44 ] The reduced properties defined previously are p r = 27 ( p / p ∗ ) {\displaystyle p_{r}=27(p/p^{*})} , v r = ( 1 / 3 ) ( v / v ∗ ) {\displaystyle v_{r}=(1/3)(v/v^{*})} , and T r = ( 27 / 8 ) ( T / T ∗ ) {\displaystyle T_{r}=(27/8)(T/T^{*})} . Recent research has suggested that there is a family of equations of state that depend on an additional dimensionless group, and this provides a more exact correlation of properties. [ 45 ] Nevertheless, as Boltzmann observed, the van der Waals equation provides an essentially correct description.
The vdW equation produces the critical compressibility factor Z c = p c v c / ( R T c ) = 3 / 8 = 0.375 {\displaystyle Z_{\text{c}}=p_{\text{c}}v_{\text{c}}/(RT_{\text{c}})=3/8=0.375} , while for most real fluids 0.23 < Z c < 0.31 {\displaystyle 0.23<Z_{\text{c}}<0.31} . [ 46 ] Thus most real fluids do not satisfy this condition, and consequently their behavior is only described qualitatively by the vdW equation. However, the vdW equation of state is a member of a family of state equations based on the Pitzer ( acentric ) factor, ω {\displaystyle \omega } , and the liquid metals (mercury and cesium) are well approximated by it. [ 16 ] [ 37 ]
The properties molar internal energy, u {\displaystyle u} , and entropy, s {\displaystyle s} , are defined by the first and second laws of thermodynamics. From these laws, they, and all other thermodynamic properties of a simple compressible substance, can be specified, up to a constant of integration, by two measurable functions. These are a mechanical equation of state, p = p ( v , T ) {\displaystyle p=p(v,T)} , and a constant volume specific heat, c v ( v , T ) {\displaystyle c_{v}(v,T)} . [ 47 ] [ 48 ]
When u ( v , T ) {\displaystyle u(v,T)} represents a continuous surface, it must be a continuous function with continuous partial derivatives, and its second mixed partial derivatives must be equal, ∂ v ∂ T u = ∂ T ∂ v u {\displaystyle \partial _{v}\partial _{T}u=\partial _{T}\partial _{v}u} . Then with c v = ∂ T u {\displaystyle c_{v}=\partial _{T}u} this condition can be written simply as ∂ v c v ( v , T ) = ∂ T [ T 2 ∂ T ( p / T ) ] {\displaystyle \partial _{v}c_{v}(v,T)=\partial _{T}[T^{2}\partial _{T}(p/T)]} . Differentiating p / T {\displaystyle p/T} for the vdW equation gives T 2 ∂ T ( p / T ) = a / v 2 {\displaystyle T^{2}\partial _{T}(p/T)=a/v^{2}} , so ∂ v c v = 0 {\displaystyle \partial _{v}c_{v}=0} . Consequently c v = c v ( T ) {\displaystyle c_{v}=c_{v}(T)} for a vdW fluid, exactly as for an ideal gas. [ 49 ] To keep things simple, it is regarded as a constant in what follows, c v = c R {\displaystyle c_{v}=cR} , with c {\displaystyle c} a number.
The energetic equation of state gives the internal energy, and the entropic equation of state gives the entropy as [ 50 ] [ 48 ]
u − C u = ∫ c v ( v , T ) d T + ∫ T 2 ∂ ( p / T ) ∂ T d v s − C s = ∫ c v ( T ) d T T + ∫ ∂ p ∂ T d v {\displaystyle {\begin{aligned}u-C_{u}&=\int c_{v}(v,T)\,dT+\int T^{2}\,{\frac {\partial (p/T)}{\partial T}}\,dv\\s-C_{s}&=\int c_{v}(T)\,{\frac {dT}{T}}+\int {\frac {\partial p}{\partial T}}\,dv\end{aligned}}}
where C u , C s {\displaystyle C_{u},C_{s}} are arbitrary constants of integration.
Both integrals for u {\displaystyle u} can be easily evaluated and the result is [ 51 ] [ 52 ] u − C u = c R T − a v {\displaystyle u-C_{u}=cRT-{\frac {a}{v}}} ( 2 )
Likewise both integrals for s {\displaystyle s} can be evaluated with the result [ 46 ] [ 53 ] s − C s = c R ln ⁡ T + R ln ⁡ ( v − b ) {\displaystyle s-C_{s}=cR\ln T+R\ln(v-b)} ( 3 )
The Helmholtz free energy is f = u − T s {\displaystyle f=u-Ts} . Subtracting T {\displaystyle T} times Eq ( 3 ) from Eq ( 2 ) gives f {\displaystyle f} as [ 54 ] f − C u + T C s = c R T ( 1 − ln ⁡ T ) − a v − R T ln ⁡ ( v − b ) {\displaystyle f-C_{u}+TC_{s}=cRT(1-\ln T)-{\frac {a}{v}}-RT\ln(v-b)} ( 4a )
The enthalpy is h = u + p v {\displaystyle h=u+pv} , and the product p v {\displaystyle pv} is, using Eq ( 1a ), p v = R T v / ( v − b ) − a / v {\displaystyle pv=RTv/(v-b)-a/v} . Adding Eq ( 2 ) gives h {\displaystyle h} as [ 51 ] [ 52 ] h − C u = R T [ c + v / ( v − b ) ] − 2 a / v {\displaystyle h-C_{u}=RT[c+v/(v-b)]-2a/v}
The Gibbs free energy is g = h − T s {\displaystyle g=h-Ts} , so subtracting T {\displaystyle T} times Eq ( 3 ) from h {\displaystyle h} produces g {\displaystyle g} as [ 55 ] g − C u + T C s = R T [ c + v / ( v − b ) ] − 2 a v − c R T ln ⁡ T − R T ln ⁡ ( v − b ) {\displaystyle g-C_{u}+TC_{s}=RT[c+v/(v-b)]-{\frac {2a}{v}}-cRT\ln T-RT\ln(v-b)} ( 4b )
All these results can be rendered in reduced form by using the characteristic energy R T c {\displaystyle RT_{\text{c}}} .
Any derivative of any thermodynamic property can be expressed in terms of any three of them. [ 56 ] A standard set is composed of α , κ T , c v {\displaystyle \alpha ,\kappa _{T},c_{v}} . For a vdW fluid c v ( T ) {\displaystyle c_{v}(T)} is a known function, and the other two are obtained from the first partial derivatives of the vdW equation as, ( ∂ p ∂ T ) v = R v − b = α κ T and ( ∂ p ∂ v ) T = − R T ( v − b ) 2 + 2 a v 3 = − 1 v κ T {\displaystyle \left({\frac {\partial p}{\partial T}}\right)_{v}={\frac {R}{v-b}}={\frac {\alpha }{\kappa _{T}}}\quad {\text{and}}\quad \left({\frac {\partial p}{\partial v}}\right)_{T}=-{\frac {RT}{(v-b)^{2}}}+{\frac {2a}{v^{3}}}=-{\frac {1}{v\kappa _{T}}}}
Here κ T = − v − 1 ( ∂ v / ∂ p ) T {\displaystyle \kappa _{T}=-v^{-1}(\partial v/\partial p)_{T}} is the isothermal compressibility, and α = v − 1 ( ∂ v / ∂ T ) p {\displaystyle \alpha =v^{-1}(\partial v/\partial T)_{p}} is the coefficient of thermal expansion. [ 57 ] [ 58 ] Therefore, [ 59 ] [ 60 ] α = R v 2 ( v − b ) R T v 3 − 2 a ( v − b ) 2 and κ T = v 2 ( v − b ) 2 R T v 3 − 2 a ( v − b ) 2 {\displaystyle \alpha ={\frac {Rv^{2}(v-b)}{RTv^{3}-2a(v-b)^{2}}}\qquad {\text{and}}\qquad \kappa _{T}={\frac {v^{2}(v-b)^{2}}{RTv^{3}-2a(v-b)^{2}}}}
In the limit v → ∞ {\displaystyle v\to \infty } , α = 1 / T {\displaystyle \alpha =1/T} and κ T = v / ( R T ) {\displaystyle \kappa _{T}=v/(RT)} . [ 60 ] [ 61 ] Since the vdW equation in this limit becomes p = R T / v {\displaystyle p=RT/v} , finally κ T = 1 / p {\displaystyle \kappa _{T}=1/p} . Both of these are the ideal gas values.
The specific heat at constant pressure, c p {\displaystyle c_{p}} , is defined as the partial derivative c p = ∂ T h | p {\displaystyle c_{p}=\partial _{T}h|_{p}} . It is related to c v {\displaystyle c_{v}} by the Mayer equation, c p − c v = − T ( ∂ T p ) 2 / ∂ v p = T v α 2 / κ T {\displaystyle c_{p}-c_{v}=-T(\partial _{T}p)^{2}/\partial _{v}p=Tv\alpha ^{2}/\kappa _{T}} . [ 62 ] [ 63 ] Then the two partials of the vdW equation can be used to express c p {\displaystyle c_{p}} as [ 55 ] c p = c v + R 1 − 2 a ( v − b ) 2 / ( R T v 3 ) {\displaystyle c_{p}=c_{v}+{\frac {R}{1-2a(v-b)^{2}/(RTv^{3})}}}
Here in the limit v → ∞ {\displaystyle v\to \infty } , c p − c v = R {\displaystyle c_{p}-c_{v}=R} , which is also the ideal gas result; [ 55 ] however the limit v → b {\displaystyle v\rightarrow b} gives the same result, which does not agree with experiments on liquids. [ 61 ]
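The two routes to c p − c v (the Mayer relation with the vdW α and κ T obtained from the partial derivatives above, versus the closed form in v and T) agree identically, which makes a convenient numerical sanity check. (Plain Python; the constants are illustrative.)

```python
# Cross-check: c_p - c_v from the Mayer relation T*v*alpha^2/kappa_T,
# using the vdW alpha and kappa_T, against the closed form
# R / (1 - 2a(v-b)^2/(R*T*v^3)).

R = 0.0831446            # L*bar/(mol*K)
a, b = 3.66, 0.0428      # illustrative constants (roughly CO2)

def alpha(v, T):
    return R * v**2 * (v - b) / (R * T * v**3 - 2 * a * (v - b)**2)

def kappa_T(v, T):
    return v**2 * (v - b)**2 / (R * T * v**3 - 2 * a * (v - b)**2)

def cp_minus_cv(v, T):
    return T * v * alpha(v, T)**2 / kappa_T(v, T)   # Mayer relation

def closed_form(v, T):
    return R / (1 - 2 * a * (v - b)**2 / (R * T * v**3))
```

At large molar volume both expressions approach R, the ideal-gas value, as noted in the text.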
Finally c p , α {\displaystyle c_{p},\alpha } , and κ T {\displaystyle \kappa _{T}} are all infinite on the curve T = 2 a ( v − b ) 2 / ( R v 3 ) = T c ( 3 v r − 1 ) 2 / ( 4 v r 3 ) {\displaystyle T=2a(v-b)^{2}/(Rv^{3})=T_{\text{c}}(3v_{r}-1)^{2}/(4v_{r}^{3})} . [ 55 ] This is the spinodal curve defined by κ T − 1 = 0 {\displaystyle \kappa _{T}^{-1}=0} , [ 64 ] that was discussed in the subsection The course of the isotherms .
Although the gap in v {\displaystyle v} delimited by the two spinodal points on an isotherm (e.g. T r = 7 / 8 {\displaystyle T_{\text{r}}=7/8} in Fig. 1) is the origin of the phase change, the change occurs at some intermediate value of pressure. This can be understood by considering that the saturated liquid and vapor states can coexist in equilibrium, with the same pressure and temperature. [ 65 ] However, the minimum and maximum spinodal points are not at the same pressure. Therefore, at a temperature T s {\displaystyle T_{\text{s}}} , the phase change is characterized by the pressure p s {\displaystyle p_{\text{s}}} , which lies within the range of p {\displaystyle p} set by the spinodal points ( p min < p s < p max {\displaystyle p_{\text{min}}<p_{\text{s}}<p_{\text{max}}} ), and by the molar volume of liquid v f {\displaystyle v_{\text{f}}} and vapor v g {\displaystyle v_{\text{g}}} , which lie outside the range of v {\displaystyle v} set by the spinodal points ( v f < v min {\displaystyle v_{\text{f}}<v_{\text{min}}} and v g > v max {\displaystyle v_{\text{g}}>v_{\text{max}}} ).
Applying Eq ( 1a ) to the saturated liquid and saturated vapor states gives:
Equations ( 7 ) contain four variables ( p s , T s , v f , v g {\displaystyle p_{\text{s}},T_{\text{s}},v_{\text{f}},v_{\text{g}}} ), so a third equation is required to uniquely specify three of these variables in terms of the fourth. In this case of a single substance, the equation is provided by the condition of equal Gibbs free energy, [ 65 ]
g g = g f {\displaystyle g_{\text{g}}=g_{\text{f}}}
Using Eq ( 4b ) applied to each state in this equation produces
This is a third equation that, together with Eqs. ( 7 ), can be solved numerically given a value for either T s {\displaystyle T_{\text{s}}} or p s {\displaystyle p_{\text{s}}} , and the results have been tabulated; [ 66 ] [ 67 ] however, the equations also admit an analytic parametric solution obtained by Lekner. [ 68 ] Details of this solution may be found in the Maxwell construction , and the dimensionless results are:
T rs ( y ) = 27 8 ⋅ 2 f ( y ) [ cosh y + f ( y ) ] g ( y ) 2 , p rs = 27 f ( y ) 2 [ 1 − f ( y ) 2 ] g ( y ) 2 , v rf = 1 + f ( y ) e y 3 f ( y ) e y , v rg = 1 + f ( y ) e − y 3 f ( y ) e − y {\displaystyle {\begin{aligned}T_{\text{rs}}(y)&={\frac {27}{8}}\cdot {\frac {2f(y)\left[\cosh y+f(y)\right]}{g(y)^{2}}},&p_{\text{rs}}&=27{\frac {f(y)^{2}\left[1-f(y)^{2}\right]}{g(y)^{2}}},\\[1ex]v_{\text{rf}}&={\frac {1+f(y)e^{y}}{3f(y)e^{y}}},&v_{\text{rg}}&={\frac {1+f(y)e^{-y}}{3f(y)e^{-y}}}\end{aligned}}} where f ( y ) = y cosh y − sinh y sinh y cosh y − y , g ( y ) = 1 + 2 f ( y ) cosh y + f ( y ) 2 {\displaystyle {\begin{aligned}f(y)&={\frac {y\cosh y-\sinh y}{\sinh y\cosh y-y}},&g(y)&=1+2f(y)\cosh y+f(y)^{2}\end{aligned}}}
The parameter 0 ≤ y < ∞ {\displaystyle 0\leq y<\infty } is given physically by y = ( s g − s f ) / ( 2 R ) {\displaystyle y=(s_{\text{g}}-s_{\text{f}})/(2R)} . This solution also produces values of all other property discontinuities across the saturation curve. [ 69 ] These functions define the coexistence curve (or saturation curve ), which is the locus of the saturated liquid and saturated vapor states of the vdW fluid. Projections of this saturation curve are plotted in Figures 1 and 2.
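The parametric solution quoted above is straightforward to evaluate; the sketch below (function name illustrative) returns the saturation state for a given value of the parameter y. As y → 0 all four quantities approach 1, the critical point, and for y > 0 the solution gives a subcritical saturation state:

```python
# Lekner's parametric solution for the vdW coexistence curve, as quoted
# in the text; y = (s_g - s_f)/(2R) is the parameter.
import math

def coexistence(y):
    f = (y * math.cosh(y) - math.sinh(y)) / (math.sinh(y) * math.cosh(y) - y)
    g = 1 + 2 * f * math.cosh(y) + f ** 2
    T_rs = (27 / 8) * 2 * f * (math.cosh(y) + f) / g ** 2
    p_rs = 27 * f ** 2 * (1 - f ** 2) / g ** 2
    v_rf = (1 + f * math.exp(y)) / (3 * f * math.exp(y))
    v_rg = (1 + f * math.exp(-y)) / (3 * f * math.exp(-y))
    return T_rs, p_rs, v_rf, v_rg

T_rs, p_rs, v_rf, v_rg = coexistence(2.0)
print(T_rs, p_rs, v_rf, v_rg)  # a subcritical saturation state
```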
Referring back to Figure 1, the isotherms for T r < 1 {\displaystyle T_{\text{r}}<1} are discontinuous. For example, the T r = 7 / 8 {\displaystyle T_{\text{r}}=7/8} (green) isotherm consists of two separate segments. The solid green lines are composed of stable states. They terminate at dots representing the saturated liquid and vapor states forming the phase change. The dashed green lines represent metastable states ( superheated liquid and subcooled vapor). They are created in the phase transition, have a finite lifetime, and then devolve into their lower energy stable alternative.
At every point in the region between the two curves in Figure 2, there are two states: one stable and one metastable. The coexistence of these states can be seen in Figure 1—for discontinuous isotherms, there are values of p r {\displaystyle p_{\text{r}}} which correspond to two points on the isotherm: one on a solid line (the stable state) and one on a dashed region (the metastable state).
In his treatise of 1898, in which he described the van der Waals equation in great detail, Boltzmann discussed these metastable states in a section titled "Undercooling, Delayed evaporation". [ 70 ] (Today these states are denoted "subcooled vapor" and "superheated liquid".) Moreover, it has since become clear that these metastable states occur regularly in the phase transition process. In particular, processes that involve very high heat fluxes create large numbers of these states, which transition to their stable alternative with a corresponding release of energy that can be dangerous. Consequently, there is a pressing need to study their thermal properties. [ 71 ]
In the same section, Boltzmann also addressed and explained the negative pressures which some liquid metastable states exhibit (for example, the blue isotherm T r = 4 / 5 {\displaystyle T_{\text{r}}=4/5} in Fig. 1). He concluded that such liquid states under tensile stress were real, as did Tien and Lienhard many years later, who wrote "The van der Waals equation predicts that at low temperatures liquids sustain enormous tension [...] In recent years measurements have been made that reveal this to be entirely correct." [ 72 ]
Even though the phase change produces a mathematical discontinuity in the homogeneous fluid properties (for example v {\displaystyle v} ), there is no physical discontinuity. [ 73 ] As the liquid begins to vaporize, the fluid becomes a heterogeneous mixture of liquid and vapor whose molar volume varies continuously from v f {\displaystyle v_{\text{f}}} to v g {\displaystyle v_{\text{g}}} according to the equation of state v = v f + x ( v g − v f ) {\textstyle v=v_{\text{f}}+x(v_{\text{g}}-v_{\text{f}})} where x = N g / ( N f + N g ) {\textstyle x=N_{\text{g}}/(N_{\text{f}}+N_{\text{g}})} and 0 ≤ x ≤ 1 {\displaystyle 0\leq x\leq 1} is the mole fraction of the vapor. This equation is called the lever rule and applies to other properties as well. [ 19 ] [ 73 ] The states it represents form a horizontal line bridging the discontinuous region of an isotherm (not shown in Fig. 1 because it is a different equation from the vdW equation).
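The lever rule is simple to apply; the helper below (names illustrative) computes the mixture molar volume from the vapor mole fraction, and inverts it to recover the quality from a measured volume:

```python
# Lever rule in the two-phase region: v = v_f + x (v_g - v_f), 0 <= x <= 1.
def mixture_volume(v_f, v_g, x):
    assert 0.0 <= x <= 1.0
    return v_f + x * (v_g - v_f)

def quality(v, v_f, v_g):
    # inverse of the lever rule: vapor mole fraction from the mixture volume
    return (v - v_f) / (v_g - v_f)

# illustrative reduced saturation volumes of a subcritical isotherm
print(mixture_volume(0.5, 7.7, 0.25))
```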
The idea of corresponding states originated when van der Waals cast his equation in the dimensionless form, p r = p ( v r , T r ) {\displaystyle p_{\text{r}}=p(v_{\text{r}},T_{\text{r}})} . However, as Boltzmann noted, such a simple representation could not correctly describe all substances. Indeed, the saturation analysis of this form produces p rs = p s ( T r ) {\displaystyle p_{\text{rs}}=p_{\text{s}}(T_{\text{r}})} ; namely, that all substances have the same dimensionless coexistence curve, which is not true. [ 74 ] To avoid this paradox, an extended principle of corresponding states has been suggested in which p r = p ( v r , T r , ϕ ) {\displaystyle p_{\text{r}}=p(v_{\text{r}},T_{\text{r}},\phi )} where ϕ {\displaystyle \phi } is a substance-dependent dimensionless parameter related to the only physical feature associated with an individual substance: its critical point.
One candidate for ϕ {\displaystyle \phi } is the critical compressibility factor Z c = p c v c / ( R T c ) {\displaystyle Z_{\text{c}}=p_{\text{c}}v_{\text{c}}/(RT_{\text{c}})} ; however, because v c {\displaystyle v_{\text{c}}} is difficult to measure accurately, the acentric factor developed by Kenneth Pitzer , [ 75 ] ω = − log 10 [ p r ( T r = 0.7 ) ] − 1 {\displaystyle \omega =-\log _{10}[p_{\text{r}}(T_{\text{r}}=0.7)]-1} , is more useful. The saturation pressure in this situation is represented by a one-parameter family of curves: p rs = p s ( T r , ω ) {\displaystyle p_{\text{rs}}=p_{\text{s}}(T_{\text{r}},\omega )} . Several investigators have produced correlations of saturation data for several substances; Dong and Lienhard give [ 37 ] ln p rs = 5.37270 ( 1 − 1 / T r ) + ω ( 7.49408 − 11.181777 T r 3 + 3.68769 T r 6 + 17.92998 ln T r ) {\displaystyle {\begin{aligned}\ln p_{\text{rs}}=5.37270(1-1/T_{\text{r}})+\omega (&7.49408-11.181777\ {T_{\text{r}}}^{3}+\\&3.68769\ {T_{\text{r}}}^{6}+17.92998\,\ln T_{\text{r}})\end{aligned}}} which has an RMS error of ± 0.42 {\displaystyle \pm 0.42} over the range 0.3 ≤ T r ≤ 1 {\displaystyle 0.3\leq T_{\text{r}}\leq 1} .
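The correlation can be checked for internal consistency: at the critical point ( T r = 1 {\displaystyle T_{\text{r}}=1} ) it gives p rs ≈ 1 {\displaystyle p_{\text{rs}}\approx 1} for any ω {\displaystyle \omega } , and at T r = 0.7 {\displaystyle T_{\text{r}}=0.7} it reproduces the defining relation of the acentric factor, log 10 p rs = − ( 1 + ω ) {\displaystyle \log _{10}p_{\text{rs}}=-(1+\omega )} . A short sketch (function name illustrative):

```python
# Dong & Lienhard correlation for the reduced saturation pressure,
# as quoted in the text; valid roughly for 0.3 <= T_r <= 1.
import math

def p_rs(T_r, omega):
    ln_p = (5.37270 * (1 - 1 / T_r)
            + omega * (7.49408 - 11.181777 * T_r ** 3
                       + 3.68769 * T_r ** 6 + 17.92998 * math.log(T_r)))
    return math.exp(ln_p)

print(p_rs(1.0, 0.344))              # ~1 at the critical point (water-like omega)
print(math.log10(p_rs(0.7, 0.344)))  # ~-(1 + 0.344), the acentric-factor definition
```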
Figure 3 is a plot of p rs {\displaystyle p_{\text{rs}}} vs. T r {\displaystyle T_{\text{r}}} for various values of the Pitzer factor ω {\displaystyle \omega } as given by this equation. The vertical axis is logarithmic to show the behavior at pressures closer to zero, where differences among the various substances (indicated by varying values of ω {\displaystyle \omega } ) are more pronounced.
Figure 4 is another plot of the same equation showing T r {\displaystyle T_{\text{r}}} as a function of ω {\displaystyle \omega } for various values of p rs {\displaystyle p_{\text{rs}}} . It includes data from 51 substances, including the vdW fluid, over the range − 0.4 < ω < 0.9 {\displaystyle -0.4<\omega <0.9} . This plot shows that the vdW fluid ( ω = − 0.302 {\displaystyle \omega =-0.302} ) is a member of the class of real fluids; indeed, the vdW fluid can quantitatively approximate the behavior of the liquid metals cesium ( ω = − 0.267 {\displaystyle \omega =-0.267} ) and mercury ( ω = − 0.21 {\displaystyle \omega =-0.21} ), which share similar values of ω {\displaystyle \omega } . However, in general it can describe the behavior of fluids of various ω {\displaystyle \omega } only qualitatively.
The Joule–Thomson coefficient, μ JT = ∂ p T | h {\displaystyle \mu _{\text{JT}}=\partial _{p}T|_{h}} , is of practical importance because the two end states of a throttling process ( h 2 = h 1 {\displaystyle h_{2}=h_{1}} ) lie on a constant enthalpy curve. Although ideal gases, for which h = h ( T ) {\displaystyle h=h(T)} , do not change temperature in such a process, real gases do, and it is important in applications to know whether they heat up or cool down. [ 76 ]
This coefficient can be found in terms of the previously derived α {\displaystyle \alpha } and c p {\displaystyle c_{p}} as [ 77 ] μ JT = v ( α T − 1 ) c p . {\displaystyle \mu _{\text{JT}}={\frac {v(\alpha T-1)}{c_{p}}}.}
When μ JT {\displaystyle \mu _{\text{JT}}} is positive, the gas temperature decreases as it passes through a throttling process, and when it is negative, the temperature increases. Therefore, the condition μ JT = 0 {\displaystyle \mu _{\text{JT}}=0} defines a curve that separates the region of the T , p {\displaystyle T,p} plane where μ JT > 0 {\displaystyle \mu _{\text{JT}}>0} from the region where μ JT < 0 {\displaystyle \mu _{\text{JT}}<0} . This curve is called the inversion curve , and its equation is α T − 1 = 0 {\displaystyle \alpha T-1=0} . Evaluating this using the expression for α {\displaystyle \alpha } derived in Eq. 5 produces, [ 78 ]
2 a ( v − b ) 2 − R T v 2 b = 0 {\displaystyle 2a(v-b)^{2}-RTv^{2}b=0}
Note that for v ≫ b {\displaystyle v\gg b} there will be cooling for 2 a > R T b {\displaystyle 2a>RTb} (or, in terms of the critical temperature, T < ( 27 / 4 ) T c {\displaystyle T<(27/4)\ T_{\text{c}}} ). As Sommerfeld noted, "This is the case with air and with most other gases. Air can be cooled at will by repeated expansion and can finally be liquified. " [ 78 ]
Solving for b / v > 0 {\displaystyle b/v>0} , and using this to eliminate v {\displaystyle v} from Eq (1a) gives the inversion curve as
p p ∗ = − 1 + 4 ( T 2 T ∗ ) 1 / 2 − 3 ( T 2 T ∗ ) {\displaystyle {\frac {p}{p^{*}}}=-1+4\left({\frac {T}{2T^{*}}}\right)^{1/2}-3\left({\frac {T}{2T^{*}}}\right)}
where, for simplicity, a , b , R {\displaystyle a,b,R} have been replaced by p ∗ , T ∗ {\displaystyle p^{*},T^{*}} .
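Writing s = T / ( 2 T ∗ ) {\displaystyle s={\sqrt {T/(2T^{*})}}} , the inversion curve is the parabola p / p ∗ = − 1 + 4 s − 3 s 2 {\displaystyle p/p^{*}=-1+4s-3s^{2}} , which vanishes at s = 1 / 3 {\displaystyle s=1/3} and s = 1 {\displaystyle s=1} and peaks at s = 2 / 3 {\displaystyle s=2/3} , i.e. at p = p ∗ / 3 = 9 p c {\displaystyle p=p^{*}/3=9p_{\text{c}}} and T = 3 T c {\displaystyle T=3T_{\text{c}}} . A minimal numerical sketch of this (function name illustrative):

```python
# vdW inversion curve: p/p* = -1 + 4 s - 3 s^2 with s = sqrt(T/(2 T*)),
# where p* = a/b^2 = 27 p_c and T* = a/(Rb) = (27/8) T_c.
import math

def inversion_p_over_pstar(T_over_Tstar):
    s = math.sqrt(T_over_Tstar / 2)
    return -1 + 4 * s - 3 * s ** 2

print(inversion_p_over_pstar(8 / 9))  # 1/3: the peak, i.e. p = 9 p_c at T = 3 T_c
print(inversion_p_over_pstar(2.0))    # 0: the upper endpoint, T = 2 T* = (27/4) T_c
```

The upper zero at T = 2 T ∗ = ( 27 / 4 ) T c {\displaystyle T=2T^{*}=(27/4)T_{\text{c}}} matches the cooling condition noted above for v ≫ b {\displaystyle v\gg b} .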
A plot of the curve, in reduced variables, is shown in green in Figure 5. Sommerfeld also displays this plot, [ 79 ] together with a curve drawn using experimental data from H 2 . The two curves agree qualitatively, but not quantitatively.
Figure 5 shows an overlap between the saturation curve and the inversion curve plotted in the same region. This crossover means a van der Waals gas can be liquefied by passing it through a throttling process under the proper conditions; real gases are liquefied in this way.
Real gases are characterized by their difference from ideal gases by writing p v = Z R T {\displaystyle pv=ZRT} , where Z {\displaystyle Z} is called the compressibility factor. It is expressed either as Z ( p , T ) {\displaystyle Z(p,T)} or Z ( ρ , T ) {\displaystyle Z(\rho ,T)} because, in either case, Z {\displaystyle Z} approaches the ideal gas value 1 in the limit as p {\displaystyle p} or ρ {\displaystyle \rho } approaches zero. In the second case Z ( ρ , T ) = p ( ρ , T ) / ρ R T {\displaystyle Z(\rho ,T)=p(\rho ,T)/\rho RT} , [ 80 ] so for a van der Waals fluid from Eq ( 1 ) the compressibility factor is Z = 1 1 − b ρ − a ρ R T {\displaystyle Z={\frac {1}{1-b\rho }}-{\frac {a\rho }{RT}}} ( 9 )
or in terms of reduced variables Z = 3 3 − ρ r − 9 ρ r 8 T r {\displaystyle Z={\frac {3}{3-\rho _{r}}}-{\frac {9\rho _{r}}{8T_{r}}}} where 0 ≤ ρ r = 1 / v r ≤ 3 {\displaystyle 0\leq \rho _{r}=1/v_{r}\leq 3} . At the critical point, T r = ρ r = 1 {\displaystyle T_{r}=\rho _{r}=1} and Z = Z c = 3 / 2 − 9 / 8 = 3 / 8 {\displaystyle Z=Z_{\text{c}}=3/2-9/8=3/8} .
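A one-line numerical check of the reduced form (function name illustrative):

```python
# Reduced vdW compressibility factor, Z = 3/(3 - rho_r) - 9 rho_r/(8 T_r).
def Z_vdw(rho_r, T_r):
    return 3 / (3 - rho_r) - 9 * rho_r / (8 * T_r)

print(Z_vdw(1.0, 1.0))   # 0.375 = 3/8, the critical compressibility factor
print(Z_vdw(1e-9, 1.0))  # ~1, the ideal gas limit
```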
In the limit ρ → 0 {\displaystyle \rho \rightarrow 0} , Z = 1 {\displaystyle Z=1} ; the fluid behaves like an ideal gas, as mentioned before. The derivative ( ∂ Z ∂ ρ ) T = b ( ( 1 − b ρ ) − 2 − a b R T ) {\displaystyle \left({\frac {\partial Z}{\partial \rho }}\right)_{T}=b\left({\left(1-b\rho \right)}^{-2}-{\frac {a}{bRT}}\right)} is never negative when a b R T = T ∗ / T ≤ 1 {\displaystyle {a \over bRT}=T^{*}/T\leq 1} ; that is, when T / T ∗ ≥ 1 {\displaystyle T/T^{*}\geq 1} (which corresponds to T r ≥ 27 / 8 {\displaystyle T_{r}\geq 27/8} ). Alternatively, the initial slope is negative when T / T ∗ < 1 {\displaystyle T/T^{*}<1} , is zero at b ρ = 1 − ( T / T ∗ ) 1 / 2 {\displaystyle b\rho =1-(T/T^{*})^{1/2}} , and is positive for larger b ρ ≤ 1 {\displaystyle b\rho \leq 1} (see Fig. 6). In this case, the value of Z {\displaystyle Z} passes through 1 {\displaystyle 1} when b ρ B = 1 − T B / T ∗ {\displaystyle b\rho _{B}=1-T_{B}/T^{*}} . Here T B = ( 27 T c / 8 ) ( 1 − b ρ B ) {\displaystyle T_{B}=(27T_{\text{c}}/8)(1-b\rho _{B})} is called the Boyle temperature . It ranges between 0 ≤ T B ≤ 27 T c / 8 {\displaystyle 0\leq T_{B}\leq 27T_{\text{c}}/8} , and denotes a point in T , ρ {\displaystyle T,\rho } space where the equation of state reduces to the ideal gas law. However, the fluid does not behave like an ideal gas there, because neither its derivatives ( α , κ T ) {\displaystyle (\alpha ,\kappa _{T})} nor c p {\displaystyle c_{p}} reduce to their ideal gas values, except in the region b ρ B ≪ 1 , T B ∼ 27 T c / 8 {\displaystyle b\rho _{B}\ll 1,\,T_{B}\sim 27T_{\text{c}}/8} , which is the actual ideal gas region. [ 81 ]
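The Boyle-point condition can be verified directly: writing the vdW compressibility factor as Z = ( 1 − b ρ ) − 1 − ( T ∗ / T ) b ρ {\displaystyle Z=(1-b\rho )^{-1}-(T^{*}/T)\,b\rho } , it passes through 1 exactly at b ρ = 1 − T / T ∗ {\displaystyle b\rho =1-T/T^{*}} . A short check (names illustrative):

```python
# Boyle point of the vdW fluid: Z = 1 at b*rho = 1 - T/T*, for T < T*.
def Z(b_rho, T_over_Tstar):
    # (T*/T) * b*rho is written as b_rho / (T/T*)
    return 1 / (1 - b_rho) - b_rho / T_over_Tstar

t = 0.5               # T/T*
b_rho_B = 1 - t       # predicted Boyle density
print(Z(b_rho_B, t))  # 1.0
```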
Figure 6 plots various isotherms of Z ( ρ , T r ) {\displaystyle Z(\rho ,T_{r})} vs ρ r {\displaystyle \rho _{r}} . Also shown are the spinodal and coexistence curves described previously. The subcritical isotherm consists of stable, metastable, and unstable segments (identified in the same way as in Fig. 1). Also included are the zero initial slope isotherm and the one corresponding to infinite temperature.
Figure 7 shows a generalized compressibility chart for a vdW gas. Like all other vdW properties, this is not quantitatively correct for most gases, but it has the correct qualitative features. [ 82 ] [ 83 ] Note the caustic generated by the crossing isotherms.
Kamerlingh Onnes first suggested the virial expansion as an empirical alternative to the vdW equation. Subsequently, it was shown to follow from statistical mechanics , [ 84 ] [ 85 ] in the form Z ( ρ , T ) = 1 + ∑ k = 2 ∞ B k ( T ) ( ρ ) k − 1 {\displaystyle Z(\rho ,T)=1+\sum _{k=2}^{\infty }\,B_{k}(T)(\rho )^{k-1}} where Z = p / ( ρ R T ) {\displaystyle Z=p/(\rho RT)} and the functions B k ( T ) {\displaystyle B_{k}(T)} are the virial coefficients. The k {\displaystyle k} th term represents a k {\displaystyle k} -particle interaction.
Expanding the term ( 1 − b ρ ) − 1 {\displaystyle (1-b\rho )^{-1}} in the definition of Z {\displaystyle Z} , Eq ( 9 ), into an infinite series, absolutely convergent for b ρ < 1 {\displaystyle b\rho <1} , produces Z ( ρ , T ) = 1 + ( 1 − a b R T ) b ρ + ∑ k = 3 ∞ ( b ρ ) k − 1 . {\displaystyle Z(\rho ,T)=1+\left(1-{a \over bRT}\right)b\rho +\sum _{k=3}^{\infty }(b\rho )^{k-1}.}
The second virial coefficient is the slope of Z ( ρ , T ) {\displaystyle Z(\rho ,T)} at ρ = 0 {\displaystyle \rho =0} . It is positive when T / T ∗ > 1 {\displaystyle T/T^{*}>1} ( T r > 27 / 8 {\displaystyle T_{\text{r}}>27/8} ) and negative when T / T ∗ < 1 {\displaystyle T/T^{*}<1} ( T r < 27 / 8 {\displaystyle T_{\text{r}}<27/8} ), in agreement with the result found by differentiation. Its vdW value, B 2 = b − a / R T {\displaystyle B_{2}=b-a/RT} , agrees with a statistical mechanical calculation; however, the higher order coefficients are in error. This means that the vdW virial expansion, hence the vdW equation itself, is equivalent to a two-term asymptotic approximation to the virial equation. [ 13 ] [ 86 ]
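The sign change of B 2 {\displaystyle B_{2}} at T = a / ( R b ) = T ∗ {\displaystyle T=a/(Rb)=T^{*}} is easy to confirm numerically (constants illustrative, roughly CO2-like):

```python
# Second virial coefficient of the vdW fluid, B2(T) = b - a/(RT);
# it changes sign at T = a/(Rb) = T*, i.e. T_r = 27/8.
R = 8.314  # J/(mol K)

def B2(T, a, b):
    return b - a / (R * T)

a, b = 0.364, 4.27e-5       # illustrative constants, SI units
T_star = a / (R * b)
print(B2(T_star, a, b))     # 0: the low-density Boyle temperature
print(B2(0.5 * T_star, a, b) < 0 < B2(2 * T_star, a, b))  # True
```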
For molecules modeled as non-attracting hard spheres, a = 0 {\displaystyle a=0} , and the vdW virial expansion becomes Z ( ρ ) = ( 1 − b ρ ) − 1 = 1 + ∑ k = 2 ∞ ( b ρ ) k − 1 , {\displaystyle Z(\rho )=(1-b\rho )^{-1}=1+\sum _{k=2}^{\infty }(b\rho )^{k-1},} which illustrates the effect of the excluded volume alone. It was recognized early on that this was in error beginning with the term ( b ρ ) 2 {\displaystyle (b\rho )^{2}} . Boltzmann calculated its correct value as 5 8 ( b ρ ) 2 {\textstyle {\frac {5}{8}}(b\rho )^{2}} , and used the result to propose an enhanced version of the vdW equation: ( p + a v 2 ) ( v − b 3 ) = R T ( 1 + 2 b 3 v + 7 b 2 24 v 2 ) . {\displaystyle \left(p+{a \over v^{2}}\right)\left(v-{b \over 3}\right)=RT\left(1+{2b \over 3v}+{{7b^{2}} \over {24v^{2}}}\right).}
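That Boltzmann's modified equation reproduces the correct hard-sphere coefficient can be checked by multiplying the series for ( 1 − b / 3 v ) − 1 {\displaystyle (1-b/3v)^{-1}} by the polynomial on the right-hand side; the x 2 {\displaystyle x^{2}} coefficient of Z {\displaystyle Z} (with x = b / v {\displaystyle x=b/v} ) should come out to 5 / 8 {\displaystyle 5/8} . A short exact-arithmetic sketch:

```python
# Series check of Boltzmann's hard-sphere equation (a = 0):
# p (v - b/3) = RT (1 + 2b/(3v) + 7 b^2/(24 v^2)).
# With x = b/v, Z = (1 - x/3)^{-1} * (1 + (2/3) x + (7/24) x^2);
# the Cauchy product should give Z = 1 + x + (5/8) x^2 + ...
from fractions import Fraction

geo = [Fraction(1, 3) ** k for k in range(3)]        # (1 - x/3)^{-1} up to x^2
rhs = [Fraction(1), Fraction(2, 3), Fraction(7, 24)]  # RHS polynomial

Z_series = [sum(geo[i] * rhs[n - i] for i in range(n + 1)) for n in range(3)]
print(Z_series)  # coefficients 1, 1, 5/8
```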
On expanding ( v − b / 3 ) − 1 {\displaystyle (v-b/3)^{-1}} , this produced the correct coefficients through ( b / v ) 2 {\displaystyle (b/v)^{2}} and also gave infinite pressure at b / 3 {\displaystyle b/3} , which is approximately the close-packing distance for hard spheres. [ 87 ] This was one of the first of many equations of state proposed over the years that attempted to make quantitative improvements to the remarkably accurate explanations of real gas behavior produced by the vdW equation. [ 88 ]
In 1890 van der Waals published an article that initiated the study of fluid mixtures. It was subsequently included as Part III of a later published version of his thesis. [ 89 ] His essential idea was that in a binary mixture of vdW fluids described by the equations p 1 = R T v − b 11 − a 11 v 2 and p 2 = R T v − b 22 − a 22 v 2 {\displaystyle p_{1}={\frac {RT}{v-b_{11}}}-{\frac {a_{11}}{v^{2}}}\quad {\text{and}}\quad p_{2}={\frac {RT}{v-b_{22}}}-{\frac {a_{22}}{v^{2}}}} the mixture is also a vdW fluid given by p = R T v − b x − a x v 2 {\displaystyle p={\frac {RT}{v-b_{x}}}-{\frac {a_{x}}{v^{2}}}} where a x = a 11 x 1 2 + 2 a 12 x 1 x 2 + a 22 x 2 2 , b x = b 11 x 1 2 + 2 b 12 x 1 x 2 + b 22 x 2 2 . {\displaystyle {\begin{aligned}a_{x}&=a_{11}x_{1}^{2}+2a_{12}x_{1}x_{2}+a_{22}x_{2}^{2},\\[2pt]b_{x}&=b_{11}x_{1}^{2}+2b_{12}x_{1}x_{2}+b_{22}x_{2}^{2}.\end{aligned}}}
Here x 1 = N 1 / N {\displaystyle x_{1}=N_{1}/N} and x 2 = N 2 / N {\displaystyle x_{2}=N_{2}/N} , with N = N 1 + N 2 {\displaystyle N=N_{1}+N_{2}} (so that x 1 + x 2 = 1 {\displaystyle x_{1}+x_{2}=1} ), are the mole fractions of the two fluid substances. Adding the equations for the two fluids shows that p ≠ p 1 + p 2 {\displaystyle p\neq p_{1}+p_{2}} , although for v {\displaystyle v} sufficiently large p ≈ p 1 + p 2 {\displaystyle p\approx p_{1}+p_{2}} with equality holding in the ideal gas limit. The quadratic forms for a x {\displaystyle a_{x}} and b x {\displaystyle b_{x}} are a consequence of the forces between molecules. This was first shown by Lorentz, [ 90 ] and was credited to him by van der Waals. The quantities a 11 , a 22 {\displaystyle a_{11},\,a_{22}} and b 11 , b 22 {\displaystyle b_{11},\,b_{22}} in these expressions characterize collisions between two molecules of the same fluid component, while a 12 = a 21 {\displaystyle a_{12}=a_{21}} and b 12 = b 21 {\displaystyle b_{12}=b_{21}} represent collisions between one molecule of each of the two different fluid components. This idea of van der Waals' was later called a one fluid model of mixture behavior . [ 91 ]
Assuming that b 12 {\displaystyle b_{12}} is the arithmetic mean of b 11 {\displaystyle b_{11}} and b 22 {\displaystyle b_{22}} , b 12 = ( b 11 + b 22 ) / 2 {\displaystyle b_{12}=(b_{11}+b_{22})/2} , substituting into the quadratic form and noting that x 1 + x 2 = 1 {\displaystyle x_{1}+x_{2}=1} produces b = b 11 x 1 + b 22 x 2 {\displaystyle b=b_{11}x_{1}+b_{22}x_{2}}
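The reduction from the quadratic form to the linear rule is purely algebraic, and can be confirmed numerically (function names and constants illustrative):

```python
# With b12 = (b11 + b22)/2 and x1 + x2 = 1, the quadratic form
# b_x = b11 x1^2 + 2 b12 x1 x2 + b22 x2^2 collapses to b_x = b11 x1 + b22 x2.
def b_quadratic(b11, b22, x1):
    x2 = 1 - x1
    b12 = (b11 + b22) / 2
    return b11 * x1 ** 2 + 2 * b12 * x1 * x2 + b22 * x2 ** 2

def b_linear(b11, b22, x1):
    return b11 * x1 + b22 * (1 - x1)

print(b_quadratic(3.0e-5, 5.0e-5, 0.4), b_linear(3.0e-5, 5.0e-5, 0.4))  # equal
```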
Van der Waals wrote this relation, but did not make use of it initially. [ 92 ] However, it has been used frequently in subsequent studies, and its use is said to produce good agreement with experimental results at high pressure. [ 93 ]
In this article, van der Waals used the Helmholtz potential minimum principle to establish stability conditions. This principle states that for a system in diathermal contact with a heat reservoir at T = T R {\displaystyle T=T_{R}} , the equilibrium states are those for which D F = 0 {\displaystyle DF=0} and D 2 F > 0 {\displaystyle D^{2}F>0} ; namely, at equilibrium the Helmholtz potential is a minimum. [ 94 ] Since, like g ( p , T ) {\displaystyle g(p,T)} , the molar Helmholtz function f ( v , T ) {\displaystyle f(v,T)} is also a potential function whose differential is d f = ( ∂ f ∂ v ) T d v + ( ∂ f ∂ T ) v d T = − p d v − s d T , {\displaystyle df=\left({\frac {\partial f}{\partial v}}\right)_{T}dv+\left({\frac {\partial f}{\partial T}}\right)_{v}dT=-p\,dv-s\,dT,} this minimum principle leads to the stability condition ∂ 2 f / ∂ v 2 | T = − ∂ p / ∂ v | T > 0 {\displaystyle \partial ^{2}f/\partial v^{2}|_{T}=-\partial p/\partial v|_{T}>0} . This condition means that the function f ( v , T ) {\displaystyle f(v,T)} is convex at all stable states of the system. Moreover, for those states the previous stability condition for the pressure is also necessarily satisfied. [ 95 ]
For a single substance, the definition of the molar Gibbs free energy can be written in the form f = g − p v {\displaystyle f=g-pv} . Thus when p {\displaystyle p} and g {\displaystyle g} are constant, the function f ( v ) {\displaystyle f(v)} is a straight line with slope − p {\displaystyle -p} , and intercept g {\displaystyle g} . Since the curve f ( T R , v ) {\displaystyle f(T_{R},v)} has positive curvature everywhere when T R ≥ T c {\displaystyle T_{R}\geq T_{\text{c}}} , the curve and the straight line will have a single tangent. However, for a subcritical T R , f ( T R , v ) {\displaystyle T_{R},\,f(T_{R},v)} is not everywhere convex. With p = p s ( T R ) {\displaystyle p=p_{\text{s}}(T_{R})} and a suitable value of g {\displaystyle g} , the line will be tangent to f ( T R , v ) {\displaystyle f(T_{R},v)} at the molar volume of each coexisting phase: saturated liquid v f ( T R ) {\displaystyle v_{f}(T_{R})} and saturated vapor v g ( T R ) {\displaystyle v_{g}(T_{R})} ; there will be a double tangent. Furthermore, each of these points is characterized by the same values of g {\displaystyle g} , p {\displaystyle p} , and T R . {\displaystyle T_{R}.} These are the same three specifications for coexistence that were used previously . [ 95 ]
Figure 8 depicts an evaluation of f ( T R , v ) {\displaystyle f(T_{R},v)} as a green curve, with v f {\displaystyle v_{f}} and v g {\displaystyle v_{g}} marked by the left and right green circles, respectively. The region on the green curve for v ≤ v f {\displaystyle v\leq v_{f}} corresponds to the liquid state. As v {\displaystyle v} increases past v f {\displaystyle v_{f}} , the curvature of f {\displaystyle f} (proportional to ∂ v ∂ v f = − ∂ v p {\displaystyle \partial _{v}\partial _{v}f=-\partial _{v}p} ) continually decreases. The inflection point , characterized by zero curvature, is a spinodal point; between v f {\displaystyle v_{f}} and this point is the metastable superheated liquid. For further increases in v {\displaystyle v} the curvature decreases to a minimum and then increases to another (zero curvature) spinodal point; between these two spinodal points is the unstable region in which the fluid cannot exist in a homogeneous equilibrium state (represented by the dotted grey curve). With a further increase in v {\displaystyle v} the curvature increases to a maximum at v g {\displaystyle v_{g}} , where the slope is − p s {\displaystyle -p_{\text{s}}} ; the region between this point and the second spinodal point is the metastable subcooled vapor. Finally, the region v ≥ v g {\displaystyle v\geq v_{g}} is the vapor. In this region the curvature continually decreases until it is zero at infinitely large v {\displaystyle v} . The double tangent line (solid black) that runs between v f {\displaystyle v_{f}} and v g {\displaystyle v_{g}} represents states that are stable but heterogeneous, not homogeneous solutions of the vdW equation. [ 95 ] The states above this line (with larger Helmholtz free energy) are either metastable or unstable. [ 95 ] The combined solid green-black curve in Figure 8 is the convex envelope of f ( T R , v ) {\displaystyle f(T_{R},v)} , which is defined as the largest convex curve that is less than or equal to the function. [ 96 ]
For a vdW fluid, the molar Helmholtz potential is given by Eq ( 4a ). This is, in reduced form,
f r = f R T c = C u + T r ( c − C s − ln [ T r c ( 3 v r − 1 ) ] ) − 9 8 v r {\displaystyle f_{r}={\frac {f}{RT_{\text{c}}}}=C_{u}+T_{\text{r}}(c-C_{\text{s}}-\ln[T_{\text{r}}^{c}(3v_{\text{r}}-1)])-{\frac {9}{8v_{\text{r}}}}}
with derivative ∂ v r f r = − 3 T r / ( 3 v r − 1 ) + 9 / ( 8 v r 2 ) = − ( 3 / 8 ) p r {\displaystyle \partial _{v_{\text{r}}}f_{\text{r}}=-3T_{\text{r}}/(3v_{\text{r}}-1)+9/(8v_{\text{r}}^{2})=-(3/8)p_{\text{r}}} , where the factor 3 / 8 = Z c {\displaystyle 3/8=Z_{\text{c}}} arises because f {\displaystyle f} is reduced by R T c {\displaystyle RT_{\text{c}}} rather than by p c v c {\displaystyle p_{\text{c}}v_{\text{c}}} . A plot of this function f r {\displaystyle f_{\text{r}}} , whose slope at each point is thus proportional to − p r {\displaystyle -p_{\text{r}}} of the vdW equation, for the subcritical isotherm T r = 7 / 8 {\displaystyle T_{\text{r}}=7/8} is shown in Figure 8 along with the line tangent to it at its two coexisting saturation points. The data illustrated in Figure 8 is the same as that shown in Figure 1 for this isotherm. [ 95 ]
This double tangent construction thus provides a graphical alternative to the Maxwell construction to establish the saturated liquid and vapor points on an isotherm. [ 95 ]
Van der Waals used the Helmholtz function because its properties could be easily extended to the binary fluid situation. In a binary mixture of vdW fluids, the Helmholtz potential is a function of two variables, f ( T R , v , x ) {\displaystyle f(T_{R},v,x)} , where x {\displaystyle x} is a composition variable (for example x = x 2 {\displaystyle x=x_{2}} so x 1 = 1 − x {\displaystyle x_{1}=1-x} ). In this case, there are three stability conditions: [ 97 ]
∂ 2 f ∂ v 2 > 0 ∂ 2 f ∂ x 2 > 0 ∂ 2 f ∂ v 2 ∂ 2 f ∂ x 2 − ( ∂ 2 f ∂ x ∂ v ) 2 > 0 {\displaystyle {\frac {\partial ^{2}f}{\partial v^{2}}}>0\qquad {\frac {\partial ^{2}f}{\partial x^{2}}}>0\qquad {\frac {\partial ^{2}f}{\partial v^{2}}}{\frac {\partial ^{2}f}{\partial x^{2}}}-\left({\frac {\partial ^{2}f}{\partial x\partial v}}\right)^{2}>0} and the Helmholtz potential is a surface (of physical interest in the region 0 ≤ x ≤ 1 {\displaystyle 0\leq x\leq 1} ). The first two stability conditions show that the curvatures in the v {\displaystyle v} and x {\displaystyle x} directions are both non-negative for stable states, while the third condition indicates that stable states correspond to elliptic points on this surface. [ 98 ] Moreover, the limit ∂ 2 f ∂ v 2 ∂ 2 f ∂ x 2 − ( ∂ 2 f ∂ x ∂ v ) 2 = 0 {\displaystyle {\frac {\partial ^{2}f}{\partial v^{2}}}{\frac {\partial ^{2}f}{\partial x^{2}}}-\left({\frac {\partial ^{2}f}{\partial x\partial v}}\right)^{2}=0} specifies the spinodal curves on the surface.
For a binary mixture, the Euler equation [ 99 ] can be written in the form f = − p v + μ 1 x 1 + μ 2 x 2 = − p v + ( μ 2 − μ 1 ) x + μ 1 {\displaystyle {\begin{aligned}f&=-pv+\mu _{1}x_{1}+\mu _{2}x_{2}\\&=-pv+(\mu _{2}-\mu _{1})x+\mu _{1}\end{aligned}}} where μ j = ∂ x j f {\displaystyle \mu _{j}=\partial _{x_{j}}f} are the molar chemical potentials of each substance, j = 1 , 2 {\displaystyle j=1,2} . For constant values of p {\displaystyle p} , μ 1 {\displaystyle \mu _{1}} , and μ 2 {\displaystyle \mu _{2}} , this equation is a plane with slopes − p {\displaystyle -p} in the v {\displaystyle v} direction, μ 2 − μ 1 {\displaystyle \mu _{2}-\mu _{1}} in the x {\displaystyle x} direction, and intercept μ 1 {\displaystyle \mu _{1}} . As in the case of a single substance, here the plane and the surface can have a double tangent, and the locus of the coexisting phase points forms a curve on each surface. The coexistence conditions are that the two phases have the same T {\displaystyle T} , p {\displaystyle p} , μ 2 − μ 1 {\displaystyle \mu _{2}-\mu _{1}} , and μ 1 {\displaystyle \mu _{1}} ; the last two are equivalent to having the same μ 1 {\displaystyle \mu _{1}} and μ 2 {\displaystyle \mu _{2}} individually, which are just the Gibbs conditions for material equilibrium in this situation. The two methods of producing the coexistence surface are equivalent. [ 97 ]
Although this case is similar to that of a single fluid, here the geometry can be much more complex. The surface can develop a wave (called a plait or fold) in the x {\displaystyle x} direction as well as the one in the v {\displaystyle v} direction. Therefore, there can be two liquid phases that can be either miscible , or wholly or partially immiscible, as well as a vapor phase. [ 100 ] [ 101 ] Despite a great deal of both theoretical and experimental work on this problem by van der Waals and his successors—work which produced much useful knowledge about the various types of phase equilibria that are possible in fluid mixtures [ 102 ] —complete solutions to the problem were only obtained after 1967, when the availability of modern computers made calculations of mathematical problems of this complexity feasible for the first time. [ 103 ] The results obtained were, in Rowlinson's words, [ 104 ]
a spectacular vindication of the essential physical correctness of the ideas behind the van der Waals equation, for almost every kind of critical behavior found in practice can be reproduced by the calculations, and the range of parameters that correlate with the different kinds of behavior are intelligible in terms of the expected effects of size and energy.
To obtain these numerical results, the values of the constants of the individual component fluids a 11 , a 22 , b 11 , b 22 {\displaystyle a_{11},a_{22},b_{11},b_{22}} must be known. In addition, the effect of collisions between molecules of the different components, given by a 12 {\displaystyle a_{12}} and b 12 {\displaystyle b_{12}} , must also be specified. In the absence of experimental data or computer modeling results to estimate their values, the empirical combining rules (the geometric and arithmetic means, respectively) can be used: [ 105 ] a 12 = ( a 11 a 22 ) 1 / 2 and b 12 1 / 3 = ( b 11 1 / 3 + b 22 1 / 3 ) / 2. {\displaystyle a_{12}=(a_{11}a_{22})^{1/2}\qquad {\text{and}}\qquad b_{12}^{1/3}=(b_{11}^{1/3}+b_{22}^{1/3})/2.}
These relations correspond to the empirical combining rules for the intermolecular force constants, ϵ 12 = ( ϵ 11 ϵ 22 ) 1 / 2 and σ 12 = ( σ 11 + σ 22 ) / 2 , {\displaystyle \epsilon _{12}=(\epsilon _{11}\epsilon _{22})^{1/2}\qquad {\text{and}}\qquad \sigma _{12}=(\sigma _{11}+\sigma _{22})/2,} the first of which follows from a simple interpretation of the dispersion forces in terms of polarizabilities of the individual molecules, while the second is exact for rigid molecules. [ 106 ] Using these empirical combining rules to generalize for n {\displaystyle n} fluid components, the quadratic mixing rules for the material constants are: [ 93 ] a x = ∑ i = 1 n ∑ j = 1 n ( a i i a j j ) 1 / 2 x i x j = ( ∑ i = 1 n a i i 1 / 2 x i ) 2 b x = 1 8 ∑ i = 1 n ∑ j = 1 n ( b i i 1 / 3 + b j j 1 / 3 ) 3 x i x j {\displaystyle {\begin{aligned}a_{x}&=\sum _{i=1}^{n}\sum _{j=1}^{n}{\left(a_{ii}a_{jj}\right)}^{1/2}x_{i}x_{j}={\left(\sum _{i=1}^{n}a_{ii}^{1/2}x_{i}\right)}^{2}\\b_{x}&={\tfrac {1}{8}}\sum _{i=1}^{n}\sum _{j=1}^{n}{\left(b_{ii}^{1/3}+b_{jj}^{1/3}\right)}^{3}x_{i}x_{j}\end{aligned}}}
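A direct implementation of these quadratic mixing rules (function name and constants illustrative; the constants are roughly CO2- and methane-like) shows that the pure-component limit recovers the single-fluid constants:

```python
# Quadratic ("van der Waals one-fluid") mixing rules, with the combining rules
# a_ij = sqrt(a_ii a_jj) and b_ij^{1/3} = (b_ii^{1/3} + b_jj^{1/3})/2 built in.
import math

def mix(a, b, x):
    n = len(x)
    a_x = sum(math.sqrt(a[i] * a[j]) * x[i] * x[j]
              for i in range(n) for j in range(n))
    b_x = sum((b[i] ** (1 / 3) + b[j] ** (1 / 3)) ** 3 * x[i] * x[j]
              for i in range(n) for j in range(n)) / 8
    return a_x, b_x

# pure-component limit: x = (1, 0) returns the constants of component 1
print(mix([0.364, 0.1382], [4.27e-5, 3.19e-5], [1.0, 0.0]))
```

Note that a x {\displaystyle a_{x}} also equals ( ∑ a i i 1 / 2 x i ) 2 {\displaystyle (\sum a_{ii}^{1/2}x_{i})^{2}} , as in the closed form quoted above.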
These expressions are used when mixing gases in proportion, such as when producing tanks of air for diving [ 107 ] and when managing the behavior of fluid mixtures in engineering applications. However, more sophisticated mixing rules are often necessary to obtain satisfactory agreement with reality over the wide variety of mixtures encountered in practice. [ 108 ] [ 109 ]
Another method of specifying the vdW constants, pioneered by W.B. Kay and known as Kay's rule , [ 110 ] specifies the effective critical temperature and pressure of the fluid mixture by T c x = ∑ i = 1 n T c i x i and p c x = ∑ i = 1 n p c i x i . {\displaystyle T_{{\text{c}}x}=\sum _{i=1}^{n}T_{{\text{c}}i}x_{i}\qquad {\text{and}}\qquad p_{{\text{c}}x}=\sum _{i=1}^{n}\,p_{{\text{c}}i}x_{i}.}
In terms of these quantities, the vdW mixture constants are a x = ( 3 4 ) 3 ( R T c x ) 2 p c x , b x = ( 1 2 ) 3 R T c x p c x {\displaystyle a_{x}=\left({\frac {3}{4}}\right)^{3}{\frac {(RT_{{\text{c}}x})^{2}}{p_{{\text{c}}x}}},\qquad \qquad b_{x}=\left({\frac {1}{2}}\right)^{3}{\frac {RT_{{\text{c}}x}}{p_{{\text{c}}x}}}} which Kay used as the basis for calculations of the thermodynamic properties of mixtures. Kay's idea was adopted by T. W. Leland, who applied it to the molecular parameters ϵ , σ {\displaystyle \epsilon ,\sigma } , which are related to a , b {\displaystyle a,b} through T c , p c {\displaystyle T_{\text{c}},p_{\text{c}}} by a ∝ ϵ σ 3 {\displaystyle a\propto \epsilon \sigma ^{3}} and b ∝ σ 3 {\displaystyle b\propto \sigma ^{3}} . Using these together with the quadratic mixing rules for a , b {\displaystyle a,b} produces σ x 3 = ∑ i = 1 n ∑ j = 1 n σ i j 3 x i x j and ϵ x = [ ∑ i = 1 n ∑ j = 1 n ϵ i j σ i j 3 x i x j ] [ ∑ i = 1 n ∑ j = 1 n σ i j 3 x i x j ] − 1 {\displaystyle \sigma _{x}^{3}=\sum _{i=1}^{n}\sum _{j=1}^{n}\,\sigma _{ij}^{3}x_{i}x_{j}\qquad {\text{and}}\qquad \epsilon _{x}=\left[\sum _{i=1}^{n}\sum _{j=1}^{n}\epsilon _{ij}\sigma _{ij}^{3}x_{i}x_{j}\right]\left[\sum _{i=1}^{n}\sum _{j=1}^{n}\,\sigma _{ij}^{3}x_{i}x_{j}\right]^{-1}} which is the van der Waals approximation expressed in terms of the intermolecular constants. [ 111 ] [ 112 ] This approximation, when compared with computer simulations for mixtures, is in good agreement over the range 1 / 2 < ( σ 11 / σ 22 ) 3 < 2 {\displaystyle 1/2<(\sigma _{11}/\sigma _{22})^{3}<2} , namely for molecules of similar diameters. Rowlinson said of this approximation, "It was, and indeed still is, hard to improve on the original van der Waals recipe when expressed in [this] form". [ 113 ]
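A minimal sketch of Kay's rule follows; the critical constants used are approximate textbook values for N2 and O2 (illustrative, not authoritative data):

```python
# Kay's rule: pseudo-critical constants of a mixture as mole-fraction
# averages, then the vdW constants computed from them.
R = 8.314  # molar gas constant, J/(mol K)

def kay_constants(Tc, pc, x):
    """Tc, pc: component critical temperatures (K) and pressures (Pa); x: mole fractions."""
    Tcx = sum(t * xi for t, xi in zip(Tc, x))
    pcx = sum(p * xi for p, xi in zip(pc, x))
    a_x = (3 / 4) ** 3 * (R * Tcx) ** 2 / pcx   # = (27/64) (R Tcx)^2 / pcx
    b_x = (1 / 2) ** 3 * R * Tcx / pcx          # = R Tcx / (8 pcx)
    return a_x, b_x

# Air-like mixture: 79% N2 (Tc ~126.2 K, pc ~3.39 MPa), 21% O2 (Tc ~154.6 K, pc ~5.04 MPa).
a_x, b_x = kay_constants([126.2, 154.6], [3.39e6, 5.04e6], [0.79, 0.21])
```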
Since van der Waals presented his thesis, "[m]any derivations, pseudo-derivations, and plausibility arguments have been given" for it. [ 114 ] However, no mathematically rigorous derivation of the equation over its entire range of molar volume that begins from a statistical mechanical principle exists. Indeed, such a proof is not possible, even for hard spheres. [ 115 ] [ 116 ] [ 117 ] [ 118 ] [ 119 ] Goodstein writes, "Obviously the value of the van der Waals equation rests principally on its empirical behavior rather than its theoretical foundation." [ 7 ]
Although the use of the vdW equation is not justified mathematically, it has empirical validity; its various applications, both qualitative and quantitative, that attest to this have been described previously in this article. This point was also made by Alder et al., who, at a conference marking the 100th anniversary of van der Waals' thesis, noted that: [ 120 ]
It is doubtful whether we would celebrate the centennial of the Van der Waals equation if it were applicable only under circumstances where it has been proven to be rigorously valid. It is empirically well established that many systems whose molecules have attractive potentials that are neither long-range nor weak conform nearly quantitatively to the Van der Waals model. An example is the theoretically much studied system of Argon, where the attractive potential has only a range half as large as the repulsive core.
They continued by saying that this model has "validity down to temperatures below the critical temperature, where the attractive potential is not weak at all but, in fact, comparable to the thermal energy." They also described its application to mixtures "where the Van der Waals model has also been applied with great success. In fact, its success has been so great that not a single other model of the many proposed since, has equalled its quantitative predictions, [ 121 ] let alone its simplicity." [ 122 ]
Engineers have made extensive use of this empirical validity, modifying the equation in numerous ways (by one account there have been some 400 cubic equations of state produced) [ 123 ] to manage the liquids [ 124 ] and gases of pure substances and mixtures [ 125 ] that they encounter in practice.
This situation has been aptly described by Boltzmann: [ 126 ]
... van der Waals has given us such a valuable tool that it would cost us much trouble to obtain by the subtlest deliberations a formula that would really be more useful than the one that van der Waals found by inspiration, as it were.
|
https://en.wikipedia.org/wiki/Van_der_Waals_equation
|
In molecular physics and chemistry , the van der Waals force (sometimes van der Waals' force ) is a distance-dependent interaction between atoms or molecules . Unlike ionic or covalent bonds , these attractions do not result from a chemical electronic bond ; [ 2 ] they are comparatively weak and therefore more susceptible to disturbance. The van der Waals force quickly vanishes at longer distances between interacting molecules.
Named after Dutch physicist Johannes Diderik van der Waals , the van der Waals force plays a fundamental role in fields as diverse as supramolecular chemistry , structural biology , polymer science , nanotechnology , surface science , and condensed matter physics . It also underlies many properties of organic compounds and molecular solids , including their solubility in polar and non-polar media.
If no other force is present, the distance between atoms at which the force becomes repulsive rather than attractive as the atoms approach one another is called the van der Waals contact distance ; this phenomenon results from the mutual repulsion between the atoms' electron clouds . [ 3 ]
The van der Waals forces [ 4 ] are usually described as a combination of the London dispersion forces between "instantaneously induced dipoles ", [ 5 ] Debye forces between permanent dipoles and induced dipoles, and the Keesom force between permanent molecular dipoles whose rotational orientations are dynamically averaged over time.
Van der Waals forces include attraction and repulsions between atoms , molecules , as well as other intermolecular forces . They differ from covalent and ionic bonding in that they are caused by correlations in the fluctuating polarizations of nearby particles (a consequence of quantum dynamics [ 6 ] ).
The force results from a transient shift in electron density . Specifically, the electron density may temporarily shift to be greater on one side of the nucleus. This shift generates a transient charge which a nearby atom can be attracted to or repelled by. The force is repulsive at very short distances, reaches zero at an equilibrium distance characteristic for each atom, or molecule, and becomes attractive for distances larger than the equilibrium distance. For individual atoms, the equilibrium distance is between 0.3 nm and 0.5 nm, depending on the atomic-specific diameter. [ 7 ] When the interatomic distance is greater than 1.0 nm the force is not strong enough to be easily observed as it decreases as a function of distance r approximately with the 7th power (~ r −7 ). [ 8 ]
Van der Waals forces are often among the weakest chemical forces. For example, the pairwise attractive van der Waals interaction energy between H ( hydrogen ) atoms in different H 2 molecules equals 0.06 kJ/mol (0.6 meV) and the pairwise attractive interaction energy between O ( oxygen ) atoms in different O 2 molecules equals 0.44 kJ/mol (4.6 meV). [ 9 ] The corresponding vaporization energies of H 2 and O 2 molecular liquids, which result as a sum of all van der Waals interactions per molecule in the molecular liquids, amount to 0.90 kJ/mol (9.3 meV) and 6.82 kJ/mol (70.7 meV), respectively, and thus approximately 15 times the value of the individual pairwise interatomic interactions (excluding covalent bonds ).
The strength of van der Waals bonds increases with higher polarizability of the participating atoms. [ 10 ] For example, the pairwise van der Waals interaction energy for more polarizable atoms such as S ( sulfur ) atoms in H 2 S and sulfides exceeds 1 kJ/mol (10 meV), and the pairwise interaction energy between even larger, more polarizable Xe ( xenon ) atoms is 2.35 kJ/mol (24.3 meV). [ 11 ] These van der Waals interactions are up to 40 times stronger than in H 2 , which has only one valence electron, and they are still not strong enough to achieve an aggregate state other than gas for Xe under standard conditions. The interactions between atoms in metals can also be effectively described as van der Waals interactions and account for the observed solid aggregate state with bonding strengths comparable to covalent and ionic interactions. The strength of pairwise van der Waals type interactions is on the order of 12 kJ/mol (120 meV) for low-melting Pb ( lead ) and on the order of 32 kJ/mol (330 meV) for high-melting Pt ( platinum ), which is about one order of magnitude stronger than in Xe due to the presence of a highly polarizable free electron gas . [ 12 ] Accordingly, van der Waals forces can range from weak to strong interactions, and support integral structural loads when multitudes of such interactions are present.
More broadly, intermolecular forces have several possible contributions. They are ordered from strongest to weakest:
1. A repulsive component resulting from the Pauli exclusion principle, which prevents the close approach of molecules.
2. Attractive or repulsive electrostatic interactions between permanent charges, dipoles, quadrupoles, and, in general, permanent multipoles; this contribution is sometimes called the Keesom interaction, after Willem Hendrik Keesom.
3. Induction (also termed polarization), the attractive interaction between a permanent multipole on one molecule and an induced multipole on another; this interaction is sometimes called the Debye force, after Peter J. W. Debye.
4. Dispersion (usually named the London dispersion interaction, after Fritz London), the attractive interaction between any pair of molecules, including non-polar atoms, arising from interactions of instantaneous multipoles.
How broadly the term "van der Waals" force is applied depends on the text. The broadest definitions include all intermolecular forces which are electrostatic in origin, namely (2), (3) and (4). [ 13 ] Some authors, whether or not they consider other forces to be of van der Waals type, focus on (3) and (4) as these are the components which act over the longest range. [ 14 ]
All intermolecular/van der Waals forces are anisotropic (except those between two noble gas atoms), which means that they depend on the relative orientation of the molecules. The induction and dispersion interactions are always attractive, irrespective of orientation, but the electrostatic interaction changes sign upon rotation of the molecules. That is, the electrostatic force can be attractive or repulsive, depending on the mutual orientation of the molecules. When molecules are in thermal motion, as they are in the gas and liquid phase, the electrostatic force is averaged out to a large extent because the molecules thermally rotate and thus probe both repulsive and attractive parts of the electrostatic force. Random thermal motion can disrupt or overcome the electrostatic component of the van der Waals force but the averaging effect is much less pronounced for the attractive induction and dispersion forces.
The Lennard-Jones potential is often used as an approximate model for the isotropic part of a total (repulsion plus attraction) van der Waals force as a function of distance.
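A minimal sketch of the Lennard-Jones 12-6 form often used for this purpose; the well depth and sigma below are illustrative, argon-like values rather than fitted parameters:

```python
# Lennard-Jones 12-6 potential: a steep r^-12 repulsive core plus an
# attractive r^-6 tail, as an isotropic model of the vdW interaction.
def lennard_jones(r, epsilon, sigma):
    """U(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / r) ** 6
    return 4 * epsilon * (sr6 ** 2 - sr6)

sigma, epsilon = 0.34, 1.0        # sigma in nm (argon-like); epsilon in arbitrary units
# The potential crosses zero at r = sigma and has its minimum, of depth
# -epsilon, at r_min = 2^(1/6) * sigma.
r_min = 2 ** (1 / 6) * sigma
```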
Van der Waals forces are responsible for certain cases of pressure broadening ( van der Waals broadening ) of spectral lines and the formation of van der Waals molecules . The London–van der Waals forces are related to the Casimir effect for dielectric media, the former being the microscopic description of the latter bulk property. The first detailed calculations of this were done in 1955 by E. M. Lifshitz . [ 15 ] [ 16 ] A more general theory of van der Waals forces has also been developed. [ 17 ] [ 18 ]
The main characteristics of van der Waals forces are: [ 19 ]
In low molecular weight alcohols, the hydrogen-bonding properties of their polar hydroxyl group dominate other weaker van der Waals interactions. In higher molecular weight alcohols, the properties of the nonpolar hydrocarbon chain(s) dominate and determine their solubility.
Van der Waals forces are also responsible for the weak hydrogen bond interactions between unpolarized dipoles particularly in acid-base aqueous solution and between biological molecules .
London dispersion forces , named after the German-American physicist Fritz London , are weak intermolecular forces that arise from the interactive forces between instantaneous multipoles in molecules without permanent multipole moments . In and between organic molecules, the multitude of contacts can lead to a larger contribution of dispersive attraction, particularly in the presence of heteroatoms. London dispersion forces are also known as ' dispersion forces', 'London forces', or 'instantaneous dipole–induced dipole forces'. The strength of London dispersion forces is proportional to the polarizability of the molecule, which in turn depends on the total number of electrons and the area over which they are spread. Hydrocarbons display small dispersive contributions; the presence of heteroatoms leads to increased LD forces as a function of their polarizability, e.g. in the sequence RI > RBr > RCl > RF. [ 20 ] In the absence of solvents, weakly polarizable hydrocarbons form crystals due to dispersive forces; their sublimation heat is a measure of the dispersive interaction.
For macroscopic bodies with known volumes and numbers of atoms or molecules per unit volume, the total van der Waals force is often computed based on the "microscopic theory" as the sum over all interacting pairs. It is necessary to integrate over the total volume of the object, which makes the calculation dependent on the objects' shapes. For example, the van der Waals interaction energy between spherical bodies of radii R 1 and R 2 and with smooth surfaces was approximated in 1937 by Hamaker [ 21 ] [ full citation needed ] (using London's famous 1937 equation for the dispersion interaction energy between atoms/molecules [ 22 ] [ full citation needed ] as the starting point) by: {\displaystyle U(z)=-{\frac {A}{6}}\left({\frac {2R_{1}R_{2}}{z^{2}-(R_{1}+R_{2})^{2}}}+{\frac {2R_{1}R_{2}}{z^{2}-(R_{1}-R_{2})^{2}}}+\ln \left[{\frac {z^{2}-(R_{1}+R_{2})^{2}}{z^{2}-(R_{1}-R_{2})^{2}}}\right]\right)\qquad (1)}
where A is the Hamaker coefficient , which is a constant (~10 −19 − 10 −20 J) that depends on the material properties (it can be positive or negative in sign depending on the intervening medium), and z is the center-to-center distance; i.e., the sum of R 1 , R 2 , and r (the distance between the surfaces): z = R 1 + R 2 + r {\displaystyle \ z=R_{1}+R_{2}+r} .
The van der Waals force between two spheres of constant radii ( R 1 and R 2 are treated as parameters) is then a function of separation since the force on an object is the negative of the derivative of the potential energy function, F V d W ( z ) = − d d z U ( z ) {\displaystyle \ F_{\rm {VdW}}(z)=-{\frac {d}{dz}}U(z)} . This yields:
In the limit of close-approach, the spheres are sufficiently large compared to the distance between them; i.e., r ≪ R 1 {\displaystyle \ r\ll R_{1}} or R 2 {\displaystyle R_{2}} , so that equation (1) for the potential energy function simplifies to: {\displaystyle \ U(r)=-{\frac {AR_{1}R_{2}}{(R_{1}+R_{2})6r}}}
with the force: {\displaystyle \ F_{\rm {VdW}}(r)=-{\frac {AR_{1}R_{2}}{(R_{1}+R_{2})6r^{2}}}}
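Assuming the standard close-approach Hamaker result, U(r) = −A·R1·R2/(6(R1+R2)r) with the force as its negative derivative, a small numerical sketch (the Hamaker coefficient here is a typical order of magnitude, not a material-specific value):

```python
# Close-approach Hamaker expressions for two smooth spheres, r << R1, R2.

def hamaker_energy(A, R1, R2, r):
    """U(r) = -A R1 R2 / (6 (R1 + R2) r), in joules for SI inputs."""
    return -A * R1 * R2 / (6 * (R1 + R2) * r)

def hamaker_force(A, R1, R2, r):
    """F(r) = -dU/dr; the negative sign indicates attraction."""
    return -A * R1 * R2 / (6 * (R1 + R2) * r ** 2)

# Two 1-micron spheres 1 nm apart, with A ~ 1e-19 J:
F = hamaker_force(1e-19, 1e-6, 1e-6, 1e-9)   # attractive, on the order of nN
```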
The van der Waals forces between objects with other geometries using the Hamaker model have been published in the literature. [ 23 ] [ 24 ] [ 25 ]
From the expression above, it is seen that the van der Waals force decreases with decreasing size of bodies (R). Nevertheless, the strength of inertial forces, such as gravity and drag/lift, decreases to a greater extent. Consequently, the van der Waals forces become dominant for collections of very small particles such as very fine-grained dry powders (where there are no capillary forces present), even though the force of attraction is smaller in magnitude than it is for larger particles of the same substance. Such powders are said to be cohesive, meaning they are not as easily fluidized or pneumatically conveyed as their more coarse-grained counterparts. Generally, free flow occurs with particles greater than about 250 μm.
The van der Waals force of adhesion is also dependent on the surface topography. If there are surface asperities, or protuberances, that result in a greater total area of contact between two particles or between a particle and a wall, this increases the van der Waals force of attraction as well as the tendency for mechanical interlocking.
The microscopic theory assumes pairwise additivity. It neglects many-body interactions and retardation . A more rigorous approach accounting for these effects, called the " macroscopic theory ", was developed by Lifshitz in 1956. [ 26 ] [ full citation needed ] Langbein derived a much more cumbersome "exact" expression in 1970 for spherical bodies within the framework of the Lifshitz theory [ 27 ] [ full citation needed ] while a simpler macroscopic model approximation had been made by Derjaguin as early as 1934. [ 28 ] [ full citation needed ] Expressions for the van der Waals forces for many different geometries using the Lifshitz theory have likewise been published.
The ability of geckos – which can hang on a glass surface using only one toe – to climb on sheer surfaces has been for many years mainly attributed to the van der Waals forces between these surfaces and the spatulae , or microscopic projections, which cover the hair-like setae found on their footpads. [ 29 ] [ 30 ]
There were efforts in 2008 to create a dry glue that exploits the effect, [ 31 ] and success was achieved in 2011 in creating an adhesive tape on similar grounds [ 32 ] (i.e. based on van der Waals forces). In 2011, a paper was published relating the effect to both velcro-like hairs and the presence of lipids in gecko footprints. [ 33 ]
A later study suggested that capillary adhesion might play a role, [ 34 ] but that hypothesis has been rejected by more recent studies. [ 35 ] [ 36 ] [ 37 ]
A 2014 study has shown that gecko adhesion to smooth Teflon and polydimethylsiloxane surfaces is mainly determined by electrostatic interaction (caused by contact electrification ), not van der Waals or capillary forces. [ 38 ]
Among the arthropods , some spiders have similar setae on their scopulae or scopula pads, enabling them to climb or hang upside-down from extremely smooth surfaces such as glass or porcelain. [ 39 ] [ 40 ]
|
https://en.wikipedia.org/wiki/Van_der_Waals_force
|
van der Waals integration is a physical assembly strategy, in which prefabricated building blocks are physically assembled together through weak van der Waals interactions . [ 1 ] This concept was originally proposed in the two-dimensional materials research community to construct 2D van der Waals heterostructures . [ 2 ] [ 3 ]
A key benefit of van der Waals integration is that it offers an alternative way to integrate highly disparate material systems with unprecedented degrees of freedom, regardless of their crystal structures , lattice parameters , or orientation. [ 3 ] As this physical assembly method does not involve one-to-one chemical bonds between adjacent layers, [ 1 ] the van der Waals integration approach can enable the creation of a wide spectrum of artificial van der Waals heterostructures and novel moiré superlattices through layer transfer. [ 3 ] Highly disparate material systems with diverse functionalities can be integrated together with atomically clean and electronically sharp interfaces, [ 1 ] eliminating the rigorous lattice-matching and process-compatibility requirements that apply to epitaxy. [ 3 ] This approach has proven fruitful in 2D photonics, [ 4 ] polariton physics, [ 1 ] [ 5 ] hetero-integrated photonics, [ 3 ] and wearable optoelectronic applications. [ 1 ] [ 6 ]
|
https://en.wikipedia.org/wiki/Van_der_Waals_integration
|
A Van der Waals molecule is a weakly bound complex of atoms or molecules held together by intermolecular attractions such as Van der Waals forces or by hydrogen bonds . [ 1 ] The name originated in the beginning of the 1970s when stable molecular clusters were regularly observed in molecular beam microwave spectroscopy .
Examples of well-studied vdW molecules are Ar 2 , H 2 -Ar, H 2 O-Ar, benzene-Ar, (H 2 O) 2 , and (HF) 2 .
Others include the largest diatomic molecule He 2 , and LiHe . [ 2 ] [ 3 ]
A notable example is the He-HCN complex, studied for its large amplitude motions and the applicability of the adiabatic approximation in separating its angular and radial motions. Research has shown that even in such 'floppy' systems, the adiabatic approximation can be effectively utilized to simplify quantum mechanical analyses.
In (supersonic) molecular beams , temperatures are very low (usually less than 5 K). At these low temperatures, Van der Waals (vdW) molecules are stable and can be investigated by microwave, far-infrared, and other modes of spectroscopy. [ 4 ] VdW molecules are also formed in cold equilibrium gases, albeit in small, temperature-dependent concentrations. Rotational and vibrational transitions in vdW molecules have been observed in gases, mainly by UV and IR spectroscopy.
Van der Waals molecules are usually very non-rigid and different versions are separated by low energy barriers, so that tunneling splittings, observable in far-infrared spectra, are relatively large. [ 5 ] Thus, in the far-infrared one may observe intermolecular vibrations, rotations, and tunneling motions of Van der Waals molecules.
The VRT spectroscopic study of Van der Waals molecules is one of the most direct routes to the understanding of intermolecular forces . [ 6 ]
In the study of helium-containing van der Waals complexes, the adiabatic or Born–Oppenheimer approximation has been adapted to separate angular and radial motions. Despite the challenges posed by the weak interactions leading to large amplitude motions, research demonstrates that this approximation can still be valid, offering a quicker computational method for Diffusion Monte Carlo studies of molecular rotation within ultra-cold helium droplets. The non-rigid nature of these complexes, especially those containing helium, complicates traditional quantum mechanical approaches. However, recent studies have validated the use of the adiabatic approximation for separating different types of molecular motion, even in these 'floppy' systems.
|
https://en.wikipedia.org/wiki/Van_der_Waals_molecule
|
The van der Waals radius , r w , of an atom is the radius of an imaginary hard sphere representing the distance of closest approach for another atom.
It is named after Johannes Diderik van der Waals , winner of the 1910 Nobel Prize in Physics , as he was the first to recognise that atoms were not simply points and to demonstrate the physical consequences of their size through the van der Waals equation of state .
The van der Waals volume , V w , also called the atomic volume or molecular volume , is the atomic property most directly related to the van der Waals radius. [ 3 ] It is the volume "occupied" by an individual atom (or molecule). The van der Waals volume may be calculated if the van der Waals radii (and, for molecules, the inter-atomic distances, and angles) are known. For a single atom, it is the volume of a sphere whose radius is the van der Waals radius of the atom: V w = 4 3 π r w 3 . {\displaystyle V_{\rm {w}}={4 \over 3}\pi r_{\rm {w}}^{3}.}
For a molecule, it is the volume enclosed by the van der Waals surface .
The van der Waals volume of a molecule is always smaller than the sum of the van der Waals volumes of the constituent atoms: the atoms can be said to "overlap" when they form chemical bonds .
The van der Waals volume of an atom or molecule may also be determined by experimental measurements on gases, notably from the van der Waals constant b , the polarizability α , or the molar refractivity A .
In all three cases, measurements are made on macroscopic samples and it is normal to express the results as molar quantities.
To find the van der Waals volume of a single atom or molecule, it is necessary to divide by the Avogadro constant N A .
The molar van der Waals volume should not be confused with the molar volume of the substance.
In general, at normal laboratory temperatures and pressures, the atoms or molecules of a gas occupy only about 1 ⁄ 1000 of the volume of the gas; the rest is empty space.
Hence the molar van der Waals volume, which only counts the volume occupied by the atoms or molecules, is usually about 1000 times smaller than the molar volume for a gas at standard temperature and pressure .
Van der Waals radii may be determined from the mechanical properties of gases (the original method), from the critical point , from measurements of atomic spacing between pairs of unbonded atoms in crystals or from measurements of electrical or optical properties (the polarizability and the molar refractivity ).
These various methods give values for the van der Waals radius which are similar (1–2 Å , 100–200 pm ) but not identical.
Tabulated values of van der Waals radii are obtained by taking a weighted mean of a number of different experimental values, and, for this reason, different tables will often have different values for the van der Waals radius of the same atom.
Indeed, there is no reason to assume that the van der Waals radius is a fixed property of the atom in all circumstances: rather, it tends to vary with the particular chemical environment of the atom in any given case. [ 2 ]
The van der Waals equation of state is the simplest and best-known modification of the ideal gas law to account for the behaviour of real gases : ( p + a ( n V ~ ) 2 ) ( V ~ − n b ) = n R T , {\displaystyle \left(p+a\left({\frac {n}{\tilde {V}}}\right)^{2}\right)({\tilde {V}}-nb)=nRT,} where p is pressure, n is the number of moles of the gas in question and a and b depend on the particular gas, V ~ {\displaystyle {\tilde {V}}} is the volume, R is the molar gas constant and T the absolute temperature; a is a correction for intermolecular forces and b corrects for finite atomic or molecular sizes; the value of b equals the van der Waals volume per mole of the gas. Their values vary from gas to gas.
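The equation can be rearranged to give pressure explicitly, which is easy to sketch in code; the a and b values in the example are approximate, CO2-like constants used only for illustration:

```python
R = 8.314  # molar gas constant, J/(mol K)

# Pressure-explicit form of the van der Waals equation of state.
def vdw_pressure(n, V, T, a, b):
    """p = nRT/(V - nb) - a (n/V)^2, in Pa for SI inputs."""
    return n * R * T / (V - n * b) - a * (n / V) ** 2

# One mole in one litre at 300 K, with CO2-like a ~ 0.364 Pa m^6/mol^2
# and b ~ 4.27e-5 m^3/mol:
p = vdw_pressure(n=1.0, V=1e-3, T=300.0, a=0.364, b=4.27e-5)
p_ideal = 1.0 * R * 300.0 / 1e-3
```

At this density the attractive correction outweighs the excluded-volume correction, so the vdW pressure comes out below the ideal-gas value.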
The van der Waals equation also has a microscopic interpretation: molecules interact with one another.
The interaction is strongly repulsive at a very short distance, becomes mildly attractive at the intermediate range, and vanishes at a long distance.
The ideal gas law must be corrected when attractive and repulsive forces are considered.
For example, the mutual repulsion between molecules has the effect of excluding neighbors from a certain amount of space around each molecule.
Thus, a fraction of the total space becomes unavailable to each molecule as it executes random motion.
In the equation of state, this volume of exclusion ( nb ) should be subtracted from the volume of the container ( V ), thus: ( V - nb ).
The other term that is introduced in the van der Waals equation, a ( n V ~ ) 2 {\textstyle a\left({\frac {n}{\tilde {V}}}\right)^{2}} , describes a weak attractive force among molecules (known as the van der Waals force ), which increases when n increases or V decreases and molecules become more crowded together.
The van der Waals constant b can be used to calculate the van der Waals volume of an atom or molecule with experimental data derived from measurements on gases.
For helium , [ 6 ] b = 23.7 cm 3 /mol. Helium is a monatomic gas , and each mole of helium contains 6.022 × 10 23 atoms (the Avogadro constant , N A ): V w = b N A {\displaystyle V_{\rm {w}}={b \over {N_{\rm {A}}}}} Therefore, the van der Waals volume of a single atom V w = 39.36 Å 3 , which corresponds to r w = 2.11 Å (≈ 200 picometers).
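The helium calculation in this paragraph can be reproduced in a few lines:

```python
import math

# Van der Waals volume and radius of helium from the gas constant b.
N_A = 6.022e23
b = 23.7e-6                                   # m^3/mol (helium, from the text)

V_w = b / N_A                                 # volume per atom, m^3
r_w = (3 * V_w / (4 * math.pi)) ** (1 / 3)    # sphere radius, m

V_w_cubic_angstrom = V_w * 1e30               # ~39.4 cubic angstroms
r_w_angstrom = r_w * 1e10                     # ~2.11 angstroms
```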
This method may be extended to diatomic gases by approximating the molecule as a rod with rounded ends where the diameter is 2 r w and the internuclear distance is d .
The algebra is more complicated, but the relation V w = 4 3 π r w 3 + π r w 2 d {\displaystyle V_{\rm {w}}={4 \over 3}\pi r_{\rm {w}}^{3}+\pi r_{\rm {w}}^{2}d} can be solved by the normal methods for cubic functions .
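Since the resulting cubic in r w has a single physically meaningful positive root, a simple bisection also suffices; the radius and internuclear distance used in the check below are illustrative numbers, not data for any particular gas:

```python
import math

# Solve (4/3) pi r^3 + pi r^2 d = V_w for r. The left side is strictly
# increasing in r > 0, so bisection on a bracketing interval converges
# to the unique positive root.
def rod_radius(V_w, d, lo=1e-12, hi=1e-9):
    f = lambda r: (4 / 3) * math.pi * r ** 3 + math.pi * r ** 2 * d - V_w
    for _ in range(100):        # interval halves each step: f(lo) < 0 < f(hi)
        mid = (lo + hi) / 2
        if f(mid) >= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```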
The molecules in a molecular crystal are held together by van der Waals forces rather than chemical bonds .
In principle, the closest that two atoms belonging to different molecules can approach one another is given by the sum of their van der Waals radii.
By examining a large number of structures of molecular crystals, it is possible to find a minimum radius for each type of atom such that other non-bonded atoms do not encroach any closer.
This approach was first used by Linus Pauling in his seminal work The Nature of the Chemical Bond . [ 7 ] Arnold Bondi also conducted a study of this type, published in 1964, [ 2 ] although he also considered other methods of determining the van der Waals radius in coming to his final estimates.
Some of Bondi's figures are given in the table at the top of this article, and they remain the most widely used "consensus" values for the van der Waals radii of the elements.
Scott Rowland and Robin Taylor re-examined these 1964 figures in the light of more recent crystallographic data: on the whole, the agreement was very good, although they recommend a value of 1.09 Å for the van der Waals radius of hydrogen as opposed to Bondi's 1.20 Å. [ 1 ] A more recent analysis of the Cambridge Structural Database , carried out by Santiago Alvarez, provided a new set of values for 93 naturally occurring elements. [ 8 ] The values of different authors are sometimes very different, so one has to choose those closest in physical meaning to the quantity being compared. Here is a table with entries from four different authors; the values of Bondi from 1966 are those mostly used in crystallography:
[Table: van der Waals radii by atomic number, comparing values from four compilations dated 1966, 2001, 2009, and 2014]
A simple example of the use of crystallographic data (here neutron diffraction ) is to consider the case of solid helium, where the atoms are held together only by van der Waals forces (rather than by covalent or metallic bonds ) and so the distance between the nuclei can be considered to be equal to twice the van der Waals radius.
The density of solid helium at 1.1 K and 66 atm is 0.214(6) g/cm 3 , [ 11 ] corresponding to a molar volume V m = 18.7 × 10 −6 m 3 /mol .
The van der Waals volume is given by V w = π V m N A 18 {\displaystyle V_{\rm {w}}={\frac {\pi V_{\rm {m}}}{N_{\rm {A}}{\sqrt {18}}}}} where the factor of π/√18 arises from the packing of spheres : V w = 2.30 × 10 −29 m 3 = 23.0 Å 3 , corresponding to a van der Waals radius r w = 1.76 Å.
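The solid-helium estimate can be checked numerically; close-packed spheres fill a fraction π/√18 (about 74%) of space, which is where the factor comes from:

```python
import math

# Van der Waals volume and radius of helium from the density of the solid.
N_A = 6.022e23
V_m = 18.7e-6                                  # molar volume of solid He, m^3/mol (from the text)

V_w = math.pi * V_m / (N_A * math.sqrt(18))    # occupied volume per atom, ~2.30e-29 m^3
r_w_angstrom = (3 * V_w / (4 * math.pi)) ** (1 / 3) * 1e10   # ~1.76 angstroms
```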
The molar refractivity A of a gas is related to its refractive index n by the Lorentz–Lorenz equation : A = R T ( n 2 − 1 ) 3 p {\displaystyle A={\frac {RT(n^{2}-1)}{3p}}} The refractive index of helium n = 1.000 0350 at 0 °C and 101.325 kPa, [ 12 ] which corresponds to a molar refractivity A = 5.23 × 10 −7 m 3 /mol .
Dividing by the Avogadro constant gives V w = 8.685 × 10 −31 m 3 = 0.8685 Å 3 , corresponding to r w = 0.59 Å.
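The refractivity route can be reproduced as:

```python
import math

# Lorentz-Lorenz molar refractivity of a dilute gas, then per-atom
# volume and radius for helium.
R, T, p = 8.314, 273.15, 101325.0
n_refr = 1.0000350                             # refractive index of helium (from the text)

A = R * T * (n_refr ** 2 - 1) / (3 * p)        # ~5.23e-7 m^3/mol
V_w = A / 6.022e23                             # ~8.69e-31 m^3 per atom
r_w_angstrom = (3 * V_w / (4 * math.pi)) ** (1 / 3) * 1e10   # ~0.59 angstroms
```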
The polarizability α of a gas is related to its electric susceptibility χ e by the relation α = ε 0 k B T p χ e {\displaystyle \alpha ={\varepsilon _{0}k_{\rm {B}}T \over p}\chi _{\rm {e}}} and the electric susceptibility may be calculated from tabulated values of the relative permittivity ε r using the relation χ e = ε r − 1.
The electric susceptibility of helium χ e = 7 × 10 −5 at 0 °C and 101.325 kPa, [ 13 ] which corresponds to a polarizability α = 2.307 × 10 −41 C⋅m 2 /V .
The polarizability is related to the van der Waals volume by the relation V w = 1 4 π ε 0 α , {\displaystyle V_{\rm {w}}={1 \over {4\pi \varepsilon _{0}}}\alpha ,} so the van der Waals volume of helium V w = 2.073 × 10 −31 m 3 = 0.2073 Å 3 by this method, corresponding to r w = 0.37 Å.
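The full susceptibility-to-radius chain for helium can be checked numerically:

```python
import math

# chi_e -> polarizability -> vdW volume -> vdW radius for helium.
eps0 = 8.854e-12    # vacuum permittivity, F/m
k_B = 1.381e-23     # Boltzmann constant, J/K
T, p = 273.15, 101325.0
chi_e = 7e-5        # electric susceptibility of helium (from the text)

alpha = eps0 * k_B * T / p * chi_e             # ~2.31e-41 C m^2/V
V_w = alpha / (4 * math.pi * eps0)             # ~2.07e-31 m^3
r_w_angstrom = (3 * V_w / (4 * math.pi)) ** (1 / 3) * 1e10   # ~0.37 angstroms
```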
When the atomic polarizability is quoted in units of volume such as Å 3 , as is often the case, it is equal to the van der Waals volume.
However, the term "atomic polarizability" is preferred as polarizability is a precisely defined (and measurable) physical quantity , whereas "van der Waals volume" can have any number of definitions depending on the method of measurement.
|
https://en.wikipedia.org/wiki/Van_der_Waals_radius
|
Van der Waals strain is strain resulting from Van der Waals repulsion, which occurs when two substituents in a molecule approach each other at a distance less than the sum of their Van der Waals radii .
Van der Waals strain is also called Van der Waals repulsion and is related to steric hindrance . [ 1 ] One of the most common forms of this strain is the eclipsing of hydrogens in alkanes .
In molecules whose vibrational mode involves a rotational or pseudorotational mechanism (such as the Berry mechanism or the Bartell mechanism ), [ 2 ] Van der Waals strain can cause significant differences in potential energy , even between molecules with identical geometry. PF 5 , for example, has significantly lower potential energy than PCl 5 .
Despite their identical trigonal bipyramidal molecular geometry , the higher electron count of chlorine as compared to fluorine causes a potential energy spike as the molecule enters its intermediate in the mechanism and the substituents draw nearer to each other.
https://en.wikipedia.org/wiki/Van_der_Waals_strain
The van der Waals surface of a molecule is an abstract representation or model of that molecule, indicating in rough terms where a surface might reside for the molecule based on the hard cutoffs of van der Waals radii for individual atoms; it represents a surface through which the molecule might be conceived as interacting with other molecules. Also referred to as a van der Waals envelope, the van der Waals surface is named for Johannes Diderik van der Waals , a Dutch theoretical physicist and thermodynamicist who developed a liquid–gas equation of state accounting for the non-zero volume of atoms and molecules and for the attractive force they exhibit when they interact (theoretical constructions that also bear his name).
Van der Waals surfaces are therefore a tool used in the abstract representation of molecules, whether calculated, as they originally were, by hand, built as physical wood or plastic models, or now generated digitally via computational chemistry software.
Practically speaking, CPK models , developed by and named for Robert Corey , Linus Pauling , and Walter Koltun , [ 3 ] were the first widely used physical molecular models based on van der Waals radii, and allowed broad pedagogical and research use of a model showing the van der Waals surfaces of molecules.
Related to the title concept are the ideas of a van der Waals volume , V_w, and a van der Waals surface area, abbreviated variously as A_w, vdWSA, VSA, and WSA. A van der Waals surface area is an abstract conception of the surface area of atoms or molecules from a mathematical estimation, either computed from first principles or by integration over a corresponding van der Waals volume.
In the simplest case, for a spherical monatomic gas, it is simply the computed surface area of a sphere of radius equal to the van der Waals radius of the gaseous atom: A_w = 4π r_w².
The van der Waals volume , a type of atomic or molecular volume, is a property directly related to the van der Waals radius , and is defined as the volume occupied by an individual atom, or in a combined sense, by all atoms of a molecule.
It may be calculated for atoms if the van der Waals radius is known, and for molecules if the atomic radii and the inter-atomic distances and angles are known.
As above, in the simplest case of a spherical monatomic gas, V_w is simply the computed volume of a sphere of radius equal to the van der Waals radius of the gaseous atom: V_w = (4/3)π r_w³.
For a molecule, V_w is the volume enclosed by the van der Waals surface ; hence, computing V_w presumes the ability to describe and compute a van der Waals surface. Van der Waals volumes of molecules are always smaller than the sum of the van der Waals volumes of their constituent atoms, because the interatomic distances resulting from chemical bonding are shorter than the sum of the atomic van der Waals radii.
In this sense, the van der Waals surface of a homonuclear diatomic molecule can be viewed as a pictorial overlap of the two spherical van der Waals surfaces of the individual atoms, and likewise for larger molecules such as methane and ammonia.
van der Waals radii and volumes may be determined from the mechanical properties of gases (the original method, determining the van der Waals constant ), from the critical point (e.g., of a fluid), from crystallographic measurements of the spacing between pairs of unbonded atoms in crystals, or from measurements of electrical or optical properties (i.e., polarizability or molar refractivity ).
In all cases, measurements are made on macroscopic samples and results are expressed as molar quantities. van der Waals volumes of a single atom or molecules are arrived at by dividing the macroscopically determined volumes by the Avogadro constant .
The various methods give similar but not identical radius values, generally in the range of 1–2 Å (100–200 pm ).
Useful tabulated values of van der Waals radii are obtained by taking a weighted mean of a number of different experimental values, and, for this reason, different tables will be seen to present different values for the van der Waals radius of the same atom.
As well, it has been argued that the van der Waals radius is not a fixed property of an atom in all circumstances, rather, that it will vary with the chemical environment of the atom. [ 2 ]
https://en.wikipedia.org/wiki/Van_der_Waals_surface
In linear algebra , the permanent of a square matrix is a function of the matrix similar to the determinant . The permanent, as well as the determinant, is a polynomial in the entries of the matrix. [ 1 ] Both are special cases of a more general function of a matrix called the immanant .
The permanent of an n × n matrix A = ( a i,j ) is defined as
\operatorname{perm}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)}.
The sum here extends over all elements σ of the symmetric group S n ; i.e. over all permutations of the numbers 1, 2, ..., n .
For example,
\operatorname{perm}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad + bc,
and
\operatorname{perm}\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = aei + bfg + cdh + ceg + bdi + afh.
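The definition translates directly into code. A minimal Python sketch (deliberately naive; the sum has n! terms, so this is only practical for very small matrices):

```python
from itertools import permutations
from math import prod

def permanent(matrix):
    """Permanent by direct summation over all n! permutations."""
    n = len(matrix)
    return sum(prod(matrix[i][sigma[i]] for i in range(n))
               for sigma in permutations(range(n)))

# The 2x2 formula perm = ad + bc, with a, b, c, d = 1, 2, 3, 4:
print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```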
The definition of the permanent of A differs from that of the determinant of A in that the signatures of the permutations are not taken into account.
The permanent of a matrix A is denoted per A , perm A , or Per A , sometimes with parentheses around the argument. Minc uses Per( A ) for the permanent of rectangular matrices, and per( A ) when A is a square matrix. [ 2 ] Muir and Metzler use a notation of vertical bars surmounted by plus signs. [ 3 ]
The word permanent originated with Cauchy in 1812 as "fonctions symétriques permanentes" for a related type of function, [ 4 ] and was used by Muir and Metzler [ 5 ] in the modern, more specific, sense. [ 6 ]
If one views the permanent as a map that takes n vectors (the rows of the matrix) as arguments, then it is a multilinear map and it is symmetric (meaning that any ordering of the vectors results in the same permanent). Furthermore, a number of elementary identities, analogous to those of the determinant, hold for a square matrix A = (a_{ij}) of order n . [ 7 ]
Laplace's expansion by minors for computing the determinant along a row, column or diagonal extends to the permanent by ignoring all signs. [ 9 ]
For every i ,

\operatorname{perm}(B) = \sum_{j=1}^{n} B_{i,j} M_{i,j},

where B_{i,j} is the entry of the i th row and the j th column of B , and M_{i,j} is the permanent of the submatrix obtained by removing the i th row and the j th column of B .
For example, expanding along the first column,
\operatorname{perm}\begin{pmatrix}1&1&1&1\\2&1&0&0\\3&0&1&0\\4&0&0&1\end{pmatrix} = 1\cdot\operatorname{perm}\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix} + 2\cdot\operatorname{perm}\begin{pmatrix}1&1&1\\0&1&0\\0&0&1\end{pmatrix} + 3\cdot\operatorname{perm}\begin{pmatrix}1&1&1\\1&0&0\\0&0&1\end{pmatrix} + 4\cdot\operatorname{perm}\begin{pmatrix}1&1&1\\1&0&0\\0&1&0\end{pmatrix} = 1(1) + 2(1) + 3(1) + 4(1) = 10,
while expanding along the last row gives,
\operatorname{perm}\begin{pmatrix}1&1&1&1\\2&1&0&0\\3&0&1&0\\4&0&0&1\end{pmatrix} = 4\cdot\operatorname{perm}\begin{pmatrix}1&1&1\\1&0&0\\0&1&0\end{pmatrix} + 0\cdot\operatorname{perm}\begin{pmatrix}1&1&1\\2&0&0\\3&1&0\end{pmatrix} + 0\cdot\operatorname{perm}\begin{pmatrix}1&1&1\\2&1&0\\3&0&0\end{pmatrix} + 1\cdot\operatorname{perm}\begin{pmatrix}1&1&1\\2&1&0\\3&0&1\end{pmatrix} = 4(1) + 0 + 0 + 1(6) = 10.
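The sign-free expansion can be sketched as a recursion; this version expands along the first row rather than the first column or last row used above, but yields the same value:

```python
def permanent_expansion(B):
    """Permanent via Laplace expansion along the first row (all signs +)."""
    n = len(B)
    if n == 1:
        return B[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j, then recurse
        minor = [row[:j] + row[j + 1:] for row in B[1:]]
        total += B[0][j] * permanent_expansion(minor)
    return total

M = [[1, 1, 1, 1],
     [2, 1, 0, 0],
     [3, 0, 1, 0],
     [4, 0, 0, 1]]
print(permanent_expansion(M))  # 10, agreeing with both expansions above
```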
On the other hand, the basic multiplicative property of determinants is not valid for permanents. [ 10 ] A simple example shows that this is so.
4 = \operatorname{perm}\begin{pmatrix}1&1\\1&1\end{pmatrix}\operatorname{perm}\begin{pmatrix}1&1\\1&1\end{pmatrix} \neq \operatorname{perm}\left(\begin{pmatrix}1&1\\1&1\end{pmatrix}\begin{pmatrix}1&1\\1&1\end{pmatrix}\right) = \operatorname{perm}\begin{pmatrix}2&2\\2&2\end{pmatrix} = 8.
Unlike the determinant, the permanent has no easy geometrical interpretation; it is mainly used in combinatorics , in treating boson Green's functions in quantum field theory , and in determining state probabilities of boson sampling systems. [ 11 ] However, it has two graph-theoretic interpretations: as the sum of weights of cycle covers of a directed graph , and as the sum of weights of perfect matchings in a bipartite graph .
The permanent arises naturally in the study of the symmetric tensor power of Hilbert spaces . [ 12 ] In particular, for a Hilbert space H , let \vee^k H denote the k th symmetric tensor power of H , which is the space of symmetric tensors . Note in particular that \vee^k H is spanned by the symmetric products of elements in H . For x_1, x_2, \dots, x_k \in H , we define the symmetric product of these elements by

x_1 \vee x_2 \vee \cdots \vee x_k = (k!)^{-1/2} \sum_{\sigma \in S_k} x_{\sigma(1)} \otimes x_{\sigma(2)} \otimes \cdots \otimes x_{\sigma(k)}.

If we consider \vee^k H (as a subspace of \otimes^k H , the k th tensor power of H ) and define the inner product on \vee^k H accordingly, we find that for x_j, y_j \in H

\langle x_1 \vee x_2 \vee \cdots \vee x_k,\; y_1 \vee y_2 \vee \cdots \vee y_k \rangle = \operatorname{perm}\left[\langle x_i, y_j \rangle\right]_{i,j=1}^{k}.

Applying the Cauchy–Schwarz inequality , we find that \operatorname{perm}\left[\langle x_i, x_j \rangle\right]_{i,j=1}^{k} \geq 0, and that

\left|\operatorname{perm}\left[\langle x_i, y_j \rangle\right]_{i,j=1}^{k}\right|^2 \leq \operatorname{perm}\left[\langle x_i, x_j \rangle\right]_{i,j=1}^{k} \cdot \operatorname{perm}\left[\langle y_i, y_j \rangle\right]_{i,j=1}^{k}.
Any square matrix A = (a_{ij})_{i,j=1}^{n} can be viewed as the adjacency matrix of a weighted directed graph on vertex set V = \{1, 2, \dots, n\}, with a_{ij} representing the weight of the arc from vertex i to vertex j .
A cycle cover of a weighted directed graph is a collection of vertex-disjoint directed cycles in the digraph that covers all vertices in the graph. Thus, each vertex i in the digraph has a unique "successor" σ(i) in the cycle cover, and so σ represents a permutation on V . Conversely, any permutation σ on V corresponds to a cycle cover with arcs from each vertex i to vertex σ(i).
If the weight of a cycle cover is defined to be the product of the weights of the arcs in each cycle, then

\operatorname{weight}(\sigma) = \prod_{i=1}^{n} a_{i,\sigma(i)},

implying that

\operatorname{perm}(A) = \sum_{\sigma} \operatorname{weight}(\sigma).

Thus the permanent of A is equal to the sum of the weights of all cycle covers of the digraph.
A square matrix A = (a_{ij}) can also be viewed as the adjacency matrix of a bipartite graph which has vertices x_1, x_2, \dots, x_n on one side and y_1, y_2, \dots, y_n on the other side, with a_{ij} representing the weight of the edge from vertex x_i to vertex y_j . If the weight of a perfect matching σ that matches x_i to y_{\sigma(i)} is defined to be the product of the weights of the edges in the matching, then \operatorname{weight}(\sigma) = \prod_{i=1}^{n} a_{i,\sigma(i)}. Thus the permanent of A is equal to the sum of the weights of all perfect matchings of the graph.
The answers to many counting questions can be computed as permanents of matrices that only have 0 and 1 as entries.
Let Ω( n , k ) be the class of all (0, 1)-matrices of order n with each row and column sum equal to k . Every matrix A in this class has perm( A ) > 0. [ 13 ] The incidence matrices of projective planes are in the class Ω( n 2 + n + 1, n + 1) for n an integer > 1. The permanents corresponding to the smallest projective planes have been calculated. For n = 2, 3, and 4 the values are 24, 3852 and 18,534,400 respectively. [ 13 ] Let Z be the incidence matrix of the projective plane with n = 2, the Fano plane . Remarkably, perm( Z ) = 24 = |det ( Z )|, the absolute value of the determinant of Z . This is a consequence of Z being a circulant matrix and of a theorem on the permanents and determinants of circulant matrices. [ 14 ]
Permanents can also be used to calculate the number of permutations with restricted (prohibited) positions. For the standard n -set {1, 2, ..., n }, let A = ( a i j ) {\displaystyle A=(a_{ij})} be the (0, 1)-matrix where a ij = 1 if i → j is allowed in a permutation and a ij = 0 otherwise. Then perm( A ) is equal to the number of permutations of the n -set that satisfy all the restrictions. [ 9 ] Two well known special cases of this are the solution of the derangement problem and the ménage problem : the number of permutations of an n -set with no fixed points (derangements) is given by
\operatorname{perm}(J - I) = \operatorname{perm}\begin{pmatrix}0&1&1&\dots&1\\1&0&1&\dots&1\\1&1&0&\dots&1\\\vdots&\vdots&\vdots&\ddots&\vdots\\1&1&1&\dots&0\end{pmatrix} = n! \sum_{i=0}^{n} \frac{(-1)^i}{i!},

where J is the n × n all-ones matrix and I is the identity matrix, and the ménage numbers are given by

\operatorname{perm}(J - I - I') = \operatorname{perm}\begin{pmatrix}0&0&1&\dots&1\\1&0&0&\dots&1\\1&1&0&\dots&1\\\vdots&\vdots&\vdots&\ddots&\vdots\\0&1&1&\dots&0\end{pmatrix} = \sum_{k=0}^{n} (-1)^k \frac{2n}{2n-k} \binom{2n-k}{k} (n-k)!,
where I' is the (0, 1)-matrix with nonzero entries in positions ( i , i + 1) and ( n , 1).
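Both counting formulas are easy to sanity-check against a brute-force permanent. The sketch below, for n = 5, compares perm( J − I ) with the derangement formula and evaluates the ménage permanent (the helper `permanent` is a naive implementation written for this sketch, not from any library):

```python
from itertools import permutations
from math import prod, factorial

def permanent(A):
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

n = 5
# J - I: zeros on the diagonal, ones elsewhere
J_minus_I = [[0 if i == j else 1 for j in range(n)] for i in range(n)]
derangements = permanent(J_minus_I)
formula = round(factorial(n) * sum((-1) ** i / factorial(i) for i in range(n + 1)))
print(derangements, formula)  # 44 44

# J - I - I': additionally zero at positions (i, i+1) and (n, 1)
menage = [[0 if j == i or j == (i + 1) % n else 1 for j in range(n)] for i in range(n)]
print(permanent(menage))  # 13, the menage number for n = 5
```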
The permanent of the n × n all-ones matrix equals the number of arrangements of n mutually non-attacking rooks on an n × n board, namely n !. [ 15 ]
The Bregman–Minc inequality , conjectured by H. Minc in 1963 [ 16 ] and proved by L. M. Brégman in 1973, [ 17 ] gives an upper bound for the permanent of an n × n (0, 1)-matrix. If A has r_i ones in row i for each 1 ≤ i ≤ n , the inequality states that

\operatorname{perm} A \leq \prod_{i=1}^{n} (r_i!)^{1/r_i}.
In 1926, Van der Waerden conjectured that the minimum permanent among all n × n doubly stochastic matrices is n !/ n ^ n , achieved by the matrix for which all entries are equal to 1/ n . [ 18 ] Proofs of this conjecture were published in 1980 by B. Gyires [ 19 ] and in 1981 by G. P. Egorychev [ 20 ] and D. I. Falikman; [ 21 ] Egorychev's proof is an application of the Alexandrov–Fenchel inequality . [ 22 ] For this work, Egorychev and Falikman won the Fulkerson Prize in 1982. [ 23 ]
The naïve approach of computing a permanent directly from the definition is computationally infeasible even for relatively small matrices. One of the fastest known algorithms is due to H. J. Ryser . [ 24 ] Ryser's method is based on an inclusion–exclusion formula that can be given [ 25 ] as follows: Let A_k be obtained from A by deleting k columns, let P(A_k) be the product of the row-sums of A_k , and let \Sigma_k be the sum of the values of P(A_k) over all possible A_k . Then

\operatorname{perm}(A) = \sum_{k=0}^{n-1} (-1)^k \Sigma_k.

It may be rewritten in terms of the matrix entries as follows:

\operatorname{perm}(A) = (-1)^n \sum_{S \subseteq \{1,\dots,n\}} (-1)^{|S|} \prod_{i=1}^{n} \sum_{j \in S} a_{ij}.
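The entrywise form can be sketched by iterating over column subsets S (the empty subset contributes a product of empty sums, i.e. zero, so it is skipped); agreement with the naive definition is checked at the end:

```python
from itertools import combinations, permutations
from math import prod

def ryser_permanent(A):
    """Permanent via Ryser's inclusion-exclusion formula, O(2^n * n^2)."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            total += (-1) ** r * prod(sum(A[i][j] for j in S) for i in range(n))
    return (-1) ** n * total

def naive_permanent(A):
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(ryser_permanent(A), naive_permanent(A))  # 450 450
```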
The permanent is believed to be more difficult to compute than the determinant. While the determinant can be computed in polynomial time by Gaussian elimination , Gaussian elimination cannot be used to compute the permanent. Moreover, computing the permanent of a (0,1)-matrix is #P-complete . Thus, if the permanent can be computed in polynomial time by any method, then FP = #P , which is an even stronger statement than P = NP . When the entries of A are nonnegative, however, the permanent can be computed approximately in probabilistic polynomial time, up to an error of ε M {\displaystyle \varepsilon M} , where M {\displaystyle M} is the value of the permanent and ε > 0 {\displaystyle \varepsilon >0} is arbitrary. [ 26 ] The permanent of a certain set of positive semidefinite matrices is NP-hard to approximate within any subexponential factor. [ 27 ] If further conditions on the spectrum are imposed, the permanent can be approximated in probabilistic polynomial time: the best achievable error of this approximation is ε M {\displaystyle \varepsilon {\sqrt {M}}} ( M {\displaystyle M} is again the value of the permanent). [ 28 ] The hardness in these instances is closely linked with difficulty of simulating boson sampling experiments.
Another way to view permanents is via multivariate generating functions . Let A = (a_{ij}) be a square matrix of order n . Consider the multivariate generating function

F(x_1, x_2, \dots, x_n) = \prod_{i=1}^{n} \left( \sum_{j=1}^{n} a_{ij} x_j \right).

The coefficient of x_1 x_2 \cdots x_n in F(x_1, x_2, \dots, x_n) is perm( A ). [ 29 ]
As a generalization, for any sequence of n non-negative integers s_1, s_2, \dots, s_n , define \operatorname{perm}^{(s_1, s_2, \dots, s_n)}(A) as the coefficient of x_1^{s_1} x_2^{s_2} \cdots x_n^{s_n} in

\left( \sum_{j=1}^{n} a_{1j} x_j \right)^{s_1} \left( \sum_{j=1}^{n} a_{2j} x_j \right)^{s_2} \cdots \left( \sum_{j=1}^{n} a_{nj} x_j \right)^{s_n}.

MacMahon's master theorem relating permanents and determinants is: [ 30 ]

\operatorname{perm}^{(s_1, s_2, \dots, s_n)}(A) = \text{coefficient of } x_1^{s_1} x_2^{s_2} \cdots x_n^{s_n} \text{ in } \frac{1}{\det(I - XA)},

where I is the order- n identity matrix and X is the diagonal matrix with diagonal [x_1, x_2, \dots, x_n].
The permanent function can be generalized to apply to non-square matrices. Indeed, several authors make this the definition of a permanent and consider the restriction to square matrices a special case. [ 31 ] Specifically, for an m × n matrix A = (a_{ij}) with m ≤ n , define

\operatorname{perm}(A) = \sum_{\sigma \in \mathrm{P}(n,m)} a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{m\sigma(m)},

where P( n , m ) is the set of all m -permutations of the n -set {1, 2, ..., n }. [ 32 ]
Ryser's computational result for permanents also generalizes. If A is an m × n matrix with m ≤ n , let A_k be obtained from A by deleting k columns, let P(A_k) be the product of the row-sums of A_k , and let \sigma_k be the sum of the values of P(A_k) over all possible A_k . Then [ 10 ]

\operatorname{perm}(A) = \sum_{k=0}^{m-1} (-1)^k \binom{n-m+k}{k} \sigma_{n-m+k}.
The generalization of the definition of a permanent to non-square matrices allows the concept to be used in a more natural way in some applications. For instance:
Let S 1 , S 2 , ..., S m be subsets (not necessarily distinct) of an n -set with m ≤ n . The incidence matrix of this collection of subsets is an m × n (0,1)-matrix A . The number of systems of distinct representatives (SDR's) of this collection is perm( A ). [ 33 ]
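This can be illustrated with the rectangular permanent defined above; the three subsets below are a made-up example, not from the source:

```python
from itertools import permutations
from math import prod

def rect_permanent(A):
    """Permanent of an m x n matrix (m <= n): sum over all m-permutations
    sigma of the n columns of the products A[0][sigma[0]] * ... * A[m-1][sigma[m-1]]."""
    m, n = len(A), len(A[0])
    return sum(prod(A[i][sigma[i]] for i in range(m))
               for sigma in permutations(range(n), m))

# Incidence matrix of three subsets of the 4-set {0, 1, 2, 3}:
subsets = [{0, 1}, {1, 2}, {2, 3}]
A = [[1 if j in S else 0 for j in range(4)] for S in subsets]
print(rect_permanent(A))  # 4 systems of distinct representatives
```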
https://en.wikipedia.org/wiki/Van_der_Waerden's_conjecture
Van der Waerden's theorem is a theorem in the branch of mathematics called Ramsey theory . Van der Waerden's theorem states that for any given positive integers r and k , there is some number N such that if the integers {1, 2, ..., N } are colored, each with one of r different colors, then there are at least k integers in arithmetic progression whose elements are of the same color. The least such N is the Van der Waerden number W ( r , k ), named after the Dutch mathematician B. L. van der Waerden . [ 1 ]
This was conjectured by Pierre Joseph Henry Baudet in 1921. Van der Waerden heard of it in 1926 and published his proof in 1927, titled Beweis einer Baudetschen Vermutung [Proof of Baudet's conjecture] . [ 2 ] [ 3 ] [ 4 ]
For example, when r = 2, you have two colors, say red and blue . W (2, 3) is bigger than 8, because you can color the integers from {1, ..., 8} like this:

1 2 3 4 5 6 7 8
B R R B B R R B
and no three integers of the same color form an arithmetic progression . But you can't add a ninth integer to the end without creating such a progression. If you add a red 9 , then the red 3 , 6 , and 9 are in arithmetic progression. Alternatively, if you add a blue 9 , then the blue 1 , 5 , and 9 are in arithmetic progression.
In fact, there is no way of coloring 1 through 9 without creating such a progression (this can be verified by exhaustively checking all cases). Therefore, W (2, 3) is 9.
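The exhaustive check is small enough to run directly: there are only 2^8 + 2^9 colorings to examine. A sketch:

```python
from itertools import product

def has_mono_ap3(coloring):
    """True if some 3-term arithmetic progression in {1..n} is single-colored;
    coloring[i] is the color of the integer i + 1."""
    n = len(coloring)
    for a in range(1, n + 1):
        for d in range(1, (n - a) // 2 + 1):
            if coloring[a - 1] == coloring[a + d - 1] == coloring[a + 2 * d - 1]:
                return True
    return False

# Some 2-coloring of {1,...,8} avoids a monochromatic 3-term progression ...
print(any(not has_mono_ap3(c) for c in product("RB", repeat=8)))  # True
# ... but every 2-coloring of {1,...,9} contains one, so W(2, 3) = 9.
print(all(has_mono_ap3(c) for c in product("RB", repeat=9)))      # True
```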
It is an open problem to determine the values of W ( r , k ) for most values of r and k . The proof of the theorem provides only an upper bound. For the case of r = 2 and k = 3, for example, the argument given below shows that it is sufficient to color the integers {1, ..., 325} with two colors to guarantee there will be a single-colored arithmetic progression of length 3. But in fact, the bound of 325 is very loose; the minimum required number of integers is only 9. Any coloring of the integers {1, ..., 9} will have three evenly spaced integers of one color.
For r = 3 and k = 3, the bound given by the theorem is 7(2·3^7 + 1)(2·3^(7·(2·3^7 + 1)) + 1), or approximately 4.22·10^14616. But actually, you don't need that many integers to guarantee a single-colored progression of length 3; you only need 27. (It is also possible to color {1, ..., 26} with three colors so that there is no single-colored arithmetic progression of length 3.)
An open problem is the attempt to reduce the general upper bound to any 'reasonable' function. Ronald Graham offered a prize of US$ 1000 for showing W (2, k ) < 2^( k ²). [ 5 ] In addition, he offered a US$ 250 prize for a proof of his conjecture involving more general off-diagonal van der Waerden numbers , stating W (2; 3, k ) ≤ k ^O(1), while mentioning numerical evidence suggests W (2; 3, k ) = k ^(2 + o(1)). Ben Green disproved this latter conjecture, constructing super-polynomial counterexamples showing that W (2; 3, k ) grows faster than k ^ r for any fixed r . [ 6 ] The best upper bound currently known is due to Timothy Gowers , [ 7 ] who establishes
by first establishing a similar result for Szemerédi's theorem , which is a stronger version of Van der Waerden's theorem. The previously best-known bound was due to Saharon Shelah and proceeded via first proving a result for the Hales–Jewett theorem , which is another strengthening of Van der Waerden's theorem.
The best lower bound currently known for W (2, k ) is that for all positive ε we have W (2, k ) > 2^ k / k ^ε, for all sufficiently large k . [ 8 ]
The following proof is due to Ron Graham , B.L. Rothschild, and Joel Spencer . [ 9 ] Khinchin [ 10 ] gives a fairly simple proof of the theorem without estimating W ( r , k ).
We will prove the special case mentioned above, that W (2, 3) ≤ 325. Let c ( n ) be a coloring of the integers {1, ..., 325}. We will find three elements of {1, ..., 325} in arithmetic progression that are the same color.
Divide {1, ..., 325} into the 65 blocks {1, ..., 5}, {6, ..., 10}, ... {321, ..., 325}, thus each block is of the form {5 b + 1, ..., 5 b + 5} for some b in {0, ..., 64}. Since each integer is colored either red or blue , each block is colored in one of 32 different ways. By the pigeonhole principle , there are two blocks among the first 33 blocks that are colored identically. That is, there are two integers b 1 and b 2 , both in {0,...,32}, such that
for all k in {1, ..., 5}. Among the three integers 5 b 1 + 1, 5 b 1 + 2, 5 b 1 + 3, there must be at least two that are of the same color. (The pigeonhole principle again.) Call these 5 b 1 + a 1 and 5 b 1 + a 2 , where the a i are in {1,2,3} and a 1 < a 2 . Suppose (without loss of generality) that these two integers are both red . (If they are both blue , just exchange ' red ' and ' blue ' in what follows.)
Let a 3 = 2 a 2 − a 1 . If 5 b 1 + a 3 is red , then we have found our arithmetic progression: 5 b 1 + a i are all red .
Otherwise, 5 b 1 + a 3 is blue . Since a 3 ≤ 5, 5 b 1 + a 3 is in the b 1 block, and since the b 2 block is colored identically, 5 b 2 + a 3 is also blue .
Now let b 3 = 2 b 2 − b 1 . Then b 3 ≤ 64. Consider the integer 5 b 3 + a 3 , which must be ≤ 325. What color is it?
If it is red , then 5 b 1 + a 1 , 5 b 2 + a 2 , and 5 b 3 + a 3 form a red arithmetic progression. But if it is blue , then 5 b 1 + a 3 , 5 b 2 + a 3 , and 5 b 3 + a 3 form a blue arithmetic progression. Either way, we are done.
A similar argument can be advanced to show that W (3, 3) ≤ 7(2·3^7 + 1)(2·3^(7·(2·3^7 + 1)) + 1). One begins by dividing the integers into 2·3^(7·(2·3^7 + 1)) + 1 groups of 7(2·3^7 + 1) integers each; of the first 3^(7·(2·3^7 + 1)) + 1 groups, two must be colored identically.
Divide each of these two groups into 2·3^7 + 1 subgroups of 7 integers each; of the first 3^7 + 1 subgroups in each group, two of the subgroups must be colored identically. Within each of these identical subgroups, two of the first four integers must be the same color, say red ; this implies either a red progression or an element of a different color, say blue , in the same subgroup.
Since we have two identically-colored subgroups, there is a third subgroup, still in the same group that contains an element which, if either red or blue , would complete a red or blue progression, by a construction analogous to the one for W (2, 3). Suppose that this element is green . Since there is a group that is colored identically, it must contain copies of the red , blue , and green elements we have identified; we can now find a pair of red elements, a pair of blue elements, and a pair of green elements that 'focus' on the same integer, so that whatever color it is, it must complete a progression.
The proof for W (2, 3) depends essentially on proving that W (32, 2) ≤ 33. We divide the integers {1, ..., 325} into 65 'blocks', each of which can be colored in 32 different ways, and then show that two blocks of the first 33 must be colored identically, and that there is a third block colored the opposite way. Similarly, the proof for W (3, 3) depends on proving that W (3^(7·(2·3^7 + 1)), 2) ≤ 3^(7·(2·3^7 + 1)) + 1.
By a double induction on the number of colors and the length of the progression, the theorem is proved in general.
A D -dimensional arithmetic progression (AP) consists of
numbers of the form:

a + i_1 s_1 + i_2 s_2 + ... + i_D s_D,

where a is the basepoint, the s 's are positive step-sizes, and the i 's range from 0 to L − 1. A D -dimensional AP is homogeneous for some coloring when it is all the same color.
A D -dimensional arithmetic progression with benefits is all numbers of the form above, but where you add on some of the "boundary" of the arithmetic progression, i.e. some of the indices i 's can be equal to L . The sides you tack on are ones where the first k i 's are equal to L , and the remaining i 's are less than L .
The boundaries of a D -dimensional AP with benefits are these additional arithmetic progressions of dimension D − 1, D − 2, and so on down to 0. The 0-dimensional arithmetic progression is the single point at index value ( L , L , ..., L ). A D -dimensional AP with benefits is homogeneous when each of the boundaries is individually homogeneous, but different boundaries do not necessarily have to have the same color.
Next define the quantity MinN( L , D , N ) to be the least integer so that any assignment of N colors to an interval of length MinN or more necessarily contains a homogeneous D -dimensional arithmetical progression with benefits.
The goal is to bound the size of MinN . Note that MinN( L ,1, N ) is an upper bound for Van der Waerden's number. There are two induction steps, as follows:
Lemma 1 — Assume MinN is known for a given length L for all dimensions of arithmetic progressions with benefits up to D . This formula gives a bound on MinN when you increase the dimension to D + 1 : let M = MinN( L , D , n ); then

MinN( L , D + 1, n ) ≤ M · MinN( L , 1, n^M )
First, if you have an n -coloring of the interval 1... I , you can define a block coloring of k -size blocks. Just consider each sequence of k colors in each k -block to define a unique color. Call this k -blocking an n -coloring. k -blocking an n -coloring of length l produces an n^k -coloring of length l / k .
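The k-blocking construction can be sketched directly; treating each block's tuple of colors as a single "super-color" yields the n^k-coloring (a minimal sketch, function name is mine):

```python
def block_coloring(coloring, k):
    """k-block a coloring: each consecutive run of k colors becomes one
    super-color (a k-tuple), yielding an n^k-coloring of length l // k."""
    return [tuple(coloring[i * k:(i + 1) * k])
            for i in range(len(coloring) // k)]

# A 2-coloring of length 6, 2-blocked into a coloring with up to 4 colors
# of length 3. Note the first and last blocks get the same super-color.
print(block_coloring([0, 1, 1, 1, 0, 1], 2))  # [(0, 1), (1, 1), (0, 1)]
```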
So given an n -coloring of an interval I of size M · MinN( L , 1, n^M ), you can M -block it into an n^M -coloring of length MinN( L , 1, n^M ). But that means, by the definition of MinN , that you can find a 1-dimensional arithmetic sequence (with benefits) of length L in the block coloring: a sequence of equally spaced blocks which are all the same block-color, i.e. a collection of blocks of length M in the original sequence, equally spaced, which have exactly the same sequence of colors inside.
Now, by the definition of M , you can find a D -dimensional arithmetic sequence with benefits in any one of these blocks, and since all of the blocks have the same sequence of colors, the same D -dimensional AP with benefits appears in all of the blocks, just by translating it from block to block. This is the definition of a ( D + 1)-dimensional arithmetic progression, so you have a homogeneous ( D + 1)-dimensional AP. The new stride parameter s_{D+1} is defined to be the distance between the blocks.
But you need benefits. The boundaries you get now are all old boundaries, plus their translations into identically colored blocks, because i_{D+1} is always less than L . The only boundary which is not like this is the 0-dimensional point when i_1 = i_2 = ⋯ = i_{D+1} = L . This is a single point, and is automatically homogeneous.
Lemma 2 — Assume MinN is known for one value of L and all possible dimensions D . Then you can bound MinN for length L + 1 .
Given an n -coloring of an interval of size MinN( L , n , n ) , by definition, you can find an arithmetic sequence with benefits of dimension n of length L . But now, the number of "benefit" boundaries is equal to the number of colors, so one of the homogeneous boundaries, say of dimension k , has to have the same color as another one of the homogeneous benefit boundaries, say the one of dimension p < k . This allows a length L + 1 arithmetic sequence (of dimension 1) to be constructed, by going along a line inside the k -dimensional boundary which ends right on the p -dimensional boundary, and including the terminal point in the p -dimensional boundary. In formulas:
if
then
This constructs a sequence of dimension 1, and the "benefits" are automatic, just add on another point of whatever color. To include this boundary point, one has to make the interval longer by the maximum possible value of the stride, which is certainly less than the interval size. So doubling the interval size will definitely work, and this is the reason for the factor of two. This completes the induction on L .
Base case: MinN(1, d , n ) = 1 , i.e. if you want a length 1 homogeneous d -dimensional arithmetic sequence, with or without benefits, you have nothing to do. So this forms the base of the induction. The Van der Waerden theorem itself is the assertion that MinN( L ,1, N ) is finite, and it follows from the base case and the induction steps. [ 11 ]
Furstenberg and Weiss proved an equivalent form of the theorem in 1978, using ergodic theory. [ 12 ]
multiple Birkhoff recurrence theorem (Furstenberg and Weiss, 1978) — If X is a compact metric space, and T_1 , …, T_N : X → X are commuting homeomorphisms, then there exist x ∈ X and an increasing sequence n_1 < n_2 < ⋯ such that

lim_j d(T_i^{n_j} x, x) = 0 for all i ∈ 1 : N .
The proof of the above theorem is delicate; the reader is referred to [ 12 ] . With this recurrence theorem, the van der Waerden theorem can be proved in the ergodic-theoretic style.
Theorem (van der Waerden, 1927) — If ℤ is partitioned into finitely many subsets S_1 , …, S_n , then one of them, S_k , contains infinitely many arithmetic progressions of arbitrary length:

∀ N , N′ , ∃ | a | ≥ N′ , ∃ r ≥ 1 : { a + ir }_{i ∈ 1:N} ⊂ S_k
It suffices to show that for each length N , at least one of the partitions contains at least one arithmetic progression of length N . Once this is proved, we can cut that arithmetic progression out into N singleton sets and repeat the process to find another arithmetic progression, so that one of the partitions contains infinitely many arithmetic progressions of length N . Repeating this over all N shows that at least one partition contains infinitely many progressions of length N for infinitely many N , and that is the partition we want.
Consider the state space Ω = (1 : N)^ℤ , which is compact under the metric (in fact, ultrametric)

d((x_i), (y_i)) = max { 2^{−|i|} : x_i ≠ y_i } .

Since the sets S_1 , …, S_n partition ℤ , we have a well-defined sequence z = (…, z_{−1}, z_0, z_1, …) = (z_i)_i with i ∈ S_{z_i} for all i .
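The ultrametric above compares two colorings of ℤ by the position of their innermost disagreement; a small numerical sketch (truncated to a finite window, helper name is mine):

```python
def seq_dist(x, y, window=50):
    """d(x, y) = max{2^{-|i|} : x(i) != y(i)}, truncated to |i| <= window.
    x and y map an integer index to a color; equal sequences get 0."""
    diffs = [2.0 ** -abs(i)
             for i in range(-window, window + 1) if x(i) != y(i)]
    return max(diffs, default=0.0)

# Sequences agreeing on |i| <= 2 but differing from i = 3 outward are at
# distance 2^-3: agreement near index 0 is what makes sequences "close".
x = lambda i: 0
y = lambda i: 1 if abs(i) >= 3 else 0
print(seq_dist(x, y))  # 0.125
```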
Let T : Ω → Ω be the shift map T((x_i)_i) = (x_{i+1})_i , and let X = cl({ T^r z : r ∈ ℤ }) be the closure of all shifts of the sequence z . By the multiple Birkhoff recurrence theorem (applied to the maps T , T^2 , …, T^N ), there exist a point x ∈ X and an integer s ≥ 1 such that

d(T^s x, x), d(T^{2s} x, x), …, d(T^{Ns} x, x) < 1/4 .
Since X is the closure of shifts of z , and T is continuous, there exists a shift T^m z such that, simultaneously, x is very close to T^m z , T^s x is very close to T^{m+s} z , and so on:

d(x, T^m z), d(T^s x, T^{m+s} z), …, d(T^{Ns} x, T^{m+Ns} z) < 1/4 .
By the triangle inequality, we then immediately have d(T^{m+is} z, T^{m+js} z) < 3/4 for i , j = 0, …, N . But by construction, any sequences y , y′ ∈ Ω with d(y, y′) < 1 must have y_0 = y′_0 . Thus z_m = z_{m+s} = ⋯ = z_{m+Ns} , and so all of m , m + s , …, m + Ns lie in the partition S_{z_m} .
|
https://en.wikipedia.org/wiki/Van_der_Waerden's_theorem
|
In theoretical physics , Van der Waerden notation [ 1 ] [ 2 ] refers to the usage of two-component spinors ( Weyl spinors ) in four spacetime dimensions. This is standard in twistor theory and supersymmetry . It is named after Bartel Leendert van der Waerden .
Spinors with lower undotted indices have a left-handed chirality , and are called chiral indices.
Spinors with raised dotted indices, plus an overbar on the symbol (not index), are right-handed, and called anti-chiral indices.
Without the indices, i.e. in "index-free notation", an overbar is retained on right-handed spinors, since otherwise the chirality would be ambiguous when no index is indicated.
Indices which have hats are called Dirac indices, and are the set of dotted and undotted, or chiral and anti-chiral, indices. For example, if
then a spinor in the chiral basis is represented as
where
In this notation the Dirac adjoint (also called the Dirac conjugate ) is
|
https://en.wikipedia.org/wiki/Van_der_Waerden_notation
|
Vanadium(II) oxide is the inorganic compound with the idealized formula VO. It is one of the several binary vanadium oxides . It adopts a distorted NaCl structure and contains weak V−V metal to metal bonds. VO is a semiconductor owing to delocalisation of electrons in the t 2g orbitals. VO is a non-stoichiometric compound , its composition varying from VO 0.8 to VO 1.3 . [ 2 ]
Diatomic VO is one of the molecules found in the spectrum of relatively cool M-type stars. [ 3 ] A potential use of vanadium(II) monoxide is as a molecular vapor in synthetic chemical reagents in low-temperature matrices. [ 4 ]
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Vanadium(II)_oxide
|
Vanadium(V) oxide ( vanadia ) is the inorganic compound with the formula V 2 O 5 . Commonly known as vanadium pentoxide , it is a dark yellow solid, although when freshly precipitated from aqueous solution, its colour is deep orange. Because of its high oxidation state , it is both an amphoteric oxide and an oxidizing agent . From the industrial perspective, it is the most important compound of vanadium , being the principal precursor to alloys of vanadium and is a widely used industrial catalyst . [ 8 ]
The mineral form of this compound, shcherbinaite, is extremely rare, almost always found among fumaroles . A mineral trihydrate , V 2 O 5 ·3H 2 O, is also known under the name of navajoite.
Upon heating a mixture of vanadium(V) oxide and vanadium(III) oxide , comproportionation occurs to give vanadium(IV) oxide , as a deep-blue solid: [ 9 ]
The reduction can also be effected by oxalic acid , carbon monoxide , and sulfur dioxide . Further reduction using hydrogen or excess CO can lead to complex mixtures of oxides such as V 4 O 7 and V 5 O 9 before black V 2 O 3 is reached.
V 2 O 5 is an amphoteric oxide, and unlike most transition metal oxides, it is slightly water soluble , giving a pale yellow, acidic solution. Thus V 2 O 5 reacts with strong non-reducing acids to form solutions containing the pale yellow salts containing dioxovanadium(V) centers:
It also reacts with strong alkali to form polyoxovanadates , which have a complex structure that depends on pH . [ 10 ] If excess aqueous sodium hydroxide is used, the product is a colourless salt , sodium orthovanadate , Na 3 VO 4 . If acid is slowly added to a solution of Na 3 VO 4 , the colour gradually deepens through orange to red before brown hydrated V 2 O 5 precipitates around pH 2. These solutions contain mainly the ions HVO 4 2− and V 2 O 7 4− between pH 9 and pH 13, but below pH 9 more exotic species such as V 4 O 12 4− and HV 10 O 28 5− ( decavanadate ) predominate.
Upon treatment with thionyl chloride , it converts to the volatile liquid vanadium oxychloride , VOCl 3 : [ 11 ]
Hydrochloric acid and hydrobromic acid are oxidised to the corresponding halogen , e.g.,
Vanadates or vanadyl compounds in acid solution are reduced by zinc amalgam through the colourful pathway:
The ions are all hydrated to varying degrees.
Technical grade V 2 O 5 is produced as a black powder used for the production of vanadium metal and ferrovanadium . [ 10 ] A vanadium ore or vanadium-rich residue is treated with sodium carbonate and an ammonium salt to produce sodium metavanadate , NaVO 3 . This material is then acidified to pH 2–3 using H 2 SO 4 to yield a precipitate of "red cake" (see above ). The red cake is then melted at 690 °C to produce the crude V 2 O 5 .
Vanadium(V) oxide is produced when vanadium metal is heated with excess oxygen , but this product is contaminated with other, lower oxides. A more satisfactory laboratory preparation involves the decomposition of ammonium metavanadate at 500–550 °C: [ 13 ]
In terms of quantity, the dominant use for vanadium(V) oxide is in the production of ferrovanadium (see above ). The oxide is heated with scrap iron and ferrosilicon , with lime added to form a calcium silicate slag . Aluminium may also be used, producing the iron-vanadium alloy along with alumina as a byproduct.
Another important use of vanadium(V) oxide is in the manufacture of sulfuric acid , an important industrial chemical with an annual worldwide production of 165 million tonnes in 2001, with an approximate value of US$8 billion. Vanadium(V) oxide serves the crucial purpose of catalysing the mildly exothermic oxidation of sulfur dioxide to sulfur trioxide by air in the contact process :
The discovery of this simple reaction, for which V 2 O 5 is the most effective catalyst, allowed sulfuric acid to become the cheap commodity chemical it is today. The reaction is performed between 400 and 620 °C; below 400 °C the V 2 O 5 is inactive as a catalyst, and above 620 °C it begins to break down. Since it is known that V 2 O 5 can be reduced to VO 2 by SO 2 , one likely catalytic cycle is as follows:
followed by
It is also used as catalyst in the selective catalytic reduction (SCR) of NO x emissions in some power plants and diesel engines. Due to its effectiveness in converting sulfur dioxide into sulfur trioxide, and thereby sulfuric acid, special care must be taken with the operating temperatures and placement of a power plant's SCR unit when firing sulfur-containing fuels.
Maleic anhydride is produced by the V 2 O 5 -catalysed oxidation of butane with air:
Maleic anhydride is used for the production of polyester resins and alkyd resins . [ 15 ]
Phthalic anhydride is produced similarly by V 2 O 5 -catalysed oxidation of ortho -xylene or naphthalene at 350–400 °C. The equation for the vanadium oxide-catalysed oxidation of o -xylene to phthalic anhydride:
The equation for the vanadium oxide-catalysed oxidation of naphthalene to phthalic anhydride: [ 16 ]
Phthalic anhydride is a precursor to plasticisers , used for conferring pliability to polymers.
A variety of other industrial compounds are produced similarly, including adipic acid , acrylic acid , oxalic acid , and anthraquinone . [ 8 ]
Due to its high coefficient of thermal resistance , vanadium(V) oxide finds use as a detector material in bolometers and microbolometer arrays for thermal imaging . It also finds application as an ethanol sensor at trace levels (down to 0.1 ppm).
Vanadium redox batteries are a type of flow battery used for energy storage , including large power facilities such as wind farms . [ 17 ] Vanadium oxide is also used as a cathode in lithium-ion batteries . [ 18 ]
Vanadium(V) oxide exhibits very modest acute toxicity to humans, with an LD 50 of about 470 mg/kg. The greater hazard is with inhalation of the dust, where the LD 50 ranges from 4 to 11 mg/kg for a 14-day exposure. [ 8 ] Vanadate ( VO 3− 4 ), formed by hydrolysis of V 2 O 5 at high pH, appears to inhibit enzymes that process phosphate (PO 4 3− ). However, the mode of action remains elusive. [ 10 ]
|
https://en.wikipedia.org/wiki/Vanadium(V)_oxide
|
Vanadium-51 nuclear magnetic resonance ( 51 V NMR spectroscopy ) is a method for the characterization of vanadium -containing compounds and materials. 51 V comprises 99.75% of naturally occurring vanadium. The nucleus is quadrupolar with I = 7 / 2 , which is not favorable for NMR spectroscopy, although its quadrupole moment and thus the linewidths are unusually small, while its magnetogyric ratio is relatively high (+7.0492 × 10 7 rad T −1 s −1 ), so that 51 V has 38% receptivity vs 1 H. Its resonance frequency is close to that of 13 C (gyromagnetic ratio = 6.728284 × 10 7 rad T −1 s −1 ).
The chemical shift dispersion is large, as illustrated by this series: 0 for VOCl 3 (the chemical shift standard), −309 for VOCl 2 (O-i-Pr), −506 for VOCl(O-i-Pr) 2 , and −629 for VO(O-i-Pr) 3 . For vanadates, the parent orthovanadate and its conjugate acid absorb at −541 ([VO 4 ] 3− ) and −534 ([HVO 4 ] 2− ). For decavanadate , three shifts are observed, in accord with the number of nonequivalent sites: −422, −502, −519. [ 1 ] [ 2 ]
This nuclear magnetic resonance –related article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Vanadium-51_nuclear_magnetic_resonance
|
Vanadium nitrogenase is a key enzyme for nitrogen fixation found in nitrogen-fixing bacteria , and is used as an alternative to molybdenum nitrogenase when molybdenum is unavailable. [ 1 ] Vanadium nitrogenases are an important biological use of vanadium , which is uncommonly used by life. An important component of the nitrogen cycle , vanadium nitrogenase converts nitrogen gas to ammonia, thereby making otherwise inaccessible nitrogen available to plants. Unlike molybdenum nitrogenase, vanadium nitrogenase can also reduce carbon monoxide to ethylene , ethane and propane [ 3 ] but both enzymes can reduce protons to hydrogen gas and acetylene to ethylene .
Vanadium nitrogenases are found in members of the bacterial genus Azotobacter as well as the species Rhodopseudomonas palustris and Anabaena variabilis . [ 1 ] [ 2 ] Most of the functions of vanadium nitrogenase match those of the more common molybdenum nitrogenases, and it serves as an alternative pathway for nitrogen fixation in molybdenum-deficient conditions. [ 4 ] As with molybdenum nitrogenase, dihydrogen functions as a competitive inhibitor and carbon monoxide functions as a non-competitive inhibitor of nitrogen fixation. [ 5 ] Vanadium nitrogenase has an α 2 β 2 γ 2 subunit structure while molybdenum nitrogenase has an α 2 β 2 structure. Though the structural genes encoding vanadium nitrogenase show only about 15% conservation with molybdenum nitrogenases, the two nitrogenases share the same type of iron-sulphur redox centers. At room temperature, vanadium nitrogenase is less efficient at fixing nitrogen than molybdenum nitrogenases because it converts more H + to H 2 as a side reaction. [ 4 ] However, at low temperatures vanadium nitrogenases have been found to be more active than the molybdenum type, and at temperatures as low as 5 °C its nitrogen-fixing activity is 10 times higher than that of molybdenum nitrogenase. [ 6 ] Like molybdenum nitrogenase, vanadium nitrogenase is easily oxidized and is thus only active under anaerobic conditions. Various bacteria employ complex protection mechanisms to avoid oxygen . [ 1 ]
The overall stoichiometry of nitrogen fixation catalyzed by vanadium nitrogenase can be summarized as follows: [ 1 ]
The crystal structure of A. vinelandii vanadium nitrogenase was resolved in 2017 ( PDB : 5N6Y ). Compared to Mo nitrogenase, V nitrogenase replaces one sulfide in the active site with a bridging ligand. [ 7 ]
Research at the University of California Irvine showed the ability of vanadium nitrogenase to convert carbon monoxide into trace amounts of propane , ethylene , and ethane in the absence of nitrogen [ 3 ] [ 8 ] through the reduction of carbon monoxide by dithionite and ATP hydrolysis . The process of forming these hydrocarbons is carried out through proton and electron transfer in which short carbon chains are formed [ 8 ] and may ultimately allow the production of hydrocarbon fuel from CO at an industrial scale. [ 9 ]
|
https://en.wikipedia.org/wiki/Vanadium_nitrogenase
|
Vanadium tetrachloride is the inorganic compound with the formula V Cl 4 . This reddish-brown liquid serves as a useful reagent for the preparation of other vanadium compounds.
With one more valence electron than diamagnetic TiCl 4 , VCl 4 is a paramagnetic liquid. It is one of only a few paramagnetic compounds that is liquid at room temperature.
VCl 4 is prepared by chlorination of vanadium metal. VCl 5 does not form in this reaction; Cl 2 lacks the oxidizing power to attack VCl 4 . VCl 5 can however be prepared indirectly from VF 5 at −78 °C. [ 1 ]
Consistent with its high oxidizing power, VCl 4 reacts with HBr at -50 °C to produce VBr 3 . The reaction proceeds via VBr 4 , which releases Br 2 during warming to room temperature. [ 2 ]
VCl 4 forms adducts with many donor ligands, for example, VCl 4 ( THF ) 2 .
It is the precursor to vanadocene dichloride .
In organic synthesis , VCl 4 is used for the oxidative coupling of phenols. For example, it converts phenol into a mixture of 4,4'-, 2,4'-, and 2,2'- biphenols : [ 3 ]
VCl 4 is a catalyst for the polymerization of alkenes, especially those useful in the rubber industry. The underlying technology is related to Ziegler–Natta catalysis , which involves the intermediacy of vanadium alkyls.
VCl 4 is a volatile, aggressive oxidant that readily hydrolyzes to release HCl .
|
https://en.wikipedia.org/wiki/Vanadium_tetrachloride
|
Vanadium–gallium (V 3 Ga) is a superconducting alloy of vanadium and gallium . It is often used for the high field insert coils of superconducting electromagnets .
Vanadium–gallium tape is used in the highest field magnets ( magnetic fields of 17.5 T ). The structure of the superconducting A15 phase of V 3 Ga is similar to that of the more common Nb 3 Sn . [ 1 ]
In conditions where the magnetic field is higher than 8 T and the temperature is higher than 4.2 K , Nb 3 Sn and V 3 Ga are used.
The main property of V 3 Ga that makes it so useful is that it can be used in magnetic fields up to about 18 T , while Nb 3 Sn can only be used in fields up to about 15 T . [ 2 ]
The high field characteristics can be improved by doping with high-Z elements such as Nb, Ta, Sn, Pt and Pb. [ 3 ]
V 3 Ga has an A15 phase , which makes it extremely brittle. One must be extremely cautious not to over-bend the wire when handling it. [ 2 ]
V 3 Ga wires can be formed using solid-state precipitation . [ 6 ]
This alloy-related article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Vanadium–gallium
|
Vanadyl ribonucleoside is a transition-state analog of ribonucleic acid and a potent inhibitor of many species of ribonuclease , formed from a vanadium coordination complex and one ribonucleoside. [ 1 ] Vanadium's [ Ar ] 3d 3 4s 2 electron configuration allows it to make five sigma bonds and two pi bonds with adjacent atoms. [ 2 ]
RNA is notoriously unstable and vulnerable to ribonucleases, which has thus been an obstacle to the production and analysis of the cellular transcriptome . First referenced by Berger et al., the substance was used to prevent the digestion of RNA during isolation from white blood cells , and was rapidly adopted for such purposes as the acquisition of RNA from green beans . [ 1 ] [ 3 ]
Vanadyl ribonucleoside is produced by combining vanadyl sulphate with various ribonucleosides (such as guanosine ) in a 1:10 molar ratio. [ 4 ] [ 5 ]
Vanadyl ribonucleoside, along with other RNase inhibitors , has been a staple of molecular biochemistry since its invention by allowing for the stability of RNA in its storage and use.
|
https://en.wikipedia.org/wiki/Vanadyl_ribonucleoside
|
Vancomycin is a glycopeptide antibiotic medication used to treat certain bacterial infections . [ 6 ] It is administered intravenously ( injection into a vein ) to treat complicated skin infections , bloodstream infections , endocarditis , bone and joint infections, and meningitis caused by methicillin-resistant Staphylococcus aureus . [ 7 ] Blood levels may be measured to determine the correct dose. [ 8 ] Vancomycin is also taken orally (by mouth) to treat Clostridioides difficile infections . [ 6 ] [ 9 ] [ 10 ] When taken orally, it is poorly absorbed. [ 6 ]
Common side effects include pain in the area of injection and allergic reactions . [ 6 ] Occasionally, hearing loss , low blood pressure , or bone marrow suppression occur. [ 6 ] Safety in pregnancy is not clear, but no evidence of harm has been found, [ 6 ] [ 11 ] and it is likely safe for use when breastfeeding . [ 12 ] It is a type of glycopeptide antibiotic and works by blocking the construction of a cell wall . [ 6 ]
Vancomycin was approved for medical use in the United States in 1958. [ 13 ] It is on the World Health Organization's List of Essential Medicines . [ 14 ] The WHO classifies vancomycin as critically important for human medicine. [ 15 ] It is available as a generic medication. [ 8 ] Vancomycin is made by the soil bacterium Amycolatopsis orientalis . [ 6 ]
Vancomycin is indicated for the treatment of serious, life-threatening infections by Gram-positive bacteria of both aerobic and anaerobic types [ 16 ] that are unresponsive to other antibiotics. [ 17 ] [ 18 ] [ 19 ]
The increasing emergence of vancomycin-resistant enterococci (VRE) has resulted in the development of guidelines for use by the Centers for Disease Control Hospital Infection Control Practices Advisory Committee. These guidelines restrict use of vancomycin to these indications: [ 20 ] [ 21 ]
Vancomycin is a last-resort medication for the treatment of sepsis and lower respiratory tract , skin, and bone infections caused by Gram-positive bacteria. The minimum inhibitory concentration susceptibility data for a few medically significant bacteria are: [ 24 ]
Common side effects associated with oral vancomycin administration (used to treat intestinal infections) [ 25 ] include:
Serum vancomycin levels may be monitored in an effort to reduce side effects, [ 26 ] but the value of such monitoring has been questioned. [ 27 ] Peak and trough levels are usually monitored, and for research purposes the area under the concentration curve is also sometimes used. [ 28 ] Toxicity is best monitored by looking at trough values. [ 28 ] Immunoassays are commonly used to measure vancomycin levels. [ 26 ]
Common adverse drug reactions (≥1% of patients) associated with intravenous vancomycin include:
Damage to the kidneys ( nephrotoxicity ) and to the hearing ( ototoxicity ) were side effects of the early, impure versions of vancomycin, and were prominent in clinical trials conducted in the mid-1950s. [ 13 ] [ 31 ] Later trials using purer forms of vancomycin found nephrotoxicity is an infrequent adverse effect (0.1% to 1% of patients), but this is accentuated in the presence of aminoglycosides . [ 32 ]
Rare adverse effects associated with intravenous vancomycin (<0.1% of patients) include anaphylaxis , toxic epidermal necrolysis , erythema multiforme , superinfection , thrombocytopenia , neutropenia , leukopenia , tinnitus , dizziness and/or ototoxicity , and DRESS syndrome . [ 33 ]
Vancomycin can induce platelet-reactive antibodies in the patient, leading to severe thrombocytopenia and bleeding with florid petechial hemorrhages , ecchymoses , and wet purpura . [ 34 ]
Historically, vancomycin has been considered a nephrotoxic and ototoxic drug, based on numerous case reports in the medical literature following initial approval by the FDA in 1958. But as its use increased with the spread of MRSA beginning in the 1970s, toxicity risks were reassessed. With the removal of impurities present in earlier formulations of the drug, [ 13 ] and with the introduction of therapeutic drug monitoring , the risk of severe toxicity has been reduced.
The extent of nephrotoxicity for vancomycin remains controversial. [ 35 ] In the 1980s, vancomycin with a purity > 90% was available, and kidney toxicity, defined by an increase in serum creatinine of at least 0.5 mg/dL, occurred in only about 5% of patients. [ 35 ] But dosing guidelines from the 1980s until 2008 recommended vancomycin trough concentrations between 5 and 15 μg/mL. [ 36 ] Concern for treatment failures prompted recommendations for higher dosing (troughs of 15 to 20 μg/mL) for serious infection, and acute kidney injury (AKI) rates attributable to vancomycin increased. [ 37 ]
Importantly, the risk of AKI increases with co-administration of other known nephrotoxins, in particular aminoglycosides. Furthermore, the sorts of infections treated with vancomycin may themselves cause AKI, and sepsis is the most common cause of AKI in critically ill patients. Finally, studies in humans are mainly association studies, where the cause of AKI is usually multifactorial. [ 38 ] [ 39 ] [ 40 ] [ 41 ]
Animal studies have demonstrated that higher doses and longer durations of vancomycin exposure correlate with increased histopathologic damage and elevations in urinary biomarkers of AKI. [ 42 ] Damage is most prevalent at the proximal tubule, which is further supported by urinary biomarkers such as kidney injury molecule-1 (KIM-1), clusterin, and osteopontin (OPN). [ 43 ] In humans, insulin-like growth factor binding protein 7 (IGFBP7) is measured as part of the nephrocheck test. [ 44 ]
The mechanisms underlying the pathogenesis of vancomycin nephrotoxicity are multifactorial but include interstitial nephritis, tubular injury due to oxidative stress, and cast formation. [ 37 ]
Therapeutic drug monitoring can be used during vancomycin therapy to minimize the risk of nephrotoxicity associated with excessive drug exposure. Immunoassays are commonly utilized for measuring vancomycin levels. [ 26 ]
In children, concomitant administration of vancomycin and piperacillin/tazobactam has been associated with an elevated incidence of AKI relative to other antibiotic regimens. [ 45 ]
Attempts to establish rates of vancomycin-induced ototoxicity are even more difficult due to lack of good data. The consensus is that clearly related cases of vancomycin ototoxicity are rare. [ 46 ] [ 47 ] The association between vancomycin serum levels and ototoxicity is also uncertain. Cases of ototoxicity have been reported in patients whose vancomycin serum level exceeded 80 μg/mL, [ 48 ] but cases have also been reported in patients with therapeutic levels. Thus it remains unknown whether therapeutic drug monitoring of vancomycin for the purpose of maintaining "therapeutic" levels prevents ototoxicity. [ 48 ] Still, therapeutic drug monitoring can be used during vancomycin therapy to minimize the risk of ototoxicity associated with excessive drug exposure. [ 26 ]
Another area of controversy and uncertainty is whether and to what extent vancomycin increases the toxicity of other nephrotoxins. Clinical studies have yielded various results, but animal models indicate that the nephrotoxic effect probably increases when vancomycin is added to nephrotoxins such as aminoglycosides. A dose- or serum level-effect relationship has not been established. [ citation needed ]
Vancomycin is recommended to be administered in a dilute solution slowly, over at least 60 min (maximum rate of 10 mg/min for doses >500 mg) [ 20 ] due to the high incidence of pain and thrombophlebitis and to avoid an infusion reaction known as vancomycin flushing reaction. This phenomenon has been often clinically referred to as "red man syndrome". The reaction usually appears within 4 to 10 min after the commencement or soon after the completion of an infusion and is characterized by flushing and/or an erythematous rash that affects the face, neck, and upper torso, attributed to the release of histamine from mast cells. This reaction is caused by the interaction of vancomycin with MRGPRX2 , a GPCR-mediating IgE-independent mast cell degranulation. [ 49 ] Less frequently, hypotension and angioedema occur. Symptoms may be treated or prevented with antihistamines , including diphenhydramine , and are less likely to occur with slow infusion. [ 50 ] [ 51 ]
The recommended intravenous dosage in adults is 500 mg every 6 hours or 1000 mg every 12 hours, with modification to achieve a therapeutic range as needed. The recommended oral dosage in the treatment of antibiotic-induced pseudomembranous enterocolitis is 125 to 500 mg every 6 hours for 7 to 10 days. [ 52 ]
Dose optimization and target attainment of vancomycin in children involves adjusting the dosage to maximize effectiveness while minimizing the risk of adverse effects, specifically acute kidney injury. Dose optimization is achieved by therapeutic drug monitoring (TDM), which allows measurement of vancomycin levels in the blood. TDM using area under the curve (AUC)-guided dosing, preferably with Bayesian forecasting, is recommended to ensure that the AUC0-24h/minimal inhibitory concentration (MIC) ratio is maintained above a certain threshold (400-600) associated with optimal efficacy. [ 53 ]
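The AUC-guided target just described can be sketched as a short calculation. This is purely illustrative, not clinical software: the function name, the two-level sampling scheme, and the assumed volume of distribution are all hypothetical, and the sketch assumes simple one-compartment, first-order kinetics at steady state.

```python
import math

def auc24_over_mic(daily_dose_mg, c1, c2, t1_h, t2_h, vd_l, mic_mg_l):
    """Estimate the steady-state AUC0-24h/MIC ratio from two post-infusion
    serum levels, assuming one-compartment, first-order elimination.
    c1, c2: concentrations (mg/L) drawn at hours t1_h < t2_h post-infusion.
    vd_l:   assumed volume of distribution (L); mic_mg_l: MIC (mg/L)."""
    ke = math.log(c1 / c2) / (t2_h - t1_h)  # elimination rate constant (1/h)
    cl = ke * vd_l                          # clearance (L/h)
    auc24 = daily_dose_mg / cl              # at steady state, AUC0-24h = daily dose / CL
    return auc24, auc24 / mic_mg_l

# Hypothetical example: 1000 mg every 12 h (2000 mg/day), levels of
# 30 and 15 mg/L drawn at 2 h and 8 h, assumed Vd of 50 L, MIC of 1 mg/L.
auc24, ratio = auc24_over_mic(2000, 30, 15, 2, 8, 50, 1)
in_target = 400 <= ratio <= 600  # the 400-600 threshold cited above
```

With these made-up numbers the ratio comes out near 346, below the 400-600 window, which under the guideline logic would prompt a dose increase.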
In the United States , vancomycin is approved by the Food and Drug Administration for intravenous and oral administration. [ 25 ]
Vancomycin must be given intravenously for systemic therapy since it is poorly absorbed from the intestine. It is a large hydrophilic molecule that partitions poorly across the gastrointestinal mucosa . Due to its short half-life, it is often injected twice daily. [ 54 ]
The only approved indication for oral vancomycin therapy is in the treatment of pseudomembranous colitis, where it must be given orally to reach the site of infection in the colon. After oral administration, the fecal concentration of vancomycin is around 500 μg/mL [ 55 ] (sensitive strains of Clostridioides difficile have a minimum inhibitory concentration of ≤2 μg/mL [ 56 ] ).
Inhaled vancomycin can also be used off-label , [ 57 ] via nebulizer , to treat various infections of the upper and lower respiratory tract. [ 58 ] [ 59 ] [ 60 ] [ 61 ] [ 62 ]
Rectal administration is an off-label use of vancomycin for the treatment of Clostridioides difficile infection. [ 25 ]
Plasma level monitoring of vancomycin is necessary due to the drug's biexponential distribution, intermediate hydrophilicity, and potential for ototoxicity and nephrotoxicity, especially in populations with poor renal function and/or increased propensity to bacterial infection. Vancomycin activity is considered time-dependent; that is, antimicrobial activity depends on how long the serum drug concentration exceeds the minimum inhibitory concentration of the target organism. Thus, peak serum levels have not been shown to correlate with efficacy or toxicity; indeed, concentration monitoring is unnecessary in most cases. Circumstances in which therapeutic drug monitoring is warranted include patients receiving concomitant aminoglycoside therapy, patients with (potentially) altered pharmacokinetic parameters, patients on haemodialysis , patients administered high-dose or prolonged treatment, and patients with impaired renal function. In such cases, trough concentrations are measured. [ 20 ] [ 27 ] [ 63 ] [ 64 ]
Therapeutic drug monitoring is also used for dose optimization of vancomycin in treating children. [ 53 ]
Target ranges for serum vancomycin concentrations have changed over the years. Early authors suggested peak levels of 30 to 40 mg/L and trough levels of 5 to 10 mg/L, [ 65 ] but current recommendations are that peak levels need not be measured and that trough levels of 10 to 15 mg/L or 15 to 20 mg/L, depending on the nature of the infection and the specific patient's needs, may be appropriate. [ 66 ] [ 67 ] Measuring vancomycin concentrations to calculate doses optimizes therapy in patients with augmented renal clearance . [ 68 ]
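As a rough illustration of trough-guided adjustment, a sketch only: it assumes linear kinetics in which the steady-state trough scales proportionally with the dose, and the function name is hypothetical. Real adjustments also weigh renal function, AUC targets, and rounding to practical dose sizes.

```python
def adjust_dose(current_dose_mg, measured_trough_mg_l, target_trough_mg_l):
    """Proportional dose adjustment under the linear-kinetics assumption
    that the steady-state trough scales with dose. Illustrative only."""
    return current_dose_mg * (target_trough_mg_l / measured_trough_mg_l)

# Hypothetical example: measured trough of 8 mg/L on 1000 mg doses,
# aiming for 15 mg/L (midpoint of the 10-20 mg/L ranges above):
new_dose = adjust_dose(1000, 8, 15)  # 1875.0 mg per dose
```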
Vancomycin is a branched tricyclic glycosylated nonribosomal peptide produced by the Actinomycetota species Amycolatopsis orientalis (formerly designated Nocardia orientalis ).
Vancomycin exhibits atropisomerism —it has multiple chemically distinct rotamers owing to the rotational restriction of some of the bonds. The form present in the drug is the thermodynamically more stable conformer . [ citation needed ]
Vancomycin is made by the soil bacterium Amycolatopsis orientalis . [ 6 ]
Vancomycin biosynthesis occurs primarily via three nonribosomal peptide synthetases (NRPSs), VpsA, VpsB, and VpsC. [ 69 ] These enzymes determine the amino acid sequence during its assembly through their 7 modules . Before vancomycin is assembled through NRPS, the non-proteinogenic amino acids are first synthesized. L -tyrosine is modified to become the β-hydroxytyrosine (β-HT) and 4-hydroxyphenylglycine (4-Hpg) residues. The 3,5-dihydroxyphenylglycine (3,5-DPG) residue is derived from acetate. [ 70 ]
Nonribosomal peptide synthesis occurs through distinct modules that can load and extend the peptide chain by one amino acid per module through amide bond formation at the contact sites of the activating domains. [ 71 ] Each module typically consists of an adenylation (A) domain, a peptidyl carrier protein (PCP) domain, and a condensation (C) domain. In the A domain, the specific amino acid is activated by conversion into an aminoacyl adenylate enzyme complex attached to a 4'-phosphopantetheine cofactor by thioesterification. [ 72 ] [ 73 ] The complex is then transferred to the PCP domain with the expulsion of AMP. The PCP domain uses the attached 4'-phosphopantetheine prosthetic group to load the growing peptide chain and its precursors. [ 74 ] The organization of the modules necessary to biosynthesize vancomycin is shown in Figure 1. In the biosynthesis of vancomycin, additional modification domains are present, such as the epimerization (E) domain, which isomerizes the amino acid from one stereochemistry to another, and a thioesterase (TE) domain, which catalyzes cyclization and release of the molecule via thioester scission. [ citation needed ]
A set of NRPS enzymes (peptide synthetases VpsA, VpsB, and VpsC) is responsible for assembling the heptapeptide (Figure 2). [ 71 ] VpsA codes for modules 1, 2, and 3; VpsB codes for modules 4, 5, and 6; and VpsC codes for module 7. The vancomycin aglycone contains 4 D-amino acids, although the NRPSs contain only 3 epimerization domains; the origin of D-Leu at residue 1 is unknown. The three peptide synthetases lie at the start of the region of the bacterial genome linked with antibiotic biosynthesis, and span 27 kb. [ 71 ]
β-hydroxytyrosine (β-HT) is synthesized before incorporation into the heptapeptide backbone. L-tyrosine is activated and loaded on the NRPS VpsD, hydroxylated by OxyD, and released by the thioesterase Vhp. [ 75 ] The timing of the chlorination by halogenase VhaA during biosynthesis is undetermined, but is proposed to occur before the complete assembly of the heptapeptide. [ 76 ]
After the linear heptapeptide molecule is synthesized, vancomycin must undergo further modifications, such as oxidative cross-linking and glycosylation , in trans [ clarification needed ] by distinct enzymes, referred to as tailoring enzymes, to become biologically active (Figure 3). To convert the linear heptapeptide to cross-linked, glycosylated vancomycin, six enzymes are required. The enzymes OxyA, OxyB, OxyC, and OxyD are cytochrome P450 enzymes. OxyB catalyzes oxidative cross-linking between residues 4 and 6, OxyA between residues 2 and 4, and OxyC between residues 5 and 7. This cross-linking occurs while the heptapeptide is covalently bound to the PCP domain of the 7th NRPS module. These P450s are recruited by the X domain in the 7th NRPS module, which is unique to glycopeptide antibiotic biosynthesis. [ 77 ] The cross-linked heptapeptide is then released by the action of the TE domain, and methyltransferase Vmt then N -methylates the terminal leucine residue. GtfE then joins D-glucose to the phenolic oxygen of residue 4, followed by the addition of vancosamine catalyzed by GtfD. [ citation needed ]
Some of the glycosyltransferases capable of glycosylating vancomycin and related nonribosomal peptides display notable permissivity and have been used to generate libraries of differentially glycosylated analogs through glycorandomization . [ 78 ] [ 79 ] [ 80 ]
Both the vancomycin aglycone [ 81 ] [ 82 ] and the complete vancomycin molecule [ 83 ] have been targets successfully reached by total synthesis . The total synthesis was first achieved by David Evans in October 1998, then by K. C. Nicolaou in December 1998 and Dale Boger in 1999, and was achieved more selectively by Boger again in 2020. [ 81 ] [ 84 ] [ 85 ]
Vancomycin targets bacterial cell wall synthesis by binding to the basic building block of the bacterial cell wall of Gram-positive bacteria, whether it is of aerobic or anaerobic type. [ 16 ] Specifically, vancomycin forms hydrogen bonds with the D -alanyl- D -alanine ( D -Ala- D -Ala) peptide motif of the peptidoglycan precursor, a component of the bacterial cell wall. [ 17 ]
Peptidoglycan is a polymer that provides structural support to the bacterial cell wall. The peptidoglycan precursor is synthesized in the cytoplasm and then transported across the cytoplasmic membrane to the periplasmic space, where it is assembled into the cell wall. The assembly process involves two enzymatic activities: transglycosylation and transpeptidation. Transglycosylation involves the polymerization of the peptidoglycan precursor into long chains, while transpeptidation involves the cross-linking of these chains to form a three-dimensional mesh-like structure. [ 17 ]
Vancomycin inhibits bacterial cell wall synthesis by binding to the D -Ala- D -Ala peptide motif of the peptidoglycan precursor, thereby preventing its processing by the transglycosylase; as such, vancomycin disrupts the transglycosylation activity of the cell wall synthesis process. The disruption leads to an incomplete and corrupted cell wall, which makes the replicating bacteria vulnerable to external forces such as osmotic pressure, so that the bacteria cannot survive and are eliminated by the immune system. [ 17 ]
Gram-negative bacteria are insensitive to vancomycin due to their different cell wall morphology. The outer membrane of Gram-negative bacteria contains lipopolysaccharide, which acts as a barrier to vancomycin penetration. That is why vancomycin is mainly used to treat infections caused by Gram-positive bacteria [ 17 ] (except some nongonococcal species of Neisseria ). [ 87 ] [ 88 ]
The large hydrophilic molecule of vancomycin is able to form hydrogen bond interactions with the terminal D -alanyl- D -alanine moieties of the NAM/NAG-peptides. Under normal circumstances, this is a five-point interaction. This binding of vancomycin to the D -Ala- D -Ala prevents cell wall synthesis of the long polymers of N -acetylmuramic acid (NAM) and N -acetylglucosamine (NAG) that form the backbone strands of the bacterial cell wall, and prevents the backbone polymers from cross-linking with each other. [ 89 ]
Vancomycin is one of the few antibiotics used in plant tissue culture to eliminate Gram-positive bacterial infection. It has relatively low toxicity to plants. [ 90 ] [ 91 ]
A few Gram-positive bacteria, such as Leuconostoc and Pediococcus , are intrinsically resistant to vancomycin, but they rarely cause disease in humans. [ 92 ] Most Lactobacillus species are also intrinsically resistant to vancomycin, [ 92 ] except for L. acidophilus and L. delbrueckii , which are sensitive. [ 93 ] Other Gram-positive bacteria with intrinsic resistance to vancomycin include Erysipelothrix rhusiopathiae , Weissella confusa , and Clostridium innocuum . [ 94 ] [ 95 ] [ 96 ]
Most Gram-negative bacteria are intrinsically resistant to vancomycin because their outer membranes are impermeable to large glycopeptide molecules [ 97 ] (with the exception of some non- gonococcal Neisseria species). [ 98 ]
Evolution of microbial resistance to vancomycin is a growing problem, especially in healthcare facilities such as hospitals. While newer alternatives to vancomycin exist, such as linezolid (2000) and daptomycin (2003), the widespread use of vancomycin makes resistance to it a significant worry, especially for individual patients if resistant infections are not quickly identified and the patient continues an ineffective treatment. Vancomycin-resistant Enterococcus emerged in 1986. [ 99 ] Vancomycin resistance evolved in more common pathogenic organisms during the 1990s and 2000s, including vancomycin-intermediate S. aureus (VISA) and vancomycin-resistant S. aureus (VRSA). [ 100 ] [ 101 ] Agricultural use of avoparcin , another similar glycopeptide antibiotic, may have contributed to the evolution of vancomycin-resistant organisms. [ 102 ] [ 103 ] [ 104 ] [ 105 ]
One mechanism of resistance to vancomycin involves the alteration to the terminal amino acid residues of the NAM / NAG -peptide subunits, under normal conditions, D -alanyl- D -alanine, to which vancomycin binds. The D -alanyl- D -lactate variation results in the loss of one hydrogen-bonding interaction (4, as opposed to 5 for D -alanyl- D -alanine) possible between vancomycin and the peptide. This loss of just one point of interaction results in a 1000-fold decrease in affinity. The D -alanyl- D -serine variation causes a six-fold loss of affinity between vancomycin and the peptide, likely due to steric hindrance . [ 106 ]
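The quoted affinity losses can be translated into binding free-energy terms via the standard thermodynamic relation ΔΔG = RT·ln(fold change). The sketch below assumes body temperature (310 K); the function name is hypothetical.

```python
import math

R = 8.314   # gas constant, J/(mol*K)
T = 310.0   # assumed body temperature, K

def ddg_kj_per_mol(fold_loss):
    """Free-energy penalty for a given fold loss in binding affinity:
    delta-delta-G = R*T*ln(fold_loss), converted to kJ/mol."""
    return R * T * math.log(fold_loss) / 1000.0

ddg_lactate = ddg_kj_per_mol(1000)  # D-Ala-D-Lac variant: ~17.8 kJ/mol
ddg_serine = ddg_kj_per_mol(6)      # D-Ala-D-Ser variant: ~4.6 kJ/mol
```

The ~17.8 kJ/mol penalty makes concrete how the loss of a single hydrogen-bonding interaction can plausibly account for a 1000-fold drop in affinity.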
In enterococci, this modification appears to be due to the expression of an enzyme that alters the terminal residue. Three main resistance variants have been characterised to date among resistant Enterococcus faecium and E. faecalis populations.
A variant of vancomycin has been tested that binds to the resistant D-lactic acid variation in vancomycin-resistant bacterial cell walls and also binds well to the original target (vancomycin-susceptible bacteria). [ 107 ] [ 108 ]
In 2020, a team at University Hospital Heidelberg (Germany) restored vancomycin's antibacterial power by modifying the molecule with a cationic oligopeptide . The oligopeptide consists of six arginine units in position V N . Compared to unmodified vancomycin, the activity against vancomycin-resistant bacteria was enhanced by a factor of 1,000. [ 109 ] [ 110 ] This pharmacon is still in preclinical development .
Vancomycin was first isolated in 1953 by Edmund Kornfeld (working at Eli Lilly ) from a bacterium in a soil sample collected from the interior jungles of Borneo by a missionary, William M. Bouw. [ 111 ] The organism that produced it was eventually named Amycolatopsis orientalis . [ 13 ] The original indication for vancomycin was to treat penicillin-resistant Staphylococcus aureus . [ 13 ] [ 31 ]
The compound was initially called compound 05865, but was later given the generic name vancomycin, derived from the term "vanquish". [ 13 ] One quickly apparent advantage was that staphylococci did not develop significant resistance, despite serial passage in culture media containing vancomycin. The rapid development of penicillin resistance by staphylococci led to its being fast-tracked for approval by the Food and Drug Administration . In 1958, Eli Lilly first marketed vancomycin hydrochloride under the trade name Vancocin. [ 31 ]
Vancomycin never became the first-line treatment for S. aureus , for several reasons.
In 2004, Eli Lilly licensed Vancocin to ViroPharma in the U.S., Flynn Pharma in the UK, and Aspen Pharmacare in Australia. The patent expired in the early 1980s, and the FDA authorized the sale of several generic versions in the U.S., including from manufacturers Bioniche Pharma, Baxter Healthcare , Sandoz , Akorn - Strides , and Hospira . [ 113 ]
The combination of vancomycin powder and povidone-iodine lavage may reduce the risk of periprosthetic joint infection in hip and knee arthroplasties. [ 114 ]
https://en.wikipedia.org/wiki/Vancomycin
The Vancouver Area Network of Drug Users or VANDU is a not-for-profit organization [ 1 ] and advocacy group based in Vancouver , British Columbia , Canada. The group believes that all drug users are entitled to rights and freedoms . The group's members have been actively involved in lobbying for support of Insite , North America's first safe injection site , located in the Downtown Eastside of Vancouver. [ 2 ]
Its board of directors consists entirely of current and former drug addicts. [ 3 ] It was co-founded by Ann Livingston and Bud Osborn . [ 1 ] Livingston had previously established a short-lived injection site called "Back Alley" on Powell Street in 1995. [ 4 ]
The group received a grant in 2022 from the city to perform street cleaning, but the contract was rescinded after the organization failed to perform the work and instead used the grant funds for other purposes.
VANDU was created in September 1997 to advocate for the delivery of health care services to drug users living in Vancouver, who had been exposed to increasing rates of hepatitis C and HIV as a result of sharing needles , [ 1 ] and to address risks to their health, such as drug overdose . [ 2 ] It operated an unauthorized drug consumption site, providing supervised illegal drug use, for about four years until it was shut down in 2014. [ 5 ]
A few dozen people first met in Oppenheimer Park on 9 September 1997 in response to messages posted by Livingston on utility poles throughout the Downtown Eastside. [ 4 ] The assembled group of people decided to form an organization, and adopted the name Vancouver Area Network of Drug Users a year later. [ 4 ] One of the attendees was Donald MacPherson, who later became drug-policy coordinator for Vancouver municipal government, and who also established the Canadian Drug Policy Coalition. [ 4 ]
Membership grew to about 100 individuals in a few months, and eventually to over 2,000. [ 6 ] : 10 The organization's membership is open to all individuals, but those elected to the board of directors must be current or former addicts, and votes at the organization's meetings may only be cast by current or former addicts. [ 6 ]
VANDU was given a $320,000 grant from the City of Vancouver in 2022 to provide street cleaning services in the Hastings Street encampment. Questions were raised when VANDU was not seen working and street cleanliness continued to deteriorate. The organization eventually admitted to diverting grant money intended for street cleaning into its general funds. [ 7 ] $160,000 of the grant had been paid out; however, the City of Vancouver terminated the contract when services were not delivered as expected. [ 8 ] [ 9 ] In 2023, the city council voted to deny VANDU a $7,500 grant for an arts program, citing the gross misuse of public funds in 2022, making it the only one of 84 grants recommended by city staff to be denied by city council. [ 10 ]
The organization also engages in local issues pertaining to Downtown Eastside area residents. [ 11 ]
VANDU defends harm reduction services and supervised injection facilities . [ 12 ] In recent years, VANDU has been engaging with the Drug Users Liberation Front (DULF) [ 13 ] to provide " safe supply " services. [ 14 ] The group handed out cocaine, meth, and heroin to users in July 2021, and city councilor Jean Swanson participated in the distribution. [ 15 ] The Washington Examiner said it is uncertain whether the substances distributed by VANDU were obtained lawfully. [ 15 ] The DULF founders Jeremy Kalicum and Eris Nyx were charged with possession with intent to distribute in May 2024. [ 16 ] [ 17 ] The pair had sourced the illicit drugs that were distributed by DULF and VANDU through the dark web . [ 18 ] [ 19 ]
https://en.wikipedia.org/wiki/Vancouver_Area_Network_of_Drug_Users
Vandana Shiva (born 5 November 1952) is an Indian scholar , environmental activist , food sovereignty advocate, ecofeminist and anti-globalization author. [ 2 ] Based in Delhi , Shiva has written more than 20 books. [ 3 ] She is often referred to as "Gandhi of grain" for her activism associated with the anti-GMO movement. [ 4 ]
Shiva is one of the leaders and board members of the International Forum on Globalization (with Jerry Mander , Ralph Nader , and Helena Norberg-Hodge ), and a figure of the anti-globalisation movement. [ 5 ] She has argued in favour of many traditional practices, as in her interview in the book Vedic Ecology (by Ranchor Prime ). She is a member of the scientific committee of the Fundacion IDEAS , Spain's Socialist Party's think tank. She is also a member of the International Organization for a Participatory Society. [ 6 ]
Vandana Shiva was born in Dehradun . Her father was a conservator of forests, and her mother was a farmer with a love for nature. She was educated at St. Mary's Convent High School, Nainital , and at the Convent of Jesus and Mary , Dehradun. [ 7 ]
Shiva studied physics at Punjab University in Chandigarh , graduating with a Bachelor of Science degree in 1972. [ 8 ] After a brief stint at the Bhabha Atomic Research Centre , she moved to Canada to pursue a master's degree in the philosophy of science at the University of Guelph in 1977, where she wrote a thesis entitled "Changes in the concept of periodicity of light". [ 8 ] [ 9 ] In 1978, she received her PhD in philosophy from the University of Western Ontario , [ 10 ] focusing on the philosophy of physics . Her dissertation, titled "Hidden variables and locality in quantum theory", discussed the mathematical and philosophical implications of hidden variable theories that fall outside the purview of Bell's theorem . [ 11 ] She later went on to pursue interdisciplinary research in science, technology, and environmental policy at the Indian Institute of Science and the Indian Institute of Management in Bangalore . [ 7 ]
Vandana Shiva has written and spoken extensively about advances in the fields of agriculture and food. Intellectual property rights , biodiversity , biotechnology , bioethics , and genetic engineering are among the fields where Shiva has fought through activist campaigns. She has assisted grassroots organisations of the Green movement in Africa, Asia, Latin America, Ireland, Switzerland, and Austria with opposition to advances in agricultural development via genetic engineering.
In 1982, she founded the Research Foundation for Science, Technology and Ecology. [ 12 ] This led to the creation of Navdanya in 1991, a national movement to protect the diversity and integrity of living resources, especially native seed, and to promote organic farming and fair trade. [ 13 ] Navdanya, which translates to "Nine Seeds" or "New Gift", is an initiative of the RFSTE to educate farmers about the benefits of maintaining diverse and individualised crops rather than accepting offers from monoculture food producers. The initiative established over 40 seed banks across India to provide regional opportunity for diverse agriculture. In 2004, Shiva started Bija Vidyapeeth, an international college for sustainable living in Doon Valley, Uttarakhand, in collaboration with Schumacher College , UK. [ 14 ]
In the area of intellectual property rights and biodiversity, Shiva and her team at the Research Foundation for Science, Technology and Ecology challenged the biopiracy of neem, basmati and wheat.
In 1990, she wrote a report for the FAO on Women and Agriculture titled "Most Farmers in India are Women". She founded the gender unit at the International Centre for Mountain Development (ICIMOD) in Kathmandu and was a founding board member of the Women's Environment & Development Organisation (WEDO). [ 15 ] [ 16 ] She received the Right Livelihood Award in 1993, an award established by Swedish-German philanthropist Jakob von Uexkull . [ 17 ]
Shiva's book Making Peace With the Earth discusses biodiversity and the relationship between communities and nature. "Accordingly, she aligns the destruction of natural biodiversity with the dismantling of traditional communities—those who 'understand the language of nature'". [ 18 ] David Wright wrote in a review of the book that to Shiva, "the Village becomes a symbol, almost a metaphor for 'the local' in all nations". [ 18 ] [ 19 ]
Shiva has also served as an advisor to governments in India and abroad as well as non-governmental organisations, including the International Forum on Globalization, the Women's Environment & Development Organisation and the Third World Network. She chairs the Commission on the Future of Food set up by the Region of Tuscany in Italy and is a member of the Scientific Committee that advised former prime minister Zapatero of Spain. Shiva is a member of the Steering Committee of the Indian People's Campaign Against WTO . She is a councilor of the World Future Council . Shiva serves on Government of India Committees on Organic Farming. She participated in the Stock Exchange of Visions project in 2007.
In 2021, she advised the government of Sri Lanka to ban inorganic fertilizers and pesticides, [ 20 ] [ 21 ] stating "This decision will definitely help farmers become more prosperous. Use of organic fertilizer will help provide agri products rich with nutrients while retaining the fertility of the land." [ 22 ] The policy, applied overnight with the main purpose of saving state foreign exchange spent on imported fertilizers, [ 23 ] caused a crisis with a significant reduction of farming output in several sectors, hitting the tea industry in particular [ 24 ] [ 25 ] [ 26 ] and reducing rice yields by one third. [ 22 ] The ban was overturned seven months later. [ 21 ]
Her work on agriculture started in 1984 after the violence in Punjab and the Bhopal disaster caused by a gas leak from Union Carbide 's pesticide manufacturing plant. Her studies for the UN University led to the publication of her book The Violence of the Green Revolution . [ 27 ] [ 28 ] [ 29 ]
In an interview with David Barsamian , Shiva argues that the seed-chemical package promoted by green revolution agriculture has depleted fertile soil and destroyed living ecosystems. [ 30 ] In her work Shiva cites data allegedly demonstrating that today there are over 1400 pesticides that may enter the food system across the world. [ 31 ]
Shiva is a founding councillor of the World Future Council (WFC). The WFC was formed in 2007 "to speak on behalf of policy solutions that serve the interests of future generations." Their primary focus has been on climate security. [ 32 ]
She supports the crime of ecocide being introduced to the International Criminal Court stating "The ideal of limitless growth is leading to limitless violations of the rights of the Earth and of the rights of nature. This is ecocide". [ 33 ] [ 34 ]
Vandana supports the idea of seed freedom, or the rejection of patents on new plant lines or cultivars. She has campaigned against the implementation of the WTO 1994 Trade Related Intellectual Property Rights (TRIPS) agreement , which broadens the scope of patents to include life forms. Shiva has criticised the agreement as having close ties with the corporate sector and opening the door to further patents on life. [ 35 ] Shiva calls the patenting of life 'biopiracy', and has fought against attempted patents of several indigenous plants, such as basmati. [ 36 ] In 2005, Shiva's organisation was one of three that won a 10-year battle in the European Patent Office against the biopiracy of neem by the US Department of Agriculture and the corporation WR Grace . [ 37 ] In 1998, Shiva's organisation Navdanya began a campaign against the biopiracy of basmati rice by US corporation RiceTec Inc. In 2001, following intensive campaigning, RiceTec lost most of its claims to the patent.
Shiva strongly opposes golden rice , a breed of rice that has been genetically engineered to biosynthesise beta-carotene, a precursor of vitamin A. Shiva contends that Golden Rice is more harmful than beneficial in her explanation of what she calls the "Golden Rice hoax": "Unfortunately, Vitamin A rice is a hoax, and will bring further dispute to plant genetic engineering where public relations exercises seem to have replaced science in promotion of untested, unproven and unnecessary technology... This is a recipe for creating hunger and malnutrition, not solving it." [ 38 ] Adrian Dubock says that golden rice is as cheap as other rice, and that vitamin A deficiency is the greatest cause of blindness and contributes to 28% of global preschool child mortality. [ 39 ] Shiva has claimed that the women of Bengal grow and eat 150 greens which can do the same, [ 40 ] while environmental consultant Patrick Moore suggests that most of these 250 million children do not eat much else than a bowl of rice a day. [ 41 ] In the 2013 report "The economic power of the Golden Rice opposition", two economists, Wesseler and Zilberman, from Munich University and the University of California, Berkeley, respectively, calculated that the absence of Golden Rice in India had caused the loss of over 1.4 million lives in the previous ten years. [ 42 ]
According to Shiva, "Soaring seed prices in India have resulted in many farmers being mired in debt and turning to suicide". The creation of seed monopolies, the destruction of alternatives, the collection of superprofits in the form of royalties, and the increasing vulnerability of monocultures have created a context for debt, suicides, and agrarian distress . According to data from the Indian government, nearly 75 percent of rural debt is due to purchased inputs. Shiva claims that farmers' debt grows as GMO corporations' profits grow. According to Shiva, it is in this systemic sense that GM seeds are seeds of suicide.
The International Food Policy Research Institute (IFPRI) twice analysed academic articles and government data, finding that farmer suicides had decreased and that there was no evidence of a "resurgence" of farmer suicide. [ 43 ] [ 44 ]
Shiva plays a major role in the global ecofeminist movement. According to her 2004 article Empowering Women, [ 45 ] a more sustainable and productive approach to agriculture can be achieved by reinstating a system of farming in India that is more centred on engaging women. She advocates against the prevalent "patriarchal logic of exclusion," claiming that a woman-focused system would be a great improvement. [ 46 ] She believes that ecological destruction and industrial catastrophes threaten daily life, and that the maintenance of these problems has become women's responsibility. [ 47 ]
Cecile Jackson has criticised some of Shiva's views as essentialist . [ 48 ]
Shiva co-wrote the book Ecofeminism in 1993 with "German anarchist and radical feminist sociologist" [ 49 ] Maria Mies. It combined Western and Southern feminism with "environmental, technological and feminist issues, all incorporated under the term ecofeminism". [ 49 ] These theories are combined throughout the book in essays by Shiva and Mies.
Stefanie Lay described the book as a collection of thought-provoking essays but also found in it a lack of new ecofeminist theories and contemporary analysis, as well as "overall failure to acknowledge the work of others". [ 50 ]
In June 2014, Indian and international media reported that Navdanya and Vandana Shiva were named in a leaked, classified report by India's Intelligence Bureau (IB), which was prepared for the Indian Prime Minister's Office. [ 51 ]
The leaked report says that campaigning activities of Indian NGOs such as Navdanya are hampering India's growth and development. In its report, the IB said that Indian NGOs, including Navdanya, receive money from foreign donors under the 'charitable garb' of campaigning for human rights or women's equality , but instead use the money for 'nefarious purposes' . "These foreign donors lead local NGOs to provide field reports which are used to build a record against India and serve as tools for the strategic foreign policy interests of the Western governments," the IB report states. [ 52 ]
Investigative journalist Michael Specter , in an article in The New Yorker on 25 August 2014 called "Seeds of Doubt", [ 10 ] raised concerns over a number of Shiva's claims regarding GMOs and some of her campaigning methods. He wrote: "Shiva's absolutism about G.M.O.s can lead her in strange directions. In 1999, ten thousand people were killed and millions were left homeless when a cyclone hit India's eastern coastal state of Orissa. When the U.S. government dispatched grain and soy to help feed the desperate victims, Shiva held a news conference in New Delhi and said that the donation was proof that 'the United States has been using the Orissa victims as guinea pigs' for genetically-engineered products, although she made no mention that those same products are approved and consumed in the United States. She also wrote to the international relief agency Oxfam to say that she hoped it wasn't planning to send genetically modified foods to feed the starving survivors." [ 10 ]
Shiva responded that Specter was "ill informed" [ 53 ] and that "for the record, ever since I sued Monsanto in 1999 for its illegal Bt cotton trials in India, I have received death threats", adding that the "concerted PR assault on me for the last two years from Lynas, Specter and an equally vocal Twitter group is a sign that the global outrage against the control over our seed and food, by Monsanto through GMOs, is making the biotech industry panic." [ 53 ] David Remnick , the editor of the New Yorker , responded by publishing a letter supporting Specter's article. [ 54 ]
Shiva has also been accused of plagiarism. Birendra Nayak noted that Shiva copied verbatim from a 1996 article in Voice Gopalpur in her 1998 book Stronger than Steel , [ 55 ] and that in 2016, she plagiarised several paragraphs of an article by S Faizi on the Plachimada/Coca-Cola issue published in The Statesman . [ 56 ]
Journalist Keith Kloor , in an article published in Discover on 23 October 2014 titled "The Rich Allure of a Peasant Champion", revealed that Shiva charges $40,000 per lecture, plus a business-class air ticket from New Delhi. Kloor wrote: "She is often heralded as a tireless 'defender of the poor,' someone who has courageously taken her stand among the peasant farmers of India. Let it be noted, however, that this champion of the downtrodden doesn't exactly live a peasant's lifestyle." [ 57 ]
Stewart Brand in Whole Earth Discipline described some of Shiva's statements as pseudo-scientific, calling her warnings about "heritable sterility" ( Stolen Harvest , 2000) a "biological impossibility" but also plagiarism from Geri Guidetti, owner of the seed supplier company Ark Institute, and a "distraction" created by inflating the potential of terminator genes based on a single 1998 patent granted to a US company. [ 58 ] Brand also criticised the position of anti-GMO activists, including Shiva, who forced Zambia 's government to reject internationally donated corn in 2001–02 because it was "poisoned", as well as during the cyclone disaster in India. On the latter Shiva argued that an "emergency cannot be used as market opportunity", to which Brand responded that "anyone who encourages other people to starve on principle should do some of the starving themselves". In 1998, Shiva also protested against the Bt cotton program in India, calling it "seeds of suicide, seeds of slavery, seeds of despair" and claiming she was protecting the farmers. Restrictive laws established in India under anti-GMO lobbying, however, led to widespread grassroots "seed piracy", in which Indian farmers illegally planted seeds of Bt cotton and Bt brinjal , obtained either from experimental plantations or from Bangladesh (where they are planted legally), because of their increased yield and reduced pesticide usage. [ 58 ] As of 2005, over 2.5 million hectares were planted with "unofficial" Bt cotton in India, of which Noel Kingsbury said:
Shiva's "Operation Cremate Monsanto" had spectacularly failed, its anti-GM stance borrowed from Western intellectuals had made no headway with Indian farmers, who showed they were not passive recipients of either technology or propaganda, but could take an active role in shaping their lives. What they did is also perhaps more genuinely subversive of multinational capitalism than anything GM's opponents have ever managed.
In India, farmers planting GM crops illegally eventually formed the Shetkari Sanghatana movement, calling for reform of the restrictive laws created under anti-GMO lobbying and as of 2020 an estimated 25% of cotton farmed is GM. [ 59 ]
Shiva has repeatedly gone on record characterizing science as "a very narrow, patriarchal project" that has been around only "for a short period of history" and argued that "we name 'science' what is mechanistic and reductionist." She condemns the "kind of science" that " Bacon , Descartes , and others, who are called 'fathers of modern science', have created," because scientists, as she claims, "declare nature as dead" and then use a "mechanistic mode" to analyze it. Shiva assigns the so-called scientific progress of the last centuries to the advance of capitalism , a time when the new exploitation needed a knowledge that would justify it. [ 60 ] In her 8 March 2017 speech to the European Parliament , Shiva stated that the "rise of masculinist science with Descartes, Newton, Bacon, led to the domination of [a] reductionist mechanistic science and a subjugation of [the] knowledge systems [that are] based on interconnections and relationships," a knowledge, she argued, that "includes all indigenous knowledge systems, and women’s knowledge." [ 61 ] [ 62 ]
Vandana Shiva has been interviewed for a number of documentary films, including Freedom Ahead , Roshni, [ 63 ] Deconstructing Supper: Is Your Food Safe?, The Corporation , Thrive, Dirt! The Movie , Normal is Over, This is What Democracy Looks Like (a documentary about the Seattle WTO protests of 1999 ), [ 64 ] [ 65 ] and Michael Moore and Jeff Gibbs' Planet of the Humans . [ 66 ]
Shiva's focus on water has caused her to appear in a number of films on this topic. These films include "Ganga From the Ground Up," a documentary on water issues in the river Ganges; [ 67 ] Blue Gold: World Water Wars by Sam Bozzo ; Irena Salina 's documentary Flow: For Love of Water (in competition at the 2008 Sundance Film Festival ), and the PBS NOW documentary On Thin Ice. [ 68 ]
On the topic of genetically modified crops, she was featured in the documentary Fed Up! (2002), on genetic engineering, industrial agriculture and sustainable alternatives; and the documentary The World According to Monsanto , a film made by the French independent journalist Marie-Monique Robin .
Shiva appeared in a documentary film about the Dalai Lama , entitled Dalai Lama Renaissance . [ 69 ]
In 2010, Shiva was interviewed in a documentary about honeybees and colony collapse disorder , entitled Queen of the Sun . [ 70 ]
She appears in the French films Demain [ 71 ] and Solutions locales pour un désordre global .
In 2016, she appeared in the vegan documentary film H.O.P.E.: What You Eat Matters , where she was critical of the animal agriculture industry and meat -intensive diets. [ 72 ]
She was recognized as one of the BBC's 100 women of 2019. [ 79 ]
|
https://en.wikipedia.org/wiki/Vandana_Shiva
|
In combinatorics , Vandermonde's identity (or Vandermonde's convolution ) is the following identity for binomial coefficients :
{\displaystyle {\binom {m+n}{r}}=\sum _{k=0}^{r}{\binom {m}{k}}{\binom {n}{r-k}}}
for any nonnegative integers r , m , n . The identity is named after Alexandre-Théophile Vandermonde (1772), although it was already known in 1303 by the Chinese mathematician Zhu Shijie . [ 1 ]
There is a q -analog to this theorem called the q -Vandermonde identity .
Vandermonde's identity can be generalized in numerous ways, including to the identity
{\displaystyle {\binom {r+s}{m+n}}=\sum _{k}{\binom {r}{m+k}}{\binom {s}{n-k}}.}
In general, the product of two polynomials with degrees m and n , respectively, is given by
{\displaystyle {\biggl (}\sum _{i=0}^{m}a_{i}x^{i}{\biggr )}{\biggl (}\sum _{j=0}^{n}b_{j}x^{j}{\biggr )}=\sum _{r=0}^{m+n}{\biggl (}\sum _{k=0}^{r}a_{k}b_{r-k}{\biggr )}x^{r},}
where we use the convention that a i = 0 for all integers i > m and b j = 0 for all integers j > n . By the binomial theorem ,
{\displaystyle (1+x)^{m+n}=\sum _{r=0}^{m+n}{\binom {m+n}{r}}x^{r}.}
Using the binomial theorem also for the exponents m and n , and then the above formula for the product of polynomials, we obtain
{\displaystyle (1+x)^{m+n}=(1+x)^{m}(1+x)^{n}=\sum _{r=0}^{m+n}{\biggl (}\sum _{k=0}^{r}{\binom {m}{k}}{\binom {n}{r-k}}{\biggr )}x^{r},}
where the above convention for the coefficients of the polynomials agrees with the definition of the binomial coefficients, because both give zero for all i > m and j > n , respectively.
By comparing coefficients of x r , Vandermonde's identity follows for all integers r with 0 ≤ r ≤ m + n . For larger integers r , both sides of Vandermonde's identity are zero due to the definition of binomial coefficients.
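The identity is also easy to check by brute force; a minimal Python sketch using the standard-library math.comb (the range of small test values is arbitrary):

```python
from math import comb

def vandermonde_rhs(m, n, r):
    # Right-hand side of Vandermonde's identity: sum over the number k
    # of items taken from the first group of m items.
    return sum(comb(m, k) * comb(n, r - k) for k in range(r + 1))

# Verify C(m+n, r) == sum_k C(m, k) * C(n, r-k) for a grid of small values,
# including r > m + n, where both sides vanish (math.comb(n, k) is 0 for k > n).
for m in range(8):
    for n in range(8):
        for r in range(m + n + 3):
            assert comb(m + n, r) == vandermonde_rhs(m, n, r)
```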
Vandermonde's identity also admits a combinatorial double counting proof , as follows. Suppose a committee consists of m men and n women. In how many ways can a subcommittee of r members be formed? The answer is
{\displaystyle {\binom {m+n}{r}}.}
The answer is also the sum over all possible values of k , of the number of subcommittees consisting of k men and r − k women:
{\displaystyle \sum _{k=0}^{r}{\binom {m}{k}}{\binom {n}{r-k}}.}
Take a rectangular grid of r × ( m + n − r ) squares. There are
{\displaystyle {\binom {m+n}{r}}}
paths that start on the bottom left vertex and, moving only upwards or rightwards, end at the top right vertex (this is because r right moves and m + n − r up moves must be made (or vice versa) in any order, and the total path length is m + n ). Call the bottom left vertex (0, 0).
There are {\displaystyle {\binom {m}{k}}} paths starting at (0, 0) that end at ( k , m − k ), as k right moves and m − k upward moves must be made (and the path length is m ). Similarly, there are {\displaystyle {\binom {n}{r-k}}} paths starting at ( k , m − k ) that end at ( r , m + n − r ), as a total of r − k right moves and ( m + n − r ) − ( m − k ) upward moves must be made and the path length must be r − k + ( m + n − r ) − ( m − k ) = n . Thus there are
{\displaystyle {\binom {m}{k}}{\binom {n}{r-k}}}
paths that start at (0, 0), end at ( r , m + n − r ), and go through ( k , m − k ). This is a subset of all paths that start at (0, 0) and end at ( r , m + n − r ), so sum from k = 0 to k = r (as the point ( k , m − k ) is confined to be within the square) to obtain the total number of paths that start at (0, 0) and end at ( r , m + n − r ).
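The path-counting argument can be verified directly by enumerating the monotone lattice paths for small parameters; a brute-force Python sketch (the values of m, n, r are arbitrary):

```python
from itertools import product
from math import comb

def monotone_paths(rights, ups):
    # All monotone lattice paths as strings of 'R' (right) and 'U' (up) moves.
    return ["".join(p) for p in product("RU", repeat=rights + ups)
            if p.count("R") == rights]

def passes_through(path, x, y):
    # Does the path, started at (0, 0), visit the point (x, y)?
    px = py = 0
    if (px, py) == (x, y):
        return True
    for step in path:
        if step == "R":
            px += 1
        else:
            py += 1
        if (px, py) == (x, y):
            return True
    return False

m, n, r = 3, 4, 2                      # small sample values
paths = monotone_paths(r, m + n - r)   # every path from (0, 0) to (r, m+n-r)
assert len(paths) == comb(m + n, r)

# Each path crosses the line x + y = m at exactly one lattice point
# (k, m - k), so these counts partition the full set of paths.
counts = [sum(passes_through(p, k, m - k) for p in paths) for k in range(r + 1)]
assert counts == [comb(m, k) * comb(n, r - k) for k in range(r + 1)]
assert sum(counts) == comb(m + n, r)
```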
One can generalize Vandermonde's identity as follows:
{\displaystyle {\binom {n_{1}+\dots +n_{p}}{m}}=\sum _{k_{1}+\cdots +k_{p}=m}{\binom {n_{1}}{k_{1}}}{\binom {n_{2}}{k_{2}}}\cdots {\binom {n_{p}}{k_{p}}}.}
This identity can be obtained through the algebraic derivation above when more than two polynomials are used, or through a simple double counting argument.
On the one hand, one chooses k 1 {\displaystyle \textstyle k_{1}} elements out of a first set of n 1 {\displaystyle \textstyle n_{1}} elements; then k 2 {\displaystyle \textstyle k_{2}} out of another set, and so on, through p {\displaystyle \textstyle p} such sets, until a total of m {\displaystyle \textstyle m} elements have been chosen from the p {\displaystyle \textstyle p} sets. One therefore chooses m {\displaystyle \textstyle m} elements out of n 1 + ⋯ + n p {\displaystyle \textstyle n_{1}+\dots +n_{p}} in the left-hand side, which is also exactly what is done in the right-hand side.
The identity generalizes to non-integer arguments. In this case, it is known as the Chu–Vandermonde identity (see Askey 1975, pp. 59–60 ) and takes the form
{\displaystyle {\binom {s+t}{n}}=\sum _{k=0}^{n}{\binom {s}{k}}{\binom {t}{n-k}}}
for general complex-valued s and t and any non-negative integer n . It can be proved along the lines of the algebraic proof above by multiplying the binomial series for ( 1 + x ) s {\displaystyle (1+x)^{s}} and ( 1 + x ) t {\displaystyle (1+x)^{t}} and comparing terms with the binomial series for ( 1 + x ) s + t {\displaystyle (1+x)^{s+t}} .
This identity may be rewritten in terms of the falling Pochhammer symbols as
{\displaystyle (s+t)_{n}=\sum _{k=0}^{n}{\binom {n}{k}}(s)_{k}(t)_{n-k},}
in which form it is clearly recognizable as an umbral variant of the binomial theorem (for more on umbral variants of the binomial theorem, see binomial type ). The Chu–Vandermonde identity can also be seen to be a special case of Gauss's hypergeometric theorem , which states that
{\displaystyle \;_{2}F_{1}(a,b;c;1)={\frac {\Gamma (c)\,\Gamma (c-a-b)}{\Gamma (c-a)\,\Gamma (c-b)}},}
where 2 F 1 {\displaystyle \;_{2}F_{1}} is the hypergeometric function and Γ ( n + 1 ) = n ! {\displaystyle \Gamma (n+1)=n!} is the gamma function . One regains the Chu–Vandermonde identity by taking a = − n and applying the identity
{\displaystyle {\binom {n}{k}}=(-1)^{k}{\frac {(-n)_{k}}{k!}}}
liberally.
The Rothe–Hagen identity is a further generalization of this identity.
When both sides have been divided by the expression on the left, so that the sum is 1, then the terms of the sum may be interpreted as probabilities. The resulting probability distribution is the hypergeometric distribution . That is the probability distribution of the number of red marbles in r draws without replacement from an urn containing n red and m blue marbles.
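This connection can be illustrated with a short Python sketch (marble counts chosen arbitrarily): dividing each term by the left-hand side gives the hypergeometric probabilities, which by Vandermonde's identity sum to 1.

```python
from math import comb

def hypergeom_pmf(k, red, blue, draws):
    # Probability of drawing exactly k red marbles in `draws` draws without
    # replacement from an urn of `red` red and `blue` blue marbles.
    return comb(red, k) * comb(blue, draws - k) / comb(red + blue, draws)

red, blue, draws = 6, 9, 5
probs = [hypergeom_pmf(k, red, blue, draws) for k in range(draws + 1)]
# The probabilities sum to 1: this is Vandermonde's identity, normalized.
assert abs(sum(probs) - 1.0) < 1e-12
```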
|
https://en.wikipedia.org/wiki/Vandermonde's_identity
|
In algebra , the Vandermonde polynomial of an ordered set of n variables X 1 , … , X n {\displaystyle X_{1},\dots ,X_{n}} , named after Alexandre-Théophile Vandermonde , is the polynomial:
{\displaystyle V_{n}=\prod _{1\leq i<j\leq n}(X_{j}-X_{i}).}
(Some sources use the opposite order ( X i − X j ) {\displaystyle (X_{i}-X_{j})} , which changes the sign ( n 2 ) {\displaystyle {\binom {n}{2}}} times: thus in some dimensions the two formulas agree in sign, while in others they have opposite signs.)
It is also called the Vandermonde determinant, as it is the determinant of the Vandermonde matrix .
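This equality is straightforward to check numerically; a small Python sketch (sample points arbitrary) comparing a Leibniz expansion of the Vandermonde determinant with the product formula, under the sign convention det V = ∏ over i < j of (x_j − x_i):

```python
from itertools import permutations

def sign(p):
    # Parity of a permutation via its inversion count.
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return -1 if inv % 2 else 1

def vandermonde_det(xs):
    # Leibniz expansion of det(V), where V[i][j] = xs[i] ** j.
    n = len(xs)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i, j in enumerate(p):
            term *= xs[i] ** j
        total += term
    return total

def vandermonde_product(xs):
    # Product formula: the product of (x_j - x_i) over all i < j.
    out = 1
    for j in range(len(xs)):
        for i in range(j):
            out *= xs[j] - xs[i]
    return out

# Integer sample points keep the comparison exact.
for xs in [[2, 3], [2, 3, 5], [1, 4, 9, 16], [0, -1, 2, -3, 4]]:
    assert vandermonde_det(xs) == vandermonde_product(xs)
```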
The value depends on the order of the terms: it is an alternating polynomial , not a symmetric polynomial .
The defining property of the Vandermonde polynomial is that it is alternating in the entries, meaning that permuting the X i {\displaystyle X_{i}} by an odd permutation changes the sign, while permuting them by an even permutation does not change the value of the polynomial – in fact, it is the basic alternating polynomial, as will be made precise below.
It thus depends on the order, and is zero if two entries are equal – this also follows from the formula, but is also a consequence of being alternating: if two variables are equal, then exchanging them leaves the polynomial unchanged, yet being alternating requires the exchange to negate it, yielding V n = − V n , {\displaystyle V_{n}=-V_{n},} and thus V n = 0 {\displaystyle V_{n}=0} (assuming the characteristic is not 2; otherwise being alternating is equivalent to being symmetric).
Among all alternating polynomials, the Vandermonde polynomial is the lowest degree monic polynomial.
Conversely, the Vandermonde polynomial is a factor of every alternating polynomial: as shown above, an alternating polynomial vanishes if any two variables are equal, and thus must have ( X i − X j ) {\displaystyle (X_{i}-X_{j})} as a factor for all i ≠ j {\displaystyle i\neq j} .
Thus, the Vandermonde polynomial (together with the symmetric polynomials ) generates the alternating polynomials .
The first derivative is ∂ i Δ n = Δ n ∑ 1 ≤ j ≤ n : i ≠ j 1 X i − X j {\displaystyle \partial _{i}\Delta _{n}=\Delta _{n}\sum _{1\leq j\leq n:i\neq j}{\frac {1}{X_{i}-X_{j}}}} .
Since it is the lowest degree monic alternating polynomial, and ∑ i ∂ i 2 V n {\displaystyle \sum _{i}\partial _{i}^{2}V_{n}} is also alternating, this implies ∑ i ∂ i 2 V n = 0 {\displaystyle \sum _{i}\partial _{i}^{2}V_{n}=0} , i.e. it is a harmonic function .
Its square is widely called the discriminant , though some sources call the Vandermonde polynomial itself the discriminant.
The discriminant (the square of the Vandermonde polynomial: Δ = V n 2 {\displaystyle \Delta =V_{n}^{2}} ) does not depend on the order of terms, as ( − 1 ) 2 = 1 {\displaystyle (-1)^{2}=1} , and is thus an invariant of the unordered set of points.
If one adjoins the Vandermonde polynomial to the ring of symmetric polynomials in n variables Λ n {\displaystyle \Lambda _{n}} , one obtains the quadratic extension Λ n [ V n ] / ⟨ V n 2 − Δ ⟩ {\displaystyle \Lambda _{n}[V_{n}]/\langle V_{n}^{2}-\Delta \rangle } , which is the ring of alternating polynomials .
Given a polynomial, the Vandermonde polynomial of its roots is defined over the splitting field ; for a non- monic polynomial, with leading coefficient a , one may define the Vandermonde polynomial as
{\displaystyle V_{n}=a^{n-1}\prod _{1\leq i<j\leq n}(X_{j}-X_{i})}
(multiplying with a leading term) to accord with the discriminant.
Over arbitrary rings , one instead uses a different polynomial to generate the alternating polynomials – see (Romagny, 2005).
The Vandermonde determinant is a very special case of the Weyl denominator formula applied to the trivial representation of the special unitary group S U ( n ) {\displaystyle \mathrm {SU} (n)} .
|
https://en.wikipedia.org/wiki/Vandermonde_polynomial
|
Vandortuzumab vedotin ( INN ; development code RG7450 ) is a humanized monoclonal antibody designed for the treatment of cancer. [ 1 ] [ 2 ]
This drug was developed by Genentech / Roche . Development was discontinued in 2017. [ 3 ]
|
https://en.wikipedia.org/wiki/Vandortuzumab_vedotin
|
Vanessa Lorraine Allen Sutherland is a corporate lawyer and former chairperson of the U.S. Chemical Safety and Hazard Investigation Board (CSB).
Sutherland was born at Sibley Memorial Hospital in Washington, D.C. She lived in Tantallon, Maryland , where she attended Queen Anne School. [ 3 ] She graduated from high school at the age of 16 and enrolled at Drew University , where she received a B.A. in political science and art history, and later attended American University , where she received a J.D. and M.B.A. [ 4 ] After graduating from college, she moved to Fort Washington, Maryland . [ 3 ]
After graduating from Drew, Sutherland worked at the office of the Inspector General of the Department of Energy prior to attending law school. While attending American University, she served as an associate at Federal Deposit Insurance Corporation and a clerk at Fulbright & Jaworski . [ 4 ]
After graduating from law school, she worked as a corporate attorney at the telecommunications company MCI Inc. At this company, she became vice president and deputy general counsel of Digex , a subsidiary. She later worked as a counsel for the tobacco product producer Altria (formerly Philip Morris Companies, Inc.). [ 5 ]
In 2011, Sutherland began government service as chief counsel for the Pipeline and Hazardous Materials Safety Administration . [ 6 ] [ 7 ]
Sutherland was nominated by President Barack Obama to the U.S. Chemical Safety Board in March 2015 after the resignation of Rafael Moure-Eraso over allegations of mismanagement. She was confirmed by the Senate in August 2015. [ 8 ]
In 2017, Sutherland was chairperson of the agency when the Trump administration attempted to defund the CSB for the 2018 United States federal budget . [ 9 ] In March 2018, the Office of Management and Budget informed Sutherland that the Trump administration had again proposed to shut down the agency as part of the 2019 United States federal budget . This caused Sutherland to resign despite having two years left in her five-year term. [ 10 ] After leaving the CSB, Sutherland joined Norfolk Southern Railway as a vice president. [ 11 ]
The agency was ultimately not defunded after the House Appropriations Committee opposed the Trump administration's proposal and proposed a $1 million increase in the agency's 2019 budget. [ 12 ] Kristen Kulinowski became the interim executive after Sutherland's departure until Katherine Lemos was confirmed as chair in March 2020. [ 13 ] [ 14 ] [ 15 ]
CSB says it closed thirteen incident investigations under Sutherland. [ 12 ]
|
https://en.wikipedia.org/wiki/Vanessa_Allen_Sutherland
|
Vanessa Claire Wood (born 25 February 1983) is an American engineer who is a professor at the ETH Zurich . She holds a chair in Materials and Device Engineering and serves as Vice President of Knowledge Transfer and Corporate Relations.
Wood earned her bachelor's degree in physics at Yale University . She moved to the Massachusetts Institute of Technology for graduate studies in electrical engineering, where she researched quantum dots in metal oxide structures with Vladimir Bulović. [ 1 ] Her research developed strategies to integrate colloidal quantum dots in optoelectronic devices. [ 2 ] She created three light-emitting diodes in which air-stable metal oxides surround the quantum dot active layers, [ 2 ] which can improve the shelf-life and luminance of the diodes. [ 2 ] She also demonstrated the world's first inorganic quantum dot displays incorporating metal oxide charge transport layers. After earning her doctorate, she worked briefly as a postdoctoral researcher with Yet-Ming Chiang , focusing on lithium-ion battery flow cells. [ citation needed ]
In 2011, Wood joined the faculty at ETH Zurich . Her research considered lithium-ion batteries and how electrode microstructure impacts battery efficiency. She created a new analytical method which can be used to monitor battery electrodes during the manufacturing process. [ 3 ] She was awarded a European Research Council starting grant to develop quantitative metrology to guide lithium-ion battery manufacturing. [ 4 ]
Wood founded the spin-off company Battrion in 2015. Battrion seeks to improve the charging speed of high-energy-density lithium-ion cells through the development of innovative fabrication strategies. [ 5 ] She was made a full professor in 2019. [ 6 ]
In 2021, Wood was made the Vice President for Knowledge Transfer and Corporate Relations at ETH Zurich. [ 7 ] She was appointed Meeting Chair of the Materials Research Society 2022 Spring Meeting. [ 8 ]
|
https://en.wikipedia.org/wiki/Vanessa_Wood
|
Vanilla software refers to applications and systems used in their unmodified, original state, as distributed by their vendors. [ 1 ] This term is often applied in fields such as enterprise resource planning (ERP), [ 2 ] e-government systems, [ 3 ] and software development, where simplicity and adherence to vendor standards are more important than expanded functionality. [ 4 ] By opting for vanilla software, organizations benefit from lower costs and straightforward maintenance, though the trade-off may include reduced flexibility and customization options. [ 4 ]
The term "vanilla" has become ubiquitous in computing and technology to describe configurations or implementations that lack customization. [ 3 ] In these contexts, it emphasizes simplicity, standardization, and ease of maintenance. [ 3 ]
The term vanilla is derived from the plain, unadorned flavor of vanilla ice cream , a connotation that dates back to its popularity as a universal base in desserts. [ 5 ] [ 6 ] Within computing, the term emerged as early as the 1980s, popularized in systems and user interfaces to describe default or base states. For example, IBM's BookMaster system referred to its simplest configuration as "vanilla" and its more complex counterpart as "mocha" to signify additional features. [ 7 ]
Eric S. Raymond 's Jargon File , an influential glossary of hacker slang, provides a notable definition of "vanilla," associating it with "ordinary" or "standard" states, as distinct from the default setting. [ 8 ] The use of the term expanded in the 1990s, encompassing Unix systems, where a "vanilla kernel" signified an unmodified kernel directly from the original source. [ 9 ] Video game culture also embraced the terminology, describing unmodified games without add-ons or user-created mods as "vanilla versions." [ 10 ] This versatility reflects its adaptability across various domains, from operating systems to web development and gaming.
Vanilla ERP systems are frequently deployed to standardize business processes across organizations, minimizing risks associated with customization. While vanilla implementations align closely with vendor-provided best practices, they may limit organizational flexibility, posing the "common system paradox." [ 11 ]
Vanilla software is integral to e-government initiatives, supporting data interoperability across agencies. However, while such systems facilitate standardization, studies have highlighted challenges in tailoring these solutions to meet unique institutional needs. [ 12 ]
In programming, "vanilla" describes frameworks and tools used without extensions or alterations, which can simplify coding processes and enhance maintainability. [ 1 ]
|
https://en.wikipedia.org/wiki/Vanilla_software
|
Vanillin is an organic compound with the molecular formula C 8 H 8 O 3 . It is a phenolic aldehyde . Its functional groups include aldehyde , hydroxyl , and ether . It is the primary component of the ethanolic extract of the vanilla bean . Synthetic vanillin is now used more often than natural vanilla extract as a flavoring in foods, beverages, and pharmaceuticals.
Vanillin and ethylvanillin are used by the food industry; ethylvanillin is more expensive, but has a stronger note . It differs from vanillin by having an ethoxy group (−O−CH 2 CH 3 ) instead of a methoxy group (−O−CH 3 ).
Natural vanilla extract is a mixture of several hundred different compounds in addition to vanillin. Artificial vanilla flavoring is often a solution of pure vanillin, usually of synthetic origin. Because of the scarcity and expense of natural vanilla extract, synthetic preparation of its predominant component has long been of interest. The first commercial synthesis of vanillin began with the more readily available natural compound eugenol (4-allyl-2-methoxyphenol). Today, artificial vanillin is made either from guaiacol or lignin .
Lignin-based artificial vanilla flavoring is alleged to have a richer flavor profile than that from guaiacol-based artificial vanilla; the difference is due to the presence of acetovanillone , a minor component in the lignin-derived product that is not found in vanillin synthesized from guaiacol. [ a ]
Although it is generally accepted that vanilla was domesticated in Mesoamerica and subsequently spread to the Old World in the 16th century, in 2019, researchers published a paper stating that vanillin residue had been discovered inside jars within a tomb in Israel dating to the 2nd millennium BCE, suggesting the possible cultivation of an unidentified, Old World-endemic Vanilla species in Canaan since the Middle Bronze Age . [ 4 ] [ 5 ] Traces of vanillin were also found in wine jars in Jerusalem , which were used by the Judahite elite before the city was destroyed in 586 BCE. [ 5 ]
Vanilla beans, called tlilxochitl, were discovered and cultivated as a flavoring for beverages by native Mesoamerican peoples, most famously the Totonacs of modern-day Veracruz , Mexico. Since at least the early 15th century, the Aztecs used vanilla as a flavoring for chocolate in drinks called xocohotl . [ 6 ]
Vanillin was first isolated as a relatively pure substance in 1858 by Théodore Nicolas Gobley , who obtained it by evaporating a vanilla extract to dryness and recrystallizing the resulting solids from hot water. [ 7 ] In 1874, the German scientists Ferdinand Tiemann and Wilhelm Haarmann deduced its chemical structure, at the same time finding a synthesis for vanillin from coniferin , a glucoside of isoeugenol found in pine bark. [ 8 ] Tiemann and Haarmann founded a company Haarmann and Reimer (now part of Symrise ) and started the first industrial production of vanillin using their process (now known as the Reimer–Tiemann reaction ) in Holzminden , Germany. In 1876, Karl Reimer synthesized vanillin ( 2 ) from guaiacol ( 1 ). [ 9 ]
By the late 19th century, semisynthetic vanillin derived from the eugenol found in clove oil was commercially available. [ b ]
Synthetic vanillin became significantly more available in the 1930s, when production from clove oil was supplanted by production from the lignin -containing waste produced by the sulfite pulping process for preparing wood pulp for the paper industry . By 1981, a single pulp and paper mill in Thorold, Ontario , supplied 60% of the world market for synthetic vanillin. [ 10 ] However, subsequent developments in the wood pulp industry have made its lignin wastes less attractive as a raw material for vanillin synthesis. Today, approximately 15% of the world's production of vanillin is still made from lignin wastes, [ 11 ] while approximately 85% is synthesized in a two-step process from the petrochemical precursors guaiacol and glyoxylic acid . [ 12 ] [ 13 ]
Beginning in 2000, Rhodia began marketing biosynthetic vanillin prepared by the action of microorganisms on ferulic acid extracted from rice bran . This product, sold at USD $700/kg under the trademarked name Rhovanil Natural, is not cost-competitive with petrochemical vanillin, which sells for around US$15/kg. [ 14 ] However, unlike vanillin synthesized from lignin or guaiacol, it can be labeled as a natural flavoring.
Vanillin is most prominent as the principal flavor and aroma compound in vanilla . Cured vanilla pods contain about 2% by dry weight vanillin. Relatively pure vanillin may be visible as a white dust or "frost" on the exteriors of cured pods of high quality.
It is also found in Leptotes bicolor , a species of orchid native to Paraguay and southern Brazil, [ 15 ] and the Southern Chinese red pine .
At lower concentrations, vanillin contributes to the flavor and aroma profiles of foodstuffs as diverse as olive oil , [ 16 ] butter , [ 17 ] raspberry , [ 18 ] and lychee [ 19 ] fruits.
Aging in oak barrels imparts vanillin to some wines , vinegar , [ 20 ] and spirits . [ 21 ]
In other foods, heat treatment generates vanillin from other compounds. In this way, vanillin contributes to the flavor and aroma of coffee , [ 22 ] [ 23 ] maple syrup , [ 24 ] and whole-grain products, including corn tortillas [ 25 ] and oatmeal . [ 26 ]
Natural vanillin is extracted from the seed pods of Vanilla planifolia , a vining orchid native to Mexico, but now grown in tropical areas around the globe. Madagascar is presently the largest producer of natural vanillin.
As harvested, the green seed pods contain vanillin in the form of glucovanillin , its β- D - glucoside ; the green pods do not have the flavor or odor of vanilla. [ 27 ] Vanillin is released from glucovanillin by the action of the enzyme β-glucosidase during ripening [ 28 ] [ 29 ] and during the curing process. [ 30 ]
After being harvested, their flavor is developed by a months-long curing process, the details of which vary among vanilla-producing regions, but in broad terms it proceeds as follows:
First, the seed pods are blanched in hot water, to arrest the processes of the living plant tissues. Then, for 1–2 weeks, the pods are alternately sunned and sweated: during the day they are laid out in the sun, and each night wrapped in cloth and packed in airtight boxes to sweat. During this process, the pods become dark brown, and enzymes in the pod release vanillin as the free molecule. Finally, the pods are dried and further aged for several months, during which time their flavors further develop. Several methods have been described for curing vanilla in days rather than months, although they have not been widely developed in the natural vanilla industry, [ c ] with its focus on producing a premium product by established methods, rather than on innovations that might alter the product's flavor profile.
Although the exact route of vanillin biosynthesis in V. planifolia is currently unknown, several pathways are proposed for its biosynthesis. Vanillin biosynthesis is generally agreed to be part of the phenylpropanoid pathway starting with L -phenylalanine, [ 31 ] which is deaminated by phenylalanine ammonia lyase (PAL) to form t- cinnamic acid . The para position of the ring is then hydroxylated by the cytochrome P450 enzyme cinnamate 4-hydroxylase (C4H/P450) to create p - coumaric acid . [ 32 ] Then, in the proposed ferulate pathway, 4-hydroxycinnamoyl-CoA ligase (4CL) attaches p -coumaric acid to coenzyme A (CoA) to create p -coumaroyl CoA. Hydroxycinnamoyl transferase (HCT) then converts p -coumaroyl CoA to 4-coumaroyl shikimate / quinate . This subsequently undergoes oxidation by the P450 enzyme coumaroyl ester 3’-hydroxylase (C3’H/P450) to give caffeoyl shikimate/quinate. HCT then exchanges the shikimate/quinate for CoA to create caffeoyl CoA, and 4CL removes CoA to afford caffeic acid. Caffeic acid then undergoes methylation by caffeic acid O- methyltransferase (COMT) to give ferulic acid. Finally, vanillin synthase hydratase/lyase (vp/VAN) catalyzes hydration of the double bond in ferulic acid followed by a retro-aldol elimination to afford vanillin. [ 32 ] Vanillin can also be produced from vanilla glycoside with the additional final step of deglycosylation. [ 27 ] In the past p -hydroxybenzaldehyde was speculated to be a precursor for vanillin biosynthesis. However, a 2014 study using radiolabelled precursor indicated that p -hydroxybenzaldehyde is not used to synthesise vanillin or vanillin glucoside in the vanilla orchids. [ 32 ]
The demand for vanilla flavoring has long exceeded the supply of vanilla beans. As of 2001, the annual demand for vanillin was 12,000 tons, but only 1,800 tons of natural vanillin were produced. [ 33 ] The remainder was produced by chemical synthesis . Vanillin was first synthesized from eugenol (found in oil of clove) in 1874–75, less than 20 years after it was first identified and isolated. Vanillin was commercially produced from eugenol until the 1920s. [ 34 ] Later it was synthesized from lignin-containing "brown liquor", a byproduct of the sulfite process for making wood pulp . [ 10 ] Counterintuitively, though it uses waste materials, the lignin process is no longer popular because of environmental concerns, and today most vanillin is produced from guaiacol . [ 10 ] Several routes exist for synthesizing vanillin from guaiacol. [ 35 ]
At present, the most significant of these is the two-step process practiced by Rhodia since the 1970s, in which guaiacol ( 1 ) reacts with glyoxylic acid by electrophilic aromatic substitution . [ 36 ] The resulting vanillylmandelic acid ( 2 ) is then converted, via 4-hydroxy-3-methoxyphenylglyoxylic acid ( 3 ), to vanillin ( 4 ) by oxidative decarboxylation. [ 12 ]
Although guaiacol can be obtained by pyrolysis of wood, the type intended for vanillin production is mainly produced by petrochemistry. [ 37 ] [ 10 ]
15% of the world's production of vanillin is produced from lignosulfonates , a byproduct from the manufacture of cellulose via the sulfite process . [ 10 ] [ 11 ] The sole remaining producer of wood-based vanillin is the company Borregaard located in Sarpsborg , Norway . [ 37 ] For this kind of use, softwood is preferred because there are more guaiacyl units convertible to vanillin. [ 10 ]
Early production of wood-based vanillin involved four plants: a sulfite pulp mill, a fermentation plant, a vanillin plant, and a Kraft (sulfate) pulp mill. The sulfite mill provides the brown liquor to the fermentation plant, which makes use of the residual sugar. The spent liquor is sent to the vanillin plant, which uses alkaline oxidation with air at 160–170 °C and 10–12 atm pressure, toluene extraction, and back-extraction with NaOH to obtain a crude sodium vanillate. Addition of sulfurous acid affords easy separation of the soluble bisulfite addition compound of vanillin from insoluble impurities such as acetovanillone . The vanillin is extracted, and the remaining liquor is sent to the Kraft mill for burning to recover energy and sodium sulfide, both important for a Kraft mill. [ 10 ] This process fell out of favor in North America because of the large amount of caustic liquid that needs to be disposed of by the mill at the end: 160 kg for every 1 kg of vanillin produced. The recovery of sodium sulfide also became less and less profitable as the sodium-to-sulfur ratio became more and more unbalanced. [ 10 ]
Borregaard is able to keep operating because it runs its own pulp mill. It has improved a process from Monsanto by using ultrafiltration [ 10 ] to concentrate the incoming lignosulfonates , which reduces the amount of NaOH used and waste produced. The basic chemistry is unchanged: alkaline oxidation using a metal catalyst such as a copper salt. [ 38 ] [ 39 ] According to Scientific American , vanillin produced this way contains aromatic impurities that add strength and creaminess to its flavor. [ 37 ] This is probably due to acetovanillone being present. [ a ]
The company Evolva has developed a genetically modified yeast which can produce vanillin. Because the microbe is a processing aid , the resulting vanillin would not fall under U.S. GMO labeling requirements, and because the production is nonpetrochemical, food using the ingredient can claim to contain "no artificial ingredients". [ 37 ] The biosynthetic process starts with glucose, or any sugar that can be converted into erythrose 4-phosphate (which leads to 3-dehydroshikimic acid). [ 40 ] The end product is 98% pure and is also considered natural in the EU. [ 41 ]
Vanillin can be produced using ferulic acid (a chemical found in rice) as an input and a specific non-GMO species of Amycolatopsis bacteria. Many other bacteria, either GMO or non-GMO, can be used for the same purpose. However, because vanillin inhibits the growth of free-floating bacteria, yields have been low. This can be overcome through the formation of biofilms , which has been done with the non-GMO B. subtilis strain CCTCC M2011162. [ 42 ] However, using ferulic acid as the starting material does not qualify the product as a "natural ingredient" in the EU. [ 41 ]
Biotransformation of eugenol (from cloves) into vanillin by non-GMO microorganisms has also been reported. [ 43 ] The same has been reported for guaiacol and guaiacyl lignin (from conifers). [ 44 ] [ 45 ] These starting materials do not qualify the product as a "natural ingredient" in the EU. [ 41 ]
The largest use of vanillin is as a flavoring, usually in sweet foods. The ice cream and chocolate industries together comprise 75% of the market for vanillin as a flavoring, with smaller amounts being used in confections and baked goods . [ 46 ]
Vanillin is also used in the fragrance industry, in perfumes , and to mask unpleasant odors or tastes in medicines, livestock fodder , and cleaning products. [ 12 ] It is also used in the flavor industry, as a very important key note for many different flavors, especially creamy profiles such as cream soda .
Additionally, vanillin can be used as a general-purpose stain for visualizing spots on thin-layer chromatography plates. This stain yields a range of colors for these different components.
Several studies have suggested that vanillin can affect the performance of antibiotics in laboratory conditions . [ 47 ] [ 48 ]
Vanillin–HCl staining can be used to visualize the localisation of tannins in cells.
Vanillin has been used as a chemical intermediate in the production of pharmaceuticals , cosmetics , and other fine chemicals . [ 49 ] In 1970, more than half the world's vanillin production was used in the synthesis of other chemicals. [ 10 ] As of 2016, vanillin uses have expanded to include perfumes , flavoring and aromatic masking in medicines, various consumer and cleaning products, and livestock foods. [ 50 ]
Vanillin is becoming a popular choice for the development of bio-based plastics. [ 51 ]
Vanillin can trigger migraine headaches in a small fraction of the people who experience migraines. [ 52 ]
Some people have allergic reactions to vanilla. [ 53 ] They may be allergic to synthetically produced vanilla but not to natural vanilla, or the other way around, or to both. [ 54 ]
Vanilla orchid plants can trigger contact dermatitis , especially among people working in the vanilla trade if they come into contact with the plant's sap. [ 54 ] An allergic contact dermatitis called vanillism produces swelling and redness, and sometimes other symptoms. [ 54 ] The sap of most species of vanilla orchid which exudes from cut stems or where beans are harvested can cause moderate to severe dermatitis if it comes in contact with bare skin. The sap of vanilla orchids contains calcium oxalate crystals, which are thought to be the main causative agent of contact dermatitis in vanilla plantation workers. [ 55 ] [ 56 ]
A pseudophytodermatitis called vanilla lichen can be caused by flour mites ( Tyroglyphus farinae ). [ 54 ]
Scolytus multistriatus , one of the vectors of the Dutch elm disease , uses vanillin as a signal to find a host tree during oviposition . [ 57 ]
|
https://en.wikipedia.org/wiki/Vanillin
|
In organic chemistry , the vanillyl group (also known as vanilloyl ) is a functional group . Compounds containing a vanillyl group are called vanilloids , and include vanillin , vanillic acid , capsaicin , vanillylmandelic acid , etc. [ 1 ] [ 2 ]
|
https://en.wikipedia.org/wiki/Vanillyl_group
|
Vanillylmandelic acid ( VMA ) is a chemical intermediate in the synthesis of artificial vanilla flavorings [ 1 ] and is an end-stage metabolite of the catecholamines ( epinephrine , and norepinephrine ). It is produced via intermediary metabolites.
VMA synthesis is the first step of a two-step process practiced by Rhodia since the 1970s to synthesize artificial vanilla. [ 1 ] Specifically, the reaction entails the condensation of guaiacol and glyoxylic acid in an ice-cold, aqueous solution with sodium hydroxide .
VMA is found in the urine , along with other catecholamine metabolites, including homovanillic acid (HVA), metanephrine , and normetanephrine . In timed urine tests the quantity excreted (usually per 24 hours) is assessed along with creatinine clearance, and the quantity of cortisols , catecholamines , and metanephrines excreted is also measured.
Urinary VMA is elevated in patients with tumors that secrete catecholamines. [ 3 ]
These urinalysis tests are used to diagnose an adrenal gland tumor called pheochromocytoma , a tumor of catecholamine -secreting chromaffin cells. These tests may also be used to diagnose neuroblastomas , and to monitor treatment of these conditions.
Norepinephrine is metabolised into normetanephrine and VMA. Norepinephrine is one of the hormones produced by the adrenal glands , which are found on top of the kidneys . These hormones are released into the blood during times of physical or emotional stress, which are factors that may skew the results of the test. [ citation needed ]
|
https://en.wikipedia.org/wiki/Vanillylmandelic_acid
|
In mathematics , a function is said to vanish at infinity if its values approach 0 as the input grows without bounds. There are two different ways to define this with one definition applying to functions defined on normed vector spaces and the other applying to functions defined on locally compact spaces .
Aside from this difference, both of these notions correspond to the intuitive notion of adding a point at infinity, and requiring the values of the function to get arbitrarily close to zero as one approaches it. This definition can be formalized in many cases by adding an (actual) point at infinity .
A function on a normed vector space is said to vanish at infinity if the function approaches 0 {\displaystyle 0} as the input grows without bounds (that is, f ( x ) → 0 {\displaystyle f(x)\to 0} as ‖ x ‖ → ∞ {\displaystyle \|x\|\to \infty } ). Or,
in the specific case of functions on the real line.
For example, the function
defined on the real line vanishes at infinity.
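As a numerical sketch of the definition (using f(x) = 1/(1 + x²), a standard example assumed here; it is not necessarily the function displayed in the original):

```python
# Checking numerically that f(x) = 1/(1 + x^2) vanishes at infinity:
# |f(x)| can be made smaller than any epsilon by taking |x| large enough.
def f(x):
    return 1.0 / (1.0 + x * x)

for x in (1.0, 10.0, 100.0, 1000.0):
    print(f(x))  # decreases toward 0 as |x| grows

# The same holds as x -> -infinity, since f is even.
assert f(-1000.0) == f(1000.0)
```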
Alternatively, a function f {\displaystyle f} on a locally compact space Ω {\displaystyle \Omega } vanishes at infinity , if given any positive number ε > 0 {\displaystyle \varepsilon >0} , there exists a compact subset K ⊆ Ω {\displaystyle K\subseteq \Omega } such that
whenever the point x {\displaystyle x} lies outside of K . {\displaystyle K.} [ 1 ] [ 2 ] In other words, for each positive number ε > 0 {\displaystyle \varepsilon >0} , the set { x ∈ Ω : ‖ f ( x ) ‖ ≥ ε } {\displaystyle \left\{x\in \Omega :\|f(x)\|\geq \varepsilon \right\}} has compact closure.
For a given locally compact space Ω {\displaystyle \Omega } the set of such functions
valued in K , {\displaystyle \mathbb {K} ,} which is either R {\displaystyle \mathbb {R} } or C , {\displaystyle \mathbb {C} ,} forms a K {\displaystyle \mathbb {K} } - vector space with respect to pointwise scalar multiplication and addition , which is often denoted C 0 ( Ω ) . {\displaystyle C_{0}(\Omega ).}
As an example, the function
where x {\displaystyle x} and y {\displaystyle y} are real numbers greater than or equal to 1 and correspond to the point ( x , y ) {\displaystyle (x,y)} on R ≥ 1 2 {\displaystyle \mathbb {R} _{\geq 1}^{2}} vanishes at infinity.
A normed space is locally compact if and only if it is finite-dimensional, so in this particular case there are two different definitions of a function "vanishing at infinity".
The two definitions could be inconsistent with each other: if f ( x ) = ‖ x ‖ − 1 {\displaystyle f(x)=\|x\|^{-1}} in an infinite dimensional Banach space , then f {\displaystyle f} vanishes at infinity by the ‖ f ( x ) ‖ → 0 {\displaystyle \|f(x)\|\to 0} definition, but not by the compact set definition.
Refining the concept, one can look more closely to the rate of vanishing of functions at infinity. One of the basic intuitions of mathematical analysis is that the Fourier transform interchanges smoothness conditions with rate conditions on vanishing at infinity. Using big O notation , the rapidly decreasing test functions of tempered distribution theory are smooth functions that are
for all N {\displaystyle N} , as | x | → ∞ {\displaystyle |x|\to \infty } , and such that all their partial derivatives satisfy the same condition too. This condition is set up so as to be self-dual under Fourier transform, so that the corresponding distribution theory of tempered distributions will have the same property.
|
https://en.wikipedia.org/wiki/Vanish_at_infinity
|
Vanishing-dimensions theory is a particle physics theory suggesting that systems with higher energy have a smaller number of dimensions .
For example, the theory implies that the Universe had fewer dimensions after the Big Bang when its energy was high. Then the number of dimensions may have increased as the system cooled and the Universe may gain more dimensions with time. There could have originally been only one spatial dimension , with two dimensions total — one time dimension and one space dimension. [ 1 ] When there were only two dimensions, the Universe lacked gravitational degrees of freedom. [ 2 ]
The theory also associates a smaller number of dimensions with smaller systems; the expansion of the universe is suggested as a motivating phenomenon for the growth of the number of dimensions over time, [ 3 ] implying a larger number of dimensions in systems on larger scales. [ 4 ]
In 2011, Dejan Stojkovic from the University at Buffalo and Jonas Mureika from the Loyola Marymount University described use of a Laser Interferometer Space Antenna system, intended to detect gravitational waves , to test the vanishing-dimension theory [ 5 ] by detecting a maximum frequency after which gravitational waves can't be observed.
The vanishing-dimensions theory is seen as an explanation of the cosmological constant problem : a fifth dimension would answer the question of the energy density required to maintain the constant. [ 2 ] [ 4 ]
|
https://en.wikipedia.org/wiki/Vanishing_dimensions_theory
|
A vanishing point is a point on the image plane of a perspective rendering where the two-dimensional perspective projections of parallel lines in three-dimensional space appear to converge. When the set of parallel lines is perpendicular to a picture plane , the construction is known as one-point perspective, and their vanishing point corresponds to the oculus , or "eye point", from which the image should be viewed for correct perspective geometry. [ 1 ] Traditional linear drawings use objects with one to three sets of parallels, defining one to three vanishing points.
Italian humanist polymath and architect Leon Battista Alberti first introduced the concept in his treatise on perspective in art, De pictura , written in 1435. [ 2 ] Straight railroad tracks are a familiar modern example. [ 3 ]
The vanishing point may also be referred to as the "direction point", as lines having the same directional vector, say D , will have the same vanishing point. Mathematically, let q ≡ ( x , y , f ) be a point lying on the image plane, where f is the focal length (of the camera associated with the image), and let v q ≡ ( x / h , y / h , f / h ) be the unit vector associated with q , where h = √ x 2 + y 2 + f 2 . If we consider a straight line in space S with the unit vector n s ≡ ( n x , n y , n z ) and its vanishing point v s , the unit vector associated with v s is equal to n s , assuming both point towards the image plane. [ 4 ]
When the image plane is parallel to two world-coordinate axes, lines parallel to the axis that is cut by this image plane will have images that meet at a single vanishing point. Lines parallel to the other two axes will not form vanishing points, as they are parallel to the image plane. This is one-point perspective. Similarly, when the image plane intersects two world-coordinate axes, lines parallel to those axes will form two vanishing points in the picture plane. This is called two-point perspective. In three-point perspective the image plane intersects the x , y , and z axes, and therefore lines parallel to these axes intersect, resulting in three different vanishing points.
The vanishing point theorem is the principal theorem in the science of perspective. It says that the image in a picture plane π of a line L in space, not parallel to the picture, is determined by its intersection with π and its vanishing point. Some authors have used the phrase, "the image of a line includes its vanishing point". Guidobaldo del Monte gave several verifications, and Humphry Ditton called the result the "main and Great Proposition". [ 5 ] Brook Taylor wrote the first book in English on perspective in 1714, which introduced the term "vanishing point" and was the first to fully explain the geometry of multipoint perspective, and historian Kirsti Andersen compiled these observations. [ 1 ] : 244–6 She notes, in terms of projective geometry , the vanishing point is the image of the point at infinity associated with L , as the sightline from O through the vanishing point is parallel to L .
As a vanishing point originates in a line, so a vanishing line originates in a plane α that is not parallel to the picture π . Given the eye point O , and β the plane parallel to α and lying on O , then the vanishing line of α is β ∩ π . For example, when α is the ground plane and β is the horizon plane, then the vanishing line of α is the horizon line β ∩ π .
To put it simply, the vanishing line of some plane, say α , is obtained by the intersection of the image plane with another plane, say β , parallel to the plane of interest ( α ), passing through the camera center. For different sets of lines parallel to this plane α , their respective vanishing points will lie on this vanishing line. The horizon line is a theoretical line that represents the eye level of the observer. If the object is below the horizon line, its lines angle up to the horizon line. If the object is above, they slope down.
1. Projections of two sets of parallel lines lying in some plane π A appear to converge, i.e. the vanishing point associated with that pair, on a horizon line, or vanishing line H formed by the intersection of the image plane with the plane parallel to π A and passing through the pinhole.
Proof: Consider the ground plane π , as y = c which is, for the sake of simplicity, orthogonal to the image plane. Also, consider a line L that lies in the plane π , which is defined by the equation ax + bz = d .
Using perspective pinhole projections, a point on L projected on the image plane will have coordinates defined as,
This is the parametric representation of the image L′ of the line L with z as the parameter. As z → −∞ , the image approaches the point ( x′ , y′ ) = (− fb / a , 0) on the x′ axis of the image plane. This is the vanishing point corresponding to all parallel lines with slope − b / a in the plane π . All vanishing points associated with different lines with different slopes belonging to plane π will lie on the x′ axis, which in this case is the horizon line.
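The limit described here can be checked numerically; the constants a, b, c, d and the focal length below are arbitrary illustrative choices:

```python
# Numerical check of the pinhole-projection limit: a line L in the
# plane y = c with a*x + b*z = d projects, under a pinhole camera with
# focal length f, to (x', y') = (f*x/z, f*y/z), and this image point
# approaches the vanishing point (-f*b/a, 0) as z -> -infinity.
a, b, d, c, f = 2.0, 3.0, 5.0, 1.0, 1.0

def project(z):
    x = (d - b * z) / a            # x-coordinate of the point of L at depth z
    return (f * x / z, f * c / z)  # pinhole projection onto the image plane

for z in (-10.0, -1e3, -1e6):
    xp, yp = project(z)
    print(xp, yp)  # tends to (-f*b/a, 0) = (-1.5, 0.0)
```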
2. Let A , B , and C be three mutually orthogonal straight lines in space and v A ≡ ( x A , y A , f ) , v B ≡ ( x B , y B , f ) , v C ≡ ( x C , y C , f ) be the three corresponding vanishing points respectively. If we know the coordinates of one of these points, say v A , and the direction of a straight line on the image plane, which passes through a second point, say v B , we can compute the coordinates of both v B and v C [ 4 ]
3. Let A , B , and C be three mutually orthogonal straight lines in space and v A ≡ ( x A , y A , f ) , v B ≡ ( x B , y B , f ) , v C ≡ ( x C , y C , f ) be the three corresponding vanishing points respectively. The orthocenter of the triangle with vertices in the three vanishing points is the intersection of the optical axis and the image plane. [ 4 ]
A curvilinear perspective is a drawing with either 4 or 5 vanishing points. In 5-point perspective the vanishing points are mapped into a circle with 4 vanishing points at the cardinal headings N, W, S, E and one at the circle's origin.
A reverse perspective is a drawing with vanishing points that are placed outside the painting with the illusion that they are "in front of" the painting.
Several methods for vanishing point detection make use of the line segments detected in images. Other techniques involve considering the intensity gradients of the image pixels directly.
There are significantly large numbers of vanishing points present in an image. Therefore, the aim is to detect the vanishing points that correspond to the principal directions of a scene. This is generally achieved in two steps. The first step, called the accumulation step, as the name suggests, clusters the line segments with the assumption that a cluster will have a common vanishing point. The next step finds the principal clusters present in the scene and therefore it is called the search step.
In the accumulation step , the image is mapped onto a bounded space called the accumulator space. The accumulator space is partitioned into units called cells. Barnard [ 6 ] assumed this space to be a Gaussian sphere centered on the optical center of the camera as an accumulator space. A line segment on the image corresponds to a great circle on this sphere, and the vanishing point in the image is mapped to a point. The Gaussian sphere has accumulator cells that increase when a great circle passes through them, i.e. in the image a line segment intersects the vanishing point. Several modifications have been made since, but one of the most efficient techniques was using the Hough Transform , mapping the parameters of the line segment to the bounded space. Cascaded Hough Transforms have been applied for multiple vanishing points.
The process of mapping from the image to the bounded spaces causes the loss of the actual distances between line segments and points.
In the search step , the accumulator cell with the maximum number of line segments passing through it is found. This is followed by removal of those line segments, and the search step is repeated until this count goes below a certain threshold. As more computing power is now available, points corresponding to two or three mutually orthogonal directions can be found.
|
https://en.wikipedia.org/wiki/Vanishing_point
|
A vanishing puzzle is a mechanical optical illusion comprising multiple pieces which can be rearranged to show different versions of a picture depicting several objects, the number of which depending on the arrangement of the pieces. [ 1 ] [ 2 ]
Wemple & Company marketed an advertising card named The Magic Egg Puzzle, (How Many Eggs?) in New York in 1880. [ 3 ] Cutting the rectangular card into four oblongs allowed the pieces to be rearranged to show either 8, 9 or 10 eggs. Many other similar puzzles have been published since. [ 4 ]
Chess player and recreational mathematician Sam Loyd patented rotary vanishing puzzles in 1896 and published versions named Get Off the Earth , Teddy and the Lion and The Disappearing Bicyclist (pictured). Each had a circular card connected to a cardboard backdrop with a pin, letting it freely rotate. [ 5 ] [ 6 ] [ 7 ] In The Disappearing Bicyclist , when the disc is rotated such that the arrow points to A, 13 boys can be counted, but when it points to B, there are only 12 boys. [ 8 ]
Prizes from $5 to $100 were offered for the best explanation of one illusion. Though the names of the winners were published, their explanations were not. [ 9 ]
The missing square puzzle is an optical illusion used in mathematics classes to help students reason about geometrical figures; or rather to teach them not to reason using figures, but to use only textual descriptions and the axioms of geometry. It depicts two arrangements made of similar shapes in slightly different configurations. Each apparently forms a 13×5 right-angled triangle , but one has a 1×1 hole in it.
Sam Loyd 's chessboard paradox demonstrates two rearrangements of an 8×8 square. In the "larger" rearrangement (the 5×13 rectangle in the image to the right), the gaps between the figures have a combined unit square more area than their square gaps counterparts, creating an illusion that the figures there take up more space than those in the original square figure. [ 10 ]
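The area bookkeeping behind the missing-square illusion can be verified with exact rational arithmetic; the piece areas below are those of the standard four-piece version of the puzzle:

```python
from fractions import Fraction

# Area bookkeeping for the missing-square puzzle: the four standard
# pieces are an 8x3 right triangle, a 5x2 right triangle, and two
# L-shaped pieces of areas 8 and 7 square units.
pieces = [Fraction(8 * 3, 2), Fraction(5 * 2, 2), Fraction(8), Fraction(7)]
apparent = Fraction(13 * 5, 2)  # area of a true 13x5 right triangle
gap = apparent - sum(pieces)
print(gap)  # 1/2 -- the "hypotenuse" is really a thin bent quadrilateral

# The illusion works because the two triangles' slopes differ slightly:
print(Fraction(3, 8), Fraction(2, 5), Fraction(3, 8) == Fraction(2, 5))
```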
|
https://en.wikipedia.org/wiki/Vanishing_puzzle
|
In number theory , Vantieghem's theorem is a primality criterion. It states that a natural number n ≥ 3 is prime if and only if

(2^1 − 1)(2^2 − 1) ⋯ (2^(n−1) − 1) ≡ n (mod 2^n − 1).
Similarly, n is prime if and only if the following congruence for polynomials in X holds:
or:
For example, let n = 7: the product 1 · 3 · 7 · 15 · 31 · 63 = 615195, and 615195 ≡ 7 (mod 127), so 7 is prime. Let n = 9: the product 1 · 3 · 7 · 15 · 31 · 63 · 127 · 255 = 19923090075, and 19923090075 ≡ 301 (mod 511); since 301 ≠ 9, 9 is composite.
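The criterion translates directly into a (very inefficient) primality test; a minimal sketch:

```python
# Direct check of Vantieghem's criterion: n >= 3 is prime if and only
# if the product of (2^k - 1) for k = 1..n-1 is congruent to n
# modulo 2^n - 1.
def vantieghem_prime(n):
    m = (1 << n) - 1                      # modulus 2^n - 1
    prod = 1
    for k in range(1, n):
        prod = prod * ((1 << k) - 1) % m  # reduce as we go
    return prod == n % m

print(vantieghem_prime(7))   # True,  matching the n = 7 example
print(vantieghem_prime(9))   # False, matching the n = 9 example
print([n for n in range(3, 30) if vantieghem_prime(n)])
# [3, 5, 7, 11, 13, 17, 19, 23, 29]
```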
|
https://en.wikipedia.org/wiki/Vantieghems_theorem
|
In Vapnik–Chervonenkis theory , the Vapnik–Chervonenkis (VC) dimension is a measure of the size (capacity, complexity, expressive power, richness, or flexibility) of a class of sets. The notion can be extended to classes of binary functions. It is defined as the cardinality of the largest set of points that the algorithm can shatter , which means the algorithm can always learn a perfect classifier for any labeling of at least one configuration of those data points. It was originally defined by Vladimir Vapnik and Alexey Chervonenkis . [ 1 ]
Informally, the capacity of a classification model is related to how complicated it can be. For example, consider the thresholding of a high- degree polynomial : if the polynomial evaluates above zero, that point is classified as positive, otherwise as negative. A high-degree polynomial can be wiggly, so that it can fit a given set of training points well. But one can expect that the classifier will make errors on other points, because it is too wiggly. Such a polynomial has a high capacity. A much simpler alternative is to threshold a linear function. This function may not fit the training set well, because it has a low capacity. This notion of capacity is made rigorous below.
Let H {\displaystyle H} be a set family (a set of sets) and C {\displaystyle C} a set. Their intersection is defined as the following set family:
We say that a set C {\displaystyle C} is shattered by H {\displaystyle H} if H ∩ C {\displaystyle H\cap C} contains all the subsets of C {\displaystyle C} , i.e.:
The VC dimension D {\displaystyle D} of H {\displaystyle H} is the cardinality of the largest set that is shattered by H {\displaystyle H} . If arbitrarily large sets can be shattered, the VC dimension is ∞ {\displaystyle \infty } .
A binary classification model f {\displaystyle f} with some parameter vector θ {\displaystyle \theta } is said to shatter a set of generally positioned data points ( x 1 , x 2 , … , x n ) {\displaystyle (x_{1},x_{2},\ldots ,x_{n})} if, for every assignment of labels to those points, there exists a θ {\displaystyle \theta } such that the model f {\displaystyle f} makes no errors when evaluating that set of data points [ citation needed ] .
The VC dimension of a model f {\displaystyle f} is the maximum number of points that can be arranged so that f {\displaystyle f} shatters them. More formally, it is the maximum cardinal D {\displaystyle D} such that there exists a generally positioned data point set of cardinality D {\displaystyle D} that can be shattered by f {\displaystyle f} .
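As a sketch of shattering, one can brute-force the VC dimension of one-dimensional threshold classifiers h_θ(x) = 1 if x ≥ θ, else 0, a standard example not taken from the text above: any single point is shattered, but no pair is, so the VC dimension is 1.

```python
# Brute-force shattering check for 1-D threshold classifiers
# h_theta(x) = 1 if x >= theta, else 0.
def shatters(points, thetas):
    # Labelings realizable by some threshold among the candidates.
    realizable = {tuple(1 if x >= t else 0 for x in points) for t in thetas}
    return len(realizable) == 2 ** len(points)

# Candidate thresholds around the test points.  For thresholds, the
# labeling (1, 0) with x1 < x2 is impossible for ANY theta, so this
# finite candidate set already tells the whole story here.
thetas = [-2.0, -0.5, 0.5, 2.0]
print(shatters([0.0], thetas))        # True  -> VC dimension >= 1
print(shatters([-1.0, 1.0], thetas))  # False -> no 2-point set shattered
```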
The VC dimension can predict a probabilistic upper bound on the test error of a classification model. Vapnik [ 3 ] proved that the probability of the test error (i.e., risk with 0–1 loss function) distancing from an upper bound (on data that is drawn i.i.d. from the same distribution as the training set) is given by:
where D {\displaystyle D} is the VC dimension of the classification model, 0 < η ⩽ 1 {\displaystyle 0<\eta \leqslant 1} , and N {\displaystyle N} is the size of the training set (restriction: this formula is valid when D ≪ N {\displaystyle D\ll N} ; when D {\displaystyle D} is larger, the test error may be much higher than the training error, due to overfitting ).
The VC dimension also appears in sample-complexity bounds . A space of binary functions with VC dimension D {\displaystyle D} can be learned with: [ 4 ] : 73
samples, where ε {\displaystyle \varepsilon } is the learning error and δ {\displaystyle \delta } is the failure probability. Thus, the sample-complexity is a linear function of the VC dimension of the hypothesis space.
The VC dimension is one of the critical parameters in the size of ε-nets , which determines the complexity of approximation algorithms based on them; range sets without finite VC dimension may not have finite ε-nets at all.
A finite projective plane of order n is a collection of n 2 + n + 1 sets (called "lines") over n 2 + n + 1 elements (called "points"), for which:
The VC dimension of a finite projective plane is 2. [ 5 ]
Proof : (a) For each pair of distinct points, there is one line that contains both of them, lines that contain only one of them, and lines that contain none of them, so every set of size 2 is shattered. (b) For any triple of three distinct points, if there is a line x that contain all three, then there is no line y that contains exactly two (since then x and y would intersect in two points, which is contrary to the definition of a projective plane). Hence, no set of size 3 is shattered.
Suppose we have a base class B {\displaystyle B} of simple classifiers, whose VC dimension is D {\displaystyle D} .
We can construct a more powerful classifier by combining several different classifiers from B {\displaystyle B} ; this technique is called boosting . Formally, given T {\displaystyle T} classifiers h 1 , … , h T ∈ B {\displaystyle h_{1},\ldots ,h_{T}\in B} and a weight vector w ∈ R T {\displaystyle w\in \mathbb {R} ^{T}} , we can define the following classifier:
The VC dimension of the set of all such classifiers (for all selections of T {\displaystyle T} classifiers from B {\displaystyle B} and a weight-vector from R T {\displaystyle \mathbb {R} ^{T}} ), assuming T , D ≥ 3 {\displaystyle T,D\geq 3} , is at most: [ 4 ] : 108–109
A neural network is described by a directed acyclic graph G ( V , E ), where:
The VC dimension of a neural network is bounded as follows: [ 4 ] : 234–235
The VC dimension is defined for spaces of binary functions (functions to {0,1}). Several generalizations have been suggested for spaces of non-binary functions.
|
https://en.wikipedia.org/wiki/Vapnik–Chervonenkis_dimension
|
In chemistry , vapochromism strongly overlaps with solvatochromism , since vapochromic systems are ones in which dyes change colour in response to the vapour of an organic compound or gas. Vapochromic devices are the optical branch of electronic noses. The main applications are in sensors for detecting volatile organic compounds (VOCs) in a variety of environments, including industrial, domestic and medical areas.
An example of such a device is an array consisting of a metalloporphyrin ( Lewis acid ), a pH indicator dye and a solvatochromic dye. The array is scanned with a flat-bed scanner, and the results are compared with a library of known VOCs . Vapochromic materials are sometimes Pt or Au complexes, which undergo distinct color changes when exposed to VOCs.
|
https://en.wikipedia.org/wiki/Vapochromism
|
In physics, a vapor ( American English ) or vapour ( Commonwealth English ; see spelling differences ) is a substance in the gas phase at a temperature lower than its critical temperature , [ 1 ] which means that the vapor can be condensed to a liquid by increasing the pressure on it without reducing the temperature of the vapor. A vapor is different from an aerosol . [ 2 ] An aerosol is a suspension of tiny particles of liquid, solid, or both within a gas. [ 2 ]
For example, water has a critical temperature of 647 K (374 °C; 705 °F), which is the highest temperature at which liquid water can exist at any pressure. In the atmosphere at ordinary temperatures gaseous water (known as water vapor ) will condense into a liquid if its partial pressure is increased sufficiently.
A vapor may co-exist with a liquid (or a solid). When this is true, the two phases will be in equilibrium, and the gas-partial pressure will be equal to the equilibrium vapor pressure of the liquid (or solid). [ 1 ]
Vapor refers to a gas phase at a temperature where the same substance can also exist in the liquid or solid state, below the critical temperature of the substance. (For example, water has a critical temperature of 374 °C (647 K), which is the highest temperature at which liquid water can exist.) If the vapor is in contact with a liquid or solid phase, the two phases will be in a state of equilibrium . The term gas refers to a compressible fluid phase. Fixed gases are gases for which no liquid or solid can form at the temperature of the gas, such as air at typical ambient temperatures. A liquid or solid does not have to boil to release a vapor.
Vapor is responsible for the familiar processes of cloud formation and condensation . It is commonly employed to carry out the physical processes of distillation and headspace extraction from a liquid sample prior to gas chromatography .
The constituent molecules of a vapor possess vibrational, rotational, and translational motion. These motions are considered in the kinetic theory of gases .
The vapor pressure is the equilibrium pressure from a liquid or a solid at a specific temperature. The equilibrium vapor pressure of a liquid or solid is not affected by the amount of liquid or solid present.
The normal boiling point of a liquid is the temperature at which the vapor pressure is equal to normal atmospheric pressure . [ 1 ]
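The relationship between temperature and equilibrium vapor pressure can be sketched numerically. A minimal Python sketch using the Antoine equation; the constants below are commonly tabulated values for water over roughly 1–100 °C (an assumption — constants and their valid temperature range should be checked against a reference data source):

```python
# Antoine equation: log10(P) = A - B / (C + T), with P in mmHg and T in deg C.
# A, B, C are empirical per-substance constants; the values here are
# commonly quoted for water in the ~1-100 deg C range (illustrative).
A, B, C = 8.07131, 1730.63, 233.426

def vapor_pressure_mmHg(t_celsius):
    """Equilibrium vapor pressure of water at t_celsius, in mmHg."""
    return 10 ** (A - B / (C + t_celsius))

p_boil = vapor_pressure_mmHg(100.0)  # ~760 mmHg: 1 atm at the normal boiling point
p_room = vapor_pressure_mmHg(25.0)   # ~24 mmHg at room temperature
```

At 100 °C the computed pressure is approximately one standard atmosphere, which is exactly what the definition of the normal boiling point requires.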
For two-phase systems (e.g., two liquid phases), the vapor pressures of the individual phases are equal. When the attractions between like and unlike molecules are of comparable strength, the vapor pressure follows Raoult's law , which states that the partial pressure of each component is the product of the vapor pressure of the pure component and its mole fraction in the mixture. The total vapor pressure is the sum of the component partial pressures. [ 3 ]
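Raoult's law as stated above can be sketched directly in code. The pure-component pressures below are illustrative round numbers for a benzene/toluene mixture near 80 °C, not measured data:

```python
# Raoult's law for an ideal binary mixture: partial pressure of each
# component = (pure-component vapor pressure) x (liquid mole fraction).
p_pure_benzene = 101.3   # kPa, illustrative value near benzene's boiling point
p_pure_toluene = 38.8    # kPa at the same temperature (illustrative)

def total_vapor_pressure(x_benzene):
    """Total vapor pressure (kPa) of the mixture at liquid mole fraction x_benzene."""
    x_toluene = 1.0 - x_benzene
    p_benzene = x_benzene * p_pure_benzene   # partial pressure of benzene
    p_toluene = x_toluene * p_pure_toluene   # partial pressure of toluene
    return p_benzene + p_toluene             # sum of component partial pressures

p_mix = total_vapor_pressure(0.5)  # 0.5*101.3 + 0.5*38.8 = 70.05 kPa
```

The two limiting cases recover the pure-component pressures: a mole fraction of 1 gives the pure benzene pressure, and 0 gives the pure toluene pressure.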
E-cigarettes produce aerosols , not vapors. [ 2 ]
Since it is in the gas phase, the amount of vapor present is quantified by the partial pressure of the gas. Also, vapors obey the barometric formula in a gravitational field, just as conventional atmospheric gases do.
|
https://en.wikipedia.org/wiki/Vapor
|
Vapor-compression desalination ( VC ) refers to a distillation process where the evaporation of sea or saline water is obtained by the application of heat delivered by compressed vapor.
Since compression of the vapor increases both the pressure and temperature of the vapor, it is possible to use the latent heat rejected during condensation to generate additional vapor. The effect of compressing water vapor can be done by two methods.
The first method utilizes an ejector system motivated by steam at manometric pressure from an external source in order to recycle vapor from the desalination process. This form is designated ejectocompression or thermocompression .
Using the second method, water vapor is compressed by means of a mechanical device, electrically driven in most cases. This form is designated mechanical vapor compression (MVC). The MVC process comprises two different versions: vapor compression (VC) and vacuum vapor compression (VVC). VC designates those systems in which the evaporation effect takes place at manometric pressure, and VVC the systems in which evaporation takes place at sub-atmospheric pressures (under vacuum).
The compression is mechanically powered by something such as a compression turbine. As vapor is generated, it is passed over to a heat exchanging condenser which returns the vapor to water. The resulting fresh water is moved to storage while the heat removed during condensation is transmitted to the remaining feedstock.
The VVC process is the most efficient distillation process available in the market today in terms of energy consumption and water recovery ratio. [ 1 ] As the system is electrically driven, it is considered a "clean" process; it is also highly reliable and simple to operate and maintain.
|
https://en.wikipedia.org/wiki/Vapor-compression_desalination
|
Vapor-compression evaporation is the evaporation method by which a blower , compressor or jet ejector is used to compress , and thus, increase the pressure of the vapor produced. Since the pressure increase of the vapor also generates an increase in the condensation temperature, the same vapor can serve as the heating medium for its "mother" liquid or solution being concentrated, from which the vapor was generated to begin with. If no compression were provided, the vapor would be at the same temperature as the boiling liquid/solution, and no heat transfer could take place.
It is also sometimes called vapor compression distillation (VCD) . If compression is performed by a mechanically driven compressor or blower, this evaporation process is usually referred to as MVR (mechanical vapor recompression). In case of compression performed by high pressure motive steam ejectors , the process is usually called thermocompression , steam compression or ejectocompression . [ citation needed ]
In this case the energy input to the system lies in the pumping energy of the compressor. The theoretical energy consumption will be equal to E = Q(H2 − H1), where E is the energy input, Q is the quantity of vapor compressed, and H2 and H1 are the specific enthalpies of the vapor after and before compression. In SI units, these are respectively measured in kJ , kg and kJ/kg.
The actual energy input will be greater than the theoretical value and will depend on the efficiency of the system, which is usually between 30% and 60%. For example, suppose the theoretical energy input is 300 kJ and the efficiency is 30%. The actual energy input would be 300 x 100/30 = 1,000 kJ.
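The theoretical and actual energy figures above can be sketched as a short calculation (the enthalpy values in the usage line are illustrative, not data for a particular system):

```python
def theoretical_energy_kJ(mass_kg, h_out_kJ_per_kg, h_in_kJ_per_kg):
    """E = Q * (H2 - H1): vapor mass times the specific-enthalpy rise."""
    return mass_kg * (h_out_kJ_per_kg - h_in_kJ_per_kg)

def actual_energy_kJ(theoretical_kJ, efficiency):
    """Actual input exceeds the theoretical value by the system efficiency."""
    return theoretical_kJ / efficiency

e_theory = theoretical_energy_kJ(10.0, 2700.0, 2600.0)  # 1000 kJ for 10 kg of vapor
e_actual = actual_energy_kJ(300.0, 0.30)                # 1000 kJ, the worked example above
```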
In a large unit, the compression power is between 35 and 45 kW per metric ton of compressed vapors. [ clarification needed ]
The compressor is necessarily the core of the unit. Compressors used for this application are usually of the centrifugal type, or positive displacement units such as the Roots blowers , similar to the (much smaller) Roots type supercharger . Very large units (evaporation capacity 100 metric tons per hour or more) sometimes use Axial-flow compressors . The compression work will deliver the steam superheated if compared to the theoretical pressure/temperature equilibrium. For this reason, the vast majority of MVR units feature a desuperheater between the compressor and the main heat exchanger.
The energy input is here given by the energy of a quantity of steam ( motive steam ), at a pressure higher than those of both the inlet and the outlet vapors.
The quantity of compressed vapors is therefore higher than the inlet quantity: Qd = Qs + Qm, where Qd is the steam quantity at ejector delivery, Qs is the quantity at ejector suction and Qm is the motive steam quantity. For this reason, a thermocompression evaporator often features a vapor condenser , due to the possible excess of steam necessary for the compression if compared with the steam required to evaporate the solution.
The quantity Qm of motive steam per unit suction quantity is a function of both the motive ratio (motive steam pressure vs. suction pressure) and the compression ratio (delivery pressure vs. suction pressure). In principle, the higher the compression ratio and the lower the motive ratio, the higher the specific motive steam consumption, i.e. the less efficient the energy balance.
The heart of any thermocompression evaporator is clearly the steam ejector , exhaustively described in the relevant page. The size of the other pieces of equipment, such as the main heat exchanger , the vapor head , etc. (see evaporator for details), is governed by the evaporation process.
These two compression-type evaporators have different fields of application, although they do sometimes overlap.
In summary, MVR machines are used in large, energy-efficient units, while thermocompression tends to be limited to small units, where energy consumption is not a big issue.
The efficiency and feasibility of this process depends on the efficiency of the compressing device (e.g., blower, compressor or steam ejector) and the heat transfer coefficient attained in the heat exchanger contacting the condensing vapor and the boiling "mother" solution/liquid. Theoretically, if the resulting condensate is subcooled , this process could allow full recovery of the latent heat of vaporization that would otherwise be lost if the vapor, rather than the condensate, was the final product; therefore, this method of evaporation is very energy efficient. The evaporation process may be solely driven by the mechanical work provided by the compressing device.
A vapor-compression evaporator, like most evaporators , can make reasonably clean water from any water source. In a salt crystallizer, for example, a typical analysis of the resulting condensate shows a residual salt content not higher than 50 ppm or, in terms of electrical conductance , not higher than 10 μS/cm . This results in drinkable water, if the other sanitary requirements are fulfilled. While this cannot compete in the marketplace with reverse osmosis or demineralization , vapor compression chiefly differs from these in its ability to make clean water from saturated or even crystallizing brines with total dissolved solids (TDS) up to 650 g/L. The other two technologies can make clean water from sources no higher in TDS than approximately 35 g/L.
For economic reasons evaporators are seldom operated on low-TDS water sources. Those applications are filled by reverse osmosis. The already brackish water which enters a typical evaporator is concentrated further. The increased dissolved solids act to increase the boiling point well beyond that of pure water. Seawater with a TDS of approximately 30 g/L exhibits a boiling point elevation of less than 1 K but saturated sodium chloride solution at 360 g/L has a boiling point elevation of about 7 K. This boiling point elevation is a challenge for vapor-compression evaporation in that it increases the pressure ratio that the steam compressor must attain to effect vaporization. Since boiling point elevation determines the pressure ratio in the compressor, it is the main overall factor in operating costs.
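The link between boiling point elevation and compressor pressure ratio can be estimated with the Clausius–Clapeyron relation. A rough Python sketch, assuming a constant latent heat of vaporization for water (an approximation that worsens away from the chosen reference temperature):

```python
import math

R = 8.314          # J/(mol K), universal gas constant
H_VAP = 40650.0    # J/mol, latent heat of vaporization of water (approximate)

def pressure_ratio_for_bpe(t_boil_K, bpe_K):
    """Clausius-Clapeyron estimate of the saturation-pressure ratio between
    the elevated boiling point (t_boil_K + bpe_K) and the pure-solvent
    boiling point (t_boil_K): P2/P1 = exp(Hvap/R * (1/T1 - 1/T2))."""
    t2 = t_boil_K + bpe_K
    return math.exp(H_VAP / R * (1.0 / t_boil_K - 1.0 / t2))

r_brine = pressure_ratio_for_bpe(373.15, 7.0)  # ~1.27 for the 7 K elevation of saturated NaCl
r_sea = pressure_ratio_for_bpe(373.15, 1.0)    # ~1.04 for the <1 K elevation of seawater
```

The estimate shows why boiling point elevation dominates operating cost: the 7 K elevation of a saturated brine demands roughly a quarter more pressure rise from the compressor than pure water would.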
The technology used today to extract bitumen from the Athabasca oil sands is the water-intensive steam-assisted gravity drainage (SAGD) method. [ 1 ] In the late 1990s former nuclear engineer Bill Heins of General Electric Company 's RCC Thermal Products conceived an evaporator technology called falling film or mechanical vapor compression evaporation. In 1999 and 2002 Petro-Canada 's MacKay River facility was the first to install GE SAGD zero-liquid discharge (ZLD) systems, using a combination of the new evaporative technology and a crystallizer system in which all the water was recycled and only solids were discharged off site. [ 1 ] This new evaporative technology began to replace older water treatment techniques employed by SAGD facilities which involved the use of warm lime softening to remove silica and magnesium and weak acid cation ion exchange used to remove calcium . [ 1 ] The vapor-compression evaporation process replaced the once-through steam generators (OTSG) traditionally used for steam production. OTSG generally ran on natural gas, which by 2008 had become increasingly valuable. Evaporators produce water of roughly four times higher quality, which is needed for the drum boilers. The evaporators, when coupled with standard drum boilers, produce steam which is more "reliable, less costly to operate, and less water-intensive." By 2008 about 85 per cent of SAGD facilities in the Alberta oil sands had adopted evaporative technology. "SAGD, unlike other thermal processes such as cyclic steam stimulation (CSS), requires 100 per cent quality steam." [ 1 ]
|
https://en.wikipedia.org/wiki/Vapor-compression_evaporation
|
Vapour-compression refrigeration or vapor-compression refrigeration system ( VCRS ), [ 1 ] in which the refrigerant undergoes phase changes , is one of the many refrigeration cycles and is the most widely used method for air conditioning of buildings and automobiles . It is also used in domestic and commercial refrigerators , large-scale warehouses for chilled or frozen storage of foods and meats, refrigerated trucks and railroad cars, and a host of other commercial and industrial services. Oil refineries , petrochemical and chemical processing plants, and natural gas processing plants are among the many types of industrial plants that often utilize large vapor-compression refrigeration systems. Cascade refrigeration systems may also be implemented using two compressors.
Refrigeration may be defined as lowering the temperature of an enclosed space by removing heat from that space and transferring it elsewhere. A device that performs this function may also be called an air conditioner , refrigerator , air source heat pump , geothermal heat pump , or chiller ( heat pump ).
Vapor-compression uses a circulating liquid refrigerant as the medium which absorbs and removes heat from the space to be cooled and subsequently rejects that heat elsewhere. Figure 1 depicts a typical, single-stage vapor-compression system. All such systems have four components: a compressor , a condenser , a metering device or thermal expansion valve (also called a throttle valve), and an evaporator. Circulating refrigerant enters the compressor in the thermodynamic state known as a saturated vapor [ 2 ] and is compressed to a higher pressure, resulting in a higher temperature as well. The hot, compressed vapor is then in the thermodynamic state known as a superheated vapor and it is at a temperature and pressure at which it can be condensed with either cooling water or cooling air flowing across the coil or tubes.
The superheated vapor then passes through the condenser . This is where heat is transferred from the circulating refrigerant to an external medium, allowing the gaseous refrigerant to cool and condense into a liquid. The rejected heat is carried away by either the water or the air, depending on the type of condenser.
The condensed liquid refrigerant, in the thermodynamic state known as a saturated liquid , is next routed through an expansion valve where it undergoes an abrupt reduction in pressure. That pressure reduction results in the adiabatic flash evaporation of a part of the liquid refrigerant. The auto-refrigeration effect of the adiabatic flash evaporation lowers the temperature of the liquid and vapor refrigerant mixture to where it is colder than the temperature of the enclosed space to be refrigerated.
The cold refrigerant liquid and vapor mixture is then routed through the coil or tubes in the evaporator. Air in the enclosed space circulates across the coil or tubes due to either thermal convection or a fan . Since the air is warmer than the cold liquid refrigerant, heat is transferred from the air to the refrigerant, which cools the air and warms the refrigerant, causing evaporation , returning it to a gaseous state. While liquid remains in the refrigerant flow, its temperature will not rise above the boiling point of the refrigerant, which depends on the pressure in the evaporator. Most systems are designed to evaporate all of the refrigerant to ensure that no liquid is returned to the compressor.
To complete the refrigeration cycle , the refrigerant vapor from the evaporator is again a saturated vapor and is routed back into the compressor. Over time, the evaporator may collect ice or water from ambient humidity . The ice is melted through defrosting . The water from the melted ice or the evaporator then drips into a drip pan, and the water is carried away by gravity or a condensate pump.
The selection of working fluid has a significant impact on the performance of the refrigeration cycles, and as such it plays a key role when it comes to designing or simply choosing an ideal machine for a certain task. One of the most widespread refrigerants is " Freon ". Freon is a trade name for a family of haloalkane refrigerants manufactured by DuPont and other companies. These refrigerants were commonly used due to their superior stability and safety properties: they were not flammable at room temperature and atmospheric pressure, nor obviously toxic as were the fluids they replaced, such as sulfur dioxide . Haloalkanes are, however, one or more orders of magnitude more expensive than petroleum-derived flammable alkanes of similar or better cooling performance.
Unfortunately, chlorine- and fluorine-bearing refrigerants reach the upper atmosphere when they escape. In the stratosphere , substances like CFCs and HCFCs break up due to UV radiation, releasing their chlorine free-radicals. These chlorine free-radicals act as catalysts in the breakdown of ozone through chain reactions. One CFC molecule can cause thousands of ozone molecules to break down. This causes severe damage to the ozone layer that shields the Earth's surface from the Sun's strong UV radiation and has been shown to lead to increased rates of skin cancer. The chlorine will remain active as a catalyst until and unless it binds with another particle, forming a stable molecule. CFC refrigerants in common but receding usage include R-11 and R-12 .
Newer refrigerants that have reduced ozone depletion effects compared to CFCs have replaced most CFC use. Examples include HCFCs (such as R-22 , used in most homes) and HFCs (such as R-134a , used in most cars). HCFCs in turn are being phased out under the Montreal Protocol and replaced by hydrofluorocarbons (HFCs), which do not contain chlorine atoms. However, CFCs, HCFCs, and HFCs all have very large global warming potential (GWP).
More benign refrigerants are currently the subject of research, such as supercritical carbon dioxide , known as R-744 . [ 3 ] These have similar efficiencies [ citation needed ] compared to existing CFC- and HFC-based compounds, and have many orders of magnitude lower global warming potential. Industry and governing bodies are pushing toward more GWP-friendly refrigerants. In industrial settings ammonia , as well as gases like ethylene , propane , iso-butane and other hydrocarbons, are commonly used (and have their own R-x customary numbers), depending on required temperatures and pressures. Many of these gases are flammable, explosive, or toxic, making their use restricted (i.e. to well-controlled environments operated by qualified personnel, or to designs using a very small amount of refrigerant). HFOs , which can be considered HFCs with some carbon-carbon double bonds, show promise of reducing GWP to levels low enough to be of no further concern. In the meantime, various blends of existing refrigerants are used to achieve the required properties and efficiency, at a reasonable cost and lower GWP.
The thermodynamics of the vapor compression cycle can be analyzed on a temperature versus entropy diagram as depicted in Figure 2. At point 1 in the diagram, the circulating refrigerant enters the compressor as a low-temperature, low-pressure saturated vapor. From point 1 to point 2, the vapor is isentropically compressed (compressed at constant entropy) and exits the compressor as a high-pressure, high-temperature vapor.
From point 2 to point 3, the vapor travels through part of the condenser which removes the heat by cooling the vapor. Between point 3 and point 4, the vapor travels through the remainder of the condenser and is condensed into a high-temperature, high-pressure subcooled liquid. Subcooling is the amount of sensible heat removed from the liquid below its saturation temperature. The condensation process occurs at essentially constant pressure.
Between points 4 and 5, the subcooled liquid refrigerant passes through the expansion valve and undergoes an abrupt decrease of pressure. That process results in the adiabatic flash evaporation and auto-refrigeration of a portion of the liquid (typically, less than half of the liquid flashes). The adiabatic flash evaporation process is isenthalpic (occurs at constant enthalpy ).
Between points 5 and 1, the cold and partially vaporized refrigerant travels through the coil or tubes in the evaporator where it is totally vaporized by the warm air (from the space being refrigerated) that a fan circulates across the coil or tubes in the evaporator. The evaporation process occurs at essentially constant temperature. After evaporation is completed, the vapor will start to increase in temperature. The amount of sensible heat added to the vapor above its saturation point, i.e. its boiling point , is called superheat.
The resulting superheated vapor returns to the compressor inlet at point 1 to complete the thermodynamic cycle.
The above discussion is based on the ideal vapor-compression refrigeration cycle which does not take into account real world items like frictional pressure drop in the system, slight internal irreversibility during the compression of the refrigerant vapor, or non-ideal gas behavior (if any).
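The ideal-cycle analysis above can be condensed into a coefficient-of-performance estimate from specific enthalpies at the numbered state points. A minimal Python sketch; the enthalpy values are illustrative placeholders, not data for any particular refrigerant:

```python
# Specific enthalpies (kJ/kg) at the state points of Figure 2 (illustrative).
h1 = 400.0   # saturated vapor at compressor inlet (point 1)
h2 = 430.0   # superheated vapor at compressor outlet (point 2)
h4 = 250.0   # subcooled liquid entering the expansion valve (point 4)
h5 = h4      # expansion is isenthalpic, so enthalpy is unchanged at point 5

cooling_effect = h1 - h5        # heat absorbed in the evaporator per kg of refrigerant
compressor_work = h2 - h1       # ideal (isentropic) work input per kg
cop = cooling_effect / compressor_work   # coefficient of performance: (400-250)/(430-400) = 5.0
```

The real COP is lower than this ideal figure because of the frictional losses, compression irreversibility, and non-ideal gas behavior noted above.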
The most common compressors used in refrigeration are reciprocating and scroll compressors , but large chillers or industrial cycles may use rotary screw or centrifugal compressors. Each application prefers one or another due to size, noise, efficiency, and pressure issues. Compressors are often described as being either open, hermetic , or semi-hermetic, to describe how the compressor and/or motor is situated in relation to the refrigerant being compressed. Variations of motor/compressor types can lead to the following configurations:
Typically in hermetic, and most semi-hermetic compressors (sometimes known as accessible hermetic compressors), the compressor and motor driving the compressor are integrated, and operate within the refrigerant system. The motor is hermetic and is designed to operate, and be cooled by, the refrigerant being compressed. The obvious disadvantage of hermetic motor compressors is that the motor drive cannot be maintained in situ, and the entire compressor must be removed if a motor fails. A further disadvantage is that burnt out windings can contaminate whole refrigeration systems requiring the system to be entirely pumped down, and the refrigerant replaced.
An open compressor has a motor drive which is outside of the refrigeration system, and provides drive to the compressor by means of an input shaft with suitable gland seals. Open compressor motors are typically air-cooled and can be fairly easily exchanged or repaired without degassing of the refrigeration system. The disadvantage of this type of compressor is a failure of the shaft seals, leading to loss of refrigerant.
Open motor compressors are generally easier to cool (using ambient air) and therefore tend to be simpler in design and more reliable, especially in high pressure applications where compressed gas temperatures can be very high. However the use of liquid injection for additional cooling can generally overcome this issue in most hermetic motor compressors.
Reciprocating compressors are piston-style, positive displacement compressors.
Rotary screw compressors are also positive displacement compressors. Two meshing screw-rotors rotate in opposite directions, trapping refrigerant vapor, and reducing the volume of the refrigerant along the rotors to the discharge point.
Small units are not practical due to back-leakage but large units have very high efficiency and flow capacity.
Centrifugal compressors are dynamic compressors. These compressors raise the pressure of the refrigerant by imparting velocity or dynamic energy, using a rotating impeller, and converting it to pressure energy.
Chillers with centrifugal compressors have a 'Centrifugal Compressor Map' that shows the "surge line" and the "choke line". For the same capacity ratings, across a wider span of operating conditions, chillers with the larger-diameter, lower-speed compressor have a wider 'Centrifugal Compressor Map' and experience surge conditions less than those with the smaller-diameter, less expensive, higher-speed compressors. The smaller-diameter, higher-speed compressors have a flatter curve. [ 4 ] [ 5 ] [ 6 ]
As the refrigerant flow rate decreases, some compressors change the gap between the impeller and the volute to maintain the correct velocity to avoid surge conditions. [ 7 ]
Scroll compressors are also positive displacement compressors. The refrigerant is compressed when one spiral orbits around a second stationary spiral, creating smaller and smaller pockets and higher pressures. By the time the refrigerant is discharged, it is fully pressurized.
In order to lubricate the moving parts of the compressor, oil is added to the refrigerant during installation or commissioning. The type of oil may be mineral or synthetic to suit the compressor type, and also chosen so as not to react with the refrigerant type and other components in the system. In small refrigeration systems the oil is allowed to circulate throughout the whole circuit, but care must be taken to design the pipework and components such that oil can drain back under gravity to the compressor. In larger more distributed systems, especially in retail refrigeration, the oil is normally captured at an oil separator immediately after the compressor, and is in turn re-delivered, by an oil level management system, back to the compressor(s). Oil separators are not 100% efficient so system pipework must still be designed so that oil can drain back by gravity to the oil separator or compressor.
Some newer compressor technologies use magnetic bearings or air bearings and require no lubrication, for example the Danfoss Turbocor range of centrifugal compressors. Avoiding the need for oil lubrication and the design requirements and ancillaries associated with it, simplifies the design of the refrigerant system, increases the heat transfer coefficient in evaporators and condensers, eliminates the risk of refrigerant being contaminated with oil, and reduces maintenance requirements. [ 8 ]
In simple commercial refrigeration systems the compressor is normally controlled by a simple pressure switch, with the expansion performed by a capillary tube or thermal expansion valve . In more complex systems, including multiple compressor installations, the use of electronic controls is typical, with adjustable set points to control the pressure at which compressors cut in and cut out, and temperature control by the use of electronic expansion valves.
In addition to the operational controls, separate high-pressure and low-pressure switches are normally utilised to provide secondary protection to the compressors and other components of the system from operating outside of safe parameters.
In more advanced electronic control systems the use of floating head pressure, and proactive suction pressure, control routines allow the compressor operation to be adjusted to accurately meet differing cooling demands while reducing energy consumption.
The schematic diagram of a single-stage refrigeration system shown in Figure 1 does not include other equipment items that would be provided in a large commercial or industrial vapor compression refrigeration system, such as:
In most of the world, the cooling capacity of refrigeration systems is measured in watts . Common residential air conditioning units range in capacity from 3.5 to 18 kilowatts . In a few countries it is measured in " tons of refrigeration ", with common residential air conditioning units from about 1 to 5 tons of refrigeration.
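The two capacity units convert directly, since one ton of refrigeration is approximately 3.517 kW:

```python
KW_PER_TON = 3.517   # one ton of refrigeration, in kilowatts (approximate)

def kw_to_tons(kw):
    """Convert a cooling capacity in kilowatts to tons of refrigeration."""
    return kw / KW_PER_TON

def tons_to_kw(tons):
    """Convert a cooling capacity in tons of refrigeration to kilowatts."""
    return tons * KW_PER_TON

upper = kw_to_tons(17.585)   # ~5 tons, the upper end of the residential range
one_ton = tons_to_kw(1.0)    # ~3.5 kW
```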
Many systems still use HCFC refrigerants , which contribute to depletion of the Earth's ozone layer . In countries adhering to the Montreal Protocol , HCFCs are due to be phased out and are largely being replaced by ozone-friendly HFCs . However, systems using HFC refrigerants tend to be slightly less efficient than systems using HCFCs. HFCs also have an extremely large global warming potential , because they remain in the atmosphere for many years and trap heat more effectively than carbon dioxide .
With the ultimate phasing out of HCFCs already a certainty, alternative non- haloalkane refrigerants are gaining popularity. In particular, once-abandoned refrigerants such as hydrocarbons ( butane for example) and CO 2 are coming back into more extensive use. For example, Coca-Cola 's vending machines at the 2006 FIFA World Cup in Germany used refrigeration utilizing CO 2 . [ 11 ] Ammonia (NH 3 ) is one of the oldest refrigerants, with excellent performance and essentially no pollution problems. However, ammonia has two disadvantages: it is toxic and it is incompatible with copper tubing. [ 12 ]
In 1805, the American inventor Oliver Evans described a closed vapor-compression refrigeration cycle for the production of ice by ether under vacuum. Heat would be removed from the environment by recycling vaporized refrigerant, where it would move through a compressor and condenser , and would eventually revert to a liquid form in order to repeat the refrigeration process over again. However, no such refrigeration unit was built by Evans. [ 13 ]
In 1834, an American expatriate to Great Britain, Jacob Perkins , built the first working vapor-compression refrigeration system in the world. [ 14 ] It was a closed cycle that could operate continuously, as he described in his patent.
His prototype system worked although it did not succeed commercially. [ 15 ]
A similar attempt was made in 1842, by American physician, John Gorrie , [ 16 ] who built a working prototype, but it was a commercial failure. American engineer Alexander Twining took out a British patent in 1850 for a vapor compression system that used ether.
The first practical vapor compression refrigeration system was built by James Harrison , a British journalist who had emigrated to Australia . [ 17 ] His 1856 patent was for a vapor compression system using ether, alcohol or ammonia. He built a mechanical ice-making machine in 1851 on the banks of the Barwon River at Rocky Point in Geelong , Victoria , and his first commercial ice-making machine followed in 1854. Harrison also introduced commercial vapor-compression refrigeration to breweries and meat packing houses and, by 1861, a dozen of his systems were in operation in Australia and England.
The first gas absorption refrigeration system using gaseous ammonia dissolved in water (referred to as "aqua ammonia") was developed by Ferdinand Carré of France in 1859 and patented in 1860. Carl von Linde , an engineering professor at the Technological University Munich in Germany, patented an improved method of liquefying gases in 1876. His new process made possible using gases such as ammonia , sulfur dioxide SO 2 , and methyl chloride (CH 3 Cl) as refrigerants and they were widely used for that purpose until the late 1920s.
|
https://en.wikipedia.org/wiki/Vapor-compression_refrigeration
|
Relative volatility is a measure comparing the vapor pressures of the components in a liquid mixture of chemicals. This quantity is widely used in designing large industrial distillation processes. [ 1 ] [ 2 ] [ 3 ] In effect, it indicates the ease or difficulty of using distillation to separate the more volatile components from the less volatile components in a mixture. By convention, relative volatility is usually denoted as α.
Relative volatilities are used in the design of all types of distillation processes as well as other separation or absorption processes that involve the contacting of vapor and liquid phases in a series of equilibrium stages .
Relative volatilities are not used in separation or absorption processes that involve components reacting with each other (for example, the absorption of gaseous carbon dioxide in aqueous solutions of sodium hydroxide ).
For a liquid mixture of two components (called a binary mixture ) at a given temperature and pressure , the relative volatility is defined as α = (ya/xa)/(yb/xb) = Ka/Kb, where ya and xa are the vapor- and liquid-phase mole fractions of the more volatile component a, and yb and xb are those of the less volatile component b.
When their liquid concentrations are equal, more volatile components have higher vapor pressures than less volatile components. Thus, a K value (= y/x) for a more volatile component is larger than the K value for a less volatile component. That means that α ≥ 1, since the larger K value of the more volatile component is in the numerator and the smaller K of the less volatile component is in the denominator.
α {\displaystyle \alpha } is a unitless quantity. When the volatilities of both key components are equal, α {\displaystyle \alpha } = 1 and separation of the two by distillation would be impossible under the given conditions because the compositions of the liquid and the vapor phase are the same ( azeotrope ). As the value of α {\displaystyle \alpha } increases above 1, separation by distillation becomes progressively easier.
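Since α is simply the ratio of the two K values, it is easy to compute. The Python sketch below uses illustrative, not measured, saturation pressures and assumes an ideal mixture, where Raoult's law gives K_i = p_sat_i / p_total:

```python
# Relative volatility of a binary mixture from its K values: alpha = K1 / K2,
# with component 1 the more volatile one so that alpha >= 1.
def relative_volatility(k_more_volatile: float, k_less_volatile: float) -> float:
    return k_more_volatile / k_less_volatile

# Illustrative saturation pressures (kPa) at some fixed temperature; for an
# ideal mixture, K_i = p_sat_i / p_total (Raoult's law), so alpha reduces to
# the ratio of the pure-component vapor pressures.
p_sat_light, p_sat_heavy, p_total = 101.3, 38.9, 101.3
alpha = relative_volatility(p_sat_light / p_total, p_sat_heavy / p_total)
print(f"alpha = {alpha:.2f}")
# Rule of thumb from the text: distillation is rarely undertaken below 1.05.
print("separable by distillation:", alpha > 1.05)
```

With equal K values the function returns α = 1, the azeotropic case where distillation cannot separate the pair.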
A liquid mixture containing two components is called a binary mixture. When a binary mixture is distilled, complete separation of the two components is rarely achieved. Typically, the overhead fraction from the distillation column consists predominantly of the more volatile component and some small amount of the less volatile component and the bottoms fraction consists predominantly of the less volatile component and some small amount of the more volatile component.
A liquid mixture containing many components is called a multi-component mixture. When a multi-component mixture is distilled, the overhead fraction and the bottoms fraction typically contain much more than one or two components. For example, some intermediate products in an oil refinery are multi-component liquid mixtures that may contain alkane , alkene and alkyne hydrocarbons —ranging from methane , having one carbon atom, to decanes having ten carbon atoms. For distilling such a mixture, the distillation column may be designed (for example) to produce:
Such a distillation column is typically called a depropanizer.
The designer would designate the key components governing the separation design to be propane as the so-called light key (LK) and isobutane as the so-called heavy key (HK) . In that context, a lighter component means a component with a lower boiling point (or a higher vapor pressure) and a heavier component means a component with a higher boiling point (or a lower vapor pressure).
Thus, for the distillation of any multi-component mixture, the relative volatility is often defined as α = K_LK / K_HK, the ratio of the K value of the light key to that of the heavy key.
Large-scale industrial distillation is rarely undertaken if the relative volatility is less than 1.05. [ 2 ]
The values of K have been correlated empirically or theoretically in terms of temperature, pressure and phase compositions in the form of equations, tables or graphs such as the well-known DePriester charts . [ 4 ]
K values are widely used in the design of large-scale distillation columns for distilling multi-component mixtures in oil refineries, petrochemical and chemical plants , natural gas processing plants and other industries.
|
https://en.wikipedia.org/wiki/Vapor-liquid_K_values
|
In physical chemistry , Henry's law is a gas law that states that the amount of dissolved gas in a liquid is directly proportional at equilibrium to its partial pressure above the liquid. The proportionality factor is called Henry's law constant . It was formulated by the English chemist William Henry , who studied the topic in the early 19th century.
In simple words, it states that the partial pressure of a gas in the vapour phase is directly proportional to the mole fraction of a gas in solution.
An example where Henry's law is at play is the depth-dependent dissolution of oxygen and nitrogen in the blood of underwater divers , which changes during decompression and can lead to decompression sickness . An everyday example is carbonated soft drinks , which contain dissolved carbon dioxide. Before opening, the gas above the drink in its container is almost pure carbon dioxide , at a pressure higher than atmospheric pressure . After the bottle is opened, this gas escapes, and the partial pressure of carbon dioxide above the liquid becomes much lower, resulting in degassing as the dissolved carbon dioxide comes out of solution.
In his 1803 publication about the quantity of gases absorbed by water, [ 1 ] William Henry described the results of his experiments:
… water takes up, of gas condensed by one, two, or more additional atmospheres, a quantity which, ordinarily compressed, would be equal to twice, thrice, &c. the volume absorbed under the common pressure of the atmosphere.
Charles Coulston Gillispie states that John Dalton "supposed that the separation of gas particles one from another in the vapor phase bears the ratio of a small whole number to their interatomic distance in solution. Henry's law follows as a consequence if this ratio is a constant for each gas at a given temperature." [ 2 ]
Under high pressure, the solubility of CO 2 increases. When a pressurized container of a carbonated beverage is opened, the pressure decreases to atmospheric, so the solubility decreases and the carbon dioxide forms bubbles that are released from the liquid.
It is often noted that beer served by gravity (that is, directly from a tap in the cask) is less heavily carbonated than the same beer served via a hand-pump (or beer-engine). This is because beer is pressurised on its way to the point of service by the action of the beer engine, causing carbon dioxide to dissolve in the beer. This then comes out of solution once the beer has left the pump, causing a higher level of perceptible 'condition' in the beer.
At high altitudes, the partial pressure of oxygen is reduced, so the concentration of O 2 in the blood and tissues is so low that people feel weak and are unable to think properly, a condition called hypoxia .
In underwater diving , gas is breathed at the ambient pressure which increases with depth due to the hydrostatic pressure . Solubility of gases increases with greater depth (greater pressure) according to Henry's law, so the body tissues take on more gas over time in greater depths of water. When ascending the diver is decompressed and the solubility of the gases dissolved in the tissues decreases accordingly. If the supersaturation is too great, bubbles may form and grow, and the presence of these bubbles can cause blockages in capillaries, or distortion in the more solid tissues which can cause damage known as decompression sickness . To avoid this injury the diver must ascend slowly enough that the excess dissolved gas is carried away by the blood and released into the lung gas.
There are many ways to define the proportionality constant of Henry's law, which can be subdivided into two fundamental types. One possibility is to put the aqueous phase into the numerator and the gaseous phase into the denominator ("aq/gas"), which results in the Henry's law solubility constant H_s; its value increases with increased solubility. Alternatively, numerator and denominator can be switched ("gas/aq"), which results in the Henry's law volatility constant H_v; its value decreases with increased solubility. IUPAC describes several variants of both fundamental types. [ 3 ] This results from the multiplicity of quantities that can be chosen to describe the composition of the two phases. Typical choices for the aqueous phase are molar concentration (c_a), molality (b), and molar mixing ratio (x). For the gas phase, molar concentration (c_g) and partial pressure (p) are often used. It is not possible to use the gas-phase mixing ratio (y), because at a given gas-phase mixing ratio the aqueous-phase concentration c_a depends on the total pressure, so the ratio y/c_a is not a constant. [ 4 ] To specify the exact variant of the Henry's law constant, two superscripts are used, referring to the numerator and the denominator of the definition. For example, H_s^cp refers to the Henry solubility defined as c/p.
Atmospheric chemists often define the Henry solubility as H_s^cp = c_a / p, where c_a is the concentration of a species in the aqueous phase, and p is the partial pressure of that species in the gas phase under equilibrium conditions.
The SI unit for H_s^cp is mol/(m³·Pa); however, often the unit M/atm is used, since c_a is usually expressed in M (1 M = 1 mol/dm³) and p in atm (1 atm = 101325 Pa).
The Henry solubility can also be expressed as the dimensionless ratio between the aqueous-phase concentration c_a of a species and its gas-phase concentration c_g: H_s^cc = c_a / c_g.
For an ideal gas, the conversion is H_s^cc = R T H_s^cp, where R is the gas constant and T is the temperature.
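The ideal-gas conversion between the two constants is a one-liner; in the Python sketch below, the CO2-like value of H_s^cp is rounded and purely illustrative, not reference data:

```python
# Convert a Henry's law solubility constant between two common variants for
# an ideal gas: H_s^cc (dimensionless, c_a / c_g) = R * T * H_s^cp.
R = 8.314462618  # molar gas constant, J/(mol K)

def hscc_from_hscp(hscp_si: float, temperature_k: float) -> float:
    """Dimensionless H_s^cc from H_s^cp given in SI units, mol/(m^3 Pa)."""
    return R * temperature_k * hscp_si

# Rounded, illustrative CO2-like value near 298.15 K: ~3.3e-4 mol/(m^3 Pa).
hscc = hscc_from_hscp(3.3e-4, 298.15)
print(f"H_s^cc ≈ {hscc:.3f}")  # about 0.82 at room temperature
```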
Sometimes, this dimensionless constant is called the water–air partitioning coefficient K_WA. [ 5 ] It is closely related to the various, slightly different definitions of the Ostwald coefficient L, as discussed by Battino (1984). [ 6 ]
Another Henry's law solubility constant is H_s^xp = x / p, where x is the molar mixing ratio in the aqueous phase. For a dilute aqueous solution the conversion between x and c_a is c_a ≈ x ρ_H2O / M_H2O, where ρ_H2O is the density of water and M_H2O is the molar mass of water. Thus H_s^xp ≈ (M_H2O / ρ_H2O) H_s^cp.
The SI unit for H_s^xp is Pa⁻¹, although atm⁻¹ is still frequently used.
It can be advantageous to describe the aqueous phase in terms of molality instead of concentration. The molality of a solution does not change with T, since it refers to the mass of the solvent. In contrast, the concentration c does change with T, since the density of a solution and thus its volume are temperature-dependent. Defining the aqueous-phase composition via molality has the advantage that any temperature dependence of the Henry's law constant is a true solubility phenomenon and not introduced indirectly via a density change of the solution. Using molality, the Henry solubility can be defined as H_s^bp = b / p, where b is used as the symbol for molality (instead of m) to avoid confusion with the symbol m for mass. The SI unit for H_s^bp is mol/(kg·Pa). There is no simple way to calculate H_s^cp from H_s^bp, since the conversion between concentration c_a and molality b involves all solutes of a solution. For a solution with a total of n solutes with indices i = 1, …, n, the conversion is c_a = b ρ / (1 + Σᵢ bᵢ Mᵢ), where ρ is the density of the solution and Mᵢ are the molar masses. Here b is identical to one of the bᵢ in the denominator. If there is only one solute, the equation simplifies to c_a = b ρ / (1 + b M).
Henry's law is only valid for dilute solutions where bM ≪ 1 and ρ ≈ ρ_H2O. In this case the conversion reduces further to c_a ≈ b ρ_H2O, and thus H_s^bp ≈ H_s^cp / ρ_H2O.
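The molality-to-concentration conversion and its dilute-water limit can be sketched in Python; the solute values below are hypothetical:

```python
# Molality-to-concentration conversion for a multi-solute solution:
#   c_a = b * rho / (1 + sum(b_i * M_i))
# With a single solute and b*M << 1, this reduces to the dilute limit
# c_a ≈ b * rho_water assumed by Henry's law.
def molality_to_concentration(b, rho, solutes):
    """b in mol/kg, rho in kg/m^3, solutes as (b_i [mol/kg], M_i [kg/mol])."""
    return b * rho / (1.0 + sum(bi * Mi for bi, Mi in solutes))

rho_water = 997.0  # kg/m^3, approximate density of water near 25 °C
b = 0.01           # mol/kg, a dilute single solute (hypothetical)
M = 0.044          # kg/mol, hypothetical solute molar mass
exact = molality_to_concentration(b, rho_water, [(b, M)])
dilute = b * rho_water
print(exact, dilute)  # the two agree to well under 0.1% at this dilution
```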
According to Sazonov and Shaw, [ 7 ] the dimensionless Bunsen coefficient α is defined as "the volume of saturating gas, V 1 , reduced to T° = 273.15 K, p° = 1 bar, which is absorbed by unit volume V 2 * of pure solvent at the temperature of measurement and partial pressure of 1 bar." If the gas is ideal, the pressure cancels out, and the conversion to H_s^cp is simply H_s^cp = α / (R T_STP), with T_STP = 273.15 K. Note that, according to this definition, the conversion factor is not temperature-dependent: independent of the temperature that the Bunsen coefficient refers to, 273.15 K is always used for the conversion. The Bunsen coefficient, which is named after Robert Bunsen , has been used mainly in the older literature, and IUPAC considers it to be obsolete. [ 3 ]
According to Sazonov and Shaw, [ 7 ] the Kuenen coefficient S is defined as "the volume of saturating gas V(g), reduced to T° = 273.15 K, p° = 1 bar, which is dissolved by unit mass of pure solvent at the temperature of measurement and partial pressure 1 bar." If the gas is ideal, the relation to H_s^cp is H_s^cp = S ρ / (R T_STP), where ρ is the density of the solvent and T_STP = 273.15 K. The SI unit for S is m³/kg. [ 7 ] The Kuenen coefficient, which is named after Johannes Kuenen , has been used mainly in the older literature, and IUPAC considers it to be obsolete. [ 3 ]
A common way to define a Henry volatility is dividing the partial pressure by the aqueous-phase concentration: H_v^pc = p / c_a.
The SI unit for H_v^pc is Pa·m³/mol.
Another Henry volatility is H_v^px = p / x.
The SI unit for H_v^px is Pa; however, atm is still frequently used.
The Henry volatility can also be expressed as the dimensionless ratio between the gas-phase concentration c_g of a species and its aqueous-phase concentration c_a: H_v^cc = c_g / c_a = 1 / H_s^cc.
In chemical engineering and environmental chemistry , this dimensionless constant is often called the air–water partitioning coefficient K_AW. [ 8 ] [ 9 ]
A large compilation of Henry's law constants has been published by Sander (2023). [ 10 ] A few selected values are shown in the table below:
When the temperature of a system changes, the Henry constant also changes. The temperature dependence of equilibrium constants can generally be described with the Van 't Hoff equation , which also applies to Henry's law constants: d ln H_s / d(1/T) = −Δ_sol H / R,
where Δ_sol H is the enthalpy of dissolution. Note that the letter H in the symbol Δ_sol H refers to enthalpy and is not related to the letter H for Henry's law constants. This applies to the Henry's law solubility constant H_s; for the Henry's law volatility constant H_v, the sign of the right-hand side must be reversed.
Integrating the above equation and creating an expression based on H° at the reference temperature T° = 298.15 K yields: H(T) = H° exp[ −(Δ_sol H / R) (1/T − 1/T°) ].
The van 't Hoff equation in this form is only valid for a limited temperature range in which Δ_sol H does not change much with temperature (variations of around 20 K).
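The integrated van 't Hoff form can be sketched as follows in Python; the H° and Δ_sol H values are rounded, CO2-like illustrations, not reference data:

```python
import math

# Van 't Hoff extrapolation of a Henry solubility constant from the reference
# temperature T0 = 298.15 K, valid only over a modest temperature range:
#   H(T) = H0 * exp(-(dsol_h / R) * (1/T - 1/T0))
R = 8.314462618  # J/(mol K)
T0 = 298.15      # K

def henry_at(h0: float, dsol_h: float, t_k: float) -> float:
    """h0: H_s at T0 (any units); dsol_h: enthalpy of dissolution in J/mol."""
    return h0 * math.exp(-(dsol_h / R) * (1.0 / t_k - 1.0 / T0))

# Illustrative CO2-like numbers: H0 = 3.3e-4 mol/(m^3 Pa), dsol_h ≈ -20 kJ/mol.
# Dissolution is exothermic (negative dsol_h), so solubility drops on warming.
print(henry_at(3.3e-4, -20e3, 310.15) < 3.3e-4)  # True
```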
The following table lists some temperature dependencies:
Solubility of permanent gases usually decreases with increasing temperature at around room temperature. However, for aqueous solutions, the Henry's law solubility constant for many species goes through a minimum. For most permanent gases, the minimum is below 120 °C. Often, the smaller the gas molecule (and the lower the gas solubility in water), the lower the temperature of the minimum of the Henry's law constant. Thus, the minimum is at about 30 °C for helium, 92 to 93 °C for argon, nitrogen and oxygen, and 114 °C for xenon. [ 12 ]
The Henry's law constants mentioned so far do not consider any chemical equilibria in the aqueous phase. This type is called the intrinsic , or physical , Henry's law constant. For example, the intrinsic Henry's law solubility constant of formaldehyde can be defined as H_s^cp = c(HCHO) / p(HCHO), counting only the unhydrated form in the aqueous phase.
In aqueous solution, formaldehyde is almost completely hydrated: HCHO + H2O ⇌ HOCH2OH.
The total concentration of dissolved formaldehyde is c_tot = c(HCHO) + c(HOCH2OH).
Taking this equilibrium into account, an effective Henry's law constant H_s,eff can be defined as H_s,eff = c_tot / p(HCHO).
For acids and bases, the effective Henry's law constant is not a useful quantity because it depends on the pH of the solution. [ 10 ] In order to obtain a pH-independent constant, the product of the intrinsic Henry's law constant H_s^cp and the acidity constant K_A is often used for strong acids like hydrochloric acid (HCl): H' = K_A H_s^cp.
Although H' is usually also called a Henry's law constant, it is a different quantity and it has different units than H_s^cp.
Values of Henry's law constants for aqueous solutions depend on the composition of the solution, i.e., on its ionic strength and on dissolved organics. In general, the solubility of a gas decreases with increasing salinity (" salting out "). However, a " salting in " effect has also been observed, for example for the effective Henry's law constant of glyoxal . The effect can be described with the Sechenov equation, named after the Russian physiologist Ivan Sechenov (sometimes the German transliteration "Setschenow" of the Cyrillic name Се́ченов is used). There are many alternative ways to define the Sechenov equation, depending on how the aqueous-phase composition is described (based on concentration, molality, or molar fraction) and which variant of the Henry's law constant is used. Describing the solution in terms of molality is preferred because molality is invariant to temperature and to the addition of dry salt to the solution. Thus, the Sechenov equation can be written as log10( H_{s,0}^{bp} / H_s^{bp} ) = k_s b(salt),
where H_{s,0}^{bp} is the Henry's law constant in pure water, H_s^{bp} is the Henry's law constant in the salt solution, k_s is the molality-based Sechenov constant, and b(salt) is the molality of the salt.
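A minimal Python sketch of the molality-based Sechenov correction; the Sechenov constant and salt molality below are hypothetical:

```python
# Sechenov salting-out correction (molality-based form):
#   log10(H_s0 / H_s) = k_s * b_salt
# so the solubility constant in the salt solution is
#   H_s = H_s0 * 10**(-k_s * b_salt).
def sechenov(h_s0: float, k_s: float, b_salt: float) -> float:
    """k_s in kg/mol, b_salt in mol/kg; positive k_s means salting out."""
    return h_s0 * 10.0 ** (-k_s * b_salt)

# Hypothetical values: k_s = 0.1 kg/mol and a salt molality of 0.5 mol/kg
# reduce the solubility constant by about 11%.
print(sechenov(1.0, 0.1, 0.5))  # ≈ 0.891
```

A negative k_s would describe the rarer "salting in" effect mentioned above.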
Henry's law has been shown to apply to a wide range of solutes in the limit of infinite dilution ( x → 0), including non-volatile substances such as sucrose . In these cases, it is necessary to state the law in terms of chemical potentials . For a solute in an ideal dilute solution, the chemical potential depends only on the concentration. For non-ideal solutions, the activity coefficients of the components must be taken into account: μ = μ_c° + RT ln(γ_c c / c°),
where γ_c = H_v / p* for a volatile solute; c° = 1 mol/L.
For non-ideal solutions, the infinite-dilution activity coefficient γ_c depends on the concentration and must be determined at the concentration of interest. The activity coefficient can also be obtained for non-volatile solutes, where the vapor pressure of the pure substance is negligible, by using the Gibbs–Duhem relation:
By measuring the change in vapor pressure (and hence chemical potential) of the solvent, the chemical potential of the solute can be deduced.
The standard state for a dilute solution is also defined in terms of infinite-dilution behavior. Although the standard concentration c ° is taken to be 1 mol/L by convention, the standard state is a hypothetical solution of 1 mol/L in which the solute has its limiting infinite-dilution properties. This has the effect that all non-ideal behavior is described by the activity coefficient: the activity coefficient at 1 mol/L is not necessarily unity (and is frequently quite different from unity).
All the relations above can also be expressed in terms of molalities b rather than concentrations, e.g. μ = μ_b° + RT ln(γ_b b / b°),
where γ_b = H_v^pb / p* for a volatile solute; b° = 1 mol/kg.
The standard chemical potential μ_b°, the activity coefficient γ_b and the Henry's law constant H_v^pb all have different numerical values when molalities are used in place of concentrations.
Henry's law solubility constant H_{s,2,M}^{xp} for a gas 2 in a mixture M of two solvents 1 and 3 depends on the individual constants for each solvent, H_{s,2,1}^{xp} and H_{s,2,3}^{xp}, according [ 13 ] to: ln H_{s,2,M}^{xp} = x_1 ln H_{s,2,1}^{xp} + x_3 ln H_{s,2,3}^{xp} + a_{13} x_1 x_3,
where x_1 and x_3 are the molar ratios of each solvent in the mixture and a_{13} is the interaction parameter of the solvents from the Wohl expansion of the excess chemical potential of the ternary mixtures.
A similar relationship can be found for the volatility constant H_{v,2,M}^{px} by remembering that H_v^{px} = 1/H_s^{xp} and that, both being positive real numbers, ln H_s^{xp} = −ln(1/H_s^{xp}) = −ln H_v^{px}, thus: ln H_{v,2,M}^{px} = x_1 ln H_{v,2,1}^{px} + x_3 ln H_{v,2,3}^{px} − a_{13} x_1 x_3.
For a water–ethanol mixture, the interaction parameter a_{13} has values around 0.1 ± 0.05 for ethanol concentrations (volume/volume) between 5% and 25%. [ 14 ]
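The mixed-solvent relation can be sketched as a short Python function; the pure-solvent constants and the interaction parameter below are hypothetical:

```python
import math

# Henry solubility constant of a gas (2) in a binary solvent mixture (M) of
# solvents 1 and 3, using the Wohl-expansion form quoted above:
#   ln H_mix = x1*ln(H1) + x3*ln(H3) + a13*x1*x3
def henry_mixed_solvent(h1: float, h3: float, x1: float, a13: float) -> float:
    """h1, h3: H_s^xp in each pure solvent; x1: mole fraction of solvent 1."""
    x3 = 1.0 - x1
    return math.exp(x1 * math.log(h1) + x3 * math.log(h3) + a13 * x1 * x3)

# With a13 = 0 the result is the mole-fraction-weighted geometric mean of the
# two pure-solvent constants; hypothetical water/ethanol-like case, a13 = 0.1:
print(henry_mixed_solvent(1e-5, 5e-5, 0.9, 0.1))
```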
In geochemistry , a version of Henry's law applies to the solubility of a noble gas in contact with silicate melt. One equation used is
where
Henry's law is a limiting law that only applies for "sufficiently dilute" solutions, while Raoult's law is generally valid when the liquid phase is almost pure or for mixtures of similar substances. [ 15 ] The range of concentrations in which Henry's law applies becomes narrower the more the system diverges from ideal behavior; roughly speaking, that means the more chemically "different" the solute is from the solvent.
For a dilute solution, the concentration of the solute is approximately proportional to its mole fraction x , and Henry's law can be written as p = H_v^px x.
This can be compared with Raoult's law : p = p* x,
where p* is the vapor pressure of the pure component.
At first sight, Raoult's law appears to be a special case of Henry's law, where H_v^px = p*. This is true for pairs of closely related substances, such as benzene and toluene , which obey Raoult's law over the entire composition range: such mixtures are called ideal mixtures.
The general case is that both laws are limit laws , and they apply at opposite ends of the composition range. The vapor pressure of the component in large excess, such as the solvent for a dilute solution, is proportional to its mole fraction, and the constant of proportionality is the vapor pressure of the pure substance (Raoult's law). The vapor pressure of the solute is also proportional to the solute's mole fraction, but the constant of proportionality is different and must be determined experimentally (Henry's law). In mathematical terms: for the solvent, p/x → p* as x → 1 (Raoult's law); for the solute, p/x → H_v^px as x → 0 (Henry's law).
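The two limiting laws can be contrasted numerically with a minimal Python sketch; the vapor pressure and Henry constant below are hypothetical:

```python
# The two limiting laws for a component's partial pressure p(x) in solution:
#   solvent limit (x -> 1): p ≈ p_star * x      (Raoult's law)
#   solute limit  (x -> 0): p ≈ h_vpx * x       (Henry's law)
# For an ideal pair (benzene/toluene-like), h_vpx = p_star and the laws agree.
def raoult(p_star: float, x: float) -> float:
    return p_star * x

def henry(h_vpx: float, x: float) -> float:
    return h_vpx * x

# Hypothetical non-ideal solute: H_v^px = 3 * p_star, so the dilute-limit
# slope of p(x) is three times the pure-component vapor pressure.
p_star, h_vpx = 10.0e3, 30.0e3  # Pa
print(raoult(p_star, 0.02), henry(h_vpx, 0.02))  # 200.0 600.0
```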
Raoult's law can also be related to non-gas solutes.
|
https://en.wikipedia.org/wiki/Vapor-liquid_distribution_ratio
|
A vapor-tight tank is a piece of portable onshore oil production equipment designed to store crude oil and convey oil vapors to a flare stack .
Vapor-tight tanks are horizontal vessels that can usually hold up to 14.7 pounds per square inch ( gauge ) (1.01 bar(g)). They use that pressure to force oil vapors to the flare. Connection to a flare allows these systems to be operated in situations with a high hydrogen sulfide content. In fact, their original intended use was sour crude oil production. The first vapor-tight tanks were constructed from used crude oil tank cars by Tornado Technologies.
Vapor-tight tanks are frequently packaged with an integral separator , flare stack, and other equipment to form a complete single- well battery. Because of their small size and portability, they are mostly used in temporary production of oil wells.
Canadian regulations consider that vapor-tight tanks are process vessels, rather than storage tanks , so tankage spacing and secondary containment provisions are not applicable. [ 1 ] [ 2 ]
This article related to natural gas, petroleum or the petroleum industry is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Vapor-tight_tank
|
A vapor horn is a device used primarily for two-phase (liquid/vapor) feeds to petroleum refinery fractionators , which is designed to provide both bulk phase separation of the vapor and liquid, and to provide initial distribution of the feed vapor. [ 1 ] [ 2 ] [ 3 ]
For vapor/liquid phase separation, vapor horns utilize an open bottom construction and induced centrifugal action to direct entrained liquid particles to the column wall, which will then flow down into the column sump or collector tray below. Some vapor horn designs employ baffles , to avoid excessive impingement and splashing which can result in the formation of small liquid particles that are at higher risk of being re-entrained. Additional baffles can be employed to eliminate cyclonic motion once the bulk phase separation is complete and the swirling motion is no longer desirable. [ 1 ] Some vapor horns have been designed to handle high inlet velocities. [ 2 ]
Vapor horns often find application on the inlet to crude oil vacuum distillation towers , where liquid entrainment can be quite detrimental to tower performance. [ 4 ] [ 5 ] It has been noted that in high vapor rate vacuum services, vapor horns have low levels of entrainment relative to similar technologies. [ 6 ] In some cases, refiners have removed vapor horns within vacuum towers to reduce coking . While this does work as intended, it also has the unintended consequence of increasing carryover into heavy vacuum gas oil , thereby lowering its quality. [ 7 ]
|
https://en.wikipedia.org/wiki/Vapor_horn
|
Vapor lock is a problem caused by liquid fuel changing state to vapor while still in the fuel delivery system of gasoline -fueled internal combustion engines . This disrupts the operation of the fuel pump , causing loss of feed pressure to the carburetor or fuel injection system, resulting in transient loss of power or complete stalling . Restarting the engine from this state may be difficult. [ 1 ]
The fuel can vaporize due to being heated by the engine, by the local climate or due to a lower boiling point at high altitude. In regions where fuels with lower viscosity (and lower boiling threshold) are used during the winter to improve engine startup, continued use of the specialized fuels during the summer can cause vapor lock to occur more readily.
Vapor lock was far more common in older gasoline-fuel systems incorporating a low-pressure mechanical fuel pump driven by the engine, located in the engine compartment and feeding a carburetor. Such pumps were typically located higher than the fuel tank, were directly heated by the engine and fed fuel directly to the float bowl inside the carburetor. Fuel was drawn under negative pressure ( gauge pressure ) from the feed line, increasing the risk of a vapor lock developing between the tank and pump. A vapor lock being drawn into the fuel pump could disrupt the fuel pressure long enough for the float chamber in the carburetor to partially or completely drain, causing fuel starvation in the engine. Even temporary disruption of fuel supply into the float chamber is not ideal; most carburetors are designed to run at a fixed level of fuel in the float bowl and reducing the level will reduce the fuel to air mixture delivered to the engine.
Carburetor units may not effectively deal with fuel vapor being delivered to the float chamber. Most designs incorporate a pressure-balance duct linking the top of the float bowl with either the intake to the carburetor or the outside air. Even if the pump can handle vapor locks effectively, fuel vapor entering the float bowl has to be vented. If this is done via the intake system, the mixture is, in effect, enriched, creating a mixture-control and pollution issue. If it is done by venting to the outside, the result is direct hydrocarbon emission and an effective loss of fuel efficiency and possibly a fuel-odor problem. For this reason, some fuel-delivery systems allow fuel vapor to be returned to the fuel tank to be condensed back to the liquid phase, or use an active carbon filled canister where fuel vapor is absorbed. This is usually implemented by removing fuel vapor from the fuel line near the engine rather than from the float bowl. Such a system may also divert excess fuel pressure from the pump back to the tank.
Most modern engines are equipped with fuel injection and have an electric submersible fuel pump in the fuel tank. Moving the fuel pump to the interior of the tank helps prevent vapor lock since the entire fuel-delivery system is under positive pressure and the fuel pump runs cooler than it would be if it is located in the engine compartment. This is the primary reason that vapor lock is rare in modern fuel systems. For the same reason, some carbureted engines are retrofitted with an electric fuel pump near the fuel tank.
A vapor lock is more likely to develop when the vehicle is in traffic because the under-hood temperature tends to rise. A vapor lock can also develop when the engine is stopped while hot and the vehicle is parked for a short period. The fuel in the line near the engine does not move and can thus heat up sufficiently to form a vapor lock. The problem is more likely in hot weather or high altitude in either case.
Gravity-feed fuel systems are not immune to vapor lock. Much of the foregoing applies equally to a gravity-feed system. If vapor forms in the fuel line, its lower density reduces the pressure developed by the weight of the fuel. This pressure is what normally moves fuel from the tank to the carburetor, so fuel supply will be disrupted until the vapor is removed, either by the remaining fuel pressure forcing it into the float bowl and out the vent or by allowing the vapor to cool and re-condense.
Vapor lock has been the cause of forced landings in aircraft. That is why aviation fuel is manufactured to far lower vapor pressure than automotive gasoline (petrol). [ citation needed ] In addition, aircraft are far more susceptible because of their ability to change altitude and associated ambient pressure rapidly. Liquids boil at lower temperatures when in lower pressure environments.
Vapor lock was a common occurrence in stock car racing , since the cars have traditionally used gasoline and carburetors. With the introduction of the fuel injection requirement for NASCAR -sanctioned events in 2012, vapor lock has been largely eliminated.
Vapor lock is also less common in other motorsports, such as Formula One and IndyCar racing, due to the use of fuel injection and alcohol fuels ( ethanol or methanol ), which have a lower vapor pressure than gasoline. However, it can still happen: the double Red Bull Racing retirements at the 2022 Bahrain Grand Prix were caused by vapor lock, presumably due to unusually high temperatures in the fuel system. [ 2 ]
The higher the volatility of the fuel, the more likely it is that vapor lock will occur. Historically, gasoline was a more volatile distillate than it is now and was more prone to vapor lock. Conversely, diesel fuel is far less volatile than gasoline, so that diesel engines almost never suffer from vapor lock. However, diesel engine fuel systems are far more susceptible to air locks in their fuel lines, because standard diesel fuel injection pumps rely on the fuel being non-compressible. Air locks are caused by air leaking into the fuel delivery line or entering from the tank; common causes include the fuel tank being allowed to run dry, changing a fuel filter, or leaky fuel lines. Air locks are eliminated by turning the engine over for a time using the starter motor , or by bleeding the fuel system.
Modern diesel injection systems have self-bleeding electric pumps which eliminate the air lock problem.
|
https://en.wikipedia.org/wiki/Vapor_lock
|
Vapor polishing is a method of polishing plastics to reduce the surface roughness or improve clarity. Typically, a component is exposed to a chemical vapor causing the surface to flow thereby improving the surface finish. This method of polishing is frequently used to return clear materials to an optical quality finish after machining . Vapor polishing works well in the internal features of components.
Feature size changes of the plastic component generally do not occur. Post stress relieving is usually required as vapor polishing sets up surface stresses that can cause crazing [ citation needed ] .
Plastics that respond well to vapor polishing are polycarbonate , acrylic , polysulfone , PEI , and ABS .
The technique is also being used to improve the surface of objects created with 3D printing techniques. As the printer deposits layer upon layer of material to build the object, the surface is often not entirely smooth. The smoothness of the surface can be greatly increased by vapor polishing. [ 1 ]
|
https://en.wikipedia.org/wiki/Vapor_polishing
|
Vapor pressure [ a ] or equilibrium vapor pressure is the pressure exerted by a vapor in thermodynamic equilibrium with its condensed phases (solid or liquid) at a given temperature in a closed system . The equilibrium vapor pressure is an indication of a liquid's thermodynamic tendency to evaporate: it reflects the balance between particles escaping from the liquid (or solid) and those in the coexisting vapor phase. A substance with a high vapor pressure at normal temperatures is often referred to as volatile . As the temperature of a liquid increases, the attractive interactions between liquid molecules become less significant in comparison to the entropy of those molecules in the gas phase, and the vapor pressure rises. Thus, liquids with strong intermolecular interactions tend to have lower vapor pressures, while those with weaker interactions tend to have higher ones.
The vapor pressure of any substance increases non-linearly with temperature, often described by the Clausius–Clapeyron relation . The atmospheric pressure boiling point of a liquid (also known as the normal boiling point ) is the temperature at which the vapor pressure equals the ambient atmospheric pressure. With any incremental increase in that temperature, the vapor pressure becomes sufficient to overcome atmospheric pressure and cause the liquid to form vapor bubbles. Bubble formation at greater depths of liquid requires a slightly higher temperature because of the added hydrostatic pressure of the fluid mass above. More important at shallow depths is the higher temperature required to start bubble formation: the surface tension of the bubble wall produces an overpressure in the very small initial bubbles.
Vapor pressure is measured in the standard units of pressure . The International System of Units (SI) recognizes pressure as a derived unit with the dimension of force per area and designates the pascal (Pa) as its standard unit. [ 1 ] One pascal is one newton per square meter (N·m −2 or kg·m −1 ·s −2 ).
Experimental measurement of vapor pressure is a simple procedure for common pressures between 1 and 200 kPa. [ 2 ] The most accurate results are obtained near the boiling point of the substance; measurements smaller than 1 kPa are subject to major errors. Procedures often consist of purifying the test substance, isolating it in a container, evacuating any foreign gas, then measuring the equilibrium pressure of the gaseous phase of the substance in the container at different temperatures. Better accuracy is achieved when care is taken to ensure that the entire substance and its vapor are both at the prescribed temperature. This is often done, as with the use of an isoteniscope , by submerging the containment area in a liquid bath.
Very low vapor pressures of solids can be measured using the Knudsen effusion cell method.
In a medical context, vapor pressure is sometimes expressed in other units, specifically millimeters of mercury (mmHg) . Accurate knowledge of the vapor pressure is important for volatile inhalational anesthetics , most of which are liquids at body temperature but have a relatively high vapor pressure.
The Antoine equation [ 3 ] [ 4 ] is a pragmatic mathematical expression of the relation between the vapor pressure and the temperature of pure liquid or solid substances. It is obtained by curve-fitting and is adapted to the fact that vapor pressure is usually an increasing, concave function of temperature. The basic form of the equation is:

log10 P = A − B / (C + T)

and it can be transformed into this temperature-explicit form:

T = B / (A − log10 P) − C

where A, B, and C are substance-specific fitted coefficients, P is the vapor pressure, and T is the temperature.

A simpler form of the equation with only two coefficients is sometimes used:

log10 P = A − B / T

which can be transformed to:

T = B / (A − log10 P)
Sublimations and vaporizations of the same substance have separate sets of Antoine coefficients, as do components in mixtures. [ 3 ] Each parameter set for a specific compound is applicable only over a specified temperature range. Generally, temperature ranges are chosen so that the equation's error stays within a few percent, up to 8–10 percent at worst. For many volatile substances, several different sets of parameters are available, each used for a different temperature range. The Antoine equation has poor accuracy with any single parameter set when used from a compound's melting point to its critical temperature. Accuracy is also usually poor when the vapor pressure is under 10 Torr, because of the limitations of the apparatus [ citation needed ] used to establish the Antoine parameter values.
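As a minimal sketch, the Antoine equation and its temperature-explicit inverse can be implemented directly. The coefficients below are illustrative published values for benzene (P in torr, T in degrees Celsius, valid roughly between 8 and 103 °C); they are an assumption for demonstration, not taken from this article:

```python
import math

def antoine_pressure(T, A, B, C):
    """Vapor pressure from the Antoine equation: log10(P) = A - B / (C + T)."""
    return 10 ** (A - B / (C + T))

def antoine_temperature(P, A, B, C):
    """Temperature-explicit inverse: T = B / (A - log10(P)) - C."""
    return B / (A - math.log10(P)) - C

# Illustrative Antoine coefficients for benzene (P in torr, T in deg C).
A, B, C = 6.90565, 1211.033, 220.790

p_bp = antoine_pressure(80.1, A, B, C)    # near benzene's normal boiling point
t_bp = antoine_temperature(760.0, A, B, C)  # temperature where P reaches 1 atm
```

Because the temperature-explicit form is an exact algebraic inversion of the basic form, converting a temperature to a pressure and back should reproduce the starting value within floating-point error.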
The Wagner equation [ 5 ] gives "one of the best" [ 6 ] fits to experimental data but is quite complex. It expresses reduced vapor pressure as a function of reduced temperature.
As a general trend, vapor pressures of liquids at ambient temperatures increase with decreasing boiling points. This is illustrated in the vapor pressure chart (see right) that shows graphs of the vapor pressures versus temperatures for a variety of liquids. [ 7 ] At the normal boiling point of a liquid, the vapor pressure is equal to the standard atmospheric pressure defined as 1 atmosphere, [ 1 ] 760 Torr, 101.325 kPa, or 14.69595 psi.
For example, at any given temperature, methyl chloride has the highest vapor pressure of any of the liquids in the chart. It also has the lowest normal boiling point at −24.2 °C (−11.6 °F), which is where the vapor pressure curve of methyl chloride (the blue line) intersects the horizontal pressure line of one atmosphere ( atm ) of absolute vapor pressure.
Although the relation between vapor pressure and temperature is non-linear, the chart uses a logarithmic vertical axis to produce slightly curved lines, so one chart can graph many liquids. A nearly straight line is obtained when the logarithm of the vapor pressure is plotted against 1/(T + 230) [ 8 ] where T is the temperature in degrees Celsius. The vapor pressure of a liquid at its boiling point equals the pressure of its surrounding environment.
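The near-linearity of log vapor pressure against 1/(T + 230) can be checked numerically. This sketch uses the Antoine equation with illustrative coefficients for water (torr, degrees Celsius) as the vapor-pressure source, which is an assumption for demonstration:

```python
def log10_vp_water(T):
    # Antoine equation with illustrative coefficients for water (torr, deg C).
    return 8.07131 - 1730.63 / (233.426 + T)

# Sample log10(P) against x = 1/(T + 230) over 10-90 deg C:
temps = [10.0, 30.0, 50.0, 70.0, 90.0]
xs = [1.0 / (T + 230.0) for T in temps]
ys = [log10_vp_water(T) for T in temps]

# If the plot is nearly a straight line, successive chord slopes are
# nearly constant; measure their relative spread.
slopes = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
mean_slope = sum(slopes) / len(slopes)
spread = (max(slopes) - min(slopes)) / abs(mean_slope)
```

For water the spread works out to well under a few percent over this range, consistent with the nearly straight line described above.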
Raoult's law gives an approximation to the vapor pressure of mixtures of liquids. It states that the activity (pressure or fugacity ) of a single-phase mixture is equal to the mole-fraction-weighted sum of the components' vapor pressures:

P_tot = Σ_i x_i P_i^sat,  with  y_i = x_i P_i^sat / P_tot

where P_tot is the mixture's vapor pressure, x_i is the mole fraction of component i in the liquid phase, y_i is the mole fraction of component i in the vapor phase, and P_i^sat is the vapor pressure of pure component i . Raoult's law is applicable only to non-electrolytes (uncharged species); it is most appropriate for non-polar molecules with only weak intermolecular attractions (such as London forces ).
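Raoult's law can be sketched as a few lines of code. The pure-component vapor pressures and mole fractions below are placeholder values for illustration:

```python
def raoult_mixture(x, p_sat):
    """Total vapor pressure and vapor-phase mole fractions from Raoult's law.

    x     : liquid-phase mole fractions (must sum to 1)
    p_sat : pure-component vapor pressures at the same temperature
    """
    assert abs(sum(x) - 1.0) < 1e-9, "mole fractions must sum to 1"
    partials = [xi * pi for xi, pi in zip(x, p_sat)]
    p_tot = sum(partials)              # P_tot = sum_i x_i * P_i_sat
    y = [p / p_tot for p in partials]  # y_i = x_i * P_i_sat / P_tot
    return p_tot, y

# Equimolar binary mixture with placeholder pure-component pressures (kPa):
p_tot, y = raoult_mixture([0.5, 0.5], [100.0, 20.0])
# p_tot = 60.0 kPa; the vapor is enriched in the more volatile component.
```

The usage line illustrates a general consequence of the law: the vapor-phase mole fraction of the more volatile component exceeds its liquid-phase mole fraction, which is the basis of distillation.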
Systems that have vapor pressures higher than indicated by the above formula are said to have positive deviations. Such a deviation suggests weaker intermolecular attraction than in the pure components, so that the molecules can be thought of as being "held in" the liquid phase less strongly than in the pure liquid. An example is the azeotrope of approximately 95% ethanol and water. Because the azeotrope's vapor pressure is higher than predicted by Raoult's law, it boils at a temperature below that of either pure component.
There are also systems with negative deviations that have vapor pressures that are lower than expected. Such a deviation is evidence for stronger intermolecular attraction between the constituents of the mixture than exists in the pure components. Thus, the molecules are "held in" the liquid more strongly when a second molecule is present. An example is a mixture of trichloromethane (chloroform) and 2-propanone (acetone), which boils above the boiling point of either pure component.
The negative and positive deviations can be used to determine thermodynamic activity coefficients of the components of mixtures.
Equilibrium vapor pressure can be defined as the pressure reached when a condensed phase is in equilibrium with its own vapor. In the case of an equilibrium solid, such as a crystal , this can be defined as the pressure when the rate of sublimation of a solid matches the rate of deposition of its vapor phase. For most solids this pressure is very low, but some notable exceptions are naphthalene , dry ice (the vapor pressure of dry ice is 5.73 MPa (831 psi, 56.5 atm) at 20 °C, which causes most sealed containers to rupture), and ice. All solid materials have a vapor pressure. However, due to their often extremely low values, measurement can be rather difficult. Typical techniques include the use of thermogravimetry and gas transpiration.
There are a number of methods for calculating the sublimation pressure (i.e., the vapor pressure) of a solid. One method is to estimate the sublimation pressure from extrapolated liquid vapor pressures (of the supercooled liquid), if the heat of fusion is known, by using this particular form of the Clausius–Clapeyron relation: [ 9 ]

ln P_sub = ln P_liq − (Δ fus H / R) (1/T − 1/T_m)

where P_sub is the sublimation pressure of the solid at temperature T < T_m, P_liq is the (extrapolated) vapor pressure of the supercooled liquid at the same temperature, Δ fus H is the heat of fusion, R is the gas constant, and T_m is the melting temperature.
This method assumes that the heat of fusion is temperature-independent, ignores additional transition temperatures between different solid phases, and it gives a fair estimation for temperatures not too far from the melting point. It also shows that the sublimation pressure is lower than the extrapolated liquid vapor pressure (Δ fus H > 0) and the difference grows with increased distance from the melting point.
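A minimal sketch of this estimate, assuming a temperature-independent heat of fusion as the method requires; the liquid vapor pressure, heat of fusion, and temperatures below are placeholder numbers, not data from this article:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def sublimation_pressure(p_liquid, dH_fus, T, T_m):
    """Estimate a solid's sublimation pressure from the extrapolated
    (supercooled) liquid vapor pressure via the Clausius-Clapeyron form
    ln(P_sub) = ln(P_liq) - (dH_fus / R) * (1/T - 1/T_m).

    Assumes dH_fus (J/mol) is temperature-independent; best near T_m.
    """
    return p_liquid * math.exp(-(dH_fus / R) * (1.0 / T - 1.0 / T_m))

# Placeholder numbers: 10 K below a 300 K melting point, dH_fus = 10 kJ/mol.
p_sub = sublimation_pressure(p_liquid=1000.0, dH_fus=10_000.0, T=290.0, T_m=300.0)
```

Consistent with the text, the estimate comes out below the extrapolated liquid value for T < T_m (since Δ fus H > 0), and the two coincide exactly at the melting point.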
Like all liquids, water boils when its vapor pressure reaches its surrounding pressure. In nature, the atmospheric pressure is lower at higher elevations, so water boils at a lower temperature. The boiling temperature of water for atmospheric pressures can be approximated by the Antoine equation :

log10 P = 8.07131 − 1730.63 / (233.426 + T_b)

or transformed into this temperature-explicit form:

T_b = 1730.63 / (8.07131 − log10 P) − 233.426

where the temperature T_b is the boiling point in degrees Celsius and the pressure P is in torr .
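The temperature-explicit form can be used to estimate how the boiling point drops with altitude. This is a sketch using illustrative Antoine coefficients for water (torr, degrees Celsius); the high-altitude pressure value is a placeholder roughly corresponding to about 3000 m:

```python
import math

def water_boiling_point(p_torr):
    """Approximate boiling point of water (deg C) at pressure p_torr using
    the temperature-explicit Antoine form T_b = B / (A - log10(P)) - C,
    with illustrative coefficients valid roughly between 1 and 100 deg C."""
    A, B, C = 8.07131, 1730.63, 233.426
    return B / (A - math.log10(p_torr)) - C

sea_level = water_boiling_point(760.0)  # one standard atmosphere
high_alt = water_boiling_point(474.0)   # placeholder pressure near 3000 m
```

As expected, the lower ambient pressure gives a boiling point well below 100 °C.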
Dühring's rule states that a linear relationship exists between the temperatures at which two solutions exert the same vapor pressure.
The following table is a list of a variety of substances ordered by increasing vapor pressure (in absolute units).
Several empirical methods exist to estimate the vapor pressure from molecular structure for organic molecules. Some examples are SIMPOL.1 method, [ 13 ] the method of Moller et al., [ 9 ] and EVAPORATION (Estimation of VApour Pressure of ORganics, Accounting for Temperature, Intramolecular, and Non-additivity effects). [ 14 ] [ 15 ]
In meteorology , the term vapor pressure means the partial pressure of water vapor in the atmosphere, even if it is not in equilibrium. [ 16 ] This differs from its meaning in other sciences. [ 16 ] According to the American Meteorological Society Glossary of Meteorology , saturation vapor pressure properly refers to the equilibrium vapor pressure of water above a flat surface of liquid water or solid ice, and is a function only of temperature and whether the condensed phase is liquid or solid. [ 17 ] Relative humidity is defined relative to saturation vapor pressure. [ 18 ] Equilibrium vapor pressure does not require the condensed phase to be a flat surface; it might consist of tiny droplets possibly containing solutes (impurities), such as a cloud . [ 19 ] [ 18 ] Equilibrium vapor pressure may differ significantly from saturation vapor pressure depending on the size of droplets and presence of other particles which act as cloud condensation nuclei . [ 19 ] [ 18 ]
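In meteorological practice, the saturation vapor pressure of water over a flat liquid surface, as a function of temperature alone, is commonly approximated by a Magnus-type formula. The coefficients below are one published fit and are an assumption for illustration, not taken from this article:

```python
import math

def saturation_vapor_pressure_hpa(T_celsius):
    """Magnus-type approximation for saturation vapor pressure over a flat
    liquid water surface, in hPa; coefficients are one published fit,
    usable very roughly from -45 to 60 deg C."""
    return 6.1094 * math.exp(17.625 * T_celsius / (T_celsius + 243.04))

def relative_humidity(e_hpa, T_celsius):
    """Relative humidity (%) = actual vapor pressure / saturation value."""
    return 100.0 * e_hpa / saturation_vapor_pressure_hpa(T_celsius)

e_s20 = saturation_vapor_pressure_hpa(20.0)  # saturation value at 20 deg C
rh = relative_humidity(11.7, 20.0)           # humidity for a given vapor pressure
```

This mirrors the definition in the text: relative humidity is the actual water-vapor partial pressure expressed relative to the saturation vapor pressure at the same temperature.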
However, these terms are used inconsistently, and some authors use "saturation vapor pressure" outside the narrow meaning given by the AMS Glossary . For example, a text on atmospheric convection states, "The Kelvin effect causes the saturation vapor pressure over the curved surface of the droplet to be greater than that over a flat water surface" (emphasis added). [ 20 ]
The still-current term saturation vapor pressure derives from the obsolete theory that water vapor dissolves into air, and that air at a given temperature can only hold a certain amount of water before becoming "saturated". [ 18 ] Actually, as stated by Dalton's law (known since 1802), the partial pressure of water vapor or any substance does not depend on air at all, and the relevant temperature is that of the liquid. [ 18 ] Nevertheless, the erroneous belief persists among the public and even meteorologists, aided by the misleading terms saturation pressure and supersaturation and the related definition of relative humidity . [ 18 ]
|
https://en.wikipedia.org/wiki/Vapor_pressure
|
In polymer chemistry , vapor phase osmometry ( VPO ), also known as vapor-pressure osmometry , is an experimental technique for the determination of a polymer 's number average molecular weight , M n . It works by taking advantage of the decrease in vapor pressure that occurs when solutes are added to pure solvent . This technique can be used for polymers with a molecular weight of up to 20,000 though accuracy is best for those below 10,000. [ 1 ] Although membrane osmometry is also based on the measurement of colligative properties , it has a lower bound of 25,000 for sample molecular weight that can be measured owing to problems with membrane permeation. [ 2 ]
A typical vapor phase osmometer consists of: (1) two thermistors, one with a polymer-solvent solution droplet adhered to it and another with a pure solvent droplet adhered to it; (2) a thermostatted chamber with an interior saturated with solvent vapor; (3) a liquid solvent vessel in the chamber; and (4) an electric circuit to measure the bridge output imbalance difference between the two thermistors. [ 3 ] The voltage difference is an accurate way of measuring the temperature difference between the two thermistors, which is a consequence of solvent vapor condensing on the solution droplet (the solution droplet has a lower vapor pressure than the solvent). [ 4 ]
The number average molecular weight for a polymer sample is given by the following equation: [ 5 ]

M_n = K / (ΔV/c)_{c→0}

where K is a calibration constant, ΔV is the bridge imbalance output voltage, c is the polymer-solvent solution concentration, and (ΔV/c)_{c→0} denotes the limit of ΔV/c as the concentration approaches zero.
It is necessary to calibrate a vapor phase osmometer and it is important to note that K is found for a particular solvent, operational temperature, and type of commercial apparatus. [ 6 ] A calibration can be carried out using a standard of known molecular weight. Some possible solvents for VPO include toluene, tetrahydrofuran , or chloroform. Once the experiment is performed, concentration and output voltage data can be graphed on a plot of (ΔV/c) versus c . The plot can be extrapolated to the y-axis in order to obtain the limit of (ΔV/c) as c approaches zero. The equation above can then be used to calculate K .
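The extrapolation described above (plot ΔV/c against c and take the intercept at c → 0) can be sketched as a least-squares line fit. The measurement values and calibration constant below are invented for illustration:

```python
def mn_from_vpo(concentrations, voltages, K):
    """Number average molecular weight from vapor phase osmometry data.

    Fits (dV/c) versus c with a least-squares line, extrapolates to the
    intercept at c -> 0, and applies M_n = K / intercept.
    """
    ratios = [v / c for v, c in zip(voltages, concentrations)]
    n = len(concentrations)
    mean_x = sum(concentrations) / n
    mean_y = sum(ratios) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(concentrations, ratios))
             / sum((x - mean_x) ** 2 for x in concentrations))
    intercept = mean_y - slope * mean_x  # limit of (dV/c) as c -> 0
    return K / intercept

# Invented data: dV/c = 2.1, 2.2, 2.3, 2.4, so the intercept at c -> 0 is 2.0.
c = [1.0, 2.0, 3.0, 4.0]
dv = [2.1, 4.4, 6.9, 9.6]
mn = mn_from_vpo(c, dv, K=10_000.0)  # 10000 / 2.0 = 5000
```

In practice K would first be determined the other way around, by running the same procedure on a standard of known molecular weight, as the text notes.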
Vapor phase osmometry is well suited for the analysis of oligomers and short polymers while higher polymers can be analyzed using other techniques such as membrane osmometry and light scattering. As of 2008, VPO faces competition from matrix-assisted laser desorption ionisation mass spectrometry (MALDI-MS), but VPO still has some advantages when fragmentation of samples for mass spectrometry may be problematic. [ 7 ]
|
https://en.wikipedia.org/wiki/Vapor_pressure_osmometry
|
David R. Lide (ed), CRC Handbook of Chemistry and Physics, 84th Edition . CRC Press. Boca Raton, Florida, 2003; Section 6, Fluid Properties; Vapor Pressure
David R. Lide (ed), CRC Handbook of Chemistry and Physics, 84th Edition , online version. CRC Press. Boca Raton, Florida, 2003; Section 4, Properties of the Elements and Inorganic Compounds; Vapor Pressure of the Metallic Elements
National Physical Laboratory, Kaye and Laby Tables of Physical and Chemical Constants ; Section 3.4.4, D. Ambrose, Vapour pressures from 0.2 to 101.325 kPa . Retrieved Jan 2006.
W.E. Forsythe (ed.), Smithsonian Physical Tables 9th ed. , online version (1954; Knovel 2003). Table 363, Evaporation of Metals
|
https://en.wikipedia.org/wiki/Vapor_pressures_of_the_elements_(data_page)
|