Yuriy Drozd (Ukrainian: Юрій Анатолійович Дрозд; born October 15, 1944) is a Ukrainian mathematician working primarily in algebra. He is a Corresponding Member of the National Academy of Sciences of Ukraine and head of the Department of Algebra and Topology at the Institute of Mathematics of the National Academy of Sciences of Ukraine.
Drozd graduated from Kyiv University in 1966 and completed postgraduate study at the Institute of Mathematics of the National Academy of Sciences of Ukraine in 1969. His PhD dissertation, On Some Questions of the Theory of Integral Representations (1970), was supervised by Igor Shafarevich. [1]
From 1969 to 2006 Drozd worked at the Faculty of Mechanics and Mathematics of Kyiv University (first as lecturer, then as associate professor and full professor). From 1980 to 1998 he headed the Department of Algebra and Mathematical Logic. Since 2006 he has been head of the Department of Algebra and Topology (until 2014, the Department of Algebra) of the Institute of Mathematics of the National Academy of Sciences of Ukraine. [2] His doctoral students include Volodymyr Mazorchuk. [1]
Since 2022, Drozd has taught at Harvard University . [ 3 ]
|
https://en.wikipedia.org/wiki/Yuriy_Drozd
|
Yury Georgievich Gogotsi [a] (born December 16, 1961) is a scientist in the field of materials chemistry. He has been a professor of Materials Science and Engineering and Nanotechnology at Drexel University, Philadelphia, United States, since 2000, where he is Distinguished University and Trustee Chair Professor of materials science and director of the A.J. Drexel Nanotechnology Institute (since 2014 the A.J. Drexel Nanomaterials Institute).
Gogotsi leads a research group that develops new nanostructured carbon materials (nanotubes, graphene, nanodiamonds, [1] carbide-derived carbon, onion-like carbon) and works on the hydrothermal synthesis of carbon nanostructures [2] and ceramics. He has also contributed to the development of effective water desalination and capacitive deionization techniques, electrical energy storage (batteries and supercapacitors), and applications of carbon nanomaterials in energy [3][4][5] and biomedicine.
Gogotsi's work (together with P. Simon) on the relations between the structure and capacitive performance of carbon nanomaterials led to scientific progress in the field and ultimately resulted in the development of a new generation of supercapacitors that facilitate the storage and utilization of electrical energy. Gogotsi produced several publications on the subject (Science, 2006; Science, 2010; Science, 2011, etc.), and the Simon/Gogotsi review published in Nature Materials in 2008 is currently the most cited article (Web of Science) in the field of electrochemical capacitors (supercapacitors). [citation needed]
Gogotsi was part of the team that discovered a new family of two-dimensional (2D) carbides and nitrides, MXenes, [6] which show exceptional potential for energy storage and other applications. He developed a general approach to the synthesis of porous and low-dimensional materials using selective extraction of elements or components, which can be used to produce carbide-derived porous carbons, carbon nanotubes, graphene, 2D carbides, etc. [7] He described new forms of carbon, such as conical [8] and polygonal crystals. [9] He also discovered a new metastable phase of silicon. His work on phase transformations under contact load contributed to the field of high-pressure surface science. He was the first to conduct hydrothermal synthesis of carbon nanotubes [10] and to show the anomalously slow movement of water in functionalized carbon nanotubes by in situ electron microscopy. [11] This study ultimately led to the development of nanotube-tipped single-cell probes. [12]
Gogotsi is the co-author of two books and editor of 14 books, [13][14][15] has more than 100 publications in conference proceedings and more than 800 articles in peer-reviewed journals, is credited on more than 80 European and US patents (more than 30 licensed to industry), and has given more than 250 plenary, keynote and invited lectures and seminars. He has been cited over 100,000 times and currently has an h-index of 175 (Google Scholar) / 152 (Web of Science).
In Stanford's list of the top 2% of researchers in the world across all scientific disciplines, [16] Yury Gogotsi was ranked #53 in 2019 among all living and deceased scientists.
In 1984 Yury Gogotsi received his Master of Science (M.S.) degree in metallurgy from the Kyiv Polytechnic Institute, Department of High-Temperature Materials and Powder Metallurgy.
In 1986 he received his Candidate of Science (Ph.D.) degree in Physical Chemistry from the Kyiv Polytechnic Institute (advisor: Prof. V. A. Lavrenko), making him at that time the youngest Ph.D. in chemistry in Ukraine.
In 1995 he received a Doctor of Science (D.Sc.) degree in Materials Engineering from the National Academy of Sciences of Ukraine.
Drexel University College of Engineering, Philadelphia, United States
- 05/2017–present — Charles T. and Ruth M. Bach Endowed Professor
- 2010–present — Distinguished University Professor
- 2008–present — Trustee Chair Professor of Materials Science and Engineering
- 2003–present — Founder and Director of the A.J. Drexel Nanotechnology Institute (since 2014 the A.J. Drexel Nanomaterials Institute)
- 2002–2007 — Associate Dean of the College of Engineering for Special Projects
- 2002–present — Professor of Chemistry (courtesy appointment)
- 2001–present — Professor of Mechanical Engineering and Mechanics (courtesy appointment)
- 2000–present — Professor of Materials Science and Engineering

University of Illinois at Chicago, Chicago, United States
- 2001–2003 — Adjunct Professor of Mechanical Engineering
- 1999–2000 — Associate Professor of Mechanical Engineering with tenure
- 1999–2000 — Assistant Director, UIC Research Resources Center
- 1996–1999 — Assistant Professor of Mechanical Engineering

University of Tübingen, Germany
- 1995–1996 — Research Scientist

University of Oslo, Norway
- 1993–1995 — Research Scientist at the Center for Materials Research, NATO/Norwegian Research Council Fellowship

Tokyo Institute of Technology, Japan
- 1992–1993 — Research Scientist, Japan Society for the Promotion of Science (JSPS) Fellowship

University of Karlsruhe, Germany
- 1990–1992 — Research Scientist, Alexander von Humboldt Fellowship

Institute for Materials Science, National Academy of Sciences, Ukraine
- 1986–1990 — Research Scientist
Gogotsi has received many awards and recognitions for his research accomplishments, some of which include:
- 2021 — MRS-Serbia Award for a Lasting and Outstanding Contribution to Materials Science and Engineering [17]
- 2021 — Manuel Cardona Lecture, Institut Català de Nanociència i Nanotecnologia [18]
- 2021 — Honorary Doctorate, Sumy State University, Ukraine
- 2021 — ACS Award in the Chemistry of Materials [19]
- 2021 — RASA-America Honorary Life Membership
- 2020 — ACS Philadelphia Section Award [20]
- 2020 — George Gamow Award from the Russian-American Science Association (RASA) [21]
- 2020 — International Ceramics Prize, the highest honor conferred by the World Academy of Ceramics
- 2019 — Fellow, European Academy of Sciences [22]
- 2019 — Sosman Lecture, American Ceramic Society
- 2018 — Clarivate Citation Laureate in physics (Web of Science/Clarivate), for work deemed to be of Nobel stature [23]
- 2018 — Friendship Award from the Chinese government (the highest award for foreigners in P.R. China)
- 2018 — Rudolf Zahradník Lecture, Regional Centre of Advanced Technologies and Materials, University of Olomouc, Czech Republic
- 2018 — Honorary Doctorate, National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute"
- 2018 — Fellow, International Society of Electrochemistry
- 2018 — Tis Lahiri Memorial Lecture, Vanderbilt University
- 2017 — Energy Storage Materials Award (Elsevier) [24]
- 2017 — Honorary Doctorate, Frantsevich Institute for Problems of Materials Science, National Academy of Sciences of Ukraine [25]
- 2016 — Honorary professorship, Jilin University
- 2016 — Honorary professorship, Beijing University of Chemical Technology
- 2016 — Nano Energy Award [26]
- 2015 — Fellow of the Royal Society of Chemistry (FRSC)
- 2015 — Laureate of the RUSNANOPRIZE International Award [27]
- 2014 — Honorary Doctor of Science (Doctor Honoris Causa), Paul Sabatier University (Université Toulouse III Paul Sabatier)
- 2014–2020 — Highly Cited Researcher (Thomson Reuters) in Materials Science and Chemistry
- 2014 — Fred Kavli Distinguished Lectureship, Materials Research Society Conference
- 2013 — Ross Coffin Purdy Award, American Ceramic Society
- 2012 — European Carbon Association Award
- 2012 — Fellow, Materials Research Society
- 2011 — NANOSMAT Prize at the 6th NANOSMAT Conference
- 2009 — Fellow, American Association for the Advancement of Science (AAAS)
- 2008 — Fellow, The Electrochemical Society
- 2006 — NANO 50 Awards from NASA Tech Briefs Magazine in the Innovator and Technology categories
- 2005 — Fellow of the American Ceramic Society and Fellow of the World Innovation Foundation
- 2004 — Academician, World Academy of Ceramics
- 2003 — R&D 100 Award from R&D magazine (received again in 2009)
- 2003 — Roland B. Snow Award from the American Ceramic Society (received again in 2005, 2007, 2009, 2012)
- 2002 — S. Somiya Award from the International Union of Materials Research Societies (IUMRS)
- 2002 — G.C. Kuczynski Prize from the International Institute for the Science of Sintering
- 2002 — Research Achievement Award from Drexel University (received again in 2009)
- 2001 — repeatedly included in Who's Who in the World, Who's Who in America, Who's Who Among America's Teachers, Who's Who in Science and Engineering, Who's Who in Engineering Education, International Who's Who of Professionals
- 1993 — I.N. Frantsevich Prize from the Ukrainian Academy of Sciences
|
https://en.wikipedia.org/wiki/Yury_Gogotsi
|
Yusif Haydar oglu Mammadaliyev (Azerbaijani: Yusif Heydər oğlu Məmmədəliyev; December 31, 1905 – December 15, 1961) was an Azerbaijani and Soviet chemist. He was a Doctor of Chemistry and an academician of the National Academy of Sciences of the Azerbaijan SSR, and served as the academy's president. [1]
He was born on December 31, 1905, in Ordubad .
In 1923, he entered the higher pedagogical institute of Baku. After graduating in 1926, he taught at a secondary school for three years. In 1929, he entered the second year of the chemistry faculty of MSU, from which he graduated in 1932. He was a student of Nikolay Zelinsky and Aleksei Balandin and one of the first graduates of the organic chemistry department's laboratory, with a speciality in "organocatalysis". After graduating from MSU he worked in Moscow at chemical plant No. 1, and was then transferred to Azerbaijan, where he first headed the department of organic chemistry of the agricultural college of Azerbaijan. He then worked (1933–1945) at the Azerbaijan Research Institute of Oil, where he became head of a laboratory. His work was dedicated to scientific problems of petrochemistry and organocatalysis and was closely connected with the development of the domestic oil-refining and petrochemical industry. Some of his developments became the basis of new industrial processes.
Starting in 1934, he carried out extensive teaching work at Azerbaijan University named after S.M. Kirov, successively holding the positions of associate professor, professor, head of department and rector (1954–1958). In 1933, the degree of Candidate of Chemistry was conferred on Yusif Mammadaliyev without the defence of a dissertation.
In 1942, he became a Doctor of Chemistry and in 1943, a professor; in 1945, he became an academician of the Academy of Sciences of the Azerbaijan SSR (from the academy's establishment). He was the director of the Oil Academy of the Azerbaijan SSR. In 1946, he was appointed to the Ministry of Oil Industry, where he became chairman of the ministry's scientific-technical council. In 1951–1954, he was academician-secretary of the physics, chemistry and oil departments of the Academy of Sciences of the Azerbaijan SSR, and in 1954–1958, rector of Azerbaijan State University.
In 1947–1951 and 1958–1961 Mammadaliyev was elected president of the Academy of Sciences of the Azerbaijan SSR. [2] The Institute of Petrochemical Processes was established in Baku on Mammadaliyev's initiative.
In 1958, Mammadaliyev was elected a corresponding member of the Academy of Sciences of the USSR. [3]
Mammadaliyev died in 1961.
The main scientific works of Yusif Mammadaliyev are related to the catalytic processing of oil and fuel oil. He is the founder of petrochemistry in Azerbaijan. He proposed new methods for the chlorination and bromination of various hydrocarbons with the participation of catalysts, and in particular showed ways of obtaining carbon tetrachloride, chloromethane, dichloromethane and other valuable products by the chlorination of methane, initially over a stationary catalyst bed and later in a fluidized bed. His research on the catalytic alkylation of aromatic, paraffinic and naphthenic hydrocarbons with unsaturated hydrocarbons enabled the synthesis of components of aviation fuels on an industrial scale. Major works were carried out on the catalytic aromatization of the gasoline fraction of Baku oil, the preparation of washing agents and organosilicon compounds, the production of plastics from pyrolysis products, and the analysis of the mechanism of action of Naftalan oil. He repeatedly represented Azerbaijan at congresses, conventions and symposiums held in the USSR, United States, Italy, France, England, Moldavia, Poland and other countries.
|
https://en.wikipedia.org/wiki/Yusif_Mammadaliyev
|
Yves Jeannin is a French chemist born on 11 April 1931 in Boulogne-sur-Seine. He is the son of Raymond Jeannin, an architect, and Suzanne Armynot du Chatelet. He married Suzanne Bellé in 1956 and has two children, Philippe and Sylvie, born in 1961 and 1969.
He is a corresponding member of the French Academy of Sciences [1] and Professor Emeritus at the Pierre and Marie Curie University.
Yves Jeannin studied at the École Nationale Supérieure de Chimie de Paris (Engineer in 1954, graduating first in his class). His first position was at IRSID, which included a sixteen-month stay in London at the Royal School of Mines with Prof. F.D. Richardson, working on the thermodynamics of the oxidation of iron-chromium alloys. He then prepared a PhD thesis in Physical Sciences (1962) under the supervision of Prof. J. Bénard, on the crystal chemistry of titanium sulphides. In 1963, he spent a period in the United States as a Post-doctoral Research Associate of the United States Atomic Energy Commission, at Argonne National Laboratory and Iowa State University.
He became a lecturer at the Paul Sabatier University of Toulouse in 1964, then Professor at the Pierre and Marie Curie University, Paris (UPMC), in 1974, where he taught inorganic chemistry in the preparation course for the medical school entrance examination at the Pitié-Salpêtrière Hospital, in the Master of Chemistry at UPMC, in the Master of Chemistry at the École Normale Supérieure, in the Advanced Study Diplomas (DEA) in inorganic chemistry and in crystallography, and in the preparation for the agrégation in chemistry at the École Normale Supérieure.
Jeannin became head of the Chemistry group of the Lagarrigue Commission, in charge of rebuilding the chemistry curricula of high schools (1976–1980). He was a member of the jury for the agrégation examination in chemistry (1971–1974), of the jury for admission to the École Normale Supérieure (9 years), and of the jury for admission to the École Polytechnique (2 years). At the request of the Ministry, he took part in setting up the internal agrégation in Physical Sciences (president of the jury, 1985–1988). He was a member of the commission proposing General Inspectors to the Minister and of the recruitment jury of the Engineers of the Corps of Mines. He also served for four years as chargé de mission at the French Ministry of Research.
Jeannin has also been a member of the University's Scientific Council, President of the Research Commission of the UFR of Chemistry, and a member of the Academic Council of the École Normale Supérieure.
Jeannin's research centres on the chemistry of transition metals and on the synthesis and structure of the species they form: first in solid-state chemistry, with the study of the non-stoichiometry of binary and ternary chalcogenides of titanium and zirconium, [2][3] then the iron complexes formed by solvation in non-aqueous media, [4][5] the synthesis and X-ray study of polymetallic organometallic species, and finally the chemistry of polyoxotungstates. In the latter case, it is essentially the compounds containing the XW9 brick that attracted his attention. [6][7][8] His laboratory made a major contribution to the development of their synthesis and to their structural study by X-ray diffraction and tungsten NMR, holding the record for the largest known polytungstate. [9] In organometallic chemistry, he studied the action of aminoalkynes and thioalkynes on iron carbonyl and ruthenium carbonyl; [10][11][12][13][14] cluster compounds of up to five iron atoms were isolated. [15] He has also been interested in the coordination chemistry of copper [16] and molybdenum. [17] To support this research he created a centre for structure determination by X-ray diffraction, which was made available to French and foreign chemists and to external laboratories. [18][19][20]
This research has resulted in more than 300 publications. [ 21 ] [ 22 ]
|
https://en.wikipedia.org/wiki/Yves_Jeannin
|
In graph theory , ΔY- and YΔ-transformations (also written delta-wye and wye-delta ) are a pair of operations on graphs . A ΔY-transformation replaces a triangle by a vertex of degree three; and conversely, a YΔ-transformation replaces a vertex of degree three by a triangle. The names for the operations derive from the shapes of the involved subgraphs, which look respectively like the letter Y and the Greek capital letter Δ .
A YΔ-transformation may create parallel edges , even if applied to a simple graph . For this reason ΔY- and YΔ-transformations are most naturally considered as operations on multigraphs . On multigraphs both operations preserve the edge count and are exact inverses of each other. In the context of simple graphs it is common to combine a YΔ-transformation with a subsequent normalization step that reduces parallel edges to a single edge. This may no longer preserve the number of edges, nor be exactly reversible via a ΔY-transformation.
Let G be a graph (possibly a multigraph).
Suppose G contains a triangle Δ with vertices x1, x2, x3 and edges e12, e23, e31.
A ΔY-transformation of G at Δ deletes the edges e12, e23, e31 and adds a new vertex y adjacent to each of x1, x2, x3.
Conversely, if y is a vertex of degree three with neighbors x1, x2, x3, then a YΔ-transformation of G at y deletes y and adds three new edges e12, e23, e31, where eij connects xi and xj.
If the resulting graph should be a simple graph, then any resulting parallel edges are to be replaced by a single edge.
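The two operations can be sketched in Python (an illustration, not part of the article). A multigraph is encoded here as a Counter of two-element frozensets, so parallel edges are tracked by multiplicity; loops are ignored for simplicity.

```python
import itertools
from collections import Counter

def delta_to_y(edges, tri, y):
    """ΔY-transformation: delete the three edges of triangle tri = (x1, x2, x3)
    and join the fresh vertex y to x1, x2 and x3.
    edges is a Counter mapping frozenset({u, v}) -> multiplicity."""
    out = Counter(edges)
    for a, b in itertools.combinations(tri, 2):
        k = frozenset((a, b))
        if out[k] == 0:
            raise ValueError("tri is not a triangle of the graph")
        out[k] -= 1
        if out[k] == 0:
            del out[k]
    for x in tri:
        out[frozenset((y, x))] += 1
    return out

def y_to_delta(edges, y):
    """YΔ-transformation: delete the degree-3 vertex y and connect its three
    neighbours pairwise (this may create parallel edges)."""
    nbrs = [v for k, m in edges.items() if y in k for v in k - {y} for _ in range(m)]
    if len(nbrs) != 3:
        raise ValueError("y must have degree three")
    out = Counter({k: m for k, m in edges.items() if y not in k})
    for a, b in itertools.combinations(nbrs, 2):
        out[frozenset((a, b))] += 1
    return out

# Round trip on K4: a ΔY at triangle (1, 2, 3) gives K_{2,3}; undoing it at the
# new vertex restores K4. Both steps preserve the edge count (6 edges).
k4 = Counter({frozenset(p): 1 for p in itertools.combinations((1, 2, 3, 4), 2)})
g = delta_to_y(k4, (1, 2, 3), 5)
assert sum(g.values()) == 6 and y_to_delta(g, 5) == k4
```

The round trip illustrates the claim above: on multigraphs the two operations preserve the edge count and are exact inverses of each other.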
ΔY- and YΔ-transformations are a tool in both pure graph theory and its applications.
Both operations preserve a number of natural topological properties of graphs.
For example, applying a YΔ-transformation to a 3-vertex of a planar graph , or a ΔY-transformation to a triangular face of a planar graph, results again in a planar graph. [ 1 ] This was used in the original proof of Steinitz's theorem , showing that every 3-connected planar graph is the edge graph of a polyhedron .
Applying ΔY- and YΔ-transformations to a linkless graph results again in a linkless graph. [ 2 ] This fact is used to compactly describe the forbidden minors of the associated graph classes as ΔY-families generated from a small number of graphs (see the section on ΔY-families below).
A particularly relevant application exists in electrical engineering in the study of three-phase power systems (see Y-Δ transform (electrical engineering) ). [ 3 ] In this context they are also known as star-triangle transformations and are a special case of star-mesh transformations .
The ΔY-family generated by a graph G is the smallest family of graphs that contains G and is closed under YΔ- and ΔY-transformations. Equivalently, it can be constructed from G by recursively applying these transformations until no new graph is generated. If G is a finite graph, it generates a finite ΔY-family, all of whose members have the same edge count.
The ΔY-family generated by several graphs is the smallest family that contains all these graphs and is closed under YΔ- and ΔY-transformations.
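As a sketch (not from the article), the closure can be computed for small graphs by a search over isomorphism classes. The multigraph encoding (a Counter of frozensets) and the brute-force canonical form, feasible only for a handful of vertices, are assumptions of this illustration.

```python
import itertools
from collections import Counter

def delta_to_y(edges, tri, y):
    # ΔY: remove one copy of each triangle edge, join new vertex y to the corners
    out = Counter(edges)
    for a, b in itertools.combinations(tri, 2):
        out[frozenset((a, b))] -= 1
    out += Counter()                       # drop zero counts
    for x in tri:
        out[frozenset((y, x))] += 1
    return out

def y_to_delta(edges, y):
    # YΔ: remove degree-3 vertex y, connect its neighbours pairwise
    nbrs = [v for k, m in edges.items() if y in k for v in k - {y} for _ in range(m)]
    out = Counter({k: m for k, m in edges.items() if y not in k})
    for a, b in itertools.combinations(nbrs, 2):
        out[frozenset((a, b))] += 1
    return out

def vertices(edges):
    return sorted({v for k in edges for v in k})

def canon(edges):
    # smallest relabelled edge list over all vertex permutations (tiny graphs only)
    vs = vertices(edges)
    return min(
        tuple(sorted(tuple(sorted(p[v] for v in k)) for k in edges.elements()))
        for perm in itertools.permutations(range(len(vs)))
        for p in [dict(zip(vs, perm))]
    )

def dy_family(start):
    # closure under ΔY- and YΔ-transformations, up to isomorphism
    seen, queue, family = {canon(start)}, [start], [start]
    while queue:
        g = queue.pop()
        fresh = max(vertices(g)) + 1
        successors = [
            delta_to_y(g, t, fresh)
            for t in itertools.combinations(vertices(g), 3)
            if all(frozenset(p) in g for p in itertools.combinations(t, 2))
        ] + [
            y_to_delta(g, y)
            for y in vertices(g)
            if sum(m for k, m in g.items() if y in k) == 3
        ]
        for h in successors:
            c = canon(h)
            if c not in seen:
                seen.add(c)
                queue.append(h)
                family.append(h)
    return family

k4 = Counter({frozenset(p): 1 for p in itertools.combinations((1, 2, 3, 4), 2)})
fam = dy_family(k4)
assert len(fam) == 3 and all(sum(g.values()) == 6 for g in fam)
```

On multigraphs, K4 generates a three-member family: K4 itself, K2,3 (via a ΔY-transformation), and the triangle with all edges doubled (via a YΔ-transformation); all three have six edges, matching the statement that every member has the same edge count.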
Some notable families are generated in this way; for example, the Petersen family (the forbidden minors for linkless embeddability) is the ΔY-family generated by the complete graph K6.
A graph is YΔY-reducible if it can be reduced to a single vertex by a sequence of ΔY- or YΔ-transformations and the following normalization steps: the removal of self-loops and of vertices of degree zero or one, the replacement of parallel edges by a single edge, and the suppression of vertices of degree two (replacing such a vertex and its two incident edges by a single edge).
The YΔY-reducible graphs form a minor-closed family and therefore have a forbidden minor characterization (by the Robertson–Seymour theorem). The graphs of the Petersen family constitute some (but not all) of the excluded minors. [5] In fact, more than 68 billion excluded minors are already known. [6]
The class of YΔY-reducible graphs lies between the classes of planar graphs and linkless graphs: each planar graph is YΔY-reducible, while each YΔY-reducible graph is linkless. Both inclusions are strict: K5 is not planar but is YΔY-reducible, while the graph in the figure is not YΔY-reducible but is linkless. [5]
|
https://en.wikipedia.org/wiki/YΔ-_and_ΔY-transformation
|
Z++ (pronounced zed , or zee in American pronunciation , plus plus ) is an object-oriented extension to the Z specification language .
Z++ allows for the definition of classes, and for relating classes through inheritance, association, or aggregation. The primary construct of Z++ is the class; a Z++ class consists of a number of optional clauses.
|
https://en.wikipedia.org/wiki/Z++
|
Z-Cote is a commercial zinc oxide line [1] manufactured and owned by BASF. [2] Due to its photo-protective properties it is commonly used in personal care products and sunscreens, and it is available in nano, non-nano, coated and uncoated forms. [3] Z-Cote is a form of zinc oxide, which is Generally Recognized As Safe and Effective (GRASE) by the FDA as a nutrient, [4] cosmetic colour additive, [5][6] skin-protection active ingredient [7] and in other OTC products. [8] Manufactured zinc oxide such as Z-Cote is only recognised as GRASE by the FDA when it is compliant with the Good Manufacturing Practice (GMP) standard. The original SunSmart and Submicro Encapsulation Technologies Z-Cote patent, filed in 1991 for UV skin protection, expired in 2015. [9]
Z-Cote was acquired from SunSmart in 1999 by BASF. [ 10 ] [ 11 ] It has been used in sunscreen formulations since at least 1993 in microfine form. [ 12 ] The initial 1991 Z-Cote patent placed emphasis on broad-spectrum protection, especially UVA. [ 9 ] Only in 1993 did the FDA approve an alternative petrochemical sunscreen ingredient, avobenzone, that also provided protection against UVA. [ 13 ] [ 14 ]
Z-Cote was studied to evaluate hypothetical agricultural impacts if it were to contaminate irrigated water. In the study it was found that Z-Cote had a negligible impact on bean ( Phaseolus vulgaris ) pod production and increased root length and the concentration of more nutritional elements. [ 15 ]
Zinc oxide is compliant with Hawaii Act 104, which bans non-reef-safe petrochemical sunscreen ingredients. [16]
|
https://en.wikipedia.org/wiki/Z-Cote
|
Z-FA-FMK, an abbreviation of benzyloxycarbonyl-phenylalanyl-alanyl-fluoromethyl ketone, is a very potent irreversible inhibitor of cysteine proteases, including cathepsins B, L, and S, cruzain, and papain. It also selectively inhibits the effector caspases 2, 3, 6, and 7, but not caspases 8 and 10. [1] This compound has been shown to block the production of IL1-α, IL1-β, and TNF-α induced by LPS in macrophages by inhibiting NF-κB pathways.
|
https://en.wikipedia.org/wiki/Z-FA-FMK
|
Z-HIT, also denoted as ZHIT or the Z-HIT relationship, is a bidirectional mathematical transformation connecting the two parts of a complex function, i.e. its modulus and its phase. The Z-HIT relations are somewhat similar to the Kramers–Kronig relations, in which the real part can be computed from the imaginary part (or vice versa). In contrast, in the Z-HIT the impedance modulus is computed from the course of the phase angle (or vice versa). The main practical advantage of the Z-HIT relationship over the Kramers–Kronig relations is that the Z-HIT integration limits do not require any extrapolation: an integration over the experimentally available frequency range already provides accurate data.
More specifically, the angular frequency (ω) boundaries for computing one component of the complex function from the other using the Kramers-Kronig relations are ω = 0 and ω = ∞; these boundaries require extrapolation of the measured impedance spectra. With the Z-HIT, however, the course of the impedance modulus can be computed from the course of the phase shift within the measured frequency range, without extrapolation. This avoids complications arising from the fact that impedance spectra can only be measured over a limited frequency range. The Z-HIT algorithm therefore allows the stationarity of the measured test object to be verified, as well as the calculation of impedance values from the phase data. The latter property becomes important when drift effects are present in the impedance spectra, which have to be detected or even removed when analysing and/or interpreting the spectra.
Z-HIT relations find use in dielectric spectroscopy and in electrochemical impedance spectroscopy (EIS).
An important application of Z-HIT is the examination of experimental impedance spectra for artifacts . The examination of EIS series measurements is often difficult due to the tendency of examined objects to undergo changes during the measurement. This may occur in many standard EIS applications such as the evaluation of fuel cells or batteries during discharge. Further examples include the investigation of light-sensitive systems under illumination (e.g. Photoelectrochemistry ) or the analysis of water uptake of lacquers on metal surfaces (e.g. corrosion -protection).
A descriptive example of an unsteady system is a lithium-ion battery. During cycling or discharge, the amount of charge in the battery changes over time. The change in charge is coupled with a chemical redox reaction, which translates into a change in the concentrations of the substances involved. This violates the principles of stationarity and causality, which are prerequisites for proper EIS measurements. In theory, this would exclude drift-affected samples from valid evaluation. Using the Z-HIT algorithm, these and similar artifacts can be recognized, and spectra obeying causality can even be reconstructed, which are consistent with the Kramers–Kronig relations and thereby valid for analysis.
Z-HIT is a special case of the Hilbert transform and can be derived for one-port systems via the Kramers–Kronig relations. The frequency-dependent relationship between impedance and phase angle can be observed in the Bode plot of an impedance spectrum. Equation (1) is obtained as a general solution of the correlation between impedance modulus and phase shift: [1][2]

$$\ln|Z(\omega_O)| - \ln|Z(0)| = \frac{2}{\pi}\int_{\omega_S}^{\omega_O}\varphi(\omega)\,d\ln\omega \;+\; \sum_{k=1,3,5,\ldots}\gamma_k\,\frac{d^k\varphi(\omega_O)}{(d\ln\omega)^k} \qquad (1)$$
Equation (1) indicates that the logarithm of the impedance modulus, ln|Z(ω_O)|, at a specific frequency ω_O can be calculated up to a constant ln|Z(0)| by integrating the phase shift φ(ω) up to the frequency point of interest ω_O, where the starting value ω_S of the integral can be freely chosen. As an additional contribution, the odd-order derivatives of the phase shift at the point ω_O have to be added, weighted with the factors γ_k.
The factors γ_k can be calculated according to equation (2), where ζ(k+1) denotes the Riemann ζ-function:

$$\gamma_k = (-1)^k\cdot\frac{2}{\pi}\cdot\frac{1}{2^k}\cdot\zeta(k+1), \qquad k = 1, 3, 5, 7, \ldots \qquad (2)$$
The practically applied Z-HIT approximation is obtained from equation (1) by keeping only the first derivative of the phase shift and neglecting the higher derivatives, where C represents a constant:

$$\ln|Z(\omega_O)| = \frac{2}{\pi}\int_{\omega_S}^{\omega_O}\varphi(\omega)\,d\ln\omega + \gamma_1\,\frac{d\varphi(\omega_O)}{d\ln\omega} + C \qquad (3)$$

From equation (2), γ₁ = −π/6.
The free choice of the integration boundaries in the Z-HIT algorithm is a fundamental difference from the Kramers-Kronig relations: in the Z-HIT the integration boundaries are ω = ω_S and ω = ω_O.
The greatest advantage of the Z-HIT results from the fact that both integration boundaries can be chosen within the measured spectrum, so that no extrapolation to the frequencies 0 and ∞ is required, as it is with the Kramers-Kronig relations.
The practical implementation of the Z-HIT approximation is shown schematically in Figure 1. A continuous curve (spline) for each of the two independent measured quantities (impedance and phase) is created from the measured data points by smoothing (part 1 in Figure 1). With the help of the spline for the phase shift, values for the impedance are then calculated. First, the integral of the phase shift is calculated up to the corresponding frequency ω_O, where (if suitable) the highest measured frequency is selected as the starting point ω_S for the integration (part 2 in Figure 1). From the spline of the phase shift, its slope at ω_O can also be calculated (part 3 in Figure 1). This yields a reconstructed impedance curve which is (in the ideal case) only shifted in parallel with respect to the measured curve. There are several ways to determine the constant C in the Z-HIT equation (part 4 in Figure 1); one of them shifts the reconstructed impedance in parallel within a frequency range not affected by artifacts (see notes), using a linear regression procedure. Comparing the resulting reconstructed impedance curve to the measured data (or to the spline of the impedance), artifacts can easily be detected. These are usually located in the high-frequency range (caused by induction or mutual induction, especially when low-impedance systems are investigated) or in the low-frequency range (caused by the change of the system during the measurement, i.e. drift).
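The reconstruction steps above can be sketched numerically. The following is a minimal illustration, not the original implementation: it integrates a piecewise-linear phase curve by the trapezoidal rule instead of a spline, adds the γ₁-weighted slope term of equation (3), and fixes the constant C by anchoring the curve at the highest measured frequency; all function and variable names are the author's own, for illustration only.

```python
import math

GAMMA_1 = -math.pi / 6  # gamma_1 = (-1) * (2/pi) * (1/2) * zeta(2) = -pi/6

def zhit_reconstruct(freqs, phases_rad, ln_z_ref):
    """Reconstruct ln|Z| at each frequency from the phase curve.

    freqs      : frequencies in Hz, sorted ascending
    phases_rad : phase shift phi(omega) in radians at each frequency
    ln_z_ref   : measured ln|Z| at the highest frequency, used to fix
                 the constant C (one of several possible choices)
    """
    lnw = [math.log(2 * math.pi * f) for f in freqs]
    n = len(freqs)
    ln_z = [0.0] * n
    for i in range(n):
        # trapezoidal integral of phi d(ln w) from omega_S (highest
        # measured frequency) down to omega_i
        integral = 0.0
        for j in range(i, n - 1):
            integral -= 0.5 * (phases_rad[j] + phases_rad[j + 1]) * (lnw[j + 1] - lnw[j])
        # slope d(phi)/d(ln w) at omega_i by finite differences
        if i == 0:
            slope = (phases_rad[1] - phases_rad[0]) / (lnw[1] - lnw[0])
        elif i == n - 1:
            slope = (phases_rad[-1] - phases_rad[-2]) / (lnw[-1] - lnw[-2])
        else:
            slope = (phases_rad[i + 1] - phases_rad[i - 1]) / (lnw[i + 1] - lnw[i - 1])
        ln_z[i] = (2 / math.pi) * integral + GAMMA_1 * slope
    # parallel shift so the reconstruction matches the reference point
    c = ln_z_ref - ln_z[-1]
    return [v + c for v in ln_z]
```

For a pure resistor (zero phase shift everywhere) the reconstruction returns a flat ln|Z| curve, as it should.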
The measurement time required for a single impedance measurement point strongly depends on the frequency of interest. While frequencies above about 1 Hz can be measured within seconds, the measurement time increases significantly in the lower frequency range. Although the exact duration for measuring a complete impedance spectrum depends on the measuring device as well as on internal settings, the following measurement times can be considered as rules of thumb when measuring the frequency measurement points sequentially, with the upper frequency assumed as 100 kHz or 1 MHz:
Measurements down to or below 0.01 Hz are typically associated with measurement times in the range of several hours. A spectrum can therefore be roughly divided into three sub-ranges with regard to the occurrence of artifacts: in the high-frequency domain (approx. > 100 to 1000 Hz), induction or mutual induction can dominate. In the low-frequency region (< 1 Hz), drift can occur due to noticeable change of the system. The range between about 1 Hz and 1000 Hz is usually not affected by high- or low-frequency artifacts; however, the mains frequency (50/60 Hz) may come into play as a distorting artifact in this region.
In addition to the reconstruction of the impedance from the phase shift, the reverse approach is also possible. [ 2 ] However, the procedure presented here possesses several advantages:
Figure 3 shows an impedance spectrum of a measurement series of a painted steel sample during water uptake [ 6 ] (upper part in Figure 3). The symbols in the diagram represent the interpolation points (nodes) of the measurement, while the solid lines represent the theoretical values simulated according to an appropriate model. The interpolation points for the impedance were obtained by the Z-HIT reconstruction of the phase shift. The bottom part of Figure 3 depicts the normalized error (Z ZHIT − Z smooth )/Z ZHIT ·100 of the impedance. For the error calculation, two different procedures are used to determine the "extrapolated impedance values":
The simulation according to the appropriate model is performed using the two different impedance curves. The corresponding residuals are calculated and depicted in the bottom part of the diagram in Figure (3).
Note: Error patterns such as the one shown in the magenta bottom diagram of Figure 3 may motivate extending an existing model by additional elements to minimize the fitting error. However, this is not possible in every case. Drift mainly influences the low-frequency part of an impedance spectrum, because the system changes during the measurement. The drift in the spectrum of Figure 3 is caused by water penetrating into the pores of the lacquer, which reduces the impedance (resistance) of the coating. The system therefore behaves as if, at each low-frequency measurement point, the resistance of the coating were replaced by a further, smaller resistance due to the water uptake. However, there is no impedance element that exhibits such behavior. Therefore, any extension of the model would only "smear" the error over a wider frequency range without reducing the error itself. Only removing the drift by reconstructing the impedance using Z-HIT leads to significantly better agreement between measurement and model.
Figure 4 shows a Bode plot of an impedance series measurement performed on a fuel cell in which the hydrogen of the fuel gas was deliberately poisoned by the addition of carbon monoxide. [ 7 ] Due to the poisoning, active centers of the platinum catalyst are blocked, which severely impairs the performance of the fuel cell. The blocking of the catalyst depends on the potential, resulting in alternating sorption and desorption of the carbon monoxide on the catalyst surface within the cell. This cyclical change of the active catalyst surface translates into pseudo-inductive behavior, which can be observed in the impedance spectrum of Figure 4 at low frequencies (< 3 Hz). The impedance curve reconstructed by Z-HIT is represented by the purple line, while the originally measured values are represented by the blue circles. The deviation in the low-frequency part of the measurement can be clearly observed. Evaluation of the spectra shows [ 7 ] significantly better agreement between model and measurement if the reconstructed Z-HIT impedances are used instead of the original data.
Original work:
https://en.wikipedia.org/wiki/Z-HIT
In covalent bond classification , a Z-type ligand refers to a ligand that accepts two electrons from the metal center. [ 1 ] This is in contrast to X-type ligands, which form a bond with the ligand and metal center each donating one electron, and L-type ligands, which form a bond with the ligand donating two electrons. Typically, these Z-type ligands are Lewis acids , or electron acceptors. [ 2 ] They are also known as zero-electron reagents. [ 3 ]
The ability of Lewis acids to coordinate to transition metals as σ-acceptor ligands was recognized as early as the 1970s, but the so-called Z-type ligands remained curiosities until the early 2000s. Over the last decade, significant progress has been made in this area, especially via the incorporation of Lewis acid moieties into multidentate , ambiphilic ligands. The understanding of the nature and influence of metal→Z-ligand interactions has considerably improved, and the scope of Lewis acids capable of behaving as σ-acceptor ligands has been significantly extended. [ 4 ]
Owing to the vacant orbital present in Z-ligands, many have incomplete octets which allow them to readily accept a pair of electrons from other atoms. [ 1 ] A Z‑function ligand interacts with a metal center via a dative covalent bond , differing from the L‑function in that both electrons are donated by the metal rather than the ligand. [ 5 ] As such, Z-ligands donate zero electrons to a metal center because they tend to be strong electron acceptors .
Although many Z-ligands are Lewis acids, they behave as neutral ligands, contributing nothing to the overall charge of the complex. But since the metal uses two of its electrons in forming the metal–ligand bond, a Z-ligand raises the valence of the metal center by two units. This means that the presence of a Z-ligand changes the d n configuration of the complex without changing the total e − count. [ 1 ]
A Z-ligand is usually accompanied by an L-ligand , as the presence of the L-ligand adds stability to the complex. As the electrons are being donated from the central metal atom to the Z-ligand, the L-ligand donates its pair of electrons to the metal atom. This unique type of bonding existing between two different ligands and the metal atom renders the complexes stable when present with a strong sigma donor ligand. [ 5 ] In such complexes, the L and Z ligands can be written in terms of X. For example, if one Z-ligand is accompanied by one L type ligand, it can be written as a complex containing two X type ligands; i.e. MLZ type complex becomes an MX 2 type. [ 1 ]
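The bookkeeping in the two preceding paragraphs can be made concrete. In covalent bond classification (neutral) counting, the metal contributes its group number of electrons, each L ligand two, each X ligand one, and each Z ligand zero, while the valence equals the X count plus two per Z ligand. A minimal sketch of this arithmetic (function names are illustrative, not from any chemistry library):

```python
def electron_count(group, n_l, n_x, n_z):
    # Neutral (CBC) counting: the metal contributes its group number of
    # electrons, each L ligand 2, each X ligand 1, and each Z ligand 0.
    return group + 2 * n_l + n_x + 0 * n_z

def valence_number(n_x, n_z):
    # Each Z-ligand raises the valence of the metal center by two units
    # without changing the total electron count.
    return n_x + 2 * n_z
```

This reproduces the L + Z ≡ 2X rewriting described above: an MLZ combination gives the same electron count and valence as MX2.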
Many of the simplest Z-ligands are simple Lewis acids with electron-deficient central atoms, such as BX 3 , BH 3 , BR 3 , AlX 3 , etc. While these molecules typically have trigonal planar geometry , when bonded to a metal center they become tetrahedral . [ 4 ] This geometry change can be stabilized by the addition of an L-ligand on the metal center: the electrons donated from the L-ligand stabilize the Lewis acid in its tetrahedral form. These Z-ligands can attack at (a) the metal (even in 18 electron compounds), (b) the metal–ligand bond, or (c) the ligands. In addition to the simple Lewis acids, there are several complex molecules that can act as both L- and Z-ligands. These are referred to as donor buttresses, and are typically formed when large boron-alkyl molecules complex with a metal center. [ 5 ]
In addition to the geometry changes involved in the dative bonding from the metal to the Z-ligand complex, the bond itself can differ greatly depending on the type of buttresses involved. Typical boron-boron bonds are around 1.59 Å. [ 6 ] However, due to the dative bond character, the metal-boron bond distance can vary greatly depending on the bonding motif, as well as the various ligands attached to the metal. The boride and borylene motifs tend to have the shortest bonds, typically from 2.00 to 2.15 Å. Boryl complexes have metal-boron bond distances from 2.45 to 2.52 Å, and borane complexes have the largest range of metal-boron bond distances, 2.07-2.91 Å. In addition, for the metal base-stabilized borane complexes, the L-ligand that donates to the metal center plays an important role in the metal-boron bond length. Typically, the donor buttresses with sulfur and nitrogen donor ligands have metal-boron bond lengths of 2.05-2.25 Å, and donor buttresses with phosphorus donor ligands have metal-boron bond lengths of 2.17-2.91 Å. [ 5 ]
Both uncharged transition metal complexes and anionic complexes form the required adducts with acidic boranes . On the right is a typical reaction of a Z-ligand, in which the electron-deficient BPh 3 adds to the anionic Fe complex. The presence of Cp and CO ligands further stabilizes the Fe–BPh 3 bond. A more specific example is [NEt 4 ][CpFe(CO) 2 ], which gives the anionic borane iron complex as an amorphous solid on reaction with BPh 3 in diethyl ether. This complex could even be characterized in solution by a high-field-shifted 11 B- NMR signal at −28.8 ppm, characteristic of fourfold-coordinated boron. [ 7 ]
Most examples of Z-ligands are boron-centered molecules. These can range from the simple BX 3 molecules such as BF 3 , BH 3 , BCl 3 , and BR 3 , to the more complex boron-centered molecules such as B(C 6 F 5 ) 3 . [ 1 ] In addition, there are many complex boron-centered molecules that act as multiple ligands on a single metal atom, forming "scaffolding" structures. [ 5 ] One such structure is shown to the right. Other molecules that act as Z-ligands are AlCl 3 , AlR 3 , SO 2 , H + , Me + , CPh 3 , HgX 2 , Cu + , Ag+, CO 2 and certain silanes . [ 4 ]
https://en.wikipedia.org/wiki/Z-Ligand
In coding theory and information theory , a Z-channel or binary asymmetric channel is a communications channel used to model the behaviour of some data storage systems.
A Z-channel is a channel with binary input and binary output, where each 0 bit is transmitted correctly, but each 1 bit has probability p of being transmitted incorrectly as a 0, and probability 1 − p of being transmitted correctly as a 1. In other words, if X and Y are the random variables describing the probability distributions of the input and the output of the channel, respectively, then the crossovers of the channel are characterized by the conditional probabilities : [ 1 ]
The channel capacity cap(Z) of the Z-channel Z with crossover probability p (for 1 → 0), when the input random variable X is distributed according to the Bernoulli distribution with probability α for the occurrence of 0, is given by the following equation:
where s(p) = H(p) / (1 − p) for the binary entropy function H(·).
This capacity is obtained when the input variable X has a Bernoulli distribution with probability α of having value 0 and 1 − α of having value 1, where:
For small p , the capacity is approximated by
as compared to the capacity 1 − H(p) of the binary symmetric channel with crossover probability p .
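A numerical check of the capacity follows directly from the crossover probabilities: Y = 1 only when a transmitted 1 survives, so I(X;Y) = H((1 − α)(1 − p)) − (1 − α)H(p), which can be maximized over α by brute force. A sketch (not an optimized routine; names are illustrative):

```python
import math

def bin_entropy(q):
    # Binary entropy H(q) in bits.
    if q <= 0.0 or q >= 1.0:
        return 0.0
    return -q * math.log2(q) - (1.0 - q) * math.log2(1.0 - q)

def z_channel_capacity(p, steps=100000):
    """Capacity of the Z-channel by brute-force maximization of I(X;Y).

    alpha = P(X = 0); a transmitted 1 flips to 0 with probability p,
    so P(Y = 1) = (1 - alpha)(1 - p) and H(Y|X) = (1 - alpha) * H(p).
    """
    best = 0.0
    hp = bin_entropy(p)
    for i in range(1, steps):
        alpha = i / steps
        mi = bin_entropy((1.0 - alpha) * (1.0 - p)) - (1.0 - alpha) * hp
        if mi > best:
            best = mi
    return best
```

For p = 0 this recovers the noiseless capacity of 1 bit per use; for p > 0 the maximizing α exceeds 0.5, consistent with the discussion below.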
To find the maximum we differentiate
And we see the maximum is attained for
yielding the following value of cap(Z) as a function of p :
For any p > 0, α > 0.5 (i.e. more 0s should be transmitted than 1s), because transmitting a 1 introduces noise. As p → 1, the limiting value of α is 1 − 1/e. [ 2 ]
Define the following distance function d_A(x, y) on the words x, y ∈ {0, 1}^n of length n transmitted via a Z-channel:
Define the sphere V_t(x) of radius t around a word x ∈ {0, 1}^n of length n as the set of all the words at distance t or less from x; in other words,
A code C of length n is said to be t-asymmetric-error-correcting if for any two codewords c ≠ c′ ∈ {0, 1}^n, one has V_t(c) ∩ V_t(c′) = ∅. Denote by M(n, t) the maximum number of codewords in a t-asymmetric-error-correcting code of length n.
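The distance conventionally used for asymmetric-error correction is d_A(x, y) = max(N(x, y), N(y, x)), where N(x, y) counts the positions in which x has a 1 and y has a 0. A sketch, assuming this standard definition:

```python
def asymmetric_distance(x, y):
    """d_A(x, y) = max(N(x, y), N(y, x)) for equal-length binary words,
    where N(x, y) counts positions with x_i = 1 and y_i = 0."""
    n_xy = sum(1 for a, b in zip(x, y) if a == 1 and b == 0)
    n_yx = sum(1 for a, b in zip(x, y) if a == 0 and b == 1)
    return max(n_xy, n_yx)
```

Under this distance a single 1 → 0 error moves a word a distance of 1, so disjoint spheres of radius t around codewords correct up to t asymmetric errors.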
The Varshamov bound .
For n ≥1 and t ≥1,
The constant-weight code bound.
For n > 2t ≥ 2, let the sequence B_0, B_1, ..., B_{n−2t−1} be defined as
Then M(n, t) ≤ B_{n−2t−1}.
https://en.wikipedia.org/wiki/Z-channel_(information_theory)
The Z-factor is a measure of statistical effect size . It has been proposed for use in high-throughput screening (HTS), where it is also known as Z-prime, [ 1 ] to judge whether the response in a particular assay is large enough to warrant further attention.
In HTS, experimenters often compare a large number (hundreds of thousands to tens of millions) of single measurements of unknown samples to positive and negative control samples. The particular choice of experimental conditions and measurements is called an assay. Large screens are expensive in time and resources. Therefore, prior to starting a large screen, smaller test (or pilot) screens are used to assess the quality of an assay, in an attempt to predict if it would be useful in a high-throughput setting. The Z-factor is an attempt to quantify the suitability of a particular assay for use in a full-scale HTS.
The Z-factor is defined in terms of four parameters: the means (μ) and standard deviations (σ) of the samples (s) and controls (c). Given these values (μ_s, σ_s and μ_c, σ_c), the Z-factor is defined as:
For assays of the agonist/activation type, the control (c) data (μ_c, σ_c) in the equation are substituted with the positive-control (p) data (μ_p, σ_p), which represent the maximal activated signal; for assays of the antagonist/inhibition type, they are substituted with the negative-control (n) data (μ_n, σ_n), which represent the minimal signal.
In practice, the Z-factor is estimated from the sample means and sample standard deviations
The Z'-factor (Z-prime factor) is defined in terms of four parameters: the means (μ) and standard deviations (σ) of both the positive (p) and negative (n) controls (μ_p, σ_p and μ_n, σ_n). Given these values, the Z'-factor is defined as:
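The standard expressions from Zhang et al. (1999) are Z = 1 − 3(σ_s + σ_c)/|μ_s − μ_c| and Z′ = 1 − 3(σ_p + σ_n)/|μ_p − μ_n|. A short sketch of the Z′ estimate from raw control measurements (function and variable names are illustrative):

```python
import statistics

def z_prime(pos, neg):
    """Estimate the Z'-factor from positive- and negative-control wells,
    using sample means and sample standard deviations:
    Z' = 1 - 3 * (sd_p + sd_n) / |mean_p - mean_n|."""
    mu_p, mu_n = statistics.mean(pos), statistics.mean(neg)
    sd_p, sd_n = statistics.stdev(pos), statistics.stdev(neg)
    return 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)
```

Well-separated controls with small spread push Z′ toward 1; overlapping controls push it toward 0 or below.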
The Z'-factor is a characteristic parameter of the assay itself, without intervention of samples.
The Z-factor defines a characteristic parameter of the capability of hit identification for each given assay. The following categorization of HTS assay quality by the value of the Z-Factor is a modification of Table 1 shown in Zhang et al. (1999); [ 2 ] note that the Z-factor cannot exceed one.
Note that by the standards of many types of experiments, a zero Z-factor would suggest a large effect size, rather than a borderline useless result as suggested above. For example, if σ p =σ n =1, then μ p =6 and μ n =0 gives a zero Z-factor. But for normally-distributed data with these parameters, the probability that the positive control value would be less than the negative control value is less than 1 in 10 5 . Extreme conservatism is used in high throughput screening due to the large number of tests performed.
The constant factor 3 in the definition of the Z-factor is motivated by the normal distribution , for which more than 99% of values occur within three standard deviations of the mean. If the data follow a strongly non-normal distribution, the reference points (e.g. the meaning of a negative value) may be misleading.
Another issue is that the usual estimates of the mean and standard deviation are not robust ; accordingly, many users in the high-throughput screening community prefer the "Robust Z-prime", which substitutes the median for the mean and the median absolute deviation for the standard deviation. [ 3 ] Extreme values (outliers) in either the positive or negative controls can adversely affect the Z-factor, potentially leading to an apparently unfavorable Z-factor even when the assay would perform well in actual screening. [ 4 ] In addition, applying the single Z-factor-based criterion to two or more positive controls with different strengths in the same assay will lead to misleading results. [ 5 ] The absolute-value sign in the Z-factor makes it inconvenient to derive the statistical inference of the Z-factor mathematically. [ 6 ] A recently proposed statistical parameter , the strictly standardized mean difference ( SSMD ), can address these issues. [ 5 ] [ 6 ] [ 7 ] One estimate of SSMD is robust to outliers.
https://en.wikipedia.org/wiki/Z-factor
In chemistry , the Z-matrix is a way to represent a system built of atoms . A Z-matrix is also known as an internal coordinate representation . It provides a description of each atom in a molecule in terms of its atomic number , bond length, bond angle , and dihedral angle , the so-called internal coordinates , [ 1 ] [ 2 ] although it is not always the case that a Z-matrix will give information regarding bonding since the matrix itself is based on a series of vectors describing atomic orientations in space. However, it is convenient to write a Z-matrix in terms of bond lengths, angles, and dihedrals since this will preserve the actual bonding characteristics. The name arises because the Z-matrix assigns the second atom along the Z axis from the first atom, which is at the origin.
Z-matrices can be converted to Cartesian coordinates and back, as the structural information content is identical; the position and orientation in space, however, are not preserved. The recovered Cartesian coordinates will be accurate in terms of the relative positions of atoms, but will not necessarily be the same as an original set of Cartesian coordinates that was converted to a Z-matrix and back again. While the transform is conceptually straightforward, algorithms for doing the conversion vary significantly in speed, numerical precision and parallelism. [ 1 ] These matter because macromolecular chains, such as polymers, proteins, and DNA, can have thousands of connected atoms, and atoms consecutively distant along the chain may be close in Cartesian space (so small round-off errors can accumulate into large force-field errors). The fastest and most numerically accurate algorithm for conversion from torsion space to Cartesian space is the Natural Extension Reference Frame (NeRF) method. [ 1 ] Back-conversion from Cartesian coordinates to torsion angles is simple trigonometry and carries no risk of cumulative errors.
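As a concrete illustration of the torsion-to-Cartesian direction, the following sketch places one new atom from three already-placed atoms and its internal coordinates, in the Natural Extension Reference Frame style. Sign conventions for the dihedral vary between implementations, so this is an assumption-laden sketch (dihedral 0° means cis to atom a) rather than the reference algorithm:

```python
import math

def place_atom(a, b, c, bond, angle_deg, dihedral_deg):
    """Return Cartesian coordinates of a new atom D, given three placed
    atoms a, b, c, the bond length |D-c|, the bond angle D-c-b and the
    dihedral angle D-c-b-a (both in degrees; dihedral 0 = cis to a)."""
    th = math.radians(angle_deg)
    ph = math.radians(dihedral_deg)

    def sub(u, v):
        return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

    def cross(u, v):
        return (u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0])

    def unit(u):
        l = math.sqrt(u[0] ** 2 + u[1] ** 2 + u[2] ** 2)
        return (u[0] / l, u[1] / l, u[2] / l)

    bc = unit(sub(c, b))            # frame x-axis: along b -> c
    n = unit(cross(sub(b, a), bc))  # normal to the a-b-c plane
    m = cross(n, bc)                # completes a right-handed frame
    # local coordinates of D: behind c along bc, rotated by the dihedral
    x = -bond * math.cos(th)
    y = bond * math.sin(th) * math.cos(ph)
    z = bond * math.sin(th) * math.sin(ph)
    return (c[0] + x * bc[0] + y * m[0] + z * n[0],
            c[1] + x * bc[1] + y * m[1] + z * n[1],
            c[2] + x * bc[2] + y * m[2] + z * n[2])
```

Building a whole molecule amounts to seeding the first three atoms and then calling such a placement routine once per remaining Z-matrix row.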
They are used for creating input geometries for molecular systems in many molecular modelling and computational chemistry programs. A skillful choice of internal coordinates can make the interpretation of results straightforward. Also, since Z-matrices can contain molecular connectivity information (but do not always contain this information), quantum chemical calculations such as geometry optimization may be performed faster, because an educated guess is available for an initial Hessian matrix , and more natural internal coordinates are used rather than Cartesian coordinates.
The Z-matrix representation is often preferred, because it allows symmetry to be enforced upon the molecule (or parts thereof) by setting certain angles as constant. The Z-matrix is simply a representation for placing atomic positions in a relative way, with the obvious convenience that the vectors it uses easily correspond to bonds. A conceptual pitfall is to assume that every bond appears as a line in the Z-matrix, which is not true. For example, in ringed molecules like benzene , a Z-matrix will not include all six bonds in the ring, because all of the atoms are uniquely positioned after just five bonds, making the sixth redundant.
The methane molecule can be described by the following Cartesian coordinates (in Ångströms ):
Reorienting the molecule leads to Cartesian coordinates that make the symmetry more obvious. This removes the bond length of 1.089 from the explicit parameters.
The corresponding Z-matrix, which starts from the carbon atom, could look like this:
Only the 1.089000 value is not fixed by tetrahedral symmetry .
https://en.wikipedia.org/wiki/Z-matrix_(chemistry)
In mathematical analysis and computer science , functions known as Z-order , Lebesgue curve , Morton space-filling curve , [ 1 ] Morton order or Morton code map multidimensional data to one dimension while preserving locality of the data points (two points close together in multiple dimensions are, with high probability, also close together in Morton order). In France it is named after Henri Lebesgue , who studied it in 1904, [ 2 ] and in the United States after Guy Macdonald Morton , who first applied the order to file sequencing in 1966. [ 3 ] The z-value of a point in multiple dimensions is calculated simply by bit-interleaving the binary representations of its coordinate values. However, when querying a multidimensional search range in these data, using binary search is not really efficient: it is necessary to calculate, from a point encountered in the data structure, the next possible Z-value which is in the multidimensional search range, called BIGMIN. The BIGMIN problem was first stated, and its solution shown, by Tropf and Herzog in 1981. [ 4 ] Once the data are sorted by bit interleaving, any one-dimensional data structure can be used, such as simple one-dimensional arrays , binary search trees , B-trees , skip lists or (with low significant bits truncated) hash tables . The resulting ordering can equivalently be described as the order one would get from a depth-first traversal of a quadtree or octree .
The figure below shows the Z-values for the two dimensional case with integer coordinates 0 ≤ x ≤ 7, 0 ≤ y ≤ 7 (shown both in decimal and binary). Interleaving the binary coordinate values (starting to the right with the x -bit (in blue) and alternating to the left with the y -bit (in red)) yields the binary z -values (tilted by 45° as shown). Connecting the z -values in their numerical order produces the recursively Z-shaped curve. Two-dimensional Z-values are also known as quadkey values.
The Z-values of the x coordinates are described as binary numbers from the Moser–de Bruijn sequence , having nonzero bits only in their even positions:
The sum and difference of two x values are calculated by using bitwise operations :
This property can be used to offset a Z-value, for example in two dimensions the coordinates to the top (decreasing y), bottom (increasing y), left (decreasing x) and right (increasing x) from the current Z-value z are:
And in general to add two two-dimensional Z-values w and z :
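The offset and addition operations described above rely on a standard bit trick: force the other dimension's bit positions to 1 so that carries propagate across them. A sketch, assuming x occupies the even (least significant) bit positions as in the figure; the function names are illustrative:

```python
MASK_X = 0x5555555555555555  # even bit positions hold x
MASK_Y = 0xAAAAAAAAAAAAAAAA  # odd bit positions hold y

def interleave2(x, y):
    """Bit-interleave two non-negative 32-bit integers into a Morton
    code, with x in the even positions and y in the odd positions."""
    z = 0
    for i in range(32):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

def z_add(w, z):
    """Add two Morton codes component-wise without de-interleaving:
    setting the other dimension's bits to 1 lets carries skip over them."""
    xs = ((w | MASK_Y) + (z & MASK_X)) & MASK_X
    ys = ((w | MASK_X) + (z & MASK_Y)) & MASK_Y
    return xs | ys
```

Offsetting by one step in a single dimension (the neighbour moves listed above) is just `z_add` with the Morton code of (1, 0) or (0, 1).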
The Z-ordering can be used to efficiently build a quadtree (2D) or octree (3D) for a set of points. [ 5 ] [ 6 ] The basic idea is to sort the input set according to Z-order. Once sorted, the points can either be stored in a binary search tree and used directly, which is called a linear quadtree, [ 7 ] or they can be used to build a pointer based quadtree.
The input points are usually scaled in each dimension to be positive integers, either as a fixed point representation over the unit range [0, 1] or corresponding to the machine word size. Both representations are equivalent and allow for the highest order non-zero bit to be found in constant time. Each square in the quadtree has a side length which is a power of two, and corner coordinates which are multiples of the side length. Given any two points, the derived square for the two points is the smallest square covering both points. The interleaving of bits from the x and y components of each point is called the shuffle of x and y , and can be extended to higher dimensions. [ 5 ]
Points can be sorted according to their shuffle without explicitly interleaving the bits. To do this, for each dimension, the most significant bit of the exclusive or of the coordinates of the two points for that dimension is examined. The dimension for which the most significant bit is largest is then used to compare the two points to determine their shuffle order.
The exclusive or operation masks off the higher order bits for which the two coordinates are identical. Since the shuffle interleaves bits from higher order to lower order, identifying the coordinate with the largest most significant bit, identifies the first bit in the shuffle order which differs, and that coordinate can be used to compare the two points. [ 8 ] This is shown in the following Python code:
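A sketch of that comparison (a reconstruction consistent with the description above; dimension 0 is taken as the more significant one on ties):

```python
def less_msb(x: int, y: int) -> bool:
    # True iff the most significant set bit of x is strictly below that of y.
    return x < y and x < (x ^ y)

def cmp_zorder(lhs, rhs) -> bool:
    """True iff point lhs precedes point rhs in Z-order, without
    explicitly interleaving bits.  Points are equal-length sequences
    of non-negative integers; dimension 0 is the most significant."""
    msd = 0  # dimension holding the most significant differing bit so far
    for dim in range(1, len(lhs)):
        if less_msb(lhs[msd] ^ rhs[msd], lhs[dim] ^ rhs[dim]):
            msd = dim
    return lhs[msd] < rhs[msd]
```

A point set can be sorted into Z-order by wrapping this predicate, e.g. with `functools.cmp_to_key(lambda a, b: -1 if cmp_zorder(a, b) else (1 if cmp_zorder(b, a) else 0))`.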
One way to determine whether the most significant bit is smaller is to compare the floor of the base-2 logarithm of each point. It turns out that the following operation is equivalent, and only requires exclusive or operations: the most significant bit of x is smaller than that of y exactly when x < y and x < (x ^ y). [ 8 ]
It is also possible to compare floating point numbers using the same technique. The less_msb function is modified to first compare the exponents. Only when they are equal is the standard less_msb function used on the mantissas. [ 9 ]
Once the points are in sorted order, two properties make it easy to build a quadtree: The first is that the points contained in a square of the quadtree form a contiguous interval in the sorted order. The second is that if more than one child of a square contains an input point, the square is the derived square for two adjacent points in the sorted order.
For each adjacent pair of points, the derived square is computed and its side length determined. For each derived square, the interval containing it is bounded by the first larger square to the right and to the left in sorted order. [ 5 ] Each such interval corresponds to a square in the quadtree. The result of this is a compressed quadtree, where only nodes containing input points or two or more children are present. A non-compressed quadtree can be built by restoring the missing nodes, if desired.
Rather than building a pointer based quadtree, the points can be maintained in sorted order in a data structure such as a binary search tree. This allows points to be added and deleted in O (log n ) time. Two quadtrees can be merged by merging the two sorted sets of points, and removing duplicates. Point location can be done by searching for the points preceding and following the query point in the sorted order. If the quadtree is compressed, the predecessor node found may be an arbitrary leaf inside the compressed node of interest. In this case, it is necessary to find the predecessor of the least common ancestor of the query point and the leaf found. [ 10 ]
By bit interleaving, the database records are converted to a (possibly very long) sequence of bits. The bit sequences are interpreted as binary numbers and the data are sorted or indexed by the binary values, using any one dimensional data structure, as mentioned in the introduction. However, when querying a multidimensional search range in these data, using binary search is not really efficient. Although Z-order is preserving locality well, for efficient range searches an algorithm is necessary for calculating, from a point encountered in the data structure, the next possible Z-value which is in the multidimensional search range:
In this example, the range being queried ( x = 2, ..., 3; y = 2, ..., 6) is indicated by the dotted rectangle. Its highest Z-value (MAX) is 45. In this example, the value F = 19 is encountered when searching the data structure in the direction of increasing Z-values, so we would have to search in the interval between F and MAX (hatched area). To speed up the search, one calculates the next Z-value which is in the search range, called BIGMIN (36 in the example), and searches only in the interval between BIGMIN and MAX (bold values), thus skipping most of the hatched area. Searching in the decreasing direction is analogous, with LITMAX, the highest Z-value in the query range lower than F . The BIGMIN problem was first stated, and its solution shown, by Tropf and Herzog. [ 4 ] For the history after that publication, see. [ 11 ]
An extensive explanation of the LITMAX/BIGMIN calculation algorithm, together with Pascal source code (3D, easy to adapt to nD) and hints on how to handle floating-point data and possibly negative data, was provided in 2021 by Tropf. Here, bit interleaving is not done explicitly; the data structure just has pointers to the original (unsorted) database records. With a general record-comparison function (greater/less/equal, in the sense of the z-value), complications with bit-sequence lengths exceeding the computer word length are avoided, and the code can easily be adapted to any number of dimensions and any record key word length.
As the approach does not depend on the one dimensional data structure chosen, there is still free choice of structuring the data, so well known methods such as balanced trees can be used to cope with dynamic data, and keeping the tree balance when inserting or deleting takes O(log n) time. The method is also used in UB-trees (balanced). [ 12 ]
This free choice makes it easier to incorporate the method into existing databases, in contrast to, for example, R-trees, where special considerations are necessary.
Applying the method hierarchically (according to the data structure at hand), optionally in both increasing and decreasing direction, yields highly efficient multidimensional range search, which is important in both commercial and technical applications, e.g. as a procedure underlying nearest-neighbour searches. Z-order is one of the few multidimensional access methods that has found its way into commercial database systems. [ 13 ] The method is used in various technical applications of different fields [ 14 ] and in commercial database systems. [ 15 ]
As long ago as 1966, G. M. Morton proposed Z-order for file sequencing of a static two-dimensional geographical database. Areal data units are contained in one or a few quadratic frames represented by their sizes and lower-right-corner Z-values, the sizes complying with the Z-order hierarchy at the corner position. With high probability, changing to an adjacent frame is done with one or a few relatively small scanning steps. [ 3 ]
As an alternative, the Hilbert curve has been suggested as it has a better order-preserving behaviour, [ 6 ] and, in fact, was used in an optimized index, the S2-geometry. [ 16 ]
The Strassen algorithm for matrix multiplication is based on splitting the matrices in four blocks, and then recursively splitting each of these blocks in four smaller blocks, until the blocks are single elements (or, more practically, until reaching matrices so small that the trivial algorithm is faster). Arranging the matrix elements in Z-order then improves locality, and has the additional advantage (compared to row- or column-major ordering) that the subroutine for multiplying two blocks does not need to know the total size of the matrix, but only the size of the blocks and their location in memory. Effective use of Strassen multiplication with Z-order has been demonstrated; see Valsalam and Skjellum's 2002 paper. [ 17 ]
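The "size and location only" property can be illustrated with a small sketch (the encoding convention, row bits as the more significant bits, is an assumption for illustration): in a Morton layout, every aligned power-of-two block occupies one contiguous index range starting at the Morton index of its top-left element, so a recursive block multiply needs only a base offset and a block size.

```python
def morton(r, c, bits=3):
    """Index of element (r, c) in a Morton-ordered 2^bits x 2^bits matrix."""
    z = 0
    for i in range(bits):
        z |= ((c >> i) & 1) << (2 * i)      # column bits -> even positions
        z |= ((r >> i) & 1) << (2 * i + 1)  # row bits -> odd positions
    return z

def block_indices(r0, c0, size, bits=3):
    """Sorted Morton indices of the aligned size x size block at (r0, c0)."""
    return sorted(morton(r0 + dr, c0 + dc, bits)
                  for dr in range(size) for dc in range(size))

# An aligned 4x4 block of an 8x8 matrix is one contiguous run of 16 indices,
# starting at the Morton index of its top-left element.
idx = block_indices(4, 0, 4)
print(idx[0], idx[-1])  # 32 47
```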
Buluç et al. present a sparse matrix data structure that Z-orders its non-zero elements to enable parallel matrix-vector multiplication. [ 18 ]
Matrices in linear algebra can also be traversed using a space-filling curve. [ 19 ] Conventional loops traverse a matrix row by row. Traversing with the Z-curve allows efficient access to the memory hierarchy . [ 20 ]
Some GPUs store texture maps in Z-order to increase spatial locality of reference during texture-mapped rasterization . This allows cache lines to represent rectangular tiles, increasing the probability that nearby accesses are in the cache. At a larger scale, it also decreases the probability of costly so-called "page breaks" (i.e., the cost of changing rows ) in SDRAM/DDRAM. This is important because 3D rendering involves arbitrary transformations (rotations, scaling, perspective, and distortion by animated surfaces).
These formats are often referred to as swizzled textures or twiddled textures . Other tiled formats may also be used.
The Barnes–Hut algorithm requires construction of an octree. Storing the data as a pointer -based tree requires many sequential pointer dereferences to iterate over the octree in depth-first order (expensive on a distributed-memory machine). Instead, if one stores the data in a hashtable , using octree hashing, the Z-order curve naturally iterates the octree in depth-first order. [ 6 ]
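A minimal sketch of the idea (the 3D encoding convention is an assumption for illustration): sorting cell keys produced by three-way bit interleaving visits the octree depth-first, because each 3-bit group of a code selects one octant at one level, so cells sharing an octant prefix stay adjacent in sorted order.

```python
def morton3(x, y, z, bits=4):
    """3D Morton code: interleave the bits of x, y and z (z most significant)."""
    m = 0
    for i in range(bits):
        m |= ((x >> i) & 1) << (3 * i)
        m |= ((y >> i) & 1) << (3 * i + 1)
        m |= ((z >> i) & 1) << (3 * i + 2)
    return m

# All leaf cells of a depth-2 octree (a 4x4x4 grid), sorted by Morton code.
cells = [(x, y, z) for x in range(4) for y in range(4) for z in range(4)]
cells.sort(key=lambda c: morton3(*c, bits=2))

# The top 3 bits of each 6-bit code name the root octant: in sorted order the
# octant index never decreases, i.e. each octant is traversed completely
# before the next one starts (and the same holds recursively per level).
octants = [morton3(*c, bits=2) >> 3 for c in cells]
print(octants == sorted(octants))  # True
```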
|
https://en.wikipedia.org/wiki/Z-order_curve
|
The Z-tube is an experimental apparatus for measuring the tensile strength of a liquid .
It consists of a Z-shaped tube with open ends, filled with a liquid, and set on top of a spinning table. If the tube were straight, the liquid would immediately fly out one end or the other of the tube as it began to spin. By bending the ends of the tube back towards the center of rotation , a shift of the liquid away from center will result in the water level in one end of the tube rising and thus increasing the pressure in that end of the tube, and consequently returning the liquid to the center of the tube. By measuring the rotational speed and the distance from the center of rotation to the liquid level in the bent ends of the tube, the pressure reduction inside the tube can be calculated.
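The calculation follows from integrating the radial pressure gradient of rigid-body rotation. A minimal sketch of the principle (the numbers are illustrative, not taken from the article):

```python
import math

RHO_WATER = 1000.0   # kg/m^3
P_ATM = 101_325.0    # Pa

def pressure_at_center(omega, r, p_end=P_ATM, rho=RHO_WATER):
    """Absolute pressure at the rotation axis of a spinning liquid column.

    Integrating dp/dr = rho * omega^2 * r from the axis out to the free
    surface at radius r gives p_center = p_end - rho * omega^2 * r^2 / 2.
    A negative result means the liquid at the axis is under tension."""
    return p_end - 0.5 * rho * omega**2 * r**2

# Illustrative numbers: a 10 cm arm spinning at 5000 rpm already pulls the
# centre pressure well below zero absolute pressure.
omega = 5000 * 2 * math.pi / 60  # rad/s
print(pressure_at_center(omega, 0.10))
```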
Negative pressures (i.e. absolute pressures below zero, in other words tension ) have been reported using water processed to remove dissolved gases . [ 1 ] Tensile strengths up to 280 atmospheres have been reported for water in glass. [ 2 ]
|
https://en.wikipedia.org/wiki/Z-tube
|
"F 0 " is defined as the number of equivalent minutes of steam sterilization at temperature 121.1 °C (250 °F) delivered to a container or unit of product calculated using a z-value of 10 °C. The term F-value or "F Tref/z " is defined as the equivalent number of minutes to a certain reference temperature (T ref ) for a certain control microorganism with an established Z-value . [ 1 ]
Z-value is a term used in microbial thermal death time calculations. It is the number of degrees by which the temperature has to be increased to achieve a tenfold (i.e. 1 log 10 ) reduction in the D-value . The D-value of an organism is the time required, in a given medium and at a given temperature, for a tenfold reduction in the number of organisms. It is useful when examining the effectiveness of thermal inactivation under different conditions, for example in food cooking and preservation. The z-value is a measure of the change of the D-value with varying temperature; it is a simplified version of the Arrhenius equation , equivalent to z = 2.303 R T T ref /E. [ 2 ]
The z-value of an organism in a particular medium is the temperature change required for the D-value to change by a factor of ten, or put another way, the temperature required for the thermal destruction curve to move one log cycle . It is the reciprocal of the slope resulting from the plot of the logarithm of the D-value versus the temperature at which the D-value was obtained. While the D-value gives the time needed at a certain temperature to kill 90% of the organisms, the z-value relates the resistance of an organism to differing temperatures. The z-value allows calculation of the equivalency of two thermal processes, if the D-value and the z-value are known.
Example: if it takes an increase of 10 °C (18 °F) to move the curve one log, then the z-value is 10. Given a D-value of 4.5 minutes at 150 °C, the D-value for 160 °C can be calculated by reducing the time by one log: the new D-value at 160 °C is 0.45 minutes. This means that each 10 °C (18 °F) increase in temperature reduces the D-value by one log (a factor of ten). Conversely, a 10 °C (18 °F) decrease in temperature increases the D-value by one log, so the D-value at 140 °C would be 45 minutes.
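The arithmetic of the example reduces to a one-line formula; the following sketch (the function name is illustrative) reproduces both figures:

```python
def d_value(d_ref, t_ref, t, z):
    """D-value at temperature t, given D = d_ref at temperature t_ref and
    z-value z. Each z degrees of temperature increase cuts D by a factor
    of 10: D(t) = d_ref * 10 ** ((t_ref - t) / z)."""
    return d_ref * 10 ** ((t_ref - t) / z)

# The worked example from the text: D = 4.5 min at 150 C, z = 10 C.
print(d_value(4.5, 150, 160, 10))  # ~0.45 min
print(d_value(4.5, 150, 140, 10))  # ~45 min
```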
|
https://en.wikipedia.org/wiki/Z-value_(temperature)
|
The Zuse Z23 was a transistorized computer first delivered in 1961, designed by the Zuse KG company. A total of 98 units were sold to commercial and academic customers up until 1967. It had a 40-bit word length and used an 8192-word drum memory as main storage, with 256 words of rapid-access ferrite memory. It operated on fixed- and floating-point binary numbers. A fixed-point addition took 0.3 milliseconds; a fixed-point multiplication took 10.3 milliseconds. It was similar in internal design to the earlier vacuum-tube Z22 . Related variants were the Z25 and Z26 models. [ 1 ]
The Z23 used about 2700 transistors and 7700 diodes. Memory was magnetic-core memory . [ 2 ] The Z23 had an Algol 60 compiler. It had a basic clock speed of 150 kHz and consumed about 4000 watts of electric power. An improved version Z23V was released in 1965, with expanded memory and a higher processing speed.
The Z23 weighed about 1,000 kilograms (1.0 t; 1.1 short tons). [ 3 ]
|
https://en.wikipedia.org/wiki/Z23_(computer)
|
The Z5 was a computer designed by Konrad Zuse and manufactured by Zuse KG following an order by Ernst Leitz GmbH in Wetzlar in 1950. The computer was delivered in July 1953 [ 1 ] and was the first commercial built-to-order mainframe in Germany. The computer was purchased to help with the design of optical lens systems.
The Z5 was the successor of the Z4 and was much more compact and powerful. Zuse implemented the machine with relays , since vacuum tubes were too unreliable at the time. The Z5 used the same principles as the Z4 but was six times faster.
It also had punched tape readers, which the Z4 did not have. It had conditional branching and five subroutine loops.
|
https://en.wikipedia.org/wiki/Z5_(computer)
|
The zirconium-catalyzed asymmetric carbo-alumination reaction (or ZACA reaction ) was developed by Nobel laureate Ei-ichi Negishi . [ 1 ] It facilitates the chiral functionalization of alkenes using organoaluminium compounds under the influence of chiral bis-indenylzirconium catalysts (e.g. bearing chiral terpene residues, [ 2 ] as in (+)- or (−)-bis[(1-neomenthyl)indenyl]zirconium dichloride [ 3 ] [ 4 ] ). In the first step the alkene inserts into an Al–C bond of the reagent, forming a new chiral organoaluminium compound in which the aluminium atom occupies the less hindered position. This intermediate is usually oxidized by oxygen to form the corresponding chiral alcohol (cf. the hydroboration–oxidation reaction ). The reaction can also be applied to dienes, where the least sterically hindered double bond is attacked selectively.
|
https://en.wikipedia.org/wiki/ZACA_reaction
|
ZDNET is a business technology news website owned and operated by Ziff Davis . The brand was founded on April 1, 1991, as a general interest technology portal from Ziff Davis and evolved into an enterprise IT -focused online publication. After being under the ownership of CNET Networks (2000–2008), CBS Corporation / ViacomCBS (2008–2020), and Red Ventures (2020–2024), ZDNET was reacquired by Ziff Davis in August 2024. CNET was included in the acquisition as well.
ZDNET began as a subscription-based digital service called "ZiffNet" that offered computing information to users of CompuServe . It featured computer industry forums, events, features and searchable archives.
Initially, ZiffNet was intended to serve as a common place to find content from all Ziff-Davis print publications. As such, ZiffNet was an expansion on an earlier online service called PCMagNet for readers of PC Magazine . Launched in 1988, PCMagNet in turn was the evolution of Ziff Davis' first electronic publishing venture, a bulletin board, which launched in 1985. [ 3 ]
In late 1994, Ziff-Davis expanded onto the World Wide Web under the name "ZD Net". [ 4 ] [ 2 ] Dan Farber, former editor-in-chief of PC Week and MacWeek , was named editor-in-chief of the property. [ 5 ] By June 1995, the site was recording web traffic of 2.5 million pageviews per week. [ 6 ]
On June 20, 1995, Ziff-Davis announced the consolidation of its online information services under a single name, ZD Net . The service had grown its membership to 275,000 subscribers across six platforms: CompuServe, Prodigy , AT&T Interchange, the Microsoft Network , AppleLink and eWorld . [ 6 ]
By its fifth anniversary in 1996, the collective "ZD Net" brand—now on the Web, America Online , Microsoft Network and Prodigy—counted 300,000 subscribers and was named the second-highest grossing advertising site on the web. [ 3 ] The site also expanded overseas: initially to France , Germany and the United Kingdom ; later to China , Australia , Hong Kong , Italy , Korea , Malaysia , Russia , Spain , Taiwan and India . [ 7 ]
In 1997, the website—now the brand's flagship property—underwent another redesign that featured topical "channels" of content. It also marked the change in name from "ZD Net" to "ZDNet". [ 8 ]
Two months prior, the company launched ZDNet News, or "ZDNN", the site's first dedicated section to original reportage. [ 9 ] Among the journalists hired to staff the department were former Computer Shopper executive editor Charlie Cooper, San Jose Mercury News business editor Steve Hamm, PC Week Inside senior editor Bill Snyder, PC Week editor John Dodge , Computerworld editor Michael Fitzgerald and PC Week editorial director Jim Louderback . [ 10 ]
In 1996, the first dedicated advertising sales team began with the hiring of Ken Evans, formerly a CMP Media executive in New York, as Senior Director of Advertising Sales. The appointment of digital publishing executive Dan Rosensweig as ZDNet's first president capped a year of significant change for the brand. [ 11 ]
In 1998, ZDNet launched "Inter@active Investor", or ZDII, a spin-off website for investors that offered financial news and information on technology companies. [ 12 ]
On May 11, 1998, Ziff-Davis launched ZDTV as the first cable television channel and website to offer 24-hour programming about computing and the Internet. The venture, which was partly owned by Vulcan Enterprises, was supported with a staff of 170 and incorporated ZDNet content on its website, ZDTV.com. [ 13 ] [ 14 ] The channel would later become Tech TV .
By the end of 1998, ZDNet was the dominant technology brand online. It led its closest rival, CNET , by a 26 percent margin and was the 13th most popular site on the Web, reaching 8.4 million users, or 13.4 percent of all users on the Web. [ 15 ] The site would reach an additional 600,000 users within a year. [ 16 ]
In 1999, Ziff-Davis spun ZDNet off as a separate company and offered it as a tracking stock, ZDZ, to accompany the parent stock, ZD. An initial public offering raised $190 million, but the tracking stock was eliminated in early 2000 and revived as common stock. [ 17 ] The new company soon acquired Updates.com, a software upgrade service. It was incorporated into the site's "Help Channel." [ 18 ]
In 1999, ZDNet also launched "Tech Life", a network of six consumer-focused tech sites intended to attract parents ("FamilyPC"), music listeners ("ZDNet Music"), gadget enthusiasts ("ZDNet Equip"), gamers ("ZDNet GameSpot ") and basic users ("Internet Life" with Yahoo ).
It also launched "Computer Stew", a web-based comedy show about technology that featured John Hargrave and Jay Stevens, [ 19 ] as well as the first ZDNet Holiday Gift Guide.
On December 30, 1999, ZDNet launched a $25 million branding campaign in response to a $100 million advertising campaign launched by rival CNET. [ 17 ]
ZDNet's lead over the competition narrowed by 2000. Despite a record 10.7 million unique users in January, it managed only a 13 percent lead over the next competitor. [ 20 ] By mid-2000, ZDNet had expanded to 23 countries in 14 languages on six continents. [ 21 ]
On July 19, 2000, CNET Networks (ZDNet's largest rival) announced that it would acquire ZDNet for about $1.6 billion. [ 22 ] Some analysts thought that the merger of CNET and ZDNet would lead to redundancy in their product offerings, but research revealed that their target audiences had just 25 percent overlap. [ 23 ]
In 2001, Ziff Davis Media Inc. reached an agreement with CNET Networks Inc. and ZDNet to regain the URLs lost in the 2000 sale of Ziff Davis Inc. to Softbank Corp. [ 24 ]
In 2002, CNET Networks launched ZDNET sister site Builder.com, a site intended for enterprise software developers. [ 25 ] On July 7, 2002, CNET Networks acquired Newmediary for its database of more than 30,000 enterprise IT white papers. [ 26 ] ZDNET had integrated its services into its "Business & Technology" channel as early as January 2001. [ 27 ]
In 2003, CNET Networks redesigned and relaunched ZDNet as an enterprise-focused publication intended to help business executives make better technology decisions.
The entire site was realigned as part of a CNET Networks B2B portfolio that included CNET News.com, Builder.com and TechRepublic. [ 28 ]
A "Tech Update" section was created to serve as a directory of proprietary IT research (dubbed "IT Priorities"), and a new "Power Center" was implemented to prominently feature webcasts, white papers and case studies from partners. ZDNet also offered eight enterprise-targeted newsletters, as well launched its first blogs. [ 29 ]
In 2005, ZDNet Government was launched. Editorial features included writing by former Utah CIO Phil Windley, TechRepublic columnist Ramon Padilla and CNET News reporter Declan McCullagh. ZDNet also launched its first original podcasts in 2005. [ 30 ]
In 2006, ZDNET experienced another redesign that reduced its editorial focus on traditional news articles and product reviews and emphasized a growing network of expert bloggers, now totaling more than 30. The blogs covered topics such as enterprise IT, open source, Web 2.0, Google, Apple and Microsoft, and featured journalists David Berlind, Mary Jo Foley and Larry Dignan. [ 31 ]
On February 19, 2008, Larry Dignan was appointed editor-in-chief of ZDNet and editorial director of TechRepublic, [ 32 ] replacing Dan Farber, who became editor-in-chief of CNET News.com. [ 33 ]
On May 17, 2008, CBS Corporation announced that it would acquire CNET Networks for approximately $1.8 billion. [ 34 ] The entire company would be organized under its CBS Interactive division.
In May 2010, ZDNet redesigned its site to place emphasis on the topics its blog network covers—now "Companies," "Hardware," "Software," "Mobile," "Security" and "Research"—and de-emphasize the downloads and reviews it imported from CNET post-merger. [ 35 ] [ 36 ]
After the CBS Corporation merged with Viacom to form ViacomCBS in 2019, ZDNet was sold to Red Ventures in September 2020. [ 37 ] On August 17, 2022, ZDNet announced "the biggest upgrade in the 31-year history of the brand, including a new hand-drawn logo and new brand color, 'Energy Yellow'", in anticipation of "a wave of technology advances to sweep the world's biggest industries in the years ahead." [ 38 ] In March 2023, ZDNet was affected by layoffs that cut 35% of its staff. [ 39 ] In August 2024, Ziff Davis signed a deal to purchase CNET and ZDNet from Red Ventures. [ 40 ] The deal completed later in 2024, bringing ZDNET back under the ownership of Ziff Davis for the first time since it was acquired by CNET Networks in 2000. [ 41 ]
ZDNet operates a network of about 50 blogs loosely aligned by its major verticals: companies, hardware, software, mobile, security and IT research. Within those general areas are blogs on gadgets, management strategy, social media, datacenters, technology law, SOA, healthcare, CRM, virtualization and sustainability. [ citation needed ] The site also offers product reviews on consumer gadgets, electronics and home office equipment.
At the 14th Annual Computer Press Awards in 1999, ZDNet was adjudged the Best Overall Online Site. [ 42 ]
In 2007, the Association of Online Publishers awarded ZDNet UK under the Business Website category for its contribution to innovation in incorporating Web 2.0 and community features effectively on its site. [ 43 ]
A Japanese news publishing company called Asahi Interactive owns the ZDNet Japan website.
The ZDNet UK Live feature displays real time news updates and comments on the website and on social media including Twitter.
Other country editions include Australia, Asia, Belgium, China, Germany, Netherlands, UK and France, in their native languages.
|
https://en.wikipedia.org/wiki/ZDNET
|
ZFK equation , an abbreviation for Zeldovich–Frank-Kamenetskii equation , is a reaction–diffusion equation that models premixed flame propagation. The equation is named after Yakov Zeldovich and David A. Frank-Kamenetskii , who derived it in 1938; it is also known as the Nagumo equation. [ 1 ] [ 2 ] The equation is analogous to the KPP equation except that it contains an exponential dependence in the reaction term, and it differs fundamentally from the KPP equation with regard to the propagation velocity of the traveling wave. In non-dimensional form, the equation reads
{\displaystyle {\frac {\partial \theta }{\partial t}}={\frac {\partial ^{2}\theta }{\partial x^{2}}}+\omega (\theta )}
with a typical form for ω {\displaystyle \omega } given by
{\displaystyle \omega ={\frac {\beta ^{2}}{2}}\theta (1-\theta )e^{-\beta (1-\theta )}}
where θ ∈ [ 0 , 1 ] {\displaystyle \theta \in [0,1]} is the non-dimensional dependent variable (typically temperature) and β {\displaystyle \beta } is the Zeldovich number . In the ZFK regime , β ≫ 1 {\displaystyle \beta \gg 1} . The equation reduces to Fisher's equation for β ≪ 1 {\displaystyle \beta \ll 1} and thus β ≪ 1 {\displaystyle \beta \ll 1} corresponds to KPP regime . The minimum propagation velocity U m i n {\displaystyle U_{min}} (which is usually the long time asymptotic speed) of a traveling wave in the ZFK regime is given by
{\displaystyle U_{\text{ZFK}}\propto {\sqrt {2\int _{0}^{1}\omega (\theta )d\theta }}}
whereas in the KPP regime, it is given by
{\displaystyle U_{\text{KPP}}=2{\sqrt {\left.{\frac {d\omega }{d\theta }}\right|_{\theta =0}}}.}
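Both formulas can be checked numerically for the reaction term above. The sketch below (plain trapezoid-rule quadrature; an illustration, not a solver for the full eigenvalue problem) shows U_ZFK approaching 1 as β grows while U_KPP decays exponentially:

```python
import math

def omega(theta, beta):
    """The reaction term (beta^2 / 2) * theta * (1 - theta) * exp(-beta (1 - theta))."""
    return 0.5 * beta**2 * theta * (1 - theta) * math.exp(-beta * (1 - theta))

def u_zfk(beta, n=100_000):
    """U_ZFK = sqrt(2 * integral of omega over [0, 1]), trapezoid rule."""
    h = 1.0 / n
    s = sum(omega(i * h, beta) for i in range(1, n))  # endpoint terms vanish
    return math.sqrt(2 * h * s)

def u_kpp(beta):
    """U_KPP = 2 * sqrt(d omega / d theta at 0) = sqrt(2) * beta * exp(-beta / 2)."""
    return math.sqrt(2) * beta * math.exp(-beta / 2)

for beta in (5, 15, 50):
    print(beta, round(u_zfk(beta), 4), u_kpp(beta))
```

For β = 15 (the value used in the figure mentioned in the text), U_ZFK is already close to its limit 1, while U_KPP has collapsed to about 0.01, which is why the two regimes select very different wave speeds.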
Similar to Fisher's equation , a traveling wave solution can be found for this problem. Suppose the wave to be traveling from right to left with a constant velocity U {\displaystyle U} , then in the coordinate attached to the wave, i.e., z = x + U t {\displaystyle z=x+Ut} , the problem becomes steady. The ZFK equation reduces to
{\displaystyle U{\frac {d\theta }{dz}}={\frac {d^{2}\theta }{dz^{2}}}+{\frac {\beta ^{2}}{2}}\theta (1-\theta )e^{-\beta (1-\theta )}}
satisfying the boundary conditions θ ( − ∞ ) = 0 {\displaystyle \theta (-\infty )=0} and θ ( + ∞ ) = 1 {\displaystyle \theta (+\infty )=1} . The boundary conditions are satisfied sufficiently smoothly so that the derivative d θ / d z {\displaystyle d\theta /dz} also vanishes as z → ± ∞ {\displaystyle z\to \pm \infty } . Since the equation is translationally invariant in the z {\displaystyle z} direction, an additional condition, say for example θ ( 0 ) = 1 / 2 {\displaystyle \theta (0)=1/2} , can be used to fix the location of the wave. The speed of the wave U {\displaystyle U} is obtained as part of the solution, thus constituting a nonlinear eigenvalue problem. [ 3 ] Numerical solution of the above equation, θ {\displaystyle \theta } , the eigenvalue U {\displaystyle U} and the corresponding reaction term ω {\displaystyle \omega } are shown in the figure, calculated for β = 15 {\displaystyle \beta =15} .
The ZFK regime as β → ∞ {\displaystyle \beta \to \infty } is formally analyzed using activation energy asymptotics . Since β {\displaystyle \beta } is large, the term e − β ( 1 − θ ) {\displaystyle e^{-\beta (1-\theta )}} will make the reaction term practically zero; however, that term will be non-negligible if 1 − θ ∼ 1 / β {\displaystyle 1-\theta \sim 1/\beta } . The reaction term also vanishes when θ = 0 {\displaystyle \theta =0} and θ = 1 {\displaystyle \theta =1} . Therefore, it is clear that ω {\displaystyle \omega } is negligible everywhere except in a thin layer close to the right boundary θ = 1 {\displaystyle \theta =1} . Thus the problem is split into three regions: an inner diffusive-reactive region flanked on either side by two outer convective-diffusive regions.
The problem for outer region is given by
{\displaystyle U{\frac {d\theta }{dz}}={\frac {d^{2}\theta }{dz^{2}}}.}
The solution satisfying the condition θ ( − ∞ ) = 0 {\displaystyle \theta (-\infty )=0} is θ = e U z {\displaystyle \theta =e^{Uz}} . This solution is also made to satisfy θ ( 0 ) = 1 {\displaystyle \theta (0)=1} (an arbitrary choice) to fix the wave location somewhere in the domain because the problem is translationally invariant in the z {\displaystyle z} direction. As z → 0 − {\displaystyle z\to 0^{-}} , the outer solution behaves like θ = 1 + U z + ⋯ {\displaystyle \theta =1+Uz+\cdots } which in turn implies d θ / d z = U + ⋯ . {\displaystyle d\theta /dz=U+\cdots .}
The solution satisfying the condition θ ( + ∞ ) = 1 {\displaystyle \theta (+\infty )=1} is θ = 1 {\displaystyle \theta =1} . As z → 0 + {\displaystyle z\to 0^{+}} , the outer solution behaves like θ = 1 {\displaystyle \theta =1} and thus d θ / d z = 0 {\displaystyle d\theta /dz=0} .
We can see that although θ {\displaystyle \theta } is continuous at z = 0 {\displaystyle z=0} , d θ / d z {\displaystyle d\theta /dz} has a jump at z = 0 {\displaystyle z=0} . The transition between the derivatives is described by the inner region.
In the inner region where 1 − θ ∼ 1 / β {\displaystyle 1-\theta \sim 1/\beta } , reaction term is no longer negligible. To investigate the inner layer structure, one introduces a stretched coordinate encompassing the point z = 0 {\displaystyle z=0} because that is where θ {\displaystyle \theta } is approaching unity according to the outer solution and a stretched dependent variable according to η = β z , Θ = β ( 1 − θ ) . {\displaystyle \eta =\beta z,\,\Theta =\beta (1-\theta ).} Substituting these variables into the governing equation and collecting only the leading order terms, we obtain
{\displaystyle 2{\frac {d^{2}\Theta }{d\eta ^{2}}}=\Theta e^{-\Theta }.}
The boundary condition as η → − ∞ {\displaystyle \eta \to -\infty } comes from the local behaviour of the outer solution obtained earlier, which, written in terms of the inner-zone coordinate, becomes Θ → − U η = + ∞ {\displaystyle \Theta \to -U\eta =+\infty } and d Θ / d η = − U {\displaystyle d\Theta /d\eta =-U} . Similarly, as η → + ∞ {\displaystyle \eta \to +\infty } , we find Θ = d Θ / d η = 0 {\displaystyle \Theta =d\Theta /d\eta =0} . The first integral of the above equation after imposing these boundary conditions becomes
{\displaystyle {\begin{aligned}\left.\left({\frac {d\Theta }{d\eta }}\right)^{2}\right|_{\Theta =\infty }-\left.\left({\frac {d\Theta }{d\eta }}\right)^{2}\right|_{\Theta =0}&=\int _{0}^{\infty }\Theta e^{-\Theta }d\Theta \\U^{2}&=1\end{aligned}}}
which implies U = 1 {\displaystyle U=1} . It is clear from the first integral that the square of the wave speed, U 2 {\displaystyle U^{2}} , is proportional to the integrated (with respect to θ {\displaystyle \theta } ) value of ω {\displaystyle \omega } (of course, in the large β {\displaystyle \beta } limit, only the inner zone contributes to this integral). The first integral after substituting U = 1 {\displaystyle U=1} is given by
{\displaystyle {\frac {d\Theta }{d\eta }}=-{\sqrt {1-(\Theta +1)\exp(-\Theta )}}.}
In the KPP regime, U min = U KPP . {\displaystyle U_{\text{min}}=U_{\text{KPP}}.} For the reaction term used here, the KPP speed that is applicable for β ≪ 1 {\displaystyle \beta \ll 1} is given by [ 5 ]
{\displaystyle U_{\text{KPP}}=2{\sqrt {\left.{\frac {d\omega }{d\theta }}\right|_{\theta =0}}}={\sqrt {2}}\beta e^{-\beta /2}}
whereas in the ZFK regime, as we have seen above, U ZFK = 1 {\displaystyle U_{\text{ZFK}}=1} . Numerical integration of the equation for various values of β {\displaystyle \beta } showed that there exists a critical value β ∗ = 1.64 {\displaystyle \beta _{*}=1.64} such that only for β ≤ β ∗ {\displaystyle \beta \leq \beta _{*}} , U min = U KPP . {\displaystyle U_{\text{min}}=U_{\text{KPP}}.} For β > β ∗ {\displaystyle \beta >\beta _{*}} , U min {\displaystyle U_{\text{min}}} is greater than U KPP {\displaystyle U_{\text{KPP}}} . As β ≫ 1 {\displaystyle \beta \gg 1} , U min {\displaystyle U_{\text{min}}} approaches U ZFK = 1 {\displaystyle U_{\text{ZFK}}=1} , thereby approaching the ZFK regime. The region between the KPP regime and the ZFK regime is called the KPP–ZFK transition zone.
The critical value depends on the reaction model, for example we obtain
{\displaystyle {\begin{aligned}&\beta _{*}=3.04\quad {\text{for}}\quad \omega \propto (1-\theta )e^{-\beta (1-\theta )}\\&\beta _{*}=5.11\quad {\text{for}}\quad \omega \propto {\left(1-\theta \right)}^{2}e^{-\beta (1-\theta )}.\end{aligned}}}
To predict the KPP–ZFK transition analytically, Paul Clavin and Amable Liñán proposed a simple piecewise linear model [ 6 ]
{\displaystyle \omega (\theta )={\begin{cases}\theta \quad {\text{if}}\quad 0\leq \theta \leq 1-\epsilon ,\\h(1-\theta )/\epsilon ^{2}\quad {\text{if}}\quad 1-\epsilon \leq \theta \leq 1\end{cases}}}
where h {\displaystyle h} and ϵ {\displaystyle \epsilon } are constants. The KPP velocity of the model is U KPP = 2 {\displaystyle U_{\text{KPP}}=2} , whereas the ZFK velocity is obtained as U ZFK = h {\displaystyle U_{\text{ZFK}}={\sqrt {h}}} in the double limit ϵ → 0 {\displaystyle \epsilon \to 0} and h → ∞ {\displaystyle h\to \infty } that mimics a sharp increase in the reaction near θ = 1 {\displaystyle \theta =1} .
For this model there exists a critical value h ∗ = 1 − ϵ 2 {\displaystyle h_{*}=1-\epsilon ^{2}} such that
{\displaystyle {\begin{cases}h<h_{*}:&\quad U_{\text{min}}=U_{\text{KPP}},\\h>h_{*}:&\quad U_{\text{min}}={\frac {h/(1-\epsilon )+1-\epsilon }{\sqrt {h/(1-\epsilon )-\epsilon }}},\\h\gg h_{*}:&\quad U_{\text{min}}\to U_{\text{ZFK}}\end{cases}}}
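A quick numerical sketch (illustrative, directly evaluating the piecewise formula above) confirms that the two branches join continuously at h = h∗ and that for large h the speed grows like sqrt(h/(1 − ε)), i.e. like U_ZFK:

```python
import math

def u_min(h, eps):
    """Minimum wave speed in the Clavin-Linan piecewise-linear model."""
    h_star = 1 - eps**2
    if h < h_star:
        return 2.0                              # KPP branch: U_KPP = 2
    a = h / (1 - eps)
    return (a + 1 - eps) / math.sqrt(a - eps)   # ZFK-like branch

eps = 0.1
h_star = 1 - eps**2
# The two branches meet continuously at h = h*: both give U_KPP = 2.
print(round(u_min(h_star, eps), 6))  # 2.0
# For large h the speed approaches sqrt(h / (1 - eps)):
print(round(u_min(1e6, eps) / math.sqrt(1e6 / (1 - eps)), 4))  # 1.0
```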
|
https://en.wikipedia.org/wiki/ZFK_equation
|
The ZGPAX S5 is a smartwatch from China. [ 1 ] It is part of a set of products (including Omate TrueSmart ) that use a similar chipset. [ citation needed ] The products in this set also have roughly the same specs but differ in names, case designs and camera placements. [ citation needed ]
For a time, these watches were considerably higher-spec than competing products from larger, well-known manufacturers: they ran a fully featured version of Android 4.0.3 and could run apps without the assistance of a connected smartphone or tablet. They use a dual-core ARMv7 processor manufactured by MediaTek, with each core running at up to 1 GHz as needed. Additionally, a SIM slot for voice and data cellular connectivity is included, along with Bluetooth, Wi-Fi, GPS and FM radios, effectively replicating the functionality and performance of a typical smartphone.
Another notable feature of the ZGPAX S5 is a microSD slot which the specifications claim supports up to 32 GB of memory; however, it appears to recognize and operate with 64 GB microSD cards. Along with its built-in memory, that amounts to roughly 72 GB of data carried on a wrist.
There is a built-in camera capable of adequate 720p recording, and a built-in microphone and speaker which achieve sufficient fidelity and volume to work well with Siri-like software on Android.
The ZGPAX S5, like other smartwatches, suffers from very short battery life. However, it (and many of its derivatives) uses an easily swappable battery pack, which helps considerably. Typical battery life is about one day with less than an hour of active use, or about 2–3 days on standby. As it uses an unusually high-performance CPU for a watch, certain applications drain the battery much faster. Batteries charge in roughly an hour. There is also a known issue where the ZGPAX S5 loses connection with the network operator at random, likely due to poor manufacturing of its radio. Additionally, phone calls to the watch may, at random, cause it to restart.
|
https://en.wikipedia.org/wiki/ZGPAX_s5
|
ZHLS-GF (Zone-Based Hierarchical Link State Routing Protocol with Gateway Flooding) is a hybrid routing protocol for computer networks that is based on ZHLS. [ 1 ]
In ZHLS, all network nodes construct two routing tables — an intra-zone routing table and an inter-zone routing table — by flooding NodeLSPs within the zone and ZoneLSPs throughout the network. However, this incurs a large communication overhead in the network.
In ZHLS-GF, the flooding scheme floods ZoneLSPs only to the gateway nodes of zones, thus reducing the communication overhead significantly. Further, in ZHLS-GF only the gateway nodes store ZoneLSPs and construct inter-zone routing tables, meaning that the total storage capacity required in the network is less than in ZHLS.
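The gateway-node idea can be sketched in a few lines of Python. The topology, zone labels, and helper names below are invented for illustration; this is a minimal sketch of which nodes would store ZoneLSPs under each scheme, not an implementation of the protocol.

```python
# Minimal sketch: which nodes store ZoneLSPs under ZHLS vs. ZHLS-GF.
# The topology, zone labels and helper names are invented for illustration.

def gateways(adjacency, zone_of):
    """A gateway node has at least one neighbour in a different zone."""
    return {
        n for n, neighbours in adjacency.items()
        if any(zone_of[m] != zone_of[n] for m in neighbours)
    }

# Toy ad hoc network: nodes 0-3 in zone "A", nodes 4-7 in zone "B".
zone_of = {0: "A", 1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B", 7: "B"}
adjacency = {
    0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4],  # node 3 borders zone B
    4: [3, 5, 6], 5: [4, 7], 6: [4, 7], 7: [5, 6],  # node 4 borders zone A
}

zhls_storers = set(adjacency)                   # ZHLS: every node stores ZoneLSPs
zhls_gf_storers = gateways(adjacency, zone_of)  # ZHLS-GF: only gateway nodes do

print(sorted(zhls_gf_storers))  # → [3, 4]
```

In this toy network only two of the eight nodes need to store ZoneLSPs and build inter-zone routing tables, which is the storage saving the protocol description refers to.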
|
https://en.wikipedia.org/wiki/ZHLS-GF
|
The ZINC database ( recursive acronym : ZINC is not commercial ) is a curated collection of commercially available chemical compounds prepared especially for virtual screening . ZINC is used by investigators (generally people with training as biologists or chemists ) in pharmaceutical companies , biotechnology companies , and research universities . [ 1 ] [ 2 ] [ 3 ]
ZINC is different from other chemical databases because it aims to represent the biologically relevant, three-dimensional form of the molecule .
ZINC is updated regularly and may be downloaded and used free of charge . It is developed by John Irwin in the Shoichet Laboratory in the Department of Pharmaceutical Chemistry at the University of California, San Francisco .
The latest release of the website interface is "ZINC-22". The database is continuously updated and as of 2023 is claimed to contain over 37 billion commercially available molecules. [ 4 ]
The database is typically used for molecule mining , a process in which quantitative structure–activity relationships are used to find new compounds with improved biological activity, given a known starting point found, for example, by high-throughput screening . [ 5 ] [ 6 ]
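Virtual screening pipelines that draw on libraries like ZINC commonly begin with a simple physicochemical pre-filter. The sketch below applies Lipinski-style rule-of-five cutoffs to invented compound records; the compound names and property values are hypothetical and not taken from ZINC.

```python
# Minimal "drug-likeness" pre-filter of the kind applied to screening
# libraries (Lipinski's rule of five); the compound records are invented.

def passes_rule_of_five(mw, logp, h_donors, h_acceptors):
    """At most one violation of the four Lipinski cutoffs is allowed."""
    violations = sum([
        mw > 500,          # molecular weight (Da)
        logp > 5,          # octanol-water partition coefficient
        h_donors > 5,      # hydrogen-bond donors
        h_acceptors > 10,  # hydrogen-bond acceptors
    ])
    return violations <= 1

library = [
    # (name, MW, logP, HBD, HBA) -- hypothetical values
    ("cmpd-1", 342.4, 2.1, 2, 5),
    ("cmpd-2", 612.8, 6.3, 4, 12),  # too large and too lipophilic
    ("cmpd-3", 489.9, 5.4, 1, 8),   # one violation: still accepted
]

hits = [name for name, *props in library if passes_rule_of_five(*props)]
print(hits)  # → ['cmpd-1', 'cmpd-3']
```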
|
https://en.wikipedia.org/wiki/ZINC_database
|
ZMPSTE24 is a human gene . [ 1 ] [ 2 ] The protein encoded by this gene is a metallopeptidase . It is involved in the processing of lamin A . [ 3 ] Defects in the ZMPSTE24 gene lead to similar laminopathies as defects in lamin A, because the latter is a substrate for the former. [ 4 ] In humans, a mutation abolishing the ZMPSTE24 cleavage site in prelamin A causes a progeroid disorder . [ 5 ] Failure to correctly process prelamin A leads to deficient ability to repair DNA double-strand breaks . [ 6 ] [ 7 ]
As shown by Liu et al., [ 8 ] lack of Zmpste24 prevents lamin A formation from its precursor farnesyl-prelamin A. Lack of ZMPSTE24 causes progeroid phenotypes in mice and humans. This lack increases DNA damage and chromosome aberrations and sensitivity to DNA-damaging agents that cause double-strand breaks. Also, lack of ZMPSTE24 allows an increase in non-homologous end joining , but a deficiency in steps leading to homologous recombinational DNA repair.
This biochemistry article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/ZMPSTE24
|
ZMapp is an experimental biopharmaceutical medication comprising three chimeric monoclonal antibodies under development as a treatment for Ebola virus disease . [ 1 ] Two of the three components were originally developed at the Public Health Agency of Canada 's National Microbiology Laboratory (NML), and the third at the U.S. Army Medical Research Institute of Infectious Diseases ; the cocktail was optimized by Gary Kobinger , a research scientist at the NML [ 2 ] and underwent further development under license by Mapp Biopharmaceutical . ZMapp was first used on humans during the Western African Ebola virus epidemic , having only been previously tested on animals and not yet subjected to a randomized controlled trial . [ 3 ] The National Institutes of Health (NIH) ran a clinical trial starting in January 2015 with subjects from Sierra Leone, Guinea, and Liberia aiming to enroll 200 people, but the epidemic waned and the trial closed early, leaving it too statistically underpowered to give a meaningful result about whether ZMapp worked. [ 4 ]
In 2016, a clinical study comparing ZMapp to the current standard of care for Ebola was inconclusive. [ 5 ]
The drug is composed of three monoclonal antibodies (mAbs), initially harvested from mice exposed to Ebola virus proteins, that have been chimerized with human constant regions . [ 6 ] The components are the chimeric monoclonal antibody c13C6, from a previously existing antibody cocktail called "MB-003", and two chimeric mAbs, c2G4 and c4G7, from a different antibody cocktail called ZMab. [ 7 ] ZMapp is manufactured in the tobacco plant Nicotiana benthamiana in the bioproduction process known as " pharming " by Kentucky BioProcessing, a subsidiary of Reynolds American . [ 1 ] [ 8 ] [ 9 ]
Like intravenous immunoglobulin therapy, ZMapp contains a mixture of neutralizing antibodies that confer passive immunity to an individual, enhancing the normal immune response, and is designed to be administered after exposure to the Ebola virus. [ 10 ] Such antibodies have been used in the treatment and prevention of various infectious diseases and are intended to attack the virus by interfering with its surface and neutralizing it to prevent further damage. [ 10 ] [ 11 ]
Two of the drug's three components were originally developed at the Public Health Agency of Canada's National Microbiology Laboratory (NML), and a third at the U.S. Army Medical Research Institute of Infectious Diseases; [ 2 ] the cocktail was optimized by Gary Kobinger, then branch chief of the NML, and is undergoing further development by Leaf Biopharmaceutical (LeafBio, Inc.), a San Diego–based arm of Mapp Biopharmaceutical. [ 12 ] LeafBio created ZMapp in collaboration with its parent and Defyrus Inc., each of which had licensed its own cocktail of antibodies, called MB-003 and ZMab. [ 13 ] [ citation needed ]
MB-003 is a cocktail of three humanized or human–mouse chimeric mAbs: c13C6, h13F6, and c6D8. [ 7 ] A study published in September 2012 found that rhesus macaques infected with Ebola virus (EBOV) survived when receiving MB-003 (mixture of 3 chimeric monoclonal antibodies) one hour after infection. When treated 24 or 48 hours after infection, four of six animals survived and had little to no viremia and few, if any, clinical symptoms. [ 14 ]
MB-003 was created by scientists at the U.S. Army Medical Research Institute of Infectious Diseases, including Gene Olinger and Jamie Pettitt, in collaboration with Mapp Biopharmaceutical, with years of funding from US government agencies including the National Institute of Allergy and Infectious Disease , the Biomedical Advanced Research and Development Authority , and the Defense Threat Reduction Agency . [ 1 ] [ 2 ] [ 15 ]
ZMAb is a mixture of three mouse mAbs: m1H3, m2G4, and m4G7. [ 7 ] A study published in November 2013 found that EBOV-infected macaque monkeys survived after being given a therapy with a combination of three EBOV surface glycoprotein (EBOV-GP)-specific monoclonal antibodies (ZMAb) within 24 hours of infection. The authors concluded that post-exposure treatment resulted in a robust immune response, with good protection for up to 10 weeks and some protection at 13 weeks. [ 16 ] ZMab was created by the NML and licensed to Defyrus, a Toronto-based biodefense company, with further funding by the Public Health Agency of Canada . [ 2 ]
A 2014 paper described how Mapp and its collaborators, including investigators at Public Health Agency of Canada , Kentucky BioProcessing, and the National Institute of Allergy and Infectious Diseases , first chimerized the three antibodies comprising ZMAb, then tested combinations of MB-003 and the chimeric ZMAb antibodies in guinea pigs and then primates to determine the best combination, which turned out to be c13C6 from MB-003 and two chimeric mAbs from ZMAb, c2G4 and c4G7. This is ZMapp. [ 7 ]
In an experiment also published in the 2014 paper, 21 rhesus macaque primates were infected with the Kikwit Congolese variant of EBOV. Three primates in the control arm were given a non-functional antibody, and the 18 in the treatment arm were divided into three groups of six. All primates in the treatment arm received three doses of ZMapp, spaced 3 days apart. The first treatment group received its first dose on the 3rd day after being infected, the second group on the 4th day, and the third group on the 5th day. All three primates in the control group died; all 18 primates in the treatment arm survived. [ 7 ] Mapp then went on to show that ZMapp inhibits replication of a Guinean strain of EBOV in cell cultures. [ 17 ]
Mapp remains involved in the production of the drug through its contracts with Kentucky BioProcessing, a subsidiary of Reynolds American . [ 1 ] To produce the drug, genes coding for the chimeric mAbs were inserted into viral vectors , and tobacco plants are infected with the viral vector encoding for the antibodies, using Agrobacterium cultures. [ 18 ] [ 19 ] [ 20 ] Subsequently, antibodies are extracted and purified from the plants. Once the genes encoding the chimeric mAbs are in hand, the entire tobacco production cycle is believed to take a few months. [ 3 ] The development of these production methods was funded by the U.S. Defense Advanced Research Projects Agency as part of its bio-defense efforts following the 9/11 terrorist attacks. [ 21 ] [ 22 ]
ZMapp was first used during the 2014 West Africa Ebola virus outbreak, not having previously undergone any human clinical trials to determine its efficacy or potential risks. [ 3 ] By October 2014, the United States Food and Drug Administration had approved several experimental drugs, including ZMapp, for use on patients infected with Ebola virus. The use of such drugs during the epidemic was also deemed ethical by the World Health Organization (WHO). [ 23 ] In 2014, a limited supply of ZMapp was used to treat 7 individuals infected with the Ebola virus; of these, 2 died. [ 24 ] [ 25 ] The outcome is not considered to be statistically significant . Mapp announced in August 2014 that supplies of ZMapp had been exhausted. [ 26 ]
The lack of drugs and unavailability of experimental treatment in the most affected regions of the West African Ebola virus outbreak spurred some controversy. [ 3 ] The fact that the drug was first given to Americans and a European and not to Africans, according to the Los Angeles Times , "provoked outrage, feeding into African perceptions of Western insensitivity and arrogance, with a deep sense of mistrust and betrayal still lingering over the exploitation and abuses of the colonial era". [ 27 ] Salim S. Abdool Karim , the director of an AIDS research center in South Africa, placed the issue in the context of the history of exploitation and abuses. Responding to a question on how people might have reacted if ZMapp and other drugs had first been used on Africans, he said "It would have been the front-page screaming headline: 'Africans used as guinea pigs for American drug company's medicine ' ". [ 3 ]
In August 2014, the World Health Organization called for convening a panel of medical authorities "to consider whether experimental drugs should be more widely released." In a statement, Peter Piot (co-discoverer of the Ebola virus); Jeremy Farrar, the director of the Wellcome Trust ; and David Heymann of the Chatham House Center on Global Health Security, called for the release of experimental drugs for the 2014 West Africa Ebola outbreak . [ 27 ]
At an August 2014 press conference, Barack Obama , the President of the United States , was questioned regarding whether the cocktail should be fast-tracked for approval or be made available to sick patients outside of the United States. He responded, "I think we've got to let the science guide us. I don't think all the information's in on whether this drug is helpful." [ 28 ]
The National Institutes of Health announced on 27 February 2015 the commencement of a randomized controlled trial of ZMapp to be conducted in Liberia and the United States. [ 29 ] From March 2015 through November 2015, 72 individuals infected with the Ebola virus were enrolled in the trial; investigators stopped enrolling new subjects in January 2016, the trial having failed to reach its enrollment goal of 200 due to the waning of the Ebola outbreak . As a result, although a 40% lower risk of death was calculated for those who received ZMapp, the difference was not statistically significant and ultimately it could not be determined whether the use of ZMapp was superior to the optimized standard of care alone. However, ZMapp was found to be safe and well tolerated. [ 4 ] [ 30 ] [ 31 ]
The ZMapp cocktail was assessed by the World Health Organization for emergency use under the Monitored Emergency Use of Unregistered and Investigational Interventions (MEURI) ethical protocol. The panel agreed that "the benefits of ZMapp outweigh its risks" while noting that it presented logistical challenges, particularly that of requiring a cold chain for distribution and storage. [ 32 ] Four alternative therapies ( remdesivir , the Regeneron product atoltivimab/maftivimab/odesivimab , favipiravir , and ansuvimab ) were also considered for use, but they were at earlier stages of development. [ 32 ] In August 2019, the Democratic Republic of the Congo's national health authorities, the World Health Organization, and the National Institutes of Health announced that they would stop using ZMapp, along with all other Ebola treatments except atoltivimab/maftivimab/odesivimab and ansuvimab, in their ongoing clinical trials, citing the higher mortality rates of patients not treated with atoltivimab/maftivimab/odesivimab and ansuvimab. [ 33 ] [ 34 ]
In October 2020, the US Food and Drug Administration (FDA) approved atoltivimab/maftivimab/odesivimab with an indication for the treatment of infection caused by Zaire ebolavirus . [ 35 ]
|
https://en.wikipedia.org/wiki/ZMapp
|
The ZND detonation model is a one-dimensional model for the process of detonation of an explosive . It was proposed during World War II independently by Yakov Zeldovich , [ 1 ] John von Neumann , [ 2 ] and Werner Döring , [ 3 ] hence the name.
This model admits finite-rate chemical reactions and thus the process of detonation consists of the following stages. First, an infinitesimally thin shock wave compresses the explosive to a high pressure called the von Neumann spike . At the von Neumann spike point the explosive still remains unreacted. The spike marks the onset of the zone of exothermic chemical reaction, which finishes at the Chapman–Jouguet condition . After that, the detonation products expand backward.
In the reference frame in which the shock is stationary, the flow following the shock is subsonic . Because of this, energy release behind the shock is able to be transported acoustically to the shock for its support. For a self-propagating detonation, the shock relaxes to a speed given by the Chapman–Jouguet condition, which induces the material at the end of the reaction zone to have a locally sonic speed in the reference frame in which the shock is stationary. In effect, all of the chemical energy is harnessed to propagate the shock wave forward.
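For the idealized one-gamma perfect-gas model often used alongside ZND theory, the Chapman–Jouguet speed has a closed form. The sketch below uses the textbook relation M_CJ = sqrt(1 + qhat) + sqrt(qhat) with qhat = (gamma^2 - 1) q / (2 c1^2); the numerical values of gamma, q, and c1 are illustrative assumptions, not data for any particular explosive.

```python
from math import sqrt

def cj_speed(gamma, q, c1):
    """Chapman-Jouguet detonation speed for a perfect gas with constant
    ratio of specific heats `gamma`, heat release `q` per unit mass and
    upstream sound speed `c1` (textbook one-gamma model):
        M_CJ = sqrt(1 + qhat) + sqrt(qhat),  qhat = (gamma^2 - 1) q / (2 c1^2)
    """
    qhat = (gamma**2 - 1.0) * q / (2.0 * c1**2)
    return (sqrt(1.0 + qhat) + sqrt(qhat)) * c1

# Illustrative (invented) numbers for a gaseous explosive mixture:
gamma, q, c1 = 1.2, 2.0e6, 340.0  # [-], J/kg, m/s
d_cj = cj_speed(gamma, q, c1)

# Strong-detonation limit for large heat release: D ~ sqrt(2 (gamma^2 - 1) q)
d_strong = sqrt(2.0 * (gamma**2 - 1.0) * q)
```

The CJ speed always sits above the upstream sound speed and approaches the strong-detonation limit from above as the heat release grows.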
However, in the 1960s, experiments revealed that gas-phase detonations were most often characterized by unsteady, three-dimensional structures, which can only in an averaged sense be predicted by one-dimensional steady theories. Indeed, such waves are quenched as their structure is destroyed. [ 4 ] [ 5 ] The Wood–Kirkwood detonation theory can correct for some of these limitations. [ 6 ]
|
https://en.wikipedia.org/wiki/ZND_detonation_model
|
The Z N {\displaystyle Z_{N}} model (also known as the clock model ) is a simplified statistical mechanical spin model . It is a generalization of the Ising model . Although it can be defined on an arbitrary graph , it is integrable only on one- and two-dimensional lattices , in several special cases.
The Z N {\displaystyle Z_{N}} model is defined by assigning a spin value at each node r {\displaystyle r} on a graph, with the spins taking values s r = exp 2 π i q N {\displaystyle s_{r}=\exp {\frac {2\pi iq}{N}}} , where q ∈ { 0 , 1 , … , N − 1 } {\displaystyle q\in \{0,1,\ldots ,N-1\}} . The spins therefore take values in the form of complex roots of unity . Roughly speaking, we can think of the spins assigned to each node of the Z N {\displaystyle Z_{N}} model as pointing in any one of N {\displaystyle N} equidistant directions. The Boltzmann weights for a general edge r r ′ {\displaystyle rr'} are:
W r r ′ ( s r , s r ′ ) = ∑ k = 0 N − 1 x k ( r r ′ ) ( s r s r ′ ∗ ) k {\displaystyle W_{rr'}(s_{r},s_{r'})=\sum _{k=0}^{N-1}x_{k}^{\left(rr'\right)}\left(s_{r}s_{r'}^{*}\right)^{k}}
where ∗ {\displaystyle *} denotes complex conjugation and the x k ( r r ′ ) {\displaystyle x_{k}^{\left(rr'\right)}} are related to the interaction strength along the edge r r ′ {\displaystyle rr'} . Note that x k ( r r ′ ) = x N − k ( r r ′ ) {\displaystyle x_{k}^{\left(rr'\right)}=x_{N-k}^{\left(rr'\right)}} and x 0 {\displaystyle x_{0}} are often set to 1. The (real valued) Boltzmann weights are invariant under the transformations s r → ω k s r {\displaystyle s_{r}\rightarrow \omega ^{k}s_{r}} and s r → s r ∗ {\displaystyle s_{r}\rightarrow s_{r}^{*}} , where ω = exp ⁡ ( 2 π i / N ) {\displaystyle \omega =\exp(2\pi i/N)} , analogous to universal rotation and reflection respectively.
There is a class of solutions to the Z N {\displaystyle Z_{N}} model defined on an in general anisotropic square lattice. If the model is self-dual in the Kramers–Wannier sense and thus critical , and the lattice is such that there are two possible 'weights' x k 1 {\displaystyle x_{k}^{1}} and x k 2 {\displaystyle x_{k}^{2}} for the two possible edge orientations, we can introduce the following parametrization in α {\displaystyle \alpha } :
Requiring the duality relation and the star–triangle relation , which ensures integrability , to hold, it is possible to find the solution:
with x 0 = 1 {\displaystyle x_{0}=1} . This particular case of the Z N {\displaystyle Z_{N}} model is often called the FZ model in its own right, after V.A. Fateev and A.B. Zamolodchikov who first calculated this solution. The FZ model approaches the XY model in the limit as N → ∞ {\displaystyle N\rightarrow \infty } . It is also a special case of the chiral Potts model and the Kashiwara–Miwa model .
As is the case for most lattice models in statistical mechanics , there are no known exact solutions to the Z N {\displaystyle Z_{N}} model in three dimensions. In two dimensions, however, it is exactly solvable on a square lattice for certain values of N {\displaystyle N} and/or the 'weights' x k {\displaystyle x_{k}} . Perhaps the best-known example is the Ising model , which admits spins in two opposite directions (i.e. s r = ± 1 {\displaystyle s_{r}=\pm 1} ). This is precisely the Z N {\displaystyle Z_{N}} model for N = 2 {\displaystyle N=2} , and therefore the Z N {\displaystyle Z_{N}} model can be thought of as a generalization of the Ising model . Other exactly solvable models corresponding to particular cases of the Z N {\displaystyle Z_{N}} model include the three-state Potts model , with N = 3 {\displaystyle N=3} and x 1 = x 2 = x c {\displaystyle x_{1}=x_{2}=x_{c}} , where x c {\displaystyle x_{c}} is a certain critical value (FZ), and the critical Ashkin–Teller model where N = 4 {\displaystyle N=4} .
A quantum version of the Z N {\displaystyle Z_{N}} clock model can be constructed in a manner analogous to the transverse-field Ising model . The Hamiltonian of this model is the following:
H = − J ∑ ⟨ i , j ⟩ ( Z i † Z j + Z j † Z i ) − J g ∑ j ( X j + X j † ) {\displaystyle H=-J\sum _{\langle i,j\rangle }\left(Z_{i}^{\dagger }Z_{j}+Z_{j}^{\dagger }Z_{i}\right)-Jg\sum _{j}\left(X_{j}+X_{j}^{\dagger }\right)}
Here, the subscripts refer to lattice sites, and the sum ∑ ⟨ i , j ⟩ {\displaystyle \sum _{\langle i,j\rangle }} is done over pairs of nearest neighbour sites i {\displaystyle i} and j {\displaystyle j} . The clock matrices X j {\displaystyle X_{j}} and Z j {\displaystyle Z_{j}} are generalisations of the Pauli matrices satisfying
Z j X k = e 2 π i δ j , k / N X k Z j {\displaystyle Z_{j}X_{k}=e^{2\pi i\delta _{j,k}/N}X_{k}Z_{j}}
and
X j N = Z j N = 1 {\displaystyle X_{j}^{N}=Z_{j}^{N}=1}
where δ j , k {\displaystyle \delta _{j,k}} is 1 if j {\displaystyle j} and k {\displaystyle k} are the same site and zero otherwise. J {\displaystyle J} is a prefactor with dimensions of energy, and g {\displaystyle g} is another coupling coefficient that determines the relative strength of the external field compared to the nearest neighbour interaction.
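In the standard single-site representation (assumed here), Z is the diagonal matrix of N-th roots of unity and X is the cyclic shift matrix. A minimal pure-Python check of the same-site relations for N = 3:

```python
import cmath

# Conventional single-site clock matrices for N = 3 (assumed representation):
# Z = diag(1, w, w^2) with w = exp(2*pi*i/N), X = cyclic shift |c> -> |c+1>.
N = 3
omega = cmath.exp(2j * cmath.pi / N)
Z = [[omega**r if r == c else 0 for c in range(N)] for r in range(N)]
X = [[1 if (r - c) % N == 1 else 0 for c in range(N)] for r in range(N)]
identity = [[1 if r == c else 0 for c in range(N)] for r in range(N)]

def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(N)) for c in range(N)]
            for r in range(N)]

def scale(s, A):
    return [[s * A[r][c] for c in range(N)] for r in range(N)]

def close(A, B, tol=1e-12):
    return all(abs(A[r][c] - B[r][c]) < tol
               for r in range(N) for c in range(N))

# Same-site commutation: Z X = exp(2*pi*i/N) X Z ...
assert close(matmul(Z, X), scale(omega, matmul(X, Z)))
# ... and both matrices have order N: X^N = Z^N = 1.
assert close(matmul(X, matmul(X, X)), identity)
assert close(matmul(Z, matmul(Z, Z)), identity)
```

For N = 2 these matrices reduce to the Pauli matrices σ_z and σ_x, recovering the transverse-field Ising model.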
|
https://en.wikipedia.org/wiki/ZN_model
|
ZOBODAT is an online catalogue of taxonomic , bibliographic, author and specimen data, [ 1 ] [ 2 ] [ 3 ] from mainly German-language sources. The database is published by the Oberösterreichische Landesmuseen [ 4 ] [ 5 ] and was founded in 1972 by Ernst Reichl. As of August 16, 2022, it contained 3,476,485 occurrence records, [ 6 ] 1,089 journal records (together with their contents), [ 7 ] 25,379 authors (including their publications, and specimens collected and determined), [ 8 ] and information on 62,977 species. [ 9 ]
The reader may access the information in German, French, English, Spanish, Portuguese or Hungarian.
|
https://en.wikipedia.org/wiki/ZOBODAT
|
ZPEG is a motion video technology that applies a human visual acuity model to a decorrelated transform-domain space, thereby optimally reducing the redundancies in motion video by removing the subjectively imperceptible. This technology is applicable to a wide range of video processing problems such as video optimization , real-time motion video compression , subjective quality monitoring, and format conversion.
The ZPEG company produces modified versions of x264 , x265 , AV1 , and FFmpeg under the name ZPEG Engine (see § Video optimization ).
Pixel distributions are well modeled as stochastic processes , and a transformation to their ideal decorrelated representation is accomplished by the Karhunen–Loève transform (KLT) defined by the Karhunen–Loève theorem . The discrete cosine transform (DCT) is often used as a computationally efficient transform that closely approximates the Karhunen–Loève transform for video data, due to the strong correlation in pixel space typical of video frames. [ 1 ] As the correlation in the temporal direction is just as high as that of the spatial directions, a three-dimensional DCT may be used to decorrelate motion video. [ 2 ]
A Human Visual Model may be formulated based on the contrast sensitivity of the visual perception system. [ 3 ] A time-varying Contrast Sensitivity model may be specified, and is applicable to the three-dimensional discrete cosine transform (DCT). [ 4 ] A three-dimensional Contrast Sensitivity model is used to generate quantizers for each of the three-dimensional basis vectors, resulting in a near-optimal visually lossless removal of imperceptible motion video artifacts. [ 5 ]
The perceptual strength of the Human Visual Model quantizer generation process is calibrated in visiBels (vB), a logarithmic scale roughly corresponding to perceptibility as measured in screen heights. As the eye moves further from the screen, it becomes less able to perceive details in the image. The ZPEG model also includes a temporal component, and thus is not fully described by viewing distance. In terms of viewing distance, the visiBel value decreases by six each time the screen distance halves. The standard viewing distance for Standard Definition television (about 7 screen heights) is defined as 0 vB. The normal viewing distance for high-definition video (HD video), about 4 screen heights, would therefore be about −6 vB (3.5 screen heights).
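The two calibration points given above (0 vB at 7 screen heights, −6 vB at 3.5) determine a simple logarithmic relation. The function below is inferred from those two points and is an assumption, not a published ZPEG formula; it also ignores the temporal component mentioned above.

```python
from math import log2

def visibels(screen_heights, reference=7.0):
    """Viewing-distance component of the perceptual strength in visiBels.
    Inferred from the two calibration points in the text: 0 vB at 7 screen
    heights and -6 vB at 3.5 (each halving of distance shifts the value by
    -6 vB). This formula is an inference, not a published ZPEG definition."""
    return 6.0 * log2(screen_heights / reference)

print(visibels(7.0))   # → 0.0  (standard-definition viewing distance)
print(visibels(3.5))   # → -6.0 (high-definition viewing distance)
```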
The ZPEG pre-processor optimizes motion video sequences for compression by existing motion estimation-based video compressors, such as Advanced Video Coding (AVC) (H.264) and High Efficiency Video Coding (HEVC) (H.265). The human visual acuity model is converted into quantizers for direct application to a three-dimensional transformed block of the motion video sequence, followed by an inverse quantization step using the same quantizers. The motion video sequence returned from this process is then used as input to the existing compressor.
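The quantize-then-dequantize round trip at the heart of the pre-processor can be sketched on a tiny block with a separable three-dimensional DCT. Everything below is illustrative: the 4×4×4 block size, the linearly increasing quantizer steps, and the test data are invented, whereas real ZPEG derives its quantizers from a human-visual-system model.

```python
from math import cos, pi, sqrt

N = 4  # tiny 4x4x4 block for illustration (real codecs use larger blocks)

def dct1(x):
    """Orthonormal 1-D DCT-II."""
    n = len(x)
    return [(sqrt(1.0 / n) if k == 0 else sqrt(2.0 / n))
            * sum(x[t] * cos(pi * (t + 0.5) * k / n) for t in range(n))
            for k in range(n)]

def idct1(X):
    """Inverse of dct1 (the transform is orthonormal, so this is its transpose)."""
    n = len(X)
    return [sum((sqrt(1.0 / n) if k == 0 else sqrt(2.0 / n))
                * X[k] * cos(pi * (t + 0.5) * k / n) for k in range(n))
            for t in range(n)]

def along_axis(block, axis, f):
    """Apply the 1-D transform f along one axis of a flat N^3 list."""
    out = block[:]
    idx = lambda i, j, k: (i * N + j) * N + k
    for a in range(N):
        for b in range(N):
            coords = []
            for t in range(N):
                c = [a, b]
                c.insert(axis, t)  # put the running index at position `axis`
                coords.append(c)
            line = f([block[idx(*c)] for c in coords])
            for t, c in enumerate(coords):
                out[idx(*c)] = line[t]
    return out

def dct3(block):
    for axis in range(3):
        block = along_axis(block, axis, dct1)
    return block

def idct3(block):
    for axis in range(3):
        block = along_axis(block, axis, idct1)
    return block

# A smooth spatio-temporal gradient standing in for real video data:
block = [float(i + j + k) for i in range(N) for j in range(N) for k in range(N)]

# Invented quantizer table: coarser steps for higher-frequency coefficients
# (ZPEG derives its steps from a human-visual-system model instead).
steps = [1.0 + 2.0 * (i + j + k) for i in range(N) for j in range(N) for k in range(N)]

coeffs = dct3(block)
quantized = [round(c / s) for c, s in zip(coeffs, steps)]    # lossy step
restored = idct3([q * s for q, s in zip(quantized, steps)])  # back to pixels

err = max(abs(a - b) for a, b in zip(block, restored))
```

Because the invented test data is smooth, the restored block stays close to the original, while the downstream compressor sees far fewer distinct coefficient values.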
The application of Human Visual System-generated quantizers to a block-based Discrete Cosine Transform results in increased compressibility of a motion video stream by removing imperceptible content from the stream. The result is a curated stream from which detailed spatial and temporal information that the compressor would otherwise be required to reproduce has been removed. The stream also produces better matches for motion estimation algorithms. The quantizers are generated to be imperceptible at a specified viewing distance, expressed in visiBels; typical pre-processing strengths in common use include −12 vB for home viewing and −18 vB for immersive viewing.
Average compression savings for 6 Mbps HD video using the x264 codec when processed at −12 vB is 21.88%. Average compression savings for 16 Mbps Netflix 4K test suite video using the x264 codec processed at −12 vB is 29.81%. The same Netflix test suite, when compressed for immersive viewing (−18 vB), generates a 25.72% savings. These results are reproducible through use of a publicly accessible test bed. [ 6 ]
While the effects of ZPEG pre-processing are imperceptible to the average viewer at the specified viewing distance, edge effects introduced by block-based transform processing still affect the performance advantage of the video optimization process. While existing deblocking filters may be applied to improve this performance, optimal results are obtained through use of a multi-plane deblocking algorithm. Each plane is offset by one-half the block size in each of four directions, such that the offset of the plane is one of (0,0), (0,4), (4,0), and (4,4) in the case of 8x8 blocks [ 7 ] and four planes. Pixel values are then chosen according to their distance to the block edge, with interior pixel values being preferred to boundary pixel values. The resulting deblocked video generates substantially better optimization over a wide range of pre-processing strengths.
Conventional motion compression solutions are based on motion estimation technology. [ 8 ] While some transform-domain video codec technologies exist, ZPEG is based on the three-dimensional Discrete Cosine Transform (DCT), [ 9 ] where the three dimensions are pixel within line, line within frame, and temporal sequence of frames. The extraction of redundant visual data is performed by the computationally-efficient process of quantization of the transform-domain representation of the video, rather than the far more computationally expensive process of searching for object matches between blocks. Quantizer values are derived by applying a Human Visual Model to the basis set of DCT coefficients at a pre-determined perceptual processing strength. All perceptually redundant information is thereby removed from the transform domain representation of the video. Compression is then performed by an entropy removal process. [ 10 ]
Once the viewing conditions have been chosen under which the compressed content is to be viewed, a Human Visual Model generates quantizers for application to the three-dimensional Discrete Cosine Transform (DCT). [ 11 ] These quantizers are tuned to remove all imperceptible content from the motion video stream, greatly reducing the entropy of the representation. The viewing conditions expressed in visiBels and the correlation of pixels before transformation are generated for reference by the entropy encoding .
While quantized DCT coefficients have traditionally been modeled as Laplace distributions , [ 12 ] more recent work has suggested that the Cauchy distribution better models the quantized coefficient distributions. [ 13 ] The ZPEG entropy encoder encodes quantized three-dimensional DCT values according to a distribution that is completely characterized by the quantization matrix and the pixel correlations. This side-band information carried in the compressed stream enables the decoder to synchronize its internal state to the encoder. [ 14 ]
Each DCT band is entropy coded separately from all other bands. These coefficients are transmitted in band-wise order, starting with the DC component, followed by the successive bands in order of low resolution to high, similar to wavelet packet decomposition . [ 15 ] Following this convention ensures that the receiver will always receive the maximum possible resolution for any bandpass pipe, enabling a no-buffering transmission protocol.
The gold-standard measure of the perceived quality difference between a reference video and its degraded representation is defined in ITU-R recommendation BT.500. [ 16 ] The double-stimulus continuous quality-scale (DSCQS) method rates the perceived difference between the reference and distorted videos to create an overall difference score, derived from individual scores ranging from −3 to 3.
In an analogy to the single-stimulus continuous quality-scale (SSCQS) normalized metric Mean Opinion Score (MOS), [ 17 ] the overall DSCQS score is normalized to the range (−100, 100) and is termed the Differential Mean Opinion Score (DMOS), a measure of subjective video quality .
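Given the score ranges stated above, the DMOS normalization is a linear rescaling of the mean per-pair score from [−3, 3] to (−100, 100). The factor 100/3 below is inferred from those ranges, not quoted from the BT.500 text:

```python
def dmos(scores):
    """Normalise per-viewer difference scores in [-3, 3] to the (-100, 100)
    DMOS range by linear scaling. The factor 100/3 is inferred from the two
    ranges stated in the text, not quoted from BT.500."""
    mean = sum(scores) / len(scores)
    return mean * 100.0 / 3.0

print(dmos([0, 0, 0]))  # → 0.0
print(dmos([3, 3, 3]))  # → 100.0
```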
An ideal objective measure will correlate strongly to the DMOS score when applied to a reference/impaired video pair. A survey of existing techniques and their overall merits may be found on the Netflix blog. [ 18 ] ZPEG extends the list of available techniques by providing a subjective quality metric generated by comparing the Mean Squared Error metric of the difference between the reference and impaired videos after pre-processing at various perceptual strengths (in visiBels). The effective viewing distance at which the impairment difference is no longer perceivable is reported as the impairment metric.
Statistically ideal format conversion is done by interpolation of video content in Discrete Cosine Transform space. [ 19 ] The conversion process, particularly in the case of up-sampling, must consider the ringing artifacts that occur when abrupt discontinuities are present in a sequence of pixels being re-sampled. The resulting algorithm can down-sample or up-sample video formats by changing the frame dimensions, pixel aspect ratio , and frame rate .
|
https://en.wikipedia.org/wiki/ZPEG
|
The ZX-calculus is a rigorous graphical language for reasoning about linear maps between qubits , which are represented as string diagrams called ZX-diagrams . A ZX-diagram consists of a set of generators called spiders that represent specific tensors . These are connected together to form a tensor network similar to Penrose graphical notation . Due to the symmetries of the spiders and the properties of the underlying category , topologically deforming a ZX-diagram (i.e. moving the generators without changing their connections) does not affect the linear map it represents. In addition to the equalities between ZX-diagrams that are generated by topological deformations, the calculus also has a set of graphical rewrite rules for transforming diagrams into one another. The ZX-calculus is universal in the sense that any linear map between qubits can be represented as a diagram, and different sets of graphical rewrite rules are complete for different families of linear maps. ZX-diagrams can be seen as a generalisation of quantum circuit notation , and they form a strict subset of tensor networks which represent general fusion categories and wavefunctions of quantum spin systems.
The ZX-calculus was first introduced by Bob Coecke and Ross Duncan in 2008 as an extension of the categorical quantum mechanics school of reasoning. They introduced the fundamental concepts of spiders, strong complementarity and most of the standard rewrite rules. [ 1 ] [ 2 ]
In 2009 Duncan and Perdrix found the additional Euler decomposition rule for the Hadamard gate , [ 3 ] which was used by Backens in 2013 to establish the first completeness result for the ZX-calculus: [ 4 ] namely, that there exists a set of rewrite rules sufficient to prove all equalities between stabilizer ZX-diagrams, where phases are multiples of π / 2 {\displaystyle \pi /2} , up to global scalars. This result was later refined to completeness including scalar factors. [ 5 ]
Following an incompleteness result, [ 6 ] in 2017, a completion of the ZX-calculus for the approximately universal π / 4 {\displaystyle \pi /4} fragment was found, [ 7 ] in addition to two different completeness results for the universal ZX-calculus (where phases are allowed to take any real value). [ 8 ] [ 9 ]
Also in 2017, the book Picturing Quantum Processes was released, which builds quantum theory from the ground up using the ZX-calculus. [ 10 ] See also the 2019 book Categories for Quantum Theory . [ 11 ]
ZX-diagrams consist of green and red nodes called spiders , which are connected by wires . Wires may curve and cross, arbitrarily many wires may connect to the same spider, and multiple wires can go between the same pair of nodes. There are also Hadamard nodes, usually denoted by a yellow box, which always connect to exactly two wires.
ZX-diagrams represent linear maps between qubits , similar to the way in which quantum circuits represent unitary maps between qubits. ZX-diagrams differ from quantum circuits in two main ways. The first is that ZX-diagrams do not have to conform to the rigid topological structure of circuits, and hence can be deformed arbitrarily. The second is that ZX-diagrams come equipped with a set of rewrite rules, collectively referred to as the ZX-calculus . Using these rules, calculations can be performed in the graphical language itself.
The building blocks or generators of the ZX-calculus are graphical representations of specific states , unitary operators, linear isometries , and projections in the computational basis |0⟩, |1⟩ and the Hadamard-transformed basis |+⟩ = (|0⟩ + |1⟩)/√2 and |−⟩ = (|0⟩ − |1⟩)/√2. The colour green (or sometimes white) is used to represent the computational basis and the colour red (or sometimes grey) is used to represent the Hadamard-transformed basis. Each of these generators can furthermore be labelled by a phase, which is a real number from the interval [0, 2π). If the phase is zero it is usually not written.
The generators are:
The generators can be composed in two ways:
These laws correspond to the composition and tensor product of linear maps.
Any diagram written by composing generators in this way is called a ZX-diagram. ZX-diagrams are closed under both composition laws: connecting an output of one ZX-diagram to an input of another creates a valid ZX-diagram, and vertically stacking two ZX-diagrams creates a valid ZX-diagram.
Two diagrams represent the same linear operator if they consist of the same generators connected in the same ways. In other words, whenever two ZX-diagrams can be transformed into one another by topological deformation, they represent the same linear map. The controlled-NOT gate, for example, can be represented as a Z spider on the control wire connected to an X spider on the target wire.
The following example of a quantum circuit constructs a GHZ-state . By translating it into a ZX-diagram, using the rules that "adjacent spiders of the same color merge", "Hadamard changes the color of spiders", and "arity-2 spiders are identities", it can be graphically reduced to a GHZ-state.
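The spider fusion rule and the representations of CNOT and the GHZ-state described above can be checked numerically from the standard matrix interpretation of spiders. The sketch below is a minimal NumPy rendering of that semantics; the helper names `z_spider` and `x_spider` are illustrative, not a library API.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def z_spider(n_in, n_out, phase=0.0):
    """Z spider with n_in inputs and n_out outputs, as a 2^n_out x 2^n_in matrix:
    |0...0><0...0| + e^{i*phase} |1...1><1...1|."""
    m = np.zeros((2 ** n_out, 2 ** n_in), dtype=complex)
    m[0, 0] = 1.0                      # all-zeros entry
    m[-1, -1] = np.exp(1j * phase)     # all-ones entry
    return m

def x_spider(n_in, n_out, phase=0.0):
    """X spider: the Z spider conjugated by a Hadamard on every wire."""
    h_in, h_out = np.eye(1), np.eye(1)
    for _ in range(n_in):
        h_in = np.kron(h_in, H)
    for _ in range(n_out):
        h_out = np.kron(h_out, H)
    return h_out @ z_spider(n_in, n_out, phase) @ h_in

# Spider fusion: composing two Z spiders on a wire adds their phases.
assert np.allclose(z_spider(1, 1, 0.3) @ z_spider(1, 1, 0.5), z_spider(1, 1, 0.8))

# CNOT as a ZX-diagram: a phase-0 Z spider (1 input, 2 outputs) on the control
# wire, one output of which feeds a phase-0 X spider (2 inputs, 1 output) on the
# target wire.  Up to a scalar sqrt(2) this equals the CNOT matrix.
I2 = np.eye(2)
cnot_diag = np.kron(I2, x_spider(2, 1)) @ np.kron(z_spider(1, 2), I2)
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
assert np.allclose(np.sqrt(2) * cnot_diag, cnot)

# A Z spider with no inputs and three outputs is the (unnormalised) GHZ state.
ghz = z_spider(0, 3).flatten()
assert np.allclose(ghz, np.eye(8)[0] + np.eye(8)[7])   # |000> + |111>
```

The last assertion checks that the Z spider with zero inputs and three outputs is exactly the unnormalised GHZ state |000⟩ + |111⟩, which is what the graphical reduction produces up to a scalar.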
Any linear map between qubits can be represented as a ZX-diagram, i.e. ZX-diagrams are universal . A given ZX-diagram can be transformed into another ZX-diagram using the rewrite rules of the ZX-calculus if and only if the two diagrams represent the same linear map, i.e. the ZX-calculus is sound and complete .
The category of ZX-diagrams is a dagger compact category , which means that it has symmetric monoidal structure (a tensor product), is compact closed (it has cups and caps ) and comes equipped with a dagger , such that all these structures suitably interact. The objects of the category are the natural numbers, with the tensor product given by addition (the category is a PROP ). The morphisms of this category are ZX-diagrams. Two ZX-diagrams compose by juxtaposing them horizontally and connecting the outputs of the left-hand diagram to the inputs of the right-hand diagram. The monoidal product of two diagrams is represented by placing one diagram above the other.
Indeed, all ZX-diagrams are built freely from a set of generators via composition and monoidal product, modulo the equalities induced by the compact structure and the rules of the ZX-calculus given below. For instance, the identity of the object n is depicted as n parallel wires from left to right, with the special case n = 0 being the empty diagram.
The following table gives the generators together with their standard interpretations as linear maps, expressed in Dirac notation . The computational basis states are denoted by |0⟩, |1⟩ and the Hadamard -transformed basis states are |±⟩ = (|0⟩ ± |1⟩)/√2.
The n-fold tensor product of the vector |ψ⟩ is denoted by |ψ⟩^{⊗n}.
There are many different versions of the ZX-calculus, using different systems of rewrite rules as axioms. All share the meta rule "only the topology matters", which means that two diagrams are equal if they consist of the same generators connected in the same way, no matter how these generators are arranged in the diagram.
The following are some of the core set of rewrite rules, here given "up to scalar factor": i.e. two diagrams are considered to be equal if their interpretations as linear maps differ by a non-zero complex factor.
The ZX-calculus has been used in a variety of quantum information and computation tasks.
The rewrite rules of the ZX-calculus can be implemented formally as an instance of double-pushout rewriting . This has been used in the software Quantomatic to allow automated rewriting of ZX-diagrams (or more general string diagrams ). [ 24 ] In order to formalise the usage of the "dots" to denote any number of wires, such as used in the spider fusion rule, this software uses bang-box notation [ 25 ] to implement rewrite rules where the spiders can have any number of inputs or outputs.
A more recent project to handle ZX-diagrams is PyZX, which is primarily focused on circuit optimisation. [ 15 ]
The LaTeX package zx-calculus can be used to typeset ZX-diagrams. Many authors also use the software TikZiT as a GUI to help typeset diagrams.
The ZX-calculus is only one of several graphical languages for describing linear maps between qubits. The ZW-calculus was developed alongside the ZX-calculus, and can naturally describe the W-state and Fermionic quantum computing. [ 26 ] [ 27 ] It was the first graphical language which had a complete rule-set for an approximately universal set of linear maps between qubits, [ 8 ] and the early completeness results of the ZX-calculus use a reduction to the ZW-calculus.
A more recent language is the ZH-calculus . This adds the H-box as a generator, that generalizes the Hadamard gate from the ZX-calculus. It can naturally describe quantum circuits involving Toffoli gates. [ 28 ]
Up to scalars, the phase-free ZX-calculus, generated by 0-labelled spiders, is equivalent to the dagger compact closed category of linear relations over the finite field F₂. In other words, given a diagram with n inputs and m outputs in the phase-free ZX-calculus, its X stabilizers form a linear subspace of F₂^n ⊕ F₂^m, and the composition of phase-free ZX-diagrams corresponds to relational composition of these subspaces. In particular, the Z comonoid (given by the Z spider with one input and two outputs, and the Z spider with one input and no outputs) and the X monoid (given by the X spider with one output and two inputs, and the X spider with one output and no inputs) generate the symmetric monoidal category of matrices over F₂ with respect to the direct sum as the monoidal product.
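As a toy illustration of the relational reading (a simplification: relations are enumerated pointwise as sets of pairs rather than handled as stabilizer subspaces, and the helper names are my own), composing the Z "copy" spider with the X "XOR" spider over F₂ yields the parity action of CNOT:

```python
from itertools import product

def compose(r, s):
    """Relational composition: (a, c) is related iff some middle value links them."""
    return {(a, c) for a, b1 in r for b2, c in s if b1 == b2}

# First layer: copy the control bit, pass the target through: (c, t) ~ (c, c, t).
copy_layer = {((c, t), (c, c, t)) for c, t in product((0, 1), repeat=2)}
# Second layer: XOR the copied bit into the target: (c, m, t) ~ (c, m ^ t).
xor_layer = {((c, m, t), (c, m ^ t)) for c, m, t in product((0, 1), repeat=3)}

cnot_rel = compose(copy_layer, xor_layer)
# The composite is the F2-linear map (c, t) -> (c, c XOR t), i.e. CNOT's parity action.
assert cnot_rel == {((c, t), (c, c ^ t)) for c, t in product((0, 1), repeat=2)}
```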
|
https://en.wikipedia.org/wiki/ZX-calculus
|
The ZYPAD is a PDA designed to be worn on a user's wrist like a bracer and offers interface port features similar to a laptop computer. [ 2 ] It was developed by Eurotech . [ 3 ] It is arguable whether it qualifies as a watch, but it is referred to as a "Wrist Worn PC". It ships with Linux kernel 2.6, also supports Windows CE 5.0, and can sense motion, allowing uses such as entering standby mode when the user lowers his or her arm. It can determine its position by dead reckoning as well as via GPS. It supports Bluetooth , IrDA , and WiFi . [ 2 ]
The ZYPAD debuted in 2006 and the ZYPAD WL 1000 was the first marketed device, followed by the WL 1100 with Windows CE 6.0 support. [ 4 ] [ 5 ] Initial retail prices were set at around $2,000. [ 6 ] The Zypad WR1100 debuted in 2008 and features a housing made of high-strength fiberglass-reinforced nylon-magnesium alloy and a biometric fingerprint scanner. [ 7 ]
|
https://en.wikipedia.org/wiki/ZYPAD
|
The Z User Group ( ZUG ) was established in 1992 to promote use and development of the Z notation , a formal specification language for the description of and reasoning about computer-based systems. [ 3 ] [ 4 ] [ 5 ] It was formally constituted on 14 December 1992 during the ZUM'92 Z User Meeting [ 6 ] in London , England . [ 7 ]
ZUG initially organised a series of Z User Meetings approximately every 18 months. [ 8 ] [ 6 ] [ 9 ] From 2000, these became the ZB Conference (jointly with the B-Method , co-organized with APCB ), and from 2008 the ABZ Conference (with abstract state machines as well). In 2010, the ABZ Conference also included Alloy , a Z-like specification language with associated tool support. [ 10 ]
The Z User Group participated at the FM'99 World Congress on Formal Methods in Toulouse, France, in 1999. [ 11 ] The group and the associated Z notation have been studied as a community of practice . [ 12 ]
The following proceedings were produced by the Z User Group: [ 13 ] [ 14 ]
The following ZB conference proceedings were jointly produced with the Association de Pilotage des Conférences B (APCB), covering the Z notation and the related B-Method : [ 13 ]
From 2008, the ZB conferences were expanded to be the ABZ conference, also including abstract state machines. [ 15 ]
Successive chairs have been:
Successive secretaries have been:
|
https://en.wikipedia.org/wiki/Z_User_Group
|
|
https://en.wikipedia.org/wiki/Z_User_Meeting
|
The Z notation / ˈ z ɛ d / is a formal specification language used for describing and modelling computing systems. [ 1 ] It is targeted at the clear specification of computer programs and computer-based systems in general.
In 1974, Jean-Raymond Abrial published "Data Semantics". [ 2 ] He used a notation that would later be taught at the University of Grenoble until the end of the 1980s. While at EDF ( Électricité de France ), working with Bertrand Meyer , Abrial also worked on developing Z. [ 3 ] The Z notation is used in the 1980 book Méthodes de programmation . [ 4 ]
Z was originally proposed by Abrial in 1977 with the help of Steve Schuman and Bertrand Meyer . [ 5 ] It was developed further at the Programming Research Group at Oxford University , where Abrial worked in the early 1980s, having arrived at Oxford in September 1979.
Abrial has said that Z is so named "Because it is the ultimate language!" [ 6 ] although the name " Zermelo " is also associated with the Z notation through its use of Zermelo–Fraenkel set theory .
In 1992, the Z User Group (ZUG) was established to oversee activities concerning the Z notation, especially meetings and conferences. [ 7 ]
Z is based on the standard mathematical notation used in axiomatic set theory , lambda calculus , and first-order predicate logic . [ 8 ] All expressions in Z notation are typed , thereby avoiding some of the paradoxes of naive set theory . Z contains a standardized catalogue (called the mathematical toolkit ) of commonly used mathematical functions and predicates, defined using Z itself. It is augmented with Z schema boxes, which can be combined using their own operators, based on standard logical operators, and also by including schemas within other schemas. This allows Z specifications to be built up into large specifications in a convenient manner.
Because Z notation uses many non- ASCII symbols, the specification includes suggestions for rendering the Z notation symbols in ASCII and in LaTeX . There are also Unicode encodings for all standard Z symbols. [ 9 ]
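As an illustration of the LaTeX rendering mentioned above (this example is not from the article; it follows the well-known "birthday book" style of Z specifications and assumes the `zed-csp` package's `schema` environment):

```latex
\documentclass{article}
\usepackage{zed-csp}  % one of several packages for typesetting Z

\begin{document}
% A state schema: a set of known names and a partial function to dates,
% with the invariant that exactly the known names have recorded birthdays.
\begin{schema}{BirthdayBook}
  known : \power NAME \\
  birthday : NAME \pfun DATE
\where
  known = \dom birthday
\end{schema}
\end{document}
```

Here `\power`, `\pfun` and `\dom` render the power-set symbol, the partial-function arrow and the domain operator from the mathematical toolkit.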
ISO completed a Z standardization effort in 2002. This standard [ 10 ] and a technical corrigendum [ 11 ] are available from ISO free:
In 1992, Oxford University Computing Laboratory and IBM were jointly awarded The Queen's Award for Technological Achievement "for the development of ... the Z notation, and its application in the IBM Customer Information Control System ( CICS ) product." [ 12 ]
|
https://en.wikipedia.org/wiki/Z_notation
|
Zaječická hořká ("Zaječice's Bitter Water"; German : Saidschitzer Bitterwasser ) is strongly mineralized natural bitter water from the village of Zaječice in the Ústí nad Labem Region of the Czech Republic .
Zaječická hořká has been known since the 16th century for its purgative and gentle laxative effects. It rises from wells located in the vicinity of Zaječice, Korozluky and Sedlec (part of Korozluky). It ranks among strongly mineralized mineral waters of the magnesium sulphate type; it is cool, hypertonic , slightly opalescent , yellowish, scent-free, with a strongly bitter flavour.
During the history of the area, bitter waters from Zaječice, (Seidschitz), Sedlec (Sedlitz), Korozluky (Kollosoruk) and Bylany (Püllna) were exported to the whole world as the equivalent of Epsom salt products.
Trademarks for different markets were: Zaječická hořká, Seidschitzcher bitter-wasser, Sedlitz bitterwasser, Sedlitz water, Püllna wasser, Pillnaer bitter wasser.
The salts obtained by evaporation were made into "Biliner digestive pastiles".
Thanks to the well-known curative effects of the Zaječice and Sedlec waters, the name " Sedlitz powder " spread at the end of the 19th century for a laxative powder which, however, had nothing to do with "Biliner digestive pastiles". So-called " Sedlitz powder ", produced in different laboratories, had varying chemical compositions and side effects.
From the 17th century, Zaječická bitter water was distributed by the House of Lobkowicz at the Spa Bílinská Kyselka in the nearby town of Bílina . Water from the wells was thickened by evaporation and then filled into glass bottles.
The first scientific description of the therapeutic effects of water comes from balneologists Josef von Löschner , Franz Ambrosius Reuss and August Emanuel von Reuss .
|
https://en.wikipedia.org/wiki/Zaječická_hořká
|
Zalcitabine (2′,3′-dideoxycytidine, ddC ), also called dideoxycytidine, is a nucleoside analog reverse-transcriptase inhibitor (NRTI) sold under the trade name Hivid . Zalcitabine was the third antiretroviral to be approved by the Food and Drug Administration (FDA) for the treatment of HIV/AIDS . It is used as part of a combination regimen .
Zalcitabine appears less potent than some other nucleoside RTIs, has an inconvenient three-times daily frequency and is associated with serious adverse events. For these reasons it is now rarely used to treat human immunodeficiency virus ( HIV ), and it has even been removed from pharmacies entirely in some countries. [ 1 ]
Zalcitabine was the third antiretroviral to be approved by the Food and Drug Administration (FDA) for the treatment of HIV/AIDS. It was approved on June 19, 1992, as a monotherapy and again in 1996 for use in combination with zidovudine (AZT). Combinations of NRTIs were already in use prior to the second FDA approval, and triple-drug combinations of dual NRTIs with a protease inhibitor (PI) were not far off by this time.
In 1992 dideoxycytidine was listed as a specialty drug . [ 2 ]
The sale and distribution of zalcitabine has been discontinued since December 31, 2006. [ 3 ]
Lamivudine (3TC) significantly inhibits the intracellular phosphorylation of zalcitabine to the active form, and accordingly the drugs should not be administered together. [ 4 ]
Additionally, zalcitabine should not be used with other drugs that can cause peripheral neuropathy , such as didanosine and stavudine . [ 4 ]
The most common adverse events at the beginning of treatment are nausea and headache. More serious adverse events are peripheral neuropathy , which can occur in up to 33% of patients with advanced disease, oral ulcers , oesophageal ulcers and, rarely, pancreatitis . [ 4 ]
Resistance to zalcitabine develops infrequently compared with other nRTIs, and generally only occurs at a low level. [ 5 ] The most common mutation observed in vivo is T69D, which does not appear to give rise to cross-resistance to other nRTIs; mutations at positions 65, 74, 75, 184 and 215 in the pol gene are observed more rarely. [ 4 ] [ 5 ]
Zalcitabine has a very high oral absorption rate of over 80%. It is predominantly eliminated by the renal route, with a half-life of 2 hours. [ 4 ]
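Assuming simple first-order (single-compartment) elimination, which the text does not state explicitly, a 2-hour half-life implies that the fraction of drug remaining after t hours is 0.5^(t/2); a minimal sketch:

```python
# First-order elimination sketch; the 2-hour half-life is from the text,
# the single-compartment exponential model is an assumption.
HALF_LIFE_H = 2.0

def fraction_remaining(t_hours):
    """Fraction of the absorbed dose still present after t_hours."""
    return 0.5 ** (t_hours / HALF_LIFE_H)

assert abs(fraction_remaining(2.0) - 0.5) < 1e-12     # one half-life
assert abs(fraction_remaining(8.0) - 0.0625) < 1e-12  # four half-lives
```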
Zalcitabine is an analog of pyrimidine . It is a derivative of the naturally occurring deoxycytidine , made by replacing the hydroxyl group in position 3' with a hydrogen .
It is phosphorylated in T cells and other HIV target cells into its active triphosphate form, ddCTP. This active metabolite works as a substrate for HIV reverse transcriptase and is incorporated into the viral DNA, terminating chain elongation because the missing 3' hydroxyl group prevents further extension.
Apart from inhibiting retroviral reverse transcriptase, ddC inhibits mitochondrial DNA polymerase gamma, resulting in a dose-limiting toxicity. [ 6 ]
Zalcitabine was first synthesized in the 1960s by Jerome Horwitz [ 7 ] [ 8 ] and subsequently developed as an anti-HIV agent by Samuel Broder , Hiroaki Mitsuya , and Robert Yarchoan at the National Cancer Institute (NCI). Like didanosine , it was then licensed because the NCI may not market or sell drugs. The National Institutes of Health (NIH) thus licensed it to Hoffmann-La Roche .
|
https://en.wikipedia.org/wiki/Zalcitabine
|
Zalo is a Vietnamese instant messaging multi-platform service developed by VNG Corporation . Zalo is also used in other countries outside of Vietnam, including the United States , Japan , South Korea , Australia , Germany , Myanmar and Singapore . [ 1 ]
As of late September 2024, Zalo has approximately 77.6 million monthly active users with nearly 1.97 billion messages sent each day. [ 2 ] Zalo is currently available in all 63 provinces of Vietnam , including islands and remote areas, and is accessible in more than 23 countries worldwide.
The name for Zalo is a blend of Zing (an online website managed by VNG ) and alo (a standard phrase used to answer a call in Vietnam). [ 3 ]
Zalo was created by Vương Quang Khải, the executive Vice President of VNG. [ 4 ] After studying in the United States, he returned to Vietnam in 2007. [ 5 ] He later began to work primarily on artificial intelligence technology, developing Kiki , an AI-based virtual assistant .
The first software test version of Zalo was released in August 2012, eight months after development began in late 2011. By September 2012, Zalo had released new versions on iOS, Android and the Nokia S40 . [ 6 ] However, Zalo received little recognition due to various user issues, including having to use Zing ID for login and utilizing web platforms for the mobile application. [ 3 ] [ 7 ]
In December 2012, the official version for Zalo was released to the public. On 8 January 2013, Zalo reached the top spot on the App Store in Vietnam, passing their main competitor, WeChat . [ 8 ]
In March 2013, Zalo reached the million-user milestone and continued to grow quickly, having over 2 million users only two months later, surpassing all international competitors, including Viber and Line . [ 9 ]
Zalo has continuously been adding new features, such as Short videos , enhanced privacy features [ 10 ] and other improvements to the software.
In 2021, Zalo launched the Zalo Connect feature to help citizens in COVID-19 -affected areas connect together and support each other. [ 11 ]
In May 2022, Zalo added experimental end-to-end encryption (E2EE) utilizing the Signal protocol . [ 12 ] [ 13 ] As of version 25.01.01, this feature is no longer available. [ citation needed ]
Since 2022, Zalo began to introduce a series of artificial intelligence features, including eKYC in 2022; text-to-speech , voice-to-text , Zalo AI Avatar in 2023; voice dictation and zSticker AI in 2024. [ 14 ] [ 15 ]
In 2018, Zalo was found operating social media without a license, which resulted in a punishment. However, the authorities allowed the domains to remain active, giving VNG time to complete licensing documents, which they failed to do. [ 37 ]
In July 2019, the Ho Chi Minh City branch of the Ministry of Information and Communications issued a document demanding parties responsible for registering and managing domain names revoke Zalo.vn and Zalo.me by the 19th. [ 37 ] The domains continued to function beyond that date. Afterwards, the Ministry of Information and Communications extended the submission deadline. [ 38 ]
On 11 November 2019, the Zalo.vn domain was suspended for 45 days to allow authorities to investigate potential violations. Prior, VNG submitted an application for the license that did not meet the criteria for approval, resulting in the temporary suspension. [ 38 ]
On 24 December 2019, Zalo was officially granted a social network license. [ 39 ]
The Find Nearby feature of the Zalo mobile app allows users to scan for nearby individuals. [ 40 ] This feature was enabled by default, leading to strangers messaging users when location services are enabled, raising privacy concerns. [ 41 ]
On 1 August 2022, Zalo began charging users, resulting in restricted access to certain features for those with free accounts. [ 42 ] Immediately after, Zalo saw an increase in 1-star ratings on the App Store and Google Play , along with boycotts and multiple threats to delete the application. [ 43 ]
|
https://en.wikipedia.org/wiki/Zalo
|
Zamano plc was an Internet and mobile technology company based in Dublin . [ 3 ] [ 4 ] The company decided in February 2017 to bring their premium rate SMS business lines to a close by the end of 2017. [ 5 ] In November 2018, Zamano plc issued a press release stating that it was entering voluntary liquidation . [ 6 ] A liquidator was appointed in early 2019. [ 7 ]
The company was founded in 2000. [ 8 ] During 2002, Zamano reportedly partnered with RTÉ Interactive to launch several mobile and SMS-based games. [ 9 ] [ 10 ] Later in 2002, Zamano acquired Avoca's interactive SMS business. This provided the company with 10 premium rate SMS short codes covering four UK mobile networks. [ 11 ] [ 12 ]
In 2007, Zamano completed the acquisitions of Red Circle Technologies and Eirborne. [ 13 ] [ 14 ] [ 15 ] The company's IPO, in early 2007, coincided with these acquisitions. [ 16 ] [ 17 ]
In 2011, Zamano announced an investment in a new entity called Newsworthie. [ 18 ] However, investment in Newsworthie was ended in October 2011 with outgoing CEO John O'Shea announcing that "As our investment capacity is limited, we intend focusing it entirely on opportunities related to our mobile expertise, and have suspended further investment in Newsworthie." [ 19 ] O'Shea departed the company as CEO in November 2011 and was replaced by interim-CEO Pat Landy in a temporary role. [ 20 ]
In April 2015, Zamano signed a deal with Three mobile to allow the company's technology to be used as a conduit by e-commerce providers. [ 21 ]
In 2017 the company announced plans to close parts of its business, [ 5 ] and by September 2017 it was suggested that the company, following a management buyout, was considering investing in oil and gas exploration companies. [ 22 ] By late 2018, the company announced its liquidation, [ 6 ] and a liquidator was appointed in 2019. [ 7 ]
Zamano offered products for mobile messaging, mobile payments and mobile advertising . Everneo was a division of Zamano that offered mobile and cloud video, video clubs, mobile games , apps , entertainment portals, ringtones , wallpapers, screensavers , SMS services, and IVR lines . [ citation needed ]
In November 2013, Zamano launched Message Hero, an online business SMS service available to customers in Ireland and the UK. [ 23 ]
Zamano appeared in the Deloitte Technology Fast 50 from 2006 to 2009. [ 24 ] [ 25 ] [ 26 ] [ 27 ] This followed from coming second in Deloitte's "rising star" category in 2005. [ 28 ]
In 2002, Zamano was nominated for an Annual Global Mobile Award, in the Best Mobile Internet Application category for its TxT Manager. [ 29 ]
On 4 February 2010, the UK phone service regulator PhonepayPlus fined and formally reprimanded Zamano £15,000 for breaching its terms and conditions in relation to unsolicited reverse-charge premium rate SMS messages. PhonepayPlus also ordered Zamano to issue refunds. [ 30 ] On 29 March 2012, Zamano was fined a further £35,000 by PhonepayPlus for breaching its terms and conditions in relation to unsolicited reverse-charge premium rate SMS messages. [ 31 ]
Between May 2012 and September 2013, Zamano received 587 complaints from consumers in relation to a competition service, "Play2Win". Consumers stated that they had received unsolicited, reverse-billed text messages and that they had not engaged with the service, or acknowledged engaging with the service but stated that they believed it was free. A PhonepayPlus tribunal found that Zamano breached the Fairness and Misleading aspects of its code. Given repeated breaches of the code by Zamano, the tribunal imposed an increased fine of £40,000. [ 32 ]
|
https://en.wikipedia.org/wiki/Zamano
|
Zanamivir , sold under the brand name Relenza among others, is an antiviral medication used to treat and prevent influenza caused by influenza A and influenza B viruses . It is a neuraminidase inhibitor developed by the Australian biotech firm Biota Holdings and licensed to Glaxo Wellcome (later GlaxoSmithKline) in 1990. [ citation needed ] It was approved in the US in 1999, initially only for use as a treatment for influenza; in 2006, it was approved for prevention of influenza A and B. [ 5 ] Zanamivir was the first neuraminidase inhibitor to be commercially developed. [ citation needed ]
Zanamivir is used for the treatment of infections caused by influenza A and influenza B viruses, but in otherwise-healthy individuals the overall benefits appear to be small. It decreases the risk of contracting symptomatic, but not asymptomatic, influenza. The combination of diagnostic uncertainty, the risk of virus strain resistance, possible side effects and financial cost outweighs the small benefits of zanamivir for the prophylaxis and treatment of healthy individuals. [ 6 ]
In otherwise-healthy individuals, benefits overall appear to be small. [ 6 ] Zanamivir shortens the duration of symptoms of influenza-like illness (unconfirmed influenza or 'the flu') by less than a day. In children with asthma there was no clear effect on the time to first alleviation of symptoms. [ 7 ] Whether it affects the risk of hospitalization or death is not clear. [ 6 ] There is no proof that zanamivir reduces hospitalizations or complications of influenza such as pneumonia, bronchitis , middle ear infection , and sinusitis . [ 7 ] Zanamivir did not reduce the risk of self-reported, investigator-mediated pneumonia or radiologically confirmed pneumonia in adults. The effect on pneumonia in children was also not significant. [ 8 ]
Low- to moderate-quality evidence indicates it decreases the risk of contracting influenza by 1 to 12% in those exposed. [ 6 ] Prophylaxis trials showed that zanamivir reduced the risk of symptomatic influenza in individuals and households, but there was no evidence of an effect on asymptomatic influenza or on other influenza-like illnesses. There was also no evidence of a reduction in the risk of person-to-person spread of the influenza virus. [ 7 ] The evidence for a benefit in preventing influenza is weak in children, with concerns of publication bias in the literature. [ 9 ]
As of 2009, no influenza strains in the US had shown any signs of resistance. [ 10 ] A meta-analysis from 2011 found that zanamivir resistance had been rarely reported. [ 11 ] Antiviral resistance can emerge during or after treatment with antivirals in certain people (e.g., the immunosuppressed ). [ 12 ] In 2013, genes expressing resistance to zanamivir (and oseltamivir ) were found in Chinese patients infected with avian influenza A H7N9. [ 13 ]
Dosing is limited to the inhalation route. This restricts its usage, as treating asthmatics could induce bronchospasms . [ 14 ] In 2006, the US Food and Drug Administration (FDA) found that breathing problems (bronchospasm), including deaths, had been reported in some patients after the initial approval of Relenza. Most of these patients had asthma or chronic obstructive pulmonary disease. Relenza was therefore not recommended for treatment or prophylaxis of seasonal influenza in individuals with asthma or chronic obstructive pulmonary disease. [ 5 ] Since 2009, the zanamivir package insert has contained precautionary information regarding the risk of bronchospasm in patients with respiratory disease. [ 15 ] GlaxoSmithKline (GSK) and the FDA notified healthcare professionals of a report of the death of a patient with influenza who had received zanamivir inhalation powder, which was solubilized and administered by mechanical ventilation. [ 16 ]
In adults there was no increased risk of reported adverse events in trials. There was little evidence of the possible harms associated with the treatment of children with zanamivir. [ 7 ] Zanamivir has not been known to cause toxic effects and has low systemic exposure to the human body. [ 17 ]
Zanamivir works by binding to the active site of the neuraminidase protein, rendering the influenza virus unable to escape its host cell and infect others. [ 18 ] It is also an inhibitor of influenza virus replication in vitro and in vivo . In clinical trials, zanamivir was found to reduce the time-to-symptom resolution by 1.5 days if therapy was started within 48 hours of the onset of symptoms. [ citation needed ]
The bioavailability of zanamivir is 2%. After inhalation, zanamivir is concentrated in the lungs and oropharynx , where up to 15% of the dose is absorbed and excreted in urine. [ 19 ]
Zanamivir was first made in 1989 by scientists led by Peter Colman [ 20 ] [ 21 ] and Joseph Varghese [ 22 ] at the Australian CSIRO , in collaboration with the Victorian College of Pharmacy and Monash University . Zanamivir was the first of the neuraminidase inhibitors . [ citation needed ] The discovery was initially funded by the Australian biotechnology company Biota and was part of Biota's ongoing program to develop antiviral agents through rational drug design . Its strategy relied on the availability of the structure of influenza neuraminidase by X-ray crystallography . It was also known, as far back as 1974, that 2-deoxy-2,3-didehydro- N -acetylneuraminic acid (DANA), a sialic acid analogue, is an inhibitor of neuraminidase. [ 23 ]
Computational chemistry techniques were used to probe the active site of the enzyme, in an attempt to design derivatives of DANA that would bind tightly to the amino acid residues of the catalytic site, and so would be potent and specific inhibitors of the enzyme. The GRID software by Molecular Discovery was used to determine energetically favourable interactions between various functional groups and residues in the catalytic site canyon. This investigation showed that a negatively charged zone occurs in the neuraminidase active site that aligns with the C 4 hydroxyl group of DANA. This hydroxyl was therefore replaced with a positively charged amino group; the 4-amino DANA was shown to be 100 times better as an inhibitor than DANA, owing to the formation of a salt bridge with a conserved glutamic acid (119) in the active site. Glu 119 was also observed to sit at the bottom of a conserved pocket in the active site that is just big enough to accommodate the larger, but more basic, guanidine functional group . [ 24 ] Zanamivir, a transition-state analogue inhibitor of neuraminidase, was the result. [ 25 ]
In 1999, zanamivir was approved in the US [ 26 ] and the European Union [ citation needed ] for the treatment of influenza A and B. The FDA advisory committee had recommended, by a vote of 13 to 4, that it should not be approved, because it lacked efficacy and was no more effective than placebo when the patients were on other drugs such as paracetamol. But the FDA leadership overruled the committee. [ 27 ] In 2006, zanamivir was approved in the US [ 5 ] and the European Union [ citation needed ] for prevention of influenza A and B.
|
https://en.wikipedia.org/wiki/Zanamivir
|
Zanja de Alsina ( Spanish pronunciation: [ˈsaŋxa ðe alˈsina] , Alsina 's trench ) was a system of trenches and wooden watchtowers ( mangrullos ) built in the central and southern parts of Buenos Aires Province to defend the territories of the federal government against indigenous Mapuche malones . The 3-meter (9.8 ft)-wide trench was reinforced with 80 small strongholds and garrisons, called fortines . The defensive line was named after Adolfo Alsina , Argentine Minister of War under President Nicolás Avellaneda , who planned the building of the trench in the 1870s. The trench was denounced when it proved unable to stop large-scale incursions between 1876 and 1877.
This article about the history of Argentina is a stub . You can help Wikipedia by expanding it .
This article about the military of Argentina is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Zanja_de_Alsina
|
The Zanstra method is a method to determine the temperature of central stars of planetary nebulae . It was developed by Herman Zanstra in 1927.
It is assumed that the nebula is optically thick in the Lyman continuum , which means that all ionizing photons from the central star are absorbed inside the nebula.
Based on this assumption, the intensity ratio of a stellar reference frequency to a nebular line such as Hβ can be used to determine the central star's effective temperature.
For a pure hydrogen nebula, the ionization equilibrium states that the number per unit time of ionizing photons from the central star has to be balanced by the rate of recombinations of protons and electrons to neutral hydrogen inside the Strömgren sphere of the nebula. Ionizations can only be caused by photons having at least the frequency ν 0 {\displaystyle \nu _{0}} , corresponding to the ionization potential of hydrogen, which is 13.6 eV:
∫ ν 0 ∞ L ν h ν d ν = ∫ 0 r 1 n p n e α B d V {\displaystyle \int _{\nu _{0}}^{\infty }{\frac {L_{\nu }}{h\nu }}d\nu =\int _{0}^{r_{1}}n_{p}n_{e}\alpha _{B}dV}
Here, r 1 {\displaystyle r_{1}} is the radius of the Strömgren sphere and n p , n e {\displaystyle n_{p},n_{e}} are the number densities of protons and electrons, respectively. The luminosity of the central star is denoted by L ν {\displaystyle L_{\nu }} and α B {\displaystyle \alpha _{B}} is the recombination coefficient to the excited levels of hydrogen.
The ratio between the number of photons emitted by the nebula in the Hβ line and the number of ionizing photons from the central star can then be estimated:
L ν H β ∫ ν 0 ∞ L ν h ν d ν ≈ h ν H β α H β eff α B {\displaystyle {\frac {L_{\nu _{H\beta }}}{\int _{\nu _{0}}^{\infty }{\frac {L_{\nu }}{h\nu }}d\nu }}\approx h\nu _{H\beta }{\frac {\alpha _{H\beta }^{\text{eff}}}{\alpha _{B}}}}
where α H β eff {\displaystyle \alpha _{H\beta }^{\text{eff}}} is the effective recombination coefficient for Hβ.
Given a stellar reference frequency ν s {\displaystyle \nu _{s}} , the Zanstra ratio is defined by
Z = L ν s ∫ ν 0 ∞ L ν h ν d ν = h ν H β α H β eff α B F ν s F H β {\displaystyle Z={\frac {L_{\nu _{s}}}{\int _{\nu _{0}}^{\infty }{\frac {L_{\nu }}{h\nu }}d\nu }}=h\nu _{H\beta }{\frac {\alpha _{H\beta }^{\text{eff}}}{\alpha _{B}}}{\frac {F_{\nu _{s}}}{F_{H\beta }}}}
with F ν s {\displaystyle F_{\nu _{s}}} and F H β {\displaystyle F_{H\beta }} being the fluxes in the stellar reference frequency and in Hβ, respectively. Using the second formula, the Zanstra ratio can be determined by observations.
On the other hand, applying model stellar atmospheres, theoretical Zanstra ratios may be computed in dependence of the central star's effective temperature which may be fixed by comparison with the observed value of the Zanstra ratio.
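The theoretical side of this comparison is often approximated by treating the central star as a blackbody. The sketch below is purely illustrative (the constants, integration limits, and the choice of a visual-band reference frequency are assumptions, not taken from any reference): it computes a Zanstra-like ratio of stellar flux at a reference frequency to the ionizing photon rate, and shows that this ratio falls steeply with effective temperature, which is what makes the ratio usable as a temperature diagnostic.

```python
# Illustrative sketch: Zanstra-style ratio for a blackbody central star.
# SI units throughout; nu_s is arbitrarily taken in the visual band.
import math

H = 6.626e-34                  # Planck constant, J s
K = 1.381e-23                  # Boltzmann constant, J/K
C = 2.998e8                    # speed of light, m/s
NU0 = 13.6 * 1.602e-19 / H     # hydrogen ionization edge, ~3.29e15 Hz
NU_S = 5.45e14                 # visual-band reference frequency, Hz

def planck(nu, T):
    """Blackbody specific intensity B_nu(T)."""
    return (2 * H * nu**3 / C**2) / math.expm1(H * nu / (K * T))

def ionizing_photon_rate(T, steps=20000):
    """Trapezoidal estimate of the ionizing photon rate per unit area,
    i.e. the integral of pi*B_nu/(h*nu) over nu from nu0 upward."""
    hi = 20 * NU0              # the integrand is negligible beyond this
    d = (hi - NU0) / steps
    total = 0.0
    for i in range(steps + 1):
        nu = NU0 + i * d
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.pi * planck(nu, T) / (H * nu)
    return total * d

def zanstra_like_ratio(T):
    """Stellar flux at nu_s divided by the ionizing photon rate."""
    return math.pi * planck(NU_S, T) / ionizing_photon_rate(T)

# Hotter stars emit far more ionizing photons per unit visual flux,
# so the ratio falls steeply with temperature.
assert zanstra_like_ratio(30000.0) > zanstra_like_ratio(100000.0)
```

Because the ratio is monotone in temperature over the relevant range, an observed value picks out a unique blackbody temperature; real applications replace the Planck function with model stellar atmospheres, as the text notes.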
|
https://en.wikipedia.org/wiki/Zanstra_method
|
The Zapple Monitor was a firmware -based product developed by Roger Amidon [ 1 ] at Technical Design Laboratories (also known as TDL ). TDL was based in Princeton, New Jersey , USA in the 1970s and early 1980s. [ 2 ]
The Zapple monitor was a primitive operating system which could be expanded and used as a Basic Input/Output System ( BIOS ) for 8080 - and Z80 -based computers. Much of the functionality of Zapple would find its way into applications like 'Debug' in MS-DOS .
Zapple commands allowed a user to examine and modify memory and I/O , execute software (Goto or Call), and perform a variety of other operations. The program required little in the way of then-expensive read-only memory (ROM) or RAM . An experienced user could use Zapple to test and debug code, verify hardware function, test memory, and so on.
A typical command line would start with a letter such as 'X' (examine memory) followed by a hexadecimal word (the memory address – 01AB) and [enter] or [space]. After this sequence the content of the memory location would be shown [FF] and the user could enter a hexadecimal byte [00] to replace the contents of the address, or hit [space] or [enter] to move to the next address [01AC]. An experienced user could enter a small program in this manner, entering machine language from memory.
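The examine/replace/advance dialogue described above can be mimicked with a small model. The following Python sketch is purely illustrative (the function name and dictionary-backed memory are inventions, not TDL code; the real Zapple was 8080/Z80 machine code), but it reproduces the behaviour of the 'X' command.

```python
# Toy model of Zapple's 'X' (examine memory) dialogue.  Memory is a dict of
# address -> byte; each entry in `entries` is either a replacement hex byte
# or '' for [space]/[enter] (keep the current byte and advance).
def examine(memory, addr, entries):
    log = []
    for entry in entries:
        # show "ADDR CONTENTS", defaulting unwritten locations to FF
        log.append(f"{addr:04X} {memory.get(addr, 0xFF):02X}")
        if entry:
            memory[addr] = int(entry, 16)   # user typed a new byte
        addr = (addr + 1) & 0xFFFF          # advance to the next address
    return log

mem = {0x01AB: 0xFF}
log = examine(mem, 0x01AB, ["00", ""])      # "X 01AB", type 00, then [space]
assert log == ["01AB FF", "01AC FF"]
assert mem[0x01AB] == 0x00
```

Entering a short program byte-by-byte, as the text describes, is just a longer sequence of replacement entries starting from the program's load address.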
Because of the simple structure of the program, consisting of a vector table (one entry for each letter) and a small number of subroutines, and because the source code was readily available, adding to or modifying Zapple was straightforward. The dominant operating system of the era, CP/M , required the computer manufacturer or hobbyist to develop a hardware-specific BIOS. Many users tested their BIOS subroutines using Zapple to verify that, for example, a floppy disk track seek command or read sector command was functioning correctly, by extending Zapple to accommodate these operations in the hardware environment.
The general structure of Zapple lives on in the code of many older programmers working on embedded systems as it provides a simple mechanism to test the hardware before moving to more advanced user interfaces.
This operating-system -related article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Zapple_Monitor
|
The Zarankiewicz problem , an unsolved problem in mathematics, asks for the largest possible number of edges in a bipartite graph that has a given number of vertices and has no complete bipartite subgraphs of a given size. [ 1 ] It belongs to the field of extremal graph theory , a branch of combinatorics , and is named after the Polish mathematician Kazimierz Zarankiewicz , who proposed several special cases of the problem in 1951. [ 2 ]
A bipartite graph G = ( U ∪ V , E ) {\displaystyle G=(U\cup V,E)} consists of two disjoint sets of vertices U {\displaystyle U} and V {\displaystyle V} , and a set of edges each of which connects a vertex in U {\displaystyle U} to a vertex in V {\displaystyle V} . No two edges can both connect the same pair of vertices. A complete bipartite graph is a bipartite graph in which every pair of a vertex from U {\displaystyle U} and a vertex from V {\displaystyle V} is connected to each other. A complete bipartite graph in which U {\displaystyle U} has s {\displaystyle s} vertices and V {\displaystyle V} has t {\displaystyle t} vertices is denoted K s , t {\displaystyle K_{s,t}} . If G = ( U ∪ V , E ) {\displaystyle G=(U\cup V,E)} is a bipartite graph, and there exists a set of s {\displaystyle s} vertices of U {\displaystyle U} and t {\displaystyle t} vertices of V {\displaystyle V} that are all connected to each other, then these vertices induce a subgraph of the form K s , t {\displaystyle K_{s,t}} . (In this formulation, the ordering of s {\displaystyle s} and t {\displaystyle t} is significant: the set of s {\displaystyle s} vertices must be from U {\displaystyle U} and the set of t {\displaystyle t} vertices must be from V {\displaystyle V} , not vice versa.)
The Zarankiewicz function z ( m , n ; s , t ) {\displaystyle z(m,n;s,t)} denotes the maximum possible number of edges in a bipartite graph G = ( U ∪ V , E ) {\displaystyle G=(U\cup V,E)} for which | U | = m {\displaystyle |U|=m} and | V | = n {\displaystyle |V|=n} , but which does not contain a subgraph of the form K s , t {\displaystyle K_{s,t}} . As a shorthand for an important special case, z ( n ; t ) {\displaystyle z(n;t)} is the same as z ( n , n ; t , t ) {\displaystyle z(n,n;t,t)} . The Zarankiewicz problem asks for a formula for the Zarankiewicz function, or (failing that) for tight asymptotic bounds on the growth rate of z ( n ; t ) {\displaystyle z(n;t)} assuming that t {\displaystyle t} is a fixed constant, in the limit as n {\displaystyle n} goes to infinity.
For s = t = 2 {\displaystyle s=t=2} this problem is the same as determining cages with girth six. The Zarankiewicz problem, cages and finite geometry are strongly interrelated. [ 3 ]
The same problem can also be formulated in terms of digital geometry . The possible edges of a bipartite graph G = ( U ∪ V , E ) {\displaystyle G=(U\cup V,E)} can be visualized as the points of a | U | × | V | {\displaystyle |U|\times |V|} rectangle in the integer lattice , and a complete subgraph is a set of rows and columns in this rectangle in which all points are present. Thus, z ( m , n ; s , t ) {\displaystyle z(m,n;s,t)} denotes the maximum number of points that can be placed within an m × n {\displaystyle m\times n} grid in such a way that no subset of rows and columns forms a complete s × t {\displaystyle s\times t} grid. [ 4 ] An alternative and equivalent definition is that z ( m , n ; s , t ) {\displaystyle z(m,n;s,t)} is the smallest integer k {\displaystyle k} such that every (0,1)-matrix of size m × n {\displaystyle m\times n} with k + 1 {\displaystyle k+1} ones must have a set of s {\displaystyle s} rows and t {\displaystyle t} columns such that the corresponding s × t {\displaystyle s\times t} submatrix is made up only of 1s .
The number z ( n ; 2 ) {\displaystyle z(n;2)} asks for the maximum number of edges in a bipartite graph with n {\displaystyle n} vertices on each side that has no 4-cycle (its girth is six or more). Thus, z ( 2 ; 2 ) = 3 {\displaystyle z(2;2)=3} (achieved by a three-edge path), and z ( 3 ; 2 ) = 6 {\displaystyle z(3;2)=6} (a hexagon ).
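These small values can be confirmed by exhaustive search. The following Python sketch (illustrative, not part of the article) enumerates all n-by-n bipartite graphs and checks the K_{2,2}-free condition directly.

```python
# Brute-force check of the small values z(2;2)=3 and z(3;2)=6: enumerate all
# subsets of the n*n possible edges and keep the largest edge count among
# graphs with no K_{2,2} subgraph (equivalently, no 4-cycle).
from itertools import combinations

def zarankiewicz_22(n):
    """Max edges of an n-by-n bipartite graph containing no K_{2,2}."""
    cells = [(i, j) for i in range(n) for j in range(n)]
    best = 0
    for mask in range(1 << len(cells)):
        edges = {cells[k] for k in range(len(cells)) if mask >> k & 1}
        if len(edges) <= best:
            continue
        # K_{2,2}-free  <=>  every pair of left vertices shares <= 1 neighbour
        ok = all(
            sum((i1, j) in edges and (i2, j) in edges for j in range(n)) <= 1
            for i1, i2 in combinations(range(n), 2)
        )
        if ok:
            best = len(edges)
    return best

assert zarankiewicz_22(2) == 3   # a three-edge path
assert zarankiewicz_22(3) == 6   # a hexagon
```

The search is only feasible for tiny n (it examines 2^(n^2) graphs), which is why the general problem calls for the bounds and constructions discussed below.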
In his original formulation of the problem, Zarankiewicz asked for the values of z ( n ; 3 ) {\displaystyle z(n;3)} for n = 4 , 5 , 6 {\displaystyle n=4,5,6} . The answers were supplied soon afterwards by Wacław Sierpiński : z ( 4 ; 3 ) = 13 {\displaystyle z(4;3)=13} , z ( 5 ; 3 ) = 20 {\displaystyle z(5;3)=20} , and z ( 6 ; 3 ) = 26 {\displaystyle z(6;3)=26} . [ 4 ] The case of z ( 4 ; 3 ) {\displaystyle z(4;3)} is relatively simple: a 13-edge bipartite graph with four vertices on each side of the bipartition, and no K 3 , 3 {\displaystyle K_{3,3}} subgraph, may be obtained by adding one of the long diagonals to the graph of a cube . In the other direction, if a bipartite graph with 14 edges has four vertices on each side, then two vertices on each side must have degree four. Removing these four vertices and their 12 incident edges leaves a nonempty set of edges, any of which together with the four removed vertices forms a K 3 , 3 {\displaystyle K_{3,3}} subgraph.
The Kővári–Sós–Turán theorem provides an upper bound on the solution to the Zarankiewicz problem. It was established by Tamás Kővári, Vera T. Sós and Pál Turán shortly after the problem had been posed: z ( m , n ; s , t ) < ( s − 1 ) 1 / t ( n − t + 1 ) m 1 − 1 / t + ( t − 1 ) m {\displaystyle z(m,n;s,t)<(s-1)^{1/t}(n-t+1)m^{1-1/t}+(t-1)m}
Kővári, Sós, and Turán originally proved this inequality for z ( n ; t ) {\displaystyle z(n;t)} . [ 5 ] Shortly afterwards, Hyltén-Cavallius observed that essentially the same argument can be used to prove the above inequality. [ 6 ] An improvement on the second term of the upper bound on z ( n ; t ) {\displaystyle z(n;t)} was given by Štefan Znám : [ 7 ]
If s {\displaystyle s} and t {\displaystyle t} are assumed to be constant, then asymptotically, using the big O notation , these formulae can be expressed as
In the particular case m = n {\displaystyle m=n} , assuming without loss of generality that s ≤ t {\displaystyle s\leq t} , we have the asymptotic upper bound z ( n , n ; s , t ) = O ( n 2 − 1 / s ) {\displaystyle z(n,n;s,t)=O(n^{2-1/s})} .
One can verify that among the two asymptotic upper bounds of z ( m , n ; s , t ) {\displaystyle z(m,n;s,t)} in the previous section, the first bound is better when m = o ( n s / t ) {\displaystyle m=o(n^{s/t})} , and the second bound becomes better when m = ω ( n s / t ) {\displaystyle m=\omega (n^{s/t})} . Therefore, if one can show a lower bound for z ( n s / t , n ; s , t ) {\displaystyle z(n^{s/t},n;s,t)} that matches the upper bound up to a constant, then by a simple sampling argument (on either an n t / s × t {\displaystyle n^{t/s}\times t} bipartite graph or an m × m s / t {\displaystyle m\times m^{s/t}} bipartite graph that achieves the maximum edge number), we can show that for all m , n {\displaystyle m,n} , one of the above two upper bounds is tight up to a constant. This leads to the following question: is it the case that for any fixed s ≤ t {\displaystyle s\leq t} and m ≤ n s / t {\displaystyle m\leq n^{s/t}} , we have
In the special case m = n {\displaystyle m=n} , up to constant factors, z ( n , n ; s , t ) {\displaystyle z(n,n;s,t)} has the same order as ex ( n , K s , t ) {\displaystyle {\text{ex}}(n,K_{s,t})} , the maximum number of edges in an n {\displaystyle n} -vertex (not necessarily bipartite) graph that has no K s , t {\displaystyle K_{s,t}} as a subgraph. In one direction, a bipartite graph with n {\displaystyle n} vertices on each side and z ( n , n ; s , t ) {\displaystyle z(n,n;s,t)} edges must have a subgraph with n {\displaystyle n} vertices and at least z ( n , n ; s , t ) / 4 {\displaystyle z(n,n;s,t)/4} edges; this can be seen from choosing n / 2 {\displaystyle n/2} vertices uniformly at random from each side, and taking the expectation. In the other direction, we can transform a graph with n {\displaystyle n} vertices and no copy of K s , t {\displaystyle K_{s,t}} into a bipartite graph with n {\displaystyle n} vertices on each side of its bipartition, twice as many edges and still no copy of K s , t {\displaystyle K_{s,t}} , by taking its bipartite double cover . [ 9 ] Same as above, with the convention that s ≤ t {\displaystyle s\leq t} , it has been conjectured that
z ( n , n ; s , t ) = Θ ( n 2 − 1 / s ) {\displaystyle z(n,n;s,t)=\Theta (n^{2-1/s})} for all constant values of s , t {\displaystyle s,t} . [ 10 ]
For some specific values of s , t {\displaystyle s,t} (e.g., for t {\displaystyle t} sufficiently larger than s {\displaystyle s} , or for s = 2 {\displaystyle s=2} ), the above statements have been proved using various algebraic and random algebraic constructions. The answer to the general question, however, remains unknown.
For s = t = 2 {\displaystyle s=t=2} , a bipartite graph with n {\displaystyle n} vertices on each side, Ω ( n 3 / 2 ) {\displaystyle \Omega (n^{3/2})} edges, and no K 2 , 2 {\displaystyle K_{2,2}} may be obtained as the Levi graph , or point-line incidence graph, of a projective plane of order q {\displaystyle q} , a system of q 2 + q + 1 {\displaystyle q^{2}+q+1} points and q 2 + q + 1 {\displaystyle q^{2}+q+1} lines in which each two points determine a unique line, and each two lines intersect at a unique point. We construct a bipartite graph associated to this projective plane that has one vertex part as its points, the other vertex part as its lines, such that a point and a line are connected if and only if they are incident in the projective plane. This leads to a K 2 , 2 {\displaystyle K_{2,2}} -free graph with q 2 + q + 1 {\displaystyle q^{2}+q+1} vertices on each side and ( q 2 + q + 1 ) ( q + 1 ) {\displaystyle (q^{2}+q+1)(q+1)} edges.
Since this lower bound matches the upper bound given by I. Reiman, [ 11 ] we have the asymptotic formula [ 12 ] z ( n ; 2 ) = ( 1 2 + o ( 1 ) ) n 3 / 2 {\displaystyle z(n;2)=({\tfrac {1}{2}}+o(1))n^{3/2}} .
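The construction can be checked concretely for the smallest case q = 2, where the projective plane is the Fano plane. The following Python sketch (illustrative) builds the point-line incidence graph over F_2 and verifies the edge count and the K_{2,2}-free property.

```python
# Levi graph of PG(2,2), the Fano plane: 7 points, 7 lines,
# (q^2+q+1)(q+1) = 21 incidences for q = 2, and no K_{2,2}.
from itertools import combinations, product

# Points: nonzero vectors of F_2^3.  By duality, lines are indexed the same
# way; a point p lies on line l exactly when their dot product is 0 mod 2.
points = [p for p in product(range(2), repeat=3) if any(p)]
lines = points

edges = {(p, l) for p in points for l in lines
         if sum(a * b for a, b in zip(p, l)) % 2 == 0}

assert len(points) == 7 and len(lines) == 7
assert len(edges) == 21                 # (q^2+q+1)(q+1) with q=2

# K_{2,2}-free: any two distinct points lie on exactly one common line.
for p1, p2 in combinations(points, 2):
    common = [l for l in lines if (p1, l) in edges and (p2, l) in edges]
    assert len(common) == 1
```

The same computation works for any prime power q by replacing the dot product over F_2 with one over F_q and normalising points and lines up to scalar multiples.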
For s = t = 3 {\displaystyle s=t=3} , bipartite graphs with n {\displaystyle n} vertices on each side, Ω ( n 5 / 3 ) {\displaystyle \Omega (n^{5/3})} edges, and no K 3 , 3 {\displaystyle K_{3,3}} may again be constructed from finite geometry, by letting the vertices represent points and spheres (of a carefully chosen fixed radius) in a three-dimensional finite affine space , and letting the edges represent point-sphere incidences. [ 13 ]
More generally, consider s = 2 {\displaystyle s=2} and any t {\displaystyle t} . Let F q {\displaystyle \mathbb {F} _{q}} be the q {\displaystyle q} -element finite field, and h {\displaystyle h} be an element of multiplicative order t {\displaystyle t} , in the sense that H = { 1 , h , … , h t − 1 } {\displaystyle H=\{1,h,\dots ,h^{t-1}\}} form a t {\displaystyle t} -element subgroup of the multiplicative group F q ∗ {\displaystyle \mathbb {F} _{q}^{*}} . We say that two nonzero elements ( a , b ) , ( a ′ , b ′ ) ∈ F q × F q {\displaystyle (a,b),(a',b')\in \mathbb {F} _{q}\times \mathbb {F} _{q}} are equivalent if we have a ′ = h d a {\displaystyle a'=h^{d}a} and b ′ = h d b {\displaystyle b'=h^{d}b} for some d {\displaystyle d} . Consider a graph G {\displaystyle G} on the set of all equivalence classes ⟨ a , b ⟩ {\displaystyle \langle a,b\rangle } , such that ⟨ a , b ⟩ {\displaystyle \langle a,b\rangle } and ⟨ x , y ⟩ {\displaystyle \langle x,y\rangle } are connected if and only if a x + b y ∈ H {\displaystyle ax+by\in H} . One can verify that G {\displaystyle G} is well-defined and free of K 2 , t + 1 {\displaystyle K_{2,t+1}} , and every vertex in G {\displaystyle G} has degree q {\displaystyle q} or q − 1 {\displaystyle q-1} . Hence we have the upper bound [ 14 ]
For t {\displaystyle t} sufficiently larger than s {\displaystyle s} , the above conjecture z ( n , n ; s , t ) = Θ ( n 2 − 1 / s ) {\displaystyle z(n,n;s,t)=\Theta (n^{2-1/s})} was verified by Kollár, Rónyai, and Szabó [ 15 ] and Alon, Rónyai, and Szabó [ 16 ] using the construction of norm graphs and projective norm graphs over finite fields.
For t > s ! {\displaystyle t>s!} , consider the norm graph NormGraph p,s with vertex set F p s {\displaystyle \mathbb {F} _{p^{s}}} , such that every two vertices a , b ∈ F p s {\displaystyle a,b\in \mathbb {F} _{p^{s}}} are connected if and only if N ( a + b ) = 1 {\displaystyle N(a+b)=1} , where N : F p s → F p {\displaystyle N\colon \mathbb {F} _{p^{s}}\rightarrow \mathbb {F} _{p}} is the norm map defined by N ( x ) = x ( p s − 1 ) / ( p − 1 ) {\displaystyle N(x)=x^{(p^{s}-1)/(p-1)}} .
It is not hard to verify that the graph has p s {\displaystyle p^{s}} vertices and at least p 2 s − 1 / 2 {\displaystyle p^{2s-1}/2} edges. To see that this graph is K s , s ! + 1 {\displaystyle K_{s,s!+1}} -free, observe that any common neighbor x {\displaystyle x} of s {\displaystyle s} vertices y 1 , … , y s ∈ F p s {\displaystyle y_{1},\ldots ,y_{s}\in \mathbb {F} _{p^{s}}} must satisfy
N ( x + y i ) = 1 {\displaystyle N(x+y_{i})=1} for all i = 1 , … , s {\displaystyle i=1,\ldots ,s} , which is a system of equations that has at most s ! {\displaystyle s!} solutions.
The same result can be proved for all t > ( s − 1 ) ! {\displaystyle t>(s-1)!} using the projective norm graph , a construction slightly stronger than the above. The projective norm graph ProjNormGraph p,s is the graph on vertex set F p s − 1 × F p × {\displaystyle \mathbb {F} _{p^{s-1}}\times \mathbb {F} _{p}^{\times }} , such that two vertices ( X , x ) , ( Y , y ) {\displaystyle (X,x),(Y,y)} are adjacent if and only if N ( X + Y ) = x y {\displaystyle N(X+Y)=xy} , where N : F p s → F p {\displaystyle N\colon \mathbb {F} _{p^{s}}\rightarrow \mathbb {F} _{p}} is the norm map defined by N ( x ) = x ( p s − 1 ) / ( p − 1 ) {\displaystyle N(x)=x^{(p^{s}-1)/(p-1)}} . By a similar argument to the above, one can verify that it is a K s , t {\displaystyle K_{s,t}} -free graph with Ω ( n 2 − 1 / s ) {\displaystyle \Omega (n^{2-1/s})} edges.
The above norm graph approach also gives tight lower bounds on z ( m , n ; s , t ) {\displaystyle z(m,n;s,t)} for certain choices of m , n {\displaystyle m,n} . [ 16 ] In particular, for s ≥ 2 {\displaystyle s\geq 2} , t > s ! {\displaystyle t>s!} , and n 1 / t ≤ m ≤ n 1 + 1 / t {\displaystyle n^{1/t}\leq m\leq n^{1+1/t}} , we have
In the case m = ( 1 + o ( 1 ) ) n 1 + 1 / s {\displaystyle m=(1+o(1))n^{1+1/s}} , consider the bipartite graph G {\displaystyle G} with bipartition V = V 1 ∪ V 2 {\displaystyle V=V_{1}\cup V_{2}} , such that V 1 = F p t × F p × {\displaystyle V_{1}=\mathbb {F} _{p^{t}}\times \mathbb {F} _{p}^{\times }} and V 2 = F p t {\displaystyle V_{2}=\mathbb {F} _{p^{t}}} . For A ∈ V 2 {\displaystyle A\in V_{2}} and ( B , b ) ∈ V 1 {\displaystyle (B,b)\in V_{1}} , let A ∼ ( B , b ) {\displaystyle A\sim (B,b)} in G {\displaystyle G} if and only if N ( A + B ) = b {\displaystyle N(A+B)=b} , where N ( ⋅ ) {\displaystyle N(\cdot )} is the norm map defined above. To see that G {\displaystyle G} is K s , t {\displaystyle K_{s,t}} -free, consider s {\displaystyle s} tuples ( B 1 , b 1 ) , … , ( B s , b s ) ∈ V 1 {\displaystyle (B_{1},b_{1}),\ldots ,(B_{s},b_{s})\in V_{1}} . Observe that if the s {\displaystyle s} tuples have a common neighbor, then the B i {\displaystyle B_{i}} must be distinct. Using the same upper bound on the number of solutions to the system of equations, we know that these s {\displaystyle s} tuples have at most s ! < t {\displaystyle s!<t} common neighbors.
Using a related result on clique partition numbers, Alon, Mellinger, Mubayi and Verstraëte [ 17 ] proved a tight lower bound on z ( m , n ; 2 , t ) {\displaystyle z(m,n;2,t)} for arbitrary t {\displaystyle t} : if m = ( 1 + o ( 1 ) ) n t / 2 {\displaystyle m=(1+o(1))n^{t/2}} , then we have
For 2 ≤ t ≤ n {\displaystyle 2\leq t\leq n} , we say that a collection of subsets A 1 , … , A ℓ ⊂ [ n ] {\displaystyle A_{1},\dots ,A_{\ell }\subset [n]} is a clique partition of H ⊂ ( [ n ] t ) {\displaystyle H\subset {[n] \choose t}} if ⋃ i = 1 ℓ ( A i t ) {\displaystyle \bigcup _{i=1}^{\ell }{A_{i} \choose t}} form a partition of H {\displaystyle H} . Observe that for any k {\displaystyle k} , if there exists some H ⊂ ( [ n ] t ) {\displaystyle H\subset {[n] \choose t}} of size ( 1 − o ( 1 ) ) ( n t ) {\displaystyle (1-o(1)){n \choose t}} and m = ( 1 + o ( 1 ) ) ( n t ) / ( k t ) {\displaystyle m=(1+o(1)){n \choose t}/{k \choose t}} , such that there is a partition of H {\displaystyle H} into m {\displaystyle m} cliques of size k {\displaystyle k} , then we have z ( m , n ; 2 , t ) = k m {\displaystyle z(m,n;2,t)=km} . Indeed, supposing A 1 , … , A m ⊂ [ n ] {\displaystyle A_{1},\dots ,A_{m}\subset [n]} is a partition of H {\displaystyle H} into m {\displaystyle m} cliques of size k {\displaystyle k} , we can let G {\displaystyle G} be the m × n {\displaystyle m\times n} bipartite graph with V 1 = { A 1 , … , A m } {\displaystyle V_{1}=\{A_{1},\dots ,A_{m}\}} and V 2 = [ n ] {\displaystyle V_{2}=[n]} , such that A i ∼ v {\displaystyle A_{i}\sim v} in G {\displaystyle G} if and only if v ∈ A i {\displaystyle v\in A_{i}} . Since the A i {\displaystyle A_{i}} form a clique partition, G {\displaystyle G} cannot contain a copy of K 2 , t {\displaystyle K_{2,t}} .
It remains to show that such a clique partition exists for any m = ( 1 + o ( 1 ) ) n t / 2 {\displaystyle m=(1+o(1))n^{t/2}} . To show this, let F q {\displaystyle \mathbb {F} _{q}} be the finite field of size q {\displaystyle q} and V = F q × F q {\displaystyle V=\mathbb {F} _{q}\times \mathbb {F} _{q}} . For every polynomial p ( ⋅ ) {\displaystyle p(\cdot )} of degree at most t − 1 {\displaystyle t-1} over F q {\displaystyle \mathbb {F} _{q}} , define C p = { ( x , p ( x ) ) : x ∈ F q } ⊂ V {\displaystyle C_{p}=\{(x,p(x)):x\in \mathbb {F} _{q}\}\subset V} . Let C {\displaystyle {\mathcal {C}}} be the collection of all C p {\displaystyle C_{p}} , so that | C | = q t = n t / 2 {\displaystyle |{\mathcal {C}}|=q^{t}=n^{t/2}} and every C p {\displaystyle C_{p}} has size q = n {\displaystyle q={\sqrt {n}}} . Clearly no two members of C {\displaystyle {\mathcal {C}}} can share t {\displaystyle t} members. Since the only t {\displaystyle t} -sets in V {\displaystyle V} that do not belong to H {\displaystyle H} are those that have at least two points sharing the same first coordinate, we know that almost all t {\displaystyle t} -subsets of V {\displaystyle V} are contained in some C p {\displaystyle C_{p}} .
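The polynomial construction above can be verified directly for a small case. The following Python sketch is illustrative (q = 3 and t = 2 are chosen for brevity): it checks that the sets C_p for linear polynomials p exactly partition the pairs of grid points with distinct first coordinates, since for t = 2 no t-set outside H survives.

```python
# Clique partition from polynomials over F_3 (q=3, t=2): the 9 sets
# C_p = {(x, p(x))} for p(x) = a*x + b exactly partition the pairs of points
# of the 3x3 grid whose first coordinates differ (two linear polynomials
# agreeing at 2 points would be identical).
from itertools import combinations, product

q, t = 3, 2
V = list(product(range(q), repeat=2))                   # the grid F_q x F_q
cliques = [frozenset((x, (a * x + b) % q) for x in range(q))
           for a in range(q) for b in range(q)]         # all C_p

covered = [pair for C in cliques for pair in combinations(sorted(C), t)]
good = [pair for pair in combinations(V, t) if pair[0][0] != pair[1][0]]

assert len(covered) == len(set(covered)) == len(good)   # no pair repeated
assert set(covered) == set(good)                        # an exact partition
```

For larger t the same code with polynomials of degree at most t − 1 covers almost all t-subsets, matching the "almost all" statement in the text.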
Alternative proofs of ex ( n , K s , t ) = Ω ( n 2 − 1 / s ) {\displaystyle {\text{ex}}(n,K_{s,t})=\Omega (n^{2-1/s})} for t {\displaystyle t} sufficiently larger than s {\displaystyle s} were also given by Blagojević, Bukh and Karasev [ 18 ] and by Bukh [ 19 ] using the method of random algebraic constructions. The basic idea is to take a random polynomial f : F q s × F q s → F q {\displaystyle f:\mathbb {F} _{q}^{s}\times \mathbb {F} _{q}^{s}\rightarrow \mathbb {F} _{q}} and consider the graph G {\displaystyle G} between two copies of F q s {\displaystyle \mathbb {F} _{q}^{s}} whose edges are all those pairs ( x , y ) {\displaystyle (x,y)} such that f ( x , y ) = 0 {\displaystyle f(x,y)=0} .
To start with, let q {\displaystyle q} be a prime power and n = q 2 {\displaystyle n=q^{2}} . Let f : F q s × F q s → F q {\displaystyle f\colon \mathbb {F} _{q}^{s}\times \mathbb {F} _{q}^{s}\rightarrow \mathbb {F} _{q}}
be a random polynomial with degree at most s 2 {\displaystyle s^{2}} in X = ( x 1 , … , x s ) {\displaystyle X=(x_{1},\dots ,x_{s})} , degree at most s 2 {\displaystyle s^{2}} in Y = ( y 1 , … , y s ) {\displaystyle Y=(y_{1},\dots ,y_{s})} , and furthermore satisfying f ( X , Y ) = f ( Y , X ) {\displaystyle f(X,Y)=f(Y,X)} for all X , Y {\displaystyle X,Y} . Let G {\displaystyle G} be the associated random graph on vertex set F q s {\displaystyle \mathbb {F} _{q}^{s}} , such that two vertices x {\displaystyle x} and y {\displaystyle y} are adjacent if and only if f ( x , y ) = 0 {\displaystyle f(x,y)=0} .
To prove the asymptotic lower bound, it suffices to show that the expected number of edges in G {\displaystyle G} is Ω ( q 2 s − 1 ) {\displaystyle \Omega (q^{2s-1})} . For every s {\displaystyle s} -subset U ⊂ F q s {\displaystyle U\subset \mathbb {F} _{q}^{s}} , we let Z U {\displaystyle Z_{U}} denote the vertex subset of F q s ∖ U {\displaystyle \mathbb {F} _{q}^{s}\setminus U} that "vanishes on f ( ⋅ , U ) {\displaystyle f(\cdot ,U)} ": Z U = { x ∈ F q s ∖ U : f ( x , u ) = 0 for all u ∈ U } {\displaystyle Z_{U}=\{x\in \mathbb {F} _{q}^{s}\setminus U:f(x,u)=0{\text{ for all }}u\in U\}}
Using the Lang–Weil bound for polynomials f ( ⋅ , u ) {\displaystyle f(\cdot ,u)} in F q s {\displaystyle \mathbb {F} _{q}^{s}} , we can deduce that one always has | Z U | ≤ C {\displaystyle |Z_{U}|\leq C} or | Z U | > q / 2 {\displaystyle |Z_{U}|>q/2} for some large constant C {\displaystyle C} , which implies
Since f {\displaystyle f} is chosen randomly over F q {\displaystyle \mathbb {F} _{q}} , it is not hard to show that the right-hand side probability is small, so the expected number of s {\displaystyle s} -subsets U {\displaystyle U} with | Z U | > C {\displaystyle |Z_{U}|>C} is also small. If we remove a vertex from every such U {\displaystyle U} , then the resulting graph is K s , C + 1 {\displaystyle K_{s,C+1}} -free, and the expected number of remaining edges is still large. This finishes the proof that ex ( n , K s , t ) = Ω ( n 2 − 1 / s ) {\displaystyle {\text{ex}}(n,K_{s,t})=\Omega (n^{2-1/s})} for all t {\displaystyle t} sufficiently large with respect to s {\displaystyle s} . More recently, there have been a number of results verifying the conjecture z ( m , n ; s , t ) = Ω ( n 2 − 1 / s ) {\displaystyle z(m,n;s,t)=\Omega (n^{2-1/s})} for different values of s , t {\displaystyle s,t} , using similar ideas but with more tools from algebraic geometry. [ 8 ] [ 20 ]
The Kővári–Sós–Turán theorem has been used in discrete geometry to bound the number of incidences between geometric objects of various types. As a simple example, a set of n {\displaystyle n} points and m {\displaystyle m} lines in the Euclidean plane necessarily has no K 2 , 2 {\displaystyle K_{2,2}} , so by the Kővári–Sós–Turán it has O ( n m 1 / 2 + m ) {\displaystyle O(nm^{1/2}+m)} point-line incidences. This bound is tight when m {\displaystyle m} is much larger than n {\displaystyle n} , but not when m {\displaystyle m} and n {\displaystyle n} are nearly equal, in which case the Szemerédi–Trotter theorem provides a tighter O ( n 2 / 3 m 2 / 3 + n + m ) {\displaystyle O(n^{2/3}m^{2/3}+n+m)} bound. However, the Szemerédi–Trotter theorem may be proven by dividing the points and lines into subsets for which the Kővári–Sós–Turán bound is tight. [ 21 ]
|
https://en.wikipedia.org/wiki/Zarankiewicz_problem
|
In algebra, Zariski's finiteness theorem gives a positive answer to Hilbert's 14th problem for the polynomial ring in two variables, as a special case. [ 1 ] Precisely, it states:
This commutative algebra -related article is a stub . You can help Wikipedia by expanding it .
This article about the history of mathematics is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Zariski's_finiteness_theorem
|
In mathematics , a Zariski geometry consists of an abstract structure introduced by Ehud Hrushovski and Boris Zilber , in order to give a characterisation of the Zariski topology on an algebraic curve , and all its powers. The Zariski topology on a product of algebraic varieties is very rarely the product topology , but richer in closed sets defined by equations that mix two sets of variables. The result described here gives that idea a very definite meaning, applying to projective curves and compact Riemann surfaces in particular.
A Zariski geometry consists of a set X and a topological structure on each of the sets
satisfying certain axioms.
(N) Each of the X n is a Noetherian topological space , of dimension at most n .
Some standard terminology for Noetherian spaces will now be assumed.
(A) In each X n , the subsets defined by equality in an n - tuple are closed. The mappings
defined by projecting out certain coordinates and setting others as constants are all continuous.
(B) For a projection
and an irreducible closed subset Y of X m , p ( Y ) lies between its closure Z and Z \ Z ′ where Z ′ is a proper closed subset of Z . (This is quantifier elimination , at an abstract level.)
(C) X is irreducible.
(D) There is a uniform bound on the number of elements of a fiber in a projection of any closed set in X m , other than the cases where the fiber is X .
(E) A closed irreducible subset of X m , of dimension r , when intersected with a diagonal subset in which s coordinates are set equal, has all components of dimension at least r − s + 1.
The further condition required is called very ample (cf. very ample line bundle ). It is assumed there is an irreducible closed subset P of some X m , and an irreducible closed subset Q of P × X 2 , with the following properties:
(I) Given pairs (x, y), (x′, y′) in X², for some t in P the set of (u, v) with (t, u, v) in Q includes (x, y) but not (x′, y′).
(J) For t outside a proper closed subset of P, the set of (x, y) in X² with (t, x, y) in Q is an irreducible closed set of dimension 1.
(K) For all pairs (x, y), (x′, y′) in X² selected from outside a proper closed subset, there is some t in P such that the set of (u, v) with (t, u, v) in Q includes both (x, y) and (x′, y′).
Geometrically this says there are enough curves to separate points (I), and to connect points (K); and that such curves can be taken from a single parametric family .
Then Hrushovski and Zilber prove that under these conditions there is an algebraically closed field K , and a non-singular algebraic curve C , such that its Zariski geometry of powers and their Zariski topology is isomorphic to the given one. In short, the geometry can be algebraized.
|
https://en.wikipedia.org/wiki/Zariski_geometry
|
Zarr is an open standard for storing large multidimensional array data. It specifies a protocol and data format, and is designed to be " cloud ready" including random access , by dividing data into subsets referred to as chunks. [ 1 ] [ 2 ] Zarr can be used within many programming languages, including Python , Java , JavaScript , C++ , Rust and Julia . [ 3 ] It has been used by organizations such as Google and Microsoft to publish large datasets. [ 4 ] [ 5 ] Early versions of Zarr were first released in 2015 by Alistair Miles. [ 6 ] [ 7 ]
Zarr is designed to support high-throughput distributed I/O on different storage systems, which is a common requirement in cloud computing . Multiple read operations can efficiently occur to a Zarr array in parallel, or multiple write operations in parallel. [ 8 ]
The main data format in Zarr is multidimensional arrays. For parallelisable access, these arrays are stored and accessed as a grid of so-called "chunks". The actual data format on disk depends on the compressor and storage plugins selected by the user. [ 8 ]
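The chunk grid maps directly onto storage, which is what makes random access cheap. As a sketch (based on the Zarr version 2 format, in which each chunk is stored as a separate object whose key joins the chunk's grid indices with a "." separator; the helper name below is chosen here for illustration):

```python
def chunk_key(index, chunks):
    """Return the Zarr v2 storage key of the chunk holding the element at
    `index` in an array stored with chunk shape `chunks` ('.' separator)."""
    return ".".join(str(i // c) for i, c in zip(index, chunks))

# Element (2500, 7100) of an array chunked as 1000 x 1000 lives in chunk "2.7",
# so a reader fetches only that object rather than the whole array.
```

Because each key can be fetched independently, readers and writers touching disjoint chunks need no coordination, which is the basis of the parallel access described above.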
Zarr's design was influenced by that of HDF5 , and so it includes similar features for metadata and grouping: arrays can be grouped into named hierarchies, and they can also be annotated with key-value metadata stored alongside the array. [ 8 ]
For bioimaging such as microscopy , a consortium called the Open Microscopy Environment (OME) created a format called "OME-Zarr", based on Zarr with some discipline-specific extensions. [ 9 ] Similarly, Zarr is being used to publish weather and satellite data [ 10 ] and energy data, [ 11 ] among others.
|
https://en.wikipedia.org/wiki/Zarr_(data_format)
|
In mathematics, the Zassenhaus algorithm [ 1 ] is a method to calculate a basis for the intersection and sum of two subspaces of a vector space .
It is named after Hans Zassenhaus , but no publication of this algorithm by him is known. [ 2 ] It is used in computer algebra systems . [ 3 ]
Let V be a vector space and U, W two finite-dimensional subspaces of V with the spanning sets U = ⟨u_1, …, u_n⟩ and W = ⟨w_1, …, w_k⟩. Finally, let B_1, …, B_m be linearly independent vectors so that u_i and w_i can be written as
u_i = a_{i,1}B_1 + ⋯ + a_{i,m}B_m and w_i = b_{i,1}B_1 + ⋯ + b_{i,m}B_m.
The algorithm computes a basis of the sum U + W and a basis of the intersection U ∩ W.
The algorithm creates the following block matrix of size (n + k) × (2m): its first n rows are (a_{i,1}, …, a_{i,m}, a_{i,1}, …, a_{i,m}) for i = 1, …, n, and its last k rows are (b_{i,1}, …, b_{i,m}, 0, …, 0) for i = 1, …, k.
Using elementary row operations, this matrix is transformed to row echelon form. Then it has the following shape: the first q nonzero rows are of the form (c_{p,1}, …, c_{p,m}, •, …, •), the following ℓ nonzero rows are of the form (0, …, 0, d_{p,1}, …, d_{p,m}), and all remaining rows are zero.
Here, • stands for arbitrary numbers, and the vectors (c_{p,1}, c_{p,2}, …, c_{p,m}) for every p ∈ {1, …, q} and (d_{p,1}, …, d_{p,m}) for every p ∈ {1, …, ℓ} are nonzero.
Then (y_1, …, y_q) with y_p := c_{p,1}B_1 + ⋯ + c_{p,m}B_m is a basis of U + W, and (z_1, …, z_ℓ) with z_p := d_{p,1}B_1 + ⋯ + d_{p,m}B_m is a basis of U ∩ W.
First, we define π_1 : V × V → V, (a, b) ↦ a to be the projection onto the first component.
Let H := {(u, u) | u ∈ U} + {(w, 0) | w ∈ W} ⊆ V × V. Then π_1(H) = U + W and H ∩ (0 × V) = 0 × (U ∩ W).
Also, H ∩ (0 × V) is the kernel of π_1|_H, the projection restricted to H. Therefore, dim(H) = dim(U + W) + dim(U ∩ W).
The Zassenhaus algorithm calculates a basis of H. In the first m columns of this matrix, there is a basis y_i of U + W.
The rows of the form (0, z_i) (with z_i ≠ 0) are obviously in H ∩ (0 × V). Because the matrix is in row echelon form, they are also linearly independent.
All nonzero rows ((y_i, •) and (0, z_i)) together form a basis of H, so there are dim(U ∩ W) such z_i. Therefore, the z_i form a basis of U ∩ W.
Consider the two subspaces U = ⟨(1, −1, 0, 1), (0, 0, 1, −1)⟩ and W = ⟨(5, 0, −3, 3), (0, 5, −3, −2)⟩ of the vector space ℝ⁴.
Using the standard basis, we create the following matrix of size (2 + 2) × (2 · 4):

[ 1 −1  0  1   1 −1  0  1 ]
[ 0  0  1 −1   0  0  1 −1 ]
[ 5  0 −3  3   0  0  0  0 ]
[ 0  5 −3 −2   0  0  0  0 ]

Using elementary row operations, we transform this matrix into the following matrix (the entries marked ∗ do not matter for the result):

[ 1  0  0  0   ∗  ∗  ∗  ∗ ]
[ 0  1  0 −1   ∗  ∗  ∗  ∗ ]
[ 0  0  1 −1   ∗  ∗  ∗  ∗ ]
[ 0  0  0  0   1 −1  0  1 ]

Therefore ((1, 0, 0, 0), (0, 1, 0, −1), (0, 0, 1, −1)) is a basis of U + W, and ((1, −1, 0, 1)) is a basis of U ∩ W.
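The worked example above can be reproduced with a short sketch of the algorithm, here in Python with exact rational arithmetic (the function name `zassenhaus` and the row-based input convention are choices made for this illustration):

```python
from fractions import Fraction

def zassenhaus(us, ws):
    """Given spanning vectors for subspaces U and W (as coordinate rows over
    a common basis), return bases of U + W and of U ∩ W."""
    m = len(us[0])
    # Block matrix: rows (u, u) for the generators of U, rows (w, 0) for W.
    rows = [[Fraction(x) for x in u] * 2 for u in us]
    rows += [[Fraction(x) for x in w] + [Fraction(0)] * m for w in ws]
    # Bring the matrix to (reduced) row echelon form with exact arithmetic.
    pivot = 0
    for col in range(2 * m):
        src = next((r for r in range(pivot, len(rows)) if rows[r][col] != 0), None)
        if src is None:
            continue
        rows[pivot], rows[src] = rows[src], rows[pivot]
        lead = rows[pivot][col]
        rows[pivot] = [x / lead for x in rows[pivot]]
        for r in range(len(rows)):
            if r != pivot and rows[r][col] != 0:
                f = rows[r][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[pivot])]
        pivot += 1
        if pivot == len(rows):
            break
    sum_basis, intersection_basis = [], []
    for row in rows:
        left, right = row[:m], row[m:]
        if any(left):
            sum_basis.append(left)            # rows (y_i, •): basis of U + W
        elif any(right):
            intersection_basis.append(right)  # rows (0, z_i): basis of U ∩ W
    return sum_basis, intersection_basis

# The example from the text:
s, i = zassenhaus([[1, -1, 0, 1], [0, 0, 1, -1]],
                  [[5, 0, -3, 3], [0, 5, -3, -2]])
# s has three vectors (a basis of U + W); i == [[1, -1, 0, 1]] (a basis of U ∩ W).
```

Exact `Fraction` arithmetic is used so that row reduction introduces no floating-point error; any exact field would do.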
|
https://en.wikipedia.org/wiki/Zassenhaus_algorithm
|
In organic chemistry , Zaytsev's rule (or Zaitsev's rule , Saytzeff's rule , Saytzev's rule ) is an empirical rule for predicting the favored alkene product(s) in elimination reactions . While at the University of Kazan , Russian chemist Alexander Zaytsev studied a variety of different elimination reactions and observed a general trend in the resulting alkenes. Based on this trend, Zaytsev proposed that the alkene formed in greatest amount is that which corresponded to removal of the hydrogen from the alpha-carbon having the fewest hydrogen substituents . For example, when 2-iodobutane is treated with alcoholic potassium hydroxide (KOH), but-2-ene is the major product and but-1-ene is the minor product. [ 1 ]
More generally, Zaytsev's rule predicts that in an elimination reaction the most substituted product will be the most stable, and therefore the most favored. The rule makes no generalizations about the stereochemistry of the newly formed alkene, but only the regiochemistry of the elimination reaction. While effective at predicting the favored product for many elimination reactions, Zaytsev's rule is subject to many exceptions.
Many of these exceptions involve formation of the Hofmann product (the less substituted alkene, as opposed to the Zaytsev product). Substrates bearing charged or bulky leaving groups such as quaternary ammonium (NR 3 + ) or sulfonate (SO 3 H) groups preferentially give the Hofmann product in these eliminations, whereas halide leaving groups (with the exception of fluoride) generally give the Zaytsev product.
Alexander Zaytsev first published his observations regarding the products of elimination reactions in Justus Liebigs Annalen der Chemie in 1875. [ 2 ] [ 3 ] Although the paper contained some original research done by Zaytsev's students, it was largely a literature review and drew heavily upon previously published work. [ 4 ] In it, Zaytsev proposed a purely empirical rule for predicting the favored regiochemistry in the dehydrohalogenation of alkyl iodides, though it turns out that the rule is applicable to a variety of other elimination reactions as well. While Zaytsev's paper was well referenced throughout the 20th century, it was not until the 1960s that textbooks began using the term "Zaytsev's rule". [ 3 ]
Zaytsev was not the first chemist to publish the rule that now bears his name. Aleksandr Nikolaevich Popov published an empirical rule similar to Zaytsev's in 1872, [ 5 ] and presented his findings at the University of Kazan in 1873. Zaytsev had cited Popov's 1872 paper in previous work and worked at the University of Kazan, and was thus probably aware of Popov's proposed rule. In spite of this, Zaytsev's 1875 Liebigs Annalen paper makes no mention of Popov's work. [ 3 ] [ 4 ]
Any discussion of Zaytsev's rule would be incomplete without mentioning Vladimir Vasilyevich Markovnikov . Zaytsev and Markovnikov both studied under Alexander Butlerov , taught at the University of Kazan during the same period, and were bitter rivals. Markovnikov, who published in 1870 what is now known as Markovnikov's rule , and Zaytsev held conflicting views regarding elimination reactions: the former believed that the least substituted alkene would be favored, whereas the latter felt the most substituted alkene would be the major product. Perhaps one of the main reasons Zaytsev began investigating elimination reactions was to disprove his rival. [ 3 ] Zaytsev published his rule for elimination reactions just after Markovnikov published the first article in a three-part series in Comptes Rendus detailing his rule for addition reactions. [ 4 ]
The hydrogenation of alkenes to alkanes is exothermic . The amount of energy released during a hydrogenation reaction, known as the heat of hydrogenation, is inversely related to the stability of the starting alkene: the more stable the alkene, the lower its heat of hydrogenation. Examining the heats of hydrogenation for various alkenes reveals that stability increases with the amount of substitution. [ 6 ]
The increase in stability associated with additional substitutions is the result of several factors. Alkyl groups are electron donating by inductive effect, and increase the electron density on the sigma bond of the alkene. Also, alkyl groups are sterically large, and are most stable when they are far away from each other. In an alkane, the maximum separation is that of the tetrahedral bond angle, 109.5°. In an alkene, the bond angle increases to near 120°. As a result, the separation between alkyl groups is greatest in the most substituted alkene. [ 7 ]
Hyperconjugation , which describes the stabilizing interaction between the HOMO of the alkyl group and the LUMO of the double bond, also helps explain the influence of alkyl substitutions on the stability of alkenes. In regards to orbital hybridization , a bond between an sp 2 carbon and an sp 3 carbon is stronger than a bond between two sp 3 -hybridized carbons. Computations reveal a dominant stabilizing hyperconjugation effect of 6 kcal/mol per alkyl group. [ 8 ]
In E2 elimination reactions, a base abstracts a proton that is beta to a leaving group, such as a halide. The removal of the proton and the loss of the leaving group occur in a single, concerted step to form a new double bond. When a small, unhindered base – such as sodium hydroxide, sodium methoxide, or sodium ethoxide – is used for an E2 elimination, the Zaytsev product is typically favored over the least substituted alkene, known as the Hofmann product. For example, treating 2-bromo-2-methylbutane with sodium ethoxide in ethanol produces the Zaytsev product with moderate selectivity. [ 9 ]
Due to steric interactions, a bulky base – such as potassium tert -butoxide, triethylamine, or 2,6-lutidine – cannot readily abstract the proton that would lead to the Zaytsev product. In these situations, a less sterically hindered proton is preferentially abstracted instead. As a result, the Hofmann product is typically favored when using bulky bases. When 2-bromo-2-methylbutane is treated with potassium tert -butoxide instead of sodium ethoxide, the Hofmann product is favored. [ 10 ]
Steric interactions within the substrate also prevent the formation of the Zaytsev product. These intramolecular interactions are relevant to the distribution of products in the Hofmann elimination reaction, which converts amines to alkenes. In the Hofmann elimination, treatment of a quaternary ammonium iodide salt with silver oxide produces hydroxide ions, which act as a base and eliminate the tertiary amine to give an alkene. [ 11 ]
In the Hofmann elimination, the least substituted alkene is typically favored due to intramolecular steric interactions. The quaternary ammonium group is large, and interactions with alkyl groups on the rest of the molecule are undesirable. As a result, the conformation necessary for the formation of the Zaytsev product is less energetically favorable than the conformation required for the formation of the Hofmann product. As a result, the Hofmann product is formed preferentially. The Cope elimination is very similar to the Hofmann elimination in principle but occurs under milder conditions. It also favors the formation of the Hofmann product, and for the same reasons. [ 12 ]
In some cases, the stereochemistry of the starting material can prevent the formation of the Zaytsev product. For example, when menthyl chloride is treated with sodium ethoxide, the Hofmann product is formed exclusively, [ 13 ] but in very low yield. [ 14 ]
This result is due to the stereochemistry of the starting material. E2 eliminations require anti -periplanar geometry, in which the proton and leaving group lie on opposite sides of the C-C bond, but in the same plane. When menthyl chloride is drawn in the chair conformation , it is easy to explain the unusual product distribution.
Formation of the Zaytsev product requires elimination at the 2-position, but the isopropyl group – not the proton – is anti -periplanar to the chloride leaving group; this makes elimination at the 2-position impossible. In order for the Hofmann product to form, elimination must occur at the 6-position. Because the proton at this position has the correct orientation relative to the leaving group, elimination can and does occur. As a result, this particular reaction produces only the Hofmann product.
|
https://en.wikipedia.org/wiki/Zaytsev's_rule
|
Zdeněk Herman (24 March 1934 – 25 February 2021) was a Czech physical chemist .
Herman was born on 24 March 1934 in Libušín . [ 1 ] He studied physical chemistry and radiochemistry at the School of Mathematics and Physics of Charles University , Prague (1952–1957). He then joined the Institute of Physical Chemistry of the Czech Academy of Sciences , to which he remained affiliated.
Herman's early work, with Vladimír Čermák, concerned mass spectrometric studies of the kinetics of collision and ionization processes of ions (chemical reactions of ions, Penning and associative ionization ). During his post-doctoral years (1964–1965), with Richard Wolfgang at Yale University , Herman built one of the first crossed beam machines to study ion-molecule processes.
Herman also built an improved crossed beam machine that was used in Prague with colleagues to investigate the dynamics of ion-molecule and charge transfer reactions of cations and dications , and ion-surface collisions by the scattering method (1970–2010).
Herman published over 240 scientific articles in this field.
Herman's academic awards include the Ian Marcus Marci Medal (Czech Spectroscopic Society, 1989), the Alexander von Humboldt Research Prize (awarded in Germany in 1992, the first time the prize was awarded to a Czech natural scientist), the Česká hlava ("Czech Head") National Prize for lifetime achievements (2003), an Honorary Degree from the Leopold-Franzens University in Innsbruck (2007), and honorary membership of the Czech Mass Spectrometric Society.
Special honorary issues of The Journal of Physical Chemistry (1995) [ 2 ] and The International Journal of Mass Spectrometry (2009) [ 3 ] were issued to celebrate his 60th and 75th birthdays respectively. Since 2014 the Resonance Foundation awards "The Zdeněk Herman Prize" for the best PhD thesis in chemical physics and mass spectrometry. Since 2016 the international conference MOLEC (Dynamics of Molecular Systems) awards the "Zdeněk Herman Young Scientist Prize".
In his free time, Herman painted and sculpted, and exhibited his work on several occasions. Busts by Herman of founders of several institutes of the Academy of Sciences are on display at those institutes. Three statues sculpted by Herman stand in the countryside around Rakovník (e.g., in the park in Pavlíkov).
|
https://en.wikipedia.org/wiki/Zdeněk_Herman
|
Zearalanone ( ZAN ) is a mycoestrogen that is a derivative of zearalenone (ZEN). [ 1 ] Zearalanone can be extracted from foodstuffs, along with aflatoxins at the same time, using a specific immunoaffinity column . [ 2 ]
|
https://en.wikipedia.org/wiki/Zearalanone
|
Zeba Islam Seraj is a Bangladeshi scientist known for her research in developing salt-tolerant rice varieties suitable for growth in the coastal areas of Bangladesh. She is currently a professor at the Department of Biochemistry and Molecular Biology, University of Dhaka. [ 1 ]
Seraj studied at the University of Dhaka , Bangladesh, obtaining a B.Sc. in 1980 and an M.Sc. from the same university in 1982. She obtained her PhD in biochemistry from the University of Glasgow in 1986 and moved to the University of Liverpool for post-doctoral work the following year. After completing her post-doctoral work, she joined the Department of Biochemistry and Molecular Biology, University of Dhaka in 1988. She became an associate professor in 1991 and later a professor in 1997 at the same university. She has been supervising plant biotechnology projects funded by foreign and local grants as a principal investigator since 1991. She has been a visiting researcher with UT Austin since 2013.
Seraj has established a well-equipped plant biotechnology laboratory at the University of Dhaka . She has been a co-principal investigator in several projects, such as the Generation Challenge Program (GCP)—an initiative to use molecular biology to help boost agricultural production. [ 2 ] Seraj has not only worked on fine mapping of the major QTLs for salinity tolerance in Pokkali, but also characterized traditional rice landraces with the aim of finding genetic loci responsible for salt tolerance and applying markers linked to these loci to aid breeding programs for incorporation of salinity tolerance in rice. She also works on developing genetically modified rice varieties with improved salt tolerance suitable for growing in the coastal region of Bangladesh. She was the recipient of the PEER award (joint USAID-NSF initiative) for using next generation sequencing technologies to find the basis of salt tolerance of a rice landrace endemic to the Bangladesh coast, where University of Texas at Austin served as the host for collaborative work. [ 3 ]
Seraj has been a visiting scientist in PBGB, IRRI (Constructs for salinity tolerance with Dr. John Bennett Jan-March 1998), PBGB & CSWS Division, IRRI (IRRI-PETRRA Bangladesh project on development of MV rice for the coastal wetlands of Bangladesh, June 11–29, 2002 and June 16–20, 2003), USDA research station at Beaumont, Texas, USA (Aug. 4–16, 2003) and at the Department of Molecular, Cell and Developmental Biology, University of Texas, Austin, USA as Norman Borlaug Fellow (August 15–December 15, 2005). She has been honored with Visiting researcher status at University of Texas at Austin (October 2014–September 2020). She was awarded the Annanya Award, 2017 for her scientific research. [ 4 ] She was invited for a Tedx talk on how to save crops from sea level rise and salinity (Jan 16, 2018). She was featured in NHK TV, Japan in a talk on Science for Sustainable Earth in 2019.
Zeba was married to Toufiq M Seraj , a Bangladeshi businessman who was the founder and managing director of Sheltech . They have two daughters.
|
https://en.wikipedia.org/wiki/Zeba_Islam_Seraj
|
The ZebraBox [ 1 ] is an automated analysis chamber for the non-intrusive video observation of aquatic indicator species such as Danio rerio and Pimephales promelas . It performs a type of larval photomotor response (LPR) assay, which is used to monitor the swimming behaviour of larvae. [ 2 ]
The ZebraBox encloses 96-well plates in a controlled chamber fitted with a high-resolution camera, an infrared light and a fixed-angle lens. [ 3 ] The lighting conditions and illumination patterns can be manually controlled for fish acclimation and the simulation of circadian rhythms . [ 4 ] The apparatus allows for the analysis of zebrafish locomotion and activity, and is therefore used in the fields of drug discovery and toxicological studies. [ 5 ] [ 6 ]
|
https://en.wikipedia.org/wiki/ZebraBox
|
Zebra is the American medical slang for a surprising, often exotic, medical diagnosis , especially when a more commonplace explanation is more likely. [ 1 ] It is shorthand for the aphorism coined in the late 1940s by Theodore Woodward , professor at the University of Maryland School of Medicine , who instructed his medical interns : "When you hear hoofbeats behind you, don't expect to see a zebra." [ 2 ] (Alternative phrasing: when you hear hoofbeats, think of horses, not zebras . Since zebras are much rarer than horses in the United States, the sound of hoofbeats would almost certainly be from a horse.) By 1960, the aphorism was widely known in medical circles. [ 3 ] [ 4 ] The saying is a warning against the statistical base rate fallacy where the likelihood of something like a disease among the population is not taken into consideration for an individual.
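The base rate fallacy the aphorism warns against can be made concrete with Bayes' theorem. The numbers below are hypothetical, chosen only to illustrate how a low prior dominates the posterior even when the clinical finding is identical:

```python
def posterior(prior, sensitivity, specificity):
    """P(disease | positive finding), by Bayes' theorem."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# Same finding (90% sensitive, 95% specific) applied to a rare "zebra"
# (hypothetical prior 1 in 10,000) versus a common "horse" (prior 1 in 50):
zebra = posterior(0.0001, 0.90, 0.95)  # ≈ 0.002 — still almost certainly not the zebra
horse = posterior(0.02, 0.90, 0.95)    # ≈ 0.27
```

Despite an identical test result, the rare diagnosis remains improbable, which is exactly what ignoring the base rate obscures.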
Medical novices are predisposed to make rare diagnoses because of (a) the availability heuristic ("events more easily remembered are judged more probable") and (b) the phenomenon first enunciated in Rhetorica ad Herennium ( c. 85 BC ), "the striking and the novel stay longer in the mind." Thus, the aphorism is an important caution against these biases when teaching medical students to weigh medical evidence. [ 5 ]
Diagnosticians have noted, however, that "zebra"-type diagnoses must nonetheless be held in mind until the evidence conclusively rules them out:
In making the diagnosis of the cause of illness in an individual case, calculations of probability have no meaning. The pertinent question is whether the disease is present or not. Whether it is rare or common does not change the odds in a single patient. [...] If the diagnosis can be made on the basis of specific criteria, then these criteria are either fulfilled or not fulfilled.
Comparable slang for an obscure and rare diagnosis in medicine is fascinoma .
Necrotic skin lesions in the United States are often diagnosed as loxoscelism ( recluse spider bites), even in areas where Loxosceles species are rare or not present. This is a matter of concern because such misdiagnoses can delay correct diagnosis and treatment. [ 7 ]
Ehlers–Danlos syndrome is considered a rare condition and those with it are known as medical zebras. The zebra was adopted across the world as the EDS mascot to bring the patient community together and raise awareness. [ 8 ]
|
https://en.wikipedia.org/wiki/Zebra_(medicine)
|
Zebra striping is the coloring of every other row of a table to improve readability . [ 1 ] Although zebra striping has been used for a long time to improve readability, there is relatively little data on how much it helps. [ 2 ] [ 3 ]
In HTML documents, zebra striping can be implemented using the Cascading Style Sheets :nth-child(even) pseudo-selector. [ 4 ] [ 5 ]
The Bootstrap CSS framework features zebra striping through the .table-striped class. [ 6 ]
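A minimal sketch of the CSS approach described above, using the `:nth-child(even)` pseudo-selector (the shade chosen is illustrative):

```css
/* Zebra striping: shade every other row of a table body. */
tbody tr:nth-child(even) {
  background-color: #f2f2f2; /* illustrative light-grey stripe */
}
```

Using `:nth-child(odd)` instead shifts the striping to the other set of rows.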
|
https://en.wikipedia.org/wiki/Zebra_striping
|
The zebrafish ( Danio rerio ) is a species of freshwater ray-finned fish belonging to the family Danionidae of the order Cypriniformes . Native to South Asia, [ 3 ] it is a popular aquarium fish , frequently sold under the trade name zebra danio [ 4 ] (and thus often called a " tropical fish " although it is both tropical and subtropical ).
The zebrafish is an important and widely used vertebrate model organism in scientific research, particularly developmental biology , but also gene function, oncology , teratology , and drug development , in particular pre-clinical development . [ 5 ] It is also notable for its regenerative abilities, [ 6 ] and has been modified by researchers to produce many transgenic strains. [ 7 ] [ 8 ] [ 9 ]
The zebrafish is a derived member of the genus Brachydanio , of the family Cyprinidae . [ 10 ] It has a sister-group relationship with Danio aesculapii . [ 11 ] Zebrafish are also closely related to the genus Devario , as demonstrated by a phylogenetic tree of close species. [ 12 ]
The zebrafish is native to freshwater habitats in South Asia where it is found in India, Pakistan, Bangladesh, Nepal and Bhutan. [ 1 ] [ 13 ] [ 14 ] [ 15 ] The northern limit is in the South Himalayas , ranging from the Sutlej river basin in the Pakistan–India border region to the state of Arunachal Pradesh in northeast India. [ 1 ] [ 14 ] Its range is concentrated in the Ganges and Brahmaputra River basins, [ 10 ] and the species was first described from Kosi River (lower Ganges basin) of India. Its range further south is more local, with scattered records from the Western and Eastern Ghats regions. [ 15 ] [ 16 ] It has frequently been said to occur in Myanmar (Burma), but this is entirely based on pre-1930 records and likely refers to close relatives only described later, notably Danio kyathit . [ 15 ] [ 17 ] [ 18 ] [ 19 ] Likewise, old [ clarification needed ] records from Sri Lanka are highly questionable and remain unconfirmed. [ 17 ]
Zebrafish have been introduced to a variety of places outside their natural range, [ 10 ] [ 15 ] including California, Connecticut, Florida and New Mexico in the United States, presumably by deliberate release by aquarists or by escape from fish farms . The New Mexico population had been extirpated by 2003 and it is unclear if the others survive, as the last published records were decades ago. [ 20 ] Elsewhere the species has been introduced to Colombia and Malaysia. [ 14 ] [ 21 ]
Zebrafish typically inhabit moderately flowing to stagnant clear water of quite shallow depth in streams, canals, ditches, oxbow lakes , ponds and rice paddies . [ 15 ] [ 21 ] [ 22 ] [ 10 ] There is usually some vegetation, either submerged or overhanging from the banks, and the bottom is sandy, muddy or silty, often mixed with pebbles or gravel. In surveys of zebrafish locations throughout much of its Bangladeshi and Indian distribution, the water had a near-neutral to somewhat basic pH and mostly ranged from 16.5 to 34 °C (61.7–93.2 °F) in temperature. [ 15 ] [ 23 ] One unusually cold site was only 12.3 °C (54.1 °F) and another unusually warm site was 38.6 °C (101.5 °F), but the zebrafish still appeared healthy. The unusually cold temperature was at one of the highest known zebrafish locations at 1,576 m (5,171 ft) above sea level, although the species has been recorded to 1,795 m (5,889 ft). [ 15 ]
The zebrafish is named for the five uniform, pigmented, horizontal, blue stripes on the side of the body, which are reminiscent of a zebra's stripes, and which extend to the end of the caudal fin . [ 22 ] Its shape is fusiform and laterally compressed, with its mouth directed upwards. The male is torpedo -shaped, with gold stripes between the blue stripes; the female has a larger, whitish belly and silver stripes instead of gold. Adult females exhibit a small genital papilla in front of the anal fin origin. The zebrafish can reach up to 4–5 cm (1.6–2.0 in) in length, [ 18 ] although they typically are 1.8–3.7 cm (0.7–1.5 in) in the wild with some variations depending on location. [ citation needed ] Its lifespan in captivity is around two to three years, although in ideal conditions, this may be extended to over five years. [ 22 ] [ 24 ] In the wild it is typically an annual species. [ 1 ]
In 2015, a study was published about zebrafishes' capacity for episodic memory . The individuals showed a capacity to remember context with respect to objects, locations and occasions (what, when, where). Episodic memory is a capacity of explicit memory systems, typically associated with conscious experience . [ 25 ]
The Mauthner cells integrate a wide array of sensory stimuli to produce the escape reflex . Those stimuli are found to include the lateral line signals by McHenry et al. 2009 and visual signals consistent with looming objects by Temizer et al. 2015, Dunn et al. 2016, and Yao et al. 2016. [ 26 ]
The approximate generation time for Danio rerio is three months. A male must be present for ovulation and spawning to occur. Zebrafish are asynchronous spawners [ 27 ] and under optimal conditions (such as food availability and favorable water parameters) can spawn successfully at frequent intervals, even on a daily basis. [ 28 ] Females are able to spawn at intervals of two to three days, laying hundreds of eggs in each clutch . Upon release, embryonic development begins; in the absence of sperm, growth stops after the first few cell divisions. Fertilized eggs almost immediately become transparent, a characteristic that makes D. rerio a convenient research model species . [ 22 ] Sex determination of common laboratory strains was shown to be a complex genetic trait, rather than to follow a simple ZW or XY system. [ 29 ]
The zebrafish embryo develops rapidly, with precursors to all major organs appearing within 36 hours of fertilization. The embryo begins as a yolk with a single enormous cell on top (see image, 0 h panel), which divides into two (0.75 h panel) and continues dividing until there are thousands of small cells (3.25 h panel). The cells then migrate down the sides of the yolk (8 h panel) and begin forming a head and tail (16 h panel). The tail then grows and separates from the body (24 h panel). The yolk shrinks over time because the fish uses it for food as it matures during the first few days (72 h panel). After a few months, the adult fish reaches reproductive maturity (bottom panel).
To encourage the fish to spawn, some researchers use a fish tank with a sliding bottom insert, which reduces the depth of the pool to simulate the shore of a river. Zebrafish spawn best in the morning due to their circadian rhythms . Researchers have been able to collect 10,000 embryos in 10 minutes using this method. [ 30 ] In particular, one pair of adult fish is capable of laying 200–300 eggs in one morning, in clutches of approximately 5 to 10 eggs at a time. [ 31 ] Male zebrafish are furthermore known to respond to more pronounced markings on females, i.e., "good stripes", but in a group, males will mate with whichever females they can find. What attracts females is not currently understood. The presence of plants, even plastic plants, also apparently encourages spawning. [ 30 ]
Exposure to environmentally relevant concentrations of diisononyl phthalate (DINP), commonly used in a large variety of plastic items, disrupts the endocannabinoid system and thereby affects reproduction in a sex-specific manner. [ 32 ]
Zebrafish feeding practices vary significantly across different developmental stages, reflecting their changing nutritional needs. For newly hatched larvae, which begin feeding at approximately 5 days post-fertilization (dpf), small live prey such as Paramecium or rotifers are commonly used until they reach 9–15 dpf. [ 33 ] This early diet is crucial for their growth and survival, as these small organisms provide essential nutrients. As the larvae develop, from 15 dpf onwards, they are typically transitioned to a diet that includes brine shrimp nauplii and dry feeds, which are more nutritionally balanced and easier to manage in laboratory settings. For larvae aged 25 dpf, feeding rates can range from 50% to 300% of their body weight (BW) per day, depending on their size and growth requirements. [ 34 ] As zebrafish grow into juveniles (30–90 dpf), the recommended feeding rate decreases to about 6–8% of their BW per day, with a focus on high-quality dry feeds that meet their protein and energy needs. Upon reaching adulthood (over 90 dpf), zebrafish typically require a feeding rate of around 5% of their BW per day. Throughout these stages, it is essential to adjust the particle size of the feed: less than 100 μm for newly hatched larvae, 100–200 μm for those between 16 and 30 dpf, and larger particles for juveniles and adults. This structured approach to feeding not only supports optimal growth and health but also enhances the reliability of experimental outcomes in research settings. [ 35 ]
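The stage-dependent feeding rates above can be summarised in a short calculation. The sketch below is purely illustrative, not husbandry guidance: the function name and the representative rates chosen within each quoted range are this example's own assumptions.

```python
def daily_feed_mg(body_weight_mg: float, age_dpf: int) -> float:
    """Estimate daily feed mass (mg) from the stage-specific rates quoted above.

    Representative rates (assumed values within the quoted ranges):
    larvae ~100% BW/day (quoted range 50-300%), juveniles 7% (quoted 6-8%),
    adults ~5%.
    """
    if age_dpf < 30:          # larvae around 25 dpf: 50-300% of body weight
        rate = 1.0
    elif age_dpf <= 90:       # juveniles (30-90 dpf): 6-8% of body weight
        rate = 0.07
    else:                     # adults (>90 dpf): ~5% of body weight
        rate = 0.05
    return body_weight_mg * rate

# A hypothetical 400 mg adult would receive about 20 mg of feed per day.
print(daily_feed_mg(400, 120))
```

Feed particle size would be adjusted separately by stage, as described above.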
Zebrafish are hardy fish and considered good for beginner aquarists. Their enduring popularity can be attributed to their playful disposition, [ 36 ] as well as their rapid breeding, aesthetics, cheap price and broad availability. They also do well in schools or shoals of six or more, and interact well with other fish species in the aquarium. However, they are susceptible to Oodinium or velvet disease, microsporidia ( Pseudoloma neurophilia ), and Mycobacterium species. Given the opportunity, adults eat hatchlings, which may be protected by separating the two groups with a net, breeding box or separate tank.
In captivity, zebrafish live approximately forty-two months. Some captive zebrafish can develop a curved spine. [ 37 ]
The zebra danio has also been used to make genetically modified fish and was the first species to be sold as GloFish (fluorescent colored fish).
In late 2003, transgenic zebrafish that express green , red, and yellow fluorescent proteins became commercially available in the United States. The fluorescent strains are trade-named GloFish ; other cultivated varieties include "golden", "sandy", "longfin" and "leopard".
The leopard danio, previously known as Danio frankei , is a spotted colour morph of the zebrafish which arose due to a pigment mutation. [ 38 ] Xanthistic forms of both the zebra and leopard pattern, along with long-finned strains, have been obtained via selective breeding programs for the aquarium trade. [ 39 ]
Various transgenic and mutant strains of zebrafish are stored at the China Zebrafish Resource Center (CZRC), a non-profit organization jointly supported by the Ministry of Science and Technology of China and the Chinese Academy of Sciences . [ 40 ]
The Zebrafish Information Network ( ZFIN ) provides up-to-date information about current known wild-type (WT) strains of D. rerio , some of which are listed below. [ 41 ]
Hybrids between different Danio species may be fertile: for example, between D. rerio and D. nigrofasciatus . [ 12 ]
D. rerio is a common and useful scientific model organism for studies of vertebrate development and gene function. Its use as a laboratory animal was pioneered by the American molecular biologist George Streisinger and his colleagues at the University of Oregon in the 1970s and 1980s; Streisinger's zebrafish clones were among the earliest successful vertebrate clones created. [ 42 ] Its importance has been consolidated by successful large-scale forward genetic screens (commonly referred to as the Tübingen/Boston screens). The fish has a dedicated online database of genetic, genomic, and developmental information, the Zebrafish Information Network (ZFIN). The Zebrafish International Resource Center (ZIRC) is a genetic resource repository with 29,250 alleles available for distribution to the research community. D. rerio is also one of the few fish species to have been sent into space .
Research with D. rerio has yielded advances in the fields of developmental biology , oncology , [ 43 ] toxicology , [ 31 ] [ 44 ] [ 45 ] reproductive studies, teratology , genetics , neurobiology , environmental sciences , stem cell research, regenerative medicine , [ 46 ] [ 47 ] muscular dystrophies [ 48 ] and evolutionary theory . [ 12 ]
As a model biological system, the zebrafish possesses numerous advantages for scientists. Its genome has been fully sequenced , and it has well-understood, easily observable and testable developmental behaviors. Its embryonic development is very rapid, and its embryos are relatively large, robust, and transparent, and able to develop outside their mother. [ 49 ] Furthermore, well-characterized mutant strains are readily available.
Other advantages include the species' nearly constant size during early development, which enables simple staining techniques to be used, and the fact that its two-celled embryo can be fused into a single cell to create a homozygous embryo. Because zebrafish embryos are transparent and develop externally, scientists can study the details of development from fertilization onward. The zebrafish is also demonstrably similar to mammalian models and humans in toxicity testing, and exhibits a diurnal sleep cycle with similarities to mammalian sleep behavior. [ 50 ] However, zebrafish are not a universally ideal research model; there are a number of disadvantages to their scientific use, such as the absence of a standard diet [ 51 ] and the presence of small but important differences between zebrafish and mammals in the roles of some genes related to human disorders. [ 52 ] [ 53 ]
Zebrafish have the ability to regenerate their heart and lateral line hair cells during their larval stages. [ 54 ] [ 55 ] The cardiac regenerative process likely involves signaling pathways such as Notch and Wnt ; hemodynamic changes in the damaged heart are sensed by ventricular endothelial cells and their associated cardiac cilia by way of the mechanosensitive ion channel TRPV4 , subsequently facilitating the Notch signaling pathway via KLF2 and activating various downstream effectors such as BMP-2 and HER2/neu . [ 56 ] In 2011, the British Heart Foundation ran an advertising campaign publicising its intention to study the applicability of this ability to humans, stating that it aimed to raise £50 million in research funding. [ 57 ] [ 58 ]
Zebrafish have also been found to regenerate photoreceptor cells and retinal neurons following injury, which has been shown to be mediated by the dedifferentiation and proliferation of Müller glia . [ 59 ] Researchers frequently amputate the dorsal and ventral tail fins and analyze their regrowth to test for mutations. It has been found that histone demethylation occurs at the site of the amputation, switching the zebrafish's cells to an "active", regenerative, stem cell-like state. [ 60 ] [ 61 ] In 2012, Australian scientists published a study revealing that zebrafish use a specialised protein , known as fibroblast growth factor , to ensure their spinal cords heal without glial scarring after injury. [ 6 ] [ 62 ] In addition, hair cells of the posterior lateral line have also been found to regenerate following damage or developmental disruption. [ 55 ] [ 63 ] Study of gene expression during regeneration has allowed for the identification of several important signaling pathways involved in the process, such as Wnt signaling and Fibroblast growth factor . [ 63 ] [ 64 ]
In probing disorders of the nervous system, including neurodegenerative diseases, movement disorders, psychiatric disorders and deafness, researchers are using the zebrafish to understand how the genetic defects underlying these conditions cause functional abnormalities in the human brain, spinal cord and sensory organs. [ 65 ] [ 66 ] [ 67 ] [ 68 ] Researchers have also studied the zebrafish to gain new insights into the complexities of human musculoskeletal diseases, such as muscular dystrophy . [ 69 ] Another focus of zebrafish research is to understand how a gene called Hedgehog , a biological signal that underlies a number of human cancers, controls cell growth.
Inbred strains and traditional outbred stocks have not been developed for laboratory zebrafish, and the genetic variability of wild-type lines among institutions may contribute to the replication crisis in biomedical research. [ 70 ] Genetic differences in wild-type lines among populations maintained at different research institutions have been demonstrated using both Single-nucleotide polymorphisms [ 71 ] and microsatellite analysis. [ 72 ]
Due to their short life cycles and relatively large clutch sizes, D. rerio or zebrafish are a useful model for genetic studies. A common reverse genetics technique is to reduce gene expression or modify splicing using Morpholino antisense technology. Morpholino oligonucleotides (MO) are stable, synthetic macromolecules that contain the same bases as DNA or RNA; by binding to complementary RNA sequences, they can reduce the expression of specific genes or block other processes from occurring on RNA. MO can be injected into one cell of an embryo after the 32-cell stage, reducing gene expression in only cells descended from that cell. However, cells in the early embryo (less than 32 cells) are permeable to large molecules, [ 73 ] [ 74 ] allowing diffusion between cells. Guidelines for using Morpholinos in zebrafish describe appropriate control strategies. [ 75 ] Morpholinos are commonly microinjected in 500 pL volumes directly into zebrafish embryos at the 1–2-cell stage, from where the morpholino distributes into most cells of the embryo. [ 76 ]
A known problem with gene knockdowns is that, because the genome underwent a duplication after the divergence of ray-finned fishes and lobe-finned fishes , it is not always easy to silence the activity of one of the two gene paralogs reliably due to complementation by the other paralog. [ 77 ] Despite the complications of the zebrafish genome , a number of commercially available global platforms exist for analysis of both gene expression by microarrays and promoter regulation using ChIP-on-chip . [ 78 ]
The Wellcome Trust Sanger Institute started the zebrafish genome sequencing project in 2001, and the full genome sequence of the Tuebingen reference strain is publicly available at the National Center for Biotechnology Information (NCBI)'s Zebrafish Genome Page . The zebrafish reference genome sequence is annotated as part of the Ensembl project , and is maintained by the Genome Reference Consortium . [ 79 ]
In 2009, researchers at the Institute of Genomics and Integrative Biology in Delhi, India, announced the sequencing of the genome of a wild zebrafish strain, containing an estimated 1.7 billion genetic letters. [ 80 ] [ 81 ] The genome of the wild zebrafish was sequenced at 39-fold coverage. Comparative analysis with the zebrafish reference genome revealed over 5 million single nucleotide variations and over 1.6 million insertion deletion variations. The zebrafish reference genome sequence of 1.4GB and over 26,000 protein coding genes was published by Kerstin Howe et al. in 2013. [ 82 ]
In October 2001, researchers from the University of Oklahoma published D. rerio's complete mitochondrial DNA sequence. [ 83 ] Its length is 16,596 base pairs. This is within 100 base pairs of other related species of fish, and it is notably only 18 base pairs longer than that of the goldfish ( Carassius auratus ) and 21 base pairs longer than that of the carp ( Cyprinus carpio ). Its gene order and content are identical to the common vertebrate form of mitochondrial DNA. It contains 13 protein -coding genes and a noncoding control region containing the origin of replication for the heavy strand. In between a grouping of five tRNA genes, a sequence resembling the vertebrate origin of light strand replication is found. Evolutionary conclusions are difficult to draw because it is hard to determine whether base pair changes have adaptive significance via comparisons with other vertebrates' nucleotide sequences. [ 83 ]
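The length comparisons in the preceding paragraph reduce to simple arithmetic. In the sketch below, the goldfish and carp lengths are back-calculated from the stated 18 bp and 21 bp differences, not taken independently from the cited study, so they are illustrative only:

```python
# Mitochondrial genome lengths in base pairs. The zebrafish value is the one
# reported above; the other two are inferred here from the stated differences.
mt_length_bp = {
    "Danio rerio": 16596,
    "Carassius auratus": 16596 - 18,  # goldfish
    "Cyprinus carpio": 16596 - 21,    # common carp
}
zebrafish = mt_length_bp["Danio rerio"]
differences = {sp: zebrafish - bp for sp, bp in mt_length_bp.items()
               if sp != "Danio rerio"}
print(differences)  # {'Carassius auratus': 18, 'Cyprinus carpio': 21}
```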
T-boxes and homeoboxes are vital in Danio , as in other vertebrates. [ 84 ] [ 85 ] Bruce et al. are known for work in this area; in Bruce et al. 2003 and Bruce et al. 2005 they uncovered the role of two of these elements in oocytes of this species. [ 84 ] [ 85 ] Using a dominant nonfunctional allele and a morpholino to interfere, they found that the T-box transcription activator Eomesodermin and its target mtx2 – a transcription factor – are vital to epiboly . [ 84 ] [ 85 ] (In Bruce et al. 2003 they failed to support the possibility that Eomesodermin behaves like Vegt . [ 84 ] Neither they nor anyone else has been able to locate any mutation which – in the mother – will prevent initiation of the mesoderm or endoderm development processes in this species.) [ 84 ]
In 1999, the nacre mutation was identified in the zebrafish ortholog of the mammalian MITF transcription factor. [ 86 ] Mutations in human MITF result in eye defects and loss of pigment, a type of Waardenburg Syndrome . In December 2005, a study of the golden strain identified the gene responsible for its unusual pigmentation as SLC24A5 , a solute carrier that appeared to be required for melanin production, and confirmed its function with a Morpholino knockdown. The orthologous gene was then characterized in humans and a one base pair difference was found to strongly segregate fair-skinned Europeans and dark-skinned Africans. [ 87 ] Zebrafish with the nacre mutation have since been bred with fish with a roy orbison (roy) mutation to make Casper strain fish that have no melanophores or iridophores, and are transparent into adulthood. These fish are characterized by uniformly pigmented eyes and translucent skin. [ 8 ] [ 88 ]
Transgenesis is a popular approach to study the function of genes in zebrafish. Construction of transgenic zebrafish is relatively easy using the Tol2 transposon system. The Tol2 element encodes a fully functional transposase capable of catalyzing transposition in the zebrafish germ lineage, and it is the only natural DNA transposable element in vertebrates from which an autonomous member has been identified. [ 89 ] [ 90 ] Examples include the artificial interaction produced between LEF1 and Catenin beta-1 /β-catenin/ CTNNB1 . Dorsky et al. 2002 investigated the developmental role of Wnt by transgenically expressing a Lef1/β-catenin reporter. [ 91 ] The Tol2 transposon system was used to develop transgenic zebrafish as sensitive biosensors for heavy metal detection. This involved creating a transgenic zebrafish line expressing a fluorescent protein under the control of a heavy metal-responsive promoter, enabling the detection of low concentrations of cadmium (Cd2+) and zinc (Zn2+). [ 92 ]
There are well-established protocols for editing zebrafish genes using CRISPR-Cas9 [ 93 ] and this tool has been used to generate genetically modified models.
In 2008, researchers at Boston Children's Hospital developed a new strain of zebrafish, named Casper, whose adult bodies had transparent skin. [ 8 ] This allows for detailed visualization of cellular activity, circulation, metastasis and many other phenomena. [ 8 ] In 2019 researchers published a crossing of a prkdc -/- and an IL2rga -/- strain that produced transparent, immunodeficient offspring, lacking natural killer cells as well as B - and T-cells . This strain can be adapted to 37 °C (99 °F) warm water and the absence of an immune system makes the use of patient derived xenografts possible. [ 94 ] In January 2013, Japanese scientists genetically modified a transparent zebrafish specimen to produce a visible glow during periods of intense brain activity. [ 9 ]
In January 2007, Chinese researchers at Fudan University genetically modified zebrafish to detect oestrogen pollution in lakes and rivers, which is linked to male infertility. The researchers cloned oestrogen-sensitive genes and injected them into the fertile eggs of zebrafish. The modified fish turned green if placed into water that was polluted by oestrogen. [ 7 ]
In 2015, researchers at Brown University discovered that 10% of zebrafish genes do not need to rely on the U2AF2 protein to initiate RNA splicing . These genes have the DNA base pairs AC and TG as repeated sequences at the ends of each intron . On the 3'ss (3' splicing site), the base pairs adenine and cytosine alternate and repeat, and on the 5'ss (5' splicing site), their complements thymine and guanine alternate and repeat as well. They found that there was less reliance on U2AF2 protein than in humans, in which the protein is required for the splicing process to occur. The pattern of repeating base pairs around introns that alters RNA secondary structure was found in other teleosts , but not in tetrapods . This indicates that an evolutionary change in tetrapods may have led to humans relying on the U2AF2 protein for RNA splicing while these genes in zebrafish undergo splicing regardless of the presence of the protein. [ 95 ]
D. rerio has three transferrins , all of which cluster closely with other vertebrates . [ 96 ]
When close relatives mate, progeny may exhibit the detrimental effects of inbreeding depression . Inbreeding depression is predominantly caused by the homozygous expression of recessive deleterious alleles. [ 97 ] For zebrafish, inbreeding depression might be expected to be more severe in stressful environments, including those caused by anthropogenic pollution . Exposure of zebrafish to environmental stress induced by the chemical clotrimazole, an imidazole fungicide used in agriculture and in veterinary and human medicine, amplified the effects of inbreeding on key reproductive traits. [ 98 ] Embryo viability was significantly reduced in inbred exposed fish and there was a tendency for inbred males to sire fewer offspring.
Zebrafish are common models for research into fish farming , including pathogens [ 99 ] [ 100 ] [ 101 ] and parasites [ 99 ] [ 101 ] causing yield loss or spreading to adjacent wild populations.
This usefulness is limited by Danio 's taxonomic distance from the most common aquaculture species. [ 100 ] Because the most commonly farmed fish are salmonids and cod in the Protacanthopterygii and sea bass , sea bream , tilapia , and flatfish in the Percomorpha , zebrafish results may not be perfectly applicable. [ 100 ] Various other models – goldfish ( Carassius auratus ), medaka ( Oryzias latipes ), stickleback ( Gasterosteus aculeatus ), roach ( Rutilus rutilus ), pufferfish ( Takifugu rubripes ), swordtail ( Xiphophorus hellerii ) – are less commonly used but would be closer to particular target species. [ 101 ]
The exceptions are the carps (including grass carp, Ctenopharyngodon idella ) [ 100 ] and milkfish ( Chanos chanos ), [ 101 ] which are comparatively close relatives, the carps belonging, like Danio , to the Cyprinidae . Even so, Danio consistently proves to be a useful model for mammals in many cases, despite the dramatically greater genetic distance between zebrafish and mammals than between Danio and any farmed fish. [ 100 ]
In a glucocorticoid receptor -defective mutant with reduced exploratory behavior , fluoxetine rescued the normal exploratory behavior. [ 102 ] This demonstrates relationships between glucocorticoids, fluoxetine, and exploration in this fish. [ 102 ]
Zebrafish have been used as a model for studying DNA repair pathways. [ 103 ] Embryos of externally fertilized fish species, such as zebrafish during their development, are directly exposed to environmental conditions such as pollutants and reactive oxygen species that may cause damage to their DNA . [ 103 ] To cope with such DNA damages, a variety of different DNA repair pathways are expressed during development. [ 103 ] Zebrafish have, in recent years, proven to be a useful model for assessing environmental pollutants that might cause DNA damage. [ 104 ]
Zebrafish and their larvae are suitable model organisms for drug discovery and development. As a vertebrate with 70% genetic homology with humans, [ 82 ] the zebrafish can be predictive of human health and disease, while its small size and fast development facilitate experiments on a larger and quicker scale than with more traditional in vivo studies, including the development of higher-throughput, automated investigative tools. [ 105 ] [ 106 ] As demonstrated through ongoing research programmes, the zebrafish model enables researchers not only to identify genes that might underlie human disease, but also to develop novel therapeutic agents in drug discovery programmes. [ 107 ] Zebrafish embryos have proven to be a rapid, cost-efficient, and reliable teratology assay model. [ 108 ]
Drug screens in zebrafish can be used to identify novel classes of compounds with biological effects, or to repurpose existing drugs for novel uses; an example of the latter would be a screen which found that a commonly used statin ( rosuvastatin ) can suppress the growth of prostate cancer . [ 109 ] To date, 65 small-molecule screens have been carried out and at least one has led to clinical trials. [ 110 ] Within these screens, many technical challenges remain to be resolved, including differing rates of drug absorption resulting in levels of internal exposure that cannot be extrapolated from the water concentration, and high levels of natural variation between individual animals. [ 110 ]
To understand drug effects, the internal drug exposure is essential, as this drives the pharmacological effect. Translating experimental results from zebrafish to higher vertebrates (like humans) requires concentration-effect relationships, which can be derived from pharmacokinetic and pharmacodynamic analysis. [ 5 ] Because of the zebrafish's small size, however, it is very challenging to quantify the internal drug exposure. Traditionally, multiple blood samples would be drawn to characterize the drug concentration profile over time, but such sampling techniques remain to be developed for zebrafish larvae. To date, only a single pharmacokinetic model, for paracetamol, has been developed in zebrafish larvae. [ 111 ]
Using smart data analysis methods, pathophysiological and pharmacological processes can be understood and subsequently translated to higher vertebrates, including humans. [ 5 ] [ 112 ] An example is the use of systems pharmacology , which is the integration of systems biology and pharmacometrics .
Systems biology characterizes (part of) an organism by a mathematical description of all relevant processes. These can be for example different signal transduction pathways that upon a specific signal lead to a certain response. By quantifying these processes, their behaviour in healthy and diseased situation can be understood and predicted.
Pharmacometrics uses data from preclinical experiments and clinical trials to characterize the pharmacological processes that are underlying the relation between the drug dose and its response or clinical outcome. These can be for example the drug absorption in or clearance from the body, or its interaction with the target to achieve a certain effect. By quantifying these processes, their behaviour after different doses or in different patients can be understood and predicted to new doses or patients.
By integrating these two fields, systems pharmacology has the potential to improve the understanding of the interaction of the drug with the biological system by mathematical quantification and subsequent prediction to new situations, like new drugs or new organisms or patients.
Using these computational methods, the previously mentioned analysis of paracetamol internal exposure in zebrafish larvae showed reasonable correlation between paracetamol clearance in zebrafish with that of higher vertebrates, including humans. [ 111 ]
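As a minimal sketch of the kind of pharmacokinetic model described above, a one-compartment model with first-order elimination relates dose, volume of distribution, and clearance to the concentration-time profile. All parameter values here are arbitrary illustrative numbers, not measured zebrafish or human paracetamol parameters:

```python
import math

def concentration(dose: float, volume: float, clearance: float, t: float) -> float:
    """One-compartment model, bolus dose, first-order elimination:
    C(t) = (dose / V) * exp(-(CL / V) * t)
    """
    k_el = clearance / volume              # elimination rate constant
    return (dose / volume) * math.exp(-k_el * t)

# Illustrative parameters (arbitrary units): dose 100, volume 10, clearance 2.
c0 = concentration(100.0, 10.0, 2.0, 0.0)         # initial concentration = dose/V
t_half = math.log(2) * 10.0 / 2.0                 # half-life = ln(2) * V / CL
c_half = concentration(100.0, 10.0, 2.0, t_half)  # half of the initial concentration
```

A pharmacometric analysis fits parameters such as CL and V to observed concentration data, which is precisely what the limited sampling volume of zebrafish larvae makes challenging.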
Zebrafish have been used to make several transgenic models of cancer, including melanoma , leukemia , pancreatic cancer and hepatocellular carcinoma . [ 113 ] [ 114 ] Zebrafish expressing mutated forms of either the BRAF or NRAS oncogenes develop melanoma when placed onto a p53 deficient background. Histologically , these tumors strongly resemble the human disease, are fully transplantable, and exhibit large-scale genomic alterations. The BRAF melanoma model was utilized as a platform for two screens published in March 2011 in the journal Nature . In one study, the model was used as a tool to understand the functional importance of genes known to be amplified and overexpressed in human melanoma. [ 115 ] One gene, SETDB1, markedly accelerated tumor formation in the zebrafish system, demonstrating its importance as a new melanoma oncogene. This was particularly significant because SETDB1 is known to be involved in the epigenetic regulation that is increasingly appreciated to be central to tumor cell biology.
In another study, an effort was made to therapeutically target the genetic program present in the tumor's origin neural crest cell using a chemical screening approach. [ 116 ] This revealed that an inhibition of the DHODH protein (by a small molecule called leflunomide) prevented development of the neural crest stem cells which ultimately give rise to melanoma via interference with the process of transcriptional elongation . Because this approach would aim to target the "identity" of the melanoma cell rather than a single genetic mutation, leflunomide may have utility in treating human melanoma. [ 117 ]
In cardiovascular research, the zebrafish has been used to model human myocardial infarction . The zebrafish heart completely regenerates within about two months of injury, without any scar formation. [ 118 ] The alpha-1 adrenergic signalling mechanism involved in this process was identified in a 2023 study. [ 119 ] Zebrafish are also used as a model for blood clotting , blood vessel development , and congenital heart and kidney disease . [ 120 ]
In programmes of research into acute inflammation , a major underpinning process in many diseases, researchers have established a zebrafish model of inflammation, and its resolution. This approach allows detailed study of the genetic controls of inflammation and the possibility of identifying potential new drugs. [ 121 ]
Zebrafish have been extensively used as a model organism to study vertebrate innate immunity. The innate immune system is capable of phagocytic activity by 28 to 30 hours postfertilization (hpf), [ 122 ] while adaptive immunity is not functionally mature until at least 4 weeks postfertilization. [ 123 ]
As the immune system is relatively conserved between zebrafish and humans, many human infectious diseases can be modeled in zebrafish. [ 124 ] [ 125 ] [ 126 ] [ 127 ] The transparent early life stages are well suited for in vivo imaging and genetic dissection of host-pathogen interactions. [ 128 ] [ 129 ] [ 130 ] [ 131 ] Zebrafish models for a wide range of bacterial, viral and parasitic pathogens have already been established; for example, the zebrafish model for tuberculosis provides fundamental insights into the mechanisms of pathogenesis of mycobacteria. [ 132 ] [ 133 ] [ 134 ] [ 135 ] Other bacteria commonly studied using zebrafish models include Clostridioides difficile , Staphylococcus aureus , and Pseudomonas aeruginosa . [ 136 ] Furthermore, robotic technology has been developed for high-throughput antimicrobial drug screening using zebrafish infection models. [ 137 ] [ 138 ]
Another notable characteristic of the zebrafish is that it possesses four types of cone cell , with ultraviolet -sensitive cells supplementing the red, green and blue cone cell subtypes found in humans. Zebrafish can thus observe a very wide spectrum of colours. The species is also studied to better understand the development of the retina; in particular, how the cone cells of the retina become arranged into the so-called 'cone mosaic'. Zebrafish, in addition to certain other teleost fish, are particularly noted for having extreme precision of cone cell arrangement. [ 139 ]
This study of the zebrafish's retinal characteristics has also extrapolated into medical enquiry. In 2007, researchers at University College London grew a type of zebrafish adult stem cell found in the eyes of fish and mammals that develops into neurons in the retina. These could be injected into the eye to treat diseases that damage retinal neurons—nearly every disease of the eye, including macular degeneration , glaucoma , and diabetes -related blindness. The researchers studied Müller glial cells in the eyes of humans aged from 18 months to 91 years, and were able to develop them into all types of retinal neurons. They were also able to grow them easily in the lab. The stem cells successfully migrated into diseased rats' retinas, and took on the characteristics of the surrounding neurons. The team stated that they intended to develop the same approach in humans. [ 140 ] [ 141 ]
Muscular dystrophies (MD) are a heterogeneous group of genetic disorders that cause muscle weakness, abnormal contractions and muscle wasting, often leading to premature death. Zebrafish are widely used as a model organism to study muscular dystrophies. [ 48 ] For example, the sapje ( sap ) mutant is the zebrafish orthologue of human Duchenne muscular dystrophy (DMD). [ 142 ] Machuca-Tzili and co-workers applied zebrafish to determine the role of the alternative splicing factor MBNL in myotonic dystrophy type 1 (DM1) pathogenesis. [ 143 ] More recently, Todd et al. described a new zebrafish model designed to explore the impact of CUG repeat expression during early development in DM1 disease. [ 144 ] Zebrafish are also an excellent animal model for studying congenital muscular dystrophies, including CMD type 1A (CMD 1A), caused by mutation in the human laminin α2 (LAMA2) gene. [ 145 ] The zebrafish, because of the advantages discussed above, and in particular the ability of zebrafish embryos to absorb chemicals, has become a model of choice in screening and testing new drugs against muscular dystrophies. [ 146 ]
Zebrafish have been used as model organisms for bone metabolism, tissue turnover, and resorbing activity. These processes are largely evolutionarily conserved. Zebrafish have been used to study osteogenesis (bone formation), evaluating differentiation, matrix deposition activity, and cross-talk of skeletal cells; to create and isolate mutants modeling human bone diseases; and to test new chemical compounds for the ability to revert bone defects. [ 147 ] [ 148 ] The larvae can be used to follow new ( de novo ) osteoblast formation during bone development; they start mineralising bone elements as early as 4 days post fertilisation. More recently, adult zebrafish have been used to study complex age-related bone diseases such as osteoporosis and osteogenesis imperfecta . [ 149 ] The (elasmoid) scales of zebrafish function as a protective external layer and are small bony plates made by osteoblasts. These exoskeletal structures are formed by bone matrix-depositing osteoblasts and are remodeled by osteoclasts. The scales also act as the main calcium storage of the fish. They can be cultured ex vivo (kept alive outside of the organism) in a multi-well plate, which allows manipulation with drugs and even screening for new drugs that could change bone metabolism (between osteoblasts and osteoclasts). [ 149 ] [ 150 ] [ 151 ]
Zebrafish pancreas development is highly homologous to that of mammals, such as mice; the signaling mechanisms and the way the pancreas functions are very similar. The pancreas has an endocrine compartment containing a variety of cells: pancreatic PP cells, which produce pancreatic polypeptide, and β-cells, which produce insulin, are two examples. This structure of the pancreas, along with the glucose homeostasis system, is helpful in studying diseases, such as diabetes, that are related to the pancreas. Models for pancreas function, such as fluorescent staining of proteins, are useful in determining the processes of glucose homeostasis and the development of the pancreas. Glucose tolerance tests have been developed using zebrafish, and can now be used to test for glucose intolerance or diabetes in humans. The function of insulin is also being tested in zebrafish, which will further contribute to human medicine. Much of the current knowledge of glucose homeostasis has come from work on zebrafish that was later transferred to humans. [ 152 ]
Zebrafish have been used as a model system to study obesity, with research into both genetic obesity and over-nutrition induced obesity. Obese zebrafish, similar to obese mammals, show dysregulation of lipid controlling metabolic pathways, which leads to weight gain without normal lipid metabolism. [ 152 ] Also like mammals, zebrafish store excess lipids in visceral, intramuscular, and subcutaneous adipose deposits. These reasons and others make zebrafish good models for studying obesity in humans and other species. Genetic obesity is usually studied in transgenic or mutated zebrafish with obesogenic genes. As an example, transgenic zebrafish with overexpressed AgRP, an endogenous melanocortin antagonist, showed increased body weight and adipose deposition during growth. [ 152 ] Though zebrafish genes may not be the exact same as human genes, these tests could provide important insight into possible genetic causes and treatments for human genetic obesity. [ 152 ] Diet-induced obesity zebrafish models are useful, as diet can be modified from a very early age. High fat diets and general overfeeding diets both show rapid increases in adipose deposition, increased BMI, hepatosteatosis, and hypertriglyceridemia. [ 152 ] However, the normal fat, overfed specimens are still metabolically healthy, while high-fat diet specimens are not. [ 152 ] Understanding differences between types of feeding-induced obesity could prove useful in human treatment of obesity and related health conditions. [ 152 ]
Zebrafish have been used as a model system in environmental toxicology studies. [ 31 ]
The combination of transparent zebrafish larva, light sheet fluorescence microscopy , and optical calcium indicators such as GCaMP , allow the monitoring of all neurons in an awake, behaving animal. [ 153 ]
Zebrafish have been used as a model system to study epilepsy. Mammalian seizures can be recapitulated molecularly, behaviorally, and electrophysiologically, using a fraction of the resources required for experiments in mammals. [ 154 ]
|
https://en.wikipedia.org/wiki/Zebrafish
|
Zech logarithms are used to implement addition in finite fields when elements are represented as powers of a generator α {\displaystyle \alpha } .
Zech logarithms are named after Julius Zech , [ 1 ] [ 2 ] [ 3 ] [ 4 ] and are also called Jacobi logarithms , [ 5 ] after Carl G. J. Jacobi who used them for number theoretic investigations. [ 6 ]
Given a primitive element α {\displaystyle \alpha } of a finite field, the Zech logarithm relative to the base α {\displaystyle \alpha } is defined by the equation α Z α ( n ) = 1 + α n , {\displaystyle \alpha ^{Z_{\alpha }(n)}=1+\alpha ^{n},}
which is often rewritten as Z α ( n ) = log α ⁡ ( 1 + α n ) . {\displaystyle Z_{\alpha }(n)=\log _{\alpha }(1+\alpha ^{n}).}
The choice of base α {\displaystyle \alpha } is usually dropped from the notation when it is clear from the context.
To be more precise, Z α {\displaystyle Z_{\alpha }} is a function on the integers modulo the multiplicative order of α {\displaystyle \alpha } , and takes values in the same set. In order to describe every element, it is convenient to formally add a new symbol − ∞ {\displaystyle -\infty } , along with the definitions α − ∞ = 0 , Z α ( e ) = − ∞ , Z α ( − ∞ ) = 0 , {\displaystyle \alpha ^{-\infty }=0,\qquad Z_{\alpha }(e)=-\infty ,\qquad Z_{\alpha }(-\infty )=0,}
where e {\displaystyle e} is an integer satisfying α e = − 1 {\displaystyle \alpha ^{e}=-1} , that is e = 0 {\displaystyle e=0} for a field of characteristic 2, and e = q − 1 2 {\displaystyle e={\frac {q-1}{2}}} for a field of odd characteristic with q {\displaystyle q} elements.
Using the Zech logarithm, finite field arithmetic can be done in the exponential representation: α m ⋅ α n = α m + n , {\displaystyle \alpha ^{m}\cdot \alpha ^{n}=\alpha ^{m+n},} ( α m ) − 1 = α − m , {\displaystyle (\alpha ^{m})^{-1}=\alpha ^{-m},} − α n = α e + n , {\displaystyle -\alpha ^{n}=\alpha ^{e+n},} α m + α n = α m ⋅ ( 1 + α n − m ) = α m + Z α ( n − m ) , {\displaystyle \alpha ^{m}+\alpha ^{n}=\alpha ^{m}\cdot (1+\alpha ^{n-m})=\alpha ^{m+Z_{\alpha }(n-m)},} α m − α n = α m + Z α ( e + n − m ) . {\displaystyle \alpha ^{m}-\alpha ^{n}=\alpha ^{m+Z_{\alpha }(e+n-m)}.}
These formulas remain true with our conventions with the symbol − ∞ {\displaystyle -\infty } , with the caveat that subtraction of − ∞ {\displaystyle -\infty } is undefined. In particular, the addition and subtraction formulas need to treat m = − ∞ {\displaystyle m=-\infty } as a special case.
This can be extended to arithmetic of the projective line by introducing another symbol + ∞ {\displaystyle +\infty } satisfying α + ∞ = ∞ {\displaystyle \alpha ^{+\infty }=\infty } and other rules as appropriate.
For fields of characteristic 2, Z α ( n ) = m ⟺ Z α ( m ) = n . {\displaystyle Z_{\alpha }(n)=m\iff Z_{\alpha }(m)=n.}
For sufficiently small finite fields, a table of Zech logarithms allows an especially efficient implementation of all finite field arithmetic in terms of a small number of integer addition/subtractions and table look-ups.
The utility of this method diminishes for large fields where one cannot efficiently store the table. This method is also inefficient when doing very few operations in the finite field, because one spends more time computing the table than one does in actual calculation.
Let α ∈ GF(2 3 ) be a root of the primitive polynomial x 3 + x 2 + 1 . The traditional representation of elements of this field is as polynomials in α of degree 2 or less.
A table of Zech logarithms for this field is Z (−∞) = 0 , Z (0) = −∞ , Z (1) = 5 , Z (2) = 3 , Z (3) = 2 , Z (4) = 6 , Z (5) = 1 , and Z (6) = 4 . The multiplicative order of α is 7, so the exponential representation works with integers modulo 7.
Since α is a root of x 3 + x 2 + 1 , we have α 3 + α 2 + 1 = 0 . Recalling that all coefficients lie in GF(2), where subtraction is the same as addition, this gives α 3 = α 2 + 1 .
The conversion from exponential to polynomial representations is given by α 3 = α 2 + 1 (as derived above), α 4 = α 3 + α = α 2 + α + 1 , α 5 = α 3 + α 2 + α = α + 1 , and α 6 = α 2 + α , together with α 0 = 1 , α 1 = α , and α 2 = α 2 .
Using Zech logarithms to compute α 6 + α 3 : α 6 + α 3 = α 3 ⋅ ( 1 + α 3 ) = α 3 ⋅ α Z ( 3 ) = α 3 ⋅ α 2 = α 5 ,
or, more efficiently, α 6 + α 3 = α 3 + Z ( 3 ) = α 3 + 2 = α 5 ,
and verifying it in the polynomial representation: α 6 + α 3 = ( α 2 + α ) + ( α 2 + 1 ) = α + 1 = α 5 .
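The table-based arithmetic above is straightforward to implement. The following Python sketch (illustrative; the names `zech` and `add_exp` are our own) builds the power and Zech-logarithm tables for GF(2^3) with primitive polynomial x^3 + x^2 + 1, representing field elements as bitmasks, and performs addition entirely in the exponential representation:

```python
# Zech-logarithm arithmetic in GF(2^3) with primitive polynomial
# x^3 + x^2 + 1 (bitmask 0b1101). Illustrative sketch: `None` plays
# the role of the formal symbol -infinity (the "exponent" of 0).

MOD_POLY = 0b1101   # x^3 + x^2 + 1
ORDER = 7           # multiplicative order of alpha in GF(2^3)
NEG_INF = None      # formal exponent of the zero element

def power_table():
    """Map exponent n -> polynomial representation of alpha^n (bitmask)."""
    table, x = {}, 1
    for n in range(ORDER):
        table[n] = x
        x <<= 1                 # multiply by alpha
        if x & 0b1000:          # reduce modulo x^3 + x^2 + 1
            x ^= MOD_POLY
    return table

POW = power_table()
LOG = {v: k for k, v in POW.items()}   # polynomial -> exponent

def zech(n):
    """Z(n) with alpha^Z(n) = 1 + alpha^n; Z(0) = -inf in characteristic 2."""
    if n is NEG_INF:
        return 0
    y = POW[n % ORDER] ^ 1             # 1 + alpha^n over GF(2)
    return NEG_INF if y == 0 else LOG[y]

def add_exp(m, n):
    """Exponent of alpha^m + alpha^n, using only the Zech table."""
    if m is NEG_INF:
        return n
    if n is NEG_INF:
        return m
    z = zech((n - m) % ORDER)
    return NEG_INF if z is NEG_INF else (m + z) % ORDER

print({n: zech(n) for n in range(ORDER)})  # {0: None, 1: 5, 2: 3, 3: 2, 4: 6, 5: 1, 6: 4}
print(add_exp(6, 3))                       # 5, i.e. alpha^6 + alpha^3 = alpha^5
```

In characteristic 2 negation is the identity (e = 0), so subtraction coincides with addition and `add_exp` covers both.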
|
https://en.wikipedia.org/wiki/Zech's_logarithm
|
In mathematics , Zeckendorf's theorem , named after Belgian amateur mathematician Edouard Zeckendorf , is a theorem about the representation of integers as sums of Fibonacci numbers .
Zeckendorf's theorem states that every positive integer can be represented uniquely as the sum of one or more distinct Fibonacci numbers in such a way that the sum does not include any two consecutive Fibonacci numbers. More precisely, if N is any positive integer, there exist positive integers c i ≥ 2 , with c i + 1 > c i + 1 {\displaystyle c_{i+1}>c_{i}+1} , such that N = ∑ i = 0 k F c i , {\displaystyle N=\sum _{i=0}^{k}F_{c_{i}},}
where F n is the n th Fibonacci number. Such a sum is called the Zeckendorf representation of N . The Fibonacci coding of N can be derived from its Zeckendorf representation.
For example, the Zeckendorf representation of 64 is 64 = 55 + 8 + 1 = F 10 + F 6 + F 2 .
There are other ways of representing 64 as the sum of Fibonacci numbers, for example 64 = 55 + 5 + 3 + 1 or 64 = 34 + 21 + 8 + 1 ,
but these are not Zeckendorf representations because 34 and 21 are consecutive Fibonacci numbers, as are 5 and 3.
For any given positive integer, its Zeckendorf representation can be found by using a greedy algorithm , choosing the largest possible Fibonacci number at each stage.
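A minimal Python sketch of this greedy algorithm (the function name is our own):

```python
# Greedy computation of the Zeckendorf representation: repeatedly take
# the largest Fibonacci number not exceeding the remainder.

def zeckendorf(n):
    """Return the Zeckendorf representation of n > 0 as a decreasing list."""
    if n <= 0:
        raise ValueError("n must be a positive integer")
    # Fibonacci numbers 1, 2, 3, 5, 8, ... (F_2, F_3, ...), up to n
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f
    return parts

print(zeckendorf(64))   # [55, 8, 1]
print(zeckendorf(100))  # [89, 8, 3]
```

Because each greedy step leaves a remainder smaller than the next-lower Fibonacci number, the chosen terms are automatically distinct and non-consecutive.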
While the theorem is named after Zeckendorf, who published his paper in 1972, the same result had been published 20 years earlier by Gerrit Lekkerkerker . [ 1 ] As such, the theorem is an example of Stigler's Law of Eponymy .
Zeckendorf's theorem has two parts: existence (every positive integer N has a Zeckendorf representation) and uniqueness (no positive integer N has two different Zeckendorf representations).
The first part of Zeckendorf's theorem (existence) can be proven by induction . For n = 1, 2, 3 it is clearly true (as these are Fibonacci numbers), for n = 4 we have 4 = 3 + 1 . If n is a Fibonacci number then there is nothing to prove. Otherwise there exists j such that F j < n < F j + 1 . Now suppose each positive integer a < n has a Zeckendorf representation (induction hypothesis) and consider b = n − F j . Since b < n , b has a Zeckendorf representation by the induction hypothesis. At the same time, b = n − F j < F j + 1 − F j = F j − 1 (we apply the definition of Fibonacci number in the last equality), so the Zeckendorf representation of b does not contain F j − 1 , and hence also does not contain F j . As a result, n can be represented as the sum of F j and the Zeckendorf representation of b , such that the Fibonacci numbers involved in the sum are distinct. [ 2 ]
The second part of Zeckendorf's theorem (uniqueness) requires the following lemma: the sum of any non-empty set of distinct, non-consecutive Fibonacci numbers whose largest member is F j is strictly less than the next larger Fibonacci number F j + 1 .
The lemma can be proven by induction on j .
Now take two non-empty sets S {\displaystyle S} and T {\displaystyle T} of distinct non-consecutive Fibonacci numbers which have the same sum, ∑ x ∈ S x = ∑ x ∈ T x {\textstyle \sum _{x\in S}x=\sum _{x\in T}x} . Consider sets S ′ {\displaystyle S'} and T ′ {\displaystyle T'} which are equal to S {\displaystyle S} and T {\displaystyle T} from which the common elements have been removed (i. e. S ′ = S ∖ T {\displaystyle S'=S\setminus T} and T ′ = T ∖ S {\displaystyle T'=T\setminus S} ). Since S {\displaystyle S} and T {\displaystyle T} had equal sum, and we have removed exactly the elements from S ∩ T {\displaystyle S\cap T} from both sets, S ′ {\displaystyle S'} and T ′ {\displaystyle T'} must have the same sum as well, ∑ x ∈ S ′ x = ∑ x ∈ T ′ x {\textstyle \sum _{x\in S'}x=\sum _{x\in T'}x} .
Now we will show by contradiction that at least one of S ′ {\displaystyle S'} and T ′ {\displaystyle T'} is empty. Assume the contrary, i. e. that S ′ {\displaystyle S'} and T ′ {\displaystyle T'} are both non-empty and let the largest member of S ′ {\displaystyle S'} be F s and the largest member of T ′ {\displaystyle T'} be F t . Because S ′ {\displaystyle S'} and T ′ {\displaystyle T'} contain no common elements, F s ≠ F t . Without loss of generality , suppose F s < F t . Then by the lemma, ∑ x ∈ S ′ x < F s + 1 {\textstyle \sum _{x\in S'}x<F_{s+1}} , and, by the fact that F s < F s + 1 ≤ F t {\textstyle F_{s}<F_{s+1}\leq F_{t}} , ∑ x ∈ S ′ x < F t {\textstyle \sum _{x\in S'}x<F_{t}} , whereas clearly ∑ x ∈ T ′ x ≥ F t {\textstyle \sum _{x\in T'}x\geq F_{t}} . This contradicts the fact that S ′ {\displaystyle S'} and T ′ {\displaystyle T'} have the same sum, and we can conclude that either S ′ {\displaystyle S'} or T ′ {\displaystyle T'} must be empty.
Now assume (again without loss of generality) that S ′ {\displaystyle S'} is empty. Then S ′ {\displaystyle S'} has sum 0, and so must T ′ {\displaystyle T'} . But since T ′ {\displaystyle T'} can only contain positive integers, it must be empty too. To conclude: S ′ = T ′ = ∅ {\displaystyle S'=T'=\emptyset } which implies S = T {\displaystyle S=T} , proving that each Zeckendorf representation is unique. [ 2 ]
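Both existence and uniqueness can also be verified exhaustively for small integers. The sketch below (illustrative, not part of the original proof) enumerates every set of distinct, non-consecutive Fibonacci numbers F_2, F_3, … and confirms that each n from 1 to 100 has exactly one such representation:

```python
# Brute-force check of Zeckendorf's theorem for small integers.

from itertools import combinations

def fibs_upto(limit):
    """Fibonacci numbers F_2, F_3, ... not exceeding limit."""
    seq = [1, 2]
    while seq[-1] + seq[-2] <= limit:
        seq.append(seq[-1] + seq[-2])
    return seq

def count_representations(n, fibs):
    """Number of sets of distinct, non-consecutive Fibonacci numbers summing to n."""
    count = 0
    for r in range(1, len(fibs) + 1):
        for combo in combinations(range(len(fibs)), r):
            # adjacent list positions correspond to consecutive Fibonacci numbers
            if any(b - a == 1 for a, b in zip(combo, combo[1:])):
                continue
            if sum(fibs[i] for i in combo) == n:
                count += 1
    return count

fibs = fibs_upto(200)
assert all(count_representations(n, fibs) == 1 for n in range(1, 101))
print("every n in 1..100 has a unique Zeckendorf representation")
```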
One can define the following operation a ∘ b {\displaystyle a\circ b} on natural numbers a , b : given the Zeckendorf representations a = ∑ i = 0 k F c i ( c i ≥ 2 ) {\displaystyle a=\sum _{i=0}^{k}F_{c_{i}}\;(c_{i}\geq 2)} and b = ∑ j = 0 l F d j ( d j ≥ 2 ) {\displaystyle b=\sum _{j=0}^{l}F_{d_{j}}\;(d_{j}\geq 2)} we define the Fibonacci product a ∘ b = ∑ i = 0 k ∑ j = 0 l F c i + d j . {\displaystyle a\circ b=\sum _{i=0}^{k}\sum _{j=0}^{l}F_{c_{i}+d_{j}}.}
For example, the Zeckendorf representation of 2 is F 3 {\displaystyle F_{3}} , and the Zeckendorf representation of 4 is F 4 + F 2 {\displaystyle F_{4}+F_{2}} ( F 1 {\displaystyle F_{1}} is disallowed from representations), so 2 ∘ 4 = F 3 + 4 + F 3 + 2 = 13 + 5 = 18. {\displaystyle 2\circ 4=F_{3+4}+F_{3+2}=13+5=18.}
(The product is not always in Zeckendorf form. For example, 4 ∘ 4 = ( F 4 + F 2 ) ∘ ( F 4 + F 2 ) = F 4 + 4 + 2 F 4 + 2 + F 2 + 2 = 21 + 2 ⋅ 8 + 3 = 40 = F 9 + F 5 + F 2 . {\displaystyle 4\circ 4=(F_{4}+F_{2})\circ (F_{4}+F_{2})=F_{4+4}+2F_{4+2}+F_{2+2}=21+2\cdot 8+3=40=F_{9}+F_{5}+F_{2}.} )
A simple rearrangement of sums shows that this is a commutative operation; however, Donald Knuth proved the surprising fact that this operation is also associative . [ 3 ]
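A direct Python sketch of the Fibonacci product (the helper names are our own) reproduces the examples above and spot-checks commutativity and Knuth's associativity:

```python
# Fibonacci product via Zeckendorf indices c_i >= 2.

def fib(n):
    """F_n with F_1 = F_2 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def zeckendorf_indices(n):
    """Indices c_i >= 2 with n = sum of F_{c_i}, no two consecutive."""
    idx, k = [], 2
    while fib(k + 1) <= n:
        k += 1
    while n > 0:
        while fib(k) > n:
            k -= 1
        idx.append(k)
        n -= fib(k)
    return idx

def fib_product(a, b):
    """Fibonacci product: sum of F_{c_i + d_j} over both index sets."""
    return sum(fib(i + j) for i in zeckendorf_indices(a)
                          for j in zeckendorf_indices(b))

print(fib_product(2, 4))  # 18
print(fib_product(4, 4))  # 40
```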
The Fibonacci sequence can be extended to negative index n using the rearranged recurrence relation F n − 2 = F n − F n − 1 , {\displaystyle F_{n-2}=F_{n}-F_{n-1},}
which yields the sequence of " negafibonacci " numbers satisfying F − n = ( − 1 ) n + 1 F n . {\displaystyle F_{-n}=(-1)^{n+1}F_{n}.}
Any integer can be uniquely represented [ 4 ] as a sum of negafibonacci numbers in which no two consecutive negafibonacci numbers are used. For example: − 11 = F − 4 + F − 6 = ( − 3 ) + ( − 8 ) , and 12 = F − 2 + F − 7 = ( − 1 ) + 13 .
0 = F −1 + F −2 , for example, so the uniqueness of the representation does depend on the condition that no two consecutive negafibonacci numbers are used.
This gives a system of coding integers , similar to the representation of Zeckendorf's theorem. In the string representing the integer x , the n th digit is 1 if F −n appears in the sum that represents x ; that digit is 0 otherwise. For example, 24 may be represented by the string 100101001, which has the digit 1 in places 9, 6, 4, and 1, because 24 = F −1 + F −4 + F −6 + F −9 . The integer x is represented by a string of odd length if and only if x > 0 .
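Computing the negafibonacci representation is slightly subtler than the positive case, because a naive greedy choice by magnitude can fail. One correct approach — a sketch with our own function names, not taken from the source — uses the fact that the integers representable with non-adjacent indices at most n form a contiguous interval, and includes F_{−n} exactly when the remainder cannot be represented without it:

```python
# Negafibonacci representation: x = sum of F_{-n_i} with no two
# adjacent indices. F_{-n} = (-1)**(n+1) * F_n.

def fib(n):
    """Positive-index Fibonacci number F_n (F_1 = F_2 = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def rng(n):
    """Interval of integers representable with non-adjacent indices <= n."""
    if n == 0:
        return (0, 0)
    if n % 2:                        # odd n: F_{-n} = +F_n is positive
        return (1 - fib(n), fib(n + 1))
    return (1 - fib(n + 1), fib(n))  # even n: F_{-n} = -F_n is negative

def negafib_indices(x):
    """Decreasing indices n_i with x = sum of F_{-n_i}, no two adjacent."""
    n = 1
    while not rng(n)[0] <= x <= rng(n)[1]:
        n += 1
    idx = []
    while n >= 1:
        lo, hi = rng(n - 1)
        if not lo <= x <= hi:        # x cannot be written without F_{-n}
            idx.append(n)
            x -= (-1) ** (n + 1) * fib(n)   # subtract F_{-n}
            n -= 2                   # enforce non-adjacency
        else:
            n -= 1
    return idx

print(negafib_indices(24))   # [9, 6, 4, 1]: 24 = 34 - 8 - 3 + 1
print(negafib_indices(-11))  # [6, 4]: -11 = -8 - 3
```

Because the representation is unique, each inclusion is forced, so this interval test never needs to backtrack.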
This article incorporates material from proof that the Zeckendorf representation of a positive integer is unique on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License .
|
https://en.wikipedia.org/wiki/Zeckendorf's_theorem
|
The Zeeman effect ( Dutch: [ˈzeːmɑn] ) is the splitting of a spectral line into several components in the presence of a static magnetic field . It is caused by the interaction of the magnetic field with the magnetic moment of the atomic electron associated with its orbital motion and spin ; this interaction shifts some orbital energies more than others, resulting in the split spectrum. The effect is named after the Dutch physicist Pieter Zeeman , who discovered it in 1896 and received a Nobel Prize in Physics for this discovery. It is analogous to the Stark effect , the splitting of a spectral line into several components in the presence of an electric field . Also, similar to the Stark effect, transitions between different components have, in general, different intensities, with some being entirely forbidden (in the dipole approximation), as governed by the selection rules .
Since the distance between the Zeeman sub-levels is a function of magnetic field strength, this effect can be used to measure magnetic field strength, e.g. that of the Sun and other stars or in laboratory plasmas .
In 1896 Zeeman learned that his laboratory had one of Henry Augustus Rowland 's highest resolving diffraction gratings . Zeeman had read James Clerk Maxwell 's article in Encyclopædia Britannica describing Michael Faraday 's failed attempts to influence light with magnetism. Zeeman wondered if the new spectrographic techniques could succeed where early efforts had not. [ 1 ] : 75
When illuminated by a slit-shaped source, the grating produces a long array of slit images corresponding to different wavelengths. Zeeman placed a piece of asbestos soaked in salt water into a Bunsen burner flame at the source of the grating: he could easily see two lines for sodium light emission. Energizing a 10- kilogauss magnet around the flame, he observed a slight broadening of the sodium images. [ 1 ] : 76
When Zeeman switched to cadmium as the source, he observed the images split when the magnet was energized. These splittings could be analyzed with Hendrik Lorentz 's then-new electron theory . In retrospect, we now know that the magnetic effects on sodium require quantum-mechanical treatment. [ 1 ] : 77 Zeeman and Lorentz were awarded the 1902 Nobel Prize; in his acceptance speech Zeeman explained his apparatus and showed slides of the spectrographic images. [ 2 ]
Historically, one distinguishes between the normal and an anomalous Zeeman effect (discovered by Thomas Preston in Dublin, Ireland [ 3 ] ). The anomalous effect appears on transitions where the net spin of the electrons is non-zero. It was called "anomalous" because the electron spin had not yet been discovered, and so there was no good explanation for it at the time that Zeeman observed the effect. Wolfgang Pauli recalled that when asked by a colleague as to why he looked unhappy, he replied: "How can one look happy when he is thinking about the anomalous Zeeman effect?" [ 4 ]
At higher magnetic field strength the effect ceases to be linear. At even higher field strengths, comparable to the strength of the atom's internal field, the electron coupling is disturbed and the spectral lines rearrange. This is called the Paschen–Back effect .
In modern scientific literature, these terms are rarely used, with a tendency to use just "Zeeman effect". Another rarely used term is inverse Zeeman effect , [ 5 ] referring to the Zeeman effect in an absorption spectral line.
A similar effect, splitting of the nuclear energy levels in the presence of a magnetic field, is referred to as the nuclear Zeeman effect . [ 6 ]
The total Hamiltonian of an atom in a magnetic field is H = H 0 + V M , {\displaystyle H=H_{0}+V_{\text{M}},} where H 0 {\displaystyle H_{0}} is the unperturbed Hamiltonian of the atom, and V M {\displaystyle V_{\text{M}}} is the perturbation due to the magnetic field: V M = − μ → ⋅ B → , {\displaystyle V_{\text{M}}=-{\vec {\mu }}\cdot {\vec {B}},} where μ → {\displaystyle {\vec {\mu }}} is the magnetic moment of the atom. The magnetic moment consists of the electronic and nuclear parts; however, the latter is many orders of magnitude smaller and will be neglected here. Therefore, μ → ≈ − μ B g J → ℏ , {\displaystyle {\vec {\mu }}\approx -{\frac {\mu _{\text{B}}g{\vec {J}}}{\hbar }},} where μ B {\displaystyle \mu _{\text{B}}} is the Bohr magneton , J → {\displaystyle {\vec {J}}} is the total electronic angular momentum , and g {\displaystyle g} is the Landé g-factor .
A more accurate approach is to take into account that the operator of the magnetic moment of an electron is a sum of the contributions of the orbital angular momentum L → {\displaystyle {\vec {L}}} and the spin angular momentum S → {\displaystyle {\vec {S}}} , with each multiplied by the appropriate gyromagnetic ratio : μ → = − μ B ( g l L → + g s S → ) ℏ , {\displaystyle {\vec {\mu }}=-{\frac {\mu _{\text{B}}(g_{l}{\vec {L}}+g_{s}{\vec {S}})}{\hbar }},} where g l = 1 {\displaystyle g_{l}=1} , and g s ≈ 2.0023193 {\displaystyle g_{s}\approx 2.0023193} (the anomalous gyromagnetic ratio , deviating from 2 due to the effects of quantum electrodynamics ). In the case of the LS coupling , one can sum over all electrons in the atom: g J → = ⟨ ∑ i ( g l l → i + g s s → i ) ⟩ = ⟨ ( g l L → + g s S → ) ⟩ , {\displaystyle g{\vec {J}}={\Big \langle }\sum _{i}(g_{l}{\vec {l}}_{i}+g_{s}{\vec {s}}_{i}){\Big \rangle }={\big \langle }(g_{l}{\vec {L}}+g_{s}{\vec {S}}){\big \rangle },} where L → {\displaystyle {\vec {L}}} and S → {\displaystyle {\vec {S}}} are the total orbital angular momentum and spin of the atom, and averaging is done over a state with a given value of the total angular momentum.
If the interaction term V M {\displaystyle V_{\text{M}}} is small (less than the fine structure ), it can be treated as a perturbation; this is the Zeeman effect proper. In the Paschen–Back effect, described below, V M {\displaystyle V_{\text{M}}} exceeds the LS coupling significantly (but is still small compared to H 0 {\displaystyle H_{0}} ). In ultra-strong magnetic fields, the magnetic-field interaction may exceed H 0 {\displaystyle H_{0}} , in which case the atom can no longer exist in its normal meaning, and one talks about Landau levels instead. There are intermediate cases that are more complex than these limit cases.
If the spin–orbit interaction dominates over the effect of the external magnetic field, L → {\displaystyle {\vec {L}}} and S → {\displaystyle {\vec {S}}} are not separately conserved, only the total angular momentum J → = L → + S → {\displaystyle {\vec {J}}={\vec {L}}+{\vec {S}}} is. The spin and orbital angular momentum vectors can be thought of as precessing about the (fixed) total angular momentum vector J → {\displaystyle {\vec {J}}} . The (time-)"averaged" spin vector is then the projection of the spin onto the direction of J → {\displaystyle {\vec {J}}} : S → avg = ( S → ⋅ J → ) J 2 J → , {\displaystyle {\vec {S}}_{\text{avg}}={\frac {({\vec {S}}\cdot {\vec {J}})}{J^{2}}}{\vec {J}},} and for the (time-)"averaged" orbital vector: L → avg = ( L → ⋅ J → ) J 2 J → . {\displaystyle {\vec {L}}_{\text{avg}}={\frac {({\vec {L}}\cdot {\vec {J}})}{J^{2}}}{\vec {J}}.}
Thus ⟨ V M ⟩ = μ B ℏ J → ( g L L → ⋅ J → J 2 + g S S → ⋅ J → J 2 ) ⋅ B → . {\displaystyle \langle V_{\text{M}}\rangle ={\frac {\mu _{\text{B}}}{\hbar }}{\vec {J}}\left(g_{L}{\frac {{\vec {L}}\cdot {\vec {J}}}{J^{2}}}+g_{S}{\frac {{\vec {S}}\cdot {\vec {J}}}{J^{2}}}\right)\cdot {\vec {B}}.} Using L → = J → − S → {\displaystyle {\vec {L}}={\vec {J}}-{\vec {S}}} and squaring both sides, we get S → ⋅ J → = 1 2 ( J 2 + S 2 − L 2 ) = ℏ 2 2 [ j ( j + 1 ) − l ( l + 1 ) + s ( s + 1 ) ] , {\displaystyle {\vec {S}}\cdot {\vec {J}}={\frac {1}{2}}(J^{2}+S^{2}-L^{2})={\frac {\hbar ^{2}}{2}}[j(j+1)-l(l+1)+s(s+1)],} and using S → = J → − L → {\displaystyle {\vec {S}}={\vec {J}}-{\vec {L}}} and squaring both sides, we get L → ⋅ J → = 1 2 ( J 2 − S 2 + L 2 ) = ℏ 2 2 [ j ( j + 1 ) + l ( l + 1 ) − s ( s + 1 ) ] . {\displaystyle {\vec {L}}\cdot {\vec {J}}={\frac {1}{2}}(J^{2}-S^{2}+L^{2})={\frac {\hbar ^{2}}{2}}[j(j+1)+l(l+1)-s(s+1)].}
Combining everything and taking J z = ℏ m j {\displaystyle J_{z}=\hbar m_{j}} , we obtain the magnetic potential energy of the atom in the applied external magnetic field: V M = μ B B m j [ g L j ( j + 1 ) + l ( l + 1 ) − s ( s + 1 ) 2 j ( j + 1 ) + g S j ( j + 1 ) − l ( l + 1 ) + s ( s + 1 ) 2 j ( j + 1 ) ] = μ B B m j [ 1 + ( g S − 1 ) j ( j + 1 ) − l ( l + 1 ) + s ( s + 1 ) 2 j ( j + 1 ) ] = μ B B m j g J , {\displaystyle {\begin{aligned}V_{\text{M}}&=\mu _{\text{B}}Bm_{j}\left[g_{L}{\frac {j(j+1)+l(l+1)-s(s+1)}{2j(j+1)}}+g_{S}{\frac {j(j+1)-l(l+1)+s(s+1)}{2j(j+1)}}\right]\\&=\mu _{\text{B}}Bm_{j}\left[1+(g_{S}-1){\frac {j(j+1)-l(l+1)+s(s+1)}{2j(j+1)}}\right]\\&=\mu _{\text{B}}Bm_{j}g_{J},\end{aligned}}} where the quantity in square brackets is the Landé g-factor g J {\displaystyle g_{J}} of the atom ( g L = 1 , {\displaystyle g_{L}=1,} g S ≈ 2 {\displaystyle g_{S}\approx 2} ), and m j {\displaystyle m_{j}} is the z component of the total angular momentum.
For a single electron above filled shells, with s = 1 / 2 {\displaystyle s=1/2} and j = l ± s {\displaystyle j=l\pm s} , the Landé g-factor can be simplified to g J = 1 ± g S − 1 2 l + 1 . {\displaystyle g_{J}=1\pm {\frac {g_{S}-1}{2l+1}}.}
Taking V M {\displaystyle V_{\text{M}}} to be the perturbation, the Zeeman correction to the energy is E Z ( 1 ) = ⟨ n l j m j | H Z ′ | n l j m j ⟩ = ⟨ V M ⟩ Ψ = μ B g J B ext m j . {\displaystyle E_{\text{Z}}^{(1)}=\langle nljm_{j}|H_{\text{Z}}^{'}|nljm_{j}\rangle =\langle V_{\text{M}}\rangle _{\Psi }=\mu _{\text{B}}g_{J}B_{\text{ext}}m_{j}.}
The Lyman-alpha transition in hydrogen in the presence of the spin–orbit interaction involves the transitions 2 2 P 1 / 2 → 1 2 S 1 / 2 {\displaystyle 2\,^{2}\!P_{1/2}\to 1\,^{2}\!S_{1/2}} and 2 2 P 3 / 2 → 1 2 S 1 / 2 . {\displaystyle 2\,^{2}\!P_{3/2}\to 1\,^{2}\!S_{1/2}.}
In the presence of an external magnetic field, the weak-field Zeeman effect splits the 1 2 S 1 / 2 {\displaystyle 1\,^{2}\!S_{1/2}} and 2 2 P 1 / 2 {\displaystyle 2\,^{2}\!P_{1/2}} levels into 2 states each ( m j = + 1 / 2 , − 1 / 2 {\displaystyle m_{j}=+1/2,-1/2} ) and the 2 2 P 3 / 2 {\displaystyle 2\,^{2}\!P_{3/2}} level into 4 states ( m j = + 3 / 2 , + 1 / 2 , − 1 / 2 , − 3 / 2 {\displaystyle m_{j}=+3/2,+1/2,-1/2,-3/2} ). The Landé g-factors for the three levels are g J = 2 for 1 2 S 1 / 2 ( j = 1 / 2 , l = 0 ) , g J = 2 / 3 for 2 2 P 1 / 2 ( j = 1 / 2 , l = 1 ) , g J = 4 / 3 for 2 2 P 3 / 2 ( j = 3 / 2 , l = 1 ) . {\displaystyle {\begin{aligned}g_{J}&=2&&{\text{for}}\ 1\,^{2}\!S_{1/2}\ (j=1/2,l=0),\\g_{J}&=2/3&&{\text{for}}\ 2\,^{2}\!P_{1/2}\ (j=1/2,l=1),\\g_{J}&=4/3&&{\text{for}}\ 2\,^{2}\!P_{3/2}\ (j=3/2,l=1).\end{aligned}}}
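The Landé g-factor formula can be evaluated exactly with rational arithmetic. The sketch below (illustrative naming; g_S is taken as exactly 2, matching the quoted level values) reproduces the three Lyman-alpha g-factors and evaluates the first-order shift E_Z(1) = μ_B g_J B_ext m_j:

```python
# Lande g-factor g_J = g_L*[j(j+1)+l(l+1)-s(s+1)]/[2j(j+1)]
#                    + g_S*[j(j+1)-l(l+1)+s(s+1)]/[2j(j+1)],
# evaluated exactly with Fractions; g_S = 2 here (QED correction ignored).

from fractions import Fraction as F

def lande_g(j, l, s, g_L=1, g_S=2):
    j, l, s = F(j), F(l), F(s)
    denom = 2 * j * (j + 1)
    return (g_L * (j*(j+1) + l*(l+1) - s*(s+1)) / denom
            + g_S * (j*(j+1) - l*(l+1) + s*(s+1)) / denom)

# The three levels of the Lyman-alpha transition in hydrogen (s = 1/2):
print(lande_g(F(1, 2), 0, F(1, 2)))  # 2     for 1 2S_1/2
print(lande_g(F(1, 2), 1, F(1, 2)))  # 2/3   for 2 2P_1/2
print(lande_g(F(3, 2), 1, F(1, 2)))  # 4/3   for 2 2P_3/2

def zeeman_shift(B, g_J, m_j, mu_B=9.2740100783e-24):  # Bohr magneton, J/T
    """First-order Zeeman correction E = mu_B * g_J * B * m_j, in joules."""
    return mu_B * float(g_J) * B * m_j
```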
Note in particular that the size of the energy splitting is different for the different orbitals because the g J values are different. Fine-structure splitting occurs even in the absence of a magnetic field, as it is due to spin–orbit coupling. Depicted on the right is the additional Zeeman splitting, which occurs in the presence of magnetic fields.
The Paschen–Back effect is the splitting of atomic energy levels in the presence of a strong magnetic field. This occurs when an external magnetic field is sufficiently strong to disrupt the coupling between orbital ( L → {\displaystyle {\vec {L}}} ) and spin ( S → {\displaystyle {\vec {S}}} ) angular momenta. This effect is the strong-field limit of the Zeeman effect. When s = 0 {\displaystyle s=0} , the two effects are equivalent. The effect was named after the German physicists Friedrich Paschen and Ernst E. A. Back . [ 7 ]
When the magnetic-field perturbation significantly exceeds the spin–orbit interaction, one can safely assume [ H 0 , S ] = 0 {\displaystyle [H_{0},S]=0} . This allows the expectation values of L z {\displaystyle L_{z}} and S z {\displaystyle S_{z}} to be easily evaluated for a state | ψ ⟩ {\displaystyle |\psi \rangle } . The energies are simply E z = ⟨ ψ | H 0 + B μ B ℏ ( L z + g s S z ) | ψ ⟩ = E 0 + μ B B ( m l + g s m s ) . {\displaystyle E_{z}=\left\langle \psi \left|H_{0}+{\frac {B\mu _{\text{B}}}{\hbar }}(L_{z}+g_{s}S_{z})\right|\psi \right\rangle =E_{0}+\mu _{\text{B}}B(m_{l}+g_{s}m_{s}).}
The above may be read as implying that the LS-coupling is completely broken by the external field. However, m l {\displaystyle m_{l}} and m s {\displaystyle m_{s}} are still "good" quantum numbers. Together with the selection rules for an electric dipole transition , i.e., Δ s = 0 , Δ m s = 0 , Δ l = ± 1 , Δ m l = 0 , ± 1 {\displaystyle \Delta s=0,\Delta m_{s}=0,\Delta l=\pm 1,\Delta m_{l}=0,\pm 1} , this allows one to ignore the spin degree of freedom altogether. As a result, only three spectral lines will be visible, corresponding to the Δ m l = 0 , ± 1 {\displaystyle \Delta m_{l}=0,\pm 1} selection rule. The splitting Δ E = B μ B Δ m l {\displaystyle \Delta E=B\mu _{\rm {B}}\Delta m_{l}} is independent of the unperturbed energies and electronic configurations of the levels being considered.
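The three-line pattern implied by these selection rules can be checked by enumerating the allowed 2p → 1s dipole transitions in the strong-field limit. A minimal sketch (names are our own; fine structure is neglected and g_s is taken as 2, and the Bohr magneton value is the CODATA figure rounded):

```python
# Strong-field (Paschen-Back) limit: E = E_0 + mu_B*B*(m_l + g_s*m_s),
# so allowed dipole lines (delta m_s = 0, delta m_l = 0, +-1) split
# into exactly three components spaced by mu_B*B.

from itertools import product

MU_B = 9.274e-24  # Bohr magneton, J/T (rounded)
G_S = 2           # spin g-factor, QED correction ignored

def paschen_back_shift(B, m_l, m_s):
    """Zeeman energy in the strong-field limit (fine structure neglected)."""
    return MU_B * B * (m_l + G_S * m_s)

def line_shifts(B):
    """Shifts of allowed 2p -> 1s lines relative to the field-free line."""
    shifts = set()
    for m_l, m_s in product((-1, 0, 1), (-0.5, 0.5)):   # upper 2p states
        for dm_l in (-1, 0, 1):                          # selection rule
            if m_l + dm_l == 0:                          # lower 1s has m_l = 0
                # delta m_s = 0, so the spin term cancels between the levels
                shifts.add(paschen_back_shift(B, m_l, m_s)
                           - paschen_back_shift(B, 0, m_s))
    return sorted(shifts)

print(line_shifts(1.0))  # three values: -MU_B*B, 0, +MU_B*B
```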
More precisely, if s ≠ 0 {\displaystyle s\neq 0} , each of these three components is actually a group of several transitions due to the residual spin–orbit coupling and relativistic corrections (which are of the same order, known as 'fine structure'). The first-order perturbation theory with these corrections yields the following formula for the hydrogen atom in the Paschen–Back limit: [ 8 ]
In this example, the fine-structure corrections are ignored.
[Table: initial ( n = 2 , l = 1 {\displaystyle n=2,l=1} ) and final ( n = 1 , l = 0 {\displaystyle n=1,l=0} ) states of the transition, labeled by ∣ m l , m s ⟩ {\displaystyle \mid m_{l},m_{s}\rangle } .]
In the magnetic dipole approximation, the Hamiltonian which includes both the hyperfine and Zeeman interactions is [ citation needed ]
where A {\displaystyle A} is the hyperfine splitting at zero applied magnetic field, μ B {\displaystyle \mu _{\rm {B}}} and μ N {\displaystyle \mu _{\rm {N}}} are the Bohr magneton and nuclear magneton , respectively (note that the last term in the expression above describes the nuclear Zeeman effect), J → {\displaystyle {\vec {J}}} and I → {\displaystyle {\vec {I}}} are the electron and nuclear angular momentum operators and g J {\displaystyle g_{J}} is the Landé g-factor : g J = g L J ( J + 1 ) + L ( L + 1 ) − S ( S + 1 ) 2 J ( J + 1 ) + g S J ( J + 1 ) − L ( L + 1 ) + S ( S + 1 ) 2 J ( J + 1 ) . {\displaystyle g_{J}=g_{L}{\frac {J(J+1)+L(L+1)-S(S+1)}{2J(J+1)}}+g_{S}{\frac {J(J+1)-L(L+1)+S(S+1)}{2J(J+1)}}.}
In the case of weak magnetic fields, the Zeeman interaction can be treated as a perturbation to the | F , m F ⟩ {\displaystyle |F,m_{F}\rangle } basis. In the high-field regime, the magnetic field becomes so strong that the Zeeman effect will dominate, and one must use a more complete basis of | I , J , m I , m J ⟩ {\displaystyle |I,J,m_{I},m_{J}\rangle } or just | m I , m J ⟩ {\displaystyle |m_{I},m_{J}\rangle } since I {\displaystyle I} and J {\displaystyle J} will be constant within a given level.
To get the complete picture, including intermediate field strengths, we must consider eigenstates which are superpositions of the | F , m F ⟩ {\displaystyle |F,m_{F}\rangle } and | m I , m J ⟩ {\displaystyle |m_{I},m_{J}\rangle } basis states. For J = 1 / 2 {\displaystyle J=1/2} , the Hamiltonian can be solved analytically, resulting in the Breit–Rabi formula (named after Gregory Breit and Isidor Isaac Rabi ). Notably, the electric quadrupole interaction is zero for L = 0 {\displaystyle L=0} ( J = 1 / 2 {\displaystyle J=1/2} ), so this formula is fairly accurate.
We now utilize quantum mechanical ladder operators , which are defined for a general angular momentum operator L {\displaystyle L} as L ± = L x ± i L y . {\displaystyle L_{\pm }=L_{x}\pm iL_{y}.}
These ladder operators have the property L ± | L , m L ⟩ = ℏ L ( L + 1 ) − m L ( m L ± 1 ) | L , m L ± 1 ⟩ {\displaystyle L_{\pm }|L,m_{L}\rangle =\hbar {\sqrt {L(L+1)-m_{L}(m_{L}\pm 1)}}\;|L,m_{L}\pm 1\rangle }
as long as m L {\displaystyle m_{L}} lies in the range − L ≤ m L ≤ L {\displaystyle -L\leq m_{L}\leq L} (otherwise, they return zero). Using the ladder operators J ± {\displaystyle J_{\pm }} and I ± {\displaystyle I_{\pm }} we can rewrite the Hamiltonian as
We can now see that at all times, the total angular momentum projection m F = m J + m I {\displaystyle m_{F}=m_{J}+m_{I}} will be conserved. This is because both J z {\displaystyle J_{z}} and I z {\displaystyle I_{z}} leave states with definite m J {\displaystyle m_{J}} and m I {\displaystyle m_{I}} unchanged, while J + I − {\displaystyle J_{+}I_{-}} and J − I + {\displaystyle J_{-}I_{+}} either increase m J {\displaystyle m_{J}} and decrease m I {\displaystyle m_{I}} or vice versa, so the sum is always unaffected. Furthermore, since J = 1 / 2 {\displaystyle J=1/2} there are only two possible values of m J {\displaystyle m_{J}} which are ± 1 / 2 {\displaystyle \pm 1/2} . Therefore, for every value of m F {\displaystyle m_{F}} there are only two possible states, and we can define them as the basis:
This pair of states is a two-level quantum mechanical system . Now we can determine the matrix elements of the Hamiltonian:
Solving for the eigenvalues of this matrix – as can be done by hand (see two-level quantum mechanical system ), or more easily, with a computer algebra system – we arrive at the energy shifts: Δ E F = I ± 1 / 2 = − h Δ W 2 ( 2 I + 1 ) + g I μ N m F B ± h Δ W 2 1 + 4 m F x 2 I + 1 + x 2 , x = ( g J μ B − g I μ N ) B h Δ W , {\displaystyle \Delta E_{F=I\pm 1/2}=-{\frac {h\Delta W}{2(2I+1)}}+g_{I}\mu _{\text{N}}m_{F}B\pm {\frac {h\Delta W}{2}}{\sqrt {1+{\frac {4m_{F}x}{2I+1}}+x^{2}}},\qquad x={\frac {(g_{J}\mu _{\text{B}}-g_{I}\mu _{\text{N}})B}{h\Delta W}},}
where Δ W {\displaystyle \Delta W} is the splitting (in units of Hz) between two hyperfine sublevels in the absence of magnetic field B {\displaystyle B} , and the dimensionless quantity x {\displaystyle x} is referred to as the 'field strength parameter' (Note: for m F = ± ( I + 1 / 2 ) {\displaystyle m_{F}=\pm (I+1/2)} the expression under the square root is an exact square, and so the last term should be replaced by + h Δ W 2 ( 1 ± x ) {\displaystyle +{\frac {h\Delta W}{2}}(1\pm x)} ). This equation is known as the Breit–Rabi formula and is useful for systems with one valence electron in an s {\displaystyle s} ( J = 1 / 2 {\displaystyle J=1/2} ) level. [ 9 ] [ 10 ]
Note that the index F {\displaystyle F} in Δ E F = I ± 1 / 2 {\displaystyle \Delta E_{F=I\pm 1/2}} should be considered not as the total angular momentum of the atom but as the asymptotic total angular momentum . It is equal to the total angular momentum only if B = 0 {\displaystyle B=0} ; otherwise the eigenvectors corresponding to different eigenvalues of the Hamiltonian are superpositions of states with different F {\displaystyle F} but equal m F {\displaystyle m_{F}} (the only exceptions are | F = I + 1 / 2 , m F = ± F ⟩ {\displaystyle |F=I+1/2,m_{F}=\pm F\rangle } ).
George Ellery Hale was the first to notice the Zeeman effect in the solar spectra, indicating the existence of strong magnetic fields in sunspots. Such fields can be quite high, on the order of 0.1 tesla or higher. Today, the Zeeman effect is used to produce magnetograms showing the variation of magnetic field on the Sun, [ 11 ] and to analyze the magnetic field geometries in other stars. [ 12 ]
The Zeeman effect is utilized in many laser cooling applications such as a magneto-optical trap and the Zeeman slower . [ 13 ]
Zeeman-energy mediated coupling of spin and orbital motions
is used in spintronics for controlling electron spins in quantum dots through electric dipole spin resonance . [ 14 ]
Old high-precision frequency standards, i.e. hyperfine structure transition-based atomic clocks, may require periodic fine-tuning due to exposure to magnetic fields. This is carried out by measuring the Zeeman effect on specific hyperfine structure transition levels of the source element (cesium) and applying a uniformly precise, low-strength magnetic field to said source, in a process known as degaussing . [ 15 ]
The Zeeman effect may also be utilized to improve accuracy in atomic absorption spectroscopy . [ citation needed ]
A theory about the magnetic sense of birds assumes that a protein in the retina is changed due to the Zeeman effect. [ 16 ]
The nuclear Zeeman effect is important in such applications as nuclear magnetic resonance spectroscopy, magnetic resonance imaging (MRI), and Mössbauer spectroscopy . [ citation needed ]
The electron spin resonance spectroscopy is based on the Zeeman effect. [ citation needed ]
The Zeeman effect can be demonstrated by placing a sodium vapor source in a powerful electromagnet and viewing a sodium vapor lamp through the magnet opening (see diagram). With the magnet off, the sodium vapor will block the lamp light; when the magnet is turned on, the lamp light becomes visible through the vapor.
The sodium vapor can be created by sealing sodium metal in an evacuated glass tube and heating it while the tube is in the magnet. [ 17 ]
Alternatively, salt ( sodium chloride ) on a ceramic stick can be placed in the flame of Bunsen burner as the sodium vapor source. When the magnetic field is energized, the lamp image will be brighter. [ 18 ] However, the magnetic field also affects the flame, making the observation depend upon more than just the Zeeman effect. [ 17 ] These issues also plagued Zeeman's original work; he devoted considerable effort to ensure his observations were truly an effect of magnetism on light emission. [ 19 ]
When salt is added to the Bunsen burner, it dissociates to give sodium and chloride . The sodium atoms are excited by photons from the sodium vapour lamp, with electrons promoted from the 3s to the 3p states, absorbing light in the process. The sodium vapour lamp emits light at 589 nm, which has precisely the energy needed to excite an electron of a sodium atom; an atom of another element, such as chlorine, would not absorb this light, and no shadow would be formed. [ 20 ] [ failed verification ] When a magnetic field is applied, the Zeeman effect splits the spectral line of sodium into several components, so the energy difference between the 3s and 3p atomic orbitals changes. Since the sodium vapour lamp no longer delivers precisely the right frequency, the light is not absorbed and passes through, causing the shadow to dim. As the magnetic field strength is increased, the shift in the spectral lines increases and lamp light is transmitted. [ citation needed ]
|
https://en.wikipedia.org/wiki/Zeeman_effect
|
Zeeman energy , or the external field energy, is the potential energy of a magnetised body in an external magnetic field. It is named after the Dutch physicist Pieter Zeeman , primarily known for the Zeeman effect . In SI units, it is given by
where H Ext is the external field, M the local magnetisation, and the integral is taken over the volume of the body. This is the statistical average (over a unit-volume macroscopic sample) of the corresponding microscopic Hamiltonian (energy) for each individual magnetic moment m , which, however, experiences the local induction B :
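For a uniformly magnetised body in a uniform external field, the volume integral above reduces to a single dot product, and the macroscopic expression agrees with the microscopic −m·B form when B = μ0 H. A minimal sketch (the function names are illustrative, not from this article):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, SI units

def dot(u, v):
    """Plain 3-vector dot product."""
    return sum(a * b for a, b in zip(u, v))

def zeeman_energy_uniform(M, H_ext, volume):
    """E = -mu0 * (M . H_ext) * V for uniform magnetisation and field."""
    return -MU0 * dot(M, H_ext) * volume

def moment_energy(m, B):
    """Microscopic form: E = -m . B for a single magnetic moment."""
    return -dot(m, B)
```

With the total moment m = M V and B = μ0 H, the two forms give the same (negative, i.e. favourable) energy for M aligned with the field.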
|
https://en.wikipedia.org/wiki/Zeeman_energy
|
In astrophysics , Zeeman–Doppler imaging is a tomographic technique dedicated to the cartography of stellar magnetic fields , as well as surface brightness or spots and temperature distributions.
This method makes use of the ability of magnetic fields to polarize the light emitted (or absorbed) in spectral lines formed in the stellar atmosphere (the Zeeman effect ). The periodic modulation of Zeeman signatures during the stellar rotation is employed to make an iterative reconstruction of the vectorial magnetic field at stellar surface.
The method was first proposed by Marsh and Horne in 1988, as a way to interpret the emission line variations of cataclysmic variable stars . [ 1 ] The technique is based on the principle of maximum-entropy image reconstruction: it yields the simplest magnetic field geometry (as a spherical harmonic expansion) among the various solutions compatible with the data. [ 2 ]
This technique is the first to enable the reconstruction of the vectorial magnetic geometry of stars similar to the Sun . It now enables systematic studies of stellar magnetism and provides insights into the geometry of large arches formed by magnetic fields above stellar surfaces. To collect the observations related to Zeeman-Doppler Imaging, astronomers use stellar spectropolarimeters like ESPaDOnS [ 3 ] at CFHT on Mauna Kea ( Hawaii ), HARPSpol [ 4 ] at the ESO's 3.6m telescope ( La Silla Observatory , Chile ), as well as NARVAL [ 5 ] at Bernard Lyot Telescope ( Pic du Midi de Bigorre , France ).
The technique is very reliable, as reconstructions of the magnetic field maps with different algorithms yield almost identical results, even with poorly sampled data sets. [ 6 ] It makes use of high-resolution time-series spectropolarimetric observations ( Stokes parameter spectra). [ 7 ] It has, however, been shown from both numerical simulations [ 8 ] and observations [ 9 ] that the magnetic field strength and complexity are underestimated if no linear polarization spectra are available. Since linear polarization signatures are weaker than circular polarization signatures, their detection is less reliable, particularly for cool stars; therefore, observations are normally limited to the Stokes I and V parameters. [ 10 ] With more modern spectropolarimeters such as the recently installed SPIRou [ 11 ] at CFHT and CRIRES+ [ 12 ] at the Very Large Telescope ( Chile ), the sensitivity to linear polarization will increase, allowing more detailed studies of cool stars in the future.
|
https://en.wikipedia.org/wiki/Zeeman–Doppler_imaging
|
The Zeisel determination or Zeisel test is a chemical test for the presence of esters or ethers in a chemical substance . [ 1 ] [ 2 ] [ 3 ] [ 4 ]
It is named after the Czech chemist Simon Zeisel (1854–1933). In a qualitative test a sample is first reacted with a mixture of acetic acid and hydrogen iodide in a test tube . The ensuing reaction results in the cleavage of the ether or the ester into an alkyl iodide and respectively an alcohol or a carboxylic acid .
By heating this mixture, the gases are brought into contact with a piece of paper saturated with silver nitrate placed higher up the test tube. Any alkyl iodide present will react with the silver compound to form silver iodide , which has a yellow color. By filtering and weighing this precipitate it is possible to quantitatively determine the number of iodine atoms and hence of alkoxy groups. For example, prior to the development of the more precise methods of NMR spectroscopy and mass spectrometry , the Zeisel test was widely used to determine the number of methoxy (-OCH 3 ) and ethoxy (-OCH 2 CH 3 ) groups in carbohydrates and organophosphorus insecticides. [ 5 ]
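The quantitative step described above is simple stoichiometry: each cleaved alkoxy group yields one equivalent of AgI (molar mass about 234.77 g/mol). A hedged sketch, with anisole (one methoxy group, 108.14 g/mol) as a worked example; the function name and the sample numbers are illustrative:

```python
M_AGI = 234.77  # molar mass of silver iodide, g/mol

def alkoxy_groups_per_molecule(m_agi, m_sample, molar_mass_sample):
    """Alkoxy groups per molecule from the mass of the AgI precipitate.

    Each cleaved alkoxy group yields one equivalent of AgI.
    Masses in grams, molar mass in g/mol.
    """
    n_agi = m_agi / M_AGI                    # mol of AgI collected
    n_sample = m_sample / molar_mass_sample  # mol of analysed compound
    return n_agi / n_sample

# Worked example: 1 mmol of anisole (one -OCH3 group) should give
# about 1 mmol, i.e. roughly 0.2348 g, of AgI precipitate.
ratio = alkoxy_groups_per_molecule(0.2348, 0.10814, 108.14)
```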
An alternative qualitative Zeisel test can be done with the use of mercury(II) nitrate instead of silver nitrate, leading to the formation of scarlet red mercury(II) iodide . [ 5 ]
Synthetic applications:
|
https://en.wikipedia.org/wiki/Zeisel_determination
|
Zeitschrift für Physikalische Chemie (English: Journal of Physical Chemistry ) is a monthly peer-reviewed scientific journal covering physical chemistry that is published by Oldenbourg Wissenschaftsverlag . Its English subtitle is "International Journal of Research in Physical Chemistry and Chemical Physics". It was established in 1887 by Wilhelm Ostwald , Jacobus Henricus van 't Hoff , and Svante August Arrhenius as the first scientific journal for publications specifically in the field of physical chemistry. [ 1 ] The editor-in-chief is Klaus Rademann ( Humboldt University of Berlin ).
The journal is abstracted and indexed in:
According to the Journal Citation Reports , the journal has a 2020 impact factor of 2.408. [ 2 ]
|
https://en.wikipedia.org/wiki/Zeitschrift_für_Physikalische_Chemie
|
The Zel'dovich mechanism is a chemical mechanism that describes the oxidation of nitrogen and the formation of NO x , first proposed by the Russian physicist Yakov Borisovich Zel'dovich in 1946. [ 1 ] [ 2 ] [ 3 ] [ 4 ] The reaction mechanism reads as
where k 1 {\displaystyle k_{1}} and k 2 {\displaystyle k_{2}} are the reaction rate constants in Arrhenius law . The overall global reaction is given by
The overall reaction rate is mostly governed by the first reaction (i.e., the rate-determining reaction ), since the second reaction is much faster than the first and occurs immediately following it. Under fuel-rich conditions, reaction 2 becomes weak due to the lack of oxygen; hence a third reaction is included in the mechanism, which is then known as the extended Zel'dovich mechanism (with all three reactions), [ 5 ] [ 6 ]
Assuming the initial concentration of NO is low and the reverse reactions can therefore be ignored, the forward rate constants of the reactions are given by [ 7 ]
where the pre-exponential factor is measured in units of cm, mol, s and K, the temperature in kelvins , and the activation energy in cal/mol; R is the universal gas constant .
The rate of NO concentration increase is given by
Similarly, the rate of N concentration increase is
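Under the quasi-steady-state assumption for atomic N implied above (the fast second step consumes each N atom as soon as the slow first step produces it), the NO production rate reduces to 2 k1 [O][N2]. A sketch of this estimate; the Arrhenius parameters below are representative values for the first step, not taken from this article:

```python
import math

R_CAL = 1.987  # gas constant, cal/(mol K)

def arrhenius(A, Ea, T):
    """k = A * exp(-Ea / (R T)), with Ea in cal/mol and T in K."""
    return A * math.exp(-Ea / (R_CAL * T))

def d_no_dt(T, c_o, c_n2, A1=1.8e14, E1=76000.0):
    """Quasi-steady estimate d[NO]/dt = 2 k1 [O][N2].

    The slow, rate-determining first step controls; each N atom
    produced is immediately converted to a second NO by the fast
    second step.  A1 and E1 are representative values (assumption).
    """
    return 2.0 * arrhenius(A1, E1, T) * c_o * c_n2
```

The steep activation energy makes thermal NO production extremely temperature-sensitive, which is why it is dominant only in the hottest regions of a flame.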
|
https://en.wikipedia.org/wiki/Zeldovich_mechanism
|
The Zeldovich number is a dimensionless number that provides a quantitative measure of the activation energy of a chemical reaction appearing in the Arrhenius exponent. It is named after the Russian scientist Yakov Borisovich Zeldovich , who, along with David A. Frank-Kamenetskii , first introduced it in their 1938 paper. [ 1 ] [ 2 ] [ 3 ] At the 1983 ICDERS meeting in Poitiers , it was decided that the non-dimensional number would be named after Zeldovich. [ 4 ]
It is defined as
where
In terms of heat release parameter q {\displaystyle q} , it is given by
For typical combustion phenomena, the value of the Zel'dovich number lies in the range β ≈ 8 − 20 {\displaystyle \beta \approx 8-20} . Activation energy asymptotics uses this number as the large parameter of expansion.
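The definition is easy to evaluate numerically. The sketch below computes β = E(T_b − T_u)/(R T_b²) both directly and via the heat-release parameter q = (T_b − T_u)/T_u; the numerical inputs are illustrative, not from this article:

```python
R_GAS = 8.314  # universal gas constant, J/(mol K)

def zeldovich_number(E, T_u, T_b):
    """beta = E (T_b - T_u) / (R T_b^2), with E in J/mol, T in K."""
    return E * (T_b - T_u) / (R_GAS * T_b ** 2)

def zeldovich_number_q(E, T_u, q):
    """Equivalent form via the heat-release parameter q = (T_b - T_u)/T_u."""
    return E * q / (R_GAS * T_u * (1 + q) ** 2)
```

With an illustrative activation energy of 170 kJ/mol, unburnt temperature 300 K and burnt temperature 2100 K, both forms give β ≈ 8.3, inside the typical 8–20 range.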
|
https://en.wikipedia.org/wiki/Zeldovich_number
|
Zeldovich regularization refers to a regularization method for calculating divergent integrals and divergent series , first introduced by Yakov Zeldovich in 1961. [ 1 ] Zeldovich was originally interested in calculating the norm of the Gamow wave function , which is divergent since there is an outgoing spherical wave. Zeldovich regularization uses a Gaussian-type regularization and is defined, for divergent integrals, by [ 2 ]
and, for divergent series, by [ 3 ] [ 4 ] [ 5 ]
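As an illustration of the series form, the sketch below applies a Gaussian regularizing factor exp(−εn²) to Grandi's divergent series 1 − 1 + 1 − ⋯, whose regularized value tends to 1/2 as ε → 0 (the function name is illustrative):

```python
import math

def zeldovich_sum(a, eps, n_max):
    """Gaussian-regularized sum: sum of a(n) * exp(-eps * n^2), n = 0..n_max.

    For small eps the truncation at n_max is harmless once
    eps * n_max^2 is large (the Gaussian factor has died off).
    """
    return sum(a(n) * math.exp(-eps * n * n) for n in range(n_max + 1))

# Grandi's divergent series 1 - 1 + 1 - ... ; the regularized value
# approaches 1/2 as eps -> 0.
val = zeldovich_sum(lambda n: (-1) ** n, 1.0e-6, 20000)
```

The exponentially fast convergence of the Gaussian factor is what makes this regularization attractive compared with, say, a simple exponential cutoff.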
|
https://en.wikipedia.org/wiki/Zeldovich_regularization
|
A Zeldovich spontaneous wave , also known as the Zeldovich gradient mechanism , is a theoretical type of reaction wave that can occur in a reacting substance, such as a gas mixture, where the initial temperature varies across different locations. [ 1 ] This variation in temperature creates gradients that cause different parts of the substance to react at slightly different times, driving the wave's propagation. Unlike typical combustion waves, such as subsonic deflagrations and supersonic detonations, it is characterized by the absence of interactions between different parts of the substance, such as those caused by pressure changes or heat transfer.
Introduced by Yakov Zeldovich in 1980 [ 2 ] building on his earlier research, [ 3 ] this concept is often cited to explain the yet-unsolved problem of deflagration to detonation transition (DDT) , [ 4 ] [ 5 ] [ 6 ] [ 7 ] where a slow-moving subsonic flame ( deflagration ) accelerates to a supersonic detonation . Essentially, the Zeldovich spontaneous wave helps explain how a reaction can spread solely due to initial temperature differences, independent of factors like heat conduction or sound speed (provided the initial temperature gradients are small). While it simplifies real-world conditions by neglecting gas dynamic effects, it offers valuable insights into the fundamental mechanisms of rapid reactions. The wave's behavior is dependent on the initial temperature distribution.
Let T ( x , y , z ) {\displaystyle T(x,y,z)} be the initial temperature distribution, which is nontrivial, indicating that chemical reactions at different points in space proceed at different rates. To this distribution, we can associate a function t a d ( x , y , z ) {\displaystyle t_{ad}(x,y,z)} , where t a d {\displaystyle t_{ad}} is the adiabatic induction period. Now, define in space some surface t a d ( x , y , z ) = c o n s t . {\displaystyle t_{ad}(x,y,z)=\mathrm {const.} } ; for example, if T = T ( x ) {\displaystyle T=T(x)} , then for any constant this surface will be parallel to the y z {\displaystyle yz} -plane. Examine the change of position of this surface with the passage of time according to [ 8 ]
From this, we can easily extract the direction and the propagation speed of the spontaneous front. The direction of the wave is clearly normal to this surface which is given by ∇ t a d / | ∇ t a d | {\displaystyle \nabla t_{ad}/|\nabla t_{ad}|} and the rate of propagation is just the magnitude of inverse of the gradient of t a d {\displaystyle t_{ad}} :
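The propagation speed u_sp = 1/|∇t_ad| can be evaluated numerically for a given initial temperature profile. The sketch below assumes a one-dimensional linear temperature profile and an Arrhenius-type induction time with illustrative constants; none of these values come from the article:

```python
import math

R_GAS = 8.314  # J/(mol K)

def induction_time(T, A=1.0e-9, E=1.6e5):
    """Arrhenius-type adiabatic induction period (illustrative constants)."""
    return A * math.exp(E / (R_GAS * T))

DX = 5.0e-5  # grid spacing, m
# Mild linear temperature gradient: hottest gas at x = 0.
T_PROFILE = [1200.0 - 500.0 * i * DX for i in range(2001)]
T_AD = [induction_time(T) for T in T_PROFILE]

def u_sp(i):
    """Spontaneous-wave speed 1/|dt_ad/dx| by central difference."""
    grad = (T_AD[i + 1] - T_AD[i - 1]) / (2.0 * DX)
    return 1.0 / abs(grad)
```

The hottest point ignites first, and because t_ad grows exponentially toward the colder gas, the spontaneous wave decelerates as it moves down the gradient.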
Note that adiabatic thermal runaways at different places are not causally connected events and therefore u s p {\displaystyle u_{sp}} can assume, in principle, any positive value. By comparing u s p {\displaystyle u_{sp}} with other relevant speeds, such as the deflagration speed u f {\displaystyle u_{f}} , the sound speed c {\displaystyle c} and the speed of the Chapman–Jouguet detonation wave u C J {\displaystyle u_{CJ}} , we can identify different regimes:
|
https://en.wikipedia.org/wiki/Zeldovich_spontaneous_wave
|
In combustion , the Zeldovich–Liñán–Dold model ( ZLD model or ZLD mechanism ) is a two-step reaction model for combustion processes, named after Yakov Borisovich Zeldovich , Amable Liñán and John W. Dold . The model includes a chain-branching and a chain-breaking (or radical recombination) reaction. It was first introduced by Zeldovich in 1948, [ 1 ] analysed by Liñán using activation energy asymptotics in 1971 [ 2 ] and refined by John W. Dold in the 2000s. [ 3 ] [ 4 ] The ZLD mechanism reads as
where F {\displaystyle {\rm {F}}} is the fuel , Z {\displaystyle {\rm {Z}}} is an intermediate radical , M {\displaystyle {\rm {M}}} is the third body and P {\displaystyle {\rm {P}}} is the product. This mechanism exhibits a linear or first-order recombination . The model originally studied before Dold's refinement pertains to a quadratic or second-order recombination and is referred to as the Zeldovich–Liñán model . The ZL mechanism reads as
In both models, the first reaction is the chain-branching reaction (it produces two radicals by consuming one radical); it is considered to be auto-catalytic (it neither consumes nor releases heat) and has a very large activation energy . The second reaction is the chain-breaking (or radical-recombination) reaction (it consumes radicals), in which all of the heat of combustion is released; its activation energy is almost negligible. [ 5 ] [ 6 ] [ 7 ] Therefore, the rate constants are written as [ 8 ]
where A I {\displaystyle A_{\rm {I}}} and A I I {\displaystyle A_{\rm {II}}} are the pre-exponential factors, E I {\displaystyle E_{\rm {I}}} is the activation energy for chain-branching reaction which is much larger than the thermal energy and T {\displaystyle T} is the temperature.
There are, however, two fundamental aspects that differentiate the Zeldovich–Liñán–Dold (ZLD) model from the Zeldovich–Liñán (ZL) model. First, the so-called cold-boundary difficulty in premixed flames does not occur in the ZLD model, [ 4 ] and second, the so-called crossover temperature exists in the ZLD model but not in the ZL model. [ 9 ]
For simplicity, consider a spatially homogeneous system, then the concentration C Z ( t ) {\displaystyle C_{\mathrm {Z} }(t)} of the radical in the ZLD model evolves according to
It is clear from this equation that the radical concentration will grow in time if the right-hand side is positive; more precisely, the initial equilibrium state C Z ( 0 ) = 0 {\displaystyle C_{\mathrm {Z} }(0)=0} is unstable if the right-hand side is positive. If C F ( 0 ) = C F , 0 {\displaystyle C_{\mathrm {F} }(0)=C_{\mathrm {F} ,0}} denotes the initial fuel concentration, a crossover temperature T ∗ {\displaystyle T^{*}} can be defined as the temperature at which the branching and recombination rates are equal, i.e., [ 7 ]
When T > T ∗ {\displaystyle T>T^{*}} , branching dominates over recombination and therefore the radical concentration will grow in time, whereas if T < T ∗ {\displaystyle T<T^{*}} , recombination dominates over branching and therefore the radical concentration will disappear in time.
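For the linear-recombination (ZLD) case, the crossover condition can be solved explicitly for T*. A sketch with hypothetical, dimensionless-style parameters chosen only for illustration (none are from the article):

```python
import math

R_GAS = 8.314

def crossover_temperature(A1, A2, E1, c_f0):
    """T* from A1 * exp(-E1/(R T*)) * c_f0 = A2 (linear recombination)."""
    return E1 / (R_GAS * math.log(A1 * c_f0 / A2))

def radical_growth_rate(T, A1, A2, E1, c_f0):
    """Net radical production rate per unit radical concentration:
    positive means branching wins, negative means recombination wins."""
    return A1 * math.exp(-E1 / (R_GAS * T)) * c_f0 - A2

# hypothetical illustrative parameters
A1, A2, E1, C_F0 = 1.0e13, 1.0e5, 2.0e5, 1.0
T_STAR = crossover_temperature(A1, A2, E1, C_F0)
```

Above T* the growth rate is positive (radicals multiply); below T* it is negative (radicals die out), reproducing the dichotomy described in the text.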
In a more general setup, where the system is non-homogeneous, evaluation of crossover temperature is complicated because of the presence of convective and diffusive transport.
In the ZL model, one would have obtained e E I / R T ∗ = ( A I / A I I ) C F , 0 C Z ( 0 ) {\displaystyle e^{E_{\mathrm {I} }/RT^{*}}=(A_{\mathrm {I} }/A_{\mathrm {II} })C_{\mathrm {F} ,0}C_{\mathrm {Z} }(0)} , but since C Z ( 0 ) {\displaystyle C_{\mathrm {Z} }(0)} is zero or vanishingly small in the perturbed state, there is no crossover temperature.
In his analysis, Liñán showed that there exist three regimes, namely the slow recombination regime , the intermediate recombination regime and the fast recombination regime . [ 9 ] These regimes exist in both aforementioned models.
Let us consider a premixed flame in the ZLD model. Based on the thermal diffusivity D T {\displaystyle D_{T}} and the flame burning speed S L {\displaystyle S_{L}} , one can define the flame thickness (or the thermal thickness) as δ L = D T / S L {\displaystyle \delta _{L}=D_{T}/S_{L}} . Since the activation energy of the branching is much greater than the thermal energy, the characteristic thickness δ B {\displaystyle \delta _{B}} of the branching layer will be δ B / δ L ∼ O ( 1 / β ) {\displaystyle \delta _{B}/\delta _{L}\sim O(1/\beta )} , where β {\displaystyle \beta } is the Zeldovich number based on E I {\displaystyle E_{\mathrm {I} }} . The recombination reaction has no activation energy, and its thickness δ R {\displaystyle \delta _{R}} will be characterised by its Damköhler number D a I I = ( D T / S L 2 ) / ( W Z A I I − 1 ) {\displaystyle Da_{\mathrm {II} }=(D_{T}/S_{L}^{2})/(W_{\mathrm {Z} }A_{\mathrm {II} }^{-1})} , where W Z {\displaystyle W_{\mathrm {Z} }} is the molecular weight of the intermediate species. Specifically, from a diffusive-reactive balance, we obtain δ R / δ L ∼ O ( D a I I − 1 / 2 ) {\displaystyle \delta _{R}/\delta _{L}\sim O(Da_{\mathrm {II} }^{-1/2})} (in the ZL model, this would have been δ R / δ L ∼ O ( D a I I − 1 / 3 ) {\displaystyle \delta _{R}/\delta _{L}\sim O(Da_{\mathrm {II} }^{-1/3})} ).
By comparing the thicknesses of the different layers, the three regimes are classified: [ 9 ]
The fast recombination regime represents situations near the flammability limits. As can be seen, the recombination layer becomes comparable to the branching layer. Criticality is reached when the branching is unable to cope with the recombination. Such criticality exists in the ZLD model. Su-Ryong Lee and Jong S. Kim showed that as Δ ≡ D a I I / β 2 {\displaystyle \Delta \equiv Da_{\mathrm {II} }/\beta ^{2}} becomes large, the critical condition is reached, [ 9 ]
where
Here q {\displaystyle q} is the heat release parameter , Y F , 0 {\displaystyle Y_{\mathrm {F} ,0}} is the unburnt fuel mass fraction and W F {\displaystyle W_{\mathrm {F} }} is the molecular weight of the fuel.
|
https://en.wikipedia.org/wiki/Zeldovich–Liñán–Dold_model
|
Zeldovich–Taylor flow (also known as Zeldovich–Taylor expansion wave ) is the fluid motion of gaseous detonation products behind a Chapman–Jouguet detonation wave . The flow was described independently by Yakov Zeldovich in 1942 [ 1 ] [ 2 ] and G. I. Taylor in 1950, [ 3 ] although Taylor carried out the work in 1941, when it was circulated within the British Ministry of Home Security. Since naturally occurring detonation waves are in general Chapman–Jouguet detonation waves , the solution is very useful for describing real-life detonation waves.
Consider a spherically outgoing Chapman–Jouguet detonation wave propagating with a constant velocity D {\displaystyle D} . By definition, immediately behind the detonation wave, the gas velocity is equal to the local sound speed c {\displaystyle c} with respect to the wave. Let v ( r , t ) {\displaystyle v(r,t)} be the radial velocity of the gas behind the wave, in a fixed frame. The detonation is ignited at t = 0 {\displaystyle t=0} at r = 0 {\displaystyle r=0} . For t > 0 {\displaystyle t>0} , the gas velocity must be zero at the center r = 0 {\displaystyle r=0} and should take the value v = D − c {\displaystyle v=D-c} at the detonation location r = D t {\displaystyle r=Dt} . The fluid motion is governed by the inviscid Euler equations [ 4 ]
where ρ {\displaystyle \rho } is the density, p {\displaystyle p} is the pressure and s {\displaystyle s} is the entropy. The last equation implies that the flow is isentropic and hence we can write c 2 = d p / d ρ {\displaystyle c^{2}=dp/d\rho } .
Since there are no length or time scales involved in the problem, one may look for a self-similar solution of the form v ( r , t ) = v ( ξ ) , p ( r , t ) = p ( ξ ) , ρ ( r , t ) = ρ ( ξ ) , c ( r , t ) = c ( ξ ) {\displaystyle v(r,t)=v(\xi ),p(r,t)=p(\xi ),\,\rho (r,t)=\rho (\xi ),\,c(r,t)=c(\xi )} , where ξ = r / t {\displaystyle \xi =r/t} . The first two equations then become
where prime denotes differentiation with respect to ξ {\displaystyle \xi } . We can eliminate ρ ′ / ρ {\displaystyle \rho '/\rho } between the two equations to obtain an equation that contains only v {\displaystyle v} and c {\displaystyle c} . Because of the isentropic condition, we can express ρ = ρ ( c ) , p = p ( c ) {\displaystyle \rho =\rho (c),\,p=p(c)} , that is to say, we can replace ρ − 1 d ρ / d ξ {\displaystyle \rho ^{-1}d\rho /d\xi } with ρ − 1 c ′ d ρ / d c {\displaystyle \rho ^{-1}c'd\rho /dc} . This leads to
For polytropic gases with constant specific heats, we have ρ − 1 d ρ / d c = 2 / [ ( γ − 1 ) c ] {\displaystyle \rho ^{-1}d\rho /dc=2/[(\gamma -1)c]} . The above set of equations cannot be solved analytically, but has to be integrated numerically. The solution has to be found for the range 0 ≤ ξ ≤ D {\displaystyle 0\leq \xi \leq D} subjected to the condition ξ − v = c {\displaystyle \xi -v=c} at ξ = D . {\displaystyle \xi =D.}
The function v ( ξ ) {\displaystyle v(\xi )} is found to decrease monotonically from its value v ( D ) = D − c ( D ) {\displaystyle v(D)=D-c(D)} to zero at a finite value of ξ < D {\displaystyle \xi <D} , where a weak discontinuity (that is, the function is continuous, but its derivatives may not be) exists. The region between the detonation front and the trailing weak discontinuity is the rarefaction (or expansion) flow. Interior to the weak discontinuity, v = 0 {\displaystyle v=0} everywhere.
From the second equation described above, it follows that when v = 0 {\displaystyle v=0} , ξ = c {\displaystyle \xi =c} . More precisely, as v → 0 {\displaystyle v\rightarrow 0} , that equation can be approximated as [ 5 ]
As v → 0 {\displaystyle v\rightarrow 0} , ln v → − ∞ {\displaystyle \ln v\rightarrow -\infty } and ( ln v ) ′ → ∞ {\displaystyle (\ln v)'\rightarrow \infty } if ξ {\displaystyle \xi } decreases as v → 0 {\displaystyle v\rightarrow 0} . The left hand side of the above equation can become positive infinity only if ξ → c {\displaystyle \xi \rightarrow c} . Thus, when ξ {\displaystyle \xi } decreases to the value ξ = c 0 {\displaystyle \xi =c_{0}} , the gas comes to rest (Here c 0 {\displaystyle c_{0}} is the sound speed corresponding to v = 0 {\displaystyle v=0} ). Thus, the rarefaction motion occurs for c 0 < ξ ≤ D {\displaystyle c_{0}<\xi \leq D} and there is no fluid motion for 0 ≤ ξ ≤ c 0 {\displaystyle 0\leq \xi \leq c_{0}} .
Rewrite the second equation as
In the neighborhood of the weak discontinuity, the quantities to the first order (such as v , ξ − c 0 , c − c 0 {\displaystyle v,\,\xi -c_{0},\,c-c_{0}} ) reduces the above equation to
At this point, it is worth mentioning that, in general, disturbances in gases propagate with respect to the gas at the local sound speed. In other words, in the fixed frame, the disturbances propagate at the speed v + c {\displaystyle v+c} (the other possibility is v − c {\displaystyle v-c} , although it is of no interest here). If the gas is at rest, v = 0 {\displaystyle v=0} , then the disturbance speed is c 0 {\displaystyle c_{0}} . This is just normal sound-wave propagation. If, however, v {\displaystyle v} is non-zero but small, then one finds the correction for the disturbance propagation speed as v + c = c 0 + α 0 v {\displaystyle v+c=c_{0}+\alpha _{0}v} obtained using a Taylor series expansion, where α 0 {\displaystyle \alpha _{0}} is the Landau derivative (for an ideal gas , α 0 = ( γ + 1 ) / 2 {\displaystyle \alpha _{0}=(\gamma +1)/2} , where γ {\displaystyle \gamma } is the specific heat ratio ). This means that the above equation can be written as
whose solution is
where A {\displaystyle A} is a constant. This determines v ( ξ ) {\displaystyle v(\xi )} implicitly in the neighborhood of the weak discontinuity where v {\displaystyle v} is small. This equation shows that at ξ = c 0 {\displaystyle \xi =c_{0}} , v = 0 {\displaystyle v=0} , d v / d ξ = 0 {\displaystyle dv/d\xi =0} , but all higher-order derivatives are discontinuous. In the above equation, subtract v + c − c 0 {\displaystyle v+c-c_{0}} from the left-hand side and α 0 v {\displaystyle \alpha _{0}v} from the right-hand side to obtain
which implies that ξ − v > c {\displaystyle \xi -v>c} if v {\displaystyle v} is a small quantity. It can be shown that the relation ξ − v > c {\displaystyle \xi -v>c} not only holds for small v {\displaystyle v} , but throughout the rarefaction wave.
First let us show that the relation ξ − v > c {\displaystyle \xi -v>c} is not only valid near the weak discontinuity, but throughout the region. If this inequality were not maintained, then there must be a point where ξ − v = c , v ≠ 0 {\displaystyle \xi -v=c,\,v\neq 0} between the weak discontinuity and the detonation front. The second governing equation implies that at this point v ′ {\displaystyle v'} must be infinite, or d ξ / d v = 0 {\displaystyle d\xi /dv=0} . Let us obtain d 2 ξ / d v 2 {\displaystyle d^{2}\xi /dv^{2}} by taking the second derivative of the governing equation. In the resulting equation, impose the condition ξ − v = c , v ≠ 0 , d ξ / d v = 0 {\displaystyle \xi -v=c,\,v\neq 0,\,d\xi /dv=0} to obtain d 2 ξ / d v 2 = − α 0 ξ / c 0 v ≠ 0 {\displaystyle d^{2}\xi /dv^{2}=-\alpha _{0}\xi /c_{0}v\neq 0} . This implies that ξ ( v ) {\displaystyle \xi (v)} reaches a maximum at this point, which in turn implies that v ( ξ ) {\displaystyle v(\xi )} cannot exist for ξ {\displaystyle \xi } greater than this maximum, since otherwise v ( ξ ) {\displaystyle v(\xi )} would be multi-valued. The maximum point can at most correspond to the outer boundary (the detonation front). This means that ξ − v − c {\displaystyle \xi -v-c} can vanish only on the boundary; since it has already been shown that ξ − v − c {\displaystyle \xi -v-c} is positive near the weak discontinuity, ξ − v − c {\displaystyle \xi -v-c} is positive everywhere in the region except at the boundaries, where it can vanish.
Note that near the detonation front, we must satisfy the condition ξ − v = c , v ≠ 0 {\displaystyle \xi -v=c,\,v\neq 0} . The value evaluated at ξ = D {\displaystyle \xi =D} for the function ξ − v {\displaystyle \xi -v} , i.e., D − v ( D ) {\displaystyle D-v(D)} is nothing but the velocity of the detonation front with respect to the gas velocity behind it. For a detonation front, the condition D − v ( D ) ≤ c ( D ) {\displaystyle D-v(D)\leq c(D)} must always be met, with the equality sign representing Chapman–Jouguet detonations and the inequalities representing over-driven detonations. The analysis describing the point ξ − v = c , v ≠ 0 {\displaystyle \xi -v=c,\,v\neq 0} must correspond to the detonation front.
|
https://en.wikipedia.org/wiki/Zeldovich–Taylor_flow
|
A zellballen is a small nest of chromaffin cells or chief cells with pale eosinophilic staining. Zellballen are separated into groups by segmenting bands of fibrovascular stroma, and are surrounded by supporting sustentacular cells . [ 1 ] A zellballen pattern is diagnostic for paraganglioma or pheochromocytoma . [ 2 ]
Zellballen is German for "ball of cells". [ 3 ]
|
https://en.wikipedia.org/wiki/Zellballen
|
Zeller's congruence is an algorithm devised by Christian Zeller in the 19th century to calculate the day of the week for any Julian or Gregorian calendar date. It can be considered to be based on the conversion between Julian day and the calendar date.
For the Gregorian calendar, Zeller's congruence is
for the Julian calendar it is
where
Note: In this algorithm January and February are counted as months 13 and 14 of the previous year. E.g. if it is 2 February 2010 (02/02/2010 in DD/MM/YYYY), the algorithm counts the date as the second day of the fourteenth month of 2009 (02/14/2009 in DD/MM/YYYY format). So the adjusted year above is:
For an ISO week date Day-of-Week d (1 = Monday to 7 = Sunday), use
These formulas are based on the observation that the day of the week progresses in a predictable manner based upon each subpart of that date. Each term within the formula is used to calculate the offset needed to obtain the correct day of the week.
For the Gregorian calendar, the various parts of this formula can therefore be understood as follows:
The reason that the formula differs between calendars is that the Julian calendar does not have a separate rule for leap centuries and is offset from the Gregorian calendar by a fixed number of days each century.
Since the Gregorian calendar was adopted at different times in different regions of the world, the location of an event is significant in determining the correct day of the week for a date that occurred during this transition period. This is only required through 1929, as this was the last year that the Julian calendar was still in use by any country on earth, and thus is not required for 1930 or later.
The formulae can be used proleptically , but "Year 0" is in fact year 1 BC (see astronomical year numbering ). The Julian calendar is in fact proleptic right up to 1 March AD 4 owing to mismanagement in Rome (but not Egypt) in the period since the calendar was put into effect on 1 January 45 BC (which was not a leap year). In addition, the modulo operator might truncate integers to the wrong direction (ceiling instead of floor). To accommodate this, one can add a sufficient multiple of 400 Gregorian or 700 Julian years.
For 1 January 2000, the date would be treated as the 13th month of 1999, so the values would be:
So the formula evaluates as (1 + 36 + 99 + 24 + 4 − 38) mod 7 = 126 mod 7 = 0 = Saturday.
(The 36 comes from ⌊13(13 + 1)/5⌋ = ⌊182/5⌋, truncated to an integer.)
However, for 1 March 2000, the date is treated as the 3rd month of 2000, so the values become
so the formula evaluates as (1 + 10 + 0 + 0 + 5 − 40) mod 7 = −24 mod 7 = 4 = Wednesday.
The formulas rely on the mathematician's definition of modulo division, which means that −2 mod 7 is equal to positive 5. Unfortunately, in the truncating way most computer languages implement the remainder function, −2 mod 7 returns a result of −2. So, to implement Zeller's congruence on a computer, the formulas should be altered slightly to ensure a positive numerator. The simplest way to do this is to replace − 2 J with + 5 J and − J with + 6 J .
For the Gregorian calendar, Zeller's congruence becomes

h = (q + ⌊13(m + 1)/5⌋ + K + ⌊K/4⌋ + ⌊J/4⌋ + 5J) mod 7.

For the Julian calendar, Zeller's congruence becomes

h = (q + ⌊13(m + 1)/5⌋ + K + ⌊K/4⌋ + 5 + 6J) mod 7.
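As a sketch, the computer-friendly formulas above can be implemented as follows (the function names and the worked-example checks are illustrative choices, not part of Zeller's original formulation):

```python
def zeller_gregorian(day, month, year):
    """Day of week for a Gregorian date: 0 = Saturday, ..., 6 = Friday."""
    # January and February are counted as months 13 and 14 of the previous year
    if month < 3:
        month += 12
        year -= 1
    K = year % 100          # year of the century
    J = year // 100         # zero-based century
    # +5J instead of -2J keeps the numerator positive on truncating languages
    return (day + (13 * (month + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7

def zeller_julian(day, month, year):
    """Day of week for a Julian date: 0 = Saturday, ..., 6 = Friday."""
    if month < 3:
        month += 12
        year -= 1
    K, J = year % 100, year // 100
    return (day + (13 * (month + 1)) // 5 + K + K // 4 + 5 + 6 * J) % 7

# the worked examples from the text
print(zeller_gregorian(1, 1, 2000))   # 0 -> Saturday
print(zeller_gregorian(1, 3, 2000))   # 4 -> Wednesday
```

Note that Python's `%` already implements floored modulo, so the original −2J and −J forms would also work there; the +5J and +6J forms are portable to languages with truncating remainder.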
One can readily see that, in a given year, the last day of February and March 1 are good test dates.
As an aside, given a three-digit number abc with digits a, b, and c (each nonpositive if abc is nonpositive), we have abc ≡ 9a + 3b + c (mod 7). Apply the formula repeatedly until a single digit remains. If the result is 7, 8, or 9, subtract 7. If instead the result is negative, add 7; if it is still negative, add 7 one more time. This approach avoids worries about language-specific differences in mod-7 evaluation, and may also serve as a mental-math technique.
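This digit-by-digit reduction can be sketched in code (a direct transcription of the rule above, using truncation toward zero so that the digits of a negative number come out nonpositive):

```python
def mod7_by_digits(n):
    """Reduce n modulo 7 using the 9a + 3b + c digit rule from the text."""
    while not -10 < n < 10:
        a = int(n / 100)            # hundreds (and above), truncated toward zero
        r = n - 100 * a
        b = int(r / 10)             # tens digit, same sign as n
        c = r - 10 * b              # units digit
        n = 9 * a + 3 * b + c       # works since 100 = 2, 10 = 3, 9 = 2 (mod 7)
    if n >= 7:                      # single digit 7, 8 or 9: subtract 7
        n -= 7
    while n < 0:                    # negative result: add 7 (possibly twice)
        n += 7
    return n

print(mod7_by_digits(126))   # 0, matching the 126 mod 7 example
print(mod7_by_digits(-24))   # 4, matching the -24 mod 7 example
```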
Zeller used decimal arithmetic, and found it convenient to use J and K values as two-digit numbers representing the year and century. But when using a computer, it is simpler to handle the year as a single 4-digit number.
For the Gregorian calendar, Zeller's congruence becomes

h = (q + ⌊13(m + 1)/5⌋ + Y + ⌊Y/4⌋ − ⌊Y/100⌋ + ⌊Y/400⌋) mod 7,

where Y is the adjusted year described in the section above (the year, minus 1 for dates in January and February). In this case there is no possibility of underflow due to the single negative term, because ⌊Y/4⌋ ≥ ⌊Y/100⌋.

For the Julian calendar, Zeller's congruence becomes

h = (q + ⌊13(m + 1)/5⌋ + Y + ⌊Y/4⌋ + 5) mod 7.
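A quick cross-check of the single-variable Gregorian form against Python's calendar arithmetic (the mapping (weekday + 2) mod 7 converts Python's Monday = 0 convention to Zeller's Saturday = 0):

```python
from datetime import date, timedelta

def zeller_gregorian_y(day, month, year):
    """0 = Saturday, ..., 6 = Friday, using the 4-digit year directly."""
    if month < 3:                   # adjusted year: Jan/Feb belong to Y - 1
        month += 12
        year -= 1
    return (day + (13 * (month + 1)) // 5 + year + year // 4
            - year // 100 + year // 400) % 7

# compare with datetime over a year that includes a leap day
d = date(2000, 1, 1)
while d < date(2001, 1, 1):
    assert zeller_gregorian_y(d.day, d.month, d.year) == (d.weekday() + 2) % 7
    d += timedelta(days=1)
print("all dates in 2000 agree")
```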
The algorithm above is mentioned for the Gregorian case in RFC 3339 , Appendix B, albeit in an abridged form that returns 0 for Sunday.
At least three other algorithms share the overall structure of Zeller's congruence in its "common simplification" type, also using an m ∈ [3, 14] ∩ Z and the "modified year" construct.
Both expressions can be shown to progress in a way that is off by one compared to the original month-length component over the required range of m , resulting in a starting value of 0 for Sunday.
Each of these four similar imaged papers deals firstly with the day of the week and secondly with the date of Easter Sunday, for the Julian and Gregorian calendars. The pages link to translations into English.
|
https://en.wikipedia.org/wiki/Zeller's_congruence
|
In coding theory, Zemor's algorithm, designed and developed by Gilles Zemor, [1] is a recursive low-complexity approach to code construction. It is an improvement over the algorithm of Sipser and Spielman.
Zemor considered a typical class of Sipser–Spielman constructions of expander codes, where the underlying graph is a bipartite graph. Sipser and Spielman introduced a constructive family of asymptotically good linear error-correcting codes together with a simple parallel algorithm that always removes a constant fraction of errors. This article is based on Venkatesan Guruswami's course notes. [2]
Zemor's algorithm is based on a type of expander graph called a Tanner graph; this code construction was first proposed by Tanner. [3] The codes are built on the double cover of a d-regular expander graph G, which is a bipartite graph G = (V, E), where V is the set of vertices and E the set of edges, with V = A ∪ B and A ∩ B = ∅ for vertex sets A and B. Let n be the number of vertices on each side, i.e., |A| = |B| = n. The edge set E has size N = nd, and every edge in E has one endpoint in A and one in B. E(v) denotes the set of edges incident to a vertex v.

Assume an ordering on V; this induces an ordering on the edges of E(v) for every v ∈ V. Let F = GF(2) be the finite field, and for a word x = (x_e), e ∈ E, in F^N, let (x)_v denote the subword of x indexed by E(v). Each of the vertex sets A and B thus induces, for every word x ∈ F^N, a partition into n non-overlapping subwords (x)_v ∈ F^d, where v ranges over the elements of A (or, respectively, of B).

To construct a code C, consider a linear subcode C_o, which is a [d, r_o·d, δ] code over an alphabet of size q = 2. For any vertex v ∈ V, let v(1), v(2), ..., v(d) be some ordering of the d edges of E incident to v. In this code, each bit x_e is associated with an edge e of E.

We define the code C to be the set of binary vectors x = (x_1, x_2, ..., x_N) of {0, 1}^N such that, for every vertex v of V, (x_{v(1)}, x_{v(2)}, ..., x_{v(d)}) is a codeword of C_o. Here we consider the special case in which every edge of E is incident to exactly 2 vertices of V; that is, V and E make up, respectively, the vertex set and edge set of a d-regular graph G.

A code C constructed in this way is called a (G, C_o) code. For a given graph G and a given code C_o there are several (G, C_o) codes, since there are different ways of ordering the edges incident to a given vertex v, i.e., of choosing v(1), v(2), ..., v(d). In fact, C consists of all words x such that (x)_v ∈ C_o for all v ∈ A ∪ B. The code C is a linear [N, K, D] code over F, as it is generated from the linear subcode C_o: C = {c ∈ F^N : (c)_v ∈ C_o for every v ∈ V}.
In the figure, (x)_v = (x_{e1}, x_{e2}, x_{e3}, x_{e4}) ∈ C_o; it illustrates the graph G and the code C.

For the graph G, let λ denote the second-largest eigenvalue of its adjacency matrix. (The largest eigenvalue is d.)
Two important claims are made:
Claim 1: K/N ≥ 2r_o − 1. More generally, let R be the rate of a linear code constructed from a bipartite graph whose digit nodes have degree m and whose subcode nodes have degree n. If a single linear code with parameters (n, k) and rate r = k/n is associated with each of the subcode nodes, then R ≥ 1 − (1 − r)m.

Proof sketch: let R = K/N be the rate of the linear code, and suppose the graph has S subcode nodes. If the degree of each subcode node is n, then the code has (n/m)·S digits, since each digit node is connected to m of the n·S edges in the graph. Each subcode node contributes (n − k) equations to the parity-check matrix, for a total of (n − k)·S; these equations may not be linearly independent.

Therefore K/N ≥ ((n/m)·S − (n − k)·S) / ((n/m)·S) = 1 − m·(n − k)/n = 1 − m(1 − r). Since each digit node of this bipartite graph has degree m = 2, and here r = r_o, we obtain K/N ≥ 2r_o − 1.
Claim 2: if S is a linear code of rate r, block length d, and minimum relative distance δ, and if B is the edge-vertex incidence graph of a d-regular graph with second-largest eigenvalue λ, then the code C(B, S) has rate at least 2r − 1 and minimum relative distance at least ((δ − λ/d)/(1 − λ/d))².

Proof sketch: let B be derived from the d-regular graph G, so the number of variables of C(B, S) is dn/2 and the number of constraints is n. By a result of Alon and Chung, [4] if X is a subset of vertices of G of size γn, then the number of edges contained in the subgraph induced by X in G is at most (dn/2)(γ² + (λ/d)γ(1 − γ)).

As a result, any set of (dn/2)(γ² + (λ/d)γ(1 − γ)) variables has at least γn constraints as neighbours, and since each variable touches two constraints the average number of variables per constraint is at most

2·(dn/2)(γ² + (λ/d)γ(1 − γ)) / (γn) = d(γ + (λ/d)(1 − γ))   → (2)

A nonzero codeword must present weight at least δd to every constraint in its neighbourhood, so if d(γ + (λ/d)(1 − γ)) < δd, then a word of relative weight γ² + (λ/d)γ(1 − γ) cannot be a codeword of C(B, S). That inequality is satisfied whenever γ < (δ − (λ/d))/(1 − (λ/d)). Therefore C(B, S) cannot have a nonzero codeword of relative weight ((δ − (λ/d))/(1 − (λ/d)))² or less.
For the graph G, we can assume that λ/d is bounded away from 1. For those values of d for which d − 1 is an odd prime, there are explicit constructions of sequences of d-regular bipartite graphs with arbitrarily large numbers of vertices such that each graph G in the sequence is a Ramanujan graph, so called because it satisfies the inequality λ(G) ≤ 2√(d − 1). The separation between the eigenvalues d and λ gives G certain expansion properties. If G is a Ramanujan graph, then the λ/d term in expression (1) approaches 0 as d becomes large.
The iterative decoding algorithm written below alternates between the vertex sets A and B of G: it first corrects the subwords of C_o at the vertices of A, then switches to correct the subwords of C_o at the vertices of B. Edges incident to a vertex on one side of the graph are not incident to any other vertex on that side, so it does not matter in which order the nodes within A (or within B) are processed, and the vertex processing can be done in parallel.

The decoder D : F^d → C_o stands for a decoder for C_o that correctly recovers any word with fewer than δd/2 errors (the unique-decoding radius of C_o).
Received word: w = (w_e), e ∈ E

    z ← w
    for t ← 1 to m do                  // m is the number of iterations
        if t is odd then               // alternate between the two vertex sets
            X ← A
        else
            X ← B
        for every v ∈ X do
            (z)_v ← D((z)_v)           // decode (z)_v to its nearest codeword
    output z
Since G is bipartite, the vertex set A induces the partition E = ∪_{v∈A} E_v. The set B induces another partition, E = ∪_{v∈B} E_v.

Let w ∈ {0, 1}^N be the received vector, and recall that N = dn. The first iteration of the algorithm applies complete decoding to the code induced by E_v for every v ∈ A. This means replacing, for every v ∈ A, the vector (w_{v(1)}, w_{v(2)}, ..., w_{v(d)}) by one of the closest codewords of C_o. Since the edge subsets E_v are disjoint for v ∈ A, the decoding of these n subvectors of w may be done in parallel.

The iteration yields a new vector z. The next iteration applies the same procedure to z but with A replaced by B; in other words, it decodes all the subvectors induced by the vertices of B. Subsequent iterations repeat these two steps alternately, applying parallel decoding to the subvectors induced by the vertices of A and then to those induced by the vertices of B. (Note: if d = n and G is the complete bipartite graph, then C is the product code of C_o with itself, and the above algorithm reduces to the natural hard iterative decoding of product codes.)
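The alternating procedure can be sketched with a deliberately tiny toy instance (assumptions: the complete bipartite graph K_{4,4} as G and the length-4 repetition code as C_o; a real construction would use a Ramanujan expander and a stronger subcode):

```python
from itertools import product

n = d = 4                        # |A| = |B| = 4; K_{4,4} is 4-regular
edges = list(product(range(n), range(n)))    # edge (a, b), one endpoint per side
REPETITION = [(0,) * d, (1,) * d]            # subcode C_o = {0000, 1111}

def decode_subword(bits):
    # nearest-codeword decoder D for C_o (majority vote; ties -> all zeros)
    return min(REPETITION, key=lambda c: sum(x != y for x, y in zip(bits, c)))

def zemor_decode(received, rounds=4):
    z = dict(received)
    for t in range(rounds):
        side = t % 2                          # alternate between sides A and B
        for v in range(n):
            ev = [e for e in edges if e[side] == v]   # E(v) on the active side
            corrected = decode_subword(tuple(z[e] for e in ev))
            for e, bit in zip(ev, corrected):
                z[e] = bit
    return z

# transmit the all-zeros codeword with three scattered bit errors
received = {e: 0 for e in edges}
for e in [(0, 0), (1, 1), (2, 2)]:
    received[e] = 1
decoded = zemor_decode(received)
print(all(bit == 0 for bit in decoded.values()))  # True: errors corrected
```

Each A-side vertex sees at most one flipped edge here, so the very first round of subword decoding already removes all errors.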
Here, the number of iterations m is (log n)/(log(2 − α)).
In general, the above algorithm can correct an error pattern whose Hamming weight is no more than (1/2)·αNδ(δ/2 − λ/d) = (1/4)·αN(δ² − O(λ/d)) for values of α < 1. The decoding algorithm can be implemented as a circuit of size O(N log N) and depth O(log N) that returns the codeword, provided the error vector has weight less than αNδ²(1 − ε)/4.
If G is a Ramanujan graph of sufficiently high degree then, for any α < 1, the decoding algorithm can correct (αδ_o²/4)(1 − ε)N errors in O(log n) rounds (where the big-O notation hides a dependence on α). This can be implemented in linear time on a single processor; on n processors each round can be implemented in constant time.
Since the decoding algorithm is insensitive to the values of the edges, by linearity we can assume that the transmitted codeword is the all-zeros vector. Let the received word be w, and consider the set of edges that hold an incorrect value (i.e., the value 1) during decoding. Let w = w⁰ be the initial value of the word, and w¹, w², ..., w^t the values after the first, second, ..., t-th stages of decoding.

Here X^i = {e ∈ E | w^i_e = 1}, and S^i = {v ∈ V^i | E_v ∩ X^{i+1} ≠ ∅}, where V^i denotes the vertex set processed in round i; S^i is thus the set of vertices that fail to decode their subword correctly in the i-th round. From the algorithm, |S¹| < |S⁰|, as unsuccessful vertices are corrected in every iteration, and one can prove that |S⁰| > |S¹| > |S²| > ⋯ is a decreasing sequence.

In fact, |S^{i+1}| ≤ (1/(2 − α))·|S^i|. Since we assume α < 1, the sequence decreases geometrically.

Consequently, starting from |S⁰| < n, no more than log_{2−α} n rounds are needed before S^i becomes empty. Furthermore, Σ_i |S^i| ≤ n·Σ_i 1/(2 − α)^i = O(n), so if the i-th round is implemented in O(|S^i|) time, the total sequential running time is linear.
A detailed proof is given in [5].
|
https://en.wikipedia.org/wiki/Zemor's_decoding_algorithm
|
Zener pinning is the influence of a dispersion of fine particles on the movement of low- and high-angle grain boundaries through a polycrystalline material. Small particles act to prevent the motion of such boundaries by exerting a pinning pressure which counteracts the driving force pushing the boundaries. Zener pinning is very important in materials processing as it has a strong influence on recovery , recrystallization and grain growth .
A boundary is an imperfection in the crystal structure and as such is associated with a certain quantity of energy . When a boundary passes through an incoherent particle then the portion of boundary that would be inside the particle essentially ceases to exist. In order to move past the particle some new boundary must be created, and this is energetically unfavourable. While the region of boundary near the particle is pinned, the rest of the boundary continues trying to move forward under its own driving force. This results in the boundary becoming bowed between those points where it is anchored to the particles.
The figure illustrates a boundary intersecting an incoherent particle of radius r. The pinning force acts along the line of contact between the boundary and the particle, i.e., a circle of circumference AB = 2πr cos θ. The force per unit length of boundary in contact is γ sin θ, where γ is the interfacial energy. Hence, the total force acting on the particle-boundary interface is

F = 2πr cos θ × γ sin θ = πrγ sin 2θ.
The maximum restraining force occurs when θ = 45°, so F_max = πrγ.
In order to determine the pinning force resulting from a given dispersion of particles, Clarence Zener made several important assumptions:
For a volume fraction F_v of randomly distributed spherical particles of radius r, the number of particles per unit volume (number density) is given by

N_v = 3F_v / (4πr³).
From this total number density, only those particles within one particle radius of the boundary will be able to interact with it. If the boundary is essentially planar, this fraction is

n = 2r·N_v = 3F_v / (2πr²).
Given the assumption that all particles apply the maximum pinning force, F_max, the total pinning pressure exerted by the particle distribution per unit area of the boundary is

P_z = n·F_max = 3F_vγ / (2r).
This is referred to as the Zener pinning pressure. It follows that large pinning pressures are produced by increasing the volume fraction of particles or by reducing the particle radius.
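As a rough numerical sketch of the Zener pinning pressure P_z = 3F_vγ/(2r) (the material values below are illustrative assumptions, not data from this article):

```python
def zener_pinning_pressure(f_v, gamma, r):
    """P_z = 3 * F_v * gamma / (2 * r), in Pa for SI inputs."""
    return 3.0 * f_v * gamma / (2.0 * r)

# illustrative values: 1 vol% of 50 nm particles, boundary energy 0.5 J/m^2
p = zener_pinning_pressure(f_v=0.01, gamma=0.5, r=50e-9)
print(f"{p:.0f} Pa")  # 150000 Pa, i.e. 0.15 MPa
```

Halving the particle radius at fixed volume fraction doubles the pinning pressure, consistent with the formula.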
The Zener pinning pressure is orientation dependent, which means that the exact pinning pressure depends on the amount of coherence at the grain boundaries. [1]
Particle pinning has been studied extensively with computer simulations, such as Monte Carlo and phase field methods. These methods can capture interfaces with complex shapes and provide better approximations for the pinning force.
- "Contribution à l'étude de la dynamique du Zener pinning: simulations numériques par éléments finis", Thesis in French (2003). by G. Couturier. - "3D finite element simulation of the inhibition of normal grain growth by particles". Acta Materialia, 53, pp. 977–989, (2005). by G. Couturier, R. Doherty, Cl. Maurice, R. Fortunier. - "3D finite element simulation of Zener pinning dynamics". Philosophical Magazine, vol 83, n° 30, pp. 3387–3405, (2003). by G. Couturier, Cl. Maurice, R. Fortunier.
|
https://en.wikipedia.org/wiki/Zener_pinning
|
The Zener ratio is a dimensionless number that is used to quantify the anisotropy for cubic crystals . It is sometimes referred as anisotropy ratio and is named after Clarence Zener . [ 1 ] Conceptually, it quantifies how far a material is from being isotropic (where the value of 1 means an isotropic material).
Its mathematical definition is [ 1 ] [ 2 ]
a_r = 2C44 / (C11 − C12),
where C_ij refers to elastic constants in Voigt notation.
Cubic materials are special orthotropic materials that are invariant with respect to 90° rotations about the principal axes, i.e., the material is the same along its principal axes. Due to these additional symmetries the stiffness tensor can be written with just three different material properties:

    [ C11 C12 C12  0   0   0  ]
    [ C12 C11 C12  0   0   0  ]
    [ C12 C12 C11  0   0   0  ]
    [  0   0   0  C44  0   0  ]
    [  0   0   0   0  C44  0  ]
    [  0   0   0   0   0  C44 ]
The inverse of this matrix (the compliance matrix) is commonly written as [3]

    [  1/E   −ν/E  −ν/E   0    0    0  ]
    [ −ν/E    1/E  −ν/E   0    0    0  ]
    [ −ν/E   −ν/E   1/E   0    0    0  ]
    [   0     0     0    1/G   0    0  ]
    [   0     0     0     0   1/G   0  ]
    [   0     0     0     0    0   1/G ]

where E is the Young's modulus, G is the shear modulus, and ν is the Poisson's ratio. Therefore, we can think of the ratio as the relation between the shear modulus of the cubic material and its (isotropic) equivalent:
a_r = G / (E/[2(1 + ν)]) = 2(1 + ν)G/E ≡ 2C44 / (C11 − C12).
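For instance, using approximate literature values for the elastic constants of copper (roughly C11 ≈ 168.4 GPa, C12 ≈ 121.4 GPa, C44 ≈ 75.4 GPa; treat these numbers as illustrative):

```python
def zener_ratio(c11, c12, c44):
    """Zener anisotropy ratio a_r = 2*C44 / (C11 - C12)."""
    return 2.0 * c44 / (c11 - c12)

# copper is strongly anisotropic: a_r is far from the isotropic value of 1
a_cu = zener_ratio(c11=168.4, c12=121.4, c44=75.4)   # GPa units cancel out
print(round(a_cu, 2))  # 3.21
```

An isotropic material satisfies 2C44 = C11 − C12 exactly, so any consistent set of isotropic constants returns a_r = 1.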
The Zener ratio is only applicable to cubic crystals. To overcome this limitation, a 'Universal Elastic Anisotropy Index (AU)' [ 4 ] was formulated from variational principles of elasticity and tensor algebra. The AU is now used to quantify the anisotropy of elastic crystals of all classes.
The Tensorial Anisotropy Index A T [ 5 ] extends the Zener ratio for fully anisotropic materials and overcomes the limitation of the AU that is designed for materials exhibiting internal symmetries of elastic crystals, which is not always observed in multi-component composites. It takes into consideration all the 21 coefficients of the fully anisotropic stiffness tensor and covers the directional differences among the stiffness tensor groups.
It is composed of two major parts, A^I and A^A, the former referring to components existing in a cubic tensor and the latter in an anisotropic tensor, so that A^T = A^I + A^A. The first component includes the modified Zener ratio and additionally accounts for directional differences in the material, which exist in orthotropic material, for instance. The second component, A^A, covers the influence of stiffness coefficients that are nonzero only for non-cubic materials and remains zero otherwise.

A^I = A^{I,z} + A^{I,cov} = 2(C44 + C55 + C66) / ((C11 + C22 + C33) − (C12 + C13 + C23)) + Σ_{i=1..3} α(C_{Gi}),

where α(C_{Gi}) is the coefficient of variation for each stiffness group, accounting for directional differences of material stiffness, i.e., C_{G1} = [C11, C22, C33], C_{G2} = [C44, C55, C66], C_{G3} = [C12, C23, C13]. In cubic materials each stiffness component within groups 1–3 has equal value, so this expression reduces directly to the Zener ratio for cubic materials.

The second component of this index, A^A, is non-zero for complex materials or composites with few or no symmetries in their internal structure. In such cases the remaining stiffness coefficients, joined in three further groups, are not null: C_{G4} = [C34, C45, C56], C_{G5} = [C14, C25, C36], C_{G6} = [C24, C35, C46, C15, C26, C16].
|
https://en.wikipedia.org/wiki/Zener_ratio
|
In materials science, the Zener–Hollomon parameter, typically denoted as Z, is used to relate changes in temperature or strain rate to the stress-strain behavior of a material. It has been most extensively applied to the forming of steels at increased temperature, when creep is active. [1] It is given by

Z = ε̇ · exp(Q/(RT)),
where ε̇ is the strain rate, Q is the activation energy, R is the gas constant, and T is the temperature. The Zener–Hollomon parameter is also known as the temperature-compensated strain rate, since temperature enters as a compensating exponential factor on the strain rate. It is named after Clarence Zener and John Herbert Hollomon, Jr., who established the formula based on the stress-strain behavior of steel.
When plastically deforming a material, the flow stress depends heavily on both the strain rate and the temperature. During forming processes, Z may help determine appropriate changes in strain rate or temperature when the other variable is altered, in order to keep the material flowing properly. Z has also been applied to some metals over a large range of strain rates and temperatures, with comparable end-of-processing microstructures observed as long as Z remained similar. This is because the effects of temperature and strain rate on the active deformation mechanisms largely offset one another: increasing the strain rate or decreasing the temperature increases Z.
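A small sketch of this temperature compensation (the activation energy and temperatures below are illustrative assumptions, not values from this article):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def zener_hollomon(strain_rate, Q, T):
    """Z = strain_rate * exp(Q / (R*T))."""
    return strain_rate * math.exp(Q / (R * T))

def compensating_temperature(rate1, T1, rate2, Q):
    """Temperature at which rate2 gives the same Z as rate1 at T1."""
    # solve Q/(R*T2) = Q/(R*T1) - ln(rate2/rate1) for T2
    return Q / (R * (Q / (R * T1) - math.log(rate2 / rate1)))

Q = 300e3            # J/mol, illustrative hot-working activation energy
T1, rate1 = 1100.0, 1.0
rate2 = 10.0         # ten-fold faster forming
T2 = compensating_temperature(rate1, T1, rate2, Q)
print(round(T2, 1))  # ~1183 K keeps Z (and thus the flow behaviour) unchanged
```

A ten-fold increase in strain rate is offset here by raising the temperature by only about 80 K, because Z depends exponentially on 1/T.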
|
https://en.wikipedia.org/wiki/Zener–Hollomon_parameter
|
The zenith ( UK : / ˈ z ɛ n ɪ θ / , US : / ˈ z iː -/ ) [ 1 ] [ 2 ] is the imaginary point on the celestial sphere directly "above" a particular location. "Above" means in the vertical direction ( plumb line ) opposite to the gravity direction at that location ( nadir ). The zenith is the "highest" point on the celestial sphere. The direction opposite of the zenith is the nadir .
The word zenith derives from an inaccurate reading of the Arabic expression سمت الرأس ( samt al-raʾs ), meaning "direction of the head" or "path above the head", by Medieval Latin scribes in the Middle Ages (during the 14th century), possibly through Old Spanish . [ 3 ] It was reduced to samt ("direction") and miswritten as senit / cenit , the m being misread as ni . Through the Old French cenith , zenith first appeared in the 17th century. [ 4 ]
The term zenith sometimes means the highest point , way, or level reached by a celestial body on its daily apparent path around a given point of observation. [ 5 ] This sense of the word is often used to describe the position of the Sun ("The sun reached its zenith..."), but to an astronomer, the Sun does not have its own zenith and is at the zenith only if it is directly overhead.
In a scientific context, the zenith is the direction of reference for measuring the zenith angle (or zenith angular distance), the angle between a direction of interest (e.g. a star) and the local zenith; that is, the zenith angle is the complement of the altitude angle (or elevation angle).
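In code, the complement relation between zenith angle and altitude is a one-line sketch:

```python
def zenith_angle(altitude_deg):
    """Zenith angle is the complement of the altitude angle, in degrees."""
    return 90.0 - altitude_deg

print(zenith_angle(90.0))  # 0.0: an object at the zenith is directly overhead
print(zenith_angle(0.0))   # 90.0: an object on the horizon
```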
The Sun reaches the observer's zenith when it is 90° above the horizon, and this only happens between the Tropic of Cancer and the Tropic of Capricorn . The point where this occurs is known as the subsolar point . In Islamic astronomy , the passing of the Sun over the zenith of Mecca becomes the basis of the qibla observation by shadows twice a year on 27/28 May and 15/16 July. [ 6 ] [ 7 ]
At a given location during the course of a day, the Sun reaches not only its zenith but also its nadir , at the antipode of that location 12 hours from solar noon .
In astronomy , the altitude in the horizontal coordinate system and the zenith angle are complementary angles , with the horizon perpendicular to the zenith. The astronomical meridian is also determined by the zenith, and is defined as a circle on the celestial sphere that passes through the zenith, nadir, and the celestial poles .
A zenith telescope is a type of telescope designed to point straight up at or near the zenith, and used for precision measurement of star positions, to simplify telescope construction, or both. The NASA Orbital Debris Observatory and the Large Zenith Telescope are both zenith telescopes, since the use of liquid mirrors meant these telescopes could only point straight up.
On the International Space Station , zenith and nadir are used instead of up and down , referring to directions within and around the station, relative to the earth.
Zenith stars (also "star on top", "overhead star", "latitude star") [ 8 ] are stars whose declination equals the latitude of the observer's location, and which hence at some time in the day or night culminate (pass) through the zenith. When a star is at the zenith, its right ascension equals the local sidereal time. In celestial navigation this allows latitude to be determined, since the declination of the star equals the latitude of the observer. If the time at Greenwich is known at the time of the observation, the observer's longitude can also be determined from the right ascension of the star. Hence zenith stars lie on or near the circle of declination equal to the latitude of the observer (the "zenith circle"). Zenith stars are not to be confused with the "steering stars" [ 8 ] of a sidereal compass rose.
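The longitude determination described above can be sketched as follows. This is a simplified illustration: the function name is ours, it assumes the Greenwich sidereal time at the moment of observation is already known, and it ignores the distinction between mean and apparent sidereal time.

```python
def longitude_from_zenith_star(star_ra_hours: float,
                               greenwich_sidereal_hours: float) -> float:
    """When a star culminates through the observer's zenith, the local
    sidereal time equals the star's right ascension; its offset from the
    Greenwich sidereal time gives the longitude (east positive)."""
    diff = star_ra_hours - greenwich_sidereal_hours
    diff = (diff + 12.0) % 24.0 - 12.0   # wrap into [-12, +12) hours
    return diff * 15.0                   # 1 hour of sidereal time = 15 degrees

# A star with right ascension 6h at the zenith while the Greenwich
# sidereal time is 4h places the observer at longitude 30 degrees east.
print(longitude_from_zenith_star(6.0, 4.0))  # 30.0
```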
|
https://en.wikipedia.org/wiki/Zenith
|
Zeno's paradoxes are a series of philosophical arguments presented by the ancient Greek philosopher Zeno of Elea (c. 490–430 BC), [ 1 ] [ 2 ] primarily known through the works of Plato , Aristotle , and later commentators like Simplicius of Cilicia . [ 2 ] Zeno devised these paradoxes to support his teacher Parmenides 's philosophy of monism , which posits that despite our sensory experiences, reality is singular and unchanging. The paradoxes famously challenge the notions of plurality (the existence of many things), motion, space, and time by suggesting they lead to logical contradictions .
Zeno's work, primarily known from second-hand accounts since his original texts are lost, comprises forty "paradoxes of plurality," which argue against the coherence of believing in multiple existences, and several arguments against motion and change. [ 2 ] Of these, only a few are definitively known today, including the renowned "Achilles Paradox", which illustrates the problematic concept of infinite divisibility in space and time . [ 1 ] [ 2 ] In this paradox, Zeno argues that a swift runner like Achilles cannot overtake a slower moving tortoise with a head start, because the distance between them can be infinitely subdivided, implying Achilles would require an infinite number of steps to catch the tortoise. [ 1 ] [ 2 ]
These paradoxes have stirred extensive philosophical and mathematical discussion throughout history , [ 1 ] [ 2 ] particularly regarding the nature of infinity and the continuity of space and time. Initially, Aristotle 's interpretation, suggesting a potential rather than actual infinity, was widely accepted. [ 1 ] However, modern solutions leveraging the mathematical framework of calculus have provided a different perspective, highlighting Zeno's significant early insight into the complexities of infinity and continuous motion. [ 1 ] Zeno's paradoxes remain a pivotal reference point in the philosophical and mathematical exploration of reality, motion, and the infinite, influencing both ancient thought and modern scientific understanding. [ 1 ] [ 2 ]
The origins of the paradoxes are somewhat unclear, but they are generally thought to have been developed to support Parmenides ' doctrine of monism , that all of reality is one, and that all change is impossible , that is, that nothing ever changes in location or in any other respect. [ 1 ] [ 2 ] Diogenes Laërtius , citing Favorinus , says that Zeno's teacher Parmenides was the first to introduce the paradox of Achilles and the tortoise. But in a later passage, Laërtius attributes the origin of the paradox to Zeno, explaining that Favorinus disagrees. [ 3 ] Modern academics attribute the paradox to Zeno. [ 1 ] [ 2 ]
Many of these paradoxes argue that contrary to the evidence of one's senses, motion is nothing but an illusion . [ 1 ] [ 2 ] In Plato's Parmenides (128a–d), Zeno is characterized as taking on the project of creating these paradoxes because other philosophers claimed paradoxes arise when considering Parmenides' view. Zeno's arguments may then be early examples of a method of proof called reductio ad absurdum , also known as proof by contradiction . Thus Plato has Zeno say the purpose of the paradoxes "is to show that their hypothesis that existences are many, if properly followed up, leads to still more absurd results than the hypothesis that they are one." [ 4 ] Plato has Socrates claim that Zeno and Parmenides were essentially arguing exactly the same point. [ 5 ] They are also credited as a source of the dialectic method used by Socrates. [ 6 ]
Some of Zeno's nine surviving paradoxes (preserved in Aristotle's Physics [ 7 ] [ 8 ] and Simplicius's commentary thereon) are essentially equivalent to one another. Aristotle offered a response to some of them. [ 7 ] Popular literature often misrepresents Zeno's arguments. For example, Zeno is often said to have argued that the sum of an infinite number of terms must itself be infinite, with the result that not only the time, but also the distance to be travelled, become infinite. [ 9 ] However, none of the original ancient sources has Zeno discussing the sum of any infinite series. Simplicius has Zeno saying "it is impossible to traverse an infinite number of things in a finite time". This presents Zeno's problem not with finding the sum , but rather with finishing a task with an infinite number of steps: how can one ever get from A to B, if an infinite number of (non-instantaneous) events can be identified that need to precede the arrival at B, and one cannot reach even the beginning of a "last event"? [ 10 ] [ 11 ] [ 12 ] [ 13 ]
Three of the strongest and most famous—that of Achilles and the tortoise, the Dichotomy argument, and that of an arrow in flight—are presented in detail below.
That which is in locomotion must arrive at the half-way stage before it arrives at the goal.
Suppose Atalanta wishes to walk to the end of a path. Before she can get there, she must get halfway there. Before she can get halfway there, she must get a quarter of the way there. Before traveling a quarter, she must travel one-eighth; before an eighth, one-sixteenth; and so on.
The resulting sequence can be represented as: { ..., 1/16, 1/8, 1/4, 1/2, 1 }
This description requires one to complete an infinite number of tasks, which Zeno maintains is an impossibility. [ 14 ]
This sequence also presents a second problem in that it contains no first distance to run, for any possible ( finite ) first distance could be divided in half, and hence would not be first after all. Hence, the trip cannot even begin. The paradoxical conclusion then would be that travel over any finite distance can be neither completed nor begun, and so all motion must be an illusion . [ 15 ]
This argument is called the " Dichotomy " because it involves repeatedly splitting a distance into two parts. An example with the original sense can be found in an asymptote . It is also known as the Race Course paradox.
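Modern notation makes the limit behind the Dichotomy easy to inspect numerically; here is a minimal sketch using exact rational arithmetic (it illustrates the mathematical convergence, not Zeno's own concern with completing the tasks):

```python
from fractions import Fraction

# Partial sums of 1/2 + 1/4 + 1/8 + ... : infinitely many sub-journeys,
# yet the total distance covered never exceeds 1.
total = Fraction(0)
for k in range(1, 11):
    total += Fraction(1, 2 ** k)

print(total)      # 1023/1024
print(1 - total)  # 1/1024 -- the remaining gap halves with every step
```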
In a race, the quickest runner can never overtake the slowest, since the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead.
In the paradox of Achilles and the tortoise , Achilles is in a footrace with a tortoise. Achilles allows the tortoise a head start of 100 meters, for example. Suppose that each racer starts running at some constant speed, one faster than the other. After some finite time, Achilles will have run 100 meters, bringing him to the tortoise's starting point. During this time, the tortoise has run a much shorter distance, say 2 meters. It will then take Achilles some further time to run that distance, by which time the tortoise will have advanced farther; and then more time still to reach this third point, while the tortoise moves ahead. Thus, whenever Achilles arrives somewhere the tortoise has been, he still has some distance to go before he can even reach the tortoise. As Aristotle noted, this argument is similar to the Dichotomy. [ 16 ] It lacks, however, the apparent conclusion of motionlessness.
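The catch-up point can be computed two ways, by summing Zeno's infinitely many stages as a geometric series and by simple algebra; the speeds below are illustrative, chosen to match the 100 m head start and the 2 m the tortoise gains while Achilles covers it:

```python
# The text's numbers: a 100 m head start, and the tortoise gains 2 m while
# Achilles runs that 100 m, i.e. a speed ratio of 50:1 (values illustrative).
v_achilles, v_tortoise, head_start = 10.0, 0.2, 100.0

# Zeno's stages form a geometric series with ratio r = v_tortoise / v_achilles:
# 100 + 2 + 0.04 + ...  which converges to head_start / (1 - r).
r = v_tortoise / v_achilles
series_distance = head_start / (1 - r)

# Direct algebra: Achilles draws level when v_a * t = head_start + v_t * t.
t_catch = head_start / (v_achilles - v_tortoise)
algebra_distance = v_achilles * t_catch

print(series_distance, algebra_distance)  # both are about 102.04 m
```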
If everything when it occupies an equal space is at rest at that instant of time, and if that which is in locomotion is always occupying such a space at any moment, the flying arrow is therefore motionless at that instant of time and at the next instant of time but if both instants of time are taken as the same instant or continuous instant of time then it is in motion. [ 17 ]
In the arrow paradox, Zeno states that for motion to occur, an object must change the position which it occupies. He gives an example of an arrow in flight. He states that at any one (durationless) instant of time, the arrow is neither moving to where it is, nor to where it is not. [ 18 ] It cannot move to where it is not, because no time elapses for it to move there; it cannot move to where it is, because it is already there. In other words, at every instant of time there is no motion occurring. If everything is motionless at every instant, and time is entirely composed of instants, then motion is impossible.
Whereas the first two paradoxes divide space, this paradox starts by dividing time—and not into segments, but into points. [ 19 ]
Aristotle gives three other paradoxes.
From Aristotle:
If everything that exists has a place, place too will have a place, and so on ad infinitum . [ 20 ]
Description of the paradox from the Routledge Dictionary of Philosophy :
The argument is that a single grain of millet makes no sound upon falling, but a thousand grains make a sound. Hence a thousand nothings become something, an absurd conclusion. [ 21 ]
Aristotle's response:
Zeno's reasoning is false when he argues that there is no part of the millet that does not make a sound: for there is no reason why any such part should not in any length of time fail to move the air that the whole bushel moves in falling. In fact it does not of itself move even such a quantity of the air as it would move if this part were by itself: for no part even exists otherwise than potentially. [ 22 ]
Description from Nick Huggett:
This is a Parmenidean argument that one cannot trust one's sense of hearing. Aristotle's response seems to be that even inaudible sounds can add to an audible sound. [ 23 ]
From Aristotle:
... concerning the two rows of bodies, each row being composed of an equal number of bodies of equal size, passing each other on a race-course as they proceed with equal velocity in opposite directions, the one row originally occupying the space between the goal and the middle point of the course and the other that between the middle point and the starting-post. This...involves the conclusion that half a given time is equal to double that time. [ 24 ]
An expanded account of Zeno's arguments, as presented by Aristotle, is given in Simplicius's commentary On Aristotle's Physics . [ 25 ] [ 2 ] [ 1 ]
According to Angie Hobbs of The University of Sheffield, this paradox is intended to be considered together with the paradox of Achilles and the Tortoise: it problematizes the concept of discrete space and time, whereas the other problematizes the concept of infinitely divisible space and time. [ 26 ]
According to Simplicius , Diogenes the Cynic said nothing upon hearing Zeno's arguments, but stood up and walked, in order to demonstrate the falsity of Zeno's conclusions. [ 25 ] [ 2 ] To fully solve any of the paradoxes, however, one needs to show what is wrong with the argument, not just the conclusions. Throughout history several solutions have been proposed, among the earliest recorded being those of Aristotle and Archimedes.
Aristotle (384 BC–322 BC) remarked that as the distance decreases, the time needed to cover those distances also decreases, so that the time needed also becomes increasingly small. [ 27 ] [ failed verification ] [ 28 ] Aristotle also distinguished "things infinite in respect of divisibility" (such as a unit of space that can be mentally divided into ever smaller units while remaining spatially the same) from things (or distances) that are infinite in extension ("with respect to their extremities"). [ 29 ] Aristotle's objection to the arrow paradox was that "Time is not composed of indivisible nows any more than any other magnitude is composed of indivisibles." [ 30 ] Thomas Aquinas , commenting on Aristotle's objection, wrote "Instants are not parts of time, for time is not made up of instants any more than a magnitude is made of points, as we have already proved. Hence it does not follow that a thing is not in motion in a given time, just because it is not in motion in any instant of that time." [ 31 ] [ 32 ] [ 33 ]
Some mathematicians and historians, such as Carl Boyer , hold that Zeno's paradoxes are simply mathematical problems, for which modern calculus provides a mathematical solution. [ 34 ] Infinite processes remained theoretically troublesome in mathematics until the late 19th century. With the epsilon-delta definition of limit , Weierstrass and Cauchy developed a rigorous formulation of the logic and calculus involved. These works resolved the mathematics involving infinite processes. [ 35 ] [ 36 ]
Some philosophers , however, say that Zeno's paradoxes and their variations (see Thomson's lamp ) remain relevant metaphysical problems. [ 10 ] [ 11 ] [ 12 ] While mathematics can calculate where and when the moving Achilles will overtake the Tortoise of Zeno's paradox, philosophers such as Kevin Brown [ 10 ] and Francis Moorcroft [ 11 ] hold that mathematics does not address the central point in Zeno's argument, and that solving the mathematical issues does not solve every issue the paradoxes raise. Brown concludes "Given the history of 'final resolutions', from Aristotle onwards, it's probably foolhardy to think we've reached the end. It may be that Zeno's arguments on motion, because of their simplicity and universality, will always serve as a kind of ' Rorschach image ' onto which people can project their most fundamental phenomenological concerns (if they have any)." [ 10 ]
An alternative conclusion, proposed by Henri Bergson in his 1896 book Matter and Memory , is that, while the path is divisible, the motion is not. [ 37 ] [ 38 ]
In 2003, Peter Lynds argued that all of Zeno's motion paradoxes are resolved by the conclusion that instants in time and instantaneous magnitudes do not physically exist. [ 39 ] [ 40 ] [ 41 ] Lynds argues that an object in relative motion cannot have an instantaneous or determined relative position (for if it did, it could not be in motion), and so cannot have its motion fractionally dissected as if it does, as is assumed by the paradoxes. Nick Huggett argues that Zeno is assuming the conclusion when he says that objects that occupy the same space as they do at rest must be at rest. [ 19 ]
Based on the work of Georg Cantor , [ 42 ] Bertrand Russell offered a solution to the paradoxes known as the "at-at theory of motion". It agrees that there can be no motion "during" a durationless instant, and contends that all that is required for motion is that the arrow be at one point at one time, at another point another time, and at appropriate points between those two points for intervening times. In this view motion is just change in position over time. [ 43 ] [ 44 ]
Another proposed solution is to question one of the assumptions Zeno used in his paradoxes (particularly the Dichotomy), which is that between any two different points in space (or time), there is always another point. Without this assumption there are only a finite number of distances between two points, hence there is no infinite sequence of movements, and the paradox is resolved. According to Hermann Weyl , the assumption that space is made of finite and discrete units is subject to a further problem, given by the " tile argument " or "distance function problem". [ 45 ] [ 46 ] According to this, the length of the hypotenuse of a right angled triangle in discretized space is always equal to the length of one of the two sides, in contradiction to geometry. Jean Paul Van Bendegem has argued that the Tile Argument can be resolved, and that discretization can therefore remove the paradox. [ 34 ] [ 47 ]
In 1977, [ 48 ] physicists E. C. George Sudarshan and B. Misra discovered that the dynamical evolution ( motion ) of a quantum system can be hindered (or even inhibited) through observation of the system . [ 49 ] This effect is usually called the " Quantum Zeno effect " as it is strongly reminiscent of Zeno's arrow paradox. This effect was first theorized in 1958. [ 50 ]
In the field of verification and design of timed and hybrid systems , the system behaviour is called Zeno if it includes an infinite number of discrete steps in a finite amount of time. [ 51 ] Some formal verification techniques exclude these behaviours from analysis, if they are not equivalent to non-Zeno behaviour. [ 52 ] [ 53 ] In systems design these behaviours will also often be excluded from system models, since they cannot be implemented with a digital controller. [ 54 ]
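A standard textbook example of Zeno behaviour is a bouncing ball whose flight times between impacts shrink geometrically, so infinitely many discrete bounce events accumulate before a finite instant; a rough sketch, with illustrative parameter values:

```python
import math

g, c, h0 = 9.81, 0.5, 1.0            # gravity, restitution coefficient, drop height
t0 = math.sqrt(2 * h0 / g)           # time to the first impact
v = math.sqrt(2 * g * h0)            # speed at the first impact
total = t0
for _ in range(60):                  # 60 bounces is already well past float precision
    v *= c                           # each impact scales the rebound speed by c
    total += 2 * v / g               # up-and-down flight time between impacts

# Closed form of the geometric series: all infinitely many bounces finish by
# t0 * (1 + c) / (1 - c), the "Zeno time" at which the events accumulate.
limit = t0 * (1 + c) / (1 - c)
print(total, limit)
```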
Roughly contemporaneously during the Warring States period (475–221 BCE), ancient Chinese philosophers from the School of Names , a school of thought similarly concerned with logic and dialectics, developed paradoxes similar to those of Zeno. The works of the School of Names have largely been lost, with the exception of portions of the Gongsun Longzi . The second of the Ten Theses of Hui Shi suggests knowledge of infinitesimals: That which has no thickness cannot be piled up; yet it is a thousand li in dimension. Among the many puzzles of his recorded in the Zhuangzi is one very similar to Zeno's Dichotomy:
"If from a stick a foot long you every day take the half of it, in a myriad ages it will not be exhausted."
The Mohist canon appears to propose a solution to this paradox by arguing that in moving across a measured length, the distance is not covered in successive fractions of the length, but in one stage. Due to the lack of surviving works from the School of Names, most of the other paradoxes listed are difficult to interpret. [ 56 ]
"What the Tortoise Said to Achilles", [ 57 ] written in 1895 by Lewis Carroll , describes a paradoxical infinite regress argument in the realm of pure logic. It uses Achilles and the Tortoise as characters in a clear reference to Zeno's paradox of Achilles. [ 58 ]
|
https://en.wikipedia.org/wiki/Zeno's_paradoxes
|
Zenzizenzizenzic is an obsolete form of mathematical notation representing the eighth power of a number (that is, the zenzizenzizenzic of x is x⁸), dating from a time when powers were written out in words rather than as superscript numbers. This term was suggested by Robert Recorde , a 16th-century Welsh physician, mathematician and writer of popular mathematics textbooks, in his 1557 work The Whetstone of Witte (although his spelling was zenzizenzizenzike ); he wrote that it "doeth represent the square of squares squaredly".
At the time Recorde proposed this notation, there was no easy way of denoting the powers of numbers other than squares and cubes. The root word for Recorde's notation is zenzic , which is a German spelling of the medieval Italian word censo , meaning 'squared'. [ 1 ] Since the square of a square of a number is its fourth power , Recorde used the word zenzizenzic (spelled by him as zenzizenzike ) to express it. Some of the terms had prior use in Latin zenzicubicus , zensizensicus and zensizenzum . [ 2 ] Similarly, as the sixth power of a number is equal to the square of its cube, Recorde used the word zenzicubike to express it; a more modern spelling, zenzicube , is found in Samuel Jeake 's Arithmetick Surveighed and Reviewed . Finally, the word zenzizenzizenzic denotes the square of the square of a number's square, which is its eighth power: in modern notation, ((x²)²)² = x⁸.
Samuel Jeake gives zenzizenzizenzizenzike (the square of the square of the square of the square, or 16th power) in a table in A Compleat Body of Arithmetick (1701): [ 3 ]
The word, as well as the system, is obsolete except as a curiosity; the Oxford English Dictionary ( OED ) has only one citation for it. [ 4 ] [ 5 ] As well as being a mathematical oddity, it survives as a linguistic oddity: zenzizenzizenzic has more Zs than any other word in the OED. [ 6 ] [ 7 ]
Recorde proposed three mathematical terms by which any power (that is, index or exponent ) greater than 1 could be expressed: zenzic , i.e. squared; cubic ; and sursolid , i.e. raised to a prime number greater than three, the smallest of which is five. Sursolids were as follows: 5 was the first; 7, the second; 11, the third; 13, the fourth; etc.
Therefore, a number raised to the power of six would be zenzicubic , a number raised to the power of seven would be the second sursolid, hence bissursolid (not a multiple of two and three), a number raised to the twelfth power would be the "zenzizenzicubic" and a number raised to the power of ten would be the square of the (first) sursolid . The fourteenth power was the square of the second sursolid, and the twenty-second was the square of the third sursolid.
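Recorde's scheme amounts to factoring the exponent into 2s ("zenzi-"), 3s ("cubic"), and sursolid primes (5 the first, 7 the second, and so on). The helper below is our own illustration of that factorization, not Recorde's notation:

```python
def recorde_factors(n: int) -> list:
    """Factor an exponent into the prime building blocks Recorde named:
    2 -> 'zenzi-' (square), 3 -> 'cubic', larger primes -> sursolids."""
    factors, d = [], 2
    while n > 1:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    return factors

print(recorde_factors(8))   # [2, 2, 2] -> zenzizenzizenzic
print(recorde_factors(12))  # [2, 2, 3] -> zenzizenzicubic
print(recorde_factors(10))  # [2, 5]    -> square of the first sursolid
```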
Jeake's text appears to designate a written exponent of 0 as being equal to an "absolute number, as if it had no Mark", thus using the notation x⁰ to refer to an independent term of a polynomial, while a written exponent of 1, in his text, denotes "the Root of any number" (using root with the meaning of the base number, i.e. its first power x¹, as demonstrated in the examples provided in the book).
|
https://en.wikipedia.org/wiki/Zenzizenzizenzic
|
Zeolite is a group of several microporous , crystalline aluminosilicate minerals commonly used as commercial adsorbents and catalysts . [ 1 ] They mainly consist of silicon , aluminium , and oxygen , and have the general formula Mⁿ⁺1/n(AlO₂)⁻(SiO₂)x·yH₂O, where Mⁿ⁺ is either a metal ion or H⁺.
The term was originally coined in 1756 by Swedish mineralogist Axel Fredrik Cronstedt , who observed that rapidly heating a material, believed to have been stilbite , produced large amounts of steam from water that had been adsorbed by the material. Based on this, he called the material zeolite , from the Greek ζέω (zéō) , meaning "to boil" and λίθος (líthos) , meaning "stone". [ 2 ]
Zeolites occur naturally, but are also produced industrially on a large scale. As of December 2018, 253 unique zeolite frameworks have been identified, and over 40 naturally occurring zeolite frameworks are known. [ 3 ] [ 4 ] Every new zeolite structure that is obtained is examined by the International Zeolite Association Structure Commission (IZA-SC) and receives a three-letter designation. [ 5 ]
Zeolites are white solids with ordinary handling properties, like many routine aluminosilicate minerals, e.g. feldspar . They have the general formula (MAlO₂)(SiO₂)x(H₂O)y, where M⁺ is usually H⁺ or Na⁺. The Si/Al ratio is variable, which provides a means to tune the properties. Zeolites with Si/Al ratios higher than about 3 are classified as high-silica zeolites , which tend to be more hydrophobic. The H⁺ and Na⁺ can be replaced by diverse cations, because zeolites have ion-exchange properties. The nature of the cations influences the porosity of zeolites.
Zeolites have microporous structures with a typical diameter of 0.3–0.8 nm. Like most aluminosilicates, the framework is formed by linking aluminum and silicon atoms through shared oxygen atoms. This linking leads to a 3-dimensional network of Si-O-Al, Si-O-Si, and Al-O-Al linkages. The aluminum centers are negatively charged, which requires an accompanying cation. These cations are hydrated during the formation of the materials. The hydrated cations interrupt the otherwise dense network of Si-O-Al, Si-O-Si, and Al-O-Al linkages, leading to regular water-filled cavities. Because of the porosity of the zeolite, the water can exit the material through channels. Because of the rigidity of the zeolite framework, the loss of water does not result in collapse of the cavities and channels. This aspect – the ability to generate voids within the solid material – underpins the ability of zeolites to function as catalysts. They possess high physical and chemical stability due to the large covalent bonding contribution. High-silica zeolites have excellent hydrophobicity and are suited for adsorption of bulky, hydrophobic molecules such as hydrocarbons. In addition, high-silica zeolites are H⁺ exchangeable, unlike natural zeolites, and are used as solid acid catalysts . The acidity is strong enough to protonate hydrocarbons, and high-silica zeolites are used in acid catalysis processes such as fluid catalytic cracking in the petrochemical industry. [ 6 ]
The structures of hundreds of zeolites have been determined. Most do not occur naturally. For each structure, the International Zeolite Association (IZA) gives a three-letter code called framework type code (FTC). [ 3 ] For example, the major molecular sieves, 3A, 4A and 5A, are all LTA (Linde Type A). Most commercially available natural zeolites are of the MOR, HEU or ANA-types.
An example of the notation of the ring structure of zeolite and other silicate materials is shown in the upper right figure. The middle figure shows a common notation using a structural formula . The left figure emphasizes the SiO₄ tetrahedral structure. Connecting oxygen atoms together creates a four-membered ring of oxygen (blue bold line). Such a ring substructure is called a four-membered ring or simply a four-ring . The figure on the right shows a 4-ring with Si atoms connected to each other, which is the most common way to express the topology of the framework.
The figure on the right compares the typical framework structures of LTA (left) and FAU (right). Both zeolites share the truncated-octahedral sodalite cage (purple line). However, the way the cages are connected (yellow line) differs: in LTA, the four-membered rings of the cage are connected to each other to form the skeleton, while in FAU, the six-membered rings are connected to each other. As a result, the pore entrance of LTA is an 8-ring (0.41 nm [ 3 ] ), making it a small pore zeolite , while that of FAU is a 12-ring (0.74 nm [ 3 ] ), making it a large pore zeolite . Materials with a 10-ring are called medium pore zeolites , a typical example being ZSM-5 (MFI).
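The small/medium/large classification by the ring size of the largest pore aperture can be expressed as a tiny lookup; the function below simply encodes the conventions stated in the text (the function name is ours, purely illustrative):

```python
def pore_class(ring_size: int) -> str:
    """Classify a zeolite by the ring size of its largest pore aperture."""
    if ring_size <= 8:
        return "small pore"
    elif ring_size <= 10:
        return "medium pore"
    else:
        return "large pore"

print(pore_class(8))   # "small pore"  -- e.g. LTA, ~0.41 nm aperture
print(pore_class(10))  # "medium pore" -- e.g. MFI (ZSM-5)
print(pore_class(12))  # "large pore"  -- e.g. FAU, ~0.74 nm aperture
```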
Although more than 200 types of zeolites are known, only about 100 aluminosilicate types are available. Moreover, only a few types can be synthesized in an industrially feasible way and have sufficient thermal stability to meet the requirements for industrial use. In particular, the FAU (faujasite, USY), *BEA (beta), MOR (high-silica mordenite), MFI (ZSM-5), and FER (high-silica ferrierite) types are called the "big five" of high-silica zeolites, [ 7 ] and industrial production methods have been established for them.
The term molecular sieve refers to a particular property of these materials, i.e., the ability to selectively sort molecules based primarily on a size exclusion process. This is due to a very regular pore structure of molecular dimensions. The maximum size of the molecular or ionic species that can enter the pores of a zeolite is controlled by the dimensions of the channels. These are conventionally defined by the ring size of the aperture, where, for example, the term "eight-ring" refers to a closed-loop that is built from eight tetrahedrally coordinated silicon (or aluminium) atoms and eight oxygen atoms. These rings are not always perfectly symmetrical due to a variety of causes, including strain induced by the bonding between units that are needed to produce the overall structure or coordination of some of the oxygen atoms of the rings to cations within the structure. Therefore, the pores in many zeolites are not cylindrical.
Isomorphous substitution of Si in zeolites can be possible for some heteroatoms such as titanium , [ 8 ] zinc [ 9 ] and germanium . [ 10 ] Al atoms in zeolites can be also structurally replaced with boron [ 11 ] and gallium . [ 12 ]
Frameworks of other compositions are also known, such as the silicoaluminophosphate type (AlPO molecular sieves), [ 13 ] in which Al and P play the roles of Si and Al, and the gallogermanates. [ 14 ]
Some of the more common mineral zeolites are analcime , chabazite , clinoptilolite , heulandite , natrolite , phillipsite , and stilbite . An example of the mineral formula of a zeolite is Na₂Al₂Si₃O₁₀·2H₂O, the formula for natrolite .
Natural zeolites form where volcanic rocks and ash layers react with alkaline groundwater. Zeolites also crystallize in post-depositional environments over periods ranging from thousands to millions of years in shallow marine basins. Naturally occurring zeolites are rarely pure and are contaminated to varying degrees by other minerals, metals, quartz , or other zeolites. For this reason, naturally occurring zeolites are excluded from many important commercial applications where uniformity and purity are essential. [ citation needed ]
Zeolites transform to other minerals under weathering , hydrothermal alteration or metamorphic conditions. Some examples: [ 15 ]
Thomsonite , one of the rarer zeolite minerals, has been collected as a gemstone from a series of lava flows along Lake Superior in Minnesota and, to a lesser degree, in Michigan . Thomsonite nodules from these areas have eroded from basalt lava flows and are collected on beaches and by scuba divers in Lake Superior.
These thomsonite nodules have concentric rings in combinations of colors: black, white, orange, pink, purple, red, and many shades of green. Some nodules have copper inclusions and rarely will be found with copper "eyes". When polished by a lapidary , the thomsonites sometimes display a "cat's eye" effect ( chatoyancy ). [ 16 ]
The first synthetic structure was reported by Richard Barrer . [ 17 ] Industrially important zeolites are produced synthetically. Typical procedures entail heating aqueous solutions of alumina and silica with sodium hydroxide . Equivalent reagents include sodium aluminate and sodium silicate . Further variations include the use of structure directing agents (SDA) such as quaternary ammonium cations . [ 18 ]
Synthetic zeolites hold some key advantages over their natural analogs. The synthetic materials are manufactured in a uniform, phase-pure state. It is also possible to produce zeolite structures that do not appear in nature. Zeolite A is a well-known example. Since the principal raw materials used to manufacture zeolites are silica and alumina, which are among the most abundant mineral components on earth, the potential to supply zeolites is virtually unlimited.
As of 2016, the world's annual production of natural zeolite approximates 3 million tonnes . Major producers in 2010 included China (2 million tonnes), South Korea (210,000 t), Japan (150,000 t), Jordan (140,000 t), Turkey (100,000 t), Slovakia (85,000 t) and the United States (59,000 t). [ 19 ] The ready availability of zeolite-rich rock at low cost and the shortage of competing minerals and rocks are probably the most important factors for its large-scale use. According to the United States Geological Survey , it is likely that a significant percentage of the material sold as zeolites in some countries is ground or sawn volcanic tuff that contains only a small amount of zeolites. These materials are used for construction, e.g. dimension stone (as an altered volcanic tuff), lightweight aggregate , pozzolanic cement , and soil conditioners . [ 20 ]
Over 200 synthetic zeolites have been reported. [ 21 ] Most zeolites have aluminosilicate frameworks but some incorporate germanium, iron, gallium, boron, zinc, tin, and titanium. [ 22 ] Zeolite synthesis involves sol-gel -like processes. The product properties depend on reaction mixture composition, pH of the system, operating temperature , pre-reaction 'seeding' time, reaction time as well as the templates used. In the sol-gel process, other elements (metals, metal oxides) can be easily incorporated.
Zeolites are widely used as catalysts and sorbents . [ 23 ] [ 24 ] In chemistry, zeolites are used as membranes to separate molecules (only molecules of certain sizes and shapes can pass through), and as traps for molecules so they can be analyzed.
Research into and development of the many biochemical and biomedical applications of zeolites, particularly the naturally occurring species heulandite , clinoptilolite , and chabazite has been ongoing. [ 25 ]
Zeolites are widely used as ion-exchange beds in domestic and commercial water purification , softening , and other applications.
Evidence for the oldest known zeolite water purification filtration system occurs in the undisturbed sediments of the Corriental reservoir at the Maya city of Tikal , in northern Guatemala. [ 26 ]
Earlier, polyphosphates were used to soften hard water. Polyphosphates form complexes with metal ions such as Ca 2+ and Mg 2+ , binding them so that they cannot interfere with the cleaning process. However, when this phosphate-rich water enters mainstream waterways it causes eutrophication of water bodies, and so the use of polyphosphates was replaced by the use of synthetic zeolites.
The largest single use for zeolite is the global laundry detergent market. Zeolites are used in laundry detergent as water softeners, removing Ca 2+ and Mg 2+ ions which would otherwise precipitate from the solution. The ions are retained by the zeolites, which release Na + ions into the solution, allowing the laundry detergent to be effective in areas with hard water. [ 27 ]
Synthetic zeolites, like other mesoporous materials (e.g., MCM-41 ), are widely used as catalysts in the petrochemical industry , such as in fluid catalytic cracking and hydrocracking . Zeolites confine molecules into small spaces, which causes changes in their structure and reactivity. The acidic forms of zeolites are often powerful solid acids , facilitating a host of acid-catalyzed reactions, such as isomerization , alkylation , and cracking.
Catalytic cracking uses a reactor and a regenerator. Feed is injected onto a hot, fluidized catalyst, where large gasoil molecules are broken into smaller gasoline molecules and olefins . The vapor-phase products are separated from the catalyst and distilled into various products. The catalyst is circulated to a regenerator, where air is used to burn off the coke that formed on the catalyst surface as a byproduct of the cracking process. The hot, regenerated catalyst is then circulated back to the reactor to complete its cycle.
Zeolites containing cobalt nanoparticles have applications in the recycling industry as a catalyst to break down polyethylene and polypropylene , two widely used plastics, into propane . [ 28 ]
Zeolites have been used in advanced nuclear reprocessing methods, where their micro-porous ability to capture some ions while allowing others to pass freely allows many fission products to be efficiently removed from the waste and permanently trapped. Equally important are the mineral properties of zeolites. Their alumino-silicate construction is extremely durable and resistant to radiation, even in porous form. Additionally, once they are loaded with trapped fission products, the zeolite-waste combination can be hot-pressed into an extremely durable ceramic form, closing the pores and trapping the waste in a solid stone block. This waste form greatly reduces the hazard compared to conventional reprocessing systems. Zeolites are also used in the management of leaks of radioactive materials. For example, in the aftermath of the Fukushima Daiichi nuclear disaster , sandbags of zeolite were dropped into the seawater near the power plant to adsorb the radioactive cesium-137 that was present in high levels. [ 29 ]
Zeolites have the potential of providing precise and specific separation of gases, including the removal of H 2 O, CO 2 , and SO 2 from low-grade natural gas streams. Other separations include noble gases , N 2 , O 2 , freon , and formaldehyde .
On-board oxygen generating systems (OBOGS) and oxygen concentrators use zeolites in conjunction with pressure swing adsorption to remove nitrogen from compressed air to supply oxygen for aircrews at high altitudes, as well as home and portable oxygen supplies. [ 30 ]
Zeolite-based oxygen concentrator systems are widely used to produce medical-grade oxygen. The zeolite is used as a molecular sieve to create purified oxygen from air using its ability to trap impurities, in a process involving the adsorption of nitrogen, leaving highly purified oxygen and up to 5% argon.
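The "up to 5% argon" figure follows from the composition of dry air: a nitrogen-adsorbing bed leaves oxygen and argon behind together. The sketch below uses standard approximate air fractions and idealizes the pressure-swing step as removing all nitrogen while ignoring trace gases (both simplifying assumptions, not claims from the source):

```python
# Why zeolite oxygen concentrators top out near 95% O2: the bed adsorbs
# nitrogen, but argon passes through with the oxygen.
# Approximate dry-air composition by volume (trace gases ignored).
air = {"N2": 0.7809, "O2": 0.2095, "Ar": 0.0093}

# Idealized PSA step: the zeolite bed retains all N2;
# everything else exits in the product stream.
product = {gas: frac for gas, frac in air.items() if gas != "N2"}
total = sum(product.values())
purity = {gas: frac / total for gas, frac in product.items()}

print(f"O2: {purity['O2']:.1%}, Ar: {purity['Ar']:.1%}")
```

Even with perfect nitrogen removal, the product is roughly 96% oxygen and 4% argon, which is why zeolite concentrators cannot deliver 100% purity.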
The German group Fraunhofer e.V. announced that they had developed a zeolite substance for use in the biogas industry for long-term storage of energy at a density four times greater than water. [ 31 ] [ non-primary source needed ] [ 32 ] [ 33 ] Ultimately, the goal is to store heat both in industrial installations and in small combined heat and power plants such as those used in larger residential buildings.
Debbie Meyer Green Bags , a produce storage and preservation product, uses a form of zeolite as its active ingredient. The bags are lined with zeolite to adsorb ethylene , which is intended to slow the ripening process and extend the shelf life of produce stored in the bags.
Clinoptilolite has also been added to chicken food: the absorption of water and ammonia by the zeolite made the birds' droppings drier and less odoriferous, hence easier to handle. [ 34 ]
Zeolites are also used as a molecular sieve in cryosorption style vacuum pumps . [ 35 ]
Zeolites can be used to thermochemically store solar heat harvested from solar thermal collectors as first demonstrated by Guerra in 1978 [ 36 ] and for adsorption refrigeration , as first demonstrated by Tchernev in 1974. [ 37 ] In these applications, their high heat of adsorption and ability to hydrate and dehydrate while maintaining structural stability is exploited. This hygroscopic property coupled with an inherent exothermic (energy releasing) reaction when transitioning from a dehydrated form to a hydrated form make natural zeolites useful in harvesting waste heat and solar heat energy. [ non-primary source needed ]
Synthetic zeolites are used as an additive in the production process of warm mix asphalt concrete . The development of this application started in Germany in the 1990s. They help by decreasing the temperature level during manufacture and laying of asphalt concrete, resulting in lower consumption of fossil fuels, thus releasing less carbon dioxide , aerosols, and vapors. The use of synthetic zeolites in hot mixed asphalt leads to easier compaction and, to a certain degree, allows cold weather paving and longer hauls.
When added to Portland cement as a pozzolan , they can reduce chloride permeability and improve workability. They reduce weight and help moderate water content while allowing for slower drying, which improves break strength. [ 38 ] When added to lime mortars and lime-metakaolin mortars, synthetic zeolite pellets can act simultaneously as a pozzolanic material and a water reservoir. [ 39 ] [ 40 ]
Non-clumping cat litter is often made of zeolite (or diatomite ), one form of which, invented at MIT , can sequester the greenhouse gas methane from the atmosphere. [ 41 ]
The original formulation of QuikClot brand hemostatic agent , which is used to stop severe bleeding, [ 42 ] contained zeolite granules. When in contact with blood, the granules would rapidly absorb water from the blood plasma, creating an exothermic reaction which generated heat. The absorption of water would also concentrate clotting factors present within the blood, causing the clot formation process to occur much faster than under normal circumstances, as shown in vitro . [ 43 ]
The 2022 formulation of QuikClot uses a nonwoven material impregnated with kaolin , an inorganic mineral activating Factor XII , in turn accelerating natural clotting. [ 44 ] Unlike the original zeolite formulation, kaolin does not exhibit any thermogenic properties.
In agriculture, clinoptilolite (a naturally occurring zeolite) is used as a soil treatment. It provides a source of slowly released potassium . If previously loaded with ammonium , the zeolite can serve a similar function in the slow release of nitrogen .
Zeolites can also act as water moderators, in which they will absorb up to 55% of their weight in water and slowly release it under the plant's demand. This property can prevent root rot and moderate drought cycles.
Pet stores market zeolites for use as filter additives in aquaria , [ 20 ] where they can be used to adsorb ammonia and other nitrogenous compounds. Due to the high affinity of some zeolites for calcium, they may be less effective in hard water and may deplete calcium. Zeolite filtration is also used in some marine aquaria to keep nutrient concentrations low for the benefit of corals adapted to nutrient-depleted waters.
Where and how the zeolite was formed is an important consideration for aquarium applications. Most Northern-hemisphere natural zeolites were formed when molten lava came into contact with sea water, thereby "loading" the zeolite with sacrificial Na (sodium) ions. The mechanism is well known to chemists as ion exchange . These sodium ions can be replaced by other ions in solution, allowing the uptake of nitrogen from ammonia in exchange for the released sodium. A deposit near Bear River in southern Idaho is a fresh water variety (Na < 0.05%). [ 45 ] Southern hemisphere zeolites are typically formed in freshwater and have a high calcium content. [ 46 ]
Zeolites have some veterinary applications, with clinoptilolite approved in the EU as an additive for cattle feed. [ 47 ] It acts primarily as a detoxifying agent in the gut, where it can adsorb undesirable species via ion exchange before being excreted. For instance, nitrate fertilisers are water soluble, and prolonged exposure in dairy cattle is known to impair protein metabolism and glucose utilization. Clinoptilolite adsorbs nitrate ions with good selectivity, allowing it to reduce these ill effects. [ 48 ]
Zeolites have been studied for human medical applications, [ 49 ] particularly for bowel conditions. [ 50 ] [ 51 ] There are no approved medical uses for zeolites as of 2024. Regardless, they are widely marketed as dietary supplements .
The zeolite structural group ( Nickel-Strunz classification ) includes: [ 3 ] [ 15 ] [ 52 ] [ 53 ] [ 54 ]
Computer calculations have predicted that millions of hypothetical zeolite structures are possible. However, only 232 of these structures have been discovered and synthesized so far, so many zeolite scientists question why only this small fraction of possibilities is observed. This problem is often referred to as "the bottleneck problem". [ citation needed ] Several theories attempt to explain this discrepancy.
|
https://en.wikipedia.org/wiki/Zeolite
|
Zeolite facies describes the mineral assemblage resulting from the pressure and temperature conditions of low-grade metamorphism .
The zeolite facies is generally considered to be transitional between diagenetic processes which turn sediments into sedimentary rocks , and prehnite-pumpellyite facies , which is a hallmark of subseafloor alteration of the oceanic crust around mid-ocean ridge spreading centres. The zeolite and prehnite-pumpellyite facies are considered burial metamorphism as the processes of orogenic regional metamorphism are not required.
Zeolite facies is most often experienced by pelitic sediments; rocks rich in aluminium, silica, potassium and sodium, but generally low in iron, magnesium and calcium. Zeolite facies metamorphism usually results in the conversion of low-temperature clay minerals into higher-temperature polymorphs such as kaolinite and vermiculite .
Mineral assemblages include kaolinite and montmorillonite with laumontite , wairakite , prehnite , calcite and chlorite . Phengite and adularia occur in potassium rich rocks. Minerals in this series include zeolites , albite , and quartz .
This occurs by dehydration of the clays during compaction, and heating due to blanketing of the sediments by continued deposition of sediments above. Zeolite facies is considered to start with temperatures of approximately 50–150 °C, and some burial is required, usually 1–5 km.
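The stated burial depths and temperatures are consistent with an ordinary linear geothermal gradient. The 25 °C/km gradient and 15 °C surface temperature below are typical assumed values for illustration, not figures from the source:

```python
# Rough consistency check: does 1-5 km of burial reach zeolite-facies
# temperatures (~50-150 degrees C)? Assumes a linear geothermal gradient.
SURFACE_T = 15.0   # surface temperature, degrees C (assumed)
GRADIENT = 25.0    # geothermal gradient, degrees C per km (assumed)

def burial_temperature(depth_km: float) -> float:
    """Temperature at a given burial depth under a linear gradient."""
    return SURFACE_T + GRADIENT * depth_km

for depth in (1, 3, 5):
    print(f"{depth} km -> {burial_temperature(depth):.0f} C")
```

Depths of 1–5 km give roughly 40–140 °C under these assumptions, spanning most of the zeolite-facies window quoted above.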
Zeolite facies tends to correlate in clay-rich sediments with the onset of a bedding plane foliation , parallel with the bedding of the rocks, caused by alignment of platy clay minerals in a horizontal orientation which reduces their free energy state.
Generally plutonic and volcanic rocks are not greatly affected by zeolite facies metamorphism, although vesicular basalts and the like will have their vesicles filled with zeolite minerals, forming amygdaloidal texture. Tuff can also become zeolitized, as is seen in the Obispo formation on the California coast.
This article about materials science is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Zeolite_facies
|
A zeolite membrane is a synthetic membrane made of crystalline aluminosilicate materials, typically aluminum , silicon , and oxygen with positive counterions such as Na + and Ca 2+ within the structure. Zeolite membranes serve as a low-energy separation method. They have recently drawn interest due to their high chemical and thermal stability [ 1 ] and their high selectivity. Zeolites have seen applications in gas separation , membrane reactors , water desalination , and solid-state batteries . [ 2 ] However, zeolite membranes have yet to be widely implemented commercially due to key issues including low flux, high production cost, and defects in the crystal structure.
There are several methods used for the formation of zeolite membranes.
In the in situ method, zeolite membranes are formed on microporous supports of various materials, typically aluminum oxide or stainless steel . These supports are immersed in a solution of aluminum and silicon at a specific stoichiometric ratio. Other properties of this solution can affect the formation of the zeolite membrane, including pH, ionic strength , temperature, and the addition of structure-determining reagents . Upon heating the solution, the crystals of the membrane begin to grow on the supports.
In 2012, a "seeding method" was developed to produce zeolite membranes. In this approach, the support is seeded with preformed zeolite crystals before being immersed in the solution. Growing the membrane from these existing crystals allows for the formation of thinner membranes that typically contain fewer defects. [ 3 ]
Zeolite membranes drew initial interest as a separation method due to their high thermal and chemical stability. The crystal structure of zeolite membranes also creates a uniform pore size of approximately 0.3–1.3 nm in diameter. However, membrane crystallization often introduces defects, which can create gaps in the structure larger than these pores. The presence of defects can make these membranes far less effective, and it is difficult to produce defect-free zeolite membranes. [ 4 ]
There are several mechanisms of transport that govern the separation of molecules by zeolite membranes. The main mechanisms for separation by zeolite membranes are molecular sieving, diffusion, and adsorption. Molecular sieving involves the rejection of any molecules of a size greater than the pore size of the membrane. This is a relatively simple sieving process which can separate out very large molecules. Adsorption involves molecules passing through the pores of the membrane being adsorbed onto the membrane surface. Adsorption properties of the membranes can be changed by adjusting various structural properties of the membrane. [ 5 ]
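The molecular-sieving mechanism amounts to a size cutoff against the pore opening. The kinetic diameters below are commonly cited approximate values, and the 0.4 nm aperture is the nominal figure for zeolite 4A; both are illustrative assumptions rather than data from this article:

```python
# Illustrative molecular sieving: a molecule passes only if its kinetic
# diameter is smaller than the zeolite pore opening.
# Approximate kinetic diameters in nanometres (commonly cited values).
KINETIC_DIAMETER_NM = {
    "H2O": 0.265,
    "CO2": 0.33,
    "N2": 0.364,
    "CH4": 0.38,
    "i-C4H10": 0.50,  # isobutane
}

def sieve(pore_nm: float, molecules: dict) -> list:
    """Return the molecules small enough to enter pores of the given size."""
    return [name for name, d in molecules.items() if d < pore_nm]

# A nominal ~0.4 nm aperture admits the small molecules but
# excludes the bulkier isobutane.
print(sieve(0.40, KINETIC_DIAMETER_NM))
```

Real separations are sharper or blurrier than this hard cutoff suggests, since molecules near the pore size diffuse slowly rather than being rejected outright.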
Surface diffusion is a process in which molecules adsorb to the pore wall of the membrane, and are slowly transported through the pores. During surface diffusion, molecules that are adsorbed at a higher rate can begin to block the membrane pores from other, less adsorbed, molecules. Surface diffusion can account for the high selectivity of certain molecules such as hydrogen by zeolite membranes. [ 6 ] Surface diffusion typically plays a larger role in the transport of molecules at lower temperatures.
Knudsen diffusion also contributes to the varying selectivity of zeolite membranes towards different molecules. Knudsen diffusion takes place when molecules are momentarily adsorbed to the pore wall and are then reflected off the surface in a random direction. This random motion allows for separation of molecules based on their velocities. Graham's law for diffusion dictates that lighter molecules will have a higher average velocity than heavier molecules, thus resulting in an increased flux with respect to lighter molecules. These differences in flux can be used to separate different molecules using zeolite membranes. [ 3 ]
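The Graham's-law scaling described above gives a one-line ideal selectivity estimate: flux goes as the inverse square root of molar mass. The molar masses below are standard values; real membranes deviate from this ideal Knudsen limit:

```python
# Ideal Knudsen-regime selectivity from Graham's law: flux scales with
# mean molecular speed, i.e. inversely with sqrt(molar mass).
from math import sqrt

def knudsen_selectivity(m_light: float, m_heavy: float) -> float:
    """Ideal flux ratio (light over heavy) from molar masses in g/mol."""
    return sqrt(m_heavy / m_light)

# H2 (2.016 g/mol) vs CO2 (44.01 g/mol): hydrogen permeates roughly
# 4.7x faster in this ideal limit.
print(f"{knudsen_selectivity(2.016, 44.01):.2f}")
```

This ratio is an upper bound on what Knudsen diffusion alone can deliver; adsorption and surface diffusion can raise or lower the observed selectivity substantially.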
Zeolite membranes have seen the most promise in regards to gas separation applications. The ability of zeolite membranes to adsorb certain molecules to its surface under varying conditions allows for researchers to perform highly selective separations. Adsorbed molecules block diffusion pores, and prevent the diffusion of other molecules through these pores. Zeolites typically adsorb carbon dioxide at the highest rate, lending themselves to use in carbon dioxide capture and separation. Diffusion selectivity governs the separation of molecules in zeolite membranes at higher temperatures. Diffusion selectivity allows for the quicker diffusion of smaller molecules through the membrane and slower diffusion of large molecules through the membrane's pores. [ 6 ]
The natural gas industry has seen the introduction of zeolite membranes for the separation of methane, carbon dioxide, and hydrogen gasses. Zeolites provide the advantage of thermal stability and higher selectivity when compared to polymer membranes that have typically been used for these purposes. [ 7 ] There still needs to be improvement in the production of zeolite membranes, particularly regarding the cost, before they see widespread use.
Zeolite membranes have also been used in membrane reactors , since their chemical and thermal stabilities allow them to withstand reaction conditions. Membrane reactors function by removing the product of a reaction as the reaction occurs. This removal shifts the equilibrium of the reaction to allow for the formation of more products, as outlined by Le Chatelier's principle creating a more efficient reaction process. The high selectivity of zeolite membranes allows for them to be used to remove products from a reactor at high rates. [ 8 ]
Zeolite membranes have recently been studied as an alternative for energy efficient water desalination . Currently water desalination is primarily done by Reverse Osmosis filtration which uses a dense polymeric membrane to purify the water. Zeolite membranes have been tested as an alternative water purification method, and are able to separate water from impurities. Zeolites have not been implemented for industrial water desalination purposes primarily due to their high cost when compared to traditional reverse osmosis membranes. [ 9 ]
|
https://en.wikipedia.org/wiki/Zeolite_membrane
|
A zeotropic mixture , or non-azeotropic mixture, is a mixture with liquid components that have different boiling points . [ 1 ] For example, nitrogen, methane, ethane, propane, and isobutane constitute a zeotropic mixture. [ 2 ] The individual substances within the mixture do not evaporate or condense at a single shared temperature, as a pure substance would. [ 3 ] In other words, the mixture has a temperature glide, as the phase change occurs over a temperature range of about four to seven degrees Celsius rather than at a constant temperature. [ 3 ] On temperature-composition graphs, this temperature glide can be seen as the temperature difference between the bubble point and dew point . [ 4 ] For zeotropic mixtures, the temperatures on the bubble (boiling) curve lie between the individual components' boiling temperatures. [ 5 ] When a zeotropic mixture is boiled or condensed, the composition of the liquid and the vapor changes according to the mixture's temperature-composition diagram. [ 5 ]
Zeotropic mixtures have different characteristics in nucleate and convective boiling, as well as in the organic Rankine cycle . Because zeotropic mixtures have different properties than pure fluids or azeotropic mixtures , zeotropic mixtures have many unique applications in industry, namely in distillation, refrigeration, and cleaning processes.
In mixtures of substances, the bubble point is the saturated liquid temperature, whereas the saturated vapor temperature is called the dew point. Because the bubble and dew lines of a zeotropic mixture's temperature-composition diagram do not intersect, a zeotropic mixture in its liquid phase has a different fraction of a component than the gas phase of the mixture. [ 4 ] On a temperature-composition diagram, after a mixture in its liquid phase is heated to the temperature at the bubble (boiling) curve, the fraction of a component in the mixture changes along an isothermal line connecting the dew curve to the boiling curve as the mixture boils. [ 4 ] At any given temperature, the composition of the liquid is the composition at the bubble point, whereas the composition of the vapor is the composition at the dew point. [ 5 ] Unlike azeotropic mixtures, there is no azeotropic point at any temperature on the diagram where the bubble line and dew lines would intersect. [ 4 ] Thus, the composition of the mixture will always change between the bubble and dew point component fractions upon boiling from a liquid to a gas until the mass fraction of a component reaches 1 (i.e. the zeotropic mixture is completely separated into its pure components). As shown in Figure 1 , the mole fraction of component 1 decreases from 0.4 to around 0.15 as the liquid mixture boils to the gas phase.
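The liquid-vapor composition difference described above can be sketched with Raoult's and Dalton's laws for an ideal binary mixture. The vapor pressures below are placeholder values chosen for illustration, not data for any particular refrigerant pair:

```python
# Ideal (Raoult's-law) sketch of why a zeotropic binary shows a glide:
# at a given temperature, the equilibrium liquid and vapor compositions
# differ, so the mixture composition shifts as it boils.
def bubble_compositions(p_total: float, p1_sat: float, p2_sat: float):
    """Liquid (x1) and vapor (y1) mole fractions of component 1 at the
    bubble point of an ideal binary mixture at total pressure p_total."""
    x1 = (p_total - p2_sat) / (p1_sat - p2_sat)  # Raoult: x1*P1 + x2*P2 = P
    y1 = x1 * p1_sat / p_total                   # Dalton: y1 = partial/total
    return x1, y1

# Component 1 is the more volatile species (higher vapor pressure).
# Pressures in kPa are placeholder values.
x1, y1 = bubble_compositions(p_total=101.3, p1_sat=180.0, p2_sat=60.0)
print(f"liquid x1 = {x1:.3f}, vapor y1 = {y1:.3f}")  # vapor richer in 1
```

Since y1 exceeds x1, the first vapor formed is enriched in the more volatile component, depleting it from the liquid; this shifting composition is exactly the origin of the temperature glide.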
Different zeotropic mixtures have different temperature glides. For example, zeotropic mixture R152a/R245fa has a higher temperature glide than R21/R245fa. [ 7 ] A larger gap between the boiling points creates a larger temperature glide between the boiling curve and dew curve at a given mass fraction. [ 4 ] However, with any zeotropic mixture, the temperature glide decreases when the mass fraction of a component approaches 1 or 0 (i.e. when the mixture is almost separated into its pure components) because the boiling and dew curves get closer near these mass fractions. [ 4 ]
A larger difference in boiling points between the substances also affects the dew and bubble curves of the graph. [ 4 ] A larger difference in boiling points creates a larger shift in mass fractions when the mixture boils at a given temperature. [ 4 ]
Azeotropic and zeotropic mixtures have different dew- and bubble-curve characteristics in a temperature-composition graph. [ 4 ] Namely, azeotropic mixtures have dew and bubble curves that intersect, but zeotropic mixtures do not. [ 4 ] In other words, zeotropic mixtures have no azeotropic points. [ 4 ] An azeotropic mixture near its azeotropic point has negligible zeotropic behavior and is considered near-azeotropic rather than zeotropic. [ 5 ]
Zeotropic mixtures differ from azeotropic mixtures in that the vapor and liquid phases of an azeotropic mixture have the same fraction of constituents. [ 9 ] This is due to the constant boiling point of the azeotropic mixture. [ 9 ]
When superheating a substance, nucleate pool boiling and convective flow boiling occur when the temperature of the surface used to heat a liquid is higher than the liquid's boiling point by the wall superheat. [ 10 ]
The characteristics of pool boiling differ between zeotropic mixtures and pure liquids. [ 11 ] For example, the minimum superheating needed to achieve this boiling is greater for zeotropic mixtures than for pure liquids because of the different proportions of individual substances in the liquid versus gas phases of the zeotropic mixture. [ 11 ] Zeotropic mixtures and pure liquids also have different critical heat fluxes. [ 11 ] In addition, the heat transfer coefficients of zeotropic mixtures are less than the ideal values predicted using the coefficients of pure liquids. [ 11 ] This decrease in heat transfer is due to the fact that the heat transfer coefficients of zeotropic mixtures do not increase proportionately with the mass fractions of the mixture's components. [ 11 ]
Zeotropic mixtures have different characteristics in convective boiling than pure substances or azeotropic mixtures. [ 11 ] Overall, zeotropic mixtures transfer heat more efficiently at the bottom of the fluid, whereas pure and azeotropic substances transfer heat better at the top. [ 11 ] During convective flow boiling, the thickness of the liquid film is less at the top of the film than at the bottom because of gravity. [ 11 ] In the case of pure liquids and azeotropic mixtures, this decrease in thickness causes a decrease in the resistance to heat transfer. [ 11 ] Thus, more heat is transferred and the heat transfer coefficient is higher at the top of the film. [ 11 ] The opposite occurs for zeotropic mixtures. [ 11 ] The decrease in film thickness near the top causes the component in the mixture with the higher boiling point to decrease in mass fraction. [ 11 ] Thus, the resistance to mass transfer increases near the top of the liquid. [ 11 ] Less heat is transferred, and the heat transfer coefficient is lower than at the bottom of the liquid film. [ 11 ] Because the bottom of the liquid transfers heat better, it requires a lower wall temperature near the bottom than at the top to boil the zeotropic mixture. [ 11 ]
From low cryogenic to room temperatures, the heat transfer coefficients of zeotropic mixtures are sensitive to the mixture's composition, the diameter of the boiling tube, heat and mass fluxes, and the roughness of the surface. [ 2 ] In addition, diluting the zeotropic mixture reduces the heat transfer coefficient. [ 2 ] Decreasing the pressure when boiling the mixture only increases the coefficient slightly. [ 2 ] Using grooved rather than smooth boiling tubes increases the heat transfer coefficient. [ 12 ]
The ideal case of distillation uses zeotropic mixtures. [ 14 ] Zeotropic fluid and gaseous mixtures can be separated by distillation due to the difference in boiling points between the component mixtures. [ 14 ] [ 15 ] This process involves the use of vertically-arranged distillation columns (see Figure 2 ). [ 15 ]
When separating zeotropic mixtures with three or more liquid components, each distillation column removes only the lowest-boiling-point component and the highest-boiling-point component. [ 15 ] In other words, each column separates two components purely. [ 14 ] If three substances are separated with a single column, the substance with the intermediate boiling point will not be purely separated, [ 14 ] and a second column would be needed. [ 14 ] To separate mixtures consisting of multiple substances, a sequence of distillation columns must be used. [ 15 ] This multi-step distillation process is also called rectification. [ 15 ]
In each distillation column, pure components form at the top (rectifying section) and bottom (stripping section) of the column when the starting liquid (called feed composition) is released in the middle of the column. [ 15 ] This is shown in Figure 2 . At a certain temperature, the component with the lowest boiling point (called distillate or overhead fraction) vaporizes and collects at the top of the column, whereas the component with the highest boiling point (called bottoms or bottom fraction) collects at the bottom of the column. [ 15 ] In a zeotropic mixture, where more than one component exists, individual components move relative to each other as vapor flows up and liquid falls down. [ 15 ]
The separation of mixtures can be seen in a concentration profile. In a concentration profile, the position of a vapor in the distillation column is plotted against the concentration of the vapor. [ 15 ] The component with the highest boiling point has a max concentration at the bottom of the column, where the component with the lowest boiling point has a max concentration at the top of the column. [ 15 ] The component with the intermediate boiling point has a max concentration in the middle of the distillation column. [ 15 ] Because of how these mixtures separate, mixtures with greater than three substances require more than one distillation column to separate the components. [ 15 ]
Many configurations can be used to separate mixtures into the same products, though some schemes are more efficient, and different column sequencings are used to achieve different needs. [ 14 ] For example, a zeotropic mixture ABC can be first separated into A and BC before separating BC into B and C. [ 14 ] On the other hand, mixture ABC can be first separated into AB and C, and AB can then be separated into A and B. [ 14 ] These two configurations are sharp-split configurations in which the intermediate-boiling substance does not contaminate each separation step. [ 14 ] On the other hand, the mixture ABC could first be separated into AB and BC, and finally split into A, B, and C in the same column. [ 14 ] This is a non-sharp split configuration in which the substance with the intermediate boiling point is present in different mixtures after a separation step. [ 14 ]
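A standard combinatorial result in distillation design, not stated in the source but relevant to the configurations just described, is that the number of distinct sharp-split sequences for an N-component mixture is a Catalan number. The sketch below computes it:

```python
# Number of distinct sharp-split distillation sequences for an
# N-component mixture: the Catalan number C(N-1),
# i.e. (2(N-1))! / (N! * (N-1)!).
from math import comb

def sharp_split_sequences(n_components: int) -> int:
    """Count of distinct sharp-split column sequences (Catalan number)."""
    n = n_components - 1
    return comb(2 * n, n) // (n + 1)

for n in range(2, 7):
    print(n, "components ->", sharp_split_sequences(n), "sequences")
```

Two components permit only one sequence, three permit two (the A/BC and AB/C splits mentioned above), and the count grows rapidly (5, 14, 42, ...), which is why sequencing choices matter so much for larger separations.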
When designing distillation processes for separating zeotropic mixtures, the sequencing of distillation columns is vital to saving energy and costs. [ 16 ] In addition, other methods can be used to lower the energy or equipment costs required to distill zeotropic mixtures. [ 16 ] This includes combining distillation columns, using side columns, combining main columns with side columns, and re-using waste heat for the system. [ 16 ] After combining distillation columns, the amount of energy used is only that of one separated column rather than both columns combined. [ 16 ] In addition, using side columns saves energy by preventing different columns from carrying out the same separation of mixtures. [ 16 ] Combining main and side columns saves equipment costs by reducing the number of heat exchangers in the system. [ 16 ] Re-using waste heat requires the amount of heat and temperature levels of the waste to match that of the heat needed. [ 16 ] Thus, using waste heat requires changing the pressure inside evaporators and condensers of the distillation system in order to control the temperatures needed. [ 16 ] Controlling the temperature levels in a part of a system is possible with Pinch Technology . [ 17 ] These energy-saving techniques have a wide application in industrial distillation of zeotropic mixtures: side columns have been used to refine crude oil , and combining main and side columns is increasingly used. [ 16 ]
Examples of distillation for zeotropic mixtures can be found in industry. Refining crude oil is an example of multi-component distillation in industry that has been used for more than 75 years. [ 14 ] Crude oil is separated into five components with main and side columns in a sharp split configuration. [ 14 ] In addition, ethylene is separated from methane and ethane for industrial purposes using multi-component distillation. [ 14 ]
Separating aromatic substances requires extractive distillation, for example, distilling a zeotropic mixture of benzene, toluene, and p-xylene. [ 14 ]
Zeotropic mixtures used in refrigeration are assigned a number in the 400 series to identify their components and proportions as part of the nomenclature, whereas azeotropic mixtures are assigned a number in the 500 series. According to ASHRAE , refrigerant names start with 'R' followed by a number (the 400 series if the mixture is zeotropic, the 500 series if it is azeotropic), followed by uppercase letters that denote the composition. [ 18 ]
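The numbering convention above can be sketched in a few lines of Python. This is our own illustration of the rule as stated (hundreds digit 4 means zeotropic blend, 5 means azeotropic blend); the helper name is hypothetical, and only the uppercase composition suffixes described here are handled:

```python
# Sketch of the ASHRAE-style numbering rule described above: the hundreds
# digit of an R-number distinguishes zeotropic (400 series) from azeotropic
# (500 series) blends. Illustrative only, not an official ASHRAE API.
import re

def classify_refrigerant(name: str) -> str:
    m = re.fullmatch(r"R-?(\d+)([A-Z]*)", name)
    if not m:
        raise ValueError(f"not an R-number: {name}")
    number = int(m.group(1))
    if 400 <= number < 500:
        return "zeotropic blend"
    if 500 <= number < 600:
        return "azeotropic blend"
    return "other designation"

print(classify_refrigerant("R-404A"))  # zeotropic blend
print(classify_refrigerant("R-507A"))  # azeotropic blend
```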
Research has proposed using zeotropic mixtures as substitutes for halogenated refrigerants due to the harmful effects that hydrochlorofluorocarbons (HCFC) and chlorofluorocarbons (CFC) have on the ozone layer and global warming . [ 3 ] Researchers have focused on new mixtures that have the same properties as past refrigerants in order to phase out harmful halogenated substances, in accordance with the Montreal Protocol and Kyoto Protocol . [ 3 ] For example, researchers found that the zeotropic mixture R-404A can replace R-12, a CFC, in household refrigerators. [ 19 ] However, there are some technical difficulties in using zeotropic mixtures. [ 3 ] These include leakages, as well as the high temperature glide associated with substances of different boiling points, [ 3 ] though the temperature glide can be matched to the temperature difference between the two refrigerants when exchanging heat to increase efficiency. [ 5 ] Replacing pure refrigerants with mixtures calls for more research on the environmental impact as well as the flammability and safety of refrigerant mixtures. [ 3 ]
In the Organic Rankine Cycle (ORC), zeotropic mixtures are more thermally efficient than pure fluids. [ 20 ] [ 21 ] Due to their higher boiling points, zeotropic working fluids have higher net outputs of energy at the low temperatures of the Rankine Cycle than pure substances. [ 7 ] [ 21 ] Zeotropic working fluids condense across a range of temperatures, allowing external heat exchangers to recover the heat of condensation as a heat source for the Rankine Cycle. [ 20 ] The changing temperature of the zeotropic working fluid can be matched to that of the fluid being heated or cooled to save waste heat because the mixture's evaporation process occurs at a temperature glide [ 20 ] [ 21 ] (see Pinch Analysis ).
R21/R245fa and R152a/R245fa are two examples of zeotropic working fluids that can absorb more heat than pure R245fa due to their increased boiling points. [ 7 ] The power output increases with the proportion of R152a in R152a/R245fa. [ 20 ] R21/R245fa uses less heat and energy than R245fa. [ 7 ] Overall, zeotropic mixture R21/R245fa has better thermodynamic properties than pure R245fa and R152a/R245fa as a working fluid in the ORC. [ 7 ]
Zeotropic mixtures can be used as solvents in cleaning processes in manufacturing. [ 22 ] Cleaning processes that use zeotropic mixtures include cosolvent processes and bisolvent processes. [ 22 ]
In a cosolvent system, two miscible fluids with different boiling points are mixed to create a zeotropic mixture. [ 22 ] [ 23 ] The first fluid is a solvating agent that dissolves soil in the cleaning process. [ 22 ] [ 23 ] This fluid is an organic solvent with a low boiling point and a flash point greater than the system's operating temperature. [ 22 ] [ 23 ] After the solvent mixes with the oil, the second fluid, a hydrofluoroether (HFE) rinsing agent, rinses off the solvating agent. [ 22 ] [ 23 ] The solvating agent may itself be flammable, since its mixture with the HFE is nonflammable. [ 23 ] In bisolvent cleaning processes, the rinsing agent is kept separate from the solvating agent. [ 22 ] This makes the solvating and rinsing agents more effective because they are not diluted. [ 22 ]
Cosolvent systems are used for heavy oils, waxes, greases and fingerprints, [ 22 ] [ 23 ] and can remove heavier soils than processes that use pure or azeotropic solvents. [ 23 ] Cosolvent systems are flexible in that different proportions of substances in the zeotropic mixture can be used to satisfy different cleaning purposes. [ 23 ] For example, increasing the proportion of solvating agent to rinsing agent in the mixture increases the solvency, and thus is used for removing heavier soils. [ 22 ] [ 23 ]
The operating temperature of the system depends on the boiling point of the mixture, [ 23 ] which in turn depends on the composition of these agents in the zeotropic mixture. Since zeotropic mixtures have different boiling points, the cleaning and rinse sumps have different ratios of cleaning and solvating agents. [ 23 ] The lower-boiling solvating agent is not found in the rinse sump due to the large difference in boiling points between the agents. [ 23 ]
Mixtures containing HFC-43-10mee can replace CFC-113 and perfluorocarbon (PFC) as solvents in cleaning systems because HFC-43-10mee does not harm the ozone layer, unlike CFC-113 and PFC. [ 23 ] Various mixtures of HFC-43-10mee are commercially available for a variety of cleaning purposes. [ 23 ] Examples of zeotropic solvents in cleaning processes include:
|
https://en.wikipedia.org/wiki/Zeotropic_mixture
|
The Zerewitinoff determination or Zerevitinov determination is a quantitative chemical test for the determination of active hydrogens in a chemical substance, [ 1 ] developed by F. V. Tserevitinov (jointly with L. A. Chugaev ) in 1902–1907. A sample is treated with the Grignard reagent methylmagnesium iodide , which reacts with any acidic hydrogen atom to form methane . This gas can be determined quantitatively by measuring its volume. For example:
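The quantitation step can be sketched numerically: each active hydrogen liberates one molecule of methane, so the number of active hydrogens per molecule of sample is the ratio of moles of methane to moles of sample, with gas moles obtained from the ideal gas law. The helper names and the sample figures below are illustrative, not measured data:

```python
# Minimal sketch of the Zerewitinoff quantitation: n(CH4) from the ideal
# gas law, then active hydrogens per molecule = n(CH4) / n(sample).
R = 8.314  # gas constant, J/(mol*K)

def moles_of_gas(pressure_pa: float, volume_m3: float, temp_k: float) -> float:
    """Ideal gas law: n = PV / RT."""
    return pressure_pa * volume_m3 / (R * temp_k)

def active_hydrogens(n_ch4: float, n_sample: float) -> float:
    """Each active H gives one CH4 molecule."""
    return n_ch4 / n_sample

# e.g. 24.5 mL of CH4 collected at 101325 Pa and 298 K from 0.5 mmol of sample:
n_ch4 = moles_of_gas(101325, 24.5e-6, 298)      # ~1.0e-3 mol
print(round(active_hydrogens(n_ch4, 0.5e-3)))   # 2
```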
|
https://en.wikipedia.org/wiki/Zerewitinoff_determination
|
Zermelo's categoricity theorem was proven by Ernst Zermelo in 1930. It states that all models of a certain second-order version of the Zermelo–Fraenkel axioms of set theory are isomorphic to a member of a certain class of sets.
Let Z F C 2 {\displaystyle \mathrm {ZFC} ^{2}} denote Zermelo-Fraenkel set theory, but with a second-order version of the axiom of replacement formulated as follows: [ 1 ]
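The displayed formula is a standard second-order replacement axiom: a single axiom with a second-order quantifier over all class relations, rather than a first-order schema. One usual rendering (our own transcription of the standard formulation) is:

```latex
% Second-order replacement: for every (class) relation F that is functional,
% the image of any set a under F is again a set.
\forall F\,\bigl[\forall x\,\exists! y\, F(x,y)
  \;\rightarrow\; \forall a\,\exists b\,\forall y\,
    \bigl(y \in b \leftrightarrow \exists x\,(x \in a \wedge F(x,y))\bigr)\bigr]
```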
This is the second-order universal closure of the axiom schema of replacement. [ 2 ] p. 289 Then every model of Z F C 2 {\displaystyle \mathrm {ZFC} ^{2}} is isomorphic to a set V κ {\displaystyle V_{\kappa }} in the von Neumann hierarchy , for some inaccessible cardinal κ {\displaystyle \kappa } . [ 3 ]
Zermelo originally considered a version of Z F C 2 {\displaystyle \mathrm {ZFC} ^{2}} with urelements. Rather than using the modern satisfaction relation ⊨ {\displaystyle \vDash } , he defines a "normal domain" to be a collection of sets along with the true ∈ {\displaystyle \in } relation that satisfies Z F C 2 {\displaystyle \mathrm {ZFC} ^{2}} . [ 4 ] p. 9
Dedekind proved that the second-order Peano axioms hold in a model if and only if the model is isomorphic to the true natural numbers. [ 4 ] pp. 5–6 [ 3 ] p. 1 Uzquiano proved that when removing replacement from Z F C 2 {\displaystyle {\mathsf {ZFC}}^{2}} and considering a second-order version of Zermelo set theory with a second-order version of separation, there exist models not isomorphic to any V δ {\displaystyle V_{\delta }} for a limit ordinal δ > ω {\displaystyle \delta >\omega } . [ 5 ] p. 396
|
https://en.wikipedia.org/wiki/Zermelo's_categoricity_theorem
|
In game theory , Zermelo's theorem is a theorem about finite two-person games of perfect information in which the players move alternately and in which chance does not affect the decision-making process. It says that if the game cannot end in a draw, then one of the two players must have a winning strategy (i.e. can force a win). An alternative statement is that, for a game meeting all of these conditions except that a draw is now possible, either the first player can force a win, or the second player can force a win, or both players can force at least a draw. [ 1 ] The theorem is named after Ernst Zermelo , a German mathematician and logician, who proved the theorem for the example game of chess in 1913.
Zermelo's theorem can be applied to all finite-stage two-player games with complete information and alternating moves. The game must satisfy the following criteria: there are two players in the game; the game is of perfect information; the game is finite; the two players take alternate turns; and there is no element of chance. Zermelo stated that there are many games of this type; however, his theorem has been applied mostly to the game of chess. [ 2 ] [ 3 ]
When applied to chess , Zermelo's theorem states "either White can force a win, or Black can force a win, or both sides can force at least a draw". [ 2 ] [ 3 ]
Zermelo's algorithm is a cornerstone of game theory; however, it can also be applied in areas outside of finite games.
Apart from chess, Zermelo's theorem is applied in areas of computer science . In particular, it is applied in model checking and value iteration . [ 4 ]
Zermelo's work shows that in two-person zero-sum games with perfect information, if a player is in a winning position, then that player can always force a win no matter what strategy the other player may employ. Furthermore, and as a consequence, if a player is in a winning position, forcing the win will never require more moves than there are positions in the game (with a position defined as the arrangement of the pieces together with the player next to move). [ 1 ]
In 1912, during the Fifth International Congress of Mathematicians in Cambridge, Ernst Zermelo gave two talks. The first one covered axiomatic and genetic methods in the foundation of mathematical disciplines, and the second speech was on the game of chess. The second speech prompted Zermelo to write a paper on game theory. Being an avid chess player, Zermelo was concerned with application of set theory to the game of chess. Zermelo's original paper describing the theorem, Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels , was published in German in 1913. It can be considered as the first known paper on game theory. [ 5 ] Ulrich Schwalbe and Paul Walker translated Zermelo's paper into English in 1997 and published the translation in the appendix to Zermelo and the Early History of Game Theory . [ 1 ]
Zermelo considers the class of two-person games without chance, where players have strictly opposing interests and where only a finite number of positions are possible. Although in the game only finitely many positions are possible, Zermelo allows infinite sequences of moves since he does not consider stopping rules. Thus, he allows for the possibility of infinite games. Then he addresses two problems:
To answer the first question, Zermelo states that a necessary and sufficient condition is the nonemptyness of a certain set, containing all possible sequences of moves such that a player wins independently of how the other player plays. But should this set be empty, the best a player could achieve would be a draw. So Zermelo defines another set containing all possible sequences of moves such that a player can postpone his loss for an infinite number of moves, which implies a draw. This set may also be empty, i.e., the player can avoid his loss for only finitely many moves if his opponent plays correctly. But this is equivalent to the opponent being able to force a win. This is the basis for all modern versions of Zermelo's theorem.
Regarding the second question, Zermelo claimed that it will never take more moves than there are positions in the game. His proof is a proof by contradiction : assume that a player can win in a number of moves larger than the number of positions. By the pigeonhole principle , at least one winning position must have appeared twice. So the player could have played at the first occurrence in the same way as at the second, and thus could have won in fewer moves than there are positions.
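The pigeonhole argument can be made concrete on a toy line of play: whenever a position repeats, the segment between the two occurrences can be cut out, yielding a strictly shorter line. The representation (positions as strings) and the helper name are our own illustration:

```python
# Toy illustration of Zermelo's shortening argument: if a line of play
# visits the same position twice, cut out the cycle between the two visits.
def shorten(line):
    """Remove the loop between the first repeated position, if any."""
    seen = {}
    for i, pos in enumerate(line):
        if pos in seen:
            # play at the first occurrence as at the second: skip the cycle
            return line[:seen[pos]] + line[i:]
        seen[pos] = i
    return line

line = ["a", "b", "c", "b", "d", "win"]
print(shorten(line))  # ['a', 'b', 'd', 'win']
```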
In 1927, the Hungarian mathematician Dénes Kőnig revisited Zermelo's paper and pointed out some gaps in the original work. First, Kőnig argued that Zermelo did not prove that a player, say White, who is in a winning position is always able to force a win in a number of moves smaller than the number of positions in the game. Zermelo argued that White can change its behaviour at the first occurrence of any related winning position and win without repetition. However, Kőnig maintained that this argument is not correct, as it is not enough to reduce the number of moves in a single game below the number of possible positions. Thus, Zermelo claimed, but did not show, that a winning player can always win without repetition. Kőnig's second objection is that the strategy 'do the same at the first occurrence of a position as at the second and thus win in fewer moves' cannot be followed if it is Black's turn to move in this position. However, this argument does not hold, because Zermelo considered two positions to be different depending on whether Black or White is to move. [ 5 ]
It has long been believed that Zermelo used backward induction as his method of proof. However, recent research on Zermelo's theorem demonstrates that backward induction was not used to explain the strategy behind chess. Contrary to popular belief, chess is not a finite game without at least one of the fifty-move rule or the threefold repetition rule; strictly speaking, chess is an infinite game, and therefore backward induction does not provide the minimax theorem for this game. [ 6 ]
Backward induction is a process of reasoning backward in time. It is used to analyse and solve extensive form games of perfect information. This method analyses the game starting at the end and then works backwards to the beginning. In the process, backward induction determines the best strategy for the player who made the last move. Then the optimal strategy is determined for the next-to-last moving player. The process is repeated until the best action for every point in the game has been found. Backward induction thereby determines the Nash equilibrium of every subgame of the original game. [ 4 ]
There are a number of reasons why backward induction is not present in Zermelo's original paper:
Firstly, a recent study by Schwalbe and Walker (2001) demonstrated that Zermelo's paper contained the basic idea of backward induction; however, Zermelo did not make a formal statement of the theorem. Zermelo's original method was the idea of non-repetition. The first mention of backward induction was provided by László Kalmár in 1928. Kalmár generalised the work of Zermelo and Kőnig in his paper "On the Theory of Abstract Games". Kalmár was concerned with the question: "Given a winning position, how quickly can a win be forced?". His paper showed that winning without repetition is possible, given that a player is in a winning position. Kalmár's proof of non-repetition was a proof by backward induction. In his paper, Kalmár introduced the concepts of subgame and tactic. His central argument was that a position can be a winning position only if a player can win in a finite number of moves. Also, a winning position for player A is always a losing position for player B. [ 7 ]
|
https://en.wikipedia.org/wiki/Zermelo's_theorem_(game_theory)
|
In set theory , Zermelo–Fraenkel set theory , named after mathematicians Ernst Zermelo and Abraham Fraenkel , is an axiomatic system that was proposed in the early twentieth century in order to formulate a theory of sets free of paradoxes such as Russell's paradox . Today, Zermelo–Fraenkel set theory, with the historically controversial axiom of choice (AC) included, is the standard form of axiomatic set theory and as such is the most common foundation of mathematics . Zermelo–Fraenkel set theory with the axiom of choice included is abbreviated ZFC , where C stands for "choice", [ 1 ] and ZF refers to the axioms of Zermelo–Fraenkel set theory with the axiom of choice excluded.
Informally, [ 2 ] Zermelo–Fraenkel set theory is intended to formalize a single primitive notion, that of a hereditary well-founded set , so that all entities in the universe of discourse are such sets. Thus the axioms of Zermelo–Fraenkel set theory refer only to pure sets and prevent its models from containing urelements (elements that are not themselves sets). Furthermore, proper classes (collections of mathematical objects defined by a property shared by their members where the collections are too big to be sets) can only be treated indirectly. Specifically, Zermelo–Fraenkel set theory does not allow for the existence of a universal set (a set containing all sets) nor for unrestricted comprehension , thereby avoiding Russell's paradox. Von Neumann–Bernays–Gödel set theory (NBG) is a commonly used conservative extension of Zermelo–Fraenkel set theory that does allow explicit treatment of proper classes.
There are many equivalent formulations of the axioms of Zermelo–Fraenkel set theory. Most of the axioms state the existence of particular sets defined from other sets. For example, the axiom of pairing says that given any two sets a {\displaystyle a} and b {\displaystyle b} there is a new set { a , b } {\displaystyle \{a,b\}} containing exactly a {\displaystyle a} and b {\displaystyle b} . Other axioms describe properties of set membership. A goal of the axioms is that each axiom should be true if interpreted as a statement about the collection of all sets in the von Neumann universe (also known as the cumulative hierarchy).
The metamathematics of Zermelo–Fraenkel set theory has been extensively studied. Landmark results in this area established the logical independence of the axiom of choice from the remaining Zermelo-Fraenkel axioms and of the continuum hypothesis from ZFC. The consistency of a theory such as ZFC cannot be proved within the theory itself, as shown by Gödel's second incompleteness theorem .
The modern study of set theory was initiated by Georg Cantor and Richard Dedekind in the 1870s. However, the discovery of paradoxes in naive set theory , such as Russell's paradox , led to the desire for a more rigorous form of set theory that was free of these paradoxes.
In 1908, Ernst Zermelo proposed the first axiomatic set theory , Zermelo set theory . However, as first pointed out by Abraham Fraenkel in a 1921 letter to Zermelo, this theory was incapable of proving the existence of certain sets and cardinal numbers whose existence was taken for granted by most set theorists of the time, notably the cardinal number aleph-omega ( ℵ ω {\displaystyle \aleph _{\omega }} ) and the set { Z 0 , P ( Z 0 ) , P ( P ( Z 0 ) ) , P ( P ( P ( Z 0 ) ) ) , . . . } , {\displaystyle \{Z_{0},{\mathcal {P}}(Z_{0}),{\mathcal {P}}({\mathcal {P}}(Z_{0})),{\mathcal {P}}({\mathcal {P}}({\mathcal {P}}(Z_{0}))),...\},} where Z 0 {\displaystyle Z_{0}} is any infinite set and P {\displaystyle {\mathcal {P}}} is the power set operation. [ 3 ] Moreover, one of Zermelo's axioms invoked a concept, that of a "definite" property, whose operational meaning was not clear. In 1922, Fraenkel and Thoralf Skolem independently proposed operationalizing a "definite" property as one that could be formulated as a well-formed formula in a first-order logic whose atomic formulas were limited to set membership and identity. They also independently proposed replacing the axiom schema of specification with the axiom schema of replacement . Appending this schema, as well as the axiom of regularity (first proposed by John von Neumann ), [ 4 ] to Zermelo set theory yields the theory denoted by ZF . Adding to ZF either the axiom of choice (AC) or a statement that is equivalent to it yields ZFC.
Formally, ZFC is a one-sorted theory in first-order logic . The equality symbol can be treated as either a primitive logical symbol or a high-level abbreviation for having exactly the same elements. The former approach is the most common. The signature has a single predicate symbol, usually denoted ∈ {\displaystyle \in } , which is a predicate symbol of arity 2 (a binary relation symbol). This symbol symbolizes a set membership relation. For example, the formula a ∈ b {\displaystyle a\in b} means that a {\displaystyle a} is an element of the set b {\displaystyle b} (also read as a {\displaystyle a} is a member of b {\displaystyle b} ).
There are different ways to formulate the formal language. Some authors may choose a different set of connectives or quantifiers. For example, the logical connective NAND alone can encode the other connectives, a property known as functional completeness . This section attempts to strike a balance between simplicity and intuitiveness.
The language's alphabet consists of:
With this alphabet, the recursive rules for forming well-formed formulae (wff) are as follows:
A well-formed formula can be thought of as a syntax tree. The leaf nodes are always atomic formulae. Nodes ∧ {\displaystyle \land } and ∨ {\displaystyle \lor } have exactly two child nodes, while nodes ¬ {\displaystyle \lnot } , ∀ x {\displaystyle \forall x} and ∃ x {\displaystyle \exists x} have exactly one. There are countably infinitely many wffs; however, each wff has a finite number of nodes.
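The syntax-tree view can be sketched with a few node classes matching the arities described above (atomic formulae as leaves, one child for negation and quantifiers, two for the binary connectives). The class names and the node-counting helper are illustrative only:

```python
# A wff as a syntax tree: leaves are atomic, arities follow the text above.
from dataclasses import dataclass

@dataclass
class Atom:            # e.g. "x ∈ y"
    text: str

@dataclass
class Not:             # one child
    child: object

@dataclass
class And:             # two children
    left: object
    right: object

@dataclass
class Forall:          # one child, plus the bound variable
    var: str
    child: object

def nodes(f) -> int:
    """Every wff has finitely many nodes; count them recursively."""
    if isinstance(f, Atom):
        return 1
    if isinstance(f, (Not, Forall)):
        return 1 + nodes(f.child)
    return 1 + nodes(f.left) + nodes(f.right)

# ∀x ¬(x ∈ x ∧ x = x)
wff = Forall("x", Not(And(Atom("x ∈ x"), Atom("x = x"))))
print(nodes(wff))  # 5
```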
There are many equivalent formulations of the ZFC axioms. [ 5 ] The following particular axiom set is from Kunen (1980) . The axioms in order below are expressed in a mixture of first-order logic and high-level abbreviations.
Axioms 1–8 form ZF, while the axiom 9 turns ZF into ZFC. Following Kunen (1980) , we use the equivalent well-ordering theorem in place of the axiom of choice for axiom 9.
All formulations of ZFC imply that at least one set exists. Kunen includes an axiom that directly asserts the existence of a set, although he notes that he does so only "for emphasis". [ 6 ] Its omission here can be justified in two ways. First, in the standard semantics of first-order logic in which ZFC is typically formalized, the domain of discourse must be nonempty. Hence, it is a logical theorem of first-order logic that something exists – usually expressed as the assertion that something is identical to itself, ∃ x ( x = x ) {\displaystyle \exists x(x=x)} . Consequently, it is a theorem of every first-order theory that something exists. However, as noted above, because in the intended semantics of ZFC, there are only sets, the interpretation of this logical theorem in the context of ZFC is that some set exists. Hence, there is no need for a separate axiom asserting that a set exists. Second, however, even if ZFC is formulated in so-called free logic , in which it is not provable from logic alone that something exists, the axiom of infinity asserts that an infinite set exists. This implies that a set exists, and so, once again, it is superfluous to include an axiom asserting as much.
Two sets are equal (are the same set) if they have the same elements.
The converse of this axiom follows from the substitution property of equality . ZFC is constructed in first-order logic. Some formulations of first-order logic include identity; others do not. If the variety of first-order logic in which one is constructing set theory does not include equality " = {\displaystyle =} ", x = y {\displaystyle x=y} may be defined as an abbreviation for the following formula: [ 7 ] ∀ z [ z ∈ x ⇔ z ∈ y ] ∧ ∀ w [ x ∈ w ⇔ y ∈ w ] . {\displaystyle \forall z[z\in x\Leftrightarrow z\in y]\land \forall w[x\in w\Leftrightarrow y\in w].}
In this case, the axiom of extensionality can be reformulated as ∀ x ∀ y [ ∀ z ( z ∈ x ⇔ z ∈ y ) ⇒ ∀ w ( x ∈ w ⇔ y ∈ w ) ] , {\displaystyle \forall x\forall y[\forall z(z\in x\Leftrightarrow z\in y)\Rightarrow \forall w(x\in w\Leftrightarrow y\in w)],}
which says that if x {\displaystyle x} and y {\displaystyle y} have the same elements, then they belong to the same sets. [ 8 ]
Every non-empty set x {\displaystyle x} contains a member y {\displaystyle y} such that x {\displaystyle x} and y {\displaystyle y} are disjoint sets .
or in modern notation: ∀ x ( x ≠ ∅ ⇒ ∃ y ( y ∈ x ∧ y ∩ x = ∅ ) ) . {\displaystyle \forall x\,(x\neq \varnothing \Rightarrow \exists y(y\in x\land y\cap x=\varnothing )).}
This (along with the axioms of pairing and union) implies, for example, that no set is an element of itself and that every set has an ordinal rank .
Subsets are commonly constructed using set builder notation . For example, the even integers can be constructed as the subset of the integers Z {\displaystyle \mathbb {Z} } satisfying the congruence modulo predicate x ≡ 0 ( mod 2 ) {\displaystyle x\equiv 0{\pmod {2}}} : { x ∈ Z : x ≡ 0 ( mod 2 ) } . {\displaystyle \{x\in \mathbb {Z} :x\equiv 0{\pmod {2}}\}.}
In general, the subset of a set z {\displaystyle z} obeying a formula φ ( x ) {\displaystyle \varphi (x)} with one free variable x {\displaystyle x} may be written as: { x ∈ z : φ ( x ) } . {\displaystyle \{x\in z:\varphi (x)\}.}
The axiom schema of specification states that this subset always exists (it is an axiom schema because there is one axiom for each φ {\displaystyle \varphi } ). Formally, let φ {\displaystyle \varphi } be any formula in the language of ZFC with all free variables among x , z , w 1 , … , w n {\displaystyle x,z,w_{1},\ldots ,w_{n}} ( y {\displaystyle y} is not free in φ {\displaystyle \varphi } ). Then: ∀ z ∀ w 1 … ∀ w n ∃ y ∀ x [ x ∈ y ⇔ ( x ∈ z ∧ φ ) ] . {\displaystyle \forall z\forall w_{1}\ldots \forall w_{n}\,\exists y\,\forall x[x\in y\Leftrightarrow (x\in z\land \varphi )].}
Note that the axiom schema of specification can only construct subsets and does not allow the construction of entities of the more general form: { x : φ ( x ) } . {\displaystyle \{x:\varphi (x)\}.}
This restriction is necessary to avoid Russell's paradox (let y = { x : x ∉ x } {\displaystyle y=\{x:x\notin x\}} then y ∈ y ⇔ y ∉ y {\displaystyle y\in y\Leftrightarrow y\notin y} ) and its variants that accompany naive set theory with unrestricted comprehension (since under this restriction y {\displaystyle y} only refers to sets within z {\displaystyle z} that don't belong to themselves, and y ∈ z {\displaystyle y\in z} has not been established, even though y ⊆ z {\displaystyle y\subseteq z} is the case, so y {\displaystyle y} stands in a separate position from which it can't refer to or comprehend itself; therefore, in a certain sense, this axiom schema is saying that in order to build a y {\displaystyle y} on the basis of a formula φ ( x ) {\displaystyle \varphi (x)} , we need to previously restrict the sets y {\displaystyle y} will regard within a set z {\displaystyle z} that leaves y {\displaystyle y} outside so y {\displaystyle y} can't refer to itself; or, in other words, sets shouldn't refer to themselves).
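A finite analogue of this restriction can be played out with Python frozensets, which (like well-founded sets) can never contain themselves. The restricted comprehension y = { x ∈ z : x ∉ x } is always a perfectly good set because it is carved out of a pre-existing set z, not out of everything; this is a loose illustration, not a model of ZFC:

```python
# Restricted comprehension over a pre-existing set z, in the spirit of the
# axiom schema of specification. Frozensets are well-founded, so no element
# satisfies x ∈ x, and y simply equals z.
a = frozenset()             # "∅"
b = frozenset({a})          # "{∅}"
z = {a, b}

y = {x for x in z if x not in x}   # safe: bounded by z
print(y == z)  # True: no frozenset contains itself
```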
In some other axiomatizations of ZF, this axiom is redundant in that it follows from the axiom schema of replacement and the axiom of the empty set .
On the other hand, the axiom schema of specification can be used to prove the existence of the empty set , denoted ∅ {\displaystyle \varnothing } , once at least one set is known to exist. One way to do this is to use a property φ {\displaystyle \varphi } which no set has. For example, if w {\displaystyle w} is any existing set, the empty set can be constructed as ∅ = { u ∈ w : ( u ∈ u ) ∧ ¬ ( u ∈ u ) } . {\displaystyle \varnothing =\{u\in w:(u\in u)\land \lnot (u\in u)\}.}
Thus, the axiom of the empty set is implied by the nine axioms presented here. The axiom of extensionality implies the empty set is unique (does not depend on w {\displaystyle w} ). It is common to make a definitional extension that adds the symbol " ∅ {\displaystyle \varnothing } " to the language of ZFC.
If x {\displaystyle x} and y {\displaystyle y} are sets, then there exists a set which contains x {\displaystyle x} and y {\displaystyle y} as elements. For example, if x = {1, 2} and y = {2, 3}, then z will be {{1, 2}, {2, 3}}.
The axiom schema of specification must be used to reduce this to a set with exactly these two elements. The axiom of pairing is part of Z, but is redundant in ZF because it follows from the axiom schema of replacement if we are given a set with at least two elements. The existence of a set with at least two elements is assured by either the axiom of infinity , or by the axiom schema of specification and the axiom of the power set applied twice to any set.
The union over the elements of a set exists. For example, the union over the elements of the set { { 1 , 2 } , { 2 , 3 } } {\displaystyle \{\{1,2\},\{2,3\}\}} is { 1 , 2 , 3 } . {\displaystyle \{1,2,3\}.}
The axiom of union states that for any set of sets F {\displaystyle {\mathcal {F}}} , there is a set A {\displaystyle A} containing every element that is a member of some member of F {\displaystyle {\mathcal {F}}} : ∀ F ∃ A ∀ Y ∀ x [ ( x ∈ Y ∧ Y ∈ F ) ⇒ x ∈ A ] . {\displaystyle \forall {\mathcal {F}}\,\exists A\,\forall Y\,\forall x[(x\in Y\land Y\in {\mathcal {F}})\Rightarrow x\in A].}
Although this formula doesn't directly assert the existence of ∪ F {\displaystyle \cup {\mathcal {F}}} , the set ∪ F {\displaystyle \cup {\mathcal {F}}} can be constructed from A {\displaystyle A} in the above using the axiom schema of specification: ∪ F = { x ∈ A : ∃ Y ( x ∈ Y ∧ Y ∈ F ) } . {\displaystyle \cup {\mathcal {F}}=\{x\in A:\exists Y(x\in Y\land Y\in {\mathcal {F}})\}.}
The axiom schema of replacement asserts that the image of a set under any definable function will also fall inside a set.
Formally, let φ {\displaystyle \varphi } be any formula in the language of ZFC whose free variables are among x , y , A , w 1 , … , w n , {\displaystyle x,y,A,w_{1},\dotsc ,w_{n},} so that in particular B {\displaystyle B} is not free in φ {\displaystyle \varphi } . Then: ∀ A ∀ w 1 … ∀ w n [ ∀ x ( x ∈ A ⇒ ∃ ! y φ ) ⇒ ∃ B ∀ x ( x ∈ A ⇒ ∃ y ( y ∈ B ∧ φ ) ) ] . {\displaystyle \forall A\forall w_{1}\dotsc \forall w_{n}[\forall x(x\in A\Rightarrow \exists !y\,\varphi )\Rightarrow \exists B\,\forall x(x\in A\Rightarrow \exists y(y\in B\land \varphi ))].}
(The unique existential quantifier ∃ ! {\displaystyle \exists !} denotes the existence of exactly one element such that it follows a given statement.)
In other words, if the relation φ {\displaystyle \varphi } represents a definable function f {\displaystyle f} , A {\displaystyle A} represents its domain , and f ( x ) {\displaystyle f(x)} is a set for every x ∈ A , {\displaystyle x\in A,} then the range of f {\displaystyle f} is a subset of some set B {\displaystyle B} . The form stated here, in which B {\displaystyle B} may be larger than strictly necessary, is sometimes called the axiom schema of collection .
Let S ( w ) {\displaystyle S(w)} abbreviate w ∪ { w } , {\displaystyle w\cup \{w\},} where w {\displaystyle w} is some set. (We can see that { w } {\displaystyle \{w\}} is a valid set by applying the axiom of pairing with x = y = w {\displaystyle x=y=w} so that the set z is { w } {\displaystyle \{w\}} ). Then there exists a set X such that the empty set ∅ {\displaystyle \varnothing } , defined axiomatically, is a member of X and, whenever a set y is a member of X then S ( y ) {\displaystyle S(y)} is also a member of X .
or in modern notation: ∃ X [ ∅ ∈ X ∧ ∀ y ( y ∈ X ⇒ S ( y ) ∈ X ) ] . {\displaystyle \exists X\left[\varnothing \in X\land \forall y(y\in X\Rightarrow S(y)\in X)\right].}
More colloquially, there exists a set X having infinitely many members. (It must be established, however, that these members are all different because if two elements are the same, the sequence will loop around in a finite cycle of sets. The axiom of regularity prevents this from happening.) The minimal set X satisfying the axiom of infinity is the von Neumann ordinal ω which can also be thought of as the set of natural numbers N . {\displaystyle \mathbb {N} .}
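The successor operation S(w) = w ∪ {w} can be sketched with frozensets, which generate the von Neumann naturals 0 = ∅, 1 = {∅}, 2 = {∅, {∅}}, and so on, exactly as described; the function name is ours:

```python
# The successor S(w) = w ∪ {w} from the axiom of infinity, applied to ∅.
def successor(w: frozenset) -> frozenset:
    return w | frozenset({w})

zero = frozenset()
one = successor(zero)
two = successor(one)
three = successor(two)

# each von Neumann natural n has exactly n elements, and n ∈ S(n):
print(len(zero), len(one), len(two), len(three))  # 0 1 2 3
print(two in three)  # True
```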
By definition, a set z {\displaystyle z} is a subset of a set x {\displaystyle x} if and only if every element of z {\displaystyle z} is also an element of x {\displaystyle x} : z ⊆ x ⇔ ∀ q ( q ∈ z ⇒ q ∈ x ) . {\displaystyle z\subseteq x\Leftrightarrow \forall q(q\in z\Rightarrow q\in x).}
The axiom of power set states that for any set x {\displaystyle x} , there is a set y {\displaystyle y} that contains every subset of x {\displaystyle x} : ∀ x ∃ y ∀ z [ z ⊆ x ⇒ z ∈ y ] . {\displaystyle \forall x\,\exists y\,\forall z[z\subseteq x\Rightarrow z\in y].}
The axiom schema of specification is then used to define the power set P ( x ) {\displaystyle {\mathcal {P}}(x)} as the subset of such a y {\displaystyle y} containing the subsets of x {\displaystyle x} exactly: P ( x ) = { z ∈ y : z ⊆ x } . {\displaystyle {\mathcal {P}}(x)=\{z\in y:z\subseteq x\}.}
Axioms 1–8 define ZF. Alternative forms of these axioms are often encountered, some of which are listed in Jech (2003) . Some ZF axiomatizations include an axiom asserting that the empty set exists . The axioms of pairing, union, replacement, and power set are often stated so that the members of the set x {\displaystyle x} whose existence is being asserted are just those sets which the axiom asserts x {\displaystyle x} must contain.
The following axiom is added to turn ZF into ZFC:
The last axiom, commonly known as the axiom of choice , is presented here as a property about well-orders , as in Kunen (1980) .
For any set X {\displaystyle X} , there exists a binary relation R {\displaystyle R} which well-orders X {\displaystyle X} . This means R {\displaystyle R} is a linear order on X {\displaystyle X} such that every nonempty subset of X {\displaystyle X} has a least element under the order R {\displaystyle R} .
Given axioms 1 – 8 , many statements are provably equivalent to axiom 9 . The most common of these goes as follows. Let X {\displaystyle X} be a set whose members are all nonempty. Then there exists a function f {\displaystyle f} from X {\displaystyle X} to the union of the members of X {\displaystyle X} , called a " choice function ", such that for all Y ∈ X {\displaystyle Y\in X} one has f ( Y ) ∈ Y {\displaystyle f(Y)\in Y} . A third version of the axiom, also equivalent, is Zorn's lemma .
Since the existence of a choice function when X {\displaystyle X} is a finite set is easily proved from axioms 1–8 , AC only matters for certain infinite sets . AC is characterized as nonconstructive because it asserts the existence of a choice function but says nothing about how this choice function is to be "constructed".
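The finite case is easy to exhibit directly. A toy Python sketch (choosing, say, the least element of each member; any explicitly definable rule works for a finite family, which is why no choice axiom is needed here):

```python
def choice_function(family):
    """For a finite family of nonempty sets of integers, return f with f(Y) in Y.
    Choosing the minimum makes the choice explicit and constructive."""
    return {Y: min(Y) for Y in family}

X = {frozenset({3, 1}), frozenset({2}), frozenset({9, 4, 7})}
f = choice_function(X)
assert all(f[Y] in Y for Y in X)   # f(Y) ∈ Y for every Y ∈ X
```

The axiom of choice is needed precisely when no such uniform selection rule can be written down, e.g. for an arbitrary infinite family of nonempty sets of reals.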
One motivation for the ZFC axioms is the cumulative hierarchy of sets introduced by John von Neumann . [ 10 ] In this viewpoint, the universe of set theory is built up in stages, with one stage for each ordinal number . At stage 0, there are no sets yet. At each following stage, a set is added to the universe if all of its elements have been added at previous stages. Thus the empty set is added at stage 1, and the set containing the empty set is added at stage 2. [ 11 ] The collection of all sets that are obtained in this way, over all the stages, is known as V . The sets in V can be arranged into a hierarchy by assigning to each set the first stage at which that set was added to V .
It is provable that a set is in V if and only if the set is pure and well-founded . And V satisfies all the axioms of ZFC if the class of ordinals has appropriate reflection properties. For example, suppose that a set x is added at stage α, which means that every element of x was added at a stage earlier than α. Then, every subset of x is also added at (or before) stage α, because all elements of any subset of x were also added before stage α. This means that any subset of x which the axiom of separation can construct is added at (or before) stage α, and that the powerset of x will be added at the next stage after α. [ 12 ]
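The first few stages of the cumulative hierarchy are small enough to compute explicitly. A sketch in Python (`stage` is an illustrative name; hereditarily finite sets are modeled as nested frozensets):

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of the finite set s, as frozensets."""
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

def stage(n):
    """V_n in the cumulative hierarchy: V_0 = empty, V_{n+1} = P(V_n)."""
    v = frozenset()
    for _ in range(n):
        v = frozenset(powerset(v))
    return v

# stage sizes grow as |V_{n+1}| = 2^|V_n|
assert [len(stage(n)) for n in range(5)] == [0, 1, 2, 4, 16]
```

Since |V_5| = 2^16 = 65536 and |V_6| = 2^65536, only the very bottom of the hierarchy is computable; the full hierarchy continues through all the ordinals.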
The picture of the universe of sets stratified into the cumulative hierarchy is characteristic of ZFC and related axiomatic set theories such as Von Neumann–Bernays–Gödel set theory (often called NBG) and Morse–Kelley set theory . The cumulative hierarchy is not compatible with other set theories such as New Foundations .
It is possible to change the definition of V so that at each stage, instead of adding all the subsets of the union of the previous stages, subsets are only added if they are definable in a certain sense. This results in a more "narrow" hierarchy, which gives the constructible universe L , which also satisfies all the axioms of ZFC, including the axiom of choice. It is independent from the ZFC axioms whether V = L . Although the structure of L is more regular and well behaved than that of V , few mathematicians argue that V = L should be added to ZFC as an additional " axiom of constructibility ".
Proper classes (collections of mathematical objects defined by a property shared by their members which are too big to be sets) can only be treated indirectly in ZF (and thus ZFC).
An alternative to proper classes while staying within ZF and ZFC is the virtual class notational construct introduced by Quine (1969) , where the entire construct y ∈ { x | F x } is simply defined as F y . [ 13 ] This provides a simple notation for classes that can contain sets but need not themselves be sets, while not committing to the ontology of classes (because the notation can be syntactically converted to one that only uses sets). Quine's approach built on the earlier approach of Bernays & Fraenkel (1958) . Virtual classes are also used in Levy (2002) , Takeuti & Zaring (1982) , and in the Metamath implementation of ZFC.
The axiom schemata of replacement and separation each contain infinitely many instances. Montague (1961) included a result first proved in his 1957 Ph.D. thesis: if ZFC is consistent, it is impossible to axiomatize ZFC using only finitely many axioms. On the other hand, von Neumann–Bernays–Gödel set theory (NBG) can be finitely axiomatized. The ontology of NBG includes proper classes as well as sets; a set is any class that can be a member of another class. NBG and ZFC are equivalent set theories in the sense that any theorem not mentioning classes and provable in one theory can be proved in the other.
Gödel's second incompleteness theorem says that a recursively axiomatizable system that can interpret Robinson arithmetic can prove its own consistency only if it is inconsistent. Moreover, Robinson arithmetic can be interpreted in general set theory , a small fragment of ZFC. Hence the consistency of ZFC cannot be proved within ZFC itself (unless it is actually inconsistent). Thus, to the extent that ZFC is identified with ordinary mathematics, the consistency of ZFC cannot be demonstrated in ordinary mathematics. The consistency of ZFC does follow from the existence of a weakly inaccessible cardinal , which is unprovable in ZFC if ZFC is consistent. Nevertheless, it is deemed unlikely that ZFC harbors an unsuspected contradiction; it is widely believed that if ZFC were inconsistent, that fact would have been uncovered by now. This much is certain – ZFC is immune to the classic paradoxes of naive set theory : Russell's paradox , the Burali-Forti paradox , and Cantor's paradox .
Abian & LaMacchia (1978) studied a subtheory of ZFC consisting of the axioms of extensionality, union, powerset, replacement, and choice. Using models , they proved this subtheory consistent, and proved that each of the axioms of extensionality, replacement, and power set is independent of the four remaining axioms of this subtheory. If this subtheory is augmented with the axiom of infinity, each of the axioms of union, choice, and infinity is independent of the five remaining axioms. Because there are non-well-founded models that satisfy each axiom of ZFC except the axiom of regularity, that axiom is independent of the other ZFC axioms.
If consistent, ZFC cannot prove the existence of the inaccessible cardinals that category theory requires. Huge sets of this nature are possible if ZF is augmented with Tarski's axiom . [ 14 ] Assuming that axiom turns the axioms of infinity , power set , and choice ( 7 – 9 above) into theorems.
Many important statements are independent of ZFC . The independence is usually proved by forcing , whereby it is shown that every countable transitive model of ZFC (sometimes augmented with large cardinal axioms ) can be expanded to satisfy the statement in question. A different expansion is then shown to satisfy the negation of the statement. An independence proof by forcing automatically proves independence from arithmetical statements, other concrete statements, and large cardinal axioms. Some statements independent of ZFC can be proven to hold in particular inner models , such as in the constructible universe . However, some statements that are true about constructible sets are not consistent with hypothesized large cardinal axioms.
Forcing proves that the following statements are independent of ZFC:
Remarks:
A variation on the method of forcing can also be used to demonstrate the consistency and unprovability of the axiom of choice , i.e., that the axiom of choice is independent of ZF. The consistency of choice can be (relatively) easily verified by proving that the inner model L satisfies choice. (Thus every model of ZF contains a submodel of ZFC, so that Con(ZF) implies Con(ZFC).) Since forcing preserves choice, we cannot directly produce a model contradicting choice from a model satisfying choice. However, we can use forcing to create a model which contains a suitable submodel, namely one satisfying ZF but not C.
Another method of proving independence results, one owing nothing to forcing, is based on Gödel's second incompleteness theorem. This approach employs the statement whose independence is being examined, to prove the existence of a set model of ZFC, in which case Con(ZFC) is true. Since ZFC satisfies the conditions of Gödel's second theorem, the consistency of ZFC is unprovable in ZFC (provided that ZFC is, in fact, consistent). Hence no statement allowing such a proof can be proved in ZFC. This method can prove that the existence of large cardinals is not provable in ZFC, but cannot prove that assuming such cardinals, given ZFC, is free of contradiction.
The project to unify set theorists behind additional axioms to resolve the continuum hypothesis or other meta-mathematical ambiguities is sometimes known as "Gödel's program". [ 15 ] Mathematicians currently debate which axioms are the most plausible or "self-evident", which axioms are the most useful in various domains, and to what degree usefulness should be traded off against plausibility; some " multiverse " set theorists argue that usefulness should be the sole ultimate criterion for which axioms to adopt. One school of thought leans on expanding the "iterative" concept of a set to produce a set-theoretic universe with an interesting and complex but reasonably tractable structure by adopting forcing axioms; another school advocates for a tidier, less cluttered universe, perhaps focused on a "core" inner model. [ 16 ]
ZFC has been criticized both for being excessively strong and for being excessively weak, as well as for its failure to capture objects such as proper classes and the universal set .
Many mathematical theorems can be proven in much weaker systems than ZFC, such as Peano arithmetic and second-order arithmetic (as explored by the program of reverse mathematics ). Saunders Mac Lane and Solomon Feferman have both made this point. Some of "mainstream mathematics" (mathematics not directly connected with axiomatic set theory) is beyond Peano arithmetic and second-order arithmetic, but still, all such mathematics can be carried out in ZC ( Zermelo set theory with choice), another theory weaker than ZFC. Much of the power of ZFC, including the axiom of regularity and the axiom schema of replacement, is included primarily to facilitate the study of set theory itself.
On the other hand, among axiomatic set theories , ZFC is comparatively weak. Unlike New Foundations , ZFC does not admit the existence of a universal set. Hence the universe of sets under ZFC is not closed under the elementary operations of the algebra of sets . Unlike von Neumann–Bernays–Gödel set theory (NBG) and Morse–Kelley set theory (MK), ZFC does not admit the existence of proper classes . A further comparative weakness of ZFC is that the axiom of choice included in ZFC is weaker than the axiom of global choice included in NBG and MK.
There are numerous mathematical statements independent of ZFC . These include the continuum hypothesis , the Whitehead problem , and the normal Moore space conjecture . Some of these conjectures are provable with the addition of axioms such as Martin's axiom or large cardinal axioms to ZFC. Some others are decided in ZF+AD where AD is the axiom of determinacy , a strong supposition incompatible with choice. One attraction of large cardinal axioms is that they enable many results from ZF+AD to be established in ZFC adjoined by some large cardinal axiom. The Mizar system and metamath have adopted Tarski–Grothendieck set theory , an extension of ZFC, so that proofs involving Grothendieck universes (encountered in category theory and algebraic geometry) can be formalized.
Related axiomatic set theories :
https://en.wikipedia.org/wiki/Zermelo–Fraenkel_set_theory
The Zero-Force Evolutionary Law (ZFEL) is a theory proposed by Daniel McShea and Robert Brandon regarding the evolution of diversity and complexity . Under the ZFEL, diversity is understood as the variation among organisms and complexity as the variation among the parts within an organism. [ 1 ] A part is understood as a system that is to some degree internally integrated and isolated from its surroundings. [ 2 ] In a multicellular organism, for example, a cell is a part, and therefore complexity is the number of different cell types. Like the theory of relativity , the theory has a special and general formulation. The special formulation states that in the absence of natural selection, an evolutionary system with variation and heredity will tend spontaneously to diversify and complexify.
The general formulation states that evolutionary systems have a tendency to diversify and complexify, but that these processes may be amplified or constrained by other forces, including natural selection. The mechanism of the ZFEL is the inherently error-prone process of replication and reproduction. In the absence of selection, errors tend to accumulate, with the result that individuals within a population tend to become more different from each other (diversity) and parts within an individual tend to become more different from each other (complexity). Both of these tendencies can be overcome by selection, including stabilizing or negative selection, with the result that diversity or complexity often does not change, or even decreases. What the ZFEL offers is not so much a prediction as a null expectation, telling us what will happen in evolution when selection is absent. It is the analogue of Newton's law of momentum , which tells us the trajectory of a moving object in the absence of forces (a straight line).
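The ZFEL's null expectation can be illustrated with a toy simulation (all parameters below are invented for the sketch, not taken from McShea and Brandon): replicate a population with random copying errors and no selection, and watch the variation among individuals grow from zero.

```python
import random

def simulate_drift(pop_size=20, length=50, generations=100, mu=0.02, seed=1):
    """Neutral evolution: copy each individual with random errors, no selection.
    Returns the mean pairwise Hamming distance before and after."""
    rng = random.Random(seed)
    pop = [[0] * length for _ in range(pop_size)]  # identical ancestors

    def mean_distance(p):
        total, pairs = 0, 0
        for i in range(len(p)):
            for j in range(i + 1, len(p)):
                total += sum(a != b for a, b in zip(p[i], p[j]))
                pairs += 1
        return total / pairs

    before = mean_distance(pop)
    for _ in range(generations):
        # error-prone replication: each site mutates with probability mu
        pop = [[(g + 1) % 4 if rng.random() < mu else g for g in ind]
               for ind in pop]
    return before, mean_distance(pop)
```

With these settings the mean pairwise distance starts at 0 and ends well above it; adding a stabilizing-selection step (discarding mutants each generation) would hold it near 0, which is the sense in which the ZFEL describes what happens when selection is absent.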
https://en.wikipedia.org/wiki/Zero-Force_Evolutionary_Law
An exploit is a method or piece of code that takes advantage of vulnerabilities in software , applications , networks , operating systems , or hardware , typically for malicious purposes.
The term "exploit" derives from the English verb "to exploit," meaning "to use something to one’s own advantage."
Exploits are designed to identify flaws, bypass security measures, gain unauthorized access to systems, take control of systems, install malware , or steal sensitive data .
While an exploit by itself may not be a malware , it serves as a vehicle for delivering malicious software by breaching security controls . [ 1 ] [ 2 ] [ 3 ] [ 4 ]
Researchers estimate that malicious exploits cost the global economy over US$450 billion annually.
In response to this threat, organizations are increasingly utilizing cyber threat intelligence to identify vulnerabilities and prevent hacks before they occur. [ 5 ]
Exploits target vulnerabilities, which are essentially flaws or weaknesses in a system's defenses.
Common targets for exploits include operating systems , web browsers , and various applications , where hidden vulnerabilities can compromise the integrity and security of computer systems .
Exploits can cause unintended or unanticipated behavior in systems, potentially leading to severe security breaches . [ 6 ] [ 7 ]
Many exploits are designed to provide superuser -level access to a computer system.
Attackers may use multiple exploits in succession to first gain low-level access and then escalate privileges repeatedly until they reach the highest administrative level, often referred to as "root."
This technique of chaining several exploits together to perform a single attack is known as an exploit chain.
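The idea of a chain can be modeled abstractly (this is a toy model; the privilege levels and stages are invented, and no real exploit technique appears): each stage works only from a given privilege level and yields a higher one, and the chain is their composition.

```python
# ordered privilege levels, lowest to highest
LEVELS = ["none", "user", "admin", "root"]

def run_chain(stages, start="none"):
    """Apply each (required_level, gained_level) stage in order.
    A stage fires only if the current privilege meets its requirement."""
    level = start
    for required, gained in stages:
        if LEVELS.index(level) >= LEVELS.index(required):
            level = gained
        else:
            break  # chain broken: prerequisite privilege is missing
    return level

# hypothetical chain: a remote bug yields user access, a local bug escalates to root
chain = [("none", "user"), ("user", "root")]
assert run_chain(chain) == "root"
```

The model also shows why patching any single link matters: removing one stage (or raising its requirement) leaves the attacker stuck at the privilege level reached so far.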
Exploits that remain unknown to everyone except the individuals who discovered and developed them are referred to as zero-day or "0day" exploits.
After an exploit is disclosed to the authors of the affected software, the associated vulnerability is often fixed through a patch , rendering the exploit unusable.
This is why some black hat hackers , as well as military or intelligence agency hackers, do not publish their exploits but keep them private.
One scheme that offers zero-day exploits is known as exploit as a service . [ 8 ]
There are several methods of classifying exploits. The most common is by how the exploit communicates to the vulnerable software.
By Method of Communication: [ 9 ]
By Targeted Component: [ 9 ]
The classification of exploits based [ 10 ] [ 11 ] on the type of vulnerability they exploit and the result of running the exploit (e.g., Elevation of Privilege ( EoP ), Denial of Service ( DoS ), spoofing ) is a common practice in cybersecurity. This approach helps in systematically identifying and addressing security threats. For instance, the STRIDE threat model categorizes threats into six types, including Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. [ 12 ] Similarly, the National Vulnerability Database (NVD) categorizes vulnerabilities by types such as Authentication Bypass by Spoofing and Authorization Bypass. [ 13 ]
By Type of Vulnerability :
Another classification is by the action against the vulnerable system; unauthorized data access, arbitrary code execution, and denial of service are examples.
Attackers employ various techniques to exploit vulnerabilities and achieve their objectives. Some common methods include: [ 9 ]
A zero-click attack is an exploit that requires no user interaction to operate – that is, no key-presses or mouse clicks. [ 14 ] These are commonly the most sought-after exploits (particularly on the underground exploit market) because the target typically has no way of knowing they have been compromised at the time of exploitation.
FORCEDENTRY , discovered in 2021, is an example of a zero-click attack. [ 15 ] [ 16 ]
In 2022, NSO Group was reportedly selling zero-click exploits to governments for breaking into individuals' phones. [ 17 ]
For mobile devices, the National Security Agency (NSA) points out that timely updating of software and applications, avoiding public network connections, and turning the device off and on at least once a week can mitigate the threat of zero-click attacks. [ 18 ] [ 19 ] [ 20 ] Experts say that protection practices for traditional endpoints are also applicable to mobile devices. Many exploits exist only in memory , not in files. Theoretically, restarting the device can wipe malware payloads from memory, forcing attackers back to the beginning of the exploit chain. [ 21 ] [ 22 ]
Pivoting is a technique employed by both hackers and penetration testers to expand their access within a target network. By compromising a system, attackers can leverage it as a platform to target other systems that are typically shielded from direct external access by firewalls . Internal networks often contain a broader range of accessible machines compared to those exposed to the internet. For example, an attacker might compromise a web server on a corporate network and then utilize it to target other systems within the same network. This approach is often referred to as a multi-layered attack. Pivoting is also known as island hopping .
Pivoting can further be distinguished into proxy pivoting and VPN pivoting:
Typically, the proxy or VPN applications enabling pivoting are executed on the target computer as the payload of an exploit.
Pivoting is usually done by infiltrating a part of a network infrastructure (for example, a vulnerable printer or thermostat) and using a scanner to find other connected devices to attack. By attacking a vulnerable piece of networking equipment, an attacker could infect most or all of a network and gain complete control.
https://en.wikipedia.org/wiki/Zero-click_attack
The zero-curtain effect occurs in cold (particularly periglacial ) environments where the phase transition of water to ice is slowed due to latent heat release. The effect is notably found in arctic and alpine permafrost sediments, and occurs where the air temperature falls below 0 ° C (the freezing point of water) followed by a rapid drop in soil temperature. [ 1 ]
Because of this effect, the lowering of temperature in moist, cold ground does not happen at a uniform rate. The loss of heat through conduction is reduced when water freezes, and latent heat is released. This heat of fusion is continually released until all the subsurface water has frozen, at which point temperatures can continue to fall. [ 2 ]
Therefore, for as long as water is available to the system (for example, through cryosuction / capillary action ), the temperature of the sediment will remain constant.
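The duration of the zero-curtain can be estimated from a simple energy balance (the numbers below are illustrative assumptions, not measurements from the article): the soil temperature stays pinned near 0 °C until conduction has removed the full latent heat of the pore water.

```python
L_FUSION = 334e3  # J/kg, latent heat of fusion of water (standard value)

def zero_curtain_duration(water_mass_kg, heat_loss_w):
    """Seconds the zero-curtain persists: all latent heat must be conducted
    away before the sediment temperature can fall below 0 °C."""
    return water_mass_kg * L_FUSION / heat_loss_w

# e.g. 100 kg of pore water per m² losing heat at 20 W/m²
# stays at 0 °C for 1.67e6 s, roughly 19 days
days = zero_curtain_duration(100, 20) / 86400
```

The same balance explains why the effect is strongest in wet sediments: the duration scales linearly with the mass of water that must freeze.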
This geomorphology article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Zero-curtain_effect
The zero-energy universe hypothesis proposes that the total amount of energy in the universe is exactly zero : its amount of positive energy in the form of matter is exactly canceled out by its negative energy in the form of gravity . [ 1 ] Some physicists, such as Lawrence Krauss , Stephen Hawking or Alexander Vilenkin , call or called this state "a universe from nothingness", although the zero-energy universe model requires both a matter field with positive energy and a gravitational field with negative energy to exist. [ 2 ] The hypothesis is broadly discussed in popular sources. [ 3 ] [ 4 ] [ 5 ]
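A heuristic Newtonian estimate (a common back-of-envelope sketch, not a relativistic calculation) shows how the proposed cancellation would work: for a universe of total mass M and characteristic radius R, the positive rest energy and the negative gravitational self-energy are comparable in magnitude when

```latex
E_{\text{total}} \;\approx\; Mc^{2} - \frac{GM^{2}}{R} \;=\; 0
\quad\Longrightarrow\quad
R \;\approx\; \frac{GM}{c^{2}},
```

i.e. the two terms cancel when the radius is of the order of the gravitational radius of the total mass (numerical factors of order one are dropped).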
The hypothesis is based on the tacit assumption that the universe is an infinitely large closed system existing at one spatial scale only. In the case of a finite universe or, equivalently, in the case of an infinite recursion of nested universes, there would be a self-accelerating gravitational flow of space passing from the larger-scale universe into the normal-scale universe, then into the smaller-scale universe, and so on ad infinitum ; the system would then be open, and all its matter would eventually be entrained and dissolved by the ever faster negative-energy flow of space.
To meet the above condition of non-nestedness, the universe must have a perfectly homogeneous or "flat" distribution of mass at very small spatial scales and at very large spatial scales, which implies that gravitation must be completely absent at the smallest and largest spatial scales (otherwise, even the tiniest inhomogeneities in the distribution of mass will be gravitationally amplified). This demanding requirement makes the hypothesis untenable, because from the perspective of the minimum total potential energy principle , such a homogeneous distribution of mass, with its maximum ( i.e. , zero) gravitational potential energy, is the least probable of all imaginable distributions of mass (an unstable equilibrium ).
During World War II, Pascual Jordan first suggested that since the positive energy of a star's mass and the negative energy of its gravitational field together may have zero total energy, conservation of energy would not prevent a star being created by a quantum transition of the vacuum. George Gamow recounted putting this idea to Albert Einstein : "Einstein stopped in his tracks and, since we were crossing a street, several cars had to stop to avoid running us down". [ 6 ] Elaboration of the concept was slow, with the first notable calculation being performed by Richard Feynman in 1962. [ 7 ] The first known publication on the topic was in 1973, when Edward Tryon proposed in the journal Nature that the universe emerged from a large-scale quantum fluctuation of vacuum energy , resulting in its positive mass-energy being exactly balanced by its negative gravitational potential energy . [ 4 ] In the subsequent decades, development of the concept was constantly plagued by the dependence of the calculated masses on the selection of the coordinate systems. In particular, a problem arises due to energy associated with coordinate systems co-rotating with the entire universe. [ 7 ] A first constraint was derived in 1987 when Alan Guth published a proof of gravitational energy being negative. [ 8 ] The question of the mechanism permitting generation of both positive and negative energy from null initial solution was not understood, and an ad hoc solution with cyclic time was proposed by Stephen Hawking in 1988. [ 9 ] [ 10 ]
In 1994, development of the theory resumed [ 11 ] following the publication of a work by Nathan Rosen , [ 12 ] in which Rosen described a special case of a closed universe. In 1995, J.V. Johri demonstrated that the total energy is zero in any universe compliant with a Friedmann–Lemaître–Robertson–Walker metric , generalizing Rosen's result, and proposed a mechanism of inflation-driven generation of matter in a young universe. [ 13 ] The zero-energy solution for a Minkowski space representing an observable universe was provided in 2009. [ 7 ]
In his book Brief Answers to the Big Questions , Hawking explains:
The laws of physics demand the existence of something called ' negative energy '.
To help you get your head around this weird but crucial concept, let me draw on a simple analogy. Imagine a man wants to build a hill on a flat piece of land. The hill will represent the universe. To make this hill he digs a hole in the ground and uses that soil to dig his hill. But of course he's not just making a hill—he's also making a hole, in effect a negative version of the hill. The stuff that was in the hole has now become the hill, so it all perfectly balances out. This is the principle behind what happened at the beginning of the universe. When the Big Bang produced a massive amount of positive energy, it simultaneously produced the same amount of negative energy. In this way, the positive and the negative add up to zero, always. It's another law of nature. So where is all this negative energy today? It's in the third ingredient in our cosmic cookbook: it's in space. This may sound odd, but according to the laws of nature concerning gravity and motion—laws that are among the oldest in science —space itself is a vast store of negative energy. Enough to ensure that everything adds up to zero. [ 14 ]
The zero-total-energy universe is called "flat" or "Euclidean". But the observed flatness of the universe at very large scales cannot be used as experimental evidence in favour of the zero-total-energy universe, because it is the flatness of the primordial universe (it takes light many billions of years to arrive from large distances at terrestrial telescopes). The modern universe is hierarchic even at the largest scales, which implies that the total energy of the modern universe is negative.
Astronomical observations suggest that by falling into its own gravitational field, the universe becomes ever more dominated by the gravitational field and acquires an ever more negative total energy:
The central singularity is in the future:
"... the central singularity is still at r = 0. The conclusion is that motion forward in time is motion towards smaller r . An object entering the horizon is carried down to r = 0 just as surely as you and I are carried into next week." [ 16 ]
We are literally longing for the future — people with elongated bodies are more future-oriented than people with round bodies. [ 17 ]
The only explanation, according to Chaboyer and Krauss, for an accelerating universe is that the energy content of a vacuum is non-zero with a negative pressure , in other words, dark energy. This negative pressure of the vacuum grows in importance as the universe expands and causes the expansion to accelerate.
The peculiar properties of the false vacuum stem from its pressure, which is large and negative (see box on the right). Mechanically such a negative pressure corresponds to a suction , which does not sound like something that would drive the Universe into a period of rapid expansion.
The drain hole sucking water toward it is equivalent to the singularity at the center of a black hole sucking space toward it.
The basic idea, outlined in a nontechnical manner in ref. [7 [ 18 ] ], is that as inhomogeneities grow one must consider not only their backreaction on average cosmic evolution, but also the variance in the geometry as it affects the calibration of clocks and rulers of ideal observers. Dark energy is then effectively realised as a misidentification of gravitational energy gradients.
To us, falling towards the central singularity, our gravity-dominated [ 19 ] and therefore shrinking [ 20 ] black-hole universe seems to be expanding:
Now let us consider an astronaut explorer who goes to visit a black hole and falls in. According to her own proper time, the explorer can soon arrive in the vicinity of the horizon. Any light emitted at r s in the outward radial direction as she falls in stays at the horizon, according to outer observers, but travels at c relative to the astronaut. Therefore, in the astronaut's rest frame the horizon moves outwards at c .
If the total energy of the universe were zero, the negative gravitational potential energy of the universe would be cancelled out to zero by the positive actual energy of the universe, so that the universe's matter would be infinitely rarefied and thus nonexistent. The very fact that the universe exists proves that its total energy is negative, not zero.
The universe's positive energy ( E = hf ) consists of Planck quanta of action ( h ), which are quanta of angular momentum, [ 21 ] borrowed from the universe's gravitational field at the cost of deepening and narrowing the funnel-shaped gravity well [ 15 ] representing the field (to its inhabitants, the deepening and narrowing gravity well seems to be expanding [ 22 ] ):
The universe would have expanded in a smooth way from a single point. As it expanded, it would have borrowed energy from the gravitational field, to create matter . As any economist could have predicted, the result of all that borrowing was inflation. The universe expanded and borrowed at an ever increasing rate . Fortunately, the debt of gravitational energy will not have to be repaid until the end of the universe.
The concept of the zero-total-energy universe is based on the wrong assumption that making the universe's negative potential energy (which is the debt) more negative by one unit always makes the universe's positive actual energy more positive by the same one unit.
In other words, the concept of the zero-total-energy universe ignores the law of the diminishing marginal productivity of debt , which is a particular case of the law of diminishing returns (also known as the law of diminishing marginal productivity).
The angular momentum ( L ) of the universe's funnel-shaped gravity well is the product of its moment of inertia and its angular velocity: L = I ω {\displaystyle L=I\omega }
where I {\displaystyle I} is the moment of inertia and ω {\displaystyle \omega } is the angular velocity.
In the course of time, the universe's funnel-shaped gravity well deepens and narrows, [ 15 ] so that its moment of inertia (which is potential or zero-rotational-frequencied angular momentum, i.e. the reservoir from which angular momentum becomes borrowed into actuality by increasing its rotational frequency) decreases. Consequently, the marginal productivity of debt diminishes, so that the total energy of the universe becomes ever more negative.
The universe's moment of inertia is the universe's rest mass , space or volume, or gravitational potential energy:
The student is advised to regard moment of inertia as being equivalent to ‘angular mass’; equations in rotational mechanics are generally analogous to those in translational mechanics. Wherever an equation occurs in translational mechanics involving mass m , there is an equivalent equation in rotational mechanics involving moment of inertia J . The units of moment of inertia are kilogram metres² (abbreviation kg·m²).
The quantity factor of potential energy is space or volume which however is equivalent to mass.
Therefore, the self-accelerating decrease in the universe's moment of inertia implies that the volume of the universe is shrinking. To human observers, the shrinking universe seems to be expanding because the atoms of which the observers consist are shrinking progressively faster than the entire universe:
All change is relative. The universe is expanding relatively to our common material standards; our material standards are shrinking relatively to the size of the universe. The theory of the "expanding universe" might also be called the theory of the "shrinking atom". <...>
Let us then take the whole universe as our standard of constancy, and adopt the view of a cosmic being whose body is composed of intergalactic spaces and swells as they swell. Or rather we must now say it keeps the same size, for he will not admit that it is he who has changed. Watching us for a few thousand million years, he sees us shrinking; atoms, animals, planets, even the galaxies, all shrink alike; only the intergalactic spaces remain the same. The earth spirals round the sun in an ever‑decreasing orbit. It would be absurd to treat its changing revolution as a constant unit of time. The cosmic being will naturally relate his units of length and time so that the velocity of light remains constant. Our years will then decrease in geometrical progression in the cosmic scale of time. On that scale man's life is becoming briefer; his threescore years and ten are an ever‑decreasing allowance. Owing to the property of geometrical progressions an infinite number of our years will add up to a finite cosmic time; so that what we should call the end of eternity is an ordinary finite date in the cosmic calendar. But on that date the universe has expanded to infinity in our reckoning, and we have shrunk to nothing in the reckoning of the cosmic being.
We walk the stage of life, performers of a drama for the benefit of the cosmic spectator. As the scenes proceed he notices that the actors are growing smaller and the action quicker. When the last act opens the curtain rises on midget actors rushing through their parts at frantic speed. Smaller and smaller. Faster and faster. One last microscopic blurr of intense agitation. And then nothing.
Negative energy consists of a negative number of Planck quanta of action. That is why the catastrophically self-accelerating increase in the negative energy of the universe's gravitational potential field cancels the multiplicity of the universe's Planck quanta of action (protons, electrons, etc.) by organizing them into a hierarchic unity, and shortly afterwards completely cancels the existence of all Planck quanta of action.
So, the universe is most alive just before it dies. The very fact that the extreme negentropy of terrestrial life exists indicates that the total energy of the universe has already become extremely negative and has organized all Planck quanta of action into a single hierarchy [ 23 ] with the planet Earth at its top:
The negative energy of the gravitational field is what allows negative entropy, equivalent to information, to grow, making the Universe a more complicated and interesting place.
Hydrogen is a light, odourless gas, which, given enough time, turns into people.
The 13.7-billion-year-long catastrophically self-accelerating hierarchization and shrinkage of the universe's protons is expected to finish by the end of the year 2026 AD—see Hyperbolic growth#Global macrodevelopment .
However, the universe's protons will not disappear at the end of the year 2026 AD. Any self-gravitating proton, converting its rest mass into radiant energy, radiates away only a half of that radiant energy but retains the other half. [ 24 ] [ 25 ]
Therefore, upon converting all of its rest mass into radiant energy, a proton will retain a half of that radiant energy circulating within itself and serving as a quasi rest mass. Those end-time protons will formally have rest masses but essentially will be massless "radiant spirits".
Thus, at the end of the year 2026 AD, the universe will become entirely made of light and will enter the last era of its existence, called the eschaton, during which the universe will consist of ghostlike atoms, ephemeralized to the point of being amenable to psychokinesis, frantically performing their danse macabre on the verge of instantaneous disappearance:
"It's this idea that we represent some kind of singularity, or that we announce the nearby presence of a singularity. That the evolution of life and cultural form and all that is clearly funneling toward something fairly unimaginable." —McKenna, Terence. A Weekend with Terence McKenna August 1993
"‘It all just seemed unbelievably boring to me,’ Penrose says. Then he found something interesting within it: at the very end of the universe, the only remaining particles will be massless. That means everything that exists will travel at the speed of light, making the flow of time meaningless." —Brooks, Michael. Roger Penrose: Non-stop cosmos, non-stop career New Scientist , 2010 03 10
"In other words, we end the whole thing. We collapse the state vector and everything goes into a state of novelty. What happens then I think is the universe becomes entirely made of light ." —McKenna, Terence. Appreciating Imagination 1997
"The conventions of relativity say that time slows down as one approaches the speed of light, but if one tries to imagine the point of view of a thing made of light, one must realize that what is never mentioned is that if one moves at the speed of light, there is no time whatsoever . There is an experience of time zero. <...> One has transited into the eternal mode. One is then apart from the moving image; one exists in the completion of eternity. I believe that this is what technology pushes toward." —McKenna, Terence. New Maps of Hyperspace 1984
"What exactly is immortality? It's the negation of time. How do we negate time? By getting close to, and perhaps matching, the speed of light. If you ARE light, everything is instant." — Time fUSION Anomaly, 1999 10 11
"And the angel that I saw standing upon the sea and upon the land lifted his hand up to heaven, and swore by him who lives forevermore, who created heaven and the things that are in it, and the sea and the things that are in it, that time shall be no more , but in the days of the voice of the seventh angel, when he begins to blow, even the mystery of God shall be finished, as he preached by his servants the prophets." — Revelation 10:5-7 New Matthew Bible
|
https://en.wikipedia.org/wiki/Zero-energy_universe
|
Zero-forcing (or null-steering) precoding is a method of spatial signal processing by which a multiple antenna transmitter can null the multiuser interference in a multi-user MIMO wireless communication system. [ 1 ] When the channel state information is perfectly known at the transmitter, the zero-forcing precoder is given by the pseudo-inverse of the channel matrix. Zero-forcing has been used in LTE mobile networks. [ 2 ]
In a multiple antenna downlink system comprising N_t transmit antenna access points and K single receive antenna users, such that K ≤ N_t, the received signal of user k is described as
where x = Σ_{i=1}^{K} √(P_i) s_i w_i is the N_t × 1 vector of transmitted symbols, n_k is the noise signal, h_k is the N_t × 1 channel vector and w_i is some N_t × 1 linear precoding vector. Here (·)^T is the matrix transpose, √(P_i) is the square root of the transmit power, and s_i is the message signal with zero mean and variance E(|s_i|²) = 1.
The above signal model can be more compactly re-written as
where
A zero-forcing precoder is defined as a precoder where w i {\displaystyle \mathbf {w} _{i}} intended for user i {\displaystyle i} is orthogonal to every channel vector h j {\displaystyle \mathbf {h} _{j}} associated with users j {\displaystyle j} where j ≠ i {\displaystyle j\neq i} . That is,
Thus the interference caused by the signal intended for one user is effectively nullified for the rest of the users by the zero-forcing precoder.
From the fact that each beam generated by zero-forcing precoder is orthogonal to all the other user channel vectors, one can rewrite the received signal as
The orthogonality condition can be expressed in matrix form as
where Q {\displaystyle \mathbf {Q} } is some K × K {\displaystyle K\times K} diagonal matrix. Typically, Q {\displaystyle \mathbf {Q} } is selected to be an identity matrix. This makes W {\displaystyle \mathbf {W} } the right Moore-Penrose pseudo-inverse of H T {\displaystyle \mathbf {H} ^{T}} given by
Given this zero-forcing precoder design, the received signal at each user is decoupled from each other as
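The construction above can be sketched numerically. The following is a minimal NumPy illustration (random complex channel, Q chosen as the identity): W is computed as the right Moore–Penrose pseudo-inverse of H^T, so the effective channel H^T W reduces to the identity matrix and each user's received signal is decoupled from the others.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, K = 4, 3   # transmit antennas, single-antenna users (K <= Nt)

# Rows of Ht are the user channel vectors h_k^T
# (i.i.d. complex Gaussian entries, purely for illustration).
Ht = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)

# Zero-forcing precoder: right Moore-Penrose pseudo-inverse of H^T,
# i.e. W = Ht^H (Ht Ht^H)^{-1}, so that Ht @ W = I (Q = identity).
W = Ht.conj().T @ np.linalg.inv(Ht @ Ht.conj().T)

# The effective channel is the K x K identity: each beam w_i is
# orthogonal to every other user's channel vector h_j (j != i).
effective = Ht @ W
print(np.round(np.abs(effective), 6))
```

The same precoder can be obtained directly with `np.linalg.pinv(Ht)`; the explicit formula is written out here to match the pseudo-inverse expression in the text.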
The aim is to quantify the amount of feedback resource required to maintain at most a given throughput gap between zero-forcing with perfect feedback and zero-forcing with limited feedback, i.e.,
Jindal showed that the required number of feedback bits for a spatially uncorrelated channel should be scaled according to the SNR of the downlink channel, which is given by: [3]
where M is the number of transmit antennas and ρ b , m {\displaystyle \rho _{b,m}} is the SNR of the downlink channel.
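A sketch of how such a scaling behaves, assuming the form B = (M − 1) log₂ ρ (roughly (M − 1) ρ_dB / 3 bits per user) commonly associated with Jindal's finite-rate-feedback analysis; the specific numbers below are illustrative only:

```python
import math

def feedback_bits(M, snr_db):
    """Per-user feedback bits needed to keep the ZF rate gap bounded.

    Assumes the scaling law B = (M - 1) * log2(rho), i.e. roughly
    (M - 1) * snr_db / 3 bits; this form is an assumption stated in
    the lead-in, not a formula quoted from the article text.
    """
    rho = 10 ** (snr_db / 10)   # SNR converted from dB to linear scale
    return (M - 1) * math.log2(rho)

# Feedback load grows linearly in SNR (dB) and in the antenna count M:
for snr_db in (10, 20, 30):
    print(snr_db, round(feedback_bits(M=4, snr_db=snr_db), 1))
```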
To feed back B bits through the uplink channel, the throughput of the uplink channel must be at least B
where b = Ω_FB T_FB is the feedback resource, given by the product of the feedback frequency resource and the feedback time resource, and ρ_FB is the SNR of the feedback channel. Then, the required feedback resource to satisfy ΔR ≤ log₂ g is
Note that, unlike the feedback-bits case, the required feedback resource is a function of both downlink and uplink channel conditions. It is reasonable to include the uplink channel status in the calculation of the feedback resource, since the uplink channel status determines the capacity, i.e., bits/second per unit frequency band (Hz), of the feedback link. Consider a case where the SNRs of the downlink and uplink are proportional, such that ρ_{b,m}/ρ_FB = C_{up,dn} is constant, and both SNRs are sufficiently high. Then, the feedback resource is only proportional to the number of transmit antennas
It follows from the above equation that the feedback resource b_FB need not scale with the SNR of the downlink channel, in near contradiction to the feedback-bits case. One hence sees that a whole-system analysis can reverse conclusions drawn from each simplified sub-problem.
If the transmitter knows the downlink channel state information (CSI) perfectly, ZF-precoding can approach the system capacity when the number of users is large. On the other hand, with limited channel state information at the transmitter (CSIT) the performance of ZF-precoding degrades with the accuracy of the CSIT. ZF-precoding requires significant feedback overhead, growing with the signal-to-noise ratio (SNR), in order to achieve the full multiplexing gain. [3] Inaccurate CSIT results in significant throughput loss because of residual multiuser interference, which remains because it cannot be nulled with beams generated from imperfect CSIT.
|
https://en.wikipedia.org/wiki/Zero-forcing_precoding
|
A cambered aerofoil generates no lift when it is moving parallel to an axis called the zero-lift axis (or the zero-lift line). When the angle of attack on an aerofoil is measured relative to the zero-lift axis, it is true to say the lift coefficient is zero when the angle of attack is zero. [1] For this reason, on a cambered aerofoil the zero-lift line is better than the chord line for describing the angle of attack. [2]
When a symmetric aerofoil is moving parallel to its chord line, zero lift is generated. However, when a cambered aerofoil is moving parallel to its chord line, lift is generated. For symmetric aerofoils, the chord line and the zero-lift line are the same. [3]
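A small sketch of this convention, using the thin-aerofoil estimate C_L = 2π(α − α₀) as a simplifying assumption (α₀ is the zero-lift angle; the −4° value is purely illustrative):

```python
import math

def lift_coefficient(alpha_deg, alpha_zero_lift_deg):
    """Thin-aerofoil estimate C_L = 2*pi*(alpha - alpha_0), in radians.

    Measuring the angle of attack from the zero-lift line makes C_L
    vanish exactly when that angle is zero, as the text states.
    This slope (2*pi per radian) is an idealized model assumption.
    """
    alpha = math.radians(alpha_deg - alpha_zero_lift_deg)
    return 2 * math.pi * alpha

# Symmetric aerofoil: zero-lift line coincides with the chord line,
# so motion parallel to the chord gives zero lift.
print(lift_coefficient(0.0, 0.0))    # 0.0

# Cambered aerofoil with a (hypothetical) zero-lift angle of -4 deg:
# motion parallel to the chord line (alpha = 0 deg) still gives lift.
print(round(lift_coefficient(0.0, -4.0), 3))
```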
|
https://en.wikipedia.org/wiki/Zero-lift_axis
|
A zero-mode waveguide is an optical waveguide that guides light energy into a volume that is small in all dimensions compared to the wavelength of the light.
Zero-mode waveguides have been developed for rapid parallel sensing of zeptolitre sample volumes, as applied to gene sequencing , by Pacific Biosciences (previously named Nanofluidics, Inc.) [ 1 ]
A waveguide operated at frequencies lower than its cutoff frequency (wavelengths longer than its cutoff wavelength ) and used as a precision attenuator is also known as a "waveguide below-cutoff attenuator." [ 2 ]
The zero-mode waveguide is made possible by creating circular or rectangular nanoapertures in an aluminium layer using a focused ion beam. [3]
The zero-mode waveguide can also enhance fluorescence signals via surface plasmons generated at metal–dielectric interfaces. [4] Surface plasmon generation localizes and enhances the field, and also changes the local density of states (LDOS) inside the cavity, which increases the Purcell factor of analyte molecules inside the zero-mode waveguide. [5]
The zero-mode waveguide is very useful for ultraviolet autofluorescence spectroscopy on tryptophan-carrying proteins like beta-galactosidase. [6] With further modification of the zero-mode waveguide with a conical reflector, it is possible to study the dynamic behaviour of smaller proteins like streptavidin, which carries 24 tryptophans. [7] The modified zero-mode waveguide with a conical reflector can be further optimized to enhance the signal-to-noise ratio and reach the ultimate sensitivity of single-tryptophan proteins like TNase. [8]
This optics -related article is a stub . You can help Wikipedia by expanding it .
This biophysics -related article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Zero-mode_waveguide
|
The zero-phonon line and the phonon sideband jointly constitute the line shape of individual light absorbing and emitting molecules ( chromophores ) embedded into a transparent solid matrix. When the host matrix contains many chromophores, each will contribute a zero- phonon line and a phonon sideband to the absorption and emission spectra . The spectra originating from a collection of identical chromophores in a matrix is said to be inhomogeneously broadened because each chromophore is surrounded by a somewhat different matrix environment which modifies the energy required for an electronic transition. In an inhomogeneous distribution of chromophores, individual zero-phonon line and phonon sideband positions are therefore shifted and overlapping.
Figure 1 shows the typical line shape for electronic transitions of individual chromophores in a solid matrix. The zero-phonon line is located at a frequency ω’ determined by the intrinsic difference in energy levels between ground and excited state as well as by the local environment. The phonon sideband is shifted to a higher frequency in absorption and to a lower frequency in fluorescence. The frequency gap Δ between the zero-phonon line and the peak of the phonon side band is determined by Franck–Condon principles .
The distribution of intensity between the zero-phonon line and the phonon side band is strongly dependent on temperature. At room temperature there is enough thermal energy to excite many phonons and the probability of zero-phonon transition is close to zero. For organic chromophores in organic matrices, the probability of a zero-phonon electronic transition only becomes likely below about 40 kelvins , but depends also on the strength of coupling between the chromophore and the host lattice.
The transition between the ground and the excited state is based on the Franck–Condon principle , that the electronic transition is very fast compared with the motion in the lattice. The energy transitions can then be symbolized by vertical arrows between the ground and excited state, that is, there is no motion along the configurational coordinates during the transition. Figure 2 is an energy diagram for interpreting absorption and emission with and without phonons in terms of the configurational coordinate q i {\displaystyle q_{i}} . The energy transitions originate at the lowest phonon energy level of the electronic states. As represented in the figure, the largest wavefunction overlap (and therefore largest transition probability) occurs when the photon energy is equal to the energy difference between the two electronic states ( E 1 − E 0 {\displaystyle E_{1}-E_{0}} ) plus three quanta of lattice mode i {\displaystyle i} vibrational energy ( ℏ Ω i {\displaystyle \hbar \Omega _{i}} ). This three-phonon transition is mirrored in emission when the excited state quickly decays to its zero-point lattice vibration level by means of a radiationless process, and from there to the ground state via photon emission. The zero-phonon transition is depicted as having a lower wavefunction overlap and therefore a lower transition probability.
In addition to the Franck-Condon assumption, three other approximations are commonly assumed and are implicit in the figures. The first is that each lattice vibrational mode is well described by a quantum harmonic oscillator . This approximation is implied in the parabolic shape of the potential wells of Figure 2, and in the equal energy spacing between phonon energy levels. The second approximation is that only the lowest (zero-point) lattice vibration is excited. This is called the low temperature approximation and means that electronic transitions do not originate from any of the higher phonon levels. The third approximation is that the interaction between the chromophore and the lattice is the same in both the ground and the excited state. Specifically, the harmonic oscillator potential is equal in both states. This approximation, called linear coupling, is represented in Figure 2 by two equally shaped parabolic potentials and by equally spaced phonon energy levels in both the ground and excited states.
The strength of the zero-phonon transition arises in the superposition of all of the lattice modes. Each lattice mode m {\displaystyle m} has a characteristic vibrational frequency Ω m {\displaystyle {\Omega }_{m}} which leads to an energy difference between phonons ℏ Ω m {\displaystyle \hbar {\Omega }_{m}} . When the transition probabilities for all the modes are summed, the zero-phonon transitions always add at the electronic origin ( E 1 − E 0 {\displaystyle E_{1}-E_{0}} ), while the transitions with phonons contribute at a distribution of energies. Figure 3 illustrates the superposition of transition probabilities of several lattice modes. The phonon transition contributions from all lattice modes constitute the phonon sideband.
The frequency separation between the maxima of the absorption and fluorescence phonon sidebands is the phonon contribution to the Stokes’ shift .
The shape of the zero-phonon line is Lorentzian with a width determined by the excited state lifetime T 10 according to the Heisenberg uncertainty principle . Without the influence of the lattice, the natural line width (full width at half maximum) of the chromophore is γ 0 = 1/ T 10 . The lattice reduces the lifetime of the excited state by introducing radiationless decay mechanisms. At absolute zero the lifetime of the excited state influenced by the lattice is T 1 . Above absolute zero, thermal motions will introduce random perturbations to the chromophores local environment. These perturbations shift the energy of the electronic transition, introducing a temperature dependent broadening of the line width. The measured width of a single chromophore's zero phonon line, the homogeneous line width, is then γ h ( T ) ≥ 1/ T 1 .
The line shape of the phonon side band is that of a Poisson distribution, as it expresses a discrete number of events (electronic transitions accompanied by phonons) during a period of time. At higher temperatures, or when the chromophore interacts strongly with the matrix, the probability of multiphonon transitions is high and the phonon side band approximates a Gaussian distribution.
The distribution of intensity between the zero-phonon line and the phonon sideband is characterized by the Debye-Waller factor α.
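For a single harmonic lattice mode, this factor can be sketched with the standard Huang–Rhys model, α = exp(−S(2n̄ + 1)), where S is the Huang–Rhys coupling strength and n̄ the thermal phonon occupation; this single-mode formula and the parameter values below are assumptions for illustration, not taken from the text.

```python
import math

def debye_waller(S, phonon_energy_eV, T):
    """Zero-phonon-line weight alpha for a single harmonic mode.

    Assumes the standard single-mode model alpha = exp(-S * (2*n + 1)),
    with S the Huang-Rhys factor and n the Bose occupation number.
    At T = 0 this reduces to alpha = exp(-S).
    """
    k_B = 8.617333e-5   # Boltzmann constant in eV/K
    if T == 0:
        return math.exp(-S)
    n = 1.0 / math.expm1(phonon_energy_eV / (k_B * T))
    return math.exp(-S * (2 * n + 1))

# Weak coupling (S = 0.5) to a 20 meV phonon (illustrative values):
# the zero-phonon line carries most of the intensity only at low T,
# consistent with the temperature dependence described in the text.
for T in (0, 40, 300):
    print(T, round(debye_waller(0.5, 0.020, T), 3))
```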
The zero-phonon line is an optical analogy to the Mössbauer lines , which originate in the recoil-free emission or absorption of gamma rays from the nuclei of atoms bound in a solid matrix. In the case of the optical zero-phonon line, the position of the chromophore is the physical parameter that may be perturbed, whereas in the gamma transition, the momenta of the atoms may be changed. More technically, the key to the analogy is the symmetry between position and momentum in the Hamiltonian of the quantum harmonic oscillator . Both position and momentum contribute in the same way (quadratically) to the total energy.
|
https://en.wikipedia.org/wiki/Zero-phonon_line_and_phonon_sideband
|
Zero-point energy ( ZPE ) is the lowest possible energy that a quantum mechanical system may have. Unlike in classical mechanics , quantum systems constantly fluctuate in their lowest energy state as described by the Heisenberg uncertainty principle . [ 1 ] Therefore, even at absolute zero , atoms and molecules retain some vibrational motion. Apart from atoms and molecules , the empty space of the vacuum also has these properties. According to quantum field theory , the universe can be thought of not as isolated particles but continuous fluctuating fields : matter fields, whose quanta are fermions (i.e., leptons and quarks ), and force fields , whose quanta are bosons (e.g., photons and gluons ). All these fields have zero-point energy. [ 2 ] These fluctuating zero-point fields lead to a kind of reintroduction of an aether in physics [ 1 ] [ 3 ] since some systems can detect the existence of this energy. [ citation needed ] However, this aether cannot be thought of as a physical medium if it is to be Lorentz invariant such that there is no contradiction with Albert Einstein ’s theory of special relativity . [ 1 ]
The notion of a zero-point energy is also important for cosmology , and physics currently lacks a full theoretical model for understanding zero-point energy in this context; in particular, the discrepancy between theorized and observed vacuum energy in the universe is a source of major contention. [ 4 ] Yet according to Einstein's theory of general relativity , any such energy would gravitate, and the experimental evidence from the expansion of the universe , dark energy and the Casimir effect shows any such energy to be exceptionally weak. One proposal that attempts to address this issue is to say that the fermion field has a negative zero-point energy, while the boson field has positive zero-point energy and thus these energies somehow cancel out each other. [ 5 ] [ 6 ] This idea would be true if supersymmetry were an exact symmetry of nature ; however, the Large Hadron Collider at CERN has so far found no evidence to support it. Moreover, it is known that if supersymmetry is valid at all, it is at most a broken symmetry , only true at very high energies, and no one has been able to show a theory where zero-point cancellations occur in the low-energy universe we observe today. [ 6 ] This discrepancy is known as the cosmological constant problem and it is one of the greatest unsolved mysteries in physics . Many physicists believe that "the vacuum holds the key to a full understanding of nature". [ 7 ]
The term zero-point energy (ZPE) is a translation from the German Nullpunktsenergie . [ 8 ] Sometimes used interchangeably with it are the terms zero-point radiation and ground state energy . The term zero-point field ( ZPF ) can be used when referring to a specific vacuum field, for instance the QED vacuum which specifically deals with quantum electrodynamics (e.g., electromagnetic interactions between photons, electrons and the vacuum) or the QCD vacuum which deals with quantum chromodynamics (e.g., color charge interactions between quarks, gluons and the vacuum). A vacuum can be viewed not as empty space but as the combination of all zero-point fields. In quantum field theory this combination of fields is called the vacuum state, its associated zero-point energy is called the vacuum energy and the average energy value is called the vacuum expectation value (VEV) also called its condensate .
In classical mechanics all particles can be thought of as having some energy made up of their potential energy and kinetic energy . Temperature , for example, arises from the intensity of random particle motion caused by kinetic energy (known as Brownian motion ). As temperature is reduced to absolute zero , it might be thought that all motion ceases and particles come completely to rest. In fact, however, kinetic energy is retained by particles even at the lowest possible temperature. The random motion corresponding to this zero-point energy never vanishes; it is a consequence of the uncertainty principle of quantum mechanics . [ citation needed ]
The uncertainty principle states that no object can ever have precise values of position and velocity simultaneously. The total energy of a quantum mechanical object (potential and kinetic) is described by its Hamiltonian, which also describes the system as a harmonic oscillator, or wave function, that fluctuates between various energy states (see wave–particle duality). All quantum mechanical systems undergo fluctuations even in their ground state, a consequence of their wave-like nature. The uncertainty principle requires every quantum mechanical system to have a fluctuating zero-point energy greater than the minimum of its classical potential well. This results in motion even at absolute zero. For example, liquid helium does not freeze under atmospheric pressure regardless of temperature due to its zero-point energy.
Given the equivalence of mass and energy expressed by Albert Einstein 's E = mc 2 , any point in space that contains energy can be thought of as having mass to create particles. Modern physics has developed quantum field theory (QFT) to understand the fundamental interactions between matter and forces; it treats every single point of space as a quantum harmonic oscillator . According to QFT the universe is made up of matter fields, whose quanta are fermions (i.e. leptons and quarks), and force fields, whose quanta are bosons (e.g. photons and gluons ). All these fields have zero-point energy. [ 2 ] Recent experiments support the idea that particles themselves can be thought of as excited states of the underlying quantum vacuum , and that all properties of matter are merely vacuum fluctuations arising from interactions of the zero-point field. [ 9 ]
The idea that "empty" space can have an intrinsic energy associated with it, and that there is no such thing as a "true vacuum" is seemingly unintuitive. It is often argued that the entire universe is completely bathed in the zero-point radiation, and as such it can add only some constant amount to calculations. Physical measurements will therefore reveal only deviations from this value. [ 10 ] For many practical calculations zero-point energy is dismissed by fiat in the mathematical model as a term that has no physical effect. Such treatment causes problems however, as in Einstein's theory of general relativity the absolute energy value of space is not an arbitrary constant and gives rise to the cosmological constant . For decades most physicists assumed that there was some undiscovered fundamental principle that will remove the infinite zero-point energy (discussed further below) and make it completely vanish. If the vacuum has no intrinsic, absolute value of energy it will not gravitate. It was believed that as the universe expands from the aftermath of the Big Bang , the energy contained in any unit of empty space will decrease as the total energy spreads out to fill the volume of the universe; galaxies and all matter in the universe should begin to decelerate. This possibility was ruled out in 1998 by the discovery that the expansion of the universe is not slowing down but is in fact accelerating, meaning empty space does indeed have some intrinsic energy. The discovery of dark energy is best explained by zero-point energy, though it still remains a mystery as to why the value appears to be so small compared to the huge value obtained through theory – the cosmological constant problem . [ 5 ]
Many physical effects attributed to zero-point energy have been experimentally verified, such as spontaneous emission , Casimir force , Lamb shift , magnetic moment of the electron and Delbrück scattering . [ 11 ] [ 12 ] These effects are usually called "radiative corrections". [ 13 ] In more complex nonlinear theories (e.g. QCD) zero-point energy can give rise to a variety of complex phenomena such as multiple stable states , symmetry breaking , chaos and emergence . Active areas of research include the effects of virtual particles, [ 14 ] quantum entanglement , [ 15 ] the difference (if any) between inertial and gravitational mass , [ 16 ] variation in the speed of light , [ 17 ] a reason for the observed value of the cosmological constant [ 18 ] and the nature of dark energy. [ 19 ] [ 20 ]
Zero-point energy evolved from historical ideas about the vacuum. To Aristotle the vacuum was τὸ κενόν, "the empty"; i.e., space independent of body. He believed this concept violated basic physical principles and asserted that the elements of fire, air, earth, and water were not made of atoms, but were continuous. To the atomists the concept of emptiness had absolute character: it was the distinction between existence and nonexistence. [21] Debate about the characteristics of the vacuum was largely confined to the realm of philosophy; it was not until much later, with the beginning of the Renaissance, that Otto von Guericke invented the first vacuum pump and the first testable scientific ideas began to emerge. It was thought that a totally empty volume of space could be created by simply removing all gases. This was the first generally accepted concept of the vacuum. [22]
Late in the 19th century, however, it became apparent that the evacuated region still contained thermal radiation . The existence of the aether as a substitute for a true void was the most prevalent theory of the time. According to the successful electromagnetic aether theory based upon Maxwell's electrodynamics , this all-encompassing aether was endowed with energy and hence very different from nothingness. The fact that electromagnetic and gravitational phenomena were transmitted in empty space was considered evidence that their associated aethers were part of the fabric of space itself. However Maxwell noted that for the most part these aethers were ad hoc :
To those who maintained the existence of a plenum as a philosophical principle, nature's abhorrence of a vacuum was a sufficient reason for imagining an all-surrounding aether ... Aethers were invented for the planets to swim in, to constitute electric atmospheres and magnetic effluvia, to convey sensations from one part of our bodies to another, and so on, till a space had been filled three or four times with aethers. [ 23 ]
Moreover, the results of the Michelson–Morley experiment in 1887 were the first strong evidence that the then-prevalent aether theories were seriously flawed, and initiated a line of research that eventually led to special relativity, which ruled out the idea of a stationary aether altogether. To scientists of the period, it seemed that a true vacuum in space might be created by cooling and thus eliminating all radiation or energy. From this idea evolved the second concept of achieving a real vacuum: cool a region of space down to absolute zero temperature after evacuation. Absolute zero was technically impossible to achieve in the 19th century, so the debate remained unresolved.
In 1900, Max Planck derived the average energy ε of a single energy radiator , e.g., a vibrating atomic unit, as a function of absolute temperature: [ 24 ] ε = h ν e h ν / ( k T ) − 1 , {\displaystyle \varepsilon ={\frac {h\nu }{e^{h\nu /(kT)}-1}}\,,} where h is the Planck constant , ν is the frequency , k is the Boltzmann constant , and T is the absolute temperature . The zero-point energy makes no contribution to Planck's original law, as its existence was unknown to Planck in 1900. [ 25 ]
The concept of zero-point energy was developed by Max Planck in Germany in 1911 as a corrective term added to a zero-grounded formula developed in his original quantum theory in 1900. [ 26 ]
In 1912, Max Planck published the first journal article to describe the discontinuous emission of radiation, based on the discrete quanta of energy. [ 27 ] In Planck's "second quantum theory" resonators absorbed energy continuously, but emitted energy in discrete energy quanta only when they reached the boundaries of finite cells in phase space, where their energies became integer multiples of hν . This theory led Planck to his new radiation law, but in this version energy resonators possessed a zero-point energy, the smallest average energy a resonator could take on. Planck's radiation equation contained a residual energy factor, one hν / 2 , as an additional term dependent on the frequency ν , which was greater than zero (where h is the Planck constant). It is therefore widely agreed that "Planck's equation marked the birth of the concept of zero-point energy." [ 28 ] In a series of papers from 1911 to 1913, [ 29 ] Planck found the average energy of an oscillator to be: [ 26 ] [ 30 ] ε = h ν 2 + h ν e h ν / ( k T ) − 1 . {\displaystyle \varepsilon ={\frac {h\nu }{2}}+{\frac {h\nu }{e^{h\nu /(kT)}-1}}~.}
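The difference between the 1900 and 1911–1913 forms is easy to check numerically: as T → 0 the original expression vanishes, while the corrected one tends to the zero-point value hν/2. The 10 THz oscillator frequency below is illustrative.

```python
import math

h = 6.62607015e-34   # Planck constant, J s
k = 1.380649e-23     # Boltzmann constant, J/K

def planck_1900(nu, T):
    """Planck's original (1900) average oscillator energy."""
    return h * nu / math.expm1(h * nu / (k * T))

def planck_1913(nu, T):
    """Planck's second-theory form: the same law plus the h*nu/2 term."""
    return h * nu / 2 + planck_1900(nu, T)

nu = 1e13            # a 10 THz oscillator (illustrative)
zpe = h * nu / 2     # zero-point energy of this oscillator

# Near absolute zero the 1900 form vanishes,
# while the 1911-1913 form tends to h*nu/2:
print(planck_1900(nu, 1.0))          # ~0
print(planck_1913(nu, 1.0) / zpe)    # ~1
```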
Soon, the idea of zero-point energy attracted the attention of Albert Einstein and his assistant Otto Stern . [ 31 ] In 1913 they published a paper that attempted to prove the existence of zero-point energy by calculating the specific heat of hydrogen gas and comparing it with the experimental data. However, although they initially believed they had succeeded, they retracted their support for the idea shortly after publication, because they found that Planck's second theory might not apply to their example. In a letter to Paul Ehrenfest of the same year Einstein declared zero-point energy "dead as a doornail". [ 32 ] Zero-point energy was also invoked by Peter Debye , [ 33 ] who noted that the zero-point energy of the atoms of a crystal lattice would cause a reduction in the intensity of the diffracted radiation in X-ray diffraction even as the temperature approached absolute zero. In 1916 Walther Nernst proposed that empty space was filled with zero-point electromagnetic radiation . [ 34 ] With the development of general relativity Einstein found the energy density of the vacuum to contribute towards a cosmological constant in order to obtain static solutions to his field equations; the idea that empty space, or the vacuum, could have some intrinsic energy associated with it had returned, with Einstein stating in 1920:
There is a weighty argument to be adduced in favour of the aether hypothesis. To deny the aether is ultimately to assume that empty space has no physical qualities whatever. The fundamental facts of mechanics do not harmonize with this view ... according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an aether. According to the general theory of relativity space without aether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this aether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it. [ 35 ] [ 36 ]
Kurt Bennewitz [ de ] and Francis Simon (1923), [ 37 ] who worked at Walther Nernst 's laboratory in Berlin, studied the melting process of chemicals at low temperatures. Their calculations of the melting points of hydrogen , argon and mercury led them to conclude that the results provided evidence for a zero-point energy. Moreover, they suggested correctly, as was later verified by Simon (1934), [ 38 ] [ 39 ] that this quantity was responsible for the difficulty in solidifying helium even at absolute zero. In 1924 Robert Mulliken [ 40 ] provided direct evidence for the zero-point energy of molecular vibrations by comparing the band spectrum of ¹⁰BO and ¹¹BO: the isotopic difference in the transition frequencies between the ground vibrational states of two different electronic levels would vanish if there were no zero-point energy, in contrast to the observed spectra. Then just a year later in 1925, [ 41 ] with the development of matrix mechanics in Werner Heisenberg 's article " Quantum theoretical re-interpretation of kinematic and mechanical relations " the zero-point energy was derived from quantum mechanics. [ 42 ]
In 1913 Niels Bohr had proposed what is now called the Bohr model of the atom, [ 43 ] [ 44 ] [ 45 ] but despite this it remained a mystery as to why electrons do not fall into their nuclei. According to classical ideas, the fact that an accelerating charge loses energy by radiating implied that an electron should spiral into the nucleus and that atoms should not be stable. This problem of classical mechanics was nicely summarized by James Hopwood Jeans in 1915: "There would be a very real difficulty in supposing that the (force) law 1 / r 2 held down to the zero values of r . For the force between two charges at zero distance would be infinite; we should have charges of opposite sign continually rushing together and, when once together, no force would be adequate to separate them. [...] Thus the matter in the universe would tend to shrink into nothing or to diminish indefinitely in size." [ 46 ] The resolution to this puzzle came in 1926 when Erwin Schrödinger introduced the Schrödinger equation . [ 47 ] This equation explained the new, non-classical fact that an electron confined to be close to a nucleus would necessarily have a large kinetic energy so that the minimum total energy (kinetic plus potential) actually occurs at some positive separation rather than at zero separation; in other words, zero-point energy is essential for atomic stability. [ 48 ]
In 1926, Pascual Jordan [ 49 ] published the first attempt to quantize the electromagnetic field. In a joint paper with Max Born and Werner Heisenberg he considered the field inside a cavity as a superposition of quantum harmonic oscillators. In his calculation he found that in addition to the "thermal energy" of the oscillators there also had to exist an infinite zero-point energy term. He was able to obtain the same fluctuation formula that Einstein had obtained in 1909. [ 50 ] However, Jordan did not think that his infinite zero-point energy term was "real", writing to Einstein that "it is just a quantity of the calculation having no direct physical meaning". [ 51 ] Jordan found a way to get rid of the infinite term, publishing a joint work with Pauli in 1928, [ 52 ] performing what has been called "the first infinite subtraction, or renormalisation, in quantum field theory". [ 53 ]
Building on the work of Heisenberg and others, Paul Dirac 's theory of emission and absorption (1927) [ 54 ] was the first application of the quantum theory of radiation. Dirac's work was seen as crucially important to the emerging field of quantum mechanics; it dealt directly with the process in which "particles" are actually created: spontaneous emission . [ 55 ] Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. The theory showed that spontaneous emission depends upon the zero-point energy fluctuations of the electromagnetic field in order to get started. [ 56 ] [ 57 ] In a process in which a photon is annihilated (absorbed), the photon can be thought of as making a transition into the vacuum state. Similarly, when a photon is created (emitted), it is occasionally useful to imagine that the photon has made a transition out of the vacuum state. In the words of Dirac: [ 54 ]
The light-quantum has the peculiarity that it apparently ceases to exist when it is in one of its stationary states, namely, the zero state, in which its momentum and therefore also its energy, are zero. When a light-quantum is absorbed it can be considered to jump into this zero state, and when one is emitted it can be considered to jump from the zero state to one in which it is physically in evidence, so that it appears to have been created. Since there is no limit to the number of light-quanta that may be created in this way, we must suppose that there are an infinite number of light quanta in the zero state ...
Contemporary physicists, when asked to give a physical explanation for spontaneous emission, generally invoke the zero-point energy of the electromagnetic field. This view was popularized by Victor Weisskopf who in 1935 wrote: [ 58 ]
From quantum theory there follows the existence of so called zero-point oscillations; for example each oscillator in its lowest state is not completely at rest but always is moving about its equilibrium position. Therefore electromagnetic oscillations also can never cease completely. Thus the quantum nature of the electromagnetic field has as its consequence zero point oscillations of the field strength in the lowest energy state, in which there are no light quanta in space ... The zero point oscillations act on an electron in the same way as ordinary electrical oscillations do. They can change the eigenstate of the electron, but only in a transition to a state with the lowest energy, since empty space can only take away energy, and not give it up. In this way spontaneous radiation arises as a consequence of the existence of these unique field strengths corresponding to zero point oscillations. Thus spontaneous radiation is induced radiation of light quanta produced by zero point oscillations of empty space
This view was also later supported by Theodore Welton (1948), [ 59 ] who argued that spontaneous emission "can be thought of as forced emission taking place under the action of the fluctuating field". This new theory, which Dirac coined quantum electrodynamics (QED), predicted a fluctuating zero-point or "vacuum" field existing even in the absence of sources.
Throughout the 1940s improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom , now known as the Lamb shift, [ 60 ] and measurement of the magnetic moment of the electron. [ 61 ] Discrepancies between these experiments and Dirac's theory led to the idea of incorporating renormalisation into QED to deal with zero-point infinities. Renormalization was originally developed by Hans Kramers [ 62 ] and also Victor Weisskopf (1936), [ 63 ] and first successfully applied to calculate a finite value for the Lamb shift by Hans Bethe (1947). [ 64 ] As with spontaneous emission, these effects can in part be understood in terms of interactions with the zero-point field. [ 65 ] [ 11 ] But in light of renormalisation being able to remove some zero-point infinities from calculations, not all physicists were comfortable attributing zero-point energy any physical meaning, viewing it instead as a mathematical artifact that might one day be eliminated. In Wolfgang Pauli 's 1945 Nobel lecture [ 66 ] he made clear his opposition to the idea of zero-point energy, stating "It is clear that this zero-point energy has no physical reality".
In 1948 Hendrik Casimir [ 67 ] [ 68 ] showed that one consequence of the zero-point field is an attractive force between two uncharged, perfectly conducting parallel plates, the so-called Casimir effect. At the time, Casimir was studying the properties of colloidal solutions . These are viscous materials, such as paint and mayonnaise, that contain micron-sized particles in a liquid matrix. The properties of such solutions are determined by van der Waals forces – short-range, attractive forces that exist between neutral atoms and molecules. One of Casimir's colleagues, Theo Overbeek, realized that the theory that was used at the time to explain van der Waals forces, which had been developed by Fritz London in 1930, [ 69 ] [ 70 ] did not properly explain the experimental measurements on colloids. Overbeek therefore asked Casimir to investigate the problem. Working with Dirk Polder , Casimir discovered that the interaction between two neutral molecules could be correctly described only if the fact that light travels at a finite speed was taken into account. [ 71 ] Soon afterwards, following a conversation with Bohr about zero-point energy, Casimir noticed that this result could be interpreted in terms of vacuum fluctuations. He then asked himself what would happen if there were two mirrors – rather than two molecules – facing each other in a vacuum. It was this work that led to his prediction of an attractive force between reflecting plates. The work by Casimir and Polder opened up the way to a unified theory of van der Waals and Casimir forces and a smooth continuum between the two phenomena. This was done by Lifshitz (1956) [ 72 ] [ 73 ] [ 74 ] in the case of plane parallel dielectric plates . The generic name for both van der Waals and Casimir forces is dispersion forces, because both are caused by dispersions of the operator of the dipole moment. [ 75 ] The role of relativistic (retardation) effects becomes dominant at separations of the order of a hundred nanometres.
In 1951 Herbert Callen and Theodore Welton [ 76 ] proved the quantum fluctuation-dissipation theorem (FDT) which was originally formulated in classical form by Nyquist (1928) [ 77 ] as an explanation for observed Johnson noise in electric circuits. [ 78 ] The fluctuation-dissipation theorem showed that when something dissipates energy, in an effectively irreversible way, a connected heat bath must also fluctuate. The fluctuations and the dissipation go hand in hand; it is impossible to have one without the other. The implication of the FDT is that the vacuum can be treated as a heat bath coupled to a dissipative force, and as such energy could, in part, be extracted from the vacuum for potentially useful work. [ 79 ] FDT has been shown to be true experimentally under certain quantum, non-classical, conditions. [ 80 ] [ 81 ] [ 82 ]
In 1963 the Jaynes–Cummings model [ 83 ] was developed describing the system of a two-level atom interacting with a quantized field mode (i.e. the vacuum) within an optical cavity. It gave nonintuitive predictions, such as that an atom's spontaneous emission could be driven by a field of effectively constant frequency ( Rabi frequency ). In the 1970s experiments were being performed to test aspects of quantum optics and showed that the rate of spontaneous emission of an atom could be controlled using reflecting surfaces. [ 84 ] [ 85 ] These results were at first regarded with suspicion in some quarters: it was argued that no modification of a spontaneous emission rate would be possible; after all, how can the emission of a photon be affected by an atom's environment when the atom can only "see" its environment by emitting a photon in the first place? These experiments gave rise to cavity quantum electrodynamics (CQED), the study of effects of mirrors and cavities on radiative corrections. Spontaneous emission can be suppressed (or "inhibited") [ 86 ] [ 87 ] or amplified. Amplification was first predicted by Purcell in 1946 [ 88 ] (the Purcell effect ) and has been experimentally verified. [ 89 ] This phenomenon can be understood, partly, in terms of the action of the vacuum field on the atom. [ 90 ]
Zero-point energy is fundamentally related to the Heisenberg uncertainty principle. [ 91 ] Roughly speaking, the uncertainty principle states that complementary variables (such as a particle's position and momentum , or a field's value and derivative at a point in space) cannot simultaneously be specified precisely by any given quantum state. In particular, there cannot exist a state in which the system simply sits motionless at the bottom of its potential well, for then its position and momentum would both be completely determined to arbitrarily great precision. Therefore, the lowest-energy state (the ground state) of the system must have a distribution in position and momentum that satisfies the uncertainty principle, which implies its energy must be greater than the minimum of the potential well.
Near the bottom of a potential well , the Hamiltonian of a general system (the quantum-mechanical operator giving its energy) can be approximated as a quantum harmonic oscillator , {\displaystyle {\hat {H}}=V_{0}+{\tfrac {1}{2}}k\left({\hat {x}}-x_{0}\right)^{2}+{\frac {1}{2m}}{\hat {p}}^{2}\,,} where V 0 is the minimum of the classical potential well.
The uncertainty principle tells us that {\displaystyle {\sqrt {\left\langle \left({\hat {x}}-x_{0}\right)^{2}\right\rangle }}{\sqrt {\left\langle {\hat {p}}^{2}\right\rangle }}\geq {\frac {\hbar }{2}}\,,} making the expectation values of the kinetic and potential terms above satisfy {\displaystyle \left\langle {\tfrac {1}{2}}k\left({\hat {x}}-x_{0}\right)^{2}\right\rangle \left\langle {\frac {1}{2m}}{\hat {p}}^{2}\right\rangle \geq \left({\frac {\hbar }{4}}\right)^{2}{\frac {k}{m}}\,.}
The expectation value of the energy must therefore be at least
{\displaystyle \left\langle {\hat {H}}\right\rangle \geq V_{0}+{\frac {\hbar }{2}}{\sqrt {\frac {k}{m}}}=V_{0}+{\frac {\hbar \omega }{2}}}
where ω = √( k / m ) is the angular frequency at which the system oscillates.
A more thorough treatment, showing that the energy of the ground state actually saturates this bound and is exactly E 0 = V 0 + ħω / 2 , requires solving for the ground state of the system.
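The saturation of this bound can be illustrated with a simple variational calculation. In the sketch below (units with ħ = m = ω = k = 1 are an assumption for illustration, as are the helper names), a minimum-uncertainty Gaussian trial state with ⟨(x − x₀)²⟩ = s has ⟨p²⟩ = 1/(4s); scanning over the width s recovers the minimum energy ħω/2 = 0.5 at s = 1/2:

```python
def trial_energy(s):
    # E(s) = <p^2>/(2m) + (1/2) k <x^2>, with hbar = m = omega = k = 1
    # and <p^2> = 1/(4 s) for a minimum-uncertainty Gaussian state.
    return 1.0 / (8.0 * s) + 0.5 * s

# Scan the width parameter s and locate the minimum energy.
energies = [(trial_energy(0.001 * i), 0.001 * i) for i in range(1, 2001)]
e_min, s_min = min(energies)
print(e_min, s_min)  # minimum energy 1/2 (= hbar*omega/2) at width s = 1/2
```

The trade-off is visible directly in `trial_energy`: squeezing the state (small s) inflates the kinetic term, spreading it (large s) inflates the potential term, and neither can be driven to zero simultaneously.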
The idea of a quantum harmonic oscillator and its associated energy can apply to either an atom or a subatomic particle. In ordinary atomic physics, the zero-point energy is the energy associated with the ground state of the system. The professional physics literature tends to measure frequency, as denoted by ν above, using angular frequency , denoted with ω and defined by ω = 2 πν . This leads to a convention of writing the Planck constant h with a bar through its top ( ħ ) to denote the quantity h / 2π . In these terms, an example of zero-point energy is the above E = ħω / 2 associated with the ground state of the quantum harmonic oscillator. In quantum mechanical terms, the zero-point energy is the expectation value of the Hamiltonian of the system in the ground state.
If more than one ground state exists, they are said to be degenerate . Many systems have degenerate ground states. Degeneracy occurs whenever there exists a unitary operator which acts non-trivially on a ground state and commutes with the Hamiltonian of the system.
According to the third law of thermodynamics , a system at absolute zero temperature exists in its ground state; thus, its entropy is determined by the degeneracy of the ground state. Many systems, such as a perfect crystal lattice , have a unique ground state and therefore have zero entropy at absolute zero. It is also possible for the highest excited state to have absolute zero temperature for systems that exhibit negative temperature .
The wave function of the ground state of a particle in a one-dimensional well is a half-period sine wave which goes to zero at the two edges of the well. The energy of the particle is given by: {\displaystyle {\frac {h^{2}n^{2}}{8mL^{2}}}} where h is the Planck constant , m is the mass of the particle, n is the energy state ( n = 1 corresponds to the ground-state energy), and L is the width of the well.
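This formula can be evaluated directly. The short sketch below is illustrative only; the choice of an electron in a 1 nm well, and the function name, are assumptions not taken from the text:

```python
H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg

def box_energy(n, m, L):
    """Energy h^2 n^2 / (8 m L^2) of level n for a particle of mass m in a 1-D well of width L."""
    return (H ** 2) * (n ** 2) / (8.0 * m * L ** 2)

# Ground-state (zero-point) energy of an electron in a 1 nm well:
E1 = box_energy(1, M_E, 1e-9)
print(E1)  # nonzero even in the ground state, roughly 6e-20 J (~0.4 eV)
print(box_energy(2, M_E, 1e-9) / E1)  # levels scale as n^2, so the ratio is 4
```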
In quantum field theory (QFT), the fabric of "empty" space is visualized as consisting of fields , with the field at every point in space and time being a quantum harmonic oscillator, with neighboring oscillators interacting with each other. According to QFT the universe is made up of matter fields whose quanta are fermions (e.g. electrons and quarks), force fields whose quanta are bosons (i.e. photons and gluons) and a Higgs field whose quantum is the Higgs boson . The matter and force fields have zero-point energy. [ 2 ] A related term is zero-point field (ZPF), which is the lowest energy state of a particular field. [ 92 ] The vacuum can be viewed not as empty space, but as the combination of all zero-point fields.
In QFT the zero-point energy of the vacuum state is called the vacuum energy and the average expectation value of the Hamiltonian is called the vacuum expectation value (also called condensate or simply VEV). The QED vacuum is a part of the vacuum state which specifically deals with quantum electrodynamics (e.g. electromagnetic interactions between photons, electrons and the vacuum) and the QCD vacuum deals with quantum chromodynamics (e.g. color charge interactions between quarks, gluons and the vacuum). Recent experiments advocate the idea that particles themselves can be thought of as excited states of the underlying quantum vacuum , and that all properties of matter are merely vacuum fluctuations arising from interactions with the zero-point field. [ 9 ]
Each point in space makes a contribution of E = ħω / 2 , resulting in a calculation of infinite zero-point energy in any finite volume; this is one reason renormalization is needed to make sense of quantum field theories. In cosmology , the vacuum energy is one possible explanation for the cosmological constant [ 18 ] and the source of dark energy. [ 19 ] [ 20 ]
Scientists are not in agreement about how much energy is contained in the vacuum. Quantum mechanics requires the energy to be large, as Paul Dirac claimed it is: a sea of energy . Other scientists, specializing in general relativity , require the energy to be small enough for the curvature of space to agree with observed astronomy . The Heisenberg uncertainty principle allows the energy to be as large as needed to promote quantum actions for a brief moment of time, even if the average energy is small enough to satisfy relativity and flat space. To cope with disagreements, the vacuum energy is described as a virtual energy potential of positive and negative energy. [ 93 ]
In quantum perturbation theory , it is sometimes said that the contribution of one-loop and multi-loop Feynman diagrams to elementary particle propagators is the contribution of vacuum fluctuations , or of the zero-point energy, to the particle masses .
The oldest and best known quantized force field is the electromagnetic field . Maxwell's equations have been superseded by quantum electrodynamics (QED). By considering the zero-point energy that arises from QED it is possible to gain a characteristic understanding of zero-point energy that arises not just through electromagnetic interactions but in all quantum field theories .
In the quantum theory of the electromagnetic field, classical wave amplitudes α and α * are replaced by operators a and a † that satisfy: {\displaystyle \left[a,a^{\dagger }\right]=1}
The classical quantity | α | 2 appearing in the classical expression for the energy of a field mode is replaced in quantum theory by the photon number operator a † a . The fact that: {\displaystyle \left[a,a^{\dagger }a\right]\neq 0} implies that quantum theory does not allow states of the radiation field for which the photon number and a field amplitude can be precisely defined, i.e., we cannot have simultaneous eigenstates for a † a and a . The reconciliation of wave and particle attributes of the field is accomplished via the association of a probability amplitude with a classical mode pattern. The calculation of field modes is an entirely classical problem, while the quantum properties of the field are carried by the mode "amplitudes" a † and a associated with these classical modes.
The zero-point energy of the field arises formally from the non-commutativity of a and a † . This is true for any harmonic oscillator: the zero-point energy ħω / 2 appears when we write the Hamiltonian: {\displaystyle {\begin{aligned}H_{cl}&={\frac {p^{2}}{2m}}+{\tfrac {1}{2}}m\omega ^{2}{q}^{2}\\&={\tfrac {1}{2}}\hbar \omega \left(aa^{\dagger }+a^{\dagger }a\right)\\&=\hbar \omega \left(a^{\dagger }a+{\tfrac {1}{2}}\right)\end{aligned}}}
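These operator identities can be checked numerically in a truncated Fock basis. The plain-Python sketch below (the cutoff N = 8 and the helper names are assumptions for illustration) builds a and a† as matrices with ⟨n|a|n+1⟩ = √(n+1) and verifies that [a, a†] = 1 below the truncation edge and that a†a is diagonal with eigenvalues n, so H = ħω(a†a + ½) has eigenvalues ħω(n + ½):

```python
import math

N = 8  # Fock-space truncation (illustrative cutoff)

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Annihilation operator: <n| a |n+1> = sqrt(n+1)
a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(N)] for i in range(N)]
adag = [[a[j][i] for j in range(N)] for i in range(N)]  # Hermitian conjugate

num = matmul(adag, a)   # number operator a^dagger a: diagonal entries 0, 1, ..., N-1
comm = matmul(a, adag)  # a a^dagger
for i in range(N):
    comm[i][i] -= num[i][i]  # now comm = [a, a^dagger]

print([round(num[n][n]) for n in range(N)])       # 0, 1, ..., N-1
print([round(comm[n][n]) for n in range(N - 1)])  # all 1 below the cutoff

# With hbar*omega = 1, the oscillator energies n + 1/2 follow, including
# the zero-point energy 1/2 of the empty mode:
energies = [num[n][n] + 0.5 for n in range(N)]
print(energies[0])
```

The last diagonal entry of the commutator deviates from 1; that is purely an artifact of cutting the infinite Fock space off at N levels.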
It is often argued that the entire universe is completely bathed in the zero-point electromagnetic field, and as such it can add only some constant amount to expectation values. Physical measurements will therefore reveal only deviations from the vacuum state. Thus the zero-point energy can be dropped from the Hamiltonian by redefining the zero of energy, or by arguing that it is a constant and therefore has no effect on Heisenberg equations of motion. Thus we can choose to declare by fiat that the ground state has zero energy, and a field Hamiltonian, for example, can be replaced by: [ 10 ] {\displaystyle {\begin{aligned}H_{F}-\left\langle 0|H_{F}|0\right\rangle &={\tfrac {1}{2}}\hbar \omega \left(aa^{\dagger }+a^{\dagger }a\right)-{\tfrac {1}{2}}\hbar \omega \\&=\hbar \omega \left(a^{\dagger }a+{\tfrac {1}{2}}\right)-{\tfrac {1}{2}}\hbar \omega \\&=\hbar \omega a^{\dagger }a\end{aligned}}} without affecting any physical predictions of the theory. The new Hamiltonian is said to be normally ordered (or Wick ordered) and is denoted by a double-dot symbol. The normally ordered Hamiltonian is denoted : H F : , i.e.: {\displaystyle :\!H_{F}\!:\;\equiv \,{\tfrac {1}{2}}\hbar \omega :\!\left(aa^{\dagger }+a^{\dagger }a\right)\!:\;=\hbar \omega a^{\dagger }a}
In other words, within the normal ordering symbol we can commute a and a † . Since zero-point energy is intimately connected to the non-commutativity of a and a † , the normal ordering procedure eliminates any contribution from the zero-point field. This is especially reasonable in the case of the field Hamiltonian, since the zero-point term merely adds a constant energy which can be eliminated by a simple redefinition for the zero of energy. Moreover, this constant energy in the Hamiltonian obviously commutes with a and a † and so cannot have any effect on the quantum dynamics described by the Heisenberg equations of motion.
However, things are not quite that simple. The zero-point energy cannot be eliminated by dropping its energy from the Hamiltonian: When we do this and solve the Heisenberg equation for a field operator, we must include the vacuum field, which is the homogeneous part of the solution for the field operator. In fact we can show that the vacuum field is essential for the preservation of the commutators and the formal consistency of QED. When we calculate the field energy we obtain not only a contribution from particles and forces that may be present but also a contribution from the vacuum field itself i.e. the zero-point field energy. In other words, the zero-point energy reappears even though we may have deleted it from the Hamiltonian. [ 94 ]
From Maxwell's equations, the electromagnetic energy of a "free" field, i.e. one with no sources, is described by: {\displaystyle {\begin{aligned}H_{F}&={\frac {1}{8\pi }}\int d^{3}r\left(\mathbf {E} ^{2}+\mathbf {B} ^{2}\right)\\&={\frac {k^{2}}{2\pi }}|\alpha (t)|^{2}\end{aligned}}}
We introduce the "mode function" A 0 ( r ) that satisfies the Helmholtz equation : {\displaystyle \left(\nabla ^{2}+k^{2}\right)\mathbf {A} _{0}(\mathbf {r} )=0} where k = ω / c and assume it is normalized such that: {\displaystyle \int d^{3}r\left|\mathbf {A} _{0}(\mathbf {r} )\right|^{2}=1}
We wish to "quantize" the electromagnetic energy of free space for a multimode field. The field intensity of free space should be independent of position such that | A 0 ( r ) | 2 should be independent of r for each mode of the field. The mode function satisfying these conditions is: {\displaystyle \mathbf {A} _{0}(\mathbf {r} )=e_{\mathbf {k} }e^{i\mathbf {k} \cdot \mathbf {r} }} where k · e k = 0 in order to have the transversality condition ∇ · A ( r , t ) = 0 satisfied for the Coulomb gauge in which we are working.
To achieve the desired normalization we pretend space is divided into cubes of volume V = L 3 and impose on the field the periodic boundary condition: {\displaystyle \mathbf {A} (x+L,y+L,z+L,t)=\mathbf {A} (x,y,z,t)} or equivalently {\displaystyle \left(k_{x},k_{y},k_{z}\right)={\frac {2\pi }{L}}\left(n_{x},n_{y},n_{z}\right)} where n can assume any integer value. This allows us to consider the field in any one of the imaginary cubes and to define the mode function: {\displaystyle \mathbf {A} _{\mathbf {k} }(\mathbf {r} )={\frac {1}{\sqrt {V}}}e_{\mathbf {k} }e^{i\mathbf {k} \cdot \mathbf {r} }} which satisfies the Helmholtz equation, transversality, and the "box normalization": {\displaystyle \int _{V}d^{3}r\left|\mathbf {A} _{\mathbf {k} }(\mathbf {r} )\right|^{2}=1} where e k is chosen to be a unit vector which specifies the polarization of the field mode. The condition k · e k = 0 means that there are two independent choices of e k , which we call e k 1 and e k 2 , where e k 1 · e k 2 = 0 and e k 1 · e k 1 = e k 2 · e k 2 = 1 .
Thus we define the mode functions: {\displaystyle \mathbf {A} _{\mathbf {k} \lambda }(\mathbf {r} )={\frac {1}{\sqrt {V}}}e_{\mathbf {k} \lambda }e^{i\mathbf {k} \cdot \mathbf {r} }\,,\quad \lambda ={\begin{cases}1\\2\end{cases}}} in terms of which the vector potential becomes: {\displaystyle \mathbf {A} _{\mathbf {k} \lambda }(\mathbf {r} ,t)={\sqrt {\frac {2\pi \hbar c^{2}}{\omega _{k}V}}}\left[a_{\mathbf {k} \lambda }(0)e^{i\mathbf {k} \cdot \mathbf {r} }+a_{\mathbf {k} \lambda }^{\dagger }(0)e^{-i\mathbf {k} \cdot \mathbf {r} }\right]e_{\mathbf {k} \lambda }} or: {\displaystyle \mathbf {A} _{\mathbf {k} \lambda }(\mathbf {r} ,t)={\sqrt {\frac {2\pi \hbar c^{2}}{\omega _{k}V}}}\left[a_{\mathbf {k} \lambda }(0)e^{-i(\omega _{k}t-\mathbf {k} \cdot \mathbf {r} )}+a_{\mathbf {k} \lambda }^{\dagger }(0)e^{i(\omega _{k}t-\mathbf {k} \cdot \mathbf {r} )}\right]} where ω k = kc and a k λ , a † k λ are photon annihilation and creation operators for the mode with wave vector k and polarization λ . This gives the vector potential for a plane wave mode of the field. The condition for ( k x , k y , k z ) shows that there are infinitely many such modes. The linearity of Maxwell's equations allows us to write: {\displaystyle \mathbf {A} (\mathbf {r} ,t)=\sum _{\mathbf {k} \lambda }{\sqrt {\frac {2\pi \hbar c^{2}}{\omega _{k}V}}}\left[a_{\mathbf {k} \lambda }(0)e^{i\mathbf {k} \cdot \mathbf {r} }+a_{\mathbf {k} \lambda }^{\dagger }(0)e^{-i\mathbf {k} \cdot \mathbf {r} }\right]e_{\mathbf {k} \lambda }} for the total vector potential in free space.
Using the fact that: {\displaystyle \int _{V}d^{3}r\mathbf {A} _{\mathbf {k} \lambda }(\mathbf {r} )\cdot \mathbf {A} _{\mathbf {k} '\lambda '}^{\ast }(\mathbf {r} )=\delta _{\mathbf {k} ,\mathbf {k} '}^{3}\delta _{\lambda ,\lambda '}} we find the field Hamiltonian is: {\displaystyle H_{F}=\sum _{\mathbf {k} \lambda }\hbar \omega _{k}\left(a_{\mathbf {k} \lambda }^{\dagger }a_{\mathbf {k} \lambda }+{\tfrac {1}{2}}\right)}
This is the Hamiltonian for an infinite number of uncoupled harmonic oscillators. Thus different modes of the field are independent and satisfy the commutation relations: {\displaystyle {\begin{aligned}\left[a_{\mathbf {k} \lambda }(t),a_{\mathbf {k} '\lambda '}^{\dagger }(t)\right]&=\delta _{\mathbf {k} ,\mathbf {k} '}^{3}\delta _{\lambda ,\lambda '}\\[10px]\left[a_{\mathbf {k} \lambda }(t),a_{\mathbf {k} '\lambda '}(t)\right]&=\left[a_{\mathbf {k} \lambda }^{\dagger }(t),a_{\mathbf {k} '\lambda '}^{\dagger }(t)\right]=0\end{aligned}}}
Clearly the least eigenvalue for H F is: {\displaystyle \sum _{\mathbf {k} \lambda }{\tfrac {1}{2}}\hbar \omega _{k}}
This state describes the zero-point energy of the vacuum. It appears that this sum is divergent – in fact highly divergent, as putting in the density factor {\displaystyle {\frac {8\pi \nu ^{2}d\nu }{c^{3}}}V} shows. The summation becomes approximately the integral: {\displaystyle {\frac {4\pi hV}{c^{3}}}\int \nu ^{3}\,d\nu } for high values of ν . It diverges proportionally to ν 4 for large ν .
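The ν⁴ growth can be seen by evaluating the integral with a frequency cutoff. In the numerical sketch below, the cutoff values, the function name, and the midpoint-rule evaluation are all illustrative assumptions:

```python
import math

H = 6.62607015e-34  # Planck constant, J*s
C = 2.99792458e8    # speed of light, m/s

def zpe_density(nu_max, steps=100_000):
    """Zero-point energy per unit volume, (4*pi*h/c^3) * integral of nu^3 d(nu)
    from 0 up to a hypothetical cutoff nu_max, evaluated by the midpoint rule."""
    d = nu_max / steps
    total = sum(((i + 0.5) * d) ** 3 for i in range(steps)) * d
    return 4.0 * math.pi * H / C ** 3 * total

u1 = zpe_density(1e15)  # cutoff at 10^15 Hz (arbitrary illustration)
u2 = zpe_density(2e15)  # doubling the cutoff ...
print(u2 / u1)          # ... multiplies the energy density by 2^4 = 16
```

The closed form of the same integral is πhν_max⁴/c³, so the density grows with the fourth power of wherever the cutoff is placed; with no cutoff it diverges, which is the point of the discussion above.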
There are two separate questions to consider. First, is the divergence a real one, so that the zero-point energy really is infinite? If we suppose the volume V is enclosed by perfectly conducting walls, very high frequencies can be contained only by taking ever more perfect conduction. No actual method of containing the highest frequencies is possible. Such modes will not be stationary in our box and thus do not count toward the stationary energy content. So from this physical point of view the above sum should extend only over those frequencies which are countable; a cut-off energy is thus eminently reasonable. However, on the scale of a "universe", questions of general relativity must be included. Suppose even that the boxes could be reproduced, fitted together and closed nicely by curving spacetime. Then exact conditions for running waves may be possible. However, the very-high-frequency quanta will still not be contained. As per John Wheeler's "geons" [ 95 ] these will leak out of the system. So again a cut-off is permissible, almost necessary. The question here becomes one of consistency, since the very-high-energy quanta will act as a mass source and start curving the geometry.
This leads to the second question. Divergent or not, finite or infinite, is the zero-point energy of any physical significance? Ignoring the whole zero-point energy is often encouraged for all practical calculations. The reason is that physically measurable energies are typically differences rather than absolute values, so adding or subtracting a constant (even an infinite one) should be allowed. However, this is not the whole story: in reality energy is not so arbitrarily defined. In general relativity the seat of the curvature of spacetime is the energy content, and there the absolute amount of energy has real physical meaning. There is no such thing as an arbitrary additive constant for the density of field energy. Energy density curves space, and an increase in energy density produces an increase of curvature. Furthermore, the zero-point energy density has other physical consequences, e.g. the Casimir effect, contributions to the Lamb shift, and the anomalous magnetic moment of the electron; it is clearly not just a mathematical constant or artifact that can be cancelled out. [ 96 ]
The vacuum state of the "free" electromagnetic field (that with no sources) is defined as the ground state in which n kλ = 0 for all modes ( k , λ ) . The vacuum state, like all stationary states of the field, is an eigenstate of the Hamiltonian but not of the electric and magnetic field operators. In the vacuum state, therefore, the electric and magnetic fields do not have definite values. We can imagine them to be fluctuating about their mean value of zero.
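The statement that the fields fluctuate about a zero mean can be made concrete for a single mode in a truncated Fock basis. In the sketch below (an assumed illustration, with the dimensional prefactor of the field operator set to one) the quadrature i(a − a†) has vanishing vacuum expectation but unit variance.

```python
import numpy as np

# Single-mode sketch: in |0⟩ the field quadrature E ∝ i(a − a†)
# has zero mean but nonzero variance (vacuum fluctuations).
N = 30                                    # assumed Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
ad = a.conj().T                           # creation operator
E = 1j * (a - ad)                         # field operator, prefactor set to 1

vac = np.zeros(N); vac[0] = 1             # vacuum state |0⟩
mean = vac @ E @ vac
var = vac @ (E @ E) @ vac

assert abs(mean) < 1e-12                  # ⟨0|E|0⟩ = 0
assert abs(var - 1) < 1e-12               # ⟨0|E²|0⟩ ≠ 0
```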
In a process in which a photon is annihilated (absorbed), we can think of the photon as making a transition into the vacuum state. Similarly, when a photon is created (emitted), it is occasionally useful to imagine that the photon has made a transition out of the vacuum state. [ 54 ] An atom, for instance, can be considered to be "dressed" by emission and reabsorption of "virtual photons" from the vacuum. The vacuum state energy described by Σ k λ ħω k / 2 is infinite. We can make the replacement: ∑ k λ ⟶ ∑ λ ( 1 2 π ) 3 ∫ d 3 k = V 8 π 3 ∑ λ ∫ d 3 k {\displaystyle \sum _{\mathbf {k} \lambda }\longrightarrow \sum _{\lambda }\left({\frac {1}{2\pi }}\right)^{3}\int d^{3}k={\frac {V}{8\pi ^{3}}}\sum _{\lambda }\int d^{3}k} the zero-point energy density is: 1 V ∑ k λ 1 2 ℏ ω k = 2 8 π 3 ∫ d 3 k 1 2 ℏ ω k = 4 π 4 π 3 ∫ d k k 2 ( 1 2 ℏ ω k ) = ℏ 2 π 2 c 3 ∫ d ω ω 3 {\displaystyle {\begin{aligned}{\frac {1}{V}}\sum _{\mathbf {k} \lambda }{\tfrac {1}{2}}\hbar \omega _{k}&={\frac {2}{8\pi ^{3}}}\int d^{3}k{\tfrac {1}{2}}\hbar \omega _{k}\\&={\frac {4\pi }{4\pi ^{3}}}\int dk\,k^{2}\left({\tfrac {1}{2}}\hbar \omega _{k}\right)\\&={\frac {\hbar }{2\pi ^{2}c^{3}}}\int d\omega \,\omega ^{3}\end{aligned}}} or in other words the spectral energy density of the vacuum field: ρ 0 ( ω ) = ℏ ω 3 2 π 2 c 3 {\displaystyle \rho _{0}(\omega )={\frac {\hbar \omega ^{3}}{2\pi ^{2}c^{3}}}}
The zero-point energy density in the frequency range from ω 1 to ω 2 is therefore: ∫ ω 1 ω 2 d ω ρ 0 ( ω ) = ℏ 8 π 2 c 3 ( ω 2 4 − ω 1 4 ) {\displaystyle \int _{\omega _{1}}^{\omega _{2}}d\omega \rho _{0}(\omega )={\frac {\hbar }{8\pi ^{2}c^{3}}}\left(\omega _{2}^{4}-\omega _{1}^{4}\right)}
This can be large even in relatively narrow "low frequency" regions of the spectrum. In the optical region from 400 to 700 nm, for instance, the above equation yields around 220 erg /cm 3 .
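The quoted figure can be reproduced directly from the formula above; the following sketch evaluates ħ(ω₂⁴ − ω₁⁴)/(8π²c³) in CGS units for the 400–700 nm band.

```python
import math

# Zero-point energy density over the visible band 400–700 nm, CGS units.
hbar = 1.0546e-27          # erg·s
c = 2.9979e10              # cm/s

def omega(lambda_cm):
    """Angular frequency for a given wavelength."""
    return 2 * math.pi * c / lambda_cm

w1 = omega(700e-7)         # 700 nm expressed in cm
w2 = omega(400e-7)         # 400 nm expressed in cm
u = hbar * (w2**4 - w1**4) / (8 * math.pi**2 * c**3)
assert 200 < u < 240       # ≈ 220 erg/cm³, as stated in the text
```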
We showed in the above section that the zero-point energy can be eliminated from the Hamiltonian by the normal ordering prescription. However, this elimination does not mean that the vacuum field has been rendered unimportant or without physical consequences. To illustrate this point we consider a linear dipole oscillator in the vacuum. The Hamiltonian for the oscillator plus the field with which it interacts is: H = 1 2 m ( p − e c A ) 2 + 1 2 m ω 0 2 x 2 + H F {\displaystyle H={\frac {1}{2m}}\left(\mathbf {p} -{\frac {e}{c}}\mathbf {A} \right)^{2}+{\tfrac {1}{2}}m\omega _{0}^{2}\mathbf {x} ^{2}+H_{F}}
This has the same form as the corresponding classical Hamiltonian, and the Heisenberg equations of motion for the oscillator and the field are formally the same as their classical counterparts. For instance, the Heisenberg equations for the coordinate x and the canonical momentum p = m ẋ + e A / c of the oscillator are: {\displaystyle {\begin{aligned}\mathbf {\dot {x}} &=(i\hbar )^{-1}[\mathbf {x} ,H]={\frac {1}{m}}\left(\mathbf {p} -{\frac {e}{c}}\mathbf {A} \right)\\\mathbf {\dot {p}} &=(i\hbar )^{-1}[\mathbf {p} ,H]\\&=-{\frac {1}{2m}}\nabla \left(\mathbf {p} -{\frac {e}{c}}\mathbf {A} \right)^{2}-m\omega _{0}^{2}\mathbf {x} \\&=-{\frac {1}{m}}\left[\left(\mathbf {p} -{\frac {e}{c}}\mathbf {A} \right)\cdot \nabla \right]\left[-{\frac {e}{c}}\mathbf {A} \right]-{\frac {1}{m}}\left(\mathbf {p} -{\frac {e}{c}}\mathbf {A} \right)\times \nabla \times \left[-{\frac {e}{c}}\mathbf {A} \right]-m\omega _{0}^{2}\mathbf {x} \\&={\frac {e}{c}}(\mathbf {\dot {x}} \cdot \nabla )\mathbf {A} +{\frac {e}{c}}\mathbf {\dot {x}} \times \mathbf {B} -m\omega _{0}^{2}\mathbf {x} \end{aligned}}} or: {\displaystyle {\begin{aligned}m\mathbf {\ddot {x}} &=\mathbf {\dot {p}} -{\frac {e}{c}}\mathbf {\dot {A}} \\&=-{\frac {e}{c}}\left[\mathbf {\dot {A}} -\left(\mathbf {\dot {x}} \cdot \nabla \right)\mathbf {A} \right]+{\frac {e}{c}}\mathbf {\dot {x}} \times \mathbf {B} -m\omega _{0}^{2}\mathbf {x} \\&=e\mathbf {E} +{\frac {e}{c}}\mathbf {\dot {x}} \times \mathbf {B} -m\omega _{0}^{2}\mathbf {x} \end{aligned}}} since the rate of change of the vector potential in the frame of the moving charge is given by the convective derivative {\displaystyle \mathbf {\dot {A}} ={\frac {\partial \mathbf {A} }{\partial t}}+(\mathbf {\dot {x}} \cdot \nabla )\mathbf {A} \,.}
For nonrelativistic motion we may neglect the magnetic force and replace the expression for m ẍ by: {\displaystyle {\begin{aligned}\mathbf {\ddot {x}} +\omega _{0}^{2}\mathbf {x} &\approx {\frac {e}{m}}\mathbf {E} \\&\approx {\frac {ie}{m}}\sum _{\mathbf {k} \lambda }{\sqrt {\frac {2\pi \hbar \omega _{k}}{V}}}\left[a_{\mathbf {k} \lambda }(t)-a_{\mathbf {k} \lambda }^{\dagger }(t)\right]e_{\mathbf {k} \lambda }\end{aligned}}}
Above we have made the electric dipole approximation, in which the spatial dependence of the field is neglected. The Heisenberg equation for a kλ is found similarly from the Hamiltonian to be: {\displaystyle {\dot {a}}_{\mathbf {k} \lambda }=-i\omega _{k}a_{\mathbf {k} \lambda }+ie{\sqrt {\frac {2\pi }{\hbar \omega _{k}V}}}\mathbf {\dot {x}} \cdot e_{\mathbf {k} \lambda }} in the electric dipole approximation.
In deriving these equations for x , p , and a kλ we have used the fact that equal-time particle and field operators commute. This follows from the assumption that particle and field operators commute at some time (say, t = 0 ) when the matter–field interaction is presumed to begin, together with the fact that a Heisenberg-picture operator A ( t ) evolves in time as A ( t ) = U † ( t ) A (0) U ( t ) , where U ( t ) is the time evolution operator satisfying {\displaystyle i\hbar {\dot {U}}=HU\,,\quad U^{\dagger }(t)=U^{-1}(t)\,,\quad U(0)=1\,.}
Alternatively, we can argue that these operators must commute if we are to obtain the correct equations of motion from the Hamiltonian, just as the corresponding Poisson brackets in classical theory must vanish in order to generate the correct Hamilton equations. The formal solution of the field equation is: {\displaystyle a_{\mathbf {k} \lambda }(t)=a_{\mathbf {k} \lambda }(0)e^{-i\omega _{k}t}+ie{\sqrt {\frac {2\pi }{\hbar \omega _{k}V}}}\int _{0}^{t}dt'\,e_{\mathbf {k} \lambda }\cdot \mathbf {\dot {x}} (t')e^{i\omega _{k}\left(t'-t\right)}} and therefore the equation for ẍ may be written: {\displaystyle \mathbf {\ddot {x}} +\omega _{0}^{2}\mathbf {x} ={\frac {e}{m}}\mathbf {E} _{0}(t)+{\frac {e}{m}}\mathbf {E} _{RR}(t)} where {\displaystyle \mathbf {E} _{0}(t)=i\sum _{\mathbf {k} \lambda }{\sqrt {\frac {2\pi \hbar \omega _{k}}{V}}}\left[a_{\mathbf {k} \lambda }(0)e^{-i\omega _{k}t}-a_{\mathbf {k} \lambda }^{\dagger }(0)e^{i\omega _{k}t}\right]e_{\mathbf {k} \lambda }} and {\displaystyle \mathbf {E} _{RR}(t)=-{\frac {4\pi e}{V}}\sum _{\mathbf {k} \lambda }\int _{0}^{t}dt'\left[e_{\mathbf {k} \lambda }\cdot \mathbf {\dot {x}} \left(t'\right)\right]\cos \omega _{k}\left(t'-t\right)}
It can be shown that in the radiation reaction field, if the mass m is regarded as the "observed" mass, then we can take: {\displaystyle \mathbf {E} _{RR}(t)={\frac {2e}{3c^{3}}}\mathbf {\overset {...}{x}} }
The total field acting on the dipole has two parts, E 0 ( t ) and E RR ( t ) . E 0 ( t ) is the free or zero-point field acting on the dipole. It is the homogeneous solution of the Maxwell equation for the field acting on the dipole, i.e., the solution, at the position of the dipole, of the wave equation [ ∇ 2 − 1 c 2 ∂ 2 ∂ t 2 ] E = 0 {\displaystyle \left[\nabla ^{2}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\right]\mathbf {E} =0} satisfied by the field in the (source free) vacuum. For this reason E 0 ( t ) is often referred to as the "vacuum field", although it is of course a Heisenberg-picture operator acting on whatever state of the field happens to be appropriate at t = 0 . E RR ( t ) is the source field, the field generated by the dipole and acting on the dipole.
Using the above equation for E RR ( t ) we obtain an equation for the Heisenberg-picture operator x ( t ) that is formally the same as the classical equation for a linear dipole oscillator: {\displaystyle \mathbf {\ddot {x}} +\omega _{0}^{2}\mathbf {x} -\tau \mathbf {\overset {...}{x}} ={\frac {e}{m}}\mathbf {E} _{0}(t)} where τ = 2 e 2 / 3 mc 3 . In this instance we have considered a dipole in the vacuum, without any "external" field acting on it; the role of the external field in the above equation is played by the vacuum electric field acting on the dipole.
Classically, a dipole in the vacuum is not acted upon by any "external" field: if there are no sources other than the dipole itself, then the only field acting on the dipole is its own radiation reaction field. In quantum theory however there is always an "external" field, namely the source-free or vacuum field E 0 ( t ) .
According to our earlier equation for a kλ ( t ) , the free field is the only field in existence at t = 0 , the time at which the interaction between the dipole and the field is "switched on". The state vector of the dipole–field system at t = 0 is therefore of the form {\displaystyle |\Psi \rangle =|{\text{vac}}\rangle |\psi _{D}\rangle \,,} where |vac⟩ is the vacuum state of the field and | ψ D ⟩ is the initial state of the dipole oscillator. The expectation value of the free field is therefore at all times equal to zero: {\displaystyle \langle \mathbf {E} _{0}(t)\rangle =\langle \Psi |\mathbf {E} _{0}(t)|\Psi \rangle =0} since a kλ (0)|vac⟩ = 0 . However, the energy density associated with the free field is infinite: {\displaystyle {\begin{aligned}{\frac {1}{4\pi }}\left\langle \mathbf {E} _{0}^{2}(t)\right\rangle &={\frac {1}{4\pi }}\sum _{\mathbf {k} \lambda }\sum _{\mathbf {k'} \lambda '}{\sqrt {\frac {2\pi \hbar \omega _{k}}{V}}}{\sqrt {\frac {2\pi \hbar \omega _{k'}}{V}}}\left\langle a_{\mathbf {k} \lambda }(0)a_{\mathbf {k'} \lambda '}^{\dagger }(0)\right\rangle \\&={\frac {1}{4\pi }}\sum _{\mathbf {k} \lambda }\left({\frac {2\pi \hbar \omega _{k}}{V}}\right)\\&=\int _{0}^{\infty }d\omega \,\rho _{0}(\omega )\end{aligned}}}
The important point here is that the zero-point field energy H F does not affect the Heisenberg equation for a kλ , since it is a c-number (i.e. an ordinary number rather than an operator) and commutes with a kλ . We can therefore drop the zero-point field energy from the Hamiltonian, as is usually done. But the zero-point field re-emerges as the homogeneous solution of the field equation. A charged particle in the vacuum will therefore always see a zero-point field of infinite density. This is the origin of one of the infinities of quantum electrodynamics, and it cannot be eliminated by the trivial expedient of dropping the term Σ kλ ħω k / 2 from the field Hamiltonian.
The free field is in fact necessary for the formal consistency of the theory. In particular, it is necessary for the preservation of the commutation relations, which is required by the unitarity of time evolution in quantum theory: {\displaystyle {\begin{aligned}\left[z(t),p_{z}(t)\right]&=\left[U^{\dagger }(t)z(0)U(t),U^{\dagger }(t)p_{z}(0)U(t)\right]\\&=U^{\dagger }(t)\left[z(0),p_{z}(0)\right]U(t)\\&=i\hbar U^{\dagger }(t)U(t)\\&=i\hbar \end{aligned}}}
We can calculate [ z ( t ), p z ( t )] from the formal solution of the operator equation of motion x ¨ + ω 0 2 x − τ x . . . = e m E 0 ( t ) {\displaystyle \mathbf {\ddot {x}} +\omega _{0}^{2}\mathbf {x} -\tau \mathbf {\overset {...}{x}} ={\frac {e}{m}}\mathbf {E} _{0}(t)}
Using the fact that {\displaystyle \left[a_{\mathbf {k} \lambda }(0),a_{\mathbf {k'} \lambda '}^{\dagger }(0)\right]=\delta _{\mathbf {kk'} }^{3}\delta _{\lambda \lambda '}} and that equal-time particle and field operators commute, we obtain: {\displaystyle {\begin{aligned}\left[z(t),p_{z}(t)\right]&=\left[z(t),m{\dot {z}}(t)\right]+\left[z(t),{\frac {e}{c}}A_{z}(t)\right]\\&=\left[z(t),m{\dot {z}}(t)\right]\\&=\left({\frac {i\hbar e^{2}}{2\pi ^{2}mc^{3}}}\right)\left({\frac {8\pi }{3}}\right)\int _{0}^{\infty }{\frac {d\omega \,\omega ^{4}}{\left(\omega ^{2}-\omega _{0}^{2}\right)^{2}+\tau ^{2}\omega ^{6}}}\end{aligned}}}
For the dipole oscillator under consideration it can be assumed that the radiative damping rate is small compared with the natural oscillation frequency, i.e. τω 0 ≪ 1 . Then the integrand above is sharply peaked at ω = ω 0 and: {\displaystyle {\begin{aligned}\left[z(t),p_{z}(t)\right]&\approx {\frac {2i\hbar e^{2}}{3\pi mc^{3}}}\omega _{0}^{3}\int _{-\infty }^{\infty }{\frac {dx}{x^{2}+\tau ^{2}\omega _{0}^{6}}}\\&=\left({\frac {2i\hbar e^{2}\omega _{0}^{3}}{3\pi mc^{3}}}\right)\left({\frac {\pi }{\tau \omega _{0}^{3}}}\right)\\&=i\hbar \end{aligned}}} The necessity of the vacuum field can also be appreciated by making the small-damping approximation in {\displaystyle {\begin{aligned}&\mathbf {\ddot {x}} +\omega _{0}^{2}\mathbf {x} -\tau \mathbf {\overset {...}{x}} ={\frac {e}{m}}\mathbf {E} _{0}(t)\\&\mathbf {\ddot {x}} \approx -\omega _{0}^{2}\mathbf {x} (t)&&\mathbf {\overset {...}{x}} \approx -\omega _{0}^{2}\mathbf {\dot {x}} \end{aligned}}} which gives {\displaystyle \mathbf {\ddot {x}} +\tau \omega _{0}^{2}\mathbf {\dot {x}} +\omega _{0}^{2}\mathbf {x} \approx {\frac {e}{m}}\mathbf {E} _{0}(t)}
Without the free field E 0 ( t ) in this equation the operator x ( t ) would be exponentially damped, and commutators like [ z ( t ), p z ( t )] would approach zero for t ≫ 1/( τω 0 2 ) . With the vacuum field included, however, the commutator is iħ at all times, as required by unitarity, and as we have just shown. A similar result is easily worked out for the case of a free particle instead of a dipole oscillator. [ 97 ]
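The decay timescale quoted above has a simple classical analogue: dropping the driving field from the small-damping equation leaves ẍ + τω₀²ẋ + ω₀²x = 0, whose energy decays as e^(−τω₀²t). The sketch below (an illustration with assumed parameter values, not from the source) integrates this equation with a Runge–Kutta step and checks the decay after two damping times.

```python
import math

# Classical undriven oscillator with small damping: ẍ + γẋ + ω₀²x = 0,
# γ = τω₀², so the energy envelope decays as exp(-γt).
omega0, tau = 1.0, 0.01            # assumed parameter values
gamma = tau * omega0**2            # damping rate 1/(decay time)

def deriv(x, v):
    return v, -gamma * v - omega0**2 * x

x, v, t, dt = 1.0, 0.0, 0.0, 0.005
T = 2 / gamma                      # integrate for two damping times
while t < T:
    # classical 4th-order Runge-Kutta step
    k1x, k1v = deriv(x, v)
    k2x, k2v = deriv(x + dt/2*k1x, v + dt/2*k1v)
    k3x, k3v = deriv(x + dt/2*k2x, v + dt/2*k2v)
    k4x, k4v = deriv(x + dt*k3x, v + dt*k3v)
    x += dt/6*(k1x + 2*k2x + 2*k3x + k4x)
    v += dt/6*(k1v + 2*k2v + 2*k3v + k4v)
    t += dt

energy = 0.5*v**2 + 0.5*omega0**2*x**2      # initial energy was 1/2
expected = 0.5 * math.exp(-gamma * T)       # envelope prediction
assert 0.8 * expected < energy < 1.2 * expected
```

In the quantum problem the commutator (and with it the zero-point motion) survives precisely because the vacuum field continually drives the oscillator against this decay.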
What we have here is an example of a "fluctuation–dissipation relation". Generally speaking, if a system is coupled to a bath that can take energy from the system in an effectively irreversible way, then the bath must also cause fluctuations. The fluctuations and the dissipation go hand in hand; we cannot have one without the other. In the current example the coupling of a dipole oscillator to the electromagnetic field has a dissipative component, in the form of radiation reaction, and a fluctuation component, in the form of the zero-point (vacuum) field; given the existence of radiation reaction, the vacuum field must also exist in order to preserve the canonical commutation rule and all it entails.
The spectral density of the vacuum field is fixed by the form of the radiation reaction field, or vice versa: because the radiation reaction field varies with the third derivative of x , the spectral energy density of the vacuum field must be proportional to the third power of ω in order for [ z ( t ), p z ( t )] = iħ to hold. In the case of a dissipative force proportional to ẋ , by contrast, the fluctuation force must be proportional to ω {\displaystyle \omega } in order to maintain the canonical commutation relation. [ 97 ] This relation between the form of the dissipation and the spectral density of the fluctuation is the essence of the fluctuation-dissipation theorem. [ 76 ]
The fact that the canonical commutation relation for a harmonic oscillator coupled to the vacuum field is preserved implies that the zero-point energy of the oscillator is preserved. It is easy to show that after a few damping times the zero-point motion of the oscillator is in fact sustained by the driving zero-point field. [ 98 ]
The QCD vacuum is the vacuum state of quantum chromodynamics (QCD). It is an example of a non-perturbative vacuum state, characterized by non-vanishing condensates such as the gluon condensate and the quark condensate in the complete theory which includes quarks. The presence of these condensates characterizes the confined phase of quark matter . In technical terms, gluons are vector gauge bosons that mediate strong interactions of quarks in quantum chromodynamics (QCD). Gluons themselves carry the color charge of the strong interaction. This is unlike the photon, which mediates the electromagnetic interaction but lacks an electric charge. Gluons therefore participate in the strong interaction in addition to mediating it, making QCD significantly harder to analyze than QED (quantum electrodynamics), as it leads to nonlinear equations characterizing such interactions.
The Standard Model hypothesises a field called the Higgs field (symbol: ϕ ), which has the unusual property of a non-zero amplitude in its ground state (zero-point) energy after renormalization; i.e., a non-zero vacuum expectation value. It can have this effect because of its unusual "Mexican hat" shaped potential whose lowest "point" is not at its "centre". Below a certain extremely high energy level the existence of this non-zero vacuum expectation spontaneously breaks electroweak gauge symmetry which in turn gives rise to the Higgs mechanism and triggers the acquisition of mass by those particles interacting with the field. The Higgs mechanism occurs whenever a charged field has a vacuum expectation value. This effect occurs because scalar field components of the Higgs field are "absorbed" by the massive bosons as degrees of freedom, and couple to the fermions via Yukawa coupling, thereby producing the expected mass terms. The expectation value of ϕ 0 in the ground state (the vacuum expectation value or VEV) is then ⟨ ϕ 0 ⟩ = v / √ 2 , where v = | μ | / √ λ . The measured value of this parameter is approximately 246 GeV/ c 2 . [ 99 ] It has units of mass, and is the only free parameter of the Standard Model that is not a dimensionless number.
The Higgs mechanism is a type of superconductivity which occurs in the vacuum. It occurs when all of space is filled with a sea of particles which are charged and thus the field has a nonzero vacuum expectation value. Interaction with the vacuum energy filling the space prevents certain forces from propagating over long distances (as it does in a superconducting medium; e.g., in the Ginzburg–Landau theory ).
Zero-point energy has many observed physical consequences. [ 11 ] It is important to note that zero-point energy is not merely an artifact of mathematical formalism that can, for instance, be dropped from a Hamiltonian by redefining the zero of energy, or by arguing that it is a constant and therefore has no effect on the Heisenberg equations of motion, without further consequence. [ 100 ] Indeed, such treatment could create a problem in a deeper, as yet undiscovered, theory. [ 101 ] For instance, in general relativity the zero of energy (i.e. the energy density of the vacuum) contributes to a cosmological constant of the type introduced by Einstein in order to obtain static solutions to his field equations. [ 102 ] The zero-point energy density of the vacuum, due to all quantum fields, is extremely large, even when we cut off the largest allowable frequencies based on plausible physical arguments. It implies a cosmological constant larger than the limits imposed by observation by about 120 orders of magnitude. This "cosmological constant problem" remains one of the greatest unsolved mysteries of physics. [ 103 ]
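The "120 orders of magnitude" statement can be illustrated with the standard order-of-magnitude estimate (a sketch, not derived in this article): cutting the zero-point spectrum off at the Planck scale gives a vacuum energy density of order one Planck energy per Planck volume, to be compared with an assumed observed dark-energy density of roughly 6 × 10⁻¹⁰ J/m³.

```python
import math

# Planck-scale cutoff estimate of the vacuum energy density versus the
# observed dark-energy density (both values here are rough assumptions).
hbar = 1.055e-34      # J·s
c = 3.0e8             # m/s
G = 6.674e-11         # m³/(kg·s²)

l_planck = math.sqrt(hbar * G / c**3)     # ≈ 1.6e-35 m
E_planck = math.sqrt(hbar * c**5 / G)     # ≈ 2e9 J
rho_planck = E_planck / l_planck**3       # ~1e113 J/m³

rho_obs = 6e-10                           # assumed observed value, J/m³
orders = math.log10(rho_planck / rho_obs)
assert 115 < orders < 130                 # mismatch of ~120 orders of magnitude
```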
A phenomenon that is commonly presented as evidence for the existence of zero-point energy in vacuum is the Casimir effect, proposed in 1948 by Dutch physicist Hendrik Casimir , who considered the quantized electromagnetic field between a pair of grounded, neutral metal plates. The vacuum energy contains contributions from all wavelengths, except those excluded by the spacing between plates. As the plates draw together, more wavelengths are excluded and the vacuum energy decreases. The decrease in energy means there must be a force doing work on the plates as they move.
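For ideal conducting plates the resulting attraction is given by the standard Casimir pressure P = π²ħc/(240a⁴) (the textbook result, not derived in this article). As a numerical illustration, at a separation of 1 μm:

```python
import math

# Ideal-plate Casimir pressure P = π²ħc/(240 a⁴) at a = 1 μm.
hbar = 1.0546e-34     # J·s
c = 2.9979e8          # m/s
a = 1e-6              # plate separation, m (assumed example value)

P = math.pi**2 * hbar * c / (240 * a**4)   # attractive pressure, Pa
assert 1.0e-3 < P < 1.6e-3                 # ≈ 1.3 mPa
```

The steep a⁻⁴ dependence is why the effect was hard to isolate experimentally until sub-micron separations could be controlled.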
Early experimental tests from the 1950s onwards gave positive results showing the force was real, but other external factors could not be ruled out as the primary cause, with the range of experimental error sometimes being nearly 100%. [ 104 ] [ 105 ] [ 106 ] [ 107 ] [ 108 ] That changed in 1997 with Lamoreaux [ 109 ] conclusively showing that the Casimir force was real. Results have been repeatedly replicated since then. [ 110 ] [ 111 ] [ 112 ] [ 113 ]
In 2009, Munday et al. [ 114 ] published experimental proof that (as predicted in 1961 [ 115 ] ) the Casimir force could also be repulsive as well as being attractive. Repulsive Casimir forces could allow quantum levitation of objects in a fluid and lead to a new class of switchable nanoscale devices with ultra-low static friction. [ 116 ]
An interesting hypothetical side effect of the Casimir effect is the Scharnhorst effect , a hypothetical phenomenon in which light signals travel slightly faster than c between two closely spaced conducting plates. [ 117 ]
The quantum fluctuations of the electromagnetic field have important physical consequences. In addition to the Casimir effect, they also lead to a splitting between the two energy levels 2 S 1 / 2 and 2 P 1 / 2 (in term symbol notation) of the hydrogen atom which was not predicted by the Dirac equation , according to which these states should have the same energy. Charged particles can interact with the fluctuations of the quantized vacuum field, leading to slight shifts in energy; [ 118 ] this effect is called the Lamb shift. [ 119 ] The shift of about 4.38 × 10 −6 eV is roughly 10 −7 of the difference between the energies of the 1s and 2s levels, and amounts to 1,058 MHz in frequency units. A small part of this shift (27 MHz ≈ 3%) arises not from fluctuations of the electromagnetic field, but from fluctuations of the electron–positron field. The creation of (virtual) electron–positron pairs has the effect of screening the Coulomb field and acts as a vacuum dielectric constant. This effect is much more important in muonic atoms. [ 120 ]
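The two quoted numbers are mutually consistent, as a quick unit conversion (ν = E/h) shows:

```python
# Check that an energy shift of 4.38e-6 eV corresponds to ~1,058 MHz.
h = 6.62607e-34        # Planck constant, J·s
eV = 1.60218e-19       # J per electron-volt

E = 4.38e-6 * eV       # quoted Lamb-shift energy
nu_MHz = E / h / 1e6   # equivalent frequency in MHz
assert 1050 < nu_MHz < 1070
```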
Taking ħ (the Planck constant divided by 2π ), c (the speed of light ), and e 2 = q 2 e / 4π ε 0 (the electromagnetic coupling constant i.e. a measure of the strength of the electromagnetic force (where q e is the absolute value of the electronic charge and ε 0 {\displaystyle \varepsilon _{0}} is the vacuum permittivity )) we can form a dimensionless quantity called the fine-structure constant : α = e 2 ℏ c = q e 2 4 π ε 0 ℏ c ≈ 1 137 {\displaystyle \alpha ={\frac {e^{2}}{\hbar c}}={\frac {q_{e}^{2}}{4\pi \varepsilon _{0}\hbar c}}\approx {\frac {1}{137}}}
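Evaluating this expression with CODATA values reproduces the familiar figure:

```python
import math

# Fine-structure constant α = q_e²/(4π ε₀ ħ c) from CODATA values.
qe = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
hbar = 1.054571817e-34      # reduced Planck constant, J·s
c = 2.99792458e8            # speed of light, m/s

alpha = qe**2 / (4 * math.pi * eps0 * hbar * c)
assert abs(1 / alpha - 137.036) < 0.01      # α⁻¹ ≈ 137.036
```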
The fine-structure constant is the coupling constant of quantum electrodynamics (QED) determining the strength of the interaction between electrons and photons. It turns out that the fine-structure constant is not really a constant at all owing to the zero-point energy fluctuations of the electron-positron field. [ 121 ] The quantum fluctuations caused by zero-point energy have the effect of screening electric charges: owing to (virtual) electron-positron pair production, the charge of the particle measured far from the particle is far smaller than the charge measured when close to it.
The Heisenberg inequality, where ħ = h / 2π and Δ x , Δ p are the standard deviations of position and momentum, states that: {\displaystyle \Delta _{x}\Delta _{p}\geq {\frac {1}{2}}\hbar }
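The harmonic-oscillator ground state saturates this bound with Δx Δp = ħ/2 exactly; the sketch below (an assumed truncated-Fock-basis illustration, not from the source) verifies this numerically.

```python
import numpy as np

# Ground-state uncertainties of a harmonic oscillator (ħ = m = ω = 1):
# x = √(ħ/2mω)(a + a†),  p = i√(mωħ/2)(a† − a).
N = 30                                    # assumed Fock-space truncation
hbar = m = omega = 1.0
a = np.diag(np.sqrt(np.arange(1, N)), 1)
ad = a.conj().T
x = np.sqrt(hbar / (2 * m * omega)) * (a + ad)
p = 1j * np.sqrt(m * omega * hbar / 2) * (ad - a)

vac = np.zeros(N); vac[0] = 1             # ground state |0⟩, where ⟨x⟩ = ⟨p⟩ = 0
dx = np.sqrt((vac @ x @ x @ vac).real)
dp = np.sqrt((vac @ p @ p @ vac).real)
assert abs(dx * dp - hbar / 2) < 1e-12    # the bound is saturated
```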
It means that a short distance implies large momentum, and therefore that particles of high energy must be used to explore short distances. QED concludes that the fine-structure constant is an increasing function of energy. It has been shown that at energies of the order of the Z 0 boson rest energy, m z c 2 ≈ 90 GeV: {\displaystyle \alpha \approx {\frac {1}{129}}} rather than the low-energy α ≈ 1 / 137 . [ 122 ] [ 123 ] The renormalization procedure of eliminating zero-point energy infinities allows the choice of an arbitrary energy (or distance) scale for defining α . All in all, α depends on the energy scale characteristic of the process under study, and also on details of the renormalization procedure. The energy dependence of α has been observed for several years now in precision experiments in high-energy physics.
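The quoted value α(M_Z) ≈ 1/129 can be roughly reproduced with the one-loop running formula; in the sketch below the light-quark "effective masses" are assumptions and thresholds are treated crudely, so only the approximate size of the shift is meaningful.

```python
import math

# One-loop QED running (a rough sketch, with assumed light-quark masses):
# 1/α(M_Z) ≈ 1/α(0) − (2/3π) Σ_f N_c Q_f² ln(M_Z/m_f)
alpha0_inv = 137.036
MZ = 91.19                        # Z boson mass, GeV
# (name, colors N_c, |charge| Q_f, effective mass in GeV — masses assumed)
fermions = [
    ("e", 1, 1.0, 0.000511), ("mu", 1, 1.0, 0.1057), ("tau", 1, 1.0, 1.777),
    ("u", 3, 2/3, 0.3), ("d", 3, 1/3, 0.3), ("s", 3, 1/3, 0.5),
    ("c", 3, 2/3, 1.5), ("b", 3, 1/3, 4.5),
]
delta = sum(2 / (3 * math.pi) * nc * q**2 * math.log(MZ / m)
            for _, nc, q, m in fermions)
alpha_MZ_inv = alpha0_inv - delta
assert 127 < alpha_MZ_inv < 131   # ≈ 1/129, as quoted
```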
In the presence of strong electrostatic fields it is predicted that virtual particles become separated from the vacuum state and form real matter. The fact that electromagnetic radiation can be transformed into matter and vice versa leads to fundamentally new features in quantum electrodynamics. One of the most important consequences is that, even in the vacuum, the Maxwell equations have to be replaced by more complicated formulas. It will in general not be possible to separate processes in the vacuum from the processes involving matter, since electromagnetic fields can create matter if the field fluctuations are strong enough. This leads to highly complex nonlinear interaction – gravity will have an effect on the light at the same time the light has an effect on gravity. These effects were first predicted by Werner Heisenberg and Hans Heinrich Euler in 1936 [ 124 ] and independently the same year by Victor Weisskopf who stated: "The physical properties of the vacuum originate in the "zero-point energy" of matter, which also depends on absent particles through the external field strengths and therefore contributes an additional term to the purely Maxwellian field energy". [ 125 ] [ 126 ] Thus strong magnetic fields vary the energy contained in the vacuum. The scale above which the electromagnetic field is expected to become nonlinear is known as the Schwinger limit . At this point the vacuum has all the properties of a birefringent medium , thus in principle a rotation of the polarization frame (the Faraday effect ) can be observed in empty space. [ 127 ] [ 128 ]
Einstein's theories of special and general relativity both state that light should pass freely through a vacuum without being altered, a principle known as Lorentz invariance . Yet, in theory, large nonlinear self-interaction of light due to quantum fluctuations should lead to this principle being measurably violated if the interactions are strong enough. Nearly all theories of quantum gravity predict that Lorentz invariance is not an exact symmetry of nature. It is predicted that the speed at which light travels through the vacuum depends on its direction, polarization and the local strength of the magnetic field. [ 129 ] There have been a number of inconclusive results which claim to show evidence of a Lorentz violation by finding a rotation of the polarization plane of light coming from distant galaxies. [ 130 ] The first concrete evidence for vacuum birefringence was published in 2017, when a team of astronomers studied the light coming from the star RX J1856.5-3754 , [ 131 ] the closest discovered neutron star to Earth . [ 132 ]
Roberto Mignani at the National Institute for Astrophysics in Milan, who led the team of astronomers, has commented that "When Einstein came up with the theory of general relativity 100 years ago, he had no idea that it would be used for navigational systems. The consequences of this discovery probably will also have to be realised on a longer timescale." [ 133 ] The team found that visible light from the star had undergone linear polarisation of around 16%. If the birefringence had been caused by light passing through interstellar gas or plasma, the effect should have been no more than 1%. Definitive proof would require repeating the observation at other wavelengths and on other neutron stars. At X-ray wavelengths the polarization from the quantum fluctuations should be near 100%. [ 134 ] Although no telescope currently exists that can make such measurements, there are several proposed X-ray telescopes that may soon be able to verify the result conclusively, such as China's Hard X-ray Modulation Telescope (HXMT) and NASA's Imaging X-ray Polarimetry Explorer (IXPE).
In the late 1990s it was discovered that very distant supernovae were dimmer than expected, suggesting that the universe's expansion was accelerating rather than slowing down. [ 136 ] [ 137 ] This revived the idea that Einstein's cosmological constant, long disregarded by physicists as being equal to zero, was in fact some small positive value. This would indicate that empty space exerts some form of negative pressure or energy .
There is no natural candidate for the cause of what has been called dark energy . The current best guess is that it is the zero-point energy of the vacuum, but this guess is known to be off by some 120 orders of magnitude . [ 138 ]
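The size of this mismatch can be sketched with a back-of-the-envelope comparison of the naive Planck-scale vacuum energy density against the observed dark-energy density; the constants below are standard, while the observed density is only an approximate round figure:

```python
import math

# SI constants
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2

# Naive estimate: summing zero-point modes up to the Planck scale gives an
# energy density of order one Planck energy per Planck volume, c^7/(hbar*G^2).
rho_vacuum = c**7 / (hbar * G**2)   # ~1e113 J/m^3

# Observed dark-energy density (approximate, ~69% of the critical density).
rho_dark = 6e-10                    # J/m^3

discrepancy = math.log10(rho_vacuum / rho_dark)
print(f"mismatch: ~{discrepancy:.0f} orders of magnitude")
```

With these round numbers the ratio comes out near the famous "120 orders of magnitude"; the exact figure depends on the chosen cutoff and the measured dark-energy density.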
The European Space Agency's Euclid telescope , launched on 1 July 2023, will map galaxies up to 10 billion light years away. [ 139 ] By seeing how dark energy influences their arrangement and shape, the mission will allow scientists to see if the strength of dark energy has changed. If dark energy is found to vary through time, it would indicate that it is due to quintessence , where the observed acceleration is due to the energy of a scalar field , rather than to the cosmological constant. No evidence of quintessence is yet available, but it has not been ruled out either. Quintessence generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time. [ 140 ] Scalar fields are predicted by the Standard Model of particle physics and string theory , but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmological inflation ) occurs: renormalization theory predicts that scalar fields should again acquire large masses due to zero-point energy.
Cosmic inflation is a phase of accelerated cosmic expansion just after the Big Bang. It explains the origin of the large-scale structure of the cosmos . It is believed that quantum vacuum fluctuations caused by zero-point energy, arising in the microscopic inflationary period, were later magnified to a cosmic size, becoming the gravitational seeds for galaxies and structure in the Universe (see galaxy formation and evolution and structure formation ). [ 141 ] Many physicists also believe that inflation explains why the Universe appears to be the same in all directions ( isotropic ), why the cosmic microwave background radiation is distributed evenly, why the Universe is flat, and why no magnetic monopoles have been observed.
The mechanism for inflation is unclear; it is similar in effect to dark energy but was a far more energetic and short-lived process. As with dark energy, the best explanation is some form of vacuum energy arising from quantum fluctuations. It may be that inflation caused baryogenesis , the hypothetical physical processes that produced an asymmetry (imbalance) between baryons and antibaryons in the very early universe , but this is far from certain.
Paul S. Wesson examined the cosmological implications of assuming that zero-point energy is real. [ 142 ] Among numerous difficulties, general relativity requires that such energy not gravitate, so it cannot be similar to electromagnetic radiation.
There has been a long debate [ 143 ] over the question of whether zero-point fluctuations of quantized vacuum fields are "real", i.e. do they have physical effects that cannot be interpreted by an equally valid alternative theory? Schwinger , in particular, attempted to formulate QED without reference to zero-point fluctuations via his "source theory". [ 144 ] From such an approach it is possible to derive the Casimir effect without reference to a fluctuating field. Such a derivation was first given by Schwinger (1975) [ 145 ] for a scalar field, and then generalized to the electromagnetic case by Schwinger, DeRaad, and Milton (1978), [ 146 ] in which they state "the vacuum is regarded as truly a state with all physical properties equal to zero". Jaffe (2005) [ 147 ] has highlighted a similar approach in deriving the Casimir effect, stating "the concept of zero-point fluctuations is a heuristic and calculational aid in the description of the Casimir effect, but not a necessity in QED."
Milonni has shown the necessity of the vacuum field for the formal consistency of QED. [ 148 ] Modern physics knows no better way to construct gauge-invariant, renormalizable theories than with zero-point energy, and it would seem to be a necessity for any attempt at a unified theory . [ 149 ] Nevertheless, as pointed out by Jaffe, "no known phenomenon, including the Casimir effect, demonstrates that zero point energies are 'real'". [ 147 ]
The mathematical models used in classical electromagnetism , quantum electrodynamics (QED) and the Standard Model all view the electromagnetic vacuum as a linear system with no overall observable consequence. For example, phenomena such as the Casimir effect and the Lamb shift can be explained by mechanisms other than the action of the vacuum, via arbitrary changes to the normal ordering of field operators (see the alternative theories section). This is a consequence of viewing electromagnetism as a U(1) gauge theory, which topologically does not allow the complex interaction of a field with and on itself. [ 150 ] In higher symmetry groups and in reality, the vacuum is not a calm, randomly fluctuating, largely immaterial and passive substance, but at times can be viewed as a turbulent virtual plasma that can have complex vortices (i.e. solitons vis-à-vis particles), entangled states and a rich nonlinear structure. [ 151 ] There are many observed nonlinear physical electromagnetic phenomena, such as the Aharonov–Bohm (AB) [ 152 ] [ 153 ] and Altshuler–Aronov–Spivak (AAS) effects, [ 154 ] the Berry , [ 155 ] Aharonov–Anandan, [ 156 ] Pancharatnam [ 157 ] and Chiao–Wu [ 158 ] phase rotation effects, the Josephson effect , [ 159 ] [ 160 ] the Quantum Hall effect , [ 161 ] the De Haas–Van Alphen effect , [ 162 ] and the Sagnac effect , which indicate that the electromagnetic potential field has real physical meaning rather than being a mathematical artifact. [ 163 ] An all-encompassing theory would therefore not confine electromagnetism to a local force, as is currently done, but would treat it as an SU(2) gauge theory or higher geometry. Higher symmetries allow for nonlinear, aperiodic behaviour which manifests as a variety of complex non-equilibrium phenomena that do not arise in the linearised U(1) theory, such as multiple stable states, symmetry breaking, chaos and emergence . [ 164 ]
What are called Maxwell's equations today are in fact a simplified version of the original equations, as reformulated by Heaviside , FitzGerald , Lodge and Hertz . The original equations used Hamilton 's more expressive quaternion notation, [ 165 ] a kind of Clifford algebra , which fully subsumes the standard Maxwell vectorial equations largely used today. [ 166 ] In the late 1880s there was a debate over the relative merits of vector analysis and quaternions. According to Heaviside the electromagnetic potential field was purely metaphysical, an arbitrary mathematical fiction, that needed to be "murdered". [ 167 ] It was concluded that there was no need for the greater physical insights provided by the quaternions if the theory was purely local in nature. Local vector analysis has been the dominant way of using Maxwell's equations ever since. However, this strictly vectorial approach has led to a restrictive topological understanding in some areas of electromagnetism; for example, a full understanding of the energy transfer dynamics in Tesla's oscillator-shuttle-circuit can only be achieved in quaternionic algebra or higher SU(2) symmetries. [ 168 ] It has often been argued that quaternions are not compatible with special relativity, [ 169 ] but multiple papers have shown ways of incorporating relativity. [ 170 ] [ 171 ] [ 172 ] [ 173 ]
A good example of nonlinear electromagnetics is in high-energy dense plasmas, where vortical phenomena occur which seemingly violate the second law of thermodynamics by increasing the energy gradient within the electromagnetic field, and violate Maxwell's laws by creating ion currents which capture and concentrate their own and surrounding magnetic fields. In particular the Lorentz force law , which elaborates Maxwell's equations, is violated by these force-free vortices. [ 174 ] [ 175 ] [ 176 ] These apparent violations are due to the fact that the traditional conservation laws in classical and quantum electrodynamics (QED) only display linear U(1) symmetry (in particular, by the extended Noether theorem , [ 177 ] conservation laws such as the laws of thermodynamics need not always apply to dissipative systems , [ 178 ] [ 179 ] which are expressed in gauges of higher symmetry). The second law of thermodynamics states that in a closed linear system entropy flow can only be positive (or exactly zero at the end of a cycle). However, negative entropy (i.e. increased order, structure or self-organisation) can spontaneously appear in an open nonlinear thermodynamic system that is far from equilibrium, so long as this emergent order accelerates the overall flow of entropy in the total system. The 1977 Nobel Prize in Chemistry was awarded to thermodynamicist Ilya Prigogine [ 180 ] for his theory of dissipative systems that described this notion. Prigogine described the principle as "order through fluctuations" [ 181 ] or "order out of chaos". [ 182 ] It has been argued by some that all emergent order in the universe, from galaxies, solar systems, planets, weather, complex chemistry and evolutionary biology to even consciousness, technology and civilizations, are themselves examples of thermodynamic dissipative systems; nature having naturally selected these structures to accelerate entropy flow within the universe to an ever-increasing degree. [ 183 ] For example, it has been estimated that the human body is 10,000 times more effective at dissipating energy per unit of mass than the sun. [ 184 ]
One may query what this has to do with zero-point energy. Given the complex and adaptive behaviour that arises from nonlinear systems, considerable attention in recent years has gone into studying a new class of phase transitions which occur at absolute zero temperature. These are quantum phase transitions which are driven by EM field fluctuations as a consequence of zero-point energy. [ 185 ] A good example of a spontaneous phase transition attributed to zero-point fluctuations can be found in superconductors . Superconductivity is one of the best known empirically quantified macroscopic electromagnetic phenomena whose basis is recognised to be quantum mechanical in origin. The behaviour of the electric and magnetic fields under superconductivity is governed by the London equations . However, it has been questioned in a series of journal articles whether the quantum mechanically canonised London equations can be given a purely classical derivation. [ 186 ] Bostick, [ 187 ] [ 188 ] for instance, has claimed to show that the London equations do indeed have a classical origin that applies to superconductors and to some collisionless plasmas as well. In particular it has been asserted that the Beltrami vortices in the plasma focus display the same paired flux-tube morphology as Type II superconductors . [ 189 ] [ 190 ] Others have also pointed out this connection: Fröhlich [ 191 ] has shown that the hydrodynamic equations of compressible fluids, together with the London equations, lead to a macroscopic parameter (μ = electric charge density / mass density) without involving either quantum phase factors or the Planck constant. In essence, it has been asserted that Beltrami plasma vortex structures are able to at least simulate the morphology of Type I and Type II superconductors .
This occurs because the "organised" dissipative energy of the vortex configuration comprising the ions and electrons far exceeds the "disorganised" dissipative random thermal energy. The transition from disorganised fluctuations to organised helical structures is a phase transition involving a change in the condensate's energy (i.e. the ground state or zero-point energy) but without any associated rise in temperature . [ 192 ] This is an example of zero-point energy having multiple stable states (see Quantum phase transition , Quantum critical point , Topological degeneracy , Topological order [ 193 ] ), where the overall system structure is independent of a reductionist or deterministic view, and where "classical" macroscopic order can also causally affect quantum phenomena. Furthermore, the pair production of Beltrami vortices has been compared to the morphology of pair production of virtual particles in the vacuum.
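As a concrete consequence of the London equations referred to above, an applied magnetic field decays exponentially inside a superconductor over the London penetration depth λ_L = √(m_e / (μ₀ n_s e²)). A minimal numeric sketch, where the superconducting carrier density n_s is an assumed typical value rather than a property of any specific material:

```python
import math

# Second London equation: curl J = -(n_s e^2 / m_e) B, which makes the field
# fall off inside the superconductor as B(x) = B0 * exp(-x / lambda_L).
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
m_e = 9.109e-31            # electron mass, kg
e = 1.602e-19              # elementary charge, C
n_s = 1e28                 # assumed carrier density, m^-3 (illustrative)

lambda_L = math.sqrt(m_e / (mu0 * n_s * e**2))
print(f"lambda_L ~ {lambda_L * 1e9:.0f} nm")
```

With this assumed density the depth comes out at tens of nanometres, in line with the fact that magnetic fields are expelled from all but a thin surface layer (the Meissner effect).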
The idea that the vacuum energy can have multiple stable energy states is a leading hypothesis for the cause of cosmic inflation . In fact, it has been argued that these early vacuum fluctuations led to the expansion of the universe and in turn have guaranteed the non-equilibrium conditions necessary to drive order from chaos, as without such expansion the universe would have reached thermal equilibrium and no complexity could have existed. With the continued accelerated expansion of the universe, the cosmos generates an energy gradient that increases the "free energy" (i.e. the available, usable or potential energy for useful work) which the universe is able to use to create ever more complex forms of order. [ 194 ] [ 195 ] The only reason Earth's environment does not decay into an equilibrium state is that it receives a daily dose of sunshine and that, in turn, is due to the sun "polluting" interstellar space with entropy. The sun's fusion power is only possible due to the gravitational disequilibrium of matter that arose from cosmic expansion. In this essence, the vacuum energy can be viewed as the key cause of the structure throughout the universe. That humanity might alter the morphology of the vacuum energy to create an energy gradient for useful work is the subject of much controversy.
Physicists overwhelmingly reject any possibility that the zero-point energy field can be exploited to obtain useful energy ( work ) or uncompensated momentum; such efforts are seen as tantamount to perpetual motion machines .
Nevertheless, the allure of free energy has motivated such research, usually falling in the category of fringe science . As long ago as 1889 (before quantum theory or the discovery of zero-point energy) Nikola Tesla proposed that useful energy could be obtained from free space, or what was assumed at that time to be an all-pervasive aether . [ 196 ] Others have since claimed to exploit zero-point or vacuum energy, with a large amount of pseudoscientific literature causing ridicule around the subject. [ 197 ] [ 198 ] Despite rejection by the scientific community, harnessing zero-point energy remains an interest of research, particularly in the US, where it has attracted the attention of major aerospace/defence contractors and the U.S. Department of Defense, as well as in China, Germany, Russia and Brazil. [ 197 ] [ 199 ]
A common assumption is that the Casimir force is of little practical use; the argument is made that the only way to actually gain energy from the two plates is to allow them to come together (getting them apart again would then require more energy), and therefore it is a one-use-only tiny force in nature. [ 197 ] In 1984 Robert Forward published work showing how a "vacuum-fluctuation battery" could be constructed; the battery can be recharged by making the electrical forces slightly stronger than the Casimir force to reexpand the plates. [ 200 ]
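For a sense of the forces such a battery would work against, the idealized parallel-plate Casimir pressure is P(d) = π²ħc / (240 d⁴). A quick numeric sketch (the 100 nm separation is chosen purely for illustration):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure (Pa) between ideal parallel plates separated by d metres."""
    return math.pi**2 * hbar * c / (240 * d**4)

p_100nm = casimir_pressure(100e-9)
print(f"~{p_100nm:.0f} Pa at 100 nm separation")
```

The d⁻⁴ dependence is why the effect is negligible at everyday scales but significant in nanoscale devices: halving the gap increases the pressure sixteen-fold.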
In 1999, Pinto, a former scientist at NASA 's Jet Propulsion Laboratory at Caltech in Pasadena, published in Physical Review his thought experiment (Gedankenexperiment) for a "Casimir engine". The paper showed that continuous positive net exchange of energy from the Casimir effect was possible, even stating in the abstract "In the event of no other alternative explanations, one should conclude that major technological advances in the area of endless, by-product free-energy production could be achieved." [ 201 ]
Garret Moddel at the University of Colorado has highlighted that such devices hinge on the assumption that the Casimir force is a nonconservative force ; he argues that there is sufficient evidence (e.g. the analysis by Scandurra (2001) [ 202 ] ) to say that the Casimir effect is a conservative force, and therefore, even though such an engine can exploit the Casimir force for useful work, it cannot produce more output energy than has been input into the system. [ 203 ]
In 2008, DARPA solicited research proposals in the area of Casimir Effect Enhancement (CEE). The goal of the program is to develop new methods to control and manipulate attractive and repulsive forces at surfaces based on engineering of the Casimir force. [ 204 ]
A 2008 patent by Haisch and Moddel [ 205 ] details a device that is able to extract power from zero-point fluctuations using a gas that circulates through a Casimir cavity. A published test of this concept by Moddel [ 206 ] was performed in 2012 and seemed to give excess energy that could not be attributed to another source. However, it has not been conclusively shown to come from zero-point energy, and the theory requires further investigation. [ 207 ]
In 1951 Callen and Welton [ 76 ] proved the quantum fluctuation-dissipation theorem (FDT), which was originally formulated in classical form by Nyquist (1928) [ 77 ] as an explanation for the observed Johnson noise [ 78 ] in electric circuits. The fluctuation-dissipation theorem showed that when something dissipates energy, in an effectively irreversible way, a connected heat bath must also fluctuate. The fluctuations and the dissipation go hand in hand; it is impossible to have one without the other. The implication of FDT is that the vacuum could be treated as a heat bath coupled to a dissipative force, and as such energy could, in part, be extracted from the vacuum for potentially useful work. [ 79 ] Such a theory has met with resistance: Macdonald (1962) [ 208 ] and Harris (1971) [ 209 ] claimed that extracting power from the zero-point energy is impossible, so FDT could not be true. Grau and Kleen (1982) [ 210 ] and Kleen (1986) [ 211 ] argued that the Johnson noise of a resistor connected to an antenna must satisfy Planck's thermal radiation formula, thus the noise must be zero at zero temperature and FDT must be invalid. Kiss (1988) [ 212 ] pointed out that the existence of the zero-point term may indicate that there is a renormalization problem, i.e. a mathematical artifact producing an unphysical term that is not actually present in measurements (in analogy with renormalization problems of ground states in quantum electrodynamics). Later, Abbott et al. (1996) arrived at a different but unclear conclusion that "zero-point energy is infinite thus it should be renormalized but not the 'zero-point fluctuations'". [ 213 ] Despite such criticism, FDT has been shown to be true experimentally under certain quantum, non-classical conditions. Zero-point fluctuations can, and do, contribute towards systems which dissipate energy. [ 80 ] A paper by Armen Allahverdyan and Theo Nieuwenhuizen in 2000 showed the feasibility of extracting zero-point energy for useful work from a single bath, without contradicting the laws of thermodynamics , by exploiting certain quantum mechanical properties. [ 81 ]
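Nyquist's classical form of the fluctuation-dissipation relation gives the RMS Johnson noise voltage across a resistor as V_rms = √(4 k_B T R Δf). A minimal sketch with assumed example values (the resistance, temperature and bandwidth are illustrative, not from any cited experiment):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

def johnson_noise_vrms(R, T, bandwidth):
    """Nyquist's classical result: RMS thermal noise voltage across a resistor."""
    return math.sqrt(4 * k_B * T * R * bandwidth)

# Assumed example: a 1 kOhm resistor at room temperature, measured over 10 kHz.
v = johnson_noise_vrms(R=1e3, T=300, bandwidth=10e3)
print(f"{v * 1e6:.2f} uV rms")
```

The quantum version proved by Callen and Welton adds a zero-point term to this spectrum, which is the origin of the dispute over whether that term is physically extractable.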
There have been a growing number of papers showing that in some instances the classical laws of thermodynamics, such as limits on the Carnot efficiency, can be violated by exploiting negative entropy of quantum fluctuations. [ 82 ] [ 214 ] [ 215 ] [ 216 ] [ 217 ] [ 218 ] [ 219 ] [ 220 ] [ 221 ] [ 222 ]
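For orientation, the classical Carnot efficiency those papers benchmark against depends only on the reservoir temperatures, η = 1 − T_cold/T_hot:

```python
def carnot_efficiency(t_hot, t_cold):
    """Classical upper bound on heat-engine efficiency (temperatures in kelvin)."""
    return 1 - t_cold / t_hot

# Example: an engine running between 600 K and 300 K reservoirs.
eta = carnot_efficiency(t_hot=600.0, t_cold=300.0)
print(f"Carnot limit: {eta:.0%}")
```

The claimed quantum violations concern engines coupled to non-thermal (e.g. squeezed or entangled) reservoirs, to which this two-temperature bound does not straightforwardly apply.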
Despite efforts to reconcile quantum mechanics and thermodynamics over the years, their compatibility is still an open fundamental problem. The full extent to which quantum properties can alter classical thermodynamic bounds is unknown. [ 223 ]
The use of zero-point energy for space travel is speculative and does not form part of the mainstream scientific consensus. A complete quantum theory of gravitation (that would deal with the role of quantum phenomena like zero-point energy) does not yet exist. Speculative papers explaining a relationship between zero-point energy and gravitational shielding effects have been proposed, [ 16 ] [ 224 ] [ 225 ] [ 226 ] but the interaction (if any) is not yet fully understood. According to the general theory of relativity , rotating matter can generate a new force of nature, known as the gravitomagnetic interaction, whose intensity is proportional to the rate of spin. [ 227 ] In certain conditions the gravitomagnetic field can be repulsive. In neutron stars for example, it can produce a gravitational analogue of the Meissner effect , but the force produced in such an example is theorized to be exceedingly weak. [ 228 ]
In 1963 Robert Forward , a physicist and aerospace engineer at Hughes Research Laboratories , published a paper showing how within the framework of general relativity "anti-gravitational" effects might be achieved. [ 229 ] Since all atoms have spin , gravitational permeability may differ from material to material. A strong toroidal gravitational field that acts against the force of gravity could be generated by materials that have nonlinear properties that enhance time-varying gravitational fields. Such an effect would be analogous to the nonlinear electromagnetic permeability of iron, which makes it an effective core (i.e. the doughnut of iron) in a transformer, whose properties are dependent on magnetic permeability. [ 230 ] [ 231 ] [ 232 ] In 1966 Dewitt [ 233 ] was the first to identify the significance of gravitational effects in superconductors. Dewitt demonstrated that a magnetic-type gravitational field must result in the presence of fluxoid quantization . In 1983, Dewitt's work was substantially expanded by Ross. [ 234 ]
From 1971 to 1974 Henry William Wallace, a scientist at GE Aerospace , was issued three patents. [ 235 ] [ 236 ] [ 237 ] Wallace used Dewitt's theory to develop an experimental apparatus for generating and detecting a secondary gravitational field, which he named the kinemassic field (now better known as the gravitomagnetic field). In his three patents, Wallace describes three different methods used for detection of the gravitomagnetic field – change in the motion of a body on a pivot, detection of a transverse voltage in a semiconductor crystal, and a change in the specific heat of a crystal material having spin-aligned nuclei. There are no publicly available independent tests verifying Wallace's devices; such an effect, if any, would be small. [ 238 ] [ 239 ] [ 240 ] [ 241 ] [ 242 ] [ 243 ] Referring to Wallace's patents, a New Scientist article in 1980 stated "Although the Wallace patents were initially ignored as cranky, observers believe that his invention is now under serious but secret investigation by the military authorities in the USA. The military may now regret that the patents have already been granted and so are available for anyone to read." [ 244 ] A further reference to Wallace's patents occurs in an electric propulsion study prepared for the Astronautics Laboratory at Edwards Air Force Base which states: "The patents are written in a very believable style which include part numbers, sources for some components, and diagrams of data. Attempts were made to contact Wallace using patent addresses and other sources but he was not located nor is there a trace of what became of his work. The concept can be somewhat justified on general relativistic grounds since rotating frames of time varying fields are expected to emit gravitational waves." [ 245 ]
In 1986 the U.S. Air Force 's then Rocket Propulsion Laboratory (RPL) at Edwards Air Force Base solicited "Non Conventional Propulsion Concepts" under a small business research and innovation program. One of the six areas of interest was "Esoteric energy sources for propulsion, including the quantum dynamic energy of vacuum space..." In the same year BAE Systems launched "Project Greenglow" to provide a "focus for research into novel propulsion systems and the means to power them". [ 199 ] [ 246 ]
In 1988 Kip Thorne et al. [ 247 ] published work showing how traversable wormholes can exist in spacetime only if they are threaded by quantum fields generated by some form of exotic matter that has negative energy . In 1993 Scharnhorst and Barton [ 117 ] showed that the speed of a photon will be increased if it travels between two Casimir plates, an example of negative energy. In the most general sense, the exotic matter needed to create wormholes would share the repulsive properties of the inflationary energy , dark energy or zero-point radiation of the vacuum. [ 248 ]
In 1992 Evgeny Podkletnov [ 249 ] published a heavily debated [ 250 ] [ 251 ] [ 252 ] [ 253 ] journal article claiming a specific type of rotating superconductor could shield gravitational force. Independently of this, from 1991 to 1993 Ning Li and Douglas Torr published a number of articles [ 254 ] [ 255 ] [ 256 ] about gravitational effects in superconductors. One finding they derived is that the source of gravitomagnetic flux in a type II superconductor material is the spin alignment of the lattice ions. Quoting from their third paper: "It is shown that the coherent alignment of lattice ion spins will generate a detectable gravitomagnetic field, and in the presence of a time-dependent applied magnetic vector potential field, a detectable gravitoelectric field." The claimed size of the generated force has been disputed by some [ 257 ] [ 258 ] but defended by others. [ 259 ] [ 260 ] In 1997 Li published a paper attempting to replicate Podkletnov's results and showed the effect was very small, if it existed at all. [ 261 ] Li is reported to have left the University of Alabama in 1999 to found the company AC Gravity LLC . [ 262 ] AC Gravity was awarded a U.S. Department of Defense grant for $448,970 in 2001 to continue anti-gravity research. The grant period ended in 2002 but no results from this research were made public. [ 263 ]
In 2002 Phantom Works , Boeing 's advanced research and development facility in Seattle , approached Evgeny Podkletnov directly, but was blocked by Russian technology transfer controls. At this time Lieutenant General George Muellner, the outgoing head of the Boeing Phantom Works, confirmed that attempts by Boeing to work with Podkletnov had been blocked by the Russian government, also commenting that "The physical principles – and Podkletnov's device is not the only one – appear to be valid... There is basic science there. They're not breaking the laws of physics. The issue is whether the science can be engineered into something workable." [ 264 ]
Froning and Roach (2002) [ 265 ] put forward a paper that builds on the work of Puthoff, Haisch and Alcubierre. They used fluid dynamic simulations to model the interaction of a vehicle (like that proposed by Alcubierre) with the zero-point field. Vacuum field perturbations are simulated by fluid field perturbations, and the aerodynamic resistance of viscous drag exerted on the interior of the vehicle is compared to the Lorentz force exerted by the zero-point field (a Casimir-like force is exerted on the exterior by unbalanced zero-point radiation pressures). They find that the negative energy required for an Alcubierre drive is minimised for a saucer-shaped vehicle with toroidal electromagnetic fields. The EM fields distort the vacuum field perturbations surrounding the craft sufficiently to affect the permeability and permittivity of space.
In 2009, Giorgio Fontana and Bernd Binder presented a new method to potentially extract the zero-point energy of the electromagnetic field and nuclear forces in the form of gravitational waves . [ 266 ] In the spheron model of the nucleus, [ 267 ] proposed by the two-time Nobel laureate Linus Pauling , dineutrons are among the components of this structure. Similarly to a dumbbell put in a suitable rotational state , but with nuclear mass density, dineutrons are nearly ideal sources of gravitational waves at X-ray and gamma-ray frequencies. The dynamical interplay, mediated by nuclear forces, between the electrically neutral dineutrons and the electrically charged core nucleus is the fundamental mechanism by which nuclear vibrations can be converted to a rotational state of dineutrons with emission of gravitational waves. Gravity and gravitational waves are well described by general relativity, which is not a quantum theory; this implies that there is no zero-point energy for gravity in this theory, and therefore dineutrons will emit gravitational waves like any other known source of gravitational waves. In Fontana and Binder's paper, nuclear species that possess dineutrons and exhibit dynamical instabilities related to the zero-point energy of the electromagnetic field and nuclear forces will emit gravitational waves. In experimental physics this approach is still unexplored.
In 2014 NASA 's Eagleworks Laboratories announced that they had successfully validated the use of a Quantum Vacuum Plasma Thruster which makes use of the Casimir effect for propulsion. [ 268 ] [ 269 ] [ 270 ] In 2016 a scientific paper by the team of NASA scientists passed peer review for the first time. [ 271 ] The paper suggests that the zero-point field acts as pilot-wave and that the thrust may be due to particles pushing off the quantum vacuum. While peer review doesn't guarantee that a finding or observation is valid, it does indicate that independent scientists looked over the experimental setup, results, and interpretation and that they could not find any obvious errors in the methodology and that they found the results reasonable. In the paper, the authors identify and discuss nine potential sources of experimental errors, including rogue air currents, leaky electromagnetic radiation, and magnetic interactions. Not all of them could be completely ruled out, and further peer-reviewed experimentation is needed in order to rule these potential errors out. [ 272 ]
https://en.wikipedia.org/wiki/Zero-point_energy
Zero-rating is the practice of providing Internet access without financial cost under certain conditions, such as by permitting access to only certain websites or by subsidizing the service with advertising or by exempting certain websites from the data allowance. [ 1 ] [ 2 ]
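The data-allowance exemption described above can be illustrated with a toy accounting sketch; all host names and numbers here are invented for illustration, and real operators classify traffic at the network layer rather than per host name:

```python
# Hypothetical set of zero-rated services for one mobile plan.
ZERO_RATED_HOSTS = {"music.example.com", "video.example.com"}

def bill_usage(sessions, cap_mb):
    """sessions: list of (host, megabytes) pairs.

    Returns (metered_mb, over_cap): only traffic to hosts that are NOT
    zero-rated is counted against the customer's data allowance.
    """
    metered = sum(mb for host, mb in sessions if host not in ZERO_RATED_HOSTS)
    return metered, metered > cap_mb

# 500 MB of zero-rated streaming plus 120 MB of ordinary browsing:
sessions = [("music.example.com", 500), ("news.example.org", 120)]
metered, over = bill_usage(sessions, cap_mb=1000)
print(metered, over)  # only the 120 MB of ordinary traffic is metered
```

This is also why the practice raises net-neutrality concerns: the operator's classification list, not the customer, determines which traffic is effectively free.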
Commentators often discuss zero-rating in the context of net neutrality . [ 2 ] While most sources report that the use of zero-rating is contrary to the principle of net neutrality, there are mixed opinions among advocates of net neutrality about the extent to which people can benefit from zero-rating programs while retaining net neutrality protections. [ 2 ] Supporters of zero-rating argue that it enables consumers to make choices to access more data and leads to more people using online services, but critics believe zero-rating exploits the poor, creates opportunities for censorship, and disrupts the free market . [ 2 ]
Internet services like Facebook , Wikipedia and Google have built special programs that use zero-rating as a means to provide their services more broadly in developing markets. The benefit for these new customers, who will mostly have to rely on mobile networks to connect to the Internet, would be subsidised access to services from these providers. The results of these efforts have been mixed, with adoption in a number of markets, sometimes overestimated expectations, and a perceived lack of benefits for mobile network operators. [ 3 ] In Chile , the national telecom regulator ruled that this practice violated net neutrality laws and had to end by June 1, 2014. [ 4 ] [ 5 ] The Federal Communications Commission did not ban zero-rating programs, but it "acknowledged that they could violate the spirit of net neutrality". [ 6 ]
Since June 2014, U.S. mobile provider T-Mobile US has offered zero-rated access to participating music streaming services to its mobile internet customers. Its plan, called “Music Freedom”, exempts users from paying premium prices for access to music content; additionally, this content does not count against an individual's data cap, the limit they can reach before being charged for data. [ 7 ] [ 8 ] In November 2015, T-Mobile US expanded zero-rated access to video streaming services. [ 9 ]
In January 2016, Verizon joined AT&T by creating its own sponsored data program, FreeBee Data, which "enables content providers to pay a wireless provider to allow its subscribers to engage with or consume a piece of content without it counting against the customers' monthly allotments". [ 10 ] Sponsored data on behalf of content providers through AT&T or Verizon covers the costs for the viewers and attracts more consumers. Some people have characterized this as ISPs having created a toll-free service for online users.
Advocates of net neutrality state that sponsored data "allows well-heeled content providers to pay for placement to the disadvantage of smaller companies that can't afford the same luxury". [ 11 ] Verizon's FreeBee Data program allows its own customers to access certain content, like ESPN and its video streaming service, for free, along with any other relevant app access, without the data counting against their monthly caps. In this way, big ISPs discriminate against data and content from those who do not pay to have their content included in FreeBee or other sponsored programs.
Similarly, mobile network operators are also able to use the underlying classification technology, such as deep packet inspection , to redirect enterprise-related data charges for employees using their private tablets or smartphones to their employer. [ 12 ] Such toll-free / zero-rated applications allow employees to participate in bring your own device (BYOD) programs.
In education, as a response to the closure of school buildings during the COVID-19 pandemic, the Colombian government created a learning resources platform for mobile phones ( movil.colombiaaprende ) and "published a decree requesting mobile operators to provide zero-rating conditions for access to specific education services and websites (both voice and data). The government reached an agreement with mobile and Internet operators ensuring all inhabitants have access to educational content and guidelines, in particular lower income households, with a cap at about USD 20." [ 13 ]
Starting in 2015, Facebook was zero-rated in India . A year later, the local regulator forbade the practice. [ 14 ] The popular application WhatsApp [ 15 ] has regularly been singled out by journalists , bloggers and observers for its intensive use of zero-rating to encourage use of the application at no charge against users' subscription quotas, in countries such as Brazil , [ 16 ] South Africa , [ 17 ] [ 18 ] Argentina , [ 19 ] [ 20 ] [ 21 ] and Mexico . [ 22 ]
A lengthy 2017 report [ 23 ] notes that a number of emerging or small countries permit zero-rating for a range of services, especially those of the GAFAM companies, other big companies ( Yahoo , Twitter ), and smaller ones (including music streaming ). The countries providing zero-rating are Brazil , Chile , Colombia , Costa Rica , Dominican Republic , Ecuador , El Salvador , Guatemala , Honduras , Jamaica , Mexico , Nicaragua , Paraguay , Peru , and Trinidad and Tobago . The report notes that three services are systematically present in each country: Facebook , WhatsApp and Twitter .
Zero-rating certain services, fast lanes and sponsored data have been criticised as anti-competitive and limiting open markets . [ 24 ] It enables internet providers to gain a significant advantage in the promotion of in-house services over competing independent companies, especially in data-heavy markets like video-streaming. A service provider offering unlimited access to their service will naturally seem more favourable to consumers than one where usage is limited. If the first provider is the one restricting access, they are creating a considerable advantage for themselves over their competition, thereby restricting the freedom of the market. As many new internet and content services are launched targeting primarily mobile usage, and further adoption of internet connectivity globally (including broadband in rural areas of developed countries) relies heavily on mobile, zero-rating has also been regarded as a threat to the open internet, which is typically available via fixed line networks with unlimited usage tariffs or flat rates . [ 25 ] [ 26 ] Facebook and the Wikimedia Foundation have been specifically criticized for zero-rating programs that further strengthen incumbent mobile network operators and limit consumers' right to an open internet. [ 27 ]
The United States has not officially made a decision on the regulation of zero-rating providers, instead adopting a “wait-and-see” approach. The FCC has therefore elected to examine zero-rating on a case-by-case basis under a “general conduct rule” that “prohibits unreasonable interference with end users’ ability to select content and content providers’ ability to reach end users”. [ 28 ] Days before the Trump inauguration, the Obama administration's FCC issued a report expressing concerns with T-Mobile, Verizon and AT&T and their sponsored data programs. The FCC's Wireless Telecommunications Bureau found issues in wireless broadband services that vertically integrate their own affiliated programming, along with service providers allowing unaffiliated content providers to sponsor data. The report concluded that vertically affiliated broadband providers that zero-rate affiliated content most likely violate the general conduct rule. [ 29 ]
In the EU, specific cases such as those of Portugal were under scrutiny by national and EU regulators as of 2017, following the BEREC regulation on net neutrality. [ 30 ]
In addition to commercial interests, governments with a cultural agenda may support zero-rating for local content. [ 31 ]
|
https://en.wikipedia.org/wiki/Zero-rating
|
In number theory , zero-sum problems are certain kinds of combinatorial problems about the structure of a finite abelian group . Concretely, given a finite abelian group G and a positive integer n , one asks for the smallest value of k such that every sequence of elements of G of size k contains n terms that sum to 0 .
The classic result in this area is the 1961 theorem of Paul Erdős , Abraham Ginzburg , and Abraham Ziv . [ 1 ] They proved that for the group Z/nZ of integers modulo n ,
k = 2n − 1.
Explicitly this says that any multiset of 2 n − 1 integers has a subset of size n the sum of whose elements is a multiple of n , but that the same is not true of multisets of size 2 n − 2. (Indeed, the lower bound is easy to see: the multiset containing n − 1 copies of 0 and n − 1 copies of 1 contains no n -subset summing to a multiple of n .) This result is known as the Erdős–Ginzburg–Ziv theorem after its discoverers. It may also be deduced from the Cauchy–Davenport theorem . [ 2 ]
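The theorem can be verified exhaustively for small n. The following brute-force sketch in Python (the helper names are illustrative, not from any standard library) checks every multiset of 2n − 1 residues for an n-element sub-multiset summing to 0 mod n, and confirms the bound is tight using the witness of n − 1 zeros and n − 1 ones:

```python
from itertools import combinations, combinations_with_replacement

def has_zero_sum_subset(seq, n):
    """True if some n-element sub-multiset of seq sums to 0 mod n."""
    return any(sum(c) % n == 0 for c in combinations(seq, n))

def check_egz(n):
    """Brute-force check of the Erdos-Ginzburg-Ziv theorem for Z/nZ."""
    # Every multiset of 2n - 1 residues must contain n terms summing to 0 mod n.
    assert all(has_zero_sum_subset(s, n)
               for s in combinations_with_replacement(range(n), 2 * n - 1))
    # The bound is tight: n - 1 zeros and n - 1 ones have no such n-subset,
    # since any n of them contain between 1 and n - 1 ones.
    witness = (0,) * (n - 1) + (1,) * (n - 1)
    assert not has_zero_sum_subset(witness, n)

for n in (2, 3, 4, 5):
    check_egz(n)
print("EGZ verified for n = 2..5")
```

The check is exponential in n, so it is only practical for very small groups; it is a sanity check of the statement, not a proof.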
More general results than this theorem exist, such as Olson's theorem , Kemnitz's conjecture (proved by Christian Reiher in 2003 [ 3 ] ), and the weighted EGZ theorem (proved by David J. Grynkiewicz in 2005 [ 4 ] ).
This combinatorics -related article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Zero-sum_problem
|
Zero-sum thinking is the perception of situations as zero-sum games , in which one person's gain would be another's loss. [ 1 ] [ 2 ] [ 3 ] The term is derived from game theory . However, unlike the game theory concept, zero-sum thinking refers to a psychological construct —a person's subjective interpretation of a situation. Zero-sum thinking is captured by the saying "your gain is my loss" (or conversely, "your loss is my gain"). Rozycka-Tran et al. (2015) defined zero-sum thinking as:
A general belief system about the antagonistic nature of social relations, shared by people in a society or culture and based on the implicit assumption that a finite amount of goods exists in the world, in which one person's winning makes others the losers, and vice versa ... a relatively permanent and general conviction that social relations are like a zero-sum game. People who share this conviction believe that success, especially economic success, is possible only at the expense of other people's failures. [ 1 ]
Zero-sum bias is a cognitive bias towards zero-sum thinking; it is people's tendency to intuitively judge that a situation is zero-sum, even when this is not the case. [ 4 ] This bias promotes zero-sum fallacies , false beliefs that situations are zero-sum. Such fallacies can cause other false judgements and poor decisions. [ 5 ] [ 6 ] In economics, "zero-sum fallacy" generally refers to the fixed-pie fallacy .
There are many examples of zero-sum thinking, some of them fallacious .
There is no evidence which suggests that zero-sum thinking is an enduring feature of human psychology. Game-theoretic situations rarely apply to instances of individual behaviour. This is demonstrated by the ordinary response to the prisoner's dilemma .
Zero-sum thinking is the result of both proximate and ultimate causes .
In terms of ultimate causation, zero-sum thinking might be a legacy of human evolution. Specifically, it might be understood to be a psychological adaptation that facilitated successful resource competition in the environment of ancestral humans where resources like mates, status, and food were perpetually scarce. [ 8 ] [ 16 ] [ 3 ] For example, Rubin suggests that the pace of technological growth was so slow during the period in which modern humans evolved that no individual would have observed any growth during their lifetime: "Each person would live and die in a world of constant technology and income. Thus, there was no incentive to evolve a mechanism for understanding or planning for growth" (p. 162). [ 3 ] Rubin also points to instances where the understanding of laypeople and economists about economic situations diverge, such as the lump-of-labor fallacy . [ 3 ] From this perspective, zero-sum thinking might be understood as the default way that humans think about resource allocations, which must be unlearned by, for example, an education in basic economics .
Zero-sum thinking can also be understood in terms of proximate causation, which refers to the developmental history of individuals within their own lifetime . The proximate causes of zero-sum thinking include the experiences that individuals have with resource allocations, as well as their beliefs about specific situations, or their beliefs about the world in general .
One of the proximate causes of zero-sum thinking is the experiences that individuals have with scarce resources or zero-sum interactions in their developmental environment. [ 17 ] In 1965, George M. Foster argued that members of "peasant" societies have an " Image of Limited Good ," which he argued was learned through experiences in a society that was essentially zero-sum.
"The model of cognitive orientation that seems to me best to account for peasant behavior is the "Image of Limited Good." By "Image of Limited Good" I mean that broad areas of peasant behavior are patterned in such fashion as to suggest that peasants view their social, economic, and natural universes—their total environment—as one in which all of the desired things in life such as land, wealth, health, friendship and love, manliness and honor, respect and status, power and influence, security and safety, exist in finite quantity and are always in short supply, as far as the peasant is concerned. Not only do these and all other "good things" exist in finite and limited quantities, but in addition there is no way directly within peasant power to increase the available quantities ... When the peasant views his economic world as one in which Limited Good prevails, and he can progress only at the expense of another, he is usually very near the truth." (pp. 67-68) [ 17 ]
More recently, Rozycka-Tran et al. (2015) conducted a cross-cultural study that compared the responses of individuals in 37 nations to a scale of zero-sum beliefs. This scale asked individuals to report their agreement with statements that measured zero-sum thinking. For example, one item on the scale stated that "Successes of some people are usually failures of others". Rozycka-Tran et al. found that individuals in countries with lower Gross Domestic Product showed stronger zero-sum beliefs on average, suggesting that "the belief in zero-sum game seems to arise in countries with lower income, where resources are scarce" (p. 539). [ 1 ] Similarly, Rozycka-Tran et al. found that individuals with lower socioeconomic status displayed stronger zero-sum beliefs.
Related to experiences with resource-scarce environments is the belief that a resource is scarce or finite. For example, the lump of labour fallacy refers to the belief that in the economy there is a fixed amount of work to be done, and thus the allocation of jobs is zero-sum. [ 18 ] Although the belief that a resource is scarce might develop through experiences with resource scarcity, this is not necessarily the case. For example, individuals might come to believe that wealth is finite because it is a claim that has been repeated by politicians or journalists. [ 19 ]
Another proximate cause of zero-sum thinking is the belief that one (or one's group) is entitled to a certain share of a resource. [ 20 ] [ 21 ] An extreme case is the belief that one is entitled to all of a resource that exists, implying that any gain by another is one's own loss. Less extreme is the belief that one (or one's group) is superior and therefore entitled to more than others. For example, perceptions of zero-sum group competition have been associated with the Dominance sub-scale of the social dominance orientation personality trait, which itself has been characterized as a zero-sum worldview ("a view of human existence as zero-sum," p. 999). [ 22 ] Individuals who practice monogamy have also been found to think about love in consensually nonmonogamous relationships as zero-sum, and it was suggested that this might be because they believe that individuals in romantic relationships have an entitlement to their partner's love. [ 21 ]
When individuals think that a situation is zero-sum, they will be more likely to act competitively (or less cooperatively) towards others, because they will see others as a competitive threat. For example, when students think that they are being graded on a curve —a grading scheme that makes the allocation of grades zero-sum—they will be less likely to provide assistance to a peer who is proximate in status to themselves, because that peer's gain could be their own loss. [ 2 ]
When individuals perceive that there is a zero-sum competition in society for resources like jobs, they will be less likely to hold pro-immigration attitudes (because immigrants would deplete the resource). [ 10 ] Zero-sum thinking may also lead to certain social prejudices. When individuals hold zero-sum beliefs about love in romantic relationships, they are more prejudiced against consensual nonmonogamists (presumably because the perception of zero-sumness makes consensual nonmonogamy seem inadequate or unfair). [ 21 ]
|
https://en.wikipedia.org/wiki/Zero-sum_thinking
|
In the mathematical field of graph theory , a zero-symmetric graph is a connected graph in which each vertex has exactly three incident edges and, for each two vertices, there is a unique symmetry taking one vertex to the other. Such a graph is a vertex-transitive graph but cannot be an edge-transitive graph : the number of symmetries equals the number of vertices, too few to take every edge to every other edge. [ 1 ]
The name for this class of graphs was coined by R. M. Foster in a 1966 letter to H. S. M. Coxeter . [ 2 ] In the context of group theory , zero-symmetric graphs are also called graphical regular representations of their symmetry groups. [ 3 ]
The smallest zero-symmetric graph is a nonplanar graph with 18 vertices. [ 4 ] Its LCF notation is [5,−5]^9 .
Among planar graphs , the truncated cuboctahedral and truncated icosidodecahedral graphs are also zero-symmetric. [ 5 ]
These examples are all bipartite graphs . However, there exist larger examples of zero-symmetric graphs that are not bipartite. [ 6 ]
These examples also have three different symmetry classes (orbits) of edges. However, there exist zero-symmetric graphs with only two orbits of edges.
The smallest such graph has 20 vertices, with LCF notation [6,6,-6,-6]^5 . [ 7 ]
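Both graphs above are given by LCF notation, which describes a cubic Hamiltonian graph as the cycle 0, 1, ..., n − 1 plus one chord per vertex at the listed offset (the offset list is repeated around the cycle, and each chord is specified once from each endpoint with opposite signs). A minimal pure-Python sketch (the helper names lcf_graph and degrees are illustrative) builds both edge sets and confirms each graph is 3-regular with the expected number of edges:

```python
def lcf_graph(n, shifts, repeats):
    """Edge set of the cubic Hamiltonian graph with LCF notation [shifts]^repeats."""
    pattern = shifts * repeats
    assert len(pattern) == n
    edges = set()
    for i in range(n):
        edges.add(frozenset((i, (i + 1) % n)))            # Hamiltonian cycle edge
        edges.add(frozenset((i, (i + pattern[i]) % n)))   # chord at offset pattern[i]
    return edges

def degrees(n, edges):
    """Degree sequence of the graph on vertices 0..n-1 with the given edges."""
    deg = [0] * n
    for e in edges:
        for v in e:
            deg[v] += 1
    return deg

# Smallest zero-symmetric graph: 18 vertices, LCF [5, -5]^9.
E18 = lcf_graph(18, [5, -5], 9)
assert len(E18) == 27 and all(d == 3 for d in degrees(18, E18))

# Smallest zero-symmetric graph with two edge orbits: 20 vertices, LCF [6, 6, -6, -6]^5.
E20 = lcf_graph(20, [6, 6, -6, -6], 5)
assert len(E20) == 30 and all(d == 3 for d in degrees(20, E20))
print("both LCF graphs are cubic")
```

This only checks cubicity and size; verifying zero-symmetry itself would additionally require counting the graph's automorphisms and confirming there are exactly as many as vertices.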
Every finite zero-symmetric graph is a Cayley graph , a property that does not always hold for cubic vertex-transitive graphs more generally and that helps in the solution of combinatorial enumeration tasks concerning zero-symmetric graphs. There are 97687 zero-symmetric graphs on up to 1280 vertices. These graphs form 89% of the cubic Cayley graphs and 88% of all connected vertex-transitive cubic graphs on the same number of vertices. [ 8 ]
All known finite connected zero-symmetric graphs contain a Hamiltonian cycle , but it is unknown whether every finite connected zero-symmetric graph is necessarily Hamiltonian. [ 9 ] This is a special case of the Lovász conjecture that (with five known exceptions, none of which is zero-symmetric) every finite connected vertex-transitive graph and every finite Cayley graph is Hamiltonian.
|
https://en.wikipedia.org/wiki/Zero-symmetric_graph
|
Zero: The Biography of a Dangerous Idea is a non-fiction book by American author and journalist Charles Seife . [ 1 ] [ 2 ] The book was initially released on February 7, 2000, by Viking.
The book offers a comprehensive look at the number 0 and its controversial role as one of the great paradoxes of human thought and history since its invention by the ancient Babylonians and Indians . Although zero is a fundamental idea in modern science, the notion of complete absence initially received a largely negative, sometimes hostile, treatment in the Western world and in Greco-Roman philosophy. [ 3 ]
Zero won the 2001 PEN /Martha Albrand Award for First Nonfiction Book.
Of course, Seife's book is not a typical biography. There are no tell-all interviews with the number one or any of zero's other neighbors on the number line... Seife's book begins—of course—at Chapter Zero, with a story of how only recently a divide by zero error in its control software brought the guided missile cruiser USS Yorktown grinding to a halt. As Seife relates, "Though it was armored against weapons, nobody had thought to defend the Yorktown from zero. It was a grave mistake." Maybe it's not the pulse-pounding drama of a Tom Clancy novel, but it's enough foreshadowing to launch Seife on an essay which begins with notches on a 30,000-year-old wolf bone and ends with the role of zero in black holes and the big bang.
|
https://en.wikipedia.org/wiki/Zero:_The_Biography_of_a_Dangerous_Idea
|