A g-factor (also called g value) is a dimensionless quantity that characterizes the magnetic moment and angular momentum of an atom, a particle or the nucleus. It is the ratio of the magnetic moment (or, equivalently, the gyromagnetic ratio) of a particle to that expected of a classical particle of the same charge and angular momentum. In nuclear physics, the nuclear magneton replaces the classically expected magnetic moment (or gyromagnetic ratio) in the definition. The two definitions coincide for the proton. == Definition == === Dirac particle === The spin magnetic moment of a charged, spin-1/2 particle that does not possess any internal structure (a Dirac particle) is given by μ = g e 2 m S , {\displaystyle {\boldsymbol {\mu }}=g{e \over 2m}\mathbf {S} ,} where μ is the spin magnetic moment of the particle, g is the g-factor of the particle, e is the elementary charge, m is the mass of the particle, and S is the spin angular momentum of the particle (with magnitude ħ/2 for Dirac particles). === Baryon or nucleus === Protons, neutrons, nuclei, and other composite baryonic particles have magnetic moments arising from their spin (both the spin and magnetic moment may be zero, in which case the g-factor is undefined). Conventionally, the associated g-factors are defined using the nuclear magneton, and thus implicitly using the proton's mass rather than the particle's mass as for a Dirac particle. The formula used under this convention is μ = g μ N ℏ I = g e 2 m p I , {\displaystyle {\boldsymbol {\mu }}=g{\mu _{\text{N}} \over \hbar }{\mathbf {I} }=g{e \over 2m_{\text{p}}}\mathbf {I} ,} where μ is the magnetic moment of the nucleon or nucleus resulting from its spin, g is the effective g-factor, I is its spin angular momentum, μN is the nuclear magneton, e is the elementary charge, and mp is the proton rest mass. == Calculation == === Electron g-factors === There are three magnetic moments associated with an electron: one from its spin angular momentum, one from its orbital angular momentum, and one from its total angular momentum (the quantum-mechanical sum of those two components). Corresponding to these three moments are three different g-factors: ==== Electron spin g-factor ==== The most known of these is the electron spin g-factor (more often called simply the electron g-factor) ge, defined by μ s = g e μ B ℏ S , {\displaystyle {\boldsymbol {\mu }}_{\text{s}}=g_{\text{e}}{\frac {\mu _{\text{B}}}{\hbar }}\mathbf {S} ,} where μs is the magnetic moment resulting from the spin of an electron, S is its spin angular momentum, and μB = eħ/2me is the Bohr magneton. In atomic physics, the electron spin g-factor is often defined as the absolute value of ge: g s = | g e | = − g e . {\displaystyle g_{\text{s}}=|g_{\text{e}}|=-g_{\text{e}}.} The z component of the magnetic moment then becomes μ z = − g s μ B m s , {\displaystyle \mu _{\text{z}}=-g_{\text{s}}\mu _{\text{B}}m_{\text{s}},} where ℏ m s {\displaystyle \hbar m_{\text{s}}} are the eigenvalues of the Sz operator, meaning that ms can take on values ± 1 / 2 {\displaystyle \pm 1/2} . The value gs is roughly equal to 2.002319 and is known to extraordinary precision – one part in 1013. The reason it is not precisely two is explained by quantum electrodynamics calculation of the anomalous magnetic dipole moment. 
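As a numerical illustration of the relation μz = −gs μB ms above, the following Python sketch evaluates the Bohr magneton from CODATA constants in SciPy and the z component of the electron's spin magnetic moment for ms = ±1/2, using the approximate value gs ≈ 2.002319 quoted above. This is a minimal illustrative sketch, not part of the article's source material.

```python
# Sketch: z component of the electron spin magnetic moment, mu_z = -g_s * mu_B * m_s.
# Uses CODATA values from scipy.constants; g_s is the approximate value quoted above.
from scipy.constants import e, hbar, m_e

g_s = 2.002319                      # electron spin g-factor (approximate)
mu_B = e * hbar / (2 * m_e)         # Bohr magneton, ~9.274e-24 J/T

for m_s in (+0.5, -0.5):
    mu_z = -g_s * mu_B * m_s        # z component of the spin magnetic moment
    print(f"m_s = {m_s:+.1f}:  mu_z = {mu_z:.4e} J/T")
```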
==== Electron orbital g-factor ==== Secondly, the electron orbital g-factor gL is defined by μ L = − g L μ B ℏ L , {\displaystyle {\boldsymbol {\mu }}_{L}=-g_{L}{\frac {\mu _{\text{B}}}{\hbar }}\mathbf {L} ,} where μL is the magnetic moment resulting from the orbital angular momentum of an electron, L is its orbital angular momentum, and μB is the Bohr magneton. For an infinite-mass nucleus, the value of gL is exactly equal to one, by a quantum-mechanical argument analogous to the derivation of the classical magnetogyric ratio. For an electron in an orbital with a magnetic quantum number ml, the z component of the orbital magnetic moment is μ z = − g L μ B m l , {\displaystyle \mu _{z}=-g_{L}\mu _{\text{B}}m_{l},} which, since gL = 1, is −μBml. For a finite-mass nucleus, there is an effective g value g L = 1 − 1 M , {\displaystyle g_{L}=1-{\frac {1}{M}},} where M is the ratio of the nuclear mass to the electron mass. ==== Total angular momentum (Landé) g-factor ==== Thirdly, the Landé g-factor gJ is defined by | μ J | = g J μ B ℏ | J | , {\displaystyle |{\boldsymbol {\mu }}_{J}|=g_{J}{\frac {\mu _{\text{B}}}{\hbar }}|\mathbf {J} |,} where μJ is the total magnetic moment resulting from both spin and orbital angular momentum of an electron, J = L + S is its total angular momentum, and μB is the Bohr magneton. The value of gJ is related to gL and gs by a quantum-mechanical argument; see the article Landé g-factor. μJ and J vectors are not collinear, so only their magnitudes can be compared. === Muon g-factor === The muon, like the electron, has a g-factor associated with its spin, given by the equation μ = g e 2 m μ S , {\displaystyle {\boldsymbol {\mu }}=g{e \over 2m_{\mu }}\mathbf {S} ,} where μ is the magnetic moment resulting from the muon's spin, S is the spin angular momentum, and mμ is the muon mass. That the muon g-factor is not quite the same as the electron g-factor is mostly explained by quantum electrodynamics and its calculation of the anomalous magnetic dipole moment. Almost all of the small difference between the two values (99.96% of it) is due to a well-understood lack of heavy-particle diagrams contributing to the probability for emission of a photon representing the magnetic dipole field, which are present for muons, but not electrons, in QED theory. These are entirely a result of the mass difference between the particles. However, not all of the difference between the g-factors for electrons and muons is exactly explained by the Standard Model. The muon g-factor can, in theory, be affected by physics beyond the Standard Model, so it has been measured very precisely, in particular at the Brookhaven National Laboratory. In the E821 collaboration final report in November 2006, the experimental measured value is 2.0023318416(13), compared to the theoretical prediction of 2.00233183620(86). This is a difference of 3.4 standard deviations, suggesting that beyond-the-Standard-Model physics may be a contributory factor. The Brookhaven muon storage ring was transported to Fermilab where the Muon g–2 experiment used it to make more precise measurements of muon g-factor. On April 7, 2021, the Fermilab Muon g−2 collaboration presented and published a new measurement of the muon magnetic anomaly. When the Brookhaven and Fermilab measurements are combined, the new world average differs from the theory prediction by 4.2 standard deviations. == Measured g-factor values == The electron g-factor is one of the most precisely measured values in physics. 
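Two of the numerical statements above lend themselves to a quick check: the finite-nuclear-mass correction gL = 1 − 1/M to the orbital g-factor, and the roughly 3.4-standard-deviation discrepancy between the E821 measurement and the theoretical prediction quoted above. The short Python sketch below does both; the hydrogen mass ratio M ≈ 1836.15 is supplied here as an assumed input rather than taken from the text.

```python
import math

# Orbital g-factor with the finite-nuclear-mass correction g_L = 1 - 1/M.
# For hydrogen, M = m_nucleus / m_e ~ 1836.15 (assumed input, not from the text).
M = 1836.15
g_L = 1 - 1 / M
print(f"g_L (hydrogen) ~ {g_L:.6f}")          # ~0.999455, slightly below the infinite-mass value 1

# Significance of the E821 result quoted above:
# measured g = 2.0023318416(13), predicted g = 2.00233183620(86).
g_exp, sigma_exp = 2.0023318416, 1.3e-9
g_thy, sigma_thy = 2.00233183620, 0.86e-9
diff = g_exp - g_thy
sigma = math.hypot(sigma_exp, sigma_thy)       # combine uncertainties in quadrature
print(f"difference = {diff:.2e}, significance ~ {diff / sigma:.1f} sigma")   # ~3.4 sigma
```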
== See also == Anomalous magnetic dipole moment Electron magnetic moment Landé g-factor == Notes and references == == Further reading == CODATA recommendations 2006 == External links == Media related to G-factor (physics) at Wikimedia Commons Gwinner, Gerald; Silwal, Roshani (June 2022). "Tiny isotopic difference tests standard model of particle physics". Nature. 606 (7914): 467–468. doi:10.1038/d41586-022-01569-3. PMID 35705815. S2CID 249710367.
Wikipedia/G-factor_(physics)
The United States Department of Energy (DOE) is an executive department of the U.S. federal government that oversees U.S. national energy policy and energy production, the research and development of nuclear power, the military's nuclear weapons program, nuclear reactor production for the United States Navy, energy-related research, and energy conservation. The DOE was created in 1977 in the aftermath of the 1973 oil crisis. It sponsors more physical science research than any other U.S. federal agency, the majority of which is conducted through its system of National Laboratories. The DOE also directs research in genomics, with the Human Genome Project originating from a DOE initiative. The department is headed by the secretary of energy, who reports directly to the president of the United States and is a member of the Cabinet. The current secretary of energy is Chris Wright, who has served in the position since February 2025. The department's headquarters are in southwestern Washington, D.C., in the James V. Forrestal Building, with additional offices in Germantown, Maryland. == History == === Formation and consolidation === In 1942, during World War II, the United States started the Manhattan Project to develop the atomic bomb under the U.S. Army Corps of Engineers. After the war, in 1946, the Atomic Energy Commission (AEC) was created to control the future of the project. The Atomic Energy Act of 1946 also created the framework for the first National Laboratories. Among other nuclear projects, the AEC produced fabricated uranium fuel cores at locations such as Fernald Feed Materials Production Center in Cincinnati, Ohio. The Energy Reorganization Act of 1974 split the responsibilities of the AEC into the new Nuclear Regulatory Commission, which was charged with regulating the nuclear power industry, and the Energy Research and Development Administration, which was assigned to manage the nuclear weapon, naval reactor, and energy development programs. The 1973 oil crisis called attention to the need to consolidate energy policy. In 1977, President Jimmy Carter signed into law the Department of Energy Organization Act, which established the Department of Energy. The new agency, which began operations on October 1, 1977, consolidated the Federal Energy Administration, the Energy Research and Development Administration, the Federal Power Commission, and programs of various other agencies. Former Secretary of Defense James Schlesinger, who served under Presidents Nixon and Ford during the Vietnam War, was appointed as the first secretary. President Carter proposed the Department of Energy with the goal of promoting energy conservation and energy independence, and developing alternative sources of energy to reduce the use of fossil fuels. With international energy's future uncertain for America, Carter acted quickly to have the department come into action the first year of his presidency. This was an extremely important issue of the time as the oil crisis was causing shortages and inflation. With the Three Mile Island accident, Carter was able to intervene with the help of the department. Through the DOE, Carter was able to make changes within the Nuclear Regulatory Commission, including improving management and procedures, since nuclear energy and weapons are responsibilities of the department. === Weapon plans stolen === In December 1999, the FBI was investigating how China obtained plans for a specific nuclear device. 
Wen Ho Lee was accused of stealing nuclear secrets from Los Alamos National Laboratory for the People's Republic of China. Federal officials, including then-Energy Secretary Bill Richardson, publicly named Lee as a suspect before he was charged with a crime. The U.S. Congress held hearings to investigate the Department of Energy's handling of his case. Republican senators thought that an independent agency should be in charge of nuclear weapons and security issues, rather than the DOE. All but one of the 59 charges against Lee were eventually dropped because the investigation proved the plans the Chinese obtained could not have come from Lee. Lee filed suit and won a $1.6 million settlement against the federal government and news agencies. The episode eventually led to the creation of the National Nuclear Security Administration, a semi-autonomous agency within the department. === Loan guarantee program of 2005 === In 2001, the American Solar Challenge was sponsored by the DOE and the National Renewable Energy Laboratory. After the 2005 race, the DOE discontinued its sponsorship. Title XVII of the Energy Policy Act of 2005 authorizes the DOE to issue loan guarantees to eligible projects that "avoid, reduce, or sequester air pollutants or anthropogenic emissions of greenhouse gases" and "employ new or significantly improved technologies as compared to technologies in service in the United States at the time the guarantee is issued". Under the program, a conditional commitment requires the recipient to meet an equity commitment, as well as other conditions, before the loan guarantee is finalized. In September 2008, the DOE, the Nuclear Threat Initiative (NTI), the Institute of Nuclear Materials Management (INMM), and the International Atomic Energy Agency (IAEA) partnered to develop and launch the World Institute for Nuclear Security (WINS), an international non-governmental organization designed to provide a forum to share best practices in strengthening the security and safety of nuclear and radioactive materials and facilities. In December 2024, the Loan Programs Office announced it would extend the largest loan it had ever sanctioned, a $15 billion (US) low-interest loan to support the modernization of Pacific Gas & Electric's hydroelectric power infrastructure and to enhance transmission lines critical for renewable energy integration, data center operations, and the growing fleet of electric vehicles. Initially requested as a $30 billion (US) loan, the amount was reduced due to concerns over the company's repayment capacity. == Organization == In 2022, the department announced a reorganization under which its under secretary positions were given new names. The department is under the control and supervision of a United States Secretary of Energy, a political appointee of the President of the United States. The Energy Secretary is assisted in managing the department by a United States Deputy Secretary of Energy, also appointed by the president, who assumes the duties of the secretary in the secretary's absence. The department also has three under secretaries, each appointed by the president, who oversee the major areas of the department's work. The president also appoints seven officials with the rank of Assistant Secretary of Energy who have line management responsibility for major organizational elements of the department. The Energy Secretary assigns their functions and duties.
=== Symbolism in the seal === Excerpt from the Code of Federal Regulations, in Title 10: Energy: The official seal of the Department of Energy "includes a green shield bisected by a gold-colored lightning bolt, on which is emblazoned a gold-colored symbolic sun, atom, oil derrick, windmill, and dynamo. It is crested by the white head of an eagle, atop a white rope. Both appear on a blue field surrounded by concentric circles in which the name of the agency, in gold, appears on a green background." "The eagle represents the care in planning and the purposefulness of efforts required to respond to the Nation's increasing demands for energy. The sun, atom, oil derrick, windmill, and dynamo serve as representative technologies whose enhanced development can help meet these demands. The rope represents the cohesiveness in the development of the technologies and their link to our future capabilities. The lightning bolt represents the power of the natural forces from which energy is derived and the Nation's challenge in harnessing the forces." "The color scheme is derived from nature, symbolizing both the source of energy and the support of man's existence. The blue field represents air and water, green represents mineral resources and the earth itself, and gold represents the creation of energy in the release of natural forces. By invoking this symbolism, the color scheme represents the Nation's commitment to meet its energy needs in a manner consistent with the preservation of the natural environment." === Facilities === The Department of Energy operates a system of national laboratories and technical facilities for research and development, as follows: Other major DOE facilities include: Airstrip: Pahute Mesa Airstrip – Nye County, Nevada, part of Nevada National Security Site === Nuclear weapons sites === The DOE/NNSA has federal responsibility for the design, testing and production of all nuclear weapons. NNSA in turn uses contractors to carry out its responsibilities at the following government owned sites: Research, development, and manufacturing guidance: Los Alamos National Laboratory and Lawrence Livermore National Laboratory Engineering of the non-nuclear components and system integration: Sandia National Laboratories Manufacturing of key components: The Kansas City Plant, Savannah River Site and Y-12 National Security Complex. 
Testing: Nevada Test Site Final weapon and warhead assembling and dismantling: Pantex == Related legislation == 1920 – Federal Power Act 1935 – Public Utility Holding Company Act of 1935 1946 – Atomic Energy Act PL 79-585 (created the Atomic Energy Commission) [Superseded by the Atomic Energy Act of 1954] 1954 – Atomic Energy Act of 1954, as Amended PL 83-703 1956 – Colorado River Storage Project PL 84-485 1957 – Atomic Energy Commission Acquisition of Property PL 85-162 1957 – Price-Anderson Nuclear Industries Indemnity Act PL 85-256 1968 – Natural Gas Pipeline Safety Act PL 90-481 1973 – Mineral Leasing Act Amendments (Trans-Alaska Oil Pipeline Authorization) PL 93-153 1974 – Energy Reorganization Act PL 93-438 (Split the AEC into the Energy Research and Development Administration and the Nuclear Regulatory Commission) 1975 – Energy Policy and Conservation Act PL 94-163 1977 – Department of Energy Organization Act PL 95-91 (Dismantled ERDA and replaced it with the Department of Energy) 1978 – National Energy Act PL 95-617, 618, 619, 620, 621 1980 – Energy Security Act PL 96-294 1989 – Natural Gas Wellhead Decontrol Act PL 101-60 1992 – Energy Policy Act of 1992 PL 102-486 2000 – National Nuclear Security Administration Act PL 106-65 2005 – Energy Policy Act of 2005 PL 109-58 2007 – Energy Independence and Security Act of 2007 PL 110-140 2008 – Food, Conservation, and Energy Act of 2008 PL 110-234 == Budget == On May 7, 2009 President Barack Obama unveiled a $26.4 billion budget request for DOE for fiscal year (FY) 2010, including $2.3 billion for the DOE Office of Energy Efficiency and Renewable Energy (EERE). That budget aimed to substantially expand the use of renewable energy sources while improving energy transmission infrastructure. It also proposed significant investments in hybrids and plug-in hybrids, smart grid technologies, and scientific research and innovation. As part of the $789 billion economic stimulus package in the American Recovery and Reinvestment Act of 2009, Congress provided Energy with an additional $38.3 billion for fiscal years 2009 and 2010, adding about 75 percent to Energy's annual budgets. Most of the stimulus spending was in the form of grants and contracts. For fiscal year 2013, each of the operating units of the Department of Energy operated with the following budgets: In March 2018, Energy Secretary Rick Perry testified to a Senate panel about the Trump administration's DOE budget request for fiscal year 2019. The budget request prioritized nuclear security while making large cuts to energy efficiency and renewable energy programs. The proposal was a $500 million increase in funds over fiscal year 2017. It "promotes innovations like a new Office of Cybersecurity, Energy Security, and Emergency Response (CESER) and gains for the Office of Fossil Energy. Investments would be made to strengthen the National Nuclear Security Administration and modernize the nuclear force, as well as in weapons activities and advanced computing." However, the budget for the Office of Energy Efficiency and Renewable Energy would be lowered to $696 million under the plan, down from $1.3 billion in fiscal year 2017. Overall, the department's energy and related programs would be cut by $1.9 billion. 
== Programs and contracts == === Energy Savings Performance Contract === Energy Savings Performance Contracts (ESPCs) are contracts under which a contractor designs, constructs, and obtains the necessary financing for an energy savings project, and the federal agency makes payments over time to the contractor from the savings in the agency's utility bills. The contractor guarantees the energy improvements will generate savings, and after the contract ends, all continuing cost savings accrue to the federal agency. === Energy Innovation Hubs === Energy Innovation Hubs are multi-disciplinary, meant to advance highly promising areas of energy science and technology from their early stages of research to the point that the risk level will be low enough for industry to commercialize the technologies. The Consortium for Advanced Simulation of Light Water Reactors (CASL) was the first DOE Energy Innovation Hub established in July 2010, for the purpose of providing advanced modeling and simulation (M&S) solutions for commercial nuclear reactors. The 2009 DOE budget includes $280 million to fund eight Energy Innovation Hubs, each of which is focused on a particular energy challenge. Two of the eight hubs are included in the EERE budget and will focus on integrating smart materials, designs, and systems into buildings to better conserve energy and on designing and discovering new concepts and materials needed to convert solar energy into electricity. Another two hubs, included in the DOE Office of Science budget, were created to tackle the challenges of devising advanced methods of energy storage and creating fuels directly from sunlight without the use of plants or microbes. Yet another hub was made to develop "smart" materials to allow the electrical grid to adapt and respond to changing conditions. In 2012, the DOE awarded $120 million to the Ames Laboratory to start a new EIH, the Critical Materials Institute, which will focus on improving the supply of rare earth elements. === Advanced Research Projects Agency-Energy === ARPA-E was officially created by the America COMPETES Act , authored by Congressman Bart Gordon, within the United States Department of Energy (DOE) in 2007, though without a budget. The initial budget of about $400 million was a part of the economic stimulus bill of February 2009. === Other === DOE Isotope Program - coordinates isotope production Federal Energy Management Program Foundation for Energy Security and Innovation - a 501(c)(3) organization dedicated to supporting DOE research Fusion Energy Sciences - a program to research nuclear fusion, with a yearly budget in 2020 of $670 million, with $250 million of that going to ITER GovEnergy - an annual event partly sponsored by the DOE Grid Deployment Office - a division dedicated to spreading adoption of grid-enhancing technologies and improving transmission permitting National Science Bowl - a high school and middle school science knowledge competition Solar Decathlon - an international collegiate competition to design and build solar-powered houses State Energy Program Weatherization Assistance Program == List of secretaries of energy == == See also == Federal Energy Regulatory Commission National Council on Electricity Policy United States federal executive departments == References == == Further reading == Cumming, Alfred (February 9, 2009). "Polygraph Use by the Department of Energy: Issues for Congress" (PDF). Congressional Research Service. 
Archived from the original (PDF) on March 28, 2014 – via Federation of American Scientists. == External links == Official website Department of Energy in the Federal Register Department of Energy on USAspending.gov Works by the United States Department of Energy at Project Gutenberg Works by United States Department of Energy at LibriVox (public domain audiobooks) Advanced Energy Initiative Twenty In Ten
Wikipedia/U.S._Department_of_Energy
In quantum field theory, a branch of theoretical physics, crossing is the property of scattering amplitudes that allows antiparticles to be interpreted as particles going backwards in time. Crossing states that the same formula that determines the S-matrix elements and scattering amplitudes for particle A {\displaystyle \mathrm {A} } to scatter with X {\displaystyle \mathrm {X} } and produce particle B {\displaystyle \mathrm {B} } and Y {\displaystyle \mathrm {Y} } will also give the scattering amplitude for A + B ¯ + X {\displaystyle \mathrm {A} +{\bar {\mathrm {B} }}+\mathrm {X} } to go into Y {\displaystyle \mathrm {Y} } , or for B ¯ {\displaystyle {\bar {\mathrm {B} }}} to scatter with X {\displaystyle \mathrm {X} } to produce Y + A ¯ {\displaystyle \mathrm {Y} +{\bar {\mathrm {A} }}} . The only difference is that the value of the energy is negative for the antiparticle. The formal way to state this property is that the antiparticle scattering amplitudes are the analytic continuation of particle scattering amplitudes to negative energies. The interpretation of this statement is that the antiparticle is in every way a particle going backwards in time. == History == Murray Gell-Mann and Marvin Leonard Goldberger introduced crossing symmetry in 1954. Crossing had already been implicit in the work of Richard Feynman, but came to its own in the 1950s and 1960s as part of the analytic S-matrix program. == Overview == Consider an amplitude M ( ϕ ( p ) + . . . → . . . ) {\displaystyle {\mathcal {M}}(\phi (p)+...\ \rightarrow ...)} . We concentrate our attention on one of the incoming particles with momentum p. The quantum field ϕ ( p ) {\displaystyle \phi (p)} , corresponding to the particle is allowed to be either bosonic or fermionic. Crossing symmetry states that we can relate the amplitude of this process to the amplitude of a similar process with an outgoing antiparticle ϕ ¯ ( − p ) {\displaystyle {\bar {\phi }}(-p)} replacing the incoming particle ϕ ( p ) {\displaystyle \phi (p)} : M ( ϕ ( p ) + . . . → . . . ) = M ( . . . → . . . + ϕ ¯ ( − p ) ) {\displaystyle {\mathcal {M}}(\phi (p)+...\rightarrow ...)={\mathcal {M}}(...\rightarrow ...+{\bar {\phi }}(-p))} . In the bosonic case, the idea behind crossing symmetry can be understood intuitively using Feynman diagrams. Consider any process involving an incoming particle with momentum p. For the particle to give a measurable contribution to the amplitude, it has to interact with a number of different particles with momenta q 1 , q 2 , . . . , q n {\displaystyle q_{1},q_{2},...,q_{n}} via a vertex. Conservation of momentum implies ∑ k = 1 n q k = p {\displaystyle \sum _{k=1}^{n}q_{k}=p} . In case of an outgoing particle, conservation of momentum reads as ∑ k = 1 n q k = − p {\displaystyle \sum _{k=1}^{n}q_{k}=-p} . Thus, replacing an incoming boson with an outgoing antiboson with opposite momentum yields the same S-matrix element. In fermionic case, one can apply the same argument but now the relative phase convention for the external spinors must be taken into account. == Example == For example, the annihilation of an electron with a positron into two photons is related to an elastic scattering of an electron with a photon (Compton scattering) by crossing symmetry. This relation allows to calculate the scattering amplitude of one process from the amplitude for the other process if negative values of energy of some particles are substituted. 
== See also == Feynman–Stueckelberg interpretation Feynman diagram Regge theory Detailed balance == References == == Further reading == Peskin, M.; Schroeder, D. (1995). An Introduction to Quantum Field Theory. Westview Press. p. 155. ISBN 0-201-50397-2. Griffiths, David (1987). An Introduction to Elementary Particles (1st ed.). John Wiley & Sons. p. 21. ISBN 0-471-60386-4.
Wikipedia/Crossing_(physics)
In physics, the Schwinger model, named after Julian Schwinger, is the model describing 1+1D (one spatial dimension plus time) Lorentzian quantum electrodynamics: electrons coupled to photons. The model defines the usual QED Lagrangian L = − 1 4 g 2 F μ ν F μ ν + ψ ¯ ( i γ μ D μ − m ) ψ {\displaystyle {\mathcal {L}}=-{\frac {1}{4g^{2}}}F_{\mu \nu }F^{\mu \nu }+{\bar {\psi }}(i\gamma ^{\mu }D_{\mu }-m)\psi } over a spacetime with one spatial dimension and one temporal dimension, where F μ ν = ∂ μ A ν − ∂ ν A μ {\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }} is the U ( 1 ) {\displaystyle U(1)} photon field strength, D μ = ∂ μ − i A μ {\displaystyle D_{\mu }=\partial _{\mu }-iA_{\mu }} is the gauge covariant derivative, ψ {\displaystyle \psi } is the fermion spinor, m {\displaystyle m} is the fermion mass and γ 0 , γ 1 {\displaystyle \gamma ^{0},\gamma ^{1}} form the two-dimensional representation of the Clifford algebra. This model exhibits confinement of the fermions and, as such, is a toy model for QCD. A heuristic argument for why this is so is that in two dimensions the classical potential between two charged particles grows linearly with r {\displaystyle r} , instead of falling off as 1 / r {\displaystyle 1/r} as it does in four dimensions (three spatial, one temporal). This model also exhibits spontaneous breaking of the chiral U(1) symmetry, due to a chiral condensate generated by a pool of instantons. The photon in this model becomes a massive particle at low temperatures. This model can be solved exactly and is used as a toy model for other more complex theories. == References ==
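The linear-versus-1/r contrast invoked above can be illustrated with elementary classical electrostatics. The sketch below is a purely classical illustration with all constants set to 1, not a computation within the Schwinger model itself: it integrates Gauss's-law field profiles E(r) ∝ 1/r^(d−1) to obtain the energy cost of separating two opposite charges. In one spatial dimension the energy grows without bound with separation (confining), while in three it saturates.

```python
# Classical illustration (constants set to 1): energy to separate two opposite
# charges to distance r, obtained by integrating the Gauss's-law field
# E(r) ~ 1/r^(d-1) in d spatial dimensions.
import numpy as np

def potential(r, d, r0=1.0, n=100_000):
    """Work done against E(r') = 1/r'^(d-1), moving a test charge from r0 to r."""
    edges = np.linspace(r0, r, n + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])           # midpoint rule
    return np.sum(mid ** (1 - d)) * (edges[1] - edges[0])

for d in (1, 3):
    vals = [potential(r, d) for r in (2.0, 10.0, 100.0, 1000.0)]
    print(f"d = {d}:", [f"{v:.3f}" for v in vals])

# d = 1: the energy grows linearly with separation (confining behaviour).
# d = 3: the energy approaches a finite limit (1/r potential), so the charges can be separated.
```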
Wikipedia/Schwinger_model
In physics, black hole thermodynamics is the area of study that seeks to reconcile the laws of thermodynamics with the existence of black hole event horizons. As the study of the statistical mechanics of black-body radiation led to the development of the theory of quantum mechanics, the effort to understand the statistical mechanics of black holes has had a deep impact upon the understanding of quantum gravity, leading to the formulation of the holographic principle. == Overview == The second law of thermodynamics requires that black holes have entropy. If black holes carried no entropy, it would be possible to violate the second law by throwing mass into the black hole. The increase of the entropy of the black hole more than compensates for the decrease of the entropy carried by the object that was swallowed. In 1972, Jacob Bekenstein conjectured that black holes should have an entropy proportional to the area of the event horizon, where by the same year, he proposed no-hair theorems. In 1973 Bekenstein suggested ln ⁡ 2 0.8 π ≈ 0.276 {\displaystyle {\frac {\ln {2}}{0.8\pi }}\approx 0.276} as the constant of proportionality, asserting that if the constant was not exactly this, it must be very close to it. The next year, in 1974, Stephen Hawking showed that black holes emit thermal Hawking radiation corresponding to a certain temperature (Hawking temperature). Using the thermodynamic relationship between energy, temperature and entropy, Hawking was able to confirm Bekenstein's conjecture and fix the constant of proportionality at 1 / 4 {\displaystyle 1/4} : S BH = k B A 4 ℓ P 2 , {\displaystyle S_{\text{BH}}={\frac {k_{\text{B}}A}{4\ell _{\text{P}}^{2}}},} where A {\displaystyle A} is the area of the event horizon, k B {\displaystyle k_{\text{B}}} is the Boltzmann constant, and ℓ P = G ℏ / c 3 {\displaystyle \ell _{\text{P}}={\sqrt {G\hbar /c^{3}}}} is the Planck length. This is often referred to as the Bekenstein–Hawking formula. The subscript BH either stands for "black hole" or "Bekenstein–Hawking". The black hole entropy is proportional to the area of its event horizon A {\displaystyle A} . The fact that the black hole entropy is also the maximal entropy that can be obtained by the Bekenstein bound (wherein the Bekenstein bound becomes an equality) was the main observation that led to the holographic principle. This area relationship was generalized to arbitrary regions via the Ryu–Takayanagi formula, which relates the entanglement entropy of a boundary conformal field theory to a specific surface in its dual gravitational theory. Although Hawking's calculations gave further thermodynamic evidence for black hole entropy, until 1995 no one was able to make a controlled calculation of black hole entropy based on statistical mechanics, which associates entropy with a large number of microstates. In fact, so called "no-hair" theorems appeared to suggest that black holes could have only a single microstate. The situation changed in 1995 when Andrew Strominger and Cumrun Vafa calculated the right Bekenstein–Hawking entropy of a supersymmetric black hole in string theory, using methods based on D-branes and string duality. Their calculation was followed by many similar computations of entropy of large classes of other extremal and near-extremal black holes, and the result always agreed with the Bekenstein–Hawking formula. However, for the Schwarzschild black hole, viewed as the most far-from-extremal black hole, the relationship between micro- and macrostates has not been characterized. 
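To give a sense of scale for the Bekenstein–Hawking formula above, the following Python sketch evaluates S_BH for a Schwarzschild black hole of one solar mass using CODATA constants from SciPy. The solar mass value and the standard Schwarzschild relations r_s = 2GM/c² and A = 4πr_s² are supplied as assumptions here (they are not derived in the text), and the standard SI form of the Hawking temperature, T = ħc³/(8πGMk_B), is included for comparison.

```python
# Sketch: Bekenstein-Hawking entropy S = k_B * A / (4 * l_P^2) for a solar-mass
# Schwarzschild black hole. Assumes r_s = 2GM/c^2 and A = 4*pi*r_s^2.
import math
from scipy.constants import G, c, hbar, k

M_sun = 1.989e30                        # solar mass in kg (approximate, assumed value)

r_s = 2 * G * M_sun / c**2              # Schwarzschild radius, ~2.95 km
A = 4 * math.pi * r_s**2                # horizon area
l_P2 = hbar * G / c**3                  # Planck length squared
S_BH = k * A / (4 * l_P2)               # Bekenstein-Hawking entropy, ~1.5e54 J/K

T_H = hbar * c**3 / (8 * math.pi * G * M_sun * k)   # Hawking temperature, ~6e-8 K

print(f"r_s  = {r_s:.3e} m")
print(f"S_BH = {S_BH:.3e} J/K")
print(f"T_H  = {T_H:.3e} K")
```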
Efforts to develop an adequate answer within the framework of string theory continue. In loop quantum gravity (LQG) it is possible to associate a geometrical interpretation with the microstates: these are the quantum geometries of the horizon. LQG offers a geometric explanation of the finiteness of the entropy and of the proportionality of the area of the horizon. It is possible to derive, from the covariant formulation of full quantum theory (spinfoam) the correct relation between energy and area (1st law), the Unruh temperature and the distribution that yields Hawking entropy. The calculation makes use of the notion of dynamical horizon and is done for non-extremal black holes. There seems to be also discussed the calculation of Bekenstein–Hawking entropy from the point of view of loop quantum gravity. The current accepted microstate ensemble for black holes is the microcanonical ensemble. The partition function for black holes results in a negative heat capacity. In canonical ensembles, there is limitation for a positive heat capacity, whereas microcanonical ensembles can exist at a negative heat capacity. == The laws of black hole mechanics == The four laws of black hole mechanics are physical properties that black holes are believed to satisfy. The laws, analogous to the laws of thermodynamics, were discovered by Jacob Bekenstein, Brandon Carter, and James Bardeen. Further considerations were made by Stephen Hawking. === Statement of the laws === The laws of black hole mechanics are expressed in geometrized units. ==== The zeroth law ==== The horizon has constant surface gravity for a stationary black hole. ==== The first law ==== For perturbations of stationary black holes, the change of energy is related to change of area, angular momentum, and electric charge by d E = κ 8 π d A + Ω d J + Φ d Q , {\displaystyle dE={\frac {\kappa }{8\pi }}\,dA+\Omega \,dJ+\Phi \,dQ,} where E {\displaystyle E} is the energy, κ {\displaystyle \kappa } is the surface gravity, A {\displaystyle A} is the horizon area, Ω {\displaystyle \Omega } is the angular velocity, J {\displaystyle J} is the angular momentum, Φ {\displaystyle \Phi } is the electrostatic potential and Q {\displaystyle Q} is the electric charge. ==== The second law ==== The horizon area is, assuming the weak energy condition, a non-decreasing function of time: d A d t ≥ 0. {\displaystyle {\frac {dA}{dt}}\geq 0.} This "law" was superseded by Hawking's discovery that black holes radiate, which causes both the black hole's mass and the area of its horizon to decrease over time. ==== The third law ==== It is not possible to form a black hole with vanishing surface gravity. That is, κ = 0 {\displaystyle \kappa =0} cannot be achieved. === Discussion of the laws === ==== The zeroth law ==== The zeroth law is analogous to the zeroth law of thermodynamics, which states that the temperature is constant throughout a body in thermal equilibrium. It suggests that the surface gravity is analogous to temperature. T constant for thermal equilibrium for a normal system is analogous to κ {\displaystyle \kappa } constant over the horizon of a stationary black hole. ==== The first law ==== The left side, d E {\displaystyle dE} , is the change in energy (proportional to mass). Although the first term does not have an immediately obvious physical interpretation, the second and third terms on the right side represent changes in energy due to rotation and electromagnetism. 
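As a consistency check of the first law in the simplest case, the SymPy sketch below verifies dE = (κ/8π) dA for a Schwarzschild black hole (dJ = dQ = 0), and likewise dE = T_H dS with T_H = κ/2π and S = A/4. The geometrized-unit relations A = 16πM² and κ = 1/(4M), with E identified with the mass M, are standard Schwarzschild results assumed here rather than derived in the text.

```python
# Consistency check of the first law dE = (kappa / 8 pi) dA for Schwarzschild
# (dJ = dQ = 0), in geometrized units. Assumes A = 16*pi*M^2, kappa = 1/(4*M), E = M.
import sympy as sp

M = sp.symbols("M", positive=True)
A = 16 * sp.pi * M**2            # horizon area
kappa = 1 / (4 * M)              # surface gravity

dE_dM = sp.Integer(1)                              # E = M, so dE/dM = 1
first_law = kappa / (8 * sp.pi) * sp.diff(A, M)
print(sp.simplify(first_law - dE_dM))              # 0: the first law is satisfied

# The same check written as dE = T_H dS, with T_H = kappa/(2*pi) and S = A/4:
T_H = kappa / (2 * sp.pi)
S = A / 4
print(sp.simplify(T_H * sp.diff(S, M) - dE_dM))    # 0 again
```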
Analogously, the first law of thermodynamics is a statement of energy conservation, which contains on its right side the term T d S {\displaystyle TdS} . ==== The second law ==== The second law is the statement of Hawking's area theorem. Analogously, the second law of thermodynamics states that the change in entropy of an isolated system is greater than or equal to zero for a spontaneous process, suggesting a link between entropy and the area of a black hole horizon. Taken naively, however, this version of the second law appears to be violated: matter falling into a black hole carries its entropy out of view, giving a decrease in the entropy of the exterior. Generalizing the second law to the sum of the black hole entropy and the outside entropy shows that the second law of thermodynamics is not violated in a system that includes the universe beyond the horizon. This generalized second law of thermodynamics (GSL) was needed to keep the second law valid: because entropy can disappear behind the horizon from the point of view of an exterior observer, the ordinary second law by itself is not useful there, whereas the GSL remains applicable because it accounts for both the exterior entropy and the black hole entropy. The validity of the GSL can be established by studying an example, such as a system carrying entropy that falls into a larger, stationary black hole, and establishing upper and lower bounds for the increase in the black hole entropy and the entropy of the system, respectively. The GSL also holds for theories of gravity such as Einstein gravity, Lovelock gravity, or braneworld gravity, because the conditions needed to apply it can be met in those theories. On the topic of black hole formation, however, the question is whether the generalized second law remains valid, and if it does, whether it can be proved valid for all situations. Because black hole formation is not a stationary process, proving that the GSL holds there is difficult. A general proof of the GSL would require a full quantum-statistical treatment, since the GSL is both a quantum and a statistical law; such a treatment does not yet exist, so the GSL is instead taken as a working assumption and used for prediction. For example, one can use the GSL to predict that, for a cold, non-rotating assembly of N {\displaystyle N} nucleons, S B H − S > 0 {\displaystyle S_{BH}-S>0} , where S B H {\displaystyle S_{BH}} is the entropy of a black hole and S {\displaystyle S} is the sum of the ordinary entropy. ==== The third law ==== The third law of black hole thermodynamics is controversial. Specific counterexamples called extremal black holes fail to obey the rule. The classical third law of thermodynamics, known as the Nernst theorem, which says that the entropy of a system must go to zero as the temperature goes to absolute zero, is also not a universal law. However, the systems that fail the classical third law have not been realized in practice, leading to the suggestion that extremal black holes may not represent the physics of black holes generally. A weaker form of the classical third law, known as the "unattainability principle", states that an infinite number of steps are required to put a system into its ground state.
This form of the third law does have an analog in black hole physics.: 10  === Interpretation of the laws === The four laws of black hole mechanics suggest that one should identify the surface gravity of a black hole with temperature and the area of the event horizon with entropy, at least up to some multiplicative constants. If one only considers black holes classically, then they have zero temperature and, by the no-hair theorem, zero entropy, and the laws of black hole mechanics remain an analogy. However, when quantum-mechanical effects are taken into account, one finds that black holes emit thermal radiation (Hawking radiation) at a temperature T H = κ 2 π . {\displaystyle T_{\text{H}}={\frac {\kappa }{2\pi }}.} From the first law of black hole mechanics, this determines the multiplicative constant of the Bekenstein–Hawking entropy, which is (in geometrized units) S BH = A 4 . {\displaystyle S_{\text{BH}}={\frac {A}{4}}.} which is the entropy of the black hole in Einstein's general relativity. Quantum field theory in curved spacetime can be utilized to calculate the entropy for a black hole in any covariant theory for gravity, known as the Wald entropy. == Critique == While black hole thermodynamics (BHT) has been regarded as one of the deepest clues to a quantum theory of gravity, there remains a philosophical criticism that "the analogy is not nearly as good as is commonly supposed", that it “is often based on a kind of caricature of thermodynamics” and "it’s unclear what the systems in BHT are supposed to be". These criticisms were reexamined in detail, ending with the opposite conclusion, "stationary black holes are not analogous to thermodynamic systems: they are thermodynamic systems, in the fullest sense." == Beyond black holes == Gary Gibbons and Hawking have shown that black hole thermodynamics is more general than black holes—that cosmological event horizons also have an entropy and temperature. More fundamentally, Gerard 't Hooft and Leonard Susskind used the laws of black hole thermodynamics to argue for a general holographic principle of nature, which asserts that consistent theories of gravity and quantum mechanics must be lower-dimensional. Though not yet fully understood in general, the holographic principle is central to theories like the AdS/CFT correspondence. There are also connections between black hole entropy and fluid surface tension. == See also == Joseph Polchinski Robert Wald == Notes == == Citations == == Bibliography == Bardeen, J. M.; Carter, B.; Hawking, S. W. (1973). "The four laws of black hole mechanics". Communications in Mathematical Physics. 31 (2): 161–170. Bibcode:1973CMaPh..31..161B. doi:10.1007/BF01645742. S2CID 54690354. Bekenstein, Jacob D. (April 1973). "Black holes and entropy". Physical Review D. 7 (8): 2333–2346. Bibcode:1973PhRvD...7.2333B. doi:10.1103/PhysRevD.7.2333. S2CID 122636624. Hawking, Stephen W. (1974). "Black hole explosions?". Nature. 248 (5443): 30–31. Bibcode:1974Natur.248...30H. doi:10.1038/248030a0. S2CID 4290107. Hawking, Stephen W. (1975). "Particle creation by black holes". Communications in Mathematical Physics. 43 (3): 199–220. Bibcode:1975CMaPh..43..199H. doi:10.1007/BF02345020. S2CID 55539246. Hawking, S. W.; Ellis, G. F. R. (1973). The Large Scale Structure of Space–Time. New York: Cambridge University Press. ISBN 978-0-521-09906-6. Hawking, Stephen W. (1994). "The Nature of Space and Time". arXiv:hep-th/9409195. 't Hooft, Gerardus (1985). "On the quantum structure of a black hole" (PDF). Nuclear Physics B. 
256: 727–745. Bibcode:1985NuPhB.256..727T. doi:10.1016/0550-3213(85)90418-3. Archived from the original (PDF) on 2011-09-26. Page, Don (2005). "Hawking Radiation and Black Hole Thermodynamics". New Journal of Physics. 7 (1): 203. arXiv:hep-th/0409024. Bibcode:2005NJPh....7..203P. doi:10.1088/1367-2630/7/1/203. S2CID 119047329. == External links == Bekenstein-Hawking entropy on Scholarpedia Black Hole Thermodynamics Black hole entropy on arxiv.org
Wikipedia/Black_hole_thermodynamics
In quantum field theory, the Nambu–Jona-Lasinio model (or more precisely: the Nambu and Jona-Lasinio model) is a complicated effective theory of nucleons and mesons constructed from interacting Dirac fermions with chiral symmetry, paralleling the construction of Cooper pairs from electrons in the BCS theory of superconductivity. The "complicatedness" of the theory has become more natural as it is now seen as a low-energy approximation of the still more basic theory of quantum chromodynamics, which does not work perturbatively at low energies. == Overview == The model is much inspired by the different field of solid state theory, particularly from the BCS breakthrough of 1957. The model was introduced in a joint article of Yoichiro Nambu (who also contributed essentially to the theory of superconductivity, i.e., by the "Nambu formalism") and Giovanni Jona-Lasinio, published in 1961. A subsequent paper included chiral symmetry breaking, isospin and strangeness. Around that time, the same model was independently considered by Soviet physicists Valentin Vaks and Anatoly Larkin. The model is quite technical, although based essentially on symmetry principles. It is an example of the importance of four-fermion interactions and is defined in a spacetime with an even number of dimensions. It is still important and is used primarily as an effective although not rigorous low energy substitute for quantum chromodynamics. The dynamical creation of a condensate from fermion interactions inspired many theories of the breaking of electroweak symmetry, such as technicolor and the top-quark condensate. Starting with the one-flavor case first, the Lagrangian density is L = i ψ ¯ ∂ / ψ + λ 4 [ ( ψ ¯ ψ ) ( ψ ¯ ψ ) − ( ψ ¯ γ 5 ψ ) ( ψ ¯ γ 5 ψ ) ] {\displaystyle {\mathcal {L}}=\ i\ {\bar {\psi }}\ \partial \!\!\!/\ \psi +{\frac {\ \lambda \ }{4}}\left[\ \left({\bar {\psi }}\ \psi \right)\left({\bar {\psi }}\psi \right)-\left({\bar {\psi }}\ \gamma ^{5}\ \psi \right)\left({\bar {\psi }}\ \gamma ^{5}\ \psi \right)\ \right]} or, equivalently, decomposed into left and right chiral parts, L = i ψ ¯ L ∂ / ψ L + i ψ ¯ R ∂ / ψ R + λ ( ψ ¯ L ψ R ) ( ψ ¯ R ψ L ) . {\displaystyle \ {\mathcal {L}}\ =\ i\ {\bar {\psi }}_{\mathsf {L}}\ \partial \!\!\!/\ \psi _{\mathsf {L}}\ +\ i\ {\bar {\psi }}_{\mathsf {R}}\ \partial \!\!\!/\ \psi _{\mathsf {R}}\ +\ \lambda \ \left({\bar {\psi }}_{\mathsf {L}}\ \psi _{\mathsf {R}}\right)\left({\bar {\psi }}_{\mathsf {R}}\ \psi _{\mathsf {L}}\right)~.} The terms proportional to λ {\displaystyle \ \lambda \ } are an attractive four-fermion interaction, which parallels the BCS theory phonon exchange interaction. The global symmetry of the model is U(1)Q × U(1)χ where Q is the ordinary charge of the Dirac fermion and χ is the chiral charge. The parameter λ {\displaystyle \ \lambda \ } is equivalent to a reciprocal squared mass, λ = 1 M 2 , {\displaystyle \ \lambda ={\tfrac {1}{\ M^{2}}}\ ,} which represents short-distance physics or the strong interaction scale, producing an attractive four-fermion interaction. There is no bare fermion mass term because of the chiral symmetry. However, there will be a chiral condensate (but no confinement) leading to an effective mass term and a spontaneous symmetry breaking of the chiral symmetry, but not the charge symmetry. 
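The equivalence of the two forms of the one-flavor Lagrangian above rests on the algebraic identity (ψ̄ψ)² − (ψ̄γ⁵ψ)² = 4(ψ̄_L ψ_R)(ψ̄_R ψ_L), which follows from the chiral projectors P_{L,R} = (1 ∓ γ⁵)/2. The NumPy sketch below checks this identity numerically with an explicit Dirac-representation set of gamma matrices and a random spinor; treating ψ as a c-number spinor suffices here because the identity is purely algebraic in the bilinears. It is an illustrative check, not drawn from the article.

```python
# Check of (psibar psi)^2 - (psibar gamma5 psi)^2 = 4 (psibar_L psi_R)(psibar_R psi_L),
# the identity relating the two forms of the one-flavor interaction above.
import numpy as np

I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

# Dirac representation of the gamma matrices.
g0 = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]])
gi = [np.block([[np.zeros((2, 2)), s], [-s, np.zeros((2, 2))]]) for s in sigma]
g5 = 1j * g0 @ gi[0] @ gi[1] @ gi[2]
PL, PR = (np.eye(4) - g5) / 2, (np.eye(4) + g5) / 2   # chiral projectors

rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)    # random test spinor
psibar = psi.conj() @ g0

scalar = psibar @ psi                 # psibar psi
pseudo = psibar @ g5 @ psi            # psibar gamma5 psi
a = psibar @ PR @ psi                 # psibar_L psi_R  (since psibar_L = psibar P_R)
b = psibar @ PL @ psi                 # psibar_R psi_L

lhs = scalar**2 - pseudo**2
rhs = 4 * a * b
print(np.allclose(lhs, rhs))          # True
```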
With N flavors and the flavor indices, represented by the Latin letters a and b, the Lagrangian density becomes L = i ψ ¯ a ∂ / ψ a + λ 4 N [ ( ψ ¯ a ψ b ) ( ψ ¯ b ψ a ) − ( ψ ¯ a γ 5 ψ b ) ( ψ ¯ b γ 5 ψ a ) ] {\displaystyle {\mathcal {L}}\ =\ i\ {\bar {\psi }}_{a}\partial \!\!\!/\ \psi ^{a}+{\frac {\lambda }{\ 4\ N\ }}\ \left[\ \left({\bar {\psi }}_{a}\ \psi ^{b}\right)\left({\bar {\psi }}_{b}\ \psi ^{a}\right)-\left({\bar {\psi }}_{a}\ \gamma ^{5}\ \psi ^{b}\right)\left({\bar {\psi }}_{b}\ \gamma ^{5}\ \psi ^{a}\right)\ \right]\ } hence L = i ψ ¯ L a ∂ / ψ L a + i ψ ¯ R a ∂ / ψ R a + λ N ( ψ ¯ L a ψ R b ) ( ψ ¯ R b ψ L a ) . {\displaystyle {\mathcal {L}}\ =\ i\ {\bar {\psi }}_{{\mathsf {L}}\ a}\ \partial \!\!\!/\ \psi _{\mathsf {L}}^{a}\ +\ i\ {\bar {\psi }}_{{\mathsf {R}}\ a}\ \partial \!\!\!/\ \psi _{\mathsf {R}}^{a}+{\frac {\lambda }{\ N\ }}\ \left({\bar {\psi }}_{{\mathsf {L}}\ a}\ \psi _{\mathsf {R}}^{b}\right)\left({\bar {\psi }}_{{\mathsf {R}}\ b}\ \psi _{\mathsf {L}}^{a}\right)~.} Chiral symmetry forbids a bare mass term, but there may be chiral condensates. The global symmetry here is SU(N)L × SU(N)R × U(1)Q × U(1)χ where SU(N)L × SU(N)R acting upon the left-handed flavors and right-handed flavors respectively is the chiral symmetry (in other words, there is no natural correspondence between the left-handed and the right-handed flavors), U(1)Q is the Dirac charge, which is sometimes called the baryon number and U(1)χ is the axial charge. If a chiral condensate forms, then the chiral symmetry is spontaneously broken into a diagonal subgroup SU(N) since the condensate leads to a pairing of the left-handed and the right-handed flavors. The axial charge is also spontaneously broken. The broken symmetries lead to (nearly) massless pseudoscalar bosons, e.g. pions. See Goldstone boson. As mentioned, this model is sometimes used as a phenomenological model of quantum chromodynamics in the chiral limit. However, while it is able to model chiral symmetry breaking and chiral condensates, it does not model confinement. Also, the axial symmetry is broken spontaneously in this model, leading to a massless Goldstone boson unlike QCD, where it is broken anomalously. Since the Nambu–Jona-Lasinio model is nonrenormalizable in four spacetime dimensions, this theory can only be an effective field theory which needs to be UV completed. == See also == Gross–Neveu model == References == == External links == Giovanni Jona-Lasinio and Yoichiro Nambu, Nambu-Jona-Lasinio model, Scholarpedia, 5(12):7487, (2010). doi:10.4249/scholarpedia.7487
Wikipedia/Nambu–Jona-Lasinio_model
In mathematical physics, two-dimensional Yang–Mills theory is the special case of Yang–Mills theory in which the dimension of spacetime is taken to be two. This special case allows for a rigorously defined Yang–Mills measure, meaning that the (Euclidean) path integral can be interpreted as a measure on the set of connections modulo gauge transformations. This situation contrasts with the four-dimensional case, where a rigorous construction of the theory as a measure is currently unknown. An aspect of the subject of particular interest is the large-N limit, in which the structure group is taken to be the unitary group U ( N ) {\displaystyle U(N)} and then the N {\displaystyle N} tends to infinity limit is taken. The large-N limit of two-dimensional Yang–Mills theory has connections to string theory. == Background == Interest in the Yang–Mills measure comes from a statistical mechanical or constructive quantum field theoretic approach to formulating a quantum theory for the Yang–Mills field. A gauge field is described mathematically by a 1-form A {\displaystyle A} on a principal G {\displaystyle G} -bundle over a manifold M {\displaystyle M} taking values in the Lie algebra L ( G ) {\displaystyle L(G)} of the Lie group G {\displaystyle G} . We assume that the structure group G {\displaystyle G} , which describes the physical symmetries of the gauge field, is a compact Lie group with a bi-invariant metric on the Lie algebra L ( G ) {\displaystyle L(G)} , and we also assume given a Riemannian metric on the manifold M {\displaystyle M} . The Yang–Mills action functional is given by S Y M ( A ) = 1 2 ∫ M ‖ F A ‖ 2 d σ M {\displaystyle S_{YM}(A)={\frac {1}{2}}\int _{M}\|F^{A}\|^{2}\,d\sigma _{M}} where F A {\displaystyle F^{A}} is the curvature of the connection form A {\displaystyle A} , the norm-squared in the integrand comes from the metric on the Lie algebra and the one on the base manifold, and σ M {\displaystyle \sigma _{M}} is the Riemannian volume measure on M {\displaystyle M} . The measure μ T {\displaystyle \mu _{T}} is given formally by d μ T ( A ) = 1 Z T e − S Y M ( A ) / T D A , {\displaystyle d\mu _{T}(A)={\frac {1}{Z_{T}}}e^{-S_{YM}(A)/T}DA,} as a normalized probability measure on the space of all connections on the bundle, with T > 0 {\displaystyle T>0} a parameter, and Z T {\displaystyle Z_{T}} is a formal normalizing constant. More precisely, the probability measure is more likely to be meaningful on the space of orbits of connections under gauge transformations. == The Yang–Mills measure for two-dimensional manifolds == Study of Yang–Mills theory in two dimensions dates back at least to work of A. A. Migdal in 1975. Some formulas appearing in Migdal's work can, in retrospect, be seen to be connected to the heat kernel on the structure group of the theory. The role of the heat kernel was made more explicit in various works in the late 1970s, culminating in the introduction of the heat kernel action in work of Menotti and Onofri in 1981. In the continuum theory, the Yang–Mills measure μ T {\displaystyle \mu _{T}} was rigorously defined for the case where M = R 2 {\displaystyle M={\mathbb {R} }^{2}} by Bruce Driver and by Leonard Gross, Christopher King, and Ambar Sengupta. 
For compact manifolds, both oriented and non-oriented, with or without boundary, with specified bundle topology, the Yang–Mills measure was constructed by Sengupta In this approach the 2-dimensional Yang–Mills measure is constructed by using a Gaussian measure on an infinite-dimensional space conditioned to satisfy relations implied by the topologies of the surface and of the bundle. Wilson loop variables (certain important variables on the space) were defined using stochastic differential equations and their expected values computed explicitly and found to agree with the results of the heat kernel action. Dana S. Fine used the formal Yang–Mills functional integral to compute loop expectation values. Other approaches include that of Klimek and Kondracki and Ashtekar et al. Thierry Lévy constructed the 2-dimensional Yang–Mills measure in a very general framework, starting with the loop-expectation value formulas and constructing the measure, somewhat analogously to Brownian motion measure being constructed from transition probabilities. Unlike other works that also aimed to construct the measure from loop expectation values, Lévy's construction makes it possible to consider a very wide family of loop observables. The discrete Yang–Mills measure is a term that has been used for the lattice gauge theory version of the Yang–Mills measure, especially for compact surfaces. The lattice in this case is a triangulation of the surface. Notable facts are: (i) the discrete Yang–Mills measure can encode the topology of the bundle over the continuum surface even if only the triangulation is used to define the measure; (ii) when two surfaces are sewn along a common boundary loop, the corresponding discrete Yang–Mills measures convolve to yield the measure for the combined surface. == Wilson loop expectation values in 2 dimensions == For a piecewise smooth loop γ {\displaystyle \gamma } on the base manifold M {\displaystyle M} and a point u {\displaystyle u} on the fiber in the principal G {\displaystyle G} -bundle P → M {\displaystyle P\to M} over the base point o ∈ M {\displaystyle o\in M} of the loop, there is the holonomy h γ ( A ) {\displaystyle h_{\gamma }(A)} of any connection A {\displaystyle A} on the bundle. For regular loops γ 1 , … , γ n {\displaystyle \gamma _{1},\ldots ,\gamma _{n}} , all based at o {\displaystyle o} and any function φ {\displaystyle \varphi } on G n {\displaystyle G^{n}} the function A ↦ φ ( h γ 1 ( A ) , … , h γ n ( A ) ) {\displaystyle A\mapsto \varphi {\bigl (}h_{\gamma _{1}}(A),\ldots ,h_{\gamma _{n}}(A){\bigr )}} is called a Wilson loop variable, of interest mostly when φ {\displaystyle \varphi } is a product of traces of the holonomies in representations of the group G {\displaystyle G} . With M {\displaystyle M} being a two-dimensional Riemannian manifold the loop expectation values ∫ φ ( h γ 1 ( A ) , … , h γ n ( A ) ) d μ T ( A ) {\displaystyle \int \varphi {\bigl (}h_{\gamma _{1}}(A),\ldots ,h_{\gamma _{n}}(A){\bigr )}\,d\mu _{T}(A)} were computed in the above-mentioned works. If M {\displaystyle M} is the plane then ∫ φ ( h γ ( A ) ) d μ T ( A ) = ∫ G φ ( x ) Q T a ( x ) d x , {\displaystyle \int \varphi {\bigl (}h_{\gamma }(A){\bigr )}\,d\mu _{T}(A)=\int _{G}\varphi (x)Q_{Ta}(x)\,dx,} where Q t ( y ) {\displaystyle Q_{t}(y)} is the heat kernel on the group G {\displaystyle G} , a {\displaystyle a} is the area enclosed by the loop γ {\displaystyle \gamma } , and the integration is with respect to unit-mass Haar measure. 
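For the abelian case G = U(1) the plane formula above can be made completely explicit, since the heat kernel on the circle is an elementary theta series. The sketch below fixes one common normalization of the heat kernel, Q_t(θ) = Σ_n exp(−tn²/2) exp(inθ) with respect to unit-mass Haar measure (other conventions simply rescale T), integrates the charge-1 Wilson loop against Q_{Ta}, and recovers the abelian area law exp(−Ta/2). This is an illustrative sketch under the stated convention, not part of the cited constructions.

```python
# Wilson loop expectation on the plane for G = U(1), using the formula
#   <phi(h_gamma)> = \int_G phi(x) Q_{Ta}(x) dx   (unit-mass Haar measure).
# Convention assumed: Q_t(theta) = sum_n exp(-t n^2 / 2) exp(i n theta).
import numpy as np

def heat_kernel(theta, t, n_max=50):
    n = np.arange(-n_max, n_max + 1)
    return np.exp(-t * n**2 / 2.0) @ np.exp(1j * np.outer(n, theta))

T, a = 0.7, 1.3                                    # parameter T and enclosed area (test values)
theta = np.linspace(0.0, 2.0 * np.pi, 20_000, endpoint=False)
Q = heat_kernel(theta, T * a)

# phi(x) = e^{i theta}: the holonomy in the charge-1 representation of U(1).
wilson = np.mean(np.exp(1j * theta) * Q)           # mean over theta = Haar integral

print(wilson.real, np.exp(-T * a / 2.0))           # both ~0.6346: the area law e^{-Ta/2}
```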
This formula was proved by Driver and by Gross et al. using the Gaussian measure construction of the Yang–Mills measure on the plane and by defining parallel transport by interpreting the equation of parallel transport as a Stratonovich stochastic differential equation. If M {\displaystyle M} is the 2-sphere then ∫ φ ( h γ ( A ) ) d μ T ( A ) = 1 Q T c ( e ) ∫ G φ ( x ) Q T a ( x ) Q T b ( x − 1 ) d x , {\displaystyle \int \varphi {\bigl (}h_{\gamma }(A){\bigr )}\,d\mu _{T}(A)={\frac {1}{Q_{Tc}(e)}}\int _{G}\varphi (x)Q_{Ta}(x)Q_{Tb}(x^{-1})\,dx,} where now b {\displaystyle b} is the area of the region "outside" the loop γ {\displaystyle \gamma } , and c {\displaystyle c} is the total area of the sphere. This formula was proved by Sengupta using the conditioned Gaussian measure construction of the Yang–Mills measure and the result agrees with what one gets by using the heat kernel action of Menotti and Onofri. As an example for higher genus surfaces, if M {\displaystyle M} is a torus, then ∫ φ ( h γ ( A ) ) d μ T ( A ) = ∫ G φ ( x ) Q T a ( x ) Q T b ( x − 1 w z w − 1 z − 1 ) d x d w d z ∫ G Q T c ( w z w − 1 z − 1 ) d w d z , {\displaystyle \int \varphi {\bigl (}h_{\gamma }(A){\bigr )}\,d\mu _{T}(A)={\frac {\int _{G}\varphi (x)Q_{Ta}(x)Q_{Tb}(x^{-1}wzw^{-1}z^{-1})\,dx\,dw\,dz}{\int _{G}Q_{Tc}(wzw^{-1}z^{-1})\,dw\,dz}},} with c {\displaystyle c} being the total area of the torus, and γ {\displaystyle \gamma } a contractible loop on the torus enclosing an area a {\displaystyle a} . This, and counterparts in higher genus as well as for surfaces with boundary and for bundles with nontrivial topology, were proved by Sengupta. There is an extensive physics literature on loop expectation values in two-dimensional Yang–Mills theory. Many of the above formulas were known in the physics literature from the 1970s, with the results initially expressed in terms of a sum over the characters of the gauge group rather than the heat kernel and with the function φ {\displaystyle \varphi } being the trace in some representation of the group. Expressions involving the heat kernel then appeared explicitly in the form of the "heat kernel action" in work of Menotti and Onofri. The role of the convolution property of the heat kernel was used in works of Sergio Albeverio et al. in constructing stochastic cosurface processes inspired by Yang–Mills theory and, indirectly, by Makeenko and Migdal in the physics literature. == The low-T limit == The Yang–Mills partition function is, formally, ∫ e − 1 T S Y M ( A ) D A {\displaystyle \int e^{-{\frac {1}{T}}S_{YM}(A)}\,DA} In the two-dimensional case we can view this as being (proportional to) the denominator that appears in the loop expectation values. Thus, for example, the partition function for the torus would be ∫ G 2 Q T S ( a b a − 1 b − 1 ) d a d b , {\displaystyle \int _{G^{2}}Q_{TS}(aba^{-1}b^{-1})\,da\,db,} where S {\displaystyle S} is the area of the torus. In two of the most impactful works in the field, Edward Witten showed that as T ↓ 0 {\displaystyle T\downarrow 0} the partition function yields the volume of the moduli space of flat connections with respect to a natural volume measure on the moduli space. This volume measure is associated to a natural symplectic structure on the moduli space when the surface is orientable, and is the torsion of a certain complex in the case where the surface is not orientable. Witten's discovery has been studied in different ways by several researchers. 
Let M g 0 {\displaystyle {\mathcal {M}}_{g}^{0}} denote the moduli space of flat connections on a trivial bundle, with structure group being a compact connected semi-simple Lie group G {\displaystyle G} whose Lie algebra is equipped with an Ad-invariant metric, over a compact two-dimensional orientable manifold of genus g ≥ 2 {\displaystyle g\geq 2} . Witten showed that the symplectic volume of this moduli space is given by a sum over all irreducible representations of G {\displaystyle G} . This was proved rigorously by Sengupta (see also the works by Lisa Jeffrey and by Kefeng Liu). There is a large literature on the symplectic structure on the moduli space of flat connections, and more generally on the moduli space itself, the major early work being that of Michael Atiyah and Raoul Bott. Returning to the Yang–Mills measure, Sengupta proved that the measure itself converges in a weak sense to a suitably scaled multiple of the symplectic volume measure for orientable surfaces of genus ≥ 2 {\displaystyle \geq 2} . Thierry Lévy and James R. Norris established a large deviations principle for this convergence, showing that the Yang–Mills measure encodes the Yang–Mills action functional even though this functional does not explicitly appear in the rigorous formulation of the measure. == The large-N limit == The large-N limit of gauge theories refers to the behavior of the theory for gauge groups of the form U ( N ) {\displaystyle U(N)} , S U ( N ) {\displaystyle SU(N)} , O ( N ) {\displaystyle O(N)} , S O ( N ) {\displaystyle SO(N)} , and other such families, as N {\displaystyle N} goes to infinity. There is a large physics literature on this subject, including major early works by Gerardus 't Hooft. A key tool in this analysis is the Makeenko–Migdal equation. In two dimensions, the Makeenko–Migdal equation takes a special form developed by Kazakov and Kostov. In the large-N limit, the 2-D form of the Makeenko–Migdal equation relates the Wilson loop functional for a complicated curve with multiple crossings to the product of Wilson loop functionals for a pair of simpler curves with at least one fewer crossing. In the case of the sphere or the plane, it was proposed that the Makeenko–Migdal equation could (in principle) reduce the computation of Wilson loop functionals for arbitrary curves to the Wilson loop functional for a simple closed curve. In dimension 2, some of the major ideas were proposed by I. M. Singer, who named this limit the master field (a general notion in some areas of physics). Xu studied the large- N {\displaystyle N} limit of 2-dimensional Yang–Mills loop expectation values using ideas from random matrix theory. Sengupta computed the large-N limit of loop expectation values in the plane and commented on the connection with free probability. Confirming one proposal of Singer, Michael Anshelevich and Sengupta showed that the large-N limit of the Yang–Mills measure over the plane for the groups U ( N ) {\displaystyle U(N)} is given by a free probability theoretic counterpart of the Yang–Mills measure. An extensive study of the master field in the plane was made by Thierry Lévy. Several major contributions have been made by Bruce K. Driver, Brian C. Hall, Todd Kemp, Franck Gabriel, and Antoine Dahlqvist. Dahlqvist and Norris have constructed the master field on the two-dimensional sphere. In spacetime dimension larger than 2, there is very little in terms of rigorous mathematical results. 
Sourav Chatterjee has proved several results in large-N gauge theory for dimension larger than 2. Chatterjee established an explicit formula for the leading term of the free energy of three-dimensional U ( N ) {\displaystyle U(N)} lattice gauge theory for any N, as the lattice spacing tends to zero. Let Z ( n , ε , g ) {\displaystyle Z(n,\varepsilon ,g)} be the partition function of d {\displaystyle d} -dimensional U ( N ) {\displaystyle U(N)} lattice gauge theory with coupling strength g {\displaystyle g} in the box with lattice spacing ε {\displaystyle \varepsilon } and size n spacings in each direction. Chatterjee showed that in dimensions d=2 and 3, log ⁡ Z ( n , ε , g ) {\displaystyle \log Z(n,\varepsilon ,g)} is n d ( 1 2 ( d − 1 ) N 2 log ⁡ ( g 2 ε 4 − d ) + ( d − 1 ) log ⁡ ( ∏ j = 1 N − 1 j ! ( 2 π ) N / 2 ) + N 2 K d ) {\displaystyle n^{d}\left({\frac {1}{2}}(d-1)N^{2}\log(g^{2}\varepsilon ^{4-d})+(d-1)\log \left({\frac {\prod _{j=1}^{N-1}j!}{(2\pi )^{N/2}}}\right)+N^{2}K_{d}\right)} to leading order in n {\displaystyle n} , where K d {\displaystyle K_{d}} is a limiting free-energy term. A similar result was also obtained in dimension 4, for n → ∞ {\displaystyle n\to \infty } , ε → 0 {\displaystyle \varepsilon \to 0} , and g → 0 {\displaystyle g\to 0} independently. == References ==
Wikipedia/Two-dimensional_Yang–Mills_theory
In theoretical physics, the six-dimensional (2,0)-superconformal field theory is a quantum field theory whose existence is predicted by arguments in string theory. It is still poorly understood because there is no known description of the theory in terms of an action functional. Despite the inherent difficulty in studying this theory, it is considered to be an interesting object for a variety of reasons, both physical and mathematical. == Applications == The (2,0)-theory has proven to be important for studying the general properties of quantum field theories. Indeed, this theory subsumes a large number of mathematically interesting effective quantum field theories and points to new dualities relating these theories. For example, Luis Alday, Davide Gaiotto, and Yuji Tachikawa showed that by compactifying this theory on a surface, one obtains a four-dimensional quantum field theory, and there is a duality known as the AGT correspondence which relates the physics of this theory to certain physical concepts associated with the surface itself. More recently, theorists have extended these ideas to study the theories obtained by compactifying down to three dimensions. In addition to its applications in quantum field theory, the (2,0)-theory has spawned a number of important results in pure mathematics. For example, the existence of the (2,0)-theory was used by Witten to give a "physical" explanation for a conjectural relationship in mathematics called the geometric Langlands correspondence. In subsequent work, Witten showed that the (2,0)-theory could be used to understand a concept in mathematics called Khovanov homology. Developed by Mikhail Khovanov around 2000, Khovanov homology provides a tool in knot theory, the branch of mathematics that studies and classifies the different shapes of knots. Another application of the (2,0)-theory in mathematics is the work of Davide Gaiotto, Greg Moore, and Andrew Neitzke, which used physical ideas to derive new results in hyperkähler geometry. == See also == ABJM superconformal field theory N = 4 supersymmetric Yang–Mills theory == Notes == == References == Alday, Luis; Gaiotto, Davide; Tachikawa, Yuji (2010). "Liouville correlation functions from four-dimensional gauge theories". Letters in Mathematical Physics. 91 (2): 167–197. arXiv:0906.3219. Bibcode:2010LMaPh..91..167A. doi:10.1007/s11005-010-0369-5. S2CID 15459761. Dimofte, Tudor; Gaiotto, Davide; Gukov, Sergei (2010). "Gauge theories labelled by three-manifolds". Communications in Mathematical Physics. 325 (2): 367–419. arXiv:1108.4389. Bibcode:2014CMaPh.325..367D. doi:10.1007/s00220-013-1863-2. S2CID 10882599. Gaiotto, Davide; Moore, Gregory; Neitzke, Andrew (2013). "Wall-crossing, Hitchin systems, and the WKB approximation". Advances in Mathematics. 234: 239–403. arXiv:0907.3987. doi:10.1016/j.aim.2012.09.027. S2CID 115176676. Khovanov, Mikhail (2000). "A categorification of the Jones polynomial". Duke Mathematical Journal. 101 (3): 359–426. arXiv:math/9908171. doi:10.1215/s0012-7094-00-10131-7. S2CID 119585149. Moore, Gregory (2012). "Lecture Notes for Felix Klein Lectures" (PDF). Retrieved 14 August 2013. Witten, Edward (2009). "Geometric Langlands from six dimensions". arXiv:0905.2720 [hep-th]. Witten, Edward (2012). "Fivebranes and knots". Quantum Topology. 3 (1): 1–137. doi:10.4171/qt/26.
Wikipedia/6D_(2,0)_superconformal_field_theory
In mathematical physics, constructive quantum field theory is the field devoted to showing that quantum field theory can be defined in terms of precise mathematical structures. This demonstration requires new mathematics, in a sense analogous to the way classical real analysis put calculus on a mathematically rigorous foundation. Weak, strong, and electromagnetic forces of nature are believed to have their natural description in terms of quantum fields. Attempts to put quantum field theory on a basis of completely defined concepts have involved most branches of mathematics, including functional analysis, differential equations, probability theory, representation theory, geometry, and topology. It is known that a quantum field is inherently hard to handle using conventional mathematical techniques like explicit estimates. This is because a quantum field has the general nature of an operator-valued distribution, a type of object from mathematical analysis. The existence theorems for quantum fields can be expected to be very difficult to find, if indeed they are possible at all. One discovery of the theory that can be related in non-technical terms is that the dimension d of the spacetime involved is crucial. Notable work in the field by James Glimm and Arthur Jaffe showed that with d < 4 many examples can be found. Along with the work of their students, coworkers, and others, constructive field theory resulted in a mathematical foundation and exact interpretation of what previously was only a set of recipes, also in the case d < 4. Theoretical physicists had given these rules the name "renormalization," but most physicists had been skeptical about whether they could be turned into a mathematical theory. Today one of the most important open problems, both in theoretical physics and in mathematics, is to establish similar results for gauge theory in the realistic case d = 4. The traditional basis of constructive quantum field theory is the set of Wightman axioms. Konrad Osterwalder and Robert Schrader showed that there is an equivalent problem in mathematical probability theory. The examples with d < 4 satisfy the Wightman axioms as well as the Osterwalder–Schrader axioms. They also fall in the related framework introduced by Rudolf Haag and Daniel Kastler, called algebraic quantum field theory. There is a firm belief in the physics community that the gauge theory of C.N. Yang and Robert Mills (the Yang–Mills theory) can lead to a tractable theory, but new ideas and new methods will be required to actually establish this, and this could take many years. == External links == Jaffe, Arthur (2000). "Constructive Quantum Field Theory" (PDF). Mathematical Physics 2000: 111–127. doi:10.1142/9781848160224_0007. ISBN 978-1-86094-230-3. Baez, John (1992). Introduction to algebraic and constructive quantum field theory. Princeton, New Jersey: Princeton University Press. ISBN 978-0-691-60512-8. OCLC 889252663.
Wikipedia/Constructive_quantum_field_theory
In scientific modeling, a toy model is a deliberately simplistic model with many details removed so that it can be used to explain a mechanism concisely. It is also useful in a description of the fuller model. In "toy" mathematical models, this is usually done by reducing or extending the number of dimensions or reducing the number of fields/variables or restricting them to a particular symmetric form. In economic models, some may be only loosely based on theory, others more explicitly so. They allow for a quick first pass at some question, and present the essence of the answer from a more complicated model or from a class of models. For the researcher, they may come before writing a more elaborate model, or after, once the elaborate model has been worked out. Blanchard's list of examples includes the IS–LM model, the Mundell–Fleming model, the RBC model, and the New Keynesian model. In "toy" physical descriptions, an analogous example of an everyday mechanism is often used for illustration. The phrase "tinker-toy model" is also used, in reference to the Tinkertoys product used for children's constructivist learning. == Examples == Examples of toy models in physics include: the Ising model as a toy model for ferromagnetism, or lattice models more generally. It is the simplest model that allows for Euclidean quantum field theory in statistical physics. Newtonian orbital mechanics as described by assuming that Earth is attached to the Sun by an elastic band; the Schwarzschild metric, general relativistic model describing a single symmetrical non-rotating non-charged concentration of mass (such as a perfect spherical mass): a simple relativistic "equivalent" of the classical symmetric Newtonian mass (in fact, the first solution of the Einstein field equations to be developed); Hawking radiation around a black hole described as conventional radiation from a fictitious membrane at radius r = 2m (the black hole membrane paradigm); frame-dragging around a rotating star considered as the effect of space being a conventional viscous fluid; the null dust; the Gödel metric in general relativity, which allows closed timelike curves; the Lambda-CDM model of cosmology, in which general relativistic effects of structure formation are not taken into account. the empty universe, a simple expanding universe model; the Bohr model of the atom, a "semi-classical" quantum mechanical model of the atom, which can be solved exactly for the hydrogen atom; the particle in a box in quantum mechanics; the Spekkens model, a hidden-variable theory; the primon gas, which illustrates some connections between number theory and physics. == See also == Physical model – Informative representation of an entity Spherical cow – Humorous concept in scientific models Toy problem – Simplified example problem used for research or exposition Toy theorem – Simplified instance of a general theorem == References ==
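As an indication of how little machinery such models need, the following sketch (not taken from any source cited here) simulates the two-dimensional Ising model named above with the Metropolis algorithm on a small periodic lattice; the coupling J = 1, the lattice size, and the temperatures are illustrative choices, and the mean absolute magnetization is expected to fall off near the known critical temperature T_c ≈ 2.27 in units of J/k_B.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta):
    """One Metropolis sweep over an LxL periodic Ising lattice (J = 1, k_B = 1)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        neighbours = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                      spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * neighbours          # energy change if this spin flips
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

def magnetization(T, L=16, sweeps=3000, burn_in=1000):
    spins = rng.choice([-1, 1], size=(L, L))
    samples = []
    for s in range(sweeps):
        metropolis_sweep(spins, 1.0 / T)
        if s >= burn_in:
            samples.append(abs(spins.mean()))
    return np.mean(samples)

for T in (1.5, 2.0, 2.27, 3.0):
    print(f"T = {T:4.2f}   <|m|> = {magnetization(T):.3f}")
```

Even at this small size the qualitative behaviour of the ferromagnetic transition is visible, which is the sense in which the Ising model serves as a toy model for ferromagnetism.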
Wikipedia/Toy_model
In quantum field theory, scalar chromodynamics, also known as scalar quantum chromodynamics or scalar QCD, is a gauge theory consisting of a gauge field coupled to a scalar field. This theory is used to model the Higgs sector of the Standard Model. It arises from a coupling of a scalar field to gauge fields. Scalar fields are used to model certain particles in particle physics; the most important example is the Higgs boson. Gauge fields are used to model forces in particle physics: they are force carriers. When applied to the Higgs sector, these are the gauge fields appearing in electroweak theory, described by Glashow–Weinberg–Salam theory. == Matter content and Lagrangian == === Matter content === This article discusses the theory on flat spacetime R 1 , 3 {\displaystyle \mathbb {R} ^{1,3}} , commonly known as Minkowski space. The model consists of a complex vector valued scalar field ϕ {\displaystyle \phi } minimally coupled to a gauge field A μ {\displaystyle A_{\mu }} . The gauge group of the theory is a Lie group G {\displaystyle G} . Commonly, this is SU ( N ) {\displaystyle {\text{SU}}(N)} for some N {\displaystyle N} , though many details hold even without fixing G {\displaystyle G} concretely. The scalar field can be treated as a function ϕ : R 1 , 3 → V {\displaystyle \phi :\mathbb {R} ^{1,3}\rightarrow V} , where ( V , ρ , G ) {\displaystyle (V,\rho ,G)} is the data of a representation of G {\displaystyle G} . Then V {\displaystyle V} is a vector space. The 'scalar' refers to how ϕ {\displaystyle \phi } transforms (trivially) under the action of the Lorentz group, despite ϕ {\displaystyle \phi } being vector valued. For concreteness, the representation is often chosen to be the fundamental representation. For SU ( N ) {\displaystyle {\text{SU}}(N)} , this fundamental representation is C N {\displaystyle \mathbb {C} ^{N}} . Another common representation is the adjoint representation. In this representation, varying the Lagrangian below to find the equations of motion gives the Yang–Mills–Higgs equation. Each component of the gauge field is a function A μ : R 1 , 3 → g {\displaystyle A_{\mu }:\mathbb {R} ^{1,3}\rightarrow {\mathfrak {g}}} where g {\displaystyle {\mathfrak {g}}} is the Lie algebra of G {\displaystyle G} from the Lie group–Lie algebra correspondence. From a geometric point of view, A μ {\displaystyle A_{\mu }} are the components of a principal connection under a global choice of trivialization (which can be made due to the theory being on flat spacetime). === Lagrangian === The Lagrangian density arises from minimally coupling the Klein–Gordon Lagrangian (with a potential) to the Yang–Mills Lagrangian. With the scalar field ϕ {\displaystyle \phi } in the fundamental representation of SU ( N ) {\displaystyle {\text{SU}}(N)} , it takes the form L = − 1 2 tr ( F μ ν F μ ν ) + ( D μ ϕ ) † D μ ϕ − V ( ϕ ) , {\displaystyle {\mathcal {L}}=-{\frac {1}{2}}{\text{tr}}(F_{\mu \nu }F^{\mu \nu })+(D_{\mu }\phi )^{\dagger }D^{\mu }\phi -V(\phi ),} where F μ ν {\displaystyle F_{\mu \nu }} is the gauge field strength, defined as F μ ν = ∂ μ A ν − ∂ ν A μ + i g [ A μ , A ν ] {\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }+ig[A_{\mu },A_{\nu }]} . In geometry this is the curvature form. D μ ϕ {\displaystyle D_{\mu }\phi } is the covariant derivative of ϕ {\displaystyle \phi } , defined as D μ ϕ = ∂ μ ϕ − i g ρ ( A μ ) ϕ . {\displaystyle D_{\mu }\phi =\partial _{\mu }\phi -ig\rho (A_{\mu })\phi .} g {\displaystyle g} is the coupling constant. V ( ϕ ) {\displaystyle V(\phi )} is the potential. tr {\displaystyle {\text{tr}}} is an invariant bilinear form on g {\displaystyle {\mathfrak {g}}} , such as the Killing form. 
It is a typical abuse of notation to label this tr {\displaystyle {\text{tr}}} as the form often arises as the trace in some representation of g {\displaystyle {\mathfrak {g}}} . This straightforwardly generalizes to an arbitrary gauge group G {\displaystyle G} , where ϕ {\displaystyle \phi } takes values in an arbitrary representation ρ {\displaystyle \rho } equipped with an invariant inner product ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } , by replacing ( D μ ϕ ) † D μ ϕ ↦ ⟨ D μ ϕ , D μ ϕ ⟩ {\displaystyle (D_{\mu }\phi )^{\dagger }D^{\mu }\phi \mapsto \langle D_{\mu }\phi ,D^{\mu }\phi \rangle } . === Gauge invariance === The model is invariant under gauge transformations, which at the group level is a function U : R 1 , 3 → G {\displaystyle U:\mathbb {R} ^{1,3}\rightarrow G} , and at the algebra level is a function α : R 1 , 3 → g {\displaystyle \alpha :\mathbb {R} ^{1,3}\rightarrow {\mathfrak {g}}} . At the group level, the transformations of fields is ϕ ( x ) ↦ U ( x ) ϕ ( x ) {\displaystyle \phi (x)\mapsto U(x)\phi (x)} A μ ( x ) ↦ U A μ U − 1 − i g ( ∂ μ U ) U − 1 . {\displaystyle A_{\mu }(x)\mapsto UA_{\mu }U^{-1}-{\frac {i}{g}}(\partial _{\mu }U)U^{-1}.} From the geometric viewpoint, U ( x ) {\displaystyle U(x)} is a global change of trivialization. This is why it is a misnomer to call gauge symmetry a symmetry: it is really a redundancy in the description of the system. === Curved spacetime === The theory admits a generalization to a curved spacetime M {\displaystyle M} , but this requires more subtle definitions for many objects appearing in the theory. For example, the scalar field must be viewed as a section of an associated vector bundle with fibre V {\displaystyle V} . This is still true on flat spacetime, but the flatness of the base space allows the section to be viewed as a function M → V {\displaystyle M\rightarrow V} , which is conceptually simpler. == Higgs mechanism == If the potential is minimized at a non-zero value of ϕ {\displaystyle \phi } , this model exhibits the Higgs mechanism. In fact the Higgs boson of the Standard Model is modeled by this theory with the choice G = SU ( 2 ) {\displaystyle G={\text{SU}}(2)} ; the Higgs boson is also coupled to electromagnetism. == Examples == By concretely choosing a potential V {\displaystyle V} , some familiar theories can be recovered. Taking V ( ϕ ) = M 2 ϕ † ϕ {\displaystyle V(\phi )=M^{2}\phi ^{\dagger }\phi } gives Yang–Mills minimally coupled to a Klein–Gordon field with mass M {\displaystyle M} . Taking V ( ϕ ) = λ ( ϕ † ϕ ) 2 − μ H 2 ϕ † ϕ {\displaystyle V(\phi )=\lambda (\phi ^{\dagger }\phi )^{2}-\mu _{H}^{2}\phi ^{\dagger }\phi } gives the potential for the Higgs boson in the Standard Model. == See also == Scalar electrodynamics Quantum chromodynamics == References ==
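The gauge invariance described above can be checked numerically in a stripped-down setting. The sketch below is purely illustrative and rests on several simplifying assumptions not stated in the article: the gauge group is SU(2) in the fundamental representation, all fields are constant in spacetime (so every derivative term vanishes and F_{μν} reduces to ig[A_μ, A_ν], while D_μϕ reduces to −igA_μϕ), the potential is the massive Klein–Gordon choice V(ϕ) = M²ϕ†ϕ from the examples above, tr is the ordinary matrix trace, and the gauge transformation is a constant (global) U ∈ SU(2), under which the inhomogeneous term in the transformation of A_μ vanishes. The coupling g = 0.9 and mass M = 1.3 are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
eta = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric, signature (+,-,-,-)
g_c = 0.9                                    # coupling constant (illustrative value)

def random_lie():
    # Random element of su(2), represented here as a traceless Hermitian matrix.
    H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    H = (H + H.conj().T) / 2
    return H - np.trace(H) / 2 * np.eye(2)

def random_su2():
    # Random SU(2) element obtained by exponentiating a random generator.
    H = random_lie()
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T

def lagrangian(A, phi, M=1.3):
    # Constant fields: F_{mu nu} = i g [A_mu, A_nu], D_mu phi = -i g A_mu phi.
    F = [[1j * g_c * (A[m] @ A[n] - A[n] @ A[m]) for n in range(4)] for m in range(4)]
    Dphi = [-1j * g_c * A[m] @ phi for m in range(4)]
    ym, kin = 0.0, 0.0
    for m in range(4):
        for n in range(4):
            # eta is diagonal, so raising indices only multiplies by the signs.
            ym += -0.5 * eta[m, m] * eta[n, n] * np.trace(F[m][n] @ F[m][n]).real
        kin += eta[m, m] * (Dphi[m].conj().T @ Dphi[m]).real.item()
    pot = M**2 * (phi.conj().T @ phi).real.item()
    return ym + kin - pot

A = [random_lie() for _ in range(4)]                            # constant gauge field
phi = rng.normal(size=(2, 1)) + 1j * rng.normal(size=(2, 1))    # constant scalar

U = random_su2()
A_t = [U @ a @ U.conj().T for a in A]
phi_t = U @ phi

print("L before gauge transformation:", lagrangian(A, phi))
print("L after  gauge transformation:", lagrangian(A_t, phi_t))   # should agree
```

The two printed values agree to machine precision because each building block transforms covariantly: F_{μν} ↦ UF_{μν}U^{−1}, D_μϕ ↦ UD_μϕ, and ϕ†ϕ is unchanged.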
Wikipedia/Scalar_chromodynamics
Crystallography is the branch of science devoted to the study of molecular and crystalline structure and properties. The word crystallography is derived from the Ancient Greek word κρύσταλλος (krústallos; "clear ice, rock-crystal"), and γράφειν (gráphein; "to write"). In July 2012, the United Nations recognised the importance of the science of crystallography by proclaiming 2014 the International Year of Crystallography. Crystallography is a broad topic, and many of its subareas, such as X-ray crystallography, are themselves important scientific topics. Crystallography ranges from the fundamentals of crystal structure to the mathematics of crystal geometry, including structures that are not periodic, such as quasicrystals. At the atomic scale it can involve the use of X-ray diffraction to produce experimental data that the tools of X-ray crystallography can convert into detailed positions of atoms, and sometimes electron density. At larger scales it includes experimental tools such as orientational imaging to examine the relative orientations at the grain boundary in materials. Crystallography plays a key role in many areas of biology, chemistry, and physics, as well as in new developments in these fields. == History and timeline == Before the 20th century, the study of crystals was based on physical measurements of their geometry using a goniometer. This involved measuring the angles of crystal faces relative to each other and to theoretical reference axes (crystallographic axes), and establishing the symmetry of the crystal in question. The position in 3D space of each crystal face is plotted on a stereographic net such as a Wulff net or Lambert net. The pole to each face is plotted on the net. Each point is labelled with its Miller index. The final plot allows the symmetry of the crystal to be established. The discovery of X-rays and electrons in the last decade of the 19th century enabled the determination of crystal structures on the atomic scale, which brought about the modern era of crystallography. The first X-ray diffraction experiment was conducted in 1912 by Max von Laue, while electron diffraction was first realized in 1927 in the Davisson–Germer experiment and parallel work by George Paget Thomson and Alexander Reid. These developed into the two main branches of crystallography, X-ray crystallography and electron diffraction. The quality and throughput of solving crystal structures greatly improved in the second half of the 20th century, with the development of customized instruments and phasing algorithms. Nowadays, crystallography is an interdisciplinary field, supporting theoretical and experimental discoveries in various domains. Modern-day scientific instruments for crystallography vary from laboratory-sized equipment, such as diffractometers and electron microscopes, to dedicated large facilities, such as photoinjectors, synchrotron light sources and free-electron lasers. == Methodology == Crystallographic methods depend mainly on analysis of the diffraction patterns of a sample targeted by a beam of some type. X-rays are most commonly used; other beams used include electrons or neutrons. Crystallographers often explicitly state the type of beam used, as in the terms X-ray diffraction, neutron diffraction and electron diffraction. These three types of radiation interact with the specimen in different ways. X-rays interact with the spatial distribution of electrons in the sample. 
Neutrons are scattered by the atomic nuclei through the strong nuclear forces, but in addition the magnetic moment of neutrons is non-zero, so they are also scattered by magnetic fields. When neutrons are scattered from hydrogen-containing materials, they produce diffraction patterns with high noise levels, which can sometimes be resolved by substituting deuterium for hydrogen. Electrons are charged particles and therefore interact with the total charge distribution of both the atomic nuclei and the electrons of the sample. It is hard to focus X-rays or neutrons, but since electrons are charged they can be focused and are used in electron microscopes to produce magnified images. Transmission electron microscopy and related techniques, such as scanning transmission electron microscopy and high-resolution electron microscopy, can be used to obtain images, in many cases with atomic resolution, from which crystallographic information can be obtained. There are also other methods, such as low-energy electron diffraction, low-energy electron microscopy and reflection high-energy electron diffraction, which can be used to obtain crystallographic information about surfaces. == Applications in various areas == === Materials science === Crystallography is used by materials scientists to characterize different materials. In single crystals, the effects of the crystalline arrangement of atoms are often easy to see macroscopically because the natural shapes of crystals reflect the atomic structure. In addition, physical properties are often controlled by crystalline defects. The understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Most materials do not occur as a single crystal, but are poly-crystalline in nature (they exist as an aggregate of small crystals with different orientations). As such, powder diffraction techniques, which take diffraction patterns of samples with a large number of crystals, play an important role in structural determination. Other physical properties are also linked to crystallography. For example, the minerals in clay form small, flat, platelike structures. Clay can be easily deformed because the platelike particles can slip along each other in the plane of the plates, yet remain strongly connected in the direction perpendicular to the plates. Such mechanisms can be studied by crystallographic texture measurements. Crystallographic studies help elucidate the relationship between a material's structure and its properties, aiding in developing new materials with tailored characteristics. This understanding is crucial in various fields, including metallurgy, geology, and materials science. Advancements in crystallographic techniques, such as electron diffraction and X-ray crystallography, continue to expand our understanding of material behavior at the atomic level. In another example, iron transforms from a body-centered cubic (bcc) structure called ferrite to a face-centered cubic (fcc) structure called austenite when it is heated. The fcc structure is a close-packed structure unlike the bcc structure; thus the volume of the iron decreases when this transformation occurs. Crystallography is useful in phase identification. When manufacturing or using a material, it is generally desirable to know what compounds and what phases are present in the material, as their composition, structure and proportions will influence the material's properties. Each phase has a characteristic arrangement of atoms. 
X-ray or neutron diffraction can be used to identify which structures are present in the material, and thus which compounds are present. Crystallography covers the enumeration of the symmetry patterns which can be formed by atoms in a crystal and for this reason is related to group theory. === Biology === X-ray crystallography is the primary method for determining the molecular conformations of biological macromolecules, particularly proteins and nucleic acids such as DNA and RNA. The first crystal structure of a macromolecule was solved in 1958, a three-dimensional model of the myoglobin molecule obtained by X-ray analysis. Neutron crystallography is often used to help refine structures obtained by X-ray methods or to solve a specific bond; the methods are often viewed as complementary, as X-rays are sensitive to electron positions and scatter most strongly off heavy atoms, while neutrons are sensitive to nucleus positions and scatter strongly even off many light isotopes, including hydrogen and deuterium. Electron diffraction has been used to determine some protein structures, most notably membrane proteins and viral capsids. Macromolecular structures determined through X-ray crystallography (and other techniques) are housed in the Protein Data Bank (PDB), a freely accessible repository for the structures of proteins and other biological macromolecules. There are many molecular graphics codes available for visualising these structures. == Notation == Coordinates in square brackets such as [100] denote a direction vector (in real space). Coordinates in angle brackets or chevrons such as <100> denote a family of directions which are related by symmetry operations. In the cubic crystal system for example, <100> would mean [100], [010], [001] or the negative of any of those directions. Miller indices in parentheses such as (100) denote a plane of the crystal structure, and regular repetitions of that plane with a particular spacing. In the cubic system, the normal to the (hkl) plane is the direction [hkl], but in lower-symmetry cases, the normal to (hkl) is not parallel to [hkl]. Indices in curly brackets or braces such as {100} denote a family of planes and their normals. In cubic materials the symmetry makes them equivalent, just as angle brackets denote a family of directions. In non-cubic materials, <hkl> is not necessarily perpendicular to {hkl}. == Reference literature == The International Tables for Crystallography is an eight-book series that outlines the standard notations for formatting, describing and testing crystals. The series contains books that cover analysis methods and the mathematical procedures for determining organic structure through X-ray crystallography, electron diffraction, and neutron diffraction. The International Tables are focused on procedures, techniques and descriptions and do not list the physical properties of individual crystals themselves. Each book is about 1000 pages and the titles of the books are: Vol A - Space Group Symmetry, Vol A1 - Symmetry Relations Between Space Groups, Vol B - Reciprocal Space, Vol C - Mathematical, Physical, and Chemical Tables, Vol D - Physical Properties of Crystals, Vol E - Subperiodic Groups, Vol F - Crystallography of Biological Macromolecules, and Vol G - Definition and Exchange of Crystallographic Data. 
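The notation described above can be made concrete with a short calculation. The sketch below is illustrative rather than drawn from the International Tables: it assumes a cubic crystal with an example lattice parameter a = 3.52 Å (roughly that of nickel) and the Cu Kα wavelength λ ≈ 1.5406 Å, uses the cubic interplanar-spacing relation d(hkl) = a / √(h² + k² + l²) together with Bragg's law λ = 2d sinθ, and generates the ⟨100⟩ family of directions by applying the index permutations and sign changes that relate them in the cubic system.

```python
import itertools
import math

a_lat = 3.52          # example cubic lattice parameter, angstroms (illustrative)
wavelength = 1.5406   # Cu K-alpha wavelength, angstroms

def d_spacing_cubic(h, k, l, a=a_lat):
    """Interplanar spacing of the (hkl) planes in a cubic crystal."""
    return a / math.sqrt(h * h + k * k + l * l)

def bragg_two_theta(d, lam=wavelength):
    """Diffraction angle 2*theta in degrees from Bragg's law, lambda = 2 d sin(theta)."""
    return 2.0 * math.degrees(math.asin(lam / (2.0 * d)))

for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1)]:
    d = d_spacing_cubic(*hkl)
    print(f"({hkl[0]}{hkl[1]}{hkl[2]}): d = {d:.4f} A, 2theta = {bragg_two_theta(d):.2f} deg")

# The <100> family: all directions related to [100] by cubic symmetry
# (permutations of the indices combined with sign changes).
family = set()
for perm in itertools.permutations((1, 0, 0)):
    for signs in itertools.product((1, -1), repeat=3):
        family.add(tuple(s * p for s, p in zip(signs, perm)))
print("<100> family:", sorted(family))
```

The six members printed for ⟨100⟩ correspond to the directions [100], [010], [001] and their negatives, as stated in the Notation section.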
== Notable scientists == == See also == == References == == External links == Free book, Geometry of Crystals, Polycrystals and Phase Transformations American Crystallographic Association Learning Crystallography Web Course on Crystallography Crystallographic Space Groups
Wikipedia/Crystallographic
Glass fiber reinforced concrete (GFRC) is a type of fiber-reinforced concrete. The product is also known as glassfibre reinforced concrete or GRC in British English. Glass fiber concretes are mainly used in exterior building façade panels and as architectural precast concrete. Somewhat similar materials are fiber cement siding and cement boards. == Composition == GRC (Glass fibre-reinforced concrete) ceramic consists of high-strength, alkali-resistant glass fibre embedded in a concrete & ceramic matrix. In this form, both fibres and matrix retain their physical and chemical identities, while offering a synergistic combination of properties that cannot be achieved with either of the components acting alone. In general, fibres are the principal load-carrying members, while the surrounding matrix keeps them in the desired locations and orientation, acting as a load transfer medium between the fibres and protecting them from environmental damage. The fibres provide reinforcement for the matrix and other useful functions in fibre-reinforced composite materials. Glass fibres can be incorporated into a matrix either in continuous or discontinuous (chopped) lengths. Durability was poor with the original type of glass fibres since the alkalinity of cement reacts with its silica. In the 1970s alkali-resistant glass fibres were commercialized. Alkali resistance is achieved by adding zirconia to the glass. The higher the zirconia content the better the resistance to alkali attack. AR glass fibres should have a Zirconia content of more than 16% to be in compliance with internationally recognized specifications (EN, ASTM, PCI, GRCA, etc). === Laminates === A widely used application for fibre-reinforced concrete is structural laminate, obtained by adhering and consolidating thin layers of fibres and matrix into the desired thickness. The fibre orientation in each layer as well as the stacking sequence of various layers can be controlled to generate a wide range of physical and mechanical properties for the composite laminate. GFRC cast without steel framing is commonly used for purely decorative applications such as window trims, decorative columns, exterior friezes, or limestone-like wall panels. == Properties == The design of glass-fibre-reinforced concrete panels uses a knowledge of its basic properties under tensile, compressive, bending and shear forces, coupled with estimates of behavior under secondary loading effects such as creep, thermal response and moisture movement. There are a number of differences between structural metal and fibre-reinforced composites. For example, metals in general exhibit yielding and plastic deformation, whereas most fibre-reinforced composites are elastic in their tensile stress-strain characteristics. However, the dissimilar nature of these materials provides mechanisms for high-energy absorption on a microscopic scale comparable to the yielding process. Depending on the type and severity of external loads, a composite laminate may exhibit gradual deterioration in properties but usually does not fail in a catastrophic manner. Mechanisms of damage development and growth in metal and composite structure are also quite different. Other important characteristics of many fibre-reinforced composites are their non-corroding behavior, high damping capacity and low coefficients of thermal expansion. Glass-fibre-reinforced concrete architectural panels have the general appearance of pre-cast concrete panels but differ in several significant ways. 
For example, the GFRC panels, on average, weigh substantially less than pre-cast concrete panels due to their reduced thickness. Their low weight decreases loads superimposed on the building’s structural components, making construction of the building frame more economical. === Sandwich panels === A sandwich panel is a composite of three or more materials bonded together to form a structural panel. It takes advantage of the shear strength of a low-density core material and the high compressive and tensile strengths of the GFRC facing to obtain high strength-to-weight ratios. The theory of sandwich panels and functions of the individual components may be described by making an analogy to an I-beam. The core in a sandwich panel is comparable to the web of an I-beam, which supports the flanges and allows them to act as a unit. The web of the I-beam and the core of the sandwich panels carry the beam shear stresses. The core in a sandwich panel differs from the web of an I-beam in that it maintains continuous support for the facings, allowing the facings to be worked up to or above their yield strength without crimping or buckling. Obviously, the bonds between the core and facings must be capable of transmitting shear loads between these two components, thus making the entire structure an integral unit. The load-carrying capacity of a sandwich panel can be increased dramatically by introducing light steel framing. Light steel stud framing is similar to conventional steel stud framing for walls, except that the frame is encased in a concrete product. Here, the sides of the steel frame are covered with two or more layers of GFRC, depending on the type and magnitude of external loads. The strong and rigid GFRC provides full lateral support on both sides of the studs, preventing them from twisting and buckling laterally. The resulting panel is lightweight in comparison with traditionally reinforced concrete yet is strong and durable and can be easily handled. === GRC Jali === GRC stands for glass-reinforced concrete, while Jali refers to the intricate latticework or screen-like patterns often applied to GRC panels. GRC Jali, often referred to as glass fibre reinforced concrete (GFRC), is a highly durable and flexible building material used in various outdoor and indoor applications. It is made from an amalgamation of sand, glass fibres (preferably OC and NIG), and water. It is well-known for its strength as well as its weatherproofing and appealing look. === Technical specifications === GFRC Material Properties Typical strength properties of GRC === Uses === GFRC is highly versatile and has a large number of use cases due to its strength, weight, and design. It is most commonly used in the construction industry, in demanding applications such as architectural cladding hung several stories above sidewalks, as well as in more aesthetic applications such as interior furniture pieces like GFRC coffee tables, GRC Jali, and elevation screens. Glass fiber reinforced concrete not only reduces the cost of concrete but also enhances its strength. == References == GFRC Screen (GRC Jali). Asian GRC. Revised 12 February 2024. "GFRC Technical Specification, GRC Material Properties, Typical strength properties of GRC and uses".
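The I-beam analogy can be made quantitative with elementary section properties. The sketch below is purely illustrative and uses assumed dimensions rather than data from the article: it compares the second moment of area of a solid GFRC slab with that of a sandwich section in which the same total facing thickness is split into two skins separated by a lightweight core, counting only the GFRC skins toward bending stiffness (the core is assumed to carry shear but negligible bending stress).

```python
# Per-unit-width section comparison (all dimensions in millimetres, illustrative).
b = 1000.0        # strip width considered
t_total = 20.0    # total GFRC thickness available
core = 60.0       # assumed core thickness of the sandwich

def I_solid(thickness, width=b):
    """Second moment of area of a solid rectangular section about its midplane."""
    return width * thickness**3 / 12.0

def I_sandwich(face, core_thickness, width=b):
    """Two GFRC faces of thickness `face` separated by a core, faces only
    (own-axis term plus parallel-axis term for each face)."""
    d = (core_thickness + face) / 2.0   # distance from midplane to each face centroid
    return 2.0 * (width * face**3 / 12.0 + width * face * d**2)

solid = I_solid(t_total)
sandwich = I_sandwich(t_total / 2.0, core)

print(f"I, solid 20 mm slab            : {solid:,.0f} mm^4")
print(f"I, 2 x 10 mm faces, 60 mm core : {sandwich:,.0f} mm^4")
print(f"stiffness ratio (same GFRC volume, core weight neglected): {sandwich / solid:.1f}x")
```

With these assumed numbers the sandwich section is several tens of times stiffer in bending for the same amount of GFRC, which is the high strength-to-weight effect the I-beam analogy is meant to convey.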
Wikipedia/Glass_fiber_reinforced_concrete
Carbon fibre reinforced carbon (CFRC), carbon–carbon (C/C), or reinforced carbon–carbon (RCC) is a composite material consisting of carbon fiber reinforcement in a matrix of graphite. It was developed for the reentry vehicles of intercontinental ballistic missiles, and is most widely known as the material for the nose cone and wing leading edges of the Space Shuttle orbiter. Carbon-carbon brake discs and brake pads have been the standard component of the brake systems of Formula One racing cars since the late 1970s; the first year carbon brakes were seen on a Formula One car was 1976. Carbon–carbon is well-suited to structural applications at high temperatures, or where thermal shock resistance and/or a low coefficient of thermal expansion is needed. While it is less brittle than many other ceramics, it lacks impact resistance; Space Shuttle Columbia was destroyed during atmospheric re-entry after one of its RCC panels was broken by the impact of a piece of polyurethane foam insulation that broke off from the External Tank. == Production == The material is made in three stages: First, material is laid up in its intended final shape, with carbon filament and/or cloth surrounded by an organic binder such as plastic or pitch. Often, coke or some other fine carbon aggregate is added to the binder mixture. Second, the lay-up is heated, so that pyrolysis transforms the binder to relatively pure carbon. The binder loses volume in the process, causing voids to form; the addition of aggregate reduces this problem, but does not eliminate it. Third, the voids are gradually filled by forcing a carbon-forming gas such as acetylene through the material at a high temperature, over the course of several days. This long heat treatment process also allows the carbon to form into larger graphite crystals, and is the major reason for the material's high cost. The gray "Reinforced Carbon–Carbon (RCC)" panels on the space shuttle's wing leading edges and nose cone cost NASA $100,000/sq ft to produce, although much of this cost was a result of the advanced geometry and research costs associated with the panels. This stage can also include manufacturing of the finished product. C/C is a hard material that can be made highly resistant to thermal expansion, temperature gradients, and thermal cycling, depending on how the fiber scaffold is laid up and the quality/density of the matrix filler. Carbon–carbon materials retain their properties above 2000 °C. This temperature may be exceeded with the help of protective coatings to prevent oxidation. The material has a density between 1.6 and 1.98 g/cm3. == Similar products == Carbon fibre-reinforced silicon carbide (C/SiC) is a development of pure carbon–carbon that uses silicon carbide with carbon fibre. It is slightly denser than pure carbon-carbon and thought to be more durable. It can be used in the brake disc and brake pads of high-performance road cars. The first car to use it was the Mercedes-Benz C215 Coupe F1 edition. It is standard on the Bugatti Veyron and many Bentleys, Ferraris, Lamborghinis, Porsches, and the Corvette ZR1 and Z06. They are also offered as an optional upgrade on certain high performance Audi cars, including the D3 S8, B7 RS4, C6 S6 and RS6, and the R8. The material is not used in Formula 1 because of its weight. Carbon brakes became widely available for commercial airplanes in the 1980s, having been first used on the Concorde supersonic transport. 
A related non-ceramic carbon composite with uses in high-tech racing automobiles is the carbotanium carbon–titanium composite used in the Zonda R and Huayra supercars made by the Italian motorcar company Pagani. == Footnotes == == References == == External links == Carbon brakes for Concorde
Wikipedia/Reinforced_Carbon-Carbon
Industrial engineering (IE) is concerned with the design, improvement and installation of integrated systems of people, materials, information, equipment and energy. It draws upon specialized knowledge and skill in the mathematical, physical, and social sciences together with the principles and methods of engineering analysis and design, to specify, predict, and evaluate the results to be obtained from such systems. Industrial engineering is a branch of engineering that focuses on optimizing complex processes, systems, and organizations by improving efficiency, productivity, and quality. It combines principles from engineering, mathematics, and business to design, analyze, and manage systems that involve people, materials, information, equipment, and energy. Industrial engineers aim to reduce waste, streamline operations, and enhance overall performance across various industries, including manufacturing, healthcare, logistics, and service sectors. Industrial engineers are employed in numerous industries, such as automobile manufacturing, aerospace, healthcare, forestry, finance, leisure, and education. Industrial engineering combines the physical and social sciences together with engineering principles to improve processes and systems. Several industrial engineering principles are followed to ensure the effective flow of systems, processes, and operations. Industrial engineers work to improve quality and productivity while simultaneously cutting waste. They use principles such as lean manufacturing, Six Sigma, information systems, process capability, and more. These principles allow the creation of new systems, processes or situations for the useful coordination of labor, materials and machines. Depending on the subspecialties involved, industrial engineering may also overlap with operations research, systems engineering, manufacturing engineering, production engineering, supply chain engineering, management science, engineering management, financial engineering, ergonomics or human factors engineering, safety engineering, logistics engineering, quality engineering or other related capabilities or fields. == History == === Origins === ==== Industrial engineering ==== The origins of industrial engineering are generally traced back to the Industrial Revolution with the rise of factory systems and mass production. The fundamental concepts began to emerge through ideas like Adam Smith's division of labor and the implementation of interchangeable parts by Eli Whitney. The term "industrial engineer" is credited to James Gunn, who proposed the need for such an engineer focused on production and cost analysis in 1901. However, Frederick Taylor is widely credited as the "father of industrial engineering" for his focus on scientific management, emphasizing time studies and standardized work methods, with his principles being published in 1911. Notably, Taylor established the first department dedicated to industrial engineering work, called "Elementary Rate Fixing," in 1885 with the goal of process improvement and productivity increase. Frank and Lillian Gilbreth further contributed significantly with their development of motion studies and therbligs for analyzing manual labor in the early 20th century. The early focus of the field was heavily on improving efficiency and productivity within manufacturing environments, driven in part by the call for cost reduction by engineering professionals, as highlighted by the first president of ASME in 1880. 
The formalization of the discipline continued with the founding of the American Institute of Industrial Engineering (AIIE) in 1948. In more recent years, industrial engineering has expanded beyond manufacturing to include areas like healthcare, project management, and supply chain optimization. ==== Systems Engineering ==== The origins of systems engineering as a recognized discipline can be traced back to World War II, where its principles began to emerge to manage the complexities of new war technologies. Although systems thinking predates this period, the analysis of the RAF Fighter Command C2 System during the Battle of Britain (even though the term wasn't yet invented) is considered an early example of high-caliber systems engineering. The first known public use of the term "systems engineering" occurred in March 1950 by Mervin J. Kelly of Bell Telephone Laboratories, who described it as crucial for defining new systems and guiding the application of research in creating new services. The first published paper specifically on the subject appeared in 1956 by Kenneth Schlager, who noted the growing importance of systems engineering due to increasing technological complexity and the formation of dedicated systems engineering groups. In 1957, E.W. Engstrom further elaborated on the concept, emphasizing the determination of objectives and the thorough consideration of all influencing factors as requirements for successful systems engineering. That same year also saw the publication of the first textbook on the subject, "Systems Engineering: An Introduction to the Design of Large-Scale Systems" by Goode and Mahol. Early practices of systems engineering were generally informal, transdisciplinary, and deeply rooted in the application domain. Following these initial mentions and publications, the field saw further development in the 1960s and 1970s, with figures like Arthur Hall defining traits of a systems engineer and viewing it as a comprehensive process. Despite its informal nature, systems engineering played a vital role in major achievements like the 1969 Apollo moon landing. A significant step towards formalization occurred in July 1969 with the introduction of the first formal systems engineering process, Military Standard (MIL-STD)-499: System Engineering Management, by the U.S. Air Force. This standard aimed to provide guidance for managing the systems engineering process and was later extended and updated. The need for formally trained systems engineers led to the formation of the National Council on Systems Engineering (NCOSE) in the late 1980s, which evolved into the International Council on Systems Engineering (INCOSE). INCOSE further contributed to the formalization of the field through publications like its journal "Systems Engineering" starting in 1994 and the first edition of the "Systems Engineering Handbook" in 1997. Additionally, organizations like NASA published their own systems engineering handbooks. In the 21st century, international standardization became a key aspect, with the International Standards Organization (ISO) publishing its first standard defining systems engineering application and management in 2005, further solidifying its standing as a formal discipline. === Pioneers === Frederick Taylor (1856–1915) is generally credited as the father of the industrial engineering discipline. He earned a degree in mechanical engineering from Stevens Institute of Technology and earned several patents from his inventions. 
Taylor is the author of many well-known works, including a book, The Principles of Scientific Management, which became a classic of management literature. It is considered one of the most influential management books of the 20th century. The book laid out three goals: to illustrate how the country loses through inefficiency, to show that the solution to inefficiency is systematic management, and to show that the best management rests on defined laws, rules, and principles that can be applied to all kinds of human activity. Taylor is remembered for developing the stopwatch time study. Taylor's findings set the foundation for industrial engineering. Frank Gilbreth (1868-1924), along with his wife Lillian Gilbreth (1878-1972), also had a significant influence on the development of Industrial Engineering. Their work is housed at Purdue University. In 1907, Frank Gilbreth met Frederick Taylor, and he learned a great deal from Taylor’s work. Frank and Lillian created 18 kinds of elemental motions that make up a set of fundamental motions required for a worker to perform a manual operation or task. They named the elements therbligs, which are used in the study of motion in the workplace. These developments were the beginning of a much broader field known as human factors or ergonomics. Through the efforts of Hugo Diemer, the first course on industrial engineering was offered as an elective at Pennsylvania State University in 1908. The first doctoral degree in industrial engineering was awarded in 1933 by Cornell University. Henry Gantt (1861-1919) immersed himself in the growing movement of Taylorism. Gantt is best known for creating a management tool, the Gantt chart. Gantt charts display dependencies pictorially, which allows project managers to keep everything organized. They are studied in colleges and used by project managers around the world. In addition to the creation of the Gantt chart, Gantt made many other significant contributions to scientific management. He cared about worker incentives and the impact businesses had on society. Today, the American Society of Mechanical Engineers awards a Gantt Medal for “distinguished achievement in management and for service to the community.” Henry Ford (1863-1947) further revolutionized factory production with the first installation of a moving assembly line. This innovation reduced the time it took to build a car from more than 12 hours to one hour and 33 minutes. This continuous-flow-inspired production method introduced a new way of automobile manufacturing. Ford is also known for transforming the workweek schedule. He cut the typical six-day workweek to five days and doubled the daily pay, thus creating the typical 40-hour workweek. Total quality management (TQM) emerged in the 1940s and gained momentum after World War II. The term was coined to describe the Japanese-style management approach to quality improvement. Total quality management can be described as a management system for a customer-focused organization that engages all employees in continual improvement of the organization. Joseph Juran is credited with being a pioneer of TQM by teaching the concepts of controlling quality and managerial breakthrough. The American Institute of Industrial Engineering was formed in 1948. The early work by F. W. 
Taylor and the Gilbreths was documented in papers presented to the American Society of Mechanical Engineers as interest grew from merely improving machine performance to the performance of the overall manufacturing process, most notably starting with the presentation by Henry R. Towne (1844–1924) of his paper The Engineer as An Economist (1886). === Modern practice === From 1960 to 1975, with the development of decision support systems in supply such as material requirements planning (MRP), one can emphasize the timing issue (inventory, production, compounding, transportation, etc.) of industrial organization. Israeli scientist Dr. Jacob Rubinovitz installed the CMMS program developed in IAI and Control-Data (Israel) in 1976 in South Africa and worldwide. In the 1970s, with the penetration of Japanese management theories such as Kaizen and Kanban, Japan realized very high levels of quality and productivity. These theories improved issues of quality, delivery time, and flexibility. Companies in the West realized the great impact of Kaizen and started implementing their own continuous improvement programs. W. Edwards Deming made significant contributions in the minimization of variance starting in the 1950s and continuing to the end of his life. In the 1990s, following the globalization of industry, the emphasis was on supply chain management and customer-oriented business process design. The theory of constraints, developed by Israeli scientist Eliyahu M. Goldratt (1985), is also a significant milestone in the field. In recent years (late 2000s to 2025), the traditional skills of industrial engineering, such as system optimization, process improvement, and efficiency management, remain essential. However, these foundational abilities are increasingly complemented by a deeper understanding of emerging technologies, such as artificial intelligence, machine learning, and IoT (Internet of Things). Proficiency in data analytics has become crucial, as it allows engineers to harness big data and derive insights that inform decision-making and innovation. Additionally, knowledge in fields such as cybersecurity, software development, and sustainable practices is becoming integral to the industrial engineering scope. As the field moves beyond 2025, professionals across various industries will need to stay abreast of these advancements. The ongoing evolution of industrial engineering will likely open new career pathways and reshape existing roles, and companies and individuals that adapt to these changes are better placed to benefit from them. == Etymology == While originally applied to manufacturing, the use of industrial in industrial engineering can be somewhat misleading, since it has grown to encompass any methodical or quantitative approach to optimizing how a process, system, or organization operates. In fact, the industrial in industrial engineering means the industry in its broadest sense. People have changed the term industrial to broader terms such as industrial and manufacturing engineering, industrial and systems engineering, industrial engineering and operations research, or industrial engineering and management. == Sub-disciplines == There are numerous sub-disciplines associated with industrial engineering, including those in the following non-exhaustive list. While some industrial engineers focus exclusively on one of these sub-disciplines, many deal with a combination of sub-disciplines. 
The first 14 of these sub-disciplines come from the IISE Body of Knowledge. These are considered knowledge areas, and many of them contain an overlap of content. Work design and measurement Operations research and analysis Engineering economic analysis Facilities engineering and energy management Quality engineering and reliability engineering Ergonomics and human factors in engineering and design Operations engineering and operations management Supply chain management Engineering management Safety Information engineering Design and manufacturing engineering Product design and product development Systems design and systems engineering Facilities engineering Logistics Systems engineering Healthcare engineering Project management Financial engineering == Education == Industrial engineering students take courses in work analysis and design, process design, human factors, facilities planning and layout, engineering economic analysis, production planning and control, systems engineering, computer utilization and simulation, operations research, quality control, automation, robotics, and productivity engineering. Various universities offer Industrial Engineering degrees across the world. The Edwardson School of Industrial Engineering at Purdue University, the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Institute of Technology, and the Department of Industrial and Operations Engineering at the University of Michigan are all named industrial engineering departments in the United States. Other universities include: Virginia Tech, Texas A&M, Northwestern University, University of Wisconsin–Madison, and the University of Southern California, and NC State University. It is important to attend accredited universities because ABET accreditation ensures that graduates have met the educational requirements necessary to enter the profession. This quality of education is recognized internationally and prepares students for successful careers. Internationally, industrial engineering degrees accredited within any member country of the Washington Accord enjoy equal accreditation within all other signatory countries, thus allowing engineers from one country to practice engineering professionally in any other. Universities offer degrees at the bachelor, master, and doctoral levels. === Undergraduate curriculum === In the United States, the undergraduate degree earned is either a bachelor of science (BS) or a bachelor of science and engineering (BSE) in industrial engineering (IE). In South Africa, the undergraduate degree is a bachelor of engineering (BEng). Variations of the title include Industrial & Operations Engineering (IOE), and Industrial & Systems Engineering (ISE or ISyE). The typical curriculum includes a broad math and science foundation spanning chemistry, physics, mechanics (i.e., statics, kinematics, and dynamics), materials science, computer science, electronics/circuits, engineering design, and the standard range of engineering mathematics (i.e., calculus, linear algebra, differential equations, statistics). For any engineering undergraduate program to be accredited, regardless of concentration, it must cover a largely similar span of such foundational work, which also overlaps heavily with the content tested on one or more engineering licensure exams in most jurisdictions. 
The coursework specific to IE entails specialized courses in areas such as optimization, applied probability, stochastic modeling, design of experiments, statistical process control, simulation, manufacturing engineering, ergonomics/safety engineering, and engineering economics. Industrial engineering elective courses typically cover more specialized topics in areas such as manufacturing, supply chains and logistics, analytics and machine learning, production systems, human factors and industrial design, and service systems. Certain business schools may offer programs with some overlapping relevance to IE, but the engineering programs are distinguished by a much more intensely quantitative focus, required engineering science electives, and the core math and science courses required of all engineering programs. === Graduate curriculum === The usual graduate degree earned is the master of science (MS), master of science and engineering (MSE) or master of engineering (MEng) in industrial engineering or various alternative related concentration titles. Typical MS curricula may cover: == See also == === Notable Associations and Professional Organizations ===
Institute of Industrial and Systems Engineers (IISE)
Human Factors and Ergonomics Society (HFES)
Society of Manufacturing Engineers (SME)
American Production and Inventory Control Society (APICS)
Institute for Operations Research and the Management Sciences (INFORMS)
American Society for Quality (ASQ)
The International Council on Systems Engineering (INCOSE)
=== Notable Universities === List of Universities with Industrial Engineering Programs === Notable Conferences ===
International Conference on Mechanical Industrial & Energy Engineering
IISE Annual Conference
INFORMS Annual Conference
=== Related topics === == Notes == == Further reading ==
Badiru, A. (Ed.) (2005). Handbook of industrial and systems engineering. CRC Press. ISBN 0-8493-2719-9.
Blanchard, B. S. and Fabrycky, W. (2005). Systems Engineering and Analysis (4th ed.). Prentice-Hall. ISBN 0-13-186977-9.
Salvendy, G. (Ed.) (2001). Handbook of industrial engineering: Technology and operations management. Wiley-Interscience. ISBN 0-471-33057-4.
Turner, W. et al. (1992). Introduction to industrial and systems engineering (3rd ed.). Prentice Hall. ISBN 0-13-481789-3.
Goldratt, Eliyahu M. and Cox, Jeff (1984). The Goal. North River Press; 2nd Rev. edition (1992). ISBN 0-88427-061-0; 20th Anniversary edition (2004). ISBN 0-88427-178-1.
Miller, Doug. Towards Sustainable Labour Costing in UK Fashion Retail (February 5, 2013). doi:10.2139/ssrn.2212100.
Malakooti, B. (2013). Operations and Production Systems with Multiple Objectives. John Wiley & Sons. ISBN 978-1-118-58537-5.
Systems Engineering Body of Knowledge (SEBoK)
Traditional Engineering
Master of Engineering Administration (MEA)
Kambhampati, Venkata Satya Surya Narayana Rao (2017). "Principles of Industrial Engineering". IIE Annual Conference Proceedings, Norcross (2017): 890–895.
IISE Body of Knowledge
== External links == Media related to Industrial engineering at Wikimedia Commons
Wikipedia/Industrial_engineering
Heat treating (or heat treatment) is a group of industrial, thermal and metalworking processes used to alter the physical, and sometimes chemical, properties of a material. The most common application is metallurgical. Heat treatments are also used in the manufacture of many other materials, such as glass. Heat treatment involves the use of heating or chilling, normally to extreme temperatures, to achieve a desired result such as hardening or softening of a material. Heat treatment techniques include annealing, case hardening, precipitation strengthening, tempering, carburizing, normalizing and quenching. Although the term heat treatment applies only to processes where the heating and cooling are done for the specific purpose of altering properties intentionally, heating and cooling often occur incidentally during other manufacturing processes such as hot forming or welding. == Physical processes == Metallic materials consist of a microstructure of small crystals called "grains" or crystallites. The nature of the grains (i.e. grain size and composition) is one of the most effective factors that can determine the overall mechanical behavior of the metal. Heat treatment provides an efficient way to manipulate the properties of the metal by controlling the rate of diffusion and the rate of cooling within the microstructure. Heat treating is often used to alter the mechanical properties of a metallic alloy, manipulating properties such as the hardness, strength, toughness, ductility, and elasticity. There are two mechanisms that may change an alloy's properties during heat treatment: the formation of martensite causes the crystals to deform intrinsically, and the diffusion mechanism causes changes in the homogeneity of the alloy. The crystal structure consists of atoms that are grouped in a very specific arrangement, called a lattice. In most elements, this order will rearrange itself, depending on conditions like temperature and pressure. This rearrangement, called allotropy or polymorphism, may occur several times, at many different temperatures for a particular metal. In alloys, this rearrangement may cause an element that will not normally dissolve into the base metal to suddenly become soluble, while a reversal of the allotropy will make the elements either partially or completely insoluble. When in the soluble state, the process of diffusion causes the atoms of the dissolved element to spread out, attempting to form a homogeneous distribution within the crystals of the base metal. If the alloy is cooled to an insoluble state, the atoms of the dissolved constituents (solutes) may migrate out of the solution. This type of diffusion, called precipitation, leads to nucleation, where the migrating atoms group together at the grain-boundaries. This forms a microstructure generally consisting of two or more distinct phases. For instance, steel that has been heated above the austenitizing temperature (red to orange-hot, or around 1,500 °F (820 °C) to 1,600 °F (870 °C) depending on carbon content), and then cooled slowly, forms a laminated structure composed of alternating layers of ferrite and cementite, becoming soft pearlite. After heating the steel to the austenite phase and then quenching it in water, the microstructure will be in the martensitic phase, because the steel transforms from austenite to martensite on rapid cooling. Some pearlite or ferrite may be present if the quench did not rapidly cool off all the steel.
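The practical leverage here is that diffusion slows exponentially as temperature falls, so a fast enough quench can simply outrun it. As a rough, non-authoritative illustration of that temperature dependence, the short sketch below evaluates the standard Arrhenius relation D = D0·exp(−Q/(R·T)); the D0 and Q values are commonly quoted textbook approximations for carbon diffusing in austenite, assumed here purely for illustration and not taken from this article.

```python
# Minimal sketch (not from the article): the Arrhenius relation D = D0 * exp(-Q / (R*T))
# illustrates why diffusion-driven transformations (e.g. pearlite formation) need time at
# temperature, while a fast quench "freezes" solutes in place. The constants below are
# commonly quoted approximations for carbon diffusing in austenite and are assumptions,
# not values taken from the text.
import math

R = 8.314          # gas constant, J/(mol*K)
D0 = 2.3e-5        # pre-exponential factor, m^2/s (assumed illustrative value)
Q = 148_000        # activation energy, J/mol (assumed illustrative value)

def diffusion_coefficient(temp_c: float) -> float:
    """Approximate diffusion coefficient of carbon in austenite at temp_c (deg C)."""
    temp_k = temp_c + 273.15
    return D0 * math.exp(-Q / (R * temp_k))

for t in (900, 727, 400):
    d = diffusion_coefficient(t)
    # Rough diffusion distance over 10 s, x ~ sqrt(D*t), just to show the scale.
    x_um = math.sqrt(d * 10) * 1e6
    print(f"{t:4d} deg C: D ~ {d:.2e} m^2/s, ~{x_um:.2f} um in 10 s")
```

The orders-of-magnitude drop in diffusion distance between the austenitizing range and a few hundred degrees Celsius is the quantitative reason slow cooling allows pearlite to form while a water quench traps the carbon in place and yields martensite.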
Unlike iron-based alloys, most heat-treatable alloys do not experience a ferrite transformation. In these alloys, the nucleation at the grain-boundaries often reinforces the structure of the crystal matrix. These metals harden by precipitation. Typically a slow process, depending on temperature, this is often referred to as "age hardening". Many metals and non-metals exhibit a martensite transformation when cooled quickly (with external media like oil, polymer, water, etc.). When a metal is cooled very quickly, the insoluble atoms may not be able to migrate out of the solution in time. This is called a "diffusionless transformation." When the crystal matrix changes to its low-temperature arrangement, the atoms of the solute become trapped within the lattice. The trapped atoms prevent the crystal matrix from completely changing into its low-temperature allotrope, creating shearing stresses within the lattice. When some alloys are cooled quickly, such as steel, the martensite transformation hardens the metal, while in others, like aluminum, the alloy becomes softer. == Effects of composition == The specific composition of an alloy system will usually have a great effect on the results of heat treating. If the percentage of each constituent is just right, the alloy will form a single, continuous microstructure upon cooling. Such a mixture is said to be eutectoid. However, if the percentage of the solutes varies from the eutectoid mixture, two or more different microstructures will usually form simultaneously. A hypoeutectoid solution contains less of the solute than the eutectoid mix, while a hypereutectoid solution contains more. === Eutectoid alloys === A eutectoid (eutectic-like) alloy is similar in behavior to a eutectic alloy. A eutectic alloy is characterized by having a single melting point. This melting point is lower than that of any of the constituents, and no change in the mixture will lower the melting point any further. When a molten eutectic alloy is cooled, all of the constituents will crystallize into their respective phases at the same temperature. A eutectoid alloy is similar, but the phase change occurs, not from a liquid, but from a solid solution. Upon cooling a eutectoid alloy from the solution temperature, the constituents will separate into different crystal phases, forming a single microstructure. A eutectoid steel, for example, contains 0.77% carbon. Upon cooling slowly, the solution of iron and carbon (a single phase called austenite) will separate into platelets of the phases ferrite and cementite. This forms a layered microstructure called pearlite. Since pearlite is harder than iron, the degree of softness achievable is typically limited to that produced by the pearlite. Similarly, the hardenability is limited by the continuous martensitic microstructure formed when cooled very fast. === Hypoeutectoid alloys === A hypoeutectic alloy has two separate melting points. Both are above the eutectic melting point for the system but are below the melting points of any constituent forming the system. Between these two melting points, the alloy will exist as part solid and part liquid. The constituent with the higher melting point will solidify first. When completely solidified, a hypoeutectic alloy will often be in a solid solution. Similarly, a hypoeutectoid alloy has two critical temperatures, called "arrests". Between these two temperatures, the alloy will exist partly as the solution and partly as a separate crystallizing phase, called the "proeutectoid phase".
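The proportions in which these constituents appear follow directly from the compositions involved, and a simple lever-rule estimate makes the arithmetic concrete. The sketch below is illustrative only: the phase-boundary values it assumes (about 0.022 wt% carbon in ferrite and 6.70 wt% carbon in cementite near the eutectoid temperature) are standard textbook approximations rather than figures given in this article.

```python
# Minimal lever-rule sketch (assumed textbook phase-boundary compositions, not values
# from this article): estimates the weight fractions of proeutectoid phase and pearlite
# that form when a plain-carbon steel is cooled slowly through the eutectoid reaction.
C_FERRITE = 0.022    # max wt% C in ferrite near the eutectoid temperature (approx.)
C_EUTECTOID = 0.77   # eutectoid composition, wt% C
C_CEMENTITE = 6.70   # wt% C in cementite (Fe3C)

def slow_cooled_fractions(wt_pct_c: float) -> dict:
    """Rough phase/constituent fractions for a slowly cooled plain-carbon steel."""
    if not C_FERRITE < wt_pct_c < C_CEMENTITE:
        raise ValueError("composition outside the range this sketch covers")
    if wt_pct_c <= C_EUTECTOID:   # hypoeutectoid: proeutectoid ferrite + pearlite
        pearlite = (wt_pct_c - C_FERRITE) / (C_EUTECTOID - C_FERRITE)
        return {"proeutectoid ferrite": 1 - pearlite, "pearlite": pearlite}
    else:                         # hypereutectoid: proeutectoid cementite + pearlite
        pearlite = (C_CEMENTITE - wt_pct_c) / (C_CEMENTITE - C_EUTECTOID)
        return {"proeutectoid cementite": 1 - pearlite, "pearlite": pearlite}

# A eutectoid steel (0.77 wt% C) is essentially all pearlite; within the pearlite itself
# the ferrite:cementite split is roughly 0.89:0.11 by the same lever-rule arithmetic.
print(slow_cooled_fractions(0.40))   # typical hypoeutectoid steel
print(slow_cooled_fractions(0.95))   # hypereutectoid steel
```

Running the sketch for a 0.40 wt% carbon steel gives roughly half proeutectoid ferrite and half pearlite, the classic textbook result for a slowly cooled medium-carbon steel.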
These two arrest temperatures are called the upper (A3) and lower (A1) transformation temperatures. As the solution cools from the upper transformation temperature toward an insoluble state, the excess base metal will often be forced to "crystallize out", becoming the proeutectoid phase. This will occur until the remaining concentration of solutes reaches the eutectoid level, which will then crystallize as a separate microstructure. For example, a hypoeutectoid steel contains less than 0.77% carbon. Upon cooling a hypoeutectoid steel from the austenite transformation temperature, small islands of proeutectoid ferrite will form. These will continue to grow and the carbon will recede until the eutectoid concentration in the rest of the steel is reached. This eutectoid mixture will then crystallize as a microstructure of pearlite. Since ferrite is softer than pearlite, the two microstructures combine to increase the ductility of the alloy. Consequently, the hardenability of the alloy is lowered. === Hypereutectoid alloys === A hypereutectic alloy also has different melting points. However, between these points, it is the constituent with the higher melting point that will be solid. Similarly, a hypereutectoid alloy has two critical temperatures. When cooling a hypereutectoid alloy from the upper transformation temperature, it will usually be the excess solutes that crystallize out first, forming the proeutectoid phase. This continues until the concentration in the remaining alloy becomes eutectoid, which then crystallizes into a separate microstructure. A hypereutectoid steel contains more than 0.77% carbon. When slowly cooling hypereutectoid steel, the cementite will begin to crystallize first. When the remaining steel becomes eutectoid in composition, it will crystallize into pearlite. Since cementite is much harder than pearlite, the alloy has greater hardenability at a cost in ductility. == Effects of time and temperature == Proper heat treating requires precise control over temperature, time held at a certain temperature and cooling rate. With the exception of stress-relieving, tempering, and aging, most heat treatments begin by heating an alloy beyond a certain transformation, or arrest (A), temperature. This temperature is referred to as an "arrest" because at the A temperature the metal experiences a period of hysteresis. At this point, all of the heat energy is used to cause the crystal change, so the temperature stops rising for a short time (arrests) and then continues climbing once the change is complete. Therefore, the alloy must be heated above the critical temperature for a transformation to occur. The alloy will usually be held at this temperature long enough for the heat to completely penetrate the alloy, thereby bringing it into a complete solid solution. Iron, for example, has four critical temperatures, depending on carbon content. Pure iron in its alpha (room temperature) state changes to nonmagnetic gamma-iron at its A2 temperature, and weldable delta-iron at its A4 temperature. However, as carbon is added, becoming steel, the A2 temperature splits into the A3 temperature, also called the austenitizing temperature (all phases become austenite, a solution of gamma iron and carbon) and its A1 temperature (austenite changes into pearlite upon cooling). Between these upper and lower temperatures, the proeutectoid phase forms upon cooling.
Because a smaller grain size usually enhances mechanical properties, such as toughness, shear strength and tensile strength, these metals are often heated to a temperature that is just above the upper critical temperature, in order to prevent the grains of solution from growing too large. For instance, when steel is heated above the upper critical temperature, small grains of austenite form. These grow larger as the temperature is increased. When cooled very quickly, during a martensite transformation, the austenite grain size directly affects the martensitic grain size. Larger grains have large grain boundaries, which serve as weak spots in the structure. The grain size is usually controlled to reduce the probability of breakage. The diffusion transformation is very time-dependent. Cooling a metal will usually suppress the precipitation to a much lower temperature. Austenite, for example, usually only exists above the upper critical temperature. However, if the austenite is cooled quickly enough, the transformation may be suppressed for hundreds of degrees below the lower critical temperature. Such austenite is highly unstable and, if given enough time, will precipitate into various microstructures of ferrite and cementite. The cooling rate can be used to control the rate of grain growth or can even be used to produce partially martensitic microstructures. However, the martensite transformation is time-independent. If the alloy is cooled to the martensite transformation (Ms) temperature before other microstructures can fully form, the transformation will usually occur at just under the speed of sound. When austenite is cooled but kept above the martensite start temperature Ms so that a martensite transformation does not occur, the austenite grain size will have an effect on the rate of nucleation, but it is generally temperature and the rate of cooling that control the grain size and microstructure. When austenite is cooled extremely slowly, it will form large ferrite crystals filled with spherical inclusions of cementite. This microstructure is referred to as "spheroidite". If cooled a little faster, then coarse pearlite will form. Even faster, and fine pearlite will form. If cooled even faster, bainite will form, with the completeness of the bainite transformation depending on the time held above the martensite start temperature Ms. Similarly, these microstructures will also form if cooled to a specific temperature and then held there for a certain time. Most non-ferrous alloys are also heated in order to form a solution. Most often, these are then cooled very quickly to produce a martensite transformation, putting the solution into a supersaturated state. The alloy, being in a much softer state, may then be cold worked. This causes work hardening that increases the strength and hardness of the alloy. Moreover, the defects caused by plastic deformation tend to speed up precipitation, increasing the hardness beyond what is normal for the alloy. Even if not cold worked, the solutes in these alloys will usually precipitate, although the process may take much longer. Sometimes these metals are then heated to a temperature that is below the lower critical (A1) temperature, preventing recrystallization, in order to speed up the precipitation. == Types of heat treatment == Complex heat treating schedules, or "cycles", are often devised by metallurgists to optimize an alloy's mechanical properties.
In the aerospace industry, a superalloy may undergo five or more different heat treating operations to develop the desired properties. This can lead to quality problems depending on the accuracy of the furnace's temperature controls and timer. These operations can usually be divided into several basic techniques. === Annealing === Annealing consists of heating a metal to a specific temperature and then cooling at a rate that will produce a refined microstructure, either fully or partially separating the constituents. The rate of cooling is generally slow. Annealing is most often used to soften a metal for cold working, to improve machinability, or to enhance properties like electrical conductivity. In ferrous alloys, annealing is usually accomplished by heating the metal beyond the upper critical temperature and then cooling very slowly, resulting in the formation of pearlite. In both pure metals and many alloys that cannot be heat treated, annealing is used to remove the hardness caused by cold working. The metal is heated to a temperature where recrystallization can occur, thereby repairing the defects caused by plastic deformation. In these metals, the rate of cooling will usually have little effect. Most non-ferrous alloys that are heat-treatable are also annealed to relieve the hardness of cold working. These may be slowly cooled to allow full precipitation of the constituents and produce a refined microstructure. Ferrous alloys are usually either "full annealed" or "process annealed". Full annealing requires very slow cooling rates, in order to form coarse pearlite. In process annealing, the cooling rate may be faster, up to and including that used in normalizing. The main goal of process annealing is to produce a uniform microstructure. Non-ferrous alloys are often subjected to a variety of annealing techniques, including "recrystallization annealing", "partial annealing", "full annealing", and "final annealing". Not all annealing techniques involve recrystallization; stress relieving, for example, does not. === Normalizing === Normalizing is a technique used to provide uniformity in grain size and composition (equiaxed crystals) throughout an alloy. The term is often used for ferrous alloys that have been austenitized and then cooled in the open air. Normalizing not only produces pearlite but also martensite and sometimes bainite, which gives harder and stronger steel but with less ductility for the same composition than full annealing. In the normalizing process the steel is heated to about 40 degrees Celsius above its upper critical temperature limit, held at this temperature for some time, and then cooled in air. === Stress relieving === Stress-relieving is a technique to remove or reduce the internal stresses created in a metal. These stresses may be caused in a number of ways, ranging from cold working to non-uniform cooling. Stress-relieving is usually accomplished by heating a metal below the lower critical temperature and then cooling uniformly. Stress relieving is commonly used on items like air tanks, boilers and other pressure vessels, to remove a portion of the stresses created during the welding process. === Aging === Some metals are classified as precipitation hardening metals. When a precipitation hardening alloy is quenched, its alloying elements will be trapped in solution, resulting in a soft metal. Aging a "solutionized" metal will allow the alloying elements to diffuse through the microstructure and form intermetallic particles.
These intermetallic particles will nucleate and fall out of the solution and act as a reinforcing phase, thereby increasing the strength of the alloy. Alloys may age "naturally", meaning that the precipitates form at room temperature, or they may age "artificially", when precipitates only form at elevated temperatures. In some applications, naturally aging alloys may be stored in a freezer to prevent hardening until after further operations - assembly of rivets, for example, may be easier with a softer part. Examples of precipitation hardening alloys include 2000 series, 6000 series, and 7000 series aluminium alloys, as well as some superalloys and some stainless steels. Steels that harden by aging are typically referred to as maraging steels, from a combination of the terms "martensite" and "aging". === Quenching === Quenching is a process of cooling a metal at a rapid rate. This is most often done to produce a martensite transformation. In ferrous alloys, this will often produce a harder metal, while non-ferrous alloys will usually become softer than normal. To harden by quenching, a metal (usually steel or cast iron) must be heated above the upper critical temperature (for steel, above roughly 815 to 900 degrees Celsius) and then quickly cooled. Depending on the alloy and other considerations (such as concern for maximum hardness vs. cracking and distortion), cooling may be done with forced air or other gases (such as nitrogen). Liquids such as oil, water, a polymer dissolved in water, or a brine may be used, due to their better thermal conductivity. Upon being rapidly cooled, a portion of austenite (dependent on alloy composition) will transform to martensite, a hard, brittle crystalline structure. The quenched hardness of a metal depends on its chemical composition and quenching method. Cooling speeds, from fastest to slowest, are: brine, polymer (i.e. mixtures of water and glycol polymers), fresh water, oil, and forced air. However, quenching certain steel too fast can result in cracking, which is why high-tensile steels such as AISI 4140 should be quenched in oil, tool steels such as ISO 1.2767 or H13 hot work tool steel should be quenched in forced air, and low alloy or medium-tensile steels such as XK1320 or AISI 1040 should be quenched in brine. Some beta titanium-based alloys have also shown similar trends of increased strength through rapid cooling. However, most non-ferrous metals, like alloys of copper, aluminum, or nickel, and some high alloy steels such as austenitic stainless steel (304, 316), produce an opposite effect when these are quenched: they soften. Austenitic stainless steels must be quenched to become fully corrosion resistant, as they work-harden significantly. === Tempering === Untempered martensitic steel, while very hard, is too brittle to be useful for most applications. A method for alleviating this problem is called tempering. Most applications require that quenched parts be tempered. Tempering consists of heating steel below the lower critical temperature (often from 400 °F to 1,105 °F or 205 °C to 595 °C, depending on the desired results) to impart some toughness. Higher tempering temperatures (up to about 1,300 °F or 700 °C, depending on the alloy and application) are sometimes used to impart further ductility, although some yield strength is lost. Tempering may also be performed on normalized steels.
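The quench-media ranking above (brine fastest, then polymer, water, oil, and forced air) can be made concrete with a lumped-capacitance estimate based on Newton's law of cooling. The sketch below is only a rough illustration: the heat-transfer coefficients and part properties are assumed values chosen for plausibility, not data from this article, and a real quench also involves vapor blankets and boiling regimes that this simple model ignores.

```python
# Minimal lumped-capacitance sketch of quench severity. Newton's law of cooling,
# T(t) = T_q + (T_0 - T_q) * exp(-h*A*t / (m*c)), with rough, assumed heat-transfer
# coefficients for each medium (illustrative guesses only).
import math

# Assumed convective heat-transfer coefficients, W/(m^2*K) -- not from the article.
H_MEDIA = {"brine": 6000.0, "polymer": 4500.0, "water": 3000.0,
           "oil": 800.0, "forced air": 150.0}

def temperature_after(medium: str, seconds: float,
                      t0_c: float = 850.0, t_quench_c: float = 25.0) -> float:
    """Temperature of a small steel part after `seconds` in the given medium."""
    # Rough properties of a 25 mm steel cube (assumed): density 7850 kg/m^3,
    # specific heat 490 J/(kg*K).
    side = 0.025
    volume, area = side**3, 6 * side**2
    mass, c_p = 7850 * volume, 490.0
    h = H_MEDIA[medium]
    tau = mass * c_p / (h * area)          # thermal time constant, s
    return t_quench_c + (t0_c - t_quench_c) * math.exp(-seconds / tau)

for medium in H_MEDIA:
    print(f"{medium:10s}: ~{temperature_after(medium, 10):.0f} deg C after 10 s")
```

Even with these crude assumptions, the printed temperatures after ten seconds differ by hundreds of degrees, which is the practical reason the choice of medium matters so much for whether martensite forms or cracking and distortion become a risk.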
Other methods of tempering consist of quenching to a specific temperature, which is above the martensite start temperature, and then holding it there until pure bainite can form or internal stresses can be relieved. These include austempering and martempering. ==== Tempering colors ==== Steel that has been freshly ground or polished will form oxide layers when heated. At a very specific temperature, the iron oxide will form a layer with a very specific thickness, causing thin-film interference. This causes colors to appear on the surface of the steel. As the temperature is increased, the iron oxide layer grows in thickness, changing the color. These colors, called tempering colors, have been used for centuries to gauge the temperature of the metal.
350 °F (176 °C), light yellowish
400 °F (204 °C), light-straw
440 °F (226 °C), dark-straw
500 °F (260 °C), brown
540 °F (282 °C), purple
590 °F (310 °C), deep blue
640 °F (337 °C), light blue
The tempering colors can be used to judge the final properties of the tempered steel. Very hard tools are often tempered in the light- to dark-straw range, whereas springs are often tempered to blue. However, the final hardness of the tempered steel will vary, depending on the composition of the steel. Higher-carbon tool steel will remain much harder after tempering than spring steel (of slightly less carbon) when tempered at the same temperature. The oxide film will also increase in thickness over time. Therefore, steel that has been held at 400 °F for a very long time may turn brown or purple, even though the temperature never exceeded that needed to produce a light straw color. Other factors affecting the final outcome are oil films on the surface and the type of heat source used. === Selective heat treating === Many heat treating methods have been developed to alter the properties of only a portion of an object. These tend to consist of cooling different areas of an alloy at different rates, quickly heating a localized area and then quenching, thermochemical diffusion, or tempering different areas of an object at different temperatures, such as in differential tempering. ==== Differential hardening ==== Some techniques allow different areas of a single object to receive different heat treatments. This is called differential hardening. It is common in high quality knives and swords. The Chinese jian is one of the earliest known examples of this, and the Japanese katana may be the most widely known. The Nepalese khukuri is another example. This technique uses an insulating layer, like layers of clay, to cover the areas that are to remain soft. The areas to be hardened are left exposed, allowing only certain parts of the steel to fully harden when quenched. ==== Flame hardening ==== Flame hardening is used to harden only a portion of the metal. Unlike differential hardening, where the entire piece is heated and then cooled at different rates, in flame hardening, only a portion of the metal is heated before quenching. This is usually easier than differential hardening, but often produces an extremely brittle zone between the heated metal and the unheated metal, as cooling at the edge of this heat-affected zone is extremely rapid. ==== Induction hardening ==== Induction hardening is a surface hardening technique in which the surface of the metal is heated very quickly, using a non-contact method of induction heating. The alloy is then quenched, producing a martensite transformation at the surface while leaving the underlying metal unchanged.
This creates a very hard, wear-resistant surface while maintaining the proper toughness in the majority of the object. Crankshaft journals are a good example of an induction hardened surface. ==== Case hardening ==== Case hardening is a thermochemical diffusion process in which an alloying element, most commonly carbon or nitrogen, diffuses into the surface of a monolithic metal. The resulting interstitial solid solution is harder than the base material, which improves wear resistance without sacrificing toughness. Laser surface engineering is a surface treatment with high versatility, selectivity and novel properties. Since the cooling rate is very high in laser treatment, even metastable metallic glass can be obtained by this method. === Cold and cryogenic treating === Although quenching steel causes the austenite to transform into martensite, all of the austenite usually does not transform. Some austenite crystals will remain unchanged even after quenching below the martensite finish (Mf) temperature. Further transformation of the austenite into martensite can be induced by slowly cooling the metal to extremely low temperatures. Cold treating generally consists of cooling the steel to around -115 °F (-81 °C), but does not eliminate all of the austenite. Cryogenic treating usually consists of cooling to much lower temperatures, often in the range of -315 °F (-192 °C), to transform most of the austenite into martensite. Cold and cryogenic treatments are typically done immediately after quenching, before any tempering. They increase the hardness and wear resistance and reduce the internal stresses in the metal but, because they are really an extension of the quenching process, they may increase the chance of cracking during the procedure. The process is often used for tools, bearings, or other items that require good wear resistance. However, it is usually only effective in high-carbon or high-alloy steels in which more than 10% austenite is retained after quenching. === Decarburization === The heating of steel is sometimes used as a method to alter the carbon content. When steel is heated in an oxidizing environment, the oxygen combines with the iron to form an iron-oxide layer, which protects the steel from decarburization. When the steel turns to austenite, however, the oxygen combines with iron to form a slag, which provides no protection from decarburization. The formation of slag and scale actually increases decarburization, because the iron oxide keeps oxygen in contact with the decarburization zone even after the steel is moved into an oxygen-free environment, such as the coals of a forge. Thus, the carbon atoms begin combining with the surrounding scale and slag to form both carbon monoxide and carbon dioxide, which are released into the air. Steel contains a relatively small percentage of carbon, which can migrate freely within the gamma iron. When austenitized steel is exposed to air for long periods of time, the carbon content in the steel can be lowered. This is the opposite of what happens when steel is heated in a reducing environment, in which carbon slowly diffuses further into the metal. In an oxidizing environment, the carbon can readily diffuse outwardly, so austenitized steel is very susceptible to decarburization. This is often used for cast steel, where a high carbon content is needed for casting, but a lower carbon content is desired in the finished product. It is often used on cast irons to produce malleable cast iron, in a process called "white tempering".
This tendency to decarburize is often a problem in other operations, such as blacksmithing, where it becomes more desirable to austenitize the steel for the shortest amount of time possible to prevent too much decarburization. == Specification of heat treatment == Usually the end condition is specified instead of the process used in heat treatment. === Case hardening === Case hardening is specified by "hardness" and "case depth". The case depth can be specified in two ways: total case depth or effective case depth. The total case depth is the true depth of the case. For most alloys, the effective case depth is the depth of the case that has a hardness equivalent of HRC50; however, some alloys specify a different hardness (40-60 HRC) at effective case depth; this is checked on a Tukon microhardness tester. This value can be roughly approximated as 65% of the total case depth; however, the chemical composition and hardenability can affect this approximation. If neither type of case depth is specified, the total case depth is assumed. For case hardened parts the specification should have a tolerance of at least ±0.005 in (0.13 mm). If the part is to be ground after heat treatment, the case depth is assumed to be after grinding. The Rockwell hardness scale used for the specification depends on the total case depth. Usually, hardness is measured on the Rockwell "C" scale, but the load used on the scale will penetrate through the case if the case is less than 0.030 in (0.76 mm). Using Rockwell "C" for a thinner case will result in a false reading. For cases that are less than 0.015 in (0.38 mm) thick a Rockwell scale cannot reliably be used, so file hard is specified instead. File hard is approximately equivalent to 58 HRC. When specifying the hardness either a range should be given or the minimum hardness specified. If a range is specified, at least 5 points should be given. === Through hardening === Only hardness is listed for through hardening. It is usually in the form of HRC with at least a five-point range. === Annealing === The hardness for an annealing process is usually listed on the HRB scale as a maximum value. It is a process to refine grain size, improve strength, remove residual stress, and affect the electromagnetic properties. == Types of furnaces == Furnaces used for heat treatment can be split into two broad categories: batch furnaces and continuous furnaces. Batch furnaces are usually manually loaded and unloaded, whereas continuous furnaces have an automatic conveying system to provide a constant load into the furnace chamber. === Batch furnaces === Batch systems usually consist of an insulated chamber with a steel shell, a heating system, and an access door to the chamber. === Box-type furnace === Many basic box-type furnaces have been upgraded to a semi-continuous batch furnace with the addition of integrated quench tanks and slow-cool chambers. These upgraded furnaces are very commonly used pieces of equipment for heat treating. === Car-type furnace === Also known as a "bogie hearth", the car furnace is an extremely large batch furnace. The floor is constructed as an insulated movable car that is moved in and out of the furnace for loading and unloading. The car is usually sealed using sand seals or solid seals when in position. Due to the difficulty in getting a sufficient seal, car furnaces are usually used for non-atmosphere processes.
=== Elevator-type furnace === Similar in type to the car furnace, except that the car and hearth are rolled into position beneath the furnace and raised by means of a motor-driven mechanism, elevator furnaces can handle large heavy loads and often eliminate the need for any external cranes and transfer mechanisms. === Bell-type furnace === Bell furnaces have removable covers called bells, which are lowered over the load and hearth by crane. An inner bell is placed over the hearth and sealed to supply a protective atmosphere. An outer bell is lowered to provide the heat supply. === Pit furnaces === Furnaces that are constructed in a pit and extend to floor level or slightly above are called pit furnaces. Workpieces can be suspended from fixtures, held in baskets, or placed on bases in the furnace. Pit furnaces are suited to heating long tubes, shafts, and rods by holding them in a vertical position. This manner of loading provides minimal distortion. === Salt bath furnaces === Salt baths are used in a wide variety of heat treatment processes including neutral hardening, liquid carburising, liquid nitriding, austempering, martempering and tempering. Parts are loaded into a pot of molten salt where they are heated by conduction, giving a very readily available source of heat. In a salt bath, the core temperature of a part rises at approximately the same rate as its surface. Salt baths utilize a variety of salts for heat treatment, with cyanide salts being the most extensively used. Concerns about associated occupational health and safety, and about expensive waste management and disposal due to their environmental effects, have made the use of salt baths less attractive in recent years. Consequently, many salt baths are being replaced by more environmentally friendly fluidized bed furnaces. === Fluidised bed furnaces === A fluidised bed consists of a cylindrical retort made from high-temperature alloy, filled with sand-like aluminum oxide particulate. Gas (air or nitrogen) is bubbled through the oxide and the sand moves in such a way that it exhibits fluid-like behavior, hence the term fluidized. The solid-solid contact of the oxide gives very high thermal conductivity and excellent temperature uniformity throughout the furnace, comparable to those seen in a salt bath. == See also ==
Carbon steel
Carbonizing
Diffusion hardening
Induction hardening
Retrogression heat treatment
Nitriding
== References == == Further reading ==
International Heat Treatment Magazine (in English)
Reed-Hill, Robert (1994). Principles of Physical Metallurgy (3rd ed.). Boston: PWS Publishing.
Wikipedia/Heat_treatment
A doctorate (from Latin doctor, meaning "teacher") or doctoral degree is a postgraduate academic degree awarded by universities and some other educational institutions, derived from the ancient formalism licentia docendi ("licence to teach"). In most countries, a research degree qualifies the holder to teach at university level in the degree's field or work in a specific profession. There are a number of doctoral degrees; the most common is the Doctor of Philosophy (PhD), awarded in many different fields, ranging from the humanities to scientific disciplines. Many universities also award honorary doctorates to individuals deemed worthy of special recognition, either for scholarly work or other contributions to the university or society. == History == === Middle Ages === The term doctor derives from Latin, meaning "teacher" or "instructor". The doctorate (Latin: doctoratus) appeared in medieval Europe as a license to teach Latin (licentia docendi) at a university. Its roots can be traced to the early church, in which the term doctor referred to the Apostles, Church Fathers, and other Christian authorities who taught and interpreted the Bible. The right to grant a licentia docendi (i.e. the doctorate) was originally reserved to the Catholic Church, which required the applicant to pass a test, take an oath of allegiance, and pay a fee. The Third Council of the Lateran of 1179 guaranteed access, at that time essentially free of charge, to all able applicants. Applicants were tested for aptitude. This right remained a bone of contention between the church authorities and the universities, which were slowly distancing themselves from the Church. In 1213 the right was granted by the pope to the University of Paris, where it became a universal license to teach (licentia ubique docendi). However, while the licentia continued to hold a higher prestige than the bachelor's degree (baccalaureus), the latter was ultimately reduced to an intermediate step to the master's degree (magister) and doctorate, both of which now became the accepted teaching qualifications. According to Keith Allan Noble (1994), the first doctoral degree was awarded by the University of Paris around 1150. George Makdisi theorizes that the ijazah issued in early Islamic madrasahs was the origin of the doctorate later issued in medieval European universities. Alfred Guillaume and Syed Farid al-Attas agree that there is a resemblance between the ijazah and the licentia docendi. However, Toby Huff and others reject Makdisi's theory. Devin J. Stewart notes a difference in the granting authority (an individual professor for the ijazah and a corporate entity in the case of the university doctorate). === 17th and 18th centuries === The doctorate of philosophy developed in Germany in the 17th century (likely c. 1652). The term "philosophy" does not refer here to the field or academic discipline of philosophy; it is used in a broader sense under its original Greek meaning of "love of wisdom". In most of Europe, all fields (history, philosophy, social sciences, mathematics, and natural philosophy/natural sciences) were traditionally known as philosophy, and in Germany and elsewhere in Europe the basic faculty of liberal arts was known as the "faculty of philosophy". The Doctorate of Philosophy adheres to this historic convention, even though most degrees are not for the study of philosophy.
Chris Park explains that it was not until formal education and degree programs were standardized in the early 19th century that the Doctorate of Philosophy was reintroduced in Germany as a research degree, abbreviated as Dr. phil. (similar to Ph.D. in Anglo-American countries). Germany, however, went on to differentiate in more detail between doctorates in philosophy and doctorates in the natural sciences, abbreviated as Dr. rer. nat., and also doctorates in the social/political sciences, abbreviated as Dr. rer. pol., similar to the other traditional doctorates in medicine (Dr. med.) and law (Dr. jur.). University doctoral training was a form of apprenticeship to a guild. The traditional term of study before new teachers were admitted to the guild of "Masters of Arts" was seven years, matching the apprenticeship term for other occupations. Originally the terms "master" and "doctor" were synonymous, but over time the doctorate came to be regarded as a higher qualification than the master's degree. University degrees, including doctorates, were originally restricted to men. The first women to be granted doctorates were Juliana Morell in 1608 at Lyons or possibly Avignon (she "defended theses" in 1606 or 1607, although claims that she received a doctorate in canon law in 1608 have been discredited), Elena Cornaro Piscopia in 1678 at the University of Padua, Laura Bassi in 1732 at Bologna University, Dorothea Erxleben in 1754 at Halle University and María Isidra de Guzmán y de la Cerda in 1785 at Complutense University, Madrid. === Modern times === The use and meaning of the doctorate have changed over time and are subject to regional variations. For instance, until the early 20th century, few academic staff or professors in English-speaking universities held doctorates, except for very senior scholars and those in holy orders. After that time, the German practice of requiring lecturers to have completed a research doctorate spread. Universities' shift to research-oriented education (based upon the scientific method, inquiry, and observation) increased the doctorate's importance. Today, a research doctorate (PhD) or its equivalent (as defined in the US by the NSF) is generally a prerequisite for an academic career. However, many recipients do not work in academia. Professional doctorates developed in the United States from the 19th century onward. The first professional doctorate offered in the United States was the MD at King's College (now Columbia University) after the medical school's founding in 1767. However, this was not a professional doctorate in the modern American sense. It was awarded for further study after the qualifying Bachelor of Medicine (MB) rather than as a qualifying degree. The MD became the standard first degree in medicine in the US during the 19th century, but as a three-year undergraduate degree. It did not become established as a graduate degree until 1930. As the standard qualifying degree in medicine, the MD gave that profession the ability (through the American Medical Association, established in 1847 for this purpose) to set and raise standards for entry into professional practice. In the shape of the German-style PhD, the modern research degree was first awarded in the US in 1861, at Yale University. This differed from the MD in that the latter was a vocational "professional degree" that trained students to apply or practice knowledge rather than generate it, similar to other students in vocational schools or institutes.
In the UK, research doctorates initially took the form of higher doctorates in Science and Letters, first introduced at Durham University in 1882. The PhD spread to the UK from the US via Canada and was instituted at all British universities from 1917. The first (titled a DPhil) was awarded at the University of Oxford. Following the MD, the next professional doctorate in the US, the Juris Doctor (JD), was established by the University of Chicago in 1902. However, it took a long time to be accepted, not replacing the Bachelor of Laws (LLB) until the 1960s, by which time the LLB was generally taken as a graduate degree. Notably, the JD and LLB curricula were identical, with the degree being renamed as a doctorate, and it (like the MD) was not equivalent to the PhD, raising criticism that it was "not a 'true Doctorate'". When professional doctorates were established in the UK in the late 1980s and early 1990s, they did not follow the US model. Instead, they were set up as research degrees at the same level as PhDs but with some taught components and a professional focus for research work. Now usually called higher doctorates in the United Kingdom, the older-style doctorates take much longer to complete since candidates must show themselves to be leading experts in their subjects. These doctorates are less common than the PhD in some countries and are often awarded honoris causa. The habilitation is still used for academic recruitment purposes in many countries within the EU. It involves either a long new thesis (a second book) or a portfolio of research publications. The habilitation (highest available degree) demonstrates independent and thorough research, experience in teaching and lecturing, and, more recently, the ability to generate supportive funding. The habilitation follows the research doctorate, and in Germany, it can be a requirement for appointment as a Privatdozent or professor. == Types == Since the Middle Ages, the number and types of doctorates awarded by universities have proliferated throughout the world. Practice varies from one country to another. While a doctorate usually entitles a person to be addressed as "doctor", the use of the title varies widely depending on the type and the associated occupation. === Research doctorate === Research doctorates are awarded in recognition of academic research that is publishable, at least in principle, in a peer-reviewed academic journal. The best-known research degree in the English-speaking world is the Doctor of Philosophy (abbreviated PhD or, at a small number of British universities, DPhil) awarded in many countries throughout the world. In the US, for instance, although the most typical research doctorate is the PhD, accounting for about 98% of the research doctorates awarded, there are more than 15 other names for research doctorates. Other research-oriented doctorates (some having a professional practice focus) include the Doctor of Education (EdD), the Doctor of Science (DSc or ScD), Doctor of Arts (DA), Doctor of Juridical Science (JSD or SJD), Doctor of Musical Arts (DMA), Doctor of Professional Studies/Professional Doctorate (ProfDoc or DProf), Doctor of Public Health (DrPH), Doctor of Social Science (DSSc or DSocSci), Doctor of Management (DM, DMan or DMgt), Doctor of Business Administration (DBA), Doctor of Engineering (DEng, DESc, DES or EngD), the German engineering doctorate Doktoringenieur (Dr.-Ing.), natural science doctorate Doctor rerum naturalium (Dr. rer. nat.), and economics and social science doctorate Doctor rerum politicarum (Dr.
rer. pol.). The UK Doctor of Medicine (MD or MD (Res)) and Doctor of Dental Surgery (DDS) are research doctorates. The Doctor of Theology (ThD or DTh), Doctor of Practical Theology (DPT) and the Doctor of Sacred Theology (STD, or DSTh) are research doctorates in theology. Criteria for research doctorates vary but typically require completion of a substantial body of original research, which may be presented as a single thesis or dissertation, or as a portfolio of shorter project reports (thesis by publication). The submitted dissertation is assessed by a committee of, typically, internal and external examiners. It is then typically defended by the candidate during an oral examination (called a viva voce in the UK and India) by the committee, which then awards the degree unconditionally, awards the degree conditionally (ranging from corrections in grammar to additional research), or denies the degree. Candidates may also be required to complete graduate-level courses in their field and study research methodology. Criteria for admission to doctoral programs vary. Students may be admitted with a bachelor's degree in the US and the UK. However, elsewhere, e.g. in Finland and many other European countries, a master's degree is required. The time required to complete a research doctorate varies from three years, excluding undergraduate study, to six years or more. === Licentiate === Licentiate degrees vary widely in their meaning, and in a few countries are doctoral-level qualifications. Sweden awards the licentiate degree as a two-year qualification at the doctoral level and the doctoral degree (PhD) as a four-year qualification. Sweden abolished the Licentiate in 1969 but reintroduced it in response to demands from business. Finland also has a two-year doctoral-level licentiate degree, similar to Sweden's. Outside of Scandinavia, the licentiate is usually a lower-level qualification. In Belgium, the licentiate was the basic university degree prior to the Bologna Process and was equivalent to a bachelor's degree. In France and other countries, it is the bachelor's-level qualification in the Bologna process. In the Pontifical system, the Licentiate in Sacred Theology (STL) is equivalent to an advanced master's degree, or the post-master's coursework required in preparation for a doctorate (i.e. similar in level to the Swedish/Finnish licentiate degree), while other licences (such as the Licence in Canon Law) are at the level of master's degrees. === Higher doctorate and post-doctoral degrees === A higher tier of research doctorates may be awarded based on a formally submitted portfolio of published research of an exceptionally high standard. Examples include the Doctor of Science (DSc or ScD), Doctor of Divinity (DD), Doctor of Letters (DLitt or LittD), Doctor of Law or Laws (LLD), and Doctor of Civil Law (DCL) degrees found in the UK, Ireland and some Commonwealth countries, and the traditional doctorates in Scandinavia like the Doctor Medicinae (Dr. Med.). The habilitation teaching qualification (facultas docendi or "faculty to teach"), earned under a university procedure with a thesis and an exam, is commonly regarded as belonging to this category in Germany, Austria, France, Liechtenstein, Switzerland, Poland, etc. The degree developed in Germany in the 19th century "when holding a doctorate seemed no longer sufficient to guarantee a proficient transfer of knowledge to the next generation". In many federal states of Germany, the habilitation results in the award of a formal "Dr. habil."
degree, or the holder of the degree may add "habil." to their research doctorate, such as "Dr. phil. habil." or "Dr. rer. nat. habil." In some European universities, especially in German-speaking countries, the degree by itself is insufficient for carrying out teaching duties without professor supervision (or for teaching and supervising PhD students independently) unless it is accompanied by an additional teaching title such as Privatdozent. In Austria, the habilitation bestows on the graduate the facultas docendi (venia legendi) and, since 2004, the honorary title of "Privatdozent" (before this, completing the habilitation resulted in appointment as a civil servant). In many Central and Eastern European countries, the degree gives venia legendi, Latin for "the permission to lecture", or ius docendi, "the right to teach", a specific academic subject at universities for a lifetime. The French academic system used to have a higher doctorate, called the "state doctorate" (doctorat d'État), but, in 1984, it was superseded by the habilitation (Habilitation à diriger des recherches, "habilitation to supervise (doctoral and post-doctoral) research", abbreviated HDR), which is the prerequisite to supervise PhDs and to apply for full professorships. In many countries of the former Soviet Union (USSR), for example the Russian Federation or Ukraine, there is a higher doctorate (above the title of "Candidate of Sciences"/PhD) under the title "Doctor of Sciences". While this section has focused on earned qualifications conferred by virtue of published work or the equivalent, a higher doctorate may also be presented on an honorary basis by a university, at its own initiative or after a nomination, in recognition of public prestige, institutional service, philanthropy, or professional achievement. In a formal listing of qualifications, and often in other contexts, an honorary higher doctorate will be identified using language like "DCL, honoris causa", "Hon LLD", or "LittD h.c.". === Professional doctorate === Depending on the country, professional doctorates may also be research degrees at the same level as PhDs. The relationship between research and practice is considered important, and professional degrees with little or no research content are typically aimed at professional performance. Many professional doctorates are named "Doctor of [subject name]" and abbreviated using the form "D[subject abbreviation]" or "[subject abbreviation]D", or may use the more generic titles "Professional Doctorate", abbreviated "ProfDoc" or "DProf", "Doctor of Professional Studies" (DPS) or "Doctor of Professional Practice" (DPP). In the US, professional doctorates (formally "doctor's degree – professional practice" in government classifications) are defined by the US Department of Education's National Center for Educational Statistics as degrees that require a minimum of six years of university-level study (including any pre-professional bachelor's or associate degree) and meet the academic requirements for professional licensure in the discipline. The definition for a professional doctorate does not include a requirement for either a dissertation or study beyond master's level, in contrast to the definition for research doctorates ("doctor's degree – research/scholarship"). However, individual programs may have different requirements. There is also a category of "doctor's degree – other" for doctorates that do not fall into either the "professional practice" or "research/scholarship" categories. All of these are considered doctoral degrees.
In contrast to the US, many countries reserve the term "doctorate" for research degrees. If, as in Canada and Australia, professional degrees bear the name "Doctor of ...", etc., it is made clear that these are not doctorates. Examples of this include Doctor of Pharmacy (PharmD), Doctor of Medicine (MD), Doctor of Dental Surgery (DDS), Doctor of Nursing Practice (DNP), and Juris Doctor (JD). Contrariwise, for example, research doctorates like Doctor of Business Administration (DBA), Doctor of Education (EdD) and Doctor of Social Science (DSS) qualify as full academic doctorates in Canada, though they normally incorporate aspects of professional practice in addition to a full dissertation. In the Philippines, the University of the Philippines Open University offers a Doctor of Communication (DComm) professional doctorate. All doctorates in the UK and Ireland are third cycle qualifications in the Bologna Process, comparable to US research doctorates. Although all doctorates are research degrees, professional doctorates normally include taught components, while the name PhD/DPhil is normally used for doctorates purely by thesis. Professional, practitioner, or practice-based doctorates such as the DClinPsy, MD, DHSc, EdD, DBA, EngD and DAg are full academic doctorates. They are at the same level as the PhD in the national qualifications frameworks; they are not first professional degrees but are "often post-experience qualifications" in which practice is considered important in the research context. In 2009 there were 308 professional doctorate programs in the UK, up from 109 in 1998, with the most popular being the EdD (38 institutions), DBA (33), EngD/DEng (22), MD/DM (21), and DClinPsy/DClinPsych/ClinPsyD (17). Similarly, in Australia, the term "professional doctorate" is sometimes applied to the Scientiae Juridicae Doctor (SJD), which, like the UK professional doctorates, is a research degree. === Honorary doctorate === When a university wishes to formally recognize an individual's contributions to a particular field or philanthropic efforts, it may choose to grant a doctoral degree honoris causa ('for the sake of the honor'), waiving the usual requirements for granting the degree. Some universities do not award honorary degrees, for example, Cornell University, the University of Virginia, and Massachusetts Institute of Technology. == National variations == === Argentina === In Argentina the doctorate (doctorado) is the highest academic degree. The intention is that candidates produce original contributions in their field of knowledge within a framework of academic excellence. A dissertation or thesis is prepared under the supervision of a tutor or director. It is reviewed by a Doctoral Committee composed of examiners external to the program and at least one examiner external to the institution. The degree is conferred after a successful dissertation defence. In 2006, there were approximately 2,151 postgraduate programs in the country, of which 14% were doctoral degrees. Doctoral programs in Argentina are overseen by the National Commission for University Evaluation and Accreditation, an agency in Argentina's Ministry of Education, Science and Technology. === Australia === The Australian Qualifications Framework (AQF) categorizes tertiary qualifications into ten levels that are numbered from one to ten in ascending order of complexity and depth. Of these qualification levels, six are for higher education qualifications and are numbered from five to ten.
Doctoral degrees occupy the highest of these levels: level ten. All doctoral degrees involve research; this is a defining characteristic of them. There are three categories of doctoral degrees recognized by the AQF: research doctorates, professional doctorates and higher doctorates. Research doctorates and professional doctorates are both completed as part of a programme of study and supervised research. Both require the student to have a supervisor who has agreed to supervise their research, and to hold an honours degree with upper second-class honours or better, or a master's degree with a substantial research component. Research doctorates are typically titled Doctor of Philosophy and are awarded on the basis of an original and significant contribution to knowledge. Professional doctorates are typically titled Doctor of (field of study) and are awarded on the basis of an original and significant contribution to professional practice. Higher doctorates are typically titled similarly to professional doctorates and are awarded based on a submitted portfolio of research that follows a consistent theme and is internationally recognized as an original and substantive contribution to knowledge beyond that required for the awarding of a research doctorate. Typically, to be eligible for a higher doctorate, a candidate must have completed a research doctorate at least seven to ten years before submitting the research portfolio on which the higher doctorate is assessed. === Brazil === Doctoral candidates are normally required to have a master's degree in a related field. Exceptions are based on their individual academic merit. A second and a third foreign language are other common requirements, although the requirements regarding proficiency commonly are not strict. The admissions process varies by institution. Some require candidates to take tests while others base admissions on a research proposal application and interview only. In both instances, however, a faculty member must agree prior to admission to supervise the applicant. Requirements usually include satisfactory performance in advanced graduate courses, passing an oral qualifying exam and submitting a thesis that must represent an original and relevant contribution to existing knowledge. The thesis is examined in a final public oral exam administered by at least five faculty members, two of whom must be external. After completion, which normally takes 4 years, the candidate is commonly awarded the degree of Doutor (Doctor) followed by the main area of specialization, e.g. Doutor em Direito (Doctor of Laws), Doutor em Ciências da Computação (Doctor of Computer Sciences), Doutor em Filosofia (Doctor of Philosophy), Doutor em Economia (Doctor of Economics), Doutor em Engenharia (Doctor of Engineering) or Doutor em Medicina (Doctor of Medicine). The generic title of Doutor em Ciências (Doctor of Sciences) is normally used to refer collectively to doctorates in the natural sciences (i.e. Physics, Chemistry, Biological and Life Sciences, etc.). All graduate programs in Brazilian public universities are tuition-free (mandated by the Brazilian constitution). Some graduate students are additionally supported by institutional scholarships granted by federal government agencies like CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) and CAPES (Coordenação de Aperfeiçoamento do Pessoal de Ensino Superior).
Personal scholarships are provided by the various FAPs (Fundações de Amparo à Pesquisa) at the state level, especially FAPESP in the state of São Paulo, FAPERJ in the state of Rio de Janeiro and FAPEMIG in the state of Minas Gerais. Competition for graduate financial aid is intense and most scholarships support at most 2 years of Master's studies and 4 years of doctoral studies. The normal monthly stipend for doctoral students in Brazil is between US$500 and $1000. A degree of Doutor usually enables an individual to apply for a junior faculty position equivalent to a US assistant professor. Progression to full professorship, known as Professor Titular, requires that the candidate be successful in a competitive public exam and normally takes additional years. In the federal university system, doctors who are admitted as junior faculty members may progress (usually by seniority) to the rank of associate professor, then become eligible to take the competitive exam for vacant full professorships. In São Paulo state universities, associate professorships and subsequent eligibility to apply for a full professorship are conditioned on the qualification of Livre-docente, which requires, in addition to a doctorate, a second thesis or cumulative portfolio of peer-reviewed publications, a public lecture before a panel of experts (including external members from other universities), and a written exam. In recent years, initiatives such as jointly supervised doctorates (e.g. "cotutelles") have become increasingly common, as part of the country's efforts to open its universities to international students. === Denmark === Denmark offers two types of "doctorate"-like degrees. The first is a three-year ph.d. degree program, which replaced the equivalent licentiat in 1992 and does not grant the holder the right to the title dr. or doktor; at the same time, a minor, two-year research training program, leading to a title of "magister", was phased out to meet the international standards of the Bologna Process. The second is a 'full' doctor's degree (e.g. dr.phil., Doctor Philosophiae, for humanistic and STEM subjects) – the higher doctorate – which was introduced in 1479. The second part of the title communicates the field of study – e.g. dr.scient (in the sciences), dr.jur (in law), dr.theol (in theology). For the ph.d. degree, the candidates (ph.d. students or fellows) – who are required to have a master's degree – enroll at a ph.d. school at a university and participate in a research training program, at the end of which they each submit a thesis and defend it orally at a formal disputation. In the disputation, the candidates defend their theses against three official opponents, and may also face questions from opponents among those present in the auditorium (ex auditorio). For the higher doctorate, the candidate (referred to as præses) is required to submit a thesis of major scientific significance, and to proceed to defend it orally against two official opponents, as well as against any and all opponents from the auditorium (ex auditorio) – no matter how long the proceedings take. The official opponents are required to be full professors. The candidate is required to have a master's degree, but not necessarily a ph.d. The ph.d. was introduced as a separate title from the higher doctorate in 1992 as part of the transition to a new degree structure, since the changes in the degree system would otherwise leave a significant number of academics without immediately recognizable qualifications in international settings.
The original intention was reportedly to phase out the higher doctorate in favor of the ph.d. (or merge the two), but so far, there are no signs of this happening. Many Danish academics with permanent positions wrote ph.d. dissertations in the 1990s when the system was new, since at that time, a ph.d. degree or equivalent qualifications began to be required for certain academic positions in Denmark. Until the late 20th century, the higher doctorate was a condition for attaining a full professorship; it is no longer required per se for any positions, but is considered amply equivalent to the ph.d. when applying for academic positions. === Egypt === In Egypt, the highest doctoral degree is awarded by Al-Azhar University (established in 970), which grants the Ālimiyya (العالمية, comparable to a habilitation). The Medical doctorate (abbreviated as M.D.) is equivalent to the Ph.D. degree. To earn an M.D. in a science specialty, one must have a master's degree (M.Sc.) (or two diplomas, before the introduction of the M.Sc. degree in Egypt) before applying. The M.D. degree involves courses in the field and defending a dissertation. It takes on average three to five years. Many students in postgraduate medical and surgical specialties earn a doctorate. After finishing a six-year medical school program and a one-year internship (house officer), physicians and surgeons earn the M.B. B.Ch. degree, which is equivalent to a US MD degree. They can then apply to earn a master's degree or a speciality diploma, then an MD degree in a specialty. The Egyptian M.D. degree is written using the name of one's specialty. For example, M.D. (Geriatrics) means a doctorate in Geriatrics, which is equivalent to a Ph.D. in Geriatrics. === Finland === The Finnish requirement for entrance into doctoral studies is a master's degree or equivalent. All universities have the right to award doctorates. The ammattikorkeakoulu institutes (institutes of higher vocational education that are not universities but often called "Universities of Applied Sciences" in English) do not award doctoral or other academic degrees. The student must: demonstrate understanding of their field and its meaning, while preparing to use scientific or scholarly study in their field to create new knowledge; obtain a good understanding of its development, basic problems and research methods; and obtain such understanding of the general theory of science and letters and such knowledge of neighbouring research fields that they are able to follow the development of these fields. The way to show that these general requirements have been met is to complete graduate coursework, demonstrate critical and independent thought, and prepare and publicly defend a dissertation (a monograph or a compilation thesis of peer-reviewed articles). In fine arts, the dissertation may be substituted by works and/or performances as accepted by the faculty. Entrance to a doctoral program is available only for holders of a master's degree; there is no honors procedure for recruiting bachelor's graduates. Entrance is not as controlled as in undergraduate studies, where a strict numerus clausus is applied. Usually, a prospective student discusses their plans with a professor. If the professor agrees to accept the student, the student applies for admission. The professor may recruit students to their group. Formal acceptance does not imply funding. The student must obtain funding either by working in a research unit or through private scholarships. Funding is more readily available in the natural and engineering sciences than in letters.
Sometimes, normal work and research activity are combined. Prior to the introduction of the Bologna process, Finland required at least 42 credit weeks (1,800 hours) of formal coursework. The requirement was removed in 2005, leaving the decision to individual universities, which may delegate the authority to faculties or individual professors. In Engineering and Science, required coursework varies between 40 and 70 ECTS. The duration of graduate studies varies. It is possible to graduate three years after the master's degree, while much longer periods are not uncommon. The study ends with a dissertation, which must present substantial new scientific/scholarly knowledge. The dissertation can be either a monograph or an edited collection of 3 to 7 journal articles. Students unable or unwilling to write a dissertation may qualify for a licentiate degree by completing the coursework requirement and writing a shorter thesis, usually summarizing one year of research. When the dissertation is ready, the faculty names two expert pre-examiners with doctoral degrees from outside the university. During the pre-examination process, the student may receive comments on the work and respond with modifications. After the pre-examiners approve, the doctoral candidate applies to the faculty for permission to print the thesis. When granting this permission, the faculty names the opponent for the thesis defence, who must also be an outside expert, with at least a doctorate. In all Finnish universities, long tradition requires that the printed dissertation hang on a cord by a public university noticeboard for at least ten days prior to the dissertation defence. The dissertation defence takes place in public. The opponent and the candidate conduct a formal debate, usually wearing white tie, under the supervision of the thesis supervisor. Family, friends, colleagues and the members of the research community customarily attend the defence. After a formal entrance, the candidate begins with an approximately 20-minute popular lecture (lectio praecursoria), which is meant to introduce laymen to the thesis topic. The opponent follows with a short talk on the topic, after which the pair critically discuss the dissertation. The proceedings take two to three hours. At the end the opponent presents their final statement and reveals whether they will recommend that the faculty accept the dissertation. Any member of the public then has an opportunity to raise questions, although this is rare. Immediately after the defence, the supervisor, the opponent and the candidate drink coffee with the public. Usually, the attendees of the defence are given the printed dissertation. In the evening, the successful candidate hosts a dinner (Finnish: karonkka) in honour of the opponent. Usually, the candidate invites their family, colleagues and collaborators. Doctoral graduates are often Doctors of Philosophy (filosofian tohtori), but many fields retain their traditional titles: Doctor of Medicine (lääketieteen tohtori), Doctor of Science in technology (tekniikan tohtori), Doctor of Science in arts (Art and Design), etc. The doctorate is a formal requirement for a docentship or professor's position, although these in practice require postdoctoral research and further experience. Exceptions may be granted by the university governing board, but this is uncommon, and usually due to other work and expertise considered equivalent.
=== France === ==== History ==== Before 1984, three research doctorates existed in France: the State doctorate (doctorat d'État, "DrE", the old doctorate introduced in 1808), the third cycle doctorate (doctorat de troisième cycle, also called doctorate of specialty, doctorat de spécialité, created in 1954 and shorter than the State doctorate) and the diploma of doctor-engineer (diplôme de docteur-ingénieur, created in 1923), for technical research. During the first half of the 20th century, following the submission of two theses (primary thesis, thèse principale, and secondary thesis, thèse complémentaire) to the Faculty of Letters (in France, "letters" is equivalent to "humanities") at the University of Paris, the doctoral candidate was awarded the Doctorat ès lettres. There was also the less prestigious "university doctorate", doctorat d'université, which could be received for the submission of a single thesis. In the 1950s, the Doctorat ès lettres was renamed Doctorat d'État. In 1954 (for the sciences) and 1958 (for letters and human sciences), the less demanding doctorat de troisième cycle degree was created on the model of the American Ph.D., with the purpose of shortening what had become an increasingly long period of time between the typical students' completion of their Diplôme d'études supérieures, roughly equivalent to a Master of Arts, and their Doctorat d'État. After 1984, only one type of doctoral degree remained: the "doctorate" (Doctorat). A special diploma was created called the "Habilitation to Supervise Research" (also translated as "accreditation to supervise research"; Habilitation à diriger des recherches), a professional qualification to supervise doctoral work. (This diploma is similar in spirit to the older State doctorate, and the requirements for obtaining it are similar to those necessary to obtain tenure in other systems.) Previously, only professors or senior full researchers of similar rank were normally authorized to supervise a doctoral candidate's work. Now the habilitation is a prerequisite for the title of university professor (Professeur des universités) and for the title of Research Director (Directeur de recherche) in national public research agencies such as CNRS, INRIA, or INRAE. ==== Admission ==== Today, the doctorate (doctorat) is a research-only degree. It is a national degree and its requirements are fixed by the minister of higher education and research. Only public institutions award the doctorate. It can be awarded in any field of study. The master's degree is a prerequisite. The normal duration is three years. The writing of a comprehensive thesis constitutes the bulk of the doctoral work. While the length of the thesis varies according to the discipline, it is rarely less than 150 pages, and often substantially more. Some 15,000 new doctoral matriculations occur every year and approximately 10,000 doctorates are awarded. Doctoral candidates can apply for a three-year fellowship. The best known is the Contrat Doctoral (4,000 granted every year, with a gross salary of 1,758 euros per month as of September 2016). Since 2002, candidates follow in-service training, but there is no written examination for the doctorate. The candidate has to write a thesis that is read by two external reviewers. The head of the institution decides whether the candidate can defend the thesis, after considering the external reviews. The jury members are designated by the head of the institution. The candidate's supervisor and the external reviewers are generally jury members.
The maximum number of jury members is 8. The defense generally lasts 45 minutes in scientific fields, followed by 1 to 2½ hours of questions from the jury or other doctors present. The defense and questions are public. The jury then deliberates in private and declares the candidate admitted or "postponed". The latter is rare. New regulations were set in 2016 and do not award distinctions. The title of doctor (docteur) can also be used by medical and pharmaceutical practitioners who hold a doctor's State diploma (diplôme d'État de docteur, distinct from the doctorat d'État mentioned above). The diploma is a first degree. A guideline with good practices and legal analysis was published in 2018 by the Association nationale des docteurs (ANDès) and the Confédération des Jeunes Chercheurs (CJC) with funding from the French Ministry of Research. === Germany === Doctoral degrees in Germany are research doctorates and are awarded by a process called Promotion. Most doctorates are awarded with specific Latin designations for the field of research (except for engineering, where the designation is German), instead of a general name for all fields (such as the Ph.D.). The most important degrees are: Dr. theol. (theologiae; theology); Dr. phil. (philosophiae; humanities such as philosophy, philology, history, and social sciences such as sociology, political science, or psychology as well); Dr. rer. nat. (rerum naturalium; natural and formal sciences, i.e. physics, chemistry, biology, mathematics, computer science and information technology, or psychology); Dr. iur. (iuris; law); Dr. med. (medicinae; medicine); Dr. med. dent. (medicinae dentariae; dentistry); Dr. med. vet. (medicinae veterinariae; veterinary medicine); Dr.-Ing. (engineering); Dr. oec. (oeconomiae; economics); Dr. rer. pol. (rerum politicarum; economics, business administration, political science). The concept of a US-style professional doctorate as an entry-level professional qualification does not exist. Professional doctorates obtained in other countries, not requiring a thesis or not being third cycle qualifications under the Bologna process, can only be used postnominally, e.g., "Max Mustermann, MD", and do not allow the use of the title Dr. In medicine, "doctoral" dissertations are often written alongside undergraduate study; the European Research Council therefore decided in 2010 that such Dr. med. degrees do not meet the international standards of a Ph.D. research degree. The duration of the doctorate depends on the field: a doctorate in medicine may take less than a full-time year to complete; those in other fields, two to six years. Over fifty doctoral designations exist, many of them rare or no longer in use. As a title, the degree is commonly written in front of the name in abbreviated form, e.g., Dr. rer. nat. Max Mustermann, or simply Dr. Max Mustermann, dropping the designation entirely. However, leaving out the designation is only allowed when the doctorate is not an honorary doctorate, which must be indicated by Dr. h.c. (from Latin honoris causa). Although the honorific does not become part of the name, holders can demand that the title appear in official documents. The title is not mandatory. The honorific is commonly used in formal letters. For holders of other titles, only the highest title is mentioned.
In contrast to English, in which a person's name is preceded by at most one title (except in very ceremonious usage), the formal German mode of address permits several titles in addition to "Herr" or "Frau" (which, unlike "Mr" or "Ms", is not considered a title at all, but an Anrede or "address"), including repetitions in the case of multiple degrees, as in "Frau Prof. Dr. Dr. Schmidt", for a person who would be addressed as "Prof. Schmidt" in English. In the German university system it is common to write two doctoral theses, the inaugural thesis (Inauguraldissertation), completing a course of study, and the habilitation thesis (Habilitationsschrift), which opens the road to a professorship. Upon completion of the habilitation thesis, a Habilitation is awarded, which is indicated by appending habil. (habilitata/habilitatus) to the doctorate, e.g., Dr. rer. nat. habil. Max Mustermann. Formally, it is considered an additional academic qualification rather than an academic degree. It qualifies the holder to teach at German universities (facultas docendi). The holder of a Habilitation receives the authorization to teach a certain subject (venia legendi). This has been the traditional prerequisite for attaining the status of Privatdozent (PD) and employment as a full university professor. With the introduction of Juniorprofessuren (around 2005) as an alternative track towards becoming a professor at universities (with tenure), the Habilitation is no longer the only university career track. === India === In India, doctorates are offered by universities. Entry requirements include a master's degree. Some universities consider undergraduate degrees in professional areas such as engineering, medicine or law as qualifications for pursuing doctorate-level degrees. Entrance examinations are held for almost all programs. In most universities, the combined duration of coursework and thesis is 3–7 years. The most common doctoral degree is the Ph.D. === Italy === Until the introduction of the dottorato di ricerca in the mid-1980s, the laurea generally constituted the highest academic degree obtainable in Italy. The first institution in Italy to create a doctoral program was the Scuola Normale Superiore di Pisa in 1927, under the historic name "Diploma di Perfezionamento". The dottorato di ricerca was subsequently introduced by law and presidential decree in 1980, in a reform of academic teaching, training and experimentation in organisation and teaching methods. Italy uses a three-level degree system following the Bologna Process. The first-level degree, called a laurea (Bachelor's degree), requires three years and a short thesis. The second-level degree, called a laurea magistrale (Master's degree), is obtained after two additional years, specializing in a branch of the field. This degree requires more advanced thesis work, usually involving academic research or an internship. The final degree is called a dottorato di ricerca (Ph.D.) and is obtained after three years of academic research on the subject and a thesis. Alternatively, after obtaining the laurea or the laurea magistrale, one can complete a "Master's" (first-level Master's after the laurea; second-level Master's after the laurea magistrale) of one or two years, usually including an internship. An Italian "Master's" is not the same as a master's degree; it is intended to be more focused on professional training and practical experience. Regardless of the field of study, the title for bachelor's graduates is Dottore/Dottoressa (abbrev.
Dott./Dott.ssa, or as Dr.), not to be confused with the title for the Ph.D., which is instead Dottore/Dottoressa di Ricerca. A laurea magistrale grants instead the title of Dottore/Dottoressa magistrale. Graduates in the fields of Education, Art and Music are also called Dr. Prof. (or simply Professore) or Maestro. Many professional titles, such as ingegnere (engineer), are awarded only upon passing a post-graduation examination (esame di stato) and registration in the relevant professional association. The Superior Graduate Schools in Italy (Italian: Scuola Superiore Universitaria), also called Schools of Excellence (Italian: Scuole di Eccellenza), such as the Scuola Normale Superiore di Pisa and the Sant'Anna School of Advanced Studies, keep their historical "Diploma di Perfezionamento" title by law and MIUR Decree. === Japan === ==== Dissertation-only ==== Until the 1990s, most natural science and engineering doctorates in Japan were earned by industrial researchers in Japanese companies. These degrees were awarded by the employees' former university, usually after years of research in industrial laboratories. The only requirement is submission of a dissertation, along with articles published in well-known journals. This program is called ronbun hakase (論文博士). It produced the majority of engineering doctoral degrees from national universities. University-based doctoral programs, called katei hakase (課程博士), are gradually replacing these degrees. By 1994, more doctoral engineering degrees were earned for research within university laboratories (53%) than in industrial research laboratories (47%). Since 1978, the Japan Society for the Promotion of Science (JSPS) has provided tutorial and financial support for promising researchers in Asia and Africa. The program is called JSPS RONPAKU. ==== Professional degree ==== The only professional doctorate in Japan is the Juris Doctor, known as Hōmu Hakushi (法務博士). The program generally lasts two or three years. This curriculum is professionally oriented, but unlike in the US the program does not provide education sufficient for a law license. All candidates for a bar license must pass the bar exam (Shihou shiken), attend the Legal Training and Research Institute and pass the practical exam (Nikai Shiken or Shihou Shushusei koushi). === Netherlands and Flanders === The traditional academic system of the Netherlands provided a basic academic diploma, the propaedeuse, and three academic degrees: kandidaat (the lowest degree), doctorandus or, depending on gender, doctoranda (drs.) (with equivalent degrees in engineering – ir. – and law – mr.), and doctor (dr.). After successful completion of the first year of university, the student was awarded the propaedeutic diploma (not a degree). In some fields, this diploma was abolished in the 1980s. In physics and mathematics, the student could directly obtain a kandidaats (candidate) diploma in two years. The candidate diploma was all but abolished by 1989. It used to be attained after completion of the majority of courses of the academic study (usually after completion of course requirements of the third year in the program), after which the student was allowed to begin work on their doctorandus thesis. The successful completion of this thesis conveyed the doctorandus/doctoranda title, implying that the student's initial studies were finished. In addition to these 'general' degrees, specific titles equivalent to the doctorandus degree were awarded for law: meester (master) (mr.), and for engineering: ingenieur (engineer) (ir.).
Following the Bologna protocol, the Dutch adopted the Anglo-Saxon system of academic degrees. The old candidate's degree was revived to become the bachelor's degree, and the doctorandus (along with the mr and ir degrees) was replaced by the master's degree. Students can only enroll in a doctoral program after completing a research university level master's degree, although dispensation can be granted on a case-by-case basis after scrutiny of the individual's portfolio. The most common way to conduct doctoral studies is to work as promovendus/assistent in opleiding (aio)/onderzoeker in opleiding (oio) (research assistant with additional courses and supervision), perform extensive research and write a dissertation consisting of published articles (over a period of four or more years). Research can also be conducted without official research assistant status, for example through a business-sponsored research laboratory. The doctor's title is the highest academic title in the Netherlands and Flanders. In research doctorates the degree is always Ph.D. or dr., with no distinction between disciplines, and can only be granted by research universities. ==== Netherlands ==== Every Ph.D. thesis has to be promoted by a research university staff member holding ius promovendi (the right to promote). In the Netherlands all full professors have ius promovendi, as well as other academic staff granted this right on an individual basis by the board of their university (almost always senior associate professors). The promotor has the role of principal advisor and determines whether the quality of the thesis suffices for submission to the examining committee. The examining committee is appointed by the academic board of the university based on the recommendation of the promotor and consists of experts in the field. The examining committee reviews the thesis manuscript and has to approve or fail the thesis. Failures at this stage are rare because promotors generally do not submit work they deem inadequate to the examining committee; supervisors and promotors lose prestige among their colleagues should they allow a substandard thesis to be submitted. After examining committee approval, the candidate publishes the thesis (generally more than 100 copies) and sends it to the examining committee, colleagues, friends and family with an invitation to the public defence. Additional copies are kept in the university library and the Royal Library of the Netherlands. The degree is awarded in a formal, public defence session, in which the thesis is defended against critical questions of the "opposition" (the examining committee). Specific formalities differ between universities, for example whether a public presentation is given, either before or during the session, specific phrasing in the procedure, and dress code. In most protocols, candidates can be supported by paranymphs, whose role is largely ceremonial, but who are formally allowed to take over the defence on behalf of the candidate. The actual defence lasts exactly the assigned time slot (45 minutes to 1 hour exactly, depending on the university), after which the defence is suspended by the bedel, who stops the examination, frequently mid-sentence. Failure during this session is possible, but extremely rare.
After formal approval of the thesis and the defence by the examining committee in a closed discussion, the session is resumed and the promotor grants the degree, hands over the diploma to the candidate, and usually congratulates the candidate with a personal speech praising the work of the young doctor (laudatio), before the session is formally closed. Dutch doctors may use PhD behind their name instead of the uncapitalized dr. before their name. Those who obtained a doctorate in a foreign country may only use the Dutch title dr. if their degree is approved as equivalent by the Dienst Uitvoering Onderwijs, though, in line with the opportunity principle, little effort is spent on identifying such fraud. Those who have multiple doctor (dr.) titles may use the title dr.mult. Those who have received honoris causa doctorates may use dr.h.c. before their own name. The Dutch universities of technology (Eindhoven University of Technology, Delft University of Technology, University of Twente, and Wageningen University) also award a 2-year (industry-oriented) Professional Doctorate in Engineering (PDEng), renamed EngD from September 2022 onwards, which does not grant the right to use the dr. title abbreviation. In 2023, a pilot started at universities of applied sciences with a professional doctoral programme, in which the focus is on applying knowledge to improve or solve professional processes or products. ==== Flanders ==== In Belgium's Flemish Community, the doctorandus title was only used by those who had actually started their doctoral work. Doctorandus is still used as a synonym for a Ph.D. student. The licentiaat (licensee) title was in use for a regular graduate until the Bologna reform changed the licentiaat degree to the master's degree (the Bologna reform abolished the two-year kandidaat degree and introduced a three-year academic bachelor's degree instead). === Poland === In Poland, the academic degree of doktor ('doctor') is awarded in sciences and arts upon an examination and defence of a doctoral dissertation. As Poland is a signatory to the Bologna Process, doctoral studies are a third cycle of studies following the bachelor's (licencjat) and master's (magister) degrees or their equivalents. A doctoral student is known as a doktorant (masculine form) or doktorantka (feminine form). A doctorate is awarded within a specified branch and discipline of science or art by a university or research institute accredited by the minister responsible for higher education. The title is abbreviated to dr in the nominative case. Doctors may subsequently go through a habilitation process. === Russia === Introduced in 1819 in the Russian Empire, the academic title Doctor of the Sciences (Russian: Доктор наук) marks the highest academic level achievable by a formal process. The title was abolished with the end of the Empire in 1917 and revived by the USSR in 1934, along with a new (lower) complementary degree of Candidate [Doctor] of the Sciences (Russian: Кандидат наук). This system has been used since then, with minor adjustments. The Candidate of the Sciences title is usually seen as roughly equivalent to the research doctorates in Western countries, while the Doctor of the Sciences title is relatively rare and retains its exclusivity. Most "Candidates" never reach the "Doctor of the Sciences" title. Similar title systems were adopted by many of the Soviet bloc countries. === Spain === Doctoral degrees are regulated by Royal Decree (R.D. 778/1998; Real Decreto in Spanish). They are granted by the university on behalf of the king.
Its diploma has the force of a public document. The Ministry of Science keeps a national registry of theses called TESEO. According to the National Institute of Statistics (INE), fewer than 5% of M.Sc. degree holders are admitted to Ph.D. programmes. All doctoral programs are research-oriented. A minimum of 4 years of study is required, divided into 2 stages. A 2-year (or longer) period of studies concludes with a public dissertation presented to a panel of 3 professors; upon approval from the university, the candidate receives a Diploma de Estudios Avanzados (part-qualified doctor, equivalent to an M.Sc.). Since 2008 it has been possible to substitute the former diploma with a recognized master's program. This is followed by a 2-year (or longer) research period, which can be extended for up to 10 years. The student must present a thesis describing a discovery or original contribution. If approved by their thesis director, the study is presented to a panel of 5 distinguished scholars. Any Doctor attending the public defense is allowed to challenge the candidate with questions. If approved, the candidate receives the doctorate. The following marks used to be granted: Unsatisfactory (Suspenso), Pass (Aprobado), Remarkable (Notable), "Cum laude" (Sobresaliente), and "Summa cum laude" (Sobresaliente Cum Laude). Those Doctors granted their degree "Summa Cum Laude" were allowed to apply for an "Extraordinary Award". Since September 2012, and regulated by Royal Decree (R.D. 99/2011) (in Spanish), three marks can be granted: Unsatisfactory (No apto), Pass (Apto) and "Cum laude" (Apto Cum Laude) as the maximum mark. In the public defense the doctor is notified whether the thesis has passed. The Apto Cum Laude mark is awarded after the public defense as the result of a private, anonymous vote. Votes are verified by the university. A unanimous vote of the reviewers nominates Doctors granted Apto Cum Laude for an "Extraordinary Award" (Premio Extraordinario de Doctorado). In the same Royal Decree the initial 3-year study period was replaced by a Research master's degree (one or two years; professional master's degrees do not grant direct access to Ph.D. programs) that concludes with a public dissertation called Trabajo de Fin de Máster or Proyecto de Fin de Máster. An approved project earns a master's degree that grants access to a Ph.D. program and initiates the period of research. A doctorate is required in order to teach at the university. Some universities offer an online Ph.D. model. Only Ph.D. holders, Grandees and Dukes can sit and cover their heads in the presence of the King. From 1857, Complutense University was the only one in Spain authorised to confer the doctorate. This law remained in effect until 1954, when the University of Salamanca joined in commemoration of its septcentenary. In 1970, the right was extended to all Spanish universities. All doctorate holders are reciprocally recognised as equivalent in Germany and Spain (according to the "Bonn Agreement of November 14, 1994"). === United Kingdom === ==== History of the UK doctorate ==== The doctorate has long existed in the UK as, originally, the second degree in divinity, law, medicine and music. But it was not until the late 19th century that the research doctorate, now known as the higher doctorate, was introduced. The first higher doctorate was the Doctor of Science at Durham University, introduced in 1882.
This was soon followed by other universities, including the University of Cambridge establishing its ScD in the same year, the University of London transforming its DSc from an advanced study course to a research degree in 1885, and the University of Oxford establishing its Doctor of Letters (DLitt) in 1900. The PhD was adopted in the UK following a joint decision in 1917 by British universities, although it took much longer for it to become established. Oxford became the first university to institute the new degree, although naming it the DPhil. The PhD was often distinguished from the earlier higher doctorates by distinctive academic dress. At Cambridge, for example, PhDs wear a master's gown with scarlet facings rather than the full scarlet gown of the higher doctors, while the University of Wales gave PhDs crimson gowns rather than scarlet. Professional doctorates were introduced in Britain in the 1980s and 1990s. The earliest professional doctorates were in the social sciences, including the Doctor of Business Administration (DBA), Doctor of Education (EdD) and Doctor of Clinical Psychology (DClinPsy). ==== British doctorates today ==== Today, except for those awarded honoris causa (honorary degrees), all doctorates granted by British universities are research doctorates, in that their main (and in many cases only) component is the submission of an extensive and substantial thesis or portfolio of original research, examined by an expert panel appointed by the university. UK doctorates are categorised as: Doctorates Subject specialist research – normally PhD/DPhil; the most common form of doctorate Integrated subject specialist doctorates – integrated PhDs including teaching at master's level Doctorates by publication – PhD by Published Works; only awarded infrequently Professional / practice-based / practitioner doctorates – e.g. EdD, ProfDoc/DProf, EngD, etc.; usually include taught elements and have an orientation that combines professional and academic aspects Higher doctorates e.g. DD, LLD, DSc, DLitt; higher level than doctorates, usually awarded either for a substantial body of work over an extended period or as honorary degrees. The Quality Assurance Agency states in the Framework for Higher Education Qualifications of UK Degree-Awarding Bodies (which covers doctorates but not higher doctorates) that: Doctoral degrees are awarded to students who have demonstrated: the creation and interpretation of new knowledge, through original research or other advanced scholarship, of a quality to satisfy peer review, extend the forefront of the discipline, and merit publication a systematic acquisition and understanding of a substantial body of knowledge which is at the forefront of an academic discipline or area of professional practice the general ability to conceptualise, design and implement a project for the generation of new knowledge, applications or understanding at the forefront of the discipline, and to adjust the project design in the light of unforeseen problems a detailed understanding of applicable techniques for research and advanced academic enquiry In the UK, the doctorate is a qualification awarded at FHEQ level 8/level 12 of the FQHEIS on the national qualifications frameworks. The higher doctorates are stated to be "A higher level of award", which is not covered by the qualifications frameworks. ==== Subject specialist doctorates ==== These are the most common doctorates in the UK and are normally awarded as PhDs. 
While the master/apprentice model was traditionally used for British PhDs, since 2003 courses have become more structured, with students taking courses in research skills and receiving training for professional and personal development. However, the assessment of the PhD remains based on the production of a thesis or equivalent and its defence at a viva voce oral examination, normally held in front of at least two examiners, one internal and one external. Access to PhDs normally requires an upper second class or first class bachelor's degree, or a master's degree. Courses normally last three years, although it is common for students to be initially registered for MPhil degrees and then formally transferred onto the PhD after a year or two. Students who are not considered likely to complete a PhD may be offered the opportunity to complete an MPhil instead. Integrated doctorates, originally known as 'New Route PhDs', were introduced from 2000 onwards. These integrate teaching at master's level during the first one or two years of the degree, either alongside research or as a preliminary to starting research. These courses usually offer a master's-level exit degree after the taught courses are completed. While passing the taught elements is often required, examination of the final doctorate is still by thesis (or equivalent) alone. The duration of integrated doctorates is a minimum of four years, with three years spent on the research component. In 2013, Research Councils UK issued a 'Statement of Expectations for Postgraduate Training', which lays out the expectations for training in PhDs funded by the research councils. In the latest version (2016), issued together with Cancer Research UK, the Wellcome Trust and the British Heart Foundation, these include the provision of careers advice, in-depth advanced training in the subject area, provision of transferable skills, training in experimental design and statistics, training in good research conduct, and training for compliance with legal, ethical and professional frameworks. The statement also encourages peer-group development through cohort training and/or Graduate schools. ==== Higher doctorates ==== Higher doctorates are awarded in recognition of a substantial body of original research undertaken over the course of many years. Typically the candidate submits a collection of previously published, peer-refereed work, which is reviewed by a committee of internal and external academics who decide whether the candidate deserves the doctorate. The higher doctorate is similar in some respects to the habilitation in some European countries. However, the purpose of the award is significantly different. While the habilitation formally determines whether an academic is suitably qualified to be a university professor, the higher doctorate does not qualify the holder for a position but rather recognises their contribution to research. Higher doctorates were defined by the UK Council for Graduate Education (UKCGE) in 2013 as: an award that is at a level above the PhD (or equivalent professional doctorate in the discipline), and that is typically gained not through a defined programme of study but rather by submission of a substantial body of research-based work. 
In terms of number of institutions offering the awards, the most common doctorates of this type in UKCGE surveys carried out in 2008 and 2013 were the Doctor of Science (DSc), Doctor of Letters (DLitt), Doctor of Law (LLD), Doctor of Music (DMus) and Doctor of Divinity (DD); in the 2008 survey the Doctor of Technology (DTech) tied with the DD. The DSc was offered by all 49 responding institutions in 2008 and 15 out of 16 in 2013 and the DLitt by only one less in each case, while the DD was offered in 10 responding institutions in 2008 and 3 in 2013. In terms of number of higher doctorates awarded (not including honorary doctorates) the DSc was most popular, but the number of awards was very low: the responding institutions had averaged an award of at most one earned higher doctorate per year over the period 2003–2013. ==== Honorary degrees ==== Most British universities award degrees honoris causa to recognise individuals who have made a substantial contribution to a particular field. Usually an appropriate higher doctorate is used in these circumstances, depending on the candidate's achievements. However, some universities differentiate between honorary and substantive doctorates, using the degree of Doctor of the University (D.Univ.) for these purposes, and reserve the higher doctorates for formal academic research. === United States === U.S. research doctorates are awarded for advanced study followed by successfully completing and defending independent research presented in the form of a dissertation. Professional degrees may use the term "doctor" in their titles, such as Juris Doctor and Doctor of Medicine, but these degrees rarely contain an independent research component and are not research doctorates. Law school graduates, although awarded the J.D. degree, are not normally addressed as "doctor". In legal studies, the Doctor of Juridical Science is considered the equivalent to a Ph.D. Many American universities offer the PhD followed by a professional doctorate or joint PhD with a professional degree. Often, PhD work is sequential to the professional degree, e.g., PhD in law after a JD or equivalent in physical therapy after DPT, in pharmacy after Pharm.D. Such professional degrees are referred to as an entry-level doctorate program and Ph.D. as a post-professional doctorate. ==== Research degrees ==== The most common research doctorate in the United States is the Doctor of Philosophy (Ph.D.). This degree was first awarded in the U.S. at the 1861 Yale University commencement. The University of Pennsylvania followed in 1871, with Cornell University (1872), Harvard (1873), Michigan (1876) and Princeton (1879) following suit. Controversy and opposition followed the introduction of the Ph.D. into the U.S. educational system, lasting into the 1950s, as it was seen as an unnecessary artificial transplant from a foreign (Germany) educational system, which corrupted a system based on England's Oxbridge model. Ph.D.s and other research doctorates in the U.S. typically entail successful completion of coursework, passing a comprehensive examination, and defending a dissertation. The median number of years for completion of U.S. doctoral degrees is seven. Doctoral applicants were previously required to have a master's degree, but many programs accept students immediately following undergraduate studies. Many programs gauge the potential of applicants to their program and grant a master's degree upon completion of the necessary course work. 
When so admitted, the student is expected to have mastered the material covered in the master's degree despite not holding one, though this tradition is under heavy criticism. Successfully finishing Ph.D. qualifying exams confers Ph.D. candidate status, allowing dissertation work to begin. The International Affairs Office of the U.S. Department of Education has listed 18 frequently awarded research doctorate titles identified by the National Science Foundation (NSF) as representing degrees equivalent in research content to the Ph.D. ==== Professional degrees ==== Many fields offer professional doctorates (or professional master's degrees) such as engineering, pharmacy, medicine, etc., that require such degrees for professional practice or licensure. Some of these degrees are also termed "first professional degrees", since they are the first field-specific master's or doctoral degrees. A Doctor of Engineering (DEng) is a professional degree. In contrast to a PhD in Engineering where students usually conduct original theory-based research, DEng degrees are built around applied coursework and a practice-led project and thus designed for working engineers in the industry. DEng students defend their thesis at the end of their study before a thesis committee in order to be conferred a degree. A Doctor of Pharmacy is awarded as the professional degree in pharmacy replacing a bachelor's degree. It is the only professional pharmacy degree awarded in the US. Pharmacy programs vary in length between four years for matriculants with a B.S./B.A. to six years for others. In the twenty-first century professional doctorates appeared in other fields, such as the Doctor of Audiology in 2007. Advanced Practice Registered Nurses were expected to completely transition to the Doctor of Nursing Practice by 2015, and physical therapists to the Doctor of Physical Therapy by 2020. Professional associations play a central role in this transformation amid criticisms on the lack of proper criteria to assure appropriate rigor. In many cases master's-level programs were relabeled as doctoral programs. == Revocation == A doctoral degree can be revoked or rescinded by the university that awarded it. Possible reasons include plagiarism, criminal or unethical activities of the holder, or malfunction or manipulation of academic evaluation processes. == See also == Postdoctoral researcher Compilation thesis Habilitation thesis Doctor (title) Eurodoctorate List of fields of doctoral studies == Notes == == References ==
Wikipedia/Doctorate
A Bachelor of Science (BS, BSc, B.S., B.Sc., SB, or ScB; from the Latin scientiae baccalaureus) is a bachelor's degree that is awarded for programs that generally last three to five years. The first university to admit a student to the degree of Bachelor of Science was the University of London in 1860. In the United States, the Lawrence Scientific School first conferred the degree in 1851, followed by the University of Michigan in 1855. Nathaniel Shaler, who was Harvard's Dean of Sciences, wrote in a private letter that "the degree of Bachelor of Science came to be introduced into our system through the influence of Louis Agassiz, who had much to do in shaping the plans of this School.": 48  Whether Bachelor of Science or Bachelor of Arts degrees are awarded in particular subjects varies between universities. For example, an economics student may graduate as a Bachelor of Arts in one university but as a Bachelor of Science in another, and occasionally, both options are offered. Some universities follow the Oxford and Cambridge tradition that even graduates in mathematics and the sciences become Bachelors of Arts, while other institutions offer only the Bachelor of Science degree, even in non-science fields. At universities that offer both Bachelor of Arts and Bachelor of Science degrees in the same discipline, the Bachelor of Science degree is usually more focused on that particular discipline and is targeted toward students intending to pursue graduate school or a profession in that discipline. == International differences == In some institutions, there are historical and traditional reasons that govern the granting of BS or BA degrees regardless of the disciplines offered. Georgetown University's School of Foreign Service awards the Bachelor of Science in Foreign Service (BSFS) degrees to all of its undergraduates, although many students major in humanities-oriented fields such as international history or culture and politics. University of Pennsylvania's Wharton School awards the BS in Economics to all of its undergraduates, regardless if the candidates major in economics or not. The London School of Economics offers BSc degrees in practically all subject areas, even those normally associated with the arts and humanities. Northwestern University's School of Communication grants the Bachelor of Science in Journalism degrees in all of its programs of study, including theater, dance, and radio/television/film. Meanwhile, the Oxbridge universities almost exclusively award the BA as a first degree. The decision to grant a BS or BA degree at some institutions also depends on the constituent colleges, even when the candidate pursues the same or similar subjects. For instance, Cornell University offers a BS degree in computer science from its College of Engineering and a BA degree in computer science from its College of Arts and Sciences. Likewise, for candidates majoring in computer science, Columbia University offers BS degrees for those enrolled in the School of Engineering and Applied Science but awards BA degrees for graduates of Columbia College. At Harvard University, the same undergraduate degree in computer science can be an A.B. if taken at Harvard College or Harvard John A. Paulson School of Engineering and Applied Sciences, and an A.L.B. at Harvard Extension School. === Argentina === In Argentina most university degrees are given as a license in a discipline. 
They are specific to a field and awarded to students upon completion of a course of study which lasts at least four and usually five years. In most cases, at the end of a course and as a mandatory condition for its completion (and ultimately, to obtain a degree), students are compelled to produce an original research project related to their field. This project is usually referred to as a thesis (although the term actually corresponds to post-graduate studies). === Australia, New Zealand and South Africa === In Australia, the BSc is generally a three to four-year degree. An honours year or a master's by research degree is required to progress on to the stage of Doctor of Philosophy (PhD). In New Zealand, in some cases, the honours degree comprises an additional postgraduate qualification. In other cases, students with strong performance in their second or third year are invited to extend their degree by an additional year, with a focus on research, granting access to doctoral programs. In South Africa, the BSc is taken over three years, while the postgraduate BSc (Hons) entails an additional year of study. Admission to the honours degree is on the basis of a sufficiently high average in the BSc major; an honours degree is required for MSc level study, and admission to a doctorate is via the MSc. === Brazil === In Brazil, a Bachelor of Science degree is an undergraduate academic degree and is equivalent to a BSc (Hons). It can take from 4 to 6 years (8 to 12 periods) to complete, is more specific in scope, and can be awarded for science-oriented courses (such as Engineering, Mathematics and Physics), for what in Brazil are called human arts courses (such as History, Portuguese and Literature, and Law), as well as for health-related courses (such as Medicine, Nursing, Zootechny, Veterinary Medicine and Biology). To enter a bachelor's program in Brazil, the candidate must demonstrate proficiency across different disciplines and must have completed primary and secondary education (a minimum of 10 to 12 years of prior study) with marks of at least 60% to 70%. Bachelor of Science courses at Brazilian universities normally devote the first 1 to 2 years (the first 2 to 4 periods) to fundamental disciplines (for example Calculus I–IV in some engineering courses, basic and advanced Geometry, and laboratory experiments in Mechanics, Optics and Magnetism) and the last 2 to 3 years to disciplines more closely related to the professional field of that Bachelor of Science (for example Unit Operations, Thermodynamics, Chemical Reactors, Industrial Processes and Kinetics for Chemical Engineering). Some disciplines are prerequisites for others, and at some universities a student who fails even one prerequisite discipline is not allowed to take any further course in the entire following period. Usually, Bachelor of Science courses require a mandatory one-year internship towards the end of the course (a supervised training period in the specific professional area), followed by relatively elaborate written and oral evaluations.
To be certified as a BSc, most universities require that students achieve marks of 60% to 70% in all the "obligatory disciplines", complete the supervised and approved training period (a supervised internship), and submit the final thesis of the course; some BSc programs also require a final exam. To teach, a Bachelor of Science graduate must obtain a licentiate (Licenciatura) degree, which requires 2 to 3 additional periods (1 to 1.5 years) of study beyond the BSc (Hons); teaching with a master's degree (MSc) is also possible, which takes 3 to 5 more periods (1.5 to 2.5 years). === Chile === In Chile, the completion of a university program leads to an academic degree as well as a professional title. The academic degree equivalent to Bachelor of Science is "Licenciado en Ciencias", which can be obtained as a result of completing a 4–6 year program. However, in most cases, 4-year programs will grant a Bachelor of Applied Science (Spanish: "Licenciatura en Ciencias Aplicadas") degree, while other 4-year programs do not lead to an academic degree. === Continental Europe === Many universities in Europe are changing their systems into the BA/MA system and in doing so also offering the full equivalent of a BSc or MSc (see Bologna Process). === Czech Republic === Universities in the Czech Republic are changing their systems into the Bachelor of Science/Master of Science system and in doing so also offering the full equivalent of a BSc (Bc.) or MSc (Mgr./Ing.). === Germany === In Germany, there are two kinds of universities: Universitäten and Fachhochschulen (also called Universities of Applied Sciences). Universitäten and Fachhochschulen (both also called Hochschulen) are legally equal, but Fachhochschulen have the reputation of being more oriented toward practice and have no legal right to offer PhD programmes. The BSc in Germany is equivalent to the BSc (Hons) in the United Kingdom. Many universities in German-speaking countries are changing their systems to the BA/MA system and in doing so also offering the full equivalent of a BSc. In Germany the BA normally lasts between three and four years (six to eight semesters) and between 180 and 240 ECTS credits must be earned. === India === Bachelor of Science (B.Sc.) is usually a three-year undergraduate program in India offered by state and central universities. Some independent private colleges also offer BS degrees with minimal changes in curriculum. B.Sc. is different from Bachelor of Engineering (B.E.) or Bachelor of Technology (B.Tech.). Exceptions include the B.Sc. (Research) course offered by the Indian Institute of Science, which lasts 4 years with an option to stay an extra year for a master's thesis; the four-year BS degrees in Physics, Data Science (online degree), Electronic Systems (online degree) and Medical Sciences & Engineering offered by IIT Madras; and the five-year BS-MS course offered by the IISERs, all of which provide a more research-oriented and interdisciplinary emphasis. From session 2022–23, the University of Delhi implemented NEP 2020, under which a bachelor's degree became a 4-year degree with multiple exit and entry options. After the fourth year, a student receives a B.Sc. (Research) in a field of study or a B.Sc. (Honours) in multidisciplinary studies.
=== Ireland === Commonly in Ireland, graduands are admitted to the degree of Bachelor of Science after having completed a programme in one or more of the sciences. These programmes may take different lengths of time to complete. In Ireland, the former BS was changed to BSc (Hons), which is awarded after four years. The BSc (Ord) is awarded after three years. Formerly at the University of Oxford, the degree of BSc was a postgraduate degree; this former degree, still actively granted, has since been renamed MSc. === United Kingdom === Commonly in British Commonwealth countries, graduands are admitted to the degree of Bachelor of Science after having completed a programme in one or more of the sciences. These programmes may take different lengths of time to complete. A Bachelor of Science receives the designation BSc for an ordinary degree and BSc (Hons) for an honours degree. In England, Wales and Northern Ireland an honours degree is typically completed over a three-year period, though there are a few intensified two-year courses (with less vacation time). Bachelor's degrees (without honours) were typically completed in two years for most of the twentieth century. In Scotland, where access to university is possible after one less year of secondary education, degree courses have a foundation year making the total course length four years. === North America === In Canada, Mexico, and the United States, it is most often a four-year undergraduate degree, typically in engineering, computer science, mathematics, economics, finance, business, or the natural sciences. There are, however, some colleges and universities, notably in the province of Quebec, that offer three-year degree programs. == Typical completion period == === Three years === Algeria, Australia, Austria, Barbados, Belgium, Belize, Bosnia and Herzegovina (mostly three years, sometimes four), Cameroon, Canada (specifically Quebec), Côte d'Ivoire, Croatia (mostly three years, sometimes four), Czech Republic (mostly three years, sometimes four), Denmark, England (three or four years with a one-year placement in industry), Estonia, Finland, France, Germany (mostly three years, but can be up to four years), Hungary, Iceland, India (three-year BSc in arts and pure sciences excluding engineering, Agriculture and medicine, four years BS, Bsc (hons.) Agriculture, Engineering, four years for engineering program "Bachelor of Engineering", four years for Agriculture program "Bachelor of Agriculture" and five years for medicine program "Bachelor of Medicine and Bachelor of Surgery"), Ireland (Ordinary), Israel (for most subjects), Italy, Jamaica (three or four years), Latvia (three or four years), Lebanon (three or four years, five years for Bachelor of Engineering), Malaysia, New Zealand, the Netherlands (three years for research universities, four years for universities of applied sciences), Northern Ireland, Norway, Poland, Portugal, Romania, Scotland (Ordinary), Singapore (honours degree takes 4 years), Slovakia, Slovenia, South Africa (honours degree takes 4 years), Sweden, Switzerland, Trinidad and Tobago, Uganda (mostly three years, sometimes four), United Arab Emirates, Wales, and Zimbabwe. 
=== Four years === Afghanistan, Albania (four or five years), Armenia (four or five years), Australia (honours degree), Azerbaijan (four or five years), Bahrain, Bangladesh (four or five years), Belarus, Belize, Bosnia and Herzegovina, Brazil (four or five years), Brunei (three or four years), Bulgaria, Canada (except Quebec, four or five years), China, Cyprus, the Dominican Republic, Egypt (four or five years), Ethiopia (engineering, five years), Finland (engineering, practice in industry not included), Georgia, Ghana (three or four years), Greece (four or five years), Guatemala, Haiti (three or four years), Hong Kong (starting from 2012; three years prior to then), India (Some universities and institutes offer 4 year degrees ), Indonesia (four or five years), Iran (four or five years), Iraq, Ireland (Honours Degree), Israel (engineering degree), Japan, Jordan (four to five years), Kazakhstan, Kenya, Kuwait, Libya, Lithuania, North Macedonia (three, four or five years), Malawi (four or five years), Malta, Mexico, Montenegro (three or four years), Myanmar, Nepal (previously three, now four years), the Netherlands (three years for research universities, four years for universities of applied sciences), New Zealand (honours degree), Nigeria (four or five years), Pakistan (four or five years), the Philippines (four or five years), Romania, Russia, Saudi Arabia, Scotland (Honours Degree), Serbia (three or four years), Spain, South Africa (fourth year is elective — to obtain an Honours degree, which is normally a requirement for selection into a master's degree program), South Korea, Sri Lanka (three, four, or five (specialized) years), Taiwan, Tajikistan (four or five years), Thailand, Turkmenistan (four years), Tunisia (only a Bachelor of Science in Business Administration is available, solely awarded by Tunis Business School), Turkey, Ukraine, the United States, Uruguay (four, five, six, or seven years), Vietnam (four or five years), Yemen, and Zambia (four or five years). === Five years === Canada (except Quebec, four or five years), Cuba (five years), Greece (four or five years), Peru, Argentina, Colombia (five years), Brazil (four or five years), Mexico (four or five years), Chile (five or six years), Venezuela (five years), Egypt (four or five years), Haiti (four or five years), Iran (four or five years), the Philippines (four or five years). Bangladesh (four or five years), Pakistan (four or five years), Indonesia (four or five years), Nigeria (four or five years), six months dedicated to SIWES (Students Industrial Work Exchange Scheme) but for most sciences and all engineering courses only. A semester for project work/thesis not excluding course work during the bachelor thesis. Excluding one year for the compulsory National Youth Service Corps (NYSC), para-military and civil service. North Macedonia, Sierra Leone (four years dedicated to coursework), Slovenia (four or five years), Sudan (five years for BSc honours degree and four years for BSc ordinary degree), and Syria. In Algeria, the student presents a thesis in front of a Jury at the end of the fifth year. Some universities in Canada (such as University of British Columbia and Vancouver Island University) have most of their science and applied science students extend their degree by a year compared to other institutions. === Six years === In Chile, some undergraduate majors such as engineering and geology are designed as six-year programs. 
However, in practice it is not uncommon for students to complete such programs over the course of ten years, while studying full-time without leaves of absence. This is in part due to a strict grading system where the highest grade of a typical class can be as low as 60% (C-). There are studies that suggest a direct correlation between reduced social mobility and differences unique to the Chilean higher education system. == See also == British undergraduate degree classification British degree abbreviations List of tagged degrees Master of Science == Notes == == References ==
Wikipedia/Bachelor_of_Science
The Faber–Evans model for crack deflection is a fracture mechanics-based approach to predict the increase in toughness in two-phase ceramic materials due to crack deflection. The effect is named after Katherine Faber and her mentor, Anthony G. Evans, who introduced the model in 1983. The Faber–Evans model is a principal strategy for tempering brittleness and creating effective ductility. Fracture toughness is a critical property of ceramic materials, determining their ability to resist crack propagation and failure. The Faber model considers the effects of different particle morphologies, including spherical, rod-shaped, and disc-shaped particles, and their influence on the driving force at the tip of a tilted and/or twisted crack. The model first suggested that rod-shaped particles with high aspect ratios are the most effective morphology for deflecting propagating cracks and increasing fracture toughness, primarily due to the twist of the crack front between particles. The findings provide a basis for designing high-toughness two-phase ceramic materials, with a focus on optimizing particle shape and volume fraction. == Fracture mechanics and crack deflection == Fracture mechanics is a fundamental discipline for understanding the mechanical behavior of materials, particularly in the presence of cracks. The critical parameter in fracture mechanics is the stress intensity factor (K), which is related to the strain energy release rate (G) and the fracture toughness (Gc). When the stress intensity factor reaches the material's fracture toughness, crack propagation becomes unstable, leading to failure. In two-phase ceramic materials, the presence of a secondary phase can lead to crack deflection, a phenomenon where the crack path deviates from its original direction due to interactions with the second-phase particles. Crack deflection can lead to a reduction in the driving force at the crack tip, increasing the material's fracture toughness. The effectiveness of crack deflection in enhancing fracture toughness depends on several factors, including particle shape, size, volume fraction, and spatial distribution. The study presents weighting functions, F(θ), for the three particle morphologies, which describe the distribution of tilt angles (θ) along the crack front: F ( θ ) s p h e r e = ( 4 π ) sin 2 θ d θ {\displaystyle F(\theta )_{\rm {sphere}}=\left({\frac {4}{\pi }}\right)\sin ^{2}\theta \,d\theta } F ( θ ) d i s k = ( 4 π ) sin 2 θ d θ {\displaystyle F(\theta )_{\rm {disk}}=\left({\frac {4}{\pi }}\right)\sin ^{2}\theta \,d\theta } F ( θ ) r o d ≈ ( 1.55 + 1.10 θ − 2.42 θ 2 + 1.78 θ 3 ) sin θ cos θ d θ {\displaystyle F(\theta )_{\rm {rod}}\approx (1.55+1.10\theta -2.42\theta ^{2}+1.78\theta ^{3})\sin \theta \,\cos \theta \,d\theta } The weighting functions are used to determine the net driving force on the tilted crack for each morphology. The relative driving force for spherical particles is given by: ⟨ G ⟩ s p h e r e t / G ∞ = ( 4 π ) sin 2 θ [ ( k 1 t ) 2 + ( k 2 t ) 2 ] d θ {\displaystyle \left\langle {G}\right\rangle _{sphere}^{t}/G_{\infty }=\left({\frac {4}{\pi }}\right)\sin ^{2}\theta [(k_{1}^{t})^{2}+(k_{2}^{t})^{2}]d\theta } where k i = K i / K 1 {\displaystyle k_{i}=K_{i}/K_{1}} is the local stress intensity factor normalized by the applied mode-I factor, and ⟨ G ⟩ t {\displaystyle \left\langle {G}\right\rangle ^{t}} prescribes the strain energy release rate only for that portion of the crack front which tilts.
To characterize the entire crack front at initial tilt, ⟨ G ⟩ t {\displaystyle \left\langle {G}\right\rangle ^{t}} must be qualified by the fraction of the crack length intercepted and superposed on the driving force that derives from the remaining undeflected portion of the crack. The resultant toughening increment, derived directly from the driving forces, is given by: ( G c t ) s p h e r e = ( 1 + 0.87 V f ) G c m {\displaystyle (G_{\rm {c}}^{t})_{sphere}=(1+0.87V_{f})G_{\rm {c}}^{m}} ( G c t ) r o d ≈ ( 1 + V f ( 0.6 + 0.007 ( H / r ) − 0.0001 ( H / r ) 2 ) ) G c m {\displaystyle (G_{\rm {c}}^{t})_{rod}\approx (1+V_{f}(0.6+0.007(H/r)-0.0001(H/r)^{2}))G_{\rm {c}}^{m}} ( G c t ) d i s k = [ 1 + 0.56 V f ( r / t ) ] G c m {\displaystyle (G_{\rm {c}}^{t})_{disk}=[1+0.56V_{f}(r/t)]G_{\rm {c}}^{m}} where G c m {\textstyle G_{\rm {c}}^{m}} represents the fracture toughness of the matrix material without the presence of any reinforcing particles, V f {\displaystyle V_{f}} is the volume fraction of second-phase particles, ( H / r ) {\displaystyle (H/r)} relates the rod length H {\displaystyle H} to its radius, r {\displaystyle r} , and ( r / t ) {\displaystyle (r/t)} is the ratio of the disc radius, r {\displaystyle r} , to its thickness, t {\displaystyle t} . == Spatial location and orientation of particles == The spatial location and orientation of adjacent particles play a crucial role in determining whether the inter-particle crack front will tilt or twist. If adjacent particles produce tilt angles of opposite sign, twist of the crack front will result. Conversely, tilt angles of like sign at adjacent particles cause the entire crack front to tilt. Therefore, to evaluate the toughening increment, all possible particle configurations must be considered. For spherical particles, the average twist angle is determined by the mean center-to-center nearest-neighbor distance, Δ {\displaystyle \Delta } , between spherical particles of radius r: Δ r = e 8 V f V f 1 / 3 ∫ 8 V f ∞ x 1 / 3 e − x d x {\displaystyle {\frac {\Delta }{r}}={\frac {e^{8V_{f}}}{V_{f}^{1/3}}}\int _{8V_{f}}^{\infty }x^{1/3}e^{-x}dx} The maximum twist angle occurs when the particles are nearly co-planar with the crack, given by: ϕ m a x = sin − 1 ( 2 r Δ ) {\displaystyle \phi _{max}=\sin ^{-1}\left({\frac {2r}{\Delta }}\right)} and depends exclusively on the volume fraction. For rod-shaped particles, the analysis of crack front twist is more complex due to difficulties in describing the rod orientation with respect to the crack front and adjacent rods. The twist angle, ϕ {\displaystyle \phi } , is determined by the effective tilt angle, λ {\displaystyle \lambda } , and the inter-particle spacing between randomly arranged rod-shaped particles. The twist of the crack front is influenced not only by the volume fraction of rods but also by the ratio of the rod length to radius: ϕ = tan − 1 { α sin θ 1 + ( 1 − β ) sin θ 2 Δ ′ } {\displaystyle \phi =\tan ^{-1}\left\{{\frac {\alpha \sin \theta _{1}+(1-\beta )\sin \theta _{2}}{\Delta '}}\right\}} where Δ ′ {\displaystyle \Delta '} represents the dimensionless effective inter-particle spacing between two adjacent rod-shaped particles.
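To make the closed-form expressions above concrete, the short Python sketch below evaluates the tilt-derived toughening increments for the three morphologies, along with the sphere spacing ratio and maximum twist angle. The chosen volume fraction and aspect ratios are arbitrary illustrative values, not parameters from the original study, and the quadrature is a simple numerical approximation.

```python
import numpy as np

def toughening_sphere(vf, gc_matrix=1.0):
    """Tilt-derived toughening increment for spherical particles."""
    return (1.0 + 0.87 * vf) * gc_matrix

def toughening_rod(vf, h_over_r, gc_matrix=1.0):
    """Tilt-derived toughening increment for rods of length H and radius r."""
    return (1.0 + vf * (0.6 + 0.007 * h_over_r - 0.0001 * h_over_r ** 2)) * gc_matrix

def toughening_disk(vf, r_over_t, gc_matrix=1.0):
    """Tilt-derived toughening increment for discs of radius r and thickness t."""
    return (1.0 + 0.56 * vf * r_over_t) * gc_matrix

def sphere_spacing_ratio(vf, x_max=80.0, n=400_000):
    """Mean nearest-neighbour centre-to-centre spacing Delta/r for spheres."""
    x = np.linspace(8.0 * vf, x_max, n)            # truncate the improper integral
    integrand = x ** (1.0 / 3.0) * np.exp(-x)
    integral = np.sum(integrand) * (x[1] - x[0])   # simple rectangle rule
    return np.exp(8.0 * vf) / vf ** (1.0 / 3.0) * integral

def max_twist_angle_deg(vf):
    """Maximum twist angle for nearly co-planar spheres (valid while Delta >= 2r)."""
    return np.degrees(np.arcsin(2.0 / sphere_spacing_ratio(vf)))

vf = 0.2  # illustrative volume fraction
print("sphere:", round(toughening_sphere(vf), 3), "x matrix toughness")
print("rod (H/r = 12):", round(toughening_rod(vf, 12.0), 3), "x matrix toughness")
print("disk (r/t = 12):", round(toughening_disk(vf, 12.0), 3), "x matrix toughness")
print("Delta/r:", round(sphere_spacing_ratio(vf), 2),
      " phi_max:", round(max_twist_angle_deg(vf), 1), "deg")
```

Note that these closed forms capture only the initial-tilt contribution described above; the larger, twist-dominated toughening attributed to rods is not reproduced by them.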
Disc-shaped particles and spheres are less effective in increasing fracture toughness. For disc-shaped particles with high aspect ratios, initial crack front tilt can provide significant toughening, although the twist component still dominates. In contrast, neither sphere nor rod particles derive substantial toughening from the initial tilting process. As the volume fraction of particles increases, an asymptotic toughening effect is observed for all three morphologies at volume fractions above 0.2. For spherical particles, the interparticle spacing distribution has a significant impact on toughening, with greater enhancements when spheres are nearly contacting and twist angles approach π/2. The Faber–Evans model suggests that rod-shaped particles with high aspect ratios are the most effective morphology for deflecting propagating cracks and increasing fracture toughness, primarily due to the twist of the crack front between particles. Disc-shaped particles and spheres are less effective in enhancing toughness. However, the interparticle spacing distribution plays a significant role in the toughening by spherical particles, with greater toughening achieved when spheres are nearly contacting. In designing high-toughness two-phase ceramic materials, the focus should be on optimizing particle shape and volume fraction. The model indicates that the ideal second phase should be chemically compatible with the matrix and present in amounts of 10 to 20 volume percent, with high-aspect-ratio particles, particularly rod-shaped ones, providing the maximum toughening effect. The model is often used in the development of advanced ceramic materials with improved performance when the factors that contribute to increased fracture toughness are a design consideration. == See also == Fracture toughness Toughening Ceramic Engineering Fracture == References ==
Wikipedia/Faber-Evans_model
Forensic materials engineering, a branch of forensic engineering, focuses on the material evidence from crime or accident scenes, seeking defects in those materials which might explain why an accident occurred, or the source of a specific material to identify a criminal. Many analytical methods used for material identification may be used in investigations, the exact set being determined by the nature of the material in question, be it metal, glass, ceramic, polymer or composite. An important aspect is the analysis of trace evidence such as skid marks on exposed surfaces, where contact between dissimilar materials leaves traces of one material on the other. Provided the traces can be analysed successfully, an accident or crime can often be reconstructed. Another aim will be to determine the cause of a broken component using the technique of fractography. Forensic materials engineers are often involved in product failures (e.g., a critical component of a safety device), process failures (e.g., a manufacturing process does not produce materials with acceptable properties for an application), and design failures (e.g., a flawed design causes many products to fail prematurely). == Defects == Many different types of defects, which are often investigated and characterized through the process of failure analysis, may be involved in the crime or accident and cause some sort of failure. For example, there are primary and secondary defects that may occur in products designed for consumer use. Defects may exist prior to use (primary defects), or they may develop during use; moreover, primary defects may also turn into secondary defects over time. Forensic materials engineers examine the scenario, identify relevant defects, and assess the probability that those defects were causal factors in the crime or accident. == Causal factors == A crucial aspect of forensic materials engineering is to avoid defining causes of failure in too binary a manner (i.e., narrowly framing the problem as whether a particular component involved in the crime or accident was either defective or abused by the user). Ascertaining cause involves identifying as many factors as possible that could contribute to a failure, typically through the generation of fault tree diagrams. Following the generation of the possible causes, investigations are planned and evidence is collected to determine which of the possible factors probably caused the failure. Within probable causes, there are root causes, which may consist of multiple factors: physical, human, and latent causes or factors. During the litigation process, each party involved will, on occasion, retain an expert forensic materials engineering witness to perform a failure analysis and compile a report. During such investigations, the parties are often present in the same location for the review of the physical evidence, and can be present for materials analyses. However, each party will often then independently perform their own observations of collected images and data and subsequently carry out interpretations and analyses in preparation for depositions and/or court proceedings. == Metals and alloys == Metal surfaces can be analyzed in a number of ways, including by spectroscopy and EDX used during scanning electron microscopy.
The nature and composition of the metal can normally be established by sectioning and polishing the bulk, and examining the flat section using optical microscopy after etching solutions have been used to provide contrast in the section between alloy constituents. Such solutions (often an acid) attack the surface preferentially, isolating features or inclusions of one composition and enabling them to be seen much more clearly than in the polished but untreated surface. Metallography is a routine technique for examining the microstructure of metals, but can also be applied to ceramics, glasses and polymers. SEM can often be critical in determining failure modes by examining fracture surfaces. The origin of a crack can be found and the way it grew assessed, to distinguish, for example, overload failure from fatigue. Often, however, fatigue fractures are easy to distinguish from overload failures by their lack of ductility and by the presence of distinct fast and slow crack growth regions on the fracture surface. Crankshaft fatigue, for example, is a common failure mode for engine parts; in a typical crankshaft fracture the slow crack growth zone lies at the base and the fast fracture zone above it. == Ceramics and glasses == Hard products like ceramic pottery and glass windscreens can be studied using the same SEM methods used for metals, especially ESEM conducted at low vacuum. Fracture surfaces are especially valuable sources of information because surface features like hachures can enable the origin or origins of the cracks to be found. Analysis of the surface features is carried out using fractography. The position of the origin can then be matched with likely loads on the product to show how an accident occurred, for example. Inspection of bullet holes can often show the direction of travel and the energy of the impact, and common glass products such as bottles can be analysed to show whether they were broken deliberately or accidentally in a crime or accident. Defects such as foreign particles will often occur near or at the origin of the critical crack, and can be readily identified by ESEM. == Polymers and composites == Thermoplastics, thermosets, and composites can be analyzed using FTIR and UV spectroscopy as well as NMR and ESEM. Failed samples can be dissolved in a suitable solvent and examined directly (UV, IR and NMR spectroscopy), or examined as a thin film cast from solvent or cut by microtomy from the solid product. The slicing method is preferable since there are no complications from solvent absorption, and the integrity of the sample is partly preserved. Fractured products can be examined using fractography, an especially useful method for all fractured components using macrophotography and optical microscopy. Although polymers usually possess quite different properties from metals and ceramics, they are just as susceptible to failure from mechanical overload, fatigue and stress corrosion cracking if products are poorly designed or manufactured. Many plastics are susceptible to attack by active chemicals like chlorine, present at low levels in potable water supplies, especially if the injection mouldings are faulty. ESEM is especially useful for providing elemental analysis from viewed parts of the sample being investigated. It is effectively a technique of microanalysis and valuable for examination of trace evidence. On the other hand, colour rendition is absent, and there is no information provided about the way in which those elements are bonded to one another.
Specimens will be exposed to a vacuum, so any volatiles may be removed, and surfaces may be contaminated by substances used to attach the sample to the mount. == Elastomers == Rubber products are often safety-critical parts of machines, so that failure can often cause accidents or loss of function. Failed products can be examined with many of the generic polymer methods, although it is more difficult if the sample is vulcanized or cross-linked. Attenuated total reflectance infra-red spectroscopy is useful because the product is usually flexible so can be pressed against the selenium crystal used for analysis. Simple swelling tests can also help to identify the specific elastomer used in a product. Often the best technique is ESEM using the X-ray elemental analysis facility on the microscope. Although the method only provides elemental analysis, it can provide clues as to the identity of the elastomer being examined. Thus the presence of substantial amounts of chlorine indicates polychloroprene while the presence of nitrogen indicates nitrile rubber. The method is also useful in confirming ozone cracking by the large amounts of oxygen present on cracked surfaces. Ozone attacks susceptible elastomers such as natural rubber, nitrile rubber and polybutadiene and associated copolymers. Such elastomers possess double bonds in their main chains, the group which is attacked during ozonolysis. The problem occurs when small concentrations of ozone gas are present near to exposed elastomer surfaces, such as O-rings and diaphragm seals. The product must be in tension, but only very low strains are sufficient to cause degradation. == See also == == References == Lewis, Peter Rhys, Reynolds, K, Gagg, C, Forensic Materials Engineering: Case studies, CRC Press (2004). Lewis, Peter Rhys Forensic Polymer Engineering: Why polymer products fail in service, 2nd edition, Woodhead/Elsevier (2016).
Wikipedia/Forensic_materials_engineering
Substrate is a term used in materials science and engineering to describe the base material on which processing is conducted. Surfaces have different uses, including producing new film or layers of material and being a base to which another substance is bonded. == Description == In materials science and engineering, a substrate refers to a base material on which processing is conducted. This surface could be used to produce new film or layers of material such as deposited coatings. It could be the base to which paint, adhesives, or adhesive tape is bonded. A typical substrate might be rigid such as metal, concrete, or glass, onto which a coating might be deposited. Flexible substrates are also used. Some substrates are anisotropic with surface properties being different depending on the direction: examples include wood and paper products. == Coatings == With all coating processes, the condition of the surface of the substrate can strongly affect the bond of subsequent layers. This can include cleanliness, smoothness, surface energy, moisture, etc. Coating can be by a variety of processes, including: Adhesives and adhesive tapes Coating and printing processes Chemical vapor deposition and physical vapor deposition Conversion coating Anodizing Chromate conversion coating Plasma electrolytic oxidation Phosphate coating Paint Enamel paint Powder coating Industrial coating Silicate mineral paint Fusion bonded epoxy coating (FBE coating) Pickled and oiled, a type of plate steel coating. Plating Electroless plating Electrochemical plating Polymer coatings, such as Teflon Sputtered or vacuum deposited materials Vitreous enamel In optics, glass may be used as a substrate for an optical coating—either an antireflection coating to reduce reflection, or a mirror coating to enhance it. Ceramic substrates are also used in the renewable energy sector to produce inverters for photovoltaic solar systems and concentrators for concentrated photovoltaic systems. A substrate may be also an engineered surface where an unintended or natural process occurs, like in: Fouling Corrosion Biofouling Heterogeneous catalysis Adsorption == See also == List of coating techniques Thin film Wetting == References ==
Wikipedia/Substrate_(materials_science)
A multi-function material is a composite material designed to carry load while also performing one or more non-structural functions. The traditional approach to the development of structures is to address the load-carrying function and other functional requirements separately. Recently, however, there has been increased interest in the development of load-bearing materials and structures which have integral non-load-bearing functions, guided by recent discoveries about how multifunctional biological systems work. == Introduction == With conventional structural materials, it has been difficult to achieve simultaneous improvement in multiple structural functions, but the increasing use of composite materials has been driven in part by the potential for such improvements. The additional functions can range from mechanical to electrical and thermal. The most widely used composites have polymer matrix materials, which are typically poor conductors. Enhanced conductivity can be achieved by reinforcing the composite with carbon nanotubes, for instance. == Functions == Among the many functions that can be attained are power transmission, electrical/thermal conductivity, sensing and actuation, energy harvesting/storage, self-healing capability, electromagnetic interference (EMI) shielding, recyclability, and biodegradability. See also functionally graded materials, which are composite materials in which the composition or microstructure is varied locally so that a desired variation in local material properties is achieved; such materials can be designed for specific functions and applications. Many applications have been proposed, such as re-configurable aircraft wings, shape-changing aerodynamic panels for flow control, variable-geometry engine exhausts, turbine blades, wind turbine configurations adapted to different wind speeds, microelectromechanical systems (micro-switches), mechanical memory cells, valves, micropumps, adjustable panel positioning in solar cells, innovative architecture (adaptive shape panels for roofs and windows), flexible and foldable electronic devices, and optics (shape-changing mirrors for active focusing in adaptive optical systems). == References ==
Wikipedia/Multi-function_structure
Carbon nanotubes (CNTs) are cylinders of one or more layers of graphene (lattice). Diameters of single-walled carbon nanotubes (SWNTs) and multi-walled carbon nanotubes (MWNTs) are typically 0.8 to 2 nm and 5 to 20 nm, respectively, although MWNT diameters can exceed 100 nm. CNT lengths range from less than 100 nm to 0.5 m. Individual CNT walls can be metallic or semiconducting depending on the orientation of the lattice with respect to the tube axis, which is called chirality. Normalized to its cross-sectional area, a MWNT offers an elastic modulus approaching 1 TPa and a tensile strength of 100 GPa, over 10-fold higher than any industrial fiber. MWNTs are typically metallic and can carry currents of up to 10⁹ A cm−2. SWNTs can display thermal conductivity of 3500 W m−1 K−1, exceeding that of diamond. As of 2013, carbon nanotube production exceeded several thousand tons per year, used for applications in energy storage, device modelling, automotive parts, boat hulls, sporting goods, water filters, thin-film electronics, coatings, actuators and electromagnetic shields. CNT-related publications more than tripled in the prior decade, while rates of patent issuance also increased. Most output was of unorganized architecture. Organized CNT architectures such as "forests", yarns and regular sheets were produced in much smaller volumes. CNTs have even been proposed as the tether for a purported space elevator. Recently, several studies have highlighted the prospect of using carbon nanotubes as building blocks to fabricate three-dimensional macroscopic (>1 mm in all three dimensions) all-carbon devices. Lalwani et al. have reported a novel radical-initiated thermal crosslinking method to fabricate macroscopic, free-standing, porous, all-carbon scaffolds using single- and multi-walled carbon nanotubes as building blocks. These scaffolds possess macro-, micro-, and nano-structured pores, and the porosity can be tailored for specific applications. These 3D all-carbon scaffolds/architectures may be used for the fabrication of the next generation of energy storage, supercapacitors, field emission transistors, high-performance catalysis, photovoltaics, and biomedical devices and implants. == Biological and biomedical research == Researchers from Rice University and State University of New York – Stony Brook have shown that the addition of a low weight % of carbon nanotubes can lead to significant improvements in the mechanical properties of biodegradable polymeric nanocomposites for applications in tissue engineering including bone, cartilage, muscle and nerve tissue. Dispersion of a low weight % of graphene (~0.02 wt.%) results in significant increases in the compressive and flexural mechanical properties of polymeric nanocomposites. Researchers at Rice University, Stony Brook University, Radboud University Nijmegen Medical Centre and University of California, Riverside have shown that carbon nanotubes and their polymer nanocomposites are suitable scaffold materials for bone tissue engineering and bone formation. Compared to the polyurethane commonly used in prosthetic feet, CNT natural rubber composites with just 1% carbon nanotubes show a 25% boost in tensile strength. Additionally, this small CNT addition improves dimensional stability by 20% and heat resistance by 15% over natural rubber. CNTs exhibit dimensional and chemical compatibility with biomolecules, such as DNA and proteins. CNTs enable fluorescent and photoacoustic imaging, as well as localized heating using near-infrared radiation.
SWNT biosensors exhibit large changes in electrical impedance and optical properties, which is typically modulated by adsorption of a target on the CNT surface. Low detection limits and high selectivity require engineering the CNT surface and field effects, capacitance, Raman spectral shifts and photoluminescence for sensor design. Products under development include printed test strips for estrogen and progesterone detection, microarrays for DNA and protein detection and sensors for NO2 and cardiac troponin. Similar CNT sensors support food industry, military and environmental applications. CNTs can be internalized by cells, first by binding their tips to cell membrane receptors. This enables transfection of molecular cargo attached to the CNT walls or encapsulated by CNTs. For example, the cancer drug doxorubicin was loaded at up to 60 wt % on CNTs compared with a maximum of 8 to 10 wt % on liposomes. Cargo release can be triggered by near-infrared radiation. However, limiting the retention of CNTs within the body is critical to prevent undesirable accumulation. CNT toxicity remains a concern, although CNT biocompatibility may be engineerable. The degree of lung inflammation caused by injection of well-dispersed SWNTs was insignificant compared with asbestos and with particulate matter in air. Medical acceptance of CNTs requires understanding of immune response and appropriate exposure standards for inhalation, injection, ingestion and skin contact. CNT forests immobilized in a polymer did not show elevated inflammatory response in rats relative to controls. CNTs are under consideration as low-impedance neural interface electrodes and for coating of catheters to reduce thrombosis. CNT enabled x-ray sources for medical imaging are also in development. Relying on the unique properties of the CNTs, researchers have developed field emission cathodes that allow precise x-ray control and close placement of multiple sources. CNT enabled x-ray sources have been demonstrated for pre-clinical, small animal imaging applications, and are currently in clinical trials. In November 2012 researchers at the American National Institute of Standards and Technology (NIST) proved that single-wall carbon nanotubes may help protect DNA molecules from damage by oxidation. A highly effective method of delivering carbon nanotubes into cells is Cell squeezing, a high-throughput vector-free microfluidic platform for intracellular delivery developed at the Massachusetts Institute of Technology in the labs of Robert S. Langer. Carbon nanotubes have furthermore been grown inside microfluidic channels for chemical analysis, based on electrochromatography. Here, the high surface-area-to-volume ratio and high hydrophobicity of CNTs are used in order to greatly decrease the analysis time of small neutral molecules that typically require large bulky equipment for analysis. == Composite materials == Because of the carbon nanotube's superior mechanical properties, many structures have been proposed ranging from everyday items like clothes and sports gear to combat jackets and space elevators. However, the space elevator will require further efforts in refining carbon nanotube technology, as the practical tensile strength of carbon nanotubes must be greatly improved. Applications of CNTs/polymer composites in the automotive sector highlight technological advancements across various systems, particularly in body components, electrical systems, and engine parts. 
Incorporating CNTs with fiberglass reinforcement in epoxy composites can boost strength by 60% and impact energy resistance by 30%. These improvements could reduce fuel consumption by 16% and greenhouse gas emissions by 26%. Additionally, CNT fillers, effective even at low concentrations (0.2 wt.%), enhance dimensional and thermal stability while reducing overall weight. A 25% reduction in vehicle weight could save up to 250 million barrels of crude oil annually. For perspective, outstanding breakthroughs have already been made. Pioneering work led by Ray H. Baughman at the NanoTech Institute has shown that single and multi-walled nanotubes can produce materials with toughness unmatched in the man-made and natural worlds. Carbon nanotubes are also a promising material as building blocks in hierarchical composite materials given their exceptional mechanical properties (~1 TPa in modulus, and ~100 GPa in strength). Initial attempts to incorporate CNTs into hierarchical structures (such as yarns, fibres or films) have led to mechanical properties that were significantly lower than these potential limits. The hierarchical integration of multi-walled carbon nanotubes and metal/metal oxides within a single nanostructure can leverage the potential of carbon nanotube composites for water splitting and electrocatalysis. Windle et al. have used an in situ chemical vapor deposition (CVD) spinning method to produce continuous CNT yarns from CVD-grown CNT aerogels. CNT yarns can also be manufactured by drawing out CNT bundles from a CNT forest and subsequently twisting to form the fibre (the draw-twist method). The Windle group have fabricated CNT yarns with strengths as high as ~9 GPa at small gauge lengths of ~1 mm; however, strengths of only about ~1 GPa were reported at the longer gauge length of 20 mm. Fibre strengths have been low compared with the strength of individual CNTs because load is not transferred effectively to the constituent (discontinuous) CNTs within the fibre. One potential route for alleviating this problem is via irradiation- (or deposition-) induced covalent inter-bundle and inter-CNT cross-linking to effectively 'join up' the CNTs, with higher dosage levels leading to the possibility of amorphous carbon/carbon nanotube composite fibres. Espinosa et al. developed high-performance DWNT-polymer composite yarns by twisting and stretching ribbons of randomly oriented bundles of DWNTs thinly coated with polymeric organic compounds. These DWNT-polymer yarns exhibited an unusually high energy to failure of ~100 J·g−1 (comparable to one of the toughest natural materials – spider silk), and strength as high as ~1.4 GPa. Effort is ongoing to produce CNT composites that incorporate tougher matrix materials, such as Kevlar, to further improve on the mechanical properties toward those of individual CNTs. Because of the high mechanical strength of carbon nanotubes, research is being conducted into weaving them into clothes to create stab-proof and bulletproof clothing. The nanotubes would effectively stop the bullet from penetrating the body, although the bullet's kinetic energy would likely cause broken bones and internal bleeding. Carbon nanotubes can also enable shorter processing times and higher energy efficiencies during composite curing with the use of carbon nanotube structured heaters. Autoclaving is the ‘gold standard’ for composite curing; however, it comes at a high price and introduces part size limitations.
Researchers estimate that curing a small section of the Boeing 787 carbon fiber/epoxy fuselage requires 350 GJ of energy and produces 80 tons of carbon dioxide. This is about the same amount of energy that nine households would consume in one year. In addition, removing part size limitations eliminates the need to join small composite components to create large scale structures. This saves manufacturing time and results in higher strength structures. Carbon nanotube structured heaters show promise in replacing autoclaves and conventional ovens for composite curing because of their ability to reach high temperatures at fast ramping rates with high electrical efficiency and mechanical flexibility. These nanostructured heaters can take the form of a film and be applied directly to the composite. This results in conductive heat transfer as opposed to the convective heat transfer used by autoclaves and conventional ovens. Lee et al. reported that only 50% of the thermal energy introduced in an autoclave is transferred to the composite being cured regardless of part size, while about 90% of the thermal energy is transferred in a nanostructured film heater depending on the process. Lee et al. were able to successfully cure aerospace-grade composites using a CNT heater made by “domino-pushing” a CNT forest onto a Teflon film. This film was then laid on top of an 8-ply OOA prepreg layup. Thermal insulation was incorporated around the assembly. The entire setup was subsequently vacuum bagged and heated using a 30 V DC power supply. Degree-of-cure and mechanical tests were conducted to compare conventionally cured composites against the OOA set-up. Results showed that there was no difference in the quality of the composite created. However, the amount of energy required to cure the composite OOA was reduced by two orders of magnitude, from 13.7 MJ to 118.8 kJ (a rough unit conversion of these figures is given in the short calculation below). Before carbon nanotubes can be used to cure Boeing 787 fuselages, however, further development needs to occur. The largest challenge associated with creating reliable carbon nanotube structured heaters is being able to create a uniform carbon nanotube dispersion in a polymer matrix to ensure heat is applied evenly. The CNTs' high surface area results in strong van der Waals forces between individual CNTs, causing them to agglomerate and yielding non-uniform heating properties. In addition, the polymer matrix needs to be carefully chosen such that it can withstand the high temperatures generated and the repetitive thermal cycling required to cure multiple composite components. === Mixtures === MWNTs were first used as electrically conductive fillers in metals, at concentrations as high as 83.78 percent by weight (wt%). MWNT-polymer composites reach conductivities as high as 10,000 S m−1 at 10 wt % loading. In the automotive industry, CNT plastics are used in electrostatic-assisted painting of mirror housings, as well as fuel lines and filters that dissipate electrostatic charge. Other products include electromagnetic interference (EMI)–shielding packages and silicon wafer carriers. For load-bearing applications, CNT powders are mixed with polymers or precursor resins to increase stiffness, strength and toughness. These enhancements depend on CNT diameter, aspect ratio, alignment, dispersion and interfacial interaction. Premixed resins and master batches employ CNT loadings from 0.1 to 20 wt%.
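Picking up the curing-energy figures referenced above, the snippet below converts them into comparable units as a rough sanity check. The assumed annual household electricity consumption (about 10,800 kWh, a typical figure) is our illustrative assumption rather than a value from the cited work.

```python
GJ = 1e9      # joules per gigajoule
MJ = 1e6      # joules per megajoule
KJ = 1e3      # joules per kilojoule
KWH = 3.6e6   # joules per kilowatt-hour

autoclave_section_energy = 350 * GJ   # curing a small 787 fuselage section
household_annual_kwh = 10_800         # assumed typical annual consumption (illustrative)

households = autoclave_section_energy / (household_annual_kwh * KWH)
print(f"350 GJ is roughly {households:.1f} household-years of electricity")

# Reported out-of-autoclave (OOA) CNT-heater result: 13.7 MJ down to 118.8 kJ
reduction = (13.7 * MJ) / (118.8 * KJ)
print(f"Energy reduction factor: {reduction:.0f}x (about two orders of magnitude)")
```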
Nanoscale stick-slip among CNTs and CNT-polymer contacts can increase material damping, enhancing sporting goods, including tennis racquets, baseball bats and bicycle frames. CNT resins enhance fiber composites, including wind turbine blades and hulls for maritime security boats that are made from carbon fiber composites with CNT-enhanced resin. CNTs are deployed as additives in the organic precursors of stronger 1 μm-diameter carbon fibers. CNTs influence the arrangement of carbon in pyrolyzed fiber. Toward the challenge of organizing CNTs at larger scales, hierarchical fiber composites are created by growing aligned forests onto glass, silicon carbide (SiC), alumina and carbon fibers, creating so-called "fuzzy" fibers. Fuzzy epoxy CNT-SiC and CNT-alumina fabric showed 69% improved crack-opening (mode I) and/or in-plane shear interlaminar (mode II) toughness. Applications under investigation include lightning-strike protection, deicing, and structural health monitoring for aircraft. MWNTs can be used as a flame-retardant additive to plastics due to changes in rheology by nanotube loading. Such additives can replace halogenated flame retardants, which face environmental restrictions. CNT/concrete blends offer increased tensile strength and reduced crack propagation. Buckypaper (nanotube aggregate) can significantly improve fire resistance due to efficient heat reflection. === Textiles === Previous studies on the use of CNTs for textile functionalization focused on fiber spinning for improving physical and mechanical properties. Recently, a great deal of attention has been focused on coating CNTs onto textile fabrics. Various methods have been employed for modifying fabrics using CNTs; for example, intelligent e-textiles for human biomonitoring have been produced using a polyelectrolyte-based coating with CNTs. Additionally, Panhuis et al. dyed textile material by immersion in either a poly(2-methoxyaniline-5-sulfonic acid) (PMAS) polymer solution or a PMAS-SWNT dispersion, obtaining enhanced conductivity and capacitance with durable behavior. In another study, Hu and coworkers coated textiles with single-walled carbon nanotubes using a simple “dipping and drying” process for wearable electronics and energy storage applications. In a recent study, Li and coworkers used an elastomeric separator and achieved an almost fully stretchable supercapacitor based on buckled single-walled carbon nanotube macrofilms; electrospun polyurethane provided sound mechanical stretchability, and the whole cell achieved excellent charge-discharge cycling stability. CNTs have an aligned nanotube structure and a negative surface charge. Therefore, they have similar structures to direct dyes, so the exhaustion method is applied to coat and adsorb CNTs on the fiber surface, producing multifunctional fabrics with antibacterial, electrically conductive, flame-retardant and electromagnetic-absorbing properties. In the longer term, CNT yarns and laminated sheets made by direct chemical vapor deposition (CVD) or forest spinning or drawing methods may compete with carbon fiber for high-end uses, especially in weight-sensitive applications requiring combined electrical and mechanical functionality. Research yarns made from few-walled CNTs have reached a stiffness of 357 GPa and a strength of 8.8 GPa for a gauge length comparable to the millimeter-long CNTs within the yarn. Centimeter-scale gauge lengths offer only 2-GPa gravimetric strengths, matching that of Kevlar.
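This drop in strength with gauge length is what weakest-link (Weibull) statistics would predict. The sketch below fits a Weibull modulus to the two strength values quoted above and extrapolates, purely as an illustration of the scaling argument: the simple length-scaling assumption, the 1 mm and 10 mm gauge lengths, and the inferred modulus are our illustrative choices, not results from the cited measurements.

```python
import math

# Reported yarn strengths at two gauge lengths (from the text above)
l1_mm, s1_gpa = 1.0, 8.8    # millimetre-scale gauge length
l2_mm, s2_gpa = 10.0, 2.0   # centimetre-scale gauge length

# Weakest-link scaling for a fibre of constant cross-section:
#   s2/s1 = (l1/l2)^(1/m)   ->   m = ln(l2/l1) / ln(s1/s2)
m = math.log(l2_mm / l1_mm) / math.log(s1_gpa / s2_gpa)
print(f"Implied Weibull modulus m = {m:.2f}")

# Extrapolate to a 1 m gauge length under the same assumption
l3_mm = 1000.0
s3_gpa = s1_gpa * (l1_mm / l3_mm) ** (1.0 / m)
print(f"Extrapolated strength at 1 m gauge: {s3_gpa:.2f} GPa")
```

The low extrapolated value illustrates why flaw statistics make it hard for long yarns to approach the strength of individual CNTs.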
Because the probability of a critical flaw increases with volume, yarns may never achieve the strength of individual CNTs. However, CNTs' high surface area may provide interfacial coupling that mitigates these deficiencies. CNT yarns can be knotted without loss of strength. Coating forest-drawn CNT sheets with functional powder before inserting twist yields weavable, braidable and sewable yarns containing up to 95 wt % powder. Uses include superconducting wires, battery and fuel cell electrodes and self-cleaning textiles. Fibers of aligned SWNTs can be made by coagulation-based spinning of CNT suspensions, although these are not yet practical; cheaper SWNTs or spun MWNTs are necessary for commercialization. Carbon nanotubes can be dissolved in superacids such as fluorosulfuric acid and drawn into fibers by dry jet-wet spinning. DWNT-polymer composite yarns have been made by twisting and stretching ribbons of randomly oriented bundles of DWNTs thinly coated with polymeric organic compounds. For body armor and combat jackets, Cambridge University developed such fibres and licensed a company to make them; in comparison, the bullet-resistant fiber Kevlar fails at 27–33 J/g. Other proposed uses include synthetic muscles, which offer a high contraction/extension ratio when supplied with an electric current, and SWNTs as an experimental material for removable structural bridge panels. In 2015, researchers incorporated CNTs and graphene into spider silk, increasing its strength and toughness to a new record. They sprayed 15 Pholcidae spiders with water containing the nanotubes or flakes. The resulting silk had a fracture strength up to 5.4 GPa, a Young's modulus up to 47.8 GPa and a toughness modulus up to 2.1 GPa, surpassing both synthetic polymeric high performance fibres (e.g. Kevlar49) and knotted fibers. SWNTs can enable water-repellent properties in textiles. Treating cotton fabrics with SWNTs creates an artificial lotus leaf structure, resulting in a rough surface. Modified conductive e-textiles exhibit excellent repeatability, washability, durability, and super-hydrophobic performance. These textiles are used in protective workwear, technical textiles, medical fabrics, technical garments for extreme outdoor sports, automotive and aeronautical textiles, wearable sensors, smart textiles, etc. Carbon nanotubes (CNTs) demonstrate excellent UV-blocking properties and can be incorporated into textiles by coating them with a polymer solution. The UV transmission of fabrics coated with CNTs is nearly zero, indicating that these polymer-coated fabrics can effectively protect the wearer from both UVA and UVB rays. === Carbon nanotube springs === "Forests" of stretched, aligned MWNT springs can achieve an energy density 10 times greater than that of steel springs, offering cycling durability, temperature insensitivity, no spontaneous discharge and arbitrary discharge rate. SWNT forests are expected to be able to store far more than MWNTs. === Alloys === Adding small amounts of CNTs to metals increases tensile strength and modulus, with potential in aerospace and automotive structures. Commercial aluminum-MWNT composites have strengths comparable to stainless steel (0.7 to 1 GPa) at one-third the density (2.6 g cm−3), comparable to more expensive aluminium-lithium alloys. === Coatings and films === CNTs can serve as a multifunctional coating material. For example, paint/MWNT mixtures can reduce biofouling of ship hulls by discouraging attachment of algae and barnacles. They are a possible alternative to environmentally hazardous biocide-containing paints. Mixing CNTs into anticorrosion coatings for metals can enhance coating stiffness and strength and provide a path for cathodic protection.
Mixing CNTs into anticorrosion coatings for metals can enhance coating stiffness and strength and provide a path for cathodic protection. CNTs provide a less expensive alternative to ITO for a range of consumer devices. Besides cost, CNT's flexible, transparent conductors offer an advantage over brittle ITO coatings for flexible displays. CNT conductors can be deposited from solution and patterned by methods such as screen printing. SWNT films offer 90% transparency and a sheet resistivity of 100 ohm per square. Such films are under development for thin-film heaters, such as for defrosting windows or sidewalks. Carbon nanotubes forests and foams can also be coated with a variety of different materials to change their functionality and performance. Examples include silicon coated CNTs to create flexible energy-dense batteries, graphene coatings to create highly elastic aerogels and silicon carbide coatings to create a strong structural material for robust high-aspect-ratio 3D-micro architectures. There is a wide range of methods how CNTs can be formed into coatings and films. ==== Optical power detectors ==== A spray-on mixture of carbon nanotubes and ceramic demonstrates unprecedented ability to resist damage while absorbing laser light. Such coatings that absorb the energy of high-powered lasers without breaking down are essential for optical power detectors that measure the output of such lasers. These are used, for example, in military equipment for defusing unexploded mines. The composite consists of multiwall carbon nanotubes and a ceramic made of silicon, carbon and nitrogen. Including boron boosts the breakdown temperature. The nanotubes and graphene-like carbon transmit heat well, while the oxidation-resistant ceramic boosts damage resistance. Creating the coating involves dispersing the nanotubes in toluene, to which a clear liquid polymer containing boron was added. The mixture was heated to 1,100 °C (2,010 °F). The result is crushed into a fine powder, dispersed again in toluene and sprayed in a thin coat on a copper surface. The coating absorbed 97.5 percent of the light from a far-infrared laser and tolerated 15 kilowatts per square centimeter for 10 seconds. Damage tolerance is about 50 percent higher than for similar coatings, e.g., nanotubes alone and carbon paint. ==== Radar absorption ==== Radars work in the microwave frequency range, which can be absorbed by MWNTs. Applying the MWNTs to the aircraft would cause the radar to be absorbed and therefore seem to have a smaller radar cross-section. One such application could be to paint the nanotubes onto the plane. Recently there has been some work done at the University of Michigan regarding carbon nanotubes' usefulness as stealth technology on aircraft. It has been found that in addition to the radar absorbing properties, the nanotubes neither reflect nor scatter visible light, making it essentially invisible at night, much like painting current stealth aircraft black except much more effective. Current limitations in manufacturing, however, mean that the current production of nanotube-coated aircraft is not possible. One theory to overcome these current limitations is to cover small particles with the nanotubes and suspend the nanotube-covered particles in a medium such as paint, which can then be applied to a surface, like a stealth aircraft. 
In 2010, Lockheed Martin Corporation applied for a patent for just such a CNT based radar-absorbent material, which was reassigned and granted to Applied NanoStructure Solutions, LLC in 2012. Some believe that this material is incorporated in the F-35 Lightning II. == Microelectronics == Nanotube-based transistors, also known as carbon nanotube field-effect transistors (CNTFETs), have been made that operate at room temperature and that are capable of digital switching using a single electron. However, one major obstacle to the realization of nanotube electronics has been the lack of technology for mass production. In 2001, IBM researchers demonstrated how metallic nanotubes can be destroyed, leaving semiconducting ones behind for use as transistors. Their process is called "constructive destruction," which includes the automatic destruction of defective nanotubes on the wafer. This process, however, only gives control over the electrical properties on a statistical scale. SWNTs are attractive for transistors because of their low electron scattering and their bandgap. SWNTs are compatible with field-effect transistor (FET) architectures and high-k dielectrics. Progress has followed the CNT transistor's appearance in 1998, including a tunneling FET with a subthreshold swing of <60 mV per decade (2004), a radio (2007) and an FET with sub-10-nm channel length and a normalized current density of 2.41 mA μm−1 at 0.5 V, greater than that obtained for silicon devices. However, control of diameter, chirality, density and placement remains insufficient for commercial production. Less demanding devices of tens to thousands of SWNTs are more immediately practical. The use of CNT arrays per transistor increases output current and compensates for defects and chirality differences, improving device uniformity and reproducibility. For example, transistors using horizontally aligned CNT arrays achieved mobilities of 80 cm2 V−1 s−1, subthreshold slopes of 140 mV per decade and on/off ratios as high as 10^5. CNT film deposition methods enable conventional semiconductor fabrication of more than 10,000 CNT devices per chip. Printed CNT thin-film transistors (TFTs) are attractive for driving organic light-emitting diode displays, showing higher mobility than amorphous silicon (~1 cm2 V−1 s−1), and can be deposited by low-temperature, nonvacuum methods. Flexible CNT TFTs with a mobility of 35 cm2 V−1 s−1 and an on/off ratio of 6×10^6 were demonstrated. A vertical CNT FET showed sufficient current output to drive OLEDs at low voltage, enabling red-green-blue emission through a transparent CNT network. CNTs are under consideration for radio-frequency identification tags. Selective retention of semiconducting SWNTs during spin-coating and reduced sensitivity to adsorbates were demonstrated. The International Technology Roadmap for Semiconductors suggests that CNTs could replace Cu interconnects in integrated circuits, owing to their low scattering, high current-carrying capacity, and resistance to electromigration. For this, vias comprising tightly packed (>10^13 cm−2) metallic CNTs with low defect density and low contact resistance are needed. Recently, complementary metal-oxide-semiconductor (CMOS)-compatible 150-nm-diameter interconnects with a single CNT–contact hole resistance of 2.8 kOhm were demonstrated on full 200 mm-diameter wafers. Also, as a replacement for solder bumps, CNTs can function both as electrical leads and heat dissipaters for use in high-power amplifiers.
Last, a concept for a nonvolatile memory based on individual CNT crossbar electromechanical switches has been adapted for commercialization by patterning tangled CNT thin films as the functional elements. This required development of ultrapure CNT suspensions that can be spin-coated and processed in industrial clean room environments and are therefore compatible with CMOS processing standards. === Transistors === Carbon nanotube field-effect transistors (CNTFETs) can operate at room temperature and are capable of digital switching using a single electron. In 2013, a CNT logic circuit was demonstrated that could perform useful work. Major obstacles to nanotube-based microelectronics include the absence of technology for mass production, circuit density, positioning of individual electrical contacts, sample purity, control over length, chirality and desired alignment, thermal budget and contact resistance. One of the main challenges was regulating conductivity. Depending on subtle surface features, a nanotube may act as a conductor or as a semiconductor. Another way to make carbon nanotube transistors has been to use random networks of them. By doing so one averages all of their electrical differences and one can produce devices in large scale at the wafer level. This approach was first patented by Nanomix Inc. (date of original application June 2002). It was first published in the academic literature by the United States Naval Research Laboratory in 2003 through independent research work. This approach also enabled Nanomix to make the first transistor on a flexible and transparent substrate. Since the electron mean free path in SWCNTs can exceed 1 micrometer, long channel CNTFETs exhibit near-ballistic transport characteristics, resulting in high speeds. CNT devices are projected to operate in the frequency range of hundreds of gigahertz. Nanotubes can be grown on nanoparticles of magnetic metal (Fe, Co), which facilitates the production of electronic (spintronic) devices. In particular, control of the current through a field-effect transistor by a magnetic field has been demonstrated in such a single-tube nanostructure. ==== History ==== In 2001, IBM researchers demonstrated how metallic nanotubes can be destroyed, leaving semiconducting nanotubes for use as components. Using "constructive destruction", they destroyed defective nanotubes on the wafer. This process, however, only gives control over the electrical properties on a statistical scale. The potential of carbon nanotubes was demonstrated in 2003 when room-temperature ballistic transistors with ohmic metal contacts and high-k gate dielectric were reported, showing 20–30x higher ON current than state-of-the-art silicon MOSFETs. This presented an important advance in the field, as CNTs were shown to potentially outperform Si. At the time, a major challenge was ohmic metal contact formation. In this regard, palladium, a high-work-function metal, was shown to exhibit Schottky-barrier-free contacts to semiconducting nanotubes with diameters >1.7 nm. The first nanotube integrated memory circuit was made in 2004. One of the main challenges has been regulating the conductivity of nanotubes.
Depending on subtle surface features, a nanotube may act as a plain conductor or as a semiconductor. A fully automated method has, however, been developed to remove non-semiconductor tubes. In 2013, researchers demonstrated a Turing-complete prototype micrometer-scale computer. Carbon nanotube transistors as logic-gate circuits with densities comparable to modern CMOS technology have not yet been demonstrated. In 2014, networks of purified semiconducting carbon nanotubes were used as the active material in p-type thin film transistors. They were created by 3-D printing using inkjet or gravure methods on flexible substrates, including polyimide and polyethylene terephthalate (PET), and transparent substrates such as glass. These transistors reliably exhibit high mobilities (> 10 cm2 V−1 s−1) and ON/OFF ratios (> 1000) as well as threshold voltages below 5 V. They offer high current density and low power consumption as well as environmental stability and mechanical flexibility. Hysteresis in the current-voltage curves as well as variability in the threshold voltage remain to be solved. In 2015, researchers announced a new way to connect wires to SWNTs that makes it possible to continue shrinking the width of the wires without increasing electrical resistance. The advance was expected to shrink the contact point between the two materials to just 40 atoms in width and later less. The tubes align in regularly spaced rows on silicon wafers. Simulations indicated that designs could be optimized either for high performance or for low power consumption. Commercial devices were not expected until the 2020s. === Thermal management === Large structures of carbon nanotubes can be used for thermal management of electronic circuits. An approximately 1 mm-thick carbon nanotube layer was used as a special material to fabricate coolers; this material has very low density, weighing ~20 times less than a similar copper structure, while the cooling properties are similar for the two materials. Buckypaper has characteristics appropriate for use as a heat sink for chipboards, a backlight for LCD screens or as a Faraday cage. == Solar cells == One of the promising applications of single-walled carbon nanotubes (SWNTs) is their use in solar panels, due to their strong UV/Vis-NIR absorption characteristics. Research has shown that they can provide a sizable increase in efficiency, even in their current unoptimized state. Solar cells developed at the New Jersey Institute of Technology use a carbon nanotube complex, formed by a mixture of carbon nanotubes and carbon buckyballs (known as fullerenes), to form snake-like structures. Buckyballs trap electrons, but they can't make electrons flow. Add sunlight to excite the polymers, and the buckyballs will grab the electrons. Nanotubes, behaving like copper wires, will then be able to make the electrons or current flow. Additional research has been conducted on creating SWNT hybrid solar panels to increase the efficiency further. These hybrids are created by combining SWNTs with photo-excitable electron donors to increase the number of electrons generated. It has been found that the interaction between the photo-excited porphyrin and SWNT generates electron-hole pairs at the SWNT surfaces. This phenomenon has been observed experimentally, and contributes practically to an increase in efficiency of up to 8.5%. Nanotubes can potentially replace indium tin oxide as a transparent conductive film in solar cells to allow light to pass to the active layers and generate photocurrent.
CNTs in organic solar cells help reduce energy loss (carrier recombination) and enhance resistance to photooxidation. Photovoltaic technologies may someday incorporate CNT-silicon heterojunctions to leverage efficient multiple-exciton generation at p-n junctions formed within individual CNTs. In the nearer term, commercial photovoltaics may incorporate transparent SWNT electrodes. == Hydrogen storage == In addition to being able to store electrical energy, there has been some research into using carbon nanotubes to store hydrogen to be used as a fuel source. By taking advantage of the capillary effects of the small carbon nanotubes, it is possible to condense gases in high density inside single-walled nanotubes. This allows gases, most notably hydrogen (H2), to be stored at high densities without being condensed into a liquid. Potentially, this storage method could be used on vehicles in place of gas fuel tanks for a hydrogen-powered car. A current issue regarding hydrogen-powered vehicles is the on-board storage of the fuel. Current storage methods involve cooling and condensing the H2 gas to a liquid state for storage, which causes a loss of potential energy (25–45%) when compared to the energy associated with the gaseous state. Storage using SWNTs would allow one to keep the H2 in its gaseous state, thereby increasing the storage efficiency. This method allows for a volume-to-energy ratio slightly smaller than that of current gasoline-powered vehicles, allowing for a slightly lower but comparable range. An area of controversy and frequent experimentation regarding the storage of hydrogen by adsorption in carbon nanotubes is the efficiency with which this process occurs. The effectiveness of hydrogen storage is integral to its use as a primary fuel source, since hydrogen only contains about one-fourth the energy per unit volume of gasoline. Studies, however, show that the most important factor is the surface area of the materials used. Hence, activated carbon with a surface area of 2600 m2/g can store up to 5.8% w/w. In all these carbonaceous materials, hydrogen is stored by physisorption at 70–90 K. === Experimental capacity === One experiment sought to determine the amount of hydrogen stored in CNTs by utilizing elastic recoil detection analysis (ERDA). CNTs (primarily SWNTs) were synthesized via chemical vapor deposition (CVD) and subjected to a two-stage purification process including air oxidation and acid treatment, then formed into flat, uniform discs and exposed to pure, pressurized hydrogen at various temperatures. When the data was analyzed, it was found that the ability of CNTs to store hydrogen decreased as temperature increased. Moreover, the highest hydrogen concentration measured was ~0.18%; significantly lower than commercially viable hydrogen storage needs to be. A separate experimental work using a gravimetric method also revealed the maximum hydrogen uptake capacity of CNTs to be as low as 0.2%. In another experiment, CNTs were synthesized via CVD and their structure was characterized using Raman spectroscopy. Utilizing microwave digestion, the samples were exposed to different acid concentrations and different temperatures for various amounts of time in an attempt to find the optimum purification method for SWNTs of the diameter determined earlier. The purified samples were then exposed to hydrogen gas at various high pressures, and their adsorption by weight percent was plotted.
The data showed that hydrogen adsorption levels of up to 3.7% are possible with a very pure sample and under the proper conditions. It is thought that microwave digestion helps improve the hydrogen adsorption capacity of the CNTs by opening up the ends, allowing access to the inner cavities of the nanotubes. === Limitations on efficient hydrogen adsorption === The biggest obstacle to efficient hydrogen storage using CNTs is the purity of the nanotubes. To achieve maximum hydrogen adsorption, there must be minimum graphene, amorphous carbon, and metallic deposits in the nanotube sample. Current methods of CNT synthesis require a purification step. However, even with pure nanotubes, the adsorption capacity is only maximized under high pressures, which are undesirable in commercial fuel tanks. == Electronic components == Various companies are developing transparent, electrically conductive CNT films and nanobuds to replace indium tin oxide (ITO) in LCDs, touch screens and photovoltaic devices. Nanotube films show promise for use in displays for computers, cell phones, Personal digital assistants, and automated teller machines. CNT diodes display a photovoltaic effect. Multi-walled nanotubes (MWNT coated with magnetite) can generate strong magnetic fields. Recent advances show that MWNT decorated with maghemite nanoparticles can be oriented in a magnetic field and enhance the electrical properties of the composite material in the direction of the field for use in electric motor brushes. A layer of 29% iron enriched single-walled nanotubes (SWNT) placed on top of a layer of explosive material such as PETN can be ignited with a regular camera flash. CNTs can be used as electron guns in miniature cathode ray tubes (CRT) in high-brightness, low-energy, low-weight displays. A display would consist of a group of tiny CRTs, each providing the electrons to illuminate the phosphor of one pixel, instead of having one CRT whose electrons are aimed using electric and magnetic fields. These displays are known as field emission displays (FEDs). CNTs can act as antennas for radios and other electromagnetic devices. Conductive CNTs are used in brushes for commercial electric motors. They replace traditional carbon black. The nanotubes improve electrical and thermal conductivity because they stretch through the plastic matrix of the brush. This permits the carbon filler to be reduced from 30% down to 3.6%, so that more matrix is present in the brush. Nanotube composite motor brushes are better-lubricated (from the matrix), cooler-running (both from better lubrication and superior thermal conductivity), less brittle (more matrix, and fiber reinforcement), stronger and more accurately moldable (more matrix). Since brushes are a critical failure point in electric motors, and also don't need much material, they became economical before almost any other application. Wires for carrying electric current may be fabricated from nanotubes and nanotube-polymer composites. Small wires have been fabricated with specific conductivity exceeding copper and aluminum; the highest conductivity non-metallic cables. For example, carbon nanotubes guarantee electrical conductivity of 100 MS/m, greater than aluminum or copper, which have conductivities of 35 and 59.6 MS/m, respectively. CNT are under investigation as an alternative to tungsten filaments in incandescent light bulbs. 
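To put the conductivity figures quoted just above (100 MS/m claimed for carbon nanotube wires, versus 59.6 MS/m for copper and 35 MS/m for aluminum) into practical terms, the short Python sketch below applies R = L/(σA) to a 1 m length of 1 mm2 wire; the wire geometry is an illustrative assumption, while the conductivities are the values quoted in the text.

# Resistance of a 1 m long, 1 mm^2 wire, R = L / (sigma * A), using the
# conductivities quoted in the text above (in MS/m). Geometry is illustrative only.
conductivities_MS_per_m = {
    "CNT wire (quoted figure)": 100.0,
    "copper": 59.6,
    "aluminum": 35.0,
}
length_m = 1.0
area_m2 = 1.0e-6  # 1 mm^2 cross-section
for material, sigma_MS in conductivities_MS_per_m.items():
    resistance_ohm = length_m / (sigma_MS * 1e6 * area_m2)
    print(f"{material:>25s}: {resistance_ohm * 1e3:.1f} milliohm per metre")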
=== Interconnects === Metallic carbon nanotubes have aroused research interest for their applicability as very-large-scale integration (VLSI) interconnects because of their high thermal stability, high thermal conductivity and large current carrying capacity. An isolated CNT can carry current densities in excess of 1000 MA/cm2 without damage even at an elevated temperature of 250 °C (482 °F), eliminating electromigration reliability concerns that plague Cu interconnects. Recent modeling work comparing the two has shown that CNT bundle interconnects can potentially offer advantages over copper. Recent experiments demonstrated resistances as low as 20 ohms using different architectures; detailed conductance measurements over a wide temperature range were shown to agree with theory for a strongly disordered quasi-one-dimensional conductor. Hybrid interconnects that employ CNT vias in tandem with copper interconnects may offer advantages from a reliability/thermal-management perspective. In 2016, the European Union funded a four-million-euro project over three years to evaluate the manufacturability and performance of composite interconnects employing both CNT and copper interconnects. The project, named CONNECT (CarbON Nanotube compositE InterconneCTs), involves the joint efforts of seven European research and industry partners on fabrication techniques and processes to enable reliable carbon nanotubes for on-chip interconnects in ULSI microchip production. === Electrical cables and wires === Wires for carrying electric current may be fabricated from pure nanotubes and nanotube-polymer composites. It has already been demonstrated that carbon nanotube wires can successfully be used for power or data transmission. Recently, small wires have been fabricated with specific conductivity exceeding copper and aluminum; these are the highest-conductivity carbon nanotube cables and also the highest-conductivity non-metal cables. Recently, composites of carbon nanotube and copper have been shown to exhibit nearly one hundred times higher current-carrying capacity than pure copper or gold. Significantly, the electrical conductivity of such a composite is similar to that of pure Cu. Thus, this carbon nanotube-copper (CNT-Cu) composite possesses the highest observed current-carrying capacity among electrical conductors. For a given cross-section of electrical conductor, the CNT-Cu composite can withstand and transport one hundred times higher current compared to metals such as copper and gold. == Energy storage == The use of CNTs as a catalyst support in fuel cells can potentially reduce platinum usage by 60% compared with carbon black. Doped CNTs may enable the complete elimination of Pt. === Supercapacitor === The MIT Research Laboratory of Electronics uses nanotubes to improve supercapacitors. The activated charcoal used in conventional ultracapacitors has many small hollow spaces of various sizes, which together create a large surface to store electric charge. But as charge is quantized into elementary charges, i.e. electrons, and each such elementary charge needs a minimum space, a significant fraction of the electrode surface is not available for storage because the hollow spaces are not compatible with the charge's requirements. With a nanotube electrode the spaces can be tailored to size, with few being too large or too small, and consequently the capacity should be increased considerably.
A 40 F supercapacitor with a maximum voltage of 3.5 V, employing binder- and additive-free forest-grown SWNTs, achieved an energy density of 15.6 Wh kg−1 and a power density of 37 kW kg−1. CNTs can be bound to the charge plates of capacitors to dramatically increase the surface area and therefore energy density. === Batteries === The electronic properties of carbon nanotubes (CNTs) have shown promise in the field of batteries, where they are typically being investigated as a new electrode material, particularly the anode for lithium-ion batteries. This is because the anode requires a relatively high reversible capacity at a potential close to metallic lithium, and a moderate irreversible capacity, observed thus far only in graphite-based composites, such as CNTs. CNTs have been shown to greatly improve the capacity and cyclability of lithium-ion batteries, as well as to act as very effective buffering components, alleviating the degradation of the batteries that is typically due to repeated charging and discharging. Further, electronic transport in the anode can be greatly improved using highly metallic CNTs. More specifically, CNTs have shown reversible capacities from 300 to 600 mAh g−1, with some treatments raising these figures to as much as 1000 mAh g−1. Meanwhile, graphite, which is most widely used as an anode material for these lithium batteries, has shown capacities of only 320 mAh g−1. By creating composites out of the CNTs, scientists see much potential in taking advantage of these exceptional capacities, as well as their excellent mechanical strength, conductivities, and low densities. MWNTs are used in lithium-ion battery cathodes. In these batteries, small amounts of MWNT powder are blended with active materials and a polymer binder, such as 1 wt % CNT loading in LiCoO2 cathodes and graphite anodes. CNTs provide increased electrical connectivity and mechanical integrity, which enhances rate capability and cycle life. ==== Paper batteries ==== A paper battery is a battery engineered to use a paper-thin sheet of cellulose (which is the major constituent of regular paper, among other things) infused with aligned carbon nanotubes. The potential for these devices is great, as they may be manufactured via a roll-to-roll process, which would make them very low-cost, and they would be lightweight, flexible, and thin. In order to productively use paper electronics (or any thin electronic devices), the power source must be equally thin, thus indicating the need for paper batteries. Recently, it has been shown that surfaces coated with CNTs can be used to replace heavy metals in batteries. More recently, functional paper batteries have been demonstrated, where a lithium-ion battery is integrated on a single sheet of paper through a lamination process as a composite with Li4Ti5O12 (LTO) or LiCoO2 (LCO). The paper substrate functions well as the separator for the battery, while the CNT films function as the current collectors for both the anode and the cathode. These rechargeable energy devices show potential in RFID tags, functional packaging, or new disposable electronic applications. Improvements have also been shown in lead-acid batteries, based on research performed by Bar-Ilan University using high-quality SWCNTs manufactured by OCSiAl. The study demonstrated an increase in the lifetime of lead-acid batteries by 4.5 times and a capacity increase of 30% on average and up to 200% at high discharge rates.
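As a consistency check on the forest-grown SWNT supercapacitor figures quoted at the start of this section (40 F, 3.5 V maximum, 15.6 Wh kg−1, 37 kW kg−1), the following minimal Python sketch applies the standard capacitor energy relation E = (1/2)CV^2; the implied active mass and peak power are derived estimates, not figures from the article.

# Consistency check for the 40 F / 3.5 V SWNT supercapacitor figures above.
capacitance_f = 40.0
max_voltage_v = 3.5
energy_j = 0.5 * capacitance_f * max_voltage_v ** 2   # E = 1/2 C V^2
energy_wh = energy_j / 3600.0                         # joules -> watt-hours

quoted_energy_density_wh_kg = 15.6
quoted_power_density_kw_kg = 37.0
implied_mass_kg = energy_wh / quoted_energy_density_wh_kg            # derived estimate
implied_peak_power_w = quoted_power_density_kw_kg * 1e3 * implied_mass_kg

print(f"Stored energy: {energy_j:.0f} J (~{energy_wh * 1000:.0f} mWh)")
print(f"Implied active mass: ~{implied_mass_kg * 1000:.1f} g")
print(f"Implied peak power: ~{implied_peak_power_w:.0f} W")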
== Chemical == CNT can be used for water transport and desalination. Water molecules can be separated from salt by forcing them through electrochemically robust nanotube networks with controlled nanoscale porosity. This process requires far lower pressures than conventional reverse osmosis methods. Compared to a plain membrane, it operates at a 20 °C lower temperature, and at a 6x greater flow rate. Membranes using aligned, encapsulated CNTs with open ends permit flow through the CNTs' interiors. Very-small-diameter SWNTs are needed to reject salt at seawater concentrations. Portable filters containing CNT meshes can purify contaminated drinking water. Such networks can electrochemically oxidize organic contaminants, bacteria and viruses. CNT membranes can filter carbon dioxide from power plant emissions. CNT can be filled with biological molecules, aiding biotechnology. CNT have the potential to store between 4.2 and 65% hydrogen by weight. If they can be mass-produced economically, 13.2 litres (2.9 imp gal; 3.5 US gal) of CNT could contain the same amount of energy as a 50 litres (11 imp gal; 13 US gal) gasoline tank. CNTs can be used to produce nanowires of other elements/molecules, such as gold or zinc oxide. Nanowires in turn can be used to cast nanotubes of other materials, such as gallium nitride. These can have very different properties from CNTs—for example, gallium nitride nanotubes are hydrophilic, while CNTs are hydrophobic, giving them possible uses in organic chemistry. == Mechanical == Oscillators based on CNT have achieved speeds of > 50 GHz. CNT electrical and mechanical properties suggest them as alternatives to traditional electrical actuators. === Actuators === The exceptional electrical and mechanical properties of carbon nanotubes have made them alternatives to the traditional electrical actuators for both microscopic and macroscopic applications. Carbon nanotubes are very good conductors of both electricity and heat, and they are also very strong and elastic molecules in certain directions. === Loudspeaker === Carbon nanotubes have also been applied in the acoustics (such as loudspeaker and earphone). In 2008, it was shown that a sheet of nanotubes can operate as a loudspeaker if an alternating current is applied. The sound is not produced through vibration but thermoacoustically. In 2013, a carbon nanotube (CNT) thin yarn thermoacoustic earphone together with CNT thin yarn thermoacoustic chip was demonstrated by a research group of Tsinghua-Foxconn Nanotechnology Research Center in Tsinghua University, using a Si-based semi-conducting technology compatible fabrication process. Near-term commercial uses include replacing piezoelectric speakers in greeting cards. == Optical == See additional applications in: Optical properties of carbon nanotubes Carbon nanotube photoluminescence (fluorescence) can be used to observe semiconducting single-walled carbon nanotube species. Photoluminescence maps, made by acquiring the emission and scanning the excitation energy, can facilitate sample characterization. Nanotube fluorescence is under investigation for biomedical imaging and sensors. The reflectivity of buckypaper produced with "super-growth" chemical vapor deposition is 0.03 or less, potentially enabling performance gains for pyroelectric infrared detectors. 
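The hydrogen-storage equivalence stated in the Chemical section above (13.2 litres of CNT matching the energy of a 50-litre gasoline tank) can be unpacked with simple arithmetic. The sketch below uses assumed standard values for gasoline's volumetric energy density (~34 MJ/L) and hydrogen's lower heating value (~120 MJ/kg); both constants are assumptions for illustration, not figures from this article.

# Back-of-envelope figures implied by the 13.2 L CNT vs 50 L gasoline claim above.
# Assumed constants (not from the article):
GASOLINE_MJ_PER_L = 34.0   # typical volumetric energy density of gasoline
H2_LHV_MJ_PER_KG = 120.0   # lower heating value of hydrogen

tank_volume_l = 50.0
cnt_volume_l = 13.2

tank_energy_mj = tank_volume_l * GASOLINE_MJ_PER_L
implied_mj_per_l_of_cnt = tank_energy_mj / cnt_volume_l
equivalent_h2_mass_kg = tank_energy_mj / H2_LHV_MJ_PER_KG

print(f"Energy in a {tank_volume_l:.0f} L gasoline tank: ~{tank_energy_mj:.0f} MJ")
print(f"Implied storage density: ~{implied_mj_per_l_of_cnt:.0f} MJ per litre of CNT")
print(f"Equivalent hydrogen mass: ~{equivalent_h2_mass_kg:.1f} kg of H2")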
== Environmental == === Environmental remediation === CNT nano-structured sponges (nanosponges) containing sulfur and iron are effective at soaking up water contaminants such as oil, fertilizers, pesticides and pharmaceuticals. Their magnetic properties make them easier to retrieve once the clean-up job is done. The sulfur and iron increase sponge size to around 2 centimetres (0.79 in). They also increase porosity due to beneficial defects, creating buoyancy and reusability. Iron, in the form of ferrocene, makes the structure easier to control and enables recovery using magnets. Such nanosponges increase the absorption of the toxic organic solvent dichlorobenzene from water by 3.5 times. The sponges can absorb vegetable oil up to 150 times their initial weight and can absorb engine oil as well. Earlier work produced a magnetic boron-doped MWNT nanosponge that could absorb oil from water. The sponge was grown as a forest on a substrate via chemical vapor deposition. Boron puts kinks and elbows into the tubes as they grow and promotes the formation of covalent bonds. The nanosponges retain their elastic property after 10,000 compressions in the lab. The sponges are both superhydrophobic, keeping them at the water's surface, and oleophilic, drawing oil to them. === Water treatment === It has been shown that carbon nanotubes exhibit strong adsorption affinities for a wide range of aromatic and aliphatic contaminants in water, due to their large and hydrophobic surface areas. They also showed adsorption capacities similar to those of activated carbons in the presence of natural organic matter. As a result, they have been suggested as promising adsorbents for the removal of contaminants in water and wastewater treatment systems. Moreover, membranes made out of carbon nanotube arrays have been suggested as switchable molecular sieves, with sieving and permeation features that can be dynamically activated/deactivated by either pore size distribution (passive control) or external electrostatic fields (active control). == Other applications == Carbon nanotubes have been implemented in nanoelectromechanical systems, including mechanical memory elements (NRAM being developed by Nantero Inc.) and nanoscale electric motors (see Nanomotor or Nanotube nanomotor). Carboxyl-modified single-walled carbon nanotubes (so-called zig-zag and armchair types) can act as sensors of atoms and ions of the alkali metals Na, Li, and K. In May 2005, Nanomix Inc. placed on the market a hydrogen sensor that integrated carbon nanotubes on a silicon platform. Eikos Inc. of Franklin, Massachusetts and Unidym Inc. of Silicon Valley, California are developing transparent, electrically conductive films of carbon nanotubes to replace indium tin oxide (ITO). Carbon nanotube films are substantially more mechanically robust than ITO films, making them ideal for high-reliability touchscreens and flexible displays. Printable water-based inks of carbon nanotubes are desired to enable the production of these films to replace ITO. Nanotube films show promise for use in displays for computers, cell phones, PDAs, and ATMs. A nanoradio, a radio receiver consisting of a single nanotube, was demonstrated in 2007. The use in tensile stress or toxic gas sensors was proposed by Tsagarakis. A flywheel made of carbon nanotubes could be spun at extremely high velocity on a floating magnetic axis in a vacuum, and potentially store energy at a density approaching that of conventional fossil fuels.
Since energy can be added to and removed from flywheels very efficiently in the form of electricity, this might offer a way of storing electricity, making the electrical grid more efficient and variable power suppliers (like wind turbines) more useful in meeting energy needs. The practicality of this depends heavily upon the cost of making massive, unbroken nanotube structures, and their failure rate under stress. Carbon nanotube springs have the potential to indefinitely store elastic potential energy at ten times the density of lithium-ion batteries with flexible charge and discharge rates and extremely high cycling durability. Ultra-short SWNTs (US-tubes) have been used as nanoscaled capsules for delivering MRI contrast agents in vivo. Carbon nanotubes provide a certain potential for metal-free catalysis of inorganic and organic reactions. For instance, oxygen groups attached to the surface of carbon nanotubes have the potential to catalyze oxidative dehydrogenations or selective oxidations. Nitrogen-doped carbon nanotubes may replace platinum catalysts used to reduce oxygen in fuel cells. A forest of vertically aligned nanotubes can reduce oxygen in alkaline solution more effectively than platinum, which has been used in such applications since the 1960s. Here, the nanotubes have the added benefit of not being subject to carbon monoxide poisoning. Wake Forest University engineers are using multiwalled carbon nanotubes to enhance the brightness of field-induced polymer electroluminescent technology, potentially offering a step forward in the search for safe, pleasing, high-efficiency lighting. In this technology, a moldable polymer matrix emits light when exposed to an electric current. It could eventually yield high-efficiency lights without the mercury vapor of compact fluorescent lamps or the bluish tint of some fluorescents and LEDs, which has been linked with circadian rhythm disruption. Candida albicans has been used in combination with carbon nanotubes (CNT) to produce stable electrically conductive bio-nano-composite tissue materials that have been used as temperature sensing elements. The SWNT production company OCSiAl developed a series of masterbatches for industrial use of single-wall CNTs in multiple types of rubber blends and tires, with initial trials showing increases in hardness, viscosity, tensile strain resistance and resistance to abrasion while reducing elongation and compression. In tires, the three primary characteristics of durability, fuel efficiency and traction were improved using SWNTs. The development of rubber masterbatches built on earlier work by the Japanese National Institute of Advanced Industrial Science & Technology showing rubber to be a viable candidate for improvement with SWNTs. Introducing MWNTs to polymers can improve flame retardancy and retard thermal degradation of the polymer. The results confirmed that the combination of MWNTs and ammonium polyphosphates shows a synergistic effect for improving flame retardancy. == References == == External links == Lecture by Ray Baughman on YouTube Applications of Carbon Nanotubes
Wikipedia/Potential_applications_of_carbon_nanotubes
Energy-dispersive X-ray spectroscopy (EDS, EDX, EDXS or XEDS), sometimes called energy dispersive X-ray analysis (EDXA or EDAX) or energy dispersive X-ray microanalysis (EDXMA), is an analytical technique used for the elemental analysis or chemical characterization of a sample. It relies on an interaction of some source of X-ray excitation and a sample. Its characterization capabilities are due in large part to the fundamental principle that each element has a unique atomic structure allowing a unique set of peaks on its electromagnetic emission spectrum (which is the main principle of spectroscopy). The peak positions are predicted by Moseley's law with an accuracy much better than the experimental resolution of a typical EDX instrument. To stimulate the emission of characteristic X-rays from a specimen, a beam of electrons or X-rays is focused into the sample being studied. At rest, an atom within the sample contains ground state (or unexcited) electrons in discrete energy levels or electron shells bound to the nucleus. The incident beam may excite an electron in an inner shell, ejecting it from the shell while creating an electron hole where the electron was. An electron from an outer, higher-energy shell then fills the hole, and the difference in energy between the higher-energy shell and the lower-energy shell may be released in the form of an X-ray. The number and energy of the X-rays emitted from a specimen can be measured by an energy-dispersive spectrometer. As the energies of the X-rays are characteristic of the difference in energy between the two shells and of the atomic structure of the emitting element, EDS allows the elemental composition of the specimen to be measured. == Equipment == The four primary components of the EDS setup are the excitation source (electron beam or X-ray beam), the X-ray detector, the pulse processor, and the analyzer. Electron beam excitation is used in electron microscopes, scanning electron microscopes (SEM) and scanning transmission electron microscopes (STEM). X-ray beam excitation is used in X-ray fluorescence (XRF) spectrometers. A detector is used to convert X-ray energy into voltage signals; this information is sent to a pulse processor, which measures the signals and passes them on to an analyzer for data display and analysis. The most common detector used to be a Si(Li) detector cooled to cryogenic temperatures with liquid nitrogen. Now, newer systems are often equipped with silicon drift detectors (SDD) with Peltier cooling systems. == Hazards and safety == High Voltage: SEM-EDX operates at high voltages (typically several kilovolts), which can pose a risk of electric shock. X-ray Radiation: While SEM-EDX does not use as high a voltage as some X-ray techniques, it still produces X-rays that can be harmful with prolonged exposure. Proper shielding and safety measures are necessary. Sample Preparation: Handling and preparation of samples can involve hazardous chemicals or materials. Proper personal protective equipment (PPE) should be used. Vacuum System: The vacuum system used in SEM-EDX can implode if not properly maintained, leading to potential hazards. Cryogenic Hazards: Some samples may require cryogenic techniques for analysis, which can pose risks of cold burns or asphyxiation if not appropriately handled. Mechanical Hazards: If used incorrectly, moving parts in the SEM can cause injury. Fire and Explosion Risks: Some samples, particularly those involving flammable materials, can pose fire or explosion risks under vacuum conditions.
Ergonomic Risks: Prolonged use of SEM-EDX can lead to ergonomic hazards if the workstation is not correctly set up for the user's comfort and safety. == Technological variants == The excess energy of the electron that migrates to an inner shell to fill the newly created hole can do more than emit an X-ray. Often, instead of X-ray emission, the excess energy is transferred to a third electron from a further outer shell, prompting its ejection. This ejected species is called an Auger electron, and the method for its analysis is known as Auger electron spectroscopy (AES). X-ray photoelectron spectroscopy (XPS) is another close relative of EDS, utilizing ejected electrons in a manner similar to that of AES. Information on the quantity and kinetic energy of ejected electrons is used to determine the binding energy of these now-liberated electrons, which is element-specific and allows chemical characterization of a sample. EDS is often contrasted with its spectroscopic counterpart, wavelength dispersive X-ray spectroscopy (WDS). WDS differs from EDS in that it uses the diffraction of X-rays on special crystals to separate its raw data into spectral components (wavelengths). WDS has a much finer spectral resolution than EDS. WDS also avoids the problems associated with artifacts in EDS (false peaks, noise from the amplifiers, and microphonics). A high-energy beam of charged particles such as electrons or protons can be used to excite a sample rather than X-rays. This is called particle-induced X-ray emission or PIXE. == Accuracy == EDS can be used to determine which chemical elements are present in a sample, and can be used to estimate their relative abundance. EDS also helps to measure multi-layer coating thickness of metallic coatings and analysis of various alloys. The accuracy of this quantitative analysis of sample composition is affected by various factors. Many elements will have overlapping X-ray emission peaks (e.g., Ti Kβ and V Kα, Mn Kβ and Fe Kα). The accuracy of the measured composition is also affected by the nature of the sample. X-rays are generated by any atom in the sample that is sufficiently excited by the incoming beam. These X-rays are emitted in all directions (isotropically), and so they may not all escape the sample. The likelihood of an X-ray escaping the specimen, and thus being available to detect and measure, depends on the energy of the X-ray and the composition, amount, and density of material it has to pass through to reach the detector. Because of this X-ray absorption effect and similar effects, accurate estimation of the sample composition from the measured X-ray emission spectrum requires the application of quantitative correction procedures, which are sometimes referred to as matrix corrections. == Emerging technology == There is a trend towards a newer EDS detector, called the silicon drift detector (SDD). The SDD consists of a high-resistivity silicon chip where electrons are driven to a small collecting anode. The advantage lies in the extremely low capacitance of this anode, thereby utilizing shorter processing times and allowing very high throughput. Benefits of the SDD include: High count rates and processing, Better resolution than traditional Si(Li) detectors at high count rates, Lower dead time (time spent on processing X-ray event), Faster analytical capabilities and more precise X-ray maps or particle data collected in seconds, Ability to be stored and operated at relatively high temperatures, eliminating the need for liquid nitrogen cooling. 
Because the capacitance of the SDD chip is independent of the active area of the detector, much larger SDD chips can be utilized (40 mm2 or more). This allows for even higher count rate collection. Further benefits of large area chips include: Minimizing SEM beam current allowing for optimization of imaging under analytical conditions, Reduced sample damage and Smaller beam interaction and improved spatial resolution for high speed maps. Where the X-ray energies of interest are in excess of ~ 30 keV, traditional silicon-based technologies suffer from poor quantum efficiency due to a reduction in the detector stopping power. Detectors produced from high density semiconductors such as cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) have improved efficiency at higher X-ray energies and are capable of room temperature operation. Single element systems, and more recently pixelated imaging detectors such as the high energy X-ray imaging technology (HEXITEC) system, are capable of achieving energy resolutions of the order of 1% at 100 keV. In recent years, a different type of EDS detector, based upon a superconducting microcalorimeter, has also become commercially available. This new technology combines the simultaneous detection capabilities of EDS with the high spectral resolution of WDS. The EDS microcalorimeter consists of two components: an absorber, and a superconducting transition-edge sensor (TES) thermometer. The former absorbs X-rays emitted from the sample and converts this energy into heat; the latter measures the subsequent change in temperature due to the influx of heat. The EDS microcalorimeter has historically suffered from a number of drawbacks, including low count rates and small detector areas. The count rate is hampered by its reliance on the time constant of the calorimeter's electrical circuit. The detector area must be small in order to keep the heat capacity small and maximize thermal sensitivity (resolution). However, the count rate and detector area have been improved by the implementation of arrays of hundreds of superconducting EDS microcalorimeters, and the importance of this technology is growing. == See also == Elemental mapping Scanning electron microscopy Transmission electron microscopy X-ray microtomography == References == == External links == MICROANALYST.NET – Information portal with X-ray microanalysis and EDX contents Learn how to do EDS in an SEM – an interactive learning environment provided by Microscopy Australia
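As a small numerical illustration of the Moseley's-law relationship mentioned in the introduction of this article, the Python sketch below estimates Kα line energies for a few transition metals and shows why lines of neighbouring elements can overlap in an EDS spectrum (the Ti/V and Mn/Fe cases noted in the Accuracy section). The simplified formula E_Kα ≈ 10.2 eV × (Z − 1)² and the element list are standard textbook approximations used here for illustration, not values taken from this article.

# Approximate K-alpha X-ray energies from Moseley's law:
#   E_Kalpha ~ 13.6 eV * (Z - 1)^2 * (1 - 1/4) ~ 10.2 eV * (Z - 1)^2
# This is a textbook approximation, shown here to illustrate why peaks of
# neighbouring elements lie close together in an EDS spectrum.

def k_alpha_kev(atomic_number: int) -> float:
    """Rough Moseley's-law estimate of the K-alpha energy in keV."""
    return 10.2 * (atomic_number - 1) ** 2 / 1000.0

for symbol, z in {"Ti": 22, "V": 23, "Mn": 25, "Fe": 26}.items():
    print(f"{symbol} (Z={z}): K-alpha ~ {k_alpha_kev(z):.2f} keV")
# Typical EDS detectors resolve roughly 0.13 keV, so lines separated by only
# ~0.1 keV (e.g. Fe K-alpha next to Mn K-beta) can overlap, as noted above.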
Wikipedia/Energy-dispersive_X-ray_spectroscopy
Instrumentation and control engineering (ICE) is a branch of engineering that studies the measurement and control of process variables, and the design and implementation of systems that incorporate them. Process variables include pressure, temperature, humidity, flow, pH, force and speed. ICE combines two branches of engineering. Instrumentation engineering is the science of the measurement and control of process variables within a production or manufacturing area. Meanwhile, control engineering, also called control systems engineering, is the engineering discipline that applies control theory to design systems with desired behaviors. Control engineers are responsible for the research, design, and development of control devices and systems, typically in manufacturing facilities and process plants. Control methods employ sensors to measure the output variable of the device and provide feedback to the controller so that it can make corrections toward desired performance. Automatic control manages a device without the need of human inputs for correction, such as cruise control for regulating a car's speed. Control systems engineering activities are multi-disciplinary in nature. They focus on the implementation of control systems, mainly derived by mathematical modeling. Because instrumentation and control play a significant role in gathering information from a system and changing its parameters, they are a key part of control loops. == As profession == High demand for engineering professionals is found in fields associated with process automation. Specializations include industrial instrumentation, system dynamics, process control, and control systems. Additionally, technological knowledge, particularly in computer systems, is essential to the job of an instrumentation and control engineer; important technology-related topics include human–computer interaction, programmable logic controllers, and SCADA. The tasks center around designing, developing, maintaining and managing control systems. The goals of the work of an instrumentation and control engineer are to maximize: Productivity Optimization Stability Reliability Safety Continuity == As academic discipline == Instrumentation and control engineering is a vital field of study offered at many universities worldwide at both the graduate and postgraduate levels. This discipline integrates principles from various branches of engineering, providing a comprehensive understanding of the design, analysis, and management of automated systems. Typical coursework for this discipline includes, but is not limited to, subjects such as control system design, instrumentation fundamentals, process control, sensors and signal processing, automation, robotics, and industrial data communications. Advanced courses may delve into topics like intelligent control systems, digital signal processing, and embedded systems design. Students often have the opportunity to engage in hands-on laboratory work and industry-relevant projects, which foster practical skills alongside theoretical knowledge. These experiences are crucial in preparing graduates for careers in diverse sectors including manufacturing, power generation, oil and gas, and healthcare, where they may design and maintain systems that automate processes, improve efficiency, and enhance safety. Interdisciplinary by nature, the field is accessible to students from various engineering backgrounds. 
Most commonly, students with a foundation in Electrical Engineering and Mechanical Engineering are drawn to this field due to their strong base in control systems, system dynamics, electro-mechanical machines and devices, and electric circuits (course work). However, with the growing complexity and integration of systems, students from fields like computer engineering, chemical engineering, and even biomedical engineering are increasingly contributing to and benefiting from studies in instrumentation and control engineering. Furthermore, the rapid advancement of technology in areas like the Internet of Things (IoT), artificial intelligence (AI), and machine learning is continuously shaping the curriculum of this discipline, making it an ever-evolving and dynamic field of study. == See also == Industrial system Instrumentation in petrochemical industries List of sensors Metrology Measurement Programmable logic controller International Society of Automation == References == == External links == Industrial Instrumentation and Controls Technology Alliance "Instrumentation and Control".
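To make the feedback-control idea described in this article concrete (a sensor measures the output variable and the controller corrects toward a setpoint, as in the cruise-control example above), here is a minimal proportional-integral control loop sketched in Python; the toy vehicle model, gains and setpoint are illustrative assumptions, not part of any real cruise-control design.

# Minimal proportional-integral (PI) feedback loop, illustrating the
# cruise-control example above. Plant model and gains are assumed toy values.
dt = 0.1                # controller time step, seconds
setpoint = 25.0         # desired speed, m/s
kp, ki = 0.8, 0.2       # proportional and integral gains (assumed)

speed = 0.0             # measured output variable (vehicle speed, m/s)
integral = 0.0          # accumulated error for the integral term

for _ in range(300):    # simulate 30 seconds
    error = setpoint - speed               # compare measurement with setpoint
    integral += error * dt
    throttle = kp * error + ki * integral  # controller output (actuator command)
    # Toy first-order vehicle model: throttle accelerates, drag slows the car.
    acceleration = 0.5 * throttle - 0.05 * speed
    speed += acceleration * dt             # the sensor reads this value next cycle

print(f"Speed after 30 s: {speed:.2f} m/s (setpoint {setpoint} m/s)")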
Wikipedia/Instrumentation_and_control_engineering
A DNA microarray (also commonly known as DNA chip or biochip) is a collection of microscopic DNA spots attached to a solid surface. Scientists use DNA microarrays to measure the expression levels of large numbers of genes simultaneously or to genotype multiple regions of a genome. Each DNA spot contains picomoles (10−12 moles) of a specific DNA sequence, known as probes (or reporters or oligos). These can be a short section of a gene or other DNA element that are used to hybridize a cDNA or cRNA (also called anti-sense RNA) sample (called target) under high-stringency conditions. Probe-target hybridization is usually detected and quantified by detection of fluorophore-, silver-, or chemiluminescence-labeled targets to determine relative abundance of nucleic acid sequences in the target. The original nucleic acid arrays were macro arrays approximately 9 cm × 12 cm and the first computerized image based analysis was published in 1981. It was invented by Patrick O. Brown. An example of its application is in SNPs arrays for polymorphisms in cardiovascular diseases, cancer, pathogens and GWAS analysis. It is also used for the identification of structural variations and the measurement of gene expression. == Principle == The core principle behind microarrays is hybridization between two DNA strands, the property of complementary nucleic acid sequences to specifically pair with each other by forming hydrogen bonds between complementary nucleotide base pairs. A high number of complementary base pairs in a nucleotide sequence means tighter non-covalent bonding between the two strands. After washing off non-specific bonding sequences, only strongly paired strands will remain hybridized. Fluorescently labeled target sequences that bind to a probe sequence generate a signal that depends on the hybridization conditions (such as temperature), and washing after hybridization. Total strength of the signal, from a spot (feature), depends upon the amount of target sample binding to the probes present on that spot. Microarrays use relative quantitation in which the intensity of a feature is compared to the intensity of the same feature under a different condition, and the identity of the feature is known by its position. == Uses and types == Many types of arrays exist and the broadest distinction is whether they are spatially arranged on a surface or on coded beads: The traditional solid-phase array is a collection of orderly microscopic "spots", called features, each with thousands of identical and specific probes attached to a solid surface, such as glass, plastic or silicon biochip (commonly known as a genome chip, DNA chip or gene array). Thousands of these features can be placed in known locations on a single DNA microarray. The alternative bead array is a collection of microscopic polystyrene beads, each with a specific probe and a ratio of two or more dyes, which do not interfere with the fluorescent dyes used on the target sequence. DNA microarrays can be used to detect DNA (as in comparative genomic hybridization), or detect RNA (most commonly as cDNA after reverse transcription) that may or may not be translated into proteins. The process of measuring gene expression via cDNA is called expression analysis or expression profiling. Applications include: Specialised arrays tailored to particular crops are becoming increasingly popular in molecular breeding applications. 
In the future they could be used to screen seedlings at early stages to lower the number of unneeded seedlings tried out in breeding operations. === Fabrication === Microarrays can be manufactured in different ways, depending on the number of probes under examination, costs, customization requirements, and the type of scientific question being asked. Arrays from commercial vendors may have as few as 10 probes or as many as 5 million or more micrometre-scale probes. === Spotted vs. in situ synthesised arrays === Microarrays can be fabricated using a variety of technologies, including printing with fine-pointed pins onto glass slides, photolithography using pre-made masks, photolithography using dynamic micromirror devices, ink-jet printing, or electrochemistry on microelectrode arrays. In spotted microarrays, the probes are oligonucleotides, cDNA or small fragments of PCR products that correspond to mRNAs. The probes are synthesized prior to deposition on the array surface and are then "spotted" onto glass. A common approach utilizes an array of fine pins or needles controlled by a robotic arm that is dipped into wells containing DNA probes and then depositing each probe at designated locations on the array surface. The resulting "grid" of probes represents the nucleic acid profiles of the prepared probes and is ready to receive complementary cDNA or cRNA "targets" derived from experimental or clinical samples. This technique is used by research scientists around the world to produce "in-house" printed microarrays in their own labs. These arrays may be easily customized for each experiment, because researchers can choose the probes and printing locations on the arrays, synthesize the probes in their own lab (or collaborating facility), and spot the arrays. They can then generate their own labeled samples for hybridization, hybridize the samples to the array, and finally scan the arrays with their own equipment. This provides a relatively low-cost microarray that may be customized for each study, and avoids the costs of purchasing often more expensive commercial arrays that may represent vast numbers of genes that are not of interest to the investigator. Publications exist which indicate in-house spotted microarrays may not provide the same level of sensitivity compared to commercial oligonucleotide arrays, possibly owing to the small batch sizes and reduced printing efficiencies when compared to industrial manufactures of oligo arrays. In oligonucleotide microarrays, the probes are short sequences designed to match parts of the sequence of known or predicted open reading frames. Although oligonucleotide probes are often used in "spotted" microarrays, the term "oligonucleotide array" most often refers to a specific technique of manufacturing. Oligonucleotide arrays are produced by printing short oligonucleotide sequences designed to represent a single gene or family of gene splice-variants by synthesizing this sequence directly onto the array surface instead of depositing intact sequences. Sequences may be longer (60-mer probes such as the Agilent design) or shorter (25-mer probes produced by Affymetrix) depending on the desired purpose; longer probes are more specific to individual target genes, shorter probes may be spotted in higher density across the array and are cheaper to manufacture. 
One technique used to produce oligonucleotide arrays include photolithographic synthesis (Affymetrix) on a silica substrate where light and light-sensitive masking agents are used to "build" a sequence one nucleotide at a time across the entire array. Each applicable probe is selectively "unmasked" prior to bathing the array in a solution of a single nucleotide, then a masking reaction takes place and the next set of probes are unmasked in preparation for a different nucleotide exposure. After many repetitions, the sequences of every probe become fully constructed. More recently, Maskless Array Synthesis from NimbleGen Systems has combined flexibility with large numbers of probes. === Two-channel vs. one-channel detection === Two-color microarrays or two-channel microarrays are typically hybridized with cDNA prepared from two samples to be compared (e.g. diseased tissue versus healthy tissue) and that are labeled with two different fluorophores. Fluorescent dyes commonly used for cDNA labeling include Cy3, which has a fluorescence emission wavelength of 570 nm (corresponding to the green part of the light spectrum), and Cy5 with a fluorescence emission wavelength of 670 nm (corresponding to the red part of the light spectrum). The two Cy-labeled cDNA samples are mixed and hybridized to a single microarray that is then scanned in a microarray scanner to visualize fluorescence of the two fluorophores after excitation with a laser beam of a defined wavelength. Relative intensities of each fluorophore may then be used in ratio-based analysis to identify up-regulated and down-regulated genes. Oligonucleotide microarrays often carry control probes designed to hybridize with RNA spike-ins. The degree of hybridization between the spike-ins and the control probes is used to normalize the hybridization measurements for the target probes. Although absolute levels of gene expression may be determined in the two-color array in rare instances, the relative differences in expression among different spots within a sample and between samples is the preferred method of data analysis for the two-color system. Examples of providers for such microarrays includes Agilent with their Dual-Mode platform, Eppendorf with their DualChip platform for colorimetric Silverquant labeling, and TeleChem International with Arrayit. In single-channel microarrays or one-color microarrays, the arrays provide intensity data for each probe or probe set indicating a relative level of hybridization with the labeled target. However, they do not truly indicate abundance levels of a gene but rather relative abundance when compared to other samples or conditions when processed in the same experiment. Each RNA molecule encounters protocol and batch-specific bias during amplification, labeling, and hybridization phases of the experiment making comparisons between genes for the same microarray uninformative. The comparison of two conditions for the same gene requires two separate single-dye hybridizations. Several popular single-channel systems are the Affymetrix "Gene Chip", Illumina "Bead Chip", Agilent single-channel arrays, the Applied Microarrays "CodeLink" arrays, and the Eppendorf "DualChip & Silverquant". 
One strength of the single-dye system lies in the fact that an aberrant sample cannot affect the raw data derived from other samples, because each array chip is exposed to only one sample (as opposed to a two-color system in which a single low-quality sample may drastically impinge on overall data precision even if the other sample was of high quality). Another benefit is that data are more easily compared to arrays from different experiments as long as batch effects have been accounted for. A one-channel microarray may be the only choice in some situations. Suppose i {\displaystyle i} samples need to be compared: the number of two-channel hybridizations required for all pairwise comparisons, i(i − 1)/2, quickly becomes unfeasible unless one sample is used as a common reference; with ten samples, for example, this means 45 hybridizations rather than ten. === A typical protocol === This is an example of a DNA microarray experiment which includes details for a particular case to better explain DNA microarray experiments, while listing modifications for RNA or other alternative experiments. The two samples to be compared (pairwise comparison) are grown/acquired; in this example, a treated sample (case) and an untreated sample (control). The nucleic acid of interest is purified: this can be RNA for expression profiling, DNA for comparative hybridization, or DNA/RNA bound to a particular protein which is immunoprecipitated (ChIP-on-chip) for epigenetic or regulation studies. In this example total RNA is isolated (both nuclear and cytoplasmic) by guanidinium thiocyanate-phenol-chloroform extraction (e.g. Trizol) which isolates most RNA (whereas column methods have a cut off of 200 nucleotides) and if done correctly has a better purity. The purified RNA is analysed for quality (by capillary electrophoresis) and quantity (for example, by using a NanoDrop or NanoPhotometer spectrometer). If the material is of acceptable quality and sufficient quantity is present (e.g., >1 μg, although the required amount varies by microarray platform), the experiment can proceed. The labeled product is generated via reverse transcription, followed by an optional PCR amplification. The RNA is reverse transcribed with either polyT primers (which amplify only mRNA) or random primers (which amplify all RNA, most of which is rRNA). miRNA microarrays ligate an oligonucleotide to the purified small RNA (isolated with a fractionator), which is then reverse transcribed and amplified. The label is added either during the reverse transcription step, or following amplification if it is performed. The sense labeling is dependent on the microarray; e.g. if the label is added with the RT mix, the cDNA is antisense and the microarray probe is sense, except in the case of negative controls. The label is typically fluorescent; only one machine uses radiolabels. The labeling can be direct (not used) or indirect (requires a coupling stage). For two-channel arrays, the coupling stage occurs before hybridization, using aminoallyl uridine triphosphate (aminoallyl-UTP, or aaUTP) and NHS amino-reactive dyes (such as cyanine dyes); for single-channel arrays, the coupling stage occurs after hybridization, using biotin and labeled streptavidin. The modified nucleotides (usually in a ratio of 1 aaUTP: 4 TTP (thymidine triphosphate)) are added enzymatically in a low ratio to normal nucleotides, typically resulting in 1 every 60 bases. The aaDNA is then purified with a column (using a phosphate buffer solution, as Tris contains amine groups). 
The aminoallyl group is an amine group on a long linker attached to the nucleobase, which reacts with a reactive dye. A form of replicate known as a dye flip can be performed to control for dye artifacts in two-channel experiments; for a dye flip, a second slide is used, with the labels swapped (the sample that was labeled with Cy3 in the first slide is labeled with Cy5, and vice versa). In this example, aminoallyl-UTP is present in the reverse-transcribed mixture. The labeled samples are then mixed with a proprietary hybridization solution which can consist of SDS, SSC, dextran sulfate, a blocking agent (such as Cot-1 DNA, salmon sperm DNA, calf thymus DNA, PolyA, or PolyT), Denhardt's solution, or formamide. The mixture is denatured and added to the pinholes of the microarray. The holes are sealed and the microarray hybridized, either in a hyb oven, where the microarray is mixed by rotation, or in a mixer, where the microarray is mixed by alternating pressure at the pinholes. After an overnight hybridization, all nonspecific binding is washed off (SDS and SSC). The microarray is dried and scanned by a machine that uses a laser to excite the dye and measures the emission levels with a detector. The image is gridded with a template and the intensities of each feature (composed of several pixels) are quantified. The raw data is normalized; the simplest normalization method is to subtract background intensity and scale so that the total intensities of the features of the two channels are equal, or to use the intensity of a reference gene to calculate the t-value for all of the intensities. More sophisticated methods include z-ratio, loess and lowess regression and RMA (robust multichip analysis) for Affymetrix chips (single-channel, silicon chip, in situ synthesized short oligonucleotides). == Microarrays and bioinformatics == The advent of inexpensive microarray experiments created several specific bioinformatics challenges: the multiple levels of replication in experimental design (Experimental design); the number of platforms and independent groups and data format (Standardization); the statistical treatment of the data (Data analysis); mapping each probe to the mRNA transcript that it measures (Annotation); the sheer volume of data and the ability to share it (Data warehousing). === Experimental design === Due to the biological complexity of gene expression, the considerations of experimental design that are discussed in the expression profiling article are of critical importance if statistically and biologically valid conclusions are to be drawn from the data. There are three main elements to consider when designing a microarray experiment. First, replication of the biological samples is essential for drawing conclusions from the experiment. Second, technical replicates (e.g. two RNA samples obtained from each experimental unit) may help to quantitate precision. The biological replicates include independent RNA extractions. Technical replicates may be two aliquots of the same extraction. Third, spots of each cDNA clone or oligonucleotide are present as replicates (at least duplicates) on the microarray slide, to provide a measure of technical precision in each hybridization. It is critical that information about the sample preparation and handling is discussed, in order to help identify the independent units in the experiment and to avoid inflated estimates of statistical significance. 
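Returning to the normalization step mentioned at the end of the protocol above, a minimal sketch of the simplest approach (background subtraction followed by scaling the two channels to equal total intensity) might look as follows; the array size, intensity ranges and simulated dye bias are invented, and production pipelines generally use more robust methods such as loess.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated raw spot intensities and local background estimates for the
# two channels of a single two-colour hybridization (values are invented).
raw_cy3 = rng.uniform(200, 5000, size=1000)
raw_cy5 = rng.uniform(200, 5000, size=1000) * 1.4   # artificial dye bias
bg_cy3 = rng.uniform(50, 150, size=1000)
bg_cy5 = rng.uniform(50, 150, size=1000)

# 1. Background subtraction, clipped at a small positive floor.
cy3 = np.clip(raw_cy3 - bg_cy3, 1.0, None)
cy5 = np.clip(raw_cy5 - bg_cy5, 1.0, None)

# 2. Global scaling: make the total intensities of the two channels equal.
cy5_scaled = cy5 * (cy3.sum() / cy5.sum())

print("total Cy3:", round(float(cy3.sum())))
print("total Cy5 after scaling:", round(float(cy5_scaled.sum())))
print("median log2 ratio after scaling:",
      round(float(np.median(np.log2(cy5_scaled / cy3))), 3))
```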
=== Standardization === Microarray data is difficult to exchange due to the lack of standardization in platform fabrication, assay protocols, and analysis methods. This presents an interoperability problem in bioinformatics. Various grass-roots open-source projects are trying to ease the exchange and analysis of data produced with non-proprietary chips: For example, the "Minimum Information About a Microarray Experiment" (MIAME) checklist helps define the level of detail that should exist and is being adopted by many journals as a requirement for the submission of papers incorporating microarray results. But MIAME does not describe the format for the information, so while many formats can support the MIAME requirements, as of 2007 no format permits verification of complete semantic compliance. The "MicroArray Quality Control (MAQC) Project" is being conducted by the US Food and Drug Administration (FDA) to develop standards and quality control metrics which will eventually allow the use of MicroArray data in drug discovery, clinical practice and regulatory decision-making. The MGED Society has developed standards for the representation of gene expression experiment results and relevant annotations. === Data analysis === Microarray data sets are commonly very large, and analytical precision is influenced by a number of variables. Statistical challenges include taking into account effects of background noise and appropriate normalization of the data. Normalization methods may be suited to specific platforms and, in the case of commercial platforms, the analysis may be proprietary. Algorithms that affect statistical analysis include: Image analysis: gridding, spot recognition of the scanned image (segmentation algorithm), removal or marking of poor-quality and low-intensity features (called flagging). Data processing: background subtraction (based on global or local background), determination of spot intensities and intensity ratios, visualisation of data (e.g. see MA plot), and log-transformation of ratios, global or local normalization of intensity ratios, and segmentation into different copy number regions using step detection algorithms. Class discovery analysis: This analytic approach, sometimes called unsupervised classification or knowledge discovery, tries to identify whether microarrays (objects, patients, mice, etc.) or genes cluster together in groups. Identifying naturally existing groups of objects (microarrays or genes) which cluster together can enable the discovery of new groups that otherwise were not previously known to exist. During knowledge discovery analysis, various unsupervised classification techniques can be employed with DNA microarray data to identify novel clusters (classes) of arrays. This type of approach is not hypothesis-driven, but rather is based on iterative pattern recognition or statistical learning methods to find an "optimal" number of clusters in the data. Examples of unsupervised analyses methods include self-organizing maps, neural gas, k-means cluster analyses, hierarchical cluster analysis, Genomic Signal Processing based clustering and model-based cluster analysis. For some of these methods the user also has to define a distance measure between pairs of objects. Although the Pearson correlation coefficient is usually employed, several other measures have been proposed and evaluated in the literature. 
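As a sketch of the class-discovery step described above, genes (or arrays) can be clustered hierarchically using one minus the Pearson correlation as the distance measure; the expression matrix below is simulated, and average linkage is only one of several reasonable choices.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)

# Simulated log-expression matrix: 30 genes x 8 arrays, built from two
# artificial co-expression patterns plus noise.
pattern_a = rng.normal(size=8)
pattern_b = rng.normal(size=8)
genes = np.vstack(
    [pattern_a + 0.3 * rng.normal(size=8) for _ in range(15)]
    + [pattern_b + 0.3 * rng.normal(size=8) for _ in range(15)]
)

# Distance between genes: 1 - Pearson correlation ("correlation" metric).
dist = pdist(genes, metric="correlation")

# Agglomerative clustering with average linkage, cut into two clusters.
tree = linkage(dist, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```

Cluster-validity indices such as the silhouette index mentioned below can then be used to judge how many clusters the data actually support.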
The input data used in class discovery analyses are commonly based on lists of genes having high informativeness (low noise), as judged by low values of the coefficient of variation or high values of Shannon entropy, etc. The determination of the most likely or optimal number of clusters obtained from an unsupervised analysis is called cluster validity. Some commonly used metrics for cluster validity are the silhouette index, the Davies–Bouldin index, Dunn's index, and Hubert's Γ {\displaystyle \Gamma } statistic. Class prediction analysis: This approach, called supervised classification, establishes the basis for developing a predictive model into which future unknown test objects can be input in order to predict the most likely class membership of the test objects. Supervised analysis for class prediction involves use of techniques such as linear regression, k-nearest neighbor, learning vector quantization, decision tree analysis, random forests, naive Bayes, logistic regression, kernel regression, artificial neural networks, support vector machines, mixture of experts, and supervised neural gas. In addition, various metaheuristic methods are employed, such as genetic algorithms, covariance matrix self-adaptation, particle swarm optimization, and ant colony optimization. Input data for class prediction are usually based on filtered lists of genes which are predictive of class, determined using classical hypothesis tests (next section), Gini diversity index, or information gain (entropy). Hypothesis-driven statistical analysis: Statistically significant changes in gene expression are commonly identified using the t-test, ANOVA, Bayesian methods, or Mann–Whitney tests tailored to microarray data sets, which take into account multiple comparisons or cluster analysis. These methods assess statistical power based on the variation present in the data and the number of experimental replicates, and can help minimize type I and type II errors in the analyses. Dimensional reduction: Analysts often reduce the number of dimensions (genes) prior to data analysis. This may involve linear approaches such as principal components analysis (PCA), or non-linear manifold learning (distance metric learning) using kernel PCA, diffusion maps, Laplacian eigenmaps, local linear embedding, locally preserving projections, and Sammon's mapping. Network-based methods: Statistical methods that take the underlying structure of gene networks into account, representing either associative or causative interactions or dependencies among gene products. Weighted gene co-expression network analysis is widely used for identifying co-expression modules and intramodular hub genes. Modules may correspond to cell types or pathways. Highly connected intramodular hubs best represent their respective modules. Microarray data may require further processing aimed at reducing the dimensionality of the data to aid comprehension and more focused analysis. Other methods permit analysis of data consisting of a low number of biological or technical replicates; for example, the Local Pooled Error (LPE) test pools standard deviations of genes with similar expression levels in an effort to compensate for insufficient replication. === Annotation === The relation between a probe and the mRNA that it is expected to detect is not trivial. Some mRNAs may cross-hybridize with probes in the array that are supposed to detect another mRNA. In addition, mRNAs may experience amplification bias that is sequence- or molecule-specific. 
Thirdly, probes that are designed to detect the mRNA of a particular gene may be relying on genomic EST information that is incorrectly associated with that gene. === Data warehousing === Microarray data are most useful when they can be compared with other, similar datasets. The sheer volume of data, specialized formats (such as MIAME), and curation efforts associated with the datasets require specialized databases to store the data. A number of open-source data warehousing solutions, such as InterMine and BioMart, have been created for the specific purpose of integrating diverse biological datasets, and also support analysis. == Alternative technologies == Advances in massively parallel sequencing have led to the development of RNA-Seq technology, which enables a whole-transcriptome shotgun approach to characterizing and quantifying gene expression. Unlike microarrays, which need a reference genome and transcriptome to be available before the microarray itself can be designed, RNA-Seq can also be used for new model organisms whose genome has not been sequenced yet. == Glossary == An array or slide is a collection of features spatially arranged in a two-dimensional grid of columns and rows. Block or subarray: a group of spots, typically made in one print round; several subarrays/blocks form an array. Case/control: an experimental design paradigm especially suited to the two-colour array system, in which a condition chosen as control (such as healthy tissue or state) is compared to an altered condition (such as a diseased tissue or state). Channel: the fluorescence output recorded in the scanner for an individual fluorophore; it can even be ultraviolet. Dye flip or dye swap or fluor reversal: reciprocal labelling of DNA targets with the two dyes to account for dye bias in experiments. Scanner: an instrument used to detect and quantify the intensity of fluorescence of spots on a microarray slide, by selectively exciting fluorophores with a laser and measuring the fluorescence with a filter (optics) photomultiplier system. Spot or feature: a small area on an array slide that contains picomoles of specific DNA samples. For other relevant terms see: Glossary of gene expression terms Protocol (natural sciences) == See also == == References == == External links == Microarray Animation 1Lec.com PLoS Biology Primer: Microarray Analysis Rundown of microarray technology ArrayMining.net – a free web-server for online microarray analysis Microarray – How does it work? PNAS Commentary: Discovery of Principles of Nature from Mathematical Modeling of DNA Microarray Data DNA microarray virtual experiment
Wikipedia/DNA_microarray
A phase-field model is a mathematical model for solving interfacial problems. It has mainly been applied to solidification dynamics, but it has also been applied to other situations such as viscous fingering, fracture mechanics, hydrogen embrittlement, and vesicle dynamics. The method substitutes boundary conditions at the interface by a partial differential equation for the evolution of an auxiliary field (the phase field) that takes the role of an order parameter. This phase field takes two distinct values (for instance +1 and −1) in each of the phases, with a smooth change between both values in the zone around the interface, which is then diffuse with a finite width. A discrete location of the interface may be defined as the collection of all points where the phase field takes a certain value (e.g., 0). A phase-field model is usually constructed in such a way that in the limit of an infinitesimal interface width (the so-called sharp interface limit) the correct interfacial dynamics are recovered. This approach makes it possible to solve the problem by integrating a set of partial differential equations for the whole system, thus avoiding the explicit treatment of the boundary conditions at the interface. Phase-field models were first introduced by Fix and Langer, and have experienced a growing interest in solidification and other areas. Langer had handwritten notes in which he showed that coupled Cahn–Hilliard and Allen–Cahn equations could be used to solve a solidification problem; George Fix worked on programming the problem. Langer felt, at the time, that the method was of no practical use since the interface thickness is so small compared to the size of a typical microstructure, so he never published the notes. == Equations of the phase-field model == Phase-field models are usually constructed in order to reproduce a given interfacial dynamics. For instance, in solidification problems the front dynamics is given by a diffusion equation for either concentration or temperature in the bulk and some boundary conditions at the interface (a local equilibrium condition and a conservation law), which constitutes the sharp interface model. A number of formulations of the phase-field model are based on a free energy function depending on an order parameter (the phase field) and a diffusive field (variational formulations). Equations of the model are then obtained by using general relations of statistical physics. Such a function is constructed from physical considerations, but contains a parameter or combination of parameters related to the interface width. Parameters of the model are then chosen by studying the limit of the model with this width going to zero, in such a way that one can identify this limit with the intended sharp interface model. Other formulations start by writing the phase-field equations directly, without referring to any thermodynamical functional (non-variational formulations). In this case the only reference is the sharp interface model, in the sense that it should be recovered when performing the small interface width limit of the phase-field model. Phase-field equations in principle reproduce the interfacial dynamics when the interface width is small compared with the smallest length scale in the problem. In solidification this scale is the capillary length d o {\displaystyle d_{o}} , which is a microscopic scale. From a computational point of view integration of partial differential equations resolving such a small scale is prohibitive. 
However, Karma and Rappel introduced the thin interface limit, which permitted to relax this condition and has opened the way to practical quantitative simulations with phase-field models. With the increasing power of computers and the theoretical progress in phase-field modelling, phase-field models have become a useful tool for the numerical simulation of interfacial problems. === Variational formulations === A model for a phase field can be constructed by physical arguments if one has an explicit expression for the free energy of the system. A simple example for solidification problems is the following: F [ e , φ ] = ∫ d r [ K | ∇ φ | 2 + h 0 f ( φ ) + e 0 u ( φ ) 2 ] {\displaystyle F[e,\varphi ]=\int d{\mathbf {r} }\left[K|{\mathbf {\nabla } }\varphi |^{2}+h_{0}f(\varphi )+e_{0}u(\varphi )^{2}\right]} where φ {\displaystyle \varphi } is the phase field, u ( φ ) = e / e 0 + h ( φ ) / 2 {\displaystyle u(\varphi )=e/e_{0}+h(\varphi )/2} , e {\displaystyle e} is the local enthalpy per unit volume, h {\displaystyle h} is a certain polynomial function of φ {\displaystyle \varphi } , and e 0 = L 2 / T M c p {\displaystyle e_{0}={L^{2}}/{T_{M}c_{p}}} (where L {\displaystyle L} is the latent heat, T M {\displaystyle T_{M}} is the melting temperature, and c p {\displaystyle c_{p}} is the specific heat). The term with ∇ φ {\displaystyle \nabla \varphi } corresponds to the interfacial energy. The function f ( φ ) {\displaystyle f(\varphi )} is usually taken as a double-well potential describing the free energy density of the bulk of each phase, which themselves correspond to the two minima of the function f ( φ ) {\displaystyle f(\varphi )} . The constants K {\displaystyle K} and h 0 {\displaystyle h_{0}} have respectively dimensions of energy per unit length and energy per unit volume. The interface width is then given by W = K / h 0 {\displaystyle W={\sqrt {K/h_{0}}}} . The phase-field model can then be obtained from the following variational relations: ∂ t φ = − 1 τ ( δ F δ φ ) + η ( r , t ) {\displaystyle \partial _{t}\varphi =-{\frac {1}{\tau }}\left({\frac {\delta F}{\delta \varphi }}\right)+\eta ({\mathbf {r} },t)} ∂ t e = D e 0 ∇ 2 ( δ F δ e ) − ∇ ⋅ q e ( r , t ) . {\displaystyle \partial _{t}e=De_{0}\nabla ^{2}\left({\frac {\delta F}{\delta e}}\right)-{\mathbf {\nabla } }\cdot {\mathbf {q} }_{e}(\mathbf {r} ,t).} where D is a diffusion coefficient for the variable e {\displaystyle e} , and η {\displaystyle \eta } and q e {\displaystyle \mathbf {q} _{e}} are stochastic terms accounting for thermal fluctuations (and whose statistical properties can be obtained from the fluctuation dissipation theorem). The first equation gives an equation for the evolution of the phase field, whereas the second one is a diffusion equation, which usually is rewritten for the temperature or for the concentration (in the case of an alloy). 
These equations are, scaling space with l {\displaystyle l} and times with l 2 / D {\displaystyle l^{2}/D} : α ε 2 ∂ t φ = ε 2 ∇ 2 φ − f ′ ( φ ) − e 0 h 0 h ′ ( φ ) u + η ~ ( r , t ) {\displaystyle \alpha \varepsilon ^{2}\partial _{t}\varphi =\varepsilon ^{2}\nabla ^{2}\varphi -f'(\varphi )-{\frac {e_{0}}{h_{0}}}h'(\varphi )u+{\tilde {\eta }}({\mathbf {r} },t)} ∂ t u = ∇ 2 u + 1 2 h ′ ( φ ) ∂ t φ − ∇ ⋅ q u ( r , t ) {\displaystyle \partial _{t}u=\nabla ^{2}u+{\frac {1}{2}}h'(\varphi )\partial _{t}\varphi -\mathbf {\nabla } \cdot \mathbf {q} _{u}(\mathbf {r} ,t)} where ε = W / l {\displaystyle \varepsilon =W/l} is the nondimensional interface width, α = D τ / W 2 h 0 {\displaystyle \alpha ={D\tau }/{W^{2}h_{0}}} , and η ~ ( r , t ) {\displaystyle {\tilde {\eta }}({\mathbf {r} },t)} , q u ( r , t ) {\displaystyle \mathbf {q} _{u}(\mathbf {r} ,t)} are nondimensionalized noises. === Alternative energy-density functions === The choice of free energy function, f ( φ ) {\displaystyle f(\varphi )} , can have a significant effect on the physical behaviour of the interface, and should be selected with care. The double-well function represents an approximation of the Van der Waals equation of state near the critical point, and has historically been used for its simplicity of implementation when the phase-field model is employed solely for interface tracking purposes. But this has led to the frequently observed spontaneous drop shrinkage phenomenon, whereby the high phase miscibility predicted by an Equation of State near the critical point allows significant interpenetration of the phases and can eventually lead to the complete disappearance of a droplet whose radius is below some critical value. Minimizing perceived continuity losses over the duration of a simulation requires limits on the Mobility parameter, resulting in a delicate balance between interfacial smearing due to convection, interfacial reconstruction due to free energy minimization (i.e. mobility-based diffusion), and phase interpenetration, also dependent on the mobility. A recent review of alternative energy density functions for interface tracking applications has proposed a modified form of the double-obstacle function which avoids the spontaneous drop shrinkage phenomena and limits on mobility, with comparative results provide for a number of benchmark simulations using the double-well function and the volume-of-fluid sharp interface technique. The proposed implementation has a computational complexity only slightly greater than that of the double-well function, and may prove useful for interface tracking applications of the phase-field model where the duration/nature of the simulated phenomena introduces phase continuity concerns (i.e. small droplets, extended simulations, multiple interfaces, etc.). === Sharp interface limit of the phase-field equations === A phase-field model can be constructed to purposely reproduce a given interfacial dynamics as represented by a sharp interface model. In such a case the sharp interface limit (i.e. the limit when the interface width goes to zero) of the proposed set of phase-field equations should be performed. This limit is usually taken by asymptotic expansions of the fields of the model in powers of the interface width ε {\displaystyle \varepsilon } . These expansions are performed both in the interfacial region (inner expansion) and in the bulk (outer expansion), and then are asymptotically matched order by order. 
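As an aside, the nondimensional equations above can be integrated directly. The sketch below uses explicit finite differences in one dimension, drops the noise terms, and assumes the common illustrative choices f(φ) = (1 − φ²)²/4 and h(φ) = φ, with invented parameter values (the variable coupling stands for e0/h0); it is not a quantitative solidification simulation.

```python
import numpy as np

# Illustrative nondimensional parameters (not taken from the text).
eps, alpha, coupling = 0.05, 1.0, 0.5    # coupling plays the role of e0/h0
n, dx, dt, steps = 400, 0.01, 2.0e-5, 20000

x = np.arange(n) * dx
phi = np.tanh((0.5 * n * dx - x) / (np.sqrt(2.0) * eps))   # solid (+1) on the left
u = np.full(n, -0.3)                                        # uniform initial undercooling

def lap(f):
    # 1-D Laplacian with zero-flux (Neumann) boundary conditions.
    g = np.empty_like(f)
    g[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    g[0] = 2.0 * (f[1] - f[0]) / dx**2
    g[-1] = 2.0 * (f[-2] - f[-1]) / dx**2
    return g

for _ in range(steps):
    # alpha eps^2 d(phi)/dt = eps^2 lap(phi) - f'(phi) - (e0/h0) h'(phi) u,
    # with f'(phi) = phi (phi^2 - 1) and h'(phi) = 1 for the choices above.
    dphi = (eps**2 * lap(phi) - phi * (phi**2 - 1.0) - coupling * u) / (alpha * eps**2)
    # du/dt = lap(u) + (1/2) h'(phi) d(phi)/dt
    du = lap(u) + 0.5 * dphi
    phi = phi + dt * dphi
    u = u + dt * du

# Locate the interface as the level set phi = 0.
front = x[np.argmin(np.abs(phi))]
print(f"interface position after {steps} steps: x = {front:.3f}")
```

Making ε smaller brings the computation closer to the sharp-interface limit but requires proportionally finer grids and time steps, which is the computational cost referred to above.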
The result gives a partial differential equation for the diffusive field and a series of boundary conditions at the interface, which should correspond to the sharp interface model and whose comparison with it provides the values of the parameters of the phase-field model. Whereas such expansions were in early phase-field models performed up to the lower order in ε {\displaystyle \varepsilon } only, more recent models use higher order asymptotics (thin interface limits) in order to cancel undesired spurious effects or to include new physics in the model. For example, this technique has permitted to cancel kinetic effects, to treat cases with unequal diffusivities in the phases, to model viscous fingering and two-phase Navier–Stokes flows, to include fluctuations in the model, etc. == Multiphase-field models == In multiphase-field models, microstructure is described by set of order parameters, each of which is related to a specific phase or crystallographic orientation. This model is mostly used for solid-state phase transformations where multiple grains evolve (e.g. grain growth, recrystallization or first-order transformation like austenite to ferrite in ferrous alloys). Besides allowing the description of multiple grains in a microstructure, multiphase-field models especially allow for consideration of multiple thermodynamic phases occurring e.g. in technical alloy grades. == Phase-field models on graphs == Many of the results for continuum phase-field models have discrete analogues for graphs, just replacing calculus with calculus on graphs. == Phase Field Modeling in Fracture Mechanics == Fracture in solids is often numerically analyzed within a finite element context using either discrete or diffuse crack representations. Approaches using a finite element representation often make use of strong discontinuities embedded at the intra-element level and often require additional criteria based on, e.g., stresses, strain energy densities or energy release rates or other special treatments such as virtual crack closure techniques and remeshing to determine crack paths. In contrast, approaches using a diffuse crack representation retain the continuity of the displacement field, such as continuum damage models and phase-field fracture theories. The latter traces back to the reformulation of Griffith’s principle in a variational form and has similarities to gradient-enhanced damage-type models. Perhaps the most attractive characteristic of phase-field approaches to fracture is that crack initiation and crack paths are automatically obtained from a minimization problem that couples the elastic and fracture energies. In many situations, crack nucleation can be properly accounted for by following branches of critical points associated with elastic solutions until they lose stability. In particular, phase-field models of fracture can allow nucleation even when the elastic strain energy density is spatially constant. A limitation of this approach is that nucleation is based on strain energy density and not stress. An alternative view based on introducing a nucleation driving force seeks to address this issue. == Phase Field Models for Collective Cell Migration == A group of biological cells can self-propel in a complex way due to the consumption of Adenosine triphosphate. Interactions between cells like cohesion or several chemical cues can produce movement in a coordinated manner, this phenomenon is called "Collective cell migration". 
A theoretical model for these phenomena is the phase-field model and incorporates a phase field for each cell species and additional field variables like chemotactic agent concentration. Such a model can be used for phenomena like cancer, cell extrusion, wound healing, morphogenesis and ectoplasm phenomena. == Software == PACE3D – Parallel Algorithms for Crystal Evolution in 3D is a parallelized phase-field simulation package including multi-phase multi-component transformations, large scale grain structures and coupling with fluid flow, elastic, plastic and magnetic interactions. It is developed at the Karlsruhe University of Applied Sciences and Karlsruhe Institute of Technology. The Mesoscale Microstructure Simulation Project (MMSP) is a collection of C++ classes for grid-based microstructure simulation. The MICRostructure Evolution Simulation Software (MICRESS) is a multi-component, multiphase-field simulation package coupled to thermodynamic and kinetic databases. It is developed and maintained by ACCESS e.V . MOOSE massively parallel open source C++ multiphysics finite-element framework with support for phase-field simulations developed at Idaho National Laboratory. PhasePot is a Windows-based microstructure simulation tool, using a combination of phase-field and Monte Carlo Potts models. OpenPhase is an open source software for the simulation of microstructure formation in systems undergoing first order phase transformation based on the multiphase field model. mef90/vDef is an open source variational phase-field fracture simulator based on the theory developed in. MicroSim is a software stack that consists of phase-field codes that offer flexibility with discretization, models as well as the high-performance computing hardware(CPU/GPU) that they can execute on. PRISMS-PF is a massively parallel finite element code for conducting phase-field and other related simulations of microstructure evolution. It is based on the deal.II finite element library and developed and maintained by the PRISMS Center at the University of Michigan. Celadro-3D is a three-dimensional extension of Celadro that utilizes multiphase-field modeling to capture the collective dynamics of active liquid droplets, such as living cells, offering a flexible platform to incorporate physics-based models for active matter. == References == == Further reading == Chen, Long-Qing (2002). "Phase-Field models For microstructure evolution". Annual Review of Materials Research. 32: 113–140. doi:10.1146/annurev.matsci.32.112001.132041. Moelans, Nele; Blanpain, Bart; Wollants, Patrick (2008). "An introduction to phase-field modeling of microstructure evolution". Calphad. 32 (2): 268. doi:10.1016/j.calphad.2007.11.003. Steinbach, Ingo (2009). "Phase-field models in materials science". Modelling and Simulation in Materials Science and Engineering. 17 (7): 073001. Bibcode:2009MSMSE..17g3001S. doi:10.1088/0965-0393/17/7/073001. S2CID 3383625. Fries, Suzana G.; Boettger, Bernd; Eiken, Janin; Steinbach, Ingo (2009). "Upgrading CALPHAD to microstructure simulation: The phase-field method". International Journal of Materials Research. 100 (2): 128. Bibcode:2009IJMR..100..128F. doi:10.3139/146.110013. S2CID 138203262. Qin, R. S.; Bhadeshia, H. K. (2010). "Phase field method" (PDF). Materials Science and Technology. 26 (7): 803. Bibcode:2010MatST..26..803Q. doi:10.1179/174328409X453190. S2CID 136124682. Donaldson, A.A.; Kirpalani, D.M.; MacChi, A. (2011). 
"Diffuse interface tracking of immiscible fluids: Improving phase continuity through free energy density selection". International Journal of Multiphase Flow. 37 (7): 777. Bibcode:2011IJMF...37..777D. doi:10.1016/j.ijmultiphaseflow.2011.02.002. Gonzalez-Cinca, R.; Folch, R.; Benitez, R.; Ramirez-Piscina, L.; Casademunt, J.; Hernandez-Machado, A. (2003). "Phase-field models in interfacial pattern formation out of equilibrium". In Advances in Condensed Matter and Statistical Mechanics, ed. By E. Korutcheva and R. Cuerno, Nova Science Publishers (New York, ), Pp. 2004: 203–236. arXiv:cond-mat/0305058. Bibcode:2003cond.mat..5058G. a review of phase-field models. Provatas, Nikolas; Elder, Ken (2010). Phase-Field Methods in Materials Science and Engineering. Weinheim, Germany: Wiley-VCH Verlag GmbH & Co. KGaA. doi:10.1002/9783527631520. ISBN 9783527631520 Steinbach, I.: "Quantum-Phase-Field Concept of Matter: Emergent Gravity in the Dynamic Universe", Zeitschrift für Naturforschung A 72 1 (2017) doi:10.1515/zna-2016-0270 Schmitz, G.J.: "A Combined Entropy/Phase-Field Approach to Gravity", Entropy 2017, 19(4) 151; doi:10.3390/e19040151 unflagged free DOI (link)
Wikipedia/Phase_field_models
Quantum mechanics is the fundamental physical theory that describes the behavior of matter and of light; its unusual characteristics typically occur at and below the scale of atoms.: 1.1  It is the foundation of all quantum physics, which includes quantum chemistry, quantum field theory, quantum technology, and quantum information science. Quantum mechanics can describe many systems that classical physics cannot. Classical physics can describe many aspects of nature at an ordinary (macroscopic and (optical) microscopic) scale, but is not sufficient for describing them at very small submicroscopic (atomic and subatomic) scales. Classical mechanics can be derived from quantum mechanics as an approximation that is valid at ordinary scales. Quantum systems have bound states that are quantized to discrete values of energy, momentum, angular momentum, and other quantities, in contrast to classical systems where these quantities can be measured continuously. Measurements of quantum systems show characteristics of both particles and waves (wave–particle duality), and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement, given a complete set of initial conditions (the uncertainty principle). Quantum mechanics arose gradually from theories to explain observations that could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem, and the correspondence between energy and frequency in Albert Einstein's 1905 paper, which explained the photoelectric effect. These early attempts to understand microscopic phenomena, now known as the "old quantum theory", led to the full development of quantum mechanics in the mid-1920s by Niels Bohr, Erwin Schrödinger, Werner Heisenberg, Max Born, Paul Dirac and others. The modern theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical entity called the wave function provides information, in the form of probability amplitudes, about what measurements of a particle's energy, momentum, and other physical properties may yield. == Overview and fundamental concepts == Quantum mechanics allows the calculation of properties and behaviour of physical systems. It is typically applied to microscopic systems: molecules, atoms and subatomic particles. It has been demonstrated to hold for complex molecules with thousands of atoms, but its application to human beings raises philosophical problems, such as Wigner's friend, and its application to the universe as a whole remains speculative. Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy. For example, the refinement of quantum mechanics for the interaction of light and matter, known as quantum electrodynamics (QED), has been shown to agree with experiment to within 1 part in 1012 when predicting the magnetic properties of an electron. A fundamental feature of the theory is that it usually cannot predict with certainty what will happen, but only give probabilities. Mathematically, a probability is found by taking the square of the absolute value of a complex number, known as a probability amplitude. This is known as the Born rule, named after physicist Max Born. For example, a quantum particle like an electron can be described by a wave function, which associates to each point in space a probability amplitude. 
Applying the Born rule to these amplitudes gives a probability density function for the position that the electron will be found to have when an experiment is performed to measure it. This is the best the theory can do; it cannot say for certain where the electron will be found. The Schrödinger equation relates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another.: 67–87  One consequence of the mathematical rules of quantum mechanics is a tradeoff in predictability between measurable quantities. The most famous form of this uncertainty principle says that no matter how a quantum particle is prepared or how carefully experiments upon it are arranged, it is impossible to have a precise prediction for a measurement of its position and also at the same time for a measurement of its momentum.: 427–435  Another consequence of the mathematical rules of quantum mechanics is the phenomenon of quantum interference, which is often illustrated with the double-slit experiment. In the basic version of this experiment, a coherent light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate.: 102–111 : 1.1–1.8  The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles. However, the light is always found to be absorbed at the screen at discrete points, as individual particles rather than waves; the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave).: 109  However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. This behavior is known as wave–particle duality. In addition to light, electrons, atoms, and molecules are all found to exhibit the same dual behavior when fired towards a double slit. Another non-classical phenomenon predicted by quantum mechanics is quantum tunnelling: a particle that goes up against a potential barrier can cross it, even if its kinetic energy is smaller than the maximum of the potential. In classical mechanics this particle would be trapped. Quantum tunnelling has several important consequences, enabling radioactive decay, nuclear fusion in stars, and applications such as scanning tunnelling microscopy, tunnel diode and tunnel field-effect transistor. When quantum systems interact, the result can be the creation of quantum entanglement: their properties become so intertwined that a description of the whole solely in terms of the individual parts is no longer possible. Erwin Schrödinger called entanglement "...the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought". Quantum entanglement enables quantum computing and is part of quantum communication protocols, such as quantum key distribution and superdense coding. Contrary to popular misconception, entanglement does not allow sending signals faster than light, as demonstrated by the no-communication theorem. 
Another possibility opened by entanglement is testing for "hidden variables", hypothetical properties more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory provides. A collection of results, most significantly Bell's theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. Many Bell tests have been performed and they have shown results incompatible with the constraints imposed by local hidden variables. It is not possible to present these concepts in more than a superficial way without introducing the mathematics involved; understanding quantum mechanics requires not only manipulating complex numbers, but also linear algebra, differential equations, group theory, and other more advanced subjects. Accordingly, this article will present a mathematical formulation of quantum mechanics and survey its application to some useful and oft-studied examples. == Mathematical formulation == In the mathematically rigorous formulation of quantum mechanics, the state of a quantum mechanical system is a vector ψ {\displaystyle \psi } belonging to a (separable) complex Hilbert space H {\displaystyle {\mathcal {H}}} . This vector is postulated to be normalized under the Hilbert space inner product, that is, it obeys ⟨ ψ , ψ ⟩ = 1 {\displaystyle \langle \psi ,\psi \rangle =1} , and it is well-defined up to a complex number of modulus 1 (the global phase), that is, ψ {\displaystyle \psi } and e i α ψ {\displaystyle e^{i\alpha }\psi } represent the same physical system. In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex square-integrable functions L 2 ( C ) {\displaystyle L^{2}(\mathbb {C} )} , while the Hilbert space for the spin of a single proton is simply the space of two-dimensional complex vectors C 2 {\displaystyle \mathbb {C} ^{2}} with the usual inner product. Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are Hermitian (more precisely, self-adjoint) linear operators acting on the Hilbert space. A quantum state can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue λ {\displaystyle \lambda } is non-degenerate and the probability is given by | ⟨ λ → , ψ ⟩ | 2 {\displaystyle |\langle {\vec {\lambda }},\psi \rangle |^{2}} , where λ → {\displaystyle {\vec {\lambda }}} is its associated unit-length eigenvector. More generally, the eigenvalue is degenerate and the probability is given by ⟨ ψ , P λ ψ ⟩ {\displaystyle \langle \psi ,P_{\lambda }\psi \rangle } , where P λ {\displaystyle P_{\lambda }} is the projector onto its associated eigenspace. 
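A minimal numerical sketch of the Born rule just stated, for a two-dimensional (spin-1/2) Hilbert space; the state and observable below are chosen arbitrarily for illustration.

```python
import numpy as np

# Arbitrary normalized state in C^2 and a Hermitian observable
# (the Pauli-x matrix, whose eigenvalues are +1 and -1).
psi = np.array([0.6, 0.8j])                 # <psi, psi> = 0.36 + 0.64 = 1
A = np.array([[0.0, 1.0], [1.0, 0.0]])      # sigma_x

eigvals, eigvecs = np.linalg.eigh(A)
for lam, v in zip(eigvals, eigvecs.T):
    # Born rule for a non-degenerate eigenvalue: P(lam) = |<v, psi>|^2,
    # equivalently <psi, P_lam psi> with the projector P_lam = |v><v|.
    p_amp = abs(np.vdot(v, psi)) ** 2
    proj = np.outer(v, v.conj())
    p_proj = np.vdot(psi, proj @ psi).real
    print(f"outcome {lam:+.0f}: probability {p_amp:.3f} (projector form {p_proj:.3f})")

# The probabilities over all outcomes sum to one.
print("sum:", round(sum(abs(np.vdot(v, psi)) ** 2 for v in eigvecs.T), 3))
```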
In the continuous case, these formulas give instead the probability density. After the measurement, if result λ {\displaystyle \lambda } was obtained, the quantum state is postulated to collapse to λ → {\displaystyle {\vec {\lambda }}} , in the non-degenerate case, or to P λ ψ / ⟨ ψ , P λ ψ ⟩ {\textstyle P_{\lambda }\psi {\big /}\!{\sqrt {\langle \psi ,P_{\lambda }\psi \rangle }}} , in the general case. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wave function collapse" (see, for example, the many-worlds interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled so that the original quantum system ceases to exist as an independent entity (see Measurement in quantum mechanics). === Time evolution of a quantum state === The time evolution of a quantum state is described by the Schrödinger equation: i ℏ ∂ ∂ t ψ ( t ) = H ψ ( t ) . {\displaystyle i\hbar {\frac {\partial }{\partial t}}\psi (t)=H\psi (t).} Here H {\displaystyle H} denotes the Hamiltonian, the observable corresponding to the total energy of the system, and ℏ {\displaystyle \hbar } is the reduced Planck constant. The constant i ℏ {\displaystyle i\hbar } is introduced so that the Hamiltonian is reduced to the classical Hamiltonian in cases where the quantum system can be approximated by a classical system; the ability to make such an approximation in certain limits is called the correspondence principle. The solution of this differential equation is given by ψ ( t ) = e − i H t / ℏ ψ ( 0 ) . {\displaystyle \psi (t)=e^{-iHt/\hbar }\psi (0).} The operator U ( t ) = e − i H t / ℏ {\displaystyle U(t)=e^{-iHt/\hbar }} is known as the time-evolution operator, and has the crucial property that it is unitary. This time evolution is deterministic in the sense that – given an initial quantum state ψ ( 0 ) {\displaystyle \psi (0)} – it makes a definite prediction of what the quantum state ψ ( t ) {\displaystyle \psi (t)} will be at any later time. Some wave functions produce probability distributions that are independent of time, such as eigenstates of the Hamiltonian.: 133–137  Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics, it is described by a static wave function surrounding the nucleus. For example, the electron wave function for an unexcited hydrogen atom is a spherically symmetric function known as an s orbital (Fig. 1). Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians including the quantum harmonic oscillator, the particle in a box, the dihydrogen cation, and the hydrogen atom. Even the helium atom – which contains just two electrons – has defied all attempts at a fully analytic treatment, admitting no solution in closed form. 
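For a finite-dimensional system the time-evolution operator can be evaluated directly as a matrix exponential; the two-level Hamiltonian below is an arbitrary illustration in units with ħ = 1, not one of the models named above.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
# Arbitrary Hermitian 2x2 Hamiltonian (units with hbar = 1).
H = np.array([[1.0, 0.5], [0.5, -1.0]])
psi0 = np.array([1.0, 0.0], dtype=complex)

for t in (0.0, 0.5, 1.0, 2.0):
    U = expm(-1j * H * t / hbar)            # U(t) = exp(-i H t / hbar)
    psi_t = U @ psi0
    # Unitarity of U(t) keeps the norm of the state equal to 1.
    print(f"t = {t:3.1f}  |first component|^2 = {abs(psi_t[0])**2:.3f}"
          f"  norm = {np.linalg.norm(psi_t):.6f}")
```

For large or continuous systems the exponential cannot be formed explicitly, which is where the approximation methods discussed next come in.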
However, there are techniques for finding approximate solutions. One method, called perturbation theory, uses the analytic result for a simple quantum mechanical model to create a result for a related but more complicated model by (for example) the addition of a weak potential energy.: 793  Another approximation method applies to systems for which quantum mechanics produces only small deviations from classical behavior. These deviations can then be computed based on the classical motion.: 849  === Uncertainty principle === One consequence of the basic quantum formalism is the uncertainty principle. In its most familiar form, this states that no preparation of a quantum particle can imply simultaneously precise predictions both for a measurement of its position and for a measurement of its momentum. Both position and momentum are observables, meaning that they are represented by Hermitian operators. The position operator X ^ {\displaystyle {\hat {X}}} and momentum operator P ^ {\displaystyle {\hat {P}}} do not commute, but rather satisfy the canonical commutation relation: [ X ^ , P ^ ] = i ℏ . {\displaystyle [{\hat {X}},{\hat {P}}]=i\hbar .} Given a quantum state, the Born rule lets us compute expectation values for both X {\displaystyle X} and P {\displaystyle P} , and moreover for powers of them. Defining the uncertainty for an observable by a standard deviation, we have σ X = ⟨ X 2 ⟩ − ⟨ X ⟩ 2 , {\displaystyle \sigma _{X}={\textstyle {\sqrt {\left\langle X^{2}\right\rangle -\left\langle X\right\rangle ^{2}}}},} and likewise for the momentum: σ P = ⟨ P 2 ⟩ − ⟨ P ⟩ 2 . {\displaystyle \sigma _{P}={\sqrt {\left\langle P^{2}\right\rangle -\left\langle P\right\rangle ^{2}}}.} The uncertainty principle states that σ X σ P ≥ ℏ 2 . {\displaystyle \sigma _{X}\sigma _{P}\geq {\frac {\hbar }{2}}.} Either standard deviation can in principle be made arbitrarily small, but not both simultaneously. This inequality generalizes to arbitrary pairs of self-adjoint operators A {\displaystyle A} and B {\displaystyle B} . The commutator of these two operators is [ A , B ] = A B − B A , {\displaystyle [A,B]=AB-BA,} and this provides the lower bound on the product of standard deviations: σ A σ B ≥ 1 2 | ⟨ [ A , B ] ⟩ | . {\displaystyle \sigma _{A}\sigma _{B}\geq {\tfrac {1}{2}}\left|{\bigl \langle }[A,B]{\bigr \rangle }\right|.} Another consequence of the canonical commutation relation is that the position and momentum operators are Fourier transforms of each other, so that a description of an object according to its momentum is the Fourier transform of its description according to its position. The fact that dependence in momentum is the Fourier transform of the dependence in position means that the momentum operator is equivalent (up to an i / ℏ {\displaystyle i/\hbar } factor) to taking the derivative according to the position, since in Fourier analysis differentiation corresponds to multiplication in the dual space. This is why in quantum equations in position space, the momentum p i {\displaystyle p_{i}} is replaced by − i ℏ ∂ ∂ x {\displaystyle -i\hbar {\frac {\partial }{\partial x}}} , and in particular in the non-relativistic Schrödinger equation in position space the momentum-squared term is replaced with a Laplacian times − ℏ 2 {\displaystyle -\hbar ^{2}} . === Composite systems and entanglement === When two different quantum systems are considered together, the Hilbert space of the combined system is the tensor product of the Hilbert spaces of the two components. 
For example, let A and B be two quantum systems, with Hilbert spaces H A {\displaystyle {\mathcal {H}}_{A}} and H B {\displaystyle {\mathcal {H}}_{B}} , respectively. The Hilbert space of the composite system is then H A B = H A ⊗ H B . {\displaystyle {\mathcal {H}}_{AB}={\mathcal {H}}_{A}\otimes {\mathcal {H}}_{B}.} If the state for the first system is the vector ψ A {\displaystyle \psi _{A}} and the state for the second system is ψ B {\displaystyle \psi _{B}} , then the state of the composite system is ψ A ⊗ ψ B . {\displaystyle \psi _{A}\otimes \psi _{B}.} Not all states in the joint Hilbert space H A B {\displaystyle {\mathcal {H}}_{AB}} can be written in this form, however, because the superposition principle implies that linear combinations of these "separable" or "product states" are also valid. For example, if ψ A {\displaystyle \psi _{A}} and ϕ A {\displaystyle \phi _{A}} are both possible states for system A {\displaystyle A} , and likewise ψ B {\displaystyle \psi _{B}} and ϕ B {\displaystyle \phi _{B}} are both possible states for system B {\displaystyle B} , then 1 2 ( ψ A ⊗ ψ B + ϕ A ⊗ ϕ B ) {\displaystyle {\tfrac {1}{\sqrt {2}}}\left(\psi _{A}\otimes \psi _{B}+\phi _{A}\otimes \phi _{B}\right)} is a valid joint state that is not separable. States that are not separable are called entangled. If the state for a composite system is entangled, it is impossible to describe either component system A or system B by a state vector. One can instead define reduced density matrices that describe the statistics that can be obtained by making measurements on either component system alone. This necessarily causes a loss of information, though: knowing the reduced density matrices of the individual systems is not enough to reconstruct the state of the composite system. Just as density matrices specify the state of a subsystem of a larger system, analogously, positive operator-valued measures (POVMs) describe the effect on a subsystem of a measurement performed on a larger system. POVMs are extensively used in quantum information theory. As described above, entanglement is a key feature of models of measurement processes in which an apparatus becomes entangled with the system being measured. Systems interacting with the environment in which they reside generally become entangled with that environment, a phenomenon known as quantum decoherence. This can explain why, in practice, quantum effects are difficult to observe in systems larger than microscopic. === Equivalence between formulations === There are many mathematically equivalent formulations of quantum mechanics. One of the oldest and most common is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics – matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger). An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics. === Symmetries and conservation laws === The Hamiltonian H {\displaystyle H} is known as the generator of time evolution, since it defines a unitary time-evolution operator U ( t ) = e − i H t / ℏ {\displaystyle U(t)=e^{-iHt/\hbar }} for each value of t {\displaystyle t} . 
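Before moving on, the composite-system construction of the previous subsection can be made concrete numerically; the two-qubit states, the Kronecker-product representation of the tensor product, and the purity diagnostic below are illustrative choices, not taken from the text.

```python
import numpy as np

# Basis states of two two-dimensional systems A and B.
psi_A, phi_A = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi_B, phi_B = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# A separable product state and an entangled superposition.
product = np.kron(psi_A, psi_B)
entangled = (np.kron(psi_A, psi_B) + np.kron(phi_A, phi_B)) / np.sqrt(2.0)

def reduced_density_matrix_A(state):
    # Reshape the joint vector into a 2x2 array indexed by (A, B)
    # and trace out subsystem B: rho_A = m m^dagger.
    m = state.reshape(2, 2)
    return m @ m.conj().T

for label, state in (("product", product), ("entangled", entangled)):
    rho_A = reduced_density_matrix_A(state)
    purity = np.trace(rho_A @ rho_A).real   # Tr(rho_A^2): 1 for a pure state
    print(f"{label:9s} state: purity of rho_A = {purity:.2f}")
```

The product state leaves subsystem A in a pure state (purity 1), whereas the entangled state leaves it maximally mixed (purity 1/2), illustrating why an entangled component cannot be described by a state vector of its own.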
From this relation between U ( t ) {\displaystyle U(t)} and H {\displaystyle H} , it follows that any observable A {\displaystyle A} that commutes with H {\displaystyle H} will be conserved: its expectation value will not change over time.: 471  This statement generalizes, as mathematically, any Hermitian operator A {\displaystyle A} can generate a family of unitary operators parameterized by a variable t {\displaystyle t} . Under the evolution generated by A {\displaystyle A} , any observable B {\displaystyle B} that commutes with A {\displaystyle A} will be conserved. Moreover, if B {\displaystyle B} is conserved by evolution under A {\displaystyle A} , then A {\displaystyle A} is conserved under the evolution generated by B {\displaystyle B} . This implies a quantum version of the result proven by Emmy Noether in classical (Lagrangian) mechanics: for every differentiable symmetry of a Hamiltonian, there exists a corresponding conservation law. == Examples == === Free particle === The simplest example of a quantum system with a position degree of freedom is a free particle in a single spatial dimension. A free particle is one which is not subject to external influences, so that its Hamiltonian consists only of its kinetic energy: H = 1 2 m P 2 = − ℏ 2 2 m d 2 d x 2 . {\displaystyle H={\frac {1}{2m}}P^{2}=-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}.} The general solution of the Schrödinger equation is given by ψ ( x , t ) = 1 2 π ∫ − ∞ ∞ ψ ^ ( k , 0 ) e i ( k x − ℏ k 2 2 m t ) d k , {\displaystyle \psi (x,t)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }{\hat {\psi }}(k,0)e^{i(kx-{\frac {\hbar k^{2}}{2m}}t)}\mathrm {d} k,} which is a superposition of all possible plane waves e i ( k x − ℏ k 2 2 m t ) {\displaystyle e^{i(kx-{\frac {\hbar k^{2}}{2m}}t)}} , which are eigenstates of the momentum operator with momentum p = ℏ k {\displaystyle p=\hbar k} . The coefficients of the superposition are ψ ^ ( k , 0 ) {\displaystyle {\hat {\psi }}(k,0)} , which is the Fourier transform of the initial quantum state ψ ( x , 0 ) {\displaystyle \psi (x,0)} . It is not possible for the solution to be a single momentum eigenstate, or a single position eigenstate, as these are not normalizable quantum states. Instead, we can consider a Gaussian wave packet: ψ ( x , 0 ) = 1 π a 4 e − x 2 2 a {\displaystyle \psi (x,0)={\frac {1}{\sqrt[{4}]{\pi a}}}e^{-{\frac {x^{2}}{2a}}}} which has Fourier transform, and therefore momentum distribution ψ ^ ( k , 0 ) = a π 4 e − a k 2 2 . {\displaystyle {\hat {\psi }}(k,0)={\sqrt[{4}]{\frac {a}{\pi }}}e^{-{\frac {ak^{2}}{2}}}.} We see that as we make a {\displaystyle a} smaller the spread in position gets smaller, but the spread in momentum gets larger. Conversely, by making a {\displaystyle a} larger we make the spread in momentum smaller, but the spread in position gets larger. This illustrates the uncertainty principle. As we let the Gaussian wave packet evolve in time, we see that its center moves through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more and more uncertain. The uncertainty in momentum, however, stays constant. === Particle in a box === The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. 
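Returning to the free Gaussian wave packet above, its spreading and the uncertainty product can be checked numerically; the sketch below uses units with ħ = m = 1, an arbitrary value of a, and a plain FFT to apply the free-particle phase factor to each momentum component.

```python
import numpy as np

hbar = m = 1.0
a = 0.5                                    # width parameter of psi(x, 0)
n, L = 4096, 80.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)  # angular wavenumbers of the grid

psi0 = (np.pi * a) ** -0.25 * np.exp(-x**2 / (2.0 * a))

def spreads(psi):
    # Standard deviations of position and momentum for a sampled wave function.
    prob = np.abs(psi) ** 2
    prob /= prob.sum() * dx
    mean_x = np.sum(x * prob) * dx
    sigma_x = np.sqrt(np.sum((x - mean_x) ** 2 * prob) * dx)
    prob_k = np.abs(np.fft.fft(psi)) ** 2
    prob_k /= prob_k.sum()
    mean_k = np.sum(k * prob_k)
    sigma_p = hbar * np.sqrt(np.sum((k - mean_k) ** 2 * prob_k))
    return sigma_x, sigma_p

for t in (0.0, 1.0, 2.0):
    # Free evolution: each plane wave e^{ikx} picks up the phase
    # factor e^{-i hbar k^2 t / (2 m)}.
    phi_k = np.fft.fft(psi0) * np.exp(-1j * hbar * k**2 * t / (2.0 * m))
    psi_t = np.fft.ifft(phi_k)
    sx, sp = spreads(psi_t)
    print(f"t = {t:.1f}: sigma_x = {sx:.3f}, sigma_p = {sp:.3f}, "
          f"product = {sx * sp:.3f} (bound hbar/2 = {hbar / 2:.3f})")
```

At t = 0 the product σ_x σ_p saturates the bound ħ/2, since a Gaussian is a minimum-uncertainty state; at later times only σ_x grows.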
The box is defined as having zero potential energy everywhere inside a certain region, and therefore infinite potential energy everywhere outside that region.: 77–78  For the one-dimensional case in the x {\displaystyle x} direction, the time-independent Schrödinger equation may be written − ℏ 2 2 m d 2 ψ d x 2 = E ψ . {\displaystyle -{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}\psi }{dx^{2}}}=E\psi .} With the differential operator defined by p ^ x = − i ℏ d d x {\displaystyle {\hat {p}}_{x}=-i\hbar {\frac {d}{dx}}} the previous equation is evocative of the classic kinetic energy analogue, 1 2 m p ^ x 2 = E , {\displaystyle {\frac {1}{2m}}{\hat {p}}_{x}^{2}=E,} with state ψ {\displaystyle \psi } in this case having energy E {\displaystyle E} coincident with the kinetic energy of the particle. The general solutions of the Schrödinger equation for the particle in a box are ψ ( x ) = A e i k x + B e − i k x E = ℏ 2 k 2 2 m {\displaystyle \psi (x)=Ae^{ikx}+Be^{-ikx}\qquad \qquad E={\frac {\hbar ^{2}k^{2}}{2m}}} or, from Euler's formula, ψ ( x ) = C sin ⁡ ( k x ) + D cos ⁡ ( k x ) . {\displaystyle \psi (x)=C\sin(kx)+D\cos(kx).\!} The infinite potential walls of the box determine the values of C , D , {\displaystyle C,D,} and k {\displaystyle k} at x = 0 {\displaystyle x=0} and x = L {\displaystyle x=L} where ψ {\displaystyle \psi } must be zero. Thus, at x = 0 {\displaystyle x=0} , ψ ( 0 ) = 0 = C sin ⁡ ( 0 ) + D cos ⁡ ( 0 ) = D {\displaystyle \psi (0)=0=C\sin(0)+D\cos(0)=D} and D = 0 {\displaystyle D=0} . At x = L {\displaystyle x=L} , ψ ( L ) = 0 = C sin ⁡ ( k L ) , {\displaystyle \psi (L)=0=C\sin(kL),} in which C {\displaystyle C} cannot be zero as this would conflict with the postulate that ψ {\displaystyle \psi } has norm 1. Therefore, since sin ⁡ ( k L ) = 0 {\displaystyle \sin(kL)=0} , k L {\displaystyle kL} must be an integer multiple of π {\displaystyle \pi } , k = n π L n = 1 , 2 , 3 , … . {\displaystyle k={\frac {n\pi }{L}}\qquad \qquad n=1,2,3,\ldots .} This constraint on k {\displaystyle k} implies a constraint on the energy levels, yielding E n = ℏ 2 π 2 n 2 2 m L 2 = n 2 h 2 8 m L 2 . {\displaystyle E_{n}={\frac {\hbar ^{2}\pi ^{2}n^{2}}{2mL^{2}}}={\frac {n^{2}h^{2}}{8mL^{2}}}.} A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy. === Harmonic oscillator === As in the classical case, the potential for the quantum harmonic oscillator is given by: 234  V ( x ) = 1 2 m ω 2 x 2 . {\displaystyle V(x)={\frac {1}{2}}m\omega ^{2}x^{2}.} This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by ψ n ( x ) = 1 2 n n ! 
⋅ ( m ω π ℏ ) 1 / 4 ⋅ e − m ω x 2 2 ℏ ⋅ H n ( m ω ℏ x ) , {\displaystyle \psi _{n}(x)={\sqrt {\frac {1}{2^{n}\,n!}}}\cdot \left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}\cdot e^{-{\frac {m\omega x^{2}}{2\hbar }}}\cdot H_{n}\left({\sqrt {\frac {m\omega }{\hbar }}}x\right),\qquad } n = 0 , 1 , 2 , … . {\displaystyle n=0,1,2,\ldots .} where Hn are the Hermite polynomials H n ( x ) = ( − 1 ) n e x 2 d n d x n ( e − x 2 ) , {\displaystyle H_{n}(x)=(-1)^{n}e^{x^{2}}{\frac {d^{n}}{dx^{n}}}\left(e^{-x^{2}}\right),} and the corresponding energy levels are E n = ℏ ω ( n + 1 2 ) . {\displaystyle E_{n}=\hbar \omega \left(n+{1 \over 2}\right).} This is another example illustrating the discretization of energy for bound states. === Mach–Zehnder interferometer === The Mach–Zehnder interferometer (MZI) illustrates the concepts of superposition and interference with linear algebra in dimension 2, rather than differential equations. It can be seen as a simplified version of the double-slit experiment, but it is of interest in its own right, for example in the delayed choice quantum eraser, the Elitzur–Vaidman bomb tester, and in studies of quantum entanglement. We can model a photon going through the interferometer by considering that at each point it can be in a superposition of only two paths: the "lower" path which starts from the left, goes straight through both beam splitters, and ends at the top, and the "upper" path which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state of the photon is therefore a vector ψ ∈ C 2 {\displaystyle \psi \in \mathbb {C} ^{2}} that is a superposition of the "lower" path ψ l = ( 1 0 ) {\displaystyle \psi _{l}={\begin{pmatrix}1\\0\end{pmatrix}}} and the "upper" path ψ u = ( 0 1 ) {\displaystyle \psi _{u}={\begin{pmatrix}0\\1\end{pmatrix}}} , that is, ψ = α ψ l + β ψ u {\displaystyle \psi =\alpha \psi _{l}+\beta \psi _{u}} for complex α , β {\displaystyle \alpha ,\beta } . In order to respect the postulate that ⟨ ψ , ψ ⟩ = 1 {\displaystyle \langle \psi ,\psi \rangle =1} we require that | α | 2 + | β | 2 = 1 {\displaystyle |\alpha |^{2}+|\beta |^{2}=1} . Both beam splitters are modelled as the unitary matrix B = 1 2 ( 1 i i 1 ) {\displaystyle B={\frac {1}{\sqrt {2}}}{\begin{pmatrix}1&i\\i&1\end{pmatrix}}} , which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of 1 / 2 {\displaystyle 1/{\sqrt {2}}} , or be reflected to the other path with a probability amplitude of i / 2 {\displaystyle i/{\sqrt {2}}} . The phase shifter on the upper arm is modelled as the unitary matrix P = ( 1 0 0 e i Δ Φ ) {\displaystyle P={\begin{pmatrix}1&0\\0&e^{i\Delta \Phi }\end{pmatrix}}} , which means that if the photon is on the "upper" path it will gain a relative phase of Δ Φ {\displaystyle \Delta \Phi } , and it will stay unchanged if it is in the lower path. 
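Before deriving the closed-form probabilities, the matrices just defined can be combined numerically. The short sketch below (illustrative only; the phase values are arbitrary) applies the first beam splitter, the phase shifter and the second beam splitter to the lower-path state and prints the probabilities of detection at each output, anticipating the expressions derived next.

```python
import numpy as np

B = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)            # beam splitter

def P(dphi):
    """Phase shifter acting on the 'upper' path."""
    return np.array([[1, 0],
                     [0, np.exp(1j * dphi)]])

psi_l = np.array([1, 0])    # "lower" path
psi_u = np.array([0, 1])    # "upper" path

for dphi in (0.0, np.pi / 2, np.pi):
    out = B @ P(dphi) @ B @ psi_l
    p_u = abs(np.vdot(psi_u, out)) ** 2          # detected at the right
    p_l = abs(np.vdot(psi_l, out)) ** 2          # detected at the top
    print(dphi, p_u, p_l)                        # matches cos^2 and sin^2 of dphi/2
```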
A photon that enters the interferometer from the left will then be acted upon with a beam splitter B {\displaystyle B} , a phase shifter P {\displaystyle P} , and another beam splitter B {\displaystyle B} , and so end up in the state B P B ψ l = i e i Δ Φ / 2 ( − sin ⁡ ( Δ Φ / 2 ) cos ⁡ ( Δ Φ / 2 ) ) , {\displaystyle BPB\psi _{l}=ie^{i\Delta \Phi /2}{\begin{pmatrix}-\sin(\Delta \Phi /2)\\\cos(\Delta \Phi /2)\end{pmatrix}},} and the probabilities that it will be detected at the right or at the top are given respectively by p ( u ) = | ⟨ ψ u , B P B ψ l ⟩ | 2 = cos 2 ⁡ Δ Φ 2 , {\displaystyle p(u)=|\langle \psi _{u},BPB\psi _{l}\rangle |^{2}=\cos ^{2}{\frac {\Delta \Phi }{2}},} p ( l ) = | ⟨ ψ l , B P B ψ l ⟩ | 2 = sin 2 ⁡ Δ Φ 2 . {\displaystyle p(l)=|\langle \psi _{l},BPB\psi _{l}\rangle |^{2}=\sin ^{2}{\frac {\Delta \Phi }{2}}.} One can therefore use the Mach–Zehnder interferometer to estimate the phase shift by estimating these probabilities. It is interesting to consider what would happen if the photon were definitely in either the "lower" or "upper" paths between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases, there will be no interference between the paths anymore, and the probabilities are given by p ( u ) = p ( l ) = 1 / 2 {\displaystyle p(u)=p(l)=1/2} , independently of the phase Δ Φ {\displaystyle \Delta \Phi } . From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it is in a genuine quantum superposition of the two paths. == Applications == Quantum mechanics has had enormous success in explaining many of the features of our universe, with regard to small-scale and discrete quantities and interactions which cannot be explained by classical methods. Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Solid-state physics and materials science are dependent upon quantum mechanics. In many aspects, modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA. == Relation to other scientific theories == === Classical mechanics === The rules of quantum mechanics assert that the state space of a system is a Hilbert space and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system, a necessary step in making physical predictions. An important guide for making these choices is the correspondence principle, a heuristic which states that the predictions of quantum mechanics reduce to those of classical mechanics in the regime of large quantum numbers. 
One can also start from an established classical model of a particular system, and then try to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. This approach is known as quantization.: 299  When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.: 234  Complications arise with chaotic systems, which do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.: 353  Quantum decoherence is a mechanism through which quantum systems lose coherence, and thus become incapable of displaying many typically quantum effects: quantum superpositions become simply probabilistic mixtures, and quantum entanglement becomes simply classical correlations.: 687–730  Quantum coherence is not typically evident at macroscopic scales, though at temperatures approaching absolute zero quantum behavior may manifest macroscopically. Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics. === Special relativity and electrodynamics === Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. Quantum electrodynamics is, along with general relativity, one of the most accurate physical theories ever devised. The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been used since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical − e 2 / ( 4 π ϵ 0 r ) {\displaystyle \textstyle -e^{2}/(4\pi \epsilon _{_{0}}r)} Coulomb potential.: 285  Likewise, in a Stern–Gerlach experiment, a charged particle is modeled as a quantum system, while the background magnetic field is described classically.: 26  This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles. 
Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg. === Relation to general relativity === Even though the predictions of both quantum theory and general relativity have been supported by rigorous and repeated empirical evidence, their abstract formalisms contradict each other and they have proven extremely difficult to incorporate into one consistent, cohesive model. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. This TOE would combine not only the models of subatomic physics but also derive the four fundamental forces of nature from a single force or phenomenon. One proposal for doing so is string theory, which posits that the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries gravitational force. Another popular theory is loop quantum gravity (LQG), which describes quantum properties of gravity and is thus a theory of quantum spacetime. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. This theory describes space as an extremely fine fabric "woven" of finite loops called spin networks. The evolution of a spin network over time is called a spin foam. The characteristic length scale of a spin foam is the Planck length, approximately 1.616×10−35 m, and so lengths shorter than the Planck length are not physically meaningful in LQG. == Philosophical implications == Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. The arguments centre on the probabilistic nature of quantum mechanics, the difficulties with wavefunction collapse and the related measurement problem, and quantum nonlocality. Perhaps the only consensus that exists about these issues is that there is no consensus. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics." According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics." The views of Niels Bohr, Werner Heisenberg and other physicists are often grouped together as the "Copenhagen interpretation". 
According to these views, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but is instead a final renunciation of the classical idea of "causality". Bohr in particular emphasized that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations. Copenhagen-type interpretations were adopted by Nobel laureates in quantum physics, including Bohr, Heisenberg, Schrödinger, Feynman, and Zeilinger as well as 21st-century researchers in quantum foundations. Albert Einstein, himself one of the founders of quantum theory, was troubled by its apparent failure to respect some cherished metaphysical principles, such as determinism and locality. Einstein's long-running exchanges with Bohr about the meaning and status of quantum mechanics are now known as the Bohr–Einstein debates. Einstein believed that underlying quantum mechanics must be a theory that explicitly forbids action at a distance. He argued that quantum mechanics was incomplete, a theory that was valid but not fundamental, analogous to how thermodynamics is valid, but the fundamental theory behind it is statistical mechanics. In 1935, Einstein and his collaborators Boris Podolsky and Nathan Rosen published an argument that the principle of locality implies the incompleteness of quantum mechanics, a thought experiment later termed the Einstein–Podolsky–Rosen paradox. In 1964, John Bell showed that EPR's principle of locality, together with determinism, was actually incompatible with quantum mechanics: they implied constraints on the correlations produced by distant systems, now known as Bell inequalities, that can be violated by entangled particles. Since then several experiments have been performed to test these correlations, with the result that they do in fact violate Bell inequalities, and thus falsify the conjunction of locality with determinism. Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position, that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. This solves the measurement problem. Everett's many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes. This is a consequence of removing the axiom of the collapse of the wave packet. All possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Several attempts have been made to make sense of this and derive the Born rule, with no consensus on whether they have been successful.
Relational quantum mechanics appeared in the late 1990s as a modern derivative of Copenhagen-type ideas, and QBism was developed some years later. == History == Quantum mechanics was developed in the early decades of the 20th century, driven by the need to explain phenomena that, in some cases, had been observed in earlier times. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. In 1803 English polymath Thomas Young described the famous double-slit experiment. This experiment played a major role in the general acceptance of the wave theory of light. During the early 19th century, chemical research by John Dalton and Amedeo Avogadro lent weight to the atomic theory of matter, an idea that James Clerk Maxwell, Ludwig Boltzmann and others built upon to establish the kinetic theory of gases. The successes of kinetic theory gave further credence to the idea that matter is composed of atoms, yet the theory also had shortcomings that would only be resolved by the development of quantum mechanics. While the early conception of atoms from Greek philosophy had been that they were indivisible units – the word "atom" deriving from the Greek for 'uncuttable' – the 19th century saw the formulation of hypotheses about subatomic structure. One important discovery in that regard was Michael Faraday's 1838 observation of a glow caused by an electrical discharge inside a glass tube containing gas at low pressure. Julius Plücker, Johann Wilhelm Hittorf and Eugen Goldstein carried on and improved upon Faraday's work, leading to the identification of cathode rays, which J. J. Thomson found to consist of subatomic particles that would be called electrons. The black-body radiation problem was discovered by Gustav Kirchhoff in 1859. In 1900, Max Planck proposed the hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets), yielding a calculation that precisely matched the observed patterns of black-body radiation. The word quantum derives from the Latin, meaning "how great" or "how much". According to Planck, quantities of energy could be thought of as divided into "elements" whose size (E) would be proportional to their frequency (ν): E = h ν {\displaystyle E=h\nu \ } , where h is the Planck constant. Planck cautiously insisted that this was only an aspect of the processes of absorption and emission of radiation and was not the physical reality of the radiation. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. Niels Bohr then developed Planck's ideas about radiation into a model of the hydrogen atom that successfully predicted the spectral lines of hydrogen. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete amount of energy that depends on its frequency. In his paper "On the Quantum Theory of Radiation", Einstein expanded on the interaction between energy and matter to explain the absorption and emission of energy by atoms. 
Although overshadowed at the time by his general theory of relativity, this paper articulated the mechanism underlying the stimulated emission of radiation, which became the basis of the laser. This phase is known as the old quantum theory. Never complete or self-consistent, the old quantum theory was rather a set of heuristic corrections to classical mechanics. The theory is now understood as a semi-classical approximation to modern quantum mechanics. Notable results from this period include, in addition to the work of Planck, Einstein and Bohr mentioned above, Einstein and Peter Debye's work on the specific heat of solids, Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, and Arnold Sommerfeld's extension of the Bohr model to include special-relativistic effects. In the mid-1920s quantum mechanics was developed to become the standard formulation for atomic physics. In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics. Born introduced the probabilistic interpretation of Schrödinger's wave function in July 1926. Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927. By 1930, quantum mechanics had been further unified and formalized by David Hilbert, Paul Dirac and John von Neumann with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the 'observer'. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors and superfluids.
Wikipedia/Quantum_physics
Potential graphene applications include lightweight, thin, and flexible electric/photonics circuits, solar cells, and various medical, chemical and industrial processes enhanced or enabled by the use of new graphene materials, and favoured by massive cost decreases in graphene production. == Medicine == Researchers in 2011 discovered the ability of graphene to accelerate the osteogenic differentiation of human mesenchymal stem cells without the use of biochemical inducers. In 2015 researchers used graphene to create biosensors with epitaxial graphene on silicon carbide. The sensors bind to 8-hydroxydeoxyguanosine (8-OHdG) and are capable of selective binding with antibodies. The presence of 8-OHdG in blood, urine and saliva is commonly associated with DNA damage. Elevated levels of 8-OHdG have been linked to increased risk of several cancers. By the next year, a commercial version of a graphene biosensor was being used by biology researchers as a protein binding sensor platform. In 2016 researchers revealed that uncoated graphene can be used as a neuro-interface electrode without altering or damaging properties such as signal strength or formation of scar tissue. Graphene electrodes in the body are significantly more stable than electrodes of tungsten or silicon because of properties such as flexibility, bio-compatibility and conductivity. === Tissue engineering === Graphene has been investigated for tissue engineering. It has been used as a reinforcing agent to improve the mechanical properties of biodegradable polymeric nanocomposites for bone tissue engineering applications. Dispersion of a low weight percentage of graphene (≈0.02 wt.%) increased the compressive and flexural mechanical properties of polymeric nanocomposites. The addition of graphene nanoparticles in the polymer matrix led to improvements in the crosslinking density of the nanocomposite and better load transfer from the polymer matrix to the underlying nanomaterial, thereby increasing the mechanical properties. === Contrast agents, bioimaging === Functionalized and surfactant dispersed graphene solutions have been designed as blood pool MRI contrast agents. Further, iodine- and manganese-incorporating graphene nanoparticles have served as multimodal MRI-computed tomography (CT) contrast agents. Graphene micro- and nano-particles have served as contrast agents for photoacoustic and thermoacoustic tomography. Graphene has also been reported to be efficiently taken up by cancerous cells, thereby enabling the design of drug delivery agents for cancer therapy. Graphene nanoparticles of various morphologies such as graphene nanoribbons, graphene nanoplatelets and graphene nano-onions are non-toxic at low concentrations and do not alter stem cell differentiation, suggesting that they may be safe to use for biomedical applications. === Polymerase chain reaction === Graphene is reported to have enhanced PCR by increasing the yield of DNA product. Experiments revealed that graphene's thermal conductivity could be the main factor behind this result. Graphene yields DNA product equivalent to a positive control with up to a 65% reduction in PCR cycles. === Devices === Graphene's modifiable chemistry, large surface area per unit volume, atomic thickness and molecularly gateable structure make antibody-functionalized graphene sheets excellent candidates for mammalian and microbial detection and diagnosis devices.
Graphene is so thin that water has near-perfect wetting transparency, which is an important property, particularly in developing bio-sensor applications. This means that a sensor coated in graphene has as much contact with an aqueous system as an uncoated sensor, while remaining protected mechanically from its environment. Integration of graphene (thickness of 0.34 nm) layers as nanoelectrodes into a nanopore can potentially solve a bottleneck for nanopore-based single-molecule DNA sequencing. On November 20, 2013, the Bill & Melinda Gates Foundation awarded $100,000 'to develop new elastic composite materials for condoms containing nanomaterials like graphene'. In 2014, graphene-based, transparent (across infrared to ultraviolet frequencies), flexible, implantable medical sensor microarrays were announced that allow the viewing of brain tissue hidden by implants. Optical transparency was greater than 90%. Applications demonstrated include optogenetic activation of focal cortical areas, in vivo imaging of cortical vasculature via fluorescence microscopy and 3D optical coherence tomography. === Drug delivery === Researchers at Monash University discovered that a sheet of graphene oxide can be transformed into liquid crystal droplets spontaneously—like a polymer—simply by placing the material in a solution and manipulating the pH. The graphene droplets change their structure in the presence of an external magnetic field. This finding raises the possibility of carrying a drug in graphene droplets and releasing the drug upon reaching the targeted tissue by making the droplets change shape in a magnetic field. Another possible application is in disease detection if graphene is found to change shape in the presence of certain disease markers such as toxins. A graphene 'flying carpet' was demonstrated to deliver two anti-cancer drugs sequentially to lung tumor cells (A549 cells) in a mouse model. Doxorubicin (DOX) is embedded onto the graphene sheet, while the molecules of tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) are linked to the nanostructure via short peptide chains. Injected intravenously, the graphene strips with the drug payload preferentially concentrate at the cancer cells due to common blood vessel leakage around the tumor. Receptors on the cancer cell membrane bind TRAIL, and cell surface enzymes clip the peptide, thus releasing the drug onto the cell surface. Without the bulky TRAIL, the graphene strips with the embedded DOX are swallowed into the cells. The intracellular acidic environment promotes DOX's release from graphene. TRAIL on the cell surface triggers apoptosis while DOX attacks the nucleus. These two drugs work synergistically and were found to be more effective than either drug alone. The development of nanotechnology and molecular biology has enabled nanomaterials with specific properties that can overcome the weaknesses of traditional disease diagnostic and therapeutic procedures. In recent years, more attention has been devoted to the design and development of new methods for realizing sustained release of diverse drugs.
Each drug has a plasma level above which it is toxic and below which it is ineffective, and in conventional drug delivery the drug concentration in the blood rises quickly and then declines. The main aim of an ideal drug delivery system (DDS) is therefore to maintain the drug within a desired therapeutic range after a single dose, and/or to target the drug to a specific region while simultaneously lowering the systemic levels of the drug. Graphene-based materials such as graphene oxide (GO) have considerable potential for several biological applications including the development of new drug release systems. GO bears an abundance of functional groups, such as hydroxyl, epoxy, and carboxyl, on its basal plane and edges that can also be used to immobilize or load various biomolecules for biomedical applications. On the other hand, biopolymers have frequently been used as raw materials for designing drug delivery formulations owing to their excellent properties, such as non-toxicity, biocompatibility, biodegradability and environmental sensitivity. Protein therapeutics possess advantages over small molecule approaches, including high target specificity and low off-target effects on normal biological processes. Human serum albumin (HSA) is one of the most abundant blood proteins. It serves as a transport protein for several endogenous and exogenous ligands as well as various drug molecules. HSA nanoparticles have long been the center of attention in the pharmaceutical industry due to their ability to bind to various drug molecules, high storage stability and in vivo applicability, non-toxicity and lack of antigenicity, biodegradability, reproducibility, scale-up of the production process and better control over release properties. In addition, significant amounts of drugs can be incorporated into the particle matrix because of the large number of drug binding sites on the albumin molecule. Therefore, the combination of HSA-NPs and GO-NSs could be useful for reducing the cytotoxicity of GO-NSs and for enhancing drug loading and sustained drug release in cancer therapy. === Biomicrorobotics === Researchers demonstrated a nanoscale biomicrorobot (or cytobot) made by cladding a living endospore cell with graphene quantum dots. The device acted as a humidity sensor. === Testing === In 2014 a graphene-based blood glucose testing product was announced. === Biosensors === Graphene-based FRET biosensors can detect DNA and the unwinding of DNA using different probes. === Gene editing === Researchers at Binghamton University have developed a methodology to utilize graphene as a DNA polymerase buffer to facilitate direct manipulation of nucleotides. == Electronics == Graphene has a high carrier mobility and low noise, allowing it to be used as the channel in a field-effect transistor. Unmodified graphene does not have an energy band gap, making it unsuitable for digital electronics. However, modifications (e.g. graphene nanoribbons) have created potential uses in various areas of electronics. === Transistors === Both chemically controlled and voltage controlled graphene transistors have been built. Graphene-based transistors could be much thinner than modern silicon devices, allowing faster and smaller configurations. Graphene exhibits a pronounced response to perpendicular external electric fields, potentially forming field-effect transistors (FET), but the absence of a band gap fundamentally limits its on-off conductance ratio to less than ~30 at room temperature.
A 2006 paper proposed an all-graphene planar FET with side gates. The devices showed changes of 2% at cryogenic temperatures. The first top-gated FET (on–off ratio of <2) was demonstrated in 2007. Graphene nanoribbons may prove generally capable of replacing silicon as a semiconductor. A patent for graphene-based electronics was issued in 2006. In 2008, researchers at MIT Lincoln Lab produced hundreds of transistors on a single chip and in 2009, very high frequency transistors were produced at Hughes Research Laboratories. A 2008 paper demonstrated a switching effect based on reversible chemical modification of the graphene layer that gives an on–off ratio of greater than six orders of magnitude. These reversible switches could potentially be employed in nonvolatile memories. IBM announced in December 2008 graphene transistors operating at GHz frequencies. In 2009, researchers demonstrated four different types of logic gates, each composed of a single graphene transistor. In May 2009, an n-type transistor complemented the prior p-type graphene transistors. A functional graphene integrated circuit was demonstrated—a complementary inverter consisting of one p- and one n-type transistor. However, this inverter suffered from low voltage gain. Typically, the amplitude of the output signal is about 40 times less than that of the input signal. Moreover, none of these circuits operated at frequencies higher than 25 kHz. In the same year, tight-binding numerical simulations demonstrated that the band-gap induced in graphene bilayer field effect transistors is not sufficiently large for high-performance transistors for digital applications, but can be sufficient for ultra-low voltage applications, when exploiting a tunnel-FET architecture. In February 2010, researchers announced graphene transistors with a cutoff frequency of 100 gigahertz, far exceeding prior rates, and exceeding the speed of silicon transistors with an equal gate length. The 240 nm devices were made with conventional silicon-manufacturing equipment. According to a January 2010 report, graphene was epitaxially grown on SiC in a quantity and with quality suitable for mass production of integrated circuits. At high temperatures, the quantum Hall effect could be measured. IBM built 'processors' using 100 GHz transistors on 2-inch (51 mm) graphene sheets. In June 2011, IBM researchers announced the first graphene-based wafer-scale integrated circuit, a broadband radio mixer. The circuit handled frequencies up to 10 GHz. Its performance was unaffected by temperatures up to 127 °C. In November researchers used 3D printing (additive manufacturing) to fabricate devices. In 2013, researchers demonstrated graphene's high mobility in a detector that allows broad band frequency selectivity ranging from the THz to IR region (0.76–33 THz). A separate group created a terahertz-speed transistor with bistable characteristics, which means that the device can spontaneously switch between two electronic states. The device consists of two layers of graphene separated by an insulating layer of boron nitride a few atomic layers thick. Electrons move through this barrier by quantum tunneling. These new transistors exhibit negative differential conductance, whereby the same electric current flows at two different applied voltages. In June, an 8-transistor, 1.28 GHz ring oscillator circuit was described.
The negative differential resistance experimentally observed in graphene field-effect transistors of conventional design allows for construction of viable non-Boolean computational architectures. The negative differential resistance—observed under certain biasing schemes—is an intrinsic property of graphene resulting from its symmetric band structure. The results present a conceptual change in graphene research and indicate an alternative route for graphene applications in information processing. In 2013 researchers created transistors printed on flexible plastic that operate at 25 gigahertz, sufficient for communications circuits, and that can be fabricated at scale. The researchers first fabricated non-graphene-containing structures—the electrodes and gates—on plastic sheets. Separately, they grew large graphene sheets on metal, then peeled them and transferred them to the plastic. Finally, they topped the sheet with a waterproof layer. The devices work after being soaked in water, and were flexible enough to be folded. In 2015 researchers devised a digital switch by perforating a graphene sheet with boron-nitride nanotubes that exhibited a switching ratio of 10⁵ at a turn-on voltage of 0.5 V. Density functional theory suggested that the behavior came from the mismatch of the density of states. ==== Single atom ==== In 2008, a one-atom-thick, ten-atom-wide transistor was made of graphene. In 2022, researchers built a single-atom graphene transistor measuring 0.34 nanometers in the on state, smaller than a related device that used carbon nanotubes instead of graphene. The graphene formed the gate. Silicon dioxide was used as the base. The graphene sheet was formed via chemical vapor deposition, laid on top of the SiO2. A sheet of aluminum oxide was laid atop the graphene. The aluminum oxide and SiO2 sandwiching the graphene act as insulators. They then etched into the sandwiched materials, cutting away the graphene and aluminum oxide to create a step that exposed the edge of the graphene. They then added layers of hafnium oxide and molybdenum disulfide (another 2D material) to the top, side, and bottom of the step. Electrodes were then added to the top and bottom as source and drain. They call this construction a "sidewall transistor". The on/off ratio reached 1.02 × 10⁵ and subthreshold swing values were 117 mV per decade. ==== Trilayer ==== An electric field can change trilayer graphene's crystal structure, transforming its behavior from metal-like into semiconductor-like. A sharp metal scanning tunneling microscopy tip was able to move the domain border between the upper and lower graphene configurations. One side of the material behaves as a metal, while the other side behaves as a semiconductor. Trilayer graphene can be stacked in either Bernal or rhombohedral configurations, which can exist in a single flake. The two domains are separated by a precise boundary at which the middle layer is strained to accommodate the transition from one stacking pattern to the other. Silicon transistors are either p-type or n-type, whereas graphene can operate as both. This lowers costs and increases versatility. The technique provides the basis for a field-effect transistor. In trilayer graphene, the two stacking configurations exhibit different electronic properties. The region between them consists of a localized strain soliton where the carbon atoms of one graphene layer shift by the carbon–carbon bond distance.
The free-energy difference between the two stacking configurations scales quadratically with electric field, favoring rhombohedral stacking as the electric field increases. This ability to control the stacking order opens the way to new devices that combine structural and electrical properties. === Transparent conducting electrodes === Graphene's high electrical conductivity and high optical transparency make it a candidate for transparent conducting electrodes, required for such applications as touchscreens, liquid crystal displays, inorganic photovoltaic cells, organic photovoltaic cells, and organic light-emitting diodes. In particular, graphene's mechanical strength and flexibility are advantageous compared to indium tin oxide, which is brittle. Graphene films may be deposited from solution over large areas. Large-area, continuous, transparent and highly conducting few-layered graphene films were produced by chemical vapor deposition and used as anodes for application in photovoltaic devices. A power conversion efficiency (PCE) up to 1.7% was demonstrated, which is 55.2% of the PCE of a control device based on indium tin oxide. However, the main disadvantage of this fabrication method is poor substrate bonding, which eventually leads to poor cyclic stability and high electrode resistivity. Organic light-emitting diodes (OLEDs) with graphene anodes have been demonstrated. The device was formed by solution-processed graphene on a quartz substrate. The electronic and optical performance of graphene-based devices is similar to that of devices made with indium tin oxide. In 2017 OLED electrodes were produced by CVD on a copper substrate. A carbon-based device called a light-emitting electrochemical cell (LEC) was demonstrated with chemically-derived graphene as the cathode and the conductive polymer poly(3,4-ethylenedioxythiophene) (PEDOT) as the anode. Unlike its predecessors, this device contains only carbon-based electrodes, with no metal. In 2014 a prototype graphene-based flexible display was demonstrated. In 2016 researchers demonstrated a display that used interferometric modulation to control colors, dubbed a "graphene balloon device", made of silicon containing 10 μm circular cavities covered by two graphene sheets. The degree of curvature of the sheets above each cavity defines the color emitted. The device exploits the phenomenon known as Newton's rings, created by interference between light waves bouncing off the bottom of the cavity and the (transparent) material. Increasing the distance between the silicon and the membrane increased the wavelength of the light. The approach is used in colored e-reader displays and smartwatches, such as the Qualcomm Toq. Those devices use silicon materials instead of graphene. Graphene reduces power requirements. === Frequency multiplier === In 2009, researchers built experimental graphene frequency multipliers that take an incoming signal of a certain frequency and output a signal at a multiple of that frequency. === Optoelectronics === Graphene strongly interacts with photons, with the potential for direct band-gap creation. This is promising for optoelectronic and nanophotonic devices. Light interaction arises due to the Van Hove singularity. Graphene displays different time scales in response to photon interaction, ranging from femtoseconds (ultra-fast) to picoseconds. Potential uses include transparent films, touch screens and light emitters or as a plasmonic device that confines light and alters wavelengths.
=== Hall effect sensors === Due to its extremely high electron mobility, graphene may be used to produce highly sensitive Hall effect sensors. A potential application of such sensors is in DC current transformers for special applications. Hall sensors with record-high sensitivity were reported in April 2015; these sensors were twice as sensitive as existing Si-based sensors. === Quantum dots === Graphene quantum dots (GQDs) have all dimensions less than 10 nm. Their size and edge crystallography govern their electrical, magnetic, optical, and chemical properties. GQDs can be produced via graphite nanotomy or via bottom-up, solution-based routes (Diels-Alder, cyclotrimerization and/or cyclodehydrogenation reactions). GQDs with controlled structure can be incorporated into applications in electronics, optoelectronics and electromagnetics. Quantum confinement can be created by changing the width of graphene nanoribbons (GNRs) at selected points along the ribbon. GQDs are also studied as catalysts for fuel cells. === Organic electronics === A semiconducting polymer (poly(3-hexylthiophene)) placed on top of single-layer graphene conducts electric charge vertically better than on a thin layer of silicon. A 50 nm thick polymer film conducted charge about 50 times better than a 10 nm thick film, potentially because the former consists of a mosaic of variably-oriented crystallites that forms a continuous pathway of interconnected crystals. In a thin film or on silicon, plate-like crystallites are oriented parallel to the graphene layer. Uses include solar cells. === Spintronics === Large-area graphene created by chemical vapor deposition (CVD) and layered on a SiO2 substrate can preserve electron spin over an extended period and communicate it. Spintronics varies electron spin rather than current flow. The spin signal is preserved over a nanosecond in graphene channels that are up to 16 micrometers long. Pure spin transport and precession extended over 16 μm channel lengths with a spin lifetime of 1.2 ns and a spin diffusion length of ≈6 μm at room temperature. Spintronics is used in disk drives for data storage and in magnetic random-access memory. Electronic spin is generally short-lived and fragile, but the spin-based information in current devices needs to travel only a few nanometers. However, in processors, the information must cross several tens of micrometers with aligned spins. Graphene is the only known candidate for such behavior. === Conductive ink === In 2012 Vorbeck Materials started shipping the Siren anti-theft packaging device, which uses their graphene-based Vor-Ink circuitry to replace the metal antenna and external wiring to an RFID chip. This was the world's first commercially available product based on graphene. == Light processing == === Optical modulator === When the Fermi level of graphene is tuned, its optical absorption can be changed. In 2011, researchers reported the first graphene-based optical modulator. Operating at 1.2 GHz without a temperature controller, this modulator has a broad bandwidth (from 1.3 to 1.6 μm) and small footprint (~25 μm²). A Mach-Zehnder modulator based on a hybrid graphene-silicon waveguide has been demonstrated recently, which can process signals nearly chirp-free. An extinction ratio of up to 34.7 dB and a minimum chirp parameter of −0.006 were obtained. Its insertion loss is roughly −1.37 dB.
=== Ultraviolet lens === A hyperlens is a real-time super-resolution lens that can transform evanescent waves into propagating waves and thus break the diffraction limit. In 2016 it was shown that a hyperlens based on dielectric layered graphene and hexagonal boron nitride (h-BN) can surpass metal designs. Based on its anisotropic properties, flat and cylindrical hyperlenses were numerically verified with layered graphene at 1200 THz and layered h-BN at 1400 THz, respectively. In 2016 researchers demonstrated a 1-nm thick graphene microlens that can image objects the size of a single bacterium. The lens was created by spraying a sheet of graphene oxide solution, then molding the lens using a laser beam. It can resolve objects as small as 200 nanometers, and see into the near infrared. It breaks the diffraction limit and achieves a focal length less than half the wavelength of light. Possible applications include thermal imaging for mobile phones, endoscopes, nanosatellites and photonic chips in supercomputers and superfast broadband distribution. === Infrared light detection === Graphene reacts to the infrared spectrum at room temperature, albeit with sensitivity 100 to 1000 times too low for practical applications. However, two graphene layers separated by an insulator allowed an electric field produced by holes left by photo-freed electrons in one layer to affect a current running through the other layer. The process produces little heat, making it suitable for use in night-vision optics. The sandwich is thin enough to be integrated in handheld devices, eyeglass-mounted computers and even contact lenses. === Photodetector === A graphene/n-type silicon heterojunction has been demonstrated to exhibit strong rectifying behavior and high photoresponsivity. By introducing a thin interfacial oxide layer, the dark current of the graphene/n-Si heterojunction has been reduced by two orders of magnitude at zero bias. At room temperature, the graphene/n-Si photodetector with interfacial oxide exhibits a specific detectivity of up to 5.77 × 10¹³ cm Hz¹/² W⁻¹ at the peak wavelength of 890 nm in vacuum. In addition, the improved graphene/n-Si heterojunction photodetectors possess a high responsivity of 0.73 A W⁻¹ and a high photo-to-dark current ratio of ≈10⁷. These results demonstrate that graphene/Si heterojunction with interfacial oxide is promising for the development of high detectivity photodetectors. Recently, a graphene/Si Schottky photodetector with record-fast response speed (<25 ns) from wavelengths of 350 nm to 1100 nm was presented. The photodetectors exhibit excellent long-term stability even when stored in air for more than 2 years. These results not only advance the development of high-performance photodetectors based on the graphene/Si Schottky junction, but also have important implications for mass-production of graphene-based photodetector array devices for cost-effective environmental monitoring, medical imaging, free-space communications, photoelectric smart-tracking, and integration with CMOS circuits for emerging internet-of-things applications. == Energy == === Generation === ==== Ethanol distillation ==== Graphene oxide membranes allow water vapor to pass through, but are impermeable to other liquids and gases. This phenomenon has been used for further distilling of vodka to higher alcohol concentrations, in a room-temperature laboratory, without the application of heat or vacuum as used in traditional distillation methods.
==== Solar cells ==== Graphene has been used on different substrates such as Si, CdS and CdSe to produce Schottky junction solar cells. Through the properties of graphene, such as its work function, solar cell efficiency can be optimized. An advantage of graphene electrodes is the ability to produce inexpensive Schottky junction solar cells. ===== Charge conductor ===== Graphene solar cells use graphene's unique combination of high electrical conductivity and optical transparency. This material absorbs only 2.6% of green light and 2.3% of red light. Graphene can be assembled into a film electrode with low roughness. These films must be made thicker than one atomic layer to obtain useful sheet resistances. This added resistance can be offset by incorporating conductive filler materials, such as a silica matrix. Reduced conductivity can be offset by attaching large aromatic molecules such as pyrene-1-sulfonic acid sodium salt (PyS) and the disodium salt of 3,4,9,10-perylenetetracarboxylic diimide bisbenzenesulfonic acid (PDI). These molecules, under high temperatures, facilitate better π-conjugation of the graphene basal plane. ===== Light collector ===== Using graphene as a photoactive material requires its bandgap to be 1.4–1.9 eV. In 2010, single cell efficiencies of nanostructured graphene-based PVs of over 12% were achieved. According to P. Mukhopadhyay and R. K. Gupta, organic photovoltaics could be "devices in which semiconducting graphene is used as the photoactive material and metallic graphene is used as the conductive electrodes". In 2008, chemical vapor deposition produced graphene sheets by depositing a graphene film made from methane gas on a nickel plate. A protective layer of thermoplastic is laid over the graphene layer and the nickel underneath is then dissolved in an acid bath. The final step is to attach the plastic-coated graphene to a flexible polymer sheet, which can then be incorporated into a PV cell. Graphene/polymer sheets range in size up to 150 square centimeters and can be used to create dense arrays. Silicon generates only one current-driving electron for each photon it absorbs, while graphene can produce multiple electrons. Solar cells made with graphene could offer 60% conversion efficiency. ==== Electrode ==== In 2010, researchers first reported creating a graphene-silicon heterojunction solar cell, where graphene served as a transparent electrode and introduced a built-in electric field near the interface between the graphene and n-type silicon to help collect charge carriers. In 2012 researchers reported efficiency of 8.6% for a prototype consisting of a silicon wafer coated with trifluoromethanesulfonyl-amide (TFSA) doped graphene. Doping increased efficiency to 9.6% in 2013. In 2015 researchers reported efficiency of 15.6% by choosing the optimal oxide thickness on the silicon. This combination of carbon materials with traditional silicon semiconductors to fabricate solar cells has been a promising field of carbon science. In 2013, another team reported 15.6 percent by combining titanium oxide and graphene as a charge collector and perovskite as a sunlight absorber. The device is manufacturable at temperatures under 150 °C (302 °F) using solution-based deposition. This lowers production costs and offers the potential of using flexible plastics. In 2015, researchers developed a prototype cell that used semitransparent perovskite with graphene electrodes. The design allowed light to be absorbed from both sides.
It offered efficiency of around 12 percent with estimated production costs of less than $0.06/watt. The graphene was coated with the conductive polymer PEDOT:PSS (poly(3,4-ethylenedioxythiophene) polystyrene sulfonate). Multilayering the graphene via CVD created transparent electrodes with reduced sheet resistance. Performance was further improved by increasing contact between the top electrodes and the hole transport layer. ==== Fuel cells ==== Appropriately perforated graphene (and hexagonal boron nitride, hBN) can allow protons to pass through it, offering the potential for using graphene monolayers as a barrier that blocks hydrogen atoms but not protons/ionized hydrogen (hydrogen atoms with their electrons stripped off). They could even be used to extract hydrogen gas from the atmosphere, which could power electric generators with ambient air. The membranes are more effective at elevated temperatures and when covered with catalytic nanoparticles such as platinum. Graphene could solve a major problem for fuel cells: fuel crossover that reduces efficiency and durability. In methanol fuel cells, graphene used as a barrier layer in the membrane area has reduced fuel crossover with negligible proton resistance, improving performance. At room temperature, monolayer hBN outperforms graphene in proton conductivity, with a resistivity to proton flow of about 10 Ω cm² and a low activation energy of about 0.3 electronvolts. At higher temperatures graphene outperforms hBN, with resistivity estimated to fall below 10⁻³ Ω cm² above 250 °C. In another project, protons easily pass through slightly imperfect graphene membranes on fused silica in water. The membrane was exposed to cycles of high and low pH. Protons transferred reversibly from the aqueous phase through the graphene to the other side where they undergo acid–base chemistry with silica hydroxyl groups. Computer simulations indicated energy barriers of 0.61–0.75 eV for hydroxyl-terminated atomic defects that participate in a Grotthuss-type relay, while pyrylium-like ether terminations did not. Recently, Paul and co-workers at IISER Bhopal demonstrated solid-state proton conduction for oxygen-functionalized few-layer graphene (8.7 × 10⁻³ S/cm) with a low activation barrier (0.25 eV). ==== Thermoelectrics ==== Adding 0.6% graphene to a mixture of lanthanum and partly reduced strontium titanium oxide produces a strong Seebeck effect at temperatures ranging from room temperature to 750 °C (compared to 500–750 °C without graphene). The material converts 5% of the heat into electricity (compared to 1% for strontium titanium oxide). ==== Condenser coating ==== In 2015 a graphene coating on steam condensers quadrupled condensation efficiency, increasing overall plant efficiency by 2–3 percent. === Storage === ==== Supercapacitor ==== Due to graphene's high surface-area-to-mass ratio, one potential application is in the conductive plates of supercapacitors. In February 2013 researchers announced a novel technique to produce graphene supercapacitors based on the DVD burner reduction approach. In 2014 a supercapacitor was announced that was claimed to achieve energy density comparable to current lithium-ion batteries. In 2015 the technique was adapted to produce stacked, 3-D supercapacitors. Laser-induced graphene was produced on both sides of a polymer sheet. The sections were then stacked, separated by solid electrolytes, making multiple microsupercapacitors. The stacked configuration substantially increased the energy density of the result.
In testing, the researchers charged and discharged the devices for thousands of cycles with almost no loss of capacitance. The resulting devices were mechanically flexible, surviving 8,000 bending cycles. This makes them potentially suitable for rolling in a cylindrical configuration. Solid-state polymeric electrolyte-based devices exhibit areal capacitance of >9 mF/cm2 at a current density of 0.02 mA/cm2, over twice that of conventional aqueous electrolytes. Also in 2015 another project announced a microsupercapacitor that is small enough to fit in wearable or implantable devices. Just one-fifth the thickness of a sheet of paper, it is capable of holding more than twice as much charge as a comparable thin-film lithium battery. The design employed laser-scribed graphene, or LSG with manganese dioxide. They can be fabricated without extreme temperatures or expensive "dry rooms". Their capacity is six times that of commercially available supercapacitors. The device reached volumetric capacitance of over 1,100 F/cm3. This corresponds to a specific capacitance of the constituent MnO2 of 1,145 F/g, close to the theoretical maximum of 1,380 F/g. Energy density varies between 22 and 42 Wh/L depending on device configuration. In May 2015 a boric acid-infused, laser-induced graphene supercapacitor tripled its areal energy density and increased its volumetric energy density 5-10 fold. The new devices proved stable over 12,000 charge-discharge cycles, retaining 90 percent of their capacitance. In stress tests, they survived 8,000 bending cycles. ==== Batteries ==== Silicon-graphene anode lithium ion batteries were demonstrated in 2012. Stable lithium ion cycling was demonstrated in bi- and few layer graphene films grown on nickel substrates, while single layer graphene films have been demonstrated as a protective layer against corrosion in battery components such as the battery case. This creates possibilities for flexible electrodes for microscale Li-ion batteries, where the anode acts as the active material and the current collector. Researchers built a lithium-ion battery made of graphene and silicon, which was claimed to last over a week on one charge and took only 15 minutes to charge. In 2015 argon-ion based plasma processing was used to bombard graphene samples with argon ions. That knocked out some carbon atoms and increased the capacitance of the materials three-fold. These "armchair" and "zigzag" defects are named based on the configurations of the carbon atoms that surround the holes. In 2016, Huawei announced graphene-assisted lithium-ion batteries with greater heat tolerance and twice the life span of traditional Lithium-Ion batteries, the component with the shortest life span in mobile phones. Graphene with controlled topological defects has been demonstrated to adsorb more ions, resulting in high-efficiency batteries. === Transmission === ==== Conducting Wire ==== Due to Graphene's high electrical and thermal conductivity, mechanical strength, and corrosion resistance, one potential application is in high-power energy transmission. Copper wire has long been used for power transmission for its high conductivity, ductility, and low costs. However, traditional wire fails to meet the transmission requirements of many new technologies. Thermally dependent resistivity in mesoscopic copper wire limits efficiency and current carrying capacity in small-scale electronics. 
Additionally, copper wire exhibits internal failure by electromigration at high current density, limiting the miniaturization of wire. Copper's high weight and low-temperature oxidation also limit its applications in high-power transmission. Increasing demand for high-ampacity transmission in electronics and electric vehicle applications necessitates improvements in conductor technology. Graphene-copper composite conductors are a promising alternative to standard conductors in high-power applications. In 2013, researchers demonstrated a one-hundred-fold increase in current carrying capacity with carbon nanotube-copper composite wires when compared to traditional copper wire. These composite wires exhibited a temperature coefficient of resistivity an order of magnitude smaller than copper wires, an important feature for high load applications. ===== Graphene-clad wire ===== Additionally, in 2021, researchers demonstrated a 4.5-fold increase in the current density breakdown limit of copper wire with an axially continuous graphene shell. The copper wire was coated by a continuous graphene sheet through chemical vapor deposition. The coated wire exhibited reduced oxidation of the wire during joule heating, increased heat dissipation (224% higher), and increased conductivity (41% higher). == Sensors == === Biosensors === Graphene does not oxidize in air or in biological fluids, making it an attractive material for use as a biosensor. A graphene circuit can be configured as a field effect biosensor by applying biological capture molecules and blocking layers to the graphene, then controlling the voltage difference between the graphene and the liquid that includes the biological test sample. Of the various types of graphene sensors that can be made, biosensors were the first to be available for sale. === Pressure sensors === The electronic properties of graphene/h-BN heterostructures can be modulated by changing the interlayer distances through applied external pressure, leading to the potential realization of atomically thin pressure sensors. In 2011 researchers proposed an in-plane pressure sensor consisting of graphene sandwiched between hexagonal boron nitride and a tunneling pressure sensor consisting of h-BN sandwiched by graphene. The current varies by 3 orders of magnitude as pressure increases from 0 to 5 nN/nm². This structure is insensitive to the number of wrapping h-BN layers, simplifying process control. Because h-BN and graphene are stable at high temperatures, the device could support ultra-thin pressure sensors for applications under extreme conditions. In 2016 researchers demonstrated a biocompatible pressure sensor made from mixing graphene flakes with cross-linked polysilicone (found in Silly Putty). === NEMS === Nanoelectromechanical systems (NEMS) can be designed and characterized by understanding the interaction and coupling between the mechanical, electrical, and the van der Waals energy domains. The quantum mechanical limit governed by the Heisenberg uncertainty relation sets the ultimate precision of nanomechanical systems. Quantum squeezing can improve the precision by reducing quantum fluctuations in one of the two quadrature amplitudes. Traditional NEMS rarely achieve quantum squeezing due to their thickness limits. A scheme has been proposed to obtain squeezed quantum states in typical experimental graphene NEMS structures by taking advantage of their atomic-scale thickness.
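The scale of the quantum fluctuations involved can be illustrated with a short calculation. The sketch below estimates the zero-point displacement x_zp = sqrt(ħ / (2 m ω)) of a single mechanical mode; the membrane dimensions and resonance frequency are hypothetical round numbers for illustration, not values from the cited proposal.

import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def zero_point_displacement(mass_kg, freq_hz):
    """Zero-point displacement x_zp = sqrt(hbar / (2 m omega)) of a harmonic mode."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(HBAR / (2 * mass_kg * omega))

# Hypothetical graphene drum: 1 um x 1 um monolayer membrane.
# Graphene's areal mass density is about 7.6e-7 kg/m^2 (one atomic layer of carbon).
area = 1e-6 * 1e-6      # m^2
mass = 7.6e-7 * area    # ~7.6e-19 kg effective mass (assumed)
freq = 100e6            # assumed 100 MHz fundamental mode

x_zp = zero_point_displacement(mass, freq)
print(f"zero-point displacement ~ {x_zp * 1e12:.2f} pm")
# ~0.3 pm for these assumptions; squeezing suppresses fluctuations in one
# quadrature below this level at the cost of larger noise in the other.

The point of the estimate is only that the quantum-limited motion of such a thin resonator is sub-picometer, which is why reducing fluctuations in one quadrature matters for precision sensing.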
=== Molecular absorption === Theoretically, graphene makes an excellent sensor due to its 2D structure. The fact that its entire volume is exposed to its surrounding environment makes it very efficient to detect adsorbed molecules. However, similar to carbon nanotubes, graphene has no dangling bonds on its surface. Gaseous molecules cannot be readily adsorbed onto graphene surfaces, so intrinsically graphene is insensitive. The sensitivity of graphene chemical gas sensors can be dramatically enhanced by functionalization, for example, coating the film with a thin layer of certain polymers. The thin polymer layer acts like a concentrator that absorbs gaseous molecules. The absorbed molecules introduce a local change in the electrical resistance of graphene sensors. While this effect occurs in other materials, graphene is superior due to its high electrical conductivity (even when few carriers are present) and low noise, which makes this change in resistance detectable. === Piezoelectric effect === Density functional theory simulations predict that depositing certain adatoms on graphene can render it piezoelectrically responsive to an electric field applied in the out-of-plane direction. This type of locally engineered piezoelectricity is similar in magnitude to that of bulk piezoelectric materials and makes graphene a candidate for control and sensing in nanoscale devices. === Body motion === Driven by the demand for wearable devices, graphene has proved to be a promising material for flexible and highly sensitive strain sensors. An environmentally friendly and cost-effective method to fabricate large-area ultrathin graphene films has been proposed for highly sensitive flexible strain sensors. The assembled graphene films form rapidly at the liquid/air interface via the Marangoni effect, and the area can be scaled up. These graphene-based strain sensors exhibit extremely high sensitivity, with a gauge factor of 1037 at 2% strain, the highest value reported for graphene platelets at such small deformation so far. Rubber bands infused with graphene ("G-bands") can be used as inexpensive body sensors. The bands remain pliable and can be used as a sensor to measure breathing, heart rate, or movement. Lightweight sensor suits for vulnerable patients could make it possible to remotely monitor subtle movement. These sensors display 10 × 10⁴-fold increases in resistance and work at strains exceeding 800%. Gauge factors of up to 35 were observed. Such sensors can function at vibration frequencies of at least 160 Hz. At 60 Hz, strains of at least 6% at strain rates exceeding 6000%/s can be monitored. === Magnetic === In 2015 researchers announced a graphene-based magnetic sensor 100 times more sensitive than an equivalent device based on silicon (7,000 volts per amp-tesla). The sensor substrate was hexagonal boron nitride. The sensors were based on the Hall effect, in which a magnetic field induces a Lorentz force on moving electric charge carriers, leading to deflection and a measurable Hall voltage. In the worst case, graphene roughly matched a best-case silicon design; in the best case, graphene required lower source current and power. == Environmental == === Contaminant removal === Graphene oxide is non-toxic and biodegradable. Its surface is covered with epoxy, hydroxyl, and carboxyl groups that interact with cations and anions. It is soluble in water and forms stable colloid suspensions in other liquids because it is amphiphilic (able to mix with water or oil).
Dispersed in liquids, it shows excellent sorption capacities. It can remove copper, cobalt, cadmium, arsenate, and organic solvents. === Water filtration === Research suggests that graphene filters could outperform other techniques of desalination by a significant margin. In 2021, researchers found that a reusable graphene foam could efficiently filter uranium (and possibly other heavy metals such as lead, mercury and cadmium) from water at the rate of 4 grams of uranium per gram of graphene. === Permeation barrier === Besides allowing permeation, blocking it is equally important. Gas permeation barriers are important for packaging in almost all areas, ranging from food, pharmaceutical and medical products to inorganic and organic electronic devices. Such barriers extend the life of the product and keep the total thickness of devices small. Despite being atomically thin, defect-free graphene is impermeable to all gases. In particular, ultra-thin graphene-based moisture permeation barrier layers have been shown to be important for organic FETs and OLEDs. Graphene barrier applications in the biological sciences are under study. == Other == === Art preservation === In 2021, researchers reported that a graphene veil reversibly applied via chemical vapor deposition was able to preserve the colors in art objects, reducing color fading by up to 70%. === Aviation === In 2016, researchers developed a prototype de-icing system that incorporated graphene nanoribbons (produced by unzipping carbon nanotubes) in an epoxy/graphene composite. In laboratory tests, the leading edge of a helicopter rotor blade was coated with the composite, covered by a protective metal sleeve. Applying an electrical current heated the composite to over 200 °F (93 °C), melting a 1 cm (0.4 in)-thick ice layer at an ambient temperature of −4 °F (−20 °C). === Catalyst === In 2014, researchers at the University of Western Australia discovered that nano-sized fragments of graphene can speed up the rate of chemical reactions. In 2015, researchers announced an atomic-scale catalyst made of graphene doped with nitrogen and augmented with small amounts of cobalt, whose onset voltage was comparable to that of platinum catalysts. In 2016 iron-nitrogen complexes embedded in graphene were reported as another form of catalyst. The new material was claimed to approach the efficiency of platinum catalysts. The approach eliminated the need for less efficient iron nanoparticles. === Coolant additive === Graphene's high thermal conductivity suggests that it could be used as an additive in coolants. Preliminary research work showed that 5% graphene by volume can enhance the thermal conductivity of a base fluid by 86%. Another application exploiting graphene's enhanced thermal conductivity has been found in PCR (polymerase chain reaction). === Lubricant === Scientists discovered that using graphene as a lubricant works better than traditionally used graphite. A one-atom-thick layer of graphene between a steel ball and a steel disc lasted for 6,500 cycles; conventional lubricants lasted 1,000 cycles. === Nanoantennas === A graphene-based plasmonic nano-antenna (GPN) can operate efficiently at millimeter radio wavelengths. The wavelength of surface plasmon polaritons for a given frequency is several hundred times smaller than the wavelength of freely propagating electromagnetic waves of the same frequency. These speed and size differences enable efficient graphene-based antennas to be far smaller than conventional alternatives. The latter operate at frequencies 100–1000 times larger than GPNs, producing 0.01–0.001 as many photons.
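The size advantage follows directly from this wavelength compression, as the rough estimate below illustrates; the operating frequency and compression factor are assumed round numbers chosen only to match the "several hundred times smaller" claim, not measured values.

C = 299_792_458.0  # speed of light, m/s

def half_wave_length(freq_hz, compression=1.0):
    """Length of a half-wavelength resonator, optionally shortened by SPP wavelength compression."""
    free_space_wavelength = C / freq_hz
    return 0.5 * free_space_wavelength / compression

freq = 300e9        # 300 GHz operating frequency (assumed)
compression = 300   # assumed SPP wavelength ~ lambda0 / 300

metal_antenna = half_wave_length(freq)                   # ~0.5 mm
graphene_antenna = half_wave_length(freq, compression)   # ~1.7 um

print(f"half-wave metal antenna:    {metal_antenna * 1e3:.2f} mm")
print(f"half-wave graphene antenna: {graphene_antenna * 1e6:.2f} um")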
An electromagnetic (EM) wave directed vertically onto a graphene surface excites the graphene into oscillations that interact with those in the dielectric on which the graphene is mounted, thereby forming surface plasmon polaritons (SPP). When the antenna becomes resonant (an integral number of SPP wavelengths fit into the physical dimensions of the graphene), the SPP/EM coupling increases greatly, efficiently transferring energy between the two. A phased array antenna 100 μm in diameter could produce 300 GHz beams only a few degrees in diameter, instead of the 180-degree radiation from a conventional metal antenna of that size. Potential uses include smart dust, low-power terabit wireless networks and photonics. A nanoscale gold rod antenna captured and transformed EM energy into graphene plasmons, analogous to a radio antenna converting radio waves into electromagnetic waves in a metal cable. The plasmon wave fronts can be directly controlled by adjusting antenna geometry. The waves were focused (by curving the antenna) and refracted (by a prism-shaped graphene bilayer, because the conductivity in the two-atom-thick prism is larger than in the surrounding one-atom-thick layer). A plasmonic metal-graphene nanoantenna was constructed by inserting a few nanometers of oxide between a dipole gold nanorod and the monolayer graphene. The oxide layer reduces quantum tunneling between the graphene and the metal antenna. By tuning the chemical potential of the graphene layer through a field-effect transistor architecture, in-phase and out-of-phase mode coupling between the graphene plasmons and metal plasmons can be realized. The tunable properties of the plasmonic metal-graphene nanoantenna can be switched on and off by modifying the electrostatic gate voltage on the graphene. === Plasmonics and metamaterials === Graphene accommodates a plasmonic surface mode, observed recently via near-field infrared optical microscopy techniques and infrared spectroscopy. Potential applications are in the terahertz to mid-infrared frequencies, such as terahertz and mid-infrared light modulators, passive terahertz filters, mid-infrared photodetectors and biosensors. === Radio wave absorption === Stacked graphene layers on a quartz substrate increased the absorption of millimeter (radio) waves by 90 percent over a 125–165 GHz bandwidth, extensible to microwave and low-terahertz frequencies, while remaining transparent to visible light. For example, graphene could be used as a coating for buildings or windows to block radio waves. Absorption is a result of mutually coupled Fabry–Perot resonators represented by each graphene-quartz substrate. A repeated transfer-and-etch process was used to control surface resistivity. === Redox === Graphene oxide can be reversibly reduced and oxidized via electrical stimulus. Controlled reduction and oxidation in two-terminal devices containing multilayer graphene oxide films are shown to result in switching between partly reduced graphene oxide and graphene, a process that modifies electronic and optical properties. Oxidation and reduction are related to resistive switching. === Reference material === Graphene's properties suggest it as a reference material for characterizing electroconductive and transparent materials. One layer of graphene absorbs 2.3% of red light. This property was used to define the conductivity of transparency that combines sheet resistance and transparency. This parameter was used to compare materials without the use of two independent parameters.
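One standard way to fold sheet resistance and optical transmittance of a thin conductive film into a single number is the ratio of DC to optical conductivity, computed from the measured transmittance T and sheet resistance Rs. The sketch below shows that calculation; it is given only as an illustration of combining the two measurements, and is not necessarily the exact "conductivity of transparency" parameter defined in the work cited above. The film values used are hypothetical.

import math

Z0 = 376.73  # impedance of free space, ohms

def dc_to_optical_conductivity_ratio(sheet_resistance_ohm_sq, transmittance):
    """Figure of merit sigma_dc/sigma_opt from T = (1 + Z0*sigma_opt/(2*Rs*sigma_dc))**-2."""
    return Z0 / (2.0 * sheet_resistance_ohm_sq * (1.0 / math.sqrt(transmittance) - 1.0))

# Hypothetical graphene-based film: 90% transmittance, 100 ohms per square.
fom = dc_to_optical_conductivity_ratio(100.0, 0.90)
print(f"sigma_dc / sigma_opt ~ {fom:.1f}")  # ~35 for these assumed values

A higher ratio means the film passes more light for a given sheet resistance, which is why a single combined parameter is convenient when comparing transparent conductors.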
=== Soundproofing === Researchers demonstrated a graphene-oxide-based aerogel that could reduce noise by up to 16 decibels. The aerogel weighed 2.1 kilograms per cubic metre (0.13 lb/cu ft). A conventional polyester urethane sound absorber might weigh 32 kilograms per cubic metre (2.0 lb/cu ft). One possible application is to reduce sound levels in airplane cabins. === Sound transducers === Graphene's light weight provides relatively good frequency response, suggesting uses in electrostatic audio speakers and microphones. In 2015 an ultrasonic microphone and speaker were demonstrated that could operate at frequencies from 20 Hz–500 kHz. The speaker operated at a claimed 99% efficiency with a flat frequency response across the audible range. One application was as a radio replacement for long-distance communications, given sound's ability to penetrate steel and water, unlike radio waves. === Structural material === Graphene's strength, stiffness and lightness suggested it for use with carbon fiber. Graphene has been used as a reinforcing agent to improve the mechanical properties of biodegradable polymeric nanocomposites for engineering bone tissue. It has also been used as a strengthening agent in concrete. === Thermal management === In 2011, researchers reported that a three-dimensional, vertically aligned, functionalized multilayer graphene architecture can be an approach for graphene-based thermal interfacial materials (TIMs) with superior thermal conductivity and ultra-low interfacial thermal resistance between graphene and metal. Graphene-metal composites can be used in thermal interface materials. Adding a layer of graphene to each side of a copper film increased the metal's heat-conducting properties up to 24%. This suggests the possibility of using them for semiconductor interconnects in computer chips. The improvement is the result of changes in copper's nano- and microstructure, not from graphene's independent action as an added heat conducting channel. High temperature chemical vapor deposition stimulates grain size growth in copper films. The larger grain sizes improve heat conduction. The heat conduction improvement was more pronounced in thinner copper films, which is useful as copper interconnects shrink. Attaching graphene functionalized with silane molecules increases its thermal conductivity (κ) by 15–56% with respect to the number density of molecules. This is because of enhanced in-plane heat conduction resulting from the simultaneous increase of thermal resistance between the graphene and the substrate, which limited cross-plane phonon scattering. Heat spreading ability doubled. However, mismatches at the boundary between horizontally adjacent crystals reduces heat transfer by a factor of 10. === Waterproof coating === Graphene could potentially usher in a new generation of waterproof devices whose chassis may not need to be sealed like today's devices. == See also == Graphene applications as optical lenses Hong Byung-hee == References ==
Wikipedia/Potential_applications_of_graphene
Clinical engineering is a specialty within biomedical engineering responsible for using medical technology to optimize healthcare delivery. Clinical engineers train and supervise biomedical equipment technicians (BMETs), work with governmental regulators on hospital inspections and audits, and serve as technological consultants for other hospital staff (e.g., physicians, administrators, IT). Clinical engineers also assist manufacturers in improving the design of medical equipment and maintain state-of-the-art hospital supply chains. With training in both product design and point-of-use experience, clinical engineers bridge the gap between product developers and end-users. The focus on practical implementations tends to keep clinical engineers oriented towards incremental redesigns, as opposed to revolutionary or cutting-edge ideas that are far from clinical implementation. However, there is an effort to expand the time horizon over which clinical engineers can influence the trajectory of biomedical innovation. Clinical engineering departments at large hospitals will sometimes hire not only biomedical engineers, but also industrial and systems engineers to address topics such as operations research, human factors, cost analysis, and safety. == History == The term clinical engineering was first used in a 1969 paper by Landoll and Caceres. Caceres, a cardiologist, is generally credited with coining the term. The broader field of biomedical engineering also has a relatively recent history, with the first inter-society engineering meeting focused on engineering in medicine probably held in 1948. However, the general notion of applying engineering to medicine can be traced back centuries. For example, Stephen Hales' work in the early 18th century, which led to the invention of the ventilator and the discovery of blood pressure, involved applying engineering techniques to medicine. In the early 1970s, clinical engineering was thought to require many new professionals. Estimates at the time for the US ranged as high as 5,000 to 8,000 clinical engineers, or 1 per 250 hospital beds. === Credentialization === The International Certification Commission for Clinical Engineers (ICC) was formed under the sponsorship of the Association for the Advancement of Medical Instrumentation (AAMI) in the early 1970s to provide a formal certification process for clinical engineers. A similar certification program was formed by academic institutions offering graduate degrees in clinical engineering as the American Board of Clinical Engineering (ABCE). In 1979, the ABCE dissolved, and those certified under its program were accepted into the ICC certification program. By 1985, only 350 clinical engineers had become certified. After a 1998 survey demonstrating no viable market for its certification program, the AAMI ceased accepting new applicants in July 1999. The new, current clinical engineering certification (CCE) started in 2002 under the sponsorship of the American College of Clinical Engineering (ACCE) and is administered by the ACCE Healthcare Technology Foundation. In 2004, the first year the certification process was underway, 112 individuals were granted certification based upon their previous ICC certification, and three individuals were awarded the new certification. By the time of the 2006-2007 AHTF Annual Report (c. June 30, 2007), 147 individuals had become HTF certified clinical engineers.
== Definition and terminology == A clinical engineer was defined by the ACCE in 1991 as "a professional who supports and advances patient care by applying engineering and managerial skills to healthcare technology." Clinical engineering is also recognized by the Biomedical Engineering Society, the major professional organization for biomedical engineering, as being a branch within the field of biomedical engineering. There are at least two issues with the ACCE definition that often cause confusion. First, it is unclear how "clinical engineer" is a subset of "biomedical engineer". The terms are often used interchangeably: some hospitals refer to their relevant departments as "Clinical Engineering" departments, while others call them "Biomedical Engineering" departments. The technicians are almost universally referred to as "biomedical equipment technicians," regardless of the department they work under. However, the term biomedical engineer is generally thought to be more all-encompassing, as it includes engineers who design medical devices for manufacturers, or in academia. In contrast, clinical engineers generally work in hospitals solving problems close to where the equipment is actually used. Clinical engineers in some countries, such as India, are trained to innovate and find technological solutions for clinical needs. The other issue, not evident from the ACCE definition, is the appropriate educational background for a clinical engineer. Generally, certification programs expect applicants to hold an accredited bachelor's degree in engineering (or at least engineering technology). === Potential new name === In 2011, AAMI arranged a meeting to discuss a new name for clinical engineering. After careful debate, the vast majority decided on "Healthcare Technology Management". Due to confusion about the dividing line between clinical engineers (engineers) and BMETs (technicians), the word engineering was deemed limiting from the administrator's perspective and unworkable from the educator's perspective. An ABET-accredited college could not name an associate degree program "engineering". Also, the adjective, clinical, limited the scope of the field to hospitals. It remains unresolved how widely accepted this change will be, how this will affect the Clinical Engineering Certification or the formal recognition of clinical engineering as a subset of biomedical engineering. For regulatory and licensure reasons, true engineering specialties must be defined in a way that distinguishes them from the technicians they work alongside. == Certification == Certification in clinical engineering is governed by the Board of Examiners for Clinical Engineering Certification. To be eligible, a candidate must hold appropriate credentials (such as an accredited engineering or engineering-technology degree), have specific and relevant experience, and pass an examination. The certification process involves a three-hour written examination of up to 150 multiple-choice questions and a separate oral exam. Weight is given to applicants who are already licensed and registered Professional Engineers, which has extensive requirements itself. In Canada, the term 'engineer' is protected by law. As a result, a candidate must be registered as a Professional Engineer (P.Eng.) before they can become a Certified Clinical Engineer. == In the UK == Clinical engineers in the UK typically work within the NHS. Clinical engineering is a modality of the clinical scientist profession, registered by the HCPC. 
The responsibilities of clinical engineers are varied and often include providing specialist clinical services, inventing and developing medical devices, and medical device management. The roles typically involve both patient contact and academic research. Clinical engineering units within an NHS organization are often part of a larger medical physics department. Clinical engineers are supported and represented by the Institute of Physics and Engineering in Medicine, within which the clinical engineering special interest group oversees the engineering activities. The three primary aims of clinical engineering within the NHS are: to ensure medical equipment in the clinical environment is available and appropriate to the needs of the clinical service; to ensure medical equipment functions effectively and safely; and to ensure medical equipment and its management represent value for patient benefit. === Registration === Clinical engineers are registered with the HCPC, or the RCT (Register of Clinical Technologists). Assessments prior to registration are provided by the National School of Healthcare Science, the Association of Clinical Scientists or the AHCS. There are two HCPC programs for becoming a clinical scientist. The first is a Certificate of Attainment, awarded for completing the NHS Scientist Training Programme (STP). The second is the Certificate of Equivalence, awarded on successful demonstration of equivalence to the STP. This route is normally chosen by individuals who have significant scientific experience prior to seeking registration. Both are provided by the AHCS. === Electronics and Biomedical Engineering === EBME technicians and engineers in the UK work in the NHS and private sector. They are part of the clinical engineering family in the UK. Their role is to manage and maintain medical equipment assets in NHS and private healthcare organizations. They are professionally registered with the Engineering Council as Chartered Engineers, Incorporated Engineers, or engineering technicians. The EBME community share their knowledge on the EBME Forums. There is also an annual 2-day National Exhibition and Conference, where engineers meet to learn about the latest medical products and attend the 500-seat conference at which academic and business leaders share their expertise. The conference was founded in 2009 as a way of improving healthcare through sharing knowledge from experienced professionals involved in medical equipment management. == In India == Healthcare has increasingly become technology-driven and requires trained manpower to keep pace with the growing demand for professionals in the field. An M-Tech Clinical Engineering course was initiated by Indian Institute of Technology Madras, Sree Chitra Thirunal Institute of Medical Sciences and Technology, Trivandrum and Christian Medical College, Vellore, to address the country's need for human resource development. This was aimed at indigenous biomedical device development as well as technology management in order to contribute to the overall development of healthcare delivery in the country. During the course, students of engineering are given an insight into biology, medicine, relevant electronic background, clinical practices, device development, and even management aspects. Students are paired with clinical doctors from CMC and SCTIMST to get hands-on experience during internships.
An important aspect of this training is simultaneous, long-term, and detailed exposure to the clinical environment as well as to medical device development activity. This will help students understand how to recognize unmet clinical needs and contribute to the creation of future medical devices. Engineers will be trained to handle and oversee the safe and effective use of technology in healthcare delivery sites as part of the program. The minimum qualification for joining this course is a bachelor's degree in any discipline of engineering, technology, or architecture, and a valid GATE score, along with an interview. == See also == Biomedical engineering == References == == Further reading == Villafane, Carlos, CBET (June 2009). Biomed: From the Student's Perspective, First Edition. Techniciansfriend.com. ISBN 978-1-61539-663-4. Medical engineering stories in the news School of Engineering and Materials Science, Queen Mary University of London == External links == EBME website: a website for Medical, Biomedical, and Clinical engineering professionals.
Wikipedia/Clinical_engineering
Optical lens design is the process of designing a lens to meet a set of performance requirements and constraints, including cost and manufacturing limitations. Parameters include surface profile types (spherical, aspheric, holographic, diffractive, etc.), as well as radius of curvature, distance to the next surface, material type and optionally tilt and decenter. The process is computationally intensive, using ray tracing or other techniques to model how the lens affects light that passes through it. == Design requirements == Performance requirements can include: Optical performance (image quality): This is quantified by various metrics, including encircled energy, modulation transfer function, Strehl ratio, ghost reflection control, and pupil performance (size, location and aberration control); the choice of the image quality metric is application specific. Physical requirements such as weight, static volume, dynamic volume, center of gravity and overall configuration requirements. Environmental requirements: ranges for temperature, pressure, vibration and electromagnetic shielding. Design constraints can include realistic lens element center and edge thicknesses, minimum and maximum air-spaces between lenses, maximum constraints on entrance and exit angles, physically realizable glass index of refraction and dispersion properties. Manufacturing costs and delivery schedules are also a major part of optical design. The price of an optical glass blank of given dimensions can vary by a factor of fifty or more, depending on the size, glass type, index homogeneity quality, and availability, with BK7 usually being the cheapest. Costs for larger and/or thicker optical blanks of a given material, above 100–150 mm, usually increase faster than the physical volume due to increased blank annealing time required to achieve acceptable index homogeneity and internal stress birefringence levels throughout the blank volume. Availability of glass blanks is driven by how frequently a particular glass type is made by a given manufacturer, and can seriously affect manufacturing cost and schedule. == Process == Lenses can first be designed using paraxial theory to position images and pupils, then real surfaces inserted and optimized. Paraxial theory can be skipped in simpler cases and the lens directly optimized using real surfaces. Lenses are first designed using average index of refraction and dispersion (see Abbe number) properties published in the glass manufacturer's catalog and through glass model calculations. However, the properties of the real glass blanks will vary from this ideal; index of refraction values can vary by as much as 0.0003 or more from catalog values, and dispersion can vary slightly. These changes in index and dispersion can sometimes be enough to affect the lens focus location and imaging performance in highly corrected systems. The lens blank manufacturing process is as follows: The glass batch ingredients for a desired glass type are mixed in a powder state, the powder mixture is melted in a furnace, the fluid is further mixed while molten to maximize batch homogeneity, poured into lens blanks and annealed according to empirically determined time-temperature schedules. The glass blank pedigree, or "melt data", can be determined for a given glass batch by making small precision prisms from various locations in the batch and measuring their index of refraction on a spectrometer, typically at five or more wavelengths. 
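A minimal sketch of how such melt data might be fitted to a dispersion model is shown below; it uses a simple Cauchy equation n(λ) = A + B/λ² + C/λ⁴, and the wavelengths and measured indices are hypothetical. Production lens-design codes typically fit Sellmeier or other catalog-specific models instead, as described next.

import numpy as np

# Hypothetical melt data: measured refractive index at five spectral lines (wavelength in um).
wavelengths = np.array([0.4047, 0.4861, 0.5461, 0.5876, 0.6563])
indices     = np.array([1.5302, 1.5224, 1.5187, 1.5168, 1.5143])

# Fit the Cauchy model n(lambda) = A + B/lambda^2 + C/lambda^4 by linear least squares.
basis = np.column_stack([np.ones_like(wavelengths),
                         wavelengths**-2,
                         wavelengths**-4])
coeffs, *_ = np.linalg.lstsq(basis, indices, rcond=None)
A, B, C = coeffs

def n_fit(wavelength_um):
    """Interpolated index of refraction within the fitted wavelength range."""
    return A + B / wavelength_um**2 + C / wavelength_um**4

print(f"A={A:.5f}, B={B:.6f} um^2, C={C:.7f} um^4")
print(f"n at 0.6328 um ~ {n_fit(0.6328):.5f}")

Once such a fit is in hand, the design can be re-optimized against the measured indices rather than catalog averages.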
Lens design programs have curve fitting routines that can fit the melt data to a selected dispersion curve, from which the index of refraction at any wavelength within the fitted wavelength range can be calculated. A re-optimization, or "melt re-comp", can then be performed on the lens design using measured index of refraction data where available. When manufactured, the resulting lens performance will more closely match the desired requirements than if average glass catalog values for index of refraction were assumed. Delivery schedules are impacted by glass and mirror blank availability and lead times to acquire, the amount of tooling a shop must fabricate prior to starting on a project, the manufacturing tolerances on the parts (tighter tolerances mean longer fab times), the complexity of any optical coatings that must be applied to the finished parts, further complexities in mounting or bonding lens elements into cells and in the overall lens system assembly, and any post-assembly alignment and quality control testing and tooling required. Tooling costs and delivery schedules can be reduced by using existing tooling at any given shop wherever possible, and by maximizing manufacturing tolerances to the extent possible. == Lens optimization == A simple two-element air-spaced lens has nine variables (four radii of curvature, two thicknesses, one airspace thickness, and two glass types). A multi-configuration lens corrected over a wide spectral band and field of view over a range of focal lengths and over a realistic temperature range can have a complex design volume having over one hundred dimensions. Lens optimization techniques that can navigate this multi-dimensional space and proceed to local minima have been studied since the 1940s, beginning with early work by James G. Baker, and later by Feder, Wynne, Glatzel, Grey and others. Prior to the development of digital computers, lens optimization was a hand-calculation task using trigonometric and logarithmic tables to plot 2-D cuts through the multi-dimensional space. Computerized ray tracing allows the performance of a lens to be modelled quickly, so that the design space can be searched rapidly. This allows design concepts to be rapidly refined. Popular optical design software includes Zemax's OpticStudio, Synopsys's Code V, and Lambda Research's OSLO. In most cases the designer must first choose a viable design for the optical system, and then numerical modelling is used to refine it. The designer ensures that designs optimized by the computer meet all requirements, and makes adjustments or restarts the process when they do not. == See also == Optical engineering Fabrication and testing (optical components) Ray transfer matrix analysis Photographic lens design Surface imperfections (optics) Stray light == References == === Notes === === Bibliography === Smith, Warren J., Modern Lens Design, McGraw-Hill, Inc., 1992, ISBN 0-07-059178-4 Kingslake, Rudolph, Lens Design Fundamentals, Academic Press, 1978 Shannon, Robert R., The Art and Science of Optical Design, Cambridge University Press, 1997. == External links == The GNU Optical design and simulation library
Wikipedia/Optical_lens_design
Integrated Computational Materials Engineering (ICME) is an approach to design products, the materials that comprise them, and their associated materials processing methods by linking materials models at multiple length scales. Key words are "Integrated", involving integrating models at multiple length scales, and "Engineering", signifying industrial utility. The focus is on the materials, i.e. understanding how processes produce material structures, how those structures give rise to material properties, and how to select materials for a given application. The key links are process-structures-properties-performance. The National Academies report describes the need for using multiscale materials modeling to capture the process-structures-properties-performance of a material. == Standardization in ICME == A fundamental requirement for meeting the ambitious ICME objective of designing materials for specific products or components is an integrative, interdisciplinary computational description of the component's history, starting from the well-defined initial condition of a homogeneous, isotropic and stress-free melt or gas phase, continuing through the subsequent processing steps, and eventually ending with a description of failure onset under operational load. ICME thus naturally requires the combination of a variety of models and software tools. A common objective is therefore to build up a scientific network of stakeholders concentrating on boosting ICME into industrial application by defining a common communication standard for ICME-relevant tools. == Standardization of information exchange == Efforts to generate a common language by standardizing and generalizing data formats for the exchange of simulation results represent a major mandatory step towards successful future applications of ICME. A future structural framework for ICME, comprising a variety of academic and/or commercial simulation tools operating on different scales and modularly interconnected by a common language in the form of standardized data exchange, will allow different disciplines along the production chain, which until now have only rarely interacted, to be integrated. This will substantially improve the understanding of individual processes by integrating the component history originating from preceding steps as the initial condition for the actual process. Eventually this will lead to optimized process and production scenarios and will allow effective tailoring of specific materials and component properties. === The ICMEg project and its mission === The ICMEg project aims to build up a scientific network of stakeholders concentrating on boosting ICME into industrial application by defining a common communication standard for ICME-relevant tools. Eventually this will allow stakeholders from the electronic, atomistic, mesoscopic and continuum communities to benefit from sharing knowledge and best practice, and thus promote a deeper understanding between the different communities of materials scientists, IT engineers and industrial users. ICMEg will create an international network of simulation providers and users. It will promote a deeper understanding between the different communities (academia and industry), each of which currently uses very different tools, methods and data formats.
The harmonization and standardization of information exchange along the life cycle of a component and across the different scales (electronic, atomistic, mesoscopic, continuum) are the key activities of ICMEg. The mission of ICMEg is: to establish and maintain a network of contacts to simulation software providers, governmental and international standardization authorities, ICME users, associations in the area of materials and processing, and academia; to define and communicate an ICME language in the form of an open and standardized communication protocol; to stimulate knowledge sharing in the field of multiscale materials design; to identify missing tools, models and functionalities and propose a roadmap for their development; and to discuss and decide about future amendments to the initial standard. The activities of ICMEg include organizing international workshops on software solutions for integrated computational materials engineering, conducting a market study and survey of available simulation software for ICME, and creating and maintaining a forum for knowledge sharing in ICME. The ICMEg project ended in October 2016. Its major outcomes are: a Handbook of Software Solutions for ICME; the identification of HDF5 as a suitable communication file standard for microstructure information exchange in ICME settings; the specification of a metadata description for microstructures; and a network of stakeholders in the area of ICME. Most of the activities launched in the ICMEg project are continued by the European Materials Modelling Council and in the MarketPlace project. == Multiscale modeling in material processing == Multiscale modeling aims to evaluate material properties or behavior on one level using information or models from different levels and the properties of elementary processes. Usually, the following levels, each addressing a phenomenon over a specific window of length and time, are recognized: Structural scale: finite element, finite volume and finite difference partial differential equation solvers are used to simulate structural responses such as solid mechanics and transport phenomena at large (meter) scales; this includes process modeling/simulations (extrusion, rolling, sheet forming, stamping, casting, welding, etc.) and product modeling/simulations (performance, impact, fatigue, corrosion, etc.). Macroscale: constitutive (rheology) equations are used at the continuum level in solid mechanics and transport phenomena at millimeter scales. Mesoscale: continuum-level formulations are used with discrete quantities at multiple micrometer scales. "Meso" is an ambiguous term meaning "intermediate", so it has been used to represent different intermediate scales; in this context it can represent modeling approaches such as crystal plasticity for metals, Eshelby solutions for any materials, homogenization methods, and unit cell methods. Microscale: modeling techniques that represent the micrometer scale, such as dislocation dynamics codes for metals and phase field models for multiphase materials; phase field models describe phase transitions and microstructure formation and evolution on nanometer to millimeter scales. Nanoscale: semi-empirical atomistic methods are used, such as Lennard-Jones and Brenner potentials, embedded atom method (EAM) potentials, and modified embedded atom (MEAM) potentials, in molecular dynamics (MD), molecular statics (MS), Monte Carlo (MC), and kinetic Monte Carlo (KMC) formulations.
Electronic scale: Schrödinger equations are used in a computational framework as density functional theory (DFT) models of electron orbitals and bonding on angstrom to nanometer scales. There are some software codes that operate on different length scales, such as: CALPHAD computational thermodynamics for prediction of equilibrium phase diagrams and even non-equilibrium phases; phase field codes for simulation of microstructure evolution; databases of processing parameters, microstructure features, and properties from which one can draw correlations at various length scales; GeoDict, the Digital Material Laboratory by Math2Market; VPS-MICRO, a multiscale probabilistic fracture mechanics software; SwiftComp, a multiscale constitutive modeling software based on the mechanics of structure genome; and Digimat, a multiscale material modeling platform. A comprehensive compilation of software tools with relevance for ICME is documented in the Handbook of Software Solutions for ICME. == Examples of Model integration == Small scale models calculate material properties, or relationships between properties and parameters, e.g. yield strength vs. temperature, for use in continuum models. CALPHAD computational thermodynamics software predicts free energy as a function of composition; a phase field model then uses this to predict structure formation and development, which one may then correlate with properties. Essential ingredients for modeling microstructure evolution with phase field models and other microstructure evolution codes are the initial and boundary conditions. While boundary conditions may be taken e.g. from the simulation of the actual process, the initial conditions (i.e. the initial microstructure entering into the actual process step) involve the entire integrated process history starting from the homogeneous, isotropic and stress-free melt. Thus, for a successful ICME, an efficient exchange of information along the entire process chain and across all relevant length scales is mandatory. The models to be combined for this purpose comprise both academic and/or commercial modelling tools and simulation software packages. To streamline the information flow within this heterogeneous variety of modelling tools, the concept of a modular, standardized simulation platform has recently been proposed. A first realisation of this concept is the AixViPMaP®, the Aachen Virtual Platform for Materials Processing. Process models calculate the spatial distribution of structure features, e.g. fiber density and orientation in a composite material; small-scale models then calculate relationships between structure and properties, for use in continuum models of overall part or system behavior. Large scale models explicitly fully couple with small scale models, e.g. a fracture simulation might integrate a continuum solid mechanics model of macroscopic deformation with a molecular dynamics (MD) model of atomic motions at the crack tip. Suites of models (large-scale, small-scale, atomic-scale, process-structure, structure-properties, etc.) can be hierarchically integrated into a systems design framework to enable the computational design of entirely new materials. A commercial leader in the use of ICME in computational materials design is QuesTek Innovations LLC, a small business in Evanston, IL co-founded by Prof. Greg Olson of Northwestern University. QuesTek's high-performance Ferrium® steels were designed and developed using ICME methodologies.
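As a toy illustration of the free-energy-to-microstructure coupling listed above, the sketch below evolves a one-dimensional phase field with an Allen–Cahn equation. It is a minimal sketch only: the double-well free energy stands in for a CALPHAD-derived one, the numerical parameters are arbitrary, and none of the commercial tools named in this section work this way in detail.

import numpy as np

# 1-D Allen-Cahn evolution of a non-conserved order parameter phi(x).
# f(phi) = phi^2 (1 - phi)^2 is a stand-in double-well free energy density;
# in an ICME workflow this term would come from a CALPHAD assessment.
nx, dx, dt = 200, 1.0, 0.1
mobility, kappa = 1.0, 1.0                           # assumed kinetic coefficient and gradient energy
phi = np.where(np.arange(nx) < nx // 2, 0.0, 1.0)    # initial sharp interface

def df_dphi(p):
    """Derivative of the double-well free energy density f = p^2 (1 - p)^2."""
    return 2.0 * p * (1.0 - p) * (1.0 - 2.0 * p)

for _ in range(2000):
    lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2  # periodic boundaries
    phi += dt * mobility * (kappa * lap - df_dphi(phi))

# The sharp interface relaxes into a diffuse profile whose width is set by the
# gradient energy and the well depth of the free energy.
print(f"interface width (10-90%): {np.sum((phi > 0.1) & (phi < 0.9)) * dx:.1f} grid units")

The design point is the coupling itself: the only material-specific input to the evolution step is the free energy and its derivative, which is exactly the quantity a thermodynamic database or CALPHAD tool can supply.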
The Mississippi State University Internal State Variable (ISV) plasticity-damage model (DMG) developed by a team led by Prof. Mark F. Horstemeyer (Founder of Predictive Design Technologies) has been used to optimize the design of a Cadillac control arm, the Corvette engine cradle, and a powder metal steel engine bearing cap. ESI Group, through its ProCast and SYSWeld products, provides commercial finite element solutions used in production environments by major manufacturers in aerospace, automotive and government organizations to simulate local material phase changes of metals prior to manufacturing. PAMFORM is utilized for tracking material changes during composite forming manufacturing simulation. == Education == Katsuyo Thornton announced at the 2010 MS&T ICME Technical Committee meeting that NSF would be funding a "Summer School" on ICME at the University of Michigan starting in 2011. Northwestern began offering a Master of Science certificate in ICME in the fall of 2011. The first Integrated Computational Materials Engineering (ICME) course based upon Horstemeyer 2012 was delivered at Mississippi State University (MSU) in 2012 as a graduate course with distance learning students included [cf., Sukhija et al., 2013]. It was later taught in 2013 and 2014 at MSU also with distance learning students. In 2015, the ICME course was taught by Dr. Mark Horstemeyer (MSU) and Dr. William (Bill) Shelton (Louisiana State University, LSU) with students from each institution via distance learning. The goal of the methodology embraced in this course was to provide students with the basic skills to take advantage of the computational tools and experimental data provided by EVOCD in conducting simulations and bridging procedures for quantifying the structure-property relationships of materials at multiple length scales. On successful completion of the assigned projects, students published their multiscale modeling learning outcomes on the ICME Wiki, facilitating easy assessment of student achievements and embracing qualities set by the ABET engineering accreditation board. == See also == Computational materials science Materials informatics ICME cyberinfrastructure Cyberinfrastructure QuesTek Innovations == References == JOM November 2006 issue focused on ICME Committee on Integrated Computational Materials Engineering, National Research Council, Integrated Computational Materials Engineering: A Transformational Discipline for Improved Competitiveness and National Security, National Academies Press, 2008. ISBN 0-309-11999-5, NAP Link G. Olson, Designing a New Material World, Science, Vol. 288, May 12, 2000 Horstemeyer 2009: Horstemeyer M.F., "Multiscale Modeling: A Review," Practical Aspects of Computational Chemistry, ed. J. Leszczynski and M.K. Shukla, Springer Science+Business Media, pp. 87-135, 2009 == External links == ICME section of Materials Technology @ TMS "Advances in ICME Implementation: Concepts and Practices", in the May 2017 issue (vol. 69, no. 5) of JOM: https://link.springer.com/journal/11837/69/5 Cyberinfrastructure for ICME at Mississippi State University GeoDict The Digital Material Laboratory
Wikipedia/Integrated_computational_materials_engineering
Graphite is a crystalline allotrope (form) of the element carbon. It consists of many stacked layers of graphene, typically in excess of hundreds of layers. Graphite occurs naturally and is the most stable form of carbon under standard conditions. Synthetic and natural graphite are consumed on a large scale (1.3 million metric tons per year in 2022) for uses in many critical industries including refractories (50%), lithium-ion batteries (18%), foundries (10%), and lubricants (5%), among others (17%). Graphite converts to diamond under extremely high pressure and temperature. Graphite's low cost, thermal and chemical inertness, and characteristic conductivity of heat and electricity find numerous applications in high-energy and high-temperature processes. == Types and varieties == Graphite can occur naturally or be produced synthetically. Natural graphite is obtained from naturally occurring geologic deposits and synthetic graphite is produced through human activity. === Natural === Graphite occurs naturally in ores that can be classified as either amorphous (microcrystalline) or crystalline (flake or lump/chip), a classification determined by the ore morphology, crystallinity, and grain size. All naturally occurring graphite deposits are formed from the metamorphism of carbonaceous sedimentary rocks, and the ore type is determined by its geologic setting. Coal that has been thermally metamorphosed is the typical source of amorphous graphite. Crystalline flake graphite is mined from carbonaceous metamorphic rocks, while lump or chip graphite is mined from veins which occur in high-grade metamorphic regions. There are serious negative environmental impacts to graphite mining. === Synthetic === Synthetic graphite has high purity and is usually produced by the thermal graphitization of hydrocarbon materials at temperatures in excess of 2,100 °C, most commonly through the Acheson process. The high temperatures are maintained for weeks, and are required not only to form the graphite from the precursor carbons but also to vaporize any impurities that may be present, including hydrogen, nitrogen, sulfur, organics, and metals. The resulting synthetic graphite is highly pure (in excess of 99.9% carbon) but typically has lower density and conductivity and higher porosity than its natural equivalent. Synthetic graphite can be formed into very large (centimeter-scale) flakes while maintaining its high purity, unlike almost all sources of natural graphite. Synthetic graphite can also be formed by other methods, including by chemical vapor deposition from hydrocarbons at temperatures above 2,500 K (2,230 °C), by decomposition of thermally unstable carbides, or by crystallization from metal melts supersaturated with carbon. === Research === Research and development efforts continue into new methods for the industrial production of graphite for a variety of applications, including lithium-ion batteries, refractories, and foundries, among others. Significant work has been done on graphitizing traditionally non-graphitizable carbons. A company in New Zealand utilizes forestry waste to produce what they have termed 'biographite' through a process referred to as thermo-catalytic graphitization. Another group in the United States uses a method referred to as photocatalytic graphitization to produce highly crystalline, highly pure graphite for lithium-ion batteries and other applications from a variety of carbon sources.
== Natural == === Occurrence === Graphite occurs in metamorphic rocks as a result of the reduction of sedimentary carbon compounds during metamorphism. It also occurs in igneous rocks and in meteorites. Minerals associated with graphite include quartz, calcite, micas and tourmaline. The principal export sources of mined graphite are, in order of tonnage, China, Mexico, Canada, Brazil, and Madagascar. Significant unexploited graphite resources also exist in Colombia's Cordillera Central in the form of graphite-bearing schists. In meteorites, graphite occurs with troilite and silicate minerals. Small graphitic crystals in meteoritic iron are called cliftonite. Some microscopic grains have distinctive isotopic compositions, indicating that they were formed before the Solar System. They are one of about 12 known types of minerals that predate the Solar System and have also been detected in molecular clouds. These minerals were formed in the ejecta when supernovae exploded or low to intermediate-sized stars expelled their outer envelopes late in their lives. Graphite may be the second or third oldest mineral in the Universe. === Structure === Graphite consists of sheets of trigonal planar carbon. The individual layers are called graphene. In each layer, each carbon atom is bonded to three other atoms forming a continuous layer of sp2 bonded carbon hexagons, like a honeycomb lattice with a bond length of 0.142 nm, and the distance between planes is 0.335 nm. Bonding between layers is relatively weak van der Waals bonds, which allows the graphene-like layers to be easily separated and to glide past each other. Electrical conductivity perpendicular to the layers is consequently about 1000 times lower. There are two allotropic forms called alpha (hexagonal) and beta (rhombohedral), differing in terms of the stacking of the graphene layers: stacking in alpha graphite is ABA, as opposed to ABC stacking in the energetically less stable beta graphite. Rhombohedral graphite cannot occur in pure form. Natural graphite, or commercial natural graphite, contains 5 to 15% rhombohedral graphite and this may be due to intensive milling. The alpha form can be converted to the beta form through shear forces, and the beta form reverts to the alpha form when it is heated to 1300 °C for four hours. === Thermodynamics === The equilibrium pressure and temperature conditions for a transition between graphite and diamond is well established theoretically and experimentally. The pressure changes linearly between 1.7 GPa at 0 K and 12 GPa at 5000 K (the diamond/graphite/liquid triple point). However, the phases have a wide region about this line where they can coexist. At normal temperature and pressure, 20 °C (293 K) and 1 standard atmosphere (0.10 MPa), the stable phase of carbon is graphite, but diamond is metastable and its rate of conversion to graphite is negligible. However, at temperatures above about 4500 K, diamond rapidly converts to graphite. Rapid conversion of graphite to diamond requires pressures well above the equilibrium line: at 2000 K, a pressure of 35 GPa is needed. === Other properties === The acoustic and thermal properties of graphite are highly anisotropic, since phonons propagate quickly along the tightly bound planes, but are slower to travel from one plane to another. Graphite's high thermal stability and electrical and thermal conductivity facilitate its widespread use as electrodes and refractories in high temperature material processing applications. 
However, in oxygen-containing atmospheres graphite readily oxidizes to form carbon dioxide at temperatures of 700 °C and above. Graphite is an electrical conductor, hence useful in such applications as arc lamp electrodes. It can conduct electricity due to the vast electron delocalization within the carbon layers (a phenomenon called aromaticity). These valence electrons are free to move, so are able to conduct electricity. However, the electricity is primarily conducted within the plane of the layers. The conductive properties of powdered graphite allow its use as pressure sensor in carbon microphones. Graphite and graphite powder are valued in industrial applications for their self-lubricating and dry lubricating properties. However, the use of graphite is limited by its tendency to facilitate pitting corrosion in some stainless steel, and to promote galvanic corrosion between dissimilar metals (due to its electrical conductivity). It is also corrosive to aluminium in the presence of moisture. For this reason, the US Air Force banned its use as a lubricant in aluminium aircraft, and discouraged its use in aluminium-containing automatic weapons. Even graphite pencil marks on aluminium parts may facilitate corrosion. Another high-temperature lubricant, hexagonal boron nitride, has the same molecular structure as graphite. It is sometimes called white graphite, due to its similar properties. When a large number of crystallographic defects bind its planes together, graphite loses its lubrication properties and becomes what is known as pyrolytic graphite. It is also highly anisotropic, and diamagnetic, thus it will float in mid-air above a strong magnet. (If it is made in a fluidized bed at 1000–1300 °C then it is isotropic turbostratic, and is used in blood-contacting devices like mechanical heart valves and is called pyrolytic carbon, and is not diamagnetic. Pyrolytic graphite and pyrolytic carbon are often confused but are very different materials.) For a long time graphite has been considered to be hydrophobic. However, recent studies using highly ordered pyrolytic graphite have shown that freshly clean graphite is hydrophilic (contact angle of 70° approximately), and it becomes hydrophobic (contact angle of 95° approximately) due to airborne pollutants (hydrocarbons) present in the atmosphere. Those contaminants also alter the electric equipotential surface of graphite by creating domains with potential differences of up to 200 mV as measured with kelvin probe force microscopy. Such contaminants can be desorbed by increasing the temperature of graphite to approximately 50 °C or higher. Natural and crystalline graphites are not often used in pure form as structural materials, due to their shear-planes, brittleness, and inconsistent mechanical properties. == History of use == In the 4th millennium BCE, during the Neolithic Age in southeastern Europe, the Marița culture used graphite in a ceramic paint for decorating pottery. Sometime before 1565 (some sources say as early as 1500), an enormous deposit of graphite was discovered on the approach to Grey Knotts from the hamlet of Seathwaite in Borrowdale parish, Cumbria, England, which the locals found useful for marking sheep. During the reign of Elizabeth I (1558–1603), Borrowdale graphite was used as a refractory material to line molds for cannonballs, resulting in rounder, smoother balls that could be fired farther, contributing to the strength of the English navy. 
This particular deposit of graphite was extremely pure and soft, and could easily be cut into sticks. Because of its military importance, this unique mine and its production were strictly controlled by the Crown. During the 19th century, graphite's uses greatly expanded to include stove polish, lubricants, paints, crucibles, foundry facings, and pencils, a major factor in the expansion of educational tools during the first great rise of education for the masses. The British Empire controlled most of the world's production (especially from Ceylon), but production from Austrian, German, and American deposits expanded by mid-century. For example, the Dixon Crucible Company of Jersey City, New Jersey, founded by Joseph Dixon and partner Orestes Cleveland in 1845, opened mines in the Lake Ticonderoga district of New York, built a processing plant there, and a factory to manufacture pencils, crucibles and other products in New Jersey, described in the Engineering & Mining Journal 21 December 1878. The Dixon pencil is still in production. The beginnings of the revolutionary froth flotation process are associated with graphite mining. Included in the E&MJ article on the Dixon Crucible Company is a sketch of the "floating tanks" used in the age-old process of extracting graphite. Because graphite is so light, the mix of graphite and waste was sent through a final series of water tanks where a cleaner graphite "floated" off, which left waste to drop out. In an 1877 patent, the two brothers Bessel (Adolph and August) of Dresden, Germany, took this "floating" process a step further and added a small amount of oil to the tanks and boiled the mix – an agitation or frothing step – to collect the graphite, the first steps toward the future flotation process. Adolph Bessel received the Wohler Medal for the patented process that upgraded the recovery of graphite to 90% from the German deposit. In 1977, the German Society of Mining Engineers and Metallurgists organized a special symposium dedicated to their discovery and, thus, the 100th anniversary of flotation. In the United States, in 1885, Hezekiah Bradford of Philadelphia patented a similar process, but it is uncertain if his process was used successfully in the nearby graphite deposits of Chester County, Pennsylvania, a major producer by the 1890s. The Bessel process was limited in use, primarily because of the abundant cleaner deposits found around the globe, which needed not much more than hand-sorting to gather the pure graphite. The state of the art, c. 1900, is described in the Canadian Department of Mines report on graphite mines and mining when Canadian deposits began to become important producers of graphite. === Other names === Historically, graphite was called black lead or plumbago. Plumbago was commonly used in its massive mineral form. Both of these names arise from confusion with the similar-appearing lead ores, particularly galena. The Latin word for lead, plumbum, gave its name to the English term for this grey metallic-sheened mineral and even to the leadworts or plumbagos, plants with flowers that resemble this colour. The term black lead usually refers to a powdered or processed graphite, matte black in color. Abraham Gottlob Werner coined the name graphite ("writing stone") in 1789. He attempted to clear up the confusion between molybdena, plumbago and black lead after Carl Wilhelm Scheele in 1778 proved that these were at least three different minerals. 
Scheele's analysis showed that the chemical compounds molybdenum sulfide (molybdenite), lead(II) sulfide (galena) and graphite were three different soft black minerals. == Uses == Natural graphite is mostly used for refractories, batteries, steelmaking, expanded graphite, brake linings, foundry facings, and lubricants. === Refractories === The use of graphite as a refractory (heat-resistant) material began before 1900 with graphite crucibles used to hold molten metal; this is now a minor part of refractories. In the mid-1980s, the carbon-magnesite brick became important, and a bit later the alumina-graphite shape. As of 2017 the order of importance is: alumina-graphite shapes, carbon-magnesite brick, Monolithics (gunning and ramming mixes), and then crucibles. Crucibles began using very large flake graphite, and carbon-magnesite bricks requiring not quite so large flake graphite; for these and others there is now much more flexibility in the size of flake required, and amorphous graphite is no longer restricted to low-end refractories. Alumina-graphite shapes are used as continuous casting ware, such as nozzles and troughs, to convey the molten steel from ladle to mold, and carbon magnesite bricks line steel converters and electric-arc furnaces to withstand extreme temperatures. Graphite blocks are also used in parts of blast furnace linings where the high thermal conductivity of the graphite is critical to ensuring adequate cooling of the bottom and hearth of the furnace. High-purity monolithics are often used as a continuous furnace lining instead of carbon-magnesite bricks. The US and European refractories industry had a crisis in 2000–2003, with an indifferent market for steel and a declining refractory consumption per tonne of steel underlying firm buyouts and many plant closures. Many of the plant closures resulted from the acquisition of Harbison-Walker Refractories by RHI AG and some plants had their equipment auctioned off. Since much of the lost capacity was for carbon-magnesite brick, graphite consumption within the refractories area moved towards alumina-graphite shapes and Monolithics, and away from the brick. The major source of carbon-magnesite brick is now China. Almost all of the above refractories are used to make steel and account for 75% of refractory consumption; the rest is used by a variety of industries, such as cement. According to the USGS, US natural graphite consumption in refractories comprised 12,500 tonnes in 2010. === Batteries === The use of graphite in batteries has increased since the 1970s. Natural and synthetic graphite are used as an anode material to construct electrodes in major battery technologies. The demand for batteries, primarily nickel–metal hydride and lithium-ion batteries, caused a growth in demand for graphite in the late 1980s and early 1990s – a growth driven by portable electronics, such as portable CD players and power tools. Laptops, mobile phones, tablets, and smartphone products have increased the demand for batteries. Electric-vehicle batteries are anticipated to increase graphite demand. As an example, a lithium-ion battery in a fully electric Nissan Leaf contains nearly 40 kg of graphite. Radioactive graphite removed from nuclear reactors has been investigated as a source of electricity for low-power applications. This waste is rich in carbon-14, which emits electrons through beta decay, so it could potentially be used as the basis for a betavoltaic device. This concept is known as the diamond battery. 
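Graphite's importance to lithium-ion batteries comes from its ability to host lithium between its layers. A minimal sketch of the back-of-the-envelope capacity calculation, assuming full intercalation to the commonly cited LiC6 composition (that stoichiometry is not stated in this article and is quoted here as an assumption):

```python
# Theoretical specific capacity of a graphite anode, assuming one Li per six C (LiC6).
FARADAY = 96485.0      # C/mol of electrons
M_CARBON = 12.011      # g/mol

def graphite_capacity_mah_per_g(carbons_per_li: int = 6) -> float:
    """Charge stored per gram of carbon host, in mAh/g."""
    host_mass = carbons_per_li * M_CARBON        # grams of carbon per mole of Li
    coulombs_per_gram = FARADAY / host_mass      # C/g
    return coulombs_per_gram / 3.6               # 1 mAh = 3.6 C

print(f"LiC6: ~{graphite_capacity_mah_per_g():.0f} mAh/g")   # prints ~372 mAh/g
```

The resulting figure of roughly 372 mAh/g is the value commonly quoted as the theoretical limit for graphite anodes.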
=== Graphite anode materials === Graphite is the "predominant anode material used today in lithium-ion batteries". Electric-vehicle (EV) batteries contain four basic components: anode, cathode, electrolyte, and separator. While there is much focus on the cathode materials—lithium, nickel, cobalt, manganese, etc., the anode material used in virtually all EV batteries is graphite. === Steelmaking === Natural graphite in steelmaking mostly goes into raising the carbon content in molten steel; it can also serve to lubricate the dies used to extrude hot steel. Carbon additives face competitive pricing from alternatives such as synthetic graphite powder, petroleum coke, and other forms of carbon. A carbon raiser is added to increase the carbon content of the steel to a specified level. An estimate based on USGS's graphite consumption statistics indicates that steelmakers in the US used 10,500 tonnes in this fashion in 2005. === Brake linings === Natural amorphous and fine flake graphite are used in brake linings or brake shoes for heavier (nonautomotive) vehicles, and became important with the need to substitute for asbestos. This use has been important for quite some time, but nonasbestos organic (NAO) compositions are beginning to reduce graphite's market share. A brake-lining industry shake-out with some plant closures has not been beneficial, nor has an indifferent automotive market. According to the USGS, US natural graphite consumption in brake linings was 6,510 tonnes in 2005. === Foundry facings and lubricants === A foundry-facing mold wash is a water-based paint of amorphous or fine flake graphite. Painting the inside of a mold with it and letting it dry leaves a fine graphite coat that will ease the separation of the object cast after the hot metal has cooled. Graphite lubricants are specialty items for use at very high or very low temperatures, as forging die lubricant, an antiseize agent, a gear lubricant for mining machinery, and to lubricate locks. Having low-grit graphite, or even better, no-grit graphite (ultra high purity), is highly desirable. It can be used as a dry powder, in water or oil, or as colloidal graphite (a permanent suspension in a liquid). An estimate based on USGS graphite consumption statistics indicates that 2,200 tonnes were used in this fashion in 2005. Metal can also be impregnated into graphite to create a self-lubricating alloy for application in extreme conditions, such as bearings for machines exposed to high or low temperatures. === Everyday use === ==== Pencils ==== The ability to leave marks on paper and other objects gave graphite its name, given in 1789 by German mineralogist Abraham Gottlob Werner. It stems from γράφειν ("graphein"), meaning to write or draw in Ancient Greek. From the 16th century, all pencils were made with leads of English natural graphite, but modern pencil lead is most commonly a mix of powdered graphite and clay; it was invented by Nicolas-Jacques Conté in 1795. It is chemically unrelated to the metal lead, whose ores had a similar appearance, hence the continuation of the name. Plumbago is another older term for natural graphite used for drawing, typically as a lump of the mineral without a wood casing. The term plumbago drawing is normally restricted to 17th and 18th-century works, mostly portraits. Today, pencils are still a small but significant market for natural graphite. Around 7% of the 1.1 million tonnes produced in 2011 was used to make pencils. Low-quality amorphous graphite is used and sourced mainly from China. 
In art, graphite is typically used to create detailed and precise drawings, as it allows for a wide range of values (light to dark) to be achieved. It can also be used to create softer, more subtle lines and shading. Graphite is popular among artists because it is easy to control, easy to erase, and produces a clean, professional look. It is also relatively inexpensive and widely available. Many artists use graphite in conjunction with other media, such as charcoal or ink, to create a range of effects and textures in their work. Graphite of various hardness or softness results in different qualities and tones when used as an artistic medium. ==== Pinewood derby ==== Graphite is probably the most-used lubricant in pinewood derbies. === Other uses === Natural graphite has found uses in zinc-carbon batteries, electric motor brushes, and various specialized applications. Railroads would often mix powdered graphite with waste oil or linseed oil to create a heat-resistant protective coating for the exposed portions of a steam locomotive's boiler, such as the smokebox or lower part of the firebox. The Scope soldering iron uses a graphite tip as its heating element. === Expanded graphite === Expanded graphite is made by immersing natural flake graphite in a bath of chromic acid, then concentrated sulfuric acid, which forces the crystal lattice planes apart, thus expanding the graphite. The expanded graphite can be used to make graphite foil or used directly as a "hot top" compound to insulate molten metal in a ladle or red-hot steel ingots and decrease heat loss, or as firestops fitted around a fire door or in sheet metal collars surrounding plastic pipe (during a fire, the graphite expands and chars to resist fire penetration and spread), or to make high-performance gasket material for high-temperature use. After being made into graphite foil, the foil is machined and assembled into the bipolar plates in fuel cells. The foil is made into heat sinks for laptop computers which keeps them cool while saving weight, and is made into a foil laminate that can be used in valve packings or made into gaskets. Old-style packings are now a minor member of this grouping: fine flake graphite in oils or greases for uses requiring heat resistance. A GAN estimate of current US natural graphite consumption in this end-use is 7,500 tonnes. === Intercalated graphite === Graphite forms intercalation compounds with some metals and small molecules. In these compounds, the host molecule or atom gets "sandwiched" between the graphite layers, resulting in a type of compound with variable stoichiometry. A prominent example of an intercalation compound is potassium graphite, denoted by the formula KC8. Some graphite intercalation compounds are superconductors. The highest transition temperature (by June 2009) Tc = 11.5 K is achieved in CaC6, and it further increases under applied pressure (15.1 K at 8 GPa). Graphite's ability to intercalate lithium ions without significant damage from swelling is what makes it the dominant anode material in lithium-ion batteries. == Mining, beneficiation, and milling == Graphite is mined by both open pit and underground methods. Graphite usually needs beneficiation. This may be carried out by hand-picking the pieces of gangue (rock) and hand-screening the product or by crushing the rock and floating out the graphite. Beneficiation by flotation encounters the difficulty that graphite is very soft and "marks" (coats) the particles of gangue. 
This makes the "marked" gangue particles float off with the graphite, yielding impure concentrate. There are two ways of obtaining a commercial concentrate or product: repeated regrinding and floating (up to seven times) to purify the concentrate, or by acid leaching (dissolving) the gangue with hydrofluoric acid (for a silicate gangue) or hydrochloric acid (for a carbonate gangue). In milling, the incoming graphite products and concentrates can be ground before being classified (sized or screened), with the coarser flake size fractions (below 8 mesh, 8–20 mesh, 20–50 mesh) carefully preserved, and then the carbon contents are determined. Some standard blends can be prepared from the different fractions, each with a certain flake size distribution and carbon content. Custom blends can also be made for individual customers who want a certain flake size distribution and carbon content. If flake size is unimportant, the concentrate can be ground more freely. Typical end products include a fine powder for use as a slurry in oil drilling and coatings for foundry molds, carbon raiser in the steel industry (Synthetic graphite powder and powdered petroleum coke can also be used as carbon raiser). Environmental impacts from graphite mills consist of air pollution including fine particulate exposure of workers and also soil contamination from powder spillages leading to heavy metal contamination of soil. According to the United States Geological Survey (USGS), world production of natural graphite in 2016 was 1,200,000 tonnes, of which the following major exporters are: China (780,000 t), India (170,000 t), Brazil (80,000 t), Turkey (32,000 t) and North Korea (6,000 t). Graphite is not currently mined in the United States, but there are many historical mine sites including ones in Alabama, Montana, and in the Adirondacks of NY. Westwater Resources is in the development stages of creating a pilot plant for their Coosa Graphite Mine near Sylacauga, Alabama. U.S. production of synthetic graphite in 2010 was 134,000 t valued at $1.07 billion. === Occupational safety === Potential health effects include: Inhalation: No inhalation hazard in manufactured and shipped state. Dust and fumes generated from the material can enter the body by inhalation. High concentrations of dust and fumes may irritate the throat and respiratory system and cause coughing. Frequent inhalation of fume/dust over a long period of time increases the risk of developing lung diseases. Prolonged and repeated overexposure to dust can lead to pneumoconiosis. Pre-existing pulmonary disorders, such as emphysema, may possibly be aggravated by prolonged exposure to high concentrations of graphite dusts. Eye contact: Dust in the eyes will cause irritation. Exposed may experience eye tearing, redness, and discomfort. Skin contact: Under normal conditions of intended use, this material does not pose a risk to health. Dust may irritate skin. Ingestion: Not relevant, due to the form of the product in its manufactured and shipped state. However, ingestion of dusts generated during working operations may cause nausea and vomiting. Potential physical / chemical effects: Bulk material is non-combustible. The material may form dust and can accumulate electrostatic charges, which may cause an electrical spark (ignition source). High dust levels may create potential for explosion. 
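Occupational exposure to dusts of this kind is normally assessed as an 8-hour time-weighted average (TWA), the metric used in the limits quoted below. A minimal sketch of that arithmetic, using made-up sample readings (both the concentrations and the durations are hypothetical):

```python
# 8-hour time-weighted average (TWA) from interval measurements.
# Each sample is (concentration in mg/m^3, duration in hours); values are illustrative only.
samples = [(1.2, 3.0), (0.4, 2.0), (2.0, 1.5), (0.0, 1.5)]

def twa_8h(samples):
    """Sum of concentration x time, divided by the 8-hour reference period."""
    return sum(c * t for c, t in samples) / 8.0

print(f"8-h TWA = {twa_8h(samples):.2f} mg/m^3")  # prints roughly 0.93 mg/m^3 here
```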
==== United States ==== The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for graphite exposure in the workplace as a time weighted average (TWA) of 15 million particles per cubic foot (1.5 mg/m3) over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of TWA 2.5 mg/m3 respirable dust over an 8-hour workday. At levels of 1250 mg/m3, graphite is immediately dangerous to life and health. == Recycling == The most common way of recycling graphite occurs when synthetic graphite electrodes are either manufactured and pieces are cut off or lathe turnings are discarded for reuse, or the electrode (or other materials) are used all the way down to the electrode holder. A new electrode replaces the old one, but a sizeable piece of the old electrode remains. This is crushed and sized, and the resulting graphite powder is mostly used to raise the carbon content of molten steel. Graphite-containing refractories are sometimes also recycled, but often are not due to their low graphite content: the largest-volume items, such as carbon-magnesite bricks that contain only 15–25% graphite, usually contain too little graphite to be worthwhile to recycle. However, some recycled carbon–magnesite brick is used as the basis for furnace-repair materials, and also crushed carbon–magnesite brick is used in slag conditioners. While crucibles have a high graphite content, the volume of crucibles used and then recycled is very small. A high-quality flake graphite product that closely resembles natural flake graphite can be made from steelmaking kish. Kish is a large-volume near-molten waste skimmed from the molten iron feed to a basic oxygen furnace and consists of a mix of graphite (precipitated out of the supersaturated iron), lime-rich slag, and some iron. The iron is recycled on-site, leaving a mixture of graphite and slag. The best recovery process uses hydraulic classification (which utilizes a flow of water to separate minerals by specific gravity: graphite is light and settles nearly last) to get a 70% graphite rough concentrate. Leaching this concentrate with hydrochloric acid gives a 95% graphite product with a flake size ranging from 10 mesh (2 mm) down. == History of synthetic == === Invention of a production process === In 1893, Charles Street of Le Carbone discovered a process for making artificial graphite. In the mid-1890s, Edward Goodrich Acheson (1856–1931) accidentally invented another way to produce synthetic graphite after synthesizing carborundum (also called silicon carbide). He discovered that overheating carborundum, as opposed to pure carbon, produced almost pure graphite. While studying the effects of high temperature on carborundum, he had found that silicon vaporizes at about 4,150 °C (7,500 °F), leaving the carbon behind in graphitic carbon. This graphite became valuable as a lubricant. Acheson's technique for producing silicon carbide and graphite is named the Acheson process. In 1896, Acheson received a patent for his method of synthesizing graphite, and in 1897 started commercial production. The Acheson Graphite Co. was formed in 1899. Synthetic graphite can also be prepared from polyimide and then commercialized. === Scientific research === Highly oriented pyrolytic graphite (HOPG) is the highest-quality synthetic form of graphite. It is used in scientific research, in particular, as a length standard for the calibration of scanning probe microscopes. 
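HOPG can serve as a length standard because the spacings of the graphite lattice are known to high precision. A minimal sketch relating the hexagonal cell dimensions to the bond length and interlayer spacing quoted in the Structure section (the square-root-of-three factor follows from the honeycomb geometry):

```python
import math

# Values quoted in the Structure section above.
BOND_LENGTH_NM = 0.142      # C-C bond length within a graphene layer
INTERLAYER_NM = 0.335       # spacing between adjacent layers

# In-plane lattice constant of the hexagonal cell: a = sqrt(3) * bond length.
a = math.sqrt(3) * BOND_LENGTH_NM
# The c axis of alpha (ABA-stacked) graphite spans two layers.
c = 2 * INTERLAYER_NM

print(f"a = {a:.3f} nm, c = {c:.3f} nm")  # a = 0.246 nm, c = 0.670 nm
```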
=== Electrodes === Graphite electrodes carry the electricity that melts scrap iron and steel, and sometimes direct-reduced iron (DRI), in electric arc furnaces, which are the vast majority of steel furnaces. They are made from petroleum coke after it is mixed with coal tar pitch. They are extruded and shaped, then baked to carbonize the binder (pitch). This is finally graphitized by heating it to temperatures approaching 3,000 °C (5,430 °F), at which the carbon atoms arrange into graphite. They can vary in size up to 3.5 m (11 ft) long and 75 cm (30 in) in diameter. An increasing proportion of global steel is made using electric arc furnaces, and the electric arc furnace itself is becoming more efficient, making more steel per tonne of electrode. An estimate based on USGS data indicates that graphite electrode consumption was 197,000 t (217,000 short tons) in 2005. Electrolytic aluminium smelting also uses graphitic carbon electrodes. On a much smaller scale, synthetic graphite electrodes are used in electrical discharge machining (EDM), commonly to make injection molds for plastics. === Powder and scrap === The powder is made by heating powdered petroleum coke above the temperature of graphitization, sometimes with minor modifications. The graphite scrap comes from pieces of unusable electrode material (in the manufacturing stage or after use) and lathe turnings, usually after crushing and sizing. Most synthetic graphite powder goes to carbon raising in steel (competing with natural graphite), with some used in batteries and brake linings. According to the United States Geological Survey, US synthetic graphite powder and scrap production were 95,000 t (93,000 long tons; 105,000 short tons) in 2001 (latest data). It is possible to create battery-grade graphite by recycling the numerous fines from battery production. The process involves spray drying the fines over petroleum pitch with a binder and cross-linking agent, then drying them. === Neutron moderator === Special grades of synthetic graphite, such as Gilsocarbon, also find use as a matrix and neutron moderator within nuclear reactors. Its low neutron cross-section also recommends it for use in proposed fusion reactors. Care must be taken that reactor-grade graphite is free of neutron-absorbing materials such as boron, widely used as the seed electrode in commercial graphite deposition systems – this caused the failure of the Germans' World War II graphite-based nuclear reactors. Since they could not isolate the difficulty, they were forced to use far more expensive heavy water moderators. Graphite used for nuclear reactors is often referred to as nuclear graphite. Herbert G. McPherson, a Berkeley-trained physicist at National Carbon, a division of Union Carbide, was key in confirming a conjecture of Leo Szilard that boron impurities even in "pure" graphite were responsible for a neutron absorption cross-section in graphite that compromised U-235 chain reactions. McPherson was aware of the presence of impurities in graphite because, with the use of Technicolor in cinematography, the spectra of graphite electrode arcs used in movie projectors required impurities to enhance emission of light in the red region to display warmer skin tones on the screen. Thus, had it not been for color movies, chances are that the first sustained natural U chain reaction would have required a heavy water moderated reactor.
=== Other uses === Graphite (carbon) fiber and carbon nanotubes are also used in carbon fiber reinforced plastics, and in heat-resistant composites such as reinforced carbon-carbon (RCC). Commercial structures made from carbon fiber graphite composites include fishing rods, golf club shafts, bicycle frames, sports car body panels, the fuselage of the Boeing 787 Dreamliner and pool cue sticks and have been successfully employed in reinforced concrete. The mechanical properties of carbon fiber graphite-reinforced plastic composites and grey cast iron are strongly influenced by the role of graphite in these materials. In this context, the term "(100%) graphite" is often loosely used to refer to a pure mixture of carbon reinforcement and resin, while the term "composite" is used for composite materials with additional ingredients. Modern smokeless powder is coated in graphite to prevent the buildup of static charge. Graphite has been used in at least three radar absorbent materials. It was mixed with rubber in Sumpf and Schornsteinfeger, which were used on U-boat snorkels to reduce their radar cross section. It was also used in tiles on early F-117 Nighthawk stealth strike fighters. Graphite composites are used as absorber for high-energy particles, for example in the Large Hadron Collider beam dump. Glassworking tools are often made from graphite since it will not stick to hot molten glass, unlike metal tools and moulds which require coatings or lubricants, which are themselves often graphite-based. Automated glassworking machines make significant use of graphite for handling the molten glass and freshly formed items. == Research and innovation == Globally, over 60,000 patent families in graphite technologies were filed from 2012 to 2021. Patents were filed by applicants from over 60 countries and regions. However, graphite-related patent families originated predominantly from just a few countries. China was the top contributor with more than 47,000 patent families, accounting for four in every five graphite patent families filed worldwide in the last decade. Among other leading countries were Japan, the Republic of Korea, the United States and the Russian Federation. Together, these top five countries of applicant origin accounted for 95 percent of global patenting output related to graphite. Among the different graphite sources, flake graphite has the highest number of patent families, with more than 5,600 filed worldwide from 2012 to 2021. Supported by active research from its commercial entities and research institutions, China is the country most actively exploiting flake graphite and has contributed to 85 percent of global patent filings in this area. At the same time, innovations exploring new synthesis methods and uses for artificial graphite are gaining interest worldwide, as countries seek to exploit the superior material qualities associated with this man-made substance and reduce reliance on the natural material. Patenting activity is strongly led by commercial entities, particularly world-renowned battery manufacturers and anode material suppliers, with patenting interest focused on battery anode applications. The exfoliation process for bulk graphite, which involves separating the carbon layers within graphite, has been extensively studied between 2012 and 2021. 
Specifically, ultrasonic and thermal exfoliation have been the two most popular approaches worldwide, with 4,267 and 2,579 patent families, respectively, significantly more than for either the chemical or electrochemical alternatives. Global patenting activity relating to ultrasonic exfoliation has decreased over the years, indicating that this low-cost technique has become well established. Thermal exfoliation is a more recent process. Compared to ultrasonic exfoliation, this fast and solvent-free thermal approach has attracted greater commercial interest. As the most widespread anode material for lithium-ion batteries, graphite has drawn significant attention worldwide for use in battery applications. With over 8,000 patent families filed from 2012 to 2021, battery applications were a key driver of global graphite-related inventions. Innovations in this area are led by battery manufacturers or anode suppliers who have amassed sizable patent portfolios focused strongly on battery performance improvements based on graphite anode innovation. Besides industry players, academia and research institutions have been an essential source of innovation in graphite anode technologies. Graphite for polymer applications was an innovation hot topic from 2012 to 2021, with over 8,000 patent families recorded worldwide. However, in recent years, in the top countries of applicant origin in this area, including China, Japan and the United States of America (US), patent filings have decreased. Graphite for manufacturing ceramics represents another area of intensive research, with over 6,000 patent families registered in the last decade alone. Specifically, graphite for refractory accounted for over one-third of ceramics-related graphite patent families in China and about one-fifth in the rest of the world. Other important graphite applications include high-value ceramic materials such as carbides for specific industries, ranging from electrical and electronics, aerospace and precision engineering to military and nuclear applications. Carbon brushes represent a long-explored graphite application area. There have been few inventions in this area over the last decade, with less than 300 patent families filed from 2012 to 2021, very significantly less than between 1992 and 2011. Biomedical, sensor, and conductive ink are emerging application areas for graphite that have attracted interest from both academia and commercial entities, including renowned universities and multinational corporations. Typically for an emerging technology area, related patent families were filed by various organizations without any players dominating. As a result, the top applicants have a small number of inventions, unlike in well-explored areas, where they will have strong technology accumulation and large patent portfolios. The innovation focus of these three emerging areas is highly scattered and can be diverse, even for a single applicant. However, recent inventions are seen to leverage the development of graphite nanomaterials, particularly graphite nanocomposites and graphene. == See also == == Sources == This article incorporates text from a free content work. Licensed under CC-BY. Text taken from Patent Landscape Report - Graphite and its applications​, WIPO. == References == == Further reading == Lipson, H.; Stokes, A. R. (1942). "A New Structure of Carbon". Nature. 149 (3777): 328. Bibcode:1942Natur.149Q.328L. doi:10.1038/149328a0. S2CID 36502694. C.Michael Hogan; Marc Papineau; et al. (December 18, 1989). 
Phase I Environmental Site Assessment, Asbury Graphite Mill, 2426–2500 Kirkham Street, Oakland, California, Earth Metrics report 10292.001 (Report). Klein, Cornelis; Cornelius S. Hurlbut, Jr. (1985). Manual of Mineralogy: after Dana (20th ed.). Wiley. ISBN 978-0-471-80580-9. Taylor, Harold A. (2000). Graphite. Financial Times Executive Commodity Reports. London: Mining Journal Books. ISBN 978-1-84083-332-4. Taylor, Harold A. (2005). Graphite. Industrial Minerals and Rocks (7th ed.). Littleton, CO: AIME-Society of Mining Engineers. ISBN 978-0-87335-233-8. == External links == Battery Grade Graphite Graphite at Minerals.net Mineral galleries Mineral & Exploration – Map of World Graphite Mines and Producers 2012 Mindat w/ locations giant covalent structures The Graphite Page Video lecture on the properties of graphite by M. Heggie, University of Sussex CDC – NIOSH Pocket Guide to Chemical Hazards
Wikipedia/Graphite
Materials science in science fiction is the study of how materials science is portrayed in works of science fiction. The accuracy of the materials science portrayed spans a wide range – sometimes it is an extrapolation of existing technology, sometimes it is a physically realistic portrayal of a far-out technology, and sometimes it is simply a plot device that looks scientific, but has no basis in science. Examples are: Realistic: In 1944, the science fiction story "Deadline" by Cleve Cartmill depicted the atomic bomb. The properties of various radioactive isotopes are critical to the proposed device, and the plot. This technology was real, unknown to the author. Extrapolation: In the 1979 novel The Fountains of Paradise, Arthur C. Clarke wrote about space elevators – basically long cables extending from the Earth's surface to geosynchronous orbit. These require a material with enormous tensile strength and light weight. Carbon nanotubes are strong enough in theory, so the idea is plausible; while one cannot be built today, it violates no physical principles. Plot device: An example of an unsupported plot device is scrith, the material used to construct Ringworld, in the novels by Larry Niven. Scrith has unreasonable strength, and is unsupported by known physics, but needed for the plot. Critical analysis of materials science in science fiction falls into the same general categories. The predictive aspects are emphasized, for example, in the motto of the Georgia Tech's department of materials science and engineering – Materials scientists lead the way in turning yesterday's science fiction into tomorrow's reality. This is also the theme of many technical articles, such as Material By Design: Future Science or Science Fiction?, found in IEEE Spectrum, the flagship magazine of the Institute of Electrical and Electronics Engineers. On the other hand, there is criticism of the unrealistic materials science used in science fiction. In the professional materials science journal JOM, for example, there are articles such as The (Mostly Improbable) Materials Science and Engineering of the Star Wars Universe and Personification: The Materials Science and Engineering of Humanoid Robots. == Examples == In many cases, the materials science aspect of a fictional work was interesting enough that someone other than the author has remarked on it. Here are some examples, and their relationship to real world materials science usage, if any. == See also == Science in science fiction Hypothetical types of biochemistry. Most of these potential types of biochemistry have been used in science fiction. Unobtainium List of fictional elements, materials, isotopes and atomic particles Category:Fictional materials Category:Fiction about physics == References == The Science in Science Fiction by Brian Stableford, David Langford, & Peter Nicholls (1982)
Wikipedia/Materials_science_in_science_fiction
Computer network engineering is a technology discipline within engineering that deals with the design, implementation, and management of computer networks. These systems comprise both physical components, such as routers, switches, and cables, and logical elements, such as protocols and network services. Computer network engineers attempt to ensure that data is transmitted efficiently, securely, and reliably over both local area networks (LANs) and wide area networks (WANs), as well as across the Internet. Computer networks often play a large role in modern industries ranging from telecommunications to cloud computing, enabling processes such as email and file sharing, as well as complex real-time services like video conferencing and online gaming. == Background == The evolution of network engineering is marked by significant milestones that have greatly impacted communication methods. These milestones particularly highlight the progress made in developing communication protocols that are vital to contemporary networking. This discipline originated in the 1960s with projects like ARPANET, which initiated important advancements in reliable data transmission. The advent of protocols such as TCP/IP revolutionized networking by enabling interoperability among various systems, which, in turn, fueled the rapid growth of the Internet. Key developments include the standardization of protocols and the shift towards increasingly complex layered architectures. These advancements have profoundly changed the way devices interact across global networks. == Network infrastructure design == The foundation of computer network engineering lies in the design of the network infrastructure. This involves planning both the physical layout of the network and its logical topology to ensure optimal data flow, reliability, and scalability. === Physical infrastructure === The physical infrastructure consists of the hardware used to transmit data, which is represented by the first layer of the OSI model. ==== Cabling ==== Copper cables such as Ethernet over twisted pair are commonly used for short-distance connections, especially in local area networks (LANs), while fiber optic cables are favored for long-distance communication due to their high-speed transmission capabilities and lower susceptibility to interference. Fiber optics play a significant role in the backbone of large-scale networks, such as those used in data centers and internet service provider (ISP) infrastructures. ==== Wireless networks ==== In addition to wired connections, wireless networks have become a common component of physical infrastructure. These networks facilitate communication between devices without the need for physical cables, providing flexibility and mobility. Wireless technologies use a range of transmission methods, including radio frequency (RF) waves, infrared signals, and laser-based communication, allowing devices to connect to the network. Wi-Fi, based on IEEE 802.11 standards, is the most widely used wireless technology in local area networks and relies on RF waves to transmit data between devices and access points. Wireless networks operate across various frequency bands, including 2.4 GHz and 5 GHz, each offering different ranges and data rates; the 2.4 GHz band provides broader coverage, while the 5 GHz band supports faster data rates with reduced interference, ideal for densely populated environments.
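As an illustration of the 2.4 GHz band layout mentioned above, IEEE 802.11 channels in that band are spaced 5 MHz apart starting from 2412 MHz at channel 1, with channel 14 (used only in some regions) sitting apart from the regular grid at 2484 MHz. A minimal sketch of that mapping:

```python
def channel_center_mhz(channel: int) -> int:
    """Center frequency of a 2.4 GHz IEEE 802.11 channel, in MHz."""
    if channel == 14:                 # legacy, region-restricted channel off the regular grid
        return 2484
    if 1 <= channel <= 13:
        return 2407 + 5 * channel     # channel 1 -> 2412 MHz, 5 MHz spacing
    raise ValueError("2.4 GHz band channels are numbered 1-14")

for ch in (1, 6, 11):                 # the usual non-overlapping trio
    print(ch, channel_center_mhz(ch), "MHz")
```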
Beyond Wi-Fi, other wireless transmission methods, such as infrared and laser-based communication, are used in specific contexts, like short-range, line-of-sight links or secure point-to-point communication. In mobile networks, cellular technologies like 3G, 4G, and 5G enable wide-area wireless connectivity. 3G introduced faster data rates for mobile browsing, while 4G significantly improved speed and capacity, supporting advanced applications like video streaming. The latest evolution, 5G, operates across a range of frequencies, including millimeter-wave bands, and provides high data rates, low latency, and support for more device connectivity, useful for applications like the Internet of Things (IoT) and autonomous systems. Together, these wireless technologies allow networks to meet a variety of connectivity needs across local and wide areas. ==== Network devices ==== Routers and switches help direct data traffic and assist in maintaining network security; network engineers configure these devices to optimize traffic flow and prevent network congestion. In wireless networks, wireless access points (WAP) allow devices to connect to the network. To expand coverage, multiple access points can be placed to create a wireless infrastructure. Beyond Wi-Fi, cellular network components like base stations and repeaters support connectivity in wide-area networks, while network controllers and firewalls manage traffic and enforce security policies. Together, these devices enable a secure, flexible, and scalable network architecture suitable for both local and wide-area coverage. === Logical topology === Beyond the physical infrastructure, a network must be organized logically, which defines how data is routed between devices. Various topologies, such as star, mesh, and hierarchical designs, are employed depending on the network’s requirements. In a star topology, for example, all devices are connected to a central hub that directs traffic. This configuration is relatively easy to manage and troubleshoot but can create a single point of failure. In contrast, a mesh topology, where each device is interconnected with several others, offers high redundancy and reliability but requires a more complex design and larger hardware investment. Large networks, especially those in enterprises, often employ a hierarchical model, dividing the network into core, distribution, and access layers to enhance scalability and performance. == Network protocols and communication standards == Communication protocols dictate how data in a network is transmitted, routed, and delivered. Depending on the goals of the specific network, protocols are selected to ensure that the network functions efficiently and securely. The Transmission Control Protocol/Internet Protocol (TCP/IP) suite is fundamental to modern computer networks, including the Internet. It defines how data is divided into packets, addressed, routed, and reassembled. The Internet Protocol (IP) is critical for routing packets between different networks. In addition to traditional protocols, advanced protocols such as Multiprotocol Label Switching (MPLS) and Segment Routing (SR) enhance traffic management and routing efficiency. For intra-domain routing, protocols like Open Shortest Path First (OSPF) and Enhanced Interior Gateway Routing Protocol (EIGRP) provide dynamic routing capabilities. 
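Link-state protocols such as OSPF work by having every router build a map of the topology and then run a shortest-path computation over it (Dijkstra's algorithm). A toy sketch of that step on a small topology, where the router names and link costs are made up for illustration and do not come from any particular deployment:

```python
import heapq

# Illustrative topology: router -> {neighbour: link cost}
topology = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 5, "R4": 20},
    "R4": {"R2": 1, "R3": 20},
}

def shortest_paths(graph, source):
    """Dijkstra's algorithm: lowest-cost distance from source to every router."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry, already improved
        for neighbour, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(queue, (nd, neighbour))
    return dist

print(shortest_paths(topology, "R1"))  # R4 is reached via R2 at total cost 11
```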
At the local area network (LAN) level, protocols like Virtual Extensible LAN (VXLAN) and Network Virtualization using Generic Routing Encapsulation (NVGRE) facilitate the creation of virtual networks. Furthermore, Internet Protocol Security (IPsec) and Transport Layer Security (TLS) secure communication channels, ensuring data integrity and confidentiality. For real-time applications, protocols such as Real-time Transport Protocol (RTP) and WebRTC provide low-latency communication, making them suitable for video conferencing and streaming services. Additionally, protocols like QUIC enhance web performance and security by establishing secure connections with reduced latency. == Network security == As networks have become essential for business operations and personal communication, the demand for robust security measures has increased. Network security is a critical component of computer network engineering, concentrating on the protection of networks against unauthorized access, data breaches, and various cyber threats. Engineers are responsible for designing and implementing security measures that ensure the integrity and confidentiality of data transmitted across networks. Firewalls serve as barriers between trusted internal networks and external environments, such as the Internet. Network engineers configure firewalls, including next-generation firewalls (NGFW), which incorporate advanced features such as deep packet inspection and application awareness, thereby enabling more refined control over network traffic and protection against sophisticated attacks. In addition to firewalls, engineers use encryption protocols, including Internet Protocol Security (IPsec) and Transport Layer Security (TLS), to secure data in transit. These protocols provide a means of safeguarding sensitive information from interception and tampering. For secure remote access, Virtual Private Networks (VPNs) are deployed, creating encrypted tunnels for data transmission over public networks. These VPNs are most often used to maintain security when remote users access corporate networks, but they are also used in other settings. To enhance threat detection and response capabilities, network engineers implement Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS). Additionally, they may employ Security Information and Event Management (SIEM) solutions that aggregate and analyze security data across the network. Endpoint Detection and Response (EDR) solutions are also used to monitor and respond to threats at the device level, contributing to a more comprehensive security posture. Furthermore, network segmentation techniques, such as the use of VLANs and subnets, are commonly employed to isolate sensitive data and systems within a network. This practice limits the potential impact of breaches and enhances overall security by controlling access to critical resources. == Network performance and optimization == As modern networks grow in complexity and scale, driven by data-intensive applications such as cloud computing, high-definition video streaming, and distributed systems, optimizing network performance has become a critical responsibility of network engineers. Network performance and optimization tools aim for scalability, resilience, and efficient resource use with minimal, if any, negative performance impact. === Quality of Service (QoS) === Modern network architectures require more than basic Quality of Service (QoS) policies.
Advanced techniques like service function chaining (SFC) allow engineers to create dynamic service flows, applying specific QoS policies at various points in the traffic path. Additionally, network slicing, widely used in 5G networks, enables custom resource allocation for different service types, aiding high-bandwidth or low-latency services when necessary. === Intelligent load balancing and traffic engineering === Beyond traditional load balancing, techniques such as intent-based networking (IBN) and AI-driven traffic optimization are now implemented to predict and adjust traffic distribution based on usage patterns, network failures, or infrastructure performance. In hybrid cloud infrastructures, Software-Defined WAN (SD-WAN) optimizes connectivity between on-premises and cloud environments, dynamically managing routes and bandwidth allocation. Policies like data center interconnect (DCI) ensure high-performance connections across geographically distributed data centers. === Proactive network monitoring and predictive troubleshooting === Traditional network monitoring tools are supplemented by telemetry streaming and real-time analytics solutions. Intent-based networking systems (IBNS) help automatically identify performance deviations from established service intents, while predictive maintenance techniques, powered by machine learning (ML), allow engineers to detect hardware failures or traffic congestion before they impact users. Self-healing networks, using software-defined networking (SDN), can make automatic adjustments to restore performance without always requiring manual intervention. === Network function virtualization (NFV) and edge computing === With the advent of network function virtualization (NFV), engineers can virtualize network functions, such as routing, firewalls, and load balancing. Additionally, edge computing brings processing and storage closer to end users, which is relevant to applications requiring low-latency, such as IoT and real-time analytics. === Multipath protocols and application-layer optimization === Multipath transport protocols, such as Multipath TCP (MPTCP), optimize the use of multiple paths simultaneously, improving high availability and distribution of network load. This can be useful in networks that support redundant connections or where latency must be minimized. Simultaneously, application-layer optimizations focus on fine-tuning traffic at the software level to better deliver data flow across distributed systems, reducing overhead and enhancing throughput. == Cloud computing engineering == The advent of cloud computing has introduced new paradigms for network engineering, focusing on the design and optimization of virtualized infrastructures. Network engineers can manage the integration of on-premises systems with cloud services with the intention of improving scalability, reliability, and security. === Cloud network architecture === Cloud network architecture requires the design of virtualized networks that can scale to meet varying demand. Virtual private cloud (VPC) and hybrid cloud models allow organizations to extend their internal networks into cloud environments, balancing on-premises resources with public cloud services. Cloud interconnect solutions, such as dedicated connections, can minimize latency and optimize data transfer between on-premises and cloud infrastructures. 
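Laying out a virtual private cloud of the kind described above usually starts with carving an address block into per-subnet ranges. A minimal sketch using Python's standard ipaddress module, with an illustrative 10.0.0.0/16 block (the block and subnet sizes are assumptions, not a recommendation):

```python
import ipaddress

# Illustrative VPC address block.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve out the first few /24 subnets, e.g. one per availability zone or tier.
subnets = list(vpc.subnets(new_prefix=24))[:3]
for net in subnets:
    print(net, "-", net.num_addresses, "addresses")

# Membership check of the kind used when writing segmentation or routing rules.
host = ipaddress.ip_address("10.0.1.25")
print(host in subnets[1])   # True: 10.0.1.25 falls inside 10.0.1.0/24
```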
=== Software-defined networking (SDN) === Software-defined networking (SDN) is central to cloud networking, enabling centralized control over network configurations. SDN, combined with NFV, allows the management of network resources through software, automating tasks such as load balancing, routing, and firewalling. Overlay networks are commonly employed to create virtual networks on shared physical infrastructure, supporting multi-tenant environments with enhanced security and isolation. === Cloud network security === Cloud security involves securing data that traverses multiple environments. Engineers implement encryption, identity and access management (IAM), and zero trust architectures to protect cloud networks. Firewalls, intrusion detection systems, and cloud-native security solutions monitor and safeguard these environments. Micro-segmentation is used to isolate workloads and minimize the attack surface, while VPNs and IPsec tunnels secure communication between cloud and on-premises networks. === Performance optimization === Optimizing network performance in the cloud is relevant for applications requiring low latency and high throughput. Engineers deploy content delivery networks to reduce latency, configure dedicated connections, and apply traffic engineering policies to ensure optimal routing between cloud regions. === Tools and protocols === Cloud networking relies on protocols such as VXLAN and Generic Routing Encapsulation (GRE) to facilitate communication across virtualized environments. Automation tools enable Infrastructure as Code (IaC) practices, allowing for more scalable and consistent deployment of cloud network configurations. == Emerging trends == Network engineering is rapidly evolving, driven by advancements in technology and new demands for connectivity. One trend is the integration of artificial intelligence (AI) and machine learning (ML) into network management. AI-powered tools are increasingly used for network automation and optimization, predictive analytics, and intelligent fault detection. AI's role in cybersecurity is also expanding, where it is used to identify and mitigate threats by analyzing patterns in network behavior. The development of quantum networking offers the potential for highly secure communications through quantum cryptography and quantum key distribution (QKD). Quantum networking is still at an experimental stage. Space-based internet systems are also a growing trend in network engineering. Projects involving low Earth orbit (LEO) satellite constellations, like SpaceX's Starlink, aim to extend Internet access to remote and underserved areas. In the future, the rollout of 6G networks may improve data transfer rates, latency, and connectivity. 6G is expected to support new technologies such as real-time holographic communication, virtual environments, and AI-driven applications. These advancements will most likely require new approaches to spectrum management, energy efficiency, and sustainable infrastructure design to meet the projected growth of spending on digital transformation. == IoT == The Internet of Things (IoT) is a framework for connecting physical objects to the Internet, and it presents a broad set of challenges for network engineering. The Internet now plays a vital role in daily life, and a key aspect of this technological advancement is the integration of multiple technologies with communication systems.
Among the most important applications of the IoT are the identification and tracking of smart objects. Wireless sensor networks (WSNs) enable pervasive sensing mechanisms, impacting many facets of contemporary living. The growth of these devices within a communicative and responsive network will ultimately form the Internet of Things. In this context, sensors and actuators seamlessly interact with the surrounding environment, facilitating information sharing across various platforms to develop a common operating picture (COP). The IoT envisions a future in which the digital and physical domains are interconnected through advanced information and wireless communication technologies, and ongoing research addresses its visions, concepts, enabling technologies, open challenges, research directions, and applications. == References ==
Wikipedia/Computer_network_engineering
Electromechanics combine processes and procedures drawn from electrical engineering and mechanical engineering. Electromechanics focus on the interaction of electrical and mechanical systems as a whole and how the two systems interact with each other. This process is especially prominent in systems such as those of DC or AC rotating electrical machines which can be designed and operated to generate power from a mechanical process (generator) or used to power a mechanical effect (motor). Electrical engineering in this context also encompasses electronics engineering. Electromechanical devices are ones which have both electrical and mechanical processes. Strictly speaking, a manually operated switch is an electromechanical component due to the mechanical movement causing an electrical output. Though this is true, the term is usually understood to refer to devices which involve an electrical signal to create mechanical movement, or vice versa mechanical movement to create an electric signal. Often involving electromagnetic principles such as in relays, which allow a voltage or current to control another, usually isolated circuit voltage or current by mechanically switching sets of contacts, and solenoids, by which a voltage can actuate a moving linkage as in solenoid valves. Before the development of modern electronics, electromechanical devices were widely used in complicated subsystems of parts, including electric typewriters, teleprinters, clocks, initial television systems, and the very early electromechanical digital computers. Solid-state electronics have replaced electromechanics in many applications. == History == The first electric motor was invented in 1822 by Michael Faraday. The motor was developed only a year after Hans Christian Ørsted discovered that the flow of electric current creates a proportional magnetic field. This early motor was simply a wire partially submerged into a glass of mercury with a magnet at the bottom. When the wire was connected to a battery a magnetic field was created and this interaction with the magnetic field given off by the magnet caused the wire to spin. Ten years later the first electric generator was invented, again by Michael Faraday. This generator consisted of a magnet passing through a coil of wire and inducing current that was measured by a galvanometer. Faraday's research and experiments into electricity are the basis of most of modern electromechanical principles known today. Interest in electromechanics surged with the research into long distance communication. The Industrial Revolution's rapid increase in production gave rise to a demand for intracontinental communication, allowing electromechanics to make its way into public service. Relays originated with telegraphy as electromechanical devices were used to regenerate telegraph signals. The Strowger switch, the Panel switch, and similar devices were widely used in early automated telephone exchanges. Crossbar switches were first widely installed in the middle 20th century in Sweden, the United States, Canada, and Great Britain, and these quickly spread to the rest of the world. Electromechanical systems saw a massive leap in progress from 1910-1945 as the world was put into global war twice. World War I saw a burst of new electromechanics as spotlights and radios were used by all countries. By World War II, countries had developed and centralized their military around the versatility and power of electromechanics. 
One example of these still used today is the alternator, which was created to power military equipment in the 1950s and later repurposed for automobiles in the 1960s. Post-war America greatly benefited from the military's development of electromechanics as household work was quickly replaced by electromechanical systems such as microwaves, refrigerators, and washing machines. The electromechanical television systems of the late 19th century were less successful. Electric typewriters developed, up to the 1980s, as "power-assisted typewriters". They contained a single electrical component, the motor. Where the keystroke had previously moved a typebar directly, now it engaged mechanical linkages that directed mechanical power from the motor into the typebar. This was also true of the later IBM Selectric. At Bell Labs, in the 1946, the Bell Model V computer was developed. It was an electromechanical relay-based device; cycles took seconds. In 1968 electromechanical systems were still under serious consideration for an aircraft flight control computer, until a device based on large scale integration electronics was adopted in the Central Air Data Computer. === Microelectromechanical systems (MEMS) === Microelectromechanical systems (MEMS) have roots in the silicon revolution, which can be traced back to two important silicon semiconductor inventions from 1959: the monolithic integrated circuit (IC) chip by Robert Noyce at Fairchild Semiconductor, and the metal–oxide–semiconductor field-effect transistor (MOSFET) invented at Bell Labs between 1955 and 1960, after Frosch and Derick discovered and used surface passivation by silicon dioxide to create the first planar transistors, the first in which drain and source were adjacent at the same surface. MOSFET scaling, the miniaturisation of MOSFETs on IC chips, led to the miniaturisation of electronics (as predicted by Moore's law and Dennard scaling). This laid the foundations for the miniaturisation of mechanical systems, with the development of micromachining technology based on silicon semiconductor devices, as engineers began realizing that silicon chips and MOSFETs could interact and communicate with the surroundings and process things such as chemicals, motions and light. One of the first silicon pressure sensors was isotropically micromachined by Honeywell in 1962. An early example of a MEMS device is the resonant-gate transistor, an adaptation of the MOSFET, developed by Harvey C. Nathanson in 1965. During the 1970s to early 1980s, a number of MOSFET microsensors were developed for measuring physical, chemical, biological and environmental parameters. In the early 21st century, there has been research on nanoelectromechanical systems (NEMS). == Modern practice == Today, electromechanical processes are mainly used by power companies. All fuel based generators convert mechanical movement to electrical power. Some renewable energies such as wind and hydroelectric are powered by mechanical systems that also convert movement to electricity. In the last thirty years of the 20th century, equipment which would generally have used electromechanical devices became less expensive. This equipment became cheaper because it used more reliably integrated microcontroller circuits containing ultimately a few million transistors, and a program to carry out the same task through logic. With electromechanical components there were only moving parts, such as mechanical electric actuators. 
This more reliable logic has replaced most electromechanical devices, because any point in a system which must rely on mechanical movement for proper operation will inevitably have mechanical wear and eventually fail. Properly designed electronic circuits without moving parts will continue to operate correctly almost indefinitely and are used in most simple feedback control systems. Circuits without moving parts appear in a large number of items from traffic lights to washing machines. Another electromechanical device is piezoelectric devices, but they do not use electromagnetic principles. Piezoelectric devices can create sound or vibration from an electrical signal or create an electrical signal from sound or mechanical vibration. To become an electromechanical engineer, typical college courses involve mathematics, engineering, computer science, designing of machines, and other automotive classes that help gain skill in troubleshooting and analyzing issues with machines. To be an electromechanical engineer a bachelor's degree is required, usually in electrical, mechanical, or electromechanical engineering. As of April 2018, only two universities, Michigan Technological University and Wentworth Institute of Technology, offer the major of electromechanical engineering . To enter the electromechanical field as an entry-level technician, an associative degree is all that is required. As of 2016, approximately 13,800 people work as electro-mechanical technicians in the US. The job outlook for 2016 to 2026 for technicians is 4% growth which is about an employment change of 500 positions. This outlook is slower than average. == See also == == References == Citations Sources Davim, J. Paulo, editor (2011) Mechatronics, John Wiley & Sons ISBN 978-1-84821-308-1 . Furlani, Edward P. (August 15, 2001). Permanent Magnet and Electromechanical Devices: Materials, Analysis and Applications. Academic Press Series in Electromagnetism. San Diego: Academic Press. ISBN 978-0-12-269951-1. OCLC 47726317. Krause, Paul C.; Wasynczuk, Oleg (1989). Electromechanical Motion Devices. McGraw-Hill Series in Electrical and Computer Engineering. New York: McGraw-Hill. ISBN 978-0-07-035494-4. OCLC 18224514. Szolc T., Konowrocki R., Michajlow M., Pregowska A., An Investigation of the Dynamic Electromechanical Coupling Effects in Machine Drive Systems Driven by Asynchronous Motors, Mechanical Systems and Signal Processing, ISSN 0888-3270, Vol.49, pp. 118–134, 2014 "WWI: Technology and the weapons of war | NCpedia". www.ncpedia.org. Retrieved 2018-04-22. == Further reading == A first course in electromechanics. By Hugh Hildreth Skilling. Wiley, 1960. Electromechanics: a first course in electromechanical energy conversion, Volume 1. By Hugh Hildreth Skilling. R. E. Krieger Pub. Co., Jan 1, 1979. Electromechanics and electrical machinery. By J. F. Lindsay, M. H. Rashid. Prentice-Hall, 1986. Electromechanical motion devices. By Hi-Dong Chai. Prentice Hall PTR, 1998. Mechatronics: Electromechanics and Contromechanics. By Denny K. Miu. Springer London, Limited, 2011.
Wikipedia/Electromechanics
The strength of materials is determined using various methods of calculating the stresses and strains in structural members, such as beams, columns, and shafts. The methods employed to predict the response of a structure under loading and its susceptibility to various failure modes takes into account the properties of the materials such as its yield strength, ultimate strength, Young's modulus, and Poisson's ratio. In addition, the mechanical element's macroscopic properties (geometric properties) such as its length, width, thickness, boundary constraints and abrupt changes in geometry such as holes are considered. The theory began with the consideration of the behavior of one and two dimensional members of structures, whose states of stress can be approximated as two dimensional, and was then generalized to three dimensions to develop a more complete theory of the elastic and plastic behavior of materials. An important founding pioneer in mechanics of materials was Stephen Timoshenko. == Definition == In the mechanics of materials, the strength of a material is its ability to withstand an applied load without failure or plastic deformation. The field of strength of materials deals with forces and deformations that result from their acting on a material. A load applied to a mechanical member will induce internal forces within the member called stresses when those forces are expressed on a unit basis. The stresses acting on the material cause deformation of the material in various manners including breaking them completely. Deformation of the material is called strain when those deformations too are placed on a unit basis. The stresses and strains that develop within a mechanical member must be calculated in order to assess the load capacity of that member. This requires a complete description of the geometry of the member, its constraints, the loads applied to the member and the properties of the material of which the member is composed. The applied loads may be axial (tensile or compressive), or rotational (strength shear). With a complete description of the loading and the geometry of the member, the state of stress and state of strain at any point within the member can be calculated. Once the state of stress and strain within the member is known, the strength (load carrying capacity) of that member, its deformations (stiffness qualities), and its stability (ability to maintain its original configuration) can be calculated. The calculated stresses may then be compared to some measure of the strength of the member such as its material yield or ultimate strength. The calculated deflection of the member may be compared to deflection criteria that are based on the member's use. The calculated buckling load of the member may be compared to the applied load. The calculated stiffness and mass distribution of the member may be used to calculate the member's dynamic response and then compared to the acoustic environment in which it will be used. Material strength refers to the point on the engineering stress–strain curve (yield stress) beyond which the material experiences deformations that will not be completely reversed upon removal of the loading and as a result, the member will have a permanent deflection. The ultimate strength of the material refers to the maximum value of stress reached. The fracture strength is the stress value at fracture (the last stress value recorded). === Types of loadings === Transverse loadings – Forces applied perpendicular to the longitudinal axis of a member. 
Transverse loading causes the member to bend and deflect from its original position, with internal tensile and compressive strains accompanying the change in curvature of the member. Transverse loading also induces shear forces that cause shear deformation of the material and increase the transverse deflection of the member. Axial loading – The applied forces are collinear with the longitudinal axis of the member. The forces cause the member to either stretch or shorten. Torsional loading – Twisting action caused by a pair of externally applied equal and oppositely directed force couples acting on parallel planes or by a single external couple applied to a member that has one end fixed against rotation. === Stress terms === Uniaxial stress is expressed by σ = F A , {\displaystyle \sigma ={\frac {F}{A}},} where F is the force acting on an area A. The area can be the undeformed area or the deformed area, depending on whether engineering stress or true stress is of interest. Compressive stress (or compression) is the stress state caused by an applied load that acts to reduce the length of the material (compression member) along the axis of the applied load; it is, in other words, a stress state that causes a squeezing of the material. A simple case of compression is the uniaxial compression induced by the action of opposite, pushing forces. Compressive strength for materials is generally higher than their tensile strength. However, structures loaded in compression are subject to additional failure modes, such as buckling, that are dependent on the member's geometry. Tensile stress is the stress state caused by an applied load that tends to elongate the material along the axis of the applied load, in other words, the stress caused by pulling the material. The strength of structures of equal cross-sectional area loaded in tension is independent of shape of the cross-section. Materials loaded in tension are susceptible to stress concentrations such as material defects or abrupt changes in geometry. However, materials exhibiting ductile behaviour (many metals for example) can tolerate some defects while brittle materials (such as ceramics and some steels) can fail well below their ultimate material strength. Shear stress is the stress state caused by the combined energy of a pair of opposing forces acting along parallel lines of action through the material, in other words, the stress caused by faces of the material sliding relative to one another. An example is cutting paper with scissors or stresses due to torsional loading. === Stress parameters for resistance === Material resistance can be expressed in several mechanical stress parameters. The term material strength is used when referring to mechanical stress parameters. These are physical quantities with dimension homogeneous to pressure and force per unit surface. The traditional measure unit for strength are therefore MPa in the International System of Units, and the psi between the United States customary units. Strength parameters include: yield strength, tensile strength, fatigue strength, crack resistance, and other parameters. Yield strength is the lowest stress that produces a permanent deformation in a material. In some materials, like aluminium alloys, the point of yielding is difficult to identify, thus it is usually defined as the stress required to cause 0.2% plastic strain. This is called a 0.2% proof stress. 
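The uniaxial stress definition above, together with the 0.2% proof-stress convention, lends itself to a small worked example. The sketch below computes the engineering stress in an axially loaded rod and checks it against an assumed proof stress; the rod dimensions, load, and the 270 MPa proof-stress value are illustrative assumptions, not taken from any standard.

```python
import math

def uniaxial_stress(force_n: float, area_m2: float) -> float:
    """Engineering stress in pascals: applied force over the original cross-sectional area."""
    return force_n / area_m2

def exceeds_proof_stress(stress_pa: float, proof_stress_pa: float) -> bool:
    """True if the computed stress exceeds the 0.2% proof (yield) stress."""
    return stress_pa > proof_stress_pa

if __name__ == "__main__":
    # Illustrative case: a 10 mm diameter aluminium-alloy rod under 20 kN of tension.
    diameter_m = 0.010
    area = math.pi * (diameter_m / 2.0) ** 2        # ~7.85e-5 m^2
    sigma = uniaxial_stress(20_000.0, area)         # ~255 MPa
    proof = 270e6                                   # assumed 0.2% proof stress, Pa
    print(f"stress = {sigma / 1e6:.0f} MPa, exceeds proof stress: {exceeds_proof_stress(sigma, proof)}")
```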
Compressive strength is a limit state of compressive stress that leads to failure in a material in the manner of ductile failure (infinite theoretical yield) or brittle failure (rupture as the result of crack propagation, or sliding along a weak plane – see shear strength). Tensile strength or ultimate tensile strength is a limit state of tensile stress that leads to tensile failure in the manner of ductile failure (yield as the first stage of that failure, some hardening in the second stage and breakage after a possible "neck" formation) or brittle failure (sudden breaking in two or more pieces at a low-stress state). The tensile strength can be quoted as either true stress or engineering stress, but engineering stress is the most commonly used. Fatigue strength is a more complex measure of the strength of a material that considers several loading episodes in the service period of an object, and is usually more difficult to assess than the static strength measures. Fatigue strength is quoted here as a simple range ( Δ σ = σ m a x − σ m i n {\displaystyle \Delta \sigma =\sigma _{\mathrm {max} }-\sigma _{\mathrm {min} }} ). In the case of cyclic loading it can be appropriately expressed as an amplitude usually at zero mean stress, along with the number of cycles to failure under that condition of stress. Impact strength is the capability of the material to withstand a suddenly applied load and is expressed in terms of energy. Often measured with the Izod impact strength test or Charpy impact test, both of which measure the impact energy required to fracture a sample. Volume, modulus of elasticity, distribution of forces, and yield strength affect the impact strength of a material. In order for a material or object to have a high impact strength, the stresses must be distributed evenly throughout the object. It also must have a large volume with a low modulus of elasticity and a high material yield strength. === Strain parameters for resistance === Deformation of the material is the change in geometry created when stress is applied (as a result of applied forces, gravitational fields, accelerations, thermal expansion, etc.). Deformation is expressed by the displacement field of the material. Strain, or reduced deformation, is a mathematical term that expresses the trend of the deformation change among the material field. Strain is the deformation per unit length. In the case of uniaxial loading the displacement of a specimen (for example, a bar element) lead to a calculation of strain expressed as the quotient of the displacement and the original length of the specimen. For 3D displacement fields it is expressed as derivatives of displacement functions in terms of a second-order tensor (with 6 independent elements). Deflection is a term to describe the magnitude to which a structural element is displaced when subject to an applied load. === Stress–strain relations === Elasticity is the ability of a material to return to its previous shape after stress is released. In many materials, the relation between applied stress is directly proportional to the resulting strain (up to a certain limit), and a graph representing those two quantities is a straight line. The slope of this line is known as Young's modulus, or the "modulus of elasticity". The modulus of elasticity can be used to determine the stress–strain relationship in the linear-elastic portion of the stress–strain curve. 
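In the linear-elastic region just described, strain follows from stress through Young's modulus (ε = σ/E), and the elongation of a uniform axially loaded bar is δ = FL/(AE). The following sketch uses a typical textbook modulus for steel; the load and geometry are illustrative assumptions.

```python
def strain_from_stress(stress_pa: float, youngs_modulus_pa: float) -> float:
    """Hooke's law in the linear-elastic region: strain = stress / E."""
    return stress_pa / youngs_modulus_pa

def axial_elongation(force_n: float, length_m: float,
                     area_m2: float, youngs_modulus_pa: float) -> float:
    """Elongation of a uniform bar under axial load: delta = F * L / (A * E)."""
    return force_n * length_m / (area_m2 * youngs_modulus_pa)

if __name__ == "__main__":
    E_STEEL = 200e9   # Pa, a commonly quoted textbook value for steel
    print(f"strain at 100 MPa: {strain_from_stress(100e6, E_STEEL):.1e}")   # 5.0e-04
    # 2 m long bar, 1 cm^2 cross-section, 10 kN axial load
    delta = axial_elongation(10_000.0, 2.0, 1e-4, E_STEEL)
    print(f"elongation: {delta * 1e3:.2f} mm")                              # 1.00 mm
```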
The linear-elastic region is either below the yield point, or if a yield point is not easily identified on the stress–strain plot it is defined to be between 0 and 0.2% strain, and is defined as the region of strain in which no yielding (permanent deformation) occurs. Plasticity or plastic deformation is the opposite of elastic deformation and is defined as unrecoverable strain. Plastic deformation is retained after the release of the applied stress. Most materials in the linear-elastic category are usually capable of plastic deformation. Brittle materials, like ceramics, do not experience any plastic deformation and will fracture under relatively low strain, while ductile materials such as metallics, lead, or polymers will plastically deform much more before a fracture initiation. Consider the difference between a carrot and chewed bubble gum. The carrot will stretch very little before breaking. The chewed bubble gum, on the other hand, will plastically deform enormously before finally breaking. == Design terms == Ultimate strength is an attribute related to a material, rather than just a specific specimen made of the material, and as such it is quoted as the force per unit of cross section area (N/m2). The ultimate strength is the maximum stress that a material can withstand before it breaks or weakens. For example, the ultimate tensile strength (UTS) of AISI 1018 Steel is 440 MPa. In Imperial units, the unit of stress is given as lbf/in2 or pounds-force per square inch. This unit is often abbreviated as psi. One thousand psi is abbreviated ksi. A factor of safety is a design criteria that an engineered component or structure must achieve. F S = F / f {\displaystyle FS=F/f} , where FS: the factor of safety, Rf The applied stress, and F: ultimate allowable stress (psi or MPa) Margin of Safety is the common method for design criteria. It is defined MS = Pu/P − 1. For example, to achieve a factor of safety of 4, the allowable stress in an AISI 1018 steel component can be calculated to be F = U T S / F S {\displaystyle F=UTS/FS} = 440/4 = 110 MPa, or F {\displaystyle F} = 110×106 N/m2. Such allowable stresses are also known as "design stresses" or "working stresses". Design stresses that have been determined from the ultimate or yield point values of the materials give safe and reliable results only for the case of static loading. Many machine parts fail when subjected to a non-steady and continuously varying loads even though the developed stresses are below the yield point. Such failures are called fatigue failure. The failure is by a fracture that appears to be brittle with little or no visible evidence of yielding. However, when the stress is kept below "fatigue stress" or "endurance limit stress", the part will endure indefinitely. A purely reversing or cyclic stress is one that alternates between equal positive and negative peak stresses during each cycle of operation. In a purely cyclic stress, the average stress is zero. When a part is subjected to a cyclic stress, also known as stress range (Sr), it has been observed that the failure of the part occurs after a number of stress reversals (N) even if the magnitude of the stress range is below the material's yield strength. Generally, higher the range stress, the fewer the number of reversals needed for failure. === Failure theories === There are four failure theories: maximum shear stress theory, maximum normal stress theory, maximum strain energy theory, and maximum distortion energy theory (von Mises criterion of failure). 
Out of these four theories of failure, the maximum normal stress theory is only applicable for brittle materials, and the remaining three theories are applicable for ductile materials. Of the latter three, the distortion energy theory provides the most accurate results in a majority of the stress conditions. The strain energy theory needs the value of Poisson's ratio of the part material, which is often not readily available. The maximum shear stress theory is conservative. For simple unidirectional normal stresses all theories are equivalent, which means all theories will give the same result. Maximum shear stress theory postulates that failure will occur if the magnitude of the maximum shear stress in the part exceeds the shear strength of the material determined from uniaxial testing. Maximum normal stress theory postulates that failure will occur if the maximum normal stress in the part exceeds the ultimate tensile stress of the material as determined from uniaxial testing. This theory deals with brittle materials only. The maximum tensile stress should be less than or equal to ultimate tensile stress divided by factor of safety. The magnitude of the maximum compressive stress should be less than ultimate compressive stress divided by factor of safety. Maximum strain energy theory postulates that failure will occur when the strain energy per unit volume due to the applied stresses in a part equals the strain energy per unit volume at the yield point in uniaxial testing. Maximum distortion energy theory, also known as maximum distortion energy theory of failure or von Mises–Hencky theory. This theory postulates that failure will occur when the distortion energy per unit volume due to the applied stresses in a part equals the distortion energy per unit volume at the yield point in uniaxial testing. The total elastic energy due to strain can be divided into two parts: one part causes change in volume, and the other part causes a change in shape. Distortion energy is the amount of energy that is needed to change the shape. Fracture mechanics was established by Alan Arnold Griffith and George Rankine Irwin. This important theory is also known as numeric conversion of toughness of material in the case of crack existence. A material's strength depends on its microstructure. The engineering processes to which a material is subjected can alter its microstructure. Strengthening mechanisms that alter the strength of a material include work hardening, solid solution strengthening, precipitation hardening, and grain boundary strengthening. Strengthening mechanisms are accompanied by the caveat that some other mechanical properties of the material may degenerate in an attempt to make a material stronger. For example, in grain boundary strengthening, although yield strength is maximized with decreasing grain size, ultimately, very small grain sizes make the material brittle. In general, the yield strength of a material is an adequate indicator of the material's mechanical strength. Considered in tandem with the fact that the yield strength is the parameter that predicts plastic deformation in the material, one can make informed decisions on how to increase the strength of a material depending on its microstructural properties and the desired end effect. Strength is expressed in terms of the limiting values of the compressive stress, tensile stress, and shear stresses that would cause failure. 
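As a concrete illustration of the distortion-energy (von Mises) criterion combined with a factor of safety, the sketch below evaluates the equivalent stress for a plane-stress state, σ_vm = sqrt(σx² − σx·σy + σy² + 3τxy²), and compares it with an assumed yield strength. The stress components and the 250 MPa yield value are illustrative only.

```python
import math

def von_mises_plane_stress(sx: float, sy: float, txy: float) -> float:
    """Equivalent (von Mises) stress for a plane-stress state; same units as the inputs."""
    return math.sqrt(sx**2 - sx * sy + sy**2 + 3.0 * txy**2)

def factor_of_safety(yield_strength: float, equivalent_stress: float) -> float:
    """Static factor of safety against yielding under the distortion-energy theory."""
    return yield_strength / equivalent_stress

if __name__ == "__main__":
    # Illustrative plane-stress state in MPa and a mild-steel-like yield strength.
    sigma_vm = von_mises_plane_stress(sx=120.0, sy=40.0, txy=30.0)   # ~118 MPa
    fs = factor_of_safety(yield_strength=250.0, equivalent_stress=sigma_vm)
    print(f"von Mises stress = {sigma_vm:.1f} MPa, factor of safety = {fs:.2f}")
```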
The effects of dynamic loading are probably the most important practical consideration of the theory of elasticity, especially the problem of fatigue. Repeated loading often initiates cracks, which grow until failure occurs at the corresponding residual strength of the structure. Cracks always start at a stress concentrations especially changes in cross-section of the product or defects in manufacturing, near holes and corners at nominal stress levels far lower than those quoted for the strength of the material. == See also == == References == == Further reading == == External links == Failure theories Case studies in structural failure
Wikipedia/Strength_of_materials
An accumulator is an energy storage device: a device which accepts energy, stores energy, and releases energy as needed. Some accumulators accept energy at a low rate (low power) over a long time interval and deliver the energy at a high rate (high power) over a short time interval. Some accumulators accept energy at a high rate over a short time interval and deliver the energy at a low rate over a longer time interval. Some accumulators typically accept and release energy at comparable rates. Various devices can store thermal energy, mechanical energy, and electrical energy. Energy is usually accepted and delivered in the same form. Some devices store a different form of energy than what they receive and deliver performing energy conversion on the way in and on the way out. Examples of accumulators include steam accumulators, mainsprings, flywheel energy storage, hydraulic accumulators, rechargeable batteries, capacitors, inductors, compensated pulsed alternators (compulsators), and pumped-storage hydroelectric plants. In general usage in an electrical context, the word accumulator normally refers to a lead–acid battery. The London Tower Bridge is operated via an accumulator. The original raising mechanism was powered by pressurised water stored in several hydraulic accumulators. In 1974, the original operating mechanism was largely replaced by a new electro-hydraulic drive system. == See also == Rechargeable battery Electric vehicle battery Battery storage power station == References == == Bibliography == Wanger, E C; Willard, W E (June 1981). "Low Maintenance Hydraulic Accumulator" (report). Defense Technical Information Center. Boeing Military Airplane Company / USAF Wright Aeronautical Laboratories. Archived (PDF) from the original on September 24, 2015. Retrieved 12 April 2015. Frazier, Captain John C. (December 1981). "Electric Vehicle Power Controller" (thesis). Defense Technical Information Center. Air Force Institute of Technology. Archived (PDF) from the original on September 24, 2015. Retrieved 12 April 2015. Hayano, Ryugo S. (29 September 2009). "Development of a charged-particle accumulator using an RF confinement method" (report). Defense Technical Information Center. University of Tokyo. Archived (PDF) from the original on September 24, 2015. Retrieved 12 April 2015. Tyler, Nathan (June 2008). "Design, Analysis and Construction of a High Voltage Capacitor Charging Supply" (thesis). Defense Technical Information Center. Naval Postgraduate School. Archived (PDF) from the original on September 24, 2015. Retrieved 12 April 2015. Benediktov, G L (1 December 1983). "Thyristor Converter for Capacitive Laser Accumulators". Defense Technical Information Center. Foreign Technology Division, Wright-Patterson Air Force Base. Archived from the original (citation) on September 24, 2015. Retrieved 12 April 2015. Babykin, M V; Bartov, A V (14 December 1977). "Methods of Obtaining Maximum Electrical Power in Short Pulses". Defense Technical Information Center. Foreign Technology Division, Wright-Patterson Air Force Base. Archived from the original (citation) on September 24, 2015. Retrieved 12 April 2015.
Wikipedia/Accumulator_(energy)
A design engineer is an engineer focused on the engineering design process in any of the various engineering disciplines (including civil, mechanical, electrical, chemical, textiles, aerospace, nuclear, manufacturing, systems, and structural /building/architectural) and design disciplines like Human-Computer Interaction. Design engineers tend to work on products and systems that involve adapting and using complex scientific and mathematical techniques. The emphasis tends to be on utilizing engineering physics and other applied sciences to develop solutions for society. A design engineer usually works with a team of other engineers and other types of designers (e.g. industrial designers), to develop conceptual and detailed designs that ensure a product functions, performs, and is fit for its purpose. They may also work with marketers to develop the product concept and specifications to meet customer needs, and may direct the design effort. In many engineering areas, a distinction is made between the "design engineer" and other engineering roles (e.g. planning engineer, project engineer, test engineer). Analysis tends to play a larger role for the latter areas, while synthesis is more paramount for the former; nevertheless, all such roles are technically part of the overall engineering design process. When an engineering project involves public safety, design engineers involved are often required to be licensed - for example, as a Professional Engineer (in the U.S. and Canada). There is often an "industrial exemption" for engineers working on project only internally to their organization, although the scope and conditions of such exemptions vary widely across jurisdictions. == Design engineer tasks == Design engineers may work in a team along with other designers to create the drawings necessary for prototyping and production, or in the case of buildings, for construction. However, with the advent of CAD and solid modeling software, the design engineers may create the drawings themselves, or perhaps with the help of many corporate service providers. The next responsibility of many design engineers is prototyping. A model of the product is created and reviewed. Prototypes are either functional or non-functional. Functional "alpha" prototypes are used for testing; non-functional prototypes are used for form and fit checking. Virtual prototyping and hence for any such software solutions may also be used. This stage is where design flaws are found and corrected, and tooling, manufacturing fixtures, and packaging are developed. Once the "alpha" prototype is finalized after many iterations, the next step is the "beta" pre-production prototype. The design engineer, working with an industrial engineer, manufacturing engineer, and quality engineer, reviews an initial run of components and assemblies for design compliance and fabrication/manufacturing methods analysis. This is often determined through statistical process control. Variations in the product are correlated to aspects of the process and eliminated. The most common metric used is the process capability index Cpk. A Cpk of 1.0 is considered the baseline acceptance for full production go-ahead. The design engineer may follow the product and make requested changes and corrections throughout the whole life of the product. This is referred to as "cradle to grave" engineering. 
The design engineer works closely with the manufacturing engineer throughout the product life cycle, and is often required to investigate and validate design changes which could lead to possible production cost reductions in order to consistently reduce the price as the product becomes mature and thus subject to discounting to defend market volumes against newer competing products. Moreover, design changes may be also made mandatory by updates in laws and regulations. The design process is an information intensive one, and design engineers have been found to spend 56% of their time engaged in various information behaviours, including 14% actively searching for information. In addition to design engineers' core technical competence, research has demonstrated the critical nature of their personal attributes, project management skills, and cognitive abilities to succeed in the role. Amongst other more detailed findings, a recent work sampling study found that design engineers spend 62.92% of their time engaged in technical work, 40.37% in social work, and 49.66% in computer-based work. There was considerable overlap between these different types of work, with engineers spending 24.96% of their time engaged in technical and social work, 37.97% in technical and non-social, 15.42% in non-technical and social, and 21.66% in non-technical and non-social. == In software engineering == In software engineering, a Design Engineer is a person with the skills to tackle both design and software development tasks. As Maggie Appleton puts it, "a person who sits squarely at the intersection of design and engineering, and works to bridge the gap between them". Some of their main tasks include prototyping and designing in code. Related terms for Design Engineers in the Software Engineering industry include: UX Engineer UI Engineer Design Technologist Creative Technologist Design System Architect Product Designer Experience Designer == See also == Architectural engineering, also known as building engineering Chemical engineering Civil engineering Digital twin Electrical engineering Industrial design engineering Industrial engineering List of engineering branches Manufacturing engineering Mechanical engineering New product development Production engineering Test engineer Tool engineering == References ==
Wikipedia/Design_engineer
Solid-state physics is the study of rigid matter, or solids, through methods such as solid-state chemistry, quantum mechanics, crystallography, electromagnetism, and metallurgy. It is the largest branch of condensed matter physics. Solid-state physics studies how the large-scale properties of solid materials result from their atomic-scale properties. Thus, solid-state physics forms a theoretical basis of materials science. Along with solid-state chemistry, it also has direct applications in the technology of transistors and semiconductors. == Background == Solid materials are formed from densely packed atoms, which interact intensely. These interactions produce the mechanical (e.g. hardness and elasticity), thermal, electrical, magnetic and optical properties of solids. Depending on the material involved and the conditions in which it was formed, the atoms may be arranged in a regular, geometric pattern (crystalline solids, which include metals and ordinary water ice) or irregularly (an amorphous solid such as common window glass). The bulk of solid-state physics, as a general theory, is focused on crystals. Primarily, this is because the periodicity of atoms in a crystal — its defining characteristic — facilitates mathematical modeling. Likewise, crystalline materials often have electrical, magnetic, optical, or mechanical properties that can be exploited for engineering purposes. The forces between the atoms in a crystal can take a variety of forms. For example, in a crystal of sodium chloride (common salt), the crystal is made up of ionic sodium and chlorine, and held together with ionic bonds. In others, the atoms share electrons and form covalent bonds. In metals, electrons are shared amongst the whole crystal in metallic bonding. Finally, the noble gases do not undergo any of these types of bonding. In solid form, the noble gases are held together with van der Waals forces resulting from the polarisation of the electronic charge cloud on each atom. The differences between the types of solid result from the differences between their bonding. == History == The physical properties of solids have been common subjects of scientific inquiry for centuries, but a separate field going by the name of solid-state physics did not emerge until the 1940s, in particular with the establishment of the Division of Solid State Physics (DSSP) within the American Physical Society. The DSSP catered to industrial physicists, and solid-state physics became associated with the technological applications made possible by research on solids. By the early 1960s, the DSSP was the largest division of the American Physical Society. Large communities of solid state physicists also emerged in Europe after World War II, in particular in England, Germany, and the Soviet Union. In the United States and Europe, solid state became a prominent field through its investigations into semiconductors, superconductivity, nuclear magnetic resonance, and diverse other phenomena. During the early Cold War, research in solid state physics was often not restricted to solids, which led some physicists in the 1970s and 1980s to found the field of condensed matter physics, which organized around common techniques used to investigate solids, liquids, plasmas, and other complex matter. Today, solid-state physics is broadly considered to be the subfield of condensed matter physics, often referred to as hard condensed matter, that focuses on the properties of solids with regular crystal lattices. 
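The ionic-bonding picture described above for sodium chloride can be made semi-quantitative: the Coulomb attraction of a single Na+/Cl− pair at the crystal's nearest-neighbour separation already amounts to several electronvolts. The full cohesive energy of the crystal also involves the Madelung sum over all ions and short-range repulsion, which this sketch deliberately ignores; the separation used is a standard textbook value.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def ion_pair_coulomb_energy_ev(separation_m: float) -> float:
    """Electrostatic energy of a +e/-e ion pair in electronvolts (negative means bound)."""
    energy_j = -E_CHARGE**2 / (4.0 * math.pi * EPS0 * separation_m)
    return energy_j / E_CHARGE

if __name__ == "__main__":
    r_nacl = 2.82e-10   # Na-Cl nearest-neighbour distance in rock salt, ~0.282 nm
    print(f"single ion-pair Coulomb energy ≈ {ion_pair_coulomb_energy_ev(r_nacl):.2f} eV")  # ≈ -5.1 eV
```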
== Crystal structure and properties == Many properties of materials are affected by their crystal structure. This structure can be investigated using a range of crystallographic techniques, including X-ray crystallography, neutron diffraction and electron diffraction. The sizes of the individual crystals in a crystalline solid material vary depending on the material involved and the conditions when it was formed. Most crystalline materials encountered in everyday life are polycrystalline, with the individual crystals being microscopic in scale, but macroscopic single crystals can be produced either naturally (e.g. diamonds) or artificially. Real crystals feature defects or irregularities in the ideal arrangements, and it is these defects that critically determine many of the electrical and mechanical properties of real materials. == Electronic properties == Properties of materials such as electrical conduction and heat capacity are investigated by solid state physics. An early model of electrical conduction was the Drude model, which applied kinetic theory to the electrons in a solid. By assuming that the material contains immobile positive ions and an "electron gas" of classical, non-interacting electrons, the Drude model was able to explain electrical and thermal conductivity and the Hall effect in metals, although it greatly overestimated the electronic heat capacity. Arnold Sommerfeld combined the classical Drude model with quantum mechanics in the free electron model (or Drude-Sommerfeld model). Here, the electrons are modelled as a Fermi gas, a gas of particles which obey the quantum mechanical Fermi–Dirac statistics. The free electron model gave improved predictions for the heat capacity of metals, however, it was unable to explain the existence of insulators. The nearly free electron model is a modification of the free electron model which includes a weak periodic perturbation meant to model the interaction between the conduction electrons and the ions in a crystalline solid. By introducing the idea of electronic bands, the theory explains the existence of conductors, semiconductors and insulators. The nearly free electron model rewrites the Schrödinger equation for the case of a periodic potential. The solutions in this case are known as Bloch states. Since Bloch's theorem applies only to periodic potentials, and since unceasing random movements of atoms in a crystal disrupt periodicity, this use of Bloch's theorem is only an approximation, but it has proven to be a tremendously valuable approximation, without which most solid-state physics analysis would be intractable. Deviations from periodicity are treated by quantum mechanical perturbation theory. == Modern research == Modern research topics in solid-state physics include: High-temperature superconductivity Quasicrystals Spin glass Strongly correlated materials Two-dimensional materials Nanomaterials == See also == Condensed matter physics Crystallography Nuclear spectroscopy Solid mechanics == References == == Further reading == Neil W. Ashcroft and N. David Mermin, Solid State Physics (Harcourt: Orlando, 1976). Charles Kittel, Introduction to Solid State Physics (Wiley: New York, 2004). H. M. Rosenberg, The Solid State (Oxford University Press: Oxford, 1995). Steven H. Simon, The Oxford Solid State Basics (Oxford University Press: Oxford, 2013). Out of the Crystal Maze. Chapters from the History of Solid State Physics, ed. 
Lillian Hoddeson, Ernest Braun, Jürgen Teichmann, Spencer Weart (Oxford: Oxford University Press, 1992). M. A. Omar, Elementary Solid State Physics (Revised Printing, Addison-Wesley, 1993). Hofmann, Philip (2015-05-26). Solid State Physics (2 ed.). Wiley-VCH. ISBN 978-3527412822.
Wikipedia/Solid_state_physics
Graphene () is a carbon allotrope consisting of a single layer of atoms arranged in a honeycomb planar nanostructure. The name "graphene" is derived from "graphite" and the suffix -ene, indicating the presence of double bonds within the carbon structure. Graphene is known for its exceptionally high tensile strength, electrical conductivity, transparency, and being the thinnest two-dimensional material in the world. Despite the nearly transparent nature of a single graphene sheet, graphite (formed from stacked layers of graphene) appears black because it absorbs all visible light wavelengths. On a microscopic scale, graphene is the strongest material ever measured. The existence of graphene was first theorized in 1947 by Philip R. Wallace during his research on graphite's electronic properties, while the term graphene was first defined by Hanns-Peter Boehm in 1987. In 2004, the material was isolated and characterized by Andre Geim and Konstantin Novoselov at the University of Manchester using a piece of graphite and adhesive tape. In 2010, Geim and Novoselov were awarded the Nobel Prize in Physics for their "groundbreaking experiments regarding the two-dimensional material graphene". While small amounts of graphene are easy to produce using the method by which it was originally isolated, attempts to scale and automate the manufacturing process for mass production have had limited success due to cost-effectiveness and quality control concerns. The global graphene market was $9 million in 2012, with most of the demand from research and development in semiconductors, electronics, electric batteries, and composites. The IUPAC (International Union of Pure and Applied Chemistry) advises using the term "graphite" for the three-dimensional material and reserving "graphene" for discussions about the properties or reactions of single-atom layers. A narrower definition, of "isolated or free-standing graphene", requires that the layer be sufficiently isolated from its environment, but would include layers suspended or transferred to silicon dioxide or silicon carbide. == History == === Structure of graphite and its intercalation compounds === In 1859, Benjamin Brodie noted the highly lamellar structure of thermally reduced graphite oxide. Pioneers in X-ray crystallography attempted to determine the structure of graphite. The lack of large single crystal graphite specimens contributed to the independent development of X-ray powder diffraction by Peter Debye and Paul Scherrer in 1915, and Albert Hull in 1916. However, neither of their proposed structures was correct. In 1918, Volkmar Kohlschütter and P. Haenni described the properties of graphite oxide paper. The structure of graphite was successfully determined from single-crystal X-ray diffraction by J. D. Bernal in 1924, although subsequent research has made small modifications to the unit cell parameters. The theory of graphene was first explored by P. R. Wallace in 1947 as a starting point for understanding the electronic properties of 3D graphite. The emergent massless Dirac equation was separately pointed out in 1984 by Gordon Walter Semenoff, and by David P. Vincenzo and Eugene J. Mele. Semenoff emphasized the occurrence in a magnetic field of an electronic Landau level precisely at the Dirac point. This level is responsible for the anomalous integer Quantum Hall effect. 
=== Observations of thin graphite layers and related structures === Transmission electron microscopy (TEM) images of thin graphite samples consisting of a few graphene layers were published by G. Ruess and F. Vogt in 1948. Eventually, single layers were also observed directly. Single layers of graphite were also observed by transmission electron microscopy within bulk materials, particularly inside soot obtained by chemical exfoliation. From 1961 to 1962, Hanns-Peter Boehm published a study of extremely thin flakes of graphite. The study measured flakes as small as ~0.4 nm, which is around 3 atomic layers of amorphous carbon. This was the best possible resolution for TEMs in the 1960s. However, it is impossible to distinguish between suspended monolayer and multilayer graphene by their TEM contrasts, and the only known method is to analyze the relative intensities of various diffraction spots. The first reliable TEM observations of monolayers are likely given in references 24 and 26 of Geim and Novoselov's 2007 review. In 1975, van Bommel et al. epitaxially grew a single layer of graphite on top of silicon carbide. Others grew single layers of carbon atoms on other materials. This "epitaxial graphene" consists of a single-atom-thick hexagonal lattice of sp2-bonded carbon atoms, as in free-standing graphene. However, there is significant charge transfer between the two materials and, in some cases, hybridization between the d-orbitals of the substrate atoms and π orbitals of graphene, which significantly alter the electronic structure compared to that of free-standing graphene. Boehm et al. coined the term "graphene" for the hypothetical single-layer structure in 1986. The term was used again in 1987 to describe single sheets of graphite as a constituent of graphite intercalation compounds, which can be seen as crystalline salts of the intercalant and graphene. It was also used in the descriptions of carbon nanotubes by R. Saito and Mildred and Gene Dresselhaus in 1992, and in the description of polycyclic aromatic hydrocarbons in 2000 by S. Wang and others. Efforts to make thin films of graphite by mechanical exfoliation started in 1990. Initial attempts employed exfoliation techniques similar to the drawing method. Multilayer samples down to 10 nm in thickness were obtained. In 2002, Robert B. Rutherford and Richard L. Dudman filed for a patent in the US on a method to produce graphene by repeatedly peeling off layers from a graphite flake adhered to a substrate, achieving a graphite thickness of 0.00001 inches (0.00025 millimetres). The key to success was the ability to quickly and efficiently identify graphene flakes on the substrate using optical microscopy, which provided a small but visible contrast between the graphene and the substrate. Another U.S. patent was filed in the same year by Bor Z. Jang and Wen C. Huang for a method to produce graphene-based on exfoliation followed by attrition. In 2014, inventor Larry Fullerton patented a process for producing single-layer graphene sheets by graphene's strong diamagnetic properties. === Full isolation and characterization === Graphene was properly isolated and characterized in 2004 by Andre Geim and Konstantin Novoselov at the University of Manchester. They pulled graphene layers from graphite with a common adhesive tape in a process called micro-mechanical cleavage, colloquially referred to as the Scotch tape technique. The graphene flakes were then transferred onto a thin silicon dioxide layer on a silicon plate ("wafer"). 
The silica electrically isolated the graphene and weakly interacted with it, providing nearly charge-neutral graphene layers. The silicon beneath the SiO2 could be used as a "back gate" electrode to vary the charge density in the graphene over a wide range. This work resulted in the two winning the Nobel Prize in Physics in 2010 for their groundbreaking experiments with graphene. Their publication and the surprisingly easy preparation method that they described, sparked a "graphene gold rush". Research expanded and split off into many different subfields, exploring different exceptional properties of the material—quantum mechanical, electrical, chemical, mechanical, optical, magnetic, etc. === Exploring commercial applications === Since the early 2000s, several companies and research laboratories have been working to develop commercial applications of graphene. In 2014, a National Graphene Institute was established with that purpose at the University of Manchester, with a £60 million initial funding. In North East England two commercial manufacturers, Applied Graphene Materials and Thomas Swan Limited have begun manufacturing. Cambridge Nanosystems is a large-scale graphene powder production facility in East Anglia. == Structure == Graphene is a single layer of carbon atoms tightly bound in a hexagonal honeycomb lattice. It is an allotrope of carbon in the form of a plane of sp2-bonded atoms with a molecular bond length of 0.142 nm (1.42 Å). In a graphene sheet, each atom is connected to its three nearest carbon neighbors by σ-bonds, and a delocalized π-bond, which contributes to a valence band that extends over the whole sheet. This type of bonding is also seen in polycyclic aromatic hydrocarbons. The valence band is touched by a conduction band, making graphene a semimetal with unusual electronic properties that are best described by theories for massless relativistic particles. Charge carriers in graphene show linear, rather than quadratic, dependence of energy on momentum, and field-effect transistors with graphene can be made that show bipolar conduction. Charge transport is ballistic over long distances; the material exhibits large quantum oscillations and large nonlinear diamagnetism. === Bonding === Three of the four outer-shell electrons of each atom in a graphene sheet occupy three sp2 hybrid orbitals – a combination of orbitals s, px and py — that are shared with the three nearest atoms, forming σ-bonds. The length of these bonds is about 0.142 nanometers. The remaining outer-shell electron occupies a pz orbital that is oriented perpendicularly to the plane. These orbitals hybridize together to form two half-filled bands of free-moving electrons, π, and π∗, which are responsible for most of graphene's notable electronic properties. Recent quantitative estimates of aromatic stabilization and limiting size derived from the enthalpies of hydrogenation (ΔHhydro) agree well with the literature reports. Graphene sheets stack to form graphite with an interplanar spacing of 0.335 nm (3.35 Å). Graphene sheets in solid form usually show evidence in diffraction for graphite's (002) layering. This is true of some single-walled nanostructures. However, unlayered graphene displaying only (hk0) rings have been observed in the core of presolar graphite onions. TEM studies show faceting at defects in flat graphene sheets and suggest a role for two-dimensional crystallization from a melt. 
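The bond length and interplanar spacing quoted above fix graphene's basic geometric quantities. As a consistency check, the two-atom hexagonal unit cell has area (3√3/2)·a_CC², which yields the sheet's areal mass density and the frequently quoted theoretical specific surface area of roughly 2630 m²/g. The short sketch below is illustrative; the constants are standard values.

```python
import math

A_CC = 0.142e-9                          # C-C bond length in metres (0.142 nm, as above)
M_CARBON = 12.011 * 1.66053906660e-27    # mass of one carbon atom, kg

def unit_cell_area(a_cc: float = A_CC) -> float:
    """Area of graphene's two-atom hexagonal unit cell: (3*sqrt(3)/2) * a_cc^2."""
    return 1.5 * math.sqrt(3.0) * a_cc**2

def areal_density_kg_per_m2() -> float:
    """Mass per unit area of a single graphene sheet (two atoms per unit cell)."""
    return 2.0 * M_CARBON / unit_cell_area()

if __name__ == "__main__":
    rho = areal_density_kg_per_m2()
    ssa_m2_per_g = 2.0 / rho / 1000.0    # both faces exposed; convert from m^2/kg to m^2/g
    print(f"areal density ≈ {rho * 1e6:.2f} mg/m^2")             # ~0.76 mg/m^2
    print(f"specific surface area ≈ {ssa_m2_per_g:.0f} m^2/g")   # ~2630 m^2/g
```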
=== Geometry === The hexagonal lattice structure of isolated, single-layer graphene can be directly seen with transmission electron microscopy (TEM) of sheets of graphene suspended between bars of a metallic grid. Some of these images showed a "rippling" of the flat sheet, with an amplitude of about one nanometer. These ripples may be intrinsic to the material as a result of the instability of two-dimensional crystals, or may originate from the ubiquitous dirt seen in all TEM images of graphene. Photoresist residue, which must be removed to obtain atomic-resolution images, may be the "adsorbates" observed in TEM images, and may explain the observed rippling. The hexagonal structure is also seen in scanning tunneling microscope (STM) images of graphene supported on silicon dioxide substrates The rippling seen in these images is caused by the conformation of graphene to the substrates' lattice and is not intrinsic. === Stability === Ab initio calculations show that a graphene sheet is thermodynamically unstable if its size is less than about 20 nm and becomes the most stable fullerene (as within graphite) only for molecules larger than 24,000 atoms. == Electronic properties == Graphene is a zero-gap semiconductor because its conduction and valence bands meet at the Dirac points. The Dirac points are six locations in momentum space on the edge of the Brillouin zone, divided into two non-equivalent sets of three points. These sets are labeled K and K'. These sets give graphene a valley degeneracy of g v = 2 {\displaystyle g_{v}=2} . In contrast, for traditional semiconductors, the primary point of interest is generally Γ, where momentum is zero. If the in-plane direction is confined rather than infinite, its electronic structure changes. These confined structures are referred to as graphene nanoribbons. If the nanoribbon has a "zig-zag" edge, the bandgap remains zero. If it has an "armchair" edge, the bandgap is non-zero. Graphene's honeycomb structure can be viewed as two interleaving triangular lattices. This perspective has been used to calculate the band structure for a single graphite layer using a tight-binding approximation. === Electronic spectrum === Electrons propagating through the graphene honeycomb lattice effectively lose their mass, producing quasi-particles described by a 2D analogue of the Dirac equation rather than the Schrödinger equation for spin-⁠1/2⁠ particles. === Dispersion relation === The cleavage technique led directly to the first observation of the anomalous quantum Hall effect in graphene in 2005 by Geim's group and by Philip Kim and Yuanbo Zhang. This effect provided direct evidence of graphene's theoretically predicted Berry's phase of massless Dirac fermions and proof of the Dirac fermion nature of electrons. These effects were previously observed in bulk graphite by Yakov Kopelevich, Igor A. Luk'yanchuk, and others, in 2003–2004. When atoms are placed onto the graphene hexagonal lattice, the overlap between the pz(π) orbitals and the s or the px and py orbitals is zero by symmetry. Therefore, pz electrons forming the π bands in graphene can be treated independently. 
Within this π-band approximation, using a conventional tight-binding model, the dispersion relation (restricted to first-nearest-neighbor interactions only) that produces the energy of the electrons with wave vector k is: E ( k x , k y ) = ± γ 0 1 + 4 cos 2 1 2 a k x + 4 cos 1 2 a k x ⋅ cos 3 2 a k y {\displaystyle E(k_{x},k_{y})=\pm \,\gamma _{0}{\sqrt {1+4\cos ^{2}{{\tfrac {1}{2}}ak_{x}}+4\cos {{\tfrac {1}{2}}ak_{x}}\cdot \cos {{\tfrac {\sqrt {3}}{2}}ak_{y}}}}} with the nearest-neighbor (π orbitals) hopping energy γ0 ≈ 2.8 eV and the lattice constant a ≈ 2.46 Å. The conduction and valence bands correspond to the different signs. With one pz electron per atom in this model, the valence band is fully occupied, while the conduction band is vacant. The two bands touch at the zone corners (the K point in the Brillouin zone), where there is a zero density of states but no band gap. Thus, graphene exhibits a semi-metallic (or zero-gap semiconductor) character, although this is not true for a graphene sheet rolled into a carbon nanotube due to its curvature. Two of the six Dirac points are independent, while the rest are equivalent by symmetry. Near the K-points, the energy depends linearly on the wave vector, similar to a relativistic particle. Since an elementary cell of the lattice has a basis of two atoms, the wave function has an effective 2-spinor structure. Consequently, at low energies, even neglecting the true spin, electrons can be described by an equation formally equivalent to the massless Dirac equation. Hence, the electrons and holes are called Dirac fermions. This pseudo-relativistic description is restricted to the chiral limit, i.e., to vanishing rest mass M0, leading to interesting additional features: v F σ → ⋅ ∇ ψ ( r ) = E ψ ( r ) . {\displaystyle v_{F}\,{\vec {\sigma }}\cdot \nabla \psi (\mathbf {r} )\,=\,E\psi (\mathbf {r} ).} Here vF ~ 10^6 m/s (about 0.003 c) is the Fermi velocity in graphene, which replaces the velocity of light in the Dirac theory; σ → {\displaystyle {\vec {\sigma }}} is the vector of the Pauli matrices, ψ ( r ) {\displaystyle \psi (\mathbf {r} )} is the two-component wave function of the electrons, and E is their energy. The equation describing the electrons' linear dispersion relation is: E ( q ) = ℏ v F q {\displaystyle E(q)=\hbar v_{F}q} where the wavevector q is measured from the Brillouin zone vertex K, q = | k − K | {\displaystyle q=\left|\mathbf {k} -\mathrm {K} \right|} , and the zero of energy is set to coincide with the Dirac point. The equation uses a pseudospin matrix formula that describes two sublattices of the honeycomb lattice. === Single-atom wave propagation === Electron waves in graphene propagate within a single-atom layer, making them sensitive to the proximity of other materials such as high-κ dielectrics, superconductors, and ferromagnets. === Ambipolar electron and hole transport === Graphene exhibits high electron mobility at room temperature, with values reported in excess of 15000 cm2⋅V−1⋅s−1. Hole and electron mobilities are nearly identical. The mobility is independent of temperature between 10 K and 100 K, showing minimal change even at room temperature (300 K), suggesting that the dominant scattering mechanism is defect scattering. Scattering by graphene's acoustic phonons intrinsically limits room temperature mobility in freestanding graphene to 200000 cm2⋅V−1⋅s−1 at a carrier density of 10^12 cm−2.
The corresponding resistivity of graphene sheets is 10−8 Ω⋅m, lower than the resistivity of silver, the lowest otherwise known at room temperature. However, on SiO2 substrates, electron scattering by optical phonons of the substrate has a more significant effect than scattering by graphene's phonons, limiting mobility to 40000 cm2⋅V−1⋅s−1. Charge transport can be affected by the adsorption of contaminants such as water and oxygen molecules, leading to non-repetitive and large-hysteresis I-V characteristics, so researchers generally need to conduct electrical measurements in a vacuum. Coating the graphene surface with materials such as SiN, PMMA or h-BN has been proposed for protection. In January 2015, the first stable graphene device operation in the air over several weeks was reported for graphene whose surface was protected by aluminum oxide. In 2015, lithium-coated graphene exhibited superconductivity, a first for graphene. Electrical resistance in 40-nanometer-wide nanoribbons of epitaxial graphene changes in discrete steps. The ribbons' conductance exceeds predictions by a factor of 10. The ribbons can function more like optical waveguides or quantum dots, allowing electrons to flow smoothly along the ribbon edges. In copper, by contrast, resistance increases proportionally with length as electrons encounter impurities. Transport is dominated by two modes: one ballistic and temperature-independent, and the other thermally activated. Ballistic electrons resemble those in cylindrical carbon nanotubes. At room temperature, resistance increases abruptly at a specific length—the ballistic mode at 16 micrometers and the thermally activated mode at 160 nanometers (1% of the former length). Graphene electrons can traverse micrometer distances without scattering, even at room temperature. ==== Electrical conductivity and charge transport ==== Despite zero carrier density near the Dirac points, graphene exhibits a minimum conductivity on the order of 4 e 2 / h {\displaystyle 4e^{2}/h} . The origin of this minimum conductivity is still unclear. However, rippling of the graphene sheet or ionized impurities in the SiO2 substrate may lead to local puddles of carriers that allow conduction. Several theories suggest that the minimum conductivity should be 4 e 2 / ( π h ) {\displaystyle 4e^{2}/{(\pi }h)} ; however, most measurements are of the order of 4 e 2 / h {\displaystyle 4e^{2}/h} or greater and depend on impurity concentration. Graphene exhibits positive photoconductivity near zero carrier density and negative photoconductivity at high carrier density, governed by the interplay between photoinduced changes of both the Drude weight and the carrier scattering rate. Graphene doped with various gaseous species (both acceptors and donors) can be returned to an undoped state by gentle heating in a vacuum. Even for dopant concentrations in excess of 10^12 cm−2, carrier mobility exhibits no observable change. Doping graphene with potassium in ultra-high vacuum at low temperature, however, can reduce mobility 20-fold. The mobility reduction is reversible on heating the graphene to remove the potassium. Due to graphene's two dimensions, charge fractionalization (where the apparent charge of individual pseudoparticles in low-dimensional systems is less than a single quantum) is thought to occur. It may therefore be a suitable material for constructing quantum computers using anyonic circuits.
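As a rough numerical illustration of the figures above (a sketch, using only values quoted in the preceding subsections), the tight-binding dispersion given under "Dispersion relation" can be evaluated near a K point to recover a Fermi velocity of roughly 10^6 m/s, and the quoted mobility and carrier density reproduce the stated effective resistivity of about 10−8 Ω⋅m:

import numpy as np

hbar = 1.054571817e-34   # J*s
e = 1.602176634e-19      # C
gamma0 = 2.8 * e         # nearest-neighbor hopping energy, ~2.8 eV (from the text)
a = 2.46e-10             # lattice constant, ~2.46 angstrom (from the text)

def energy(kx, ky):
    # pi-band tight-binding dispersion quoted above (conduction band, "+" sign)
    return gamma0 * np.sqrt(1.0 + 4.0 * np.cos(0.5 * a * kx)**2
                            + 4.0 * np.cos(0.5 * a * kx) * np.cos(0.5 * np.sqrt(3.0) * a * ky))

Kx, Ky = 4.0 * np.pi / (3.0 * a), 0.0   # one of the K points in this convention
q = 1.0e7                               # small wave vector measured from K, 1/m
v_F = energy(Kx + q, Ky) / (hbar * q)   # slope of the linear dispersion E = hbar*v_F*q
print(v_F)                              # ~9e5 m/s, i.e. about 10^6 m/s

mu = 200000e-4        # 200,000 cm^2/(V*s) converted to m^2/(V*s)
n = 1e12 * 1e4        # 10^12 cm^-2 converted to m^-2
t = 0.335e-9          # interlayer spacing used as an effective thickness, m
print(t / (n * e * mu))   # effective resistivity, ~1e-8 ohm*m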
=== Chiral half-integer quantum Hall effect === ==== Quantum Hall effect in graphene ==== The quantum Hall effect is a quantum mechanical version of the Hall effect, which is the production of transverse (perpendicular to the main current) conductivity in the presence of a magnetic field. The Hall conductivity σ x y {\displaystyle \sigma _{xy}} is quantized at integer multiples (the "Landau levels") of the basic quantity e2/h (where e is the elementary electric charge and h is the Planck constant). It can usually be observed only in very clean silicon or gallium arsenide solids at temperatures around 3 K and very high magnetic fields. Graphene shows the quantum Hall effect: the conductivity quantization is unusual in that the sequence of steps is shifted by 1/2 with respect to the standard sequence and with an additional factor of 4. Graphene's Hall conductivity is σ x y = ± 4 ⋅ ( N + 1 / 2 ) e 2 / h {\displaystyle \sigma _{xy}=\pm {4\cdot \left(N+1/2\right)e^{2}}/h} , where N is the Landau level and the double valley and double spin degeneracies give the factor of 4. These anomalies are present not only at extremely low temperatures but also at room temperature, i.e. at roughly 20 °C (293 K). ==== Chiral electrons and anomalies ==== This behavior is a direct result of graphene's chiral, massless Dirac electrons. In a magnetic field, their spectrum has a Landau level with energy precisely at the Dirac point. This level is a consequence of the Atiyah–Singer index theorem and is half-filled in neutral graphene, leading to the "+1/2" in the Hall conductivity. Bilayer graphene also shows the quantum Hall effect, but with only one of the two anomalies (i.e. σ x y = ± 4 ⋅ N ⋅ e 2 / h {\displaystyle \sigma _{xy}=\pm {4\cdot N\cdot e^{2}}/h} ). In the second anomaly, the first plateau at N = 0 is absent, indicating that bilayer graphene stays metallic at the neutrality point. Unlike normal metals, graphene's longitudinal resistance shows maxima rather than minima for integral values of the Landau filling factor in measurements of the Shubnikov–de Haas oscillations, hence the term "integral quantum Hall effect". These oscillations show a phase shift of π, known as Berry's phase. Berry's phase arises due to chirality or dependence (locking) of the pseudospin quantum number on the momentum of low-energy electrons near the Dirac points. The temperature dependence of the oscillations reveals that the carriers have a non-zero cyclotron mass, despite their zero effective mass in the Dirac-fermion formalism. ==== Experimental observations ==== Graphene samples prepared on nickel films, and on both the silicon face and carbon face of silicon carbide, show the anomalous effect directly in electrical measurements. Graphitic layers on the carbon face of silicon carbide show a clear Dirac spectrum in angle-resolved photoemission experiments, and the effect is observed in cyclotron resonance and tunneling experiments. === "Massive" electrons === Graphene's unit cell has two identical carbon atoms and two zero-energy states: one where the electron resides on atom A, and the other on atom B. However, if the unit cell's two atoms are not identical, the situation changes. Research shows that placing hexagonal boron nitride (h-BN) in contact with graphene can alter the potential felt at atoms A and B sufficiently for the electrons to develop a mass and an accompanying band gap of about 30 meV. The mass can be positive or negative.
An arrangement that slightly raises the energy of an electron on atom A relative to atom B gives it a positive mass, while an arrangement that raises the energy of atom B produces a negative electron mass. The two versions behave alike and are indistinguishable via optical spectroscopy. An electron traveling from a positive-mass region to a negative-mass region must cross an intermediate region where its mass once again becomes zero. This region is gapless and therefore metallic. Metallic modes bounding semiconducting regions of opposite-sign mass are a hallmark of a topological phase and display much the same physics as topological insulators. If the mass in graphene can be controlled, electrons can be confined to massless regions by surrounding them with massive regions, allowing the patterning of quantum dots, wires, and other mesoscopic structures. This approach also produces one-dimensional conductors along the boundary. These wires would be protected against backscattering and could carry currents without dissipation. == Interactions and phenomena == === Strong magnetic fields === In magnetic fields above 10 tesla, additional plateaus of the Hall conductivity at σxy = νe2/h with ν = 0, ±1, ±4 are observed. A plateau at ν = 3 and the fractional quantum Hall effect at ν = 1/3 were also reported. These observations with ν = 0, ±1, ±3, ±4 indicate that the four-fold degeneracy (two valley and two spin degrees of freedom) of the Landau energy levels is partially or completely lifted. === Casimir effect === The Casimir effect is an interaction between disjoint neutral bodies provoked by the fluctuations of the electromagnetic vacuum. Mathematically, it can be explained by considering the normal modes of electromagnetic fields, which explicitly depend on the boundary conditions on the interacting bodies' surfaces. Because graphene, as a one-atom-thick material, interacts strongly with the electromagnetic field, the Casimir effect involving graphene has garnered significant interest. === Van der Waals force === The Van der Waals force (or dispersion force) is also unusual, obeying an inverse cubic asymptotic power law in contrast to the usual inverse quartic law. === Permittivity === Graphene's permittivity varies with frequency. Over a range from microwave to millimeter wave frequencies, it is approximately 3.3. This permittivity, combined with its ability to function as both a conductor and an insulator, theoretically allows compact capacitors made of graphene to store large amounts of electrical energy. == Optical properties == Graphene exhibits unique optical properties, showing unexpectedly high opacity for an atomic monolayer in vacuum, absorbing approximately πα ≈ 2.3% of light from visible to infrared wavelengths, where α is the fine-structure constant. This is due to the unusual low-energy electronic structure of monolayer graphene, characterized by electron and hole conical bands meeting at the Dirac point, which is qualitatively different from more common quadratic massive bands. Based on the Slonczewski–Weiss–McClure (SWMcC) band model of graphite, calculations using Fresnel equations in the thin-film limit account for interatomic distance, hopping values, and frequency, thus assessing optical conductance. This behavior has been confirmed experimentally, but the measurements are not precise enough to improve on existing techniques for determining the fine-structure constant.
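The universal absorbance figure quoted above follows directly from the fine-structure constant; a two-line Python check, using only the CODATA value of α:

import math

alpha = 7.2973525693e-3        # fine-structure constant
print(math.pi * alpha)         # ~0.0229, i.e. roughly 2.3% absorption per graphene layer
print(1.0 - math.pi * alpha)   # ~97.7% transmittance, neglecting the small reflectance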
=== Multi-parametric surface plasmon resonance === Multi-parametric surface plasmon resonance has been utilized to characterize both the thickness and refractive index of chemical-vapor-deposition (CVD)-grown graphene films. At a wavelength of 670 nm (6.7×10−7 m), the measured refractive index and extinction coefficient values are 3.135 and 0.897, respectively. Thickness determination yielded 3.7 Å across a 0.5 mm area, consistent with the 3.35 Å reported for the layer-to-layer carbon atom distance of graphite crystals. This method is applicable to real-time, label-free studies of the interactions of graphene with organic and inorganic substances. The existence of unidirectional surface plasmons in nonreciprocal graphene-based gyrotropic interfaces has been theoretically demonstrated, offering tunability from THz to near-infrared and visible frequencies by controlling graphene's chemical potential. In particular, the unidirectional frequency bandwidth can be 1–2 orders of magnitude larger than that achievable with a metal under similar magnetic field conditions, stemming from graphene's extremely small effective electron mass. === Tunable band gap and optical response === Graphene's band gap can be tuned from 0 to 0.25 eV (about 5-micrometer wavelength) by applying a voltage to a dual-gate bilayer graphene field-effect transistor (FET) at room temperature. The optical response of graphene nanoribbons is tunable into the terahertz regime by an applied magnetic field. Graphene/graphene oxide systems exhibit electrochromic behavior, enabling tuning of both linear and ultrafast optical properties. === Graphene-based Bragg grating === A graphene-based Bragg grating (one-dimensional photonic crystal) has been fabricated, demonstrating its capability to excite surface electromagnetic waves in the periodic structure using a 633 nm (6.33×10−7 m) He–Ne laser as the light source. === Saturable absorption === Graphene exhibits unique saturable absorption: its absorption saturates when the input optical intensity exceeds a threshold value. This nonlinear optical behavior occurs across the visible to near-infrared spectrum, due to graphene's universal optical absorption and zero band gap. This property has enabled full-band mode-locking in fiber lasers using graphene-based saturable absorbers, contributing significantly to ultrafast photonics. Additionally, the optical response of graphene/graphene oxide layers can be electrically tuned. Saturable absorption in graphene can also occur in the microwave and terahertz bands, owing to its wideband optical absorption property. Microwave saturable absorption in graphene demonstrates the possibility of graphene-based microwave and terahertz photonic devices, such as microwave saturable absorbers, modulators and polarizers, for microwave signal processing and broadband wireless access networks. === Nonlinear Kerr effect === Under intense laser illumination, graphene exhibits a nonlinear phase shift due to the optical nonlinear Kerr effect. Graphene demonstrates a large nonlinear Kerr coefficient of 10−7 cm2⋅W−1, nearly nine orders of magnitude larger than that of bulk dielectrics, suggesting its potential as a powerful nonlinear Kerr medium capable of supporting various nonlinear effects, including solitons. == Excitonic properties == First-principle calculations incorporating quasiparticle corrections and many-body effects have been employed to study the electronic and optical properties of graphene-based materials. The approach involves three stages.
With GW calculation, the properties of graphene-based materials were accurately investigated, including bulk graphene, nanoribbons, edge and surface functionalized armchair ribbons, hydrogen saturated armchair ribbons, Josephson effect in graphene SNS junctions with single localized defect and armchair ribbon scaling properties. === Spin transport === Graphene is considered an ideal material for spintronics due to its minimal spin–orbit interaction, the near absence of nuclear magnetic moments in carbon, and weak hyperfine interaction. Electrical injection and detection of spin current have been demonstrated up to room temperature, with spin coherence length exceeding 1 micrometer observed at this temperature. Control of spin current polarity via electrical gating has been achieved at low temperatures. == Magnetic properties == === Strong magnetic fields === Graphene's quantum Hall effect in magnetic fields above approximately 10 tesla reveals additional interesting features. Additional plateaus in Hall conductivity at σ x y = ν e 2 / h {\displaystyle \sigma _{xy}=\nu e^{2}/h} with ν = 0 , ± 1 , ± 4 {\displaystyle \nu =0,\pm {1},\pm {4}} have been observed, along with plateau at ν = 3 {\displaystyle \nu =3} and a fractional quantum Hall effect at ν = 1 / 3 {\displaystyle \nu =1/3} . These observations with ν = 0 , ± 1 , ± 3 , ± 4 {\displaystyle \nu =0,\pm 1,\pm 3,\pm 4} indicate that the four-fold degeneracy (two valley and two spin degrees of freedom) of the Landau energy levels is partially or completely lifted. One hypothesis proposes that magnetic catalysis of symmetry breaking is responsible for this degeneracy lift. === Spintronic properties === Graphene exhibits spintronic and magnetic properties concurrently. Low-defect graphene Nano-meshes, fabricated using a non-lithographic approach, exhibit significant ferromagnetism even at room temperature. Additionally, a spin pumping effect has been observed with fields applied in parallel to the planes of few-layer ferromagnetic nano-meshes, while a magnetoresistance hysteresis loop is evident under perpendicular fields. Charge-neutral graphene has demonstrated magnetoresistance exceeding 100% in magnetic fields generated by standard permanent magnets (approximately 0.1 tesla), marking a record magneto resistivity at room temperature among known materials. === Magnetic substrates === In 2010, researchers magnetized graphene by producing it via CVD on the Ni(111) substrate and then in 2014 by placing it on an atomically smooth layer of magnetic yttrium iron garnet, maintaining graphene's electronic properties unaffected. Previous methods involved doping graphene with other substances. The dopant's presence negatively affected its electronic properties. == Mechanical properties == The (two-dimensional) density of graphene is 0.763 mg per square meter. Graphene is the strongest material ever tested, with an intrinsic tensile strength of 130 GPa (19,000,000 psi) (with representative engineering tensile strength ~50-60 GPa for stretching large-area freestanding graphene) and a Young's modulus (stiffness) close to 1 TPa (150,000,000 psi). The Nobel announcement illustrated this by saying that a 1 square meter graphene hammock would support a 4 kg cat but would weigh only as much as one of the cat's whiskers, at 0.77 mg (about 0.001% of the weight of 1 m2 of paper). Large-angle bending of graphene monolayers with minimal strain demonstrates its mechanical robustness. 
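A crude numerical reading of the Nobel illustration and the density and strength figures quoted above, under the simplifying assumption (introduced here, not taken from the source) that the cat's weight is carried as a uniform line tension across a one-meter width of the sheet:

g = 9.81                  # gravitational acceleration, m/s^2
areal_density = 0.763e-6  # areal density quoted above, kg/m^2
strength = 130e9          # intrinsic tensile strength quoted above, Pa
t = 0.335e-9              # effective monolayer thickness (graphite interlayer spacing), m

print(areal_density * 1e6)   # ~0.76 mg for 1 m^2, cf. the ~0.77 mg "whisker" figure
print(strength * t)          # ~44 N/m two-dimensional breaking strength of a monolayer
print(4.0 * g)               # ~39 N weight of a 4 kg cat, just below that limit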
Even under extreme deformation, monolayer graphene maintains excellent carrier mobility. The spring constant of suspended graphene sheets has been measured using an atomic force microscope (AFM). Graphene sheets were suspended over SiO2 cavities where an AFM tip was used to apply stress to the sheet to test its mechanical properties. Its spring constant was in the range 1–5 N/m and the stiffness was 0.5 TPa, which differs from that of bulk graphite. These intrinsic properties could lead to applications such as NEMS as pressure sensors and resonators. Due to its large surface energy and out of plane ductility, flat graphene sheets are unstable with respect to scrolling, i.e. bending into a cylindrical shape, which is its lower-energy state. In two-dimensional structures like graphene, thermal and quantum fluctuations cause relative displacement, with fluctuations growing logarithmically with structure size as per the Mermin–Wagner theorem. This shows that the amplitude of long-wavelength fluctuations grows logarithmically with the scale of a 2D structure, and would therefore be unbounded in structures of infinite size. Local deformation and elastic strain are negligibly affected by this long-range divergence in relative displacement. It is believed that a sufficiently large 2D structure, in the absence of applied lateral tension, will bend and crumple to form a fluctuating 3D structure. Researchers have observed ripples in suspended layers of graphene, and it has been proposed that the ripples are caused by thermal fluctuations in the material. As a consequence of these dynamical deformations, it is debatable whether graphene is truly a 2D structure. These ripples, when amplified by vacancy defects, induce a negative Poisson's ratio into graphene, resulting in the thinnest auxetic material known so far. Graphene-nickel (Ni) composites, created through plating processes, exhibit enhanced mechanical properties due to strong Ni-graphene interactions inhibiting dislocation sliding in the Ni matrix. === Fracture toughness === In 2014, researchers from Rice University and the Georgia Institute of Technology have indicated that despite its strength, graphene is also relatively brittle, with a fracture toughness of about 4 MPa√m. This indicates that imperfect graphene is likely to crack in a brittle manner like ceramic materials, as opposed to many metallic materials which tend to have fracture toughness in the range of 15–50 MPa√m. Later in 2014, the Rice team announced that graphene showed a greater ability to distribute force from an impact than any known material, ten times that of steel per unit weight. The force was transmitted at 22.2 kilometres per second (13.8 mi/s). === Polycrystalline graphene === Various methods – most notably, chemical vapor deposition (CVD), as discussed in the section below – have been developed to produce large-scale graphene needed for device applications. Such methods often synthesize polycrystalline graphene. The mechanical properties of polycrystalline graphene are affected by the nature of the defects, such as grain-boundaries (GB) and vacancies, present in the system and the average grain-size. Graphene grain boundaries typically contain heptagon-pentagon pairs. The arrangement of such defects depends on whether the GB is in a zig-zag or armchair direction. It further depends on the tilt-angle of the GB. In 2010, researchers from Brown University computationally predicted that as the tilt-angle increases, the grain boundary strength also increases. 
They showed that the weakest link in the grain boundary is at the critical bonds of the heptagon rings. As the grain boundary angle increases, the strain in these heptagon rings decreases, causing the grain boundary to be stronger than lower-angle GBs. They proposed that, in fact, for sufficiently large-angle GBs, the strength of the GB is similar to that of pristine graphene. In 2012, it was further shown that the strength can increase or decrease, depending on the detailed arrangements of the defects. These predictions have since been supported by experimental evidence. In a 2013 study led by James Hone's group, researchers probed the elastic stiffness and strength of CVD-grown graphene by combining nano-indentation and high-resolution TEM. They found that the elastic stiffness is identical to, and the strength only slightly lower than, those of pristine graphene. In the same year, researchers from University of California, Berkeley and University of California, Los Angeles probed bi-crystalline graphene with TEM and AFM. They found that the strength of grain boundaries indeed tends to increase with the tilt angle. Vacancies are not unique to polycrystalline graphene, but they can have significant effects on the strength of graphene. The consensus is that the strength decreases along with increasing densities of vacancies. Various studies have shown that for graphene with a sufficiently low density of vacancies, the strength does not vary significantly from that of pristine graphene. On the other hand, a high density of vacancies can severely reduce the strength of graphene. Compared to the fairly well-understood nature of the effect that grain boundaries and vacancies have on the mechanical properties of graphene, there is no clear consensus on the general effect that the average grain size has on the strength of polycrystalline graphene. In fact, three notable theoretical or computational studies on this topic have led to three different conclusions. First, in 2012, Kotakoski and Meyer studied the mechanical properties of polycrystalline graphene with a "realistic atomistic model", using molecular-dynamics (MD) simulation. To emulate the growth mechanism of CVD, they first randomly selected nucleation sites that were at least 5 Å (arbitrarily chosen) apart from other sites. Polycrystalline graphene was generated from these nucleation sites and was subsequently annealed at 3000 K, and then quenched. Based on this model, they found that cracks are initiated at grain-boundary junctions, but the grain size does not significantly affect the strength. Second, in 2013, Z. Song et al. used MD simulations to study the mechanical properties of polycrystalline graphene with uniform-sized hexagon-shaped grains. The hexagon grains were oriented in various lattice directions and the GBs consisted of only heptagonal, pentagonal, and hexagonal carbon rings. The motivation behind such a model was that similar systems had been experimentally observed in graphene flakes grown on the surface of liquid copper. While they also noted that cracks are typically initiated at the triple junctions, they found that as the grain size decreases, the yield strength of graphene increases. Based on this finding, they proposed that polycrystalline graphene follows a pseudo Hall-Petch relationship. Third, in 2013, Z. D. Sha et al. studied the effect of grain size on the properties of polycrystalline graphene, by modeling the grain patches using Voronoi construction.
The GBs in this model consisted of heptagons, pentagons, and hexagons, as well as squares, octagons, and vacancies. Through MD simulation, contrary to the aforementioned study, they found an inverse Hall-Petch relationship, where the strength of graphene increases as the grain size increases. Experimental observations and other theoretical predictions also gave differing conclusions, similar to the three given above. Such discrepancies show the complexity of the effects that grain size, arrangements of defects, and the nature of defects have on the mechanical properties of polycrystalline graphene. == Other properties == === Thermal conductivity === Thermal transport in graphene is a burgeoning area of research, particularly for its potential applications in thermal management. Most experimental measurements of thermal conductivity carry large uncertainties due to the limitations of the instruments used. Following predictions for graphene and related carbon nanotubes, early measurements of the thermal conductivity of suspended graphene reported an exceptionally large thermal conductivity of up to 5300 W⋅m−1⋅K−1, compared with the thermal conductivity of pyrolytic graphite of approximately 2000 W⋅m−1⋅K−1 at room temperature. However, later studies, primarily on more scalable but more defected graphene derived by chemical vapor deposition, have been unable to reproduce such high thermal conductivity measurements, producing a wide range of thermal conductivities between 1500 and 2500 W⋅m−1⋅K−1 for suspended single-layer graphene. The large range in the reported thermal conductivity can be caused by large measurement uncertainties as well as variations in the graphene quality and processing conditions. In addition, it is known that when single-layer graphene is supported on an amorphous material, the thermal conductivity is reduced to about 500–600 W⋅m−1⋅K−1 at room temperature as a result of scattering of graphene lattice waves by the substrate, and can be even lower for few-layer graphene encased in amorphous oxide. Likewise, polymeric residue can contribute to a similar decrease in the thermal conductivity of suspended graphene, to approximately 500–600 W⋅m−1⋅K−1 for bilayer graphene. Isotopic composition, specifically the ratio of 12C to 13C, significantly affects graphene's thermal conductivity. Isotopically pure 12C graphene exhibits higher thermal conductivity than either a 50:50 isotope ratio or the naturally occurring 99:1 ratio. It can be shown using the Wiedemann–Franz law that the thermal conduction is phonon-dominated (see the rough estimate below). However, for a gated graphene strip, an applied gate bias causing a Fermi energy shift much larger than kBT can cause the electronic contribution to increase and dominate over the phonon contribution at low temperatures. The ballistic thermal conductance of graphene is isotropic. Graphite, a 3D counterpart to graphene, exhibits a basal-plane thermal conductivity exceeding 1000 W⋅m−1⋅K−1 (similar to diamond). In graphite, the c-axis (out-of-plane) thermal conductivity is a factor of ~100 smaller due to the weak binding forces between basal planes as well as the larger lattice spacing. In addition, the ballistic thermal conductance of graphene, per unit circumference and length, has been shown to set the lower limit on the ballistic thermal conductance of carbon nanotubes.
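The Wiedemann–Franz argument can be made concrete with a short Python estimate of the electronic contribution to the thermal conductivity; the carrier density and mobility used here are illustrative assumptions rather than values taken from this article:

e = 1.602176634e-19   # elementary charge, C
L = 2.44e-8           # Lorenz number, W*ohm/K^2
T = 300.0             # room temperature, K
n = 1e12 * 1e4        # assumed carrier density, m^-2 (10^12 cm^-2)
mu = 10000e-4         # assumed mobility, m^2/(V*s) (10,000 cm^2/(V*s))
t = 0.335e-9          # effective layer thickness, m

sigma_sheet = n * e * mu              # electrical sheet conductance, S per square
kappa_e = L * (sigma_sheet / t) * T   # electronic thermal conductivity, W/(m*K)
print(kappa_e)                        # a few tens of W/(m*K)

Even with generous carrier densities, the electronic term stays orders of magnitude below the measured ~2000–5300 W⋅m−1⋅K−1, consistent with phonon-dominated transport.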
Graphene's thermal conductivity is influenced by its three acoustic phonon modes: two in-plane modes (LA, TA) with a linear dispersion relation and one out-of-plane mode (ZA) with a quadratic dispersion relation. At low temperatures, the T^1.5 thermal conductivity contribution of the out-of-plane mode dominates over the T^2 dependence of the linear modes. Some graphene phonon bands exhibit negative Grüneisen parameters, resulting in a negative thermal expansion coefficient at low temperatures. The lowest negative Grüneisen parameters correspond to the lowest transverse acoustic ZA modes, whose frequencies increase with in-plane lattice parameter, akin to a stretched string with higher frequency vibrations. === Chemical properties === Graphene has a theoretical specific surface area (SSA) of 2630 m2/g. This is much larger than the values reported to date for carbon black (typically smaller than 900 m2/g) or for carbon nanotubes (CNTs), from ≈100 to 1000 m2/g, and is similar to that of activated carbon. Graphene is the only form of carbon (or solid material) in which every atom is available for chemical reaction from two sides (due to the 2D structure). Atoms at the edges of a graphene sheet have special chemical reactivity. Graphene has the highest ratio of edge atoms of any allotrope. Defects within a sheet increase its chemical reactivity. The onset temperature of reaction between the basal plane of single-layer graphene and oxygen gas is below 260 °C (530 K). Graphene burns at very low temperatures (e.g., 350 °C (620 K)). Graphene is commonly modified with oxygen- and nitrogen-containing functional groups and analyzed by infrared spectroscopy and X-ray photoelectron spectroscopy. However, determining the structures of graphene with oxygen- and nitrogen-containing functional groups requires the structures to be well controlled. In 2013, Stanford University physicists reported that single-layer graphene is a hundred times more chemically reactive than thicker multilayer sheets. Graphene can self-repair holes in its sheets, when exposed to molecules containing carbon, such as hydrocarbons. Bombarded with pure carbon atoms, the atoms perfectly align into hexagons, filling the holes. === Biological properties === Despite the promising results in different cell studies and proof-of-concept studies, there is still incomplete understanding of the full biocompatibility of graphene-based materials. Different cell lines react differently when exposed to graphene, and it has been shown that the lateral size of the graphene flakes, their form and their surface chemistry can elicit different biological responses in the same cell line. There are indications that graphene has promise as a useful material for interacting with neural cells; studies on cultured neural cells show limited success. Graphene also has some utility in osteogenesis. Researchers at the Graphene Research Centre at the National University of Singapore (NUS) discovered in 2011 the ability of graphene to accelerate the osteogenic differentiation of human mesenchymal stem cells without the use of biochemical inducers. Graphene can be used in biosensors; in 2015, researchers demonstrated that a graphene-based sensor can be used to detect a cancer risk biomarker. In particular, by using epitaxial graphene on silicon carbide, they were repeatedly able to detect 8-hydroxydeoxyguanosine (8-OHdG), a DNA damage biomarker. === Support substrate === The electronic properties of graphene can be significantly influenced by the supporting substrate.
Studies of graphene monolayers on clean and hydrogen(H)-passivated silicon (100) (Si(100)/H) surfaces have been performed. The Si(100)/H surface does not perturb the electronic properties of graphene, whereas the interaction between the clean Si(100) surface and graphene changes the electronic states of graphene significantly. This effect results from the covalent bonding between C and surface Si atoms, modifying the π-orbital network of the graphene layer. The local density of states shows that the bonded C and Si surface states are highly disturbed near the Fermi energy. == Graphene layers and structural variants == === Monolayer sheets === In 2013 a group of Polish scientists presented a production unit that allows the manufacture of continuous monolayer sheets. The process is based on graphene growth on a liquid metal matrix. The product of this process was called High Strength Metallurgical Graphene. In a new study published in Nature, the researchers have used a single-layer graphene electrode and a novel surface-sensitive non-linear spectroscopy technique to investigate the top-most water layer at the electrochemically charged surface. They found that the interfacial water response to the applied electric field is asymmetric concerning the nature of the applied field. === Bilayer graphene === Bilayer graphene displays the anomalous quantum Hall effect, a tunable band gap and potential for excitonic condensation –making it a promising candidate for optoelectronic and nanoelectronic applications. Bilayer graphene typically can be found either in twisted configurations where the two layers are rotated relative to each other or graphitic Bernal stacked configurations where half the atoms in one layer lie atop half the atoms in the other. Stacking order and orientation govern the optical and electronic properties of bilayer graphene. One way to synthesize bilayer graphene is via chemical vapor deposition, which can produce large bilayer regions that almost exclusively conform to a Bernal stack geometry. It has been shown that the two graphene layers can withstand important strain or doping mismatch which ultimately should lead to their exfoliation. === Turbostratic === Turbostratic graphene exhibits weak interlayer coupling, and the spacing is increased with respect to Bernal-stacked multilayer graphene. Rotational misalignment preserves the 2D electronic structure, as confirmed by Raman spectroscopy. The D peak is very weak, whereas the 2D and G peaks remain prominent. A rather peculiar feature is that the I2D/IG ratio can exceed 10. However, most importantly, the M peak, which originates from AB stacking, is absent, whereas the TS1 and TS2 modes are visible in the Raman spectrum. The material is formed through conversion of non-graphenic carbon into graphenic carbon without providing sufficient energy to allow for the reorganization through annealing of adjacent graphene layers into crystalline graphitic structures. === Graphene superlattices === Periodically stacked graphene and its insulating isomorph provide a fascinating structural element in implementing highly functional superlattices at the atomic scale, which offers possibilities for designing nanoelectronic and photonic devices. Various types of superlattices can be obtained by stacking graphene and its related forms. The energy band in layer-stacked superlattices is found to be more sensitive to the barrier width than that in conventional III–V semiconductor superlattices. 
When adding more than one atomic layer to the barrier in each period, the coupling of electronic wavefunctions in neighboring potential wells can be significantly reduced, which leads to the degeneration of continuous subbands into quantized energy levels. When varying the well width, the energy levels in the potential wells along the L-M direction behave distinctly from those along the K-H direction. A superlattice corresponds to a periodic or quasi-periodic arrangement of different materials and can be described by a superlattice period which confers a new translational symmetry to the system, impacting their phonon dispersions and subsequently their thermal transport properties. Recently, uniform monolayer graphene-hBN structures have been successfully synthesized via lithography patterning coupled with chemical vapor deposition (CVD). Furthermore, superlattices of graphene-hBN are ideal model systems for the realization and understanding of coherent (wave-like) and incoherent (particle-like) phonon thermal transport. == Nanostructured graphene forms == === Graphene nanoribbons === Graphene nanoribbons ("nanostripes" in the "zig-zag" orientation), at low temperatures, show spin-polarized metallic edge currents, which also suggests applications in the new field of spintronics. (In the "armchair" orientation, the edges behave like semiconductors.) === Graphene quantum dots === A graphene quantum dot (GQD) is a graphene fragment with a size less than 100 nm. The properties of GQDs differ from those of bulk graphene due to quantum confinement effects, which only become apparent at sizes smaller than 100 nm. == Modified and functionalized graphene == === Graphene oxide === Graphene oxide is usually produced through chemical exfoliation of graphite. A particularly popular technique is the improved Hummers' method. Using paper-making techniques on dispersed, oxidized and chemically processed graphite in water, the monolayer flakes form a single sheet and create strong bonds. These sheets, called graphene oxide paper, have a measured tensile modulus of 32 GPa. The chemical properties of graphite oxide are related to the functional groups attached to graphene sheets. These can change the polymerization pathway and similar chemical processes. Graphene oxide flakes in polymers display enhanced photo-conducting properties. Graphene is normally hydrophobic and impermeable to all gases and liquids (vacuum-tight). However, when formed into a graphene oxide-based capillary membrane, both liquid water and water vapor flow through as quickly as if the membrane were not present. In 2022, researchers evaluated the biological effects of low doses of graphene oxide on larvae and imagoes of Drosophila melanogaster. Results show that oral administration of graphene oxide at concentrations of 0.02-1% has a beneficial effect on the developmental rate and hatching ability of larvae. Long-term administration of a low dose of graphene oxide extends the lifespan of Drosophila and significantly enhances resistance to environmental stresses. These results suggest that graphene oxide affects carbohydrate and lipid metabolism in adult Drosophila. These findings might provide a useful reference to assess the biological effects of graphene oxide, which could play an important role in a variety of graphene-based biomedical applications. === Chemical modification === Soluble fragments of graphene can be prepared in the laboratory through chemical modification of graphite.
First, microcrystalline graphite is treated with an acidic mixture of sulfuric acid and nitric acid. A series of oxidation and exfoliation steps produce small graphene plates with carboxyl groups at their edges. These are converted to acid chloride groups by treatment with thionyl chloride; next, they are converted to the corresponding graphene amide via treatment with octadecyl amine. The resulting material (circular graphene layers of 5.3 Å or 5.3×10−10 m thickness) is soluble in tetrahydrofuran, tetrachloromethane and dichloroethane. Refluxing single-layer graphene oxide (SLGO) in solvents leads to size reduction and folding of individual sheets as well as loss of carboxylic group functionality, by up to 20%, indicating thermal instabilities of SLGO sheets dependent on their preparation methodology. When using thionyl chloride, acyl chloride groups result, which can then form aliphatic and aromatic amides with a reactivity conversion of around 70–80%. Hydrazine reflux is commonly used for reducing SLGO to SLG(R), but titrations show that only around 20–30% of the carboxylic groups are lost, leaving a significant number available for chemical attachment. Analysis of SLG(R) generated by this route reveals that the system is unstable and using a room temperature stirring with hydrochloric acid (< 1.0 M) leads to around 60% loss of COOH functionality. Room temperature treatment of SLGO with carbodiimides leads to the collapse of the individual sheets into star-like clusters that exhibited poor subsequent reactivity with amines (c. 3–5% conversion of the intermediate to the final amide). It is apparent that conventional chemical treatment of carboxylic groups on SLGO generates morphological changes of individual sheets that leads to a reduction in chemical reactivity, which may potentially limit their use in composite synthesis. Therefore, chemical reaction types have been explored. SLGO has also been grafted with polyallylamine, cross-linked through epoxy groups. When filtered into graphene oxide paper, these composites exhibit increased stiffness and strength relative to unmodified graphene oxide paper. Full hydrogenation from both sides of the graphene sheet results in Graphane, but partial hydrogenation leads to hydrogenated graphene. Similarly, both-side fluorination of graphene (or chemical and mechanical exfoliation of graphite fluoride) leads to fluorographene (graphene fluoride), while partial fluorination (generally halogenation) provides fluorinated (halogenated) graphene. === Graphene ligand/complex === Graphene can be a ligand to coordinate metals and metal ions by introducing functional groups. Structures of graphene ligands are similar to e.g. metal-porphyrin complex, metal-phthalocyanine complex, and metal-phenanthroline complex. Copper and nickel ions can be coordinated with graphene ligands. == Advanced graphene structures == === Graphene fiber === In 2011, researchers reported a novel yet simple approach to fabricating graphene fibers from chemical vapor deposition-grown graphene films. The method was scalable and controllable, delivering tunable morphology and pore structure by controlling the evaporation of solvents with suitable surface tension. Flexible all-solid-state supercapacitors based on these graphene fibers were demonstrated in 2013. In 2015, intercalating small graphene fragments into the gaps formed by larger, coiled graphene sheets, after annealing provided pathways for conduction, while the fragments helped reinforce the fibers. 
The resulting fibers offered better thermal and electrical conductivity and mechanical strength. Thermal conductivity reached 1,290 W/m/K (1,290 watts per metre per kelvin), while tensile strength reached 1,080 MPa (157,000 psi). In 2016, kilometer-scale continuous graphene fibers with outstanding mechanical properties and excellent electrical conductivity were produced by high-throughput wet-spinning of graphene oxide liquid crystals followed by graphitization through a full-scale synergetic defect-engineering strategy. Such high-performance graphene fibers promise wide applications in functional textiles, lightweight motors, microelectronic devices, etc. A team at Tsinghua University in Beijing, led by Wei Fei of the Department of Chemical Engineering, claims to be able to create a carbon nanotube fiber with a tensile strength of 80 GPa (12,000,000 psi). === 3D graphene === In 2013, a three-dimensional honeycomb of hexagonally arranged carbon was termed 3D graphene, and self-supporting 3D graphene was also produced. 3D structures of graphene can be fabricated by using either CVD or solution-based methods. A 2016 review by Khurram and Xu et al. provided a summary of then-state-of-the-art techniques for fabrication of the 3D structure of graphene and other related two-dimensional materials. In 2013, researchers at Stony Brook University reported a novel radical-initiated crosslinking method to fabricate porous 3D free-standing architectures of graphene and carbon nanotubes using nanomaterials as building blocks without any polymer matrix as support. These 3D graphene (all-carbon) scaffolds/foams have applications in several fields such as energy storage, filtration, thermal management, and biomedical devices and implants. A box-shaped graphene (BSG) nanostructure appearing after mechanical cleavage of pyrolytic graphite was reported in 2016. The discovered nanostructure is a multilayer system of parallel hollow nanochannels located along the surface and having quadrangular cross-section. The thickness of the channel walls is approximately equal to 1 nm. Potential fields of BSG application include ultra-sensitive detectors, high-performance catalytic cells, nanochannels for DNA sequencing and manipulation, high-performance heat sinking surfaces, rechargeable batteries of enhanced performance, nanomechanical resonators, electron multiplication channels in emission nanoelectronic devices, and high-capacity sorbents for safe hydrogen storage. Three-dimensional bilayer graphene has also been reported. === Pillared graphene === Pillared graphene is a hybrid carbon structure consisting of an oriented array of carbon nanotubes connected at each end to a sheet of graphene. It was first described theoretically by George Froudakis and colleagues at the University of Crete in Greece in 2008. Pillared graphene has not yet been synthesized in the laboratory, but it has been suggested that it may have useful electronic properties or serve as a hydrogen storage material. === Reinforced graphene === Graphene reinforced with embedded carbon nanotube reinforcing bars ("rebar") is easier to manipulate, while improving the electrical and mechanical qualities of both materials. Functionalized single- or multi-walled carbon nanotubes are spin-coated on copper foils and then heated and cooled, using the nanotubes themselves as the carbon source. Under heating, the functional carbon groups decompose into graphene, while the nanotubes partially split and form in-plane covalent bonds with the graphene, adding strength.
π–π stacking domains add more strength. The nanotubes can overlap, making the material a better conductor than standard CVD-grown graphene. The nanotubes effectively bridge the grain boundaries found in conventional graphene. The technique eliminates the traces of substrate on which later-separated sheets were deposited using epitaxy. Stacks of a few layers have been proposed as a cost-effective and physically flexible replacement for indium tin oxide (ITO) used in displays and photovoltaic cells. === Molded graphene === In 2015, researchers from the University of Illinois at Urbana–Champaign (UIUC) developed a new approach for forming 3D shapes from flat, 2D sheets of graphene. A film of graphene that had been soaked in solvent to make it swell and become malleable was overlaid on an underlying substrate "former". The solvent evaporated over time, leaving behind a layer of graphene that had taken on the shape of the underlying structure. In this way, they were able to produce a range of relatively intricate micro-structured shapes. Feature sizes vary from 3.5 to 50 μm. Pure graphene and gold-decorated graphene were each successfully integrated with the substrate. == Specialized graphene configurations == === Graphene aerogel === An aerogel made of graphene layers separated by carbon nanotubes was measured at 0.16 milligrams per cubic centimeter. A solution of graphene and carbon nanotubes in a mold is freeze-dried to dehydrate the solution, leaving the aerogel. The material has superior elasticity and absorption. It can recover completely after more than 90% compression, and absorb up to 900 times its weight in oil, at a rate of 68.8 grams per second. === Graphene nanocoil === In 2015, a coiled form of graphene was discovered in graphitic carbon (coal). The spiraling effect is produced by defects in the material's hexagonal grid that cause it to spiral along its edge, mimicking a Riemann surface, with the graphene surface approximately perpendicular to the axis. When voltage is applied to such a coil, current flows around the spiral, producing a magnetic field. The phenomenon applies to spirals with either zigzag or armchair patterns, although with different current distributions. Computer simulations indicated that a conventional spiral inductor of 205 microns in diameter could be matched by a nanocoil just 70 nanometers wide, with a field strength reaching as much as 1 tesla. The nano-solenoids analyzed through computer models at Rice University should be capable of producing powerful magnetic fields of about 1 tesla, about the same as the coils found in typical loudspeakers, according to Yakobson and his team – and about the same field strength as some MRI machines. They found the magnetic field would be strongest in the hollow, nanometer-wide cavity at the spiral's center. A solenoid made with such a coil behaves as a quantum conductor whose current distribution between the core and exterior varies with applied voltage, resulting in nonlinear inductance. === Crumpled graphene === In 2016, Brown University introduced a method for "crumpling" graphene, adding wrinkles to the material on a nanoscale. This was achieved by depositing layers of graphene oxide onto a shrink film that was then shrunk, with the film dissolved before the graphene was shrunk again on another sheet of film. The crumpled graphene became superhydrophobic, and, when used as a battery electrode, the material was shown to have as much as a 400% increase in electrochemical current density.
== Mechanical synthesis == A rapidly increasing list of production techniques has been developed to enable graphene's use in commercial applications. Isolated 2D crystals cannot be grown via chemical synthesis beyond small sizes even in principle, because the rapid growth of phonon density with increasing lateral size forces 2D crystallites to bend into the third dimension. In all cases, graphene must bond to a substrate to retain its two-dimensional shape. === Bottom-up and top-down methods === Small graphene structures, such as graphene quantum dots and nanoribbons, can be produced by "bottom-up" methods that assemble the lattice from organic molecule monomers (e.g. citric acid, glucose). "Top-down" methods, on the other hand, cut bulk graphite and graphene materials with strong chemicals (e.g. mixed acids). === Micro-mechanical cleavage === The most famous, clean and rather straightforward method of isolating graphene sheets, called micro-mechanical cleavage or, more colloquially, the scotch tape method, was introduced by Novoselov et al. in 2004; it uses adhesive tape to mechanically cleave high-quality graphite crystals into successively thinner platelets. Other methods exist as well, such as the exfoliation techniques described below. === Exfoliation techniques === ==== Mechanical exfoliation ==== Geim and Novoselov initially used adhesive tape to pull graphene sheets away from graphite. Achieving single layers typically requires multiple exfoliation steps. After exfoliation, the flakes are deposited on a silicon wafer. Crystallites larger than 1 mm and visible to the naked eye can be obtained. As of 2014, exfoliation produced graphene with the lowest number of defects and highest electron mobility. One specific exfoliation technique involved using a sharp single-crystal diamond wedge to penetrate the graphite source and precisely cleave individual layers. That same year, researchers also developed liquid-phase methods, creating defect-free, unoxidized graphene-containing liquids from graphite using mixers that generate extremely high local shear rates, greater than 10×10^4 s−1. A 2014 study published in Nature Materials demonstrated that scalable production of defect-free graphene is possible through shear exfoliation using a high-shear mixer. This technique can produce large quantities of few-layer graphene in solution while preserving structural integrity. As turbulence is not necessary for mechanical exfoliation, resonant acoustic mixing or low-speed ball milling can also be effective in the production of high-yield and water-soluble graphene. ==== Liquid phase exfoliation ==== Liquid phase exfoliation (LPE) is a relatively simple method that involves dispersing graphite in a liquid medium to produce graphene by sonication or high shear mixing, followed by centrifugation. Restacking is an issue with this technique unless solvents with appropriate surface energy are used (e.g. NMP). Adding a surfactant to a solvent prior to sonication prevents restacking by adsorbing to the graphene's surface. This produces a higher graphene concentration, but removing the surfactant requires chemical treatments. LPE results in nanosheets with a broad size distribution and thicknesses roughly in the range of 1-10 monolayers. However, liquid cascade centrifugation can be used to size-select the suspensions and achieve monolayer enrichment. Sonicating graphite at the interface of two immiscible liquids, most notably heptane and water, produced macro-scale graphene films.
The graphene sheets are adsorbed to the high-energy interface between the materials and are kept from restacking. The sheets are up to about 95% transparent and conductive. With definite cleavage parameters, the box-shaped graphene (BSG) nanostructure can be prepared on a graphite crystal. A major advantage of LPE is that it can be used to exfoliate many inorganic 2D materials beyond graphene, e.g. BN, MoS2, WS2. ==== Exfoliation with supercritical carbon dioxide ==== Liquid-phase exfoliation can also be done by a less-known process of intercalating supercritical carbon dioxide (scCO2) into the interstitial spaces in the graphite lattice, followed by rapid depressurization. The scCO2 intercalates easily inside the graphite lattice at a pressure of roughly 100 atm. Carbon dioxide turns gaseous as soon as the vessel is depressurized and makes the graphite explode into few-layered graphene. This method may have multiple advantages: it is non-toxic, the graphite does not have to be chemically treated in any way beforehand, and the whole process can be completed in a single step, as opposed to other exfoliation methods. === Splitting monolayer carbon allotropes === Graphene can be created by opening carbon nanotubes by cutting or etching. In one such method, multi-walled carbon nanotubes were cut open in solution by action of potassium permanganate and sulfuric acid. In 2014, carbon nanotube-reinforced graphene was made via spin coating and annealing functionalized carbon nanotubes. Another approach sprays buckyballs at supersonic speeds onto a substrate. The balls crack open upon impact, and the resulting unzipped cages then bond together to form a graphene film. == Chemical synthesis == === Graphite oxide reduction === Hanns-Peter Boehm reported producing monolayer flakes of reduced graphene oxide in 1962. Rapid heating of graphite oxide and exfoliation yields highly dispersed carbon powder with a few percent of graphene flakes. Another method is the reduction of graphite oxide monolayer films, e.g. by hydrazine with annealing in argon/hydrogen, which leaves an almost intact carbon framework and allows efficient removal of functional groups. Measured charge carrier mobility exceeded 1,000 cm2/(V⋅s) (0.1 m2/(V⋅s)). Burning a graphite oxide-coated DVD produced a conductive graphene film (1,738 siemens per meter) with a high specific surface area (1,520 square meters per gram) that was highly resistant and malleable. A dispersed reduced graphene oxide suspension was synthesized in water by a hydrothermal dehydration method without using any surfactant. The approach is facile, industrially applicable, environmentally friendly, and cost-effective. Viscosity measurements confirmed that the graphene colloidal suspension (graphene nanofluid) exhibits Newtonian behavior, with the viscosity showing a close resemblance to that of water. === Molten salts === Graphite particles can be corroded in molten salts to form a variety of carbon nanostructures, including graphene. Hydrogen cations, dissolved in molten lithium chloride, can be discharged on cathodically-polarized graphite rods, which then intercalate, peeling graphene sheets. The graphene nanosheets produced displayed a single-crystalline structure with a lateral size of several hundred nanometers and a high degree of crystallinity and thermal stability. === Electrochemical synthesis === Electrochemical synthesis can exfoliate graphene. Varying a pulsed voltage controls thickness, flake area, and number of defects, and thereby affects the material's properties.
The process begins by bathing the graphite in a solvent for intercalation. The process can be tracked by monitoring the solution's transparency with an LED and photodiode. === Hydrothermal self-assembly === Graphene has been prepared by using a sugar such as glucose or fructose. This substrate-free "bottom-up" synthesis is safer, simpler and more environmentally friendly than exfoliation. The method can control the thickness, ranging from monolayer to multilayer, and is known as the "Tang-Lau Method". === Sodium ethoxide pyrolysis === Gram-quantities were produced by the reaction of ethanol with sodium metal, followed by pyrolysis and washing with water. === Microwave-assisted oxidation === In 2012, microwave energy was reported to directly synthesize graphene in one step. This approach avoids use of potassium permanganate in the reaction mixture. It was also reported that, with microwave radiation assistance, graphene oxide with or without holes can be synthesized by controlling the microwave time. Microwave heating can dramatically shorten the reaction time from days to seconds. Graphene can also be made by microwave-assisted hydrothermal pyrolysis. === Thermal decomposition of silicon carbide === Heating silicon carbide (SiC) to high temperatures (1100 °C) under low pressures (c. 10⁻⁶ torr, or 10⁻⁴ Pa) reduces it to graphene. == Vapor deposition and growth techniques == === Chemical vapor deposition === ==== Epitaxy ==== Epitaxial graphene growth on silicon carbide is a wafer-scale technique to produce graphene. Epitaxial graphene may be coupled to surfaces weakly enough (by the active valence electrons that create van der Waals forces) to retain the two-dimensional electronic band structure of isolated graphene. A normal silicon wafer coated with a layer of germanium (Ge) dipped in dilute hydrofluoric acid strips the naturally forming germanium oxide groups, creating hydrogen-terminated germanium. CVD can coat that with graphene. Graphene has also been synthesized directly on the high-dielectric-constant (high-κ) insulator TiO2: a two-step CVD process has been shown to grow graphene directly on TiO2 crystals or exfoliated TiO2 nanosheets without using any metal catalyst. ==== Metal substrates ==== CVD graphene can be grown on metal substrates including ruthenium, iridium, nickel and copper. ==== Roll-to-roll ==== In 2014, a two-step roll-to-roll manufacturing process was announced. The first roll-to-roll step produces the graphene via chemical vapor deposition. The second step binds the graphene to a substrate. ==== Cold wall ==== Growing graphene in an industrial resistive-heating cold wall CVD system was claimed to produce graphene 100 times faster than conventional CVD systems, cut costs by 99%, and produce material with enhanced electronic qualities. ==== Wafer scale CVD graphene ==== CVD graphene is scalable and has been grown on a deposited Cu thin-film catalyst on 100 to 300 mm standard Si/SiO2 wafers on an Aixtron Black Magic system. Monolayer graphene coverage of >95% is achieved on 100 to 300 mm wafer substrates with negligible defects, confirmed by extensive Raman mapping. === Solvent interface trapping method (SITM) === As reported by a group led by D. H. Adamson, graphene can be produced from natural graphite while preserving the integrity of the sheets using the solvent interface trapping method (SITM). SITM uses a high-energy interface, such as oil and water, to exfoliate graphite to graphene. 
Stacked graphite delaminates, or spreads, at the oil/water interface to produce few-layer graphene in a thermodynamically favorable process, in much the same way as small-molecule surfactants spread to minimize the interfacial energy. In this way, graphene behaves like a 2D surfactant. SITM has been reported for a variety of applications such as conductive polymer–graphene foams, conductive polymer–graphene microspheres, conductive thin films and conductive inks. === Carbon dioxide reduction === Magnesium can be combusted with carbon dioxide in a highly exothermic oxidation–reduction reaction, producing carbon nanoparticles including graphene and fullerenes. === Supersonic spray === Supersonic acceleration of droplets through a Laval nozzle was used to deposit reduced graphene oxide on a substrate. The energy of the impact rearranges those carbon atoms into flawless graphene. === Laser === In 2014, a CO2 infrared laser was used to produce patterned porous three-dimensional laser-induced graphene (LIG) film networks from commercial polymer films. The resulting material exhibits high electrical conductivity and surface area. The laser induction process is compatible with roll-to-roll manufacturing processes. A similar material, laser-induced graphene fibers (LIGF), was reported in 2018. === Flash Joule heating === In 2019, flash Joule heating (transient high-temperature electrothermal heating) was discovered to be a method to synthesize turbostratic graphene in bulk powder form. The method involves electrothermally converting various carbon sources, such as carbon black, coal, and food waste, into micron-scale flakes of graphene. More recent works demonstrated the use of mixed plastic waste, waste rubber tires, and pyrolysis ash as carbon feedstocks. The graphenization process is kinetically controlled, and the energy dose is chosen to preserve the carbon in its graphenic state (excessive energy input leads to subsequent graphitization through annealing). === Ion implantation === Accelerating carbon ions in an electric field into a semiconductor made of thin nickel films on a SiO2/Si substrate creates a wafer-scale (4 inches (100 mm)) wrinkle-, tear- and residue-free graphene layer at a relatively low temperature of 500 °C. === CMOS-compatible graphene === Integration of graphene in the widely employed CMOS fabrication process demands its transfer-free direct synthesis on dielectric substrates at temperatures below 500 °C. At the IEDM 2018, researchers from the University of California, Santa Barbara, demonstrated a novel CMOS-compatible graphene synthesis process at 300 °C suitable for back-end-of-line (BEOL) applications. The process involves pressure-assisted solid-state diffusion of carbon through a thin film of metal catalyst. The synthesized large-area graphene films were shown to exhibit high quality (via Raman characterization) and similar resistivity values when compared with high-temperature CVD-synthesized graphene films of the same cross-section down to widths of 20 nm. == Simulation == In addition to experimental investigation of graphene and graphene-based devices, numerical modeling and simulation of graphene has also been an important research topic. The Kubo formula provides an analytic expression for graphene's conductivity and shows that it is a function of several physical parameters including wavelength, temperature, and chemical potential. 
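To make the role of these parameters concrete, the following is a minimal numerical sketch (an illustration only, not code from the cited literature) of the widely used local-limit conductivity expressions derived from the Kubo formula, combining an intraband (Drude-like) term with a low-temperature interband approximation; the function name, the phenomenological scattering rate, and the example values are assumptions chosen for demonstration.

import numpy as np
from scipy.constants import e, hbar, k as kB

def graphene_sheet_conductivity(omega, mu_c, T=300.0, gamma=1e12):
    # Local (spatially non-dispersive) sheet conductivity of graphene, in siemens.
    # omega: angular frequency (rad/s), mu_c: chemical potential (J),
    # T: temperature (K), gamma: assumed phenomenological scattering rate (1/s).
    w = omega + 1j * gamma  # complex frequency including loss
    # Intraband (Drude-like) contribution from the Kubo formula
    sigma_intra = (1j * e**2 * kB * T / (np.pi * hbar**2 * w)) * (
        mu_c / (kB * T) + 2.0 * np.log(1.0 + np.exp(-mu_c / (kB * T)))
    )
    # Interband contribution in the common low-temperature approximation
    sigma_inter = (1j * e**2 / (4.0 * np.pi * hbar)) * np.log(
        (2.0 * abs(mu_c) - hbar * w) / (2.0 * abs(mu_c) + hbar * w)
    )
    return sigma_intra + sigma_inter

# Example: sheet conductivity at 10 THz for a chemical potential of 0.2 eV
print(graphene_sheet_conductivity(2.0 * np.pi * 10e12, mu_c=0.2 * e))

Plotting the real part of this function against frequency reproduces the familiar Drude-like rise at low frequencies and the onset of interband absorption near photon energies of about twice the chemical potential, which is the behavior the analytical models discussed next build upon.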
Moreover, a surface conductivity model, which describes graphene as an infinitesimally thin (two-sided) sheet with a local and isotropic conductivity, has been proposed. This model permits the derivation of analytical expressions for the electromagnetic field in the presence of a graphene sheet in terms of a dyadic Green function (represented using Sommerfeld integrals) and an exciting electric current. Even though these analytical models and methods can provide results for several canonical problems for benchmarking purposes, many practical problems involving graphene, such as the design of arbitrarily shaped electromagnetic devices, are analytically intractable. With the recent advances in the field of computational electromagnetics (CEM), various accurate and efficient numerical methods have become available for analysis of electromagnetic field/wave interactions on graphene sheets and/or graphene-based devices. A comprehensive summary of the computational tools developed for analyzing graphene-based devices and systems has been proposed. == Graphene analogs == Graphene analogs (also referred to as "artificial graphene") are two-dimensional systems which exhibit similar properties to graphene. Graphene analogs have been studied intensively since the discovery of graphene in 2004. Researchers try to develop systems in which the physics is easier to observe and manipulate than in graphene. In those systems, electrons are not always the particles that are used. They might be optical photons, microwave photons, plasmons, microcavity polaritons, or even atoms. Also, the honeycomb structure in which those particles evolve can be of a different nature than the carbon atoms in graphene. It can be, respectively, a photonic crystal, an array of metallic rods, metallic nanoparticles, a lattice of coupled microcavities, or an optical lattice. == Applications == Graphene is a transparent and flexible conductor that holds great promise for various material/device applications, including solar cells, light-emitting diodes (LED), integrated photonic circuit devices, touch panels, and smart windows or phones. Smartphone products with graphene touch screens are already on the market. In 2013, Head announced their new range of graphene tennis racquets. As of 2015, there was one product available for commercial use: a graphene-infused printer powder. Many other uses for graphene have been proposed or are under development, in areas including electronics, biological engineering, filtration, lightweight/strong composite materials, photovoltaics and energy storage. Graphene is often produced as a powder and as a dispersion in a polymer matrix. This dispersion is supposedly suitable for advanced composites, paints and coatings, lubricants, oils and functional fluids, capacitors and batteries, thermal management applications, display materials and packaging, solar cells, inks and 3D-printer materials, and barriers and films. On 2 August 2016, Briggs Automotive Company's new Mono model was said to be made out of graphene, a first for both a street-legal track car and a production car. In January 2018, graphene-based spiral inductors exploiting kinetic inductance at room temperature were first demonstrated at the University of California, Santa Barbara, led by Kaustav Banerjee. These inductors were predicted to allow significant miniaturization in radio-frequency integrated circuit applications. 
The potential of epitaxial graphene on SiC for metrology has been shown since 2010, displaying quantum Hall resistance quantization accuracy of three parts per billion in monolayer epitaxial graphene. Over the years, precisions of parts-per-trillion in the Hall resistance quantization and giant quantum Hall plateaus have been demonstrated. Developments in the encapsulation and doping of epitaxial graphene have led to the commercialization of epitaxial graphene quantum resistance standards. Novel uses for graphene continue to be researched and explored. One such use is in combination with water-based epoxy resins to produce anticorrosive coatings. The van der Waals nature of graphene and other two-dimensional (2D) materials also permits van der Waals heterostructures and integrated circuits based on van der Waals integration of 2D materials. Graphene is utilized in detecting gases and chemicals in environmental monitoring, developing highly sensitive biosensors for medical diagnostics, and creating flexible, wearable sensors for health monitoring. Graphene's transparency also enhances optical sensors, making them more effective in imaging and spectroscopy. == Toxicity == One review on graphene toxicity published in 2016 by Lalwani et al. summarizes the in vitro, in vivo, antimicrobial and environmental effects and highlights the various mechanisms of graphene toxicity. Another review published in 2016 by Ou et al. focused on graphene-family nanomaterials (GFNs) and revealed several typical mechanisms such as physical destruction, oxidative stress, DNA damage, inflammatory response, apoptosis, autophagy, and necrosis. A 2020 study showed that the toxicity of graphene is dependent on several factors such as shape, size, purity, post-production processing steps, oxidative state, functional groups, dispersion state, synthesis methods, route and dose of administration, and exposure times. In 2014, research at Stony Brook University showed that graphene nanoribbons, graphene nanoplatelets, and graphene nano–onions are non-toxic at concentrations up to 50 μg/ml. These nanoparticles do not alter the differentiation of human bone marrow stem cells towards osteoblasts (bone) or adipocytes (fat), suggesting that at low doses, graphene nanoparticles are safe for biomedical applications. In 2013, research at Brown University found that 10 μm few-layered graphene flakes can pierce cell membranes in solution. They were observed to enter initially via sharp and jagged points, allowing graphene to be internalized in the cell. The physiological effects of this remain unknown, and this remains a relatively unexplored field. == See also == Borophene – Allotrope of boron Carbon fiber – Light, strong and rigid composite material Penta-graphene – allotrope of carbon Phagraphene – proposed graphene allotrope Plumbene – Material made up of a single layer of lead atoms Silicene – Two-dimensional allotrope of silicon == References == == External links == Manchester's Revolutionary 2D Material at The University of Manchester Graphene at The Periodic Table of Videos (University of Nottingham) Graphene: Patent surge reveals global race 'Engineering Controls for Nano-scale Graphene Platelets During Manufacturing and Handling Processes' (PDF) Band structure of graphene (PDF).
Wikipedia/Graphene
Nanomaterials describe, in principle, chemical substances or materials of which a single unit is sized (in at least one dimension) between 1 and 100 nm (the usual definition of nanoscale). Nanomaterials research takes a materials science-based approach to nanotechnology, leveraging advances in materials metrology and synthesis which have been developed in support of microfabrication research. Materials with structure at the nanoscale often have unique optical, electronic, thermo-physical or mechanical properties. Nanomaterials are slowly becoming commercialized and beginning to emerge as commodities. == Definition == In ISO/TS 80004, nanomaterial is defined as the "material with any external dimension in the nanoscale or having internal structure or surface structure in the nanoscale", with nanoscale defined as the "length range approximately from 1 nm to 100 nm". This includes both nano-objects, which are discrete pieces of material, and nanostructured materials, which have internal or surface structure on the nanoscale; a nanomaterial may be a member of both these categories. On 18 October 2011, the European Commission adopted the following definition of a nanomaterial: A natural, incidental or manufactured material containing particles, in an unbound state or as an aggregate or as an agglomerate and for 50% or more of the particles in the number size distribution, one or more external dimensions is in the size range 1 nm – 100 nm. In specific cases and where warranted by concerns for the environment, health, safety or competitiveness the number size distribution threshold of 50% may be replaced by a threshold between 1% and 50%. == Sources == === Engineered === Engineered nanomaterials have been deliberately engineered and manufactured by humans to have certain required properties. Legacy nanomaterials are those that were in commercial production prior to the development of nanotechnology as incremental advancements over other colloidal or particulate materials. They include carbon black and titanium dioxide nanoparticles. === Incidental === Nanomaterials may be unintentionally produced as a byproduct of mechanical or industrial processes through combustion and vaporization. Sources of incidental nanoparticles include vehicle engine exhausts, smelting, welding fumes, and combustion processes from domestic solid fuel heating and cooking. For instance, the class of nanomaterials called fullerenes are generated by burning gas, biomass, and candles. They can also be byproducts of wear and corrosion. Incidental atmospheric nanoparticles are often referred to as ultrafine particles, which are unintentionally produced during an intentional operation, and could contribute to air pollution. === Natural === Biological systems often feature natural, functional nanomaterials. The structure of foraminifera (mainly chalk) and viruses (protein, capsid), the wax crystals covering a lotus or nasturtium leaf, spider and spider-mite silk, the blue hue of tarantulas, the "spatulae" on the bottom of gecko feet, some butterfly wing scales, natural colloids (milk, blood), horny materials (skin, claws, beaks, feathers, horns, hair), paper, cotton, nacre, corals, and even our own bone matrix are all natural organic nanomaterials. Natural inorganic nanomaterials occur through crystal growth in the diverse chemical conditions of the Earth's crust. 
For example, clays display complex nanostructures due to anisotropy of their underlying crystal structure, and volcanic activity can give rise to opals, which are an instance of naturally occurring photonic crystals due to their nanoscale structure. Fires represent particularly complex reactions and can produce pigments, cement, fumed silica, etc. Natural sources of nanoparticles include combustion products of forest fires, volcanic ash, ocean spray, and the radioactive decay of radon gas. Natural nanomaterials can also be formed through weathering processes of metal- or anion-containing rocks, as well as at acid mine drainage sites. == Types == Nanomaterials are often categorized by how many of their dimensions fall in the nanoscale. A nanoparticle is defined as a nano-object with all three external dimensions in the nanoscale, whose longest and shortest axes do not differ significantly. A nanofiber has two external dimensions in the nanoscale, with nanotubes being hollow nanofibers and nanorods being solid nanofibers. A nanoplate/nanosheet has one external dimension in the nanoscale, and if the two larger dimensions are significantly different it is called a nanoribbon. For nanofibers and nanoplates, the other dimensions may or may not be in the nanoscale, but must be significantly larger. In all of these cases, a significant difference is noted to typically be at least a factor of 3. Nanostructured materials are often categorized by what phases of matter they contain. A nanocomposite is a solid containing at least one physically or chemically distinct region, or collection of regions, having at least one dimension in the nanoscale. A nanofoam has a liquid or solid matrix, filled with a gaseous phase, where one of the two phases has dimensions on the nanoscale. A nanoporous material is a solid material containing nanopores, voids in the form of open or closed pores of sub-micron lengthscales. A nanocrystalline material has a significant fraction of crystal grains in the nanoscale. === Nanoporous materials === The term nanoporous materials encompasses subsets of microporous and mesoporous materials. Microporous materials are porous materials with a mean pore size smaller than 2 nm, while mesoporous materials are those with pore sizes in the range of 2–50 nm. Microporous materials exhibit pore sizes with length-scales comparable to small molecules. For this reason such materials may serve valuable applications including separation membranes. Mesoporous materials are of interest for applications that require high specific surface areas, while enabling penetration for molecules that may be too large to enter the pores of a microporous material. In some sources, nanoporous materials and nanofoams are considered nanostructures but not nanomaterials because only the voids and not the materials themselves are nanoscale. Although the ISO definition only considers round nano-objects to be nanoparticles, other sources use the term nanoparticle for all shapes. === Nanoparticles === Nanoparticles have all three dimensions on the nanoscale. Nanoparticles can also be embedded in a bulk solid to form a nanocomposite. ==== Fullerenes ==== The fullerenes are a class of allotropes of carbon which conceptually are graphene sheets rolled into tubes or spheres. These include the carbon nanotubes (or silicon nanotubes), which are of interest both because of their mechanical strength and also because of their electrical properties. 
The first fullerene molecule to be discovered, and the family's namesake, buckminsterfullerene (C60), was prepared in 1985 by Richard Smalley, Robert Curl, James Heath, Sean O'Brien, and Harold Kroto at Rice University. The name was a homage to Buckminster Fuller, whose geodesic domes it resembles. Fullerenes have since been found to occur in nature. More recently, fullerenes have been detected in outer space. For the past decade, the chemical and physical properties of fullerenes have been a hot topic in the field of research and development, and are likely to continue to be for a long time. In April 2003, fullerenes were under study for potential medicinal use: binding specific antibiotics to the structure of resistant bacteria and even targeting certain types of cancer cells such as melanoma. The October 2005 issue of Chemistry and Biology contains an article describing the use of fullerenes as light-activated antimicrobial agents. In the field of nanotechnology, heat resistance and superconductivity are among the properties attracting intense research. A common method used to produce fullerenes is to send a large current between two nearby graphite electrodes in an inert atmosphere. The resulting carbon plasma arc between the electrodes cools into a sooty residue from which many fullerenes can be isolated. Many calculations have been done using ab initio quantum methods applied to fullerenes. By DFT and TDDFT methods one can obtain IR, Raman, and UV spectra. Results of such calculations can be compared with experimental results. ==== Metal-based nanoparticles ==== Inorganic nanomaterials (e.g. quantum dots, nanowires, and nanorods), because of their interesting optical and electrical properties, could be used in optoelectronics. Furthermore, the optical and electronic properties of nanomaterials, which depend on their size and shape, can be tuned via synthetic techniques. These materials could be used in organic-material-based optoelectronic devices such as organic solar cells, OLEDs, etc. The operating principles of such devices are governed by photoinduced processes like electron transfer and energy transfer. The performance of the devices depends on the efficiency of the photoinduced process responsible for their functioning. Therefore, a better understanding of those photoinduced processes in organic/inorganic nanomaterial composite systems is necessary in order to use them in optoelectronic devices. Nanoparticles or nanocrystals made of metals, semiconductors, or oxides are of particular interest for their mechanical, electrical, magnetic, optical, chemical and other properties. Nanoparticles have been used as quantum dots and as chemical catalysts such as nanomaterial-based catalysts. Recently, a range of nanoparticles has been extensively investigated for biomedical applications including tissue engineering, drug delivery, and biosensors. Nanoparticles are of great scientific interest as they are effectively a bridge between bulk materials and atomic or molecular structures. A bulk material should have constant physical properties regardless of its size, but at the nano-scale this is often not the case. Size-dependent properties are observed, such as quantum confinement in semiconductor particles, surface plasmon resonance in some metal particles, and superparamagnetism in magnetic materials. Nanoparticles exhibit a number of special properties relative to bulk material. For example, the bending of bulk copper (wire, ribbon, etc.)
occurs with movement of copper atoms/clusters at about the 50 nm scale. Copper nanoparticles smaller than 50 nm are considered superhard materials that do not exhibit the same malleability and ductility as bulk copper. The change in properties is not always desirable. Ferroelectric materials smaller than 10 nm can switch their polarization direction using room-temperature thermal energy, thus making them useless for memory storage. Suspensions of nanoparticles are possible because the interaction of the particle surface with the solvent is strong enough to overcome differences in density, which usually result in a material either sinking or floating in a liquid. Nanoparticles often have unexpected visual properties because they are small enough to confine their electrons and produce quantum effects. For example, gold nanoparticles appear deep red to black in solution. The often very high surface area to volume ratio of nanoparticles provides a tremendous driving force for diffusion, especially at elevated temperatures. Sintering is possible at lower temperatures and over shorter durations than for larger particles. This theoretically does not affect the density of the final product, though flow difficulties and the tendency of nanoparticles to agglomerate do complicate matters. The surface effects of nanoparticles also reduce the incipient melting temperature. === One-dimensional nanostructures === The smallest possible crystalline wires, with cross-sections as small as a single atom, can be engineered in cylindrical confinement. Carbon nanotubes, a natural semi-1D nanostructure, can be used as a template for synthesis. Confinement provides mechanical stabilization and prevents linear atomic chains from disintegration; other structures of 1D nanowires are predicted to be mechanically stable even upon isolation from the templates. === Two-dimensional nanostructures === 2D materials are crystalline materials consisting of a two-dimensional single layer of atoms. The most important representative, graphene, was discovered in 2004. Thin films with nanoscale thicknesses (nanofilms) are considered nanostructures, but are sometimes not considered nanomaterials because they do not exist separately from the substrate. === Bulk nanostructured materials === Some bulk materials contain features on the nanoscale, including nanocomposites, nanocrystalline materials, nanostructured films, and nanotextured surfaces. The box-shaped graphene (BSG) nanostructure is an example of a 3D nanomaterial. The BSG nanostructure appears after mechanical cleavage of pyrolytic graphite. This nanostructure is a multilayer system of parallel hollow nanochannels located along the surface and having a quadrangular cross-section. The thickness of the channel walls is approximately equal to 1 nm. The typical width of the channel facets is about 25 nm. == Applications == Nanomaterials are used in a variety of manufacturing processes, products and healthcare applications, including paints, filters, insulation and lubricant additives. In healthcare, nanozymes are nanomaterials with enzyme-like characteristics. They are an emerging type of artificial enzyme, which have found wide application in areas such as biosensing, bioimaging, tumor diagnosis, antibiofouling and more. High-quality filters may be produced using nanostructures; these filters are capable of removing particulates as small as a virus, as seen in a water filter created by Seldon Technologies. 
The nanomaterials membrane bioreactor (NMs-MBR), the next generation of the conventional MBR, has recently been proposed for the advanced treatment of wastewater. In the air purification field, nanotechnology was used to combat the spread of MERS in Saudi Arabian hospitals in 2012. Nanomaterials are being used in modern and human-safe insulation technologies; in the past they were found in asbestos-based insulation. As a lubricant additive, nanomaterials have the ability to reduce friction in moving parts. Worn and corroded parts can also be repaired with self-assembling anisotropic nanoparticles called TriboTEX. Nanomaterials have also been applied in a range of industries and consumer products. Mineral nanoparticles such as titanium oxide have been used to improve UV protection in sunscreen. In the sports industry, lighter bats have been produced with carbon nanotubes to improve performance. Another application is in the military, where mobile pigment nanoparticles have been used to create more effective camouflage. Nanomaterials can also be used in three-way-catalyst applications, which have the advantage of controlling the emission of nitrogen oxides (NOx), which are precursors to acid rain and smog. In core–shell structures, nanomaterials form the shell that acts as the catalyst support to protect noble metals such as palladium and rhodium. The primary function is that the supports can be used for carrying the catalysts' active components, making them highly dispersed, reducing the use of noble metals, enhancing catalyst activity, and potentially improving the stability. == Synthesis == The goal of any synthetic method for nanomaterials is to yield a material that exhibits properties that are a result of its characteristic length scale being in the nanometer range (1 – 100 nm). Accordingly, the synthetic method should exhibit control of size in this range so that one property or another can be attained. Often the methods are divided into two main types, "bottom up" and "top down". === Bottom-up methods === Bottom-up methods involve the assembly of atoms or molecules into nanostructured arrays. In these methods the raw material sources can be in the form of gases, liquids, or solids. The latter require some sort of disassembly prior to their incorporation onto a nanostructure. Bottom-up methods generally fall into two categories: chaotic and controlled. Chaotic processes involve elevating the constituent atoms or molecules to a chaotic state and then suddenly changing the conditions so as to make that state unstable. Through the clever manipulation of any number of parameters, products form largely as a result of the ensuing kinetics. The collapse from the chaotic state can be difficult or impossible to control, and so ensemble statistics often govern the resulting size distribution and average size. Accordingly, nanoparticle formation is controlled through manipulation of the end state of the products. Examples of chaotic processes are laser ablation, exploding wire, arc, flame pyrolysis, combustion, and precipitation synthesis techniques. Controlled processes involve the controlled delivery of the constituent atoms or molecules to the site(s) of nanoparticle formation such that the nanoparticle can grow to a prescribed size in a controlled manner. Generally the state of the constituent atoms or molecules is never far from that needed for nanoparticle formation. Accordingly, nanoparticle formation is controlled through the control of the state of the reactants. 
Examples of controlled processes are self-limiting growth in solution, self-limited chemical vapor deposition, shaped-pulse femtosecond laser techniques, plant and microbial approaches, and molecular beam epitaxy. === Top-down methods === Top-down methods adopt some 'force' (e.g. mechanical force, laser) to break bulk materials into nanoparticles. A popular method that involves mechanically breaking apart bulk materials into nanomaterials is 'ball milling'. Besides that, nanoparticles can also be made by laser ablation, which applies short-pulse lasers (e.g. a femtosecond laser) to ablate a solid target. == Characterization == Novel effects can occur in materials when structures are formed with sizes comparable to any one of many possible length scales, such as the de Broglie wavelength of electrons, or the optical wavelengths of high-energy photons. In these cases quantum mechanical effects can dominate material properties. One example is quantum confinement, where the electronic properties of solids are altered with great reductions in particle size. The optical properties of nanoparticles, e.g. fluorescence, also become a function of the particle diameter. This effect does not come into play by going from macroscopic to micrometer dimensions, but becomes pronounced when the nanometer scale is reached. In addition to optical and electronic properties, the novel mechanical properties of many nanomaterials are the subject of nanomechanics research. When added to a bulk material, nanoparticles can strongly influence the mechanical properties of the material, such as the stiffness or elasticity. For example, traditional polymers can be reinforced by nanoparticles (such as carbon nanotubes), resulting in novel materials which can be used as lightweight replacements for metals. Such composite materials may enable a weight reduction accompanied by an increase in stability and improved functionality. Finally, nanostructured materials with small particle size, such as zeolites and asbestos, are used as catalysts in a wide range of critical industrial chemical reactions. The further development of such catalysts can form the basis of more efficient, environmentally friendly chemical processes. The first observations and size measurements of nano-particles were made during the first decade of the 20th century. Zsigmondy made detailed studies of gold sols and other nanomaterials with sizes down to 10 nm and less. He published a book in 1914. He used an ultramicroscope that employs a dark-field method for seeing particles with sizes much less than the wavelength of light. There are traditional techniques developed during the 20th century in interface and colloid science for characterizing nanomaterials. These are widely used for first-generation passive nanomaterials specified in the next section. These methods include several different techniques for characterizing particle size distribution. This characterization is imperative because many materials that are expected to be nano-sized are actually aggregated in solutions. Some of these methods are based on light scattering. Others apply ultrasound, such as ultrasound attenuation spectroscopy for testing concentrated nano-dispersions and microemulsions. There is also a group of traditional techniques for characterizing the surface charge or zeta potential of nano-particles in solutions. This information is required for proper system stabilization, preventing its aggregation or flocculation. 
These methods include microelectrophoresis, electrophoretic light scattering, and electroacoustics. The last one, for instance the colloid vibration current method, is suitable for characterizing concentrated systems. == Mechanical properties == Ongoing research has shown that mechanical properties can vary significantly in nanomaterials compared to bulk materials. Nanomaterials have substantial mechanical properties due to the volume, surface, and quantum effects of nanoparticles. This is observed when nanoparticles are added to a common bulk material: the nanomaterial refines the grain and forms intergranular and intragranular structures, which improve the grain boundaries and therefore the mechanical properties of the material. Grain boundary refinements provide strengthening by increasing the stress required to cause intergranular or transgranular fractures. A common example where this can be observed is the addition of nano-silica to cement, which improves the tensile strength, compressive strength, and bending strength by the mechanisms just mentioned. The understanding of these properties will enhance the use of nanoparticles in novel applications in various fields such as surface engineering, tribology, nanomanufacturing, and nanofabrication. Techniques used: Steinitz in 1943 used the micro-indentation technique to test the hardness of microparticles, and now nanoindentation has been employed to measure the elastic properties of particles at about the 5-micron level. These protocols are frequently used to calculate the mechanical characteristics of nanoparticles via atomic force microscopy (AFM) techniques. To measure the elastic modulus, indentation data are obtained by converting AFM force-displacement curves to force-indentation curves (see the sketch later in this section). Hooke's law is used to determine the cantilever deformation and the depth of the tip, and the force on the cantilever can be written as P = k(δc − δc0), where δc is the cantilever deformation, δc0 is the deflection offset, and k is the cantilever spring constant. AFM allows us to obtain a high-resolution image of multiple types of surfaces, while the tip of the cantilever can be used to obtain information about mechanical properties. Computer simulations are also being progressively used to test theories and complement experimental studies. The most used computer method is molecular dynamics simulation, which uses Newton's equations of motion for the atoms or molecules in the system. Other techniques, such as the direct probe method, are used to determine the adhesive properties of nanomaterials. Both the experimental techniques and the simulations are coupled with transmission electron microscopy (TEM) and AFM to provide results. Mechanical properties of common nanomaterial classes: Crystalline metal nanomaterials: Dislocations are one of the major contributors to the elastic properties of nanomaterials, as in bulk crystalline materials. Despite the traditional view that there are no dislocations in nanomaterials, experimental work by Ramos has shown that the hardness of gold nanoparticles is much higher than that of their bulk counterparts, as stacking faults and dislocations form that activate multiple strengthening mechanisms in the material. Further research using nanoindentation techniques has shown that material strength under compression increases with decreasing particle size because of nucleating dislocations. These dislocations have been observed using TEM techniques, coupled with nanoindentation. 
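As a concrete illustration of the force-indentation conversion and Hertzian analysis mentioned above, the sketch below is a minimal example only (not a published protocol); the function names, the rigid-spherical-tip assumption, and the known contact-point offsets are assumptions made for demonstration.

import numpy as np

def force_indentation(z, d, k_c, z0=0.0, d0=0.0):
    # Convert an AFM force-displacement curve into a force-indentation curve.
    # z: piezo displacement (m), d: cantilever deflection (m),
    # k_c: cantilever spring constant (N/m), z0/d0: contact-point offsets (m).
    force = k_c * (d - d0)                 # Hooke's law applied to the cantilever
    indentation = (z - z0) - (d - d0)      # piezo travel not taken up by cantilever bending
    return indentation, force

def hertz_modulus(indentation, force, tip_radius, poisson=0.3):
    # Least-squares fit of the Hertz contact model F = (4/3) * E_r * sqrt(R) * delta^(3/2),
    # assuming a rigid spherical tip; returns an estimate of the sample's Young's modulus (Pa).
    delta = np.clip(indentation, 0.0, None)            # keep only the contact region
    x = (4.0 / 3.0) * np.sqrt(tip_radius) * delta**1.5
    reduced_modulus = np.sum(x * force) / np.sum(x * x)
    return reduced_modulus * (1.0 - poisson**2)

In practice the contact point (z0, d0) must itself be estimated from the measured curve, and the cantilever spring constant and tip radius must be calibrated, which is where much of the experimental difficulty discussed in this section arises.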
The strength and hardness of silicon nanoparticles are about four times those of the bulk material. The resistance to applied pressure can be attributed to line defects inside the particles as well as dislocations that strengthen the mechanical properties of the nanomaterial. Furthermore, the addition of nanoparticles strengthens a matrix because the pinning of particles inhibits grain growth. This refines the grain, and hence improves the mechanical properties. However, not all additions of nanomaterials lead to improved properties, nano-Cu being one example; this is attributed to the inherent properties of the added material being weaker than those of the matrix. Nonmetallic nanoparticles and nanomaterials: Size-dependent behavior of mechanical properties is still not clear in the case of polymer nanomaterials; however, one study by Lahouij found that the compressive moduli of polystyrene nanoparticles were lower than those of their bulk counterparts. This can be associated with the functional groups being hydrated. Furthermore, nonmetallic nanomaterials can lead to agglomerates forming inside the matrix they are added to and hence decrease the mechanical properties by leading to fracture under even low mechanical loads, as with the addition of CNTs. The agglomerates act as slip planes as well as planes in which cracks can easily propagate. However, most organic nanomaterials are flexible, and mechanical properties such as hardness are not dominant for them. Nanowires and nanotubes: The elastic moduli of some nanowires, namely lead and silver, decrease with increasing diameter. This has been associated with surface stress, the oxidation layer, and surface roughness. However, the elastic behavior of ZnO nanowires is not affected by surface effects, though their fracture properties are. The size dependence is thus generally determined by the material's behavior and bonding. The reason why the mechanical properties of nanomaterials are still a hot topic for research is that measuring the mechanical properties of individual nanoparticles is a complicated process involving multiple control factors. Nonetheless, atomic force microscopy has been widely used to measure the mechanical properties of nanomaterials. Adhesion and friction of nanoparticles: In the application of a material, adhesion and friction play a critical role in determining the outcome. Therefore, it is important to see how these properties are also affected by the size of a material. Again, AFM, along with the colloidal probe technique, is the approach most used to measure these properties and to determine the adhesive strength of nanoparticles to a solid surface. The forces that give nanomaterials their adhesive properties include electrostatic forces, van der Waals forces, capillary forces, solvation forces, and structural forces. It has been found that the addition of nanomaterials to bulk materials substantially increases their adhesive capabilities by increasing their strength through various bonding mechanisms. As a nanomaterial's dimensions approach zero, the fraction of the particle's atoms located at the surface increases. Along with surface effects, the movement of nanoparticles also plays a role in dictating their mechanical properties, such as shearing capabilities. The movement of particles can be observed under TEM. 
For example, the movement behavior of MoS2 nanoparticles in dynamic contact was directly observed in situ, which led to the conclusion that the fullerenes can shear via rolling or sliding. However, observing these properties is again a very complicated process due to multiple contributing factors. Applications specific to mechanical properties include lubrication, nano-manufacturing, and coatings. == Uniformity == The chemical processing and synthesis of high-performance technological components for the private, industrial and military sectors requires the use of high-purity ceramics, polymers, glass-ceramics, and composite materials. In condensed bodies formed from fine powders, the irregular sizes and shapes of nanoparticles in a typical powder often lead to non-uniform packing morphologies that result in packing density variations in the powder compact. Uncontrolled agglomeration of powders due to attractive van der Waals forces can also give rise to microstructural inhomogeneities. Differential stresses that develop as a result of non-uniform drying shrinkage are directly related to the rate at which the solvent can be removed, and thus highly dependent upon the distribution of porosity. Such stresses have been associated with a plastic-to-brittle transition in consolidated bodies, and can lead to crack propagation in the unfired body if not relieved. In addition, any fluctuations in packing density in the compact as it is prepared for the kiln are often amplified during the sintering process, yielding inhomogeneous densification. Some pores and other structural defects associated with density variations have been shown to play a detrimental role in the sintering process by growing and thus limiting end-point densities. Differential stresses arising from inhomogeneous densification have also been shown to result in the propagation of internal cracks, thus becoming the strength-controlling flaws. It would therefore appear desirable to process a material in such a way that it is physically uniform with regard to the distribution of components and porosity, rather than using particle size distributions which will maximize the green density. The containment of a uniformly dispersed assembly of strongly interacting particles in suspension requires total control over particle-particle interactions. A number of dispersants, such as ammonium citrate (aqueous) and imidazoline or oleyl alcohol (nonaqueous), are promising solutions as possible additives for enhanced dispersion and deagglomeration. Monodisperse nanoparticles and colloids provide this potential. Monodisperse powders of colloidal silica, for example, may therefore be stabilized sufficiently to ensure a high degree of order in the colloidal crystal or polycrystalline colloidal solid which results from aggregation. The degree of order appears to be limited by the time and space allowed for longer-range correlations to be established. Such defective polycrystalline colloidal structures would appear to be the basic elements of sub-micrometer colloidal materials science, and, therefore, provide the first step in developing a more rigorous understanding of the mechanisms involved in microstructural evolution in high-performance materials and components. 
== Nanomaterials in articles, patents, and products == The quantitative analysis of nanomaterials showed that nanoparticles, nanotubes, nanocrystalline materials, nanocomposites, and graphene had been mentioned in 400,000, 181,000, 144,000, 140,000, and 119,000 ISI-indexed articles, respectively, by September 2018. As far as patents are concerned, nanoparticles, nanotubes, nanocomposites, graphene, and nanowires have played a role in 45,600, 32,100, 12,700, 12,500, and 11,800 patents, respectively. Monitoring approximately 7,000 commercial nano-based products available on global markets revealed that the properties of around 2,330 products have been enabled or enhanced by nanoparticles. Liposomes, nanofibers, nanocolloids, and aerogels were also among the most common nanomaterials in consumer products. The European Union Observatory for Nanomaterials (EUON) has produced a database (NanoData) that provides information on specific patents, products, and research publications on nanomaterials. == Health and safety == === World Health Organization guidelines === The World Health Organization (WHO) published a guideline on protecting workers from the potential risk of manufactured nanomaterials at the end of 2017. WHO used a precautionary approach as one of its guiding principles. This means that exposure has to be reduced, despite uncertainty about the adverse health effects, when there are reasonable indications to do so. This is highlighted by recent scientific studies that demonstrate a capability of nanoparticles to cross cell barriers and interact with cellular structures. In addition, the hierarchy of controls was an important guiding principle. This means that when there is a choice between control measures, those measures that are closer to the root of the problem should always be preferred over measures that put a greater burden on workers, such as the use of personal protective equipment (PPE). WHO commissioned systematic reviews for all important issues to assess the current state of the science and to inform the recommendations according to the process set out in the WHO Handbook for guideline development. The recommendations were rated as "strong" or "conditional" depending on the quality of the scientific evidence, values and preferences, and costs related to the recommendation. The WHO guidelines contain the following recommendations for safe handling of manufactured nanomaterials (MNMs): A. Assess health hazards of MNMs: WHO recommends assigning hazard classes to all MNMs according to the Globally Harmonized System (GHS) of Classification and Labelling of Chemicals for use in safety data sheets. For a limited number of MNMs this information is made available in the guidelines (strong recommendation, moderate-quality evidence). WHO recommends updating safety data sheets with MNM-specific hazard information or indicating which toxicological end-points did not have adequate testing available (strong recommendation, moderate-quality evidence). For the respirable fibres and granular biopersistent particles' groups, the GDG suggests using the available classification of MNMs for provisional classification of nanomaterials of the same group (conditional recommendation, low-quality evidence). B. Assess exposure to MNMs: WHO suggests assessing workers' exposure in workplaces with methods similar to those used for the proposed specific occupational exposure limit (OEL) value of the MNM (conditional recommendation, low-quality evidence). 
Because there are no specific regulatory OEL values for MNMs in workplaces, WHO suggests assessing whether workplace exposure exceeds a proposed OEL value for the MNM. A list of proposed OEL values is provided in an annex of the guidelines. The chosen OEL should be at least as protective as a legally mandated OEL for the bulk form of the material (conditional recommendation, low-quality evidence). If specific OELs for MNMs are not available in workplaces, WHO suggests a step-wise approach for inhalation exposure with, first an assessment of the potential for exposure; second, conducting basic exposure assessment and third, conducting a comprehensive exposure assessment such as those proposed by the Organisation for Economic Cooperation and Development (OECD) or Comité Européen de Normalisation (the European Committee for Standardization, CEN) (conditional recommendation, moderate quality evidence). For dermal exposure assessment, WHO found that there was insufficient evidence to recommend one method of dermal exposure assessment over another. C. Control exposure to MNMs Based on a precautionary approach, WHO recommends focusing control of exposure on preventing inhalation exposure with the aim of reducing it as much as possible (strong recommendation, moderate-quality evidence). WHO recommends reduction of exposures to a range of MNMs that have been consistently measured in workplaces especially during cleaning and maintenance, collecting material from reaction vessels and feeding MNMs into the production process. In the absence of toxicological information, WHO recommends implementing the highest level of controls to prevent workers from any exposure. When more information is available, WHO recommends taking a more tailored approach (strong recommendation, moderate-quality evidence). WHO recommends taking control measures based on the principle of hierarchy of controls, meaning that the first control measure should be to eliminate the source of exposure before implementing control measures that are more dependent on worker involvement, with PPE being used only as a last resort. According to this principle, engineering controls should be used when there is a high level of inhalation exposure or when there is no, or very little, toxicological information available. In the absence of appropriate engineering controls PPE should be used, especially respiratory protection, as part of a respiratory protection programme that includes fit-testing (strong recommendation, moderate-quality evidence). WHO suggests preventing dermal exposure by occupational hygiene measures such as surface cleaning, and the use of appropriate gloves (conditional recommendation, low quality evidence). When assessment and measurement by a workplace safety expert is not available, WHO suggests using control banding for nanomaterials to select exposure control measures in the workplace. Owing to a lack of studies, WHO cannot recommend one method of control banding over another (conditional recommendation, very low-quality evidence). For health surveillance WHO could not make a recommendation for targeted MNM-specific health surveillance programmes over existing health surveillance programmes that are already in use owing to the lack of evidence. WHO considers training of workers and worker involvement in health and safety issues to be best practice but could not recommend one form of training of workers over another, or one form of worker involvement over another, owing to the lack of studies available. 
It is expected that there will be considerable progress in validated measurement methods and risk assessment and WHO expects to update these guidelines in five years' time, in 2022. === Other guidance === Because nanotechnology is a recent development, the health and safety effects of exposures to nanomaterials, and what levels of exposure may be acceptable, are subjects of ongoing research. Of the possible hazards, inhalation exposure appears to present the most concern. Animal studies indicate that carbon nanotubes and carbon nanofibers can cause pulmonary effects including inflammation, granulomas, and pulmonary fibrosis, which were of similar or greater potency when compared with other known fibrogenic materials such as silica, asbestos, and ultrafine carbon black. Acute inhalation exposure of healthy animals to biodegradable inorganic nanomaterials have not demonstrated significant toxicity effects. Although the extent to which animal data may predict clinically significant lung effects in workers is not known, the toxicity seen in the short-term animal studies indicate a need for protective action for workers exposed to these nanomaterials, although no reports of actual adverse health effects in workers using or producing these nanomaterials were known as of 2013. Additional concerns include skin contact and ingestion exposure, and dust explosion hazards. Elimination and substitution are the most desirable approaches to hazard control. While the nanomaterials themselves often cannot be eliminated or substituted with conventional materials, it may be possible to choose properties of the nanoparticle such as size, shape, functionalization, surface charge, solubility, agglomeration, and aggregation state to improve their toxicological properties while retaining the desired functionality. Handling procedures can also be improved, for example, using a nanomaterial slurry or suspension in a liquid solvent instead of a dry powder will reduce dust exposure. Engineering controls are physical changes to the workplace that isolate workers from hazards, mainly ventilation systems such as fume hoods, gloveboxes, biosafety cabinets, and vented balance enclosures. Administrative controls are changes to workers' behavior to mitigate a hazard, including training on best practices for safe handling, storage, and disposal of nanomaterials, proper awareness of hazards through labeling and warning signage, and encouraging a general safety culture. Personal protective equipment must be worn on the worker's body and is the least desirable option for controlling hazards. Personal protective equipment normally used for typical chemicals are also appropriate for nanomaterials, including long pants, long-sleeve shirts, and closed-toed shoes, and the use of safety gloves, goggles, and impervious laboratory coats. In some circumstances respirators may be used. Exposure assessment is a set of methods used to monitor contaminant release and exposures to workers. These methods include personal sampling, where samplers are located in the personal breathing zone of the worker, often attached to a shirt collar to be as close to the nose and mouth as possible; and area/background sampling, where they are placed at static locations. The assessment should use both particle counters, which monitor the real-time quantity of nanomaterials and other background particles; and filter-based samples, which can be used to identify the nanomaterial, usually using electron microscopy and elemental analysis. 
As of 2016, quantitative occupational exposure limits have not been determined for most nanomaterials. The U.S. National Institute for Occupational Safety and Health has determined non-regulatory recommended exposure limits for carbon nanotubes, carbon nanofibers, and ultrafine titanium dioxide. Agencies and organizations from other countries, including the British Standards Institute and the Institute for Occupational Safety and Health in Germany, have established OELs for some nanomaterials, and some companies have supplied OELs for their products. Nanoscale diagnostics: Nanotechnology has been making headlines in the medical field, contributing to advances in biomedical imaging. The unique optical, magnetic and chemical properties of materials on the nanoscale have allowed the development of imaging probes with multi-functionality such as better contrast enhancement, better spatial information, controlled biodistribution, and multi-modal imaging across various scanning devices. These developments have had advantages such as being able to detect the location of tumors and inflammation, accurate assessment of disease progression, and personalized medicine. Silica nanoparticles: Silica nanoparticles can be classified into solid, non-porous, and mesoporous types. They have a large surface area, a hydrophilic surface, and chemical and physical stability. Silica nanoparticles are made using the Stöber process, which is the hydrolysis of silyl ethers such as tetraethyl silicate into silanols (Si-OH) using ammonia in a mixture of water and alcohol, followed by the condensation of the silanols into 50–2000 nm silica particles. The size of the particles can be controlled by varying the concentration of silyl ether and alcohol, or by using the microemulsion method. Mesoporous silica nanoparticles are synthesized by the sol-gel process. They have pores that range in diameter from 2 nm to 50 nm. They are synthesized in a water-based solution in the presence of a base catalyst and a pore-forming agent known as a surfactant. Surfactants are molecules that have a hydrophobic tail (an alkyl chain) and a hydrophilic head (a charged group, such as a quaternary amine). As these surfactants are added to a water-based solution, they coordinate to form micelles with increasing concentration in order to stabilize the hydrophobic tails. Varying the pH of the solution and the composition of the solvents, and the addition of certain swelling agents, can control the pore size. Their hydrophilic surface is what makes silica nanoparticles so important and allows them to carry out functions such as drug and gene delivery, bioimaging and therapy. In order for this application to be successful, assorted surface functional groups are necessary and can be added either by the co-condensation process during preparation or by subsequent surface modification. The high surface area of silica nanoparticles allows them to carry much larger amounts of the desired drug than conventional carriers like polymers and liposomes. It allows for site-specific targeting, especially in the treatment of cancer. Once the particles have reached their destination, they can act as a reporter, release a compound, or be remotely heated to damage biological structures in close proximity. Targeting is typically accomplished by modifying the surface of the nanoparticle with a chemical or biological compound. 
They accumulate at tumor sites through the enhanced permeability and retention (EPR) effect, in which the leaky tumor vessels deliver the nanoparticles directly into the tumor. The porous shell of the silica allows control over the rate at which the drug diffuses out of the nanoparticle. The shell can be modified to have an affinity for the drug, or even to be triggered by pH, heat, light, salts, or other signaling molecules. Silica nanoparticles are also used in bioimaging because they can accommodate fluorescent/MRI/PET/SPECT contrast agents and drug/DNA molecules on their adaptable surface and in their pores. For example, the silica nanoparticle can be used as a vector for the expression of fluorescent proteins. Several different types of fluorescent probes, such as cyanine dyes, methyl viologen, or semiconductor quantum dots, can be conjugated to silica nanoparticles and delivered into specific cells or injected in vivo. The carrier molecule RGD peptide has been very useful for targeted in vivo imaging. === Topically applied surface-enhanced resonance Raman ratiometric spectroscopy (TAS3RS) === TAS3RS is another technique that is beginning to make advances in the medical field. It is an imaging technique that uses folate receptors (FR) to detect tumor lesions as small as 370 micrometers. Folate receptors are membrane-bound surface proteins that bind folates and folate conjugates with high affinity. FR is frequently overexpressed in a number of human malignancies, including cancer of the ovary, lung, kidney, breast, bladder, brain, and endometrium. Raman imaging is a type of spectroscopy used in chemistry to provide a structural fingerprint by which molecules can be identified. It relies upon the inelastic scattering of photons, and the surface-enhanced resonance variant achieves ultra-high sensitivity. In one study, two different surface-enhanced resonance Raman scattering (SERRS) nanoprobes were synthesized. These were a "targeted nanoprobe functionalized with an anti-folate-receptor antibody (αFR-Ab) via a PEG-maleimide-succinimide and using the infrared dye IR780 as the Raman reporter, henceforth referred to as αFR-NP, and a nontargeted probe (nt-NP) coated with PEG5000-maleimide and featuring the IR140 infrared dye as the Raman reporter." These two probes were injected into tumor-bearing mice and healthy control mice. The mice were imaged by bioluminescence imaging (BLI), which detects light produced within the organism's body. They were also scanned with a Raman microscope so that the TAS3RS map could be correlated with the BLI map. TAS3RS showed no signal in the healthy mice, but located the tumor lesions in the tumor-bearing mice and produced a TAS3RS map that could be used as guidance during surgery. TAS3RS shows promise for combating ovarian and peritoneal cancer, as it allows early detection with high accuracy. The technique can be administered locally, which is an advantage because the probe does not have to enter the bloodstream, bypassing the toxicity concerns of circulating nanoprobes. It is also more photostable than fluorochromes: because SERRS signals cannot arise from biomolecules, TAS3RS does not produce the false positives seen in fluorescence imaging.
== See also == Artificial enzyme Directional freezing List of software for nanostructures modeling Nanostructure Nanotopography Nanozymes == References == == External links == European Union Observatory for Nanomaterials (EUON) Acquisition, evaluation and public orientated presentation of societal relevant data and findings for nanomaterials (DaNa) Safety of Manufactured Nanomaterials: OECD Environment Directorate Assessing health risks of nanomaterials summary by GreenFacts of the European Commission SCENIHR assessment Textiles Nanotechnology Laboratory at Cornell University Nano Structured Material Online course MSE 376-Nanomaterials by Mark C. Hersam (2006) Nanomaterials: Quantum Dots, Nanowires and Nanotubes online presentation by Dr Sands Lecture Videos for the Second International Symposium on the Risk Assessment of Manufactured Nanomaterials, NEDO 2012 Nader Engheta: Wave interaction with metamaterials, SPIE Newsroom 2016 Managing nanomaterials in the Workplace by the European Agency for Safety and Health at Work.
Wikipedia/Nanomaterials
The foot-pound force (symbol: ft⋅lbf or ft⋅lb) is a unit of work or energy in the engineering and gravitational systems of United States customary and imperial units of measure. It is the energy transferred upon applying a force of one pound-force (lbf) through a linear displacement of one foot. The corresponding SI unit is the joule, though in terms of energy, one joule is not equal to one foot-pound. == Usage == The term foot-pound is also used as a unit of torque (see pound-foot (torque)). In the United States this is often used to specify, for example, the tightness of a fastener (such as screws and nuts) or the output of an engine. Although they are dimensionally equivalent, energy (a scalar) and torque (a Euclidean vector) are distinct physical quantities. Both energy and torque can be expressed as a product of a force vector with a displacement vector (hence pounds and feet); energy is the scalar product of the two, and torque is the vector product. Although calling the torque unit "pound-foot" has been academically suggested, both are still commonly called "foot-pound" in colloquial usage. To avoid confusion, it is not uncommon for people to specify each as "foot-pound of energy" or "foot-pound of torque" respectively. In small arms ballistics, and particularly in the United States, the foot-pound is often used to specify the muzzle energy of a bullet. == Conversion factors == === Energy === 1 foot pound-force is equivalent to: 1.355818 joules (newton-metres); 13,558,180 ergs; 1.285067×10^−3 British thermal units; 0.3240483 calories; 8.462351×10^18 electronvolts = 8.462351×10^9 gigaelectronvolts. === Power === 1 foot pound-force per second is equivalent to: 1.355818 watts; 1.818182×10^−3 horsepower. Related conversions: 1 watt ≈ 44.25373 ft⋅lbf/min ≈ 0.7375621 ft⋅lbf/s; 1 horsepower (mechanical) = 33,000 ft⋅lbf/min = 550 ft⋅lbf/s. == See also == Conversion of units Pound-foot (torque) Poundal Slug (unit) Units of energy == References ==
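The conversion factors above can be checked with a few lines of arithmetic. The sketch below takes only the factor 1 ft⋅lbf = 1.355818 J from the text and recomputes the derived figures; the calorie and BTU definitions used (thermochemical calorie, International Table BTU) are assumptions made for the illustration.

```python
# Quick numeric check of the energy and power conversions listed above.
# Only 1 ft*lbf = 1.355818 J is taken from the text; the rest is recomputed.

FT_LBF_IN_JOULES = 1.355818           # energy: 1 ft*lbf in joules
CAL_IN_JOULES = 4.184                 # thermochemical calorie (assumed definition)
BTU_IN_JOULES = 1055.056              # International Table BTU (assumed definition)
EV_IN_JOULES = 1.602176634e-19        # electronvolt
HP_IN_WATTS = 550 * FT_LBF_IN_JOULES  # 1 mechanical horsepower = 550 ft*lbf/s

print(FT_LBF_IN_JOULES / CAL_IN_JOULES)   # ~0.32405 cal
print(FT_LBF_IN_JOULES / BTU_IN_JOULES)   # ~1.2851e-3 BTU
print(FT_LBF_IN_JOULES / EV_IN_JOULES)    # ~8.462e18 eV
print(FT_LBF_IN_JOULES / HP_IN_WATTS)     # ~1.818e-3 hp for 1 ft*lbf/s
print(1 / FT_LBF_IN_JOULES)               # ~0.7376 ft*lbf per watt-second
```

Running this reproduces the listed figures to the stated precision, which is a quick way to confirm which calorie and BTU definitions the table is using.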
Wikipedia/Foot-pound_force
In mathematical knot theory, a link is a collection of knots that do not intersect, but which may be linked (or knotted) together. A knot can be described as a link with one component. Links and knots are studied in a branch of mathematics called knot theory. Implicit in this definition is that there is a trivial reference link, usually called the unlink, but the word is also sometimes used in context where there is no notion of a trivial link. For example, a co-dimension 2 link in 3-dimensional space is a subspace of 3-dimensional Euclidean space (or often the 3-sphere) whose connected components are homeomorphic to circles. The simplest nontrivial example of a link with more than one component is called the Hopf link, which consists of two circles (or unknots) linked together once. The circles in the Borromean rings are collectively linked despite the fact that no two of them are directly linked. The Borromean rings thus form a Brunnian link and in fact constitute the simplest such link. == Generalizations == The notion of a link can be generalized in a number of ways. === General manifolds === Frequently the word link is used to describe any submanifold of the sphere S n {\displaystyle S^{n}} diffeomorphic to a disjoint union of a finite number of spheres, S j {\displaystyle S^{j}} . In full generality, the word link is essentially the same as the word knot – the context is that one has a submanifold M of a manifold N (considered to be trivially embedded) and a non-trivial embedding of M in N, non-trivial in the sense that the 2nd embedding is not isotopic to the 1st. If M is disconnected, the embedding is called a link (or said to be linked). If M is connected, it is called a knot. === Tangles, string links, and braids === While (1-dimensional) links are defined as embeddings of circles, it is often interesting and especially technically useful to consider embedded intervals (strands), as in braid theory. Most generally, one can consider a tangle – a tangle is an embedding T : X → R 2 × I {\displaystyle T\colon X\to \mathbf {R} ^{2}\times I} of a (smooth) compact 1-manifold with boundary ( X , ∂ X ) {\displaystyle (X,\partial X)} into the plane times the interval I = [ 0 , 1 ] , {\displaystyle I=[0,1],} such that the boundary T ( ∂ X ) {\displaystyle T(\partial X)} is embedded in R × { 0 , 1 } {\displaystyle \mathbf {R} \times \{0,1\}} ( { 0 , 1 } = ∂ I {\displaystyle \{0,1\}=\partial I} ). The type of a tangle is the manifold X, together with a fixed embedding of ∂ X . {\displaystyle \partial X.} Concretely, a connected compact 1-manifold with boundary is an interval I = [ 0 , 1 ] {\displaystyle I=[0,1]} or a circle S 1 {\displaystyle S^{1}} (compactness rules out the open interval ( 0 , 1 ) {\displaystyle (0,1)} and the half-open interval [ 0 , 1 ) , {\displaystyle [0,1),} neither of which yields non-trivial embeddings since the open end means that they can be shrunk to a point), so a possibly disconnected compact 1-manifold is a collection of n intervals I = [ 0 , 1 ] {\displaystyle I=[0,1]} and m circles S 1 . {\displaystyle S^{1}.} The condition that the boundary of X lies in R × { 0 , 1 } {\displaystyle \mathbf {R} \times \{0,1\}} says that intervals either connect two lines or connect two points on one of the lines, but imposes no conditions on the circles. 
One may view tangles as having a vertical direction (I), lying between and possibly connecting two lines ( R × 0 {\displaystyle \mathbf {R} \times 0} and R × 1 {\displaystyle \mathbf {R} \times 1} ), and then being able to move in a two-dimensional horizontal direction ( R 2 {\displaystyle \mathbf {R} ^{2}} ) between these lines; one can project these to form a tangle diagram, analogous to a knot diagram. Tangles include links (if X consists of circles only), braids, and others besides – for example, a strand connecting the two lines together with a circle linked around it. In this context, a braid is defined as a tangle which is always going down – whose derivative always has a non-zero component in the vertical (I) direction. In particular, it must consist solely of intervals, and not double back on itself; however, no specification is made on where on the line the ends lie. A string link is a tangle consisting of only intervals, with the ends of each strand required to lie at (0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1), ... – i.e., connecting the integers, and ending in the same order that they began (one may use any other fixed set of points); if this has ℓ components, we call it an "ℓ-component string link". A string link need not be a braid – it may double back on itself, such as a two-component string link that features an overhand knot. A braid that is also a string link is called a pure braid, and corresponds with the usual such notion. The key technical value of tangles and string links is that they have algebraic structure. Isotopy classes of tangles form a tensor category, where for the category structure, one can compose two tangles if the bottom end of one equals the top end of the other (so the boundaries can be stitched together), by stacking them – they do not literally form a category (pointwise) because there is no identity, since even a trivial tangle takes up vertical space, but up to isotopy they do. The tensor structure is given by juxtaposition of tangles – putting one tangle to the right of the other. For a fixed ℓ, isotopy classes of ℓ-component string links form a monoid (one can compose all ℓ-component string links, and there is an identity), but not a group, as isotopy classes of string links need not have inverses. However, concordance classes (and thus also homotopy classes) of string links do have inverses, where inverse is given by flipping the string link upside down, and thus form a group. Every link can be cut apart to form a string link, though this is not unique, and invariants of links can sometimes be understood as invariants of string links – this is the case for Milnor's invariants, for instance. Compare with closed braids. == See also == Hyperbolic link Unlink Link group == References ==
Wikipedia/Link_(knot_theory)
Quantum topology is a branch of mathematics that connects quantum mechanics with low-dimensional topology. Dirac notation provides a viewpoint of quantum mechanics which becomes amplified into a framework that can embrace the amplitudes associated with topological spaces and the related embedding of one space within another such as knots and links in three-dimensional space. This bra–ket notation of kets and bras can be generalised, becoming maps of vector spaces associated with topological spaces that allow tensor products. Topological entanglement involving linking and braiding can be intuitively related to quantum entanglement. == See also == Topological quantum field theory Reshetikhin–Turaev invariant == References == == External links == Quantum Topology, a journal published by EMS Publishing House
Wikipedia/Quantum_topology
Physical knot theory is the study of mathematical models of knotting phenomena, often motivated by considerations from biology, chemistry, and physics (Kauffman 1991). Physical knot theory is used to study how geometric and topological characteristics of filamentary structures, such as magnetic flux tubes, vortex filaments, polymers, DNAs, influence their physical properties and functions. It has applications in various fields of science, including topological fluid dynamics, structural complexity analysis and DNA biology (Kauffman 1991, Ricca 1998). Traditional knot theory models a knot as a simple closed loop in three-dimensional space. Such a knot has no thickness or physical properties such as tension or friction. Physical knot theory incorporates more realistic models. The traditional model is also studied but with an eye toward properties of specific embeddings ("conformations") of the circle. Such properties include ropelength and various knot energies (O’Hara 2003). Most of the work discussed in this article and in the references below is not concerned with knots tied in physical pieces of rope. For the more specific physics of such knots, see Knot: Physical theory of friction knots. == References == Kauffman, L.H. (1991) Knots and Physics. Series on Knots and Everything 1, World Scientific. Kauffman, L.H., Editor (1991) Knots and Applications. Series on Knots and Everything 6, World Scientific. O’Hara, J. (2003) Energy of Knots and Conformal Geometry. Series on Knots and Everything 33, World Scientific. Ricca, R.L. (1998) Applications of knot theory in fluid mechanics. In Knot Theory (ed. V.F.R. Jones et al.), pp. 321–346. Banach Center Publs. 42, Polish Academy of Sciences, Warsaw.
Wikipedia/Physical_knot_theory
The vortex theory of the atom was a 19th-century attempt by William Thomson (later Lord Kelvin) to explain why the atoms recently discovered by chemists came in only relatively few varieties but in very great numbers of each kind. Based on the idea of stable, knotted vortices in the ether or aether, it contributed an important mathematical legacy. == Description == The vortex theory of the atom was based on the observation that a stable vortex can be created in a fluid by making it into a ring with no ends. Such vortices could be sustained in the luminiferous aether, a hypothetical fluid thought at the time to pervade all of space. In the vortex theory of the atom, a chemical atom is modelled by such a vortex in the aether. Knots can be tied in the core of such a vortex, leading to the hypothesis that each chemical element corresponds to a different kind of knot. The simple toroidal vortex, represented by the circular "unknot" 01, was thought to represent hydrogen. Many elements had yet to be discovered, so the next knot, the trefoil knot 31, was thought to represent carbon. However, as more elements were discovered and the periodicity of their characteristics established in the periodic table of the elements, it became clear that this could not be explained by any rational classification of knots. This, together with the discovery of subatomic particles such as the electron, led to the theory being abandoned. == History == Between 1870 and 1890 the vortex atom theory, which hypothesised that an atom was a vortex in the aether, was popular among British physicists and mathematicians. William Thomson, who became better known as Lord Kelvin, first conjectured that atoms might be vortices in the aether that pervades space. About 60 scientific papers were subsequently written on it by approximately 25 scientists. === Origins === In the seventeenth century Descartes developed a theory of vortex motion to explain such things as why light radiated in all directions and the planets moved in circular orbits. He believed that there was no vacuum and any object which moved had to be entering a gap left by another moving object. He realised that a circular chain of such objects, all replacing each other, would enable such movement. Thus, all movement consisted of endless circular vortices at all scales. However his Treatise on Light remained unfinished. Hermann Helmholtz realized in the mid-19th century that the core of a vortex, analogous to the eye of a hurricane, is a line-like filament that can become tangled up with other filaments in a knotted loop that cannot come undone. It is not necessary for the core to circulate, as it did in the Cartesian model. Helmholtz also showed that vortices exert forces on one another, and those forces take a form analogous to the magnetic forces between electrical wires. During the intervening period, chemist John Dalton had developed his atomic theory of matter. It remained only to bring the two strands of discovery together. === William Thomson (Lord Kelvin) === William Thomson, later to become Lord Kelvin, became concerned with the nature of Dalton's chemical elements, whose atoms appeared in only a few forms but in vast numbers. He was inspired by Helmholtz's findings, reasoning that the aether, a substance then hypothesised to pervade all of space, should be capable of supporting such stable vortices. According to Helmholtz’s theorems, these vortices would correspond to different kinds of knot. 
Thomson suggested that each type of knot might represent an atom of a different chemical element. He further speculated that multiple knots might aggregate into molecules of somewhat lower stability. He published his paper "On Vortex Atoms" in the Proceedings of the Royal Society of Edinburgh in 1867. === Peter Tait === Thomson's colleague Peter Guthrie Tait was attracted by the vortex atom theory and undertook a pioneering study of knots, producing a systematic classification of those with up to 10 crossings, in the hope of thus systematizing the various elements. === J. J. Thomson === J. J. Thomson took up the challenge in his 1883 Master's degree thesis, a Treatise on the motion of vortex rings. In it, Thomson developed a mathematical treatment of the motions of William Thomson and Peter Tait's atoms. When Thomson later discovered the electron (for which he received a Nobel Prize), he abandoned his "nebular atom" hypothesis based on the vortex atomic theory, in favour of his plum pudding model. === Legacy === Tait's work especially founded the branch of topology called knot theory, with J. J. Thomson providing some early mathematical advancements. Kelvin's insight continues to inspire new mathematics and has led to persistence of the topic in the history of science. === Other === The vortex theory of the atom seems to have an extension in skyrmion and Einstein–Cartan theory. == See also == Loop quantum gravity Magnetic skyrmion, a vortex-like magnetic quasiparticle Quantum vortex, a quantised flux circulation Toroidal ring model of elementary particles == References == === Citations === === Bibliography === Kragh, Helge (2002). "The Vortex Atom: A Victorian Theory of Everything". Centaurus. 44 (1–2): 32–114. doi:10.1034/j.1600-0498.2002.440102.x. ISSN 0008-8994. Retrieved 9 March 2019.
Wikipedia/Vortex_theory_of_the_atom
The Tait conjectures are three conjectures made by the 19th-century mathematician Peter Guthrie Tait in his study of knots. The Tait conjectures involve concepts in knot theory such as alternating knots, chirality, and writhe. All of the Tait conjectures have been solved, the most recent being the flyping conjecture. == Background == Tait came up with his conjectures after his attempt to tabulate all knots in the late 19th century. As one of the founders of the field of knot theory, Tait worked without a mathematically rigorous framework, and it is unclear whether he intended the conjectures to apply to all knots, or just to alternating knots. It turns out that most of them are only true for alternating knots. In the Tait conjectures, a knot diagram is called "reduced" if all the "isthmi", or "nugatory crossings", have been removed. == Crossing number of alternating knots == Tait conjectured that in certain circumstances, crossing number was a knot invariant, specifically: Any reduced diagram of an alternating link has the fewest possible crossings. In other words, the crossing number of a reduced, alternating link is an invariant of the knot. This conjecture was proved by Louis Kauffman, Kunio Murasugi (村杉 邦男), and Morwen Thistlethwaite in 1987, using the Jones polynomial. A geometric proof, not using knot polynomials, was given in 2017 by Joshua Greene. == Writhe and chirality == A second conjecture of Tait: An amphicheiral (or acheiral) alternating link has zero writhe. This conjecture was also proved by Kauffman and Thistlethwaite. == Flyping == The Tait flyping conjecture can be stated: Given any two reduced alternating diagrams D 1 {\displaystyle D_{1}} and D 2 {\displaystyle D_{2}} of an oriented, prime alternating link: D 1 {\displaystyle D_{1}} may be transformed to D 2 {\displaystyle D_{2}} by means of a sequence of certain simple moves called flypes. The Tait flyping conjecture was proved by Thistlethwaite and William Menasco in 1991. The Tait flyping conjecture implies some more of Tait's conjectures: Any two reduced diagrams of the same alternating knot have the same writhe. This follows because flyping preserves writhe. This was proved earlier by Murasugi and Thistlethwaite. It also follows from Greene's work. For non-alternating knots this conjecture is not true; the Perko pair is a counterexample. This result also implies the following conjecture: Alternating amphicheiral knots have even crossing number. This follows because a knot's mirror image has opposite writhe. This conjecture is again only true for alternating knots: non-alternating amphichiral knots with crossing number 15 exist. == See also == Prime knot Tangle (knot theory) == References ==
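To illustrate how the writhe criterion above is used in practice: the writhe of a diagram is the sum of its crossing signs, so Tait's second conjecture (now a theorem) immediately shows that an alternating knot whose reduced diagram has nonzero writhe cannot be amphicheiral. A minimal sketch, using the standard three-crossing trefoil diagram as the example:

```python
# The writhe of a diagram is the sum of its crossing signs (+1 or -1).
# By Tait's second conjecture (now a theorem), a reduced alternating
# diagram with nonzero writhe cannot represent an amphicheiral link.

def writhe(crossing_signs):
    return sum(crossing_signs)

# Standard reduced alternating diagram of the right-handed trefoil:
# three positive crossings.
trefoil_signs = [+1, +1, +1]

w = writhe(trefoil_signs)
print(f"writhe = {w}")
if w != 0:
    print("Nonzero writhe: by Tait's theorem this alternating knot is chiral.")
```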
Wikipedia/Tait_conjectures
In knot theory, Conway notation, invented by John Horton Conway, is a way of describing knots that makes many of their properties clear. It describes a knot by building it up from tangles using certain operations. == Basic concepts == === Tangles === In Conway notation, the tangles are generally algebraic 2-tangles. This means their tangle diagrams consist of 2 arcs and 4 points on the edge of the diagram; furthermore, they are built up from rational tangles using the Conway operations. (The following describes only the integer and 1/n rational tangles.) Tangles consisting only of positive crossings are denoted by the number of crossings; a tangle consisting only of negative crossings is denoted by a negative number. If the arcs are not crossed, or can be transformed into an uncrossed position with the Reidemeister moves, the tangle is called the 0 or ∞ tangle, depending on the orientation of the tangle. === Operations on tangles === If a tangle, a, is reflected on the NW-SE line, it is denoted by −a. (Note that this is different from a tangle with a negative number of crossings.) Tangles have three binary operations: sum, product, and ramification; however, all can be explained using tangle addition and negation. The tangle product, a b, is equivalent to −a+b, and the ramification, a,b, is equivalent to −a+−b. == Advanced concepts == Rational tangles are equivalent if and only if their fractions are equal. An accessible proof of this fact is given in (Kauffman and Lambropoulou 2004). A number before an asterisk, *, denotes the polyhedron number; multiple asterisks indicate that multiple polyhedra of that number exist. == See also == Conway knot Dowker notation Alexander–Briggs notation Gauss notation == References == == Further reading == Conway, J.H. (1970). "An Enumeration of Knots and Links, and Some of Their Algebraic Properties" (PDF). In Leech, J. (ed.). Computational Problems in Abstract Algebra. Pergamon Press. pp. 329–358. ISBN 0080129757. Kauffman, Louis H.; Lambropoulou, Sofia (2004). "On the classification of rational tangles". Advances in Applied Mathematics. 33 (2): 199–237. arXiv:math/0311499. doi:10.1016/j.aam.2003.06.002. S2CID 119143716.
Wikipedia/Conway_notation_(knot_theory)
In the mathematical area of knot theory, the crossing number of a knot is the smallest number of crossings of any diagram of the knot. It is a knot invariant. == Examples == By way of example, the unknot has crossing number zero, the trefoil knot three and the figure-eight knot four. There are no other knots with a crossing number this low, and just two knots have crossing number five, but the number of knots with a particular crossing number increases rapidly as the crossing number increases. == Tabulation == Tables of prime knots are traditionally indexed by crossing number, with a subscript to indicate which particular knot out of those with this many crossings is meant (this sub-ordering is not based on anything in particular, except that torus knots then twist knots are listed first). The listing goes 31 (the trefoil knot), 41 (the figure-eight knot), 51, 52, 61, etc. This order has not changed significantly since P. G. Tait published a tabulation of knots in 1877. == Additivity == There has been very little progress on understanding the behavior of crossing number under rudimentary operations on knots. A big open question asks if the crossing number is additive when taking knot sums. It is also expected that a satellite of a knot K should have larger crossing number than K, but this has not been proven. Additivity of crossing number under knot sum has been proven for special cases, for example if the summands are alternating knots (or more generally, adequate knot), or if the summands are torus knots. Marc Lackenby has also given a proof that there is a constant N > 1 such that ⁠1/N⁠(cr(K1) + cr(K2)) ≤ cr(K1 + K2), but his method, which utilizes normal surfaces, cannot improve N to 1. == Applications in bioinformatics == There are connections between the crossing number of a knot and the physical behavior of DNA knots. For prime DNA knots, crossing number is a good predictor of the relative velocity of the DNA knot in agarose gel electrophoresis. Basically, the higher the crossing number, the faster the relative velocity. For composite knots, this does not appear to be the case, although experimental conditions can drastically change the results. == Related invariants == There are related concepts of average crossing number and asymptotic crossing number. Both of these quantities bound the standard crossing number. Asymptotic crossing number is conjectured to be equal to crossing number. Other numerical knot invariants include the bridge number, linking number, stick number, and unknotting number. == References ==
Wikipedia/Crossing_number_(knot_theory)
In mathematics, real algebraic geometry is the sub-branch of algebraic geometry studying real algebraic sets, i.e. real-number solutions to algebraic equations with real-number coefficients, and mappings between them (in particular real polynomial mappings). Semialgebraic geometry is the study of semialgebraic sets, i.e. real-number solutions to algebraic inequalities with-real number coefficients, and mappings between them. The most natural mappings between semialgebraic sets are semialgebraic mappings, i.e., mappings whose graphs are semialgebraic sets. == Terminology == Nowadays the words 'semialgebraic geometry' and 'real algebraic geometry' are used as synonyms, because real algebraic sets cannot be studied seriously without the use of semialgebraic sets. For example, a projection of a real algebraic set along a coordinate axis need not be a real algebraic set, but it is always a semialgebraic set: this is the Tarski–Seidenberg theorem. Related fields are o-minimal theory and real analytic geometry. Examples: Real plane curves are examples of real algebraic sets and polyhedra are examples of semialgebraic sets. Real algebraic functions and Nash functions are examples of semialgebraic mappings. Piecewise polynomial mappings (see the Pierce–Birkhoff conjecture) are also semialgebraic mappings. Computational real algebraic geometry is concerned with the algorithmic aspects of real algebraic (and semialgebraic) geometry. The main algorithm is cylindrical algebraic decomposition. It is used to cut semialgebraic sets into nice pieces and to compute their projections. Real algebra is the part of algebra which is relevant to real algebraic (and semialgebraic) geometry. It is mostly concerned with the study of ordered fields and ordered rings (in particular real closed fields) and their applications to the study of positive polynomials and sums-of-squares of polynomials. (See Hilbert's 17th problem and Krivine's Positivestellensatz.) The relation of real algebra to real algebraic geometry is similar to the relation of commutative algebra to complex algebraic geometry. Related fields are the theory of moment problems, convex optimization, the theory of quadratic forms, valuation theory and model theory. == Timeline of real algebra and real algebraic geometry == 1826 Fourier's algorithm for systems of linear inequalities. Rediscovered by Lloyd Dines in 1919 and Theodore Motzkin in 1936. 1835 Sturm's theorem on real root counting 1856 Hermite's theorem on real root counting. 1876 Harnack's curve theorem. (This bound on the number of components was later extended to all Betti numbers of all real algebraic sets and all semialgebraic sets.) 1888 Hilbert's theorem on ternary quartics. 1900 Hilbert's problems (especially the 16th and the 17th problem) 1902 Farkas' lemma (Can be reformulated as linear positivstellensatz.) 1914 Annibale Comessatti showed that not every real algebraic surface is birational to RP2 1916 Fejér's conjecture about nonnegative trigonometric polynomials. (Solved by Frigyes Riesz.) 1927 Emil Artin's solution of Hilbert's 17th problem 1927 Krull–Baer Theorem (connection between orderings and valuations) 1928 Pólya's Theorem on positive polynomials on a simplex 1929 B. L. van der Waerden sketches a proof that real algebraic and semialgebraic sets are triangularizable, but the necessary tools have not been developed to make the argument rigorous. 1931 Alfred Tarski's real quantifier elimination. Improved and popularized by Abraham Seidenberg in 1954. (Both use Sturm's theorem.) 
1936 Herbert Seifert proved that every closed smooth submanifold of R n {\displaystyle \mathbb {R} ^{n}} with trivial normal bundle, can be isotoped to a component of a nonsingular real algebraic subset of R n {\displaystyle \mathbb {R} ^{n}} which is a complete intersection (from the conclusion of this theorem the word "component" can not be removed). 1940 Marshall Stone's representation theorem for partially ordered rings. Improved by Richard Kadison in 1951 and Donald Dubois in 1967 (Kadison–Dubois representation theorem). Further improved by Mihai Putinar in 1993 and Jacobi in 2001 (Putinar–Jacobi representation theorem). 1952 John Nash proved that every closed smooth manifold is diffeomorphic to a nonsingular component of a real algebraic set. 1956 Pierce–Birkhoff conjecture formulated. (Solved in dimensions ≤ 2.) 1964 Krivine's Nullstellensatz and Positivestellensatz. Rediscovered and popularized by Stengle in 1974. (Krivine uses real quantifier elimination while Stengle uses Lang's homomorphism theorem.) 1964 Lojasiewicz triangulated semi-analytic sets 1964 Heisuke Hironaka proved the resolution of singularity theorem 1964 Hassler Whitney proved that every analytic variety admits a stratification satisfying the Whitney conditions. 1967 Theodore Motzkin finds a positive polynomial which is not a sum of squares of polynomials. 1972 Vladimir Rokhlin proved Gudkov's conjecture. 1973 Alberto Tognoli proved that every closed smooth manifold is diffeomorphic to a nonsingular real algebraic set. 1975 George E. Collins discovers cylindrical algebraic decomposition algorithm, which improves Tarski's real quantifier elimination and allows to implement it on a computer. 1973 Jean-Louis Verdier proved that every subanalytic set admits a stratification with condition (w). 1979 Michel Coste and Marie-Françoise Roy discover the real spectrum of a commutative ring. 1980 Oleg Viro introduced the "patch working" technique and used it to classify real algebraic curves of low degree. Later Ilya Itenberg and Viro used it to produce counterexamples to the Ragsdale conjecture, and Grigory Mikhalkin applied it to tropical geometry for curve counting. 1980 Selman Akbulut and Henry C. King gave a topological characterization of real algebraic sets with isolated singularities, and topologically characterized nonsingular real algebraic sets (not necessarily compact) 1980 Akbulut and King proved that every knot in S n {\displaystyle S^{n}} is the link of a real algebraic set with isolated singularity in R n + 1 {\displaystyle \mathbb {R} ^{n+1}} 1981 Akbulut and King proved that every compact PL manifold is PL homeomorphic to a real algebraic set. 1983 Akbulut and King introduced "Topological Resolution Towers" as topological models of real algebraic sets, from this they obtained new topological invariants of real algebraic sets, and topologically characterized all 3-dimensional algebraic sets. These invariants later generalized by Michel Coste and Krzysztof Kurdyka as well as Clint McCrory and Adam Parusiński. 1984 Ludwig Bröcker's theorem on minimal generation of basic open semialgebraic sets (improved and extended to basic closed semialgebraic sets by Scheiderer.) 1984 Benedetti and Dedo proved that not every closed smooth manifold is diffeomorphic to a totally algebraic nonsingular real algebraic set (totally algebraic means all its Z/2Z-homology cycles are represented by real algebraic subsets). 
1991 Akbulut and King proved that every closed smooth manifold is homeomorphic to a totally algebraic real algebraic set. 1991 Schmüdgen's solution of the multidimensional moment problem for compact semialgebraic sets and related strict positivstellensatz. Algebraic proof found by Wörmann. Implies Reznick's version of Artin's theorem with uniform denominators. 1992 Akbulut and King proved ambient versions of the Nash-Tognoli theorem: Every closed smooth submanifold of Rn is isotopic to the nonsingular points (component) of a real algebraic subset of Rn, and they extended this result to immersed submanifolds of Rn. 1992 Benedetti and Marin proved that every compact closed smooth 3-manifold M can be obtained from S 3 {\displaystyle S^{3}} by a sequence of blow ups and downs along smooth centers, and that M is homeomorphic to a possibly singular affine real algebraic rational threefold 1997 Bierstone and Milman proved a canonical resolution of singularities theorem 1997 Mikhalkin proved that every closed smooth n-manifold can be obtained from S n {\displaystyle S^{n}} by a sequence of topological blow ups and downs 1998 János Kollár showed that not every closed 3-manifold is a projective real 3-fold which is birational to RP3 2000 Scheiderer's local-global principle and related non-strict extension of Schmüdgen's positivstellensatz in dimensions ≤ 2. 2000 János Kollár proved that every closed smooth 3–manifold is the real part of a compact complex manifold which can be obtained from C P 3 {\displaystyle \mathbb {CP} ^{3}} by a sequence of real blow ups and blow downs. 2003 Welschinger introduces an invariant for counting real rational curves 2005 Akbulut and King showed that not every nonsingular real algebraic subset of RPn is smoothly isotopic to the real part of a nonsingular complex algebraic subset of CPn == References == S. Akbulut and H.C. King, Topology of real algebraic sets, MSRI Pub, 25. Springer-Verlag, New York (1992) ISBN 0-387-97744-9 Bochnak, Jacek; Coste, Michel; Roy, Marie-Françoise. Real Algebraic Geometry. Translated from the 1987 French original. Revised by the authors. Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)], 36. Springer-Verlag, Berlin, 1998. x+430 pp. ISBN 3-540-64663-9 Basu, Saugata; Pollack, Richard; Roy, Marie-Françoise Algorithms in real algebraic geometry. Second edition. Algorithms and Computation in Mathematics, 10. Springer-Verlag, Berlin, 2006. x+662 pp. ISBN 978-3-540-33098-1; 3-540-33098-4 Marshall, Murray Positive polynomials and sums of squares. Mathematical Surveys and Monographs, 146. American Mathematical Society, Providence, RI, 2008. xii+187 pp. ISBN 978-0-8218-4402-1; 0-8218-4402-4 == Notes == == External links == The Role of Hilbert Problems in Real Algebraic Geometry (PostScript) Real Algebraic and Analytic Geometry Preprint Server
Wikipedia/Real_algebraic_set
In mathematics, a tangle is generally one of two related concepts: In John Conway's definition, an n-tangle is a proper embedding of the disjoint union of n arcs into a 3-ball; the embedding must send the endpoints of the arcs to 2n marked points on the ball's boundary. In link theory, a tangle is an embedding of n arcs and m circles into R 2 × [ 0 , 1 ] {\displaystyle \mathbf {R} ^{2}\times [0,1]} – the difference from the previous definition is that it includes circles as well as arcs, and partitions the boundary into two (isomorphic) pieces, which is algebraically more convenient – it allows one to add tangles by stacking them, for instance. (A quite different use of 'tangle' appears in Graph minors X. Obstructions to tree-decomposition by N. Robertson and P. D. Seymour, Journal of Combinatorial Theory B 52 (1991) 153–190, who used it to describe separation in graphs. This usage has been extended to matroids.) The balance of this article discusses Conway's sense of tangles; for the link theory sense, see that article. Two n-tangles are considered equivalent if there is an ambient isotopy of one tangle to the other keeping the boundary of the 3-ball fixed. Tangle theory can be considered analogous to knot theory except, instead of closed loops, strings whose ends are nailed down are used. See also braid theory. == Tangle diagrams == Without loss of generality, consider the marked points on the 3-ball boundary to lie on a great circle. The tangle can be arranged to be in general position with respect to the projection onto the flat disc bounded by the great circle. The projection then gives us a tangle diagram, where we make note of over and undercrossings as with knot diagrams. Tangles often show up as tangle diagrams in knot or link diagrams and can be used as building blocks for link diagrams, e.g. pretzel links. == Rational and algebraic tangles == A rational tangle is a 2-tangle that is homeomorphic to the trivial 2-tangle by a map of pairs consisting of the 3-ball and two arcs. The four endpoints of the arcs on the boundary circle of a tangle diagram are usually referred as NE, NW, SW, SE, with the symbols referring to the compass directions. An arbitrary tangle diagram of a rational tangle may look very complicated, but there is always a diagram of a particular simple form: start with a tangle diagram consisting of two horizontal (vertical) arcs; add a "twist", i.e. a single crossing by switching the NE and SE endpoints (SW and SE endpoints); continue by adding more twists using either the NE and SE endpoints or the SW and SE endpoints. One can suppose each twist does not change the diagram inside a disc containing previously created crossings. We can describe such a diagram by considering the numbers given by consecutive twists around the same set of endpoints, e.g. (2, 1, -3) means start with two horizontal arcs, then 2 twists using NE/SE endpoints, then 1 twist using SW/SE endpoints, and then 3 twists using NE/SE endpoints but twisting in the opposite direction from before. The list begins with 0 if you start with two vertical arcs. The diagram with two horizontal arcs is then (0), but we assign (0, 0) to the diagram with vertical arcs. A convention is needed to describe a "positive" or "negative" twist. Often, "rational tangle" refers to a list of numbers representing a simple diagram as described. 
The fraction of a rational tangle ( a 0 , a 1 , a 2 , … ) {\displaystyle (a_{0},a_{1},a_{2},\dots )} is then defined as the number given by the continued fraction [ a n , a n − 1 , a n − 2 , … ] {\displaystyle [a_{n},a_{n-1},a_{n-2},\dots ]} . The fraction given by (0,0) is defined as ∞ {\displaystyle \infty } . Conway proved that the fraction is well-defined and completely determines the rational tangle up to tangle equivalence. An accessible proof of this fact is given in the references listed below. Conway also defined a fraction of an arbitrary tangle by using the Alexander polynomial. === Operations on tangles === There is an "arithmetic" of tangles with addition, multiplication, and reciprocal operations. An algebraic tangle is obtained from the addition and multiplication of rational tangles. The numerator closure of a rational tangle is defined as the link obtained by joining the two "north" endpoints together and the two "south" endpoints together. The denominator closure is defined similarly by grouping the "east" and "west" endpoints. Rational links are defined to be such closures of rational tangles. == Conway notation == One motivation for Conway's study of tangles was to provide a notation for knots more systematic than the traditional enumeration found in tables. == Applications == Tangles have been shown to be useful in studying DNA topology. The action of a given enzyme can be analysed with the help of tangle theory. == See also == Tanglement puzzle == References == == Further reading == Adams, C. C. (2004). The Knot Book: An elementary introduction to the mathematical theory of knots. Providence, RI: American Mathematical Society. pp. xiv+307. ISBN 0-8218-3678-1. == External links == MacKay, David. "Metapost code for drawing tangles and other pictures". Inference Group. Retrieved 2018-04-13. Goldman, Jay R.; Kauffman, Louis H. (1997). "Rational Tangles" (PDF). Advances in Applied Mathematics. 18 (3): 300–332. doi:10.1006/aama.1996.0511.
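The continued-fraction definition above is easy to evaluate directly. The sketch below computes the fraction of a rational tangle from its twist list, using the example list (2, 1, -3) from the construction described earlier; the helper function name is an arbitrary choice for illustration.

```python
# The fraction of a rational tangle given by the twist list (a0, a1, ..., an)
# is the continued fraction a_n + 1/(a_{n-1} + 1/( ... + 1/a_0)).
# Lists beginning with 0 (the 0 and infinity tangles) need separate handling,
# since they lead to division by zero here.

from fractions import Fraction

def tangle_fraction(twists):
    """twists: list [a0, a1, ..., an] of signed twist counts."""
    value = Fraction(twists[0])
    for a in twists[1:]:
        value = a + 1 / value   # build the continued fraction outward
    return value

print(tangle_fraction([2, 1, -3]))   # -7/3
print(tangle_fraction([2]))          # 2 (an integer tangle of two positive twists)
```

For the example list (2, 1, -3) this evaluates -3 + 1/(1 + 1/2) = -7/3, and by Conway's theorem any rational tangle with the same fraction is equivalent to it.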
Wikipedia/Algebraic_tangle
In the mathematical field of knot theory, a mutation is an operation on a knot that can produce different knots. Suppose K is a knot given in the form of a knot diagram. Consider a disc D in the projection plane of the diagram whose boundary circle intersects K exactly four times. We may suppose that (after planar isotopy) the disc is geometrically round and the four points of intersection on its boundary with K are equally spaced. The part of the knot inside the disc is a tangle. There are two reflections that switch pairs of endpoints of the tangle. There is also a rotation that results from composition of the reflections. A mutation replaces the original tangle by a tangle given by any of these operations. The result will always be a knot and is called a mutant of K. Mutants can be difficult to distinguish, as they share many of the same invariants: they have the same hyperbolic volume (by a result of Ruberman) and the same HOMFLY polynomials. == Examples == The Conway and Kinoshita–Terasaka knots are a mutant pair; they are distinguished by their knot genus, which is 3 and 2, respectively. == References == == Further reading == Colin Adams, The Knot Book, American Mathematical Society, ISBN 0-8050-7380-9 == External links == A list of pairs of mutant knots
Wikipedia/Mutation_(knot_theory)
In differential geometry, a ribbon (or strip) is the combination of a smooth space curve and its corresponding normal vector. More formally, a ribbon denoted by ( X , U ) {\displaystyle (X,U)} includes a curve X {\displaystyle X} given by a three-dimensional vector X ( s ) {\displaystyle X(s)} , depending continuously on the curve arc-length s {\displaystyle s} ( a ≤ s ≤ b {\displaystyle a\leq s\leq b} ), and a unit vector U ( s ) {\displaystyle U(s)} perpendicular to ∂ X ∂ s ( s ) {\displaystyle {\partial X \over \partial s}(s)} at each point. Ribbons have seen particular application as regards DNA. == Properties and implications == The ribbon ( X , U ) {\displaystyle (X,U)} is called simple if X {\displaystyle X} is a simple curve (i.e. without self-intersections) and closed and if U {\displaystyle U} and all its derivatives agree at a {\displaystyle a} and b {\displaystyle b} . For any simple closed ribbon the curves X + ε U {\displaystyle X+\varepsilon U} given parametrically by X ( s ) + ε U ( s ) {\displaystyle X(s)+\varepsilon U(s)} are, for all sufficiently small positive ε {\displaystyle \varepsilon } , simple closed curves disjoint from X {\displaystyle X} . The ribbon concept plays an important role in the Călugăreanu formula, that states that L k = W r + T w , {\displaystyle Lk=Wr+Tw,} where L k {\displaystyle Lk} is the asymptotic (Gauss) linking number, the integer number of turns of the ribbon around its axis; W r {\displaystyle Wr} denotes the total writhing number (or simply writhe), a measure of non-planarity of the ribbon's axis curve; and T w {\displaystyle Tw} is the total twist number (or simply twist), the rate of rotation of the ribbon around its axis. Ribbon theory investigates geometric and topological aspects of a mathematical reference ribbon associated with physical and biological properties, such as those arising in topological fluid dynamics, DNA modeling and in material science. == See also == Bollobás–Riordan polynomial Knots and graphs Knot theory DNA supercoil Möbius strip == References == == Bibliography == Adams, Colin (2004), The Knot Book: An Elementary Introduction to the Mathematical Theory of Knots, American Mathematical Society, ISBN 0-8218-3678-1, MR 2079925 Călugăreanu, Gheorghe (1959), "L'intégrale de Gauss et l'analyse des nœuds tridimensionnels", Revue de Mathématiques Pure et Appliquées, 4: 5–20, MR 0131846 Călugăreanu, Gheorghe (1961), "Sur les classes d'isotopie des noeuds tridimensionels et leurs invariants", Czechoslovak Mathematical Journal, 11: 588–625, doi:10.21136/CMJ.1961.100486, MR 0149378 White, James H. (1969), "Self-linking and the Gauss integral in higher dimensions", American Journal of Mathematics, 91 (3): 693–728, doi:10.2307/2373348, JSTOR 2373348, MR 0253264
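The linking number Lk appearing in the Călugăreanu formula can be approximated numerically from the Gauss double integral by discretizing the two curves. The sketch below does this for two circles forming a Hopf link; the particular curves and the midpoint-rule discretization are illustrative choices, not part of the formula itself.

```python
# Numerical sketch of the Gauss linking integral:
#   Lk = (1/(4*pi)) * oint oint ((r1 - r2) . (dr1 x dr2)) / |r1 - r2|^3
# evaluated with a simple midpoint rule on two discretized closed curves.

import numpy as np

def segments(points):
    """Return midpoints and segment vectors of a closed polygonal curve."""
    nxt = np.roll(points, -1, axis=0)
    return (points + nxt) / 2.0, nxt - points

def gauss_linking_number(curve1, curve2):
    m1, d1 = segments(curve1)
    m2, d2 = segments(curve2)
    diff = m1[:, None, :] - m2[None, :, :]             # r1 - r2 for all segment pairs
    cross = np.cross(d1[:, None, :], d2[None, :, :])   # dr1 x dr2
    dist3 = np.linalg.norm(diff, axis=-1) ** 3
    return np.sum(np.einsum('ijk,ijk->ij', diff, cross) / dist3) / (4 * np.pi)

t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
# Unit circle in the xy-plane centred at the origin...
c1 = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
# ...and a unit circle in the xz-plane centred at (1, 0, 0): together a Hopf link.
c2 = np.stack([1.0 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)

print(round(gauss_linking_number(c1, c2), 3))   # approximately +1 or -1
```

Refining the discretization drives the result toward an integer, with the sign depending on the orientations chosen for the two curves; for unlinked circles the same sum converges to 0.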
Wikipedia/Ribbon_theory
When Topology Meets Chemistry: A Topological Look At Molecular Chirality is a book in chemical graph theory on the graph-theoretic analysis of chirality in molecular structures. It was written by Erica Flapan, based on a series of lectures she gave in 1996 at the Institut Henri Poincaré, and was published in 2000 by the Cambridge University Press and Mathematical Association of America as the first volume in their shared Outlooks book series. == Topics == A chiral molecule is a molecular structure that is different from its mirror image. This property, while seemingly abstract, can have big consequences in biochemistry, where the shape of molecules is essential to their chemical function, and where a chiral molecule can have very different biological activities from its mirror-image molecule. When Topology Meets Chemistry concerns the mathematical analysis of molecular chirality. The book has seven chapters, beginning with an introductory overview and ending with a chapter on the chirality of DNA molecules. Other topics covered through the book include the rigid geometric chirality of tree-like molecular structures such as tartaric acid, and the stronger topological chirality of molecules that cannot be deformed into their mirror image without breaking and re-forming some of their molecular bonds. It discusses results of Flapan and Jonathan Simon on molecules with the molecular structure of Möbius ladders, according to which every embedding of a Möbius ladder with an odd number of rungs is chiral while Möbius ladders with an even number of rungs have achiral embeddings. It uses the symmetries of graphs, in a result that the symmetries of certain graphs can always be extended to topological symmetries of three-dimensional space, from which it follows that non-planar graphs with no self-inverse symmetry are always chiral. It discusses graphs for which every embedding is topologically knotted or linked. And it includes material on the use of knot invariants to detect topological chirality. == Audience and reception == The book is self-contained, and requires only an undergraduate level of mathematics. It includes many exercises, making it suitable for use as a textbook at both the advanced undergraduate and introductory graduate levels. Reviewer Buks van Rensburg describes the book's presentation as "efficient and intuitive", and recommends the book to "every mathematician or chemist interested in the notions of chirality and symmetry". == References ==
Wikipedia/When_Topology_Meets_Chemistry
In geometric topology, the spherical space form conjecture (now a theorem) states that a finite group acting on the 3-sphere is conjugate to a group of isometries of the 3-sphere. == History == The conjecture was posed by Heinz Hopf in 1926 after determining the fundamental groups of three-dimensional spherical space forms as a generalization of the Poincaré conjecture to the non-simply connected case. == Status == The conjecture is implied by Thurston's geometrization conjecture, which was proven by Grigori Perelman in 2003. The conjecture was independently proven for groups whose actions have fixed points—this special case is known as the Smith conjecture. It is also proven for various groups acting without fixed points, such as cyclic groups whose orders are a power of two (George Livesay, Robert Myers) and cyclic groups of order 3 (J. Hyam Rubinstein). == See also == Killing–Hopf theorem == References ==
Wikipedia/Spherical_space_form_conjecture
In the mathematical field of geometric topology, the Poincaré conjecture (UK: , US: , French: [pwɛ̃kaʁe]) is a theorem about the characterization of the 3-sphere, which is the hypersphere that bounds the unit ball in four-dimensional space. Originally conjectured by Henri Poincaré in 1904, the theorem concerns spaces that locally look like ordinary three-dimensional space but which are finite in extent. Poincaré hypothesized that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. Attempts to resolve the conjecture drove much progress in the field of geometric topology during the 20th century. The eventual proof built upon Richard S. Hamilton's program of using the Ricci flow to solve the problem. By developing a number of new techniques and results in the theory of Ricci flow, Grigori Perelman was able to modify and complete Hamilton's program. In papers posted to the arXiv repository in 2002 and 2003, Perelman presented his work proving the Poincaré conjecture (and the more powerful geometrization conjecture of William Thurston). Over the next several years, several mathematicians studied his papers and produced detailed formulations of his work. Hamilton and Perelman's work on the conjecture is widely recognized as a milestone of mathematical research. Hamilton was recognized with the Shaw Prize in 2011 and the Leroy P. Steele Prize for Seminal Contribution to Research in 2009. The journal Science marked Perelman's proof of the Poincaré conjecture as the scientific Breakthrough of the Year in 2006. The Clay Mathematics Institute, having included the Poincaré conjecture in their well-known Millennium Prize Problem list, offered Perelman their prize of US$1 million in 2010 for the conjecture's resolution. He declined the award, saying that Hamilton's contribution had been equal to his own. == Overview == The Poincaré conjecture was a mathematical problem in the field of geometric topology. In terms of the vocabulary of that field, it says the following: Poincaré conjecture.Every three-dimensional topological manifold which is closed, connected, and has trivial fundamental group is homeomorphic to the three-dimensional sphere. Familiar shapes, such as the surface of a ball (which is known in mathematics as the two-dimensional sphere) or of a torus, are two-dimensional. The surface of a ball has trivial fundamental group, meaning that any loop drawn on the surface can be continuously deformed to a single point. By contrast, the surface of a torus has nontrivial fundamental group, as there are loops on the surface which cannot be so deformed. Both are topological manifolds which are closed (meaning that they have no boundary and take up a finite region of space) and connected (meaning that they consist of a single piece). Two closed manifolds are said to be homeomorphic when it is possible for the points of one to be reallocated to the other in a continuous way. Because the (non)triviality of the fundamental group is known to be invariant under homeomorphism, it follows that the two-dimensional sphere and torus are not homeomorphic. The two-dimensional analogue of the Poincaré conjecture says that any two-dimensional topological manifold which is closed and connected but non-homeomorphic to the two-dimensional sphere must possess a loop which cannot be continuously contracted to a point. (This is illustrated by the example of the torus, as above.) 
This analogue is known to be true via the classification of closed and connected two-dimensional topological manifolds, which was understood in various forms since the 1860s. In higher dimensions, the closed and connected topological manifolds do not have a straightforward classification, precluding an easy resolution of the Poincaré conjecture. == History == === Poincaré's question === In the 1800s, Bernhard Riemann and Enrico Betti initiated the study of topological invariants of manifolds. They introduced the Betti numbers, which associate to any manifold a list of nonnegative integers. Riemann showed that a closed connected two-dimensional manifold is fully characterized by its Betti numbers. As part of his 1895 paper Analysis Situs (announced in 1892), Poincaré showed that Riemann's result does not extend to higher dimensions. To do this he introduced the fundamental group as a novel topological invariant, and was able to exhibit examples of three-dimensional manifolds which have the same Betti numbers but distinct fundamental groups. He posed the question of whether the fundamental group is sufficient to topologically characterize a manifold (of given dimension), although he made no attempt to pursue the answer, saying only that it would "demand lengthy and difficult study". The primary purpose of Poincaré's paper was the interpretation of the Betti numbers in terms of his newly-introduced homology groups, along with the Poincaré duality theorem on the symmetry of Betti numbers. Following criticism of the completeness of his arguments, he released a number of subsequent "supplements" to enhance and correct his work. The closing remark of his second supplement, published in 1900, said: In order to avoid making this work too prolonged, I confine myself to stating the following theorem, the proof of which will require further developments: Each polyhedron which has all its Betti numbers equal to 1 and all its tables Tq orientable is simply connected, i.e., homeomorphic to a hypersphere. (In a modern language, taking note of the fact that Poincaré is using the terminology of simple-connectedness in an unusual way, this says that a closed connected oriented manifold with the homology of a sphere must be homeomorphic to a sphere.) This modified his negative generalization of Riemann's work in two ways. Firstly, he was now making use of the full homology groups and not only the Betti numbers. Secondly, he narrowed the scope of the problem from asking if an arbitrary manifold is characterized by topological invariants to asking whether the sphere can be so characterized. However, after publication he found his announced theorem to be incorrect. In his fifth and final supplement, published in 1904, he proved this with the counterexample of the Poincaré homology sphere, which is a closed connected three-dimensional manifold which has the homology of the sphere but whose fundamental group has 120 elements. This example made it clear that homology is not powerful enough to characterize the topology of a manifold. In the closing remarks of the fifth supplement, Poincaré modified his erroneous theorem to use the fundamental group instead of homology: One question remains to be dealt with: is it possible for the fundamental group of V to reduce to the identity without V being simply connected? [...] However, this question would carry us too far away. 
In this remark, as in the closing remark of the second supplement, Poincaré used the term "simply connected" in a way which is at odds with modern usage, as well as his own 1895 definition of the term. (According to modern usage, Poincaré's question is a tautology, asking if it is possible for a manifold to be simply connected without being simply connected.) However, as can be inferred from context, Poincaré was asking whether the triviality of the fundamental group uniquely characterizes the sphere. Throughout the work of Riemann, Betti, and Poincaré, the topological notions in question are not defined or used in a way that would be recognized as precise from a modern perspective. Even the key notion of a "manifold" was not used in a consistent way in Poincaré's own work, and there was frequent confusion between the notion of a topological manifold, a PL manifold, and a smooth manifold. For this reason, it is not possible to read Poincaré's questions unambiguously. It is only through the formalization and vocabulary of topology as developed by later mathematicians that Poincaré's closing question has been understood as the "Poincaré conjecture" as stated in the preceding section. However, despite its usual phrasing in the form of a conjecture, proposing that all manifolds of a certain type are homeomorphic to the sphere, Poincaré only posed an open-ended question, without venturing to conjecture one way or the other. Moreover, there is no evidence as to which way he believed his question would be answered. === Solutions === In the 1930s, J. H. C. Whitehead claimed a proof but then retracted it. In the process, he discovered some examples of simply-connected (indeed contractible, i.e. homotopically equivalent to a point) non-compact 3-manifolds not homeomorphic to R 3 {\displaystyle \mathbb {R} ^{3}} , the prototype of which is now called the Whitehead manifold. In the 1950s and 1960s, other mathematicians attempted proofs of the conjecture only to discover that they contained flaws. Influential mathematicians such as Georges de Rham, R. H. Bing, Wolfgang Haken, Edwin E. Moise, and Christos Papakyriakopoulos attempted to prove the conjecture. In 1958, R. H. Bing proved a weak version of the Poincaré conjecture: if every simple closed curve of a compact 3-manifold is contained in a 3-ball, then the manifold is homeomorphic to the 3-sphere. Bing also described some of the pitfalls in trying to prove the Poincaré conjecture. Włodzimierz Jakobsche showed in 1978 that, if the Bing–Borsuk conjecture is true in dimension 3, then the Poincaré conjecture must also be true. Over time, the conjecture gained the reputation of being particularly tricky to tackle. John Milnor commented that sometimes the errors in false proofs can be "rather subtle and difficult to detect". Work on the conjecture improved understanding of 3-manifolds. Experts in the field were often reluctant to announce proofs and tended to view any such announcement with skepticism. The 1980s and 1990s witnessed some well-publicized fallacious proofs (which were not actually published in peer-reviewed form). An exposition of attempts to prove this conjecture can be found in the non-technical book Poincaré's Prize by George Szpiro. === Dimensions === The classification of closed surfaces gives an affirmative answer to the analogous question in two dimensions. For dimensions greater than three, one can pose the Generalized Poincaré conjecture: is a homotopy n-sphere homeomorphic to the n-sphere? 
A stronger assumption than simply-connectedness is necessary; in dimensions four and higher there are simply-connected, closed manifolds which are not homotopy equivalent to an n-sphere. Historically, while the conjecture in dimension three seemed plausible, the generalized conjecture was thought to be false. In 1961, Stephen Smale shocked mathematicians by proving the Generalized Poincaré conjecture for dimensions greater than four and extended his techniques to prove the fundamental h-cobordism theorem. In 1982, Michael Freedman proved the Poincaré conjecture in four dimensions. Freedman's work left open the possibility that there is a smooth four-manifold homeomorphic to the four-sphere which is not diffeomorphic to the four-sphere. This so-called smooth Poincaré conjecture, in dimension four, remains open and is thought to be very difficult. Milnor's exotic spheres show that the smooth Poincaré conjecture is false in dimension seven, for example. These earlier successes in higher dimensions left the case of three dimensions in limbo. The Poincaré conjecture was essentially true in both dimension four and all higher dimensions for substantially different reasons. In dimension three, the conjecture had an uncertain reputation until the geometrization conjecture put it into a framework governing all 3-manifolds. John Morgan wrote: It is my view that before Thurston's work on hyperbolic 3-manifolds and … the Geometrization conjecture there was no consensus among the experts as to whether the Poincaré conjecture was true or false. After Thurston's work, notwithstanding the fact that it had no direct bearing on the Poincaré conjecture, a consensus developed that the Poincaré conjecture (and the Geometrization conjecture) were true. === Hamilton's program and solution === Hamilton's program was started in his 1982 paper in which he introduced the Ricci flow on a manifold and showed how to use it to prove some special cases of the Poincaré conjecture. In the following years, he extended this work but was unable to prove the conjecture. The actual solution was not found until Grigori Perelman published his papers. In late 2002 and 2003, Perelman posted three papers on arXiv. In these papers, he sketched a proof of the Poincaré conjecture and a more general conjecture, Thurston's geometrization conjecture, completing the Ricci flow program outlined earlier by Richard S. Hamilton. From May to July 2006, several groups presented papers that filled in the details of Perelman's proof of the Poincaré conjecture, as follows: Bruce Kleiner and John W. Lott posted a paper on arXiv in May 2006 which filled in the details of Perelman's proof of the geometrization conjecture, following partial versions which had been publicly available since 2003. Their manuscript was published in the journal Geometry and Topology in 2008. A small number of corrections were made in 2011 and 2013; for instance, the first version of their published paper made use of an incorrect version of Hamilton's compactness theorem for Ricci flow. Huai-Dong Cao and Xi-Ping Zhu published a paper in the June 2006 issue of the Asian Journal of Mathematics with an exposition of the complete proof of the Poincaré and geometrization conjectures. The opening paragraph of their paper stated In this paper, we shall present the Hamilton-Perelman theory of Ricci flow. Based on it, we shall give the first written account of a complete proof of the Poincaré conjecture and the geometrization conjecture of Thurston. 
While the complete work is an accumulated efforts of many geometric analysts, the major contributors are unquestionably Hamilton and Perelman. Some observers interpreted Cao and Zhu as taking credit for Perelman's work. They later posted a revised version, with new wording, on arXiv. In addition, a page of their exposition was essentially identical to a page in one of Kleiner and Lott's early publicly available drafts; this was also amended in the revised version, together with an apology by the journal's editorial board. John Morgan and Gang Tian posted a paper on arXiv in July 2006 which gave a detailed proof of just the Poincaré Conjecture (which is somewhat easier than the full geometrization conjecture) and expanded this to a book. All three groups found that the gaps in Perelman's papers were minor and could be filled in using his own techniques. On August 22, 2006, the ICM awarded Perelman the Fields Medal for his work on the Ricci flow, but Perelman refused the medal. John Morgan spoke at the ICM on the Poincaré conjecture on August 24, 2006, declaring that "in 2003, Perelman solved the Poincaré Conjecture". In December 2006, the journal Science honored the proof of Poincaré conjecture as the Breakthrough of the Year and featured it on its cover. == Ricci flow with surgery == Hamilton's program for proving the Poincaré conjecture involves first putting a Riemannian metric on the unknown simply connected closed 3-manifold. The basic idea is to try to "improve" this metric; for example, if the metric can be improved enough so that it has constant positive curvature, then according to classical results in Riemannian geometry, it must be the 3-sphere. Hamilton prescribed the "Ricci flow equations" for improving the metric; ∂ t g i j = − 2 R i j {\displaystyle \partial _{t}g_{ij}=-2R_{ij}} where g is the metric and R its Ricci curvature, and one hopes that, as the time t increases, the manifold becomes easier to understand. Ricci flow expands the negative curvature part of the manifold and contracts the positive curvature part. In some cases, Hamilton was able to show that this works; for example, his original breakthrough was to show that if the Riemannian manifold has positive Ricci curvature everywhere, then the above procedure can only be followed for a bounded interval of parameter values, t ∈ [ 0 , T ) {\displaystyle t\in [0,T)} with T < ∞ {\displaystyle T<\infty } , and more significantly, that there are numbers c t {\displaystyle c_{t}} such that as t ↗ T {\displaystyle t\nearrow T} , the Riemannian metrics c t g ( t ) {\displaystyle c_{t}g(t)} smoothly converge to one of constant positive curvature. According to classical Riemannian geometry, the only simply-connected compact manifold which can support a Riemannian metric of constant positive curvature is the sphere. So, in effect, Hamilton showed a special case of the Poincaré conjecture: if a compact simply-connected 3-manifold supports a Riemannian metric of positive Ricci curvature, then it must be diffeomorphic to the 3-sphere. If, instead, one only has an arbitrary Riemannian metric, the Ricci flow equations must lead to more complicated singularities. Perelman's major achievement was to show that, if one takes a certain perspective, if they appear in finite time, these singularities can only look like shrinking spheres or cylinders. 
With a quantitative understanding of this phenomenon, he cuts the manifold along the singularities, splitting the manifold into several pieces and then continues with the Ricci flow on each of these pieces. This procedure is known as Ricci flow with surgery. Perelman provided a separate argument based on curve shortening flow to show that, on a simply-connected compact 3-manifold, any solution of the Ricci flow with surgery becomes extinct in finite time. An alternative argument, based on the min-max theory of minimal surfaces and geometric measure theory, was provided by Tobias Colding and William Minicozzi. Hence, in the simply-connected context, the above finite-time phenomena of Ricci flow with surgery is all that is relevant. In fact, this is even true if the fundamental group is a free product of finite groups and cyclic groups. This condition on the fundamental group turns out to be necessary and sufficient for finite time extinction. It is equivalent to saying that the prime decomposition of the manifold has no acyclic components and turns out to be equivalent to the condition that all geometric pieces of the manifold have geometries based on the two Thurston geometries S2 × R and S3. In the context that one makes no assumption about the fundamental group whatsoever, Perelman made a further technical study of the limit of the manifold for infinitely large times, and in so doing, proved Thurston's geometrization conjecture: at large times, the manifold has a thick-thin decomposition, whose thick piece has a hyperbolic structure, and whose thin piece is a graph manifold. Due to Perelman's and Colding and Minicozzi's results, however, these further results are unnecessary in order to prove the Poincaré conjecture. == Solution == On November 11, 2002, Russian mathematician Grigori Perelman posted the first of a series of three eprints on arXiv outlining a solution of the Poincaré conjecture. Perelman's proof uses a modified version of a Ricci flow program developed by Richard S. Hamilton. In August 2006, Perelman was awarded, but declined, the Fields Medal (worth $15,000 CAD) for his work on the Ricci flow. On March 18, 2010, the Clay Mathematics Institute awarded Perelman the $1 million Millennium Prize in recognition of his proof. Perelman rejected that prize as well. Perelman proved the conjecture by deforming the manifold using the Ricci flow (which behaves similarly to the heat equation that describes the diffusion of heat through an object). The Ricci flow usually deforms the manifold towards a rounder shape, except for some cases where it stretches the manifold apart from itself towards what are known as singularities. Perelman and Hamilton then chop the manifold at the singularities (a process called "surgery"), causing the separate pieces to form into ball-like shapes. Major steps in the proof involve showing how manifolds behave when they are deformed by the Ricci flow, examining what sort of singularities develop, determining whether this surgery process can be completed, and establishing that the surgery need not be repeated infinitely many times. The first step is to deform the manifold using the Ricci flow. The Ricci flow was defined by Richard S. Hamilton as a way to deform manifolds. The formula for the Ricci flow is an imitation of the heat equation, which describes the way heat flows in a solid. Like the heat flow, Ricci flow tends towards uniform behavior. Unlike the heat flow, the Ricci flow could run into singularities and stop functioning. 
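The finite-time singular behavior described above can be seen in the simplest possible case, the round sphere itself. For a metric of the form g(t) = r(t)² g_std on the n-sphere, the Ricci curvature is (n − 1) g_std, so the Ricci flow equation reduces to the ordinary differential equation d(r²)/dt = −2(n − 1), and the sphere becomes extinct at time T = r(0)²/(2(n − 1)). The following sketch is only an illustrative numerical aside (it is not part of Hamilton's or Perelman's arguments); it integrates this reduced equation and compares the result with the closed form.

```python
# Ricci flow of a round n-sphere of radius r(t): for g(t) = r(t)^2 * g_std one has
# Ric(g) = (n - 1) * g_std, so dg/dt = -2 Ric(g) reduces to d(r^2)/dt = -2(n - 1)
# and the sphere shrinks to a point at T = r0^2 / (2(n - 1)).

def round_sphere_extinction(r0=1.0, n=3, dt=1e-5):
    """Forward-Euler integration of d(r^2)/dt = -2(n - 1); returns the numerical extinction time."""
    r_sq, t = r0 * r0, 0.0
    while r_sq > 0.0:
        r_sq -= 2.0 * (n - 1) * dt
        t += dt
    return t

r0, n = 1.0, 3
print("numerical extinction time:", round(round_sphere_extinction(r0, n), 4))
print("exact value r0^2 / (2(n - 1)):", r0 ** 2 / (2 * (n - 1)))   # 0.25 for the unit 3-sphere
```

For a general initial metric the flow is a genuinely nonlinear partial differential equation rather than an ordinary one, which is why the singularity analysis and surgery described above are needed.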
A singularity in a manifold is a place where it is not differentiable: like a corner or a cusp or a pinching. The Ricci flow was only defined for smooth differentiable manifolds. Hamilton used the Ricci flow to prove that some compact manifolds were diffeomorphic to spheres, and he hoped to apply it to prove the Poincaré conjecture. He needed to understand the singularities. Hamilton created a list of possible singularities that could form, but he was concerned that some singularities might lead to difficulties. He wanted to cut the manifold at the singularities and paste in caps and then run the Ricci flow again, so he needed to understand the singularities and show that certain kinds of singularities do not occur. Perelman discovered the singularities were all very simple: consider that a cylinder is formed by 'stretching' a circle along a line in another dimension, repeating that process with spheres instead of circles essentially gives the form of the singularities. Perelman proved this using something called the "Reduced Volume", which is closely related to an eigenvalue of a certain elliptic equation. Sometimes, an otherwise complicated operation reduces to multiplication by a scalar (a number). Such numbers are called eigenvalues of that operation. Eigenvalues are closely related to vibration frequencies and are used in analyzing a famous problem: can you hear the shape of a drum? Essentially, an eigenvalue is like a note being played by the manifold. Perelman proved this note goes up as the manifold is deformed by the Ricci flow. This helped him eliminate some of the more troublesome singularities that had concerned Hamilton, particularly the cigar soliton solution, which looked like a strand sticking out of a manifold with nothing on the other side. In essence, Perelman showed that all the strands that form can be cut and capped and none stick out on one side only. Completing the proof, Perelman takes any compact, simply connected, three-dimensional manifold without boundary and starts to run the Ricci flow. This deforms the manifold into round pieces with strands running between them. He cuts the strands and continues deforming the manifold until, eventually, he is left with a collection of round three-dimensional spheres. Then, he rebuilds the original manifold by connecting the spheres together with three-dimensional cylinders, morphs them into a round shape, and sees that, despite all the initial confusion, the manifold was, in fact, homeomorphic to a sphere. One immediate question posed was how one could be sure that infinitely many cuts are not necessary. This was raised due to the cutting potentially progressing forever. Perelman proved this cannot happen by using minimal surfaces on the manifold. A minimal surface is one on which any local deformation increases area; a familiar example is a soap film spanning a bent loop of wire. Hamilton had shown that the area of a minimal surface decreases as the manifold undergoes Ricci flow. Perelman verified what happened to the area of the minimal surface when the manifold was sliced. He proved that, eventually, the area is so small that any cut after the area is that small can only be chopping off three-dimensional spheres and not more complicated pieces. This is described as a battle with a Hydra by Sormani in Szpiro's book cited below. This last part of the proof appeared in Perelman's third and final paper on the subject. == See also == Manifold Destiny == References == == Further reading == Kleiner, Bruce; Lott, John (2008). 
"Notes on Perelman's papers". Geometry & Topology. 12 (5): 2587–2855. arXiv:math/0605667. doi:10.2140/gt.2008.12.2587. MR 2460872. S2CID 119133773. Huai-Dong Cao; Xi-Ping Zhu (December 3, 2006). "Hamilton-Perelman's Proof of the Poincaré Conjecture and the Geometrization Conjecture". arXiv:math.DG/0612069. Morgan, John W.; Tian, Gang (2007). Ricci Flow and the Poincaré Conjecture. Clay Mathematics Monographs. Vol. 3. Providence, RI: American Mathematical Society. arXiv:math/0607607. ISBN 978-0-8218-4328-4. MR 2334563. O'Shea, Donal (2007). The Poincaré Conjecture: In Search of the Shape of the Universe. Walker & Company. ISBN 978-0-8027-1654-5. Perelman, Grisha (November 11, 2002). "The entropy formula for the Ricci flow and its geometric applications". arXiv:math.DG/0211159. Perelman, Grisha (March 10, 2003). "Ricci flow with surgery on three-manifolds". arXiv:math.DG/0303109. Perelman, Grisha (July 17, 2003). "Finite extinction time for the solutions to the Ricci flow on certain three-manifolds". arXiv:math.DG/0307245. Szpiro, George (2008). Poincaré's Prize: The Hundred-Year Quest to Solve One of Math's Greatest Puzzles. Plume. ISBN 978-0-452-28964-2. Stillwell, John (2012). "Poincaré and the early history of 3-manifolds". Bulletin of the American Mathematical Society. 49 (4): 555–576. doi:10.1090/S0273-0979-2012-01385-X. MR 2958930. Yau, Shing-Tung; Nadis, Steve (2019). The Shape of a Life: One Mathematician's Search for the Universe's Hidden Geometry. New Haven, CT: Yale University Press. ISBN 978-0-300-23590-6. MR 3930611. == External links == "The Poincaré Conjecture" – BBC Radio 4 programme In Our Time, 2 November 2006. Contributors June Barrow-Green, Lecturer in the History of Mathematics at the Open University, Ian Stewart, Professor of Mathematics at the University of Warwick, Marcus du Sautoy, Professor of Mathematics at the University of Oxford, and presenter Melvyn Bragg.
Wikipedia/Ricci_flow_with_surgery
William Thurston's elliptization conjecture states that a closed 3-manifold with finite fundamental group is spherical, i.e. has a Riemannian metric of constant positive sectional curvature. == Relation to other conjectures == A 3-manifold with a Riemannian metric of constant positive sectional curvature is covered by the 3-sphere; moreover, the group of covering transformations consists of isometries of the 3-sphere. If the original 3-manifold in fact has a trivial fundamental group, then it is homeomorphic to the 3-sphere (via the covering map). Thus, proving the elliptization conjecture would prove the Poincaré conjecture as a corollary. In fact, the elliptization conjecture is logically equivalent to the conjunction of two simpler conjectures: the Poincaré conjecture and the spherical space form conjecture. The elliptization conjecture is a special case of Thurston's geometrization conjecture, which was proved in 2003 by G. Perelman. == References == For the proof of the conjectures, see the references in the articles on geometrization conjecture or Poincaré conjecture. William Thurston. Three-dimensional geometry and topology. Vol. 1. Edited by Silvio Levy. Princeton Mathematical Series, 35. Princeton University Press, Princeton, NJ, 1997. x+311 pp. ISBN 0-691-08304-5. William Thurston. The Geometry and Topology of Three-Manifolds, 1980 Princeton lecture notes on geometric structures on 3-manifolds, which state his elliptization conjecture near the beginning of section 3.
Wikipedia/Thurston_elliptization_conjecture
Clay Mathematics Monographs is a series of expositions in mathematics co-published by AMS and Clay Mathematics Institute. Each volume in the series offers an exposition of an active area of current research, provided by a group of mathematicians. == List of books == Morgan, John; Tian, Gang (2014). The Geometrization Conjecture. CIMM 5. ISBN 978-0-8218-5201-9. Aspinwall, Paul S.; Bridgeland, Tom; Craw, Alastair; Douglas, Michael R.; Gross, Mark; Kapustin, Anton; Moore, Gregory W.; Segal, Graeme; Szendrői, Balázs; Wilson, P.M.H. (2009). Dirichlet Branes and Mirror Symmetry. CIMM 4. ISBN 978-0-8218-3848-8. Morgan, John; Tian, Gang (2007). Ricci Flow and the Poincaré Conjecture. CIMM 3. ISBN 978-0-8218-4328-4. Mazza, Carlo; Voevodsky, Vladimir; Weibel, Charles (2006). Lecture Notes on Motivic cohomology. CIMM 2. ISBN 978-0-8218-3847-1. Hori, Kentaro; Katz, Sheldon; Klemm, Albrecht; Pandharipande, Rahul; Thomas, Richard; Vafa, Cumrun; Vakil, Ravi; Zaslow, Eric (2003). Mirror Symmetry. CIMM 1. ISBN 978-0-8218-2955-4. == External links == Clay Mathematics Monographs list at ams.org
Wikipedia/Clay_Mathematics_Monographs
In mathematics, a complete Boolean algebra is a Boolean algebra in which every subset has a supremum (least upper bound). Complete Boolean algebras are used to construct Boolean-valued models of set theory in the theory of forcing. Every Boolean algebra A has an essentially unique completion, which is a complete Boolean algebra containing A such that every element is the supremum of some subset of A. As a partially ordered set, this completion of A is the Dedekind–MacNeille completion. More generally, if κ is a cardinal then a Boolean algebra is called κ-complete if every subset of cardinality less than κ has a supremum. == Examples == === Complete Boolean algebras === Every finite Boolean algebra is complete. The algebra of subsets of a given set is a complete Boolean algebra. The regular open sets of any topological space form a complete Boolean algebra. This example is of particular importance because every forcing poset can be considered as a topological space (a base for the topology consisting of sets that are the set of all elements less than or equal to a given element). The corresponding regular open algebra can be used to form Boolean-valued models which are then equivalent to generic extensions by the given forcing poset. The algebra of all measurable subsets of a σ-finite measure space, modulo null sets, is a complete Boolean algebra. When the measure space is the unit interval with the σ-algebra of Lebesgue measurable sets, the Boolean algebra is called the random algebra. The Boolean algebra of all Baire sets modulo meager sets in a topological space with a countable base is complete; when the topological space is the real numbers the algebra is sometimes called the Cantor algebra. === Non-complete Boolean algebras === The algebra of all subsets of an infinite set that are finite or have finite complement is a Boolean algebra but is not complete. The algebra of all measurable subsets of a measure space is a ℵ1-complete Boolean algebra, but is not usually complete. Another example of a Boolean algebra that is not complete is the Boolean algebra P(ω) of all sets of natural numbers, quotiented out by the ideal Fin of finite subsets. The resulting object, denoted P(ω)/Fin, consists of all equivalence classes of sets of naturals, where the relevant equivalence relation is that two sets of naturals are equivalent if their symmetric difference is finite. The Boolean operations are defined analogously, for example, if A and B are two equivalence classes in P(ω)/Fin, we define A ∧ B {\displaystyle A\land B} to be the equivalence class of a ∩ b {\displaystyle a\cap b} , where a and b are some (any) elements of A and B respectively. Now let a0, a1, ... be pairwise disjoint infinite sets of naturals, and let A0, A1, ... be their corresponding equivalence classes in P(ω)/Fin. Then given any upper bound X of A0, A1, ... in P(ω)/Fin, we can find a lesser upper bound, by removing from a representative for X one element of each an. Therefore the An have no supremum. == Properties of complete Boolean algebras == Every subset of a complete Boolean algebra has a supremum, by definition; it follows that every subset also has an infimum (greatest lower bound). For a complete boolean algebra, both infinite distributive laws hold if and only if it is isomorphic to the powerset of some set. For a complete boolean algebra infinite de-Morgan's laws hold. A Boolean algebra is complete if and only if its Stone space of prime ideals is extremally disconnected. 
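The first two examples above can be checked mechanically on a small universe. The sketch below is only an illustrative toy (the three-element universe is an arbitrary choice): it models the powerset Boolean algebra of a finite set, where the supremum of any family is its union and the infimum is its intersection, and it verifies by exhaustion the distributive law x ∧ ⋁S = ⋁{x ∧ s : s ∈ S} mentioned in the properties above.

```python
from itertools import chain, combinations

# Powerset Boolean algebra of a finite set: join is union, meet is intersection,
# and complement is taken relative to the universe.  Every family of elements has
# a supremum (its union) and an infimum (its intersection), so the algebra is complete.

UNIVERSE = frozenset({0, 1, 2})

def subsets(iterable):
    items = list(iterable)
    return [frozenset(c) for c in chain.from_iterable(combinations(items, k) for k in range(len(items) + 1))]

ELEMENTS = subsets(UNIVERSE)          # the 8 elements of the algebra

def sup(family):                      # least upper bound of an arbitrary family
    out = frozenset()
    for a in family:
        out |= a
    return out

def inf(family):                      # greatest lower bound; the empty infimum is the top element
    out = UNIVERSE
    for a in family:
        out &= a
    return out

assert sup(ELEMENTS) == UNIVERSE and inf(ELEMENTS) == frozenset()

# Distributive law x ∧ ⋁S = ⋁{x ∧ s : s ∈ S}, checked for every x and every family S.
for x in ELEMENTS:
    for family in subsets(ELEMENTS):
        assert x & sup(family) == sup(x & s for s in family)

print("every family in the powerset algebra has a supremum, and the distributive law holds")
```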
Sikorski's extension theorem states that if A is a subalgebra of a Boolean algebra B, then any homomorphism from A to a complete Boolean algebra C can be extended to a morphism from B to C. == The completion of a Boolean algebra == The completion of a Boolean algebra can be defined in several equivalent ways: The completion of A is (up to isomorphism) the unique complete Boolean algebra B containing A such that A is dense in B; this means that for every nonzero element of B there is a smaller non-zero element of A. The completion of A is (up to isomorphism) the unique complete Boolean algebra B containing A such that every element of B is the supremum of some subset of A. The completion of a Boolean algebra A can be constructed in several ways: The completion is the Boolean algebra of regular open sets in the Stone space of prime ideals of A. Each element x of A corresponds to the open set of prime ideals not containing x (which is open and closed, and therefore regular). The completion is the Boolean algebra of regular cuts of A. Here a cut is a subset U of A+ (the non-zero elements of A) such that if q is in U and p ≤ q then p is in U, and is called regular if whenever p is not in U there is some r ≤ p such that U has no elements ≤ r. Each element p of A corresponds to the cut of elements ≤ p. If A is a metric space and B its completion then any isometry from A to a complete metric space C can be extended to a unique isometry from B to C. The analogous statement for complete Boolean algebras is not true: a homomorphism from a Boolean algebra A to a complete Boolean algebra C cannot necessarily be extended to a (supremum preserving) homomorphism of complete Boolean algebras from the completion B of A to C. (By Sikorski's extension theorem it can be extended to a homomorphism of Boolean algebras from B to C, but this will not in general be a homomorphism of complete Boolean algebras; in other words, it need not preserve suprema.) == Free κ-complete Boolean algebras == Unless the Axiom of Choice is relaxed, free complete Boolean algebras generated by a set do not exist (unless the set is finite). More precisely, for any cardinal κ, there is a complete Boolean algebra of cardinality 2^κ, greater than κ, that is generated as a complete Boolean algebra by a countable subset; for example, the Boolean algebra of regular open sets in the product space κ^ω, where κ has the discrete topology. A countable generating set consists of all sets am,n for m, n integers, consisting of the elements x ∈ κ^ω such that x(m) < x(n). (This Boolean algebra is called a collapsing algebra, because forcing with it collapses the cardinal κ onto ω.) In particular the forgetful functor from complete Boolean algebras to sets has no left adjoint, even though it is continuous and the category of complete Boolean algebras is small-complete. This shows that the "solution set condition" in Freyd's adjoint functor theorem is necessary. Given a set X, one can form the free Boolean algebra A generated by this set and then take its completion B. However B is not a "free" complete Boolean algebra generated by X (unless X is finite or AC is omitted), because a function from X to a complete Boolean algebra C cannot in general be extended to a (supremum-preserving) morphism of Boolean algebras from B to C. On the other hand, for any fixed cardinal κ, there is a free (or universal) κ-complete Boolean algebra generated by any given set. == See also == Complete lattice Complete Heyting algebra == Literature == Johnstone, Peter T. (1982).
Stone spaces. Cambridge University Press. ISBN 0-521-33779-8. Koppelberg, Sabine (1989). Monk, J. Donald; Bonnet, Robert (eds.). Handbook of Boolean algebras. Vol. 1. Amsterdam: North-Holland Publishing Co. pp. xx+312. ISBN 0-444-70261-X. MR 0991565. Monk, J. Donald; Bonnet, Robert, eds. (1989). Handbook of Boolean algebras. Vol. 2. Amsterdam: North-Holland Publishing Co. ISBN 0-444-87152-7. MR 0991595. Monk, J. Donald; Bonnet, Robert, eds. (1989). Handbook of Boolean algebras. Vol. 3. Amsterdam: North-Holland Publishing Co. ISBN 0-444-87153-5. MR 0991607. Stavi, Jonathan (1974). "A model of ZF with an infinite free complete Boolean algebra". Israel Journal of Mathematics. 20 (2): 149–163. doi:10.1007/BF02757883. S2CID 119543439. Vladimirov, D.A. (2001) [1994], "Boolean algebra", Encyclopedia of Mathematics, EMS Press
Wikipedia/Complete_Boolean_algebra
In geometry, a point is an abstract idealization of an exact position, without size, in physical space, or its generalization to other kinds of mathematical spaces. As zero-dimensional objects, points are usually taken to be the fundamental indivisible elements comprising the space, of which one-dimensional curves, two-dimensional surfaces, and higher-dimensional objects consist. In classical Euclidean geometry, a point is a primitive notion, defined as "that which has no part". Points and other primitive notions are not defined in terms of other concepts, but only by certain formal properties, called axioms, that they must satisfy; for example, "there is exactly one straight line that passes through two distinct points". As physical diagrams, geometric figures are made with tools such as a compass, scriber, or pen, whose pointed tip can mark a small dot or prick a small hole representing a point, or can be drawn across a surface to represent a curve. A point can also be determined by the intersection of two curves or three surfaces, called a vertex or corner. Since the advent of analytic geometry, points are often defined or represented in terms of numerical coordinates. In modern mathematics, a space of points is typically treated as a set, a point set. An isolated point is an element of some subset of points which has some neighborhood containing no other points of the subset. == Points in Euclidean geometry == Points, considered within the framework of Euclidean geometry, are one of the most fundamental objects. Euclid originally defined the point as "that which has no part". In the two-dimensional Euclidean plane, a point is represented by an ordered pair (x, y) of numbers, where the first number conventionally represents the horizontal and is often denoted by x, and the second number conventionally represents the vertical and is often denoted by y. This idea is easily generalized to three-dimensional Euclidean space, where a point is represented by an ordered triplet (x, y, z) with the additional third number representing depth and often denoted by z. Further generalizations are represented by an ordered tuplet of n terms, (a1, a2, … , an) where n is the dimension of the space in which the point is located. Many constructs within Euclidean geometry consist of an infinite collection of points that conform to certain axioms. This is usually represented by a set of points; As an example, a line is an infinite set of points of the form L = { ( a 1 , a 2 , . . . a n ) ∣ a 1 c 1 + a 2 c 2 + . . . a n c n = d } , {\displaystyle L=\lbrace (a_{1},a_{2},...a_{n})\mid a_{1}c_{1}+a_{2}c_{2}+...a_{n}c_{n}=d\rbrace ,} where c1 through cn and d are constants and n is the dimension of the space. Similar constructions exist that define the plane, line segment, and other related concepts. A line segment consisting of only a single point is called a degenerate line segment. In addition to defining points and constructs related to points, Euclid also postulated a key idea about points, that any two points can be connected by a straight line. This is easily confirmed under modern extensions of Euclidean geometry, and had lasting consequences at its introduction, allowing the construction of almost all the geometric concepts known at the time. However, Euclid's postulation of points was neither complete nor definitive, and he occasionally assumed facts about points that did not follow directly from his axioms, such as the ordering of points on the line or the existence of specific points. 
In spite of this, modern expansions of the system serve to remove these assumptions. == Dimension of a point == There are several inequivalent definitions of dimension in mathematics. In all of the common definitions, a point is 0-dimensional. === Vector space dimension === The dimension of a vector space is the maximum size of a linearly independent subset. In a vector space consisting of a single point (which must be the zero vector 0), there is no linearly independent subset. The zero vector is not itself linearly independent, because there is a non-trivial linear combination making it zero: 1 ⋅ 0 = 0 {\displaystyle 1\cdot \mathbf {0} =\mathbf {0} } . === Topological dimension === The topological dimension of a topological space X {\displaystyle X} is defined to be the minimum value of n, such that every finite open cover A {\displaystyle {\mathcal {A}}} of X {\displaystyle X} admits a finite open cover B {\displaystyle {\mathcal {B}}} of X {\displaystyle X} which refines A {\displaystyle {\mathcal {A}}} in which no point is included in more than n+1 elements. If no such minimal n exists, the space is said to be of infinite covering dimension. A point is zero-dimensional with respect to the covering dimension because every open cover of the space has a refinement consisting of a single open set. === Hausdorff dimension === Let X be a metric space. If S ⊂ X and d ∈ [0, ∞), the d-dimensional Hausdorff content of S is the infimum of the set of numbers δ ≥ 0 such that there is some (indexed) collection of balls { B ( x i , r i ) : i ∈ I } {\displaystyle \{B(x_{i},r_{i}):i\in I\}} covering S with ri > 0 for each i ∈ I that satisfies ∑ i ∈ I r i d < δ . {\displaystyle \sum _{i\in I}r_{i}^{d}<\delta .} The Hausdorff dimension of X is defined by dim H ⁡ ( X ) := inf { d ≥ 0 : C H d ( X ) = 0 } . {\displaystyle \operatorname {dim} _{\operatorname {H} }(X):=\inf\{d\geq 0:C_{H}^{d}(X)=0\}.} A point has Hausdorff dimension 0 because it can be covered by a single ball of arbitrarily small radius. == Geometry without points == Although the notion of a point is generally considered fundamental in mainstream geometry and topology, there are some systems that forgo it, e.g. noncommutative geometry and pointless topology. A "pointless" or "pointfree" space is defined not as a set, but via some structure (algebraic or logical respectively) which looks like a well-known function space on the set: an algebra of continuous functions or an algebra of sets respectively. More precisely, such structures generalize well-known spaces of functions in a way that the operation "take a value at this point" may not be defined. A further tradition starts from some books of A. N. Whitehead in which the notion of region is assumed as a primitive together with the one of inclusion or connection. == Point masses and the Dirac delta function == Often in physics and mathematics, it is useful to think of a point as having non-zero mass or charge (this is especially common in classical electromagnetism, where electrons are idealized as points with non-zero charge). The Dirac delta function, or δ function, is (informally) a generalized function on the real number line that is zero everywhere except at zero, with an integral of one over the entire real line. The delta function is sometimes thought of as an infinitely high, infinitely thin spike at the origin, with total area one under the spike, and physically represents an idealized point mass or point charge. It was introduced by theoretical physicist Paul Dirac. 
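The "infinitely thin spike" picture can be made concrete by approximating the delta function with bumps of unit area whose width shrinks to zero. The sketch below is a purely illustrative numerical aside (the Gaussian shape, the test function cos, and the integration interval are arbitrary choices): it checks that the approximants keep total integral close to one while the integral of f(x)·δ_ε(x) approaches f(0).

```python
import math

# Approximate the Dirac delta by Gaussians of shrinking width eps:
#   delta_eps(x) = exp(-x^2 / (2 eps^2)) / (eps * sqrt(2 pi)).
# Each has integral 1 over the whole real line, and pairing it with a test
# function f should reproduce f(0) as eps -> 0.

def delta_eps(x, eps):
    return math.exp(-x * x / (2.0 * eps * eps)) / (eps * math.sqrt(2.0 * math.pi))

def integrate(g, a=-1.0, b=1.0, n=20_000):
    """Midpoint rule on [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = math.cos          # test function with f(0) = 1

for eps in (0.5, 0.1, 0.02):
    mass = integrate(lambda x: delta_eps(x, eps))
    pairing = integrate(lambda x: f(x) * delta_eps(x, eps))
    print(f"eps = {eps}: total mass ~ {mass:.4f}, integral of f * delta_eps ~ {pairing:.4f}")
```

As the width shrinks, the total mass stays near one while the pairing tends to cos(0) = 1, mirroring the defining property ∫ f(x) δ(x) dx = f(0) of the idealized delta function.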
In the context of signal processing it is often referred to as the unit impulse symbol (or function). Its discrete analog is the Kronecker delta function, which is usually defined on a finite domain and takes values 0 and 1. == External links == "Point". PlanetMath. Weisstein, Eric W. "Point". MathWorld.
Wikipedia/Point_(topology)
In mathematics, especially in order theory, a complete Heyting algebra is a Heyting algebra that is complete as a lattice. Complete Heyting algebras are the objects of three different categories; the category CHey, the category Loc of locales, and its opposite, the category Frm of frames. Although these three categories contain the same objects, they differ in their morphisms, and thus get distinct names. Only the morphisms of CHey are homomorphisms of complete Heyting algebras. Locales and frames form the foundation of pointless topology, which, instead of building on point-set topology, recasts the ideas of general topology in categorical terms, as statements on frames and locales. == Definition == Consider a partially ordered set (P, ≤) that is a complete lattice. Then P is a complete Heyting algebra or frame if any of the following equivalent conditions hold: P is a Heyting algebra, i.e. the operation ( x ∧ ⋅ ) {\displaystyle (x\land \cdot )} has a right adjoint (also called the lower adjoint of a (monotone) Galois connection), for each element x of P. For all elements x of P and all subsets S of P, the following infinite distributivity law holds: x ∧ ⋁ s ∈ S s = ⋁ s ∈ S ( x ∧ s ) . {\displaystyle x\land \bigvee _{s\in S}s=\bigvee _{s\in S}(x\land s).} P is a distributive lattice, i.e., for all x, y and z in P, we have x ∧ ( y ∨ z ) = ( x ∧ y ) ∨ ( x ∧ z ) {\displaystyle x\land (y\lor z)=(x\land y)\lor (x\land z)} and the meet operations ( x ∧ ⋅ ) {\displaystyle (x\land \cdot )} are Scott continuous (i.e., preserve the suprema of directed sets) for all x in P. The entailed definition of Heyting implication is a → b = ⋁ { c ∣ a ∧ c ≤ b } . {\displaystyle a\to b=\bigvee \{c\mid a\land c\leq b\}.} Using a bit more category theory, we can equivalently define a frame to be a cocomplete cartesian closed poset. == Examples == The system of all open sets of a given topological space ordered by inclusion is a complete Heyting algebra. == Frames and locales == The objects of the category CHey, the category Frm of frames and the category Loc of locales are complete Heyting algebras. These categories differ in what constitutes a morphism: The morphisms of Frm are (necessarily monotone) functions that preserve finite meets and arbitrary joins. The definition of Heyting algebras crucially involves the existence of right adjoints to the binary meet operation, which together define an additional implication operation. Thus, the morphisms of CHey are morphisms of frames that in addition preserve implication. The morphisms of Loc are opposite to those of Frm, and they are usually called maps (of locales). The relation of locales and their maps to topological spaces and continuous functions may be seen as follows. Let f : X → Y {\displaystyle f:X\to Y} be any map. The power sets P(X) and P(Y) are complete Boolean algebras, and the map f − 1 : P ( Y ) → P ( X ) {\displaystyle f^{-1}:P(Y)\to P(X)} is a homomorphism of complete Boolean algebras. Suppose the spaces X and Y are topological spaces, endowed with the topology O(X) and O(Y) of open sets on X and Y. Note that O(X) and O(Y) are subframes of P(X) and P(Y). If f {\displaystyle f} is a continuous function, then f − 1 : O ( Y ) → O ( X ) {\displaystyle f^{-1}:O(Y)\to O(X)} preserves finite meets and arbitrary joins of these subframes. 
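On a small finite example this preservation can be checked directly. The sketch below is an illustrative toy computation only (the two finite spaces, their topologies, and the map between them are arbitrary choices): it verifies that taking preimages of open sets under a continuous map preserves unions and binary intersections, i.e. behaves like a frame homomorphism O(Y) → O(X), and it also computes the Heyting implication a → b = ⋁{c | a ∧ c ≤ b} inside O(X).

```python
from itertools import chain, combinations

# Toy check that the preimage map of a continuous function preserves joins (unions)
# and finite meets (intersections) of open sets, i.e. gives a frame homomorphism
# O(Y) -> O(X).  The spaces, topologies, and map below are arbitrary small choices.

X = frozenset({0, 1, 2})
OX = [frozenset(), frozenset({0}), frozenset({0, 1}), X]            # a chain topology on X
Y = frozenset({'a', 'b'})
OY = [frozenset(), frozenset({'a'}), Y]                             # Sierpinski-style topology on Y

f = {0: 'a', 1: 'a', 2: 'b'}                                        # a map X -> Y

def preimage(V):
    return frozenset(x for x in X if f[x] in V)

assert all(preimage(V) in OX for V in OY)                           # f is continuous

def union(sets):
    out = frozenset()
    for s in sets:
        out |= s
    return out

def families(opens):
    return chain.from_iterable(combinations(opens, k) for k in range(len(opens) + 1))

for fam in families(OY):                                            # preimage preserves arbitrary joins ...
    assert preimage(union(fam)) == union(preimage(V) for V in fam)
for U in OY:                                                        # ... and binary meets
    for V in OY:
        assert preimage(U & V) == preimage(U) & preimage(V)

def heyting_implies(a, b):                                          # largest open c with a ∩ c ⊆ b
    return union(c for c in OX if a & c <= b)

assert heyting_implies(frozenset({0, 1}), frozenset({0})) == frozenset({0})
print("preimage under f is a frame homomorphism on this example")
```

Because O(X) is closed under arbitrary unions and finite intersections, it is itself a frame, and the implication computed here is the right adjoint to meet required by the definition above.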
This shows that O is a functor from the category Top of topological spaces to Loc, taking any continuous map f : X → Y {\displaystyle f:X\to Y} to the map O ( f ) : O ( X ) → O ( Y ) {\displaystyle O(f):O(X)\to O(Y)} in Loc that is defined in Frm to be the inverse image frame homomorphism f − 1 : O ( Y ) → O ( X ) . {\displaystyle f^{-1}:O(Y)\to O(X).} Given a map of locales f : A → B {\displaystyle f:A\to B} in Loc, it is common to write f ∗ : B → A {\displaystyle f^{*}:B\to A} for the frame homomorphism that defines it in Frm. Using this notation, O ( f ) {\displaystyle O(f)} is defined by the equation O ( f ) ∗ = f − 1 . {\displaystyle O(f)^{*}=f^{-1}.} Conversely, any locale A has a topological space S(A), called its spectrum, that best approximates the locale. In addition, any map of locales f : A → B {\displaystyle f:A\to B} determines a continuous map S ( A ) → S ( B ) . {\displaystyle S(A)\to S(B).} Moreover this assignment is functorial: letting P(1) denote the locale that is obtained as the power set of the terminal set 1 = { ∗ } , {\displaystyle 1=\{*\},} the points of S(A) are the maps p : P ( 1 ) → A {\displaystyle p:P(1)\to A} in Loc, i.e., the frame homomorphisms p ∗ : A → P ( 1 ) . {\displaystyle p^{*}:A\to P(1).} For each a ∈ A {\displaystyle a\in A} we define U a {\displaystyle U_{a}} as the set of points p ∈ S ( A ) {\displaystyle p\in S(A)} such that p ∗ ( a ) = { ∗ } . {\displaystyle p^{*}(a)=\{*\}.} It is easy to verify that this defines a frame homomorphism A → P ( S ( A ) ) , {\displaystyle A\to P(S(A)),} whose image is therefore a topology on S(A). Then, if f : A → B {\displaystyle f:A\to B} is a map of locales, to each point p ∈ S ( A ) {\displaystyle p\in S(A)} we assign the point S ( f ) ( q ) {\displaystyle S(f)(q)} defined by letting S ( f ) ( p ) ∗ {\displaystyle S(f)(p)^{*}} be the composition of p ∗ {\displaystyle p^{*}} with f ∗ , {\displaystyle f^{*},} hence obtaining a continuous map S ( f ) : S ( A ) → S ( B ) . {\displaystyle S(f):S(A)\to S(B).} This defines a functor S {\displaystyle S} from Loc to Top, which is right adjoint to O. Any locale that is isomorphic to the topology of its spectrum is called spatial, and any topological space that is homeomorphic to the spectrum of its locale of open sets is called sober. The adjunction between topological spaces and locales restricts to an equivalence of categories between sober spaces and spatial locales. Any function that preserves all joins (and hence any frame homomorphism) has a right adjoint, and, conversely, any function that preserves all meets has a left adjoint. Hence, the category Loc is isomorphic to the category whose objects are the frames and whose morphisms are the meet preserving functions whose left adjoints preserve finite meets. This is often regarded as a representation of Loc, but it should not be confused with Loc itself, whose morphisms are formally the same as frame homomorphisms in the opposite direction. == Literature == P. T. Johnstone, Stone Spaces, Cambridge Studies in Advanced Mathematics 3, Cambridge University Press, Cambridge, 1982. (ISBN 0-521-23893-5) Still a great resource on locales and complete Heyting algebras. G. Gierz, K. H. Hofmann, K. Keimel, J. D. Lawson, M. Mislove, and D. S. Scott, Continuous Lattices and Domains, In Encyclopedia of Mathematics and its Applications, Vol. 93, Cambridge University Press, 2003. ISBN 0-521-80338-1 Includes the characterization in terms of meet continuity. 
Francis Borceux: Handbook of Categorical Algebra III, volume 52 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, 1994. Surprisingly extensive resource on locales and Heyting algebras. Takes a more categorical viewpoint. Steven Vickers, Topology via logic, Cambridge University Press, 1989, ISBN 0-521-36062-5. Pedicchio, Maria Cristina; Tholen, Walter, eds. (2004). Categorical foundations. Special topics in order, topology, algebra, and sheaf theory. Encyclopedia of Mathematics and Its Applications. Vol. 97. Cambridge: Cambridge University Press. ISBN 0-521-83414-7. Zbl 1034.18001. == External links == Locale at the nLab
Wikipedia/Complete_Heyting_algebra
In mathematics, an algebraic structure or algebraic system consists of a nonempty set A (called the underlying set, carrier set or domain), a collection of operations on A (typically binary operations such as addition and multiplication), and a finite set of identities (known as axioms) that these operations must satisfy. An algebraic structure may be based on other algebraic structures with operations and axioms involving several structures. For instance, a vector space involves a second structure called a field, and an operation called scalar multiplication between elements of the field (called scalars), and elements of the vector space (called vectors). Abstract algebra is the name that is commonly given to the study of algebraic structures. The general theory of algebraic structures has been formalized in universal algebra. Category theory is another formalization that includes also other mathematical structures and functions between structures of the same type (homomorphisms). In universal algebra, an algebraic structure is called an algebra; this term may be ambiguous, since, in other contexts, an algebra is an algebraic structure that is a vector space over a field or a module over a commutative ring. The collection of all structures of a given type (same operations and same laws) is called a variety in universal algebra; this term is also used with a completely different meaning in algebraic geometry, as an abbreviation of algebraic variety. In category theory, the collection of all structures of a given type and homomorphisms between them form a concrete category. == Introduction == Addition and multiplication are prototypical examples of operations that combine two elements of a set to produce a third element of the same set. These operations obey several algebraic laws. For example, a + (b + c) = (a + b) + c and a(bc) = (ab)c are associative laws, and a + b = b + a and ab = ba are commutative laws. Many systems studied by mathematicians have operations that obey some, but not necessarily all, of the laws of ordinary arithmetic. For example, the possible moves of an object in three-dimensional space can be combined by performing a first move of the object, and then a second move from its new position. Such moves, formally called rigid motions, obey the associative law, but fail to satisfy the commutative law. Sets with one or more operations that obey specific laws are called algebraic structures. When a new problem involves the same laws as such an algebraic structure, all the results that have been proved using only the laws of the structure can be directly applied to the new problem. In full generality, algebraic structures may involve an arbitrary collection of operations, including operations that combine more than two elements (higher arity operations) and operations that take only one argument (unary operations) or even zero arguments (nullary operations). The examples listed below are by no means a complete list, but include the most common structures taught in undergraduate courses. == Common axioms == === Equational axioms === An axiom of an algebraic structure often has the form of an identity, that is, an equation such that the two sides of the equals sign are expressions that involve operations of the algebraic structure and variables. If the variables in the identity are replaced by arbitrary elements of the algebraic structure, the equality must remain true. Here are some common examples. 
Commutativity An operation ∗ {\displaystyle *} is commutative if x ∗ y = y ∗ x {\displaystyle x*y=y*x} for every x and y in the algebraic structure. Associativity An operation ∗ {\displaystyle *} is associative if ( x ∗ y ) ∗ z = x ∗ ( y ∗ z ) {\displaystyle (x*y)*z=x*(y*z)} for every x, y and z in the algebraic structure. Left distributivity An operation ∗ {\displaystyle *} is left-distributive with respect to another operation + {\displaystyle +} if x ∗ ( y + z ) = ( x ∗ y ) + ( x ∗ z ) {\displaystyle x*(y+z)=(x*y)+(x*z)} for every x, y and z in the algebraic structure (the second operation is denoted here as + {\displaystyle +} , because the second operation is addition in many common examples). Right distributivity An operation ∗ {\displaystyle *} is right-distributive with respect to another operation + {\displaystyle +} if ( y + z ) ∗ x = ( y ∗ x ) + ( z ∗ x ) {\displaystyle (y+z)*x=(y*x)+(z*x)} for every x, y and z in the algebraic structure. Distributivity An operation ∗ {\displaystyle *} is distributive with respect to another operation + {\displaystyle +} if it is both left-distributive and right-distributive. If the operation ∗ {\displaystyle *} is commutative, left and right distributivity are both equivalent to distributivity. === Existential axioms === Some common axioms contain an existential clause. In general, such a clause can be avoided by introducing further operations, and replacing the existential clause by an identity involving the new operation. More precisely, let us consider an axiom of the form "for all X there is y such that f ( X , y ) = g ( X , y ) {\displaystyle f(X,y)=g(X,y)} ", where X is a k-tuple of variables. Choosing a specific value of y for each value of X defines a function φ : X ↦ y , {\displaystyle \varphi :X\mapsto y,} which can be viewed as an operation of arity k, and the axiom becomes the identity f ( X , φ ( X ) ) = g ( X , φ ( X ) ) . {\displaystyle f(X,\varphi (X))=g(X,\varphi (X)).} The introduction of such auxiliary operation complicates slightly the statement of an axiom, but has some advantages. Given a specific algebraic structure, the proof that an existential axiom is satisfied consists generally of the definition of the auxiliary function, completed with straightforward verifications. Also, when computing in an algebraic structure, one generally uses explicitly the auxiliary operations. For example, in the case of numbers, the additive inverse is provided by the unary minus operation x ↦ − x . {\displaystyle x\mapsto -x.} Also, in universal algebra, a variety is a class of algebraic structures that share the same operations, and the same axioms, with the condition that all axioms are identities. What precedes shows that existential axioms of the above form are accepted in the definition of a variety. Here are some of the most common existential axioms. Identity element A binary operation ∗ {\displaystyle *} has an identity element if there is an element e such that x ∗ e = x and e ∗ x = x {\displaystyle x*e=x\quad {\text{and}}\quad e*x=x} for all x in the structure. Here, the auxiliary operation is the operation of arity zero that has e as its result. Inverse element Given a binary operation ∗ {\displaystyle *} that has an identity element e, an element x is invertible if it has an inverse element, that is, if there exists an element inv ⁡ ( x ) {\displaystyle \operatorname {inv} (x)} such that inv ⁡ ( x ) ∗ x = e and x ∗ inv ⁡ ( x ) = e . 
{\displaystyle \operatorname {inv} (x)*x=e\quad {\text{and}}\quad x*\operatorname {inv} (x)=e.} For example, a group is an algebraic structure with a binary operation that is associative, has an identity element, and for which all elements are invertible. === Non-equational axioms === The axioms of an algebraic structure can be any first-order formula, that is a formula involving logical connectives (such as "and", "or" and "not"), and logical quantifiers ( ∀ , ∃ {\displaystyle \forall ,\exists } ) that apply to elements (not to subsets) of the structure. Such a typical axiom is inversion in fields. This axiom cannot be reduced to axioms of preceding types. (it follows that fields do not form a variety in the sense of universal algebra.) It can be stated: "Every nonzero element of a field is invertible;" or, equivalently: the structure has a unary operation inv such that ∀ x , x = 0 or x ⋅ inv ⁡ ( x ) = 1. {\displaystyle \forall x,\quad x=0\quad {\text{or}}\quad x\cdot \operatorname {inv} (x)=1.} The operation inv can be viewed either as a partial operation that is not defined for x = 0; or as an ordinary function whose value at 0 is arbitrary and must not be used. == Common algebraic structures == === One set with operations === Simple structures: no binary operation: Set: a degenerate algebraic structure S having no operations. Group-like structures: one binary operation. The binary operation can be indicated by any symbol, or with no symbol (juxtaposition) as is done for ordinary multiplication of real numbers. Group: a monoid with a unary operation (inverse), giving rise to inverse elements. Abelian group: a group whose binary operation is commutative. Ring-like structures or Ringoids: two binary operations, often called addition and multiplication, with multiplication distributing over addition. Ring: a semiring whose additive monoid is an abelian group. Division ring: a nontrivial ring in which division by nonzero elements is defined. Commutative ring: a ring in which the multiplication operation is commutative. Field: a commutative division ring (i.e. a commutative ring which contains a multiplicative inverse for every nonzero element). Lattice structures: two or more binary operations, including operations called meet and join, connected by the absorption law. Complete lattice: a lattice in which arbitrary meet and joins exist. Bounded lattice: a lattice with a greatest element and least element. Distributive lattice: a lattice in which each of meet and join distributes over the other. A power set under union and intersection forms a distributive lattice. Boolean algebra: a complemented distributive lattice. Either of meet or join can be defined in terms of the other and complementation. === Two sets with operations === Module: an abelian group M and a ring R acting as operators on M. The members of R are sometimes called scalars, and the binary operation of scalar multiplication is a function R × M → M, which satisfies several axioms. Counting the ring operations these systems have at least three operations. Vector space: a module where the ring R is a field or, in some contexts, a division ring. Algebra over a field: a module over a field, which also carries a multiplication operation that is compatible with the module structure. This includes distributivity over addition and linearity with respect to multiplication. Inner product space: a field F and vector space V with a definite bilinear form V × V → F. 
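Axioms of both kinds can be verified mechanically for small finite structures. The sketch below is an illustrative aside only (the choice of ℤ/5ℤ and ℤ/6ℤ is arbitrary): it checks associativity, commutativity, the existence of an identity element, and invertibility by exhaustive enumeration, confirming that addition modulo 5 gives an abelian group, while multiplication modulo 6 has an identity but fails the invertibility axiom.

```python
from itertools import product

# Brute-force checks of equational axioms (associativity, commutativity) and
# existential axioms (identity element, inverses) on small finite carrier sets.

def is_associative(S, op):
    return all(op(op(x, y), z) == op(x, op(y, z)) for x, y, z in product(S, repeat=3))

def is_commutative(S, op):
    return all(op(x, y) == op(y, x) for x, y in product(S, repeat=2))

def identity_element(S, op):
    for e in S:
        if all(op(x, e) == x and op(e, x) == x for x in S):
            return e
    return None

def all_invertible(S, op, e):
    return all(any(op(x, y) == e and op(y, x) == e for y in S) for x in S)

Z5 = range(5)
add5 = lambda x, y: (x + y) % 5
e = identity_element(Z5, add5)
print(is_associative(Z5, add5), is_commutative(Z5, add5), e, all_invertible(Z5, add5, e))
# True True 0 True -- (Z/5Z, +) satisfies the abelian group axioms.

Z6 = range(6)
mul6 = lambda x, y: (x * y) % 6
e = identity_element(Z6, mul6)
print(is_associative(Z6, mul6), is_commutative(Z6, mul6), e, all_invertible(Z6, mul6, e))
# True True 1 False -- (Z/6Z, *) has an identity, but 0, 2, 3 and 4 have no inverse.
```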
== Hybrid structures == Algebraic structures can also coexist with added structure of non-algebraic nature, such as partial order or a topology. The added structure must be compatible, in some sense, with the algebraic structure. Topological group: a group with a topology compatible with the group operation. Lie group: a topological group with a compatible smooth manifold structure. Ordered groups, ordered rings and ordered fields: each type of structure with a compatible partial order. Archimedean group: a linearly ordered group for which the Archimedean property holds. Topological vector space: a vector space whose underlying set carries a compatible topology. Normed vector space: a vector space with a compatible norm. If such a space is complete (as a metric space) then it is called a Banach space. Hilbert space: an inner product space over the real or complex numbers whose inner product gives rise to a Banach space structure. Vertex operator algebra Von Neumann algebra: a *-algebra of operators on a Hilbert space equipped with the weak operator topology. == Universal algebra == Algebraic structures are defined through different configurations of axioms. Universal algebra abstractly studies such objects. One major dichotomy is between structures that are axiomatized entirely by identities and structures that are not. If all axioms defining a class of algebras are identities, then this class is a variety (not to be confused with algebraic varieties of algebraic geometry). Identities are equations formulated using only the operations the structure allows, and variables that are tacitly universally quantified over the relevant universe. Identities contain no connectives, existentially quantified variables, or relations of any kind other than the allowed operations. The study of varieties is an important part of universal algebra. An algebraic structure in a variety may be understood as the quotient algebra of term algebra (also called "absolutely free algebra") divided by the equivalence relations generated by a set of identities. So, a collection of functions with given signatures generates a free algebra, the term algebra T. Given a set of equational identities (the axioms), one may consider their symmetric, transitive closure E. The quotient algebra T/E is then the algebraic structure or variety. Thus, for example, groups have a signature containing two operators: the multiplication operator m, taking two arguments, and the inverse operator i, taking one argument, together with the identity element e, a constant, which may be considered an operator that takes zero arguments. Given a (countable) set of variables x, y, z, etc. the term algebra is the collection of all possible terms involving m, i, e and the variables; so for example, m(i(x), m(x, m(y,e))) would be an element of the term algebra (see the sketch below). One of the axioms defining a group is the identity m(x, i(x)) = e; another is m(x,e) = x. The axioms can be represented as trees. These equations induce equivalence classes on the free algebra; the quotient algebra then has the algebraic structure of a group. Some structures do not form varieties, because either: It is necessary that 0 ≠ 1, 0 being the additive identity element and 1 being a multiplicative identity element, but this is a nonidentity; Structures such as fields have some axioms that hold only for nonzero members of S. For an algebraic structure to be a variety, its operations must be defined for all members of S; there can be no partial operations.
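The term algebra for the group signature described above can be made concrete by representing terms as nested tuples over the symbols m, i and e. The sketch below is an illustrative toy only (it applies just two of the group identities as left-to-right rewrite rules, rather than constructing the full quotient by the congruence E): it reduces the sample term m(x, m(i(x), e)) to the identity term e.

```python
# Terms over the group signature: ('m', t1, t2), ('i', t), ('e',), or a variable name.
# Two of the defining identities, read as left-to-right rewrite rules:
#   m(x, i(x)) -> e     and     m(x, e) -> x
# Repeated rewriting moves a term toward a simpler representative of its
# equivalence class in the quotient algebra.

def rewrite(t):
    """One bottom-up rewriting pass over the term t."""
    if not isinstance(t, tuple):
        return t                                   # a variable
    t = tuple(rewrite(arg) if isinstance(arg, tuple) else arg for arg in t)
    if t[0] == 'm':
        x, y = t[1], t[2]
        if y == ('i', x):                          # m(x, i(x)) = e
            return ('e',)
        if y == ('e',):                            # m(x, e) = x
            return x
    return t

def normalize(t):
    while True:
        s = rewrite(t)
        if s == t:
            return t
        t = s

term = ('m', 'x', ('m', ('i', 'x'), ('e',)))       # the term m(x, m(i(x), e))
print(normalize(term))                             # ('e',)
```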
Structures whose axioms unavoidably include nonidentities are among the most important ones in mathematics, e.g., fields and division rings. Structures with nonidentities present challenges varieties do not. For example, the direct product of two fields is not a field, because ( 1 , 0 ) ⋅ ( 0 , 1 ) = ( 0 , 0 ) {\displaystyle (1,0)\cdot (0,1)=(0,0)} , but fields do not have zero divisors. == Category theory == Category theory is another tool for studying algebraic structures (see, for example, Mac Lane 1998). A category is a collection of objects with associated morphisms. Every algebraic structure has its own notion of homomorphism, namely any function compatible with the operation(s) defining the structure. In this way, every algebraic structure gives rise to a category. For example, the category of groups has all groups as objects and all group homomorphisms as morphisms. This concrete category may be seen as a category of sets with added category-theoretic structure. Likewise, the category of topological groups (whose morphisms are the continuous group homomorphisms) is a category of topological spaces with extra structure. A forgetful functor between categories of algebraic structures "forgets" a part of a structure. There are various concepts in category theory that try to capture the algebraic character of a context, for instance algebraic category essentially algebraic category presentable category locally presentable category monadic functors and categories universal property. == Different meanings of "structure" == In a slight abuse of notation, the word "structure" can also refer to just the operations on a structure, instead of the underlying set itself. For example, the sentence, "We have defined a ring structure on the set A {\displaystyle A} ", means that we have defined ring operations on the set A {\displaystyle A} . For another example, the group ( Z , + ) {\displaystyle (\mathbb {Z} ,+)} can be seen as a set Z {\displaystyle \mathbb {Z} } that is equipped with an algebraic structure, namely the operation + {\displaystyle +} . == See also == Free object Mathematical structure Signature (logic) Structure (mathematical logic) == Notes == == References == Mac Lane, Saunders; Birkhoff, Garrett (1999), Algebra (2nd ed.), AMS Chelsea, ISBN 978-0-8218-1646-2 Michel, Anthony N.; Herget, Charles J. (1993), Applied Algebra and Functional Analysis, New York: Dover Publications, ISBN 978-0-486-67598-5 Burris, Stanley N.; Sankappanavar, H. P. (1981), A Course in Universal Algebra, Berlin, New York: Springer-Verlag, ISBN 978-3-540-90578-3 Category theory Mac Lane, Saunders (1998), Categories for the Working Mathematician (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-98403-2 Taylor, Paul (1999), Practical foundations of mathematics, Cambridge University Press, ISBN 978-0-521-63107-5 == External links == Jipsen's algebra structures. Includes many structures not mentioned here. Mathworld page on abstract algebra. Stanford Encyclopedia of Philosophy: Algebra by Vaughan Pratt.
Wikipedia/Algebraic_system
In abstract algebra, a Robbins algebra is an algebra containing a single binary operation, usually denoted by ∨ {\displaystyle \lor } , and a single unary operation usually denoted by ¬ {\displaystyle \neg } satisfying the following axioms: For all elements a, b, and c: Associativity: a ∨ ( b ∨ c ) = ( a ∨ b ) ∨ c {\displaystyle a\lor \left(b\lor c\right)=\left(a\lor b\right)\lor c} Commutativity: a ∨ b = b ∨ a {\displaystyle a\lor b=b\lor a} Robbins equation: ¬ ( ¬ ( a ∨ b ) ∨ ¬ ( a ∨ ¬ b ) ) = a {\displaystyle \neg \left(\neg \left(a\lor b\right)\lor \neg \left(a\lor \neg b\right)\right)=a} For many years, it was conjectured, but unproven, that all Robbins algebras are Boolean algebras. This was proved in 1996, so the term "Robbins algebra" is now simply a synonym for "Boolean algebra". == History == In 1933, Edward Huntington proposed a new set of axioms for Boolean algebras, consisting of (1) and (2) above, plus: Huntington's equation: ¬ ( ¬ a ∨ b ) ∨ ¬ ( ¬ a ∨ ¬ b ) = a . {\displaystyle \neg (\neg a\lor b)\lor \neg (\neg a\lor \neg b)=a.} From these axioms, Huntington derived the usual axioms of Boolean algebra. Very soon thereafter, Herbert Robbins posed the Robbins conjecture, namely that the Huntington equation could be replaced with what came to be called the Robbins equation, and the result would still be Boolean algebra. ∨ {\displaystyle \lor } would interpret Boolean join and ¬ {\displaystyle \neg } Boolean complement. Boolean meet and the constants 0 and 1 are easily defined from the Robbins algebra primitives. Pending verification of the conjecture, the system of Robbins was called "Robbins algebra." Verifying the Robbins conjecture required proving Huntington's equation, or some other axiomatization of a Boolean algebra, as theorems of a Robbins algebra. Huntington, Robbins, Alfred Tarski, and others worked on the problem, but failed to find a proof or counterexample. William McCune proved the conjecture in 1996, using the automated theorem prover EQP. For a complete proof of the Robbins conjecture in one consistent notation and following McCune closely, see Mann (2003). Dahn (1998) simplified McCune's machine proof. == See also == Algebraic structure Minimal axioms for Boolean algebra == References == Dahn, B. I. (1998) Abstract to "Robbins Algebras Are Boolean: A Revision of McCune's Computer-Generated Solution of Robbins Problem," Journal of Algebra 208(2): 526–32. Mann, Allen (2003) "A Complete Proof of the Robbins Conjecture." William McCune, "Robbins Algebras Are Boolean," With links to proofs and other papers.
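Although verifying the Robbins conjecture itself required an automated theorem prover, the easy direction, that Boolean algebras satisfy the Robbins equation, can be checked mechanically on small cases. The sketch below is illustrative only (the variable names are invented); it confirms associativity, commutativity, and the Robbins equation in the two-element Boolean algebra, with the join read as logical or and the negation as not.

```python
from itertools import product

# The two-element Boolean algebra: join is logical OR, negation is NOT.
elements = [False, True]
join = lambda a, b: a or b
neg  = lambda a: not a

# Robbins equation: not(not(a or b) or not(a or not b)) == a
robbins = all(
    neg(join(neg(join(a, b)), neg(join(a, neg(b))))) == a
    for a, b in product(elements, repeat=2)
)

# Associativity and commutativity of the join
assoc = all(join(a, join(b, c)) == join(join(a, b), c)
            for a, b, c in product(elements, repeat=3))
comm = all(join(a, b) == join(b, a) for a, b in product(elements, repeat=2))

print(robbins, assoc, comm)  # True True True
```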
Wikipedia/Robbins_conjecture
In computer science, brute-force search or exhaustive search, also known as generate and test, is a very general problem-solving technique and algorithmic paradigm that consists of systematically checking all possible candidates for whether or not each candidate satisfies the problem's statement. A brute-force algorithm that finds the divisors of a natural number n would enumerate all integers from 1 to n, and check whether each of them divides n without remainder. A brute-force approach for the eight queens puzzle would examine all possible arrangements of 8 pieces on the 64-square chessboard and for each arrangement, check whether each (queen) piece can attack any other. While a brute-force search is simple to implement and will always find a solution if it exists, implementation costs are proportional to the number of candidate solutions – which in many practical problems tends to grow very quickly as the size of the problem increases (§Combinatorial explosion). Therefore, brute-force search is typically used when the problem size is limited, or when there are problem-specific heuristics that can be used to reduce the set of candidate solutions to a manageable size. The method is also used when the simplicity of implementation is more important than processing speed. This is the case, for example, in critical applications where any errors in the algorithm would have very serious consequences or when using a computer to prove a mathematical theorem. Brute-force search is also useful as a baseline method when benchmarking other algorithms or metaheuristics. Indeed, brute-force search can be viewed as the simplest metaheuristic. Brute force search should not be confused with backtracking, where large sets of solutions can be discarded without being explicitly enumerated (as in the textbook computer solution to the eight queens problem above). The brute-force method for finding an item in a table – namely, check all entries of the latter, sequentially – is called linear search. == Implementing the brute-force search == === Basic algorithm === In order to apply brute-force search to a specific class of problems, one must implement four procedures, first, next, valid, and output. These procedures should take as a parameter the data P for the particular instance of the problem that is to be solved, and should do the following: first (P): generate a first candidate solution for P. next (P, c): generate the next candidate for P after the current one c. valid (P, c): check whether candidate c is a solution for P. output (P, c): use the solution c of P as appropriate to the application. The next procedure must also tell when there are no more candidates for the instance P, after the current one c. A convenient way to do that is to return a "null candidate", some conventional data value Λ that is distinct from any real candidate. Likewise the first procedure should return Λ if there are no candidates at all for the instance P. The brute-force method is then expressed by the algorithm c ← first(P) while c ≠ Λ do if valid(P,c) then output(P, c) c ← next(P, c) end while For example, when looking for the divisors of an integer n, the instance data P is the number n. The call first(n) should return the integer 1 if n ≥ 1, or Λ otherwise; the call next(n,c) should return c + 1 if c < n, and Λ otherwise; and valid(n,c) should return true if and only if c is a divisor of n. 
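A minimal Python sketch of the four procedures for the divisor example follows; the names mirror the pseudocode (with the null candidate Λ represented by None) and are otherwise invented for this illustration.

```python
def first(n):
    """First candidate divisor, or None (the null candidate) if there is none."""
    return 1 if n >= 1 else None

def next_candidate(n, c):
    """Candidate after c, or None when the candidates are exhausted."""
    return c + 1 if c < n else None

def valid(n, c):
    """Is candidate c a solution, i.e. a divisor of n?"""
    return n % c == 0

def output(n, c):
    print(f"{c} divides {n}")

def brute_force(n):
    """The generic brute-force loop from the text."""
    c = first(n)
    while c is not None:
        if valid(n, c):
            output(n, c)
        c = next_candidate(n, c)

brute_force(12)   # prints the divisors 1, 2, 3, 4, 6, 12
```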
(In fact, if we choose Λ to be n + 1, the tests n ≥ 1 and c < n are unnecessary.)The brute-force search algorithm above will call output for every candidate that is a solution to the given instance P. The algorithm is easily modified to stop after finding the first solution, or a specified number of solutions; or after testing a specified number of candidates, or after spending a given amount of CPU time. == Combinatorial explosion == The main disadvantage of the brute-force method is that, for many real-world problems, the number of natural candidates is prohibitively large. For instance, if we look for the divisors of a number as described above, the number of candidates tested will be the given number n. So if n has sixteen decimal digits, say, the search will require executing at least 1015 computer instructions, which will take several days on a typical PC. If n is a random 64-bit natural number, which has about 19 decimal digits on the average, the search will take about 10 years. This steep growth in the number of candidates, as the size of the data increases, occurs in all sorts of problems. For instance, if we are seeking a particular rearrangement of 10 letters, then we have 10! = 3,628,800 candidates to consider, which a typical PC can generate and test in less than one second. However, adding one more letter – which is only a 10% increase in the data size – will multiply the number of candidates by 11, a 1000% increase. For 20 letters, the number of candidates is 20!, which is about 2.4×1018 or 2.4 quintillion; and the search will take about 10 years. This unwelcome phenomenon is commonly called the combinatorial explosion, or the curse of dimensionality. One example of a case where combinatorial complexity leads to solvability limit is in solving chess. Chess is not a solved game. In 2005, all chess game endings with six pieces or less were solved, showing the result of each position if played perfectly. It took ten more years to complete the tablebase with one more chess piece added, thus completing a 7-piece tablebase. Adding one more piece to a chess ending (thus making an 8-piece tablebase) is considered intractable due to the added combinatorial complexity. == Speeding up brute-force searches == One way to speed up a brute-force algorithm is to reduce the search space, that is, the set of candidate solutions, by using heuristics specific to the problem class. For example, in the eight queens problem the challenge is to place eight queens on a standard chessboard so that no queen attacks any other. Since each queen can be placed in any of the 64 squares, in principle there are 648 = 281,474,976,710,656 possibilities to consider. However, because the queens are all alike, and that no two queens can be placed on the same square, the candidates are all possible ways of choosing of a set of 8 squares from the set all 64 squares; which means 64 choose 8 = 64!/(56!*8!) = 4,426,165,368 candidate solutions – about 1/60,000 of the previous estimate. Further, no arrangement with two queens on the same row or the same column can be a solution. Therefore, we can further restrict the set of candidates to those arrangements. As this example shows, a little bit of analysis will often lead to dramatic reductions in the number of candidate solutions, and may turn an intractable problem into a trivial one. 
In some cases, the analysis may reduce the candidates to the set of all valid solutions; that is, it may yield an algorithm that directly enumerates all the desired solutions (or finds one solution, as appropriate), without wasting time with tests and the generation of invalid candidates. For example, for the problem "find all integers between 1 and 1,000,000 that are evenly divisible by 417" a naive brute-force solution would generate all integers in the range, testing each of them for divisibility. However, that problem can be solved much more efficiently by starting with 417 and repeatedly adding 417 until the number exceeds 1,000,000 – which takes only 2398 (= 1,000,000 ÷ 417) steps, and no tests. == Reordering the search space == In applications that require only one solution, rather than all solutions, the expected running time of a brute force search will often depend on the order in which the candidates are tested. As a general rule, one should test the most promising candidates first. For example, when searching for a proper divisor of a random number n, it is better to enumerate the candidate divisors in increasing order, from 2 to n − 1, than the other way around – because the probability that n is divisible by c is 1/c. Moreover, the probability of a candidate being valid is often affected by the previous failed trials. For example, consider the problem of finding a 1 bit in a given 1000-bit string P. In this case, the candidate solutions are the indices 1 to 1000, and a candidate c is valid if P[c] = 1. Now, suppose that the first bit of P is equally likely to be 0 or 1, but each bit thereafter is equal to the previous one with 90% probability. If the candidates are enumerated in increasing order, 1 to 1000, the number t of candidates examined before success will be about 6, on the average. On the other hand, if the candidates are enumerated in the order 1,11,21,31...991,2,12,22,32 etc., the expected value of t will be only a little more than 2.More generally, the search space should be enumerated in such a way that the next candidate is most likely to be valid, given that the previous trials were not. So if the valid solutions are likely to be "clustered" in some sense, then each new candidate should be as far as possible from the previous ones, in that same sense. The converse holds, of course, if the solutions are likely to be spread out more uniformly than expected by chance. == Alternatives to brute-force search == There are many other search methods, or metaheuristics, which are designed to take advantage of various kinds of partial knowledge one may have about the solution. Heuristics can also be used to make an early cutoff of parts of the search. One example of this is the minimax principle for searching game trees, that eliminates many subtrees at an early stage in the search. In certain fields, such as language parsing, techniques such as chart parsing can exploit constraints in the problem to reduce an exponential complexity problem into a polynomial complexity problem. In many cases, such as in Constraint Satisfaction Problems, one can dramatically reduce the search space by means of Constraint propagation, that is efficiently implemented in Constraint programming languages. The search space for problems can also be reduced by replacing the full problem with a simplified version. 
For example, in computer chess, rather than computing the full minimax tree of all possible moves for the remainder of the game, a more limited tree of minimax possibilities is computed, with the tree being pruned at a certain number of moves, and the remainder of the tree being approximated by a static evaluation function. == In cryptography == In cryptography, a brute-force attack involves systematically checking all possible keys until the correct key is found. This strategy can in theory be used against any encrypted data (except a one-time pad) by an attacker who is unable to take advantage of any weakness in an encryption system that would otherwise make his or her task easier. The key length used in the encryption determines the practical feasibility of performing a brute force attack, with longer keys exponentially more difficult to crack than shorter ones. Brute force attacks can be made less effective by obfuscating the data to be encoded, something that makes it more difficult for an attacker to recognise when he has cracked the code. One of the measures of the strength of an encryption system is how long it would theoretically take an attacker to mount a successful brute force attack against it. == References == == See also == A brute-force algorithm to solve Sudoku puzzles. Brute-force attack Big O notation Iteration#Computing
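Returning to the divisibility-by-417 example from the section on speeding up searches, the contrast between testing every candidate and enumerating the solutions directly is easy to see in code. The sketch below is purely illustrative (the function names are invented).

```python
def divisible_naive(limit=1_000_000, d=417):
    """Brute force: test every integer in the range for divisibility."""
    return [k for k in range(1, limit + 1) if k % d == 0]

def divisible_direct(limit=1_000_000, d=417):
    """Direct enumeration: step through the multiples of d only."""
    return list(range(d, limit + 1, d))

assert divisible_naive() == divisible_direct()
print(len(divisible_direct()))  # 2398 multiples of 417 up to 1,000,000
```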
Wikipedia/Brute_force_search
In mathematics, a unary operation is an operation with only one operand, i.e. a single input. This is in contrast to binary operations, which use two operands. An example is any function ⁠ f : A → A {\displaystyle f:A\rightarrow A} ⁠, where A is a set; the function ⁠ f {\displaystyle f} ⁠ is a unary operation on A. Common notations are prefix notation (e.g. ¬, −), postfix notation (e.g. factorial n!), functional notation (e.g. sin x or sin(x)), and superscripts (e.g. transpose AT). Other notations exist as well, for example, in the case of the square root, a horizontal bar extending the square root sign over the argument can indicate the extent of the argument. == Examples == === Absolute value === Obtaining the absolute value of a number is a unary operation. This function is defined as | n | = { n , if n ≥ 0 − n , if n < 0 {\displaystyle |n|={\begin{cases}n,&{\mbox{if }}n\geq 0\\-n,&{\mbox{if }}n<0\end{cases}}} where | n | {\displaystyle |n|} is the absolute value of n {\displaystyle n} . === Negation === Negation is used to find the negative value of a single number. Here are some examples: − ( 3 ) = − 3 {\displaystyle -(3)=-3} − ( − 3 ) = 3 {\displaystyle -(-3)=3} === Factorial === For any positive integer n, the product of the integers less than or equal to n is a unary operation called factorial. In the context of complex numbers, the gamma function is a unary operation extension of factorial. === Trigonometry === In trigonometry, the trigonometric functions, such as sin {\displaystyle \sin } , cos {\displaystyle \cos } , and tan {\displaystyle \tan } , can be seen as unary operations. This is because it is possible to provide only one term as input for these functions and retrieve a result. By contrast, binary operations, such as addition, require two different terms to compute a result. === Examples from programming languages === Below is a table summarizing common unary operators along with their symbols, description, and examples: ==== JavaScript ==== In JavaScript, these operators are unary: Increment: ++x, x++ Decrement: --x, x-- Positive: +x Negative: -x Ones' complement: ~x Logical negation: !x ==== C family of languages ==== In the C family of languages, the following operators are unary: Increment: ++x, x++ Decrement: --x, x-- Address: &x Indirection: *x Positive: +x Negative: -x Ones' complement: ~x Logical negation: !x Sizeof: sizeof x, sizeof(type-name) Cast: (type-name) cast-expression ==== Unix shell (Bash) ==== In the Unix shell (Bash/Bourne Shell), e.g., the following operators are unary: Pre and Post-Increment: ++$x, $x++ Pre and Post-Decrement: --$x, $x-- Positive: +$x Negative: -$x Logical negation: !$x Simple expansion: $x Complex expansion: ${#x} ==== PowerShell ==== In the PowerShell, the following operators are unary: Increment: ++$x, $x++ Decrement: --$x, $x-- Positive: +$x Negative: -$x Logical negation: !$x Invoke in current scope: .$x Invoke in new scope: &$x Cast: [type-name] cast-expression Cast: +$x Array: ,$array == See also == Unary function Binary operation Iterated binary operation Binary function Ternary operation Arity Operation (mathematics) Operator (programming) == References == == External links == Media related to Unary operations at Wikimedia Commons
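As a further illustration in a language not listed above, the following Python sketch (not part of the original list) shows several unary operations, both as prefix operators applied to a single operand and as ordinary functions of one argument.

```python
import math

x = 5

# Unary operators applied to a single operand
print(-x)        # arithmetic negation: -5
print(+x)        # unary plus: 5
print(~x)        # bitwise (ones') complement: -6
print(not True)  # logical negation: False

# Unary operations written as functions of one argument
print(abs(-7))                 # absolute value: 7
print(math.factorial(5))       # factorial: 120
print(math.sin(math.pi / 2))   # trigonometric function: 1.0
```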
Wikipedia/Unary_functional_symbol
In probability theory, a conditional event algebra (CEA) is an alternative to a standard, Boolean algebra of possible events (a set of possible events related to one another by the familiar operations and, or, and not) that contains not just ordinary events but also conditional events that have the form "if A, then B". The usual motivation for a CEA is to ground the definition of a probability function for events, P, that satisfies the equation P(if A then B) = P(A and B) / P(A). == Motivation == In standard probability theory the occurrence of an event corresponds to a set of possible outcomes, each of which is an outcome that corresponds to the occurrence of the event. P(A), the probability of event A, is the sum of the probabilities of all outcomes that correspond to event A; P(B) is the sum of the probabilities of all outcomes that correspond to event B; and P(A and B) is the sum of the probabilities of all outcomes that correspond to both A and B. In other words, and, customarily represented by the logical symbol ∧, is interpreted as set intersection: P(A ∧ B) = P(A ∩ B). In the same vein, or, ∨, becomes set union, ∪, and not, ¬, becomes set complementation, ′. Any combination of events using the operations and, or, and not is also an event, and assigning probabilities to all outcomes generates a probability for every event. In technical terms, this means that the set of events and the three operations together constitute a Boolean algebra of sets, with an associated probability function. In standard practice, P(if A, then B) is not interpreted as P(A′ ∪ B), following the rule of material implication, but rather as the conditional probability of B given A, P(B | A) = P(A ∩ B) / P(A). This raises a question: what about a probability like P(if A, then B, and if C, then D)? For this, there is no standard answer. What would be needed, for consistency, is a treatment of if-then as a binary operation, →, such that for conditional events A → B and C → D, P(A → B) = P(B | A), P(C → D) = P(D | C), and P((A → B) ∧ (C → D)) are well-defined and reasonable. Philosophers including Robert Stalnaker argued that ideally, a conditional event algebra, or CEA, would support a probability function that meets three conditions: 1. The probability function satisfies the usual axioms. 2. For any two ordinary events A and B, if P(A) > 0, then P(A → B) = P(B | A) = P(A ∧ B) / P(A). 3. For ordinary event A and acceptable probability function P, if P(A) > 0, then PA = P ( ⋅ | A), the function produced by conditioning on A, is also an acceptable probability function. However, David Lewis proved in 1976 a fact now known as Lewis's triviality result: these conditions can only be met with near-standard approaches in trivial examples. In particular, those conditions can only be met when there are just two possible outcomes—as with, say, a single coin flip. With three or more possible outcomes, constructing a probability function requires choosing which of the above three conditions to violate. Interpreting A → B as A′ ∪ B produces an ordinary Boolean algebra that violates 2. With CEAs, the choice is between 1 and 3. == Types of conditional event algebra == === Tri-event CEAs === Tri-event CEAs take their inspiration from three-valued logic, where the identification of logical conjunction, disjunction, and negation with simple set operations no longer applies. 
For ordinary events A and B, the tri-event A → B occurs when A and B both occur, fails to occur when A occurs but B does not, and is undecided when A fails to occur. (The term “tri-event” comes from de Finetti (1935): triévénement.) Ordinary events, which are never undecided, are incorporated into the algebra as tri-events conditional on Ω, the vacuous event represented by the entire sample space of outcomes; thus, A becomes Ω → A. Since there are many three-valued logics, there are many possible tri-event algebras. Two types, however, have attracted more interest than the others. In one type, A ∧ B and A ∨ B are each undecided only when both A and B are undecided; when just one of them is, the conjunction or disjunction follows the other conjunct or disjunct. When negation is handled in the obvious way, with ¬A undecided just in case A is, this type of tri-event algebra corresponds to a three-valued logic proposed by Sobociński (1920) and favored by Belnap (1973), and also implied by Adams’s (1975) “quasi-conjunction” for conditionals. Schay (1968) was the first to propose an algebraic treatment, which Calabrese (1987) developed more properly. The other type of tri-event CEA treats negation the same way as the first, but it treats conjunction and disjunction as min and max functions, respectively, with occurrence as the high value, failure as the low value, and undecidedness in between. This type of tri-event algebra corresponds to a three-valued logic proposed by Łukasiewicz (1920) and also favored by de Finetti (1935). Goodman, Nguyen and Walker (1991) eventually provided the algebraic formulation. The probability of any tri-event is defined as the probability that it occurs divided by the probability that it either occurs or fails to occur. With this convention, conditions 2 and 3 above are satisfied by the two leading tri-event CEA types. Condition 1, however, fails. In a Sobociński-type algebra, ∧ does not distribute over ∨, so P(A ∧ (B ∨ C)) and P((A ∧ B) ∨ (A ∧ C)) need not be equal. In a Łukasiewicz-type algebra, ∧ distributes over ∨ but not over exclusive or, ⊕ {\displaystyle \oplus } (A ⊕ {\displaystyle \oplus } B = (A ∧ ¬B) ∨ (¬A ∧ B)). Also, tri-event CEAs are not complemented lattices, only pseudocomplemented, because in general, (A → B) ∧ ¬(A → B) cannot occur but can be undecided and therefore is not identical to Ω → ∅, the bottom element of the lattice. This means that P(C) and P(C ⊕ {\displaystyle \oplus } ((A → B) ∧ ¬(A → B))) can differ, when classically they would not. === Product-space CEAs === If P(if A, then B) is thought of as the probability of A-and-B occurring before A-and-not-B in a series of trials, this can be calculated as an infinite sum of simple probabilities: the probability of A-and-B on the first trial, plus the probability of not-A (and either B or not-B) on the first trial and A-and-B on the second, plus the probability of not-A on the first two trials and A-and-B on the third, and so on—that is, P(A ∧ B) + P(¬A)P(A ∧ B) + P(¬A)2P(A ∧ B) + …, or, in factored form, P(A ∧ B)[1 + P(¬A) + P(¬A)2 + …]. Since the second factor is the Maclaurin series expansion of 1 / [1 – P(¬A)] = 1 / P(A), the infinite sum equals P(A ∧ B) / P(A) = P(B |A). The infinite sum is itself is a simple probability, but with the sample space now containing not ordinary outcomes of single trials but infinite sequences of ordinary outcomes. 
Thus the conditional probability P(B |A) is turned into simple probability P(B → A) by replacing Ω, the sample space of all ordinary outcomes, with Ω*, the sample space of all sequences of ordinary outcomes, and by identifying conditional event A → B with the set of sequences where the first (A ∧ B)-outcome comes before the first (A ∧ ¬B)-outcome. In Cartesian-product notation, Ω* = Ω × Ω × Ω × …, and A → B is the infinite union [(A ∩ B) × Ω × Ω × …] ∪ [A′ × (A ∩ B) × Ω × Ω × …] ∪ [A′ × A′ × (A ∩ B) × Ω × Ω × …] ∪ …. Unconditional event A is, again, represented by conditional event Ω → A. Unlike tri-event CEAs, this type of CEA supports the identification of ∧, ∨, and ¬ with the familiar operations ∩, ∪, and ′ not just for ordinary, unconditional events but for conditional ones, as well. Because Ω* is a space defined by an infinitely long Cartesian product, the Boolean algebra of conditional-event subsets of Ω* is called a product-space CEA. This type of CEA was introduced by van Fraassen (1976), in response to Lewis’s result, and was later discovered independently by Goodman and Nguyen (1994). The probability functions associated with product-space CEAs satisfy conditions 1 and 2 above. However, given probability function P that satisfies conditions 1 and 2, if P(A) > 0, it can be shown that PA(C | B) = P(C | A ∧ B) and PA(B → C) = P(B ∧ C | A) + P(B′ | A)P(C | B). If A, B and C are pairwise compatible but P(A ∧ B ∧ C) = 0, then P(C | A ∧ B) = P(B ∧ C | A) = 0 but P(B′ | A)P (C | B) > 0. Therefore, PA(B → C) does not reliably equal PA(C | B). Since PA fails condition 2, P fails condition 3. === Nested if–thens === What about nested conditional constructions? In a tri-event CEA, right-nested constructions are handled more or less automatically, since it is natural to say that A → (B → C) takes the value of B → C (possibly undecided) when A is true and is undecided when A is false. Left-nesting, however, requires a more deliberate choice: when A → B is undecided, should (A → B) → C be undecided, or should it take the value of C? Opinions vary. Calabrese adopts the latter view, identifying (A → B) → (C → D) with ((¬A ∨ B) ∧ C) → D. With a product-space CEA, nested conditionals call for nested sequence-constructions: evaluating P((A → B) → (C → D)) requires a sample space of metasequences of sequences of ordinary outcomes. The probabilities of the ordinary sequences are calculated as before. Given a series of trials where the outcomes are sequences of ordinary outcomes, P((A → B) → (C → D)) is P(C → D | A → B) = P((A → B) ∧ (C → D)) / P(A → B), the probability that an ((A → B) ∧ (C → B))-sequence will be encountered before an ((A → B) ∧ ¬(C → B))-sequence. Higher-order-iterations of conditionals require higher-order metasequential constructions. In either of the two leading types of tri-event CEA, A → (B → C) = (A ∧ B) → C. Product space CEAs, on the other hand, do not support this identity. The latter fact can be inferred from the failure, already noted, of PA(B → C) to equal PA(C | B), since PA(C | B) = P((A ∧ B) → C) and PA(B → C) = P(A → (B → C)). For a direct analysis, however, consider a metasequence whose first member-sequence starts with an (A ∧ ¬B ∧ C)-outcome, followed by a (¬A ∧ B ∧ C)-outcome, followed by an (A ∧ B ∧ ¬C)-outcome. That metasequence will belong to the event A → (B → C), because the first member-sequence is an (A ∧ (B → C))-sequence, but the metasequence will not belong to the event (A ∧ B) → C, because the first member-sequence is an ((A ∧ B) → ¬C)-sequence. 
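The geometric-series argument above, by which P(A → B) is the probability that an (A ∧ B)-trial is encountered before an (A ∧ ¬B)-trial, can also be checked numerically. The following Monte Carlo sketch is purely illustrative and not part of the article; the outcome probabilities and function names are invented for the example.

```python
import random

def simulate(p_ab, p_anb, p_na, n_sequences=100_000, seed=1):
    """Estimate P(A -> B) under the product-space reading: the probability that,
    in a sequence of independent trials, an (A and B)-outcome occurs before an
    (A and not-B)-outcome.  Requires p_ab + p_anb + p_na == 1."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_sequences):
        while True:
            u = rng.random()
            if u < p_ab:              # trial with A and B: sequence decided, success
                successes += 1
                break
            if u < p_ab + p_anb:      # trial with A and not-B: sequence decided, failure
                break
            # otherwise not-A: the conditional is still undecided, draw again
    return successes / n_sequences

p_ab, p_anb, p_na = 0.12, 0.08, 0.80   # so P(A) = 0.2 and P(B | A) = 0.6
print(simulate(p_ab, p_anb, p_na))     # close to 0.6 = P(A and B) / P(A)
```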
== Applications == The initial impetus for CEAs is theoretical—namely, the challenge of responding to Lewis's triviality result—but practical applications have been proposed. If, for instance, events A and C involve signals emitted by military radar stations and events B and D involve missile launches, an opposing military force with an automated missile defense system may want the system to be able to calculate P((A → B) ∧ (C → D)) and/or P((A → B) → (C → D)). Other applications range from image interpretation to the detection of denial-of-service attacks on computer networks. == Notes == == References == Adams, E. W. 1975. The Logic of Conditionals. D. Reidel, Dordrecht. Bamber, D., Goodman, I. R. and Nguyen, H. T. 2004. "Deduction from Conditional Knowledge". Soft Computing 8: 247–255. Belnap, N. D. 1973. "Restricted quantification and conditional assertion", in H. Leblanc (ed.), Truth, Syntax and Modality North-Holland, Amsterdam. 48–75. Calabrese, P. 1987. "An algebraic synthesis of the foundations of logic and probability". Information Sciences 42:187-237. de Finetti, Bruno. 1935. "La logique de la probabilité". Actes du Congrès International Philosophie Scientifique. Paris. van Fraassen, Bas C. 1976. "Probabilities of conditionals” in W. L. Harper and C. A. Hooker (eds.), Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science, Vol. I. D. Reidel, Dordrecht, pp. 261–308. Goodman, I. R., Mahler, R. P. S. and Nguyen, H. T. 1999. "What is conditional event algebra and why should you care?" SPIE Proceedings, Vol. 3720. Goodman, I. R., Nguyen, H. T. and Walker, E .A. 1991. Conditional Inference and Logic for Intelligent Systems: A Theory of Measure-Free Conditioning. Office of Chief of Naval Research, Arlington, Virginia. Goodman, I. R. and Nguyen, H. T. 1994. "A theory of conditional information for probabilistic inference in intelligent systems: II, Product space approach; III Mathematical appendix". Information Sciences 76:13-42; 75: 253-277. Goodman, I. R. and Nguyen, H. T. 1995. "Mathematical foundations of conditionals and their probabilistic assignments". International Journal of Uncertainty, Fuzziness and Knowledge-based Systems 3(3): 247-339 Kelly, P. A., Derin, H., and Gong, W.-B. 1999. "Some applications of conditional events and random sets for image estimation and system modeling". SPIE Proceedings 3720: 14-24. Łukasiewicz, J. 1920. "O logice trójwartościowej" (in Polish). Ruch Filozoficzny 5:170–171. English translation: "On three-valued logic", in L. Borkowski (ed.), Selected works by Jan Łukasiewicz, North–Holland, Amsterdam, 1970, pp. 87–88. ISBN 0-7204-2252-3 Schay, Geza. 1968. "An algebra of conditional events". Journal of Mathematical Analysis and Applications 24: 334-344. Sobociński, B. 1952. "Axiomatization of a partial system of three-valued calculus of propositions". Journal of Computing Systems 1(1):23-55. Sun, D., Yang, K., Jing, X., Lv, B., and Wang, Y. 2014. "Abnormal network traffic detection based on conditional event algebra". Applied Mechanics and Materials 644-650: 1093-1099.
Wikipedia/Conditional_event_algebra
In mathematics, a De Morgan algebra (named after Augustus De Morgan, a British mathematician and logician) is a structure A = (A, ∨, ∧, 0, 1, ¬) such that: (A, ∨, ∧, 0, 1) is a bounded distributive lattice, and ¬ is a De Morgan involution: ¬(x ∧ y) = ¬x ∨ ¬y and ¬¬x = x. (i.e. an involution that additionally satisfies De Morgan's laws) In a De Morgan algebra, the laws ¬x ∨ x = 1 (law of the excluded middle), and ¬x ∧ x = 0 (law of noncontradiction) do not always hold. In the presence of the De Morgan laws, either law implies the other, and an algebra which satisfies them becomes a Boolean algebra. Remark: It follows that ¬(x ∨ y) = ¬x ∧ ¬y, ¬1 = 0 and ¬0 = 1 (e.g. ¬1 = ¬1 ∨ 0 = ¬1 ∨ ¬¬0 = ¬(1 ∧ ¬0) = ¬¬0 = 0). Thus ¬ is a dual automorphism of (A, ∨, ∧, 0, 1). If the lattice is defined in terms of the order instead, i.e. (A, ≤) is a bounded partial order with a least upper bound and greatest lower bound for every pair of elements, and the meet and join operations so defined satisfy the distributive law, then the complementation can also be defined as an involutive anti-automorphism, that is, a structure A = (A, ≤, ¬) such that: (A, ≤) is a bounded distributive lattice, and ¬¬x = x, and x ≤ y → ¬y ≤ ¬x. De Morgan algebras were introduced by Grigore Moisil around 1935, although without the restriction of having a 0 and a 1. They were then variously called quasi-boolean algebras in the Polish school, e.g. by Rasiowa and also distributive i-lattices by J. A. Kalman. (i-lattice being an abbreviation for lattice with involution.) They have been further studied in the Argentinian algebraic logic school of Antonio Monteiro. De Morgan algebras are important for the study of the mathematical aspects of fuzzy logic. The standard fuzzy algebra F = ([0, 1], max(x, y), min(x, y), 0, 1, 1 − x) is an example of a De Morgan algebra where the laws of excluded middle and noncontradiction do not hold. Another example is Dunn's four-valued semantics for De Morgan algebra, which has the values T(rue), F(alse), B(oth), and N(either), where F < B < T, F < N < T, and B and N are not comparable. == Kleene algebra == If a De Morgan algebra additionally satisfies x ∧ ¬x ≤ y ∨ ¬y, it is called a Kleene algebra. (This notion should not be confused with the other Kleene algebra generalizing regular expressions.) This notion has also been called a normal i-lattice by Kalman. Examples of Kleene algebras in the sense defined above include: lattice-ordered groups, Post algebras and Łukasiewicz algebras. Boolean algebras also meet this definition of Kleene algebra. The simplest Kleene algebra that is not Boolean is Kleene's three-valued logic K3. K3 made its first appearance in Kleene's On notation for ordinal numbers (1938). The algebra was named after Kleene by Brignole and Monteiro. == Related notions == De Morgan algebras are not the only plausible way to generalize Boolean algebras. Another way is to keep ¬x ∧ x = 0 (i.e. the law of noncontradiction) but to drop the law of the excluded middle and the law of double negation. This approach (called semicomplementation) is well-defined even for a (meet) semilattice; if the set of semicomplements has a greatest element it is usually called pseudocomplement, and the resulting algebra is a Heyting algebra. If the pseudocomplement satisfies the law of the excluded middle, the resulting algebra is also Boolean. However, if only the weaker law ¬x ∨ ¬¬x = 1 is required, this results in Stone algebras. 
More generally, both De Morgan and Stone algebras are proper subclasses of Ockham algebras. == See also == orthocomplemented lattice == References == == Further reading == Balbes, Raymond; Dwinger, Philip (1975). "Chapter IX. De Morgan Algebras and Lukasiewicz Algebras". Distributive lattices. University of Missouri Press. ISBN 978-0-8262-0163-8. Birkhoff, G. (1936). "Reviews: Moisil Gr. C.. Recherches sur l'algèbre de la logique. Annales scientifiques de l'Université de Jassy, vol. 22 (1936), pp. 1–118". The Journal of Symbolic Logic. 1 (2): 63. doi:10.2307/2268551. JSTOR 2268551. Batyrshin, I.Z. (1990). "On fuzzinesstic measures of entropy on Kleene algebras". Fuzzy Sets and Systems. 34 (1): 47–60. doi:10.1016/0165-0114(90)90126-Q. Kalman, J. A. (1958). "Lattices with involution" (PDF). Transactions of the American Mathematical Society. 87 (2): 485–491. doi:10.1090/S0002-9947-1958-0095135-X. JSTOR 1993112. Pagliani, Piero; Chakraborty, Mihir (2008). A Geometry of Approximation: Rough Set Theory: Logic, Algebra and Topology of Conceptual Patterns. Springer Science & Business Media. Part II. Chapter 6. Basic Logico-Algebraic Structures, pp. 193-210. ISBN 978-1-4020-8622-9. Cattaneo, G.; Ciucci, D. (2009). "Lattices with Interior and Closure Operators and Abstract Approximation Spaces". Transactions on Rough Sets X. Lecture Notes in Computer Science 67–116. Vol. 5656. pp. 67–116. doi:10.1007/978-3-642-03281-3_3. ISBN 978-3-642-03280-6. Gehrke, M.; Walker, C.; Walker, E. (2003). "Fuzzy Logics Arising From Strict De Morgan Systems". In Rodabaugh, S. E.; Klement, E. P. (eds.). Topological and Algebraic Structures in Fuzzy Sets: A Handbook of Recent Developments in the Mathematics of Fuzzy Sets. Springer. ISBN 978-1-4020-1515-1. Dalla Chiara, Maria Luisa; Giuntini, Roberto; Greechie, Richard (2004). Reasoning in Quantum Theory: Sharp and Unsharp Quantum Logics. Springer. ISBN 978-1-4020-1978-4.
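As a small computational supplement to the examples above (illustrative only; the names are invented), the following sketch samples the standard fuzzy algebra F = ([0, 1], max, min, 0, 1, 1 − x) and confirms numerically that the De Morgan involution laws and the Kleene condition hold there, while the laws of excluded middle and noncontradiction fail.

```python
import random

rng = random.Random(0)
join = max
meet = min
neg  = lambda x: 1 - x

samples = [(rng.random(), rng.random()) for _ in range(10_000)]
close = lambda a, b: abs(a - b) < 1e-12

# De Morgan involution: not(x and y) == (not x) or (not y), and not not x == x
de_morgan  = all(close(neg(meet(x, y)), join(neg(x), neg(y))) for x, y in samples)
involution = all(close(neg(neg(x)), x) for x, _ in samples)

# Kleene condition: x and not x  <=  y or not y
kleene = all(meet(x, neg(x)) <= join(y, neg(y)) for x, y in samples)

# Excluded middle and noncontradiction generally fail on (0, 1)
excluded_middle  = all(close(join(x, neg(x)), 1) for x, _ in samples)
noncontradiction = all(close(meet(x, neg(x)), 0) for x, _ in samples)

print(de_morgan, involution, kleene)      # True True True
print(excluded_middle, noncontradiction)  # False False
```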
Wikipedia/De_Morgan_algebra
In graph theory, the hypercube graph Qn is the graph formed from the vertices and edges of an n-dimensional hypercube. For instance, the cube graph Q3 is the graph formed by the 8 vertices and 12 edges of a three-dimensional cube. Qn has 2n vertices, 2n − 1n edges, and is a regular graph with n edges touching each vertex. The hypercube graph Qn may also be constructed by creating a vertex for each subset of an n-element set, with two vertices adjacent when their subsets differ in a single element, or by creating a vertex for each n-digit binary number, with two vertices adjacent when their binary representations differ in a single digit. It is the n-fold Cartesian product of the two-vertex complete graph, and may be decomposed into two copies of Qn − 1 connected to each other by a perfect matching. Hypercube graphs should not be confused with cubic graphs, which are graphs that have exactly three edges touching each vertex. The only hypercube graph Qn that is a cubic graph is the cubical graph Q3. == Construction == The hypercube graph Qn may be constructed from the family of subsets of a set with n elements, by making a vertex for each possible subset and joining two vertices by an edge whenever the corresponding subsets differ in a single element. Equivalently, it may be constructed using 2n vertices labeled with n-bit binary numbers and connecting two vertices by an edge whenever the Hamming distance of their labels is one. These two constructions are closely related: a binary number may be interpreted as a set (the set of positions where it has a 1 digit), and two such sets differ in a single element whenever the corresponding two binary numbers have Hamming distance one. Alternatively, Qn may be constructed from the disjoint union of two hypercubes Qn − 1, by adding an edge from each vertex in one copy of Qn − 1 to the corresponding vertex in the other copy, as shown in the figure. The joining edges form a perfect matching. The above construction gives a recursive algorithm for constructing the adjacency matrix of a hypercube, An. Copying is done via the Kronecker product, so that the two copies of Qn − 1 have an adjacency matrix 1 2 ⊗ K A n − 1 {\displaystyle \mathrm {1} _{2}\otimes _{K}A_{n-1}} ,where 1 d {\displaystyle 1_{d}} is the identity matrix in d {\displaystyle d} dimensions. Meanwhile the joining edges have an adjacency matrix A 1 ⊗ K 1 2 n − 1 {\displaystyle A_{1}\otimes _{K}1_{2^{n-1}}} . The sum of these two terms gives a recursive function function for the adjacency matrix of a hypercube: A n = { 1 2 ⊗ K A n − 1 + A 1 ⊗ K 1 2 n − 1 if n > 1 [ 0 1 1 0 ] if n = 1 {\displaystyle A_{n}={\begin{cases}1_{2}\otimes _{K}A_{n-1}+A_{1}\otimes _{K}1_{2^{n-1}}&{\text{if }}n>1\\{\begin{bmatrix}0&1\\1&0\end{bmatrix}}&{\text{if }}n=1\end{cases}}} Another construction of Qn is the Cartesian product of n two-vertex complete graphs K2. More generally the Cartesian product of copies of a complete graph is called a Hamming graph; the hypercube graphs are examples of Hamming graphs. == Examples == The graph Q0 consists of a single vertex, while Q1 is the complete graph on two vertices. Q2 is a cycle of length 4. The graph Q3 is the 1-skeleton of a cube and is a planar graph with eight vertices and twelve edges. The graph Q4 is the Levi graph of the Möbius configuration. It is also the knight's graph for a toroidal 4 × 4 {\displaystyle 4\times 4} chessboard. == Properties == === Bipartiteness === Every hypercube graph is bipartite: it can be colored with only two colors. 
The two colors of this coloring may be found from the subset construction of hypercube graphs, by giving one color to the subsets that have an even number of elements and the other color to the subsets with an odd number of elements. === Hamiltonicity === Every hypercube Qn with n > 1 has a Hamiltonian cycle, a cycle that visits each vertex exactly once. Additionally, a Hamiltonian path exists between two vertices u and v if and only if they have different colors in a 2-coloring of the graph. Both facts are easy to prove using the principle of induction on the dimension of the hypercube, and the construction of the hypercube graph by joining two smaller hypercubes with a matching. Hamiltonicity of the hypercube is tightly related to the theory of Gray codes. More precisely there is a bijective correspondence between the set of n-bit cyclic Gray codes and the set of Hamiltonian cycles in the hypercube Qn. An analogous property holds for acyclic n-bit Gray codes and Hamiltonian paths. A lesser known fact is that every perfect matching in the hypercube extends to a Hamiltonian cycle. The question whether every matching extends to a Hamiltonian cycle remains an open problem. === Other properties === The hypercube graph Qn (for n > 1) : is the Hasse diagram of a finite Boolean algebra. is a median graph. Every median graph is an isometric subgraph of a hypercube, and can be formed as a retraction of a hypercube. has more than 22n − 2 perfect matchings. (this is another consequence that follows easily from the inductive construction.) is arc transitive and symmetric. The symmetries of hypercube graphs can be represented as signed permutations. contains all the cycles of length 4, 6, ..., 2n and is thus a bipancyclic graph. can be drawn as a unit distance graph in the Euclidean plane by using the construction of the hypercube graph from subsets of a set of n elements, choosing a distinct unit vector for each set element, and placing the vertex corresponding to the set S at the sum of the vectors in S. is a n-vertex-connected graph, by Balinski's theorem. is planar (can be drawn with no crossings) if and only if n ≤ 3. For larger values of n, the hypercube has genus (n − 4)2n − 3 + 1. has exactly 2 2 n − n − 1 ∏ k = 2 n k ( n k ) {\displaystyle 2^{2^{n}-n-1}\prod _{k=2}^{n}k^{n \choose k}} spanning trees. has bandwidth exactly ∑ i = 0 n − 1 ( i ⌊ i / 2 ⌋ ) {\displaystyle \sum _{i=0}^{n-1}{\binom {i}{\lfloor i/2\rfloor }}} . has achromatic number proportional to n 2 n {\displaystyle {\sqrt {n2^{n}}}} , but the constant of proportionality is not known precisely. has as the eigenvalues of its adjacency matrix the numbers (−n, −n + 2, −n + 4, ... , n − 4, n − 2, n) and as the eigenvalues of its Laplacian matrix the numbers (0, 2, ..., 2n). The kth eigenvalue has multiplicity ( n k ) {\displaystyle {\binom {n}{k}}} in both cases. has isoperimetric number h(G) = 1. The family Qn for all n > 1 is a Lévy family of graphs. == Problems == The problem of finding the longest path or cycle that is an induced subgraph of a given hypercube graph is known as the snake-in-the-box problem. Szymanski's conjecture concerns the suitability of a hypercube as a network topology for communications. It states that, no matter how one chooses a permutation connecting each hypercube vertex to another vertex with which it should be connected, there is always a way to connect these pairs of vertices by paths that do not share any directed edge. 
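Two of the facts above, the Kronecker-product recursion for the adjacency matrix from the Construction section and the correspondence between Gray codes and Hamiltonian cycles, are easy to demonstrate together in code. The following NumPy sketch is illustrative only (the function names are invented): it builds the adjacency matrix, checks the vertex and edge counts, and verifies that the reflected Gray code order traces a Hamiltonian cycle.

```python
import numpy as np

def hypercube_adjacency(n):
    """Adjacency matrix of Q_n via the recursion
    A_n = I_2 (x) A_{n-1} + A_1 (x) I_{2^(n-1)}."""
    A1 = np.array([[0, 1], [1, 0]])
    if n == 1:
        return A1
    A_prev = hypercube_adjacency(n - 1)
    return (np.kron(np.eye(2, dtype=int), A_prev)
            + np.kron(A1, np.eye(2 ** (n - 1), dtype=int)))

def gray_code(n):
    """Vertex labels of Q_n in reflected Gray code order; consecutive labels
    (and the last and first) differ in exactly one bit."""
    return [i ^ (i >> 1) for i in range(2 ** n)]

n = 4
A = hypercube_adjacency(n)
print(A.shape[0], A.sum() // 2)   # 2**n = 16 vertices, n * 2**(n-1) = 32 edges

# With vertices labelled by their binary representations, the Gray code order
# is a Hamiltonian cycle: every consecutive pair is an edge of the hypercube.
cycle = gray_code(n)
is_hamiltonian_cycle = all(A[cycle[i], cycle[(i + 1) % len(cycle)]] == 1
                           for i in range(len(cycle)))
print(is_hamiltonian_cycle)       # True
```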
== See also == de Bruijn graph Cube-connected cycles Fibonacci cube Folded cube graph Frankl–Rödl graph Halved cube graph Hypercube internetwork topology Partial cube == Notes == == References == Harary, F.; Hayes, J. P.; Wu, H.-J. (1988), "A survey of the theory of hypercube graphs", Computers & Mathematics with Applications, 15 (4): 277–289, doi:10.1016/0898-1221(88)90213-1, hdl:2027.42/27522.
Wikipedia/Hypercube_graph
In mathematics, a free Boolean algebra is a Boolean algebra with a distinguished set of elements, called generators, such that: Each element of the Boolean algebra can be expressed as a finite combination of generators, using the Boolean operations, and The generators are as independent as possible, in the sense that there are no relationships among them (again in terms of finite expressions using the Boolean operations) that do not hold in every Boolean algebra no matter which elements are chosen. == A simple example == The generators of a free Boolean algebra can represent independent propositions. Consider, for example, the propositions "John is tall" and "Mary is rich". These generate a Boolean algebra with four atoms, namely: John is tall, and Mary is rich; John is tall, and Mary is not rich; John is not tall, and Mary is rich; John is not tall, and Mary is not rich. Other elements of the Boolean algebra are then logical disjunctions of the atoms, such as "John is tall and Mary is not rich, or John is not tall and Mary is rich". In addition there is one more element, FALSE, which can be thought of as the empty disjunction; that is, the disjunction of no atoms. This example yields a Boolean algebra with 16 elements; in general, for finite n, the free Boolean algebra with n generators has 2n atoms, and therefore 2 2 n {\displaystyle 2^{2^{n}}} elements. If there are infinitely many generators, a similar situation prevails except that now there are no atoms. Each element of the Boolean algebra is a combination of finitely many of the generating propositions, with two such elements deemed identical if they are logically equivalent. Another way to see why the free Boolean algebra on an n-element set has 2 2 n {\displaystyle 2^{2^{n}}} elements is to note that each element is a function from n bits to one. There are 2 n {\displaystyle 2^{n}} possible inputs to such a function and the function will choose 0 or 1 to output for each input, so there are 2 2 n {\displaystyle 2^{2^{n}}} possible functions. == Category-theoretic definition == In the language of category theory, free Boolean algebras can be defined simply in terms of an adjunction between the category of sets and functions, Set, and the category of Boolean algebras and Boolean algebra homomorphisms, BA. In fact, this approach generalizes to any algebraic structure definable in the framework of universal algebra. Above, we said that a free Boolean algebra is a Boolean algebra with a set of generators that behave a certain way; alternatively, one might start with a set and ask which algebra it generates. Every set X generates a free Boolean algebra FX defined as the algebra such that for every algebra B and function f : X → B, there is a unique Boolean algebra homomorphism f′ : FX → B that extends f. Diagrammatically, where iX is the inclusion, and the dashed arrow denotes uniqueness. The idea is that once one chooses where to send the elements of X, the laws for Boolean algebra homomorphisms determine where to send everything else in the free algebra FX. If FX contained elements inexpressible as combinations of elements of X, then f′ wouldn't be unique, and if the elements of X weren't sufficiently independent, then f′ wouldn't be well defined! It is easily shown that FX is unique (up to isomorphism), so this definition makes sense. It is also easily shown that a free Boolean algebra with generating set X, as defined originally, is isomorphic to FX, so the two definitions agree. 
One shortcoming of the above definition is that the diagram doesn't capture that f′ is a homomorphism; since it is a diagram in Set each arrow denotes a mere function. We can fix this by separating it into two diagrams, one in BA and one in Set. To relate the two, we introduce a functor U : BA → Set that "forgets" the algebraic structure, mapping algebras and homomorphisms to their underlying sets and functions. If we interpret the top arrow as a diagram in BA and the bottom triangle as a diagram in Set, then this diagram properly expresses that every function f : X → UB extends to a unique Boolean algebra homomorphism f′ : FX → B. The functor U can be thought of as a device to pull the homomorphism f′ back into Set so it can be related to f. The remarkable aspect of this is that the latter diagram is one of the various (equivalent) definitions of when two functors are adjoint. Our F easily extends to a functor Set → BA, and our definition of X generating a free Boolean algebra FX is precisely that U has a left adjoint F. == Topological realization == The free Boolean algebra with κ generators, where κ is a finite or infinite cardinal number, may be realized as the collection of all clopen subsets of {0,1}κ, given the product topology assuming that {0,1} has the discrete topology. For each α<κ, the αth generator is the set of all elements of {0,1}κ whose αth coordinate is 1. In particular, the free Boolean algebra with ℵ 0 {\displaystyle \aleph _{0}} generators is the collection of all clopen subsets of a Cantor space, sometimes called the Cantor algebra. This collection is countable. In fact, while the free Boolean algebra with n generators, n finite, has cardinality 2 2 n {\displaystyle 2^{2^{n}}} , the free Boolean algebra with ℵ 0 {\displaystyle \aleph _{0}} generators, as for any free algebra with ℵ 0 {\displaystyle \aleph _{0}} generators and countably many finitary operations, has cardinality ℵ 0 {\displaystyle \aleph _{0}} . For more on this topological approach to free Boolean algebra, see Stone's representation theorem for Boolean algebras. == See also == Boolean algebra (structure) Generating set == References == Steve Awodey (2006) Category Theory (Oxford Logic Guides 49). Oxford University Press. Paul Halmos and Steven Givant (1998) Logic as Algebra. Mathematical Association of America. Saunders Mac Lane (1998) Categories for the Working Mathematician. 2nd ed. (Graduate Texts in Mathematics 5). Springer-Verlag. Saunders Mac Lane (1999) Algebra, 3d. ed. American Mathematical Society. ISBN 0-8218-1646-2. Robert R. Stoll, 1963. Set Theory and Logic, chpt. 6.7. Dover reprint 1979.
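Returning to the counting argument of the simple example above, the elements of the free Boolean algebra on n generators can be enumerated explicitly as functions from n bits to one bit. The sketch below is illustrative only (the variable names are invented); it does this for n = 2, recovering the 16 elements, the 4 atoms, and the two generators.

```python
from itertools import product

n = 2
inputs = list(product([0, 1], repeat=n))   # the 2**n possible inputs

# Each element of the free Boolean algebra on n generators is a function
# from n bits to one bit, i.e. a truth table over `inputs`.
elements = list(product([0, 1], repeat=len(inputs)))
print(len(elements))                       # 2**(2**n) = 16

# The atoms output 1 on exactly one input ("John is tall and Mary is rich", etc.).
atoms = [e for e in elements if sum(e) == 1]
print(len(atoms))                          # 2**n = 4

# The generators themselves: projection onto each coordinate.
generators = [tuple(x[k] for x in inputs) for k in range(n)]
print(generators)                          # [(0, 0, 1, 1), (0, 1, 0, 1)]
```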
Wikipedia/Free_Boolean_algebra
Journal of Algebra (ISSN 0021-8693) is an international mathematical research journal in algebra. An imprint of Academic Press, it is published by Elsevier. Journal of Algebra was founded by Graham Higman, who was its editor from 1964 to 1984. From 1985 until 2000, Walter Feit served as its editor-in-chief. In 2004, Journal of Algebra announced (vol. 276, no. 1 and 2) the creation of a new section on computational algebra, with a separate editorial board. The first issue completely devoted to computational algebra was vol. 292, no. 1 (October 2005). The Editor-in-Chief of the Journal of Algebra is Michel Broué, Université Paris Diderot, and Gerhard Hiß, Rheinisch-Westfälische Technische Hochschule Aachen (RWTH) is Editor of the computational algebra section. == See also == Susan Montgomery, an editor of the journal == External links == Journal of Algebra at ScienceDirect
Wikipedia/Journal_of_Algebra
The Quine–McCluskey algorithm (QMC), also known as the method of prime implicants, is a method used for minimization of Boolean functions that was developed by Willard V. Quine in 1952 and extended by Edward J. McCluskey in 1956. As a general principle this approach had already been demonstrated by the logician Hugh McColl in 1878, was proved by Archie Blake in 1937, and was rediscovered by Edward W. Samson and Burton E. Mills in 1954 and by Raymond J. Nelson in 1955. Also in 1955, Paul W. Abrahams and John G. Nordahl as well as Albert A. Mullin and Wayne G. Kellner proposed a decimal variant of the method. The Quine–McCluskey algorithm is functionally identical to Karnaugh mapping, but the tabular form makes it more efficient for use in computer algorithms, and it also gives a deterministic way to check that the minimal form of a Boolean function F has been reached. It is sometimes referred to as the tabulation method. The Quine–McCluskey algorithm works as follows: Find all prime implicants of the function. Use those prime implicants in a prime implicant chart to find the essential prime implicants of the function, as well as other prime implicants that are necessary to cover the function. == Complexity == Although more practical than Karnaugh mapping when dealing with more than four variables, the Quine–McCluskey algorithm also has a limited range of use since the problem it solves is NP-complete. The running time of the Quine–McCluskey algorithm grows exponentially with the number of variables. For a function of n variables the number of prime implicants can be as large as 3 n / n {\displaystyle 3^{n}/{\sqrt {n}}} , e.g. for 32 variables there may be over 534 × 10^12 prime implicants. Functions with a large number of variables have to be minimized with potentially non-optimal heuristic methods, of which the Espresso heuristic logic minimizer was the de facto standard in 1995. For one natural class of functions f {\displaystyle f} , the precise complexity of finding all prime implicants is better understood: Milan Mossé, Harry Sha, and Li-Yang Tan discovered a near-optimal algorithm for finding all prime implicants of a formula in conjunctive normal form. Step two of the algorithm amounts to solving the set cover problem; NP-hard instances of this problem may occur in this algorithm step. == Example == === Input === In this example, the input is a Boolean function in four variables, f : { 0 , 1 } 4 → { 0 , 1 } {\displaystyle f:\{0,1\}^{4}\to \{0,1\}} which evaluates to 1 {\displaystyle 1} on the values 4 , 8 , 10 , 11 , 12 {\displaystyle 4,8,10,11,12} and 15 {\displaystyle 15} , evaluates to an unknown value on 9 {\displaystyle 9} and 14 {\displaystyle 14} , and to 0 {\displaystyle 0} everywhere else (where these integers are interpreted in their binary form for input to f {\displaystyle f} for succinctness of notation). The inputs that evaluate to 1 {\displaystyle 1} are called 'minterms'. We encode all of this information by writing f ( A , B , C , D ) = ∑ m ( 4 , 8 , 10 , 11 , 12 , 15 ) + d ( 9 , 14 ) . {\displaystyle f(A,B,C,D)=\sum m(4,8,10,11,12,15)+d(9,14).\,} This expression says that the output function f will be 1 for the minterms 4 , 8 , 10 , 11 , 12 {\displaystyle 4,8,10,11,12} and 15 {\displaystyle 15} (denoted by the 'm' term) and that we don't care about the output for the 9 {\displaystyle 9} and 14 {\displaystyle 14} combinations (denoted by the 'd' term).
The summation symbol ∑ {\displaystyle \sum } denotes the logical sum (logical OR, or disjunction) of all the terms being summed over. === Step 1: Finding the prime implicants === First, we write the function as a table (where 'x' stands for don't care): One can easily form the canonical sum of products expression from this table, simply by summing the minterms (leaving out don't-care terms) where the function evaluates to one: fA,B,C,D = A'BC'D' + AB'C'D' + AB'CD' + AB'CD + ABC'D' + ABCD, which is not minimal. So to optimize, all minterms that evaluate to one are first placed in a minterm table. Don't-care terms are also added into this table (names in parentheses), so they can be combined with minterms: At this point, one can start combining minterms with other minterms in adjacent groups; that is, we compare minterms in the nth group with those in the (n+1)th group. So for the minterm m4, which has only one 1, we compare it to m9, m10, and m12, which each have two 1s. If two terms differ by only a single digit, that digit is replaced with a dash indicating that the digit doesn't matter. For instance 1000 and 1001 can be combined to give 100-, indicating that both minterms imply the first digit is 1 and the next two are 0. Terms that can't be combined any more are marked with an asterisk (*). When going from Size 2 to Size 4, treat - as a third bit value. Match up the -'s first. The terms represent products and to combine two product terms they must have the same variables. One of the variables should be complemented in one term and uncomplemented in the other. The remaining variables present should agree. So to match two terms the -'s must align and all but one of the other digits must be the same. For instance, -110 and -100 can be combined to give -1-0, as can -110 and -010 to give --10, but -110 and 011- cannot since the -'s do not align. -110 corresponds to BCD' while 011- corresponds to A'BC, and BCD' + A'BC is not equivalent to a product term. Note: In this example, none of the terms in the size 4 implicants table can be combined any further. In general, this process is continued in sizes that are powers of 2 (sizes 8, 16 etc.) until no more terms can be combined. === Step 2: Prime implicant chart === None of the terms can be combined any further than this, so at this point we construct an essential prime implicant table. Along the side go the prime implicants that have just been generated (these are the ones that have been marked with a "*" in the previous step), and along the top go the minterms specified earlier. The don't-care terms are not placed on top—they are omitted from this section because they are not necessary inputs. To find the essential prime implicants, we look for columns with only one "✓". If a column has only one "✓", this means that the minterm can only be covered by one prime implicant. This prime implicant is essential. For example: in the first column, with minterm 4, there is only one "✓". This means that m(4,12) is essential (hence marked by #). Minterm 15 also has only one "✓", so m(10,11,14,15) is also essential. Now all columns with one "✓" are covered. The rows containing m(4,12) and m(10,11,14,15) can now be removed, together with all the columns they cover. The second prime implicant can be 'covered' by the third and fourth, and the third prime implicant can be 'covered' by the second and first, and neither is thus essential. If a prime implicant is essential then, as would be expected, it is necessary to include it in the minimized Boolean equation.
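Both steps just described can be run mechanically on the worked example. The Python sketch below is an independent illustration (it is not the pseudocode given later in this article, and all names are ad hoc); applied to f = Σm(4, 8, 10, 11, 12, 15) + d(9, 14) it reproduces the four prime implicants and identifies m(4,12) and m(10,11,14,15) as the essential ones.

from itertools import combinations

def merge(a, b):
    """Combine two terms that differ in exactly one non-dash position, else None."""
    if any((x == '-') != (y == '-') for x, y in zip(a, b)):
        return None                         # dashes must align
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) != 1:
        return None
    i = diff[0]
    return a[:i] + '-' + a[i + 1:]

def prime_implicants(terms):
    terms = set(terms)
    while True:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            m = merge(a, b)
            if m is not None:
                merged.add(m)
                used |= {a, b}
        if not merged:
            return terms                    # nothing combines further: all terms are prime
        terms = merged | (terms - used)     # keep the terms that could not be merged

def covers(implicant, minterm):
    return all(i == '-' or i == m for i, m in zip(implicant, minterm))

minterms   = ['0100', '1000', '1010', '1011', '1100', '1111']   # 4, 8, 10, 11, 12, 15
dont_cares = ['1001', '1110']                                    # 9, 14
primes = prime_implicants(minterms + dont_cares)
print(sorted(primes))   # ['-100', '1--0', '1-1-', '10--'] i.e. BC'D', AD', AC, AB'

# Essential prime implicants: a required minterm covered by only one prime implicant.
essential = set()
for m in minterms:                          # don't-cares are not required columns
    covering = [p for p in primes if covers(p, m)]
    if len(covering) == 1:
        essential.add(covering[0])
print(sorted(essential))                    # ['-100', '1-1-'] i.e. BC'D' and AC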
In some cases, the essential prime implicants do not cover all minterms, in which case additional procedures for chart reduction can be employed. The simplest "additional procedure" is trial and error, but a more systematic way is Petrick's method. In the current example, the essential prime implicants do not handle all of the minterms, so, in this case, the essential implicants can be combined with one of the two non-essential ones to yield one equation: fA,B,C,D = BC'D' + AB' + AC or fA,B,C,D = BC'D' + AD' + AC Both of those final equations are functionally equivalent to the original, verbose equation: fA,B,C,D = A'BC'D' + AB'C'D' + AB'C'D + AB'CD' + AB'CD + ABC'D' + ABCD' + ABCD. == Algorithm == === Step 1: Finding the prime implicants === The pseudocode below recursively computes the prime implicants given the list of minterms of a Boolean function. It does this by trying to merge all possible minterms and filtering out minterms that have been merged until no more merges of the minterms can be performed and hence, the prime implicants of the function have been found. // Computes the prime implicants from a list of minterms. // Each minterm is of the form "1001", "1010", etc., and can be represented with a string. function getPrimeImplicants(list minterms) is primeImplicants ← empty list merges ← new boolean array of length equal to the number of minterms, each set to false numberOfMerges ← 0 mergedMinterm, minterm1, minterm2 ← empty strings for i = 0 to length(minterms) do for c = i + 1 to length(minterms) do minterm1 ← minterms[i] minterm2 ← minterms[c] // Checking that two minterms can be merged if CheckDashesAlign(minterm1, minterm2) && CheckMintermDifference(minterm1, minterm2) then mergedMinterm ← MergeMinterms(minterm1, minterm2) if primeImplicants Does Not Contain mergedMinterm then primeImplicants.Add(mergedMinterm) numberOfMerges ← numberOfMerges + 1 merges[i] ← true merges[c] ← true // Filtering all minterms that have not been merged as they are prime implicants. Also removing duplicates for j = 0 to length(minterms) do if merges[j] == false && primeImplicants Does Not Contain minterms[j] then primeImplicants.Add(minterms[j]) // If no merges have taken place then all of the prime implicants have been found so return, otherwise // keep merging the minterms. if numberOfMerges == 0 then return primeImplicants else return getPrimeImplicants(primeImplicants) In this example the CheckDashesAlign and CheckMintermDifference functions perform the necessary checks for determining whether two minterms can be merged. The function MergeMinterms merges the minterms and adds the dashes where necessary. The utility functions below assume that each minterm will be represented using strings. function MergeMinterms(minterm1, minterm2) is mergedMinterm ← empty string for i = 0 to length(minterm1) do // If the bits vary then replace them with a dash, otherwise the bit remains in the merged minterm. if minterm1[i] != minterm2[i] then mergedMinterm ← mergedMinterm + '-' else mergedMinterm ← mergedMinterm + minterm1[i] return mergedMinterm function CheckDashesAlign(minterm1, minterm2) is for i = 0 to length(minterm1) do // If one minterm has a dash in a position where the other does not then the minterms cannot be merged. if (minterm1[i] == '-') != (minterm2[i] == '-') then return false return true function CheckMintermDifference(minterm1, minterm2) is // minterm1 and minterm2 are strings each representing a single minterm or merged minterm.
// Examples include '01--' and '10-0'. m1, m2 ← integer representation of minterm1 and minterm2 with the dashes removed (the dashes are replaced with 0) // ^ here is a bitwise XOR res ← m1 ^ m2 return res != 0 && (res & (res - 1)) == 0 === Step 2: Prime implicant chart === The pseudocode below can be split into two sections: Creating the prime implicant chart using the prime implicants Reading the prime implicant chart to find the essential prime implicants. ==== Creating the prime implicant chart ==== The prime implicant chart can be represented by a dictionary where each key is a prime implicant and the corresponding value is an empty string that will store a binary string once this step is complete. Each bit in the binary string is used to represent the ticks within the prime implicant chart. The prime implicant chart can be created using the following steps: Iterate through each key (prime implicant) of the dictionary. Replace each dash in the prime implicant with the \d character code. This creates a regular expression that can be checked against each of the minterms, looking for matches. Iterate through each minterm, comparing the regular expression with the binary representation of the minterm; if there is a match, append a "1" to the corresponding string in the dictionary. Otherwise append a "0". Repeat for all prime implicants to create the completed prime implicant chart. When written in pseudocode, the algorithm described above is: function CreatePrimeImplicantChart(list primeImplicants, list minterms) primeImplicantChart ← new dictionary with key of type string and value of type string // Creating the empty chart with the prime implicants as the key and empty strings as the value. for i = 0 to length(primeImplicants) do // Adding a new prime implicant to the chart. primeImplicantChart.Add(primeImplicants[i], "") for i = 0 to length(primeImplicantChart.Keys) do primeImplicant ← primeImplicantChart.Keys[i] // Convert the "-" to "\d", which can be used to find the row of ticks described above. regularExpression ← ConvertToRegularExpression(primeImplicant) for j = 0 to length(minterms) do // If there is a match between the regular expression and the minterm then append a 1, otherwise a 0. if regularExpression.matches(minterms[j]) then primeImplicantChart[primeImplicant] += "1" else primeImplicantChart[primeImplicant] += "0" // The prime implicant chart is complete so return the completed chart. return primeImplicantChart The utility function, ConvertToRegularExpression, is used to convert the prime implicant into the regular expression to check for matches between the implicant and the minterms. function ConvertToRegularExpression(string primeImplicant) regularExpression ← new string for i = 0 to length(primeImplicant) do if primeImplicant[i] == "-" then // Add the literal character "\d". regularExpression += @"\d" else regularExpression += primeImplicant[i] return regularExpression ==== Finding the essential prime implicants ==== Using the function CreatePrimeImplicantChart defined above, we can find the essential prime implicants by iterating column by column over the values in the dictionary; wherever a column contains a single "1", the prime implicant in that row is essential. This process is described by the pseudocode below.
function getEssentialPrimeImplicants(Dictionary primeImplicantChart, list minterms) essentialPrimeImplicants ← new list mintermCoverages ← list with all of the values in the dictionary // Scan column by column; a minterm that is covered by exactly one row marks that row's prime implicant as essential. for j = 0 to length(minterms) do coverCount ← 0 coveringRow ← -1 for i = 0 to length(mintermCoverages) do if mintermCoverages[i][j] == "1" then coverCount ← coverCount + 1 coveringRow ← i if coverCount == 1 && essentialPrimeImplicants Does Not Contain primeImplicantChart.Keys[coveringRow] then essentialPrimeImplicants.Add(primeImplicantChart.Keys[coveringRow]) return essentialPrimeImplicants Using the algorithm above it is now possible to find the minimised Boolean expression, by converting the essential prime implicants into product terms (i.e. -100 -> BC'D') and separating the implicants by logical OR. The pseudocode assumes that the essential prime implicants will cover the entire Boolean expression. == See also == Blake canonical form Buchberger's algorithm – analogous algorithm for algebraic geometry Petrick's method Qualitative comparative analysis (QCA) == References == == Further reading == Curtis, Herbert Allen (1962). "Chapter 2.3. McCluskey's Method". A new approach to the design of switching circuits. The Bell Laboratories Series (1 ed.). Princeton, New Jersey, USA: D. van Nostrand Company, Inc. pp. 90–160. ISBN 0-44201794-4. OCLC 1036797958. S2CID 57068910. ISBN 978-0-44201794-1. ark:/13960/t56d6st0q. (viii+635 pages) (NB. This book was reprinted by Chin Jih in 1969.) Coudert, Olivier (October 1994). "Two-level logic minimization: an overview" (PDF). Integration, the VLSI Journal. 17–2 (2): 97–140. doi:10.1016/0167-9260(94)00007-7. ISSN 0167-9260. Archived (PDF) from the original on 2020-05-10. Retrieved 2020-05-10. (47 pages) Jadhav, Vitthal; Buchade, Amar (2012-03-08). "Modified Quine-McCluskey Method". arXiv:1203.2289 [cs.OH]. (4 pages) Crenshaw, Jack (2004-08-19). "All about Quine-McClusky". embedded.com. Archived from the original on 2020-05-10. Retrieved 2020-05-10. Tomaszewski, Sebastian P.; Celik, Ilgaz U.; Antoniou, George E. (December 2003) [2003-03-05, 2002-04-09]. "WWW-based Boolean function minimization" (PDF). International Journal of Applied Mathematics and Computer Science. 13 (4): 577–584. Archived (PDF) from the original on 2020-05-10. Retrieved 2020-05-10. (7 pages) Duşa, Adrian (2008-10-01) [September 2007]. "A mathematical approach to the boolean minimization problem". Quality & Quantity. 44: 99–113. doi:10.1007/s11135-008-9183-x. S2CID 123042755. Article number: 99 (2010). (22 pages) Duşa, Adrian (2007). "Enhancing Quine-McCluskey" (PDF). University of Bucharest. Archived (PDF) from the original on 2020-05-12. Retrieved 2020-05-12. (16 pages) == External links == Tutorial on Quine-McCluskey and Petrick's method. For a fully worked out example visit: http://www.cs.ualberta.ca/~amaral/courses/329/webslides/Topic5-QuineMcCluskey/sld024.htm
Wikipedia/Quine–McCluskey_algorithm
In Boolean algebra, any Boolean function can be expressed in the canonical disjunctive normal form (CDNF), minterm canonical form, or Sum of Products (SoP or SOP) as a disjunction (OR) of minterms. The De Morgan dual is the canonical conjunctive normal form (CCNF), maxterm canonical form, or Product of Sums (PoS or POS) which is a conjunction (AND) of maxterms. These forms can be useful for the simplification of Boolean functions, which is of great importance in the optimization of Boolean formulas in general and digital circuits in particular. Other canonical forms include the complete sum of prime implicants or Blake canonical form (and its dual), and the algebraic normal form (also called Zhegalkin or Reed–Muller). == Minterms == For a boolean function of n {\displaystyle n} variables x 1 , … , x n {\displaystyle {x_{1},\dots ,x_{n}}} , a minterm is a product term in which each of the n {\displaystyle n} variables appears exactly once (either in its complemented or uncomplemented form). Thus, a minterm is a logical expression of n variables that employs only the complement operator and the conjunction operator (logical AND). A minterm gives a true value for just one combination of the input variables, the minimum nontrivial amount. For example, a b' c is true only when a and c both are true and b is false—the input arrangement where a = 1, b = 0, c = 1 results in 1. === Indexing minterms === There are 2^n minterms of n variables, since a variable in the minterm expression can be in either its direct or its complemented form—two choices per variable. Minterms are often numbered by a binary encoding of the complementation pattern of the variables, where the variables are written in a standard order, usually alphabetical. This convention assigns the value 1 to the direct form ( x i {\displaystyle x_{i}} ) and 0 to the complemented form ( x i ′ {\displaystyle x'_{i}} ); the minterm is then ∑ i = 1 n 2 n − i value ⁡ ( x i ) {\displaystyle \sum \limits _{i=1}^{n}2^{n-i}\operatorname {value} (x_{i})} . For example, minterm a b c ′ {\displaystyle abc'} is numbered 110₂ = 6₁₀ and denoted m 6 {\displaystyle m_{6}} . === Minterm canonical form === Given the truth table of a logical function, it is possible to write the function as a "sum of products" or "sum of minterms". This is a special form of disjunctive normal form. For example, if given the truth table for the arithmetic sum bit u of one bit position's logic of an adder circuit, as a function of x and y from the addends and the carry in, ci: Observing that the rows that have an output of 1 are the 2nd, 3rd, 5th, and 8th, we can write u as a sum of minterms m 1 , m 2 , m 4 , {\displaystyle m_{1},m_{2},m_{4},} and m 7 {\displaystyle m_{7}} . If we wish to verify this: u ( c i , x , y ) = m 1 + m 2 + m 4 + m 7 = ( c i ′ x ′ y ) + ( c i ′ x y ′ ) + ( c i x ′ y ′ ) + ( c i x y ) {\displaystyle u(ci,x,y)=m_{1}+m_{2}+m_{4}+m_{7}=(ci'\,x'\,y)+(ci'\,x\,y')+(ci\,x'\,y')+(ci\,x\,y)} evaluated for all 8 combinations of the three variables will match the table. == Maxterms == For a boolean function of n variables x 1 , … , x n {\displaystyle {x_{1},\dots ,x_{n}}} , a maxterm is a sum term in which each of the n variables appears exactly once (either in its complemented or uncomplemented form). Thus, a maxterm is a logical expression of n variables that employs only the complement operator and the disjunction operator (logical OR). Maxterms are a dual of the minterm idea, following the complementary symmetry of De Morgan's laws.
Instead of using ANDs and complements, we use ORs and complements and proceed similarly. It is apparent that a maxterm gives a false value for just one combination of the input variables, i.e. it is true at the maximal number of possibilities. For example, the maxterm a′ + b + c′ is false only when a and c both are true and b is false—the input arrangement where a = 1, b = 0, c = 1 results in 0. === Indexing maxterms === There are again 2n maxterms of n variables, since a variable in the maxterm expression can also be in either its direct or its complemented form—two choices per variable. The numbering is chosen so that the complement of a minterm is the respective maxterm. That is, each maxterm is assigned an index based on the opposite conventional binary encoding used for minterms. The maxterm convention assigns the value 0 to the direct form ( x i ) {\displaystyle (x_{i})} and 1 to the complemented form ( x i ′ ) {\displaystyle (x'_{i})} . For example, we assign the index 6 to the maxterm a ′ + b ′ + c {\displaystyle a'+b'+c} (110) and denote that maxterm as M6. The complement ( a ′ + b ′ + c ) ′ {\displaystyle (a'+b'+c)'} is the minterm a b c ′ = m 6 {\displaystyle abc'=m_{6}} , using de Morgan's law. === Maxterm canonical form === If one is given a truth table of a logical function, it is possible to write the function as a "product of sums" or "product of maxterms". This is a special form of conjunctive normal form. For example, if given the truth table for the carry-out bit co of one bit position's logic of an adder circuit, as a function of x and y from the addends and the carry in, ci: Observing that the rows that have an output of 0 are the 1st, 2nd, 3rd, and 5th, we can write co as a product of maxterms M 0 , M 1 , M 2 {\displaystyle M_{0},M_{1},M_{2}} and M 4 {\displaystyle M_{4}} . If we wish to verify this: c o ( c i , x , y ) = M 0 M 1 M 2 M 4 = ( c i + x + y ) ( c i + x + y ′ ) ( c i + x ′ + y ) ( c i ′ + x + y ) {\displaystyle co(ci,x,y)=M_{0}M_{1}M_{2}M_{4}=(ci+x+y)(ci+x+y')(ci+x'+y)(ci'+x+y)} evaluated for all 8 combinations of the three variables will match the table. == Minimal PoS and SoP forms == It is often the case that the canonical minterm form is equivalent to a smaller SoP form. This smaller form would still consist of a sum of product terms, but have fewer product terms and/or product terms that contain fewer variables. For example, the following 3-variable function: has the canonical minterm representation f = a ′ b c + a b c {\displaystyle f=a'bc+abc} , but it has an equivalent SoP form f = b c {\displaystyle f=bc} . In this trivial example, it is obvious that b c = a ′ b c + a b c {\displaystyle bc=a'bc+abc} , and the smaller form has both fewer product terms and fewer variables within each term. The minimal SoP representations of a function according to this notion of "smallest" are referred to as minimal SoP forms. In general, there may be multiple minimal SoP forms, none clearly smaller or larger than another. In a similar manner, a canonical maxterm form can be reduced to various minimal PoS forms. While this example was simplified by applying normal algebraic methods [ f = ( a ′ + a ) b c {\displaystyle f=(a'+a)bc} ], in less obvious cases a convenient method for finding minimal PoS/SoP forms of a function with up to four variables is using a Karnaugh map. The Quine–McCluskey algorithm can solve slightly larger problems. 
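The equivalence between the canonical and the minimal forms of the three-variable example above can be confirmed by brute force. The short Python sketch below (illustrative only) compares f = a'bc + abc with f = bc on every input and also recovers the minterm indices of f.

from itertools import product

# Canonical (sum-of-minterms) form of the example: f = a'bc + abc
def f_canonical(a, b, c):
    return ((not a) and b and c) or (a and b and c)

# Minimal sum-of-products form of the same function: f = bc
def f_minimal(a, b, c):
    return b and c

assert all(f_canonical(a, b, c) == f_minimal(a, b, c)
           for a, b, c in product((False, True), repeat=3))

# The minterm indices of f are exactly the rows of the truth table where f = 1.
minterm_indices = [4 * a + 2 * b + c
                   for a, b, c in product((0, 1), repeat=3)
                   if f_minimal(a, b, c)]
print(minterm_indices)   # [3, 7]  ->  f = m3 + m7 = a'bc + abc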
The field of logic optimization developed from the problem of finding optimal implementations of Boolean functions, such as minimal PoS and SoP forms. == Application example == The sample truth tables for minterms and maxterms above are sufficient to establish the canonical form for a single bit position in the addition of binary numbers, but are not sufficient to design the digital logic unless your inventory of gates includes AND and OR. Where performance is an issue (as in the Apollo Guidance Computer), the available parts are more likely to be NAND and NOR because of the complementing action inherent in transistor logic. The values are defined as voltage states, one near ground and one near the DC supply voltage Vcc, e.g. +5 VDC. If the higher voltage is defined as the 1 "true" value, a NOR gate is the simplest possible useful logical element. Specifically, a 3-input NOR gate may consist of 3 bipolar junction transistors with their emitters all grounded, their collectors tied together and linked to Vcc through a load impedance. Each base is connected to an input signal, and the common collector point presents the output signal. Any input that is a 1 (high voltage) to its base shorts its transistor's emitter to its collector, causing current to flow through the load impedance, which brings the collector voltage (the output) very near to ground. That result is independent of the other inputs. Only when all 3 input signals are 0 (low voltage) do the emitter-collector impedances of all 3 transistors remain very high. Then very little current flows, and the voltage-divider effect with the load impedance imposes on the collector point a high voltage very near to Vcc. The complementing property of these gate circuits may seem like a drawback when trying to implement a function in canonical form, but there is a compensating bonus: such a gate with only one input implements the complementing function, which is required frequently in digital logic. This example assumes the Apollo parts inventory: 3-input NOR gates only, but the discussion is simplified by supposing that 4-input NOR gates are also available (in Apollo, those were compounded out of pairs of 3-input NORs). === Canonical and non-canonical consequences of NOR gates === A set of 8 NOR gates, if their inputs are all combinations of the direct and complement forms of the 3 input variables ci, x, and y, always produce minterms, never maxterms—that is, of the 8 gates required to process all combinations of 3 input variables, only one has the output value 1. That's because a NOR gate, despite its name, could better be viewed (using De Morgan's law) as the AND of the complements of its input signals. The reason this is not a problem is the duality of minterms and maxterms, i.e. each maxterm is the complement of the like-indexed minterm, and vice versa. In the minterm example above, we wrote u ( c i , x , y ) = m 1 + m 2 + m 4 + m 7 {\displaystyle u(ci,x,y)=m_{1}+m_{2}+m_{4}+m_{7}} but to perform this with a 4-input NOR gate we need to restate it as a product of sums (PoS), where the sums are the opposite maxterms. That is, u ( c i , x , y ) = A N D ( M 0 , M 3 , M 5 , M 6 ) = N O R ( m 0 , m 3 , m 5 , m 6 ) . 
{\displaystyle u(ci,x,y)=\mathrm {AND} (M_{0},M_{3},M_{5},M_{6})=\mathrm {NOR} (m_{0},m_{3},m_{5},m_{6}).} In the maxterm example above, we wrote c o ( c i , x , y ) = M 0 M 1 M 2 M 4 {\displaystyle co(ci,x,y)=M_{0}M_{1}M_{2}M_{4}} but to perform this with a 4-input NOR gate we need to notice the equality to the NOR of the same minterms. That is, c o ( c i , x , y ) = A N D ( M 0 , M 1 , M 2 , M 4 ) = N O R ( m 0 , m 1 , m 2 , m 4 ) . {\displaystyle co(ci,x,y)=\mathrm {AND} (M_{0},M_{1},M_{2},M_{4})=\mathrm {NOR} (m_{0},m_{1},m_{2},m_{4}).} === Design trade-offs considered in addition to canonical forms === One might suppose that the work of designing an adder stage is now complete, but we haven't addressed the fact that all 3 of the input variables have to appear in both their direct and complement forms. There's no difficulty about the addends x and y in this respect, because they are static throughout the addition and thus are normally held in latch circuits that routinely have both direct and complement outputs. (The simplest latch circuit made of NOR gates is a pair of gates cross-coupled to make a flip-flop: the output of each is wired as one of the inputs to the other.) There is also no need to create the complement form of the sum u. However, the carry out of one bit position must be passed as the carry into the next bit position in both direct and complement forms. The most straightforward way to do this is to pass co through a 1-input NOR gate and label the output co′, but that would add a gate delay in the worst possible place, slowing down the rippling of carries from right to left. An additional 4-input NOR gate building the canonical form of co′ (out of the opposite minterms as co) solves this problem. c o ′ ( c i , x , y ) = A N D ( M 3 , M 5 , M 6 , M 7 ) = N O R ( m 3 , m 5 , m 6 , m 7 ) . {\displaystyle co'(ci,x,y)=\mathrm {AND} (M_{3},M_{5},M_{6},M_{7})=\mathrm {NOR} (m_{3},m_{5},m_{6},m_{7}).} The trade-off to maintain full speed in this way includes an unexpected cost (in addition to having to use a bigger gate). If we'd just used that 1-input gate to complement co, there would have been no use for the minterm m 7 {\displaystyle m_{7}} , and the gate that generated it could have been eliminated. Nevertheless, it is still a good trade. Now we could have implemented those functions exactly according to their SoP and PoS canonical forms, by turning NOR gates into the functions specified. A NOR gate is made into an OR gate by passing its output through a 1-input NOR gate; and it is made into an AND gate by passing each of its inputs through a 1-input NOR gate. However, this approach not only increases the number of gates used, but also doubles the number of gate delays processing the signals, cutting the processing speed in half. Consequently, whenever performance is vital, going beyond canonical forms and doing the Boolean algebra to make the unenhanced NOR gates do the job is well worthwhile. === Top-down vs. bottom-up design === We have now seen how the minterm/maxterm tools can be used to design an adder stage in canonical form with the addition of some Boolean algebra, costing just 2 gate delays for each of the outputs. That's the "top-down" way to design the digital circuit for this function, but is it the best way? The discussion has focused on identifying "fastest" as "best," and the augmented canonical form meets that criterion flawlessly, but sometimes other factors predominate. 
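The canonical NOR identities quoted above are easy to confirm exhaustively before weighing such trade-offs. The Python sketch below (an illustration; NOR is written directly as the negation of a disjunction, and the full-adder outputs are computed arithmetically) checks, for all eight inputs, that the sum bit u equals NOR(m0, m3, m5, m6) and the carry-out co equals NOR(m0, m1, m2, m4).

from itertools import product

def minterm(index, ci, x, y):
    """True exactly when (ci, x, y) is the binary expansion of index (ci as the most significant bit)."""
    return (ci, x, y) == tuple(int(b) for b in format(index, '03b'))

def nor(*args):
    return not any(args)

for ci, x, y in product((0, 1), repeat=3):
    u  = (ci + x + y) % 2        # sum bit of a full adder
    co = (ci + x + y) >= 2       # carry-out of a full adder
    # u(ci,x,y) = NOR(m0, m3, m5, m6) and co(ci,x,y) = NOR(m0, m1, m2, m4)
    assert bool(u)  == nor(*(minterm(i, ci, x, y) for i in (0, 3, 5, 6)))
    assert bool(co) == nor(*(minterm(i, ci, x, y) for i in (0, 1, 2, 4)))
print("canonical NOR forms verified")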
The designer may have a primary goal of minimizing the number of gates, and/or of minimizing the fanouts of signals to other gates since big fanouts reduce resilience to a degraded power supply or other environmental factors. In such a case, a designer may develop the canonical-form design as a baseline, then try a bottom-up development, and finally compare the results. The bottom-up development involves noticing that u = ci XOR (x XOR y), where XOR means eXclusive OR [true when either input is true but not when both are true], and that co = ci x + x y + y ci. One such development takes twelve NOR gates in all: six 2-input gates and two 1-input gates to produce u in 5 gate delays, plus three 2-input gates and one 3-input gate to produce co′ in 2 gate delays. The canonical baseline took eight 3-input NOR gates plus three 4-input NOR gates to produce u, co and co′ in 2 gate delays. If the circuit inventory actually includes 4-input NOR gates, the top-down canonical design looks like a winner in both gate count and speed. But if (contrary to our convenient supposition) the circuits are actually 3-input NOR gates, of which two are required for each 4-input NOR function, then the canonical design takes 14 gates compared to 12 for the bottom-up approach, but still produces the sum digit u considerably faster. The fanout comparison is tabulated as: The description of the bottom-up development mentions co′ as an output but not co. Does that design simply never need the direct form of the carry out? Well, yes and no. At each stage, the calculation of co′ depends only on ci′, x′ and y′, which means that the carry propagation ripples along the bit positions just as fast as in the canonical design without ever developing co. The calculation of u, which does require ci to be made from ci′ by a 1-input NOR, is slower but for any word length the design only pays that penalty once (when the leftmost sum digit is developed). That's because those calculations overlap, each in what amounts to its own little pipeline without affecting when the next bit position's sum bit can be calculated. And, to be sure, the co′ out of the leftmost bit position will probably have to be complemented as part of the logic determining whether the addition overflowed. But using 3-input NOR gates, the bottom-up design is very nearly as fast for doing parallel addition on a non-trivial word length, cuts down on the gate count, and uses lower fanouts ... so it wins if gate count and/or fanout are paramount! We'll leave the exact circuitry of the bottom-up design of which all these statements are true as an exercise for the interested reader, assisted by one more algebraic formula: u = ci(x XOR y) + ci′(x XOR y)′]′. Decoupling the carry propagation from the sum formation in this way is what elevates the performance of a carry-lookahead adder over that of a ripple carry adder. == Application in digital circuit design == One application of Boolean algebra is digital circuit design, with one goal to minimize the number of gates and another to minimize the settling time. There are sixteen possible functions of two variables, but in digital logic hardware, the simplest gate circuits implement only four of them: conjunction (AND), disjunction (inclusive OR), and the respective complements of those (NAND and NOR). 
Most gate circuits accept more than 2 input variables; for example, the spaceborne Apollo Guidance Computer, which pioneered the application of integrated circuits in the 1960s, was built with only one type of gate, a 3-input NOR, whose output is true only when all 3 inputs are false. == See also == List of Boolean algebra topics == References == == Further reading == Bender, Edward A.; Williamson, S. Gill (2005). A Short Course in Discrete Mathematics. Mineola, NY: Dover Publications, Inc. ISBN 0-486-43946-1. The authors demonstrate a proof that any Boolean (logic) function can be expressed in either disjunctive or conjunctive normal form (cf pages 5–6); the proof simply proceeds by creating all 2N rows of N Boolean variables and demonstrates that each row ("minterm" or "maxterm") has a unique Boolean expression. Any Boolean function of the N variables can be derived from a composite of the rows whose minterm or maxterm are logical 1s ("trues") McCluskey, E. J. (1965). Introduction to the Theory of Switching Circuits. NY: McGraw–Hill Book Company. p. 78. LCCN 65-17394. Canonical expressions are defined and described Hill, Fredrick J.; Peterson, Gerald R. (1974). Introduction to Switching Theory and Logical Design (2nd ed.). NY: John Wiley & Sons. p. 101. ISBN 0-471-39882-9. Minterm and maxterm designation of functions == External links == Boole, George (1848). "The Calculus of Logic". Cambridge and Dublin Mathematical Journal. III. Translated by Wilkins, David R.: 183–198.
Wikipedia/Canonical_form_(Boolean_algebra)
In propositional logic and Boolean algebra, there is a duality between conjunction and disjunction, also called the duality principle. It is the most widely known example of duality in logic. The duality consists in these metalogical theorems: In classical propositional logic, the connectives for conjunction and disjunction can be defined in terms of each other, and consequently, only one of them needs to be taken as primitive. If φ D {\displaystyle \varphi ^{D}} is used as notation to designate the result of replacing every instance of conjunction with disjunction, and every instance of disjunction with conjunction (e.g. p ∧ q {\displaystyle p\land q} with q ∨ p {\displaystyle q\lor p} , or vice-versa), in a given formula φ {\displaystyle \varphi } , and if φ ¯ {\displaystyle {\overline {\varphi }}} is used as notation for replacing every sentence-letter in φ {\displaystyle \varphi } with its negation (e.g., p {\displaystyle p} with ¬ p {\displaystyle \neg p} ), and if the symbol ⊨ {\displaystyle \models } is used for semantic consequence and ⟚ for semantical equivalence between logical formulas, then it is demonstrable that φ D {\displaystyle \varphi ^{D}} ⟚ ¬ φ ¯ {\displaystyle \neg {\overline {\varphi }}} , and also that φ ⊨ ψ {\displaystyle \varphi \models \psi } if, and only if, ψ D ⊨ φ D {\displaystyle \psi ^{D}\models \varphi ^{D}} , and furthermore that if φ {\displaystyle \varphi } ⟚ ψ {\displaystyle \psi } then φ D {\displaystyle \varphi ^{D}} ⟚ ψ D {\displaystyle \psi ^{D}} . (In this context, φ ¯ D {\displaystyle {\overline {\varphi }}^{D}} is called the dual of a formula φ {\displaystyle \varphi } .) == Mutual definability == The connectives may be defined in terms of each other as follows: φ ∨ ψ :≡ ¬ ( ¬ φ ∧ ¬ ψ ) . {\displaystyle \varphi \vee \psi :\equiv \neg (\neg \varphi \land \neg \psi ).} (1) φ ∧ ψ :≡ ¬ ( ¬ φ ∨ ¬ ψ ) . {\displaystyle \varphi \land \psi :\equiv \neg (\neg \varphi \vee \neg \psi ).} (2) ¬ ( ¬ φ ∨ ¬ ψ ) ≡ ¬ ¬ ( ¬ ¬ φ ∧ ¬ ¬ ψ ) ≡ φ ∧ ψ . {\displaystyle \neg (\neg \varphi \vee \neg \psi )\equiv \neg \neg (\neg \neg \varphi \land \neg \neg \psi )\equiv \varphi \land \psi .} (3) === Functional completeness === Since the Disjunctive Normal Form Theorem shows that the set of connectives { ∧ , ∨ , ¬ } {\displaystyle \{\land ,\vee ,\neg \}} is functionally complete, these results show that the sets of connectives { ∧ , ¬ } {\displaystyle \{\land ,\neg \}} and { ∨ , ¬ } {\displaystyle \{\vee ,\neg \}} are themselves functionally complete as well. === De Morgan's laws === De Morgan's laws also follow from the definitions of these connectives in terms of each other, whichever direction is taken to do it. ¬ ( φ ∨ ψ ) ≡ ¬ φ ∧ ¬ ψ . {\displaystyle \neg (\varphi \vee \psi )\equiv \neg \varphi \land \neg \psi .} (4) ¬ ( φ ∧ ψ ) ≡ ¬ φ ∨ ¬ ψ . {\displaystyle \neg (\varphi \land \psi )\equiv \neg \varphi \vee \neg \psi .} (5) == Duality properties == The dual of a sentence is what you get by swapping all occurrences of ∨ {\textstyle \vee } and ∧ {\textstyle \land } , while also negating all propositional constants. For example, the dual of ( A ∧ B ∨ C ) {\textstyle (A\land B\vee C)} would be ( ¬ A ∨ ¬ B ∧ ¬ C ) {\textstyle (\neg A\vee \neg B\land \neg C)} . The dual of a formula φ {\textstyle \varphi } is notated as φ ∗ {\textstyle \varphi ^{*}} . The Duality Principle states that in classical propositional logic, any sentence is equivalent to the negation of its dual. 
Duality Principle: For all φ {\textstyle \varphi } , we have that φ = ¬ ( φ ∗ ) {\textstyle \varphi =\neg (\varphi ^{*})} . Proof: By induction on complexity. For the base case, we consider an arbitrary atomic sentence A {\textstyle A} . Since its dual is ¬ A {\textstyle \neg A} , the negation of its dual will be ¬ ¬ A {\textstyle \neg \neg A} , which is indeed equivalent to A {\textstyle A} . For the induction step, we consider an arbitrary φ {\textstyle \varphi } and assume that the result holds for all sentences of lower complexity. Three cases: If φ {\textstyle \varphi } is of the form ¬ ψ {\textstyle \neg \psi } for some ψ {\textstyle \psi } , then its dual will be ¬ ( ψ ∗ ) {\textstyle \neg (\psi ^{*})} and the negation of its dual will therefore be ¬ ¬ ( ψ ∗ ) {\textstyle \neg \neg (\psi ^{*})} . Now, since ψ {\textstyle \psi } is less complex than φ {\textstyle \varphi } , the induction hypothesis gives us that ψ = ¬ ( ψ ∗ ) {\textstyle \psi =\neg (\psi ^{*})} . By substitution, this gives us that φ = ¬ ¬ ( ψ ∗ ) {\textstyle \varphi =\neg \neg (\psi ^{*})} , which is to say that φ {\textstyle \varphi } is equivalent to the negation of its dual. If φ {\textstyle \varphi } is of the form ( ψ ∨ χ ) {\textstyle (\psi \vee \chi )} for some ψ {\textstyle \psi } and χ {\textstyle \chi } , then its dual will be ( ψ ∗ ∧ χ ∗ ) {\textstyle (\psi ^{*}\land \chi ^{*})} , and the negation of its dual will therefore be ¬ ( ψ ∗ ∧ χ ∗ ) {\textstyle \neg (\psi ^{*}\land \chi ^{*})} . Now, since ψ {\textstyle \psi } and χ {\textstyle \chi } are less complex than φ {\textstyle \varphi } , the induction hypothesis gives us that ψ = ¬ ( ψ ∗ ) {\textstyle \psi =\neg (\psi ^{*})} and χ = ¬ ( χ ∗ ) {\textstyle \chi =\neg (\chi ^{*})} . By substitution, this gives us that φ = ¬ ( ψ ∗ ) ∨ ¬ ( χ ∗ ) {\textstyle \varphi =\neg (\psi ^{*})\vee \neg (\chi ^{*})} which in turn gives us that φ = ¬ ( ψ ∗ ∧ χ ∗ ) {\textstyle \varphi =\neg (\psi ^{*}\land \chi ^{*})} by DeMorgan's Law. And that is once again just to say that φ {\textstyle \varphi } is equivalent to the negation of its dual. If φ {\textstyle \varphi } is of the form ψ ∨ χ {\textstyle \psi \vee \chi } , the result follows by analogous reasoning. === Further duality theorems === Assume φ ⊨ ψ {\displaystyle \varphi \models \psi } . Then φ ¯ ⊨ ψ ¯ {\displaystyle {\overline {\varphi }}\models {\overline {\psi }}} by uniform substitution of ¬ P i {\displaystyle \neg P_{i}} for P i {\displaystyle P_{i}} . Hence, ¬ ψ ⊨ ¬ φ {\displaystyle \neg \psi \models \neg \varphi } , by contraposition; so finally, ψ D ⊨ φ D {\displaystyle \psi ^{D}\models \varphi ^{D}} , by the property that φ D {\displaystyle \varphi ^{D}} ⟚ ¬ φ ¯ {\displaystyle \neg {\overline {\varphi }}} , which was just proved above. And since φ D D = φ {\displaystyle \varphi ^{DD}=\varphi } , it is also true that φ ⊨ ψ {\displaystyle \varphi \models \psi } if, and only if, ψ D ⊨ φ D {\displaystyle \psi ^{D}\models \varphi ^{D}} . And it follows, as a corollary, that if φ ⊨ ¬ ψ {\displaystyle \varphi \models \neg \psi } , then φ D ⊨ ¬ ψ D {\displaystyle \varphi ^{D}\models \neg \psi ^{D}} . == Conjunctive and disjunctive normal forms == For a formula φ {\displaystyle \varphi } in disjunctive normal form, the formula φ ¯ D {\displaystyle {\overline {\varphi }}^{D}} will be in conjunctive normal form, and given the result that § Negation is semantically equivalent to dual, it will be semantically equivalent to ¬ φ {\displaystyle \neg \varphi } . 
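The principle can be checked mechanically on small formulas. In the Python sketch below (formulas are encoded as nested tuples; the encoding and names are illustrative), dual swaps ∧ and ∨ and negates every atom, and the assertion confirms over a full truth table that the sample formula (A ∧ B) ∨ C is equivalent to the negation of its dual.

from itertools import product

# A formula is an atom name, or ('not', f), ('and', f, g), ('or', f, g).

def dual(f):
    """Swap 'and'/'or' and negate every atom."""
    if isinstance(f, str):
        return ('not', f)
    op = f[0]
    if op == 'not':
        return ('not', dual(f[1]))
    swapped = 'or' if op == 'and' else 'and'
    return (swapped, dual(f[1]), dual(f[2]))

def evaluate(f, v):
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not evaluate(f[1], v)
    if f[0] == 'and':
        return evaluate(f[1], v) and evaluate(f[2], v)
    return evaluate(f[1], v) or evaluate(f[2], v)

phi = ('or', ('and', 'A', 'B'), 'C')          # (A ∧ B) ∨ C
atoms = ['A', 'B', 'C']
for values in product((False, True), repeat=len(atoms)):
    v = dict(zip(atoms, values))
    # Duality principle: phi is equivalent to the negation of its dual.
    assert evaluate(phi, v) == (not evaluate(dual(phi), v))
print("phi is equivalent to the negation of its dual")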
This provides a procedure for converting between conjunctive normal form and disjunctive normal form. Since the Disjunctive Normal Form Theorem shows that every formula of propositional logic is expressible in disjunctive normal form, every formula is also expressible in conjunctive normal form by means of effecting the conversion to its dual. == References ==
Wikipedia/Duality_principle_(Boolean_algebra)
In mathematics, a cofinite subset of a set X {\displaystyle X} is a subset A {\displaystyle A} whose complement in X {\displaystyle X} is a finite set. In other words, A {\displaystyle A} contains all but finitely many elements of X . {\displaystyle X.} If the complement is not finite, but is countable, then one says the set is cocountable. These arise naturally when generalizing structures on finite sets to infinite sets, particularly on infinite products, as in the product topology or direct sum. This use of the prefix "co" to describe a property possessed by a set's complement is consistent with its use in other terms such as "comeagre set". == Boolean algebras == The set of all subsets of X {\displaystyle X} that are either finite or cofinite forms a Boolean algebra, which means that it is closed under the operations of union, intersection, and complementation. This Boolean algebra is the finite–cofinite algebra on X . {\displaystyle X.} In the other direction, a Boolean algebra A {\displaystyle A} has a unique non-principal ultrafilter (that is, a maximal filter not generated by a single element of the algebra) if and only if there exists an infinite set X {\displaystyle X} such that A {\displaystyle A} is isomorphic to the finite–cofinite algebra on X . {\displaystyle X.} In this case, the non-principal ultrafilter is the set of all cofinite subsets of X {\displaystyle X} . == Cofinite topology == The cofinite topology or the finite complement topology is a topology that can be defined on every set X . {\displaystyle X.} It has precisely the empty set and all cofinite subsets of X {\displaystyle X} as open sets. As a consequence, in the cofinite topology, the only closed subsets are finite sets, or the whole of X . {\displaystyle X.} For this reason, the cofinite topology is also known as the finite-closed topology. Symbolically, one writes the topology as T = { A ⊆ X : A = ∅ or X ∖ A is finite } . {\displaystyle {\mathcal {T}}=\{A\subseteq X:A=\varnothing {\mbox{ or }}X\setminus A{\mbox{ is finite}}\}.} This topology occurs naturally in the context of the Zariski topology. Since polynomials in one variable over a field K {\displaystyle K} are zero on finite sets, or the whole of K , {\displaystyle K,} the Zariski topology on K {\displaystyle K} (considered as affine line) is the cofinite topology. The same is true for any irreducible algebraic curve; it is not true, for example, for X Y = 0 {\displaystyle XY=0} in the plane. === Properties === Subspaces: Every subspace topology of the cofinite topology is also a cofinite topology. Compactness: Since every open set contains all but finitely many points of X , {\displaystyle X,} the space X {\displaystyle X} is compact and sequentially compact. Separation: The cofinite topology is the coarsest topology satisfying the T1 axiom; that is, it is the smallest topology for which every singleton set is closed. In fact, an arbitrary topology on X {\displaystyle X} satisfies the T1 axiom if and only if it contains the cofinite topology. If X {\displaystyle X} is finite then the cofinite topology is simply the discrete topology. If X {\displaystyle X} is not finite then this topology is not Hausdorff (T2), regular or normal because no two nonempty open sets are disjoint (that is, it is hyperconnected). === Double-pointed cofinite topology === The double-pointed cofinite topology is the cofinite topology with every point doubled; that is, it is the topological product of the cofinite topology with the indiscrete topology on a two-element set. 
It is not T0 or T1, since the points of each doublet are topologically indistinguishable. It is, however, R0 since topologically distinguishable points are separated. The space is compact as the product of two compact spaces; alternatively, it is compact because each nonempty open set contains all but finitely many points. For an example of the countable double-pointed cofinite topology, the set Z {\displaystyle \mathbb {Z} } of integers can be given a topology such that every even number 2 n {\displaystyle 2n} is topologically indistinguishable from the following odd number 2 n + 1 {\displaystyle 2n+1} . The closed sets are the unions of finitely many pairs 2 n , 2 n + 1 , {\displaystyle 2n,2n+1,} or the whole set. The open sets are the complements of the closed sets; namely, each open set consists of all but a finite number of pairs 2 n , 2 n + 1 , {\displaystyle 2n,2n+1,} or is the empty set. == Other examples == === Product topology === The product topology on a product of topological spaces ∏ X i {\displaystyle \prod X_{i}} has basis ∏ U i {\displaystyle \prod U_{i}} where U i ⊆ X i {\displaystyle U_{i}\subseteq X_{i}} is open, and cofinitely many U i = X i . {\displaystyle U_{i}=X_{i}.} The analog without requiring that cofinitely many factors are the whole space is the box topology. === Direct sum === The elements of the direct sum of modules ⨁ M i {\displaystyle \bigoplus M_{i}} are sequences α i ∈ M i {\displaystyle \alpha _{i}\in M_{i}} where cofinitely many α i = 0. {\displaystyle \alpha _{i}=0.} The analog without requiring that cofinitely many summands are zero is the direct product. == See also == Fréchet filter – frechet filterPages displaying wikidata descriptions as a fallback List of topologies – List of concrete topologies and topological spaces == References == Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978], Counterexamples in Topology (Dover reprint of 1978 ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-486-68735-3, MR 0507446 (See example 18)
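As a concrete illustration of the finite–cofinite algebra discussed above, the Python sketch below (the encoding is ad hoc) represents an element either by its finite set of members or by the finite complement of a cofinite set, and implements complementation, intersection and union; every result is again finite or cofinite, showing the closure that makes this family a Boolean algebra.

# An element of the finite-cofinite algebra on an infinite set X is encoded as
# ('fin', S): the finite set S, or ('cofin', S): the complement of the finite set S.

def complement(a):
    kind, s = a
    return ('cofin' if kind == 'fin' else 'fin', s)

def intersect(a, b):
    (ka, sa), (kb, sb) = a, b
    if ka == 'fin' and kb == 'fin':
        return ('fin', sa & sb)
    if ka == 'fin':                     # finite meet cofinite is finite
        return ('fin', sa - sb)
    if kb == 'fin':
        return ('fin', sb - sa)
    return ('cofin', sa | sb)           # complement of a union of two finite sets

def union(a, b):
    # De Morgan: a ∪ b = (a' ∩ b')'
    return complement(intersect(complement(a), complement(b)))

evens_up_to_10 = ('fin', frozenset({0, 2, 4, 6, 8, 10}))
all_but_primes = ('cofin', frozenset({2, 3, 5, 7}))

print(intersect(evens_up_to_10, all_but_primes))   # ('fin', frozenset({0, 4, 6, 8, 10}))
print(union(evens_up_to_10, all_but_primes))       # ('cofin', frozenset({3, 5, 7}))
print(complement(all_but_primes))                   # ('fin', frozenset({2, 3, 5, 7}))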
Wikipedia/Finite–cofinite_algebra
Algorithmic topology, or computational topology, is a subfield of topology with an overlap with areas of computer science, in particular, computational geometry and computational complexity theory. A primary concern of algorithmic topology, as its name suggests, is to develop efficient algorithms for solving problems that arise naturally in fields such as computational geometry, graphics, robotics, social science, structural biology, and chemistry, using methods from computable topology. == Major algorithms by subject area == === Algorithmic 3-manifold theory === A large family of algorithms concerning 3-manifolds revolve around normal surface theory, which is a phrase that encompasses several techniques to turn problems in 3-manifold theory into integer linear programming problems. Rubinstein and Thompson's 3-sphere recognition algorithm. This is an algorithm that takes as input a triangulated 3-manifold and determines whether or not the manifold is homeomorphic to the 3-sphere. It has exponential run-time in the number of tetrahedral simplexes in the initial 3-manifold, and also an exponential memory profile. Saul Schleimer went on to show the problem lies in the complexity class NP. Furthermore, Raphael Zentner showed that the problem lies in the complexity class coNP, provided that the generalized Riemann hypothesis holds. He uses instanton gauge theory, the geometrization theorem of 3-manifolds, and subsequent work of Greg Kuperberg on the complexity of knottedness detection. The connect-sum decomposition of 3-manifolds is also implemented in Regina, has exponential run-time and is based on a similar algorithm to the 3-sphere recognition algorithm. Determining that the Seifert-Weber 3-manifold contains no incompressible surface has been algorithmically implemented by Burton, Rubinstein and Tillmann and based on normal surface theory. The Manning algorithm is an algorithm to find hyperbolic structures on 3-manifolds whose fundamental group have a solution to the word problem. At present the JSJ decomposition has not been implemented algorithmically in computer software. Neither has the compression-body decomposition. There are some very popular and successful heuristics, such as SnapPea which has much success computing approximate hyperbolic structures on triangulated 3-manifolds. It is known that the full classification of 3-manifolds can be done algorithmically, in fact, it is known that deciding whether two closed, oriented 3-manifolds given by triangulations (simplicial complexes) are equivalent (homeomorphic) is elementary recursive. This generalizes the result on 3-sphere recognition. ==== Conversion algorithms ==== SnapPea implements an algorithm to convert a planar knot or link diagram into a cusped triangulation. This algorithm has a roughly linear run-time in the number of crossings in the diagram, and low memory profile. The algorithm is similar to the Wirthinger algorithm for constructing presentations of the fundamental group of link complements given by planar diagrams. Similarly, SnapPea can convert surgery presentations of 3-manifolds into triangulations of the presented 3-manifold. D. Thurston and F. Costantino have a procedure to construct a triangulated 4-manifold from a triangulated 3-manifold. Similarly, it can be used to construct surgery presentations of triangulated 3-manifolds, although the procedure is not explicitly written as an algorithm in principle it should have polynomial run-time in the number of tetrahedra of the given 3-manifold triangulation. S. 
Schleimer has an algorithm which produces a triangulated 3-manifold, given input a word (in Dehn twist generators) for the mapping class group of a surface. The 3-manifold is the one that uses the word as the attaching map for a Heegaard splitting of the 3-manifold. The algorithm is based on the concept of a layered triangulation. === Algorithmic knot theory === Determining whether or not a knot is trivial is known to be in the complexity classes NP as well as co-NP. The problem of determining the genus of a knot in a 3-manifold is NP-complete; however, while NP remains an upper bound on the complexity of determining the genus of a knot in R3 or S3, as of 2006 it was unknown whether the algorithmic problem of determining the genus of a knot in those particular 3-manifolds was still NP-hard. === Computational homotopy === Computational methods for homotopy groups of spheres. Computational methods for solving systems of polynomial equations. Brown has an algorithm to compute the homotopy groups of spaces that are finite Postnikov complexes, although it is not widely considered implementable. === Computational homology === Computation of homology groups of cell complexes reduces to bringing the boundary matrices into Smith normal form. Although this is a completely solved problem algorithmically, there are various technical obstacles to efficient computation for large complexes. There are two central obstacles. Firstly, the basic Smith form algorithm has cubic complexity in the size of the matrix involved since it uses row and column operations which makes it unsuitable for large cell complexes. Secondly, the intermediate matrices which result from the application of the Smith form algorithm get filled-in even if one starts and ends with sparse matrices. Efficient and probabilistic Smith normal form algorithms, as found in the LinBox library. Simple homotopic reductions for pre-processing homology computations, as in the Perseus software package. Algorithms to compute persistent homology of filtered complexes, as in the TDAstats R package. In some applications, such as in TDA, it is useful to have representatives of (co)homology classes that are as "small" as possible. This is known as the problem of (co)homology localization. On triangulated manifolds, given a chain representing a homology class, it is in general NP-hard to approximate the minimum-support homologous chain. However, the particular setting of approximating 1-cohomology localization on triangulated 2-manifolds is one of only three known problems whose hardness is equivalent to the Unique Games Conjecture. == See also == Computable topology (the study of the topological nature of computation) Computational geometry Digital topology Topological data analysis Spatial-temporal reasoning Experimental mathematics Geometric modeling == References == == External links == CompuTop software archive Workshop on Application of Topology in Science and Engineering Computational Topology at Stanford University Archived 2007-06-22 at the Wayback Machine Computational Homology Software (CHomP) at Rutgers University. Computational Homology Software (RedHom) at Jagellonian University Archived 2013-07-15 at the Wayback Machine. The Perseus software project for (persistent) homology. The javaPlex Persistent Homology software at Stanford. PHAT: persistent homology algorithms toolbox. == Books == Tomasz Kaczynski; Konstantin Mischaikow; Marian Mrozek (2004). Computational Homology. Springer. ISBN 0-387-40853-3. Afra J. Zomorodian (2005). 
Topology for Computing. Cambridge. ISBN 0-521-83666-2. Computational Topology: An Introduction, Herbert Edelsbrunner, John L. Harer, AMS Bookstore, 2010, ISBN 978-0-8218-4925-5
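The reduction of homology computation to linear algebra on boundary matrices, mentioned above, can be illustrated in a few lines. The Python sketch below is a simplification: it works over the two-element field GF(2), where the Betti numbers follow from the ranks of the boundary matrices and no integer Smith normal form is needed; the example space is a triangulated circle.

def rank_gf2(matrix):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    m = [row[:] for row in matrix]
    rank = 0
    cols = len(m[0]) if m else 0
    for col in range(cols):
        pivot = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                m[r] = [x ^ y for x, y in zip(m[r], m[rank])]
        rank += 1
    return rank

# A triangulated circle: three vertices and three edges, no triangles.
vertices = [0, 1, 2]
edges = [(0, 1), (0, 2), (1, 2)]

# Boundary matrix d1 (mod 2): rows indexed by vertices, columns by edges.
d1 = [[1 if v in e else 0 for e in edges] for v in vertices]

rank_d1 = rank_gf2(d1)
rank_d2 = 0                             # there are no 2-cells
b0 = len(vertices) - rank_d1            # dim C0 - rank d1   (d0 is the zero map)
b1 = (len(edges) - rank_d1) - rank_d2   # dim ker d1 - rank d2
print(b0, b1)                           # 1 1 -> one connected component, one 1-dimensional hole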
Wikipedia/Computational_topology
In mathematical logic, the Lindenbaum–Tarski algebra (or Lindenbaum algebra) of a logical theory T consists of the equivalence classes of sentences of the theory (i.e., the quotient, under the equivalence relation ~ defined such that p ~ q exactly when p and q are provably equivalent in T). That is, two sentences are equivalent if the theory T proves that each implies the other. The Lindenbaum–Tarski algebra is thus the quotient algebra obtained by factoring the algebra of formulas by this congruence relation. The algebra is named for logicians Adolf Lindenbaum and Alfred Tarski. Starting in the academic year 1926-1927, Lindenbaum pioneered his method in Jan Łukasiewicz's mathematical logic seminar, and the method was popularized and generalized in subsequent decades through work by Tarski. The Lindenbaum–Tarski algebra is considered the origin of the modern algebraic logic. == Operations == The operations in a Lindenbaum–Tarski algebra A are inherited from those in the underlying theory T. These typically include conjunction and disjunction, which are well-defined on the equivalence classes. When negation is also present in T, then A is a Boolean algebra, provided the logic is classical. If the theory T consists of the propositional tautologies, the Lindenbaum–Tarski algebra is the free Boolean algebra generated by the propositional variables. If T is closed for deduction, then the embedding of T/~ in A is a filter. Moreover, an ultrafilter in A corresponds to a complete consistent theory, establishing the equivalence between Lindenbaum's Lemma and the Ultrafilter Lemma. == Related algebras == Heyting algebras and interior algebras are the Lindenbaum–Tarski algebras for intuitionistic logic and the modal logic S4, respectively. A logic for which Tarski's method is applicable, is called algebraizable. There are however a number of logics where this is not the case, for instance the modal logics S1, S2, or S3, which lack the rule of necessitation (⊢φ implying ⊢□φ), so ~ (defined above) is not a congruence (because ⊢φ→ψ does not imply ⊢□φ→□ψ). Another type of logic where Tarski's method is inapplicable is relevance logics, because given two theorems an implication from one to the other may not itself be a theorem in a relevance logic. The study of the algebraization process (and notion) as topic of interest by itself, not necessarily by Tarski's method, has led to the development of abstract algebraic logic. == See also == Algebraic semantics (mathematical logic) Leibniz operator List of Boolean algebra topics == References == Hinman, P. (2005). Fundamentals of Mathematical Logic. A K Peters. ISBN 1-56881-262-0.
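Because classical propositional logic is sound and complete, provable equivalence of sentences coincides with truth-table equivalence, so the Lindenbaum–Tarski quotient can be simulated directly on small examples. The Python sketch below (formula encoding and names are illustrative) groups a handful of formulas in two propositional variables into their equivalence classes — elements of the Lindenbaum–Tarski algebra, which in this case is the free Boolean algebra on two generators.

from itertools import product

def evaluate(f, v):
    """Evaluate a formula given as a nested tuple over the atoms 'p' and 'q'."""
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == 'not':
        return not evaluate(f[1], v)
    if op == 'and':
        return evaluate(f[1], v) and evaluate(f[2], v)
    if op == 'or':
        return evaluate(f[1], v) or evaluate(f[2], v)
    if op == 'implies':
        return (not evaluate(f[1], v)) or evaluate(f[2], v)

def truth_table(f, atoms=('p', 'q')):
    return tuple(evaluate(f, dict(zip(atoms, vals)))
                 for vals in product((False, True), repeat=len(atoms)))

formulas = [
    ('implies', 'p', 'q'),
    ('or', ('not', 'p'), 'q'),
    ('not', ('and', 'p', ('not', 'q'))),
    ('and', 'p', 'q'),
    ('or', 'p', ('not', 'p')),
]

classes = {}
for f in formulas:
    classes.setdefault(truth_table(f), []).append(f)

# The first three formulas are provably (hence semantically) equivalent,
# so they land in the same equivalence class; five formulas give three classes here.
print(len(classes))    # 3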
Wikipedia/Lindenbaum_algebra
In the branch of mathematics known as universal algebra (and in its applications), a subdirectly irreducible algebra is an algebra that cannot be factored as a subdirect product of "simpler" algebras. Subdirectly irreducible algebras play a somewhat analogous role in algebra to primes in number theory. == Definition == A universal algebra A is said to be subdirectly irreducible when A has more than one element, and when any subdirect representation of A includes (as a factor) an algebra isomorphic to A, with the isomorphism being given by the projection map. == Examples == The two-element chain, as either a Boolean algebra, a Heyting algebra, a lattice, or a semilattice, is subdirectly irreducible. In fact, the two-element chain is the only subdirectly irreducible distributive lattice. Any finite chain with two or more elements, as a Heyting algebra, is subdirectly irreducible. (This is not the case for chains of three or more elements as either lattices or semilattices, which are subdirectly reducible to the two-element chain. The difference with Heyting algebras is that a → b need not be comparable with a under the lattice order even when b is.) Any finite cyclic group of order a power of a prime (i.e. any cyclic p-group) is subdirectly irreducible. (One weakness of the analogy between subdirect irreducibles and prime numbers is that the integers are subdirectly representable by any infinite family of nonisomorphic prime-power cyclic groups, e.g. just those of order a Mersenne prime assuming there are infinitely many.) In fact, an abelian group is subdirectly irreducible if and only if it is isomorphic to a cyclic p-group or isomorphic to a Prüfer group (an infinite but countable p-group, which is the direct limit of its finite p-subgroups). A vector space is subdirectly irreducible if and only if it has dimension one. == Properties == The subdirect representation theorem of universal algebra states that every algebra is subdirectly representable by its subdirectly irreducible quotients. An equivalent definition of "subdirect irreducible" therefore is any algebra A that is not subdirectly representable by those of its quotients not isomorphic to A. (This is not quite the same thing as "by its proper quotients" because a proper quotient of A may be isomorphic to A, for example the quotient of the semilattice (Z, min) obtained by identifying just the two elements 3 and 4.) An immediate corollary is that any variety, as a class closed under homomorphisms, subalgebras, and direct products, is determined by its subdirectly irreducible members, since every algebra A in the variety can be constructed as a subalgebra of a suitable direct product of the subdirectly irreducible quotients of A, all of which belong to the variety because A does. For this reason one often studies not the variety itself but just its subdirect irreducibles. An algebra A is subdirectly irreducible if and only if it contains two elements that are identified by every proper quotient, equivalently, if and only if its lattice Con A of congruences has a least nonidentity element. That is, any subdirect irreducible must contain a specific pair of elements witnessing its irreducibility in this way. Given such a witness (a, b) to subdirect irreducibility we say that the subdirect irreducible is (a, b)-irreducible.
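The congruence criterion just stated can be checked mechanically in small cases. The following illustrative sketch (not taken from the reference below) uses the fact that congruences of the cyclic group Z_n correspond to its subgroups, so Z_n is subdirectly irreducible exactly when it has a unique minimal nontrivial subgroup, i.e. when n is a prime power, as noted in the Examples section above.

```python
# A minimal sketch: Z_n is subdirectly irreducible iff its congruence (= subgroup)
# lattice has a least nonidentity element, i.e. a unique minimal nontrivial subgroup.
def is_prime(p):
    return p >= 2 and all(p % q for q in range(2, int(p ** 0.5) + 1))

def minimal_nontrivial_subgroups(n):
    # For each prime p dividing n, the subgroup generated by n // p has order p
    # and is minimal among the nontrivial subgroups of Z_n.
    return [frozenset((k * (n // p)) % n for k in range(p))
            for p in range(2, n + 1) if n % p == 0 and is_prime(p)]

def is_subdirectly_irreducible(n):
    return n > 1 and len(minimal_nontrivial_subgroups(n)) == 1

print([n for n in range(2, 20) if is_subdirectly_irreducible(n)])
# prime powers only: [2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19]
```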
Given any class C of similar algebras, Jónsson's lemma (due to Bjarni Jónsson) states that if the variety HSP(C) generated by C is congruence-distributive, its subdirect irreducibles are in HSPU(C), that is, they are quotients of subalgebras of ultraproducts of members of C. (If C is a finite set of finite algebras, the ultraproduct operation is redundant.) == Applications == A necessary and sufficient condition for a Heyting algebra to be subdirectly irreducible is for there to be a greatest element strictly below 1. The witnessing pair is that element and 1, and identifying any other pair a, b of elements identifies both a→b and b→a with 1 thereby collapsing everything above those two implications to 1. Hence every finite chain of two or more elements as a Heyting algebra is subdirectly irreducible. By Jónsson's Lemma, subdirectly irreducible algebras of a congruence-distributive variety generated by a finite set of finite algebras are no larger than the generating algebras, since the quotients and subalgebras of an algebra A are never larger than A itself. For example, the subdirect irreducibles in the variety generated by a finite linearly ordered Heyting algebra H must be just the nondegenerate quotients of H, namely all smaller linearly ordered nondegenerate Heyting algebras. The conditions cannot be dropped in general: for example, the variety of all Heyting algebras is generated by the set of its finite subdirectly irreducible algebras, but there exist subdirectly irreducible Heyting algebras of arbitrary (infinite) cardinality. There also exists a single finite algebra generating a (non-congruence-distributive) variety with arbitrarily large subdirect irreducibles. == References == Pierre Antoine Grillet (2007). Abstract algebra. Springer. ISBN 978-0-387-71567-4.
Wikipedia/Subdirectly_irreducible_algebra
In abstract algebra, an interior algebra is a certain type of algebraic structure that encodes the idea of the topological interior of a set. Interior algebras are to topology and the modal logic S4 what Boolean algebras are to set theory and ordinary propositional logic. Interior algebras form a variety of modal algebras. == Definition == An interior algebra is an algebraic structure with the signature ⟨S, ·, +, ′, 0, 1, I⟩ where ⟨S, ·, +, ′, 0, 1⟩ is a Boolean algebra and postfix I designates a unary operator, the interior operator, satisfying the identities: xI ≤ x xII = xI (xy)I = xIyI 1I = 1 xI is called the interior of x. The dual of the interior operator is the closure operator C defined by xC = ((x′)I)′. xC is called the closure of x. By the principle of duality, the closure operator satisfies the identities: xC ≥ x xCC = xC (x + y)C = xC + yC 0C = 0 If the closure operator is taken as primitive, the interior operator can be defined as xI = ((x′)C)′. Thus the theory of interior algebras may be formulated using the closure operator instead of the interior operator, in which case one considers closure algebras of the form ⟨S, ·, +, ′, 0, 1, C⟩, where ⟨S, ·, +, ′, 0, 1⟩ is again a Boolean algebra and C satisfies the above identities for the closure operator. Closure and interior algebras form dual pairs, and are paradigmatic instances of "Boolean algebras with operators." The early literature on this subject (mainly Polish topology) invoked closure operators, but the interior operator formulation eventually became the norm following the work of Wim Blok. == Open and closed elements == Elements of an interior algebra satisfying the condition xI = x are called open. The complements of open elements are called closed and are characterized by the condition xC = x. An interior of an element is always open and the closure of an element is always closed. Interiors of closed elements are called regular open and closures of open elements are called regular closed. Elements that are both open and closed are called clopen. 0 and 1 are clopen. An interior algebra is called Boolean if all its elements are open (and hence clopen). Boolean interior algebras can be identified with ordinary Boolean algebras as their interior and closure operators provide no meaningful additional structure. A special case is the class of trivial interior algebras, which are the single element interior algebras characterized by the identity 0 = 1. == Morphisms of interior algebras == === Homomorphisms === Interior algebras, by virtue of being algebraic structures, have homomorphisms. Given two interior algebras A and B, a map f : A → B is an interior algebra homomorphism if and only if f is a homomorphism between the underlying Boolean algebras of A and B, that also preserves interiors and closures. Hence: f(xI) = f(x)I; f(xC) = f(x)C. === Topomorphisms === Topomorphisms are another important, and more general, class of morphisms between interior algebras. A map f : A → B is a topomorphism if and only if f is a homomorphism between the Boolean algebras underlying A and B, that also preserves the open and closed elements of A. Hence: If x is open in A, then f(x) is open in B; If x is closed in A, then f(x) is closed in B. (Such morphisms have also been called stable homomorphisms and closure algebra semi-homomorphisms.) Every interior algebra homomorphism is a topomorphism, but not every topomorphism is an interior algebra homomorphism. 
=== Boolean homomorphisms === Early research often considered mappings between interior algebras that were homomorphisms of the underlying Boolean algebras but that did not necessarily preserve the interior or closure operator. Such mappings were called Boolean homomorphisms. (The terms closure homomorphism or topological homomorphism were used in the case where these were preserved, but this terminology is now redundant as the standard definition of a homomorphism in universal algebra requires that it preserves all operations.) Applications involving countably complete interior algebras (in which countable meets and joins always exist, also called σ-complete) typically made use of countably complete Boolean homomorphisms also called Boolean σ-homomorphisms—these preserve countable meets and joins. === Continuous morphisms === The earliest generalization of continuity to interior algebras was Sikorski's, based on the inverse image map of a continuous map. This is a Boolean homomorphism, preserves unions of sequences and includes the closure of an inverse image in the inverse image of the closure. Sikorski thus defined a continuous homomorphism as a Boolean σ-homomorphism f between two σ-complete interior algebras such that f(x)C ≤ f(xC). This definition had several difficulties: The construction acts contravariantly producing a dual of a continuous map rather than a generalization. On the one hand σ-completeness is too weak to characterize inverse image maps (completeness is required), on the other hand it is too restrictive for a generalization. (Sikorski remarked on using non-σ-complete homomorphisms but included σ-completeness in his axioms for closure algebras.) Later J. Schmid defined a continuous homomorphism or continuous morphism for interior algebras as a Boolean homomorphism f between two interior algebras satisfying f(xC) ≤ f(x)C. This generalizes the forward image map of a continuous map—the image of a closure is contained in the closure of the image. This construction is covariant but not suitable for category theoretic applications as it only allows construction of continuous morphisms from continuous maps in the case of bijections. (C. Naturman returned to Sikorski's approach while dropping σ-completeness to produce topomorphisms as defined above. In this terminology, Sikorski's original "continuous homomorphisms" are σ-complete topomorphisms between σ-complete interior algebras.) == Relationships to other areas of mathematics == === Topology === Given a topological space X = ⟨X, T⟩ one can form the power set Boolean algebra of X: ⟨P(X), ∩, ∪, ′, ø, X⟩ and extend it to an interior algebra A(X) = ⟨P(X), ∩, ∪, ′, ø, X, I⟩, where I is the usual topological interior operator. For all S ⊆ X it is defined by SI = ∪ {O | O ⊆ S and O is open in X} For all S ⊆ X the corresponding closure operator is given by SC = ∩ {C | S ⊆ C and C is closed in X} SI is the largest open subset of S and SC is the smallest closed superset of S in X. The open, closed, regular open, regular closed and clopen elements of the interior algebra A(X) are just the open, closed, regular open, regular closed and clopen subsets of X respectively in the usual topological sense. Every complete atomic interior algebra is isomorphic to an interior algebra of the form A(X) for some topological space X. Moreover, every interior algebra can be embedded in such an interior algebra giving a representation of an interior algebra as a topological field of sets. 
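A minimal illustration of the construction A(X) just described: take a three-point space with a chain of open sets and verify the four interior-operator identities from the definition. The space, the chosen topology, and the helper names are invented for this sketch.

```python
# Power-set interior algebra A(X) of a small finite topological space.
from itertools import combinations

X = frozenset({1, 2, 3})
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), X]   # a topology on X (a chain of open sets)

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def interior(S):
    # S^I is the union of all open sets contained in S.
    return frozenset().union(*(O for O in opens if O <= S))

P = subsets(X)
for S in P:
    assert interior(S) <= S                                   # x^I <= x
    assert interior(interior(S)) == interior(S)               # x^II = x^I
    for T in P:
        assert interior(S & T) == interior(S) & interior(T)   # (xy)^I = x^I y^I
assert interior(X) == X                                       # 1^I = 1
print("A(X) satisfies the interior-operator identities")
```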
The properties of the structure A(X) are the very motivation for the definition of interior algebras. Because of this intimate connection with topology, interior algebras have also been called topo-Boolean algebras or topological Boolean algebras. Given a continuous map between two topological spaces f : X → Y we can define a complete topomorphism A(f) : A(Y) → A(X) by A(f)(S) = f−1[S] for all subsets S of Y. Every complete topomorphism between two complete atomic interior algebras can be derived in this way. If Top is the category of topological spaces and continuous maps and Cit is the category of complete atomic interior algebras and complete topomorphisms then Top and Cit are dually isomorphic and A : Top → Cit is a contravariant functor that is a dual isomorphism of categories. A(f) is a homomorphism if and only if f is a continuous open map. Under this dual isomorphism of categories many natural topological properties correspond to algebraic properties, in particular connectedness properties correspond to irreducibility properties: X is empty if and only if A(X) is trivial X is indiscrete if and only if A(X) is simple X is discrete if and only if A(X) is Boolean X is almost discrete if and only if A(X) is semisimple X is finitely generated (Alexandrov) if and only if A(X) is operator complete i.e. its interior and closure operators distribute over arbitrary meets and joins respectively X is connected if and only if A(X) is directly indecomposable X is ultraconnected if and only if A(X) is finitely subdirectly irreducible X is compact ultra-connected if and only if A(X) is subdirectly irreducible ==== Generalized topology ==== The modern formulation of topological spaces in terms of topologies of open subsets, motivates an alternative formulation of interior algebras: A generalized topological space is an algebraic structure of the form ⟨B, ·, +, ′, 0, 1, T⟩ where ⟨B, ·, +, ′, 0, 1⟩ is a Boolean algebra as usual, and T is a unary relation on B (subset of B) such that: 0,1 ∈ T T is closed under arbitrary joins (i.e. if a join of an arbitrary subset of T exists then it will be in T) T is closed under finite meets For every element b of B, the join Σ{a ∈T | a ≤ b} exists T is said to be a generalized topology in the Boolean algebra. Given an interior algebra its open elements form a generalized topology. Conversely given a generalized topological space ⟨B, ·, +, ′, 0, 1, T⟩ we can define an interior operator on B by bI = Σ{a ∈T | a ≤ b} thereby producing an interior algebra whose open elements are precisely T. Thus generalized topological spaces are equivalent to interior algebras. Considering interior algebras to be generalized topological spaces, topomorphisms are then the standard homomorphisms of Boolean algebras with added relations, so that standard results from universal algebra apply. ==== Neighbourhood functions and neighbourhood lattices ==== The topological concept of neighbourhoods can be generalized to interior algebras: An element y of an interior algebra is said to be a neighbourhood of an element x if x ≤ yI. The set of neighbourhoods of x is denoted by N(x) and forms a filter. This leads to another formulation of interior algebras: A neighbourhood function on a Boolean algebra is a mapping N from its underlying set B to its set of filters, such that: For all x ∈ B, max{y ∈ B | x ∈ N(y)} exists For all x,y ∈ B, x ∈ N(y) if and only if there is a z ∈ B such that y ≤ z ≤ x and z ∈ N(z). 
The mapping N of elements of an interior algebra to their filters of neighbourhoods is a neighbourhood function on the underlying Boolean algebra of the interior algebra. Moreover, given a neighbourhood function N on a Boolean algebra with underlying set B, we can define an interior operator by xI = max{y ∈ B | x ∈ N(y)} thereby obtaining an interior algebra. ⁠ N ( x ) {\displaystyle N(x)} ⁠ will then be precisely the filter of neighbourhoods of x in this interior algebra. Thus interior algebras are equivalent to Boolean algebras with specified neighbourhood functions. In terms of neighbourhood functions, the open elements are precisely those elements x such that x ∈ N(x). In terms of open elements x ∈ N(y) if and only if there is an open element z such that y ≤ z ≤ x. Neighbourhood functions may be defined more generally on (meet)-semilattices producing the structures known as neighbourhood (semi)lattices. Interior algebras may thus be viewed as precisely the Boolean neighbourhood lattices i.e. those neighbourhood lattices whose underlying semilattice forms a Boolean algebra. === Modal logic === Given a theory (set of formal sentences) M in the modal logic S4, we can form its Lindenbaum–Tarski algebra: L(M) = ⟨M / ~, ∧, ∨, ¬, F, T, □⟩ where ~ is the equivalence relation on sentences in M given by p ~ q if and only if p and q are logically equivalent in M, and M / ~ is the set of equivalence classes under this relation. Then L(M) is an interior algebra. The interior operator in this case corresponds to the modal operator □ (necessarily), while the closure operator corresponds to ◊ (possibly). This construction is a special case of a more general result for modal algebras and modal logic. The open elements of L(M) correspond to sentences that are only true if they are necessarily true, while the closed elements correspond to those that are only false if they are necessarily false. Because of their relation to S4, interior algebras are sometimes called S4 algebras or Lewis algebras, after the logician C. I. Lewis, who first proposed the modal logics S4 and S5. === Preorders === Since interior algebras are (normal) Boolean algebras with operators, they can be represented by fields of sets on appropriate relational structures. In particular, since they are modal algebras, they can be represented as fields of sets on a set with a single binary relation, called a Kripke frame. The Kripke frames corresponding to interior algebras are precisely the preordered sets. Preordered sets (also called S4-frames) provide the Kripke semantics of the modal logic S4, and the connection between interior algebras and preorders is deeply related to their connection with modal logic. Given a preordered set X = ⟨X, «⟩ we can construct an interior algebra B(X) = ⟨P(X), ∩, ∪, ′, ø, X, I⟩ from the power set Boolean algebra of X where the interior operator I is given by SI = {x ∈ X | for all y ∈ X, x « y implies y ∈ S} for all S ⊆ X. The corresponding closure operator is given by SC = {x ∈ X | there exists a y ∈ S with y « x} for all S ⊆ X. SI is the set of all worlds inaccessible from worlds outside S, and SC is the set of all worlds accessible from some world in S. Every interior algebra can be embedded in an interior algebra of the form B(X) for some preordered set X giving the above-mentioned representation as a field of sets (a preorder field). This construction and representation theorem is a special case of the more general result for modal algebras and Kripke frames. 
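The construction B(X) can likewise be sketched for a tiny preorder. The example below (an illustrative relation and invented names, not drawn from the article's sources) uses the usual order on {0, 1, 2}, computes S^I as the set of elements all of whose successors under « stay in S, and obtains the closure through the duality xC = ((x′)I)′ noted earlier.

```python
# Interior algebra B(X) of a small preordered set.
from itertools import combinations

X = [0, 1, 2]

def rel(x, y):                  # the preorder x « y; here the usual order on {0, 1, 2}
    return x <= y

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def interior(S):
    # S^I = {x | every y with x « y lies in S}.
    return frozenset(x for x in X if all(y in S for y in X if rel(x, y)))

def closure(S):
    # Obtained from the duality x^C = ((x')^I)'.
    return frozenset(X) - interior(frozenset(X) - S)

for S in subsets(X):
    assert interior(S) <= S <= closure(S)
    assert interior(interior(S)) == interior(S)
    assert closure(closure(S)) == closure(S)
print("B(X) satisfies the interior and closure identities for this preorder")
```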
In this regard, interior algebras are particularly interesting because of their connection to topology. The construction provides the preordered set X with a topology, the Alexandrov topology, producing a topological space T(X) whose open sets are: {O ⊆ X | for all x ∈ O and all y ∈ X, x « y implies y ∈ O}. The corresponding closed sets are: {C ⊆ X | for all x ∈ C and all y ∈ X, y « x implies y ∈ C}. In other words, the open sets are the ones whose worlds are inaccessible from outside (the up-sets), and the closed sets are the ones for which every outside world is inaccessible from inside (the down-sets). Moreover, B(X) = A(T(X)). === Monadic Boolean algebras === Any monadic Boolean algebra can be considered to be an interior algebra where the interior operator is the universal quantifier and the closure operator is the existential quantifier. The monadic Boolean algebras are then precisely the variety of interior algebras satisfying the identity xIC = xI. In other words, they are precisely the interior algebras in which every open element is closed or equivalently, in which every closed element is open. Moreover, such interior algebras are precisely the semisimple interior algebras. They are also the interior algebras corresponding to the modal logic S5, and so have also been called S5 algebras. In the relationship between preordered sets and interior algebras they correspond to the case where the preorder is an equivalence relation, reflecting the fact that such preordered sets provide the Kripke semantics for S5. This also reflects the relationship between the monadic logic of quantification (for which monadic Boolean algebras provide an algebraic description) and S5 where the modal operators □ (necessarily) and ◊ (possibly) can be interpreted in the Kripke semantics using monadic universal and existential quantification, respectively, without reference to an accessibility relation. === Heyting algebras === The open elements of an interior algebra form a Heyting algebra and the closed elements form a dual Heyting algebra. The regular open elements and regular closed elements correspond to the pseudo-complemented elements and dual pseudo-complemented elements of these algebras respectively and thus form Boolean algebras. The clopen elements correspond to the complemented elements and form a common subalgebra of these Boolean algebras as well as of the interior algebra itself. Every Heyting algebra can be represented as the open elements of an interior algebra and the latter may be chosen to be an interior algebra generated by its open elements—such interior algebras correspond one-to-one with Heyting algebras (up to isomorphism) being the free Boolean extensions of the latter. Heyting algebras play the same role for intuitionistic logic that interior algebras play for the modal logic S4 and Boolean algebras play for propositional logic. The relation between Heyting algebras and interior algebras reflects the relationship between intuitionistic logic and S4, in which one can interpret theories of intuitionistic logic as S4 theories closed under necessity. The one-to-one correspondence between Heyting algebras and interior algebras generated by their open elements reflects the correspondence between extensions of intuitionistic logic and normal extensions of the modal logic S4.Grz. === Derivative algebras === Given an interior algebra A, the closure operator obeys the axioms of the derivative operator, D. 
Hence we can form a derivative algebra D(A) with the same underlying Boolean algebra as A by using the closure operator as a derivative operator. Thus interior algebras are derivative algebras. From this perspective, they are precisely the variety of derivative algebras satisfying the identity xD ≥ x. Derivative algebras provide the appropriate algebraic semantics for the modal logic wK4. Hence derivative algebras stand to topological derived sets and wK4 as interior/closure algebras stand to topological interiors/closures and S4. Given a derivative algebra V with derivative operator D, we can form an interior algebra I(V) with the same underlying Boolean algebra as V, with interior and closure operators defined by xI = x·x ′ D ′ and xC = x + xD, respectively. Thus every derivative algebra can be regarded as an interior algebra. Moreover, given an interior algebra A, we have I(D(A)) = A. However, D(I(V)) = V does not necessarily hold for every derivative algebra V. == Stone duality and representation for interior algebras == Stone duality provides a category theoretic duality between Boolean algebras and a class of topological spaces known as Boolean spaces. Building on nascent ideas of relational semantics (later formalized by Kripke) and a result of R. S. Pierce, Jónsson, Tarski and G. Hansoul extended Stone duality to Boolean algebras with operators by equipping Boolean spaces with relations that correspond to the operators via a power set construction. In the case of interior algebras the interior (or closure) operator corresponds to a pre-order on the Boolean space. Homomorphisms between interior algebras correspond to a class of continuous maps between the Boolean spaces known as pseudo-epimorphisms or p-morphisms for short. This generalization of Stone duality to interior algebras based on the Jónsson–Tarski representation was investigated by Leo Esakia and is also known as the Esakia duality for S4-algebras (interior algebras) and is closely related to the Esakia duality for Heyting algebras. Whereas the Jónsson–Tarski generalization of Stone duality applies to Boolean algebras with operators in general, the connection between interior algebras and topology allows for another method of generalizing Stone duality that is unique to interior algebras. An intermediate step in the development of Stone duality is Stone's representation theorem, which represents a Boolean algebra as a field of sets. The Stone topology of the corresponding Boolean space is then generated using the field of sets as a topological basis. Building on the topological semantics introduced by Tang Tsao-Chen for Lewis's modal logic, McKinsey and Tarski showed that by generating a topology equivalent to using only the complexes that correspond to open elements as a basis, a representation of an interior algebra is obtained as a topological field of sets—a field of sets on a topological space that is closed with respect to taking interiors or closures. By equipping topological fields of sets with appropriate morphisms known as field maps, C. Naturman showed that this approach can be formalized as a category theoretic Stone duality in which the usual Stone duality for Boolean algebras corresponds to the case of interior algebras having redundant interior operator (Boolean interior algebras). 
The pre-order obtained in the Jónsson–Tarski approach corresponds to the accessibility relation in the Kripke semantics for an S4 theory, while the intermediate field of sets corresponds to a representation of the Lindenbaum–Tarski algebra for the theory using the sets of possible worlds in the Kripke semantics in which sentences of the theory hold. Moving from the field of sets to a Boolean space somewhat obfuscates this connection. By treating fields of sets on pre-orders as a category in its own right this deep connection can be formulated as a category theoretic duality that generalizes Stone representation without topology. R. Goldblatt had shown that with restrictions to appropriate homomorphisms such a duality can be formulated for arbitrary modal algebras and Kripke frames. Naturman showed that in the case of interior algebras this duality applies to more general topomorphisms and can be factored via a category theoretic functor through the duality with topological fields of sets. The latter represent the Lindenbaum–Tarski algebra using sets of points satisfying sentences of the S4 theory in the topological semantics. The pre-order can be obtained as the specialization pre-order of the McKinsey–Tarski topology. The Esakia duality can be recovered via a functor that replaces the field of sets with the Boolean space it generates. Via a functor that instead replaces the pre-order with its corresponding Alexandrov topology, an alternative representation of the interior algebra as a field of sets is obtained where the topology is the Alexandrov bico-reflection of the McKinsey–Tarski topology. The approach of formulating a topological duality for interior algebras using both the Stone topology of the Jónsson–Tarski approach and the Alexandrov topology of the pre-order to form a bi-topological space has been investigated by G. Bezhanishvili, R.Mines, and P.J. Morandi. The McKinsey–Tarski topology of an interior algebra is the intersection of the former two topologies. == Metamathematics == Grzegorczyk proved the first-order theory of closure algebras undecidable. Naturman demonstrated that the theory is hereditarily undecidable (all its subtheories are undecidable) and demonstrated an infinite chain of elementary classes of interior algebras with hereditarily undecidable theories. == Notes == == References == Blok, W.A., 1976, Varieties of interior algebras, Ph.D. thesis, University of Amsterdam. Esakia, L., 2004, "Intuitionistic logic and modality via topology," Annals of Pure and Applied Logic 127: 155-70. McKinsey, J.C.C. and Alfred Tarski, 1944, "The Algebra of Topology," Annals of Mathematics 45: 141-91. Naturman, C.A., 1991, Interior Algebras and Topology, Ph.D. thesis, University of Cape Town Department of Mathematics. Bezhanishvili, G., Mines, R. and Morandi, P.J., 2008, Topo-canonical completions of closure algebras and Heyting algebras, Algebra Universalis 58: 1-34. Schmid, J., 1973, On the compactification of closure algebras, Fundamenta Mathematicae 79: 33-48 Sikorski R., 1955, Closure homomorphisms and interior mappings, Fundamenta Mathematicae 41: 12-20
Wikipedia/Generalized_topology
In mathematical logic and logic programming, a Horn clause is a logical formula of a particular rule-like form that gives it useful properties for use in logic programming, formal specification, universal algebra and model theory. Horn clauses are named for the logician Alfred Horn, who first pointed out their significance in 1951. == Definition == A Horn clause is a disjunctive clause (a disjunction of literals) with at most one positive, i.e. unnegated, literal. Conversely, a disjunction of literals with at most one negated literal is called a dual-Horn clause. A Horn clause with exactly one positive literal is a definite clause or a strict Horn clause; a definite clause with no negative literals is a unit clause, and a unit clause without variables is a fact; a Horn clause without a positive literal is a goal clause. The empty clause, consisting of no literals (which is equivalent to false), is a goal clause. These three kinds of Horn clauses are illustrated by the following propositional example: the definite clause ¬p ∨ ¬q ∨ ... ∨ ¬t ∨ u, which can be read as the implication (p ∧ q ∧ ... ∧ t) → u; the fact u; and the goal clause ¬p ∨ ¬q ∨ ... ∨ ¬t, which can be read as (p ∧ q ∧ ... ∧ t) → false. All variables in a clause are implicitly universally quantified with the scope being the entire clause. Thus, for example, ¬human(X) ∨ mortal(X) stands for ∀X (¬human(X) ∨ mortal(X)), which is logically equivalent to ∀X (human(X) → mortal(X)). === Significance === Horn clauses play a basic role in constructive logic and computational logic. They are important in automated theorem proving by first-order resolution, because the resolvent of two Horn clauses is itself a Horn clause, and the resolvent of a goal clause and a definite clause is a goal clause. These properties of Horn clauses can lead to greater efficiency of proving a theorem: the goal clause is the negation of this theorem; see the goal clause in the propositional example above. Intuitively, if we wish to prove φ, we assume ¬φ (the goal) and check whether this assumption leads to a contradiction. If so, then φ must hold. This way, a mechanical proving tool needs to maintain only one set of formulas (assumptions), rather than two sets (assumptions and (sub)goals). Propositional Horn clauses are also of interest in computational complexity. The problem of finding truth-value assignments to make a conjunction of propositional Horn clauses true is known as HORNSAT. This problem is P-complete and solvable in linear time. In contrast, the unrestricted Boolean satisfiability problem is an NP-complete problem. In universal algebra, definite Horn clauses are generally called quasi-identities; classes of algebras definable by a set of quasi-identities are called quasivarieties and enjoy some of the good properties of the more restrictive notion of a variety, i.e., an equational class. From the model-theoretical point of view, Horn sentences are important since they are exactly (up to logical equivalence) those sentences preserved under reduced products; in particular, they are preserved under direct products. On the other hand, there are sentences that are not Horn but are nevertheless preserved under arbitrary direct products. == Logic programming == Horn clauses are also the basis of logic programming, where it is common to write definite clauses in the form of an implication: (p ∧ q ∧ ... ∧ t) → u. In fact, the resolution of a goal clause with a definite clause to produce a new goal clause is the basis of the SLD resolution inference rule, used in the implementation of the logic programming language Prolog. In logic programming, a definite clause behaves as a goal-reduction procedure.
For example, the definite clause written above behaves as the procedure: to show u, show p and show q and ... and show t. To emphasize this reverse use of the clause, it is often written in the reverse form: u ← (p ∧ q ∧ ... ∧ t). In Prolog this is written as: u :- p, q, ..., t. In logic programming, a goal clause, which has the logical form false ← (p ∧ q ∧ ... ∧ t), represents the negation of a problem to be solved. The problem itself is an existentially quantified conjunction of positive literals: ∃X (p ∧ q ∧ ... ∧ t). The Prolog notation does not have explicit quantifiers and is written in the form: :- p, q, ..., t. This notation is ambiguous in the sense that it can be read either as a statement of the problem or as a statement of the denial of the problem. However, both readings are correct. In both cases, solving the problem amounts to deriving the empty clause. In Prolog notation this is equivalent to deriving: false. If the top-level goal clause is read as the denial of the problem, then the empty clause represents false and the proof of the empty clause is a refutation of the denial of the problem. If the top-level goal clause is read as the problem itself, then the empty clause represents true, and the proof of the empty clause is a proof that the problem has a solution. The solution of the problem is a substitution of terms for the variables X in the top-level goal clause, which can be extracted from the resolution proof. Used in this way, goal clauses are similar to conjunctive queries in relational databases, and Horn clause logic is equivalent in computational power to a universal Turing machine. Van Emden and Kowalski (1976) investigated the model-theoretic properties of Horn clauses in the context of logic programming, showing that every set of definite clauses D has a unique minimal model M. An atomic formula A is logically implied by D if and only if A is true in M. It follows that a problem P represented by an existentially quantified conjunction of positive literals is logically implied by D if and only if P is true in M. The minimal model semantics of Horn clauses is the basis for the stable model semantics of logic programs. == See also == Constrained Horn clauses Propositional calculus == Notes == == References == Burris, Stanley; Sankappanavar, H.P., eds. (1981). A Course in Universal Algebra. Springer-Verlag. ISBN 0-387-90578-2. Buss, Samuel R. (1998). "An Introduction to Proof Theory". In Samuel R. Buss (ed.). Handbook of Proof Theory. Studies in Logic and the Foundations of Mathematics. Vol. 137. Elsevier B.V. pp. 1–78. doi:10.1016/S0049-237X(98)80016-5. ISBN 978-0-444-89840-1. ISSN 0049-237X. Chang, Chen Chung; Keisler, H. Jerome (1990) [1973]. Model Theory. Studies in Logic and the Foundations of Mathematics (3rd ed.). Elsevier. ISBN 978-0-444-88054-3. Dowling, William F.; Gallier, Jean H. (1984). "Linear-time algorithms for testing the satisfiability of propositional Horn formulae". Journal of Logic Programming. 1 (3): 267–284. doi:10.1016/0743-1066(84)90014-1. van Emden, M. H.; Kowalski, R. A. (1976). "The semantics of predicate logic as a programming language" (PDF). Journal of the ACM. 23 (4): 733–742. CiteSeerX 10.1.1.64.9246. doi:10.1145/321978.321991. S2CID 11048276. Horn, Alfred (1951). "On sentences which are true of direct unions of algebras". Journal of Symbolic Logic. 16 (1): 14–21. doi:10.2307/2268661. JSTOR 2268661. S2CID 42534337. Lau, Kung-Kiu; Ornaghi, Mario (2004). "Specifying Compositional Units for Correct Program Development in Computational Logic". Program Development in Computational Logic. Lecture Notes in Computer Science. Vol. 3049. pp. 1–29. doi:10.1007/978-3-540-25951-0_1. ISBN 978-3-540-22152-4.
Makowsky, J.A. (1987). "Why Horn Formulas Matter in Computer Science: Initial Structures and Generic Examples" (PDF). Journal of Computer and System Sciences. 34 (2–3): 266–292. doi:10.1016/0022-0000(87)90027-4.
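Picking up the HORNSAT problem mentioned in the Significance section, here is a minimal illustrative sketch of the forward-chaining (unit propagation) decision procedure. It is a naive version rather than the optimized linear-time algorithm of Dowling and Gallier cited above, and the clause encoding is an assumption made for this example.

```python
# A minimal sketch of HORNSAT by naive forward chaining (unit propagation).
# A clause is encoded as (body, head): body is a frozenset of atoms, head is an
# atom for a definite clause and None for a goal clause.
def horn_sat(clauses):
    true_atoms = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= true_atoms:
                if head is None:
                    return None          # a goal clause is violated: unsatisfiable
                if head not in true_atoms:
                    true_atoms.add(head)
                    changed = True
    return true_atoms                    # the minimal model of the definite clauses

# u :- p, q.    p.    q.    together with the goal clause  :- u, r.
clauses = [(frozenset({"p", "q"}), "u"),
           (frozenset(), "p"),
           (frozenset(), "q"),
           (frozenset({"u", "r"}), None)]
print(horn_sat(clauses))                 # {'p', 'q', 'u'}: satisfiable, since r is never forced
```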
Wikipedia/Universal_Horn_theory
Łukasiewicz–Moisil algebras (LMn algebras) were introduced in the 1940s by Grigore Moisil (initially under the name of Łukasiewicz algebras) in the hope of giving algebraic semantics for the n-valued Łukasiewicz logic. However, in 1956 Alan Rose discovered that for n ≥ 5, the Łukasiewicz–Moisil algebra does not model the Łukasiewicz logic. A faithful model for the ℵ0-valued (infinitely-many-valued) Łukasiewicz–Tarski logic was provided by C. C. Chang's MV-algebra, introduced in 1958. For the axiomatically more complicated (finite) n-valued Łukasiewicz logics, suitable algebras were published in 1977 by Revaz Grigolia and called MVn-algebras. MVn-algebras are a subclass of LMn-algebras, and the inclusion is strict for n ≥ 5. In 1982 Roberto Cignoli published some additional constraints that added to LMn-algebras produce proper models for n-valued Łukasiewicz logic; Cignoli called his discovery proper Łukasiewicz algebras. Moisil, however, published in 1964 a logic to match his algebra (in the general n ≥ 5 case), now called Moisil logic. After coming in contact with Zadeh's fuzzy logic, in 1968 Moisil also introduced an infinitely-many-valued logic variant and its corresponding LMθ algebras. Although the Łukasiewicz implication cannot be defined in an LMn algebra for n ≥ 5, the Heyting implication can be, i.e. LMn algebras are Heyting algebras; as a result, Moisil logics can also be developed (from a purely logical standpoint) in the framework of Brouwer's intuitionistic logic. == Definition == An LMn algebra is a De Morgan algebra (a notion also introduced by Moisil) with n-1 additional unary, "modal" operations: ∇ 1 , … , ∇ n − 1 {\displaystyle \nabla _{1},\ldots ,\nabla _{n-1}} , i.e. an algebra of signature ( A , ∨ , ∧ , ¬ , ∇ j ∈ J , 0 , 1 ) {\displaystyle (A,\vee ,\wedge ,\neg ,\nabla _{j\in J},0,1)} where J = { 1, 2, ... n-1 }. (Some sources denote the additional operators as ∇ j ∈ J n {\displaystyle \nabla _{j\in J}^{n}} to emphasize that they depend on the order n of the algebra.) The additional unary operators ∇j must satisfy the following axioms for all x, y ∈ A and j, k ∈ J: ∇ j ( x ∨ y ) = ( ∇ j x ) ∨ ( ∇ j y ) {\displaystyle \nabla _{j}(x\vee y)=(\nabla _{j}\;x)\vee (\nabla _{j}\;y)} ∇ j x ∨ ¬ ∇ j x = 1 {\displaystyle \nabla _{j}\;x\vee \neg \nabla _{j}\;x=1} ∇ j ( ∇ k x ) = ∇ k x {\displaystyle \nabla _{j}(\nabla _{k}\;x)=\nabla _{k}\;x} ∇ j ¬ x = ¬ ∇ n − j x {\displaystyle \nabla _{j}\neg x=\neg \nabla _{n-j}\;x} ∇ 1 x ≤ ∇ 2 x ⋯ ≤ ∇ n − 1 x {\displaystyle \nabla _{1}\;x\leq \nabla _{2}\;x\cdots \leq \nabla _{n-1}\;x} if ∇ j x = ∇ j y {\displaystyle \nabla _{j}\;x=\nabla _{j}\;y} for all j ∈ J, then x = y. (The adjective "modal" is related to the [ultimately failed] program of Tarski and Łukasiewicz to axiomatize modal logic using many-valued logic.) == Elementary properties == The duals of some of the above axioms follow as properties: ∇ j ( x ∧ y ) = ( ∇ j x ) ∧ ( ∇ j y ) {\displaystyle \nabla _{j}(x\wedge y)=(\nabla _{j}\;x)\wedge (\nabla _{j}\;y)} ∇ j x ∧ ¬ ∇ j x = 0 {\displaystyle \nabla _{j}\;x\wedge \neg \nabla _{j}\;x=0} Additionally: ∇ j 0 = 0 {\displaystyle \nabla _{j}\;0=0} and ∇ j 1 = 1 {\displaystyle \nabla _{j}\;1=1} . In other words, the unary "modal" operations ∇ j {\displaystyle \nabla _{j}} are lattice endomorphisms. == Examples == LM2 algebras are the Boolean algebras. The canonical Łukasiewicz algebra L n {\displaystyle {\mathcal {L}}_{n}} that Moisil had in mind was over the set L n = { 0 , 1 n − 1 , 2 n − 1 , . . .
, n − 2 n − 1 , 1 } {\displaystyle L_{n}=\{0,\ {\frac {1}{n-1}},{\frac {2}{n-1}},...,{\frac {n-2}{n-1}}\ ,1\}} with negation ¬ x = 1 − x {\displaystyle \neg x=1-x} conjunction x ∧ y = min { x , y } {\displaystyle x\wedge y=\min\{x,y\}} and disjunction x ∨ y = max { x , y } {\displaystyle x\vee y=\max\{x,y\}} and the unary "modal" operators: ∇ j ( i n − 1 ) = { 0 if i + j < n 1 if i + j ≥ n i ∈ { 0 } ∪ J , j ∈ J . {\displaystyle \nabla _{j}\left({\frac {i}{n-1}}\right)=\;{\begin{cases}0&{\mbox{if }}i+j<n\\1&{\mbox{if }}i+j\geq n\\\end{cases}}\quad i\in \{0\}\cup J,\;j\in J.} If B is a Boolean algebra, then the algebra over the set B[2] ≝ {(x, y) ∈ B×B | x ≤ y} with the lattice operations defined pointwise and with ¬(x, y) ≝ (¬y, ¬x), and with the unary "modal" operators ∇2(x, y) ≝ (y, y) and ∇1(x, y) = ¬∇2¬(x, y) = (x, x) [derived by axiom 4] is a three-valued Łukasiewicz algebra. == Representation == Moisil proved that every LMn algebra can be embedded in a direct product (of copies) of the canonical L n {\displaystyle {\mathcal {L}}_{n}} algebra. As a corollary, every LMn algebra is a subdirect product of subalgebras of L n {\displaystyle {\mathcal {L}}_{n}} . The Heyting implication can be defined as: x ⇒ y = d e f y ∨ ⋀ j ∈ J ( ¬ ∇ j x ) ∨ ( ∇ j y ) {\displaystyle x\Rightarrow y\;{\overset {\mathrm {def} }{=}}\;y\vee \bigwedge _{j\in J}(\neg \nabla _{j}\;x)\vee (\nabla _{j}\;y)} Antonio Monteiro showed that for every monadic Boolean algebra one can construct a trivalent Łukasiewicz algebra (by taking certain equivalence classes) and that any trivalent Łukasiewicz algebra is isomorphic to a Łukasiewicz algebra thus derived from a monadic Boolean algebra. Cignoli summarizes the importance of this result as: "Since it was shown by Halmos that monadic Boolean algebras are the algebraic counterpart of classical first order monadic calculus, Monteiro considered that the representation of three-valued Łukasiewicz algebras into monadic Boolean algebras gives a proof of the consistency of Łukasiewicz three-valued logic relative to classical logic." == References == == Further reading == Raymond Balbes; Philip Dwinger (1975). Distributive lattices. University of Missouri Press. Chapter IX. De Morgan Algebras and Lukasiewicz Algebras. ISBN 978-0-8262-0163-8. Boicescu, V., Filipoiu, A., Georgescu, G., Rudeanu, S.: Łukasiewicz-Moisil Algebras. North-Holland, Amsterdam (1991) ISBN 0080867898 Iorgulescu, A.: Connections between MVn-algebras and n-valued Łukasiewicz–Moisil algebras—II. Discrete Math. 202, 113–134 (1999) doi:10.1016/S0012-365X(98)00289-1 Iorgulescu, A.: Connections between MVn-algebras and n-valued Łukasiewicz-Moisil—III. Unpublished Manuscript Iorgulescu, A.: Connections between MVn-algebras and n-valued Łukasiewicz–Moisil algebras—IV. J. Univers. Comput. Sci. 6, 139–154 (2000) doi:10.3217/jucs-006-01-0139 R. Cignoli, Algebras de Moisil de orden n, Ph.D. Thesis, Universidad National del Sur, Bahia Blanca, 1969 http://projecteuclid.org/download/pdf_1/euclid.ndjfl/1093635424
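As an illustrative check (not taken from the references above) of the canonical algebra and the modal operators defined there, the following sketch brute-forces the LMn axioms on the chain {0, 1/(n−1), ..., 1} for n = 4; join and meet are max and min on the chain, and negation is 1 − x.

```python
# Brute-force verification of the LM_n axioms on the canonical chain algebra (n = 4).
from fractions import Fraction
from itertools import product

n = 4
chain = [Fraction(i, n - 1) for i in range(n)]
J = range(1, n)

def neg(x):
    return 1 - x

def nabla(j, x):
    i = x * (n - 1)                      # x = i / (n - 1)
    return Fraction(0) if i + j < n else Fraction(1)

for x, y in product(chain, repeat=2):
    for j in J:
        assert nabla(j, max(x, y)) == max(nabla(j, x), nabla(j, y))   # axiom 1
        assert max(nabla(j, x), neg(nabla(j, x))) == 1                # axiom 2
        assert nabla(j, neg(x)) == neg(nabla(n - j, x))               # axiom 4
        for k in J:
            assert nabla(j, nabla(k, x)) == nabla(k, x)               # axiom 3
for x in chain:
    assert all(nabla(j, x) <= nabla(j + 1, x) for j in range(1, n - 1))  # axiom 5
print("the canonical LM_%d axioms hold" % n)
```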
Wikipedia/Łukasiewicz–Moisil_algebra
In abstract algebra, a branch of pure mathematics, an MV-algebra is an algebraic structure with a binary operation ⊕ {\displaystyle \oplus } , a unary operation ¬ {\displaystyle \neg } , and the constant 0 {\displaystyle 0} , satisfying certain axioms. MV-algebras are the algebraic semantics of Łukasiewicz logic; the letters MV refer to the many-valued logic of Łukasiewicz. MV-algebras coincide with the class of bounded commutative BCK algebras. == Definitions == An MV-algebra is an algebraic structure ⟨ A , ⊕ , ¬ , 0 ⟩ , {\displaystyle \langle A,\oplus ,\lnot ,0\rangle ,} consisting of a non-empty set A , {\displaystyle A,} a binary operation ⊕ {\displaystyle \oplus } on A , {\displaystyle A,} a unary operation ¬ {\displaystyle \lnot } on A , {\displaystyle A,} and a constant 0 {\displaystyle 0} denoting a fixed element of A , {\displaystyle A,} which satisfies the following identities: ( x ⊕ y ) ⊕ z = x ⊕ ( y ⊕ z ) , {\displaystyle (x\oplus y)\oplus z=x\oplus (y\oplus z),} x ⊕ 0 = x , {\displaystyle x\oplus 0=x,} x ⊕ y = y ⊕ x , {\displaystyle x\oplus y=y\oplus x,} ¬ ¬ x = x , {\displaystyle \lnot \lnot x=x,} x ⊕ ¬ 0 = ¬ 0 , {\displaystyle x\oplus \lnot 0=\lnot 0,} and ¬ ( ¬ x ⊕ y ) ⊕ y = ¬ ( ¬ y ⊕ x ) ⊕ x . {\displaystyle \lnot (\lnot x\oplus y)\oplus y=\lnot (\lnot y\oplus x)\oplus x.} By virtue of the first three axioms, ⟨ A , ⊕ , 0 ⟩ {\displaystyle \langle A,\oplus ,0\rangle } is a commutative monoid. Being defined by identities, MV-algebras form a variety of algebras. The variety of MV-algebras is a subvariety of the variety of BL-algebras and contains all Boolean algebras. An MV-algebra can equivalently be defined (Hájek 1998) as a prelinear commutative bounded integral residuated lattice ⟨ L , ∧ , ∨ , ⊗ , → , 0 , 1 ⟩ {\displaystyle \langle L,\wedge ,\vee ,\otimes ,\rightarrow ,0,1\rangle } satisfying the additional identity x ∨ y = ( x → y ) → y . {\displaystyle x\vee y=(x\rightarrow y)\rightarrow y.} == Examples of MV-algebras == A simple numerical example is A = [ 0 , 1 ] , {\displaystyle A=[0,1],} with operations x ⊕ y = min ( x + y , 1 ) {\displaystyle x\oplus y=\min(x+y,1)} and ¬ x = 1 − x . {\displaystyle \lnot x=1-x.} In mathematical fuzzy logic, this MV-algebra is called the standard MV-algebra, as it forms the standard real-valued semantics of Łukasiewicz logic. The trivial MV-algebra has the only element 0 and the operations defined in the only possible way, 0 ⊕ 0 = 0 {\displaystyle 0\oplus 0=0} and ¬ 0 = 0. {\displaystyle \lnot 0=0.} The two-element MV-algebra is actually the two-element Boolean algebra { 0 , 1 } , {\displaystyle \{0,1\},} with ⊕ {\displaystyle \oplus } coinciding with Boolean disjunction and ¬ {\displaystyle \lnot } with Boolean negation. In fact adding the axiom x ⊕ x = x {\displaystyle x\oplus x=x} to the axioms defining an MV-algebra results in an axiomatization of Boolean algebras. If instead the axiom added is x ⊕ x ⊕ x = x ⊕ x {\displaystyle x\oplus x\oplus x=x\oplus x} , then the axioms define the MV3 algebra corresponding to the three-valued Łukasiewicz logic Ł3. Other finite linearly ordered MV-algebras are obtained by restricting the universe and operations of the standard MV-algebra to the set of n {\displaystyle n} equidistant real numbers between 0 and 1 (both included), that is, the set { 0 , 1 / ( n − 1 ) , 2 / ( n − 1 ) , … , 1 } , {\displaystyle \{0,1/(n-1),2/(n-1),\dots ,1\},} which is closed under the operations ⊕ {\displaystyle \oplus } and ¬ {\displaystyle \lnot } of the standard MV-algebra; these algebras are usually denoted MVn. 
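A minimal illustrative sketch of the finite algebras MVn just described: it represents the elements as exact fractions and checks the six defining MV-algebra identities by brute force for n = 4. The variable names are invented for the example.

```python
# Check the six MV-algebra identities on MV_4 = {0, 1/3, 2/3, 1},
# with x (+) y = min(x + y, 1) and neg x = 1 - x.
from fractions import Fraction
from itertools import product

n = 4
elements = [Fraction(i, n - 1) for i in range(n)]
zero = Fraction(0)

def oplus(x, y):
    return min(x + y, Fraction(1))

def neg(x):
    return 1 - x

for x, y, z in product(elements, repeat=3):
    assert oplus(oplus(x, y), z) == oplus(x, oplus(y, z))              # associativity
    assert oplus(x, zero) == x                                         # 0 is a unit
    assert oplus(x, y) == oplus(y, x)                                  # commutativity
    assert neg(neg(x)) == x                                            # involution
    assert oplus(x, neg(zero)) == neg(zero)                            # neg 0 is absorbing
    assert oplus(neg(oplus(neg(x), y)), y) == oplus(neg(oplus(neg(y), x)), x)  # last identity
print("MV_%d satisfies the MV-algebra identities" % n)
```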
Another important example is Chang's MV-algebra, consisting just of infinitesimals (with the order type ω) and their co-infinitesimals. Chang also constructed an MV-algebra from an arbitrary totally ordered abelian group G by fixing a positive element u and defining the segment [0, u] as { x ∈ G | 0 ≤ x ≤ u }, which becomes an MV-algebra with x ⊕ y = min(u, x + y) and ¬x = u − x. Furthermore, Chang showed that every linearly ordered MV-algebra is isomorphic to an MV-algebra constructed from a group in this way. Daniele Mundici extended the above construction to abelian lattice-ordered groups. If G is such a group with strong (order) unit u, then the "unit interval" { x ∈ G | 0 ≤ x ≤ u } can be equipped with ¬x = u − x, x ⊕ y = u ∧G (x + y), and x ⊗ y = 0 ∨G (x + y − u). This construction establishes a categorical equivalence between lattice-ordered abelian groups with strong unit and MV-algebras. An effect algebra that is lattice-ordered and has the Riesz decomposition property is an MV-algebra. Conversely, any MV-algebra is a lattice-ordered effect algebra with the Riesz decomposition property. == Relation to Łukasiewicz logic == C. C. Chang devised MV-algebras to study many-valued logics, introduced by Jan Łukasiewicz in 1920. In particular, MV-algebras form the algebraic semantics of Łukasiewicz logic, as described below. Given an MV-algebra A, an A-valuation is a homomorphism from the algebra of propositional formulas (in the language consisting of ⊕ , ¬ , {\displaystyle \oplus ,\lnot ,} and 0) into A. Formulas mapped to 1 (that is, to ¬ {\displaystyle \lnot } 0) for all A-valuations are called A-tautologies. If the standard MV-algebra over [0,1] is employed, the set of all [0,1]-tautologies determines so-called infinite-valued Łukasiewicz logic. Chang's (1958, 1959) completeness theorem states that any MV-algebra equation holding in the standard MV-algebra over the interval [0,1] will hold in every MV-algebra. Algebraically, this means that the standard MV-algebra generates the variety of all MV-algebras. Equivalently, Chang's completeness theorem says that MV-algebras characterize infinite-valued Łukasiewicz logic, defined as the set of [0,1]-tautologies. The way the [0,1] MV-algebra characterizes all possible MV-algebras parallels the well-known fact that identities holding in the two-element Boolean algebra hold in all possible Boolean algebras. Moreover, MV-algebras characterize infinite-valued Łukasiewicz logic in a manner analogous to the way that Boolean algebras characterize classical bivalent logic (see Lindenbaum–Tarski algebra). In 1984, Font, Rodriguez and Torrens introduced the Wajsberg algebra as an alternative model for the infinite-valued Łukasiewicz logic. Wajsberg algebras and MV-algebras are term-equivalent. === MVn-algebras === In the 1940s, Grigore Moisil introduced his Łukasiewicz–Moisil algebras (LMn-algebras) in the hope of giving algebraic semantics for the (finitely) n-valued Łukasiewicz logic. However, in 1956, Alan Rose discovered that for n ≥ 5, the Łukasiewicz–Moisil algebra does not model the Łukasiewicz n-valued logic. Although C. C. Chang published his MV-algebra in 1958, it is a faithful model only for the ℵ0-valued (infinitely-many-valued) Łukasiewicz–Tarski logic. For the axiomatically more complicated (finitely) n-valued Łukasiewicz logics, suitable algebras were published in 1977 by Revaz Grigolia and called MVn-algebras. MVn-algebras are a subclass of LMn-algebras; the inclusion is strict for n ≥ 5. 
The MVn-algebras are MV-algebras that satisfy some additional axioms, just like the n-valued Łukasiewicz logics have additional axioms added to the ℵ0-valued logic. In 1982, Roberto Cignoli published some additional constraints that added to LMn-algebras yield proper models for n-valued Łukasiewicz logic; Cignoli called his discovery proper n-valued Łukasiewicz algebras. The LMn-algebras that are also MVn-algebras are precisely Cignoli's proper n-valued Łukasiewicz algebras. == Relation to functional analysis == MV-algebras were related by Daniele Mundici to approximately finite-dimensional C*-algebras by establishing a bijective correspondence between all isomorphism classes of approximately finite-dimensional C*-algebras with lattice-ordered dimension group and all isomorphism classes of countable MV algebras. Some instances of this correspondence include: == In software == There are multiple frameworks implementing fuzzy logic (type II), and most of them implement what has been called a multi-adjoint logic. This is no more than the implementation of an MV-algebra. == References == Chang, C. C. (1958) "Algebraic analysis of many-valued logics," Transactions of the American Mathematical Society 88: 476–490. ------ (1959) "A new proof of the completeness of the Lukasiewicz axioms," Transactions of the American Mathematical Society 88: 74–80. Cignoli, R. L. O., D'Ottaviano, I. M. L., Mundici, D. (2000) Algebraic Foundations of Many-valued Reasoning. Kluwer. Di Nola A., Lettieri A. (1993) "Equational characterization of all varieties of MV-algebras," Journal of Algebra 221: 463–474 doi:10.1006/jabr.1999.7900. Hájek, Petr (1998) Metamathematics of Fuzzy Logic. Kluwer. Mundici, D.: Interpretation of AF C*-algebras in Łukasiewicz sentential calculus. J. Funct. Anal. 65, 15–63 (1986) doi:10.1016/0022-1236(86)90015-7 == Further reading == Daniele Mundici, MV-ALGEBRAS. A short tutorial D. Mundici (2011). Advanced Łukasiewicz calculus and MV-algebras. Springer. ISBN 978-94-007-0839-6. Mundici, D. The C*-Algebras of Three-Valued Logic. Logic Colloquium '88, Proceedings of the Colloquium held in Padova 61–77 (1989). doi:10.1016/s0049-237x(08)70262-3 Cabrer, L. M. & Mundici, D. A Stone-Weierstrass theorem for MV-algebras and unital ℓ-groups. Journal of Logic and Computation (2014). doi:10.1093/logcom/exu023 Olivia Caramello, Anna Carla Russo (2014) The Morita-equivalence between MV-algebras and abelian ℓ-groups with strong unit == External links == Stanford Encyclopedia of Philosophy: "Many-valued logic"—by Siegfried Gottwald.
Wikipedia/MV-algebras
In mathematics, an Ockham algebra is a bounded distributive lattice L {\displaystyle L} with a dual endomorphism, that is, an operation ∼ : L → L {\displaystyle \sim \colon L\to L} satisfying ∼ ( x ∧ y ) = ∼ x ∨ ∼ y {\displaystyle \sim (x\wedge y)={}\sim x\vee {}\sim y} , ∼ ( x ∨ y ) = ∼ x ∧ ∼ y {\displaystyle \sim (x\vee y)={}\sim x\wedge {}\sim y} , ∼ 0 = 1 {\displaystyle \sim 0=1} , ∼ 1 = 0 {\displaystyle \sim 1=0} . They were introduced by Berman (1977), and were named after William of Ockham by Urquhart (1979). Ockham algebras form a variety. Examples of Ockham algebras include Boolean algebras, De Morgan algebras, Kleene algebras, and Stone algebras. == References == Berman, Joel (1977), "Distributive lattices with an additional unary operation", Aequationes Mathematicae, 16 (1): 165–171, doi:10.1007/BF01837887, ISSN 0001-9054, MR 0480238 (pdf available from GDZ) Blyth, Thomas Scott (2001) [1994], "Ockham algebra", Encyclopedia of Mathematics, EMS Press Blyth, Thomas Scott; Varlet, J. C. (1994). Ockham algebras. Oxford University Press. ISBN 978-0-19-859938-8. Urquhart, Alasdair (1979), "Distributive lattices with a dual homomorphic operation", Polska Akademia Nauk. Institut Filozofii i Socijologii. Studia Logica, 38 (2): 201–209, doi:10.1007/BF00370442, hdl:10338.dmlcz/102014, ISSN 0039-3215, MR 0544616
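As a small illustrative sketch (assuming nothing beyond the definition above), the following code checks that set complementation on the power set of a two-element set is a dual endomorphism, so this bounded distributive lattice is an Ockham algebra (indeed one of the Boolean examples listed above).

```python
# Verify the Ockham-algebra identities for complementation on P({0, 1}).
from itertools import product

U = frozenset({0, 1})
elements = [frozenset(), frozenset({0}), frozenset({1}), U]

def tilde(x):                 # the dual endomorphism ~; here, set complement
    return U - x

for x, y in product(elements, repeat=2):
    assert tilde(x & y) == tilde(x) | tilde(y)
    assert tilde(x | y) == tilde(x) & tilde(y)
assert tilde(frozenset()) == U and tilde(U) == frozenset()
print("complementation makes P({0, 1}) an Ockham algebra")
```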
Wikipedia/Ockham_algebra
In the mathematical area of order theory, the compact elements or finite elements of a partially ordered set are those elements that cannot be subsumed by a supremum of any non-empty directed set that does not already contain members above the compact element. This notion of compactness simultaneously generalizes the notions of finite sets in set theory, compact sets in topology, and finitely generated modules in algebra. (There are other notions of compactness in mathematics.) == Formal definition == In a partially ordered set (P,≤) an element c is called compact (or finite) if it satisfies one of the following equivalent conditions: For every directed subset D of P, if D has a supremum sup D and c ≤ sup D then c ≤ d for some element d of D. For every ideal I of P, if I has a supremum sup I and c ≤ sup I then c is an element of I. If the poset P additionally is a join-semilattice (i.e., if it has binary suprema) then these conditions are equivalent to the following statement: For every subset S of P, if S has a supremum sup S and c ≤ sup S, then c ≤ sup T for some finite subset T of S. In particular, if c = sup S, then c is the supremum of a finite subset of S. These equivalences are easily verified from the definitions of the concepts involved. For the case of a join-semilattice, any set can be turned into a directed set with the same supremum by closing under finite (non-empty) suprema. When considering directed complete partial orders or complete lattices the additional requirements that the specified suprema exist can of course be dropped. A join-semilattice that is directed complete is almost a complete lattice (possibly lacking a least element)—see completeness (order theory) for details. == Examples == The most basic example is obtained by considering the power set of some set A, ordered by subset inclusion. Within this complete lattice, the compact elements are exactly the finite subsets of A. This justifies the name "finite element". The term "compact" is inspired by the definition of (topologically) compact subsets of a topological space T. A set Y is compact if for every collection of open sets S, if the union over S includes Y as a subset, then Y is included as a subset of the union of a finite subcollection of S. Considering the power set of T as a complete lattice with the subset inclusion order, where the supremum of a collection of sets is given by their union, the topological condition for compactness mimics the condition for compactness in join-semilattices, but for the additional requirement of openness. If it exists, the least element of a poset is always compact. It may be that this is the only compact element, as the example of the real unit interval [0,1] (with the standard ordering inherited from the real numbers) shows. Every completely join-prime element of a lattice is compact. == Algebraic posets == A poset in which every element is the supremum of the directed set formed by the compact elements below it is called an algebraic poset. Such posets that are dcpos are much used in domain theory. As an important special case, an algebraic lattice is a complete lattice L where every element x of L is the supremum of the compact elements below x. 
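To illustrate the power-set example from the Examples section, here is a small sketch (with a hypothetical helper name invented for the example) that, given a finite set c below the supremum (union) of a family S, extracts a finite subfamily T with c ≤ sup T, which is exactly the compactness condition for join-semilattices stated above.

```python
# Why finite sets are compact in the power-set lattice: a finite witness subfamily.
def finite_witness(c, S):
    """For each element of the finite set c, pick one member of S containing it."""
    witness = []
    for x in c:
        for member in S:
            if x in member:
                witness.append(member)
                break
        else:
            raise ValueError("c is not below the supremum of S")
    return witness

S = [set(range(k)) for k in range(1, 100)]   # a directed family of sets
c = {3, 17, 41}
T = finite_witness(c, S)
assert c <= set().union(*T)                  # c <= sup T for a finite subset T of S
print(len(T), "members of S suffice")
```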
A typical example (which served as the motivation for the name "algebraic") is the following: For any algebra A (for example, a group, a ring, a field, a lattice, etc.; or even a mere set without any operations), let Sub(A) be the set of all substructures of A, i.e., of all subsets of A which are closed under all operations of A (group addition, ring addition and multiplication, etc.). Here the notion of substructure includes the empty substructure in case the algebra A has no nullary operations. Then: The set Sub(A), ordered by set inclusion, is a lattice. The greatest element of Sub(A) is the set A itself. For any S, T in Sub(A), the greatest lower bound of S and T is the set theoretic intersection of S and T; the smallest upper bound is the subalgebra generated by the union of S and T. The set Sub(A) is even a complete lattice. The greatest lower bound of any family of substructures is their intersection (or A if the family is empty). The compact elements of Sub(A) are exactly the finitely generated substructures of A. Every substructure is the union of its finitely generated substructures; hence Sub(A) is an algebraic lattice. Also, a kind of converse holds: Every algebraic lattice is isomorphic to Sub(A) for some algebra A. There is another algebraic lattice that plays an important role in universal algebra: For every algebra A we let Con(A) be the set of all congruence relations on A. Each congruence on A is a subalgebra of the product algebra AxA, so Con(A) ⊆ Sub(AxA). Again we have Con(A), ordered by set inclusion, is a lattice. The greatest element of Con(A) is the set AxA, which is the congruence corresponding to the constant homomorphism. The smallest congruence is the diagonal of AxA, corresponding to isomorphisms. Con(A) is a complete lattice. The compact elements of Con(A) are exactly the finitely generated congruences. Con(A) is an algebraic lattice. Again there is a converse: By a theorem of George Grätzer and E. T. Schmidt, every algebraic lattice is isomorphic to Con(A) for some algebra A. == Applications == Compact elements are important in computer science in the semantic approach called domain theory, where they are considered as a kind of primitive element: the information represented by compact elements cannot be obtained by any approximation that does not already contain this knowledge. Compact elements cannot be approximated by elements strictly below them. On the other hand, it may happen that all non-compact elements can be obtained as directed suprema of compact elements. This is a desirable situation, since the set of compact elements is often smaller than the original poset—the examples above illustrate this. == Literature == See the literature given for order theory and domain theory. == References ==
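The following Python sketch (an illustrative toy; the group Z_12 and the helper name are chosen arbitrarily for this example) computes the subgroup of the cyclic group Z_12 generated by a finite set. A finitely generated substructure like this is, by the discussion above, exactly a compact element of the subalgebra lattice Sub(A).

```python
def generated_subgroup(generators, n=12):
    """Close a set of generators of Z_n (addition mod n) under the group
    operations; the result is the smallest element of Sub(Z_n) containing
    the generators."""
    sub = {0} | set(generators)
    changed = True
    while changed:
        changed = False
        for a in list(sub):
            for b in list(sub):
                for c in ((a + b) % n, (-a) % n):
                    if c not in sub:
                        sub.add(c)
                        changed = True
    return sorted(sub)

print(generated_subgroup({4}))      # [0, 4, 8]
print(generated_subgroup({4, 6}))   # [0, 2, 4, 6, 8, 10]
```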
Wikipedia/Algebraic_poset
In mathematics, in the branch of complex analysis, a holomorphic function on an open subset of the complex plane is called univalent if it is injective. == Examples == The function f : z ↦ 2 z + z 2 {\displaystyle f\colon z\mapsto 2z+z^{2}} is univalent in the open unit disc, as f ( z ) = f ( w ) {\displaystyle f(z)=f(w)} implies that f ( z ) − f ( w ) = ( z − w ) ( z + w + 2 ) = 0 {\displaystyle f(z)-f(w)=(z-w)(z+w+2)=0} . As the second factor is non-zero in the open unit disc, z = w {\displaystyle z=w} so f {\displaystyle f} is injective. == Basic properties == One can prove that if G {\displaystyle G} and Ω {\displaystyle \Omega } are two open connected sets in the complex plane, and f : G → Ω {\displaystyle f:G\to \Omega } is a univalent function such that f ( G ) = Ω {\displaystyle f(G)=\Omega } (that is, f {\displaystyle f} is surjective), then the derivative of f {\displaystyle f} is never zero, f {\displaystyle f} is invertible, and its inverse f − 1 {\displaystyle f^{-1}} is also holomorphic. Moreover, by the chain rule one has ( f − 1 ) ′ ( f ( z ) ) = 1 f ′ ( z ) {\displaystyle (f^{-1})'(f(z))={\frac {1}{f'(z)}}} for all z {\displaystyle z} in G . {\displaystyle G.} == Comparison with real functions == For real analytic functions, unlike for complex analytic (that is, holomorphic) functions, these statements fail to hold. For example, consider the function f : ( − 1 , 1 ) → ( − 1 , 1 ) {\displaystyle f:(-1,1)\to (-1,1)\,} given by f ( x ) = x 3 {\displaystyle f(x)=x^{3}} . This function is clearly injective, but its derivative is 0 at x = 0 {\displaystyle x=0} , and its inverse is not analytic, or even differentiable, on the whole interval ( − 1 , 1 ) {\displaystyle (-1,1)} . Consequently, if we enlarge the domain to an open subset G {\displaystyle G} of the complex plane, it must fail to be injective; and this is the case, since (for example) f ( ε ω ) = f ( ε ) {\displaystyle f(\varepsilon \omega )=f(\varepsilon )} (where ω {\displaystyle \omega } is a primitive cube root of unity and ε {\displaystyle \varepsilon } is a positive real number smaller than the radius of G {\displaystyle G} as a neighbourhood of 0 {\displaystyle 0} ). == See also == Biholomorphic mapping – Bijective holomorphic function with a holomorphic inverse De Branges's theorem – Statement in complex analysis; formerly the Bieberbach conjecture Koebe quarter theorem – Statement in complex analysis Riemann mapping theorem – Mathematical theorem Nevanlinna's criterion – Characterization of starlike univalent holomorphic functions == References == Conway, John B. (1995). "Conformal Equivalence for Simply Connected Regions". Functions of One Complex Variable II. Graduate Texts in Mathematics. Vol. 159. doi:10.1007/978-1-4612-0817-4. ISBN 978-1-4612-6911-3. "Univalent Functions". Sources in the Development of Mathematics. 2011. pp. 907–928. doi:10.1017/CBO9780511844195.041. ISBN 9780521114707. Duren, P. L. (1983). Univalent Functions. Springer New York, NY. p. XIV, 384. ISBN 978-1-4419-2816-0. Gong, Sheng (1998). Convex and Starlike Mappings in Several Complex Variables. doi:10.1007/978-94-011-5206-8. ISBN 978-94-010-6191-9. Jarnicki, Marek; Pflug, Peter (2006). "A remark on separate holomorphy". Studia Mathematica. 174 (3): 309–317. arXiv:math/0507305. doi:10.4064/SM174-3-5. S2CID 15660985. Nehari, Zeev (1975). Conformal mapping. New York: Dover Publications. p. 146. ISBN 0-486-61137-X. OCLC 1504503.
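As a purely numerical sanity check of the worked example above (not a proof, and not part of the article), one can sample points of the open unit disc and verify that no two distinct samples share an image under f(z) = 2z + z². The grid below is an arbitrary choice made for the illustration.

```python
import cmath
from itertools import combinations

def f(z):
    return 2 * z + z * z

# A small polar grid inside the open unit disc.
samples = [r * cmath.exp(1j * cmath.pi * k / 8)
           for r in (0.2, 0.5, 0.8)
           for k in range(16)]

for z, w in combinations(samples, 2):
    assert abs(f(z) - f(w)) > 1e-9, (z, w)   # distinct samples, distinct images
print("no collisions among", len(samples), "sample points")
```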
This article incorporates material from univalent analytic function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia/Univalent_function
In mathematics, a function from a set X to a set Y assigns to each element of X exactly one element of Y. The set X is called the domain of the function and the set Y is called the codomain of the function. Functions were originally the idealization of how a varying quantity depends on another quantity. For example, the position of a planet is a function of time. Historically, the concept was elaborated with the infinitesimal calculus at the end of the 17th century, and, until the 19th century, the functions that were considered were differentiable (that is, they had a high degree of regularity). The concept of a function was formalized at the end of the 19th century in terms of set theory, and this greatly increased the possible applications of the concept. A function is often denoted by a letter such as f, g or h. The value of a function f at an element x of its domain (that is, the element of the codomain that is associated with x) is denoted by f(x); for example, the value of f at x = 4 is denoted by f(4). Commonly, a specific function is defined by means of an expression depending on x, such as f ( x ) = x 2 + 1 ; {\displaystyle f(x)=x^{2}+1;} in this case, some computation, called function evaluation, may be needed for deducing the value of the function at a particular value; for example, if f ( x ) = x 2 + 1 , {\displaystyle f(x)=x^{2}+1,} then f ( 4 ) = 4 2 + 1 = 17. {\displaystyle f(4)=4^{2}+1=17.} Given its domain and its codomain, a function is uniquely represented by the set of all pairs (x, f (x)), called the graph of the function, a popular means of illustrating the function. When the domain and the codomain are sets of real numbers, each such pair may be thought of as the Cartesian coordinates of a point in the plane. Functions are widely used in science, engineering, and in most fields of mathematics. It has been said that functions are "the central objects of investigation" in most fields of mathematics. The concept of a function has evolved significantly over centuries, from its informal origins in ancient mathematics to its formalization in the 19th century. See History of the function concept for details. == Definition == A function f from a set X to a set Y is an assignment of one element of Y to each element of X. The set X is called the domain of the function and the set Y is called the codomain of the function. If the element y in Y is assigned to x in X by the function f, one says that f maps x to y, and this is commonly written y = f ( x ) . {\displaystyle y=f(x).} In this notation, x is the argument or variable of the function. A specific element x of X is a value of the variable, and the corresponding element of Y is the value of the function at x, or the image of x under the function. The image of a function, sometimes called its range, is the set of the images of all elements in the domain. A function f, its domain X, and its codomain Y are often specified by the notation f : X → Y . {\displaystyle f:X\to Y.} One may write x ↦ y {\displaystyle x\mapsto y} instead of y = f ( x ) {\displaystyle y=f(x)} , where the symbol ↦ {\displaystyle \mapsto } (read 'maps to') is used to specify where a particular element x in the domain is mapped to by f. This allows the definition of a function without naming. For example, the square function is the function x ↦ x 2 . {\displaystyle x\mapsto x^{2}.} The domain and codomain are not always explicitly given when a function is defined. 
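The evaluation example above translates directly into code; the snippet below (deliberately trivial, and only an illustration) defines f(x) = x² + 1 and evaluates it at x = 4.

```python
def f(x):
    return x ** 2 + 1   # f(x) = x^2 + 1

print(f(4))             # 17, since 4^2 + 1 = 17
```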
In particular, it is common that one might only know, without some (possibly difficult) computation, that the domain of a specific function is contained in a larger set. For example, if f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is a real function, the determination of the domain of the function x ↦ 1 / f ( x ) {\displaystyle x\mapsto 1/f(x)} requires knowing the zeros of f. This is one of the reasons for which, in mathematical analysis, "a function from X to Y " may refer to a function having a proper subset of X as a domain. For example, a "function from the reals to the reals" may refer to a real-valued function of a real variable whose domain is a proper subset of the real numbers, typically a subset that contains a non-empty open interval. Such a function is then called a partial function. A function f on a set S means a function from the domain S, without specifying a codomain. However, some authors use it as shorthand for saying that the function is f : S → S. === Formal definition === The above definition of a function is essentially that of the founders of calculus, Leibniz, Newton and Euler. However, it cannot be formalized, since there is no mathematical definition of an "assignment". It is only at the end of the 19th century that the first formal definition of a function could be provided, in terms of set theory. This set-theoretic definition is based on the fact that a function establishes a relation between the elements of the domain and some (possibly all) elements of the codomain. Mathematically, a binary relation between two sets X and Y is a subset of the set of all ordered pairs ( x , y ) {\displaystyle (x,y)} such that x ∈ X {\displaystyle x\in X} and y ∈ Y . {\displaystyle y\in Y.} The set of all these pairs is called the Cartesian product of X and Y and denoted X × Y . {\displaystyle X\times Y.} Thus, the above definition may be formalized as follows. A function with domain X and codomain Y is a binary relation R between X and Y that satisfies the two following conditions: For every x {\displaystyle x} in X {\displaystyle X} there exists y {\displaystyle y} in Y {\displaystyle Y} such that ( x , y ) ∈ R . {\displaystyle (x,y)\in R.} If ( x , y ) ∈ R {\displaystyle (x,y)\in R} and ( x , z ) ∈ R , {\displaystyle (x,z)\in R,} then y = z . {\displaystyle y=z.} This definition may be rewritten more formally, without referring explicitly to the concept of a relation, but using more notation (including set-builder notation): A function is formed by three sets, the domain X , {\displaystyle X,} the codomain Y , {\displaystyle Y,} and the graph R {\displaystyle R} that satisfy the three following conditions. R ⊆ { ( x , y ) ∣ x ∈ X , y ∈ Y } {\displaystyle R\subseteq \{(x,y)\mid x\in X,y\in Y\}} ∀ x ∈ X , ∃ y ∈ Y , ( x , y ) ∈ R {\displaystyle \forall x\in X,\exists y\in Y,\left(x,y\right)\in R\qquad } ( x , y ) ∈ R ∧ ( x , z ) ∈ R ⟹ y = z {\displaystyle (x,y)\in R\land (x,z)\in R\implies y=z\qquad } === Partial functions === Partial functions are defined similarly to ordinary functions, with the "total" condition removed. That is, a partial function from X to Y is a binary relation R between X and Y such that, for every x ∈ X , {\displaystyle x\in X,} there is at most one y in Y such that ( x , y ) ∈ R . {\displaystyle (x,y)\in R.} Using functional notation, this means that, given x ∈ X , {\displaystyle x\in X,} either f ( x ) {\displaystyle f(x)} is in Y, or it is undefined. 
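The set-theoretic definition given a few sentences above says that a function with domain X and codomain Y is a binary relation that is total and single-valued. A short Python check of those two conditions on a finite relation might look as follows; the helper name is made up for the illustration.

```python
def is_function_graph(pairs, X, Y):
    """True when `pairs` is the graph of a function from X to Y:
    every pair lies in X x Y, and every x in X has exactly one image."""
    if not all(x in X and y in Y for x, y in pairs):
        return False
    firsts = [x for x, _ in pairs]
    return all(firsts.count(x) == 1 for x in X)

X, Y = {1, 2, 3}, {"a", "b"}
print(is_function_graph({(1, "a"), (2, "a"), (3, "b")}, X, Y))   # True
print(is_function_graph({(1, "a"), (1, "b"), (2, "a")}, X, Y))   # False: 1 has two images, 3 has none
```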
The set of the elements of X such that f ( x ) {\displaystyle f(x)} is defined and belongs to Y is called the domain of definition of the function. A partial function from X to Y is thus an ordinary function that has as its domain a subset of X called the domain of definition of the function. If the domain of definition equals X, one often says that the partial function is a total function. In several areas of mathematics, the term "function" refers to partial functions rather than to ordinary (total) functions. This is typically the case when functions may be specified in a way that makes it difficult or even impossible to determine their domain. In calculus, a real-valued function of a real variable or real function is a partial function from the set R {\displaystyle \mathbb {R} } of the real numbers to itself. Given a real function f : x ↦ f ( x ) {\displaystyle f:x\mapsto f(x)} its multiplicative inverse x ↦ 1 / f ( x ) {\displaystyle x\mapsto 1/f(x)} is also a real function. The determination of the domain of definition of a multiplicative inverse of a (partial) function amounts to computing the zeros of the function, that is, the values where the function is defined but its multiplicative inverse is not. Similarly, a function of a complex variable is generally a partial function whose domain of definition is a subset of the complex numbers C {\displaystyle \mathbb {C} } . The difficulty of determining the domain of definition of a complex function is illustrated by the multiplicative inverse of the Riemann zeta function: the determination of the domain of definition of the function z ↦ 1 / ζ ( z ) {\displaystyle z\mapsto 1/\zeta (z)} is more or less equivalent to the proof or disproof of one of the major open problems in mathematics, the Riemann hypothesis. In computability theory, a general recursive function is a partial function from the integers to the integers whose values can be computed by an algorithm (roughly speaking). The domain of definition of such a function is the set of inputs for which the algorithm does not run forever. A fundamental theorem of computability theory is that there cannot exist an algorithm that takes an arbitrary general recursive function as input and tests whether 0 belongs to its domain of definition (see Halting problem). === Multivariate functions === A multivariate function, multivariable function, or function of several variables is a function that depends on several arguments. Such functions are commonly encountered. For example, the position of a car on a road is a function of the time travelled and its average speed. Formally, a function of n variables is a function whose domain is a set of n-tuples. For example, multiplication of integers is a function of two variables, or bivariate function, whose domain is the set of all ordered pairs (2-tuples) of integers, and whose codomain is the set of integers. The same is true for every binary operation. The graph of a bivariate function over a two-dimensional real domain may be interpreted as defining a parametric surface, as used in, e.g., bivariate interpolation. Commonly, an n-tuple is denoted enclosed between parentheses, such as in ( 1 , 2 , … , n ) . {\displaystyle (1,2,\ldots ,n).} When using functional notation, one usually omits the parentheses surrounding tuples, writing f ( x 1 , … , x n ) {\displaystyle f(x_{1},\ldots ,x_{n})} instead of f ( ( x 1 , … , x n ) ) .
{\displaystyle f((x_{1},\ldots ,x_{n})).} Given n sets X 1 , … , X n , {\displaystyle X_{1},\ldots ,X_{n},} the set of all n-tuples ( x 1 , … , x n ) {\displaystyle (x_{1},\ldots ,x_{n})} such that x 1 ∈ X 1 , … , x n ∈ X n {\displaystyle x_{1}\in X_{1},\ldots ,x_{n}\in X_{n}} is called the Cartesian product of X 1 , … , X n , {\displaystyle X_{1},\ldots ,X_{n},} and denoted X 1 × ⋯ × X n . {\displaystyle X_{1}\times \cdots \times X_{n}.} Therefore, a multivariate function is a function that has a Cartesian product or a proper subset of a Cartesian product as a domain. f : U → Y , {\displaystyle f:U\to Y,} where the domain U has the form U ⊆ X 1 × ⋯ × X n . {\displaystyle U\subseteq X_{1}\times \cdots \times X_{n}.} If all the X i {\displaystyle X_{i}} are equal to the set R {\displaystyle \mathbb {R} } of the real numbers or to the set C {\displaystyle \mathbb {C} } of the complex numbers, one talks respectively of a function of several real variables or of a function of several complex variables. == Notation == There are various standard ways for denoting functions. The most commonly used notation is functional notation, which is the first notation described below. === Functional notation === The functional notation requires that a name is given to the function, which, in the case of an unspecified function, is often the letter f. Then, the application of the function to an argument is denoted by its name followed by its argument (or, in the case of a multivariate function, its arguments) enclosed between parentheses, such as in f ( x ) , sin ( 3 ) , or f ( x 2 + 1 ) . {\displaystyle f(x),\quad \sin(3),\quad {\text{or}}\quad f(x^{2}+1).} The argument between the parentheses may be a variable, often x, that represents an arbitrary element of the domain of the function, a specific element of the domain (3 in the above example), or an expression that can be evaluated to an element of the domain ( x 2 + 1 {\displaystyle x^{2}+1} in the above example). The use of an unspecified variable between parentheses is useful for defining a function explicitly such as in "let f ( x ) = sin ( x 2 + 1 ) {\displaystyle f(x)=\sin(x^{2}+1)} ". When the symbol denoting the function consists of several characters and no ambiguity may arise, the parentheses of functional notation might be omitted. For example, it is common to write sin x instead of sin(x). Functional notation was first used by Leonhard Euler in 1734. Some widely used functions are represented by a symbol consisting of several letters (usually two or three, generally an abbreviation of their name). In this case, a roman type is customarily used instead, such as "sin" for the sine function, in contrast to italic font for single-letter symbols. The functional notation is often used colloquially for referring to a function and simultaneously naming its argument, such as in "let f ( x ) {\displaystyle f(x)} be a function". This is an abuse of notation that is useful for a simpler formulation. === Arrow notation === Arrow notation defines the rule of a function inline, without requiring a name to be given to the function. It uses the ↦ arrow symbol, pronounced "maps to". For example, x ↦ x + 1 {\displaystyle x\mapsto x+1} is the function which takes a real number as input and outputs that number plus 1. Again, a domain and codomain of R {\displaystyle \mathbb {R} } is implied. The domain and codomain can also be explicitly stated, for example: sqr : Z → Z x ↦ x 2 .
{\displaystyle {\begin{aligned}\operatorname {sqr} \colon \mathbb {Z} &\to \mathbb {Z} \\x&\mapsto x^{2}.\end{aligned}}} This defines a function sqr from the integers to the integers that returns the square of its input. As a common application of the arrow notation, suppose f : X × X → Y ; ( x , t ) ↦ f ( x , t ) {\displaystyle f:X\times X\to Y;\;(x,t)\mapsto f(x,t)} is a function in two variables, and we want to refer to a partially applied function X → Y {\displaystyle X\to Y} produced by fixing the second argument to the value t0 without introducing a new function name. The map in question could be denoted x ↦ f ( x , t 0 ) {\displaystyle x\mapsto f(x,t_{0})} using the arrow notation. The expression x ↦ f ( x , t 0 ) {\displaystyle x\mapsto f(x,t_{0})} (read: "the map taking x to f of x comma t nought") represents this new function with just one argument, whereas the expression f(x0, t0) refers to the value of the function f at the point (x0, t0). === Index notation === Index notation may be used instead of functional notation. That is, instead of writing f (x), one writes f x . {\displaystyle f_{x}.} This is typically the case for functions whose domain is the set of the natural numbers. Such a function is called a sequence, and, in this case the element f n {\displaystyle f_{n}} is called the nth element of the sequence. The index notation can also be used for distinguishing some variables called parameters from the "true variables". In fact, parameters are specific variables that are considered as being fixed during the study of a problem. For example, the map x ↦ f ( x , t ) {\displaystyle x\mapsto f(x,t)} (see above) would be denoted f t {\displaystyle f_{t}} using index notation, if we define the collection of maps f t {\displaystyle f_{t}} by the formula f t ( x ) = f ( x , t ) {\displaystyle f_{t}(x)=f(x,t)} for all x , t ∈ X {\displaystyle x,t\in X} . === Dot notation === In the notation x ↦ f ( x ) , {\displaystyle x\mapsto f(x),} the symbol x does not represent any value; it is simply a placeholder, meaning that, if x is replaced by any value on the left of the arrow, it should be replaced by the same value on the right of the arrow. Therefore, x may be replaced by any symbol, often an interpunct " ⋅ ". This may be useful for distinguishing the function f (⋅) from its value f (x) at x. For example, a ( ⋅ ) 2 {\displaystyle a(\cdot )^{2}} may stand for the function x ↦ a x 2 {\displaystyle x\mapsto ax^{2}} , and ∫ a ( ⋅ ) f ( u ) d u {\textstyle \int _{a}^{\,(\cdot )}f(u)\,du} may stand for a function defined by an integral with variable upper bound: x ↦ ∫ a x f ( u ) d u {\textstyle x\mapsto \int _{a}^{x}f(u)\,du} . === Specialized notations === There are other, specialized notations for functions in sub-disciplines of mathematics. For example, in linear algebra and functional analysis, linear forms and the vectors they act upon are denoted using a dual pair to show the underlying duality. This is similar to the use of bra–ket notation in quantum mechanics. In logic and the theory of computation, the function notation of lambda calculus is used to explicitly express the basic notions of function abstraction and application. In category theory and homological algebra, networks of functions are described in terms of how they and their compositions commute with each other using commutative diagrams that extend and generalize the arrow notation for functions described above. 
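The partially applied map x ↦ f(x, t₀) discussed under arrow and index notation has a direct counterpart in many programming languages. The Python sketch below (function and variable names chosen only for this illustration) fixes the second argument both with a lambda and with functools.partial.

```python
from functools import partial

def f(x, t):
    """A function of two variables, as in f : X x X -> Y above."""
    return x ** 2 + t

t0 = 3
g = lambda x: f(x, t0)        # the map x |-> f(x, t0) in arrow notation
f_t0 = partial(f, t=t0)       # the same map, written f_t in index notation

print(g(5), f_t0(5))          # 28 28 -- the new one-argument function
print(f(5, t0))               # 28    -- the value of f at the point (5, t0)
```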
=== Functions of more than one variable === In some cases the argument of a function may be an ordered pair of elements taken from some set or sets. For example, a function f can be defined as mapping any pair of real numbers ( x , y ) {\displaystyle (x,y)} to the sum of their squares, x 2 + y 2 {\displaystyle x^{2}+y^{2}} . Such a function is commonly written as f ( x , y ) = x 2 + y 2 {\displaystyle f(x,y)=x^{2}+y^{2}} and referred to as "a function of two variables". Likewise one can have a function of three or more variables, with notations such as f ( w , x , y ) {\displaystyle f(w,x,y)} , f ( w , x , y , z ) {\displaystyle f(w,x,y,z)} . == Other terms == A function may also be called a map or a mapping, but some authors make a distinction between the term "map" and "function". For example, the term "map" is often reserved for a "function" with some sort of special structure (e.g. maps of manifolds). In particular map may be used in place of homomorphism for the sake of succinctness (e.g., linear map or map from G to H instead of group homomorphism from G to H). Some authors reserve the word mapping for the case where the structure of the codomain belongs explicitly to the definition of the function. Some authors, such as Serge Lang, use "function" only to refer to maps for which the codomain is a subset of the real or complex numbers, and use the term mapping for more general functions. In the theory of dynamical systems, a map denotes an evolution function used to create discrete dynamical systems. See also Poincaré map. Whichever definition of map is used, related terms like domain, codomain, injective, continuous have the same meaning as for a function. == Specifying a function == Given a function f {\displaystyle f} , by definition, to each element x {\displaystyle x} of the domain of the function f {\displaystyle f} , there is a unique element associated to it, the value f ( x ) {\displaystyle f(x)} of f {\displaystyle f} at x {\displaystyle x} . There are several ways to specify or describe how x {\displaystyle x} is related to f ( x ) {\displaystyle f(x)} , both explicitly and implicitly. Sometimes, a theorem or an axiom asserts the existence of a function having some properties, without describing it more precisely. Often, the specification or description is referred to as the definition of the function f {\displaystyle f} . === By listing function values === On a finite set a function may be defined by listing the elements of the codomain that are associated to the elements of the domain. For example, if A = { 1 , 2 , 3 } {\displaystyle A=\{1,2,3\}} , then one can define a function f : A → R {\displaystyle f:A\to \mathbb {R} } by f ( 1 ) = 2 , f ( 2 ) = 3 , f ( 3 ) = 4. {\displaystyle f(1)=2,f(2)=3,f(3)=4.} === By a formula === Functions are often defined by an expression that describes a combination of arithmetic operations and previously defined functions; such a formula allows computing the value of the function from the value of any element of the domain. For example, in the above example, f {\displaystyle f} can be defined by the formula f ( n ) = n + 1 {\displaystyle f(n)=n+1} , for n ∈ { 1 , 2 , 3 } {\displaystyle n\in \{1,2,3\}} . When a function is defined this way, the determination of its domain is sometimes difficult. 
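The two ways of specifying a function just described, by listing values and by a formula, can be compared on the finite example A = {1, 2, 3} above. The short sketch below is only illustrative.

```python
# f given by listing its values: f(1) = 2, f(2) = 3, f(3) = 4.
f_table = {1: 2, 2: 3, 3: 4}

# The same function given by the formula f(n) = n + 1 on the domain {1, 2, 3}.
def f_formula(n):
    if n not in {1, 2, 3}:
        raise ValueError("outside the domain")
    return n + 1

assert all(f_table[n] == f_formula(n) for n in f_table)
print(f_table[2], f_formula(2))   # 3 3
```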
If the formula that defines the function contains divisions, the values of the variable for which a denominator is zero must be excluded from the domain; thus, for a complicated function, the determination of the domain passes through the computation of the zeros of auxiliary functions. Similarly, if square roots occur in the definition of a function from R {\displaystyle \mathbb {R} } to R , {\displaystyle \mathbb {R} ,} the domain is included in the set of the values of the variable for which the arguments of the square roots are nonnegative. For example, f ( x ) = 1 + x 2 {\displaystyle f(x)={\sqrt {1+x^{2}}}} defines a function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } whose domain is R , {\displaystyle \mathbb {R} ,} because 1 + x 2 {\displaystyle 1+x^{2}} is always positive if x is a real number. On the other hand, f ( x ) = 1 − x 2 {\displaystyle f(x)={\sqrt {1-x^{2}}}} defines a function from the reals to the reals whose domain is reduced to the interval [−1, 1]. (In old texts, such a domain was called the domain of definition of the function.) Functions can be classified by the nature of formulas that define them: A quadratic function is a function that may be written f ( x ) = a x 2 + b x + c , {\displaystyle f(x)=ax^{2}+bx+c,} where a, b, c are constants. More generally, a polynomial function is a function that can be defined by a formula involving only additions, subtractions, multiplications, and exponentiation to nonnegative integer powers. For example, f ( x ) = x 3 − 3 x − 1 {\displaystyle f(x)=x^{3}-3x-1} and f ( x ) = ( x − 1 ) ( x 3 + 1 ) + 2 x 2 − 1 {\displaystyle f(x)=(x-1)(x^{3}+1)+2x^{2}-1} are polynomial functions of x {\displaystyle x} . A rational function is the same, with divisions also allowed, such as f ( x ) = x − 1 x + 1 , {\displaystyle f(x)={\frac {x-1}{x+1}},} and f ( x ) = 1 x + 1 + 3 x − 2 x − 1 . {\displaystyle f(x)={\frac {1}{x+1}}+{\frac {3}{x}}-{\frac {2}{x-1}}.} An algebraic function is the same, with nth roots and roots of polynomials also allowed. An elementary function is the same, with logarithms and exponential functions allowed. === Inverse and implicit functions === A function f : X → Y , {\displaystyle f:X\to Y,} with domain X and codomain Y, is bijective, if for every y in Y, there is one and only one element x in X such that y = f(x). In this case, the inverse function of f is the function f − 1 : Y → X {\displaystyle f^{-1}:Y\to X} that maps y ∈ Y {\displaystyle y\in Y} to the element x ∈ X {\displaystyle x\in X} such that y = f(x). For example, the natural logarithm is a bijective function from the positive real numbers to the real numbers. It thus has an inverse, called the exponential function, that maps the real numbers onto the positive numbers. If a function f : X → Y {\displaystyle f:X\to Y} is not bijective, it may occur that one can select subsets E ⊆ X {\displaystyle E\subseteq X} and F ⊆ Y {\displaystyle F\subseteq Y} such that the restriction of f to E is a bijection from E to F, and has thus an inverse. The inverse trigonometric functions are defined this way. For example, the cosine function induces, by restriction, a bijection from the interval [0, π] onto the interval [−1, 1], and its inverse function, called arccosine, maps [−1, 1] onto [0, π]. The other inverse trigonometric functions are defined similarly. 
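Both phenomena described above, a domain restricted by a square root and an inverse obtained by restricting a trigonometric function, can be checked numerically. The following sketch is only an illustration; it uses None to mark points outside the real domain.

```python
import math

# sqrt(1 - x^2) is real only for x in [-1, 1]; other points are excluded
# from the domain.
def f(x):
    return math.sqrt(1 - x * x) if abs(x) <= 1 else None

print([f(x) for x in (-2, -1, 0, 1, 2)])   # [None, 0.0, 1.0, 0.0, None]

# cos restricted to [0, pi] is a bijection onto [-1, 1]; math.acos is its
# inverse, the arccosine of the text.
for x in (0.0, 0.7, 1.5, 2.9, math.pi):
    assert abs(math.acos(math.cos(x)) - x) < 1e-9
print("acos(cos(x)) == x on [0, pi]")
```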
More generally, given a binary relation R between two sets X and Y, let E be a subset of X such that, for every x ∈ E , {\displaystyle x\in E,} there is some y ∈ Y {\displaystyle y\in Y} such that x R y. If one has a criterion allowing selecting such a y for every x ∈ E , {\displaystyle x\in E,} this defines a function f : E → Y , {\displaystyle f:E\to Y,} called an implicit function, because it is implicitly defined by the relation R. For example, the equation of the unit circle x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} defines a relation on real numbers. If −1 < x < 1 there are two possible values of y, one positive and one negative. For x = ± 1, these two values become both equal to 0. Otherwise, there is no possible value of y. This means that the equation defines two implicit functions with domain [−1, 1] and respective codomains [0, +∞) and (−∞, 0]. In this example, the equation can be solved in y, giving y = ± 1 − x 2 , {\displaystyle y=\pm {\sqrt {1-x^{2}}},} but, in more complicated examples, this is impossible. For example, the relation y 5 + y + x = 0 {\displaystyle y^{5}+y+x=0} defines y as an implicit function of x, called the Bring radical, which has R {\displaystyle \mathbb {R} } as domain and range. The Bring radical cannot be expressed in terms of the four arithmetic operations and nth roots. The implicit function theorem provides mild differentiability conditions for existence and uniqueness of an implicit function in the neighborhood of a point. === Using differential calculus === Many functions can be defined as the antiderivative of another function. This is the case of the natural logarithm, which is the antiderivative of 1/x that is 0 for x = 1. Another common example is the error function. More generally, many functions, including most special functions, can be defined as solutions of differential equations. The simplest example is probably the exponential function, which can be defined as the unique function that is equal to its derivative and takes the value 1 for x = 0. Power series can be used to define functions on the domain in which they converge. For example, the exponential function is given by e x = ∑ n = 0 ∞ x n n ! {\textstyle e^{x}=\sum _{n=0}^{\infty }{x^{n} \over n!}} . However, as the coefficients of a series are quite arbitrary, a function that is the sum of a convergent series is generally defined otherwise, and the sequence of the coefficients is the result of some computation based on another definition. Then, the power series can be used to enlarge the domain of the function. Typically, if a function for a real variable is the sum of its Taylor series in some interval, this power series allows immediately enlarging the domain to a subset of the complex numbers, the disc of convergence of the series. Then analytic continuation allows enlarging further the domain for including almost the whole complex plane. This process is the method that is generally used for defining the logarithm, the exponential and the trigonometric functions of a complex number. === By recurrence === Functions whose domain are the nonnegative integers, known as sequences, are sometimes defined by recurrence relations. The factorial function on the nonnegative integers ( n ↦ n ! {\displaystyle n\mapsto n!} ) is a basic example, as it can be defined by the recurrence relation n ! = n ( n − 1 ) ! for n > 0 , {\displaystyle n!=n(n-1)!\quad {\text{for}}\quad n>0,} and the initial condition 0 ! = 1. 
{\displaystyle 0!=1.} == Representing a function == A graph is commonly used to give an intuitive picture of a function. As an example of how a graph helps to understand a function, it is easy to see from its graph whether a function is increasing or decreasing. Some functions may also be represented by bar charts. === Graphs and plots === Given a function f : X → Y , {\displaystyle f:X\to Y,} its graph is, formally, the set G = { ( x , f ( x ) ) ∣ x ∈ X } . {\displaystyle G=\{(x,f(x))\mid x\in X\}.} In the frequent case where X and Y are subsets of the real numbers (or may be identified with such subsets, e.g. intervals), an element ( x , y ) ∈ G {\displaystyle (x,y)\in G} may be identified with a point having coordinates x, y in a 2-dimensional coordinate system, e.g. the Cartesian plane. Parts of this may create a plot that represents (parts of) the function. The use of plots is so ubiquitous that they too are called the graph of the function. Graphic representations of functions are also possible in other coordinate systems. For example, the graph of the square function x ↦ x 2 , {\displaystyle x\mapsto x^{2},} consisting of all points with coordinates ( x , x 2 ) {\displaystyle (x,x^{2})} for x ∈ R , {\displaystyle x\in \mathbb {R} ,} yields, when depicted in Cartesian coordinates, the well known parabola. If the same quadratic function x ↦ x 2 , {\displaystyle x\mapsto x^{2},} with the same formal graph, consisting of pairs of numbers, is plotted instead in polar coordinates ( r , θ ) = ( x , x 2 ) , {\displaystyle (r,\theta )=(x,x^{2}),} the plot obtained is Fermat's spiral. === Tables === A function can be represented as a table of values. If the domain of a function is finite, then the function can be completely specified in this way. For example, the multiplication function f : { 1 , … , 5 } 2 → R {\displaystyle f:\{1,\ldots ,5\}^{2}\to \mathbb {R} } defined as f ( x , y ) = x y {\displaystyle f(x,y)=xy} can be represented by the familiar multiplication table On the other hand, if a function's domain is continuous, a table can give the values of the function at specific values of the domain. If an intermediate value is needed, interpolation can be used to estimate the value of the function. For example, a portion of a table for the sine function might be given as follows, with values rounded to 6 decimal places: Before the advent of handheld calculators and personal computers, such tables were often compiled and published for functions such as logarithms and trigonometric functions. === Bar chart === A bar chart can represent a function whose domain is a finite set, the natural numbers, or the integers. In this case, an element x of the domain is represented by an interval of the x-axis, and the corresponding value of the function, f(x), is represented by a rectangle whose base is the interval corresponding to x and whose height is f(x) (possibly negative, in which case the bar extends below the x-axis). == General properties == This section describes general properties of functions, that are independent of specific properties of the domain and the codomain. === Standard functions === There are a number of standard functions that occur frequently: For every set X, there is a unique function, called the empty function, or empty map, from the empty set to X. The graph of an empty function is the empty set. The existence of empty functions is needed both for the coherency of the theory and for avoiding exceptions concerning the empty set in many statements. 
Under the usual set-theoretic definition of a function as an ordered triplet (or equivalent ones), there is exactly one empty function for each set, thus the empty function ∅ → X {\displaystyle \varnothing \to X} is not equal to ∅ → Y {\displaystyle \varnothing \to Y} if and only if X ≠ Y {\displaystyle X\neq Y} , although their graphs are both the empty set. For every set X and every singleton set {s}, there is a unique function from X to {s}, which maps every element of X to s. This is a surjection (see below) unless X is the empty set. Given a function f : X → Y , {\displaystyle f:X\to Y,} the canonical surjection of f onto its image f ( X ) = { f ( x ) ∣ x ∈ X } {\displaystyle f(X)=\{f(x)\mid x\in X\}} is the function from X to f(X) that maps x to f(x). For every subset A of a set X, the inclusion map of A into X is the injective (see below) function that maps every element of A to itself. The identity function on a set X, often denoted by idX, is the inclusion of X into itself. === Function composition === Given two functions f : X → Y {\displaystyle f:X\to Y} and g : Y → Z {\displaystyle g:Y\to Z} such that the domain of g is the codomain of f, their composition is the function g ∘ f : X → Z {\displaystyle g\circ f:X\rightarrow Z} defined by ( g ∘ f ) ( x ) = g ( f ( x ) ) . {\displaystyle (g\circ f)(x)=g(f(x)).} That is, the value of g ∘ f {\displaystyle g\circ f} is obtained by first applying f to x to obtain y = f(x) and then applying g to the result y to obtain g(y) = g(f(x)). In this notation, the function that is applied first is always written on the right. The composition g ∘ f {\displaystyle g\circ f} is an operation on functions that is defined only if the codomain of the first function is the domain of the second one. Even when both g ∘ f {\displaystyle g\circ f} and f ∘ g {\displaystyle f\circ g} satisfy these conditions, the composition is not necessarily commutative, that is, the functions g ∘ f {\displaystyle g\circ f} and f ∘ g {\displaystyle f\circ g} need not be equal, but may deliver different values for the same argument. For example, let f(x) = x2 and g(x) = x + 1, then g ( f ( x ) ) = x 2 + 1 {\displaystyle g(f(x))=x^{2}+1} and f ( g ( x ) ) = ( x + 1 ) 2 {\displaystyle f(g(x))=(x+1)^{2}} agree just for x = 0. {\displaystyle x=0.} The function composition is associative in the sense that, if one of ( h ∘ g ) ∘ f {\displaystyle (h\circ g)\circ f} and h ∘ ( g ∘ f ) {\displaystyle h\circ (g\circ f)} is defined, then the other is also defined, and they are equal, that is, ( h ∘ g ) ∘ f = h ∘ ( g ∘ f ) . {\displaystyle (h\circ g)\circ f=h\circ (g\circ f).} Therefore, it is usual to just write h ∘ g ∘ f . {\displaystyle h\circ g\circ f.} The identity functions id X {\displaystyle \operatorname {id} _{X}} and id Y {\displaystyle \operatorname {id} _{Y}} are respectively a right identity and a left identity for functions from X to Y. That is, if f is a function with domain X, and codomain Y, one has f ∘ id X = id Y ∘ f = f . {\displaystyle f\circ \operatorname {id} _{X}=\operatorname {id} _{Y}\circ f=f.} === Image and preimage === Let f : X → Y . {\displaystyle f:X\to Y.} The image under f of an element x of the domain X is f(x). If A is any subset of X, then the image of A under f, denoted f(A), is the subset of the codomain Y consisting of all images of elements of A, that is, f ( A ) = { f ( x ) ∣ x ∈ A } . {\displaystyle f(A)=\{f(x)\mid x\in A\}.} The image of f is the image of the whole domain, that is, f(X). 
It is also called the range of f, although the term range may also refer to the codomain. On the other hand, the inverse image or preimage under f of an element y of the codomain Y is the set of all elements of the domain X whose images under f equal y. In symbols, the preimage of y is denoted by f − 1 ( y ) {\displaystyle f^{-1}(y)} and is given by the equation f − 1 ( y ) = { x ∈ X ∣ f ( x ) = y } . {\displaystyle f^{-1}(y)=\{x\in X\mid f(x)=y\}.} Likewise, the preimage of a subset B of the codomain Y is the set of the preimages of the elements of B, that is, it is the subset of the domain X consisting of all elements of X whose images belong to B. It is denoted by f − 1 ( B ) {\displaystyle f^{-1}(B)} and is given by the equation f − 1 ( B ) = { x ∈ X ∣ f ( x ) ∈ B } . {\displaystyle f^{-1}(B)=\{x\in X\mid f(x)\in B\}.} For example, the preimage of { 4 , 9 } {\displaystyle \{4,9\}} under the square function is the set { − 3 , − 2 , 2 , 3 } {\displaystyle \{-3,-2,2,3\}} . By definition of a function, the image of an element x of the domain is always a single element of the codomain. However, the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} of an element y of the codomain may be empty or contain any number of elements. For example, if f is the function from the integers to themselves that maps every integer to 0, then f − 1 ( 0 ) = Z {\displaystyle f^{-1}(0)=\mathbb {Z} } . If f : X → Y {\displaystyle f:X\to Y} is a function, A and B are subsets of X, and C and D are subsets of Y, then one has the following properties: A ⊆ B ⟹ f ( A ) ⊆ f ( B ) {\displaystyle A\subseteq B\Longrightarrow f(A)\subseteq f(B)} C ⊆ D ⟹ f − 1 ( C ) ⊆ f − 1 ( D ) {\displaystyle C\subseteq D\Longrightarrow f^{-1}(C)\subseteq f^{-1}(D)} A ⊆ f − 1 ( f ( A ) ) {\displaystyle A\subseteq f^{-1}(f(A))} C ⊇ f ( f − 1 ( C ) ) {\displaystyle C\supseteq f(f^{-1}(C))} f ( f − 1 ( f ( A ) ) ) = f ( A ) {\displaystyle f(f^{-1}(f(A)))=f(A)} f − 1 ( f ( f − 1 ( C ) ) ) = f − 1 ( C ) {\displaystyle f^{-1}(f(f^{-1}(C)))=f^{-1}(C)} The preimage by f of an element y of the codomain is sometimes called, in some contexts, the fiber of y under f. If a function f has an inverse (see below), this inverse is denoted f − 1 . {\displaystyle f^{-1}.} In this case f − 1 ( C ) {\displaystyle f^{-1}(C)} may denote either the image by f − 1 {\displaystyle f^{-1}} or the preimage by f of C. This is not a problem, as these sets are equal. The notation f ( A ) {\displaystyle f(A)} and f − 1 ( C ) {\displaystyle f^{-1}(C)} may be ambiguous in the case of sets that contain some subsets as elements, such as { x , { x } } . {\displaystyle \{x,\{x\}\}.} In this case, some care may be needed, for example, by using square brackets f [ A ] , f − 1 [ C ] {\displaystyle f[A],f^{-1}[C]} for images and preimages of subsets and ordinary parentheses for images and preimages of elements. === Injective, surjective and bijective functions === Let f : X → Y {\displaystyle f:X\to Y} be a function. The function f is injective (or one-to-one, or is an injection) if f(a) ≠ f(b) for every two different elements a and b of X. Equivalently, f is injective if and only if, for every y ∈ Y , {\displaystyle y\in Y,} the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} contains at most one element. An empty function is always injective. If X is not the empty set, then f is injective if and only if there exists a function g : Y → X {\displaystyle g:Y\to X} such that g ∘ f = id X , {\displaystyle g\circ f=\operatorname {id} _{X},} that is, if f has a left inverse. 
Proof: If f is injective, for defining g, one chooses an element x 0 {\displaystyle x_{0}} in X (which exists as X is supposed to be nonempty), and one defines g by g ( y ) = x {\displaystyle g(y)=x} if y = f ( x ) {\displaystyle y=f(x)} and g ( y ) = x 0 {\displaystyle g(y)=x_{0}} if y ∉ f ( X ) . {\displaystyle y\not \in f(X).} Conversely, if g ∘ f = id X , {\displaystyle g\circ f=\operatorname {id} _{X},} and y = f ( x ) , {\displaystyle y=f(x),} then x = g ( y ) , {\displaystyle x=g(y),} and thus f − 1 ( y ) = { x } . {\displaystyle f^{-1}(y)=\{x\}.} The function f is surjective (or onto, or is a surjection) if its range f ( X ) {\displaystyle f(X)} equals its codomain Y {\displaystyle Y} , that is, if, for each element y {\displaystyle y} of the codomain, there exists some element x {\displaystyle x} of the domain such that f ( x ) = y {\displaystyle f(x)=y} (in other words, the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} of every y ∈ Y {\displaystyle y\in Y} is nonempty). If, as usual in modern mathematics, the axiom of choice is assumed, then f is surjective if and only if there exists a function g : Y → X {\displaystyle g:Y\to X} such that f ∘ g = id Y , {\displaystyle f\circ g=\operatorname {id} _{Y},} that is, if f has a right inverse. The axiom of choice is needed, because, if f is surjective, one defines g by g ( y ) = x , {\displaystyle g(y)=x,} where x {\displaystyle x} is an arbitrarily chosen element of f − 1 ( y ) . {\displaystyle f^{-1}(y).} The function f is bijective (or is a bijection or a one-to-one correspondence) if it is both injective and surjective. That is, f is bijective if, for every y ∈ Y , {\displaystyle y\in Y,} the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} contains exactly one element. The function f is bijective if and only if it admits an inverse function, that is, a function g : Y → X {\displaystyle g:Y\to X} such that g ∘ f = id X {\displaystyle g\circ f=\operatorname {id} _{X}} and f ∘ g = id Y . {\displaystyle f\circ g=\operatorname {id} _{Y}.} (Contrarily to the case of surjections, this does not require the axiom of choice; the proof is straightforward). Every function f : X → Y {\displaystyle f:X\to Y} may be factorized as the composition i ∘ s {\displaystyle i\circ s} of a surjection followed by an injection, where s is the canonical surjection of X onto f(X) and i is the canonical injection of f(X) into Y. This is the canonical factorization of f. "One-to-one" and "onto" are terms that were more common in the older English language literature; "injective", "surjective", and "bijective" were originally coined as French words in the second quarter of the 20th century by the Bourbaki group and imported into English. As a word of caution, "a one-to-one function" is one that is injective, while a "one-to-one correspondence" refers to a bijective function. Also, the statement "f maps X onto Y" differs from "f maps X into B", in that the former implies that f is surjective, while the latter makes no assertion about the nature of f. In a complicated reasoning, the one letter difference can easily be missed. Due to the confusing nature of this older terminology, these terms have declined in popularity relative to the Bourbakian terms, which have also the advantage of being more symmetrical. 
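The preceding subsections on composition, images and preimages, and injectivity/surjectivity can all be illustrated on small finite examples. The Python sketch below (helper names invented for the illustration) uses the functions f(x) = x² and g(x) = x + 1 from the composition example, and a finite squaring function for the preimage and surjectivity checks.

```python
def compose(g, f):
    """g o f : apply f first, then g."""
    return lambda x: g(f(x))

f = lambda x: x ** 2
g = lambda x: x + 1
gf, fg = compose(g, f), compose(f, g)
print(gf(3), fg(3))        # 10 and 16: composition is not commutative
print(gf(0), fg(0))        # 1 and 1: the two compositions agree only at x = 0

# A finite function: squaring on the domain {-3, ..., 3}.
dom = range(-3, 4)
sq = {x: x * x for x in dom}

def image(func, A):
    return {func[x] for x in A}

def preimage(func, B):
    return {x for x in func if func[x] in B}

print(image(sq, dom))          # {0, 1, 4, 9}
print(preimage(sq, {4, 9}))    # {-3, -2, 2, 3}, as in the text

# Injectivity and surjectivity onto a chosen codomain.
codomain = {0, 1, 4, 9}
injective = len(set(sq.values())) == len(sq)
surjective = set(sq.values()) == codomain
print(injective, surjective)   # False True
```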
=== Restriction and extension === If f : X → Y {\displaystyle f:X\to Y} is a function and S is a subset of X, then the restriction of f {\displaystyle f} to S, denoted f | S {\displaystyle f|_{S}} , is the function from S to Y defined by f | S ( x ) = f ( x ) {\displaystyle f|_{S}(x)=f(x)} for all x in S. Restrictions can be used to define partial inverse functions: if there is a subset S of the domain of a function f {\displaystyle f} such that f | S {\displaystyle f|_{S}} is injective, then the canonical surjection of f | S {\displaystyle f|_{S}} onto its image f | S ( S ) = f ( S ) {\displaystyle f|_{S}(S)=f(S)} is a bijection, and thus has an inverse function from f ( S ) {\displaystyle f(S)} to S. One application is the definition of inverse trigonometric functions. For example, the cosine function is injective when restricted to the interval [0, π]. The image of this restriction is the interval [−1, 1], and thus the restriction has an inverse function from [−1, 1] to [0, π], which is called arccosine and is denoted arccos. Function restriction may also be used for "gluing" functions together. Let X = ⋃ i ∈ I U i {\textstyle X=\bigcup _{i\in I}U_{i}} be the decomposition of X as a union of subsets, and suppose that a function f i : U i → Y {\displaystyle f_{i}:U_{i}\to Y} is defined on each U i {\displaystyle U_{i}} such that for each pair i , j {\displaystyle i,j} of indices, the restrictions of f i {\displaystyle f_{i}} and f j {\displaystyle f_{j}} to U i ∩ U j {\displaystyle U_{i}\cap U_{j}} are equal. Then this defines a unique function f : X → Y {\displaystyle f:X\to Y} such that f | U i = f i {\displaystyle f|_{U_{i}}=f_{i}} for all i. This is the way that functions on manifolds are defined. An extension of a function f is a function g such that f is a restriction of g. A typical use of this concept is the process of analytic continuation, that allows extending functions whose domain is a small part of the complex plane to functions whose domain is almost the whole complex plane. Here is another classical example of a function extension that is encountered when studying homographies of the real line. A homography is a function h ( x ) = a x + b c x + d {\displaystyle h(x)={\frac {ax+b}{cx+d}}} such that ad − bc ≠ 0. Its domain is the set of all real numbers different from − d / c , {\displaystyle -d/c,} and its image is the set of all real numbers different from a / c . {\displaystyle a/c.} If one extends the real line to the projectively extended real line by including ∞, one may extend h to a bijection from the extended real line to itself by setting h ( ∞ ) = a / c {\displaystyle h(\infty )=a/c} and h ( − d / c ) = ∞ {\displaystyle h(-d/c)=\infty } . == In calculus == The idea of function, starting in the 17th century, was fundamental to the new infinitesimal calculus. At that time, only real-valued functions of a real variable were considered, and all functions were assumed to be smooth. But the definition was soon extended to functions of several variables and to functions of a complex variable. In the second half of the 19th century, the mathematically rigorous definition of a function was introduced, and functions with arbitrary domains and codomains were defined. Functions are now used throughout all areas of mathematics. In introductory calculus, when the word function is used without qualification, it means a real-valued function of a single real variable. 
The more general definition of a function is usually introduced to second- or third-year college students in STEM majors, and in their senior year they are introduced to calculus in a larger, more rigorous setting in courses such as real analysis and complex analysis. === Real function === A real function is a real-valued function of a real variable, that is, a function whose codomain is the field of real numbers and whose domain is a set of real numbers that contains an interval. In this section, these functions are simply called functions. The functions that are most commonly considered in mathematics and its applications have some regularity, that is, they are continuous, differentiable, and even analytic. This regularity ensures that these functions can be visualized by their graphs. In this section, all functions are differentiable in some interval. Functions enjoy pointwise operations, that is, if f and g are functions, their sum, difference and product are functions defined by ( f + g ) ( x ) = f ( x ) + g ( x ) ( f − g ) ( x ) = f ( x ) − g ( x ) ( f ⋅ g ) ( x ) = f ( x ) ⋅ g ( x ) . {\displaystyle {\begin{aligned}(f+g)(x)&=f(x)+g(x)\\(f-g)(x)&=f(x)-g(x)\\(f\cdot g)(x)&=f(x)\cdot g(x)\\\end{aligned}}.} The domains of the resulting functions are the intersection of the domains of f and g. The quotient of two functions is defined similarly by f g ( x ) = f ( x ) g ( x ) , {\displaystyle {\frac {f}{g}}(x)={\frac {f(x)}{g(x)}},} but the domain of the resulting function is obtained by removing the zeros of g from the intersection of the domains of f and g. The polynomial functions are defined by polynomials, and their domain is the whole set of real numbers. They include constant functions, linear functions and quadratic functions. Rational functions are quotients of two polynomial functions, and their domain is the real numbers with a finite number of them removed to avoid division by zero. The simplest rational function is the function x ↦ 1 x , {\displaystyle x\mapsto {\frac {1}{x}},} whose graph is a hyperbola, and whose domain is the whole real line except for 0. The derivative of a real differentiable function is a real function. An antiderivative of a continuous real function is a real function that has the original function as a derivative. For example, the function x ↦ 1 x {\textstyle x\mapsto {\frac {1}{x}}} is continuous, and even differentiable, on the positive real numbers. Thus one antiderivative, which takes the value zero for x = 1, is a differentiable function called the natural logarithm. A real function f is monotonic in an interval if the sign of f ( x ) − f ( y ) x − y {\displaystyle {\frac {f(x)-f(y)}{x-y}}} does not depend on the choice of x and y in the interval. If the function is differentiable in the interval, it is monotonic if the sign of the derivative is constant in the interval. If a real function f is monotonic in an interval I, it has an inverse function, which is a real function with domain f(I) and image I. This is how inverse trigonometric functions are defined in terms of trigonometric functions, restricted to intervals where the trigonometric functions are monotonic. Another example: the natural logarithm is monotonic on the positive real numbers, and its image is the whole real line; therefore it has an inverse function that is a bijection between the real numbers and the positive real numbers. This inverse is the exponential function. 
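The pointwise operations just described can be mirrored directly in code. The following short Python sketch (illustrative, not from the article) builds the sum, product and quotient of two functions as closures; the quotient simply fails to evaluate at the zeros of g, mirroring the removal of those points from its domain.

def pointwise(op, f, g):
    # (f op g)(x) = op(f(x), g(x)); domains are kept implicit in this sketch
    return lambda x: op(f(x), g(x))

f = lambda x: x * x            # f(x) = x^2
g = lambda x: x - 1.0          # g(x) = x - 1

h_sum  = pointwise(lambda a, b: a + b, f, g)
h_prod = pointwise(lambda a, b: a * b, f, g)
h_quot = pointwise(lambda a, b: a / b, f, g)   # raises ZeroDivisionError at x = 1

print(h_sum(2.0), h_prod(2.0), h_quot(2.0))    # 5.0 4.0 4.0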
Many other real functions are defined either by the implicit function theorem (the inverse function is a particular instance) or as solutions of differential equations. For example, the sine and the cosine functions are the solutions of the linear differential equation y ″ + y = 0 {\displaystyle y''+y=0} such that sin ⁡ 0 = 0 , cos ⁡ 0 = 1 , ∂ sin ⁡ x ∂ x ( 0 ) = 1 , ∂ cos ⁡ x ∂ x ( 0 ) = 0. {\displaystyle \sin 0=0,\quad \cos 0=1,\quad {\frac {\partial \sin x}{\partial x}}(0)=1,\quad {\frac {\partial \cos x}{\partial x}}(0)=0.} === Vector-valued function === When the elements of the codomain of a function are vectors, the function is said to be a vector-valued function. These functions are particularly useful in applications, for example modeling physical properties. For example, the function that associates to each point of a fluid its velocity vector is a vector-valued function. Some vector-valued functions are defined on a subset of R n {\displaystyle \mathbb {R} ^{n}} or other spaces that share geometric or topological properties of R n {\displaystyle \mathbb {R} ^{n}} , such as manifolds. These vector-valued functions are given the name vector fields. == Function space == In mathematical analysis, and more specifically in functional analysis, a function space is a set of scalar-valued or vector-valued functions, which share a specific property and form a topological vector space. For example, the real smooth functions with a compact support (that is, they are zero outside some compact set) form a function space that is the basis of the theory of distributions. Function spaces play a fundamental role in advanced mathematical analysis, by allowing the use of their algebraic and topological properties for studying properties of functions. For example, all theorems of existence and uniqueness of solutions of ordinary or partial differential equations result from the study of function spaces. == Multi-valued functions == Several methods for specifying functions of real or complex variables start from a local definition of the function at a point or on a neighbourhood of a point, and then extend the function by continuity to a much larger domain. Frequently, for a starting point x 0 , {\displaystyle x_{0},} there are several possible starting values for the function. For example, in defining the square root as the inverse function of the square function, for any positive real number x 0 , {\displaystyle x_{0},} there are two choices for the value of the square root, one of which is positive and denoted x 0 , {\displaystyle {\sqrt {x_{0}}},} and another which is negative and denoted − x 0 . {\displaystyle -{\sqrt {x_{0}}}.} These choices define two continuous functions, both having the nonnegative real numbers as a domain, and having either the nonnegative or the nonpositive real numbers as images. When looking at the graphs of these functions, one can see that, together, they form a single smooth curve. It is therefore often useful to consider these two square root functions as a single function that has two values for positive x, one value for 0 and no value for negative x. In the preceding example, one choice, the positive square root, is more natural than the other. This is not the case in general. For example, let us consider the implicit function that maps y to a root x of x 3 − 3 x − y = 0 {\displaystyle x^{3}-3x-y=0} . For y = 0 one may choose either 0 , 3 , or − 3 {\displaystyle 0,{\sqrt {3}},{\text{ or }}-{\sqrt {3}}} for x. 
By the implicit function theorem, each choice defines a function; for the first one, the (maximal) domain is the interval [−2, 2] and the image is [−1, 1]; for the second one, the domain is [−2, ∞) and the image is [1, ∞); for the last one, the domain is (−∞, 2] and the image is (−∞, −1]. As the three graphs together form a smooth curve, and there is no reason for preferring one choice, these three functions are often considered as a single multi-valued function of y that has three values for −2 < y < 2, and only one value for y < −2 and for y > 2. The usefulness of the concept of multi-valued functions is clearer when considering complex functions, typically analytic functions. The domain to which a complex function may be extended by analytic continuation generally consists of almost the whole complex plane. However, when extending the domain through two different paths, one often gets different values. For example, when extending the domain of the square root function, along a path of complex numbers with positive imaginary parts, one gets i for the square root of −1; while, when extending through complex numbers with negative imaginary parts, one gets −i. There are generally two ways of solving the problem. One may define a function that is not continuous along some curve, called a branch cut. Such a function is called the principal value of the function. The other way is to consider that one has a multi-valued function, which is analytic everywhere except for isolated singularities, but whose value may "jump" if one follows a closed loop around a singularity. This jump is called the monodromy. == In the foundations of mathematics == The definition of a function that is given in this article requires the concept of set, since the domain and the codomain of a function must be a set. This is not a problem in usual mathematics, as it is generally not difficult to consider only functions whose domain and codomain are sets, which are well defined, even if the domain is not explicitly defined. However, it is sometimes useful to consider more general functions. For example, the formation of singleton sets may be considered as a function x ↦ { x } . {\displaystyle x\mapsto \{x\}.} Its domain would include all sets, and therefore would not be a set. In usual mathematics, one avoids this kind of problem by specifying a domain, which means that one has many singleton functions. However, when establishing foundations of mathematics, one may have to use functions whose domain, codomain or both are not specified, and some authors, often logicians, give precise definitions for these weakly specified functions. These generalized functions may be critical in the development of a formalization of the foundations of mathematics. For example, Von Neumann–Bernays–Gödel set theory is an extension of set theory in which the collection of all sets is a class. This theory includes the replacement axiom, which may be stated as: If X is a set and F is a function, then F[X] is a set. In alternative formulations of the foundations of mathematics using type theory rather than set theory, functions are taken as primitive notions rather than defined from other kinds of objects. They are the inhabitants of function types, and may be constructed using expressions in the lambda calculus. == In computer science == In computer programming, a function is, in general, a subroutine which implements the abstract concept of function. That is, it is a program unit that produces an output for each input. 
Functional programming is the programming paradigm consisting of building programs by using only subroutines that behave like mathematical functions, meaning that they have no side effects and depend only on their arguments: they are referentially transparent. For example, if_then_else is a function that takes three (nullary) functions as arguments, and, depending on the value of the first argument (true or false), returns the value of either the second or the third argument. An important advantage of functional programming is that it makes program proofs easier, since it is based on a well-founded theory, the lambda calculus (see below). However, side effects are generally necessary for practical programs, that is, programs that perform input/output. There is a class of purely functional languages, such as Haskell, which encapsulate the possibility of side effects in the type of a function. Others, such as the ML family, simply allow side effects. In many programming languages, every subroutine is called a function, even when there is no output but only side effects, and when the functionality consists simply of modifying some data in the computer memory. Outside the context of programming languages, "function" has the usual mathematical meaning in computer science. In this area, a property of major interest is the computability of a function. To give a precise meaning to this concept, and to the related concept of algorithm, several models of computation have been introduced, the oldest ones being general recursive functions, lambda calculus, and Turing machines. The fundamental theorem of computability theory is that these three models of computation define the same set of computable functions, and that all the other models of computation that have ever been proposed define the same set of computable functions or a smaller one. The Church–Turing thesis is the claim that every philosophically acceptable definition of a computable function also defines the same functions. General recursive functions are partial functions from integers to integers that can be defined from constant functions, successor, and projection functions via the operators composition, primitive recursion, and minimization. Although defined only for functions from integers to integers, they can model any computable function as a consequence of the following properties: a computation is the manipulation of finite sequences of symbols (digits of numbers, formulas, etc.); every sequence of symbols may be coded as a sequence of bits; and a bit sequence can be interpreted as the binary representation of an integer. Lambda calculus is a theory that defines computable functions without using set theory, and is the theoretical background of functional programming. It consists of terms that are either variables, function definitions (𝜆-terms), or applications of functions to terms. Terms are manipulated by interpreting the axioms of the theory (α-equivalence, β-reduction, and η-conversion) as rewriting rules, which can be used for computation. In its original form, lambda calculus does not include the concepts of domain and codomain of a function. Roughly speaking, they have been introduced in the theory under the name of type in typed lambda calculus. Most kinds of typed lambda calculi can define fewer functions than untyped lambda calculus. 
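The if_then_else function described above, and the lambda-calculus flavour of function definition, can be illustrated with a short Python sketch (an illustration under assumed names, not part of the article): the branches are passed as nullary functions, so only the selected branch is ever evaluated, and Church booleans show functions used as the only primitive notion.

def if_then_else(cond, then_branch, else_branch):
    # All three arguments are nullary functions; only one branch is evaluated.
    return then_branch() if cond() else else_branch()

x = 4
print(if_then_else(lambda: x > 0,
                   lambda: "positive",
                   lambda: "non-positive"))    # positive

# Church booleans: a standard lambda-calculus encoding of true and false.
TRUE  = lambda a: lambda b: a
FALSE = lambda a: lambda b: b
print(TRUE("yes")("no"), FALSE("yes")("no"))   # yes no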
== External links == The Wolfram Functions – website giving formulae and visualizations of many mathematical functions NIST Digital Library of Mathematical Functions
Wikipedia/Empty_function
In solid-state physics, the tight-binding model (or TB model) is an approach to the calculation of electronic band structure using an approximate set of wave functions based upon superposition of wave functions for isolated atoms located at each atomic site. The method is closely related to the LCAO method (linear combination of atomic orbitals method) used in chemistry. Tight-binding models are applied to a wide variety of solids. The model gives good qualitative results in many cases and can be combined with other models that give better results where the tight-binding model fails. Though the tight-binding model is a one-electron model, the model also provides a basis for more advanced calculations like the calculation of surface states and application to various kinds of many-body problem and quasiparticle calculations. == Introduction == The name "tight binding" of this electronic band structure model suggests that this quantum mechanical model describes the properties of tightly bound electrons in solids. The electrons in this model should be tightly bound to the atom to which they belong and they should have limited interaction with states and potentials on surrounding atoms of the solid. As a result, the wave function of the electron will be rather similar to the atomic orbital of the free atom to which it belongs. The energy of the electron will also be rather close to the ionization energy of the electron in the free atom or ion because the interaction with potentials and states on neighboring atoms is limited. Though the mathematical formulation of the one-particle tight-binding Hamiltonian may look complicated at first glance, the model is not complicated at all and can be understood intuitively quite easily. There are only three kinds of matrix elements that play a significant role in the theory. Two of those three kinds of elements should be close to zero and can often be neglected. The most important elements in the model are the interatomic matrix elements, which would simply be called the bond energies by a chemist. In general there are a number of atomic energy levels and atomic orbitals involved in the model. This can lead to complicated band structures because the orbitals belong to different point-group representations. The reciprocal lattice and the Brillouin zone often belong to a different space group than the crystal of the solid. High-symmetry points in the Brillouin zone belong to different point-group representations. When simple systems like the lattices of elements or simple compounds are studied it is often not very difficult to calculate eigenstates in high-symmetry points analytically. So the tight-binding model can provide nice examples for those who want to learn more about group theory. The tight-binding model has a long history and has been applied in many ways and with many different purposes and different outcomes. The model doesn't stand on its own. Parts of the model can be filled in or extended by other kinds of calculations and models like the nearly-free electron model. The model itself, or parts of it, can serve as the basis for other calculations. In the study of conductive polymers, organic semiconductors and molecular electronics, for example, tight-binding-like models are applied in which the role of the atoms in the original concept is replaced by the molecular orbitals of conjugated systems and where the interatomic matrix elements are replaced by inter- or intramolecular hopping and tunneling parameters. 
These conductors nearly all have very anisotropic properties and sometimes are almost perfectly one-dimensional. == Historical background == By 1928, the idea of a molecular orbital had been advanced by Robert Mulliken, who was influenced considerably by the work of Friedrich Hund. The LCAO method for approximating molecular orbitals was introduced in 1928 by B. N. Finklestein and G. E. Horowitz, while the LCAO method for solids was developed by Felix Bloch, as part of his doctoral dissertation in 1928, concurrently with and independent of the LCAO-MO approach. A much simpler interpolation scheme for approximating the electronic band structure, especially for the d-bands of transition metals, is the parameterized tight-binding method conceived in 1954 by John Clarke Slater and George Fred Koster, sometimes referred to as the SK tight-binding method. With the SK tight-binding method, electronic band structure calculations on a solid need not be carried out with full rigor as in the original Bloch's theorem but, rather, first-principles calculations are carried out only at high-symmetry points and the band structure is interpolated over the remainder of the Brillouin zone between these points. In this approach, interactions between different atomic sites are considered as perturbations. There exist several kinds of interactions we must consider. The crystal Hamiltonian is only approximately a sum of atomic Hamiltonians located at different sites, and the atomic wave functions overlap adjacent atomic sites in the crystal, and so are not accurate representations of the exact wave function. There are further explanations in the next section with some mathematical expressions. In recent research on strongly correlated materials, the tight binding approach is a basic approximation, because highly localized electrons such as 3d transition metal electrons sometimes display strongly correlated behavior. In this case, the role of the electron-electron interaction must be considered using a many-body description. The tight-binding model is typically used for calculations of electronic band structure and band gaps in the static regime. However, in combination with other methods such as the random phase approximation (RPA) model, the dynamic response of systems may also be studied. In 2019, Bannwarth et al. introduced the GFN2-xTB method, primarily for the calculation of structures and non-covalent interaction energies. == Mathematical formulation == We introduce the atomic orbitals φ m ( r ) {\displaystyle \varphi _{m}(\mathbf {r} )} , which are eigenfunctions of the Hamiltonian H a t {\displaystyle H_{\rm {at}}} of a single isolated atom. When the atom is placed in a crystal, this atomic wave function overlaps adjacent atomic sites, and so it is no longer a true eigenfunction of the crystal Hamiltonian. The overlap is less when electrons are tightly bound, which is the source of the descriptor "tight-binding". 
Any corrections to the atomic potential Δ U {\displaystyle \Delta U} required to obtain the true Hamiltonian H {\displaystyle H} of the system, are assumed small: H ( r ) = H a t ( r ) + ∑ R n ≠ 0 V ( r − R n ) = H a t ( r ) + Δ U ( r ) , {\displaystyle H(\mathbf {r} )=H_{\mathrm {at} }(\mathbf {r} )+\sum _{\mathbf {R} _{n}\neq \mathbf {0} }V(\mathbf {r} -\mathbf {R} _{n})=H_{\mathrm {at} }(\mathbf {r} )+\Delta U(\mathbf {r} )\ ,} where V ( r − R n ) {\displaystyle V(\mathbf {r} -\mathbf {R} _{n})} denotes the atomic potential of one atom located at site R n {\displaystyle \mathbf {R} _{n}} in the crystal lattice. A solution ψ m {\displaystyle \psi _{m}} to the time-independent single electron Schrödinger equation is then approximated as a linear combination of atomic orbitals φ m ( r − R n ) {\displaystyle \varphi _{m}(\mathbf {r-R_{n}} )} : ψ m ( r ) = ∑ R n b m ( R n ) φ m ( r − R n ) {\displaystyle \psi _{m}(\mathbf {r} )=\sum _{\mathbf {R} _{n}}b_{m}(\mathbf {R} _{n})\ \varphi _{m}(\mathbf {r} -\mathbf {R} _{n})} , where m {\displaystyle m} refers to the m-th atomic energy level. === Translational symmetry and normalization === The Bloch theorem states that the wave function in a crystal can change under translation only by a phase factor: ψ ( r + R ℓ ) = e i k ⋅ R ℓ ψ ( r ) , {\displaystyle \psi (\mathbf {r+R_{\ell }} )=e^{i\mathbf {k\cdot R_{\ell }} }\psi (\mathbf {r} )\ ,} where k {\displaystyle \mathbf {k} } is the wave vector of the wave function. Consequently, the coefficients satisfy ∑ R n b m ( R n ) φ m ( r − R n + R ℓ ) = e i k ⋅ R ℓ ∑ R n b m ( R n ) φ m ( r − R n ) . {\displaystyle \sum _{\mathbf {R} _{n}}b_{m}(\mathbf {R} _{n})\ \varphi _{m}(\mathbf {r} -\mathbf {R} _{n}+\mathbf {R} _{\ell })=e^{i\mathbf {k} \cdot \mathbf {R} _{\ell }}\sum _{\mathbf {R} _{n}}b_{m}(\mathbf {R} _{n})\ \varphi _{m}(\mathbf {r} -\mathbf {R} _{n})\ .} By substituting R p = R n − R ℓ {\displaystyle \mathbf {R} _{p}=\mathbf {R} _{n}-\mathbf {R_{\ell }} } , we find b m ( R p + R ℓ ) = e i k ⋅ R ℓ b m ( R p ) , {\displaystyle b_{m}(\mathbf {R} _{p}+\mathbf {R} _{\ell })=e^{i\mathbf {k\cdot R_{\ell }} }b_{m}(\mathbf {R} _{p})\ ,} (where in RHS we have replaced the dummy index R n {\displaystyle \mathbf {R} _{n}} with R p {\displaystyle \mathbf {R} _{p}} ) or b m ( R ℓ ) = e i k ⋅ R ℓ b m ( 0 ) . 
{\displaystyle b_{m}(\mathbf {R} _{\ell })=e^{i\mathbf {k} \cdot \mathbf {R} _{\ell }}b_{m}(\mathbf {0} )\ .} Normalizing the wave function to unity: ∫ d 3 r ψ m ∗ ( r ) ψ m ( r ) = 1 {\displaystyle \int d^{3}r\ \psi _{m}^{*}(\mathbf {r} )\psi _{m}(\mathbf {r} )=1} = ∑ R n b m ∗ ( R n ) ∑ R ℓ b m ( R ℓ ) ∫ d 3 r φ m ∗ ( r − R n ) φ m ( r − R ℓ ) {\displaystyle =\sum _{\mathbf {R} _{n}}b_{m}^{*}(\mathbf {R} _{n})\sum _{\mathbf {R_{\ell }} }b_{m}(\mathbf {R_{\ell }} )\int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})\varphi _{m}(\mathbf {r} -\mathbf {R} _{\ell })} = b m ∗ ( 0 ) b m ( 0 ) ∑ R n e − i k ⋅ R n ∑ R ℓ e i k ⋅ R ℓ ∫ d 3 r φ m ∗ ( r − R n ) φ m ( r − R ℓ ) {\displaystyle =b_{m}^{*}(0)b_{m}(0)\sum _{\mathbf {R} _{n}}e^{-i\mathbf {k\cdot R_{n}} }\sum _{\mathbf {R_{\ell }} }e^{i\mathbf {k\cdot R_{\ell }} }\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})\varphi _{m}(\mathbf {r} -\mathbf {R} _{\ell })} = N b m ∗ ( 0 ) b m ( 0 ) ∑ R p e − i k ⋅ R p ∫ d 3 r φ m ∗ ( r − R p ) φ m ( r ) {\displaystyle =Nb_{m}^{*}(0)b_{m}(0)\sum _{\mathbf {R} _{p}}e^{-i\mathbf {k} \cdot \mathbf {R} _{p}}\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{p})\varphi _{m}(\mathbf {r} )\ } = N b m ∗ ( 0 ) b m ( 0 ) ∑ R p e i k ⋅ R p ∫ d 3 r φ m ∗ ( r ) φ m ( r − R p ) , {\displaystyle =Nb_{m}^{*}(0)b_{m}(0)\sum _{\mathbf {R} _{p}}e^{i\mathbf {k} \cdot \mathbf {R} _{p}}\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} )\varphi _{m}(\mathbf {r} -\mathbf {R} _{p})\ ,} so the normalization sets b m ( 0 ) {\displaystyle b_{m}(0)} as b m ∗ ( 0 ) b m ( 0 ) = 1 N ⋅ 1 1 + ∑ R p ≠ 0 e i k ⋅ R p α m ( R p ) , {\displaystyle b_{m}^{*}(0)b_{m}(0)={\frac {1}{N}}\ \cdot \ {\frac {1}{1+\sum _{\mathbf {R} _{p}\neq 0}e^{i\mathbf {k} \cdot \mathbf {R} _{p}}\alpha _{m}(\mathbf {R} _{p})}}\ ,} where α m ( R p ) {\displaystyle {\alpha _{m}(\mathbf {R} _{p})}} are the atomic overlap integrals, which frequently are neglected resulting in b m ( 0 ) ≈ 1 N , {\displaystyle b_{m}(0)\approx {\frac {1}{\sqrt {N}}}\ ,} and ψ m ( r ) ≈ 1 N ∑ R n e i k ⋅ R n φ m ( r − R n ) . 
{\displaystyle \psi _{m}(\mathbf {r} )\approx {\frac {1}{\sqrt {N}}}\sum _{\mathbf {R} _{n}}e^{i\mathbf {k} \cdot \mathbf {R} _{n}}\ \varphi _{m}(\mathbf {r} -\mathbf {R} _{n})\ .} === The tight binding Hamiltonian === Using the tight binding form for the wave function, and assuming only the m-th atomic energy level is important for the m-th energy band, the Bloch energies ε m {\displaystyle \varepsilon _{m}} are of the form ε m = ∫ d 3 r ψ m ∗ ( r ) H ( r ) ψ m ( r ) {\displaystyle \varepsilon _{m}=\int d^{3}r\ \psi _{m}^{*}(\mathbf {r} )H(\mathbf {r} )\psi _{m}(\mathbf {r} )} = ∑ R n b m ∗ ( R n ) ∫ d 3 r φ m ∗ ( r − R n ) H ( r ) ψ m ( r ) {\displaystyle =\sum _{\mathbf {R} _{n}}b_{m}^{*}(\mathbf {R} _{n})\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})H(\mathbf {r} )\psi _{m}(\mathbf {r} )} = ∑ R n b m ∗ ( R n ) ∫ d 3 r φ m ∗ ( r − R n ) H a t ( r ) ψ m ( r ) + ∑ R n b m ∗ ( R n ) ∫ d 3 r φ m ∗ ( r − R n ) Δ U ( r ) ψ m ( r ) {\displaystyle =\sum _{\mathbf {R} _{n}}b_{m}^{*}(\mathbf {R} _{n})\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})H_{\mathrm {at} }(\mathbf {r} )\psi _{m}(\mathbf {r} )+\sum _{\mathbf {R} _{n}}b_{m}^{*}(\mathbf {R} _{n})\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})\Delta U(\mathbf {r} )\psi _{m}(\mathbf {r} )} = ∑ R n , R l b m ∗ ( R n ) b m ( R l ) ∫ d 3 r φ m ∗ ( r − R n ) H a t ( r ) φ m ( r − R l ) + b m ∗ ( 0 ) ∑ R n e − i k ⋅ R n ∫ d 3 r φ m ∗ ( r − R n ) Δ U ( r ) ψ m ( r ) {\displaystyle =\sum _{\mathbf {R} _{n},\mathbf {R} _{l}}b_{m}^{*}(\mathbf {R} _{n})b_{m}(\mathbf {R} _{l})\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})H_{\mathrm {at} }(\mathbf {r} )\varphi _{m}(\mathbf {r} -\mathbf {R} _{l})+b_{m}^{*}(0)\sum _{\mathbf {R} _{n}}e^{-i\mathbf {k} \cdot \mathbf {R} _{n}}\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})\Delta U(\mathbf {r} )\psi _{m}(\mathbf {r} )} = b m ∗ ( 0 ) b m ( 0 ) N ∫ d 3 r φ m ∗ ( r ) H a t ( r ) φ m ( r ) + b m ∗ ( 0 ) ∑ R n e − i k ⋅ R n ∫ d 3 r φ m ∗ ( r − R n ) Δ U ( r ) ψ m ( r ) {\displaystyle =b_{m}^{*}(\mathbf {0} )b_{m}(\mathbf {0} )\ N\int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} )H_{\mathrm {at} }(\mathbf {r} )\varphi _{m}(\mathbf {r} )+b_{m}^{*}(0)\sum _{\mathbf {R} _{n}}e^{-i\mathbf {k} \cdot \mathbf {R} _{n}}\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})\Delta U(\mathbf {r} )\psi _{m}(\mathbf {r} )} ≈ E m + b m ∗ ( 0 ) ∑ R n e − i k ⋅ R n ∫ d 3 r φ m ∗ ( r − R n ) Δ U ( r ) ψ m ( r ) . {\displaystyle \approx E_{m}+b_{m}^{*}(0)\sum _{\mathbf {R} _{n}}e^{-i\mathbf {k} \cdot \mathbf {R} _{n}}\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})\Delta U(\mathbf {r} )\psi _{m}(\mathbf {r} )\ .} Here in the last step it was assumed that the overlap integral is zero and thus b m ∗ ( 0 ) b m ( 0 ) = 1 N {\displaystyle b_{m}^{*}(\mathbf {0} )b_{m}(\mathbf {0} )={\frac {1}{N}}} . 
The energy then becomes ε m ( k ) = E m − N | b m ( 0 ) | 2 ( β m + ∑ R n ≠ 0 ∑ l γ m , l ( R n ) e i k ⋅ R n ) , {\displaystyle \varepsilon _{m}(\mathbf {k} )=E_{m}-N\ |b_{m}(0)|^{2}\left(\beta _{m}+\sum _{\mathbf {R} _{n}\neq 0}\sum _{l}\gamma _{m,l}(\mathbf {R} _{n})e^{i\mathbf {k} \cdot \mathbf {R} _{n}}\right)\ ,} = E m − β m + ∑ R n ≠ 0 ∑ l e i k ⋅ R n γ m , l ( R n ) 1 + ∑ R n ≠ 0 ∑ l e i k ⋅ R n α m , l ( R n ) , {\displaystyle =E_{m}-\ {\frac {\beta _{m}+\sum _{\mathbf {R} _{n}\neq 0}\sum _{l}e^{i\mathbf {k} \cdot \mathbf {R} _{n}}\gamma _{m,l}(\mathbf {R} _{n})}{\ \ 1+\sum _{\mathbf {R} _{n}\neq 0}\sum _{l}e^{i\mathbf {k} \cdot \mathbf {R} _{n}}\alpha _{m,l}(\mathbf {R} _{n})}}\ ,} where Em is the energy of the m-th atomic level, and α m , l {\displaystyle \alpha _{m,l}} , β m {\displaystyle \beta _{m}} and γ m , l {\displaystyle \gamma _{m,l}} are the tight binding matrix elements discussed below. === The tight binding matrix elements === The elements β m = − ∫ φ m ∗ ( r ) Δ U ( r ) φ m ( r ) d 3 r , {\displaystyle \beta _{m}=-\int {\varphi _{m}^{*}(\mathbf {r} )\Delta U(\mathbf {r} )\varphi _{m}(\mathbf {r} )\,d^{3}r}{\text{,}}} give the atomic energy shift due to the potential on neighboring atoms. This term is relatively small in most cases. If it is large, it means that potentials on neighboring atoms have a large influence on the energy of the central atom. The next class of terms γ m , l ( R n ) = − ∫ φ m ∗ ( r ) Δ U ( r ) φ l ( r − R n ) d 3 r , {\displaystyle \gamma _{m,l}(\mathbf {R} _{n})=-\int {\varphi _{m}^{*}(\mathbf {r} )\Delta U(\mathbf {r} )\varphi _{l}(\mathbf {r} -\mathbf {R} _{n})\,d^{3}r}{\text{,}}} is the interatomic matrix element between the atomic orbitals m and l on adjacent atoms. It is also called the bond energy or two-center integral, and it is the dominant term in the tight binding model. The last class of terms α m , l ( R n ) = ∫ φ m ∗ ( r ) φ l ( r − R n ) d 3 r , {\displaystyle \alpha _{m,l}(\mathbf {R} _{n})=\int {\varphi _{m}^{*}(\mathbf {r} )\varphi _{l}(\mathbf {r} -\mathbf {R} _{n})\,d^{3}r}{\text{,}}} denote the overlap integrals between the atomic orbitals m and l on adjacent atoms. These, too, are typically small; if not, then Pauli repulsion has a non-negligible influence on the energy of the central atom. == Evaluation of the matrix elements == As mentioned before, the values of the β m {\displaystyle \beta _{m}} matrix elements are not so large in comparison with the ionization energy, because the potentials of neighboring atoms on the central atom are limited. If β m {\displaystyle \beta _{m}} is not relatively small, it means that the potential of the neighboring atom on the central atom is not small either. In that case it is an indication that the tight binding model is not a very good model for the description of the band structure for some reason. The interatomic distances may be too small, or the charges on the atoms or ions in the lattice may be wrong, for example. The interatomic matrix elements γ m , l {\displaystyle \gamma _{m,l}} can be calculated directly if the atomic wave functions and the potentials are known in detail. Most often this is not the case. There are numerous ways to get parameters for these matrix elements. Parameters can be obtained from chemical bond energy data. Energies and eigenstates at some high-symmetry points in the Brillouin zone can be evaluated, and the values of the integrals in the matrix elements can be matched with band structure data from other sources. 
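As a toy numerical illustration of the overlap integrals α defined above, the following Python sketch evaluates the overlap of two normalized one-dimensional Gaussian "orbitals" as a function of their separation R, showing how the overlap decays with distance. The Gaussian form, the width parameter and the one-dimensional setting are simplifying assumptions made for this sketch, not part of the model itself.

import numpy as np

a = 1.0                                                # orbital width parameter (arbitrary)
x = np.linspace(-20.0, 20.0, 20001)                    # real-space grid
phi = (2.0 * a / np.pi) ** 0.25 * np.exp(-a * x**2)    # normalized 1D Gaussian "orbital"

def overlap(R):
    # alpha(R) = integral of phi(x) * phi(x - R) dx, evaluated on the grid
    phi_shifted = (2.0 * a / np.pi) ** 0.25 * np.exp(-a * (x - R)**2)
    return np.trapz(phi * phi_shifted, x)

for R in [0.0, 1.0, 2.0, 4.0]:
    print(R, overlap(R))                               # analytic value is exp(-a*R**2/2)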
The interatomic overlap matrix elements α m , l {\displaystyle \alpha _{m,l}} should be rather small or negligible. If they are large, it is again an indication that the tight binding model is of limited value for some purposes. A large overlap is an indication of too short an interatomic distance, for example. In metals and transition metals, the broad s-band or sp-band can be fitted better to an existing band structure calculation by the introduction of next-nearest-neighbor matrix elements and overlap integrals, but fits like that do not yield a very useful model for the electronic wave function of a metal. Broad bands in dense materials are better described by a nearly free electron model. The tight binding model works particularly well in cases where the band width is small and the electrons are strongly localized, as in the case of d-bands and f-bands. The model also gives good results in the case of open crystal structures, like diamond or silicon, where the number of neighbors is small. The model can easily be combined with a nearly free electron model in a hybrid NFE-TB model. == Connection to Wannier functions == Bloch functions describe the electronic states in a periodic crystal lattice. Bloch functions can be represented as a Fourier series ψ m ( k , r ) = 1 N ∑ n a m ( R n , r ) e i k ⋅ R n , {\displaystyle \psi _{m}(\mathbf {k} ,\mathbf {r} )={\frac {1}{\sqrt {N}}}\sum _{n}{a_{m}(\mathbf {R} _{n},\mathbf {r} )}e^{i\mathbf {k} \cdot \mathbf {R} _{n}}\ ,} where R n {\displaystyle \mathbf {R} _{n}} denotes an atomic site in a periodic crystal lattice, k {\displaystyle \mathbf {k} } is the wave vector of the Bloch function, r {\displaystyle \mathbf {r} } is the electron position, m {\displaystyle m} is the band index, and the sum is over all N {\displaystyle N} atomic sites. The Bloch function is an exact eigensolution for the wave function of an electron in a periodic crystal potential corresponding to an energy E m ( k ) {\displaystyle E_{m}(\mathbf {k} )} , and is spread over the entire crystal volume. Using Fourier transform analysis, a spatially localized wave function for the m-th energy band can be constructed from multiple Bloch functions: a m ( R n , r ) = 1 N ∑ k e − i k ⋅ R n ψ m ( k , r ) = 1 N ∑ k e i k ⋅ ( r − R n ) u m ( k , r ) . {\displaystyle a_{m}(\mathbf {R} _{n},\mathbf {r} )={\frac {1}{\sqrt {N}}}\sum _{\mathbf {k} }{e^{-i\mathbf {k} \cdot \mathbf {R} _{n}}\psi _{m}(\mathbf {k} ,\mathbf {r} )}={\frac {1}{\sqrt {N}}}\sum _{\mathbf {k} }{e^{i\mathbf {k} \cdot (\mathbf {r} -\mathbf {R} _{n})}u_{m}(\mathbf {k} ,\mathbf {r} )}.} These real space wave functions a m ( R n , r ) {\displaystyle {a_{m}(\mathbf {R} _{n},\mathbf {r} )}} are called Wannier functions, and are fairly closely localized to the atomic site R n {\displaystyle \mathbf {R} _{n}} . Of course, if we have exact Wannier functions, the exact Bloch functions can be derived using the inverse Fourier transform. However, it is not easy to calculate either Bloch functions or Wannier functions directly. An approximate approach is necessary in the calculation of electronic structures of solids. If we consider the extreme case of isolated atoms, the Wannier function would become an isolated atomic orbital. That limit suggests the choice of an atomic wave function as an approximate form for the Wannier function, the so-called tight binding approximation. == Second quantization == Modern descriptions of electronic structure, like the t-J model and the Hubbard model, are based on the tight binding model. 
Tight binding can be understood by working under a second quantization formalism. Using the atomic orbital as a basis state, the second quantization Hamiltonian operator in the tight binding framework can be written as: H = − t ∑ ⟨ i , j ⟩ , σ ( c i , σ † c j , σ + h . c . ) {\displaystyle H=-t\sum _{\langle i,j\rangle ,\sigma }(c_{i,\sigma }^{\dagger }c_{j,\sigma }^{}+h.c.)} , c i σ † , c j σ {\displaystyle c_{i\sigma }^{\dagger },c_{j\sigma }} - creation and annihilation operators σ {\displaystyle \displaystyle \sigma } - spin polarization t {\displaystyle \displaystyle t} - hopping integral ⟨ i , j ⟩ {\displaystyle \displaystyle \langle i,j\rangle } - nearest neighbor index h . c . {\displaystyle \displaystyle h.c.} - the hermitian conjugate of the other term(s) Here, hopping integral t {\displaystyle \displaystyle t} corresponds to the transfer integral γ {\displaystyle \displaystyle \gamma } in tight binding model. Considering extreme cases of t → 0 {\displaystyle t\rightarrow 0} , it is impossible for an electron to hop into neighboring sites. This case is the isolated atomic system. If the hopping term is turned on ( t > 0 {\displaystyle \displaystyle t>0} ) electrons can stay in both sites lowering their kinetic energy. In the strongly correlated electron system, it is necessary to consider the electron-electron interaction. This term can be written in H e e = 1 2 ∑ n , m , σ ⟨ n 1 m 1 , n 2 m 2 | e 2 | r 1 − r 2 | | n 3 m 3 , n 4 m 4 ⟩ c n 1 m 1 σ 1 † c n 2 m 2 σ 2 † c n 4 m 4 σ 2 c n 3 m 3 σ 1 {\displaystyle \displaystyle H_{ee}={\frac {1}{2}}\sum _{n,m,\sigma }\langle n_{1}m_{1},n_{2}m_{2}|{\frac {e^{2}}{|r_{1}-r_{2}|}}|n_{3}m_{3},n_{4}m_{4}\rangle c_{n_{1}m_{1}\sigma _{1}}^{\dagger }c_{n_{2}m_{2}\sigma _{2}}^{\dagger }c_{n_{4}m_{4}\sigma _{2}}c_{n_{3}m_{3}\sigma _{1}}} This interaction Hamiltonian includes direct Coulomb interaction energy and exchange interaction energy between electrons. There are several novel physics induced from this electron-electron interaction energy, such as metal-insulator transitions (MIT), high-temperature superconductivity, and several quantum phase transitions. == Example: one-dimensional s-band == Here the tight binding model is illustrated with a s-band model for a string of atoms with a single s-orbital in a straight line with spacing a and σ bonds between atomic sites. To find approximate eigenstates of the Hamiltonian, we can use a linear combination of the atomic orbitals | k ⟩ = 1 N ∑ n = 1 N e i n k a | n ⟩ {\displaystyle |k\rangle ={\frac {1}{\sqrt {N}}}\sum _{n=1}^{N}e^{inka}|n\rangle } where N = total number of sites and k {\displaystyle k} is a real parameter with − π a ≦ k ≦ π a {\displaystyle -{\frac {\pi }{a}}\leqq k\leqq {\frac {\pi }{a}}} . (This wave function is normalized to unity by the leading factor 1/√N provided overlap of atomic wave functions is ignored.) Assuming only nearest neighbor overlap, the only non-zero matrix elements of the Hamiltonian can be expressed as ⟨ n | H | n ⟩ = E 0 = E i − U . {\displaystyle \langle n|H|n\rangle =E_{0}=E_{i}-U\ .} ⟨ n ± 1 | H | n ⟩ = − Δ {\displaystyle \langle n\pm 1|H|n\rangle =-\Delta \ } ⟨ n | n ⟩ = 1 ; {\displaystyle \langle n|n\rangle =1\ ;} ⟨ n ± 1 | n ⟩ = S . {\displaystyle \langle n\pm 1|n\rangle =S\ .} The energy Ei is the ionization energy corresponding to the chosen atomic orbital and U is the energy shift of the orbital as a result of the potential of neighboring atoms. 
The ⟨ n ± 1 | H | n ⟩ = − Δ {\displaystyle \langle n\pm 1|H|n\rangle =-\Delta } elements, which are the Slater and Koster interatomic matrix elements, are the bond energies E i , j {\displaystyle E_{i,j}} . In this one dimensional s-band model we only have σ {\displaystyle \sigma } -bonds between the s-orbitals with bond energy E s , s = V s s σ {\displaystyle E_{s,s}=V_{ss\sigma }} . The overlap between states on neighboring atoms is S. We can derive the energy of the state | k ⟩ {\displaystyle |k\rangle } using the above equation: H | k ⟩ = 1 N ∑ n e i n k a H | n ⟩ {\displaystyle H|k\rangle ={\frac {1}{\sqrt {N}}}\sum _{n}e^{inka}H|n\rangle } ⟨ k | H | k ⟩ = 1 N ∑ n , m e i ( n − m ) k a ⟨ m | H | n ⟩ {\displaystyle \langle k|H|k\rangle ={\frac {1}{N}}\sum _{n,\ m}e^{i(n-m)ka}\langle m|H|n\rangle } = 1 N ∑ n ⟨ n | H | n ⟩ + 1 N ∑ n ⟨ n − 1 | H | n ⟩ e + i k a + 1 N ∑ n ⟨ n + 1 | H | n ⟩ e − i k a {\displaystyle ={\frac {1}{N}}\sum _{n}\langle n|H|n\rangle +{\frac {1}{N}}\sum _{n}\langle n-1|H|n\rangle e^{+ika}+{\frac {1}{N}}\sum _{n}\langle n+1|H|n\rangle e^{-ika}} = E 0 − 2 Δ cos ⁡ ( k a ) , {\displaystyle =E_{0}-2\Delta \,\cos(ka)\ ,} where, for example, 1 N ∑ n ⟨ n | H | n ⟩ = E 0 1 N ∑ n 1 = E 0 , {\displaystyle {\frac {1}{N}}\sum _{n}\langle n|H|n\rangle =E_{0}{\frac {1}{N}}\sum _{n}1=E_{0}\ ,} and 1 N ∑ n ⟨ n − 1 | H | n ⟩ e + i k a = − Δ e i k a 1 N ∑ n 1 = − Δ e i k a . {\displaystyle {\frac {1}{N}}\sum _{n}\langle n-1|H|n\rangle e^{+ika}=-\Delta e^{ika}{\frac {1}{N}}\sum _{n}1=-\Delta e^{ika}\ .} 1 N ∑ n ⟨ n − 1 | n ⟩ e + i k a = S e i k a 1 N ∑ n 1 = S e i k a . {\displaystyle {\frac {1}{N}}\sum _{n}\langle n-1|n\rangle e^{+ika}=Se^{ika}{\frac {1}{N}}\sum _{n}1=Se^{ika}\ .} Thus the energy of this state | k ⟩ {\displaystyle |k\rangle } can be represented in the familiar form of the energy dispersion: E ( k ) = E 0 − 2 Δ cos ⁡ ( k a ) 1 + 2 S cos ⁡ ( k a ) {\displaystyle E(k)={\frac {E_{0}-2\Delta \,\cos(ka)}{1+2S\,\cos(ka)}}} . For k = 0 {\displaystyle k=0} the energy is E = ( E 0 − 2 Δ ) / ( 1 + 2 S ) {\displaystyle E=(E_{0}-2\Delta )/(1+2S)} and the state consists of a sum of all atomic orbitals. This state can be viewed as a chain of bonding orbitals. For k = π / ( 2 a ) {\displaystyle k=\pi /(2a)} the energy is E = E 0 {\displaystyle E=E_{0}} and the state consists of a sum of atomic orbitals which are a factor e i π / 2 {\displaystyle e^{i\pi /2}} out of phase. This state can be viewed as a chain of non-bonding orbitals. Finally for k = π / a {\displaystyle k=\pi /a} the energy is E = ( E 0 + 2 Δ ) / ( 1 − 2 S ) {\displaystyle E=(E_{0}+2\Delta )/(1-2S)} and the state consists of an alternating sum of atomic orbitals. This state can be viewed as a chain of anti-bonding orbitals. This example is readily extended to three dimensions, for example, to a body-centered cubic or face-centered cubic lattice by introducing the nearest neighbor vector locations in place of simply n a. Likewise, the method can be extended to multiple bands using multiple different atomic orbitals at each site. The general formulation above shows how these extensions can be accomplished. == Table of interatomic matrix elements == In 1954 J.C. Slater and G.F. Koster published, mainly for the calculation of transition metal d-bands, a table of interatomic matrix elements E i , j ( r → n , n ′ ) = ⟨ n , i | H | n ′ , j ⟩ {\displaystyle E_{i,j}({\vec {\mathbf {r} }}_{n,n'})=\langle n,i|H|n',j\rangle } which can also be derived from the cubic harmonic orbitals straightforwardly. 
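Before continuing with the Slater–Koster table, the one-dimensional s-band dispersion derived above, E(k) = (E0 − 2Δ cos ka)/(1 + 2S cos ka), can be checked numerically. The following Python sketch evaluates it at the three high-symmetry points discussed in the text; the numerical values of E0, Δ, S and a are arbitrary illustrative choices, not taken from the article.

import numpy as np

E0, Delta, S, a = -5.0, 1.0, 0.1, 1.0     # illustrative parameters only

def E(k):
    # E(k) = (E0 - 2*Delta*cos(ka)) / (1 + 2*S*cos(ka)), as derived above
    return (E0 - 2.0 * Delta * np.cos(k * a)) / (1.0 + 2.0 * S * np.cos(k * a))

for label, k in [("k = 0 (bonding)", 0.0),
                 ("k = pi/2a (non-bonding)", np.pi / (2 * a)),
                 ("k = pi/a (anti-bonding)", np.pi / a)]:
    print(label, E(k))
# The three values reproduce (E0 - 2*Delta)/(1 + 2*S), E0 and (E0 + 2*Delta)/(1 - 2*S),
# matching the limiting cases discussed in the text.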
The table expresses the matrix elements as functions of LCAO two-centre bond integrals between two cubic harmonic orbitals, i and j, on adjacent atoms. The bond integrals are for example the V s s σ {\displaystyle V_{ss\sigma }} , V p p π {\displaystyle V_{pp\pi }} and V d d δ {\displaystyle V_{dd\delta }} for sigma, pi and delta bonds (Notice that these integrals should also depend on the distance between the atoms, i.e. are a function of ( l , m , n ) {\displaystyle (l,m,n)} , even though it is not explicitly stated every time.). The interatomic vector is expressed as r → n , n ′ = ( r x , r y , r z ) = d ( l , m , n ) {\displaystyle {\vec {\mathbf {r} }}_{n,n'}=(r_{x},r_{y},r_{z})=d(l,m,n)} where d is the distance between the atoms and l, m and n are the direction cosines to the neighboring atom. E s , s = V s s σ {\displaystyle E_{s,s}=V_{ss\sigma }} E s , x = l V s p σ {\displaystyle E_{s,x}=lV_{sp\sigma }} E x , x = l 2 V p p σ + ( 1 − l 2 ) V p p π {\displaystyle E_{x,x}=l^{2}V_{pp\sigma }+(1-l^{2})V_{pp\pi }} E x , y = l m V p p σ − l m V p p π {\displaystyle E_{x,y}=lmV_{pp\sigma }-lmV_{pp\pi }} E x , z = l n V p p σ − l n V p p π {\displaystyle E_{x,z}=lnV_{pp\sigma }-lnV_{pp\pi }} E s , x y = 3 l m V s d σ {\displaystyle E_{s,xy}={\sqrt {3}}lmV_{sd\sigma }} E s , x 2 − y 2 = 3 2 ( l 2 − m 2 ) V s d σ {\displaystyle E_{s,x^{2}-y^{2}}={\frac {\sqrt {3}}{2}}(l^{2}-m^{2})V_{sd\sigma }} E s , 3 z 2 − r 2 = [ n 2 − ( l 2 + m 2 ) / 2 ] V s d σ {\displaystyle E_{s,3z^{2}-r^{2}}=[n^{2}-(l^{2}+m^{2})/2]V_{sd\sigma }} E x , x y = 3 l 2 m V p d σ + m ( 1 − 2 l 2 ) V p d π {\displaystyle E_{x,xy}={\sqrt {3}}l^{2}mV_{pd\sigma }+m(1-2l^{2})V_{pd\pi }} E x , y z = 3 l m n V p d σ − 2 l m n V p d π {\displaystyle E_{x,yz}={\sqrt {3}}lmnV_{pd\sigma }-2lmnV_{pd\pi }} E x , z x = 3 l 2 n V p d σ + n ( 1 − 2 l 2 ) V p d π {\displaystyle E_{x,zx}={\sqrt {3}}l^{2}nV_{pd\sigma }+n(1-2l^{2})V_{pd\pi }} E x , x 2 − y 2 = 3 2 l ( l 2 − m 2 ) V p d σ + l ( 1 − l 2 + m 2 ) V p d π {\displaystyle E_{x,x^{2}-y^{2}}={\frac {\sqrt {3}}{2}}l(l^{2}-m^{2})V_{pd\sigma }+l(1-l^{2}+m^{2})V_{pd\pi }} E y , x 2 − y 2 = 3 2 m ( l 2 − m 2 ) V p d σ − m ( 1 + l 2 − m 2 ) V p d π {\displaystyle E_{y,x^{2}-y^{2}}={\frac {\sqrt {3}}{2}}m(l^{2}-m^{2})V_{pd\sigma }-m(1+l^{2}-m^{2})V_{pd\pi }} E z , x 2 − y 2 = 3 2 n ( l 2 − m 2 ) V p d σ − n ( l 2 − m 2 ) V p d π {\displaystyle E_{z,x^{2}-y^{2}}={\frac {\sqrt {3}}{2}}n(l^{2}-m^{2})V_{pd\sigma }-n(l^{2}-m^{2})V_{pd\pi }} E x , 3 z 2 − r 2 = l [ n 2 − ( l 2 + m 2 ) / 2 ] V p d σ − 3 l n 2 V p d π {\displaystyle E_{x,3z^{2}-r^{2}}=l[n^{2}-(l^{2}+m^{2})/2]V_{pd\sigma }-{\sqrt {3}}ln^{2}V_{pd\pi }} E y , 3 z 2 − r 2 = m [ n 2 − ( l 2 + m 2 ) / 2 ] V p d σ − 3 m n 2 V p d π {\displaystyle E_{y,3z^{2}-r^{2}}=m[n^{2}-(l^{2}+m^{2})/2]V_{pd\sigma }-{\sqrt {3}}mn^{2}V_{pd\pi }} E z , 3 z 2 − r 2 = n [ n 2 − ( l 2 + m 2 ) / 2 ] V p d σ + 3 n ( l 2 + m 2 ) V p d π {\displaystyle E_{z,3z^{2}-r^{2}}=n[n^{2}-(l^{2}+m^{2})/2]V_{pd\sigma }+{\sqrt {3}}n(l^{2}+m^{2})V_{pd\pi }} E x y , x y = 3 l 2 m 2 V d d σ + ( l 2 + m 2 − 4 l 2 m 2 ) V d d π + ( n 2 + l 2 m 2 ) V d d δ {\displaystyle E_{xy,xy}=3l^{2}m^{2}V_{dd\sigma }+(l^{2}+m^{2}-4l^{2}m^{2})V_{dd\pi }+(n^{2}+l^{2}m^{2})V_{dd\delta }} E x y , y z = 3 l m 2 n V d d σ + l n ( 1 − 4 m 2 ) V d d π + l n ( m 2 − 1 ) V d d δ {\displaystyle E_{xy,yz}=3lm^{2}nV_{dd\sigma }+ln(1-4m^{2})V_{dd\pi }+ln(m^{2}-1)V_{dd\delta }} E x y , z x = 3 l 2 m n V d d σ + m n ( 1 − 4 l 2 ) V d d π + m n ( l 2 − 1 ) V d d δ {\displaystyle E_{xy,zx}=3l^{2}mnV_{dd\sigma 
}+mn(1-4l^{2})V_{dd\pi }+mn(l^{2}-1)V_{dd\delta }} E x y , x 2 − y 2 = 3 2 l m ( l 2 − m 2 ) V d d σ + 2 l m ( m 2 − l 2 ) V d d π + [ l m ( l 2 − m 2 ) / 2 ] V d d δ {\displaystyle E_{xy,x^{2}-y^{2}}={\frac {3}{2}}lm(l^{2}-m^{2})V_{dd\sigma }+2lm(m^{2}-l^{2})V_{dd\pi }+[lm(l^{2}-m^{2})/2]V_{dd\delta }} E y z , x 2 − y 2 = 3 2 m n ( l 2 − m 2 ) V d d σ − m n [ 1 + 2 ( l 2 − m 2 ) ] V d d π + m n [ 1 + ( l 2 − m 2 ) / 2 ] V d d δ {\displaystyle E_{yz,x^{2}-y^{2}}={\frac {3}{2}}mn(l^{2}-m^{2})V_{dd\sigma }-mn[1+2(l^{2}-m^{2})]V_{dd\pi }+mn[1+(l^{2}-m^{2})/2]V_{dd\delta }} E z x , x 2 − y 2 = 3 2 n l ( l 2 − m 2 ) V d d σ + n l [ 1 − 2 ( l 2 − m 2 ) ] V d d π − n l [ 1 − ( l 2 − m 2 ) / 2 ] V d d δ {\displaystyle E_{zx,x^{2}-y^{2}}={\frac {3}{2}}nl(l^{2}-m^{2})V_{dd\sigma }+nl[1-2(l^{2}-m^{2})]V_{dd\pi }-nl[1-(l^{2}-m^{2})/2]V_{dd\delta }} E x y , 3 z 2 − r 2 = 3 [ l m ( n 2 − ( l 2 + m 2 ) / 2 ) V d d σ − 2 l m n 2 V d d π + [ l m ( 1 + n 2 ) / 2 ] V d d δ ] {\displaystyle E_{xy,3z^{2}-r^{2}}={\sqrt {3}}\left[lm(n^{2}-(l^{2}+m^{2})/2)V_{dd\sigma }-2lmn^{2}V_{dd\pi }+[lm(1+n^{2})/2]V_{dd\delta }\right]} E y z , 3 z 2 − r 2 = 3 [ m n ( n 2 − ( l 2 + m 2 ) / 2 ) V d d σ + m n ( l 2 + m 2 − n 2 ) V d d π − [ m n ( l 2 + m 2 ) / 2 ] V d d δ ] {\displaystyle E_{yz,3z^{2}-r^{2}}={\sqrt {3}}\left[mn(n^{2}-(l^{2}+m^{2})/2)V_{dd\sigma }+mn(l^{2}+m^{2}-n^{2})V_{dd\pi }-[mn(l^{2}+m^{2})/2]V_{dd\delta }\right]} E z x , 3 z 2 − r 2 = 3 [ l n ( n 2 − ( l 2 + m 2 ) / 2 ) V d d σ + l n ( l 2 + m 2 − n 2 ) V d d π − [ l n ( l 2 + m 2 ) / 2 ] V d d δ ] {\displaystyle E_{zx,3z^{2}-r^{2}}={\sqrt {3}}\left[ln(n^{2}-(l^{2}+m^{2})/2)V_{dd\sigma }+ln(l^{2}+m^{2}-n^{2})V_{dd\pi }-[ln(l^{2}+m^{2})/2]V_{dd\delta }\right]} E x 2 − y 2 , x 2 − y 2 = 3 4 ( l 2 − m 2 ) 2 V d d σ + [ l 2 + m 2 − ( l 2 − m 2 ) 2 ] V d d π + [ n 2 + ( l 2 − m 2 ) 2 / 4 ] V d d δ {\displaystyle E_{x^{2}-y^{2},x^{2}-y^{2}}={\frac {3}{4}}(l^{2}-m^{2})^{2}V_{dd\sigma }+[l^{2}+m^{2}-(l^{2}-m^{2})^{2}]V_{dd\pi }+[n^{2}+(l^{2}-m^{2})^{2}/4]V_{dd\delta }} E x 2 − y 2 , 3 z 2 − r 2 = 3 [ ( l 2 − m 2 ) [ n 2 − ( l 2 + m 2 ) / 2 ] V d d σ / 2 + n 2 ( m 2 − l 2 ) V d d π + [ ( 1 + n 2 ) ( l 2 − m 2 ) / 4 ] V d d δ ] {\displaystyle E_{x^{2}-y^{2},3z^{2}-r^{2}}={\sqrt {3}}\left[(l^{2}-m^{2})[n^{2}-(l^{2}+m^{2})/2]V_{dd\sigma }/2+n^{2}(m^{2}-l^{2})V_{dd\pi }+[(1+n^{2})(l^{2}-m^{2})/4]V_{dd\delta }\right]} E 3 z 2 − r 2 , 3 z 2 − r 2 = [ n 2 − ( l 2 + m 2 ) / 2 ] 2 V d d σ + 3 n 2 ( l 2 + m 2 ) V d d π + 3 4 ( l 2 + m 2 ) 2 V d d δ {\displaystyle E_{3z^{2}-r^{2},3z^{2}-r^{2}}=[n^{2}-(l^{2}+m^{2})/2]^{2}V_{dd\sigma }+3n^{2}(l^{2}+m^{2})V_{dd\pi }+{\frac {3}{4}}(l^{2}+m^{2})^{2}V_{dd\delta }} Not all interatomic matrix elements are listed explicitly. Matrix elements that are not listed in this table can be constructed by permutation of indices and cosine directions of other matrix elements in the table. Note that swapping orbital indices is the same as a spatial inversion. According to the parity properties of spherical harmonics, Y M L ( − r ) = ( − 1 ) l Y M L ( r ) {\displaystyle Y_{M}^{L}(-\mathbf {r} )=(-1)^{l}Y_{M}^{L}(\mathbf {r} )} . The bond integrals are proportional to the integral of the product of two real spherical harmonics; the real spherical harmonics (e.g. the p x , p y , p z , d x y , ⋯ {\displaystyle p_{x},p_{y},p_{z},d_{xy},\cdots } functions) have the same parity properties as the complex spherical harmonics. Then the bond integrals transform under inversion (i.e. 
swapping orbitals) as V L ′ L M = ( − 1 ) L + L ′ V L L ′ M {\displaystyle V_{L'LM}=(-1)^{L+L'}V_{LL'M}} , with L , L ′ , M {\displaystyle L,~L',~M} the angular momenta and magnetic quantum number. For example, E x , s = − l V s p σ = − E s , x {\displaystyle E_{x,s}=-lV_{sp\sigma }=-E_{s,x}} and E y , x = E x , y {\displaystyle E_{y,x}=E_{x,y}} . == See also == == References == N. W. Ashcroft and N. D. Mermin, Solid State Physics (Thomson Learning, Toronto, 1976). Stephen Blundell Magnetism in Condensed Matter(Oxford, 2001). S.Maekawa et al. Physics of Transition Metal Oxides (Springer-Verlag Berlin Heidelberg, 2004). John Singleton Band Theory and Electronic Properties of Solids (Oxford, 2001). == Further reading == Walter Ashley Harrison (1989). Electronic Structure and the Properties of Solids. Dover Publications. ISBN 0-486-66021-4. N. W. Ashcroft and N. D. Mermin (1976). Solid State Physics. Toronto: Thomson Learning. Davies, John H. (1998). The physics of low-dimensional semiconductors: An introduction. Cambridge, United Kingdom: Cambridge University Press. ISBN 0-521-48491-X. Goringe, C M; Bowler, D R; Hernández, E (1997). "Tight-binding modelling of materials". Reports on Progress in Physics. 60 (12): 1447–1512. Bibcode:1997RPPh...60.1447G. doi:10.1088/0034-4885/60/12/001. S2CID 250846071. Slater, J. C.; Koster, G. F. (1954). "Simplified LCAO Method for the Periodic Potential Problem". Physical Review. 94 (6): 1498–1524. Bibcode:1954PhRv...94.1498S. doi:10.1103/PhysRev.94.1498. == External links == Crystal-field Theory, Tight-binding Method, and Jahn-Teller Effect in E. Pavarini, E. Koch, F. Anders, and M. Jarrell (eds.): Correlated Electrons: From Models to Materials, Jülich 2012, ISBN 978-3-89336-796-2 Tight-Binding Studio: A Technical Software Package to Find the Parameters of Tight-Binding Hamiltonian
Wikipedia/Tight_binding_model
In physics and probability theory, mean-field theory (MFT) or self-consistent field theory studies the behavior of high-dimensional random (stochastic) models by studying a simpler model that approximates the original by averaging over degrees of freedom (the number of values in the final calculation of a statistic that are free to vary). Such models consider many individual components that interact with each other. The main idea of MFT is to replace all interactions acting on any one body with an average or effective interaction, sometimes called a molecular field. This reduces any many-body problem to an effective one-body problem. The ease of solving MFT problems means that some insight into the behavior of the system can be obtained at a lower computational cost. MFT has since been applied to a wide range of fields outside of physics, including statistical inference, graphical models, neuroscience, artificial intelligence, epidemic models, queueing theory, computer-network performance and game theory, as in the quantal response equilibrium. == Origins == The idea first appeared in physics (statistical mechanics) in the work of Pierre Curie and Pierre Weiss to describe phase transitions. MFT has been used in the Bragg–Williams approximation, models on the Bethe lattice, Landau theory, the Curie–Weiss law for magnetic susceptibility, Flory–Huggins solution theory, and Scheutjens–Fleer theory. Systems with many (sometimes infinitely many) degrees of freedom are generally hard to solve exactly or compute in closed, analytic form, except for some simple cases (e.g. certain Gaussian random-field theories, the 1D Ising model). Often combinatorial problems arise that make things like computing the partition function of a system difficult. MFT is an approximation method that often makes the original problem solvable and open to calculation, and in some cases MFT may give very accurate approximations. In field theory, the Hamiltonian may be expanded in terms of the magnitude of fluctuations around the mean of the field. In this context, MFT can be viewed as the "zeroth-order" expansion of the Hamiltonian in fluctuations. Physically, this means that an MFT system has no fluctuations, but this coincides with the idea that one is replacing all interactions with a "mean field". Quite often, MFT provides a convenient launch point for studying higher-order fluctuations. For example, when computing the partition function, studying the combinatorics of the interaction terms in the Hamiltonian can sometimes at best produce perturbation results or Feynman diagrams that correct the mean-field approximation. == Validity == In general, dimensionality plays an active role in determining whether a mean-field approach will work for any particular problem. There is sometimes a critical dimension above which MFT is valid and below which it is not. Heuristically, many interactions are replaced in MFT by one effective interaction. If the field or particle exhibits many random interactions in the original system, they tend to cancel each other out, and the mean effective interaction and MFT will be more accurate. This is true in cases of high dimensionality, when the Hamiltonian includes long-range forces, or when the particles are extended (e.g. polymers). The Ginzburg criterion is the formal expression of how fluctuations render MFT a poor approximation, often depending upon the number of spatial dimensions in the system of interest. 
== Formal approach (Hamiltonian) == The formal basis for mean-field theory is the Bogoliubov inequality. This inequality states that the free energy of a system with Hamiltonian H = H 0 + Δ H {\displaystyle {\mathcal {H}}={\mathcal {H}}_{0}+\Delta {\mathcal {H}}} has the following upper bound: F ≤ F 0 = d e f ⟨ H ⟩ 0 − T S 0 , {\displaystyle F\leq F_{0}\ {\stackrel {\mathrm {def} }{=}}\ \langle {\mathcal {H}}\rangle _{0}-TS_{0},} where S 0 {\displaystyle S_{0}} is the entropy, and F {\displaystyle F} and F 0 {\displaystyle F_{0}} are Helmholtz free energies. The average is taken over the equilibrium ensemble of the reference system with Hamiltonian H 0 {\displaystyle {\mathcal {H}}_{0}} . In the special case that the reference Hamiltonian is that of a non-interacting system and can thus be written as H 0 = ∑ i = 1 N h i ( ξ i ) , {\displaystyle {\mathcal {H}}_{0}=\sum _{i=1}^{N}h_{i}(\xi _{i}),} where ξ i {\displaystyle \xi _{i}} are the degrees of freedom of the individual components of our statistical system (atoms, spins and so forth), one can consider sharpening the upper bound by minimising the right side of the inequality. The minimising reference system is then the "best" approximation to the true system using non-correlated degrees of freedom and is known as the mean field approximation. For the most common case that the target Hamiltonian contains only pairwise interactions, i.e., H = ∑ ( i , j ) ∈ P V i , j ( ξ i , ξ j ) , {\displaystyle {\mathcal {H}}=\sum _{(i,j)\in {\mathcal {P}}}V_{i,j}(\xi _{i},\xi _{j}),} where P {\displaystyle {\mathcal {P}}} is the set of pairs that interact, the minimising procedure can be carried out formally. Define Tr i ⁡ f ( ξ i ) {\displaystyle \operatorname {Tr} _{i}f(\xi _{i})} as the generalized sum of the observable f {\displaystyle f} over the degrees of freedom of the single component (sum for discrete variables, integrals for continuous ones). The approximating free energy is given by F 0 = Tr 1 , 2 , … , N ⁡ H ( ξ 1 , ξ 2 , … , ξ N ) P 0 ( N ) ( ξ 1 , ξ 2 , … , ξ N ) + k T Tr 1 , 2 , … , N ⁡ P 0 ( N ) ( ξ 1 , ξ 2 , … , ξ N ) log ⁡ P 0 ( N ) ( ξ 1 , ξ 2 , … , ξ N ) , {\displaystyle {\begin{aligned}F_{0}&=\operatorname {Tr} _{1,2,\ldots ,N}{\mathcal {H}}(\xi _{1},\xi _{2},\ldots ,\xi _{N})P_{0}^{(N)}(\xi _{1},\xi _{2},\ldots ,\xi _{N})\\&+kT\,\operatorname {Tr} _{1,2,\ldots ,N}P_{0}^{(N)}(\xi _{1},\xi _{2},\ldots ,\xi _{N})\log P_{0}^{(N)}(\xi _{1},\xi _{2},\ldots ,\xi _{N}),\end{aligned}}} where P 0 ( N ) ( ξ 1 , ξ 2 , … , ξ N ) {\displaystyle P_{0}^{(N)}(\xi _{1},\xi _{2},\dots ,\xi _{N})} is the probability to find the reference system in the state specified by the variables ( ξ 1 , ξ 2 , … , ξ N ) {\displaystyle (\xi _{1},\xi _{2},\dots ,\xi _{N})} . This probability is given by the normalized Boltzmann factor P 0 ( N ) ( ξ 1 , ξ 2 , … , ξ N ) = 1 Z 0 ( N ) e − β H 0 ( ξ 1 , ξ 2 , … , ξ N ) = ∏ i = 1 N 1 Z 0 e − β h i ( ξ i ) = d e f ∏ i = 1 N P 0 ( i ) ( ξ i ) , {\displaystyle {\begin{aligned}P_{0}^{(N)}(\xi _{1},\xi _{2},\ldots ,\xi _{N})&={\frac {1}{Z_{0}^{(N)}}}e^{-\beta {\mathcal {H}}_{0}(\xi _{1},\xi _{2},\ldots ,\xi _{N})}\\&=\prod _{i=1}^{N}{\frac {1}{Z_{0}}}e^{-\beta h_{i}(\xi _{i})}\ {\stackrel {\mathrm {def} }{=}}\ \prod _{i=1}^{N}P_{0}^{(i)}(\xi _{i}),\end{aligned}}} where Z 0 {\displaystyle Z_{0}} is the partition function. Thus F 0 = ∑ ( i , j ) ∈ P Tr i , j ⁡ V i , j ( ξ i , ξ j ) P 0 ( i ) ( ξ i ) P 0 ( j ) ( ξ j ) + k T ∑ i = 1 N Tr i ⁡ P 0 ( i ) ( ξ i ) log ⁡ P 0 ( i ) ( ξ i ) . 
{\displaystyle {\begin{aligned}F_{0}&=\sum _{(i,j)\in {\mathcal {P}}}\operatorname {Tr} _{i,j}V_{i,j}(\xi _{i},\xi _{j})P_{0}^{(i)}(\xi _{i})P_{0}^{(j)}(\xi _{j})\\&+kT\sum _{i=1}^{N}\operatorname {Tr} _{i}P_{0}^{(i)}(\xi _{i})\log P_{0}^{(i)}(\xi _{i}).\end{aligned}}} In order to minimise, we take the derivative with respect to the single-degree-of-freedom probabilities P 0 ( i ) {\displaystyle P_{0}^{(i)}} using a Lagrange multiplier to ensure proper normalization. The end result is the set of self-consistency equations P 0 ( i ) ( ξ i ) = 1 Z 0 e − β h i M F ( ξ i ) , i = 1 , 2 , … , N , {\displaystyle P_{0}^{(i)}(\xi _{i})={\frac {1}{Z_{0}}}e^{-\beta h_{i}^{MF}(\xi _{i})},\quad i=1,2,\ldots ,N,} where the mean field is given by h i MF ( ξ i ) = ∑ { j ∣ ( i , j ) ∈ P } Tr j ⁡ V i , j ( ξ i , ξ j ) P 0 ( j ) ( ξ j ) . {\displaystyle h_{i}^{\text{MF}}(\xi _{i})=\sum _{\{j\mid (i,j)\in {\mathcal {P}}\}}\operatorname {Tr} _{j}V_{i,j}(\xi _{i},\xi _{j})P_{0}^{(j)}(\xi _{j}).} == Applications == Mean field theory can be applied to a number of physical systems so as to study phenomena such as phase transitions. === Ising model === ==== Formal derivation ==== The Bogoliubov inequality, shown above, can be used to find the dynamics of a mean field model of the two-dimensional Ising lattice. A magnetisation function can be calculated from the resultant approximate free energy. The first step is choosing a more tractable approximation of the true Hamiltonian. Using a non-interacting or effective field Hamiltonian, − m ∑ i s i {\displaystyle -m\sum _{i}s_{i}} , the variational free energy is F V = F 0 + ⟨ ( − J ∑ s i s j − h ∑ s i ) − ( − m ∑ s i ) ⟩ 0 . {\displaystyle F_{V}=F_{0}+\left\langle \left(-J\sum s_{i}s_{j}-h\sum s_{i}\right)-\left(-m\sum s_{i}\right)\right\rangle _{0}.} By the Bogoliubov inequality, simplifying this quantity and calculating the magnetisation function that minimises the variational free energy yields the best approximation to the actual magnetisation. The minimiser is m = J ∑ ⟨ s j ⟩ 0 + h , {\displaystyle m=J\sum \langle s_{j}\rangle _{0}+h,} which is the ensemble average of spin. This simplifies to m = tanh ( z J β m ) + h . {\displaystyle m={\text{tanh}}(zJ\beta m)+h.} Equating the effective field felt by all spins to a mean spin value relates the variational approach to the suppression of fluctuations. The physical interpretation of the magnetisation function is then a field of mean values for individual spins. ==== Non-interacting spins approximation ==== Consider the Ising model on a d {\displaystyle d} -dimensional lattice. The Hamiltonian is given by H = − J ∑ ⟨ i , j ⟩ s i s j − h ∑ i s i , {\displaystyle H=-J\sum _{\langle i,j\rangle }s_{i}s_{j}-h\sum _{i}s_{i},} where the ∑ ⟨ i , j ⟩ {\displaystyle \sum _{\langle i,j\rangle }} indicates summation over the pair of nearest neighbors ⟨ i , j ⟩ {\displaystyle \langle i,j\rangle } , and s i , s j = ± 1 {\displaystyle s_{i},s_{j}=\pm 1} are neighboring Ising spins. Let us transform our spin variable by introducing the fluctuation from its mean value m i ≡ ⟨ s i ⟩ {\displaystyle m_{i}\equiv \langle s_{i}\rangle } . We may rewrite the Hamiltonian as H = − J ∑ ⟨ i , j ⟩ ( m i + δ s i ) ( m j + δ s j ) − h ∑ i s i , {\displaystyle H=-J\sum _{\langle i,j\rangle }(m_{i}+\delta s_{i})(m_{j}+\delta s_{j})-h\sum _{i}s_{i},} where we define δ s i ≡ s i − m i {\displaystyle \delta s_{i}\equiv s_{i}-m_{i}} ; this is the fluctuation of the spin. 
If we expand the right side, we obtain one term that is entirely dependent on the mean values of the spins and independent of the spin configurations. This is the trivial term, which does not affect the statistical properties of the system. The next term is the one involving the product of the mean value of the spin and the fluctuation value. Finally, the last term involves a product of two fluctuation values. The mean field approximation consists of neglecting this second-order fluctuation term: H ≈ H MF ≡ − J ∑ ⟨ i , j ⟩ ( m i m j + m i δ s j + m j δ s i ) − h ∑ i s i . {\displaystyle H\approx H^{\text{MF}}\equiv -J\sum _{\langle i,j\rangle }(m_{i}m_{j}+m_{i}\delta s_{j}+m_{j}\delta s_{i})-h\sum _{i}s_{i}.} These fluctuations are enhanced at low dimensions, making MFT a better approximation for high dimensions. Again, the summand can be re-expanded. In addition, we expect that the mean value of each spin is site-independent, since the Ising chain is translationally invariant. This yields H MF = − J ∑ ⟨ i , j ⟩ ( m 2 + 2 m ( s i − m ) ) − h ∑ i s i . {\displaystyle H^{\text{MF}}=-J\sum _{\langle i,j\rangle }{\big (}m^{2}+2m(s_{i}-m){\big )}-h\sum _{i}s_{i}.} The summation over neighboring spins can be rewritten as ∑ ⟨ i , j ⟩ = 1 2 ∑ i ∑ j ∈ n n ( i ) {\displaystyle \sum _{\langle i,j\rangle }={\frac {1}{2}}\sum _{i}\sum _{j\in nn(i)}} , where n n ( i ) {\displaystyle nn(i)} means "nearest neighbor of i {\displaystyle i} ", and the 1 / 2 {\displaystyle 1/2} prefactor avoids double counting, since each bond participates in two spins. Simplifying leads to the final expression H MF = J m 2 N z 2 − ( h + m J z ) ⏟ h eff. ∑ i s i , {\displaystyle H^{\text{MF}}={\frac {Jm^{2}Nz}{2}}-\underbrace {(h+mJz)} _{h^{\text{eff.}}}\sum _{i}s_{i},} where z {\displaystyle z} is the coordination number. At this point, the Ising Hamiltonian has been decoupled into a sum of one-body Hamiltonians with an effective mean field h eff. = h + J z m {\displaystyle h^{\text{eff.}}=h+Jzm} , which is the sum of the external field h {\displaystyle h} and of the mean field induced by the neighboring spins. It is worth noting that this mean field directly depends on the number of nearest neighbors and thus on the dimension of the system (for instance, for a hypercubic lattice of dimension d {\displaystyle d} , z = 2 d {\displaystyle z=2d} ). Substituting this Hamiltonian into the partition function and solving the effective 1D problem, we obtain Z = e − β J m 2 N z 2 [ 2 cosh ⁡ ( h + m J z k B T ) ] N , {\displaystyle Z=e^{-{\frac {\beta Jm^{2}Nz}{2}}}\left[2\cosh \left({\frac {h+mJz}{k_{\text{B}}T}}\right)\right]^{N},} where N {\displaystyle N} is the number of lattice sites. This is a closed and exact expression for the partition function of the system. We may obtain the free energy of the system and calculate critical exponents. In particular, we can obtain the magnetization m {\displaystyle m} as a function of h eff. {\displaystyle h^{\text{eff.}}} . We thus have two equations between m {\displaystyle m} and h eff. {\displaystyle h^{\text{eff.}}} , allowing us to determine m {\displaystyle m} as a function of temperature. This leads to the following observation: For temperatures greater than a certain value T c {\displaystyle T_{\text{c}}} , the only solution is m = 0 {\displaystyle m=0} . The system is paramagnetic. For T < T c {\displaystyle T<T_{\text{c}}} , there are two non-zero solutions: m = ± m 0 {\displaystyle m=\pm m_{0}} . The system is ferromagnetic. 
T c {\displaystyle T_{\text{c}}} is given by the following relation: T c = J z k B {\displaystyle T_{\text{c}}={\frac {Jz}{k_{B}}}} . This shows that MFT can account for the ferromagnetic phase transition. === Application to other systems === Similarly, MFT can be applied to other types of Hamiltonians, as in the following cases: To study the metal–superconductor transition. In this case, the analog of the magnetization is the superconducting gap Δ {\displaystyle \Delta } . The molecular field of a liquid crystal that emerges when the Laplacian of the director field is non-zero. To determine the optimal amino acid side chain packing given a fixed protein backbone in protein structure prediction (see Self-consistent mean field (biology)). To determine the elastic properties of a composite material. Variational minimisation like mean-field theory can also be used in statistical inference. == Extension to time-dependent mean fields == In mean field theory, the mean field appearing in the single-site problem is a time-independent scalar or vector quantity. However, this is not always the case: in a variant of mean field theory called dynamical mean field theory (DMFT), the mean field becomes a time-dependent quantity. For instance, DMFT can be applied to the Hubbard model to study the metal–Mott-insulator transition. == See also == Dynamical mean field theory Mean field game theory == References ==
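The self-consistency condition and critical temperature obtained above are easy to explore numerically. The following short Python sketch is an illustration added here rather than part of the original article: it solves the zero-field mean-field equation m = tanh(zJm/kBT) by fixed-point iteration, using units with kB = 1 and arbitrary example values for J and z (z = 4 corresponds to a square lattice).

```python
import numpy as np

def mean_field_magnetization(T, J=1.0, z=4, tol=1e-10, max_iter=10000):
    """Solve the zero-field self-consistency equation m = tanh(z*J*m / T)
    (units with k_B = 1) by fixed-point iteration."""
    m = 1.0  # start from the fully polarized state so a nonzero solution can be found
    for _ in range(max_iter):
        m_new = np.tanh(z * J * m / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

if __name__ == "__main__":
    J, z = 1.0, 4          # example couplings; z = 2d = 4 for a square lattice
    Tc = z * J             # mean-field critical temperature, Tc = zJ/k_B
    for T in (0.5 * Tc, 0.9 * Tc, 0.99 * Tc, 1.1 * Tc):
        m = mean_field_magnetization(T, J, z)
        print(f"T/Tc = {T / Tc:4.2f}   m = {m:.4f}")
```

Below Tc the iteration settles on a nonzero magnetization, while above Tc it collapses to m = 0, reproducing the mean-field phase transition discussed in the Ising-model section above.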
Wikipedia/Mean-field_approximation
In computational physics and chemistry, the Hartree–Fock (HF) method is a method of approximation for the determination of the wave function and the energy of a quantum many-body system in a stationary state. The method is named after Douglas Hartree and Vladimir Fock. The Hartree–Fock method often assumes that the exact N-body wave function of the system can be approximated by a single Slater determinant (in the case where the particles are fermions) or by a single permanent (in the case of bosons) of N spin-orbitals. By invoking the variational method, one can derive a set of N-coupled equations for the N spin orbitals. A solution of these equations yields the Hartree–Fock wave function and energy of the system. Hartree–Fock approximation is an instance of mean-field theory, where neglecting higher-order fluctuations in order parameter allows interaction terms to be replaced with quadratic terms, obtaining exactly solvable Hamiltonians. Especially in the older literature, the Hartree–Fock method is also called the self-consistent field method (SCF). In deriving what is now called the Hartree equation as an approximate solution of the Schrödinger equation, Hartree required the final field as computed from the charge distribution to be "self-consistent" with the assumed initial field. Thus, self-consistency was a requirement of the solution. The solutions to the non-linear Hartree–Fock equations also behave as if each particle is subjected to the mean field created by all other particles (see the Fock operator below), and hence the terminology continued. The equations are almost universally solved by means of an iterative method, although the fixed-point iteration algorithm does not always converge. This solution scheme is not the only one possible and is not an essential feature of the Hartree–Fock method. The Hartree–Fock method finds its typical application in the solution of the Schrödinger equation for atoms, molecules, nanostructures and solids but it has also found widespread use in nuclear physics. (See Hartree–Fock–Bogoliubov method for a discussion of its application in nuclear structure theory). In atomic structure theory, calculations may be for a spectrum with many excited energy levels, and consequently, the Hartree–Fock method for atoms assumes the wave function is a single configuration state function with well-defined quantum numbers and that the energy level is not necessarily the ground state. For both atoms and molecules, the Hartree–Fock solution is the central starting point for most methods that describe the many-electron system more accurately. The rest of this article will focus on applications in electronic structure theory suitable for molecules with the atom as a special case. The discussion here is only for the restricted Hartree–Fock method, where the atom or molecule is a closed-shell system with all orbitals (atomic or molecular) doubly occupied. Open-shell systems, where some of the electrons are not paired, can be dealt with by either the restricted open-shell or the unrestricted Hartree–Fock methods. == Brief history == === Early semi-empirical methods === The origin of the Hartree–Fock method dates back to the end of the 1920s, soon after the discovery of the Schrödinger equation in 1926. Douglas Hartree's methods were guided by some earlier, semi-empirical methods of the early 1920s (by E. Fues, R. B. Lindsay, and himself) set in the old quantum theory of Bohr. 
In the Bohr model of the atom, the energy of a state with principal quantum number n is given in atomic units as E = − 1 / n 2 {\displaystyle E=-1/n^{2}} . It was observed from atomic spectra that the energy levels of many-electron atoms are well described by applying a modified version of Bohr's formula. By introducing the quantum defect d as an empirical parameter, the energy levels of a generic atom were well approximated by the formula E = − 1 / ( n + d ) 2 {\displaystyle E=-1/(n+d)^{2}} , in the sense that one could reproduce fairly well the transition levels observed in the X-ray region (for example, see the empirical discussion and derivation in Moseley's law). The existence of a non-zero quantum defect was attributed to electron–electron repulsion, which clearly does not exist in the isolated hydrogen atom. This repulsion resulted in partial screening of the bare nuclear charge. These early researchers later introduced other potentials containing additional empirical parameters with the hope of better reproducing the experimental data. === Hartree method === In 1927, D. R. Hartree introduced a procedure, which he called the self-consistent field method, to calculate approximate wave functions and energies for atoms and ions. Hartree sought to do away with empirical parameters and solve the many-body time-independent Schrödinger equation from fundamental physical principles, i.e., ab initio. His first proposed method of solution became known as the Hartree method, or Hartree product. However, many of Hartree's contemporaries did not understand the physical reasoning behind the Hartree method: it appeared to many people to contain empirical elements, and its connection to the solution of the many-body Schrödinger equation was unclear. However, in 1928 J. C. Slater and J. A. Gaunt independently showed that the Hartree method could be placed on a sounder theoretical basis by applying the variational principle to an ansatz (trial wave function) written as a product of single-particle functions. In 1930, Slater and V. A. Fock independently pointed out that the Hartree method did not respect the principle of antisymmetry of the wave function. The Hartree method used the Pauli exclusion principle in its older formulation, forbidding the presence of two electrons in the same quantum state. However, this was shown to be fundamentally incomplete in its neglect of quantum statistics. === Hartree–Fock === A solution to the lack of antisymmetry in the Hartree method came when it was shown that a Slater determinant, a determinant of one-particle orbitals first used by Heisenberg and Dirac in 1926, trivially satisfies the antisymmetric property of the exact solution and hence is a suitable ansatz for applying the variational principle. The original Hartree method can then be viewed as an approximation to the Hartree–Fock method obtained by neglecting exchange. Fock's original method relied heavily on group theory and was too abstract for contemporary physicists to understand and implement. In 1935, Hartree reformulated the method to be more suitable for the purposes of calculation. The Hartree–Fock method, despite its physically more accurate picture, was little used until the advent of electronic computers in the 1950s due to its much greater computational demands compared to the early Hartree method and empirical models. Initially, both the Hartree method and the Hartree–Fock method were applied exclusively to atoms, where the spherical symmetry of the system allowed one to greatly simplify the problem.
These approximate methods were (and are) often used together with the central field approximation to impose the condition that electrons in the same shell have the same radial part and to restrict the variational solution to be a spin eigenfunction. Even so, calculating a solution by hand using the Hartree–Fock equations for a medium-sized atom was laborious; small molecules required computational resources far beyond what was available before 1950. == Hartree–Fock algorithm == The Hartree–Fock method is typically used to solve the time-independent Schrödinger equation for a multi-electron atom or molecule as described in the Born–Oppenheimer approximation. Since there are no known analytic solutions for many-electron systems (there are solutions for one-electron systems such as hydrogenic atoms and the diatomic hydrogen cation), the problem is solved numerically. Due to the nonlinearities introduced by the Hartree–Fock approximation, the equations are solved using a nonlinear method such as iteration, which gives rise to the name "self-consistent field method." === Approximations === The Hartree–Fock method makes five major simplifications to deal with this task: The Born–Oppenheimer approximation is inherently assumed. The full molecular wave function is actually a function of the coordinates of each of the nuclei, in addition to those of the electrons. Typically, relativistic effects are completely neglected. The momentum operator is assumed to be completely non-relativistic. The variational solution is assumed to be a linear combination of a finite number of basis functions, which are usually (but not always) chosen to be orthogonal. The finite basis set is assumed to be approximately complete. Each energy eigenfunction is assumed to be describable by a single Slater determinant, an antisymmetrized product of one-electron wave functions (i.e., orbitals). The mean-field approximation is implied. Effects arising from deviations from this assumption are neglected. These effects are often collectively used as a definition of the term electron correlation. However, the label "electron correlation", strictly speaking, encompasses both the Coulomb correlation and Fermi correlation, and the latter is an effect of electron exchange, which is fully accounted for in the Hartree–Fock method. Stated in this terminology, the method only neglects the Coulomb correlation. However, this is an important flaw, accounting for (among others) Hartree–Fock's inability to capture London dispersion. Relaxation of the last two approximations gives rise to many so-called post-Hartree–Fock methods. === Variational optimization of orbitals === The variational theorem states that, for a time-independent Hamiltonian operator, any trial wave function will have an energy expectation value that is greater than or equal to the energy of the true ground-state wave function corresponding to the given Hamiltonian. Because of this, the Hartree–Fock energy is an upper bound to the true ground-state energy of a given molecule. In the context of the Hartree–Fock method, the best possible solution is at the Hartree–Fock limit; i.e., the limit of the Hartree–Fock energy as the basis set approaches completeness. (The other is the full-CI limit, where the last two approximations of the Hartree–Fock theory as described above are completely undone. It is only when both limits are attained that the exact solution, up to the Born–Oppenheimer approximation, is obtained.) The Hartree–Fock energy is the minimal energy for a single Slater determinant.
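As a minimal numerical illustration of this variational bound (a sketch added here, not part of the original text), one can stand in for the Hamiltonian with a small Hermitian matrix and check that the energy expectation value of any normalized trial vector lies above the lowest eigenvalue; the random matrix below is only a placeholder, not a physical Hamiltonian.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 6))
H = 0.5 * (H + H.T)                      # a random symmetric matrix standing in for H

E0 = np.linalg.eigvalsh(H)[0]            # exact ground-state energy of this toy H

for _ in range(5):
    psi = rng.normal(size=6)
    psi /= np.linalg.norm(psi)           # normalized trial state
    E_trial = psi @ H @ psi              # <psi|H|psi>
    print(f"E_trial = {E_trial: .4f}   E0 = {E0: .4f}   E_trial >= E0: {E_trial >= E0}")
```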
The starting point for the Hartree–Fock method is a set of approximate one-electron wave functions known as spin-orbitals. For an atomic orbital calculation, these are typically the orbitals for a hydrogen-like atom (an atom with only one electron, but the appropriate nuclear charge). For a molecular orbital or crystalline calculation, the initial approximate one-electron wave functions are typically a linear combination of atomic orbitals (LCAO). These orbitals only account for the presence of other electrons in an average manner. In the Hartree–Fock method, the effect of the other electrons is accounted for in a mean-field theory context. The orbitals are optimized by requiring them to minimize the energy of the respective Slater determinant. The resultant variational conditions on the orbitals lead to a new one-electron operator, the Fock operator. At the minimum, the occupied orbitals are eigensolutions to the Fock operator via a unitary transformation between themselves. The Fock operator is an effective one-electron Hamiltonian operator that is the sum of two terms. The first is a sum of kinetic-energy operators for each electron, the internuclear repulsion energy, and a sum of nuclear–electronic Coulombic attraction terms. The second is a sum of Coulombic repulsion terms between electrons in a mean-field theory description: a net repulsion energy for each electron in the system, which is calculated by treating all of the other electrons within the molecule as a smooth distribution of negative charge. This is the major simplification inherent in the Hartree–Fock method and is equivalent to the fifth simplification in the above list. Since the Fock operator depends on the orbitals used to construct the corresponding Fock matrix, the eigenfunctions of the Fock operator are in turn new orbitals, which can be used to construct a new Fock operator. In this way, the Hartree–Fock orbitals are optimized iteratively until the change in total electronic energy falls below a predefined threshold. The result is a set of self-consistent one-electron orbitals, and the Hartree–Fock electronic wave function is then the Slater determinant constructed from these orbitals. Following the basic postulates of quantum mechanics, the Hartree–Fock wave function can then be used to compute any desired chemical or physical property within the framework of the Hartree–Fock method and the approximations employed.
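The iterative cycle just described can be sketched in a few lines of Python. The fragment below is only a schematic illustration of the self-consistent field loop, assuming an orthonormal basis (so the overlap matrix is the identity) and using synthetic, randomly generated one- and two-electron integrals as placeholders; it is not the machinery of any real quantum chemistry package, where the Roothaan–Hall generalized eigenvalue problem, a physically motivated initial guess, and convergence accelerators such as damping or DIIS would be used.

```python
import numpy as np

# Schematic closed-shell SCF loop.  hcore and eri are SYNTHETIC placeholder
# integrals (random numbers with the usual index symmetries), not those of a
# real molecule; an orthonormal basis is assumed, so the overlap matrix is I.
rng = np.random.default_rng(0)
nbf, nocc = 4, 1                           # 4 basis functions, 1 doubly occupied orbital

hcore = rng.normal(size=(nbf, nbf))
hcore = 0.5 * (hcore + hcore.T)            # symmetric core (one-electron) Hamiltonian

eri = rng.normal(size=(nbf,) * 4)          # toy two-electron integrals (pq|rs)
eri = eri + eri.transpose(1, 0, 2, 3)      # impose the permutational symmetries
eri = eri + eri.transpose(0, 1, 3, 2)
eri = eri + eri.transpose(2, 3, 0, 1)
eri *= 0.05                                # keep the two-electron part weak so plain iteration converges

def fock(D):
    """Fock matrix F = h + 2 J[D] - K[D] for a closed-shell density matrix D."""
    J = np.einsum('pqrs,rs->pq', eri, D)   # Coulomb term
    K = np.einsum('prqs,rs->pq', eri, D)   # exchange term
    return hcore + 2.0 * J - K

D, E_old = np.zeros((nbf, nbf)), 0.0       # D = 0 gives the core-Hamiltonian guess on the first pass
for it in range(50):
    F = fock(D)
    eps, C = np.linalg.eigh(F)             # solve F C = C eps (orthonormal basis)
    Cocc = C[:, :nocc]
    D = Cocc @ Cocc.T                      # density matrix built from the occupied orbitals
    E = np.einsum('pq,pq->', hcore + F, D) # closed-shell electronic energy for this density
    print(f"iteration {it:2d}   E = {E:.8f}")
    if abs(E - E_old) < 1e-9:              # stop once the energy is self-consistent
        break
    E_old = E
```

Each pass builds a Fock operator from the current orbitals, diagonalizes it to obtain new orbitals, and repeats until the energy stops changing, which is exactly the self-consistency loop described above.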
== Mathematical formulation == === Derivation === According to the Slater–Condon rules, the energy expectation value of the molecular electronic Hamiltonian H ^ e {\displaystyle {\hat {H}}^{e}} for a Slater determinant is E [ ψ H F ] = ⟨ ψ H F | H ^ e | ψ H F ⟩ = ∑ i = 1 N ∫ d x i ϕ i ∗ ( x i ) h ^ ( x i ) ϕ i ( x i ) + 1 2 ∑ i = 1 N ∑ j = 1 N ∫ d x i ∫ d x j ϕ i ∗ ( x i ) ϕ j ∗ ( x j ) 1 | x i − x j | ϕ i ( x i ) ϕ j ( x j ) − 1 2 ∑ i = 1 N ∑ j = 1 N ∫ d x i ∫ d x j ϕ i ∗ ( x i ) ϕ j ∗ ( x j ) 1 | x i − x j | ϕ i ( x j ) ϕ j ( x i ) {\textstyle {\begin{aligned}E[\psi ^{HF}]&=\left\langle \psi ^{HF}|{\hat {H}}^{e}|\psi ^{HF}\right\rangle \\&=\sum _{i=1}^{N}\int {\text{d}}\mathbf {x} _{i}\,\phi _{i}^{*}(\mathbf {x} _{i}){\hat {h}}(\mathbf {x} _{i})\phi _{i}(\mathbf {x} _{i})\\&+{\frac {1}{2}}\sum _{i=1}^{N}\sum _{j=1}^{N}\int \mathrm {d} \mathbf {x} _{i}\int {\text{d}}\mathbf {x} _{j}\phi _{i}^{*}(\mathbf {x} _{i})\phi _{j}^{*}(\mathbf {x} _{j}){\frac {1}{|\mathbf {x} _{i}-\mathbf {x} _{j}|}}\phi _{i}(\mathbf {x} _{i})\phi _{j}(\mathbf {x} _{j})\\&-{\frac {1}{2}}\sum _{i=1}^{N}\sum _{j=1}^{N}\int {\text{d}}\mathbf {x} _{i}\int {\text{d}}\mathbf {x} _{j}\phi _{i}^{*}(\mathbf {x} _{i})\phi _{j}^{*}(\mathbf {x} _{j}){\frac {1}{|\mathbf {x} _{i}-\mathbf {x} _{j}|}}\phi _{i}(\mathbf {x} _{j})\phi _{j}(\mathbf {x} _{i})\end{aligned}}} where h ^ {\displaystyle {\hat {h}}} is the one electron operator including electronic kinetic energy and electron-nucleus Coulombic interaction, and ψ H F = ψ ( x 1 , x 2 , … , x N ) = 1 N ! | ϕ 1 ( x 1 ) ϕ 2 ( x 1 ) ⋯ ϕ N ( x 1 ) ϕ 1 ( x 2 ) ϕ 2 ( x 2 ) ⋯ ϕ N ( x 2 ) ⋮ ⋮ ⋱ ⋮ ϕ 1 ( x N ) ϕ 2 ( x N ) ⋯ ϕ N ( x N ) | . {\displaystyle {\begin{aligned}\psi ^{HF}=\psi (\mathbf {x} _{1},\mathbf {x} _{2},\ldots ,\mathbf {x} _{N})={\frac {1}{\sqrt {N!}}}{\begin{vmatrix}\phi _{1}(\mathbf {x} _{1})&\phi _{2}(\mathbf {x} _{1})&\cdots &\phi _{N}(\mathbf {x} _{1})\\\phi _{1}(\mathbf {x} _{2})&\phi _{2}(\mathbf {x} _{2})&\cdots &\phi _{N}(\mathbf {x} _{2})\\\vdots &\vdots &\ddots &\vdots \\\phi _{1}(\mathbf {x} _{N})&\phi _{2}(\mathbf {x} _{N})&\cdots &\phi _{N}(\mathbf {x} _{N})\end{vmatrix}}.\end{aligned}}} To derive the Hartree-Fock equation we minimize the energy functional for N electrons with orthonormal constraints. δ E [ ϕ k ∗ ( x k ) ] = δ ⟨ ψ H F | H ^ e | ψ H F ⟩ − δ [ ∑ i = 1 N ∑ j = 1 N λ i j ( ⟨ ϕ i , ϕ j ⟩ − δ i j ) ] = ! 0 , {\displaystyle \delta E[\phi _{k}^{*}(x_{k})]=\delta \left\langle \psi ^{HF}|{\hat {H}}^{e}|\psi ^{HF}\right\rangle -\delta \left[\sum _{i=1}^{N}\sum _{j=1}^{N}\lambda _{ij}\left(\left\langle \phi _{i},\phi _{j}\right\rangle -\delta _{ij}\right)\right]{\stackrel {!}{=}}\,0,} We choose a basis set ϕ i ( x i ) {\displaystyle \phi _{i}(x_{i})} in which the Lagrange multiplier matrix λ i j {\displaystyle \lambda _{ij}} becomes diagonal, i.e. λ i j = ϵ i δ i j {\displaystyle \lambda _{ij}=\epsilon _{i}\delta _{ij}} . Performing the variation, we obtain δ E [ ϕ k ∗ ( x k ) ] = ∑ i = 1 N ∫ d x i h ^ ( x i ) ϕ i ( x i ) δ ( x i − x k ) δ i k + ∑ i = 1 N ∑ j = 1 N ∫ d x i ∫ d x j ϕ j ∗ ( x j ) 1 | x i − x j | ϕ i ( x i ) ϕ j ( x j ) δ ( x i − x k ) δ i k − ∑ i = 1 N ∑ j = 1 N ∫ d x i ∫ d x j ϕ j ∗ ( x j ) 1 | x i − x j | ϕ i ( x j ) ϕ j ( x i ) δ ( x i − x k ) δ i k − ∑ i = 1 N ϵ i ∫ d x i ϕ i ( x i ) δ ( x i − x k ) δ i k = h ^ ( x k ) ϕ k ( x k ) + ∑ j = 1 N ∫ d x j ϕ j ∗ ( x j ) 1 | x k − x j | ϕ k ( x k ) ϕ j ( x j ) − ∑ j = 1 N ∫ d x j ϕ j ∗ ( x j ) 1 | x k − x j | ϕ k ( x j ) ϕ j ( x k ) − ϵ k ϕ k ( x k ) = 0. 
{\displaystyle {\begin{aligned}\delta E[\phi _{k}^{*}(x_{k})]&=\sum _{i=1}^{N}\int {\text{d}}\mathbf {x} _{i}\,{\hat {h}}(\mathbf {x} _{i})\phi _{i}(\mathbf {x} _{i})\delta (\mathbf {x} _{i}-\mathbf {x} _{k})\delta _{ik}\\&+\sum _{i=1}^{N}\sum _{j=1}^{N}\int \mathrm {d} \mathbf {x} _{i}\int {\text{d}}\mathbf {x} _{j}\phi _{j}^{*}(\mathbf {x} _{j}){\frac {1}{|\mathbf {x} _{i}-\mathbf {x} _{j}|}}\phi _{i}(\mathbf {x} _{i})\phi _{j}(\mathbf {x} _{j})\delta (\mathbf {x} _{i}-\mathbf {x} _{k})\delta _{ik}\\&-\sum _{i=1}^{N}\sum _{j=1}^{N}\int {\text{d}}\mathbf {x} _{i}\int {\text{d}}\mathbf {x} _{j}\phi _{j}^{*}(\mathbf {x} _{j}){\frac {1}{|\mathbf {x} _{i}-\mathbf {x} _{j}|}}\phi _{i}(\mathbf {x} _{j})\phi _{j}(\mathbf {x} _{i})\delta (\mathbf {x} _{i}-\mathbf {x} _{k})\delta _{ik}\\&-\sum _{i=1}^{N}\epsilon _{i}\int {\text{d}}\mathbf {x} _{i}\,\phi _{i}(\mathbf {x} _{i})\delta (\mathbf {x} _{i}-\mathbf {x} _{k})\delta _{ik}\\&={\hat {h}}(\mathbf {x} _{k})\phi _{k}(\mathbf {x} _{k})\\&+\sum _{j=1}^{N}\int {\text{d}}\mathbf {x} _{j}\phi _{j}^{*}(\mathbf {x} _{j}){\frac {1}{|\mathbf {x} _{k}-\mathbf {x} _{j}|}}\phi _{k}(\mathbf {x} _{k})\phi _{j}(\mathbf {x} _{j})\\&-\sum _{j=1}^{N}\int {\text{d}}\mathbf {x} _{j}\phi _{j}^{*}(\mathbf {x} _{j}){\frac {1}{|\mathbf {x} _{k}-\mathbf {x} _{j}|}}\phi _{k}(\mathbf {x} _{j})\phi _{j}(\mathbf {x} _{k})\\&-\epsilon _{k}\phi _{k}(\mathbf {x} _{k})=0.\\\end{aligned}}} The factor 1/2 before the double integrals in the molecular Hamiltonian drops out due to symmetry and the product rule. We may define the Fock operator to rewrite the equation F ^ ( x k ) ϕ k ( x k ) ≡ [ h ^ ( x k ) + J ^ ( x k ) − K ^ ( x k ) ] ϕ k ( x k ) = ϵ k ϕ k ( x k ) , {\displaystyle {\hat {F}}(\mathbf {x} _{k})\phi _{k}(\mathbf {x} _{k})\equiv \left[{\hat {h}}(\mathbf {x} _{k})+{\hat {J}}(\mathbf {x} _{k})-{\hat {K}}(\mathbf {x} _{k})\right]\phi _{k}(\mathbf {x} _{k})=\epsilon _{k}\phi _{k}(\mathbf {x} _{k}),} where the Coulomb operator J ^ ( x k ) {\displaystyle {\hat {J}}(\mathbf {x} _{k})} and the exchange operator K ^ ( x k ) {\displaystyle {\hat {K}}(\mathbf {x} _{k})} are defined as follows J ^ ( x k ) ≡ ∑ j = 1 N ∫ d x j ϕ j ∗ ( x j ) ϕ j ( x j ) | x k − x j | = ∑ j = 1 N ∫ d x j ρ ( x j ) | x k − x j | , K ^ ( x k ) ϕ k ( x k ) ≡ ∑ j = 1 N ϕ j ( x k ) ∫ d x j ϕ j ∗ ( x j ) ϕ k ( x j ) | x k − x j | . {\displaystyle {\begin{aligned}{\hat {J}}(\mathbf {x_{k}} )&\equiv \sum _{j=1}^{N}\int \mathrm {d} \mathbf {x} _{j}{\frac {\phi _{j}^{*}(\mathbf {x} _{j})\phi _{j}(\mathbf {x} _{j})}{|\mathbf {x} _{k}-\mathbf {x} _{j}|}}=\sum _{j=1}^{N}\int \mathrm {d} \mathbf {x} _{j}{\frac {\rho (\mathbf {x} _{j})}{|\mathbf {x} _{k}-\mathbf {x} _{j}|}},\\{\hat {K}}(\mathbf {x_{k}} )\phi _{k}(\mathbf {x} _{k})&\equiv \sum _{j=1}^{N}\phi _{j}(\mathbf {x} _{k})\int {\text{d}}\mathbf {x} _{j}{\frac {\phi _{j}^{*}(\mathbf {x} _{j})\phi _{k}(\mathbf {x} _{j})}{|\mathbf {x} _{k}-\mathbf {x} _{j}|}}.\\\end{aligned}}} The exchange operator has no classical analogue and can only be defined as an integral operator. The solution ϕ k {\displaystyle \phi _{k}} and ϵ k {\displaystyle \epsilon _{k}} are called molecular orbital and orbital energy respectively. Although Hartree-Fock equation appears in the form of a eigenvalue problem, the Fock operator itself depends on ϕ {\displaystyle \phi } and must be solved by a different technique. === Total energy === The optimal total energy E H F {\displaystyle E_{HF}} can be written in terms of molecular orbitals. 
E H F = ∑ i = 1 N h ^ i i + ∑ i = 1 N ∑ j = 1 N / 2 [ 2 J ^ i j − K ^ i j ] + V nucl {\displaystyle E_{HF}=\sum _{i=1}^{N}{\hat {h}}_{ii}+\sum _{i=1}^{N}\sum _{j=1}^{N/2}[2{\hat {J}}_{ij}-{\hat {K}}_{ij}]+V_{\text{nucl}}} J ^ i j {\displaystyle {\hat {J}}_{ij}} and K ^ i j {\displaystyle {\hat {K}}_{ij}} are matrix elements of the Coulomb and exchange operators respectively, and V nucl {\displaystyle V_{\text{nucl}}} is the total electrostatic repulsion between all the nuclei in the molecule. The total energy is not equal to the sum of orbital energies. If the atom or molecule is closed shell, the total energy according to the Hartree-Fock method is E H F = 2 ∑ i = 1 N / 2 h ^ i i + ∑ i = 1 N / 2 ∑ j = 1 N / 2 [ 2 J ^ i j − K ^ i j ] + V nucl . {\displaystyle E_{HF}=2\sum _{i=1}^{N/2}{\hat {h}}_{ii}+\sum _{i=1}^{N/2}\sum _{j=1}^{N/2}[2{\hat {J}}_{ij}-{\hat {K}}_{ij}]+V_{\text{nucl}}.} === Linear combination of atomic orbitals === Typically, in modern Hartree–Fock calculations, the one-electron wave functions are approximated by a linear combination of atomic orbitals. These atomic orbitals are called Slater-type orbitals. Furthermore, it is very common for the "atomic orbitals" in use to actually be composed of a linear combination of one or more Gaussian-type orbitals, rather than Slater-type orbitals, in the interests of saving large amounts of computation time. Various basis sets are used in practice, most of which are composed of Gaussian functions. In some applications, an orthogonalization method such as the Gram–Schmidt process is performed in order to produce a set of orthogonal basis functions. This can in principle save computational time when the computer is solving the Roothaan–Hall equations by converting the overlap matrix effectively to an identity matrix. However, in most modern computer programs for molecular Hartree–Fock calculations this procedure is not followed due to the high numerical cost of orthogonalization and the advent of more efficient, often sparse, algorithms for solving the generalized eigenvalue problem, of which the Roothaan–Hall equations are an example. == Numerical stability == Numerical stability can be a problem with this procedure and there are various ways of combatting this instability. One of the most basic and generally applicable is called F-mixing or damping. With F-mixing, once a single-electron wave function is calculated, it is not used directly. Instead, some combination of that calculated wave function and the previous wave functions for that electron is used, the most common being a simple linear combination of the calculated and immediately preceding wave function. A clever dodge, employed by Hartree, for atomic calculations was to increase the nuclear charge, thus pulling all the electrons closer together. As the system stabilised, this was gradually reduced to the correct charge. In molecular calculations a similar approach is sometimes used by first calculating the wave function for a positive ion and then to use these orbitals as the starting point for the neutral molecule. Modern molecular Hartree–Fock computer programs use a variety of methods to ensure convergence of the Roothaan–Hall equations. == Weaknesses, extensions, and alternatives == Of the five simplifications outlined in the section "Hartree–Fock algorithm", the fifth is typically the most important. Neglect of electron correlation can lead to large deviations from experimental results. 
A number of approaches to this weakness, collectively called post-Hartree–Fock methods, have been devised to include electron correlation to the multi-electron wave function. One of these approaches, Møller–Plesset perturbation theory, treats correlation as a perturbation of the Fock operator. Others expand the true multi-electron wave function in terms of a linear combination of Slater determinants—such as multi-configurational self-consistent field, configuration interaction, quadratic configuration interaction, and complete active space SCF (CASSCF). Still others (such as variational quantum Monte Carlo) modify the Hartree–Fock wave function by multiplying it by a correlation function ("Jastrow" factor), a term which is explicitly a function of multiple electrons that cannot be decomposed into independent single-particle functions. An alternative to Hartree–Fock calculations used in some cases is density functional theory, which treats both exchange and correlation energies, albeit approximately. Indeed, it is common to use calculations that are a hybrid of the two methods—the popular B3LYP scheme is one such hybrid functional method. Another option is to use modern valence bond methods. == Software packages == For a list of software packages known to handle Hartree–Fock calculations, particularly for molecules and solids, see the list of quantum chemistry and solid state physics software. == See also == == References == == Sources == Levine, Ira N. (1991). Quantum Chemistry (4th ed.). Englewood Cliffs, New Jersey: Prentice Hall. pp. 455–544. ISBN 0-205-12770-3. Cramer, Christopher J. (2002). Essentials of Computational Chemistry. Chichester: John Wiley & Sons, Ltd. pp. 153–189. ISBN 0-471-48552-7. Szabo, A.; Ostlund, N. S. (1996). Modern Quantum Chemistry. Mineola, New York: Dover Publishing. ISBN 0-486-69186-1. == External links == Hartree, D. R. (January 1928). "The Wave Mechanics of an Atom with a Non-Coulomb Central Field. Part II. Some Results and Discussion". Mathematical Proceedings of the Cambridge Philosophical Society. 24: 111–132. doi:10.1017/S0305004100011920. An Introduction to Hartree-Fock Molecular Orbital Theory by C. David Sherrill (June 2000) Mean-Field Theory: Hartree-Fock and BCS in E. Pavarini, E. Koch, J. van den Brink, and G. Sawatzky: Quantum materials: Experiments and Theory, Jülich 2016, ISBN 978-3-95806-159-0
Wikipedia/Hartree–Fock_method
In solid-state physics, the free electron model is a quantum mechanical model for the behaviour of charge carriers in a metallic solid. It was developed in 1927, principally by Arnold Sommerfeld, who combined the classical Drude model with quantum mechanical Fermi–Dirac statistics and hence it is also known as the Drude–Sommerfeld model. Given its simplicity, it is surprisingly successful in explaining many experimental phenomena, especially the Wiedemann–Franz law which relates electrical conductivity and thermal conductivity; the temperature dependence of the electron heat capacity; the shape of the electronic density of states; the range of binding energy values; electrical conductivities; the Seebeck coefficient of the thermoelectric effect; thermal electron emission and field electron emission from bulk metals. The free electron model solved many of the inconsistencies related to the Drude model and gave insight into several other properties of metals. The free electron model considers that metals are composed of a quantum electron gas where ions play almost no role. The model can be very predictive when applied to alkali and noble metals. == Ideas and assumptions == In the free electron model four main assumptions are taken into account: Free electron approximation: The interaction between the ions and the valence electrons is mostly neglected, except in boundary conditions. The ions only keep the charge neutrality in the metal. Unlike in the Drude model, the ions are not necessarily the source of collisions. Independent electron approximation: The interactions between electrons are ignored. The electrostatic fields in metals are weak because of the screening effect. Relaxation-time approximation: There is some unknown scattering mechanism such that the electron probability of collision is inversely proportional to the relaxation time τ {\displaystyle \tau } , which represents the average time between collisions. The collisions do not depend on the electronic configuration. Pauli exclusion principle: Each quantum state of the system can only be occupied by a single electron. This restriction of available electron states is taken into account by Fermi–Dirac statistics (see also Fermi gas). Main predictions of the free-electron model are derived by the Sommerfeld expansion of the Fermi–Dirac occupancy for energies around the Fermi level. The name of the model comes from the first two assumptions, as each electron can be treated as free particle with a respective quadratic relation between energy and momentum. The crystal lattice is not explicitly taken into account in the free electron model, but a quantum-mechanical justification was given a year later (1928) by Bloch's theorem: an unbound electron moves in a periodic potential as a free electron in vacuum, except for the electron mass me becoming an effective mass m* which may deviate considerably from me (one can even use negative effective mass to describe conduction by electron holes). Effective masses can be derived from band structure computations that were not originally taken into account in the free electron model. == From the Drude model == Many physical properties follow directly from the Drude model, as some equations do not depend on the statistical distribution of the particles. Taking the classical velocity distribution of an ideal gas or the velocity distribution of a Fermi gas only changes the results related to the speed of the electrons. 
Mainly, the free electron model and the Drude model predict the same DC electrical conductivity σ for Ohm's law, that is J = σ E {\displaystyle \mathbf {J} =\sigma \mathbf {E} \quad } with σ = n e 2 τ m e , {\displaystyle \quad \sigma ={\frac {ne^{2}\tau }{m_{e}}},} where J {\displaystyle \mathbf {J} } is the current density, E {\displaystyle \mathbf {E} } is the external electric field, n {\displaystyle n} is the electronic density (number of electrons/volume), τ {\displaystyle \tau } is the mean free time and e {\displaystyle e} is the electron electric charge. Other quantities that remain the same under the free electron model as under Drude's are the AC susceptibility, the plasma frequency, the magnetoresistance, and the Hall coefficient related to the Hall effect. == Properties of an electron gas == Many properties of the free electron model follow directly from equations related to the Fermi gas, as the independent electron approximation leads to an ensemble of non-interacting electrons. For a three-dimensional electron gas we can define the Fermi energy as E F = ℏ 2 2 m e ( 3 π 2 n ) 2 3 , {\displaystyle E_{\rm {F}}={\frac {\hbar ^{2}}{2m_{e}}}\left(3\pi ^{2}n\right)^{\frac {2}{3}},} where ℏ {\displaystyle \hbar } is the reduced Planck constant. The Fermi energy defines the energy of the highest energy electron at zero temperature. For metals the Fermi energy is in the order of units of electronvolts above the free electron band minimum energy. === Density of states === The 3D density of states (number of energy states, per energy per volume) of a non-interacting electron gas is given by: g ( E ) = m e π 2 ℏ 3 2 m e E = 3 2 n E F E E F , {\displaystyle g(E)={\frac {m_{e}}{\pi ^{2}\hbar ^{3}}}{\sqrt {2m_{e}E}}={\frac {3}{2}}{\frac {n}{E_{\rm {F}}}}{\sqrt {\frac {E}{E_{\rm {F}}}}},} where E ≥ 0 {\textstyle E\geq 0} is the energy of a given electron. This formula takes into account the spin degeneracy but does not consider a possible energy shift due to the bottom of the conduction band. For 2D the density of states is constant and for 1D is inversely proportional to the square root of the electron energy. === Fermi level === The chemical potential μ {\displaystyle \mu } of electrons in a solid is also known as the Fermi level and, like the related Fermi energy, often denoted E F {\displaystyle E_{\rm {F}}} . The Sommerfeld expansion can be used to calculate the Fermi level ( T > 0 {\displaystyle T>0} ) at higher temperatures as: E F ( T ) = E F ( T = 0 ) [ 1 − π 2 12 ( T T F ) 2 − π 4 80 ( T T F ) 4 + ⋯ ] , {\displaystyle E_{\rm {F}}(T)=E_{\rm {F}}(T=0)\left[1-{\frac {\pi ^{2}}{12}}\left({\frac {T}{T_{\rm {F}}}}\right)^{2}-{\frac {\pi ^{4}}{80}}\left({\frac {T}{T_{\rm {F}}}}\right)^{4}+\cdots \right],} where T {\displaystyle T} is the temperature and we define T F = E F / k B {\textstyle T_{\rm {F}}=E_{\rm {F}}/k_{\rm {B}}} as the Fermi temperature ( k B {\displaystyle k_{\rm {B}}} is Boltzmann constant). The perturbative approach is justified as the Fermi temperature is usually of about 105 K for a metal, hence at room temperature or lower the Fermi energy E F ( T = 0 ) {\displaystyle E_{\rm {F}}(T=0)} and the chemical potential E F ( T > 0 ) {\displaystyle E_{\rm {F}}(T>0)} are practically equivalent. 
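To give a sense of the scales involved, the following short Python sketch (an added illustration, not part of the original text) evaluates the Fermi energy, Fermi temperature and density of states at the Fermi level from the expressions above; the conduction-electron density quoted for copper is an approximate textbook value used only as an example.

```python
import numpy as np

hbar = 1.054571817e-34    # reduced Planck constant, J s
m_e  = 9.1093837015e-31   # electron mass, kg
k_B  = 1.380649e-23       # Boltzmann constant, J/K
eV   = 1.602176634e-19    # 1 eV in joules

def fermi_energy(n):
    """Fermi energy (J) of a 3D free electron gas with number density n (m^-3)."""
    return hbar**2 / (2 * m_e) * (3 * np.pi**2 * n) ** (2.0 / 3.0)

n_cu = 8.5e28             # approximate conduction-electron density of copper, m^-3
E_F  = fermi_energy(n_cu)
T_F  = E_F / k_B
g_EF = 1.5 * n_cu / E_F   # density of states at the Fermi level, g(E_F) = 3n / (2 E_F)

print(f"E_F    ~ {E_F / eV:.2f} eV")          # about 7 eV
print(f"T_F    ~ {T_F:.2e} K")                # about 8e4 K, far above room temperature
print(f"g(E_F) ~ {g_EF:.2e} states per J per m^3")
```

The Fermi temperature of order 10^5 K is what justifies treating the chemical potential and the zero-temperature Fermi energy as practically equal at room temperature, as stated above.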
=== Compressibility of metals and degeneracy pressure === The total energy per unit volume (at T = 0 {\textstyle T=0} ) can also be calculated by integrating over the phase space of the system; we obtain u ( 0 ) = 3 5 n E F , {\displaystyle u(0)={\frac {3}{5}}nE_{\rm {F}},} which does not depend on temperature. Compare with the energy per electron of an ideal gas: 3 2 k B T {\textstyle {\frac {3}{2}}k_{\rm {B}}T} , which is null at zero temperature. For an ideal gas to have the same energy as the electron gas, the temperatures would need to be of the order of the Fermi temperature. Thermodynamically, this energy of the electron gas corresponds to a zero-temperature pressure given by P = − ( ∂ U ∂ V ) T , μ = 2 3 u ( 0 ) , {\displaystyle P=-\left({\frac {\partial U}{\partial V}}\right)_{T,\mu }={\frac {2}{3}}u(0),} where V {\textstyle V} is the volume and U ( T ) = u ( T ) V {\textstyle U(T)=u(T)V} is the total energy, the derivative being performed at constant temperature and chemical potential. This pressure is called the electron degeneracy pressure and does not come from repulsion or motion of the electrons but from the restriction that no more than two electrons (due to the two values of spin) can occupy the same energy level. This pressure defines the compressibility or bulk modulus of the metal B = − V ( ∂ P ∂ V ) T , μ = 5 3 P = 2 3 n E F . {\displaystyle B=-V\left({\frac {\partial P}{\partial V}}\right)_{T,\mu }={\frac {5}{3}}P={\frac {2}{3}}nE_{\rm {F}}.} This expression gives the right order of magnitude for the bulk modulus of alkali metals and noble metals, which shows that this pressure is as important as other effects inside the metal. For other metals the crystalline structure has to be taken into account. === Magnetic response === According to the Bohr–Van Leeuwen theorem, a classical system at thermodynamic equilibrium cannot have a magnetic response. The magnetic properties of matter in terms of a microscopic theory are purely quantum mechanical. For an electron gas, the total magnetic response is paramagnetic and its magnetic susceptibility is given by χ = 2 3 μ 0 μ B 2 g ( E F ) , {\displaystyle \chi ={\frac {2}{3}}\mu _{0}\mu _{\mathrm {B} }^{2}g(E_{\mathrm {F} }),} where μ 0 {\textstyle \mu _{0}} is the vacuum permeability and μ B {\textstyle \mu _{\rm {B}}} is the Bohr magneton. This value results from the competition of two contributions: a diamagnetic contribution (known as Landau's diamagnetism) coming from the orbital motion of the electrons in the presence of a magnetic field, and a paramagnetic contribution (Pauli's paramagnetism). The latter contribution is three times larger in absolute value than the diamagnetic contribution and comes from the electron spin, an intrinsic quantum degree of freedom that can take two discrete values and is associated with the electron magnetic moment. == Corrections to Drude's model == === Heat capacity === One open problem in solid-state physics before the arrival of quantum mechanics was to understand the heat capacity of metals. While most solids had a constant volumetric heat capacity given by the Dulong–Petit law of about 3 n k B {\displaystyle 3nk_{\rm {B}}} at large temperatures, it did not correctly predict their behavior at low temperatures. In the case of metals that are good conductors, it was expected that the electrons also contributed to the heat capacity.
The classical calculation using Drude's model, based on an ideal gas, provides a volumetric heat capacity given by c V Drude = 3 2 n k B {\displaystyle c_{V}^{\text{Drude}}={\frac {3}{2}}nk_{\rm {B}}} . If this were the case, the heat capacity of a metal should be 1.5 times that obtained from the Dulong–Petit law. Nevertheless, such a large additional contribution to the heat capacity of metals was never measured, raising suspicions about the argument above. By using Sommerfeld's expansion one can obtain corrections of the energy density at finite temperature and obtain the volumetric heat capacity of an electron gas, given by: c V = ( ∂ u ∂ T ) n = π 2 2 T T F n k B {\displaystyle c_{V}=\left({\frac {\partial u}{\partial T}}\right)_{n}={\frac {\pi ^{2}}{2}}{\frac {T}{T_{\rm {F}}}}nk_{\rm {B}}} , where the prefactor to n k B {\displaystyle nk_{B}} is considerably smaller than the 3/2 found in c V Drude {\textstyle c_{V}^{\text{Drude}}} , about 100 times smaller at room temperature and much smaller at lower T {\textstyle T} . Evidently, the electronic contribution alone does not predict the Dulong–Petit law, i.e. the observation that the heat capacity of a metal is still constant at high temperatures. The free electron model can be improved in this sense by adding the contribution of the vibrations of the crystal lattice. Two famous quantum corrections include the Einstein solid model and the more refined Debye model. With the addition of the latter, the volumetric heat capacity of a metal at low temperatures can be more precisely written in the form c V ≈ γ T + A T 3 {\displaystyle c_{V}\approx \gamma T+AT^{3}} , where γ {\displaystyle \gamma } and A {\displaystyle A} are constants related to the material. The linear term comes from the electronic contribution while the cubic term comes from the Debye model. At high temperatures this expression is no longer correct: the electronic heat capacity can be neglected, and the total heat capacity of the metal tends to a constant given by the Dulong–Petit law. === Mean free path === Notice that without the relaxation time approximation, there is no reason for the electrons to deflect their motion, as there are no interactions, so the mean free path should be infinite. The Drude model considered the mean free path of electrons to be close to the distance between ions in the material, implying the earlier conclusion that the diffusive motion of the electrons was due to collisions with the ions. The mean free paths in the free electron model are instead given by λ = v F τ {\textstyle \lambda =v_{\rm {F}}\tau } (where v F = 2 E F / m e {\textstyle v_{\rm {F}}={\sqrt {2E_{\rm {F}}/m_{e}}}} is the Fermi speed) and are on the order of hundreds of ångströms, at least one order of magnitude larger than any possible classical calculation. The mean free path is then not a result of electron–ion collisions but instead is related to imperfections in the material, either due to defects and impurities in the metal, or due to thermal fluctuations. === Thermal conductivity and thermopower === While Drude's model predicts a similar value for the electric conductivity to that of the free electron model, the models predict slightly different thermal conductivities.
The thermal conductivity is given by κ = c V τ ⟨ v 2 ⟩ / 3 {\displaystyle \kappa =c_{V}\tau \langle v^{2}\rangle /3} for free particles, which is proportional to the heat capacity and the mean free path, both of which depend on the model ( ⟨ v 2 ⟩ 1 / 2 {\displaystyle \langle v^{2}\rangle ^{1/2}} is the root mean square speed of the electrons, or the Fermi speed in the case of the free electron model). This implies that the ratio between thermal and electric conductivity is given by the Wiedemann–Franz law, κ σ = m e c V ⟨ v 2 ⟩ 3 n e 2 = L T {\displaystyle {\frac {\kappa }{\sigma }}={\frac {m_{\rm {e}}c_{V}\langle v^{2}\rangle }{3ne^{2}}}=LT} where L {\displaystyle L} is the Lorenz number, given by L = { 3 2 ( k B e ) 2 , Drude π 2 3 ( k B e ) 2 , free electron model. {\displaystyle L=\left\{{\begin{matrix}\displaystyle {\frac {3}{2}}\left({\frac {k_{\rm {B}}}{e}}\right)^{2}\;,&{\text{Drude}}\\\displaystyle {\frac {\pi ^{2}}{3}}\left({\frac {k_{\rm {B}}}{e}}\right)^{2}\;,&{\text{free electron model.}}\end{matrix}}\right.} The free electron model is closer to the measured value of L = 2.44 × 10 − 8 {\displaystyle L=2.44\times 10^{-8}} V2/K2, while the Drude prediction is off by about half the value, which is not a large difference. The close prediction of the Lorenz number in the Drude model was a result of the classical kinetic energy of the electron being about 100 times smaller than the quantum version, compensating for the large value of the classical heat capacity. However, Drude's model predicts the wrong order of magnitude for the Seebeck coefficient (thermopower), which relates the generation of a potential difference to the application of a temperature gradient across a sample, ∇ V = − S ∇ T {\displaystyle \nabla V=-S\nabla T} . This coefficient can be shown to be S = − c V / | n e | {\displaystyle S=-{c_{\rm {V}}}/{|ne|}} , which is just proportional to the heat capacity, so the Drude model predicts a constant that is a hundred times larger than the value of the free electron model. The latter gives a coefficient that is linear in temperature and provides much more accurate absolute values, on the order of a few tens of μV/K at room temperature. However, this model fails to predict the sign change of the thermopower in lithium and noble metals like gold and silver. == Inaccuracies and extensions == The free electron model presents several inadequacies that are contradicted by experimental observation. We list some inaccuracies below: Temperature dependence The free electron model presents several physical quantities that have the wrong temperature dependence, or no dependence at all, like the electrical conductivity. The thermal conductivity and specific heat are well predicted for alkali metals at low temperatures, but the model fails to predict the high-temperature behaviour coming from ion motion and phonon scattering. Hall effect and magnetoresistance The Hall coefficient has a constant value R H = − 1 / | n e | {\displaystyle R_{\mathrm {H} }=-1/|ne|} in Drude's model and in the free electron model. This value is independent of temperature and the strength of the magnetic field. The Hall coefficient is actually dependent on the band structure, and the difference with the model can be quite dramatic when studying elements like magnesium and aluminium that have a strong magnetic field dependence. The free electron model also predicts that the transverse magnetoresistance, the resistance in the direction of the current, does not depend on the strength of the field. In almost all cases it does.
Directional The conductivity of some metals can depend on the orientation of the sample with respect to the electric field. Sometimes even the electrical current is not parallel to the field. This possibility is not described because the model does not take into account the crystallinity of metals, i.e. the existence of a periodic lattice of ions. Diversity in the conductivity Not all materials are electrical conductors: some do not conduct electricity very well (insulators), and some can conduct when impurities are added, like semiconductors. Semimetals, with narrow conduction bands, also exist. This diversity is not predicted by the model and can only be explained by analysing the valence and conduction bands. Additionally, electrons are not the only charge carriers in a metal; electron vacancies or holes can be seen as quasiparticles carrying positive electric charge. Conduction of holes leads to an opposite sign for the Hall and Seebeck coefficients predicted by the model. Other inadequacies are present in the Wiedemann–Franz law at intermediate temperatures and in the frequency dependence of metals in the optical spectrum. More exact values for the electrical conductivity and the Wiedemann–Franz law can be obtained by softening the relaxation-time approximation and appealing to the Boltzmann transport equations. The exchange interaction is totally excluded from this model, and its inclusion can lead to other magnetic responses like ferromagnetism. An immediate continuation of the free electron model can be obtained by assuming the empty lattice approximation, which forms the basis of the band structure model known as the nearly free electron model. Adding repulsive interactions between electrons does not change the picture presented here very much. Lev Landau showed that a Fermi gas under repulsive interactions can be seen as a gas of equivalent quasiparticles that slightly modify the properties of the metal. Landau's model is now known as the Fermi liquid theory. More exotic phenomena like superconductivity, where interactions can be attractive, require a more refined theory. == See also == Bloch's theorem Electronic entropy Tight binding Two-dimensional electron gas Bose–Einstein statistics Fermi surface White dwarf Jellium == References == Citations References General Kittel, Charles (1972). Introduction to Solid State Physics. University of Michigan: Wiley & Sons. ISBN 978-0-471-49024-1. Ashcroft, Neil; Mermin, N. David (1976). Solid State Physics. New York: Holt, Rinehart and Winston. ISBN 978-0-03-083993-1. Sommerfeld, Arnold; Bethe, Hans (1933). Elektronentheorie der Metalle. Berlin Heidelberg: Springer Verlag. ISBN 978-3642950025. Ziman, J.M. (1972). Principles of the Theory of Solids (2nd ed.). Cambridge University Press. ISBN 0-521-29733-8.
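As a closing numerical illustration (added here, not part of the original article), the sketch below evaluates the two Lorenz numbers quoted in the thermal-conductivity section and the electronic heat-capacity coefficient γ of a free electron gas; the electron density and Fermi energy used for sodium are approximate textbook values included only as an example.

```python
import numpy as np

k_B = 1.380649e-23        # J/K
e   = 1.602176634e-19     # C
eV  = e                   # 1 eV in joules

# Lorenz numbers from the two expressions in the Wiedemann-Franz section
L_free  = (np.pi**2 / 3) * (k_B / e) ** 2
L_drude = 1.5 * (k_B / e) ** 2
print(f"Lorenz number, free electron model: {L_free:.3e} V^2/K^2")   # ~2.44e-8, close to experiment
print(f"Lorenz number, Drude model:         {L_drude:.3e} V^2/K^2")  # ~1.11e-8, off by about a factor of 2

# Electronic heat-capacity coefficient gamma, with c_V = (pi^2/2)(T/T_F) n k_B = gamma * T,
# using approximate free-electron values for sodium
n   = 2.65e28             # conduction-electron density of sodium, m^-3 (approximate)
E_F = 3.24 * eV           # free-electron Fermi energy of sodium (approximate)
T_F = E_F / k_B
gamma = (np.pi**2 / 2) * n * k_B / T_F
print(f"gamma ~ {gamma:.0f} J/(K^2 m^3), so c_V(300 K) ~ {gamma * 300:.0f} J/(K m^3)")
```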
Wikipedia/Free_electron_model
A material is a substance or mixture of substances that constitutes an object. Materials can be pure or impure, living or non-living matter. Materials can be classified on the basis of their physical and chemical properties, or on their geological origin or biological function. Materials science is the study of materials, their properties and their applications. Raw materials can be processed in different ways to influence their properties, by purification, shaping or the introduction of other materials. New materials can be produced from raw materials by synthesis. In industry, materials are inputs to manufacturing processes to produce products or more complex materials, and the nature and quantity of materials used may form part of the calculation for the cost of a product or delivery under contract, such as where contract costs are calculated on a "time and materials" basis. == Historical elements == Materials chart the history of humanity. The system of the three prehistoric ages (Stone Age, Bronze Age, Iron Age) was succeeded by historical ages: the steel age in the 19th century, the polymer age in the middle of the following century (the plastic age) and the silicon age in the second half of the 20th century. == Classification by use == Materials can be broadly categorized in terms of their use, for example: Building materials are used for construction Building insulation materials are used to retain heat within buildings Refractory materials are used for high-temperature applications Nuclear materials are used for nuclear power and weapons Aerospace materials are used in aircraft and other aerospace applications Biomaterials are used for applications interacting with living systems Material selection is a process to determine which material should be used for a given application. == Classification by structure == The relevant structure of materials has a different length scale depending on the material. The structure and composition of a material can be determined by microscopy or spectroscopy. === Microstructure === In engineering, materials can be categorised according to their microscopic structure: Plastics: a wide range of synthetic or semi-synthetic materials that use polymers as a main ingredient. Ceramics: non-metal, inorganic solids Glasses: amorphous solids Crystals: a solid material whose constituents (such as atoms, molecules, or ions) are arranged in a highly ordered microscopic structure, forming a crystal lattice that extends in all directions. Metals: pure or combined chemical elements with specific chemical bonding behavior Alloys: a mixture of chemical elements of which at least one is often a metal. Polymers: materials based on long carbon or silicon chains Hybrids: Combinations of multiple materials, for example composites. === Larger-scale structure === A metamaterial is any material engineered to have a property that is not found in naturally occurring materials, usually by combining several materials to form a composite and/or tuning the shape, geometry, size, orientation and arrangement to achieve the desired property. In foams and textiles, the chemical structure is less relevant to immediately observable properties than larger-scale material features: the holes in foams, and the weave in textiles. == Classification by properties == Materials can be compared and classified by their large-scale physical properties. === Mechanical properties === Mechanical properties determine how a material responds to applied forces.
Examples include: Stiffness Strength Toughness Hardness === Thermal properties === Materials may degrade or undergo changes of properties at different temperatures. Thermal properties also include the material's thermal conductivity and heat capacity, relating to the transfer and storage of thermal energy by the material. === Other properties === Materials can be compared and categorized by any quantitative measure of their behavior under various conditions. Notable additional properties include the optical, electrical, and magnetic behavior of materials. == See also == Hyle, the Greek term, relevant for the philosophy of matter Matter == References == == External links ==
Wikipedia/Materials
In condensed matter physics, the Laughlin wavefunction is an ansatz, proposed by Robert Laughlin for the ground state of a two-dimensional electron gas placed in a uniform background magnetic field in the presence of a uniform jellium background when the filling factor of the lowest Landau level is ν = 1 / n {\displaystyle \nu =1/n} where n {\displaystyle n} is an odd positive integer. It was constructed to explain the observation of the ν = 1 / 3 {\displaystyle \nu =1/3} fractional quantum Hall effect (FQHE), and predicted the existence of additional ν = 1 / n {\displaystyle \nu =1/n} states as well as quasiparticle excitations with fractional electric charge e / n {\displaystyle e/n} , both of which were later experimentally observed. Laughlin received one third of the Nobel Prize in Physics in 1998 for this discovery. == Context and analytical expression == If we ignore the jellium and mutual Coulomb repulsion between the electrons as a zeroth order approximation, we have an infinitely degenerate lowest Landau level (LLL) and with a filling factor of 1/n, we'd expect that all of the electrons would lie in the LLL. Turning on the interactions, we can make the approximation that all of the electrons lie in the LLL. If ψ 0 {\displaystyle \psi _{0}} is the single particle wavefunction of the LLL state with the lowest orbital angular momenta, then the Laughlin ansatz for the multiparticle wavefunction is ⟨ z 1 , z 2 , z 3 , … , z N ∣ n , N ⟩ = ψ n , N ( z 1 , z 2 , z 3 , … , z N ) = D [ ∏ N ⩾ i > j ⩾ 1 ( z i − z j ) n ] ∏ k = 1 N exp ⁡ ( − ∣ z k ∣ 2 ) {\displaystyle \langle z_{1},z_{2},z_{3},\ldots ,z_{N}\mid n,N\rangle =\psi _{n,N}(z_{1},z_{2},z_{3},\ldots ,z_{N})=D\left[\prod _{N\geqslant i>j\geqslant 1}\left(z_{i}-z_{j}\right)^{n}\right]\prod _{k=1}^{N}\exp \left(-\mid z_{k}\mid ^{2}\right)} where position is denoted by z = 1 2 l B ( x + i y ) {\displaystyle z={1 \over 2{\mathit {l}}_{B}}\left(x+iy\right)} in (Gaussian units) l B = ℏ c e B {\displaystyle {\mathit {l}}_{B}={\sqrt {\hbar c \over eB}}} and x {\displaystyle x} and y {\displaystyle y} are coordinates in the x–y plane. Here ℏ {\displaystyle \hbar } is the reduced Planck constant, e {\displaystyle e} is the electron charge, N {\displaystyle N} is the total number of particles, and B {\displaystyle B} is the magnetic field, which is perpendicular to the xy plane. The subscripts on z identify the particle. In order for the wavefunction to describe fermions, n must be an odd integer. This forces the wavefunction to be antisymmetric under particle interchange. The angular momentum for this state is n ℏ {\displaystyle n\hbar } . == True ground state in FQHE at ν = 1/3 == Consider n = 3 {\displaystyle n=3} above: resultant Ψ L ( z 1 , z 2 , z 3 , … , z N ) ∝ Π i < j ( z i − z j ) 3 {\displaystyle \Psi _{L}(z_{1},z_{2},z_{3},\ldots ,z_{N})\propto \Pi _{i<j}(z_{i}-z_{j})^{3}} is a trial wavefunction; it is not exact, but qualitatively, it reproduces many features of the exact solution and quantitatively, it has very high overlaps with the exact ground state for small systems. Assuming Coulomb repulsion between any two electrons, that ground state Ψ E D {\displaystyle \Psi _{ED}} can be determined using exact diagonalisation and the overlaps have been calculated to be close to one. Moreover, with short-range interaction (Haldane pseudopotentials for m > 3 {\displaystyle m>3} set to zero), Laughlin wavefunction becomes exact, i.e. ⟨ Ψ E D | Ψ L ⟩ = 1 {\displaystyle \langle \Psi _{ED}|\Psi _{L}\rangle =1} . 
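As a concrete illustration, the ansatz above can be evaluated directly for a handful of particles. The following minimal Python sketch omits the normalisation constant D, takes the coordinates z in the article's dimensionless units, and uses hypothetical particle positions; it also shows the antisymmetry under particle exchange required of fermions when n is odd:

import numpy as np

def laughlin_amplitude(z, n=3):
    """Unnormalized Laughlin amplitude (normalisation D omitted); z in dimensionless units."""
    z = np.asarray(z, dtype=complex)
    # Jastrow factor: product of (z_i - z_j)^n over pairs with i > j
    diff = z[:, None] - z[None, :]
    i_gt_j = np.tril_indices(len(z), k=-1)
    jastrow = np.prod(diff[i_gt_j] ** n)
    # Gaussian factor: exp(-sum_k |z_k|^2)
    gaussian = np.exp(-np.sum(np.abs(z) ** 2))
    return jastrow * gaussian

# Hypothetical coordinates for three electrons; swapping two of them
# flips the sign of the amplitude, as required for fermions when n is odd.
z = np.array([0.1 + 0.2j, -0.3 + 0.1j, 0.4 - 0.5j])
print(laughlin_amplitude(z, n=3))
print(laughlin_amplitude(z[[1, 0, 2]], n=3))  # equal magnitude, opposite sign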
== Energy of interaction for two particles == The Laughlin wavefunction is the multiparticle wavefunction for quasiparticles. The expectation value of the interaction energy for a pair of quasiparticles is ⟨ V ⟩ = ⟨ n , N ∣ V ∣ n , N ⟩ , N = 2 {\displaystyle \langle V\rangle =\langle n,N\mid V\mid n,N\rangle ,\;\;\;N=2} where the screened potential is (see Static forces and virtual-particle exchange § Coulomb potential between two current loops embedded in a magnetic field) V ( r 12 ) = ( 2 e 2 L B ) ∫ 0 ∞ k d k k 2 + k B 2 r B 2 M ( l + 1 , 1 , − k 2 4 ) M ( l ′ + 1 , 1 , − k 2 4 ) J 0 ( k r 12 r B ) {\displaystyle V\left(r_{12}\right)=\left({2e^{2} \over L_{B}}\right)\int _{0}^{\infty }{{k\;dk\;} \over k^{2}+k_{B}^{2}r_{B}^{2}}\;M\left({\mathit {l}}+1,1,-{k^{2} \over 4}\right)\;M\left({\mathit {l}}^{\prime }+1,1,-{k^{2} \over 4}\right)\;{\mathcal {J}}_{0}\left(k{r_{12} \over r_{B}}\right)} where M {\displaystyle M} is a confluent hypergeometric function and J 0 {\displaystyle {\mathcal {J}}_{0}} is a Bessel function of the first kind. Here, r 12 {\displaystyle r_{12}} is the distance between the centers of two current loops, e {\displaystyle e} is the magnitude of the electron charge, r B = 2 l B {\displaystyle r_{B}={\sqrt {2}}{\mathit {l}}_{B}} is the quantum version of the Larmor radius, and L B {\displaystyle L_{B}} is the thickness of the electron gas in the direction of the magnetic field. The angular momenta of the two individual current loops are l ℏ {\displaystyle {\mathit {l}}\hbar } and l ′ ℏ {\displaystyle {\mathit {l}}^{\prime }\hbar } where l + l ′ = n {\displaystyle {\mathit {l}}+{\mathit {l}}^{\prime }=n} . The inverse screening length is given by (Gaussian units) k B 2 = 4 π e 2 ℏ ω c A L B {\displaystyle k_{B}^{2}={4\pi e^{2} \over \hbar \omega _{c}AL_{B}}} where ω c {\displaystyle \omega _{c}} is the cyclotron frequency, and A {\displaystyle A} is the area of the electron gas in the xy plane. The interaction energy evaluates to: To obtain this result we have made the change of integration variables u 12 = z 1 − z 2 2 {\displaystyle u_{12}={z_{1}-z_{2} \over {\sqrt {2}}}} and v 12 = z 1 + z 2 2 {\displaystyle v_{12}={z_{1}+z_{2} \over {\sqrt {2}}}} and noted (see Common integrals in quantum field theory) 1 ( 2 π ) 2 2 2 n n ! ∫ d 2 z 1 d 2 z 2 ∣ z 1 − z 2 ∣ 2 n exp ⁡ [ − 2 ( ∣ z 1 ∣ 2 + ∣ z 2 ∣ 2 ) ] J 0 ( 2 k ∣ z 1 − z 2 ∣ ) = {\displaystyle {1 \over \left(2\pi \right)^{2}\;2^{2n}\;n!}\int d^{2}z_{1}\;d^{2}z_{2}\;\mid z_{1}-z_{2}\mid ^{2n}\;\exp \left[-2\left(\mid z_{1}\mid ^{2}+\mid z_{2}\mid ^{2}\right)\right]\;{\mathcal {J}}_{0}\left({\sqrt {2}}\;{k\mid z_{1}-z_{2}\mid }\right)=} 1 ( 2 π ) 2 2 n n ! ∫ d 2 u 12 d 2 v 12 ∣ u 12 ∣ 2 n exp ⁡ [ − 2 ( ∣ u 12 ∣ 2 + ∣ v 12 ∣ 2 ) ] J 0 ( 2 k ∣ u 12 ∣ ) = {\displaystyle {1 \over \left(2\pi \right)^{2}\;2^{n}\;n!}\int d^{2}u_{12}\;d^{2}v_{12}\;\mid u_{12}\mid ^{2n}\;\exp \left[-2\left(\mid u_{12}\mid ^{2}+\mid v_{12}\mid ^{2}\right)\right]\;{\mathcal {J}}_{0}\left({2}k\mid u_{12}\mid \right)=} M ( n + 1 , 1 , − k 2 2 ) . {\displaystyle M\left(n+1,1,-{k^{2} \over 2}\right).} The interaction energy has minima for (Figure 1) l n = 1 3 , 2 5 , 3 7 , etc., {\displaystyle {{\mathit {l}} \over n}={1 \over 3},{2 \over 5},{3 \over 7},{\mbox{etc.,}}} and l n = 2 3 , 3 5 , 4 7 , etc. {\displaystyle {{\mathit {l}} \over n}={2 \over 3},{3 \over 5},{4 \over 7},{\mbox{etc.}}} For these values of the ratio of angular momenta, the energy is plotted in Figure 2 as a function of n {\displaystyle n} . 
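The screened potential above lends itself to direct numerical evaluation. The following Python/SciPy sketch is only illustrative: the prefactor 2e²/L_B, the product k_B r_B and the distance r12/r_B are set to hypothetical values, and the integral is truncated at a finite upper limit where the integrand has decayed.

import numpy as np
from scipy.integrate import quad
from scipy.special import hyp1f1, j0

def screened_potential(r12_over_rB, l, lprime, kB_rB=1.0, prefactor=1.0, kmax=50.0):
    """Numerical value of V(r12) from the integral above; prefactor stands for 2 e^2 / L_B."""
    def integrand(k):
        return (k / (k**2 + kB_rB**2)
                * hyp1f1(l + 1, 1, -k**2 / 4)        # M(l+1, 1, -k^2/4)
                * hyp1f1(lprime + 1, 1, -k**2 / 4)   # M(l'+1, 1, -k^2/4)
                * j0(k * r12_over_rB))               # Bessel function J_0
    value, _ = quad(integrand, 0.0, kmax, limit=200)
    return prefactor * value

# Example with l + l' = n = 3 and hypothetical screening and separation
print(screened_potential(r12_over_rB=2.0, l=1, lprime=2))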
== References == == See also == Landau level Fractional quantum Hall effect Coulomb potential between two current loops embedded in a magnetic field
Wikipedia/Laughlin_wavefunction
Mesoscopic physics is a subdiscipline of condensed matter physics that deals with materials of an intermediate size. These materials range in size between the nanoscale of a quantity of atoms (such as a molecule) and materials measuring micrometres. The lower limit can also be defined as being the size of individual atoms. At the macroscopic scale are bulk materials. Both mesoscopic and macroscopic objects contain many atoms. Whereas average properties derived from constituent materials describe macroscopic objects, as they usually obey the laws of classical mechanics, a mesoscopic object, by contrast, is affected by thermal fluctuations around the average, and its electronic behavior may require modeling at the level of quantum mechanics. A macroscopic electronic device, when scaled down to a meso-size, starts revealing quantum mechanical properties. For example, at the macroscopic level the conductance of a wire increases continuously with its diameter. However, at the mesoscopic level, the wire's conductance is quantized: the increases occur in discrete, or individual, whole steps. During research, mesoscopic devices are constructed, measured and observed experimentally and theoretically in order to advance understanding of the physics of insulators, semiconductors, metals, and superconductors. The applied science of mesoscopic physics deals with the potential of building nanodevices. Mesoscopic physics also addresses fundamental practical problems which occur when a macroscopic object is miniaturized, as with the miniaturization of transistors in semiconductor electronics. The mechanical, chemical, and electronic properties of materials change as their size approaches the nanoscale, where the percentage of atoms at the surface of the material becomes significant. For bulk materials larger than one micrometre, the percentage of atoms at the surface is insignificant in relation to the number of atoms in the entire material. The subdiscipline has dealt primarily with artificial structures of metal or semiconducting material which have been fabricated by the techniques employed for producing microelectronic circuits. There is no rigid definition for mesoscopic physics, but the systems studied are normally in the range of 100 nm (the size of a typical virus) to 1000 nm (the size of a typical bacterium): 100 nanometers is the approximate upper limit for a nanoparticle. Thus, mesoscopic physics has a close connection to the fields of nanofabrication and nanotechnology. Devices used in nanotechnology are examples of mesoscopic systems. Three categories of new electronic phenomena in such systems are interference effects, quantum confinement effects and charging effects. == Quantum confinement effects == Quantum confinement effects describe electrons in terms of energy levels, potential wells, valence bands, conduction bands, and electron energy band gaps. Electrons in bulk dielectric materials (larger than 10 nm) can be described by energy bands or electron energy levels. Electrons exist at different energy levels or bands. In bulk materials these energy levels are described as continuous because the difference in energy is negligible. As electrons stabilize at various energy levels, most vibrate in valence bands below a forbidden energy level, named the band gap. This region is an energy range in which no electron states exist. A smaller number have energy levels above the forbidden gap, and this is the conduction band.
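The crossover from effectively continuous to visibly discrete levels can be illustrated with an elementary particle-in-a-box estimate. The Python sketch below is only a rough idealisation (an infinite one-dimensional well with hypothetical widths, not a model of any real device), but it shows how the level spacing grows as the confining dimension shrinks from a micrometre to the nanoscale:

import scipy.constants as const

def level_spacing_eV(width_m, n=1):
    """Gap E_{n+1} - E_n for an electron in an idealised infinite 1D square well."""
    E1 = const.h**2 / (8 * const.m_e * width_m**2)   # ground-state energy in joules
    return (2 * n + 1) * E1 / const.e                # spacing converted to eV

for width in (10e-9, 1e-6):   # 10 nm (mesoscopic) versus 1 micrometre (effectively bulk)
    print(f"L = {width*1e9:7.1f} nm  ->  E2 - E1 = {level_spacing_eV(width):.2e} eV")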
The quantum confinement effect can be observed once the diameter of the particle is of the same magnitude as the wavelength of the electron's wave function. When materials are this small, their electronic and optical properties deviate substantially from those of bulk materials. As the material is miniaturized towards nano-scale the confining dimension naturally decreases. The characteristics are no longer averaged by bulk, and hence continuous, but are at the level of quanta and thus discrete. In other words, the energy spectrum becomes discrete, measured as quanta, rather than continuous as in bulk materials. As a result, the bandgap asserts itself: there is a small and finite separation between energy levels. This situation of discrete energy levels is called quantum confinement. In addition, quantum confinement effects consist of isolated islands of electrons that may be formed at the patterned interface between two different semiconducting materials. The electrons typically are confined to disk-shaped regions termed quantum dots. The confinement of the electrons in these systems changes their interaction with electromagnetic radiation significantly, as noted above. Because the electron energy levels of quantum dots are discrete rather than continuous, the addition or subtraction of just a few atoms to the quantum dot has the effect of altering the boundaries of the bandgap. Changing the geometry of the surface of the quantum dot also changes the bandgap energy, owing again to the small size of the dot, and the effects of quantum confinement. == Interference effects == In the mesoscopic regime, scattering from defects – such as impurities – induces interference effects which modulate the flow of electrons. The experimental signature of mesoscopic interference effects is the appearance of reproducible fluctuations in physical quantities. For example, the conductance of a given specimen oscillates in an apparently random manner as a function of fluctuations in experimental parameters. However, the same pattern may be retraced if the experimental parameters are cycled back to their original values; in fact, the patterns observed are reproducible over a period of days. These are known as universal conductance fluctuations. == Time-resolved mesoscopic dynamics == Time-resolved experiments in mesoscopic dynamics: the observation and study, at nanoscales, of condensed phase dynamics such as crack formation in solids, phase separation, and rapid fluctuations in the liquid state or in biologically relevant environments; and the observation and study, at nanoscales, of the ultrafast dynamics of non-crystalline materials. == Related == == References == == External links == Beenakker, Carlo (1995). "Chaos in Quantum Billiards" (PDF). Universiteit Leiden. Retrieved 14 June 2018. Harmans, C. (2003). "Mesoscopic physics: an introduction" (PDF). OpenCourseWare TU Delft. Retrieved 14 June 2018. Jalabert, Rodolfo A. (2016). "Mesoscopic transport and quantum chaos". Scholarpedia. 11 (1): 30946. arXiv:1601.02237. Bibcode:2016SchpJ..1130946J. doi:10.4249/scholarpedia.30946. S2CID 26633032.
Wikipedia/Mesoscopic_physics
A composite or composite material (also composition material) is a material which is produced from two or more constituent materials. These constituent materials have notably dissimilar chemical or physical properties and are merged to create a material with properties unlike the individual elements. Within the finished structure, the individual elements remain separate and distinct, distinguishing composites from mixtures and solid solutions. Composite materials with more than one distinct layer are called composite laminates. Typical engineered composite materials are made up of a binding agent forming the matrix and a filler material (particulates or fibres) giving substance, e.g.: Concrete, reinforced concrete and masonry with cement, lime or mortar (which is itself a composite material) as a binder Composite wood such as glulam and plywood with wood glue as a binder Reinforced plastics, such as fiberglass and fibre-reinforced polymer with resin or thermoplastics as a binder Ceramic matrix composites (composite ceramic and metal matrices) Metal matrix composites and other advanced composite materials, often first developed for spacecraft and aircraft applications. Composite materials can be less expensive, lighter, stronger or more durable than common materials. Some are inspired by biological structures found in plants and animals. Robotic materials are composites that include sensing, actuation, computation, and communication components. Composite materials are used for construction and technical structures such as boat hulls, swimming pool panels, racing car bodies, shower stalls, bathtubs, storage tanks, imitation granite, and cultured marble sinks and countertops. They are also being increasingly used in general automotive applications. == History == The earliest composite materials were made from straw and mud combined to form bricks for building construction. Ancient brick-making was documented by Egyptian tomb paintings. Wattle and daub might be the oldest composite materials, at over 6000 years old. Woody plants, both true wood from trees and such plants as palms and bamboo, yield natural composites that were used prehistorically by humankind and are still used widely in construction and scaffolding. Plywood, 3400 BC, by the Ancient Mesopotamians; gluing wood at different angles gives better properties than natural wood. Cartonnage, layers of linen or papyrus soaked in plaster, dates to the First Intermediate Period of Egypt c. 2181–2055 BC and was used for death masks. Cob mud bricks, or mud walls (using mud (clay) with straw or gravel as a binder), have been used for thousands of years. Concrete was described by Vitruvius who, writing around 25 BC in his Ten Books on Architecture, distinguished types of aggregate appropriate for the preparation of lime mortars. For structural mortars, he recommended pozzolana, volcanic sands from the sandlike beds of Pozzuoli, brownish-yellow-gray in colour near Naples and reddish-brown at Rome. Vitruvius specifies a ratio of 1 part lime to 3 parts pozzolana for cements used in buildings and a 1:2 ratio of lime to pulvis Puteolanus for underwater work, essentially the same ratio mixed today for concrete used at sea. Natural cement-stones, after burning, produced cements used in concretes from post-Roman times into the 20th century, with some properties superior to manufactured Portland cement. Papier-mâché, a composite of paper and glue, has been used for hundreds of years.
The first artificial fibre reinforced plastic was a combination of fibreglass and Bakelite, produced in 1935 by Al Simison and Arthur D. Little at the Owens Corning Company. One of the most common and familiar composites is fibreglass, in which small glass fibres are embedded within a polymeric material (normally an epoxy or polyester). The glass fibre is relatively strong and stiff (but also brittle), whereas the polymer is ductile (but also weak and flexible). Thus the resulting fibreglass is relatively stiff, strong, flexible, and ductile. Other historical examples include the composite bow and the leather or wooden cannon. == Examples == === Composite materials === Concrete is the most common artificial composite material of all. As of 2009, about 7.5 billion cubic metres of concrete are made each year. Concrete typically consists of loose stones (construction aggregate) held with a matrix of cement. Concrete is an inexpensive material that resists large compressive forces but is susceptible to tensile loading. To give concrete the ability to resist being stretched, steel bars, which can resist high stretching (tensile) forces, are often added to concrete to form reinforced concrete. Fibre-reinforced polymers include carbon-fiber-reinforced polymers and glass-reinforced plastic. If classified by matrix, then there are thermoplastic composites, short fibre thermoplastics, long fibre thermoplastics or long-fiber-reinforced thermoplastics. There are numerous thermoset composites, including paper composite panels. Many advanced thermoset polymer matrix systems usually incorporate aramid fibre and carbon fibre in an epoxy resin matrix. Shape-memory polymer composites are high-performance composites, formulated using fibre or fabric reinforcements and shape-memory polymer resin as the matrix. Since a shape-memory polymer resin is used as the matrix, these composites have the ability to be easily manipulated into various configurations when they are heated above their activation temperatures and will exhibit high strength and stiffness at lower temperatures. They can also be reheated and reshaped repeatedly without losing their material properties. These composites are ideal for applications such as lightweight, rigid, deployable structures; rapid manufacturing; and dynamic reinforcement. High strain composites are another type of high-performance composites that are designed to perform in a high deformation setting and are often used in deployable systems where structural flexing is advantageous. Although high strain composites exhibit many similarities to shape-memory polymers, their performance is generally dependent on the fibre layout as opposed to the resin content of the matrix. Composites can also use metal fibres reinforcing other metals, as in metal matrix composites (MMC) or ceramic matrix composites (CMC), which include bone (hydroxyapatite reinforced with collagen fibres), cermet (ceramic and metal), and concrete. Ceramic matrix composites are built primarily for fracture toughness, not for strength. Another class of composite materials involves woven fabric composites consisting of longitudinal and transverse laced yarns. Woven fabric composites are flexible as they are in the form of a fabric. Organic matrix/ceramic aggregate composites include asphalt concrete, polymer concrete, mastic asphalt, mastic roller hybrid, dental composite, syntactic foam, and mother of pearl. Chobham armour is a special type of composite armour used in military applications.
Additionally, thermoplastic composite materials can be formulated with specific metal powders, resulting in materials with a density ranging from 2 g/cm3 to 11 g/cm3 (the same density as lead). The most common name for this type of material is "high gravity compound" (HGC), although "lead replacement" is also used. These materials can be used in place of traditional materials such as aluminium, stainless steel, brass, bronze, copper, lead, and even tungsten in weighting, balancing (for example, modifying the centre of gravity of a tennis racquet), vibration damping, and radiation shielding applications. High density composites are an economically viable option when certain materials are deemed hazardous and are banned (such as lead) or when secondary operations costs (such as machining, finishing, or coating) are a factor. There have been several studies indicating that interleaving stiff and brittle epoxy-based carbon-fiber-reinforced polymer laminates with flexible thermoplastic laminates can help to make highly toughened composites that show improved impact resistance. Another interesting aspect of such interleaved composites is that they are able to have shape memory behaviour without needing any shape-memory polymers or shape-memory alloys, e.g. balsa plies interleaved with hot glue, aluminium plies interleaved with acrylic polymers or PVC, and carbon-fiber-reinforced polymer laminates interleaved with polystyrene. A sandwich-structured composite is a special class of composite material that is fabricated by attaching two thin but stiff skins to a lightweight but thick core. The core material is normally a low-strength material, but its higher thickness provides the sandwich composite with high bending stiffness and overall low density. Wood is a naturally occurring composite comprising cellulose fibres in a lignin and hemicellulose matrix. Engineered wood includes a wide variety of different products such as wood fibre board, plywood, oriented strand board, wood plastic composite (recycled wood fibre in polyethylene matrix), Pykrete (sawdust in ice matrix), plastic-impregnated or laminated paper or textiles, Arborite, Formica (plastic), and Micarta. Other engineered laminate composites, such as Mallite, use a central core of end grain balsa wood, bonded to surface skins of light alloy or GRP. These generate low-weight, high-rigidity materials. Particulate composites have particles as filler material dispersed in a matrix, which may be a nonmetal, such as glass or epoxy. An automobile tire is an example of a particulate composite. Advanced diamond-like carbon (DLC) coated polymer composites have been reported, where the coating increases the surface hydrophobicity, hardness and wear resistance. Ferromagnetic composites include those consisting, for example, of a nanocrystalline filler of Fe-based powders in a polymer matrix. Amorphous and nanocrystalline powders obtained, for example, from metallic glasses can be used. Their use makes it possible to obtain ferromagnetic nanocomposites with controlled magnetic properties. === Products === Fibre-reinforced composite materials have gained popularity (despite their generally high cost) in high-performance products that need to be lightweight, yet strong enough to take harsh loading conditions such as aerospace components (tails, wings, fuselages, propellers), boat and scull hulls, bicycle frames, and racing car bodies. Other uses include fishing rods, storage tanks, swimming pool panels, and baseball bats.
The Boeing 787 and Airbus A350 structures, including the wings and fuselage, are composed largely of composites. Composite materials are also becoming more common in the realm of orthopedic surgery, and composite is the most common hockey stick material. Carbon composite is a key material in today's launch vehicles and heat shields for the re-entry phase of spacecraft. It is widely used in solar panel substrates, antenna reflectors and yokes of spacecraft. It is also used in payload adapters, inter-stage structures and heat shields of launch vehicles. Furthermore, disk brake systems of airplanes and racing cars use carbon/carbon material, and composite material with carbon fibres and a silicon carbide matrix has been introduced in luxury vehicles and sports cars. In 2006, a fibre-reinforced composite pool panel was introduced for in-ground swimming pools, residential as well as commercial, as a non-corrosive alternative to galvanized steel. In 2007, an all-composite military Humvee was introduced by TPI Composites Inc and Armor Holdings Inc, the first all-composite military vehicle. By using composites the vehicle is lighter, allowing higher payloads. In 2008, carbon fibre and DuPont Kevlar (five times stronger than steel) were combined with enhanced thermoset resins to make military transit cases by ECS Composites, creating 30-percent lighter cases with high strength. Pipes and fittings for various purposes such as transportation of potable water, fire-fighting, irrigation, seawater, desalinated water, chemical and industrial waste, and sewage are now manufactured in glass reinforced plastics. Composite materials used in tensile structures for facade applications provide the advantage of being translucent. The woven base cloth combined with the appropriate coating allows better light transmission. This provides a very comfortable level of illumination compared to the full brightness outside. The blades of wind turbines, in growing sizes on the order of 50 m in length, have been fabricated from composites for several years. Double lower-leg amputees run on carbon-composite, spring-like artificial feet as quickly as non-amputee athletes. High-pressure gas cylinders for firefighters, typically of about 7–9 litre volume at 300 bar pressure, are nowadays constructed from carbon composite. Type-4 cylinders include metal only as the boss that carries the thread for screwing in the valve. On 5 September 2019, HMD Global unveiled the Nokia 6.2 and Nokia 7.2, which are claimed to use a polymer composite for their frames. == Overview == Composite materials are created from individual materials. These individual materials are known as constituent materials, and there are two main categories of them. One is the matrix (binder) and the other is the reinforcement. At least a portion of each kind is needed. The reinforcement receives support from the matrix as the matrix surrounds the reinforcement and maintains its relative positions. The properties of the matrix are improved as the reinforcements impart their exceptional physical and mechanical properties. Through synergism, the composite gains mechanical properties that are unavailable from the individual constituent materials. At the same time, the designer of the product or structure receives options to choose an optimum combination from the variety of matrix and strengthening materials. To shape the engineered composite, it must be formed. The reinforcement is placed onto the mould surface or into the mould cavity. Before or after this, the matrix can be introduced to the reinforcement.
The matrix then undergoes a melding event which essentially sets the part shape. This melding event can happen in several ways, depending upon the matrix nature, such as solidification from the melted state for a thermoplastic polymer matrix composite or chemical polymerization for a thermoset polymer matrix. According to the requirements of end-item design, various methods of moulding can be used. The natures of the chosen matrix and reinforcement are the key factors influencing the methodology. The gross quantity of material to be made is another main factor. Vast quantities can justify the high capital investments needed for rapid and automated manufacturing technology, whereas small production quantities are better served by cheaper capital investments with higher labour and tooling expenses at a correspondingly slower rate. Many commercially produced composites use a polymer matrix material often called a resin solution. There are many different polymers available depending upon the starting raw ingredients. There are several broad categories, each with numerous variations. The most common are known as polyester, vinyl ester, epoxy, phenolic, polyimide, polyamide, polypropylene, PEEK, and others. The reinforcement materials are often fibres but also commonly ground minerals. The various methods described below have been developed to reduce the resin content of the final product, or equivalently to increase the fibre content. As a rule of thumb, lay up results in a product containing 60% resin and 40% fibre, whereas vacuum infusion gives a final product with 40% resin and 60% fibre content. The strength of the product is greatly dependent on this ratio. Martin Hubbe and Lucian A Lucia consider wood to be a natural composite of cellulose fibres in a matrix of lignin. == Cores in composites == Several layup designs of composite also involve a co-curing or post-curing of the prepreg with many other media, such as foam or honeycomb. Generally, this is known as a sandwich structure. This is a more general layup for the production of cowlings, doors, radomes or non-structural parts. Open- and closed-cell-structured foams like polyvinyl chloride, polyurethane, polyethylene, or polystyrene foams, balsa wood, syntactic foams, and honeycombs are generally utilized as core materials. Open- and closed-cell metal foam can also be utilized as core materials. Recently, 3D graphene structures (also called graphene foam) have also been employed as core structures. A recent review by Khurram and Xu et al. has provided a summary of the state-of-the-art techniques for fabrication of the 3D structure of graphene and examples of the use of these foam-like structures as a core for their respective polymer composites. === Semi-crystalline polymers === Although the two phases are chemically equivalent, semi-crystalline polymers can be described both quantitatively and qualitatively as composite materials. The crystalline portion has a higher elastic modulus and provides reinforcement for the less stiff, amorphous phase. Polymeric materials can range from 0% to 100% crystallinity (i.e. volume fraction of the crystalline phase) depending on molecular structure and thermal history. Different processing techniques can be employed to vary the percent crystallinity in these materials and thus their mechanical properties, as described in the physical properties section. This effect is seen in a variety of places, from industrial plastics like polyethylene shopping bags to spiders, which can produce silks with different mechanical properties.
In many cases these materials act like particle composites with randomly dispersed crystals known as spherulites. However they can also be engineered to be anisotropic and act more like fiber reinforced composites. In the case of spider silk, the properties of the material can even be dependent on the size of the crystals, independent of the volume fraction. Ironically, single component polymeric materials are some of the most easily tunable composite materials known. == Methods of fabrication == Normally, the fabrication of a composite includes wetting, mixing or saturating the reinforcement with the matrix. The matrix is then induced to bind together (with heat or a chemical reaction) into a rigid structure. Usually, the operation is done in an open or closed forming mould. However, the order and ways of introducing the constituents alter considerably. Composites fabrication is achieved by a wide variety of methods, including advanced fibre placement (automated fibre placement), fibreglass spray lay-up process, filament winding, the Lanxide process, tailored fibre placement, tufting, and z-pinning. === Overview of mould === The reinforcing and matrix materials are merged, compacted, and cured (processed) within a mould to undergo a melding event. The part shape is fundamentally set after the melding event. However, under particular process conditions, it can deform. The melding event for a thermoset polymer matrix material is a curing reaction that is caused by extra heat or chemical reactivity such as an organic peroxide. The melding event for a thermoplastic polymeric matrix material is a solidification from the melted state. The melding event for a metal matrix material such as titanium foil is a fusing at high pressure and a temperature near the melting point. It is suitable for many moulding methods to refer to one mould piece as a "lower" mould and another mould piece as an "upper" mould. Lower and upper do not refer to the mould's configuration in space, but to the different faces of the moulded panel. There is always a lower mould, and sometimes an upper mould, in this convention. Part construction commences by applying materials to the lower mould. Lower mould and upper mould are more generalized descriptors than more common and specific terms such as male side, female side, a-side, b-side, tool side, bowl, hat, mandrel, etc. Continuous manufacturing utilizes a different nomenclature. Usually, the moulded product is referred to as a panel. It can be referred to as a casting for certain geometries and material combinations. It can be referred to as a profile for certain continuous processes. Some of the processes are autoclave moulding, vacuum bag moulding, pressure bag moulding, resin transfer moulding, and light resin transfer moulding. === Other fabrication methods === Other types of fabrication include casting, centrifugal casting, braiding (onto a former), continuous casting, filament winding, press moulding, transfer moulding, pultrusion moulding, and slip forming. There are also forming capabilities including CNC filament winding, vacuum infusion, wet lay-up, compression moulding, and thermoplastic moulding, to name a few. The use of curing ovens and paint booths is also required for some projects. ==== Finishing methods ==== The finishing of the composite parts is also crucial in the final design. Many of these finishes will involve rain-erosion coatings or polyurethane coatings. === Tooling === The mould and mould inserts are referred to as "tooling".
The mould/tooling can be built from different materials. Tooling materials include aluminium, carbon fibre, invar, nickel, reinforced silicone rubber and steel. The tooling material selection is normally based on, but not limited to, the coefficient of thermal expansion, expected number of cycles, end item tolerance, desired or expected surface condition, cure method, glass transition temperature of the material being moulded, moulding method, matrix, cost, and other various considerations. == Physical properties == Usually, the composite's physical properties are dependent on the direction of consideration, and so are anisotropic. This applies to many properties including elastic modulus, ultimate tensile strength, thermal conductivity, and electrical conductivity. The rule of mixtures and inverse rule of mixtures give upper and lower bounds for these properties. The real value will lie somewhere between these values and can depend on many factors including: the orientation of interest the length of the fibres the accuracy of the fibre alignment the properties of the matrix and fibres delamination of the fibres and matrix the inclusion of any impurities For some material property E {\displaystyle E} , the rule of mixtures states that the overall property in the direction parallel to the fibers could be as high as E ∥ = f E f + ( 1 − f ) E m {\displaystyle E_{\parallel }=fE_{f}+\left(1-f\right)E_{m}} The inverse rule of mixtures states that in the direction perpendicular to the fibers, the elastic modulus of a composite could be as low as E ⊥ = ( f E f + 1 − f E m ) − 1 . {\displaystyle E_{\perp }=\left({\frac {f}{E_{f}}}+{\frac {1-f}{E_{m}}}\right)^{-1}.} where f = V f V f + V m {\displaystyle f={\frac {V_{f}}{V_{f}+V_{m}}}} is the volume fraction of the fibers E ∥ {\displaystyle E_{\parallel }} is the material property of the composite parallel to the fibers E ⊥ {\displaystyle E_{\perp }} is the material property of the composite perpendicular to the fibers E f {\displaystyle E_{f}} is the material property of the fibers E m {\displaystyle E_{m}} is the material property of the matrix The majority of commercial composites are formed with random dispersion and orientation of the strengthening fibres, in which case the composite Young's modulus will fall between the isostrain and isostress bounds. However, in applications where the strength-to-weight ratio is engineered to be as high as possible (such as in the aerospace industry), fibre alignment may be tightly controlled. In contrast to composites, isotropic materials (for example, aluminium or steel), in standard wrought forms, possess the same stiffness typically despite the directional orientation of the applied forces and/or moments. The relationship between forces/moments and strains/curvatures for an isotropic material can be described with the following material properties: Young's Modulus, the shear modulus, and the Poisson's ratio, in relatively simple mathematical relationships. For the anisotropic material, it needs the mathematics of a second-order tensor and up to 21 material property constants. For the special case of orthogonal isotropy, there are three distinct material property constants for each of Young's Modulus, Shear Modulus and Poisson's ratio—a total of 9 constants to express the relationship between forces/moments and strains/curvatures. Techniques that take benefit of the materials' anisotropic properties involve mortise and tenon joints (in natural composites such as wood) and pi joints in synthetic composites. 
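For illustration, the rule-of-mixtures bounds given above can be evaluated directly. The following minimal Python sketch uses hypothetical values loosely representative of an E-glass-fibre/epoxy system (fibre modulus about 72 GPa, matrix modulus about 3.5 GPa, 60% fibre volume fraction):

def rule_of_mixtures(f, E_f, E_m):
    """Upper (parallel, isostrain) and lower (perpendicular, isostress) bounds on E."""
    E_parallel = f * E_f + (1 - f) * E_m
    E_perpendicular = 1.0 / (f / E_f + (1 - f) / E_m)
    return E_parallel, E_perpendicular

# Hypothetical glass-fibre/epoxy values
E_par, E_perp = rule_of_mixtures(f=0.60, E_f=72e9, E_m=3.5e9)
print(f"E parallel      = {E_par / 1e9:.1f} GPa")
print(f"E perpendicular = {E_perp / 1e9:.1f} GPa")

For a real laminate the measured modulus falls between these two bounds, depending on fibre alignment and the other factors listed above.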
== Mechanical properties of composites == === Particle reinforcement === In general, particle reinforcement strengthens composites less than fiber reinforcement does. It is used to enhance the stiffness of the composites while increasing the strength and the toughness. Because of their mechanical properties, particle-reinforced composites are used in applications in which wear resistance is required. For example, the hardness of cement can be increased drastically by reinforcing it with gravel particles. Particle reinforcement is a highly advantageous method of tuning the mechanical properties of materials since it is very easy to implement while being low cost. The elastic modulus of particle-reinforced composites can be expressed as, E c = V m E m + K c V p E p {\displaystyle E_{c}=V_{m}E_{m}+K_{c}V_{p}E_{p}} where E is the elastic modulus and V is the volume fraction. The subscripts c, p and m indicate composite, particle and matrix, respectively. K c {\displaystyle K_{c}} is a constant that can be found empirically. Similarly, the tensile strength of particle-reinforced composites can be expressed as, ( T . S . ) c = V m ( T . S . ) m + K s V p ( T . S . ) p {\displaystyle (T.S.)_{c}=V_{m}(T.S.)_{m}+K_{s}V_{p}(T.S.)_{p}} where T.S. is the tensile strength, and K s {\displaystyle K_{s}} is a constant (not equal to K c {\displaystyle K_{c}} ) that can be found empirically. === Short fiber reinforcement (shear lag theory) === Short fibers are often cheaper or more convenient to manufacture than longer continuous fibers, but still provide better properties than particle reinforcement. A common example is carbon fiber reinforced 3D printing filaments, which use chopped short carbon fibers mixed into a matrix, typically PLA or PETG. Shear lag theory uses the shear lag model to predict properties such as the Young's modulus for short fiber composites. The model assumes that load is transferred from the matrix to the fibers solely through the interfacial shear stresses τ i {\displaystyle \tau _{i}} acting on the cylindrical interface. Shear lag theory then says that the rate of change of the axial stress in the fiber, moving along the fiber, is proportional to the ratio of the interfacial shear stresses over the radius of the fibre r 0 {\displaystyle r_{0}} : d σ f d x = − 2 τ i r 0 {\displaystyle {\frac {d\sigma _{f}}{dx}}=-{\frac {2\tau _{i}}{r_{0}}}} This leads to the average fiber stress over the full length of the fibre being given by: σ f = E f ε 1 ( 1 − tanh ⁡ ( n s ) n s ) {\displaystyle \sigma _{f}=E_{f}\varepsilon _{1}\left(1-{\frac {\tanh(ns)}{ns}}\right)} where ε 1 {\displaystyle \varepsilon _{1}} is the macroscopic strain in the composite s {\displaystyle s} is the fiber aspect ratio (length over diameter) n = ( 2 E m E f ( 1 + ν m ) ln ⁡ ( 1 / f ) ) 1 / 2 {\displaystyle n=\left({\frac {2E_{m}}{E_{f}(1+\nu _{m})\ln(1/f)}}\right)^{1/2}} is a dimensionless constant ν m {\displaystyle \nu _{m}} is the Poisson's ratio of the matrix By assuming a uniform tensile strain, this results in: E 1 = σ 1 ε 1 = f E f ( 1 − tanh ⁡ ( n s ) n s ) + ( 1 − f ) E m {\displaystyle E_{1}={\frac {\sigma _{1}}{\varepsilon _{1}}}=fE_{f}\left(1-{\frac {\tanh(ns)}{ns}}\right)+(1-f)E_{m}} As s becomes larger, this tends towards the rule of mixtures, which represents the Young's modulus parallel to continuous fibers.
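A short numerical sketch of the shear-lag estimate above (Python, with hypothetical chopped-carbon-fibre and polymer-matrix values) shows how the predicted modulus approaches the rule-of-mixtures value as the aspect ratio s grows:

import numpy as np

def shear_lag_modulus(f, E_f, E_m, nu_m, s):
    """Young's modulus of a short-fibre composite from the shear-lag expression above."""
    n = np.sqrt(2 * E_m / (E_f * (1 + nu_m) * np.log(1.0 / f)))
    eta = 1 - np.tanh(n * s) / (n * s)     # length-efficiency factor (1 - tanh(ns)/ns)
    return f * E_f * eta + (1 - f) * E_m

# Hypothetical values: E_f = 230 GPa, E_m = 3 GPa, nu_m = 0.35, 20% fibre volume fraction
for s in (5, 20, 100, 1000):               # aspect ratio = fibre length / diameter
    E1 = shear_lag_modulus(f=0.20, E_f=230e9, E_m=3e9, nu_m=0.35, s=s)
    print(f"aspect ratio {s:5d}:  E1 = {E1 / 1e9:.1f} GPa")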
=== Continuous fiber reinforcement === In general, continuous fiber reinforcement is implemented by incorporating a fiber as the strong phase into a weak phase, the matrix. The reason for the popularity of fiber usage is that materials with extraordinary strength can be obtained in their fiber form. Non-metallic fibers usually show a very high strength-to-density ratio compared to metal fibers because of the covalent nature of their bonds. The most famous example of this is carbon fibers, which have many applications extending from sports gear to protective equipment to space industries. The stress on the composite can be expressed in terms of the volume fraction of the fiber and the matrix. σ c = V f σ f + V m σ m {\displaystyle \sigma _{c}=V_{f}\sigma _{f}+V_{m}\sigma _{m}} where σ {\displaystyle \sigma } is the stress and V is the volume fraction. The subscripts c, f and m indicate composite, fiber and matrix, respectively. Although the stress–strain behavior of fiber composites can only be determined by testing, there is an expected trend of three stages in the stress–strain curve. The first stage is the region of the stress–strain curve where both fiber and the matrix are elastically deformed. This linearly elastic region can be expressed in the following form. σ c = E c ϵ c = ϵ c ( V f E f + V m E m ) {\displaystyle \sigma _{c}=E_{c}\epsilon _{c}=\epsilon _{c}(V_{f}E_{f}+V_{m}E_{m})} where σ {\displaystyle \sigma } is the stress, ϵ {\displaystyle \epsilon } is the strain, E is the elastic modulus, and V is the volume fraction. The subscripts c, f, and m indicate composite, fiber, and matrix, respectively. After passing the elastic region for both fiber and matrix, the second region of the stress–strain curve can be observed. In the second region, the fiber is still elastically deformed while the matrix is plastically deformed, since the matrix is the weak phase. The instantaneous modulus can be determined using the slope of the stress–strain curve in the second region. The relationship between stress and strain can be expressed as, σ c = V f E f ϵ c + V m σ m ( ϵ c ) {\displaystyle \sigma _{c}=V_{f}E_{f}\epsilon _{c}+V_{m}\sigma _{m}(\epsilon _{c})} where σ {\displaystyle \sigma } is the stress, ϵ {\displaystyle \epsilon } is the strain, E is the elastic modulus, and V is the volume fraction. The subscripts c, f, and m indicate composite, fiber, and matrix, respectively. To find the modulus in the second region, the derivative of this equation can be used, since the slope of the curve is equal to the modulus. E c ′ = d σ c d ϵ c = V f E f + V m ( d σ m d ϵ c ) {\displaystyle E_{c}'={\frac {d\sigma _{c}}{d\epsilon _{c}}}=V_{f}E_{f}+V_{m}\left({\frac {d\sigma _{m}}{d\epsilon _{c}}}\right)} In most cases it can be assumed E c ′ = V f E f {\displaystyle E_{c}'=V_{f}E_{f}} since the second term is much less than the first one. In reality, the derivative of stress with respect to strain does not always return the modulus, because of the binding interaction between the fiber and matrix. The strength of the interaction between these two phases can result in changes in the mechanical properties of the composite. The compatibility of the fiber and matrix is a measure of internal stress. Covalently bonded high-strength fibers (e.g. carbon fibers) experience mostly elastic deformation before fracture, since plastic deformation requires dislocation motion. Metallic fibers, by contrast, have more room to deform plastically, so their composites exhibit a third stage where both fiber and the matrix are plastically deforming.
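The stage I and stage II expressions above can be combined into one short calculation. The Python sketch below assumes a hypothetical elastic, perfectly plastic matrix (yield stress sigma_my) so that the matrix stress sigma_m(epsilon_c) has a simple closed form; the fibre and matrix values are illustrative only:

def composite_stress(strain, V_f, E_f, E_m, sigma_my):
    """Stress in a continuous-fibre composite with an elastic/perfectly-plastic matrix."""
    eps_my = sigma_my / E_m                      # matrix yield strain separating stage I and II
    sigma_m = E_m * strain if strain < eps_my else sigma_my
    return V_f * E_f * strain + (1.0 - V_f) * sigma_m

# Hypothetical values: stiff fibres (230 GPa) in a metal matrix (70 GPa, 100 MPa yield)
for eps in (0.0005, 0.001, 0.003, 0.006):
    stress = composite_stress(eps, V_f=0.4, E_f=230e9, E_m=70e9, sigma_my=100e6)
    print(f"strain {eps:.4f}:  stress = {stress / 1e6:.0f} MPa")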
Metallic fibers have many applications at cryogenic temperatures, which is one of the advantages of composites with metal fibers over nonmetallic ones. The stress in this region of the stress–strain curve can be expressed as, σ c ( ϵ c ) = V f σ f ( ϵ c ) + V m σ m ( ϵ c ) {\displaystyle \sigma _{c}(\epsilon _{c})=V_{f}\sigma _{f}(\epsilon _{c})+V_{m}\sigma _{m}(\epsilon _{c})} where σ {\displaystyle \sigma } is the stress, ϵ {\displaystyle \epsilon } is the strain, E is the elastic modulus, and V is the volume fraction. The subscripts c, f, and m indicate composite, fiber, and matrix, respectively. σ f ( ϵ c ) {\displaystyle \sigma _{f}(\epsilon _{c})} and σ m ( ϵ c ) {\displaystyle \sigma _{m}(\epsilon _{c})} are the fiber and matrix flow stresses, respectively. Just after the third region the composite exhibits necking. The necking strain of the composite lies between the necking strains of the fiber and the matrix, just like other mechanical properties of the composites. The necking strain of the weak phase is delayed by the strong phase. The amount of the delay depends upon the volume fraction of the strong phase. Thus, the tensile strength of the composite can be expressed in terms of the volume fraction. ( T . S . ) c = V f ( T . S . ) f + V m σ m ( ϵ m ) {\displaystyle (T.S.)_{c}=V_{f}(T.S.)_{f}+V_{m}\sigma _{m}(\epsilon _{m})} where T.S. is the tensile strength, σ {\displaystyle \sigma } is the stress, ϵ {\displaystyle \epsilon } is the strain, E is the elastic modulus, and V is the volume fraction. The subscripts c, f, and m indicate composite, fiber, and matrix, respectively. The composite tensile strength can be expressed as ( T . S . ) c = V m ( T . S . ) m {\displaystyle (T.S.)_{c}=V_{m}(T.S.)_{m}} for V f {\displaystyle V_{f}} less than or equal to V c {\displaystyle V_{c}} (an arbitrary critical value of volume fraction), and ( T . S . ) c = V f ( T . S . ) f + V m ( σ m ) {\displaystyle (T.S.)_{c}=V_{f}(T.S.)_{f}+V_{m}(\sigma _{m})} for V f {\displaystyle V_{f}} greater than or equal to V c {\displaystyle V_{c}} . The critical value of volume fraction can be expressed as, V c = [ ( T . S . ) m − σ m ( ϵ f ) ] [ ( T . S . ) f + ( T . S . ) m − σ m ( ϵ f ) ] {\displaystyle V_{c}={\frac {[(T.S.)_{m}-\sigma _{m}(\epsilon _{f})]}{[(T.S.)_{f}+(T.S.)_{m}-\sigma _{m}(\epsilon _{f})]}}} Evidently, the composite tensile strength can be higher than that of the matrix if ( T . S . ) c {\displaystyle (T.S.)_{c}} is greater than ( T . S . ) m {\displaystyle (T.S.)_{m}} . Thus, the minimum volume fraction of the fiber can be expressed as, V c = [ ( T . S . ) m − σ m ( ϵ f ) ] [ ( T . S . ) f − σ m ( ϵ f ) ] {\displaystyle V_{c}={\frac {[(T.S.)_{m}-\sigma _{m}(\epsilon _{f})]}{[(T.S.)_{f}-\sigma _{m}(\epsilon _{f})]}}} Although this minimum value is very low in practice, it is very important to know, since the reason for the incorporation of continuous fibers is to improve the mechanical properties of the materials/composites, and this value of volume fraction is the threshold of this improvement. === The effect of fiber orientation === ==== Aligned fibers ==== A change in the angle between the applied stress and fiber orientation will affect the mechanical properties of fiber-reinforced composites, especially the tensile strength. This angle, θ {\displaystyle \theta } , can be used to predict the dominant tensile fracture mechanism. At small angles, θ ≈ 0 ∘ {\displaystyle \theta \approx 0^{\circ }} , the dominant fracture mechanism is the same as with load–fiber alignment: tensile fracture.
The resolved force acting upon the length of the fibers is reduced by a factor of cos ⁡ θ {\displaystyle \cos \theta } from rotation. F res = F cos ⁡ θ {\displaystyle F_{\mbox{res}}=F\cos \theta } . The resolved area on which the fiber experiences the force is increased by a factor of cos ⁡ θ {\displaystyle \cos \theta } from rotation. A res = A 0 / cos ⁡ θ {\displaystyle A_{\mbox{res}}=A_{0}/\cos \theta } . Taking the effective tensile strength to be ( T.S. ) c = F res / A res {\displaystyle ({\mbox{T.S.}})_{\mbox{c}}=F_{\mbox{res}}/A_{\mbox{res}}} and the aligned tensile strength σ ∥ ∗ = F / A {\displaystyle \sigma _{\parallel }^{*}=F/A} . ( T.S. ) c ( longitudinal fracture ) = σ ∥ ∗ cos 2 ⁡ θ {\displaystyle ({\mbox{T.S.}})_{\mbox{c}}\;({\mbox{longitudinal fracture}})={\frac {\sigma _{\parallel }^{*}}{\cos ^{2}\theta }}} At moderate angles, θ ≈ 45 ∘ {\displaystyle \theta \approx 45^{\circ }} , the material experiences shear failure. The effective force direction is reduced with respect to the aligned direction. F res = F cos ⁡ θ {\displaystyle F_{\mbox{res}}=F\cos \theta } . The resolved area on which the force acts is A res = A m / sin ⁡ θ {\displaystyle A_{\mbox{res}}=A_{m}/\sin \theta } . The resulting tensile strength depends on the shear strength of the matrix, τ m {\displaystyle \tau _{m}} . ( T.S. ) c ( shear failure ) = τ m sin ⁡ θ cos ⁡ θ {\displaystyle ({\mbox{T.S.}})_{\mbox{c}}\;({\mbox{shear failure}})={\frac {\tau _{m}}{\sin {\theta }\cos {\theta }}}} At extreme angles, θ ≈ 90 ∘ {\displaystyle \theta \approx 90^{\circ }} , the dominant mode of failure is tensile fracture in the matrix in the perpendicular direction. As in the isostress case of layered composite materials, the strength in this direction is lower than in the aligned direction. The effective areas and forces act perpendicular to the aligned direction so they both scale by sin ⁡ θ {\displaystyle \sin \theta } . The resolved tensile strength is proportional to the transverse strength, σ ⊥ ∗ {\displaystyle \sigma _{\perp }^{*}} . ( T.S. ) c ( transverse fracture ) = σ ⊥ ∗ sin 2 ⁡ θ {\displaystyle ({\mbox{T.S.}})_{\mbox{c}}\;({\mbox{transverse fracture}})={\frac {\sigma _{\perp }^{*}}{\sin ^{2}\theta }}} The critical angles from which the dominant fracture mechanism changes can be calculated as, θ c 1 = tan − 1 ⁡ ( τ m σ ∥ ∗ ) {\displaystyle \theta _{c_{1}}=\tan ^{-1}\left({\frac {\tau _{m}}{\sigma _{\parallel }^{*}}}\right)} θ c 2 = tan − 1 ⁡ ( σ ⊥ ∗ τ m ) {\displaystyle \theta _{c_{2}}=\tan ^{-1}\left({\frac {\sigma _{\perp }^{*}}{\tau _{m}}}\right)} where θ c 1 {\displaystyle \theta _{c_{1}}} is the critical angle between longitudinal fracture and shear failure, and θ c 2 {\displaystyle \theta _{c_{2}}} is the critical angle between shear failure and transverse fracture. By ignoring length effects, this model is most accurate for continuous fibers and does not effectively capture the strength-orientation relationship for short fiber reinforced composites. Furthermore, most realistic systems do not experience the local maxima predicted at the critical angles. The Tsai-Hill criterion provides a more complete description of fiber composite tensile strength as a function of orientation angle by coupling the contributing yield stresses: σ ∥ ∗ {\displaystyle \sigma _{\parallel }^{*}} , σ ⊥ ∗ {\displaystyle \sigma _{\perp }^{*}} , and τ m {\displaystyle \tau _{m}} . ( T.S. 
) c ( Tsai-Hill ) = [ cos 4 ⁡ θ ( σ ∥ ∗ ) 2 + cos 2 ⁡ θ sin 2 ⁡ θ ( 1 ( τ m ) 2 − 1 ( σ ∥ ∗ ) 2 ) + sin 4 ⁡ θ ( σ ⊥ ∗ ) 2 ] − 1 / 2 {\displaystyle ({\mbox{T.S.}})_{\mbox{c}}\;({\mbox{Tsai-Hill}})={\bigg [}{\frac {\cos ^{4}\theta }{({\sigma _{\parallel }^{*}})^{2}}}+\cos ^{2}\theta \sin ^{2}\theta \left({\frac {1}{({\tau _{m}})^{2}}}-{\frac {1}{({\sigma _{\parallel }^{*}})^{2}}}\right)+{\frac {\sin ^{4}\theta }{({\sigma _{\perp }^{*}})^{2}}}{\bigg ]}^{-1/2}} ==== Randomly oriented fibers ==== Anisotropy in the tensile strength of fiber reinforced composites can be removed by randomly orienting the fiber directions within the material. It sacrifices the ultimate strength in the aligned direction for an overall, isotropically strengthened material. E c = K V f E f + V m E m {\displaystyle E_{c}=KV_{f}E_{f}+V_{m}E_{m}} Where K is an empirically determined reinforcement factor; similar to the particle reinforcement equation. For fibers with randomly distributed orientations in a plane, K ≈ 0.38 {\displaystyle K\approx 0.38} , and for a random distribution in 3D, K ≈ 0.20 {\displaystyle K\approx 0.20} . === Stiffness and Compliance Elasticity === Composite materials are generally anisotropic, and in many cases are orthotropic. Voigt notation can be used to reduce the rank of the stress and strain tensors such that the stiffness C {\displaystyle C} (often also referred to by Q {\displaystyle Q} ) and compliance S {\displaystyle S} can be written as a matrix: [ σ 1 σ 2 σ 3 σ 4 σ 5 σ 6 ] = [ C 11 C 12 C 13 C 14 C 15 C 16 C 12 C 22 C 23 C 24 C 25 C 26 C 13 C 23 C 33 C 34 C 35 C 36 C 14 C 24 C 34 C 44 C 45 C 46 C 15 C 25 C 35 C 45 C 55 C 56 C 16 C 26 C 36 C 46 C 56 C 66 ] [ ε 1 ε 2 ε 3 ε 4 ε 5 ε 6 ] {\displaystyle {\begin{bmatrix}\sigma _{1}\\\sigma _{2}\\\sigma _{3}\\\sigma _{4}\\\sigma _{5}\\\sigma _{6}\end{bmatrix}}={\begin{bmatrix}C_{11}&C_{12}&C_{13}&C_{14}&C_{15}&C_{16}\\C_{12}&C_{22}&C_{23}&C_{24}&C_{25}&C_{26}\\C_{13}&C_{23}&C_{33}&C_{34}&C_{35}&C_{36}\\C_{14}&C_{24}&C_{34}&C_{44}&C_{45}&C_{46}\\C_{15}&C_{25}&C_{35}&C_{45}&C_{55}&C_{56}\\C_{16}&C_{26}&C_{36}&C_{46}&C_{56}&C_{66}\end{bmatrix}}{\begin{bmatrix}\varepsilon _{1}\\\varepsilon _{2}\\\varepsilon _{3}\\\varepsilon _{4}\\\varepsilon _{5}\\\varepsilon _{6}\end{bmatrix}}} and [ ε 1 ε 2 ε 3 ε 4 ε 5 ε 6 ] = [ S 11 S 12 S 13 S 14 S 15 S 16 S 12 S 22 S 23 S 24 S 25 S 26 S 13 S 23 S 33 S 34 S 35 S 36 S 14 S 24 S 34 S 44 S 45 S 46 S 15 S 25 S 35 S 45 S 55 S 56 S 16 S 26 S 36 S 46 S 56 S 66 ] [ σ 1 σ 2 σ 3 σ 4 σ 5 σ 6 ] {\displaystyle {\begin{bmatrix}\varepsilon _{1}\\\varepsilon _{2}\\\varepsilon _{3}\\\varepsilon _{4}\\\varepsilon _{5}\\\varepsilon _{6}\end{bmatrix}}={\begin{bmatrix}S_{11}&S_{12}&S_{13}&S_{14}&S_{15}&S_{16}\\S_{12}&S_{22}&S_{23}&S_{24}&S_{25}&S_{26}\\S_{13}&S_{23}&S_{33}&S_{34}&S_{35}&S_{36}\\S_{14}&S_{24}&S_{34}&S_{44}&S_{45}&S_{46}\\S_{15}&S_{25}&S_{35}&S_{45}&S_{55}&S_{56}\\S_{16}&S_{26}&S_{36}&S_{46}&S_{56}&S_{66}\end{bmatrix}}{\begin{bmatrix}\sigma _{1}\\\sigma _{2}\\\sigma _{3}\\\sigma _{4}\\\sigma _{5}\\\sigma _{6}\end{bmatrix}}} When considering each ply individually, it is assumed that they can be treated as thi lamina and so out–of–plane stresses and strains are negligible. That is σ 3 = σ 4 = σ 5 = 0 {\displaystyle \sigma _{3}=\sigma _{4}=\sigma _{5}=0} and ε 4 = ε 5 = 0 {\displaystyle \varepsilon _{4}=\varepsilon _{5}=0} . 
This allows the stiffness and compliance matrices to be reduced to 3x3 matrices as follows: C = [ E 1 1 − ν 12 ν 21 E 2 ν 12 1 − ν 12 ν 21 0 E 2 ν 12 1 − ν 12 ν 21 E 2 1 − ν 12 ν 21 0 0 0 G 12 ] {\displaystyle C={\begin{bmatrix}{\tfrac {E_{\rm {1}}}{1-{\nu _{\rm {12}}}{\nu _{\rm {21}}}}}&{\tfrac {E_{\rm {2}}{\nu _{\rm {12}}}}{1-{\nu _{\rm {12}}}{\nu _{\rm {21}}}}}&0\\{\tfrac {E_{\rm {2}}{\nu _{\rm {12}}}}{1-{\nu _{\rm {12}}}{\nu _{\rm {21}}}}}&{\tfrac {E_{\rm {2}}}{1-{\nu _{\rm {12}}}{\nu _{\rm {21}}}}}&0\\0&0&G_{\rm {12}}\\\end{bmatrix}}\quad } and S = [ 1 E 1 − ν 21 E 2 0 − ν 12 E 1 1 E 2 0 0 0 1 G 12 ] {\displaystyle \quad S={\begin{bmatrix}{\tfrac {1}{E_{\rm {1}}}}&-{\tfrac {\nu _{\rm {21}}}{E_{\rm {2}}}}&0\\-{\tfrac {\nu _{\rm {12}}}{E_{\rm {1}}}}&{\tfrac {1}{E_{\rm {2}}}}&0\\0&0&{\tfrac {1}{G_{\rm {12}}}}\\\end{bmatrix}}} For fiber-reinforced composite, the fiber orientation in material affect anisotropic properties of the structure. From characterizing technique i.e. tensile testing, the material properties were measured based on sample (1-2) coordinate system. The tensors above express stress-strain relationship in (1-2) coordinate system. While the known material properties is in the principal coordinate system (x-y) of material. Transforming the tensor between two coordinate system help identify the material properties of the tested sample. The transformation matrix with θ {\displaystyle \theta } degree rotation is T ( θ ) ϵ = [ cos 2 ⁡ θ sin 2 ⁡ θ cos ⁡ θ sin ⁡ θ s i n 2 θ cos 2 ⁡ θ − cos ⁡ θ sin ⁡ θ − 2 cos ⁡ θ sin ⁡ θ 2 cos ⁡ θ sin ⁡ θ cos 2 ⁡ θ − sin 2 ⁡ θ ] {\displaystyle T(\theta )_{\epsilon }={\begin{bmatrix}\cos ^{2}\theta &\sin ^{2}\theta &\cos \theta \sin \theta \\sin^{2}\theta &\cos ^{2}\theta &-\cos \theta \sin \theta \\-2\cos \theta \sin \theta &2\cos \theta \sin \theta &\cos ^{2}\theta -\sin ^{2}\theta \end{bmatrix}}} for [ ϵ ´ ] = T ( θ ) ϵ [ ϵ ] {\displaystyle {\begin{bmatrix}{\acute {\epsilon }}\end{bmatrix}}=T(\theta )_{\epsilon }{\begin{bmatrix}\epsilon \end{bmatrix}}} T ( θ ) σ = [ cos 2 ⁡ θ sin 2 ⁡ θ 2 cos ⁡ θ sin ⁡ θ s i n 2 θ cos 2 ⁡ θ − 2 cos ⁡ θ sin ⁡ θ − cos ⁡ θ sin ⁡ θ cos ⁡ θ sin ⁡ θ cos 2 ⁡ θ − sin 2 ⁡ θ ] {\displaystyle T(\theta )_{\sigma }={\begin{bmatrix}\cos ^{2}\theta &\sin ^{2}\theta &2\cos \theta \sin \theta \\sin^{2}\theta &\cos ^{2}\theta &-2\cos \theta \sin \theta \\-\cos \theta \sin \theta &\cos \theta \sin \theta &\cos ^{2}\theta -\sin ^{2}\theta \end{bmatrix}}} for [ σ ´ ] = T ( θ ) σ [ σ ] {\displaystyle {\begin{bmatrix}{\acute {\sigma }}\end{bmatrix}}=T(\theta )_{\sigma }{\begin{bmatrix}\sigma \end{bmatrix}}} === Types of fibers and mechanical properties === The most common types of fibers used in industry are glass fibers, carbon fibers, and kevlar due to their ease of production and availability. Their mechanical properties are very important to know, therefore the table of their mechanical properties is given below to compare them with S97 steel. The angle of fiber orientation is very important because of the anisotropy of fiber composites (please see the section "Physical properties" for a more detailed explanation). The mechanical properties of the composites can be tested using standard mechanical testing methods by positioning the samples at various angles (the standard angles are 0°, 45°, and 90°) with respect to the orientation of fibers within the composites. 
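To illustrate how the aligned-fiber failure-mode expressions and the Tsai-Hill criterion above translate into an orientation-dependent strength, the following Python sketch evaluates them for hypothetical strength values (a longitudinal strength of 1500 MPa, a transverse strength of 40 MPa, and a matrix shear strength of 60 MPa):

import numpy as np

def tensile_strength_vs_angle(theta_deg, sigma_par, sigma_perp, tau_m):
    """Off-axis tensile strength: weakest single-mechanism estimate and Tsai-Hill value."""
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    longitudinal = sigma_par / c**2 if c > 0 else np.inf     # fibre tensile fracture
    shear = tau_m / (s * c) if s * c > 0 else np.inf          # matrix shear failure
    transverse = sigma_perp / s**2 if s > 0 else np.inf       # transverse matrix fracture
    tsai_hill = (c**4 / sigma_par**2
                 + c**2 * s**2 * (1 / tau_m**2 - 1 / sigma_par**2)
                 + s**4 / sigma_perp**2) ** -0.5
    return min(longitudinal, shear, transverse), tsai_hill

for angle in (5, 15, 45, 75, 90):
    weakest, th = tensile_strength_vs_angle(angle, 1500.0, 40.0, 60.0)
    print(f"theta = {angle:2d} deg:  min-mechanism = {weakest:7.1f} MPa,  Tsai-Hill = {th:7.1f} MPa")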
In general, 0° axial alignment makes composites resistant to longitudinal bending and axial tension/compression, 90° hoop alignment is used to obtain resistance to internal/external pressure, and ± 45° is the ideal choice to obtain resistance against pure torsion. ==== Mechanical properties of fiber composite materials ==== ==== Carbon fiber & fiberglass composites vs. aluminum alloy and steel ==== Although the strength and stiffness of steel and aluminum alloys are comparable to those of fiber composites, the specific strength and stiffness of composites (i.e. in relation to their weight) are significantly higher. === Failure === Shock, impact of varying speed, or repeated cyclic stresses can cause the laminate to separate at the interface between two layers, a condition known as delamination. Individual fibres can also separate from the matrix, for example by fibre pull-out. Composites can fail on the macroscopic or microscopic scale. Compression failures can happen at both the macro scale and at the level of each individual reinforcing fibre, in the form of compression buckling. Tension failures can be net section failures of the part or degradation of the composite at a microscopic scale, where one or more of the layers in the composite fail in tension of the matrix or through failure of the bond between the matrix and fibres. Some composites are brittle and possess little reserve strength beyond the initial onset of failure, while others may have large deformations and reserve energy-absorbing capacity past the onset of damage. The distinctions in fibres and matrices that are available, and the mixtures that can be made with blends, leave a very broad range of properties that can be designed into a composite structure. The most famous failure of a brittle ceramic matrix composite occurred when the carbon-carbon composite tile on the leading edge of the wing of the Space Shuttle Columbia fractured when impacted during take-off. This led to the catastrophic break-up of the vehicle when it re-entered the Earth's atmosphere on 1 February 2003. Composites have relatively poor bearing strength compared to metals. Another failure mode is fibre tensile fracture, which becomes more likely the more closely the fibres are aligned with the loading direction, assuming the tensile strength of the fibres exceeds that of the matrix. When a fiber has some angle of misorientation θ, several fracture modes are possible. For small values of θ the stress required to initiate fracture is increased by a factor of (cos θ)−2 due to the increased cross-sectional area (A/cos θ) of the fibre and the reduced force (F cos θ) experienced by the fibre, leading to a composite tensile strength of σparallel /cos2 θ where σparallel is the tensile strength of the composite with fibers aligned parallel with the applied force. Intermediate angles of misorientation θ lead to matrix shear failure. Again the cross-sectional area is modified, but since shear stress is now the driving force for failure, the area of the matrix parallel to the fibers is of interest; it increases by a factor of 1/sin θ. Similarly, the force parallel to this area decreases (F cos θ), leading to a total tensile strength of τmy /sin θ cos θ where τmy is the matrix shear strength. Finally, for large values of θ (near π/2) transverse matrix failure is the most likely to occur, since the fibers no longer carry the majority of the load. Still, the tensile strength will be greater than for the purely perpendicular orientation, since the force normal to the fibre plane decreases by a factor of sin θ and the area of that plane increases by a factor of 1/sin θ, producing a composite tensile strength of σperp /sin2θ where σperp is the tensile strength of the composite with fibers aligned perpendicular to the applied force.
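The three misorientation regimes above can be combined into a single estimate by evaluating all three candidate failure stresses and taking whichever requires the lowest applied stress. A minimal sketch, with made-up strength values purely for illustration:

```python
import numpy as np

def off_axis_failure_stress(theta_rad, sigma_par, tau_my, sigma_perp):
    """Applied tensile stress at failure for fibres misoriented by theta.

    Candidate modes, as described above:
      fibre tensile fracture : sigma_par / cos^2(theta)
      matrix shear failure   : tau_my / (sin(theta) cos(theta))
      transverse failure     : sigma_perp / sin^2(theta)
    Returns the operative mode (lowest stress) and its value.
    """
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    candidates = {}
    if c > 0:
        candidates["fibre fracture"] = sigma_par / c**2
    if s > 0 and c > 0:
        candidates["matrix shear"] = tau_my / (s * c)
    if s > 0:
        candidates["transverse"] = sigma_perp / s**2
    mode = min(candidates, key=candidates.get)
    return mode, round(candidates[mode], 1)

# Hypothetical strengths in MPa: sigma_par = 1200, tau_my = 50, sigma_perp = 40
for deg in (5, 20, 45, 70, 85):
    print(deg, off_axis_failure_stress(np.radians(deg), 1200.0, 50.0, 40.0))
```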
=== Testing === Composites are tested before and after construction to assist in predicting and preventing failures. Pre-construction testing may adopt finite element analysis (FEA) for ply-by-ply analysis of curved surfaces and for predicting wrinkling, crimping and dimpling of composites. Materials may be tested during manufacturing and after construction by various non-destructive methods including ultrasonic testing, thermography, shearography and X-ray radiography, and by laser bond inspection for NDT of relative bond strength integrity in a localized area. == See also == 3D composites Aluminium composite panel American Composites Manufacturers Association Chemical vapour infiltration Composite laminate Discontinuous aligned composite Epoxy granite Hybrid material Lay-up process Nanocomposite Pykrete Rule of mixtures Scaled Composites Smart material Smart Materials and Structures Void (composites) == References == == Further reading == == External links == cdmHUB – the Global Composites Community Distance learning course in polymers and composites OptiDAT composite material database Archived 2013-11-04 at the Wayback Machine
Wikipedia/Composite_materials
This is a list of computer programs that are predominantly used for molecular mechanics calculations. == See also == Car–Parrinello molecular dynamics Comparison of force-field implementations Comparison of nucleic acid simulation software List of molecular graphics systems List of protein structure prediction software List of quantum chemistry and solid-state physics software List of software for Monte Carlo molecular modeling List of software for nanostructures modeling Molecular design software Molecular dynamics Molecular modeling on GPUs Molecule editor PyMOL == Notes and references == == External links == SINCRIS Linux4Chemistry Collaborative Computational Project World Index of Molecular Visualization Resources Short list of Molecular Modeling resources OpenScience Biological Magnetic Resonance Data Bank Materials modelling and computer simulation codes A few tips on molecular dynamics atomistic.software - atomistic simulation engines and their citation trends
Wikipedia/Comparison_of_software_for_molecular_mechanics_modeling
In physics, critical phenomena is the collective name associated with the physics of critical points. Most of them stem from the divergence of the correlation length, but the slowing down of the dynamics near the critical point also plays a role. Critical phenomena include scaling relations among different quantities, power-law divergences of some quantities (such as the magnetic susceptibility in the ferromagnetic phase transition) described by critical exponents, universality, fractal behaviour, and ergodicity breaking. Critical phenomena take place in second order phase transitions, although not exclusively. The critical behavior is usually different from the mean-field approximation, which is valid away from the phase transition, since the latter neglects correlations, which become increasingly important as the system approaches the critical point where the correlation length diverges. Many properties of the critical behavior of a system can be derived in the framework of the renormalization group. In order to explain the physical origin of these phenomena, we shall use the Ising model as a pedagogical example. == Critical point of the 2D Ising model == Consider a 2 D {\displaystyle 2D} square array of classical spins which may only take two values, +1 and −1, at a certain temperature T {\displaystyle T} , interacting through the Ising classical Hamiltonian: H = − J ∑ [ i , j ] S i ⋅ S j {\displaystyle H=-J\sum _{[i,j]}S_{i}\cdot S_{j}} where the sum is extended over the pairs of nearest neighbours and J {\displaystyle J} is a coupling constant, which we will consider to be fixed. There is a certain temperature, called the Curie temperature or critical temperature, T c {\displaystyle T_{c}} below which the system presents ferromagnetic long range order. Above it, it is paramagnetic and is apparently disordered. At temperature zero, the system may only take one global sign, either +1 or -1. At higher temperatures, but below T c {\displaystyle T_{c}} , the state is still globally magnetized, but clusters of the opposite sign appear. As the temperature increases, these clusters start to contain smaller clusters themselves, in a typical Russian dolls picture. Their typical size, called the correlation length, ξ {\displaystyle \xi } grows with temperature until it diverges at T c {\displaystyle T_{c}} . This means that the whole system is such a cluster, and there is no global magnetization. Above that temperature, the system is globally disordered, but with ordered clusters within it, whose size is again called the correlation length, but it now decreases with temperature. At infinite temperature, it is again zero, with the system fully disordered. == Divergences at the critical point == The correlation length diverges at the critical point: as T → T c {\displaystyle T\to T_{c}} , ξ → ∞ {\displaystyle \xi \to \infty } . This divergence poses no physical problem. Other physical observables diverge at this point, leading to some confusion at the beginning. The most important is the magnetic susceptibility. Let us apply a very small magnetic field to the system at the critical point. A very small magnetic field is not able to magnetize a large coherent cluster, but with these fractal clusters the picture changes. It easily affects the smallest clusters, since they have a nearly paramagnetic behaviour. But this change, in its turn, affects the next-scale clusters, and the perturbation climbs the ladder until the whole system changes radically. Thus, critical systems are very sensitive to small changes in the environment.
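As a rough illustration of the model just described, here is a minimal Metropolis Monte Carlo sketch of the 2D Ising model; it is not part of the original article, and the lattice size, temperature and number of sweeps are arbitrary illustrative choices (units with J = kB = 1 are assumed).

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta, J=1.0):
    """One Metropolis sweep over an L x L lattice with periodic boundaries."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nn      # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

L, T = 16, 2.0                               # T below the 2D critical value ~ 2.269
spins = rng.choice([-1, 1], size=(L, L))
for sweep in range(1000):                     # rough equilibration
    metropolis_sweep(spins, beta=1.0 / T)
print("magnetization per spin:", abs(spins.mean()))
```

Below the critical temperature the absolute magnetization per spin settles near a nonzero value; near and above it, it drops towards zero on such a small lattice.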
Other observables, such as the specific heat, may also diverge at this point. All these divergences stem from that of the correlation length. == Critical exponents and universality == As we approach the critical point, these diverging observables behave as A ( T ) ∝ | T − T c | α {\displaystyle A(T)\propto |T-T_{c}|^{\alpha }} for some exponent α , {\displaystyle \alpha \,,} where, typically, the value of the exponent α is the same above and below Tc. These exponents are called critical exponents and are robust observables. Moreover, they take the same values for very different physical systems. This intriguing phenomenon, called universality, is explained, qualitatively and also quantitatively, by the renormalization group. == Critical dynamics == Critical phenomena may also appear for dynamic quantities, not only for static ones. In fact, the divergence of the characteristic time τ {\displaystyle \tau } of a system is directly related to the divergence of the thermal correlation length ξ {\displaystyle \xi } by the introduction of a dynamical exponent z and the relation τ = ξ z {\displaystyle \tau =\xi ^{\,z}} . The large static universality class of a system splits into different, smaller dynamic universality classes with different values of z but a common static critical behaviour, and by approaching the critical point one may observe all kinds of slowing-down phenomena. The divergence of the relaxation time τ {\displaystyle \tau } at criticality leads to singularities in various collective transport quantities, e.g., the interdiffusivity, shear viscosity η ∼ ξ x η {\displaystyle \eta \sim \xi ^{x_{\eta }}} , and bulk viscosity ζ ∼ ξ x ζ {\displaystyle \zeta \sim \xi ^{x_{\zeta }}} . The dynamic critical exponents follow certain scaling relations, viz., z = d + x η {\displaystyle z=d+x_{\eta }} , where d is the space dimension. There is only one independent dynamic critical exponent. Values of these exponents are dictated by several universality classes. According to the Hohenberg−Halperin nomenclature, for the model H universality class (fluids) x η ≃ 0.068 , z ≃ 3.068 {\displaystyle x_{\eta }\simeq 0.068,z\simeq 3.068} . == Ergodicity breaking == Ergodicity is the assumption that a system, at a given temperature, explores the full phase space, with each state merely being visited with a different probability. In an Ising ferromagnet below T c {\displaystyle T_{c}} this does not happen. If T < T c {\displaystyle T<T_{c}} , no matter how close to the critical temperature, the system has chosen a global magnetization, and the phase space is divided into two regions. From one of them it is impossible to reach the other, unless a magnetic field is applied, or the temperature is raised above T c {\displaystyle T_{c}} . See also superselection sector == Mathematical tools == The main mathematical tools to study critical points are the renormalization group, which takes advantage of the Russian dolls picture or the self-similarity to explain universality and predict numerically the critical exponents, and variational perturbation theory, which converts divergent perturbation expansions into convergent strong-coupling expansions relevant to critical phenomena. In two-dimensional systems, conformal field theory is a powerful tool which has discovered many new properties of 2D critical systems, employing the fact that scale invariance, along with a few other requisites, leads to an infinite symmetry group.
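As a concrete illustration of how a critical exponent of the kind introduced above can be estimated in practice, here is a minimal sketch, not from the article, that fits a power law to synthetic susceptibility data. The exponent 7/4 used to generate the data is the known 2D Ising susceptibility exponent; the temperature window and noise level are arbitrary illustrative choices.

```python
import numpy as np

# Synthetic "susceptibility" data obeying chi ~ |T - Tc|^(-gamma), gamma = 1.75
Tc, gamma_true = 2.269, 1.75
T = Tc + np.logspace(-3, -1, 20)             # temperatures slightly above Tc
noise = 1.0 + 0.01 * np.random.default_rng(1).normal(size=T.size)
chi = np.abs(T - Tc) ** (-gamma_true) * noise

# Estimate the exponent from the slope of log(chi) versus log|T - Tc|
slope, intercept = np.polyfit(np.log(np.abs(T - Tc)), np.log(chi), 1)
print("estimated gamma:", -slope)            # should be close to 1.75
```

With real measured or simulated data, the same log-log fit gives a crude estimate; more careful analyses must also account for corrections to scaling and the uncertainty in Tc itself.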
== Critical point in renormalization group theory == The critical point is described by a conformal field theory. According to the renormalization group theory, the defining property of criticality is that the characteristic length scale of the structure of the physical system, also known as the correlation length ξ, becomes infinite. This can happen along critical lines in phase space. This effect is the cause of the critical opalescence that can be observed as a binary fluid mixture approaches its liquid–liquid critical point. In systems in equilibrium, the critical point is reached only by precisely tuning a control parameter. However, in some non-equilibrium systems, the critical point is an attractor of the dynamics in a manner that is robust with respect to system parameters, a phenomenon referred to as self-organized criticality. == Applications == Applications arise in physics and chemistry, but also in fields such as sociology. For example, it is natural to describe a system of two political parties by an Ising model. Thereby, at a transition from one majority to the other, the above-mentioned critical phenomena may appear. == See also == Catastrophe theory Conformal field theory Critical brain hypothesis Critical exponent Critical opalescence Critical point Ergodicity Ising model Rushbrooke inequality Self-organized criticality Variational perturbation theory Widom scaling == Bibliography == Phase Transitions and Critical Phenomena, vol. 1-20 (1972–2001), Academic Press, Ed.: C. Domb, M.S. Green, J.L. Lebowitz J.J. Binney et al. (1993): The theory of critical phenomena, Clarendon press. N. Goldenfeld (1993): Lectures on phase transitions and the renormalization group, Addison-Wesley. H. Kleinert and V. Schulte-Frohlinde, Critical Properties of φ4-Theories, World Scientific (Singapore, 2001); Paperback ISBN 981-02-4659-5 (Read online at [1]) J. M. Yeomans, Statistical Mechanics of Phase Transitions (Oxford Science Publications, 1992) ISBN 0-19-851730-0 M.E. Fisher, Renormalization Group in Theory of Critical Behavior, Reviews of Modern Physics, vol. 46, p. 597-616 (1974) H. E. Stanley, Introduction to Phase Transitions and Critical Phenomena == References == == External links == Media related to Critical phenomena at Wikimedia Commons
Wikipedia/Critical_phenomena
A raw material, also known as a feedstock, unprocessed material, or primary commodity, is a basic material that is used to produce goods, finished goods, energy, or intermediate materials (intermediate goods) that are feedstock for future finished products. As feedstock, the term connotes these materials are bottleneck assets and are required to produce other products. The term raw material denotes materials in unprocessed or minimally processed states such as raw latex, crude oil, cotton, coal, raw biomass, iron ore, plastic, air, logs, and water. The term secondary raw material denotes waste material which has been recycled and injected back into use as productive material. == Raw material in supply chain == Supply chains typically begin with the acquisition or extraction of raw materials. For example, the European Commission notes that food supply chains commence in the agricultural phase of food production. A 2022 report on changes affecting international trade noted that improving sourcing of raw materials has become one of the main objectives of companies reconfiguring their supply chains. In a 2022 survey conducted by SAP, in which 400 US-based leaders in logistics and supply chain were interviewed, 44% of respondents cited a lack of raw materials as a reason for their supply chain issues. Looking ahead to 2023, 50% of respondents expected reduced availability of raw materials in the US to drive supply chain disruptions. === Raw materials markets === Raw materials markets are affected by consumer behavior, supply chain uncertainty, manufacturing disruptions, and regulations, amongst other factors. This results in volatile raw materials markets that are difficult to optimize and manage. Companies can struggle when faced with raw material volatility due to a lack of understanding of market demands, poor or no visibility into the indirect supply chain, and the time lag of raw materials price changes. Volatility in the raw materials markets can also be driven by natural disasters and geopolitical conflict. The COVID-19 pandemic disrupted the steel industry, and once demand rebounded, prices increased 250% in the US. The Russian invasion of Ukraine caused the price of natural gas to increase by 50% in 2022. == Raw material processing == === Ceramic === While pottery originated at many different points around the world, its spread was driven largely by the Neolithic Revolution. This was important because it gave the first agrarians a way to store and carry surplus supplies. While most jars and pots were fire-clay ceramics, Neolithic communities also created kilns that were able to fire such materials, removing most of the water and creating very stable, hard materials. Without the presence of clay on the riverbanks of the Tigris and Euphrates in the Fertile Crescent, such kilns would have been impossible for people in the region to produce. These kilns made metallurgy possible once the Bronze and Iron Ages came upon the people who lived there. === Metallic === Many raw metallic materials used for industrial purposes must first be processed into a usable state. Metallic ores are first processed through a combination of crushing, roasting, magnetic separation, flotation, and leaching to make them suitable for use in a foundry. Foundries then smelt the ore into usable metal that may be alloyed with other materials to improve certain properties.
One metallic raw material that is commonly found across the world is iron, and combined with nickel, this material makes up over 35% of the material in the Earth's inner and outer core. The iron that was initially used as early as 4000 BC was called meteoric iron and was found on the surface of the Earth. This type of iron came from the meteorites that struck the Earth before humans appeared, and was in very limited supply. It is unlike most of the iron in the Earth, which lies much deeper than the humans of that time period were able to excavate. The nickel content of meteoric iron meant that it did not need to be heated; instead, it was hammered and shaped into tools and weapons. === Iron ore === Iron ore can be found in a multitude of forms and sources. The primary forms of iron ore today are hematite and magnetite. While iron ore can be found throughout the world, only deposits on the order of millions of tonnes are processed for industrial purposes. The top five exporters of iron ore are Australia, Brazil, South Africa, Canada, and Ukraine. One of the first sources of iron ore is bog iron. Bog iron takes the form of pea-sized nodules that are created under peat bogs at the base of mountains. == Conflicts of raw materials == Places with plentiful raw materials and little economic development often show a phenomenon known as "Dutch disease" or the "resource curse", which occurs when the economy of a country is mainly based upon its exports because of its method of governance. An example of this is the Democratic Republic of the Congo. == See also == == References == == Further reading == Elizabeth Kolbert, "Needful Things: The raw materials for the world we've built come at a cost" (largely based on Ed Conway, Material World: The Six Raw Materials That Shape Modern Civilization, Knopf, 2023; Vince Beiser, The World in a Grain; and Chip Colwell, So Much Stuff: How Humans Discovered Tools, Invented Meaning, and Made More of Everything, Chicago), The New Yorker, 30 October 2023, pp. 20–23. Kolbert mainly discusses the importance to modern civilization, and the finite sources of, six raw materials: high-purity quartz (needed to produce silicon chips), sand, iron, copper, petroleum (which Conway lumps together with another fossil fuel, natural gas), and lithium. Kolbert summarizes archeologist Colwell's review of the evolution of technology, which has ended up giving the Global North a superabundance of "stuff," at an unsustainable cost to the world's environment and reserves of raw materials. Karl Marx, Capital, Vol. 1, Part III, Chap. 7.
Wikipedia/Raw_materials
In mathematical physics, a lattice model is a mathematical model of a physical system that is defined on a lattice, as opposed to a continuum, such as the continuum of space or spacetime. Lattice models originally occurred in the context of condensed matter physics, where the atoms of a crystal automatically form a lattice. Currently, lattice models are quite popular in theoretical physics, for many reasons. Some models are exactly solvable, and thus offer insight into physics beyond what can be learned from perturbation theory. Lattice models are also ideal for study by the methods of computational physics, as the discretization of any continuum model automatically turns it into a lattice model. The exact solution to many of these models (when they are solvable) includes the presence of solitons. Techniques for solving these include the inverse scattering transform and the method of Lax pairs, the Yang–Baxter equation and quantum groups. The solution of these models has given insights into the nature of phase transitions, magnetization and scaling behaviour, as well as insights into the nature of quantum field theory. Physical lattice models frequently occur as an approximation to a continuum theory, either to give an ultraviolet cutoff to the theory to prevent divergences or to perform numerical computations. An example of a continuum theory that is widely studied by lattice models is the QCD lattice model, a discretization of quantum chromodynamics. However, digital physics considers nature to be fundamentally discrete at the Planck scale, which imposes an upper limit on the density of information, as expressed by the holographic principle. More generally, lattice gauge theory and lattice field theory are areas of study. Lattice models are also used to simulate the structure and dynamics of polymers. == Mathematical description == A number of lattice models can be described by the following data: A lattice Λ {\displaystyle \Lambda } , often taken to be a lattice in d {\displaystyle d} -dimensional Euclidean space R d {\displaystyle \mathbb {R} ^{d}} or the d {\displaystyle d} -dimensional torus if the lattice is periodic. Concretely, Λ {\displaystyle \Lambda } is often the cubic lattice. If two points on the lattice are considered 'nearest neighbours', then they can be connected by an edge, turning the lattice into a lattice graph. The vertices of Λ {\displaystyle \Lambda } are sometimes referred to as sites. A spin-variable space S {\displaystyle S} . The configuration space C {\displaystyle {\mathcal {C}}} of possible system states is then the space of functions σ : Λ → S {\displaystyle \sigma :\Lambda \rightarrow S} . For some models, we might instead consider the space of functions σ : E → S {\displaystyle \sigma :E\rightarrow S} where E {\displaystyle E} is the edge set of the graph defined above. An energy functional E : C → R {\displaystyle E:{\mathcal {C}}\rightarrow \mathbb {R} } , which might depend on a set of additional parameters or 'coupling constants' { g i } {\displaystyle \{g_{i}\}} . === Examples === The Ising model is given by the usual cubic lattice graph G = ( Λ , E ) {\displaystyle G=(\Lambda ,E)} where Λ {\displaystyle \Lambda } is an infinite cubic lattice in R d {\displaystyle \mathbb {R} ^{d}} or a period n {\displaystyle n} cubic lattice in T d {\displaystyle T^{d}} , and E {\displaystyle E} is the edge set of nearest neighbours (the same letter is used for the energy functional but the different usages are distinguishable based on context).
The spin-variable space is S = { + 1 , − 1 } = Z 2 {\displaystyle S=\{+1,-1\}=\mathbb {Z} _{2}} . The energy functional is E ( σ ) = − H ∑ v ∈ Λ σ ( v ) − J ∑ { v 1 , v 2 } ∈ E σ ( v 1 ) σ ( v 2 ) . {\displaystyle E(\sigma )=-H\sum _{v\in \Lambda }\sigma (v)-J\sum _{\{v_{1},v_{2}\}\in E}\sigma (v_{1})\sigma (v_{2}).} The spin-variable space can often be described as a coset. For example, for the Potts model we have S = Z n {\displaystyle S=\mathbb {Z} _{n}} . In the limit n → ∞ {\displaystyle n\rightarrow \infty } , we obtain the XY model which has S = S O ( 2 ) {\displaystyle S=SO(2)} . Generalising the XY model to higher dimensions gives the n {\displaystyle n} -vector model which has S = S n = S O ( n + 1 ) / S O ( n ) {\displaystyle S=S^{n}=SO(n+1)/SO(n)} . === Solvable models === We specialise to a lattice with a finite number of points, and a finite spin-variable space. This can be achieved by making the lattice periodic, with period n {\displaystyle n} in d {\displaystyle d} dimensions. Then the configuration space C {\displaystyle {\mathcal {C}}} is also finite. We can define the partition function Z = ∑ σ ∈ C exp ⁡ ( − β E ( σ ) ) {\displaystyle Z=\sum _{\sigma \in {\mathcal {C}}}\exp(-\beta E(\sigma ))} and there are no issues of convergence (like those which emerge in field theory) since the sum is finite. In theory, this sum can be computed to obtain an expression which is dependent only on the parameters { g i } {\displaystyle \{g_{i}\}} and β {\displaystyle \beta } . In practice, this is often difficult due to non-linear interactions between sites. Models with a closed-form expression for the partition function are known as exactly solvable. Examples of exactly solvable models are the periodic 1D Ising model, and the periodic 2D Ising model with vanishing external magnetic field, H = 0 , {\displaystyle H=0,} but for dimension d > 2 {\displaystyle d>2} , the Ising model remains unsolved. === Mean field theory === Due to the difficulty of deriving exact solutions, in order to obtain analytic results we often must resort to mean field theory. This mean field may be spatially varying, or global. ==== Global mean field ==== The configuration space C {\displaystyle {\mathcal {C}}} of functions σ {\displaystyle \sigma } is replaced by the convex hull of the spin space S {\displaystyle S} , when S {\displaystyle S} has a realisation in terms of a subset of R m {\displaystyle \mathbb {R} ^{m}} . We'll denote this by ⟨ C ⟩ {\displaystyle \langle {\mathcal {C}}\rangle } . This arises as in going to the mean value of the field, we have σ ↦ ⟨ σ ⟩ := 1 | Λ | ∑ v ∈ Λ σ ( v ) {\displaystyle \sigma \mapsto \langle \sigma \rangle :={\frac {1}{|\Lambda |}}\sum _{v\in \Lambda }\sigma (v)} . As the number of lattice sites N = | Λ | → ∞ {\displaystyle N=|\Lambda |\rightarrow \infty } , the possible values of ⟨ σ ⟩ {\displaystyle \langle \sigma \rangle } fill out the convex hull of S {\displaystyle S} . By making a suitable approximation, the energy functional becomes a function of the mean field, that is, E ( σ ) ↦ E ( ⟨ σ ⟩ ) . {\displaystyle E(\sigma )\mapsto E(\langle \sigma \rangle ).} The partition function then becomes Z = ∫ ⟨ C ⟩ d ⟨ σ ⟩ e − β E ( ⟨ σ ⟩ ) Ω ( ⟨ σ ⟩ ) =: ∫ ⟨ C ⟩ d ⟨ σ ⟩ e − N β f ( ⟨ σ ⟩ ) . 
{\displaystyle Z=\int _{\langle {\mathcal {C}}\rangle }d\langle \sigma \rangle e^{-\beta E(\langle \sigma \rangle )}\Omega (\langle \sigma \rangle )=:\int _{\langle {\mathcal {C}}\rangle }d\langle \sigma \rangle e^{-N\beta f(\langle \sigma \rangle )}.} As N → ∞ {\displaystyle N\rightarrow \infty } , that is, in the thermodynamic limit, the saddle point approximation tells us the integral is asymptotically dominated by the value at which f ( ⟨ σ ⟩ ) {\displaystyle f(\langle \sigma \rangle )} is minimised: Z ∼ e − N β f ( ⟨ σ ⟩ 0 ) {\displaystyle Z\sim e^{-N\beta f(\langle \sigma \rangle _{0})}} where ⟨ σ ⟩ 0 {\displaystyle \langle \sigma \rangle _{0}} is the argument minimising f {\displaystyle f} . A simpler, but less mathematically rigorous approach which nevertheless sometimes gives correct results comes from linearising the theory about the mean field ⟨ σ ⟩ {\displaystyle \langle \sigma \rangle } . Writing configurations as σ ( v ) = ⟨ σ ⟩ + Δ σ ( v ) {\displaystyle \sigma (v)=\langle \sigma \rangle +\Delta \sigma (v)} , truncating terms of O ( Δ σ 2 ) {\displaystyle {\mathcal {O}}(\Delta \sigma ^{2})} then summing over configurations allows computation of the partition function. Such an approach to the periodic Ising model in d {\displaystyle d} dimensions provides insight into phase transitions. ==== Spatially varying mean field ==== Suppose the continuum limit of the lattice Λ {\displaystyle \Lambda } is R d {\displaystyle \mathbb {R} ^{d}} . Instead of averaging over all of Λ {\displaystyle \Lambda } , we average over neighbourhoods of x ∈ R d {\displaystyle \mathbf {x} \in \mathbb {R} ^{d}} . This gives a spatially varying mean field ⟨ σ ⟩ : R d → ⟨ C ⟩ {\displaystyle \langle \sigma \rangle :\mathbb {R} ^{d}\rightarrow \langle {\mathcal {C}}\rangle } . We relabel ⟨ σ ⟩ {\displaystyle \langle \sigma \rangle } with ϕ {\displaystyle \phi } to bring the notation closer to field theory. This allows the partition function to be written as a path integral Z = ∫ D ϕ e − β F [ ϕ ] {\displaystyle Z=\int {\mathcal {D}}\phi e^{-\beta F[\phi ]}} where the free energy F [ ϕ ] {\displaystyle F[\phi ]} is a Wick rotated version of the action in quantum field theory. == Examples == === Condensed matter physics === Ising model ANNNI model Potts model Chiral Potts model XY model Classical Heisenberg model n-vector model Vertex model Toda lattice cellular automata === Polymer physics === Bond fluctuation model 2nd model === High energy physics === QCD lattice model == See also == Crystal structure Continuum limit QCD matter Lattice gas == References == Baxter, Rodney J. (1982), Exactly solved models in statistical mechanics (PDF), London: Academic Press Inc. [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-083180-7, MR 0690578
Wikipedia/Lattice_model_(physics)